One thing that has changed my opinion over the last few years is AI. I used to think most of the claims about its radical social potential (both positive and negative) were hype. That’s partly because they often came from the same people who made highly exaggerated claims about cryptocurrency, and some of the claims sounded similar. I also think we should prioritize what we know matters here and now (climate change, nuclear weapons, pandemics) over potential disasters that are purely speculative. And given that Silicon Valley companies are constantly promising new revolutions, I always try to remember that companies with strong financial incentives tend to pass off modest improvements, or even outright fraud, as breakthroughs.
But when I’ve actually used some of the various technologies lumped together as “artificial intelligence,” my reaction has been: no, this is actually very powerful, and it’s just the beginning. Many of my fellow leftists are dismissive of AI’s capabilities, citing its failures (basic mathematical mistakes and “hallucinations” in ChatGPT, the ugliness of much AI “art,” the shortcomings of image generators, and so on). There’s even a certain desire for AI to be bad at what it does, because no one wants to think that much of what we do on a daily basis can be automated. But let’s be honest: the technological advances we’re seeing are mind-blowing. If you’re training to argue with someone, having ChatGPT play the role of your opponent gives you a virtually perfect sparring partner. I remember a few years ago when chatbots were so laughably incompetent that it was easy to believe they could never pass the Turing test. Now ChatGPT not only passes the test but is better at seeming “human” than most humans. And again, this is just the beginning.
Personally, I really enjoy generative AI. I use it for ridiculous things: making fun photos with image generators, producing a parody radio station, writing lyrics and having AI music generators render them in the style of the Beach Boys, or feeding the Communist Manifesto into a voice generator so that “Donald Trump” reads the whole thing aloud. I once made an entire book, Echoland, which imagines a future in which humanity makes use of AI. My personal experience with the new generative AI programs is that they have greatly liberated my creativity. I thought I would hate AI, but I love it.
But I also find it scary, and I don’t see why the alarm isn’t more widespread. Deepfake audio generators alone are bad enough. You can upload about twenty seconds of a person’s voice and they will reproduce it almost perfectly. It’s all fun and games when that capacity is used to mock Joe Rogan, but of course scammers noticed it right away: it can be used to impersonate people’s loved ones and con money out of them by pretending their children have been in a horrible accident. The technology certainly has very beneficial applications, too. AI voice cloning has given ALS patients the ability to use their voices again, which is in some ways a miracle. But the same capacity cuts both ways.
Being able to replicate the functions of human intelligence in machines is both extremely exciting and extremely dangerous. Personally, I am very concerned about military applications of AI in an era of great power competition. The autonomous weapons arms race seems to me one of the most dangerous things happening in the world today, yet it receives very little coverage in the press. And the possible harms from AI are endless. If computers could replicate the abilities of human scientists, it would become easier for bad actors to create viruses capable of causing pandemics far worse than the coronavirus, to make bombs, or to carry out large-scale cyberattacks. From deepfake porn to the empowerment of authoritarian governments to the potential for improperly programmed AI to do new and devastating harms we never thought possible, the rapid advance of these technologies is clearly dangerous. It means we are at risk from systems over which we have no control.
Newsom vetoes safety bill
In California, Gavin Newsom just vetoed SB 1047, a bill that would have tried to put some kind of safety fence around AI development. The problem is that much AI is developed by commercial companies, whose short-term incentives to maximize profits can motivate them to build tools that cause widespread social harm (harm for which the companies themselves do not pay). It is therefore critical that states intervene to ensure that AI is developed safely.
Garrison Lovely wrote a comprehensive review for Jacobin explaining how California’s law would have taken some basic steps to ensure that AI companies develop their products responsibly. If their products caused a catastrophe that did significant damage, they would have been held liable. Whistleblower protections would have been in place. But Newsom vetoed the bill, repeating the usual free-market criticism that it would stifle innovation. This means, as Lovely writes, that “there is no mandatory safety protocol for developing the largest and most powerful AI models.” In doing so, Newsom single-handedly delivered Big Tech a major victory over democracy and over everyone who could someday be harmed by AI models built by companies caught in cutthroat competition for supremacy and profits. This is not the first time Newsom has put business interests ahead of the public interest.
AI safety law does not fall along a neat “left-right” line. Lovely points out that the bill attracted “strange bedfellows”: it was supported by labor unions, Jane Fonda, Elon Musk, many AI company employees, and effective altruists, and opposed by Google, Nancy Pelosi, and Ro Khanna. But it is clear that companies looking to reap a windfall from AI don’t want even relatively lenient regulation to slow its development.
Lovely quotes a liberal-leaning academic who says it is too early to know whether this type of law is needed. But that’s a bad way to think about risk. We do not know how devastating the damage from uncontrolled AI development might be, which is precisely why we should err on the side of too much regulation rather than too little. For example, I don’t think anyone can say for sure how likely it is that AI will be used to create a virus that devastates human civilization. Maybe it’s quite unlikely. But given the magnitude of the risk, I don’t want to settle for “quite unlikely.” We need to reduce the chances of this happening to as close to zero as possible. And I don’t think we should worry much about stifling some innovation in the process, or about forcing companies to take five years over something that could have taken one. The stakes are too high, because we don’t know what this technology will do. I have gone on record as skeptical of the hypothesis that a rogue AI with the ability to improve its own intelligence could turn against humanity and drive us to extinction. But I don’t have to think any such scenario particularly likely to want, at a minimum, to make sure that an intelligent machine always has an “off switch” built in. Because the cost of safety is so low compared to the cost of the worst-case outcome, it is not at all difficult to justify writing into law the kinds of basic AI safeguards that SB 1047 contained.
I’m more excited about AI than many people around me, but I’m also deeply concerned that we are heading toward a future in which we develop incredibly powerful technologies virtually unregulated, without thinking long-term about the potentially dire consequences of our actions. Unfortunately, Democratic politicians like Newsom and Pelosi seem uninterested in changing that. We may look back on this moment and wonder how they could have been so stupid.