The problem with most attempts to regulate AI so far is that lawmakers focus on some mythical future AI experience, rather than truly understanding the new risks AI actually introduces.
That’s the argument Andreessen Horowitz general partner Martin Casado made to a standing-room audience at TechCrunch Disrupt 2024 last week. Casado, who leads a16z’s $1.25 billion infrastructure practice, has invested in AI startups including World Labs, Cursor, Ideogram, and Braintrust.
“Innovative technology and regulation have been an ongoing debate for decades, right? But the common thread in all these discussions about AI is that it seems to have come out of nowhere,” he told the audience. “It’s like they’re trying to conjure up a whole new set of regulations without taking those lessons into account.”
For example, he said: “Have you actually looked at the definitions of AI in these policies? We can’t even define it.”
Casado was among the Silicon Valley voices who celebrated California Governor Gavin Newsom’s veto of the state’s proposed AI governance law, SB 1047. The law would have required a so-called kill switch for super-large AI models — that is, a way to turn them off. Opponents of the bill said it was so poorly worded that, far from saving us from the AI monsters of our imagined future, it would simply have confused and hobbled California’s hot AI development scene.
“I hear all the time that founders are hesitant to move here because it signals California’s preference for bad AI legislation based on science-fiction concerns rather than tangible risks,” he wrote in a post on X a couple of weeks before the bill was vetoed.
Although that particular state law is dead, the fact that it existed still bothers Casado. He is concerned that more bills constructed in the same way could materialize if politicians decide to pander to the public’s fears about AI rather than regulate what the technology is actually doing.
He understands AI technology better than most. Before joining the storied venture capital firm, Casado founded two other companies, including networking infrastructure company Nicira, which he sold to VMware for $1.26 billion a little over a decade ago. Before that, Casado was a computer security expert at Lawrence Livermore National Laboratory.
He said that many proposed AI regulations did not come from, and were not championed by, the people who understand AI technology best, including the academics and the commercial sector building AI products.
“You have to have a notion of marginal risk that’s different from what we had in the past. For example, how is AI today different from someone using Google? How is AI today different from someone just using the internet? If we have a model for how it’s different, then we have some notion of the marginal risk, and then we can apply policies that address that marginal risk,” he said.
“I think we’re a little bit too early to start grabbing a bunch of regulation before we really understand what we’re going to regulate,” he argues.
The counterargument, and several audience members raised it, is that the world didn’t really understand the harm the internet and social media could do until that harm was already upon us. When Google and Facebook launched, no one knew they would come to dominate online advertising or collect so much data on individuals. No one understood things like cyberbullying or echo chambers when social media was young.
Supporters of AI regulation now often point to this history, arguing that these technologies should have been regulated earlier.
Casado’s reaction?
“There is a robust regulatory regime in place today, developed over 30 years,” he has argued, and it is well equipped to craft new policies for AI and other technologies. Indeed, at the federal level alone, regulatory bodies range from the Federal Communications Commission to the House Committee on Science, Space, and Technology. When I asked Casado on Wednesday, after the election, whether he stands by this view — that AI regulation should follow the path already carved out by existing regulators — he said he does.
But he also doesn’t think AI should be targeted over problems caused by other technologies. Instead, the technologies that actually caused those problems should be the ones addressed.
“If we got something wrong with social media, you can’t fix it by putting regulations on AI,” he said. “The people regulating AI say, ‘Oh, we got it wrong with social, so let’s get it right with AI,’ which is nonsense. Let’s go fix it in social.”