Manjeet Rege, professor of data science and software engineering at the University of St. Thomas' School of Engineering, recently spoke with WCCO Radio about the current state of AI regulation, the role of self-regulation by technology companies, and potential changes in AI policy under future changes in U.S. leadership.
Host: Let's establish a baseline. Are there any regulations on AI currently?
Rege: Not at the federal level. During the Biden administration, an executive order was issued in late 2023 that essentially provided guidelines for ethical and responsible AI implementation. But over the last four to five years, and especially the last two years with the rise of generative AI, the United States has taken a very market-driven approach, leaving self-regulation to the big tech companies. As a result, in the absence of federal regulation, states have begun to step in and implement some of these measures themselves, as we are seeing in California.
Host: Yes, California has an AI transparency law. So what is it? Do we need something like that in Minnesota or nationally?
Rege: I think there needs to be a balance between enabling innovation and providing a framework for responsible AI development and deployment. California's law has received some pushback from people in Silicon Valley because it places some of the liability on developers. From what I've read, the objection to the first draft of the law was that the same underlying technology can be used both for writing code and for spreading disinformation, and developers cannot control how it will ultimately be used, so they argue that AI developers cannot be held responsible for what happens.
Host: Hmm, so if something malicious happens, the developer will be held responsible?
Rege: Yes. And then, from a global perspective, there's the European Union's AI Act, which also protects consumers. EU law states that if generative AI is used to generate content there, it must be labeled: "Yes, this was created with AI." As a result, you can tell what's real from what's AI-generated.