Experts studying advances in artificial intelligence are now warning that AI models could be used to create “enhanced pathogens that could cause a major epidemic or pandemic.”
The warning came in a paper published in Science by co-authors from Johns Hopkins, Stanford and Fordham Universities, which said that AI models “have been trained on or are capable of meaningfully manipulating large amounts of biological data for purposes ranging from speeding up drug and vaccine design to improving crop yields.”
“However, as with any powerful new technology, such biological models carry substantial risks. Because of their versatility, the same biological models that enable the design of benign viral vectors for delivering gene therapy may also be used to design more pathogenic viruses capable of evading vaccine-induced immunity,” the researchers wrote in their summary.
“Voluntary efforts among developers to evaluate the potential hazards of biological models are meaningful and important, but are not sufficient,” the paper continues. “We suggest that national governments, including the U.S., enact legislation and establish mandatory regulations to prevent advanced biological models from contributing significantly to large-scale hazards, such as the creation of novel or enhanced pathogens that could cause large-scale epidemics or pandemics.”
While today’s AI models likely don’t “contribute significantly” to biological risks, “the essential elements for creating advanced biological models of great concern already exist or may soon exist,” the study authors said, according to TIME.
They reportedly recommended that the government create a “battery of tests” that biological AI models must undergo before they are released to the public, after which authorities can decide how much access to the models should be restricted.
“We need to start planning now,” Anita Cicero, associate director of the Johns Hopkins Center for Health Security and one of the study’s co-authors, told TIME. “We’re going to need systematic government oversight and requirements to reduce the risks of these particularly powerful tools in the future.”
Cicero reportedly added that without proper oversight, biological risks from AI models could become a reality “within the next 20 years, maybe much less than that.”
“If there’s any question about whether AI can be used to cause a pandemic? 100%. And how far out we should be worried? I think AI is advancing at a rate that most people can’t anticipate,” Paul Powers, an AI expert and CEO of Physna, a company that uses computers to analyze 3D models and geometric objects, told Fox News Digital.
“The problem is that it’s not just governments and large corporations that have access to these increasingly powerful capabilities, but individuals and small businesses as well,” he continued, though he pointed out that “the problem with regulation here is, first of all, everyone wants a global set of rules on this, but the reality is that it’s enforced domestically. Secondly, regulation is not moving at the speed of AI. Regulation can’t even keep up with the technology at its traditional speed.”
“What they’re proposing is to have the government approve certain AI training models and certain AI applications, but the reality is, how do you oversee that?” Powers said.
“Certain nucleic acids are essentially the building blocks of potential pathogens and viruses,” Powers added. “I would start there… by really policing who has access to the building blocks in the first place.”