Some of the largest U.S. companies are increasingly flagging artificial intelligence as a potential risk factor in their annual reports.
According to a report from research firm Arize AI, the number of Fortune 500 companies citing AI as a risk has reached 281, representing 56.2% of companies and a 473.5% increase from the previous year, when only 49 companies cited AI risks.
“If one thing is clear from the annual Fortune 500 reports, it’s that generative AI is impacting a wide range of industries, including those that have yet to adopt the technology,” the report states. “Given that most mentions of AI are risk factors, there is a real opportunity for companies to differentiate themselves by highlighting innovation and providing context for how they are using generative AI.”
Indeed, the spike in warnings coincides with an explosion of awareness of and interest in AI following OpenAI’s release of ChatGPT in late 2022. The number of companies mentioning AI at all in their reports rose 152% to 323.
Now that AI has corporate America’s attention, the risks and opportunities are coming into focus, and companies are pinpointing where the potential downsides lie.
But some companies are more concerned than others. Media and entertainment is the most worried sector, with 91.7% of its Fortune 500 companies citing AI as a risk, according to Arize. That comes as AI spreads across the industry and performers and companies put defenses in place against the new technology.
“New technological developments, including the development and use of generative artificial intelligence, are rapidly evolving,” streaming giant Netflix said in its annual report. “If our competitors use such technologies to gain an advantage, our competitive position and operating results may be adversely affected.”
Hollywood giant Disney said the rules governing new technologies such as generative AI are “unfinalized” and could ultimately affect how it earns revenue from the use of its intellectual property and how entertainment products are made.
According to Arize, 86.4% of software and technology companies, 70% of telecommunications companies, 65.1% of healthcare companies, 62.7% of financial companies, and 60% of retailers also issued warnings.
In contrast, just 18.8% of automotive companies, 37.3% of energy companies, and 39.7% of manufacturers flagged AI risks.
Warnings also came from companies that embed AI in their products, with Motorola saying that “AI may not always work as intended, data sets may be incomplete or may contain information that is unlawful, biased, harmful or offensive, which could have an adverse effect on our financial results, reputation or customer acceptance of our AI products.”
Pointing to its AI and Customer 360 platform, which gives its business customers a unified view of their own customers, Salesforce said: “If we enable or offer solutions that are controversial because of their perceived or actual impact on human rights, privacy, employment or other societal conditions, we could experience new or increased scrutiny by governments or regulators, damage to our brand or reputation, competitive harm, or legal liability.”
AI has also been cited as a risk when it comes to cybersecurity and data breaches, a concern underscored at the recent Def Con security conference, where AI’s role in cybersecurity was a prominent theme.
Meanwhile, a study published in June in the Journal of Hospitality Marketing & Management found that consumers are less willing to buy a product that is labeled as using “AI.”
Consumers need to be convinced of the benefits of AI in a particular product, according to Dogan Gursoy, a professor of hospitality management at Washington State University’s Carson College of Business and one of the study’s authors.
“A lot of people ask, ‘Why do you need AI in your coffee maker? Why do you need AI in your refrigerator? Why do you need AI in your vacuum cleaner?’” he told Fortune earlier this month.