Last week on “The Daily Show,” Mark Cuban suggested that the AI race is ultimately about power. “Nothing is more empowering than military power and AI,” he said.
British historian Lord Acton would have given an apt response with his famous dictum, “Absolute power corrupts absolutely.” And as communicators continue to watch the battle between private sector lobbying, state regulation and federal regulation play out in real time, it’s hard to argue with Cuban’s sentiments.
Other notable news for communications professionals includes California’s controversial AI regulation bill heading to a vote later this month and the Democratic National Convention taking place in Chicago amid a flood of deepfakes aimed at swaying voter sentiment ahead of the 2024 presidential election.
This week, we bring you what communicators need to know about AI.
Risk
As the Democratic National Convention gets underway in Chicago this week, coverage is focused on the surrogates, speeches and memorable moments leading up to Vice President Kamala Harris formally accepting the presidential nomination on Thursday.
The November election will feature many historic firsts, but the widespread use of deepfake technology to misrepresent candidates and positions is also unprecedented.
Microsoft hosted a luncheon at Chicago’s Drake Hotel on Monday to coach people on using tools to help detect deceptive AI content and deepfakes amid the proliferation of AI-manipulated media.
The Chicago Sun-Times reported:
“This is both a global challenge and an opportunity,” said Ginny Badanes, general manager of Microsoft’s Democracy Forward program. “Obviously, we’re thinking about the U.S. election, because it’s coming up and it’s going to have such huge implications, but it’s also important to look back at other big elections that have taken place.”
According to Badanes, one of the world’s most problematic political deepfake attacks occurred in Slovakia in October, just two days before the Central European country held parliamentary elections. AI technology was used to create fake recordings of a leading political candidate boasting about rigging the election. The recordings were then spread online, resulting in the candidate’s narrow defeat.
In a report this month, Microsoft warned that Russian actors were “targeting the US elections with distinctive fake videos.”
These examples highlight a troubling pattern of bad actors attempting to steer voter behavior, an AI-assisted evolution of the micro-targeting campaigns that weaponized Facebook users’ psychological profiles to pump misinformation into their feeds ahead of the 2016 elections.
Again, the villains are foreign and domestic. Trump falsely suggested this week that Taylor Swift was endorsing him, sharing fake images of Swift and her fans dressed in pro-Trump attire. Last week, Elon Musk released an image generation feature on X’s AI chatbot, Grok, that lets users generate AI images with little to no filters or guidelines. As Rolling Stone reports, what guardrails do exist didn’t work.
This may get worse before it gets better, which may explain why, as The Verge reports, the San Francisco City Attorney’s Office is suing 16 of the most popular “AI stripping” websites, which do exactly what they sound like.
This may also explain why the financial world is only beginning to realize how risky investments in currently unregulated AI are.
Marketplace reports that the Eurekahedge AI hedge fund has lagged the S&P 500 index, “proving that machines don’t learn from their investment mistakes.”
Meanwhile, according to a survey by LLM assessment platform Arize, one in five Fortune 500 companies now mention generative AI or LLMs in their annual reports, and among them, the number of companies positioning AI as a risk factor has increased by 473.5% since 2022.
What would a benchmark for AI risk assessment look like? Bo Li, an associate professor at the University of Chicago, led a group of colleagues from multiple universities to develop a taxonomy of AI risk and a benchmark to assess which LLMs are most likely to violate the rules.
Li and his team analyzed government AI regulations and guidelines from the US, China, and the EU, along with the usage policies of 16 major AI companies.
WIRED reports:
Understanding the risk landscape and the strengths and weaknesses of specific models may become increasingly important for companies looking to deploy AI in specific markets or for specific use cases. For example, a company looking to use LLMs for customer service may be more concerned about a model’s tendency to produce offensive language when provoked than its ability to design nuclear weapons.
Li said the analysis also uncovered some interesting questions about how AI should be developed and regulated. For example, the researchers found that government rules are not as comprehensive as companies’ overall policies, suggesting there is room for stronger regulation.
The analysis also suggests that some companies need to do more to ensure their models are safe: “When you test models against a company’s own policies, they’re not always compliant,” Li said. “That means there’s a lot of room for improvement.”
This conclusion highlights the impact corporate communicators can have in shaping internal AI policies and defining responsible use cases: You will be the glue that holds your organization’s AI efforts together as they grow.
Just as a crisis management plan has stakeholders across business functions, your company’s AI strategy should start with a task force that includes heads across departments and functions to ensure all leaders are communicating guidelines, procedures, and use cases from the same playbook, while also serving as the eyes and ears for identifying emerging risks.
Regulation
Last Thursday, the California Assembly’s Appropriations Committee approved an amended version of a bill that would require companies to test the safety of AI technology before releasing it to the public. The bill, SB 1047, would allow the state’s attorney general to sue companies if AI causes harm, such as death or massive property damage. A formal vote is expected by the end of this month.
Not surprisingly, there’s fierce debate in the tech industry over the details of the bill.
The New York Times reports:
The bill’s author, Senator Scott Wiener, made some concessions to appease tech industry critics, including OpenAI, Meta and Google. The changes also reflect suggestions from another prominent startup, Anthropic.
The bill would not create a new agency for AI safety, instead transferring regulatory duties to an existing California government department. It would also hold companies liable for violations of the law only if their technology causes actual harm or imminent danger to public safety. The bill previously allowed companies to be punished for failing to comply with safety regulations even if no harm had yet occurred.
“The new amendments reflect months of constructive dialogue with stakeholders in industry, startups and academia,” said Dan Hendrycks, founder of the San Francisco nonprofit Center for AI Safety, which helped draft the bill.
A Google spokesperson said the company’s previous concerns “remain valid.” Anthropic said it was still considering the changes. OpenAI and Meta declined to comment on the proposed amendments.
“We can promote both innovation and safety — they are not mutually exclusive,” Wiener said in a statement Thursday. He said he believes the proposed changes address many of the tech industry’s concerns.
Last weekend, California Rep. Nancy Pelosi issued a statement expressing concerns about the bill. Citing Biden’s approach to AI, she also warned against stifling innovation.
“The view of many in Congress is that SB 1047 is well-intentioned but poorly informed,” Pelosi said.
Pelosi cited work from leading AI researchers and thought leaders to denounce the bill but offered few indications of next steps toward pursuing federal regulation.
In response, California Sen. Scott Wiener, the bill’s sponsor, disagreed with Pelosi.
“The bill only asks the largest AI developers to do what they have repeatedly promised: conduct basic safety testing of their very powerful AI models,” Wiener said.
The disconnect highlights the frustrating push-pull between those who warn against taking an accelerationist approach to AI and those who publicly argue that regulation stifles innovation, a key point of contention for those who work on AI policy and lobby on behalf of big tech companies.
This also speaks to the limitations of thought leadership. Consider an editorial published last month by David Zapolsky, Amazon’s senior vice president of global public policy and general counsel, calling for alignment of global policies on responsible AI. The article convincingly positions Amazon as an agent of responsible AI reform, highlighting Amazon’s willingness to work with governments on “voluntary initiatives” and its work to research and implement responsible use safeguards in its products.
The article does a great job of positioning Amazon as an industry leader, but does not mention federal regulation once. The idea that public-private partnerships could be a sufficient substitute for formal regulation appears indirectly through multiple mentions of partnerships, setting a precedent for the recent influx of AI lobbyists into Congress.
The number of lobbyists hired to lobby the White House on artificial intelligence-related issues grew from 323 in the first quarter to 931 in the fourth quarter, according to Public Citizen.
As more companies adopt principles around responsible AI use at the expense of government oversight, it will be important to understand what gaps exist between their public claims about the effectiveness of their responsible AI efforts and how those efforts play out internally.
If you’re part of an organization large enough that you have colleagues who work in-house in public affairs or public policy, this is a reminder that aligning your public affairs and corporate communications efforts with your internal efforts is an important step in mitigating risk.
Those with real oversight of in-house deployments and use cases can lay out guidelines and methods around ethical use, continuous learning and more. True thought leadership doesn’t take the form of product promotion, but of demonstrating the work through actions and results.
What trends and news are you following in the AI space? What would you like to see covered in our 100% human-written bi-weekly AI roundup? Let us know in the comments.
Justin Joffe is editorial director and editor-in-chief at Ragan Communications. Follow him on LinkedIn.