At an all-staff meeting on Wednesday, Google executives gave details on how the tech giant will sunset its diversity initiatives and defended dropping its pledge not to build artificial intelligence for weapons and surveillance.
Melonie Parker, Google’s former head of diversity, said the company is “updating” its diversity and inclusion employee training programs and broader training programs that contain “DEI content.” It was the first time company executives had addressed the entire staff since Google announced it would no longer pursue diversity hiring goals and had removed its pledge not to build militarized AI. Kent Walker, Google’s chief legal officer, said a lot had changed since the company first introduced its AI principles in 2018. Responding to a question about why the company removed its prohibition on building AI for weapons and surveillance, he said it was “good for society” for Google to be part of the evolving geopolitical conversation.
Parker said that, as a federal contractor, the company has been reviewing all of its programs and initiatives in response to Donald Trump’s executive orders directing federal agencies and contractors to dismantle DEI work. Parker’s own role has changed from chief diversity officer to vice-president of Googler engagement.
“What’s not changing is that we’ve always hired the best people for the job,” she said, according to a recording of the meeting reviewed by the Guardian.
Google’s CEO, Sundar Pichai, said the company “deeply cares” about hiring a workforce that represents the diversity of its global users, but that it also has to comply with the rules and regulations of the places where it operates.
“Our values are enduring, but we have to comply with legal directions depending on how they evolve,” Pichai said.
Pichai and the other executives, who were speaking from Paris while attending an international AI summit, were responding to questions employees had posted on internal forums. Some of those questions had been coordinated by the worker activist group No Tech for Apartheid as part of an effort to force executives to answer for the tech giant’s dramatic moves away from its previous core values.
According to screenshots reviewed by the Guardian, employees submitted 93 questions about the company’s decision to remove its pledge not to build AI weapons, and more than 100 about Google’s announcement that it was rolling back its diversity commitments. The company recently moved to using AI to summarise similar questions employees submit ahead of its regularly scheduled all-staff meetings, known as TGIF.
Last week, Google joined Meta and Amazon in shifting away from a focus on a culture of inclusivity in favor of policies molded in the image of the Trump administration. In addition to removing mentions of its commitment to diversity, equity and inclusion (DEI) from its filings with the Securities and Exchange Commission, the company said it would no longer set hiring targets for people from underrepresented backgrounds. The company also removed language from its publicly available AI principles that had pledged it would not build AI for harmful purposes, including weapons and surveillance.
“We’re being asked to sit at the table in some important conversations, and I think it’s good for society that Google has a role in those conversations in areas where we specialize – cybersecurity, or some of the work around biology,” said Walker, the company’s top legal executive. “Some of the strict prohibitions that were in the first version of the AI principles don’t jibe well with the more nuanced conversations we’re having now, but it remains the case that our north star through all of this is that the benefits substantially outweigh the risks.”
Google has long tried to walk a line between its stated corporate and cultural values and its pursuit of government and defence contracts. After employee protests in 2018, the company withdrew from the US Department of Defense’s Project Maven, which used AI to analyze drone footage, and then announced its AI principles and values.
Since then, however, the company has resumed work with the Pentagon, securing a share of the $9bn Joint Warfighting Cloud Capability contract alongside Microsoft, Amazon and Oracle. Google has also signed a contract to provide AI to the Israel Defense Forces. The tech giant had worked over time to distance that contract, known as Project Nimbus, from the military arm of the Israeli government, but documents obtained by the Washington Post revealed that the company not only collaborated with the IDF but scrambled to meet new demands for more AI access after the 7 October attack. It is unclear how the IDF uses Google’s AI capabilities, but, as the Guardian has reported, the Israeli military has used AI for a number of military purposes, including to help find and identify bombing targets.
In a statement, a Google spokesperson, Anna Kowalczyk, said the company’s work with the Israeli government is not “directed at highly sensitive, classified or military workloads relevant to weapons or intelligence services.”
Organizers with No Tech for Apartheid said the DEI and AI announcements are deeply connected. “SVP of people operations Fiona Cicconi communicated internally that the move to dismantle DEI programs was made to insulate government contracts from ‘risk’,” the group wrote in a call to action on Tuesday. “It is important to note that a large portion of government spending on technology services is spent through the military.”
For each category of employee questions, Google’s internal AI summarises the queries into a single question. The AI distilled the questions about AI weapons development as follows: “We recently removed a section from our AI principles page that pledged we would not use the technology in potentially harmful applications, such as weapons and surveillance. Why did we remove this section?”
The company does not display all of the questions submitted, but the list offers a snapshot of some of them. Employees asked how the updated AI principles would ensure the company’s tools are “not misused for harmful purposes,” and requested that leadership “please talk candidly and without corp speak and legalese.”
The third most popular question employees asked was why the AI summaries were so bad.

“The AI summaries of questions are terrible. Can we go back to answering the questions people actually asked?” it read.