Microsoft’s chief diversity officer says more diverse talent and sustained investment will help solve bias problems in AI.
At the start of 2023, Microsoft found itself caught in a PR storm. The company was trying to demonstrate its progress in artificial intelligence after investing billions of dollars in OpenAI, the developer of ChatGPT. It added an AI-powered chatbot to its Bing search engine, becoming one of the first legacy tech companies to build AI into a flagship product – but things went wrong as soon as people started using it.
The chatbot drew international attention after a New York Times reporter was “deeply upset” by a conversation he had with Bing. Users soon began sharing screenshots that showed the tool using racist language and declaring plans for world domination. Microsoft quickly issued a fix, limiting the AI’s responses and capabilities. In the months that followed, the company replaced Bing’s chatbot with Copilot, now available as part of its Microsoft 365 software and the Windows operating system.
Microsoft is not the only company to face AI controversy, and critics say such fiascos are evidence of a wider complacency about the dangers of AI across the tech industry. Google’s Bard tool, for example, famously gave an inaccurate answer to a question about the James Webb Space Telescope in a promotional demo – a mistake that wiped $100bn (£82bn) off the company’s market value. The AI model, now called Gemini, was later accused of “woke” bias after it appeared unwilling to generate images of white people in response to certain prompts.
Still, Microsoft says that with the right safeguards in place, AI can be a tool to promote fairness and representation. One solution it proposes for the problem of bias in AI is to make the teams building the technology more diverse and inclusive.
“This is more important than ever before as we think about building inclusive AI and inclusive technology for the future,” said Lindsay-Rae McIntyre, Microsoft’s chief diversity officer, who joined the company in 2018.
A former teacher of deaf students, McIntyre has worked in human resources in the tech industry, including at IBM, for more than 20 years, and has lived and worked across the U.S. as well as in Singapore and Dubai. Now, she says, her team at Microsoft is increasingly focused on embedding inclusive practices in the company’s AI research and development to ensure better representation “at all levels of the company.”
There’s good reason for this focus: the adoption of AI products has breathed new life into the nearly 50-year-old company, which in July announced a 15% rise in quarterly revenue to $64.7bn (£49.2bn), driven mainly by growth in its Azure cloud business – a big beneficiary of the AI boom, as customers train their AI systems on the platform.
These efforts also bring the company closer to a long-standing goal of building technology that “understands us,” as CEO Satya Nadella recently put it. But to be empathetic, relevant and accurate, AI – and more specifically, the large language models behind tools like ChatGPT – needs to be built and trained by a more diverse group of developers, engineers and researchers, McIntyre said.
This may not be a foolproof solution. The large language models that underpin tools like Copilot, ChatGPT, and Gemini are built on huge datasets collected from across the internet, and any bias in that training data is very hard to keep from surfacing later – a particular concern once AI is used in the real world.
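To see why that is so hard, consider a deliberately crude illustration – a toy sketch of ours, not anything Microsoft has published: even a “model” that does nothing more than count word co-occurrences will faithfully reproduce whatever skew its training text contains.

```python
# Toy illustration (hypothetical, plain Python): a "model" that only learns
# co-occurrence statistics reproduces the skew of its training corpus.
from collections import Counter

corpus = [
    "the doctor said he was ready",
    "the doctor said he would call",
    "the doctor said she was ready",
    "the nurse said she was ready",
    "the nurse said she would call",
]

# Count which pronoun follows each occupation in the corpus.
pairs = Counter()
for sentence in corpus:
    words = sentence.split()
    for occupation in ("doctor", "nurse"):
        if occupation in words:
            idx = words.index(occupation)
            pronoun = next((w for w in words[idx:] if w in ("he", "she")), None)
            if pronoun:
                pairs[(occupation, pronoun)] += 1

print(pairs)
# Counter({('doctor', 'he'): 2, ('nurse', 'she'): 2, ('doctor', 'she'): 1})
# A purely statistical model of this corpus would complete "the nurse said ___"
# with "she" every time: the skew comes from the data, not a design decision.
```

Real models are vastly more sophisticated, but the underlying dynamic – statistical patterns in, statistical patterns out – is the same.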
Still, Microsoft believes AI can support diversity and inclusion (D&I) if these ideals are built into AI models from the beginning. The company maintains that inclusivity has always been a big part of its culture, from the Xbox Adaptive Controller, designed for players with limited mobility, to the accessibility features in many of its Microsoft 365 products. But the pressure to keep up with the pace and scale of AI growth has increased the need for Microsoft to invest in diversity within its own workforce.
In its 2023 diversity report, Microsoft said about 54.8% of its core workforce is made up of racial and ethnic minorities, roughly in line with rivals such as Apple and Google. Meanwhile, 31.2% of Microsoft’s employees are women, a few percentage points lower than at Apple and Google.
Below, McIntyre speaks to the BBC about addressing bias in generative AI, working across cultures to be inclusive, and what Microsoft is doing to prepare its own workforce for the rapid evolution of AI.
How is Microsoft addressing the bias in generative AI that impacts these kinds of tools?
We’re investing heavily in this, and we’re also educating people on elements of bias and inclusivity through all the work we do.
At Microsoft, we believe AI technology should work fairly. We continue to invest in research to identify, measure, and mitigate different types of fairness-related harms, and we are committed to innovating new ways to proactively test AI systems, as outlined in our Responsible AI Standard. To achieve this, we collaborate with a range of experts, including anthropologists, linguists, and social scientists, all of whom bring valuable perspectives that advance and challenge the thinking of our engineers and developers.
To develop and deploy AI systems responsibly, we are centering D&I: to ensure a broad range of backgrounds, skills, and experiences is represented across the Microsoft teams that conceive and build AI; to ensure that AI is developed in ways that are inclusive of all users; and to ensure that leaders making decisions about these teams and products are equipped with the tools to understand issues of privilege, power, and bias.
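McIntyre doesn’t name specific tooling here, but Microsoft’s open-source Fairlearn project is one public example of what “measuring fairness-related harms” can look like in practice. A minimal sketch, assuming Fairlearn is installed and using invented toy data:

```python
# A minimal sketch of measuring one fairness-related harm: disparity in how
# often a model makes positive decisions for different groups. Fairlearn
# (an open-source project that originated at Microsoft) is used purely for
# illustration; the interview does not name any specific tooling.
from fairlearn.metrics import (MetricFrame, demographic_parity_difference,
                               selection_rate)

# Invented toy data: 1 = the model recommends a candidate, 0 = it doesn't.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]   # ground-truth labels (required by the API)
y_pred = [1, 0, 1, 1, 0, 0, 0, 0]   # the model's predictions
groups = ["A", "A", "A", "A",       # a sensitive attribute per example,
          "B", "B", "B", "B"]       # e.g. a demographic group

# Selection rate per group: the share of positive decisions each group gets.
frame = MetricFrame(metrics=selection_rate,
                    y_true=y_true, y_pred=y_pred,
                    sensitive_features=groups)
print(frame.by_group)  # group A: 0.75, group B: 0.00 -- a large disparity

# Demographic parity difference: 0.0 would mean equal selection rates.
print(demographic_parity_difference(y_true, y_pred, sensitive_features=groups))
```

Mitigation is a separate, harder step – Fairlearn and similar toolkits offer techniques for it – but measurement like this is where auditing usually starts.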
Microsoft faced strong backlash this summer when news broke that it was cutting its diversity and inclusion team. The company later clarified that the team remained intact. Has anything changed about the company’s approach to this issue?
We are truly fortunate that Microsoft’s commitment to diversity and inclusion is unwavering and unchanging – if anything, we are expanding. This news cycle reminded me of how many people depend on us to deliver inclusive technology, to stay at the forefront of diversity and inclusion, to share our learnings, and to listen to how others in our industry are experiencing this moment. It’s not because we know more than anyone else – of course not. We’re constantly learning. But we have resources that not every company has.
What other areas of focus do you have around AI and inclusion?
One really dynamic area is making (AI) available in more languages. When you work in a language other than your native one, your brain has to work harder, you’re less productive, and you can’t express your true self. There’s a real opportunity with AI to make it even more thoughtful and relevant across global languages. It’s a huge task, but we’re learning more every day, and AI is getting smarter and smarter.
Beyond AI, we’ve partnered with external organizations that support the LGBTQ+ community, as well as our own employee resource group, on adding pronouns to user profiles in Microsoft 365. This helps people feel seen, recognized and cared for in technology, which is the experience we want people to have.
How do you help your 230,000 employees understand that allyship can look different in different cultures?
We have a core strategy, but we localize it. For example, in India we ran a D&I experience with managers and leaders at the intersection of race, ethnicity and religion. Copilot can also surface the qualitative and quantitative responses from that feedback, which we can then use more effectively when we roll it out to our entire workforce.
We’ve done similar work for Indigenous communities in Australia and New Zealand, and in the Middle East and Africa we wanted to better support employees and their families going through menopause – something that isn’t always openly discussed across cultures.
We also need to think about simple things like devices, the languages people are using, and accessibility features. Everyone wants AI to be knowledgeable, relevant, and empathetic – we want to feel like the AI understands us and what we’re trying to accomplish. So if we can bring cultural context into AI, it makes for a more satisfying experience. And the expectations we place on technology ultimately come from having a diverse workforce that’s working with, building, and creating AI from the ground up.
Microsoft by the numbers:
228,000: Microsoft’s worldwide workforce
190: Number of countries in which Microsoft operates
9: Employee-led global resource groups to foster an inclusive work environment
5 billion+: Chats and images generated through Copilot to date
60,000+: Azure AI cloud customers building AI into their apps and services
As AI evolves, how can we ensure that talented people aren’t left behind?
We have an amazing AI learning hub where employees have access to the latest and greatest learning material. We also have employee resource groups that are upskilling and using some of that content to educate themselves. Whether they’re engineers, field sellers, or program managers, we bring AI (courses) to them to help them be more productive. This also applies to our HR employees: we have a large HR team that is focused on how we can bring AI to the HR experience at Microsoft.
How is Microsoft using AI in HR?
Copilot, for example, helps us quickly respond to employee questions, and we’ve also introduced AI skills courses to ensure all employees have a common understanding of AI technology as we apply it across different business areas.
For organizations considering how to bring AI to HR operations, there are three strategies to consider. First, form learning communities with experts to develop insights together. Second, get curious about the technology – for example, engage with Copilot to learn how it can help build inclusion within your organization. Finally, ensure human-centered design, with empathy and people’s experience at the center.