LinkedIn has stopped using UK user data to train its artificial intelligence (AI) models after regulators raised concerns.
Users of the Microsoft-owned, career-focused social network around the world had been quietly opted in to having their data used to train the site’s AI models.
But the UK Information Commissioner’s Office (ICO) said on Friday it was “pleased” to confirm that LinkedIn had suspended its use of UK users’ information.
LinkedIn said it welcomes the opportunity to further engage with the ICO.
“We are pleased that LinkedIn has considered the concerns we raised about its approach to using information about UK users to train its generative AI models,” said Stephen Almond, executive director of the ICO.
Many major tech companies, including LinkedIn, are turning to user-generated content on their platforms as a new source of data to train their AI tools.
“Generative” AI tools, such as chatbots like OpenAI’s ChatGPT and image generators like Midjourney, learn from vast amounts of text and image data.
However, a LinkedIn spokesperson told BBC News that the company believes users should have control over their data.
As a result, the company is offering users the ability to opt out of having their data used to train AI models.
“We have always used some form of automation in LinkedIn products and have always been clear that users have choice over how their data is used,” it added.
Social platforms, where users post about their lives and work, can provide a wealth of material to help make such tools sound more natural.
“The current reality is that many people are looking for help creating a first draft of their resume – they’re looking for help crafting their message to recruiters for their next career opportunity,” a LinkedIn spokesperson said.
“At the end of the day, people want an edge in their careers, and our gen-AI service is aimed at helping them do that.”
In its global privacy policy, the company said user data helps develop AI services, and a help article said data is also processed when users interact with tools that, for example, give suggestions on how to write a post.
This no longer applies to users in the UK, the European Union, the European Economic Area or Switzerland.
Meta and X (formerly Twitter), along with LinkedIn, are among the platforms hoping to use content posted on their sites to help develop generative AI tools.
But they face regulatory hurdles in the UK and EU, where strict privacy rules limit how and when personal data can be collected.
Meta halted plans to train its AI tools using public posts, comments and images of UK adults in June following criticism and concerns from the ICO.
The company recently began notifying UK users of Facebook and Instagram about its plans again and clarified the opt-out procedure after consulting with data watchdogs.
LinkedIn is likely to face a similar process in the future before resuming plans to train its tools with data from UK users.
“To make the most of generative AI and the opportunities it brings, it is vital that citizens can trust that their privacy rights will be respected from the start,” the ICO’s Almond said.
He said regulators would “continue to monitor” developers such as Microsoft and LinkedIn to make sure they were protecting the data rights of UK users.