LinkedIn has suspended the training of its AI models using UK users’ data, following intervention by the Information Commissioner’s Office (ICO).
The professional networking platform recently came under scrutiny for using member data to train its AI models. The ICO, the body responsible for enforcing data protection law in the UK, stepped in after users around the world raised concerns that they had been quietly opted into AI training without their explicit consent.
Following the regulator's intervention, that training has now been halted in the UK.
In a blog post, Blake Lawit, Senior Vice President and General Counsel at LinkedIn, addressed the changes to the Privacy Policy that generated the concerns. “At this time, we are not enabling training for generative AI on member data from the European Economic Area, Switzerland, and the United Kingdom, and will not provide the setting to members in those regions until further notice,” the post stated.
In response, Stephen Almond, the ICO's Executive Director of Regulatory Risk, expressed satisfaction with LinkedIn's decision to pause the AI training. He said, "We are pleased that LinkedIn has taken our concerns about its approach to training generative AI models with UK user data into account. We welcome LinkedIn's confirmation that it has suspended this model training, pending further discussions with the ICO."
This suspension is part of the ICO’s ongoing efforts to ensure that companies comply with data privacy regulations when using UK users’ personal data for AI training.
Recently, the ICO approved Meta's plan to use UK users' social media posts for AI training after a temporary halt. LinkedIn could follow a similar path if it adopts appropriate measures and complies with the UK GDPR and the Data Protection Act 2018, the leading data protection laws in the UK.
As LinkedIn adjusts its practices in the UK, it is important that regulators continue to monitor the situation to confirm that the platform meets privacy requirements and uses AI responsibly. Such oversight will help ensure that user data is protected and that AI practices remain transparent and ethical.