The Data Protection Commission (DPC) announced on Thursday that it has launched a cross-border statutory inquiry into Google Ireland Limited (Google). The investigation focuses on Google’s Pathways Language Model 2 (PaLM 2).

Google’s PaLM 2 is an advanced language model with enhanced multilingual, reasoning, and coding capabilities. According to Google, its capabilities surpass those of previous state-of-the-art language models, including its predecessor, PaLM.

The DPC is Ireland’s independent national authority responsible for upholding the fundamental right of people in the EU to the protection of their personal data, and it serves as Ireland’s supervisory authority under the General Data Protection Regulation (GDPR). Because Google’s European headquarters is based in Dublin, the DPC acts as the company’s lead supervisory authority in the EU, responsible for enforcing the GDPR against it.

The basis of the inquiry, cross-border processing, refers to situations where an organisation with establishments in multiple EU Member States handles personal data, or where an organisation established in a single Member State processes the data of people in more than one Member State.

The investigation will examine whether Google has complied with its legal obligation, under Article 35 of the GDPR, to carry out a Data Protection Impact Assessment (DPIA). A DPIA is required when data processing activities could pose high risks to individuals’ privacy. The assessment involves evaluating the potential impacts of the processing on privacy and implementing measures to mitigate any identified risks.

While Google has not disclosed specific details about the use of individual user data in PaLM 2, the company generally collects and uses data from its services to enhance its AI systems. According to its privacy policy, Google gathers data, including user activity, device information, location data, publicly available information, and interactions with other Google products. This data may contribute to the broader training dataset used for models like PaLM 2. Additionally, the PaLM 2 Technical Report mentions that publicly available sources, such as translation systems and evaluations, are used as part of the dataset for training the model.

In a statement to Fortune, a Google representative said, “We take seriously our obligations under the GDPR and will work constructively with the DPC to answer their questions.”

The EU is serious about regulating the use of AI with respect to human rights and privacy, as well as ensuring compliance with the GDPR. In June, Meta agreed to pause training its Llama AI model with public comments from European Facebook and Instagram users. Shortly after, X (formerly Twitter) halted the use of European users’ data for training its Grok AI model following regulatory action. Most recently, an American facial recognition company, Clearview AI, was sanctioned for violating the GDPR. The company included publicly sourced images of European individuals in its database without obtaining consent from them.

As AI technologies advance, regulatory bodies are intensifying their oversight to ensure that organisations uphold their legal obligations and protect individuals’ privacy. These investigations signal a continued commitment by authorities to enforce firm data protection standards across the industry.
