Data security and privacy are serious challenges facing numerous companies. Artificial Intelligence (AI) companies are under tight scrutiny from regulatory bodies working to improve user data privacy and safety. In a recent development, Italy's Data Protection Authority fined OpenAI, the maker of ChatGPT, €15 million ($15.66 million). The fine follows a series of serious privacy violations in how ChatGPT handles personal information.

An investigation that began in March 2023 found that OpenAI failed to report a significant security breach that occurred that same month. In addition, nearly a year ago, the Garante (Garante per la protezione dei dati personali) found that OpenAI had been using users' personal information to train ChatGPT's AI models, in violation of the European Union's General Data Protection Regulation (GDPR), and that it failed to properly inform users about how their data was being used.

Another critical concern raised by the authority is the lack of an age verification system. Without proper age checks, children under 13 might receive responses inappropriate for their age and level of development. This is particularly concerning given the widespread use and accessibility of generative AI tools such as ChatGPT.

Besides levying the €15 million fine, the Italian Data Protection Authority has ordered OpenAI to carry out a six-month public awareness campaign across various media platforms, including television, radio, newspapers and the internet. The campaign is meant to promote public understanding of how ChatGPT works, what data it collects from both users and non-users, and how people can exercise their privacy rights, including the ability to delete, rectify or opt out of having their data used for AI training.

In response, OpenAI has called the fine "disproportionate" and plans to appeal. "When the Garante ordered us to stop offering ChatGPT in Italy in 2023, we worked with them to reinstate it a month later," an OpenAI spokesperson said Friday in an emailed statement. "They've since recognized our industry-leading approach to protecting privacy in AI, yet this fine is nearly 20 times the revenue we made in Italy during the relevant period." The company has nonetheless expressed its commitment to working with privacy authorities worldwide to develop AI that respects privacy rights.

Companies need to ensure that they adhere to regulatory requirements to avoid such consequences. It is important that AI companies and regulatory bodies work hand in hand to build stronger data privacy laws that promote both innovation and security. Users, in turn, need to understand how their personal data is gathered, processed and used to train the AI systems they interact with daily.
