A Norwegian man, Arve Hjalmar Holmen, has filed a complaint against OpenAI after ChatGPT falsely claimed he had murdered his two children and had been sentenced to 21 years in prison.
Holmen, a private citizen with no public profile, was shocked to discover that when he asked ChatGPT about himself, it generated a completely false story, attributing a tragic crime to him.
According to the chatbot’s response, Holmen’s sons, aged seven and ten, were found dead in a pond near their home in Trondheim, Norway, in December 2020. It further claimed that Holmen had been convicted of their murder, a claim that has no basis in reality. While some details were eerily close to his real life, such as his home town, the number of children he has, and their age gap, Holmen has never been accused or convicted of any crime.
ChatGPT’s Response
credit: BBC
The misinformation deeply disturbed Holmen, who fears the impact such false claims could have on his personal and professional life if they were ever shared or leaked. “Some think that there is no smoke without fire,” he said. “The fact that someone could read this output and believe it is true is what scares me the most.”
Holmen, supported by digital rights group Noyb, has taken legal action by filing a complaint with the Norwegian Data Protection Authority. The complaint argues that OpenAI has violated GDPR provisions regarding data accuracy and calls for the company to correct its model to prevent similar errors in the future. Noyb has also demanded that OpenAI face a fine for the chatbot’s defamatory response.
AI chatbots, including ChatGPT, are known to produce false or misleading information, a phenomenon known as “hallucination.” These tools generate responses based on probability rather than verified facts, leading to convincingly written but entirely incorrect statements. Despite disclaimers that AI-generated content may be inaccurate, Noyb’s lawyer Joakim Söderberg argues that this is not enough. “You can’t just spread false information and in the end add a small disclaimer saying that everything you said may just not be true,” he stated.
OpenAI responded to the complaint, acknowledging the issue but noting that the incident involved an older version of ChatGPT. The company has since updated its model to include real-time web searches, which it says has improved accuracy and reduced hallucinations.
Holmen’s case highlights the challenges of AI-generated content and the risks of misinformation. While AI developers continue to refine their models, cases like this raise concerns about accountability, accuracy, and the potential harm these systems can cause when they get things wrong.