
AI hallucination, also known as confabulation, is one of the major flaws of generative AI technology. It is a growing concern that has once again reached the courtroom. The recent incident involves well-known Stanford professor Jeff Hancock, who is in hot water over Minnesota's election misinformation law.

The controversy arose because a legal document Prof. Hancock submitted in support of new legislation, "Use of Deep Fake Technology to Influence an Election," contained citations to sources that do not exist. The law would ban the use of deepfake technology to influence an election. It is, however, being challenged in federal court by a conservative YouTuber and Republican state Rep. Mary Franson of Alexandria for violating First Amendment free speech protections.

The Minnesota Reformer reports that, at the request of Attorney General Keith Ellison, Hancock provided a legal document that included references to non-existent sources, which appear to have been hallucinated by large language models (LLMs) such as ChatGPT.

One example was a supposed 2023 study titled "The Influence of Deepfake Videos on Political Attitudes and Behavior," allegedly published in the Journal of Information Technology & Politics. No study by that name appears in that journal, academic databases have no record of it, and the specific journal pages cited contain two entirely different articles. Libertarian law professor Eugene Volokh found that another cited study, allegedly titled "Deepfakes and the Illusion of Authenticity: Cognitive Processes Behind Misinformation Acceptance," also does not appear to exist.

There have been other incidents of lawyers falling victim to AI hallucination, but this case is particularly noteworthy because Prof. Hancock is the founding director of the Stanford Social Media Lab and is well known for his research on how people use technology to deceive, from sending texts and emails to detecting fake online reviews. The fake citations stain the credibility of the legal document and make this incident especially embarrassing.

To prevent similar incidents, professionals need to apply more caution when using AI tools. They should thoroughly verify all AI-generated content and citations against multiple authoritative sources. They should also maintain detailed records of original research materials, cross-reference academic papers through established databases, and follow a systematic fact-checking process.
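As one illustration of what cross-referencing through an established database can look like in practice, here is a minimal sketch in Python. It assumes the free Crossref REST API and the requests library, neither of which is mentioned in this article; the idea is simply to search a citation's title and inspect what the database actually returns, since a fabricated reference will match nothing or only unrelated works.

```python
import requests

CROSSREF_API = "https://api.crossref.org/works"

def find_citation(title: str, rows: int = 5):
    """Search Crossref for works whose bibliographic data matches `title`.

    Returns a list of (title, journal, doi) tuples for the closest matches.
    An empty or clearly unrelated result list is a red flag that the
    citation may be fabricated.
    """
    resp = requests.get(
        CROSSREF_API,
        params={"query.bibliographic": title, "rows": rows},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    results = []
    for item in items:
        work_title = (item.get("title") or ["<no title>"])[0]
        journal = (item.get("container-title") or ["<no journal>"])[0]
        doi = item.get("DOI", "")
        results.append((work_title, journal, doi))
    return results

if __name__ == "__main__":
    # Check the citation that reportedly appeared in the filing.
    query = "The Influence of Deepfake Videos on Political Attitudes and Behavior"
    for work_title, journal, doi in find_citation(query):
        print(f"{work_title} | {journal} | https://doi.org/{doi}")
```

A loose title search like this is only a first pass: hallucinated citations often resemble real papers closely enough that the returned matches still need to be read side by side against the claimed reference.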

This incident serves as a reminder that anyone, even experts in the field, can fall prey to AI hallucinations. With the massive adoption of AI in academia, creative work, law, and other sectors, it is important that users maintain high verification standards. Companies that develop these tools should also work on improving the accuracy of their LLMs by finding ways to combat AI hallucinations.
