
Stanford Professor Jeff Hancock recently acknowledged using ChatGPT to help draft a court declaration, a process that produced hallucinated citations. The declaration, submitted by Prof. Hancock, supported proposed legislation designed to prevent deepfake technologies from influencing elections; the legislation faced a legal challenge from a conservative YouTuber and Republican state Representative Mary Franson of Alexandria.

In his court filing, Hancock took responsibility for the citation errors. He explained that he used GPT-4o and Google Scholar in his research process, specifically to identify relevant academic articles that could support his expert testimony on AI’s impact on misinformation.

“I wrote and reviewed the substance of the declaration, and I stand firmly behind each of the claims made in it, all of which are supported by the most recent scholarly research in the field and reflect my opinion as an expert regarding the impact of AI technology on misinformation and its societal effects,” Hancock stated. However, he admitted to overlooking some critical errors during the research and drafting process.

The citation errors occurred when Hancock used GPT-4o to expand on his original bullet points. He had inserted “[cite]” placeholders intended to remind himself to add proper citations later. Instead, the AI generated entirely fabricated citations at those points, a phenomenon known as “hallucination” or “confabulation” in AI.
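For illustration, here is a minimal sketch of the draft-expansion pattern described above, using the OpenAI Python SDK. The bullet points, prompt wording, and settings are hypothetical, not Hancock’s actual workflow; the point is that “[cite]” placeholders sent to a model can come back replaced with plausible-looking but fabricated references, which is why every citation in the output must be verified by hand.

```python
# Hypothetical sketch of expanding draft bullet points with GPT-4o.
# Not Hancock's actual prompt or code; for illustration only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Draft notes with "[cite]" markers the author meant to fill in later.
bullets = """\
- Deepfake video can influence viewers' political judgments [cite]
- Disclosure labels only partially reduce that influence [cite]
"""

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": "Expand these notes into declaration paragraphs:\n" + bullets,
        }
    ],
)

draft = response.choices[0].message.content
print(draft)
# Any reference appearing where a "[cite]" placeholder stood should be
# treated as unverified until it is located in Google Scholar or the
# cited journal itself -- models readily invent authors, titles, and DOIs.
```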

“I did not intend to mislead the Court or counsel,” Hancock wrote in his filing. He expressed sincere regret for any confusion caused by the citation errors while firmly defending the core content of his declaration.

This incident highlights the ongoing challenges of integrating AI tools into professional research and underscores the importance of careful human verification of AI-generated content.
