The Alaska Education Board has recently come under scrutiny following the discovery of fake sources in a policy draft generated by Artificial Intelligence (AI).
State Education Commissioner Deena Bishop used Generative AI (GenAI) to draft a policy on cellphone use in schools, resulting in a document that included citations for academic studies that do not exist.
This type of error, known as an "AI hallucination," occurs when an AI model generates information that is false or fabricated but presents it as fact.
Initially posted on the Alaska Department of Education and Early Development (DEED) website, the policy draft cited studies supporting the restriction of cellphones in schools. Some of these citations included:
- A study titled “Banning Mobile Phones Improves Student Performance: Evidence from a Quasi-Experiment” in Computers in Human Behavior.
- A 2019 study from the American Psychological Association.
- Additional studies from the Journal of Educational Psychology.
However, four out of six of these citations were found to be non-existent. The document's reference section also included URLs that led to articles on entirely different topics. For instance, the URL for the supposed "Computers in Human Behavior" study led to an article titled "Sexualized Behaviors on Facebook," with no evidence of the referenced cellphone study.
Similarly, two studies the state claimed were published in the "Journal of Educational Psychology" could not be found.
The document did not mention the use of AI, and the fake references were only discovered after public scrutiny prompted an investigation.
At first, a department spokesperson referred to the false sources as "placeholders." However, Commissioner Bishop later clarified that they were part of a first draft and that she had used GenAI to generate the citations. She explained that she noticed the errors before the board meeting and sent corrected citations to board members, who then adopted the resolution.
Ellie Pavlick, an assistant professor of Computer Science and Linguistics at Brown University and a research scientist at Google DeepMind, reviewed the draft and confirmed that the citations resembled typical AI-generated hallucinations. "That is exactly the type of pattern that one sees with AI-hallucinated citations. It's just very typical that you would see these fake citations that would have a real journal, sometimes even a real person, a plausible name, but not correspond to a real thing."
She also expressed concerns about public trust, noting that reliance on fake AI-generated content makes formal documents less likely to be trusted by the public. "I think there's also a real concern, especially when people in positions of authority use this, because of this kind of degrading of trust that's already there," she said. "Once it comes out a few times that information is fake, whether intentionally or not, it becomes easy to dismiss anything as fake."
She pointed to a legal case last year in which an attorney used an AI chatbot to write a court filing. The chatbot cited non-existent cases, which the lawyer then submitted to the court. The presiding judge considered sanctioning the lawyer.
The Alaska incident serves as a cautionary tale for the use of AI. While AI can speed up drafting and research tasks, it is prone to fabricating plausible-sounding details. Maintaining human oversight of AI-generated content is therefore essential to ensure accuracy and reliability.