
One of the major flaws of AI technology, especially generative AI tools like ChatGPT, is that it sometimes produces false, biased, or misleading information. This issue is popularly known as AI hallucination, or confabulation. It undermines the credibility of AI models, making it difficult for the technology to be fully adopted in important areas such as education and healthcare.

Microsoft, a top player in the AI industry, has devised a way to combat AI hallucinations. The company has filed a patent for a new technical method that could help stop AI from making things up. The application, filed last year with the US Patent & Trademark Office (USPTO) and made public on October 31, is titled “Interacting with a Language Model using External Knowledge and Feedback.”

The main feature of this solution is the Response-Augmenting System (RAS). It is designed to enhance the AI by automatically retrieving additional information from external sources based on the user's query, working almost like a fact-checker for AI conversations. When a user submits a query, the system automatically searches online sources and databases for supporting information rather than relying solely on the AI's built-in knowledge. If the AI gives an answer that does not match the available information from reliable sources, the RAS marks it as “not useful” and warns the user that the information might be wrong, giving them the option to provide feedback.
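Based on that description, here is a minimal sketch of what such a response-augmenting layer could look like. Everything in it — the function names, the canned evidence source, and the crude word-overlap support check — is a hypothetical illustration, not Microsoft's patented implementation, which would presumably rely on far more sophisticated retrieval and grounding techniques.

```python
from dataclasses import dataclass

@dataclass
class CheckedResponse:
    answer: str
    useful: bool           # False when the answer conflicts with the evidence
    warning: str | None    # shown to the user, who can then give feedback

def retrieve_evidence(query: str) -> list[str]:
    """Hypothetical stand-in for searching online sources and databases.

    A real system would query a search index or knowledge base; this
    returns a canned snippet purely for illustration.
    """
    return ["The Eiffel Tower is 330 metres (1,083 ft) tall."]

def is_supported(answer: str, evidence: list[str]) -> bool:
    """Crude support check based on shared words.

    A production system would use a trained entailment or groundedness
    model here; word overlap is only a placeholder.
    """
    answer_terms = set(answer.lower().split())
    return any(len(answer_terms & set(doc.lower().split())) >= 3
               for doc in evidence)

def check_response(query: str, model_answer: str) -> CheckedResponse:
    """Augment a model answer with external evidence and flag mismatches."""
    evidence = retrieve_evidence(query)
    if is_supported(model_answer, evidence):
        return CheckedResponse(model_answer, useful=True, warning=None)
    return CheckedResponse(
        model_answer,
        useful=False,
        warning=("This answer is not supported by the retrieved sources "
                 "and may be wrong. You can submit feedback."),
    )
```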

One important aspect of this solution is that it does not require companies to completely rebuild or retrain their AI models. Instead, it works alongside existing systems, making it an accessible tool for reducing false information generated by AI: it essentially adds a layer of safety without touching the underlying model.
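To make that "works alongside existing systems" point concrete: continuing the hypothetical sketch above, the check can wrap whatever function a deployment already uses to generate answers, with no retraining involved. The `guarded_chat` helper below is again an illustration, not the patented design.

```python
from typing import Callable

def guarded_chat(model: Callable[[str], str], query: str) -> str:
    """Wrap an existing, unmodified model callable with the checking layer.

    Uses check_response() from the sketch above; the wrapped model is
    neither rebuilt nor retrained.
    """
    result = check_response(query, model(query))
    if result.warning:
        return f"{result.answer}\n\n[Warning] {result.warning}"
    return result.answer

# Any existing chatbot function can stand in for `fake_model` here.
fake_model = lambda q: "The Eiffel Tower is 330 metres tall."
print(guarded_chat(fake_model, "How tall is the Eiffel Tower?"))
```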

Microsoft has clarified that this new patent is entirely separate from its existing Azure AI Content Safety tool, which already provides fact-checking capabilities for business AI chatbots. The Content Safety tool works behind the scenes to check whether AI responses are backed by real facts before showing them to users.

Although the patent is still under review by the USPTO, if approved, this technology could become a valuable addition to Microsoft's AI products and, more broadly, a major step in combating AI hallucination. However, patent approval doesn't guarantee the technology will be implemented, as companies often patent ideas without necessarily developing them.

If it works, this technology could help make AI more trustworthy and useful in areas where accuracy is critical, while keeping the implementation process straightforward for developers.
