Microsoft has developed a new feature called “Correction” to combat AI “hallucinations,” where chatbots present fabricated information as fact. It automatically identifies and rectifies inaccuracies in AI-generated text, aiming to improve the reliability and trustworthiness of AI-powered communication.
The feature is currently available as part of Microsoft’s Azure AI Content Safety API and can be used with any text-generating AI model, including Meta’s Llama and OpenAI’s GPT-4o.
For now, however, the feature is available only in preview.
“Correction is powered by a new process of utilising small language models and large language models to align outputs with grounding documents. We hope this new feature supports builders and users of generative AI in fields such as medicine, where application developers determine the accuracy of responses to be of significant importance,” a Microsoft spokesperson told TechCrunch.
How the “Correction” feature works
Microsoft's Correction uses two “meta models” to identify and rewrite hallucinations. One model detects potential errors, while the other attempts to correct them using a provided source of truth.
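Microsoft has not published the internals of those meta models; what developers see is the Azure AI Content Safety REST API. The Python sketch below shows roughly how an application might submit a model's answer along with grounding documents and request a corrected rewrite. The endpoint shape follows the documented groundedness-detection preview, but the exact API version string and the `correction` request field are assumptions based on the preview at the time of writing and may differ from what ships.

```python
import requests

# Placeholder endpoint and key -- replace with your own Content Safety resource.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
API_KEY = "<your-content-safety-key>"

def detect_and_correct(text: str, grounding_sources: list[str]) -> dict:
    """Ask the groundedness-detection API to flag ungrounded claims in
    `text` and, where the preview supports it, rewrite them against the
    supplied grounding documents (the "source of truth")."""
    url = f"{ENDPOINT}/contentsafety/text:detectGroundedness"
    payload = {
        "domain": "Generic",            # e.g. "Medical" for clinical text
        "task": "Summarization",
        "text": text,                   # the model output to check
        "groundingSources": grounding_sources,
        "correction": True,             # assumed preview flag: request a grounded rewrite
    }
    resp = requests.post(
        url,
        params={"api-version": "2024-09-15-preview"},  # assumed preview version
        headers={
            "Ocp-Apim-Subscription-Key": API_KEY,
            "Content-Type": "application/json",
        },
        json=payload,
        timeout=30,
    )
    resp.raise_for_status()
    # Response indicates whether ungrounded content was detected and,
    # when correction is enabled, includes the rewritten text.
    return resp.json()
```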
“Correction can significantly enhance the reliability and trustworthiness of AI-generated content by helping application developers reduce user dissatisfaction and potential reputational risks,” the Microsoft spokesperson said.
“It is important to note that groundedness detection does not solve for ‘accuracy,’ but helps to align generative AI outputs with grounding documents,” the spokesperson added.
Addressing hallucinations in Google Gemini
Earlier this year, Google launched Gemini 1.5 Pro on Vertex AI and AI Studio with a “code execution” feature that iteratively runs and refines the code the model generates, reducing errors with each pass. Alongside this, developers can adapt the model to particular contexts and use cases through fine-tuning and a process known as “grounding,” which ties the model’s responses to trusted data sources.
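For a sense of how this looks in practice, here is a minimal sketch using the google-generativeai Python SDK, following Google's published examples at launch; the API key is a placeholder, and the prompt is illustrative.

```python
import google.generativeai as genai

genai.configure(api_key="<your-gemini-api-key>")  # placeholder key

# Enabling the built-in code-execution tool lets the model write Python,
# run it in a sandbox, and iterate on any errors before answering.
model = genai.GenerativeModel("gemini-1.5-pro", tools="code_execution")

response = model.generate_content(
    "Compute the sum of the first 50 prime numbers. "
    "Generate and run code for the calculation."
)
print(response.text)  # final answer, informed by the executed code
```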