Microsoft launches ‘Correction’ feature: What it is and how it works

Microsoft has launched a ‘Correction’ feature to combat AI ‘hallucinations,’ where chatbots present false information as fact. Part of the Azure AI Content Safety API, it works with text-generating models such as Meta’s Llama and OpenAI’s GPT-4o. Using a pair of ‘meta’ models, it detects and corrects errors, improving the reliability of AI-generated content.
Microsoft has developed a new feature called “Correction” to combat the issue of AI “hallucinations,” where chatbots present false and fabricated information as fact. It automatically identifies and rectifies inaccuracies in AI-generated text, aiming to improve the reliability and trustworthiness of AI-powered communication.
The feature is currently available in preview as part of Microsoft’s Azure AI Content Safety API and can be used with any text-generating AI model, including Meta’s Llama and OpenAI’s GPT-4o.
“Correction is powered by a new process of utilising small language models and large language models to align outputs with grounding documents. We hope this new feature supports builders and users of generative AI in fields such as medicine, where application developers determine the accuracy of responses to be of significant importance,” a Microsoft spokesperson told TechCrunch.

How the “Correction” feature works


Microsoft's Correction uses two “meta models” to identify and rewrite hallucinations. One model detects potential errors, while the other attempts to correct them using a provided source of truth.
“Correction can significantly enhance the reliability and trustworthiness of AI-generated content by helping application developers reduce user dissatisfaction and potential reputational risks,” the Microsoft spokesperson was quoted as saying.
“It is important to note that groundedness detection does not solve for ‘accuracy,’ but helps to align generative AI outputs with grounding documents,” the spokesperson added.
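In practice, this detect-then-rewrite flow is exposed through the Content Safety service’s groundedness-detection endpoint: the caller supplies the model’s output plus one or more grounding documents, and the service flags ungrounded claims and can return a corrected rewrite. The Python sketch below is only an illustration; the endpoint path, API version and request fields are assumptions based on the public preview documentation and may differ from your deployment.

```python
import requests

# Assumed endpoint and API version for the groundedness-detection preview;
# check the Azure AI Content Safety documentation for the current values.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
API_VERSION = "2024-09-15-preview"  # assumption: a preview version that exposes correction
API_KEY = "<your-content-safety-key>"


def detect_and_correct(text: str, grounding_sources: list[str]) -> dict:
    """Ask the service to flag ungrounded claims in `text` and, if supported,
    return a rewrite aligned with the supplied grounding documents."""
    url = f"{ENDPOINT}/contentsafety/text:detectGroundedness?api-version={API_VERSION}"
    payload = {
        "domain": "Generic",
        "task": "Summarization",
        "text": text,                           # the AI-generated output to be checked
        "groundingSources": grounding_sources,  # the provided 'source of truth' documents
        "correction": True,                     # assumption: opts into the Correction rewrite
    }
    resp = requests.post(
        url,
        json=payload,
        headers={"Ocp-Apim-Subscription-Key": API_KEY},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()  # expected to list ungrounded spans and, if requested, corrected text


if __name__ == "__main__":
    result = detect_and_correct(
        text="The patient should take 500mg of the drug twice a day.",
        grounding_sources=["The approved dosage is 250mg once daily."],
    )
    print(result)
```

The key design point in the quotes above is that the service does not judge truth in the abstract: it only checks the output against the grounding documents the developer passes in, so the quality of those documents determines the quality of the correction.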

Addressing hallucinations in Google Gemini


Earlier this year, Google launched Gemini 1.5 Pro on Vertex AI and AI Studio with a “code execution” feature that lets the model run and iteratively refine the code it generates, reducing errors. Google also offers fine-tuning and a process known as “grounding,” which ties the model’s responses to specified data sources so it can be adapted to particular contexts and use cases.
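For context, enabling the code-execution tool means the model can write Python, run it, inspect the result and retry before answering. A minimal sketch using the google-generativeai Python SDK is shown below; the model name and tool flag reflect the public Gemini API as generally documented and may differ on Vertex AI.

```python
import google.generativeai as genai

# Assumes the Gemini API (AI Studio) Python SDK; setup on Vertex AI differs.
genai.configure(api_key="<your-gemini-api-key>")

# Enabling the code-execution tool lets the model write, run and iteratively
# refine Python code before returning an answer, cutting down on arithmetic
# and logic slips in the final response.
model = genai.GenerativeModel(
    model_name="gemini-1.5-pro",
    tools="code_execution",
)

response = model.generate_content(
    "Compute the sum of the first 50 prime numbers and show your working."
)
print(response.text)
```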
About the Author
TOI Tech Desk

The TOI Tech Desk is a dedicated team of journalists committed to delivering the latest and most relevant news from the world of technology to readers of The Times of India. TOI Tech Desk’s news coverage spans a wide spectrum across gadget launches, gadget reviews, trends, in-depth analysis, exclusive reports and breaking stories that impact technology and the digital universe. Be it how-tos or the latest happenings in AI, cybersecurity, personal gadgets, platforms like WhatsApp, Instagram, Facebook and more, TOI Tech Desk brings the news with accuracy and authenticity.
