Microsoft has developed a new “Correction” tool aimed at catching erroneous content produced by artificial intelligence models. AI models sometimes generate incorrect or fabricated content, a problem generally referred to as “AI hallucination.” In simple terms, Microsoft intends to address it by using one AI to check another.
Microsoft Aims to Provide Solutions with This Tool
Generative artificial intelligence (GenAI) has rapidly gained popularity in recent years. However, chatbots built on this technology sometimes produce misinformation. To address these errors, Microsoft is launching a new “Correction” feature.
This feature is built on the company’s existing “Groundedness Detection” technology. Essentially, the new tool verifies AI-generated text by cross-referencing it against a grounding document supplied by the user.
Microsoft’s new Correction tool will be offered through the Microsoft Azure AI Content Safety API. Users of generative AI models such as OpenAI’s GPT-4o and Meta’s Llama will soon be able to try the feature, which first flags potentially inaccurate passages and then checks them against the reliable grounding source.
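To make the cross-referencing idea concrete, here is a deliberately simple sketch of how generated text could be compared against a grounding document. This is a toy word-overlap heuristic for illustration only, not Microsoft’s method, and all names in it (the `flag_ungrounded` function, the threshold value) are invented for this example:

```python
import re

def flag_ungrounded(generated: str, source: str, threshold: float = 0.5) -> list[str]:
    """Return sentences from `generated` whose content words are poorly
    supported by `source`. A toy heuristic, not the Correction tool itself."""
    stopwords = {"the", "a", "an", "is", "are", "was", "were", "of", "in",
                 "on", "to", "and", "it", "that", "this", "with", "for"}
    # Content words that appear anywhere in the grounding document.
    source_words = set(re.findall(r"[a-z]+", source.lower())) - stopwords
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", generated.strip()):
        words = set(re.findall(r"[a-z]+", sentence.lower())) - stopwords
        if not words:
            continue
        # Fraction of the sentence's content words supported by the source.
        support = len(words & source_words) / len(words)
        if support < threshold:
            flagged.append(sentence)
    return flagged

source = "The Azure service checks generated text against a grounding document."
good = "The service checks text against a grounding document."
bad = "The model won a cooking award in Paris last year."
print(flag_ungrounded(good + " " + bad, source))
# The second sentence has no support in the source and gets flagged.
```

A production system would of course use a language model rather than word overlap to judge groundedness, but the flow is the same: isolate claims, score each against the supplied document, and surface the unsupported ones.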
Alongside this real-time remedy for AI hallucinations, Microsoft is also developing another tool called “Evaluations,” which lets an AI model perform risk assessments through a hidden inference layer before it generates content.
The new tools are also designed to keep sensitive information secure and confidential, helping to prevent privacy violations while the AI produces content. With these two tools, Microsoft aims to tackle two of the biggest problems in artificial intelligence.