Salesforce has launched its new chatbot, Einstein Copilot, designed for businesses. According to Salesforce executives, Einstein Copilot carries a lower risk of hallucinations (i.e., generating incorrect or meaningless information) than other AI chatbots, addressing a problem that has long plagued the technology.
Salesforce’s new AI chatbot Einstein Copilot unveiled
Salesforce’s Vice President of Product Marketing, Patrick Stokes, explains that Einstein Copilot differs from other AI chatbots in its use of a company’s own internal data. Before a request reaches the large language models (LLMs), the bot scans the relevant internal company data and supplies it with the query, so the model works from accurate information rather than guessing. This grounding step reduces the risk of generating incorrect answers; a rough sketch of the idea follows below.
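To make the grounding idea concrete, here is a minimal, hypothetical sketch in Python of retrieving internal records first and folding them into the prompt sent to an LLM. The function names (fetch_relevant_records, build_grounded_prompt, call_llm) and the naive keyword matching are illustrative placeholders, not Salesforce’s actual implementation or APIs.

```python
from typing import List


def fetch_relevant_records(query: str, records: List[dict]) -> List[dict]:
    """Naive keyword match standing in for a real retrieval step."""
    terms = set(query.lower().split())
    return [r for r in records if terms & set(r["text"].lower().split())]


def build_grounded_prompt(query: str, records: List[dict]) -> str:
    """Prepend the retrieved company data so the model answers from it."""
    context = "\n".join(f"- {r['text']}" for r in records)
    return (
        "Answer using only the company records below.\n"
        f"Records:\n{context}\n\n"
        f"Question: {query}"
    )


def call_llm(prompt: str) -> str:
    """Placeholder for an actual LLM call."""
    return f"[model response grounded in {prompt.count('- ')} record(s)]"


if __name__ == "__main__":
    crm_records = [
        {"text": "Acme Corp renewal date is 2024-09-01."},
        {"text": "Acme Corp support tier: Premier."},
    ]
    question = "When is the Acme Corp renewal?"
    hits = fetch_relevant_records(question, crm_records)
    print(call_llm(build_grounded_prompt(question, hits)))
```

The point of the pattern is simply that the model is handed verified internal data instead of being left to invent an answer on its own.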
Additionally, Einstein Copilot includes a security layer that protects data shared with LLMs, aiming to prevent company data from being misused by the chatbot. The bot also collects real-time customer feedback to surface weaknesses in the system and features a mechanism for flagging hallucinations.
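As an illustration of what such a security layer might do, the sketch below masks sensitive values before a prompt leaves a company’s systems. The regex-based redaction and the EMAIL/PHONE labels are generic assumptions for the example, not a description of Salesforce’s actual trust layer.

```python
import re

# Patterns for values that should not be sent to an external model verbatim.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}


def mask_sensitive(text: str) -> str:
    """Replace detected sensitive values with placeholder tokens."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text


if __name__ == "__main__":
    prompt = "Follow up with jane.doe@example.com at 555-123-4567 about renewal."
    print(mask_sensitive(prompt))
    # -> Follow up with [EMAIL] at [PHONE] about renewal.
```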
Stokes acknowledges that hallucinations cannot be prevented entirely, but says the company is working to develop transparent and secure technologies that keep them to a minimum. Salesforce CMO Ariel Kelman notes that large language models are inherently designed to dream, which is why hallucinations occur.
Einstein Copilot, then, aims to minimize the risk of hallucinations relative to other AI chatbots. Salesforce acknowledges, however, that given the nature of the underlying technology, completely hallucination-free AI may not be achievable, and that remains a significant reliability and accuracy challenge for the industry.