OpenAI’s voice transcription tool Whisper is widely used in medical centers to translate and transcribe patient interviews. But according to a new report, the tool is prone to producing information that was never in the original recordings. According to ABC News and Engadget, Whisper can occasionally insert phrases into its transcriptions that were never spoken, including incorrect drug names and fabricated comments. So why does the tool behave this way, and what risks does it pose in the medical field? The details are in our story…
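For readers unfamiliar with the tool, the open-source whisper Python package can transcribe (and optionally translate) an audio file in a few lines. The sketch below is purely illustrative: the model size and the file name patient_interview.mp3 are assumptions, not details from the report.

```python
# Minimal sketch: transcribing an audio file with the open-source whisper package.
# The file name and model size are placeholders chosen for illustration.
import whisper

# Load one of the published model sizes ("tiny", "base", "small", "medium", "large").
model = whisper.load_model("base")

# Transcribe the recording; passing task="translate" would instead translate it into English.
result = model.transcribe("patient_interview.mp3")

print(result["text"])
```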
OpenAI’s Whisper tool can fabricate information that was never said!
Whisper’s false transcriptions, known as “hallucinations”, are said to be capable of causing serious problems, especially in high-risk sectors such as healthcare. Although OpenAI has warned against using the tool in such high-risk areas, it remains in widespread use despite these warnings.
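One way practitioners try to catch suspect output is to inspect the per-segment confidence values that the open-source whisper package returns. The sketch below flags segments whose average token log-probability falls below a threshold; the threshold of -1.0 and the file name are arbitrary assumptions for illustration, and low confidence does not reliably catch every hallucination.

```python
# Illustrative sketch: flagging low-confidence segments in a Whisper transcript.
# The threshold of -1.0 and the file name are arbitrary assumptions.
import whisper

model = whisper.load_model("base")
result = model.transcribe("patient_interview.mp3")

for segment in result["segments"]:
    # avg_logprob is the mean log-probability of the decoded tokens;
    # very low values often indicate unreliable text worth reviewing by hand.
    if segment["avg_logprob"] < -1.0:
        print(f"Check manually [{segment['start']:.1f}s-{segment['end']:.1f}s]: {segment['text']}")
```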
For example, one machine learning engineer reported finding incorrect transcriptions in about half of the more than 100 hours of recordings he reviewed, while another developer said he found similar errors in nearly all of the 26,000 transcriptions he analyzed. Researchers suggest that errors on this scale could affect millions of recordings around the world.
What makes the situation even riskier is that OpenAI’s Whisper has been integrated into major cloud platforms such as those of Oracle and Microsoft. These services reach thousands of users worldwide and can spread inaccurate transcriptions just as widely.
Furthermore, studies by academics such as Allison Koenecke and Mona Sloane found that around 40% of the hallucinations Whisper generates could be harmful. In one example, the tool turned an ordinary statement into “he took a big piece of a crucifix and killed people”.
OpenAI says it will review these reports and take the feedback into account in future updates. What do you think? Is artificial intelligence less reliable than we are told? Share your opinions in the comments section below.