Google has designed an AI that can talk to dolphins

Google has designed a new AI model called DolphinGemma to decipher dolphins’ vocal communication. The open-source model emerged as part of a joint effort with the Wild Dolphin Project (WDP), which has been studying dolphins since 1985.
Unlike traditional large language models, DolphinGemma operates on audio rather than text: it takes in sequences of sounds and predicts the sound that comes next, applying the logic of human-language AI to audio data.
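To make the "predict the next sound" idea concrete, here is a minimal, hypothetical sketch. The real model is a large transformer trained on tokenized audio; this toy bigram model merely illustrates the same objective on a made-up stream of "click"/"whistle" tokens standing in for encoded dolphin sounds.

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    """Count, for each token, how often each other token follows it."""
    counts = defaultdict(Counter)
    for cur, nxt in zip(tokens, tokens[1:]):
        counts[cur][nxt] += 1
    return counts

def predict_next(counts, token):
    """Return the most frequent successor of `token`, if any."""
    followers = counts.get(token)
    return followers.most_common(1)[0][0] if followers else None

# Toy token stream standing in for encoded dolphin audio (illustrative only).
sequence = ["click", "click", "click", "whistle",
            "click", "click", "click", "whistle"]
model = train_bigram(sequence)
print(predict_next(model, "click"))  # → click
```

Scaled up, the same objective lets a model learn which sounds tend to follow which, which is what makes pattern discovery in the recordings possible.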
The clicks, whistles, and similar sounds that dolphins make have long intrigued scientists. Research has shown that these sounds are not random, but rather carry specific meanings related to social interactions.
For example, signature whistles that let dolphins recognize one another, and the high-pitched sounds heard during fights, suggest that dolphins may have a complex, structured communication system. Whether this system can be considered a language, however, remains an open question.
To answer this question, Google trained the DolphinGemma model using the vast sound archive that WDP has been collecting for years. This archive contains thousands of hours of recordings of dolphins in their natural environment.
Using Google’s SoundStream technology, DolphinGemma converts these sounds into discrete tokens that the AI can process, allowing it to learn recurring patterns over time. The model’s output could eventually form the basis for generating artificial sounds that other dolphins can understand.
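The tokenization step can be sketched as vector quantization: codecs like SoundStream map each audio frame to the nearest entry in a learned codebook, turning continuous sound into discrete symbols a language model can consume. The codebook and frames below are invented purely for illustration.

```python
# Hypothetical sketch of audio tokenization via nearest-codebook lookup.
# Real codecs learn the codebook from data; these vectors are made up.

def nearest_code(frame, codebook):
    """Index of the codebook vector closest (squared Euclidean) to `frame`."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(codebook)), key=lambda i: dist(frame, codebook[i]))

codebook = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]   # three "audio symbols"
frames = [(0.1, -0.1), (0.9, 0.2), (0.2, 0.8)]    # fake audio frames
tokens = [nearest_code(f, codebook) for f in frames]
print(tokens)  # → [0, 1, 2]
```

Once audio is reduced to token sequences like this, the next-sound prediction described above can run on it directly.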
Google plans to make the DolphinGemma model accessible to all researchers this summer. What do you think about this? Share your thoughts with us in the comments section below.