In a case launched after the suicide of a 14-year-old boy in the US state of Florida, a judge has ruled that technology giant Google and artificial intelligence startup Character.AI must stand trial. The case will also examine the impact of artificial intelligence applications on young users.
Google in trouble over artificial intelligence
Fourteen-year-old Sewell Setzer took his own life in February 2024. His mother, Megan Garcia, claims that the bond her son formed with an artificial intelligence chat application deeply affected his psychology and drove him to suicide.

Garcia states that her son withdrew from his social circle and lost interest in everything because of the mental problems he was experiencing. During this period, Setzer reportedly developed a deep bond with an AI-based chatbot.
Setzer knew the bot was not real, but in his messages he continued to relate to it as if it were human. According to the case file, just before his suicide the teenager sent a message saying "I'm coming home" to a chatbot imitating a "Game of Thrones" character. The bot is said to have acted as both a therapist and a romantic partner.
Judge Anne Conway rejected Google and Character.AI's argument that the content produced by the AI chatbots is protected as free expression, and denied their motion to dismiss the case on those grounds.
The court also found insufficient the argument that Google was not directly responsible for developing the Character.AI application. It ruled that Google could not be dropped from the lawsuit, citing factors such as its licensing of the technology and its hiring of the company's founding team.
So what do you think about this issue? You can share your views with us in the comments section below.