OpenAI has faced a serious crisis over its Sora video generation tool after it was used to create unauthorized deepfake images of actor Bryan Cranston, drawing sharp criticism from Hollywood. In response, OpenAI reached an agreement with the actors’ union SAG-AFTRA and leading talent agencies.
OpenAI Sora deepfake protections strengthened
At the center of the controversy was the fact that the Sora 2 video tool allowed users to copy the voices and likenesses of celebrities like Bryan Cranston without permission. After the Breaking Bad star discovered the situation and contacted his union, OpenAI took swift action. The company apologized for what it called “unwanted creations” and announced that it was strengthening its protections on the platform.
This incident has reignited concerns about the impact of AI on the creative industries. SAG-AFTRA, Cranston, and major talent agencies like CAA and UTA issued a joint statement with OpenAI. This collaboration is seen as a significant step toward protecting the digital identities of artists. OpenAI has also developed new policies that will give artists greater control over the use of their likenesses.
Full support for the NO FAKES Act
Another key outcome of the negotiations was joint backing for the “NO FAKES Act.” Both OpenAI and Hollywood stakeholders expressed their full support for this federal bill, which aims to protect individuals from unauthorized digital replicas of themselves. This development could set a precedent for future agreements between tech companies and content creators.
So, what do you think about AI deepfake technology? Share your thoughts with us in the comments!