OpenAI’s Sora 2 platform is being used to produce manipulative content, particularly content targeting public figures and influencers, bringing the platform’s ethical boundaries and safety measures into the spotlight.
The issue began with videos that depicted figures such as OpenAI CEO Sam Altman, investor Mark Cuban, and YouTuber Jake Paul using “racist rhetoric,” with no real source material behind them. The videos spread rapidly on social media, drawing criticism from users.
How Is the Cameo Feature Abused?
Sora 2 lets users upload short videos of themselves and integrate them into generated videos through the “cameo” feature, which has recently come under scrutiny for abuse. According to an investigation by Copyleaks, users are exploiting the app to reenact a racist incident that occurred on a plane in 2020. The bypass works by substituting phonetically similar words (near-homophones) for terms the filters would block; as the sketch after these examples shows, such substitutions slip past simple keyword matching. For example:
- Altman’s digital likeness is made to say phrases like “I hate knitters” in videos.
- The phrase “neck hurts” used in Jake Paul’s videos carries implications aimed at certain groups.
- While this kind of content initially has limited reach within Sora 2 itself, it finds a wide audience once reposted to other social media platforms; clips have drawn millions of views on TikTok.
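To see why this kind of wordplay defeats keyword filters, consider the minimal sketch below. The blocklist term, the evasion phrase, and the similarity threshold are all hypothetical stand-ins (not the actual slurs involved, and not Sora 2’s real filter), and simple string similarity stands in for a true phonetic algorithm:

```python
import difflib
import re

# Hypothetical blocked term; a stand-in for illustration, not an actual slur.
BLOCKLIST = {"nitwits"}

def naive_filter(text: str) -> bool:
    """Exact word-match blocklist: the kind of filter homophones evade."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    return any(term in words for term in BLOCKLIST)

def similarity_filter(text: str, threshold: float = 0.8) -> bool:
    """Collapse word boundaries, then fuzzy-match sliding windows against
    the blocklist, so multi-word sound-alikes ("knit wits") rejoin and match."""
    squashed = re.sub(r"[^a-z]", "", text.lower())
    for term in BLOCKLIST:
        for i in range(max(0, len(squashed) - len(term) + 1)):
            window = squashed[i:i + len(term)]
            if difflib.SequenceMatcher(None, window, term).ratio() >= threshold:
                return True
    return False

phrase = "those knit wits again"   # hypothetical sound-alike evasion
print(naive_filter(phrase))        # False: exact matching misses it
print(similarity_filter(phrase))   # True: fuzzy matching catches it
```

A production filter would use real phonetic encodings and transcript-level analysis, but the gap is the same: any filter keyed to exact strings invites exactly this kind of substitution.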
Sora 2 is equipped with dedicated filters to suppress overt insults and hate speech. However, users can bypass these filters with minor alterations and wordplay, calling the system’s safety measures seriously into question.
According to experts, technical filters are only the first line of defense; platforms must develop machine-learning-backed verification mechanisms that analyze user behavior.
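As a hedged sketch of what one such behavior-level signal might look like: the threshold and the notion of a filter “near miss” are assumptions for illustration, and a real system would combine many signals with human review:

```python
from collections import defaultdict

class BehaviorTracker:
    """Escalates users who repeatedly probe the content filter.

    A "near miss" here means a submission that almost matched a blocked
    term; repeated near misses suggest deliberate evasion attempts.
    """

    def __init__(self, escalation_threshold: int = 3):
        self.escalation_threshold = escalation_threshold
        self.near_misses = defaultdict(int)

    def record(self, user_id: str, near_miss: bool) -> bool:
        """Record one submission; return True if the user should be
        escalated for model-based or human review."""
        if near_miss:
            self.near_misses[user_id] += 1
        return self.near_misses[user_id] >= self.escalation_threshold

tracker = BehaviorTracker()
for _ in range(3):                       # user keeps testing variants
    flagged = tracker.record("user-123", near_miss=True)
print(flagged)  # True: three near misses in a row trigger escalation
```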
The spread of deepfake content not only damages individual reputations but also erodes public trust. The risk of fueling social polarization rises, especially when public figures are targeted. Such content also raises significant legal questions: using digital copies of celebrities without their permission can lead to serious disputes over personal rights and intellectual property.
Despite some precautions and clarifications, follow-up reports indicate that the cameo feature is still being abused.
In the shadow of this crisis, experts are calling on technology platforms to take on serious responsibilities:
- Strengthening oversight mechanisms: Beyond simply filtering, AI-based content verification systems should be developed.
- User consent and identity verification: Identity and consent checks should be required before celebrity simulations are allowed (a minimal sketch of such a gate follows this list).
- Transparency reporting: Labels on generated content, algorithmic decisions, and violation histories should be shared with the public.
- Legal regulations: Governments and institutions should clarify the legal basis for the controlled use of deepfakes.
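On the consent point above, here is a minimal sketch of what such a gate could look like: the registry, identifiers, and deny-by-default flow are hypothetical assumptions, not a description of Sora 2’s actual cameo controls:

```python
# Maps a verified likeness to the users its owner has authorized.
# Entries here are hypothetical illustrations.
CONSENT_REGISTRY: dict[str, set[str]] = {
    "verified:public-figure-01": {"public-figure-01"},
}

def may_generate(requesting_user: str, likeness_id: str) -> bool:
    """Allow generation only when the likeness owner has granted consent;
    unknown likenesses are denied by default."""
    allowed = CONSENT_REGISTRY.get(likeness_id)
    return allowed is not None and requesting_user in allowed

print(may_generate("random-user", "verified:public-figure-01"))       # False
print(may_generate("public-figure-01", "verified:public-figure-01"))  # True
```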
While Sora 2 pushes the boundaries of AI-generated content, it also poses a test: how should technology, ethics, and security be balanced? The incident renews an old question for the digital age: is seeing still believing? Deployments must not ignore ethical and legal boundaries; otherwise, the biggest casualty will be public trust.