ShiftDelete.Net Global

The Sora 2 scandal grows: Celebrities targeted!


Manipulative content produced on OpenAI’s Sora 2 platform, particularly videos targeting public figures and influencers, has brought the platform’s ethical boundaries and security measures into the spotlight.

The issue began with videos depicting figures such as OpenAI CEO Sam Altman, investor Mark Cuban, and YouTuber Jake Paul using “racist rhetoric,” without any real source material. These videos spread rapidly on social media, drawing criticism from users.

Sora 2 allows users to upload short videos of themselves to the app and integrate them into generated videos using the “cameo” feature. This feature has recently come under scrutiny due to abuse. According to an investigation by Copyleaks, users are employing the app to reconstruct a racist incident that occurred on a plane in 2020, bypassing the platform’s filters with similar-sounding words (homophonic phrases).

Sora 2 is equipped with special filters to suppress overt insults and hate speech. However, users can easily bypass these filters with minor spelling changes and wordplay, calling the system’s security measures into serious question.
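The weakness described above can be illustrated with a minimal sketch. The blocklist, homophone respelling, and filter logic here are all hypothetical, not Sora 2’s actual implementation; the point is only that an exact-match keyword filter cannot catch a similar-sounding respelling of a banned word.

```python
# Minimal sketch of a naive keyword filter (hypothetical blocklist,
# NOT Sora 2's real filter) and how a homophone-style respelling
# slips past exact word matching.

BLOCKLIST = {"slur"}  # hypothetical banned term

def naive_filter(text: str) -> bool:
    """Return True if the text contains an exact blocklisted word."""
    words = text.lower().split()
    return any(word in BLOCKLIST for word in words)

original = "say the slur now"
evasion = "say the sler now"   # similar-sounding respelling

print(naive_filter(original))  # blocked: True
print(naive_filter(evasion))   # passes through: False
```

Because the filter compares exact tokens, any phonetically similar misspelling defeats it; this is why experts argue that keyword lists alone are only a first line of defense.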

According to experts, technical filters are only the first line of defense; platforms must also develop machine-learning-backed verification mechanisms that analyze user behavior.

The spread of deepfake content not only damages individual reputations but also undermines public perception and trust. The risk of fueling social polarization increases, especially when public figures are targeted. Furthermore, the legal implications of such content raise significant questions. Using digital copies of celebrities without their permission can lead to serious legal issues regarding personal rights and intellectual property.

Despite some precautions and clarifications from OpenAI, the cameo feature is still being abused.

In the shadow of this crisis, experts say technology platforms bear serious responsibilities.

While Sora 2 pushes the boundaries of AI-generated content, it also represents a test: how should technology, ethics, and security be balanced? This incident underscores a pressing question of the digital age: is seeing still believing? Deployments must proceed without ignoring ethical and legal boundaries; otherwise, the biggest loss will be public trust.
