OpenAI’s Sora app is climbing fast, but so is the controversy. After racking up over a million downloads within its first week, the AI-driven video platform has already ignited widespread concern about deepfakes, digital identity, and trust in what we see.
Sora app’s surreal videos blur the real and the fake

On the surface, Sora looks like the next viral short-form app. But instead of user-shot footage, every clip is generated by AI. Whether it’s Queen Elizabeth leaping off a pub table or the Predator flipping burgers at McDonald’s, the results are hyperreal, absurd, and often eerily convincing.
Users simply scan their face, record a short voice prompt, and then feed the system a few lines of text. In return, Sora generates polished 10-second videos, complete with voices, effects, and uncanny realism.
Deepfake concerns swirl around the Sora app
What started as entertainment quickly turned disturbing. Within days, AI-generated clips of dead celebrities, including Robin Williams, Martin Luther King Jr., and Tupac, began spreading online. Some were comedic. Others were jarring.
Family members of those depicted are pushing back. Studios are demanding action. Critics say it's not just about taste; it's about the erosion of truth itself.
As deepfake expert Sam Gregory put it, the damage isn’t always in the fakes themselves. It’s in the “fog of doubt” that settles across all media, making it harder to know what’s real and who to trust.
What makes the Sora app different and more dangerous
Sora’s viral appeal comes from more than just visuals. The app encourages users to insert themselves or their friends into videos through a feature called Cameos. Although every clip carries a visible watermark, third-party sites are already offering tools to strip them out.
Here’s what sets Sora apart:
- 10-second hyperreal video generation
- Built-in voice cloning and face capture
- Celebrity impersonation with no upfront restriction
- Encouraged public sharing across platforms
- Watermarks easily removed via external tools
Notably, there's currently no way to upload or share real footage. Everything comes from prompts, which shows just how powerful text-to-video has become.
The Sora app challenges how we define identity online
Beyond copyright violations, the real crisis brewing around the Sora app is about digital identity. When anyone can summon a fake clip of a public figure saying anything, reputations and legacies become fair game. That includes the living and the dead.
OpenAI's opt-out policy for copyright has also added fuel to the fire. Until rights holders request removal, their intellectual property is fair game within Sora's system.
Sora's growth reveals a deeper trust problem
A million downloads in under a week shows the demand is real, but so is the risk. With AI's grip on content creation tightening, the Sora app might be remembered not just for what it could generate, but for what it helped erase: clarity, consent, and trust.
And once that’s gone, no watermark can bring it back.

