US President Donald Trump has signed into law a significant piece of legislation targeting AI-generated deepfake content and private images shared without consent. The law, called the “Take It Down Act,” quickly passed both the House of Representatives and the Senate with broad bipartisan support before entering into force.
Trump administration targets deepfake content
The new law provides legal protection against the online spread of private images shared without individuals’ consent, as well as content created by manipulating those images with artificial intelligence. The growing number of digital harassment and image-abuse cases in recent years has caused serious concern across many segments of society.

The accessibility of artificial intelligence tools and advances in deep learning techniques have made it easier to produce fake videos that are nearly indistinguishable from real ones. Under the new law, both the creators of such content and the platforms hosting it can face sanctions when individuals’ images are used without authorization.
Under the regulation, affected individuals will reportedly be able to apply directly to digital platforms to have their private images removed, and platforms will be required to delete the content within a set period of time.
The “Take It Down Act” targets not only the production of non-consensual content but also its distribution. Under the law, individuals who share such content and websites that host it could face legal liability.
The law’s adoption also places significant responsibilities on social media platforms and content providers. These organizations are expected to respond quickly to user complaints and to identify and remove infringing content.