As deepfake videos become increasingly realistic, detecting fabricated content is becoming a harder problem worldwide. Researchers at Cornell University have developed a new technique to combat it: the method verifies the authenticity of video footage by embedding digital watermarks, invisible to the naked eye, into the light sources that illuminate a scene.
Detecting deepfake content with invisible light
The new method employs a technique called “noise-coded illumination,” which hides an invisible code in small fluctuations of the frequency and brightness of light. Cameras recording the scene capture these fluctuations along with the footage, and the embedded codes can be recovered as time-stamped, low-resolution “code videos.”
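To make the idea concrete, here is a minimal Python sketch of how a pseudorandom noise code might drive imperceptible brightness fluctuations in a light source. Everything in it, the 120 Hz code rate, the roughly 1% modulation depth, and the fixed seed, is an illustrative assumption rather than a detail from the Cornell work.

```python
import numpy as np

# Hypothetical parameters: a pseudorandom +/-1 code drives tiny brightness
# fluctuations around the light's nominal level.
code_rate_hz = 120        # assumed modulation rate, above flicker perception
duration_s = 2.0
nominal_brightness = 1.0
modulation_depth = 0.01   # ~1% fluctuation, far below what the eye notices

rng = np.random.default_rng(42)               # fixed seed, for reproducibility
n_samples = int(code_rate_hz * duration_s)
noise_code = rng.choice([-1.0, 1.0], size=n_samples)  # the secret code sequence

# Brightness signal the coded light actually emits over time.
emitted = nominal_brightness * (1.0 + modulation_depth * noise_code)
```

Any camera filming the scene records `emitted` mixed with the scene's own appearance, which is what makes the code recoverable later.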
These code videos serve as a reference for determining whether footage is genuine: they can expose fake insertions, deleted scenes, and other digital manipulations in suspect content.
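Continuing the sketch, verification could compare the brightness trace recovered from a suspect video against the known code in short windows: spans that were spliced in or replaced no longer correlate with the code. The windowed normalized correlation below, including the hypothetical function name `code_match_scores` and the window length, is an assumed illustration, not the published algorithm.

```python
import numpy as np

def code_match_scores(recovered, code, window=60):
    """Per-window normalized correlation between a recovered brightness
    trace and the known code; genuine spans score near 1, tampered near 0."""
    scores = []
    for start in range(0, len(code) - window + 1, window):
        r = recovered[start:start + window]
        r = r - r.mean()                       # remove the steady light level
        c = code[start:start + window]
        denom = np.linalg.norm(r) * np.linalg.norm(c)
        scores.append(float(r @ c / denom) if denom > 0 else 0.0)
    return scores

# Simulate tampering: splice in a segment shot without the coded light.
rng = np.random.default_rng(42)
code = rng.choice([-1.0, 1.0], size=240)
recovered = 1.0 + 0.01 * code                               # genuine recording
recovered[120:180] = 1.0 + 0.002 * rng.standard_normal(60)  # spliced footage

print(code_match_scores(recovered, code))  # score collapses in the spliced window
```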
The Cornell University team is reported to have placed the digital watermark directly in the light source itself, a significant advantage over existing systems. Traditional digital watermarks embed codes at specific pixel-level locations in an image and are detectable only with specialized software or hardware. The light-based watermark, by contrast, is natively recorded by any device in the environment, from professional cameras to smartphones.
For programmable light sources, such as computer monitors, studio lights, and specialized LEDs, the system can be implemented entirely in software. Ordinary light sources instead require a dedicated chip roughly the size of a postage stamp.
This chip integrates the watermark into the light by creating frequency and brightness changes imperceptible to the human eye. Because each light source produces a unique code, it is possible to use multiple independent codes in the same scene. Using three different codes simultaneously further enhances the detection of forgery attempts.
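To illustrate why independent codes can coexist in one scene, the sketch below (again with assumed seeds and modulation depths) sums three pseudorandom codes into a single brightness signal. Independent random codes are nearly orthogonal, so each light's code can still be verified by correlating against it alone.

```python
import numpy as np

n = 240
# One secret +/-1 code per light source, from independent seeds.
codes = [np.random.default_rng(seed).choice([-1.0, 1.0], size=n)
         for seed in (1, 2, 3)]

# The camera sees the combined tiny fluctuations of all three lights.
scene = 1.0 + sum(0.01 * c for c in codes)
detail = scene - scene.mean()

# Each code correlates strongly with the combined signal, while an
# unrelated code would score near zero, so a forger would have to
# reproduce all three codes at once to escape detection.
for i, c in enumerate(codes):
    score = detail @ c / (np.linalg.norm(detail) * np.linalg.norm(c))
    print(f"light {i}: correlation {score:.2f}")
```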
Field tests have confirmed that the technology works across different skin tones and in certain outdoor conditions. However, the research team notes that this method alone cannot solve the problem of fake content: methods for generating it with artificial intelligence are constantly evolving, and verification technologies must advance alongside them.