Google has made many improvements to child protection and safety in its phone software. Its AI automatically scans users' content and decides whether it is safe. According to a report, Google's AI flagged a parent's photos of his child's groin as child sexual abuse material (CSAM). Here are the details about Google's AI.
Google AI flagged a parent’s accounts for potential abuse
The company closed the father’s accounts and filed a report with the National Center for Missing and Exploited Children (NCMEC). The father said he used his phone to take photos of an infection on his child’s groin. It was during the pandemic, and some doctors’ offices were still closed, which is why a nurse asked him to send images of the wound.
The incident happened in February 2021. After he used his phone to take the photos, the technology company flagged them as child sexual abuse material (CSAM). Two days after taking the photos, he received a notification from Google stating that the company had locked his accounts due to “harmful content” that was “a severe violation of Google’s policies and might be illegal.”
Google stated on its “Fighting abuse on our own platforms and services” page: “We identify and report CSAM with trained specialist teams and cutting-edge technology, including machine learning classifiers and hash-matching technology, which creates a “hash”, or a unique digital fingerprint, for an image or a video so it can be compared with hashes of known CSAM. When we find CSAM, we report it to the National Center for Missing and Exploited Children (NCMEC), which liaises with law enforcement agencies around the world.”
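To illustrate the general idea of hash matching, the sketch below computes a digital fingerprint for a file and checks it against a set of known fingerprints. This is only a simplified assumption of how such a check might look, using an exact cryptographic hash; Google’s actual system relies on proprietary classifiers and perceptual hashing, which can match images even after resizing or re-encoding and is not shown here.

```python
import hashlib

def fingerprint(path: str) -> str:
    """Return a hex digest that uniquely identifies the file's bytes.

    Simplified illustration only: real CSAM-detection systems use
    perceptual hashes, not exact cryptographic hashes like SHA-256.
    """
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def is_known(path: str, known_hashes: set[str]) -> bool:
    """Check whether the file's fingerprint appears in a hypothetical
    database of known hashes."""
    return fingerprint(path) in known_hashes
```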
The father lost access to his emails, contacts, photos, and even his phone number, because he used Google Fi’s mobile service, according to the report. He tried to appeal the decision, but the company refused the request. The police department opened an investigation into the parent in December 2021, and the investigator cleared him of any wrongdoing.