Google scans Gmail and Drive accounts for cartoons depicting child sexual abuse
This type of content is illegal to possess and can be detected by Google’s child sexual abuse material (CSAM) scanning systems, a fact not previously discussed in public.
One of the issues tech companies must tackle is the sharing of child sexual abuse material (CSAM) on their platforms. It’s an ever-evolving problem, with user privacy and the protection of minors held in constant tension.
The search warrant in question was from late 2020 in Kansas. We’re not naming the subject of the warrant, as the state never brought charges forward.
Google’s systems found “digital art or cartoons depicting children engaged in sexually explicit conduct or engaged in sexual intercourse” in the artist’s Google Drive account. That was likely an automated scan, matching “hashes” — digital fingerprints of a file — against a database generated from previously known CSAM.
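At its simplest, the matching step described above is a set-membership check: compute a hash of each file and look it up in a database of known digests. The sketch below uses a plain cryptographic hash and a hypothetical `KNOWN_HASHES` set for illustration; production systems instead use perceptual hashes (such as Microsoft’s PhotoDNA) that still match after resizing or re-encoding, and their actual databases and APIs are not public.

```python
import hashlib

# Hypothetical stand-in for a database of digests of previously
# identified files. Real systems use perceptual hashes, not SHA-256,
# so they can match visually similar (not just byte-identical) files.
KNOWN_HASHES = {
    hashlib.sha256(b"known-file-1").hexdigest(),
    hashlib.sha256(b"known-file-2").hexdigest(),
}

def matches_known_material(file_bytes: bytes) -> bool:
    """Return True if this file's digest appears in the known-hash set."""
    digest = hashlib.sha256(file_bytes).hexdigest()
    return digest in KNOWN_HASHES
```

Because only hashes are compared, the scanning service never needs to store or view the original known files; a match simply flags an upload for human review.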
Like Apple, most big tech companies routinely scan email for CSAM. The problem is easier to tackle in email services, which are unencrypted by design.
The tricky part is when those tech companies need to search encrypted services, like iCloud Photos, for illegal content.
How do they balance user privacy with the need to keep children safe? How are they securing the tools so they can’t be misused? What happens in the case of false positives?
These questions may be too big for any single company to answer. One possible course of action would be for the National Center for Missing & Exploited Children (NCMEC) to form a consortium with the big tech companies, so that everyone is on the same page. Privacy advocates would also need a seat at the table to safeguard users.
- Is Apple’s controversial CSAM photo detection feature dead in the water?
- iOS 15.2 beta blurs nude images for kids using the Messages app
- Your WhatsApp messages aren’t actually private – moderators can read them
- Did you know Apple is already scanning your iCloud Mail for child abuse photos?