Recently, Google helped law enforcement officials arrest a Texas man who was allegedly trying to send an abusive image of a girl. The man, John Henry Skillern, was trying to send the picture to a friend when he was caught by Google's email scanning system. Google reported the incident to the National Center for Missing and Exploited Children, which coordinated with local law enforcement in Houston to make the arrest. The tech giant's efforts have helped prevent child abuse and child pornography crimes in the past as well: Google works in close collaboration with the National Center for Missing and Exploited Children and the Internet Watch Foundation to track down and curb child abuse imagery. The news has been covered by major media outlets such as The Telegraph and The Huffington Post and has brought Google a lot of praise.
The Other Side
But there is another side to the story. While the news has drawn positive comments from child welfare organizations, it has also drawn sharp criticism from privacy and free-internet advocates. People are concerned about their privacy: although the system catches criminals, it also peeks into the personal data of innocent users. As the BBC has pointed out, this scanning is an integral part of Google's infrastructure, and every mail passing through it is scanned. Google uses this information to provide personalised ads and results and to retarget you, which means the same machinery is looking into every innocent user's account. Some web experts have even suggested moving away to other, more privacy-protective services.
How It Works
The privacy concerns sound alarming, but the reality is not that grim if we look closely. So here is a closer look at how this scanning system works.
How Gmail scanning works
The system is designed to scan every mail for patterns. Google already scans for patterns to gather business intelligence about markets, and the same machinery can be pointed at other kinds of patterns. This is where the anti-child-abuse patterns come in: whenever a child abuse image is reported and confirmed, it is hashed to produce a unique signature, which is saved in a database of all known abusive images.
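To picture how such a signature database might be built, here is a minimal Python sketch. The function names (`image_signature`, `build_signature_database`) are invented for illustration, and a plain SHA-256 stands in for whatever hashing scheme Google actually runs, which it has not published.

```python
# Minimal sketch: turning confirmed abusive images into hash signatures.
# Only the signatures are kept in the database, never the images themselves.
import hashlib
from pathlib import Path

def image_signature(path: Path) -> str:
    """Hash the raw bytes of an image file into a fixed-length signature."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def build_signature_database(confirmed_images: list[Path]) -> set[str]:
    """Collect the signatures of all known, confirmed abusive images."""
    return {image_signature(p) for p in confirmed_images}
```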
So whenever you send a mail, here is what happens:
- Google's mail bots scan your mail for patterns, and every image attachment is hashed as well.
- The noteworthy thing here is that your actual image is never used, only its hash. This way the image itself never goes into the system and remains private.
- If the image hash matches a known abusive image, a positive-match flag is set (a code sketch of this matching step follows the list). Every record in the database is a previously known and confirmed abuse image, and a hash is unique to a file, matching only an exactly identical file. So if you are sending a picture of your little Sally or Joe, there is no chance of a false match, because the hash of your picture will be unique and never reported.
- If a positive match is found against an image, a human reviewer takes a final look at it. This is the first point at which your image is exposed to a human. The reviewers are from the National Center for Missing and Exploited Children and the Internet Watch Foundation, organisations actively working for child safety, so those concerned about privacy can breathe a little easier.
- If a reviewer finds an image to be offensive, it is reported to law enforcement for further tracking and arrest. If the image is found to be non-offensive, it is discarded and the matching pattern is reviewed.
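To tie the steps above together, here is a rough sketch of the matching flow. The names are again hypothetical, and `known_signatures` is assumed to be the set built in the earlier snippet.

```python
# Rough sketch of the per-mail matching flow described in the list above.
import hashlib

def scan_attachments(attachments: list[bytes], known_signatures: set[str]) -> list[int]:
    """Return indices of attachments whose hash matches a known signature."""
    flagged = []
    for i, data in enumerate(attachments):
        digest = hashlib.sha256(data).hexdigest()
        if digest in known_signatures:
            # Only a positive match is escalated for human review;
            # everything else passes through unseen.
            flagged.append(i)
    return flagged
```

A mail with no flagged attachments never reaches a human; only a positive match against the database of known images is handed to a reviewer.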
Conclusion
The system may seem intrusive, but it is not as much of a threat as it appears. It does not flag any new image; it only tries to find previously reported ones. Offensive images might therefore go undetected, but it ensures that no false positives are generated. The whole system also works on hashes, so the original pictures are never used. A hash guarantees only uniqueness and cannot be used to recreate the original image, so your images stay safely away from any human eyes inside the system. Humans only see an image after all these checks, and even then the results go only to the organisations responsible for protecting children. That seems an acceptable trade-off for keeping our children safe.
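As a quick illustration of why hashes protect privacy (assuming an exact-match cryptographic hash such as SHA-256, since the actual algorithm is not public): even a one-byte difference produces a completely different signature, and the signature cannot be run backwards to rebuild the image.

```python
import hashlib

original = b"raw bytes of a family photo"
altered = original + b"\x00"  # differs by a single byte

print(hashlib.sha256(original).hexdigest())
print(hashlib.sha256(altered).hexdigest())  # shares nothing recognisable with the first
```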