Google’s video recognition robots are easily deceived, research shows

Google’s ability to use its computer systems to spot extremist videos online has been called into question after researchers managed to easily deceive software that the internet giant uses to identify clips.

The program can be tricked by people with little technical knowledge who insert individual frames into videos, according to researchers at the University of Washington.

They managed to convince Google’s “cloud video intelligence” tool that a video about animals was about a car by flashing an image of a car for a single frame every two seconds. The inserted frames pass too quickly to be seen by the human eye but were enough to fool the software into wrongly classifying the video.
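The attack described above can be sketched in a few lines. The following is an illustrative toy, not the researchers’ actual code: the “frames” are simple placeholder labels, and the function name and parameters are assumptions chosen for the example. It shows how splicing one unrelated frame into every two seconds of footage leaves the clip visually unchanged while skewing any classifier that samples frames at a similar rate.

```python
def insert_adversarial_frames(frames, adversarial_frame, fps=25, interval_seconds=2.0):
    """Return a new frame list with `adversarial_frame` spliced in once
    per `interval_seconds` of footage (a single flashed frame each time)."""
    step = int(fps * interval_seconds)  # e.g. every 50th frame at 25 fps
    doctored = []
    for i, frame in enumerate(frames):
        if i % step == 0:
            doctored.append(adversarial_frame)  # one imperceptible extra frame
        doctored.append(frame)
    return doctored

# Toy demonstration: a 10-second "animal" clip at 25 fps gains
# one "car" frame every two seconds (five insertions in total).
animal_clip = ["animal"] * 250
doctored = insert_adversarial_frames(animal_clip, "car")
```

At 25 frames per second, a single inserted frame is on screen for 40 milliseconds, well below what a viewer would register, yet an annotation system that samples frames periodically may see the inserted image disproportionately often.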

The researchers said the technology’s applications could include “detecting if a video contains child pornography, or terror and racial propaganda”, as well as helping video hosting sites, such as YouTube, identify inappropriate content.

While it is not the same program that YouTube uses to police online videos, experts said the flaws raise fresh concerns about whether computers can stay one step ahead of the humans trying to outwit them when it comes to policing extremist content.

Some of the biggest brands in Britain and the United States have pulled millions of dollars in advertising from YouTube over concerns about it appearing next to terrorist material. This week Google said it would step up its use of artificial intelligence to flag extremist content.

“We clearly showed their system is brittle,” said Radha Poovendran, chair of the University of Washington’s electrical engineering department. “Google must be cheered for trying to use AI to do video understanding. But I do not believe Google can make any promise that would hold water about being able to remove all bad videos.”

The researchers said they were able to fool the software, which was released as a trial to computer developers a few weeks ago, without any knowledge of how the system works.

“The average user wanting to watch a video is not going to be spending time bypassing this,” said Mr Poovendran. “But often those who intend to post illegal videos are very, very tech savvy. They can figure out how to bypass Google’s AI system easily.”

Hossein Hosseini, the lead author of the study, added: “Such vulnerability of the video annotation system seriously undermines its usability in real-world applications.”