Can AI tackle the threat of counterfeit news?

Lawrence Jones

Tuesday 6 March 2018

The involvement (or not) of Russia in the 2016 US presidential election has catapulted the concept of ‘fake news’ into the mainstream. The idea that our news could be manipulated has come as a surprise to many, but things could get more complicated.

Earlier this year, Gartner set out its predictions for the future, and warned that by 2020 unscrupulous tech users would be able to create digital images, videos, documents and sounds that are real enough to convince humans and, in some cases, computers.

The research company claims that increasingly advanced machines will be able to create this content automatically. In the hands of a nefarious individual, or a group acting on behalf of a rogue nation state, it could fundamentally alter how we perceive the information we read and how we use the internet.

The question is: just how will we know the truth?

Detecting the truth

“The first thing to say is that we’re not talking about crude Photoshop editing,” says Andrew Farmer, CEO at AI and app developer MyOxygen.

“We’re already seeing people manipulate or create incredibly realistic content that can confuse even experts,” Farmer says. He believes that counterfeit content will be indistinguishable from the real thing. “The quality of the work produced is as realistic as the genuine article. For most people, it’s indistinguishable from real life.”

Creating lifelike imitations of the world is fine – in fact, it’s the basis of the multibillion-dollar video games industry. It’s how they are used that’s the problem.

Fake news can be generated to help shape (or perhaps misshape) our perception of current events – even elections. When the concept of fake news becomes a political issue, you know things are getting serious.

For businesses, counterfeit reality poses a problem. If consumers can’t trust the internet, then their commercial models could be shaken.

The potential for counterfeit reality to subvert the internet shouldn’t be underestimated. Machines could be used to create realistic product reviews, destroying the trust-based, crowdsourced review models of companies like Amazon, TripAdvisor and others.

In more mundane terms, the ability to mass-generate content that passes current filters could subvert the algorithms used by search engines, fundamentally damaging the faith that users place in the results they return.

The battle for relevance has never been more important. 

Future for AI

“We’re already seeing internet users beginning to question the accuracy of the content they see online,” Farmer says. What’s less clear, if counterfeit reality is such a threat, is who is responsible for tackling it.

“Facebook and Google are distribution channels for some of the fake content,” he says.

The dangers of counterfeit reality pose serious problems for existing tech giants. Discussions about whether Facebook, Google and other news aggregators are publishers continue, with these global corporations seemingly happy to shift their position as it suits them.

Farmer believes that counterfeit reality could challenge the way we view these big companies: “They will need to control it or stand to be less trustworthy brands than they are now.”

With billions of pages of new content created every day, they are under increasing pressure to find new ways of exercising editorial control.

Step forward, artificial intelligence (AI).

Generative adversarial networks (GANs) are AI systems that pit two neural networks against each other: a generator that produces synthetic data and a discriminator that learns to tell the real from the fake. Facebook is investing in the technology, hoping to develop machine learning systems capable of spotting counterfeit news. But the processing power needed and the complexity of the challenge are already posing problems for some of the biggest and best-resourced companies in the world.
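The generator-versus-discriminator idea can be sketched in miniature. The toy example below is purely illustrative, assuming one-dimensional "real" data and simple linear models for both networks; it is not any company's actual detection system, just the adversarial training loop in its simplest form.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from N(4, 0.5) stand in for genuine content.
def real_batch(n):
    return rng.normal(4.0, 0.5, n)

# Generator: maps noise z to a*z + b (parameters it learns).
gen = {"a": 1.0, "b": 0.0}
# Discriminator: logistic model sigmoid(w*x + c), outputting P(real).
disc = {"w": 0.1, "c": 0.0}

def generate(n):
    z = rng.normal(0.0, 1.0, n)
    return gen["a"] * z + gen["b"], z

lr, n = 0.05, 64
for step in range(2000):
    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    x_real = real_batch(n)
    x_fake, _ = generate(n)
    p_real = sigmoid(disc["w"] * x_real + disc["c"])
    p_fake = sigmoid(disc["w"] * x_fake + disc["c"])
    disc["w"] += lr * np.mean((1 - p_real) * x_real - p_fake * x_fake)
    disc["c"] += lr * np.mean((1 - p_real) - p_fake)

    # Generator update: push D(fake) toward 1, i.e. fool the discriminator.
    x_fake, z = generate(n)
    p_fake = sigmoid(disc["w"] * x_fake + disc["c"])
    g = (1 - p_fake) * disc["w"]  # gradient of log D(G(z)) w.r.t. x_fake
    gen["a"] += lr * np.mean(g * z)
    gen["b"] += lr * np.mean(g)

fake_mean = generate(1000)[0].mean()
print(f"generated mean: {fake_mean:.2f} (real data is centred on 4.0)")
```

As the two models take turns, the generator's output drifts toward the real distribution until the discriminator can no longer reliably separate them — the same dynamic, at vastly greater scale and cost, that makes both creating and detecting counterfeit content so hard.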

Far from leading technological progress, however, the internet giants are playing catch-up: the technology to spot counterfeit reality lags behind the ability to create it.

A fake news future?

Fake news entered the dictionary this year and has been a defining feature of our news agenda. The ability for systems to create and distribute counterfeit content is a huge risk for businesses. We have to hope that the world’s leading tech companies are developing solutions advanced enough to deal with this problem. If not, we could be seeing a whole lot more fake news.
