The adage ‘seeing is believing’ no longer applies on the internet, and experts say it’s not going to get better anytime soon.

Key Takeaways

  • New research reveals people can’t separate AI-generated images from real ones.
  • Participants rated AI-generated images as more trustworthy than real ones.
  • Experts believe people should stop trusting anything they see on the internet.

A recent study found that images of faces generated by artificial intelligence (AI) were not only highly photorealistic but also appeared more trustworthy than real faces.

“Our evaluation of the photorealism of AI-synthesized faces indicates that synthesis engines have passed through the uncanny valley and are capable of creating faces that are indistinguishable from, and more trustworthy than, real faces,” the researchers observed.

That Person Doesn’t Exist

The researchers, Dr. Sophie Nightingale from Lancaster University and Professor Hany Farid from the University of California, Berkeley, conducted the experiments after acknowledging the well-publicized threats of deep fakes, ranging from all kinds of online fraud to fueling disinformation campaigns.

“Perhaps most pernicious is the consequence that, in a digital world in which any image or video can be faked, the authenticity of any inconvenient or unwelcome recording can be called into question,” the researchers contended.

They argued that while there has been progress in developing automatic techniques to detect deep-fake content, current techniques are not efficient and accurate enough to keep up with the constant stream of new content being uploaded online. As a result, the duo suggests, it’s up to consumers of online content to sort the real from the fake.

Jelle Wieringa, a security awareness advocate at KnowBe4, agreed. He told Lifewire over email that combating deep fakes themselves is extremely hard to do without specialized technology. “[Mitigating technologies] can be expensive and difficult to implement into real-time processes, often detecting a deepfake only after the fact.”

Working from this premise, the researchers performed a series of experiments to determine whether human participants can distinguish state-of-the-art synthesized faces from real faces. They found that even with training to help participants recognize fakes, accuracy improved only to 59%, up from 48% without training.

This led the researchers to test whether perceptions of trustworthiness could help people identify artificial images. In a third study, they asked participants to rate the trustworthiness of the faces, only to discover that the average rating for synthetic faces was 7.7% higher than the average rating for real faces. The difference might not sound like much, but the researchers say it is statistically significant.

Deeper Fakes

Deep fakes were already a major concern, and this study muddies the waters further by suggesting that such high-quality fake imagery could add a whole new dimension to online scams, for instance, by helping create more convincing fake online profiles.

“The one thing that drives cybersecurity is the trust people have in the technologies, processes, and people that attempt to keep them safe,” shared Wieringa. “Deep fakes, especially when they become photorealistic, undermine this trust and, therefore, the adoption and acceptance of cybersecurity. It can lead to people becoming distrustful of everything they perceive.”

Chris Hauk, consumer privacy champion at Pixel Privacy, agreed. In a brief email exchange, he told Lifewire that photorealistic deep fakes could cause “havoc” online, especially these days when all kinds of accounts can be accessed using photo ID technology.

Corrective Action

Thankfully, Greg Kuhn, director of IoT at Prosegur Security, says there are processes that can prevent such fraudulent authentication. He told Lifewire via email that AI-based credentialing systems match a verified individual against a list, but many have safeguards built in to check for “liveness.”

“These types of systems can require and guide a user to perform certain tasks such as smile or turn your head to the left, then right. These are things that statically generated faces could not perform,” shared Kuhn.

To protect the public from synthetic images, the researchers have proposed guidelines to regulate their creation and distribution. For starters, they suggest incorporating deeply ingrained watermarks into the image- and video-synthesis networks themselves to ensure all synthetic media can be reliably identified.

Until then, Paul Bischoff, privacy advocate and editor of infosec research at Comparitech, says people are on their own. “People will have to learn not to trust faces online, just as we’ve all (hopefully) learned not to trust display names in our emails,” Bischoff told Lifewire via email.