Journalists Aim To Detect Deepfakes

Artificial intelligence is fueling the next phase of misinformation. The new type of synthetic media known as Deepfakes poses major challenges for newsrooms when it comes to verification.

Journalists have an important role in informing the public about the dangers and challenges of artificial intelligence, and reporting on synthetic media is one way to raise that awareness.

How can you detect Deepfakes?

Across the industry, news organizations can draw on several approaches to help authenticate media they suspect has been altered.

“There are technical ways to check if the footage has been altered, such as going through it frame by frame in a video editing program to look for any unnatural shapes and added elements, or doing a reverse image search,” said Natalia V. Osipova, a senior video journalist at The Wall Street Journal.

But the best option is often traditional reporting: “Reach out to the source and the subject directly, and use your editorial judgment.”

Examining the source

If someone has sent in suspicious footage, a good first step is to try to contact the source. How did that person obtain it? Where and when was it filmed? Getting as much information as possible, asking for further proof of the claims, and then verifying that information is key.

If the video is online and the uploader is unknown, other questions are worth exploring: Who allegedly filmed the footage? Who published and shared it, and with whom? Checking the metadata of the video or image with tools like InVID or other metadata viewers can provide answers. 
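Some of this metadata can also be pulled programmatically. Below is a minimal sketch in Python, assuming the widely used exiftool command-line utility is installed and using a placeholder filename; fields such as creation date, device model, and encoding software are often the most revealing. Keep in mind that metadata can be stripped or edited, so it is a clue rather than proof.

    import json
    import subprocess

    # Dump all metadata tags from a video or image as JSON using exiftool.
    # Assumes exiftool is installed and on the PATH; the filename is a placeholder.
    def read_metadata(path: str) -> dict:
        result = subprocess.run(
            ["exiftool", "-json", path],
            capture_output=True, text=True, check=True,
        )
        # exiftool -json returns a list with one dictionary per file
        return json.loads(result.stdout)[0]

    if __name__ == "__main__":
        tags = read_metadata("suspicious_clip.mp4")
        for key in ("CreateDate", "ModifyDate", "Model", "Software", "HandlerDescription"):
            print(key, "->", tags.get(key, "not present"))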

In addition to going through this process internally, the Journal collaborates with content verification organizations such as Storyful and the Associated Press. This is a fast-moving landscape, with new solutions appearing on the market regularly.

For example, new tools including TruePic and Serelay use blockchain to authenticate photos. Regardless of the technology used, the humans in the newsroom are at the center of the process.

“Technology alone will not solve the problem,” said Rajiv Pant, chief technology officer at the Journal. “The way to combat Deepfakes is to augment humans with artificial intelligence tools.”
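The underlying idea behind capture-time authentication is simpler than the products built on it: record a cryptographic fingerprint of the file at the moment of capture, then check later that the file still matches. The sketch below illustrates only that general principle, not the actual methods used by TruePic or Serelay, and the filenames are placeholders.

    import hashlib

    # Compute a SHA-256 fingerprint of a media file. If a fingerprint recorded
    # at capture time (for example, in a trusted registry or ledger) matches the
    # file a newsroom receives later, the file has not been altered since capture.
    def fingerprint(path: str) -> str:
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    recorded_at_capture = fingerprint("photo_as_captured.jpg")
    received_by_newsroom = fingerprint("photo_received_later.jpg")
    print("unchanged since capture" if recorded_at_capture == received_by_newsroom else "file differs")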

Finding older versions of the footage

Deepfakes are often based on footage that is already available online. Reverse image search engines like TinEye or Google Image Search are useful for finding possible older versions of the video and sussing out whether an aspect of it was manipulated.
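To run a reverse image search on a video, it helps to pull out a handful of representative frames first. The sketch below does that with OpenCV and, if an older copy of the footage turns up, compares frames with a perceptual hash; the opencv-python, Pillow, and imagehash packages and the filenames are assumptions, and a large hash distance only suggests, rather than proves, that a frame was altered.

    import cv2  # opencv-python
    import imagehash
    from PIL import Image

    # Save a few evenly spaced frames from the suspect clip so they can be run
    # through a reverse image search such as TinEye or Google Image Search.
    def extract_frames(path: str, count: int = 5) -> list:
        video = cv2.VideoCapture(path)
        total = int(video.get(cv2.CAP_PROP_FRAME_COUNT))
        saved = []
        for i in range(count):
            video.set(cv2.CAP_PROP_POS_FRAMES, i * total // count)
            ok, frame = video.read()
            if ok:
                name = f"frame_{i}.jpg"
                cv2.imwrite(name, frame)
                saved.append(name)
        video.release()
        return saved

    # Perceptual-hash distance between two frames: near zero means visually
    # near-identical, larger values suggest the frame differs from the older copy.
    def frame_distance(frame_a: str, frame_b: str) -> int:
        return imagehash.phash(Image.open(frame_a)) - imagehash.phash(Image.open(frame_b))

    if __name__ == "__main__":
        for name in extract_frames("suspect_clip.mp4"):
            print("saved", name, "for reverse image search")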

Examining the footage

Editing programs like Final Cut enable journalists to slow footage down, zoom in on the image, look at it frame by frame, or pause it repeatedly. This helps reveal obvious glitches: glimmering and fuzziness around the mouth or face, unnatural lighting or movement, and differences in skin tone are telltale signs of a Deepfake.

In addition to these facial details, there might also be small edits in the foreground or background of the footage. Does it seem like an object was inserted into or deleted from a scene in a way that might change the context of the video (e.g. a weapon, a symbol, a person, etc.)? Again, glimmering, fuzziness, and unnatural light can be indicators of faked footage.
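This kind of frame-by-frame inspection does not require a commercial editor. Below is a minimal sketch using OpenCV (the opencv-python package and the filename are assumptions): it enlarges each frame and waits for a key press before advancing, so suspicious regions can be studied one frame at a time.

    import cv2  # opencv-python

    # Step through footage one frame at a time, looking for glimmering, fuzzy
    # edges around the face, or objects that appear to have been added or removed.
    video = cv2.VideoCapture("suspect_clip.mp4")
    while True:
        ok, frame = video.read()
        if not ok:
            break
        # Enlarge the frame so small artifacts are easier to see.
        zoomed = cv2.resize(frame, None, fx=2.0, fy=2.0, interpolation=cv2.INTER_NEAREST)
        cv2.imshow("frame-by-frame inspection", zoomed)
        # Any key advances one frame; press 'q' to quit.
        if cv2.waitKey(0) & 0xFF == ord("q"):
            break
    video.release()
    cv2.destroyAllWindows()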

In the case of audio, watch out for unnatural intonation, irregular breathing, metallic-sounding voices, and obvious edits. These are all hints that the audio may have been generated by artificial intelligence.

However, it’s important to note that image artifacts, glitches, and imperfections can also be introduced by video compression. That’s why it is sometimes hard to conclusively determine whether a video has been forged or not.
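For the audio checks described above, a spectrogram can make abrupt splices and unnaturally flat regions easier to see than listening alone, although, as with video, compression can introduce artifacts of its own. Below is a minimal sketch using the librosa and matplotlib packages (the package choice and filename are assumptions; the plot is only an aid to human judgment).

    import librosa
    import librosa.display
    import matplotlib.pyplot as plt

    # Load the audio track and plot a spectrogram. Hard vertical seams, oddly
    # flat regions, or missing breath noise can hint at splices or synthesis.
    # The filename is a placeholder.
    audio, sample_rate = librosa.load("suspect_audio.wav", sr=None)
    spectrum_db = librosa.amplitude_to_db(abs(librosa.stft(audio)), ref=1.0)

    plt.figure(figsize=(12, 4))
    librosa.display.specshow(spectrum_db, sr=sample_rate, x_axis="time", y_axis="hz")
    plt.colorbar(format="%+2.0f dB")
    plt.title("Spectrogram of suspect audio")
    plt.tight_layout()
    plt.show()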

The democratization of Deepfake creation adds to the challenge

A number of companies are creating technologies, often for innocuous reasons, that could nonetheless eventually be used to create Deepfakes. Some examples:

Object extraction

Adobe is working on Project Cloak, an experimental tool for object removal in video, which makes it easy for users to take people or other details out of the footage. The product could be helpful in motion picture editing. But some experts think that micro-edits like these, the removal of small details in a video, might be even more dangerous than blatant fakes since they are harder to spot.

Weather alteration

There are algorithms for image translation that enable users to alter the weather or time of day in a video, such as one developed by chip manufacturer Nvidia using generative adversarial networks. These algorithms could be used in post-production to match movie scenes shot on days with different weather.

But this could be problematic for newsrooms and others, because verifying footage and narrowing down when it was filmed often relies on examining the time of day, the weather, the position of the sun, and other such indicators for inconsistencies.

Artificial voices

Audio files can also be manipulated automatically: one company, Lyrebird, creates artificial voices based on audio samples of real people. One minute of recorded audio is enough to generate a digital replica that can say any sentence the user types into the system. Applications of this technology include allowing video game developers to add voices to characters.

Off-the-shelf consumer tools that make video and audio manipulation easier may hasten the proliferation of Deepfakes. Some of the companies behind these tools are already considering safeguards to prevent misuse of their tech. 

“We are exploring different directions including crypto-watermarking techniques, new communication protocols, as well as developing partnerships with academia to work on security and authentication,” said Alexandre de Brébisson, CEO and cofounder of Lyrebird.

Deepfakes’ ramifications for society

While these techniques can significantly lower the costs of movie, gaming, and entertainment production, they pose a risk to news media and to society more broadly.

For example, fake videos could place politicians in meetings with foreign agents or even show soldiers committing crimes against civilians. False audio could make it seem like government officials are privately planning attacks against other nations.

“We know Deepfakes and other image manipulations are effective — this kind of fakery can have immediate repercussions,” said Roy Azoulay, founder and CEO of Serelay, a platform that enables publishers to protect their content against forgeries. “The point we need to really watch is when they become cheap, because cheap and effective drives diffusion.”

Lawmakers like Senators Mark Warner and Marco Rubio are already warning of scenarios like these and working on possible strategies to prevent them. What’s more, Deepfakes could be used to deceive news organizations and undermine their trustworthiness.

Publishing an unverified fake video in a news story could stain a newsroom’s reputation and further erode citizens’ trust in media institutions. Another danger for journalists: personal Deepfake attacks that show news professionals in compromising situations or distort the facts, again aimed at discrediting or intimidating them.

As Deepfakes make their way into social media, their spread will likely follow the same pattern as other fake news stories. In an MIT study of the diffusion of false content posted on Twitter between 2006 and 2017, researchers found that “falsehood diffused significantly farther, faster, deeper, and more broadly than truth in all categories of information.”

False stories were 70 percent more likely to be retweeted than the truth and reached 1,500 people six times more quickly than accurate articles.

What’s next

Deepfakes are not going away anytime soon. It’s safe to say that these elaborate forgeries will make verifying media harder, and this challenge could become more difficult over time. 

“We have seen this rapid rise in deep learning technology and the question is: Is that going to keep going, or is it plateauing? What’s going to happen next?” said Hany Farid, a photo-forensics expert who will join the University of California, Berkeley faculty next year. He believes the next 18 months will be critical: “I do think that the issues are coming to a head,” he said, adding that he expects researchers to make advances before the 2020 election cycle.

Despite the current uncertainty, newsrooms can and should follow the evolution of this threat by conducting research, partnering with academic institutions, and training their journalists in how to use new tools.

Source: NiemanLab

