The Distorted Reality Of Algorithms

It was one of January’s most viral videos. Logan Paul, a YouTube celebrity, stumbles across a dead man hanging from a tree. The 22-year-old, who is in a Japanese forest famous as a suicide spot, is visibly shocked, then amused. “Dude, his hands are purple,” he says, before turning to his friends and giggling. “You never stand next to a dead guy?”

Paul, who has 16 million mostly teen subscribers to his YouTube channel, removed the video from YouTube 24 hours later amid a furious backlash. That was still long enough for the footage to receive 6m views and a spot on YouTube’s coveted list of trending videos.

This conveyor belt of clips, which auto-play by default, is designed to seduce us into spending more time on Google’s video broadcasting platform. Where does it lead?

The answer was a slew of videos of men mocking distraught teenage fans of Logan Paul, followed by CCTV footage of children stealing things and, a few clicks later, a video of children having their teeth pulled out with bizarre, homemade contraptions.

I had cleared my history, deleted my cookies, and opened a private browser to be sure YouTube was not personalising recommendations.

This was the algorithm taking me on a journey of its own volition, and it culminated with a video of two boys, aged about five or six, punching and kicking one another.

“I’m going to post it on YouTube,” said a teenage girl, who sounded like she might be an older sibling. “Turn around and punch the heck out of that little boy.” They scuffled for several minutes until one had knocked the other’s tooth out.

There are 1.5 billion YouTube users in the world, which is more than the number of households that own televisions. What they watch is shaped by this algorithm, which skims and ranks billions of videos to identify 20 “up next” clips that are both relevant to a previous video and most likely, statistically speaking, to keep a person hooked on their screen.

Company insiders say the algorithm is the single most important engine of YouTube’s growth. In one of the few public explanations of how the formula works – an academic paper sketching the algorithm’s deep neural networks, which crunch a vast pool of data about videos and the people who watch them – YouTube engineers describe it as one of the “largest scale and most sophisticated industrial recommendation systems in existence”.
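That paper describes a two-stage funnel: a candidate-generation network first narrows billions of videos to a few hundred, then a ranking network orders them by predicted engagement to fill the roughly 20 “up next” slots. The Python sketch below is a minimal illustration of that funnel, not the real system: the field names, scores and toy corpus are hypothetical stand-ins for deep neural networks trained on YouTube’s private viewing logs.

```python
from dataclasses import dataclass

@dataclass
class Video:
    video_id: str
    relevance: float            # similarity to the video just watched (hypothetical)
    p_click: float              # predicted click probability (hypothetical)
    expected_watch_secs: float  # predicted watch time if clicked (hypothetical)

def generate_candidates(corpus: list[Video], k: int = 200) -> list[Video]:
    """Stage 1: cheaply narrow a huge corpus to a few hundred plausible videos."""
    return sorted(corpus, key=lambda v: v.relevance, reverse=True)[:k]

def rank(candidates: list[Video], n_slots: int = 20) -> list[Video]:
    """Stage 2: order candidates by expected engagement to fill the 'up next' slots."""
    return sorted(candidates,
                  key=lambda v: v.p_click * v.expected_watch_secs,
                  reverse=True)[:n_slots]

# Toy corpus; in reality both stages are neural networks over billions of videos.
corpus = [
    Video("a1", relevance=0.9, p_click=0.10, expected_watch_secs=240),
    Video("b2", relevance=0.7, p_click=0.30, expected_watch_secs=600),
    Video("c3", relevance=0.8, p_click=0.05, expected_watch_secs=60),
]
up_next = rank(generate_candidates(corpus))
print([v.video_id for v in up_next])  # b2 wins: it keeps people watching longest
```

Note which video wins in this toy example: not the most relevant one, but the one predicted to keep the viewer watching longest.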

Lately, it has also become one of the most controversial. The algorithm has been found to be promoting theories about the Las Vegas mass shooting and incentivising, through recommendations, a thriving subculture that targets children with disturbing content such as cartoons in which the British children’s character Peppa Pig eats her father or drinks bleach.

Lewd and violent videos have been algorithmically served up to toddlers watching YouTube Kids, a dedicated app for children. One YouTube creator who was banned from making advertising revenue from his strange videos – which featured his children receiving flu shots, removing earwax, and crying over dead pets – told a reporter he had only been responding to the demands of Google’s algorithm. “That’s what got us out there and popular,” he said. “We learned to fuel it and do whatever it took to please the algorithm.”

Google has responded to these controversies in a process akin to Whac-A-Mole: expanding the army of human moderators, removing offensive YouTube videos identified by journalists and de-monetising the channels that create them. But none of those moves has diminished a growing concern that something has gone profoundly awry with the artificial intelligence powering YouTube.

Yet one stone has so far been left largely unturned. Much has been written about Facebook and Twitter’s impact on politics, but in recent months academics have speculated that YouTube’s algorithms may have been instrumental in fuelling disinformation during the 2016 presidential election. “YouTube is the most overlooked story of 2016,” Zeynep Tufekci, a widely respected sociologist and technology critic, tweeted back in October. “Its search and recommender algorithms are misinformation engines.”

If YouTube’s recommendation algorithm really has evolved to promote more disturbing content, how did that happen? And what is it doing to our politics?

‘Like reality, but distorted’

Those are not easy questions to answer. Like all big tech companies, YouTube does not allow us to see the algorithms that shape our lives. They are secret formulas, proprietary software, and only select engineers are entrusted to work on the algorithm. Guillaume Chaslot, a 36-year-old French computer programmer with a PhD in artificial intelligence, was one of those engineers.

During the three years he worked at Google, he was placed for several months with a team of YouTube engineers working on the recommendation system. The experience led him to conclude that the priorities YouTube gives its algorithms are dangerously skewed.

“YouTube is something that looks like reality, but it is distorted to make you spend more time online,” he tells me when we meet in Berkeley, California. “The recommendation algorithm is not optimising for what is truthful, or balanced, or healthy for democracy.”

Chaslot explains that the algorithm never stays the same. It is constantly changing the weight it gives to different signals: the viewing patterns of a user, for example, or the length of time a video is watched before someone clicks away.

The engineers he worked with were responsible for continuously experimenting with new formulas that would increase advertising revenues by extending the amount of time people watched videos. “Watch time was the priority,” he recalls. “Everything else was considered a distraction.”
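In code, the experimentation Chaslot describes amounts to nudging the coefficients of a scoring formula and keeping whichever variant lengthens viewing sessions. The sketch below is purely illustrative; the signal names and weights are hypothetical, and the real formula is proprietary and, as he says, constantly changing.

```python
# Hypothetical signals and weights of the kind Chaslot describes engineers tuning.
WEIGHTS = {
    "predicted_watch_secs": 1.0,       # "watch time was the priority"
    "similarity_to_last_video": 0.3,   # relevance to what was just watched
    "match_to_viewing_patterns": 0.2,  # fit with the user's viewing habits
}

def score(video_signals: dict[str, float]) -> float:
    """Weighted sum of per-video signals; each experiment nudges WEIGHTS and
    keeps the variant that makes people watch for longer."""
    return sum(w * video_signals.get(name, 0.0) for name, w in WEIGHTS.items())
```

With one coefficient dominating, whatever reliably holds attention rises to the top, regardless of whether it is truthful, balanced or healthy.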

Chaslot was fired by Google in 2013, ostensibly over performance issues. He insists he was let go after agitating for change within the company, using his personal time to team up with like-minded engineers to propose changes that could diversify the content people see.

He was especially worried about the distortions that might result from a simplistic focus on showing people videos they found irresistible, creating filter bubbles, for example, that only show people content that reinforces their existing view of the world. Chaslot said none of his proposed fixes were taken up by his managers. “There are many ways YouTube can change its algorithms to suppress fake news and improve the quality and diversity of videos people see,” he says. “I tried to change YouTube from the inside but it didn’t work.”

YouTube told me that its recommendation system had evolved since Chaslot worked at the company and now “goes beyond optimising for watch time”. The company said that in 2016 it started taking into account user “satisfaction”, by using surveys, for example, or looking at how many “likes” a video received, to “ensure people were satisfied with what they were viewing”.
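YouTube has not disclosed how “satisfaction” is traded off against watch time, so any concrete formula is guesswork. The sketch below merely illustrates what folding survey responses and likes into a watch-time score could look like; the proxies, the 50/50 split and the alpha weight are all assumptions.

```python
def blended_score(watch_time_score: float,
                  like_ratio: float,     # e.g. likes / (likes + dislikes); assumed proxy
                  survey_score: float,   # 0-1 from satisfaction surveys; assumed proxy
                  alpha: float = 0.8) -> float:
    """Illustrative only: mix the old watch-time objective with satisfaction
    proxies. YouTube has not disclosed the actual signals or their weights."""
    satisfaction = 0.5 * like_ratio + 0.5 * survey_score
    return alpha * watch_time_score + (1.0 - alpha) * satisfaction
```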

YouTube added that additional changes had been implemented in 2017 to improve the news content surfaced in searches and recommendations and discourage the promotion of videos containing “inflammatory religious or supremacist” content.

It did not say why Google, which acquired YouTube in 2006, waited over a decade to make those changes. Chaslot believes such changes are mostly cosmetic, and have failed to fundamentally alter some disturbing biases that have evolved in the algorithm. In the summer of 2016, he built a computer program to investigate.

The software Chaslot wrote was designed to provide the world’s first window into YouTube’s opaque recommendation engine. The program simulates the behaviour of a user who starts on one video and then follows the chain of recommended videos, tracking data along the way.

It finds videos through a word search, selects a “seed” video to begin with, and records several layers of videos that YouTube recommends in the “up next” column. It does so with no viewing history, ensuring the videos detected are YouTube’s generic recommendations rather than videos personalised to a user.

And it repeats the process thousands of times, accumulating layers of data about YouTube recommendations to build up a picture of the algorithm’s preferences.
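Based purely on that description, a stripped-down reconstruction of such a crawler might look like the sketch below. Everything in it is an assumption: the regex for pulling recommended video ids out of the watch-page HTML is a guess at the page structure, the cookie-less requests stand in for “no viewing history”, and Chaslot’s real program records far more metadata at each layer.

```python
import re
from collections import Counter, deque

import requests

WATCH_URL = "https://www.youtube.com/watch?v={}"
# Assumption: recommended ids appear as "videoId":"..." in the page's embedded
# JSON. YouTube changes its markup often, so this pattern may need updating.
VIDEO_ID = re.compile(r'"videoId":"([\w-]{11})"')

def up_next(video_id: str, n: int = 20) -> list[str]:
    """Fetch one watch page with a fresh, cookie-less request (no history,
    so recommendations stay generic) and return the first n distinct ids."""
    html = requests.get(WATCH_URL.format(video_id)).text
    recs = [v for v in VIDEO_ID.findall(html) if v != video_id]
    return list(dict.fromkeys(recs))[:n]

def crawl(seed: str, depth: int = 4, branch: int = 3) -> Counter:
    """Follow the chain of recommendations breadth-first from a seed video,
    counting how often each video surfaces across the layers."""
    counts, seen = Counter(), {seed}
    queue = deque([(seed, 0)])
    while queue:
        vid, layer = queue.popleft()
        if layer == depth:
            continue
        for rec in up_next(vid)[:branch]:
            counts[rec] += 1
            if rec not in seen:
                seen.add(rec)
                queue.append((rec, layer + 1))
    return counts

# Repeated over thousands of seed videos found via keyword search, the counts
# build up a picture of which videos the algorithm prefers to amplify.
```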

Over the last 18 months, Chaslot has used the program to explore bias in YouTube content promoted during the French, British and German elections, and around topics such as global warming and mass shootings, publishing his findings on his website, Algotransparency.com. Each study finds something different, but the research suggests YouTube systematically amplifies videos that are divisive, sensational and conspiratorial.

“On YouTube, fiction is outperforming reality,” Chaslot says.

He believes one of the most shocking examples was detected by his program in the run-up to the 2016 presidential election. As he observed in a short, largely unnoticed blogpost published after Donald Trump was elected, the impact of YouTube’s recommendation algorithm was not neutral during the presidential race: it was pushing videos that were, in the main, helpful to Trump and damaging to Hillary Clinton.

Trump won the electoral college as a result of 80,000 votes spread across three swing states. There were more than 150 million YouTube users in the US. The videos contained in Chaslot’s database of YouTube-recommended election videos were watched, in total, more than 3bn times before the vote in November 2016.

Even a small bias in the videos would have been significant. “Algorithms that shape the content we see can have a lot of impact, particularly on people who have not made up their mind,” says Luciano Floridi, a professor at the University of Oxford’s Digital Ethics Lab, who studies the ethics of artificial intelligence. “Gentle, implicit, quiet nudging can over time edge us toward choices we might not have otherwise made.”

Promoting Conspiracy Theories

Philip Howard, a professor at the Oxford Internet Institute, has studied how disinformation spread during the election. He questions whether a further factor might have been at play.

“This is important research because it seems to be the first systematic look into how YouTube may have been manipulated,” he says, raising the possibility that the algorithm was gamed as part of the same propaganda campaigns that flourished on Twitter and Facebook.

In testimony to the House intelligence committee, which was investigating Russian interference in the election, Google’s general counsel, Kent Walker, played down the degree to which Moscow’s propaganda efforts infiltrated YouTube.

The company’s internal investigation had only identified 18 YouTube channels and 1,100 videos suspected of being linked to Russia’s disinformation campaign, he told the committee in December, and generally the videos had relatively small numbers of views. He added:

“We believe that the activity we found was limited because of various safeguards that we had in place in advance of the 2016 election, and the fact that Google’s products didn’t lend themselves to the kind of micro-targeting or viral dissemination that these actors seemed to prefer.”

Walker made no mention of YouTube recommendations.

Senator Mark Warner, the ranking Democrat on the intelligence committee, later wrote to the company about the algorithm, which he said seemed “particularly susceptible to foreign influence”.

The senator demanded to know what the company was specifically doing to prevent a “malign incursion” of YouTube’s recommendation system. Walker, in his written reply, offered few specifics, but said YouTube had “a sophisticated spam and security-breach detection system to identify anomalous behavior and malignant incursions”.

Tristan Harris, a former Google insider turned tech whistleblower, likes to describe Facebook as a “living, breathing crime scene for what happened in the 2016 election” that federal investigators have no access to. The same might be said of YouTube.
 
About half the videos Chaslot’s program detected being recommended during the election have now vanished from YouTube, many of them taken down by their creators.

Guardian
