Algorithms, Lies & Social Media

Achieving a more transparent and less manipulative online media may well be the defining political battle of the 21st century. The Internet has evolved into a ubiquitous and indispensable digital environment in which people communicate, seek information, and make decisions.

Despite offering various benefits, online environments are also replete with smart, highly adaptive choice architectures designed primarily to maximise commercial interests, capture and sustain users’ attention, monetise user data, and predict and influence future behaviour.

This online landscape holds multiple negative consequences for society, such as a decline in human autonomy, rising incivility in online conversation, the facilitation of political extremism, and the spread of disinformation. The Internet and the devices people use to access it represent not just new technological achievements but also entirely new artificial environments. Much like people’s physical surroundings, these are environments in which people spend time, communicate with each other, search for information, and make decisions. Yet the digital world is a recent phenomenon:

The Internet is 50 years old, the Web is 30 years old, and the advanced social Web is merely 15 years old.

New adjustments and features are added to these environments on a continuous basis, making it nearly impossible for most users, let alone regulators, to keep abreast of the inner workings of their digital surroundings. There was a time when the Internet was seen as an unequivocal force for social good. It propelled progressive social movements from Black Lives Matter to the Arab Spring; it set information free and flew the flag of democracy worldwide.

But today, democracy appears to be in decline, and the Internet’s role as a driver of that decline is palpably clear. From fake news bots to misinformation to conspiracy theories, social media has commandeered mindsets, evoking the sense of a dark force that must be countered by authoritarian, top-down controls.

This paradox, that the Internet is both saviour and executioner of democracy, can be understood through the lenses of classical economics and cognitive science. In traditional markets, firms manufacture goods, such as cars or toasters, that satisfy consumers’ preferences.

Markets on social media and the Internet are radically different because the platforms exist to sell information about their users to advertisers, thus serving the needs of advertisers rather than consumers.

On social media and parts of the Internet, users “pay” for free services by relinquishing their data to unknown third parties who then expose them to ads targeting their preferences and personal attributes. This economic model has driven online and social media platforms to exploit the cognitive limitations and vulnerabilities of their users. For instance, human attention has adapted to focus on cues that signal emotion or surprise.

Paying attention to emotionally charged or surprising information makes sense in most social and uncertain environments and was critical within the close-knit groups in which early humans lived. In this way, information about the surrounding world and social partners could be quickly updated and acted on. But when the interests of the platform do not align with the interests of the user, these strategies become maladaptive. Platforms know how to capitalise on this: To maximise advertising revenue, they present users with content that captures their attention and keeps them engaged.

For example, YouTube’s recommendations amplify increasingly sensational content with the goal of keeping people’s eyes on the screen. Research by the free software organisation Mozilla confirms that YouTube not only hosts but actively recommends videos that violate its own policies concerning political and medical misinformation, hate speech, and inappropriate content.

YouTube is the second-most visited website in the world, and its algorithm drives 70% of watch time on the platform, an estimated 700 million hours every single day.

For years, that recommendation algorithm has helped spread health misinformation, political disinformation, hateful diatribes, and other regrettable content to people around the globe. YouTube’s enormous influence means these videos reach a huge audience, with a deep impact on countless lives, from radicalisation to polarisation.

There is a common tendency for humans to react more strongly to negative than to positive information. “Negativity biases” in human cognition and behaviour are well documented, but existing research is based on small Anglo-American samples and stimuli that are only tangentially related to our political world.

In pursuit of our attention, digital platforms have become awash with misinformation, particularly the kind that feeds outrage and anger. Following recent revelations by a whistle-blower, we now know that Facebook’s newsfeed curation algorithm gave content eliciting anger five times as much weight as content evoking happiness. It has also been reported that political parties in Europe began running more negative ads because they were favoured by Facebook’s algorithm.
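A minimal sketch of how such reaction-weighted ranking could work in principle is shown below. The five-to-one weighting reflects the reported ratio; the other weights, field names, and example posts are invented for illustration and are not Facebook’s actual code.

```python
# Illustrative sketch of reaction-weighted feed ranking. The 5x weight for
# "angry" reflects the reported ratio; every other value and field name here
# is an assumption made for the example, not Facebook's actual code.

REACTION_WEIGHTS = {"like": 1, "love": 1, "haha": 1, "wow": 1, "sad": 1, "angry": 5}

def engagement_score(post):
    """Sum each reaction count multiplied by its weight."""
    return sum(REACTION_WEIGHTS.get(reaction, 0) * count
               for reaction, count in post["reactions"].items())

def rank_feed(posts):
    """Order posts so the highest-scoring (often the angriest) appear first."""
    return sorted(posts, key=engagement_score, reverse=True)

posts = [
    {"id": "calm-news",    "reactions": {"like": 120, "angry": 2}},
    {"id": "outrage-bait", "reactions": {"like": 10,  "angry": 40}},
]
print([p["id"] for p in rank_feed(posts)])  # ['outrage-bait', 'calm-news']
```

Even in this toy version, a post drawing a modest number of angry reactions outranks one with far more likes, which is precisely the dynamic that rewards outrage.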

Besides selecting information on the basis of its personalised relevance, algorithms can also filter out information considered harmful or illegal, for instance by automatically removing hate speech and violent content. Until recently, however, these filters went only so far. During the pandemic, the same platforms took a more interventionist approach to false information and vowed to remove or limit Covid-19 misinformation and conspiracy theories. Here, too, they relied on automated tools to remove content without human review.
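As a deliberately simplified illustration of what automated removal without human review can look like, consider the sketch below; the blocked phrases and example posts are invented for this example, and real platforms rely on trained machine-learning classifiers rather than a hand-written list.

```python
# Deliberately simplified sketch of automated moderation without human review.
# The blocked phrases and example posts are invented for this illustration;
# real platforms use trained classifiers, not a hand-written list.

BLOCKED_PHRASES = ["miracle cure", "plandemic", "5g causes covid"]

def should_remove(post_text: str) -> bool:
    """Flag a post if it contains any blocked phrase, with no human in the loop."""
    text = post_text.lower()
    return any(phrase in text for phrase in BLOCKED_PHRASES)

posts = [
    "Fact check: why the 'plandemic' video is misleading.",   # reporting on the claim
    "Forget vaccines, this miracle cure is all you need!",    # the claim itself
]
for post in posts:
    print("remove" if should_remove(post) else "keep", "->", post)
```

Note that the first post, a fact-check discussing the conspiracy theory rather than endorsing it, is flagged for removal as well; this is the kind of false positive that automated filtering without human review tends to produce.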

None of this is transparent to consumers, because Internet and social media platforms lack the basic signals that characterise conventional commercial transactions. When people buy a car, they know they are buying a car. If that car fails to meet their expectations, consumers have a clear signal of the damage done because they no longer have money in their pocket. When people use social media, by contrast, they are not always aware of being the passive subjects of commercial transactions between the platform and advertisers involving their own personal data.

Users are also often unaware of how their news feed on social media is curated, and even people who are aware of algorithmic curation tend not to have an accurate understanding of what that involves. A Pew Research survey found that 74% of Americans did not know that Facebook maintained data about their interests and traits.

Most commercial sites, from social media platforms to news outlets to online retailers, collect a wide variety of data about their users’ behaviours. Platforms use this data to deliver content and recommendations based on users’ interests and traits, and to allow advertisers to target relatively precise segments of the public.
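As a rough, hypothetical illustration of how collected traits translate into ad targeting, the sketch below matches a single user profile against advertiser-defined segments; the profile, the segments, and the matching rule are all invented for the example, and real ad platforms use far richer data and probabilistic models.

```python
# Invented example of matching one user profile to advertiser-defined segments.
# The profile, segments, and matching rule are illustrative assumptions only;
# real ad platforms use far richer data and probabilistic models.

user_profile = {
    "age": 34,
    "interests": {"cycling", "politics", "home brewing"},
}

ad_segments = [
    {"name": "outdoor-gear-buyers", "interests": {"cycling", "hiking"},      "min_age": 18},
    {"name": "political-donors",    "interests": {"politics"},               "min_age": 25},
    {"name": "luxury-travel",       "interests": {"first-class", "resorts"}, "min_age": 30},
]

def matching_segments(profile, segments):
    """Return segments whose interests overlap the user's and whose age floor is met."""
    return [s["name"] for s in segments
            if profile["age"] >= s["min_age"] and profile["interests"] & s["interests"]]

print(matching_segments(user_profile, ad_segments))
# ['outdoor-gear-buyers', 'political-donors']
```

The point is not this particular rule but the pipeline it sketches: behavioural data becomes a profile, the profile becomes segment memberships, and those memberships are what advertisers pay to reach.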

Most users are unaware that the information they consume and produce is curated by algorithms. And hardly anyone understands that algorithms will present them with information that is curated to provoke outrage or anger, attributes that fit hand in glove with political misinformation. People cannot be held responsible for their lack of awareness. They were neither consulted on the design of online architectures nor considered as partners in the construction of the rules of online governance.

Several legislative proposals in Europe suggest a way forward, but it remains to be seen whether any of these laws will be passed. In the US there is considerable public and political scepticism about regulations and about governments stepping in to regulate social media content in particular. This scepticism is at least partially justified because paternalistic interventions may, if done improperly, result in censorship.

In March 2022, the Russian parliament approved jail terms of up to 15 years for sharing “fake” information about the war against Ukraine, meaning anything that contradicts the official government position, causing many foreign and local journalists and news organisations to limit their coverage of the invasion or to withdraw from the country entirely.

In liberal democracies, regulations must not only be proportionate to the threat of harmful misinformation but also respectful of fundamental human rights. Fears of authoritarian government control must be weighed against the dangers of the status quo.

Achieving a more transparent and less manipulative media may well be the defining political battle of the 21st century.

Sources: Nieman Lab | Mozilla | Pew Research | Freedom House | Sage | PNAS | Ahmed Al-Rawi / ResearchGate | Emerald Insight
