Social Media’s Troublesome Influence On Politics
Almost 50% of Europeans use social media nearly every day. They use it to stay informed, for entertainment and shopping, and to keep in contact with friends. These technology platforms have changed the way we experience politics, by shaping engagement with political issues and enabling minority voices to be heard.
The growth of the internet has amplified fake news and allowed unreliable information to be spread easily. This can change perspectives on political discussions and decisions.
Online users are generally unfamiliar with what data they produce and provide to others, as well as how that data is collected and stored, when they perform basic tasks online. The private sector companies running the online services that we use have become very adept at capturing and keeping that attention, to the extent that our political views and actions can be shaped without us realising what is behind that influence.
- YouTube claims that their video recommender algorithm, which automatically selects videos it thinks a user will be interested in, is responsible for 70% of viewing time on the site. There is also evidence that YouTube's recommendations are drawing viewers into increasingly extremist content.
- Facebook's algorithm, analysing only 300 likes, can predict a user's personality more accurately than their own spouse can. This gives rise to concerns over "micro-targeting": highly personalised advertisements directed at users based on their individual personalities. If used politically, micro-targeting has considerable potential to undermine democratic discourse—a foundation of democratic choice.
- Social media platforms use several behavioural techniques to encourage people to constantly engage and share, with settings and options that make it far more complicated to leave a platform than to sign up to one. These designs are sometimes called "choice architectures".
- There is also "algorithmic content curation". The algorithms that sort through and select the information we see online are so complex that even their developers have a hard time explaining them. This raises obvious problems for transparency and accountability and is especially problematic because these algorithms can encourage polarised discourse or stop us from receiving certain information.
On platforms like Twitter, Reddit, and Facebook, algorithms prioritise content that has, or is expected to have, a high level of engagement. Behavioural science shows that people have a predisposition to orient towards negative news. The risk is overexposure to polarising and controversial content and underexposure to less emotive, but more informative, content.
When coupled with algorithms that promote content with a high level of engagement, online platforms can easily amplify the reach of false and misleading information. This is particularly concerning when false and misleading information has the potential to set the political agenda, incentivise extremism and ultimately lead to a "post truth" world in which facts have less influence in shaping public opinion than emotion and personal belief.
Social media poses a risk to the fundamental rights to data protection and privacy of users, and even of non-users—a risk that extends far beyond what individuals explicitly share with social media sites, because of how much can be inferred from users' activity.
Ensuring online privacy preserves three core components of democratically empowered voters: freedom of association, truth-finding and opportunities to discover new perspectives. Effective privacy online means a strengthened democracy offline.
Two core attributes from the attention economy and human psychology create the perfect conditions for the spread of misinformation: algorithms that promote attractive, engaging content and people’s strong predisposition to orient towards negative news, as most “fake news” tends to evoke negative emotions such as fear, anger and outrage.
The shape and spread of misinformation is governed by social media network structures; they can give rise to significant distortions in perceived social signals that in turn can affect the entrenchment of attitudes.
There are asymmetries in how false or misleading content and genuine content spread online, with misinformation arguably spreading faster and further than true information. Some of this asymmetry is driven by emotional content and differing levels of novelty. Related to this, the interpretation and classification of misleading content often turns on subtle issues of intent and context that are difficult for third parties - especially algorithms - to ascertain, making it difficult to distinguish legitimate political speech from illegitimate content.
Current social media platform architectures are not primarily designed for democratic discourse, yet they are heavily used for political purposes and debates. The platforms may, for example, provide social signals that lead to misperceptions about relative group sizes. This has consequences for social movements, whose members can come to believe that their ideas have broader penetration than they actually do.
European Commission