Wanted: Access To Social Media Data
Within days of Russia’s recent invasion of Ukraine, several social media companies effectively reduced the circulation of Russian state-backed media and anti-Ukrainian propaganda. Facebook said that it had closed down about 40 accounts, part of a larger network that had already spread across Facebook, Instagram, Twitter, YouTube, Telegram, and Russian social media.
“They ran websites posing as independent news entities and created fake personas across social media platforms including Facebook, Instagram, Twitter, YouTube, Telegram and also Russian Odnoklassniki and VK,” says a post from Facebook’s parent company, Meta. “We’re recommending that people in Ukraine and Russia take steps to strengthen the security of their online accounts to protect themselves from being targeted by threat actors.”
The accounts being warned about used fake identities, with profile pictures most likely generated with Artificial Intelligence (AI) deepfake tools, posing as news editors, engineers, and scientists in Kyiv. The perpetrators behind the network also created fake news websites that portrayed Ukraine as a failed state betrayed by the West. Disinformation campaigns like this have become pervasive in the vast realm of social media.
Because outsiders don’t get access to the inner workings of the handful of companies that run the digital world, the details of where information originates, how it spreads, and how it affects the real world are hard to know about, let alone quantify.
One of the main barriers to greater access is corporate policies aimed at protecting users’ privacy. Social media users are reasonably concerned that outsiders might get their hands on sensitive information and use it for theft or fraud. Users expect information shared in private accounts to stay private, for very good reasons.
Digital Social Behaviour
Human social behaviour has rapidly shifted to digital media services, whether Gmail for email, Skype for phone calls, Twitter and Facebook for micro-blogging, or WhatsApp and SMS for private messaging. This digitalisation of social life offers researchers an unprecedented world of data with which to study human life and social systems. However, accessing this data has become increasingly difficult.
Current regulations were set up to protect the privacy of people’s data, not with the idea of facilitating research that will inform society about the impact of social media.
More and better research depends on social media companies creating a window for outsiders to peer inside.
But for years, Facebook has created hurdles for researchers, journalists and civil society groups attempting to access data via more traditional web-scraping methods and the company’s application programming interfaces.
Facebook has blocked investigative tools used by journalists and researchers to track ad targeting by disabling the functionality of browser plug-ins, disallowed the sharing of data-sets amongst researchers, and rolled back access for research projects.
Researchers can only access what Facebook wants them to see, and until policies exist for regulated platform data transparency, researchers will have to scrape, access, preserve, and share data independently as best they can. It is up to researchers themselves to challenge Facebook’s unhelpful transparency efforts.
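Independent preservation of platform data often starts from pages researchers have already saved locally. The sketch below, using only Python’s standard library, shows one minimal way to extract post text from a saved HTML file and archive it in a shareable JSON format; the `div class="post"` markup and file name are hypothetical, and a real platform page would need site-specific selectors plus ethics and consent review.

```python
# Minimal sketch: extract visible post text from a locally saved HTML page
# using only Python's standard library. The <div class="post"> markup is
# hypothetical; real platform pages need site-specific selectors.
from html.parser import HTMLParser
import json

class PostTextExtractor(HTMLParser):
    """Collects the text inside <div class="post"> elements."""
    def __init__(self):
        super().__init__()
        self.depth = 0      # nesting depth inside a matching div
        self.posts = []

    def handle_starttag(self, tag, attrs):
        if tag == "div":
            if self.depth > 0:
                self.depth += 1          # nested div inside a post
            elif ("class", "post") in attrs:
                self.depth = 1           # entering a new post
                self.posts.append("")

    def handle_endtag(self, tag):
        if tag == "div" and self.depth > 0:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth > 0:
            self.posts[-1] += data.strip()

# A stand-in for a page a researcher saved to disk.
saved_page = '<html><body><div class="post">Example post text</div></body></html>'
parser = PostTextExtractor()
parser.feed(saved_page)

# Preserve what was scraped in a documented, shareable format.
archive = json.dumps({"source": "saved_page.html", "posts": parser.posts})
print(parser.posts)
```

The point of the JSON step is the preservation-and-sharing part of the workflow: once access is revoked, a documented archive is all other researchers can verify against.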
Right now, many machine learning algorithms are being built to do what would traditionally be performed by a human in many analytical approaches, both quantitative and qualitative.
It is important to understand that whilst the mention of ‘machine learning’ and ‘algorithms’ may hint at quantitative techniques that can be scientifically verified, all these algorithms are doing is replacing the work a human would do; the analytical output is still qualitative. The techniques used and the processes taken to apply them must be considered separately to be usefully understood.