Using Social Media To Track The Pandemic
In the last few years, tracking systems that harvest web data to identify trends, calculate predictions, and warn about potential epidemic outbreaks have proliferated.
These systems integrate crowdsourced data and digital traces, collecting information from a variety of online sources, and they promise to change the way governments, institutions, and individuals understand and respond to health concerns.
Google believed it could use algorithms to track flu epidemics. People with flu would search for flu-related information, it reasoned, giving the tech giant near-instant knowledge of the disease’s prevalence.
Google Flu Trends (GFT) would merge this information with official flu-tracking data to create models that could predict the disease’s trajectory weeks before governments’ own estimates.
- GFT was Google’s flagship syndromic surveillance system, specialising in ‘real-time’ tracking of outbreaks of influenza.
- GFT mined massive amounts of data about online search behavior to extract patterns and anticipate the future of viral activity. But it did a poor job, and Google shut the system down in 2015.
- The Google Flu Trends service was launched in 2008 to track changes in the volume of online search queries related to flu-like symptoms.
While it ran, the trend data produced by the service showed a consistent relationship with the actual number of flu reports collected by the US Centers for Disease Control and Prevention (CDC), often identifying increases in flu cases weeks in advance of CDC records. Google Flu Trends was not, however, an early epidemic detection system. It was designed as a baseline indicator of the trend, or changes, in the number of disease cases.
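In outline, the approach is easy to sketch. The toy model below - a minimal reconstruction on synthetic data, not Google’s actual algorithm - regresses official flu rates on search volume, then ‘nowcasts’ the current rate from search data alone:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic weekly data: a CDC-style flu rate with a seasonal cycle,
# and flu-related search volume that loosely tracks it.
weeks = np.arange(104)
ili_rate = 2.0 + 1.5 * np.sin(2 * np.pi * weeks / 52)
search_volume = 10 * ili_rate + rng.normal(0, 2, size=weeks.size)

# Calibrate search volume against the official figures on history...
model = LinearRegression().fit(search_volume[:80, None], ili_rate[:80])

# ...then estimate recent weeks from search data alone, which is
# available long before the official reports are compiled.
nowcast = model.predict(search_volume[80:, None])
print("mean absolute nowcast error:", np.abs(nowcast - ili_rate[80:]).mean())
```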
But after running the project for years, Google quietly abandoned it in 2015. It had failed spectacularly: in 2013, for instance, it overestimated the peak of the flu season by 140 per cent.
According to the German psychologist Gerd Gigerenzer, this is a good example of the limitations of using algorithms to surveil and study society. The 74-year-old has just written a book on the subject, How to Stay Smart in a Smart World. He thinks humans need to remain in charge in a world increasingly filled with artificial intelligence that tries to replicate human thinking.
As director of the Harding Center for Risk Literacy at the University of Potsdam and former director of the Center for Adaptive Behavior and Cognition at the Max Planck Institute for Human Development, Gigerenzer is considered one of the world’s most eminent psychologists.
‘Google Flu Trends completely flopped for the simple reason that uncertainty exists, the future is not like the past,’ Gigerenzer says.
‘When using big data, you are fine-tuning the past and you’re hopelessly lost if the future is different. In this case, the uncertainty comes from the behaviour of viruses: they are not really predictable, they mutate. And the behaviour of humans is unpredictable.’
In other words, AI can’t predict ‘Black Swan’ events, major surprises that aren’t anticipated in modelling and plans.
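Gigerenzer’s point about fine-tuning the past can be made concrete with a toy experiment. In the hypothetical sketch below, a media scare doubles flu-related searching without any change in actual illness; a model calibrated on the old regime then overestimates by a factor of two, an echo of GFT’s 140 per cent miss (the numbers are invented for illustration):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Historical regime: search volume is roughly ten times the flu rate.
flu_past = rng.uniform(1, 5, 200)
searches_past = 10 * flu_past + rng.normal(0, 2, 200)
model = LinearRegression().fit(searches_past[:, None], flu_past)

# Regime change: a media scare doubles searching at every level of
# actual illness - the relationship the model learned no longer holds.
flu_future = rng.uniform(1, 5, 50)
searches_future = 20 * flu_future + rng.normal(0, 2, 50)

pred = model.predict(searches_future[:, None])
print("average overestimate factor:", (pred / flu_future).mean())  # ~2x
```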
Gigerenzer worries that important decisions are being handed over to AI, despite its clear limitations. He’s concerned, too, that the technology creates huge surveillance powers. ‘I worry about the people behind the technology,’ he says over Zoom from his office in Germany. ‘The government surveillance and the commercial surveillance.’
What scares him is our own passivity. ‘We should be worried about people who aren’t getting smart while technology gets smarter,’ Gigerenzer says. ‘The more sophisticated algorithms become, the more sophisticated people need to become… The algorithms have become better over the last ten years by sheer computational power, by video capabilities and other things. On its own, that’s great.
‘But the algorithms have a dangerous double capability: they make our lives easier and more convenient, but they allow us to be surveilled 24/7. We need to have a certain awareness and stop that, otherwise we will end up like China.’
The pandemic has not allayed Gigerenzer’s concerns. ‘The coronavirus crisis has been used by the Chinese to explain to everyone in their country how superior an autocratic system is to democracy,’ he says. One controversial adoption of algorithms during the pandemic was for Britain’s school exam system. GCSEs and A-levels were cancelled, and grades dished out by an algorithm. The results were wildly unfair. ‘It was a very bad idea,’ says Gigerenzer.
‘How can you predict how a pupil would have been graded? There are commercial companies that develop black box algorithms - algorithms that are not transparent - which nobody understands, including the teachers and the educational administrators, and whose validity was not independently checked.
‘Yet they have had an immense influence on young people’s lives; they should be banned. The algorithms used for pupil grades were probably very simple ones, but they were secret... They were probably running on some kind of linear equation: they take the old results from the school and the area and a few other things. But they also intervene in our lives, just like surveillance.’
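For illustration only, the kind of simple linear equation Gigerenzer describes might look like the sketch below. The features and weights are invented, not Ofqual’s actual model; the point is how little of the prediction depends on the individual pupil:

```python
# Hypothetical grade predictor - invented features and weights,
# NOT the real Ofqual model, just the shape Gigerenzer describes.
school_mean_last_3_years = 5.8   # school's average grade in recent years
area_deprivation_index = 0.4     # 0 (affluent) .. 1 (deprived)
teacher_assessed_grade = 7.0     # the teacher's own estimate

# The pupil's own work counts for little next to where they go to school.
predicted_grade = (0.6 * school_mean_last_3_years
                   - 1.0 * area_deprivation_index
                   + 0.3 * teacher_assessed_grade)
print(round(predicted_grade, 1))  # 5.2 - dragged well below the teacher's 7.0
```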
Gigerenzer believes some of the most nefarious forms of AI are the algorithms deployed by the social media giants, dubbed ‘inhuman intelligence’ by the historian Niall Ferguson. Facebook, Instagram and YouTube use algorithms designed to maximise the amount of time people spend on their sites.
Gigerenzer believes he has a solution to the toxicity of social media: we should pay for it. ‘I have made a simple calculation of what it would cost to reimburse the entire Meta or the Facebook group for their advertising earnings, and it’s about £2 per month per person. That is all you need to pay for freedom and for the companies to lose their surveillance capability.’
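The back-of-envelope arithmetic is easy to reproduce. The figures below are rough, publicly reported 2021 numbers rather than Gigerenzer’s own inputs, but they land in the same ballpark:

```python
# Approximate 2021 figures (assumptions, not Gigerenzer's exact inputs).
annual_ad_revenue_usd = 115e9    # Meta's advertising revenue
monthly_active_users = 2.9e9     # Meta family monthly active users
usd_to_gbp = 0.75                # rough exchange rate

per_user_per_month = annual_ad_revenue_usd / monthly_active_users / 12 * usd_to_gbp
print(f"~£{per_user_per_month:.2f} per user per month")  # ~£2.48
```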
There are reportedly more than 2.2 million ‘smart homes’ in Britain, with two or more ‘smart’ devices such as fridges, coffee machines and TVs that are networked through a central hub (a smart speaker, control panel or app); and 57 per cent of British homes (15 million) contain at least one smart device. Smart homes leave people vulnerable to blackmail: hackers can use ransomware to tap into appliances, including Alexa devices and webcams, to record and blackmail victims.
But in a survey conducted by Gigerenzer, only one out of every seven people was aware that a smart TV can record what you say. Samsung isn’t even shy about its technological capabilities. According to its privacy policy, users should ‘be aware that if your spoken words include personal or other sensitive information, that information will be among the data captured and transmitted to a third party’.
The use of AI for mass surveillance is more obvious, and no less alarming. AI surveillance technology is now used by at least 75 governments. Of these, 63 use Chinese technology, the majority of which is provided by Huawei.
AI surveillance systems work by analysing live video footage to detect unusual behaviour that might be missed by the human eye. Such capabilities are equally useful to autocratic regimes: mass surveillance systems of this kind are already deployed in major Chinese cities.
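The basic building block is simpler than it sounds. The sketch below uses OpenCV’s off-the-shelf background subtraction to flag frames with unusual amounts of motion; real systems layer object tracking and behaviour classification on top of this, and the file name and threshold here are placeholders:

```python
import cv2

capture = cv2.VideoCapture("camera_feed.mp4")  # placeholder video source
subtractor = cv2.createBackgroundSubtractorMOG2()
THRESHOLD = 0.05  # fraction of 'moving' pixels that counts as unusual

frame_index = 0
while True:
    ok, frame = capture.read()
    if not ok:
        break
    mask = subtractor.apply(frame)   # non-zero where motion is detected
    moving_fraction = (mask > 0).mean()
    if moving_fraction > THRESHOLD:
        print(f"frame {frame_index}: unusual activity ({moving_fraction:.1%})")
    frame_index += 1
capture.release()
```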