GCHQ Deploys AI To Stop Human Trafficking & Child Sex Abuse
GCHQ is ready to deploy Artificial Intelligence (AI) tools that can quickly analyse masses of complex data in the fight against child sexual abuse, human trafficking and a range of other increasingly sophisticated criminal activities.
The British spy agency's new policy is described in a paper entitled Ethics of AI: Pioneering A New National Security, which explains that GCHQ believes it can use AI to help expose disinformation campaigns by adversary nations trying to undermine democracy. GCHQ says hostile countries are already using AI to automate the production of 'deepfake' videos and audio recordings to influence public opinion.
More than 69 million child sexual exploitation videos and images were reported in 2019. A large number of these images were found being shared via online platforms, while the rest were exchanged via the dark web or encrypted communications. AI tools could help the organisation automate the scanning of chat rooms for evidence of grooming in order to prevent child sexual abuse; given the massive quantity of data involved, human analysts would struggle to identify such material unaided.
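As a purely illustrative sketch of the kind of triage such tools could perform (not a description of GCHQ's actual systems), the Python snippet below trains a toy text classifier to flag chat messages for human review. The example messages, labels and review threshold are hypothetical placeholders.

```python
# Illustrative sketch only: a simple text classifier for flagging chat messages
# for human review. This is NOT GCHQ's system; the training data, labels and
# threshold below are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples (1 = flag for review, 0 = benign).
messages = [
    "what school do you go to? don't tell your parents we talk",
    "keep this our secret, I can send you credit for the game",
    "anyone up for the match on saturday?",
    "homework help thread: post your questions here",
]
labels = [1, 1, 0, 0]

# TF-IDF features feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(messages, labels)

# Score new messages and surface only the highest-risk ones to analysts.
new_messages = ["this stays between us, ok?", "see you at training tonight"]
scores = model.predict_proba(new_messages)[:, 1]
for text, score in zip(new_messages, scores):
    if score > 0.5:  # hypothetical review threshold
        print(f"flag for human review ({score:.2f}): {text}")
```

The point of such a pipeline is prioritisation: the model does not decide anything on its own, it simply surfaces a small fraction of the data for human analysts who could never read it all.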
By processing massive quantities of data, AI tools could also help to map the criminal networks that traffic drugs, weapons and people while concealing their activities behind encryption techniques and crypto-currencies.
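Network mapping of this kind is commonly framed as graph analysis. The sketch below uses the open-source networkx library with a hypothetical edge list to show how clusters of linked accounts and highly connected brokers might be surfaced; it illustrates the general technique, not any agency tooling.

```python
# Illustrative sketch only: mapping a network from pairwise interaction records
# (e.g. transactions or communications metadata). The edge list below is
# hypothetical; real analysis would run over vastly larger datasets.
import networkx as nx

# Hypothetical records: (account_a, account_b) pairs observed interacting.
edges = [
    ("acct_01", "acct_02"), ("acct_02", "acct_03"), ("acct_02", "acct_04"),
    ("acct_05", "acct_06"), ("acct_04", "acct_07"), ("acct_02", "acct_07"),
]

graph = nx.Graph(edges)

# Connected components separate distinct clusters of linked accounts.
for component in nx.connected_components(graph):
    print("cluster:", sorted(component))

# Degree centrality highlights accounts that broker many connections,
# a common starting point when looking for facilitators in a network.
central = sorted(nx.degree_centrality(graph).items(), key=lambda kv: -kv[1])
print("most connected:", central[:3])
```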
In recent years, several big technology companies have introduced measures to fight online child abuse. Both YouTube and Facebook have put mechanisms in place to tag and trace videos and images violating their standards. Facebook's algorithms for flagging objectionable images and videos are also available on GitHub. Microsoft has also launched a tool to help people review chat-based conversations and detect online grooming.
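Tagging and tracing known abuse imagery typically relies on perceptual hashing, where visually similar images produce similar fingerprints that can be compared without storing the images themselves. The sketch below implements a very simple average hash for illustration only; production systems such as Facebook's open-sourced PDQ are considerably more robust.

```python
# Illustrative sketch only: a simple perceptual "average hash" for detecting
# near-duplicate images. Production systems (e.g. Facebook's open-sourced PDQ)
# are far more robust; this just shows the idea of hash-based matching.
import numpy as np
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Downscale to size x size greyscale, then set one bit per pixel that is
    brighter than the mean. Visually similar images yield similar bit patterns."""
    pixels = np.asarray(Image.open(path).convert("L").resize((size, size)), dtype=float)
    bits = (pixels > pixels.mean()).flatten()
    return int("".join("1" if b else "0" for b in bits), 2)

def hamming_distance(h1: int, h2: int) -> int:
    """Number of differing bits; small distances indicate likely matches."""
    return bin(h1 ^ h2).count("1")

# Hypothetical usage: compare an uploaded image against a list of known hashes.
# known_hashes = [average_hash(p) for p in known_image_paths]
# if any(hamming_distance(average_hash("upload.jpg"), h) <= 10 for h in known_hashes):
#     ...flag the upload for review...
```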
The paper considers AI transparency and explains how GCHQ will ensure that AI tools are used in a fair and transparent way, by applying existing tests of necessity and proportionality, using an AI ethical code of practice, and employing more diverse talent to help govern AI use.
The spy agency sets out examples of how it will use the technology, including:
- Fact-checking and detecting deepfake media to tackle foreign state disinformation.
- Mapping international networks that enable human, drugs and weapons trafficking.
- Analysing chat rooms for evidence of grooming to prevent child sexual abuse.
- Enabling the National Cyber Security Centre to analyse activity at scale and identify malicious software, helping to protect the UK from cyber attacks.
"AI is already invaluable in many of our missions as we protect the country, its people and way of life," Jeremy Fleming, the director of GCHQ, said in a statement. He added that AI and other technical developments bring great opportunities, but also "pose significant ethical challenges for all of society... I hope it will inspire further thinking at home and abroad about how we can ensure fairness, transparency and accountability to underpin the use of AI."
GCHQ warns that a growing number of states are turning to AI as a means of spreading disinformation to shape public perceptions and undermine trust. The report comes as the British Government gets ready to publish its Integrated Review into Security, Defence, Development and Foreign Policy.