GCHQ’s AI Report Has A Clear Message
The British intelligence and security agency GCHQ has recently released its new AI and Data Ethics Framework, which describes the agency’s future use of Artificial Intelligence (AI). GCHQ has long been known for its expertise in code-breaking and stealing secrets, activities that are hardly ethical by conventional standards. However, the full title of the publication, 'Pioneering a New National Security: The Ethics of Artificial Intelligence', carries an important message.
The GCHQ report has been released ahead of the British government's Integrated Review of Security, Defence, Foreign Policy and Development, which is due for publication soon, and comes after the public announcement that the National Cyber Force has been established.
This report is the first sign that the UK intelligence community is stepping out of the shadows to engage in public debate.
Jeremy Fleming, GCHQ’s Director, says that AI could have a profound impact on the way his organisation operates, from spotting otherwise-missed clues and detecting terror plots to identifying the sources of fake news and computer viruses. “AI is now present in every aspect of British life. It enables our telecommunications systems, our smartphones, our banks, our National Health Service... The UK’s global leadership in AI and data science is a major part of what has made the UK a thriving cyber power, and AI stands to add billions to the British economy.”
AI is controversial because it relies on computer algorithms to make decisions based on patterns found in data, and it is used alongside human analysts in investigations. GCHQ does not say exactly how it presently uses AI software or which data it analyses, but the agency is known to rely in part on monitoring people’s phone and messaging data and on watching social media profiles.
There are numerous definitions of AI, but GCHQ defines it as a type of software that can learn to find complex patterns in data. In doing so, it can provide analysts with new insights or forecast future trends, which can then be used to automate or augment business processes. In some cases such software manages quite basic, highly repetitive activities; in others it tackles more sophisticated but narrowly defined challenges.
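To make that definition concrete, the short sketch below shows the kind of pattern-finding the report describes only in general terms: an unsupervised model learns what "typical" records look like and flags the ones that break the pattern. It is purely illustrative, built on synthetic data and the open-source scikit-learn library, and implies nothing about the tools or data GCHQ actually uses.

```python
# Illustrative only: unsupervised pattern-finding on synthetic "traffic" records.
# This is not GCHQ code; it simply shows what "learning complex patterns" can mean.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Made-up features, e.g. bytes transferred and session duration.
normal = rng.normal(loc=[500, 30], scale=[50, 5], size=(1000, 2))
unusual = rng.normal(loc=[5000, 300], scale=[100, 10], size=(5, 2))
data = np.vstack([normal, unusual])

# Learn what "typical" looks like, then score every record against it.
model = IsolationForest(contamination=0.01, random_state=0).fit(data)
flags = model.predict(data)  # -1 marks records that break the learned pattern

print(f"{(flags == -1).sum()} records flagged for further inspection")
```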
The UK intelligence community has deliberately avoided public debate for most of its existence, and GCHQ’s report is the latest step in an evolution toward a more public-facing posture that will increasingly become the norm in national security debate. GCHQ is ahead of the other spy agencies in addressing the emerging AI dynamic, and it is stepping out of the shadows to describe how it intends to use the technology.
Whether the other major British intelligence agencies, MI5 and SIS, will also step into the light remains to be seen, but with GCHQ making its case public in this way, it is likely that they will soon follow.
Will central bodies like the British Joint Intelligence Committee also produce public reports? Other nations already produce this kind of report: Estonia’s Foreign Intelligence Service has released its own document, and the US Director of National Intelligence publishes an annual Worldwide Threat Assessment.
The release of the GCHQ report on AI carries its own hidden meaning for public debate on UK national security - that we should be clever enough to read between the lines of code and see the change that is coming. The agency’s report also indicates that AI could help to better identify sources of fake news or spot deep-fake images, which typically come from Russia, and to more quickly spot and trace malicious virus software, which often emerges from China or North Korea.
While AI is likely to be used comprehensively, the overarching decisions at GCHQ will still be made by people. The report states: “GCHQ's specialists share the same concerns voiced by many external experts around using AI to make predictions about individuals, their behaviour and motivations... AI software can help triage and prioritise across our data sources. It can suggest previously unseen patterns and learn to identify valuable behavioural indicators. But it is not yet sophisticated enough to be trusted to make independent decisions based on those outputs.”
In this case, GCHQ expects its use of AI to resemble an 'augmented intelligence' model, in which AI software collates information from relevant sources and flags significant conclusions for review by a human analyst, but does not automate any action as a result; it supports the human decision-making process rather than determining it.
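The sketch below illustrates, in principle, what such an augmented-intelligence workflow might look like: software scores and ranks items, flags those above a threshold, and stops there, leaving every decision to a human analyst. The class, function, and threshold names are hypothetical, and the example implies nothing about GCHQ's actual systems.

```python
# Hypothetical sketch of an "augmented intelligence" triage loop:
# the software prioritises and flags, but a human makes every decision.
from dataclasses import dataclass

@dataclass
class Item:
    source: str
    summary: str
    score: float  # relevance score from some upstream model (assumed)

REVIEW_THRESHOLD = 0.8  # illustrative cut-off, not a real operational value

def triage(items: list[Item]) -> list[Item]:
    """Rank items and return those worth a human analyst's attention.
    Deliberately takes no action itself: no blocking, no tasking."""
    ranked = sorted(items, key=lambda i: i.score, reverse=True)
    return [i for i in ranked if i.score >= REVIEW_THRESHOLD]

if __name__ == "__main__":
    inbox = [
        Item("open-source report", "possible phishing infrastructure", 0.91),
        Item("partner feed", "routine scanning activity", 0.42),
    ]
    for item in triage(inbox):
        # Output is a recommendation only; the analyst decides what happens next.
        print(f"Flag for review: {item.source} - {item.summary} ({item.score:.2f})")
```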
Sources: GCHQ | Estonian Foreign Intelligence Service | CapX | I-HLS | Sky | RUSI | Guardian | About Intel | Diginomica