Britain Turns To AI To Counter Espionage
Uploaded on 2020-06-08
Spies will need to use Artificial Intelligence (AI) to counter a range of threats, according to a report commissioned by the British spy agency GCHQ. Adversaries are likely to use the technology for attacks in cyberspace and on the political system, and AI will be needed to detect and stop them.
The UK's intelligence and security agency GCHQ commissioned a study into the use of AI for national security purposes. It warns that while the emergence of AI creates new opportunities for boosting national security and keeping members of the public safe, it also presents potential new challenges, including the risk of the same technology being deployed by attackers.
Modern-day cyber security threats require a speed of response far greater than human decision-making allows. Given the rapid increase in the volume and frequency of malware attacks, AI cyber defence systems are increasingly being implemented to proactively detect and mitigate threats. Intelligence and espionage services need to embrace AI in order to protect national security as cyber criminals and hostile nation states increasingly look to use the technology to launch attacks.
The aim of the study is to establish an independent evidence base to inform future policy development on national security uses of AI.
The requirement for AI is all the more pressing when considering the need to counter AI-enabled threats to UK national security. Malicious actors will undoubtedly seek to use AI to attack the UK, and it is likely that the most capable hostile state actors, which are not bound by an equivalent legal framework, are developing or have developed offensive AI-enabled capabilities.
In time, other threat actors, including cyber-criminal groups, will also be able to take advantage of these same AI innovations, creating new threats:
- Threats to digital security include the use of polymorphic malware that frequently changes its identifiable characteristics to evade detection (a short illustration follows this list), or the automation of social engineering attacks to target individual victims.
- Threats to political security include the use of ‘deepfake’ technology to generate synthetic media and disinformation, with the objective of manipulating public opinion or interfering with electoral processes.
- Threats to physical security are a less immediate concern. However, increased adoption of Internet of Things (IoT) technology, autonomous vehicles, ‘smart cities’ and interconnected critical national infrastructure will create numerous vulnerabilities which could be exploited to cause damage or disruption.
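To see why polymorphic code defeats traditional signature matching, consider the minimal Python sketch below. It compares the fingerprints of two hypothetical payloads that differ by a single byte; the byte strings are invented stand-ins for illustration, not real malware.

```python
import hashlib

# Two hypothetical payloads differing by a single byte, standing in
# for a malware sample and a "mutated" polymorphic variant of it.
original = b"\x90\x90\xebPAYLOAD_BODY"
mutated = b"\x90\x91\xebPAYLOAD_BODY"  # one-byte mutation

for name, payload in (("original", original), ("mutated", mutated)):
    print(name, hashlib.sha256(payload).hexdigest()[:16])

# The two digests share nothing, so a blocklist of known hashes misses
# the variant entirely; this is why behaviour-based (often AI-driven)
# detection is needed rather than static signatures alone.
```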
The research highlights several ways in which intelligence agencies could seek to deploy AI:
- The automation of administrative organisational processes could offer significant efficiency savings, for instance by assisting with routine data management tasks or by streamlining compliance and oversight processes.
- For cyber security purposes, AI could proactively identify abnormal network traffic or malicious software and respond to anomalous behaviour in real time (see the first sketch after this list).
- For intelligence analysis, ‘Augmented Intelligence’ (AuI) systems could be used to support a range of human analysis processes, including:
- Natural language processing and audiovisual analysis, such as machine translation, speaker identification, object recognition and video summarisation.
- Filtering and triage of material gathered through bulk collection (the second sketch after this list illustrates this kind of scoring).
- Behavioural analytics to derive insights at the individual subject level.
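As an illustration of real-time anomaly detection on network traffic, here is a minimal Python sketch using scikit-learn's IsolationForest, a common unsupervised choice. The traffic features (bytes sent, connection duration) and all the numbers are assumptions for illustration, not anything drawn from the GCHQ study.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical per-connection features: [bytes sent, duration in seconds].
# The model learns the shape of "normal" traffic from past observations.
normal_traffic = rng.normal(loc=[5_000, 2.0], scale=[1_500, 0.5], size=(1_000, 2))

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_traffic)

# New observations: an ordinary connection, then a bulk, exfiltration-like
# burst far outside the learned distribution.
new_connections = np.array([[5_200, 2.1], [900_000, 45.0]])
print(model.predict(new_connections))  # 1 = looks normal, -1 = anomalous
```

In practice such a model would score live connections continuously, flagging the anomalous ones for automated response or analyst attention.

And as a toy illustration of filtering and triage of bulk-collected material for human review, the sketch below ranks text items against a hand-written keyword list. The terms, weights, and `triage_score` helper are all invented for illustration; a real system would rely on trained models rather than a wordlist.

```python
# Invented priority terms and weights, purely for illustration.
PRIORITY_TERMS = {"transfer": 2.0, "meeting": 1.0, "passport": 3.0}

def triage_score(text: str) -> float:
    """Sum the weights of any priority terms found in the text."""
    return sum(PRIORITY_TERMS.get(w, 0.0) for w in text.lower().split())

items = [
    "lunch meeting moved to noon",
    "wire transfer and passport pickup arranged",
    "weather looks fine this weekend",
]

# Highest-scoring items surface first; everything remains subject to
# human review, matching the report's 'Augmented Intelligence' framing.
for score, item in sorted(((triage_score(i), i) for i in items), reverse=True):
    print(f"{score:4.1f}  {item}")
```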
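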
None of the AI use cases identified in the research could replace human judgement. Systems that attempt to ‘predict’ human behaviour at the individual level are thought likely to be of limited value for threat assessment purposes.
The use of AuI systems to collate information from multiple sources and highlight significant data items for human review is likely to improve the efficiency of analysis tasks focused on individual subjects. However, concerns over the ethical use of AI are highly subjective and context specific. Experts continue to disagree over fundamental questions, such as whether machine analysis is more or less intrusive than human review. And despite a proliferation of ethical principles, there is a lack of clarity on how these should be operationalised in different sectors, and on who should be responsible for oversight and overall scrutiny.
One of the most difficult legal and ethical questions for spy agencies, especially since Edward Snowden's revelations of mass domestic surveillance in the US, is how to justify collecting large amounts of data on ordinary people in order to sift and analyse it for those who might be involved in terrorism or other criminal activity.