UK spies will need to use artificial intelligence (AI) to counter a range of threats, an intelligence report says.
Adversaries are likely to use the technology for attacks in cyberspace and on the political system, and AI will be needed to detect and stop them.
But AI is unlikely to predict who might be about to be involved in serious crimes, such as terrorism – and will not replace human judgement, it says.
The report is based on unprecedented access to British intelligence.
The Royal United Services Institute (Rusi) think tank also argues that the use of AI could give rise to new privacy and human-rights considerations, which will require new guidance.
The UK’s adversaries “will undoubtedly seek to use AI to attack the UK”, Rusi says in the report – and this may include not just states, but also criminals.
Fire with fire
The future threats could include using AI to develop deep fakes – where a computer can learn to generate convincing faked video of a real person – in order to manipulate public opinion and elections.
It might also be used to mutate malware for cyber-attacks, making it harder for normal systems to detect – or even to repurpose and control drones to carry out attacks.
In these cases, AI will be needed to counter AI, the report argues.
“Adoption of AI is not just important to help intelligence agencies manage the technical challenge of information overload. It is highly likely that malicious actors will use AI to attack the UK in numerous ways, and the intelligence community will need to develop new AI-based defence measures,” argues Alexander Babuta, one of the authors.
The independent report was commissioned by GCHQ, the UK’s signals intelligence agency, and its authors had access to much of the country’s intelligence community.
All three of the UK’s intelligence agencies have made the use of technology and data a priority for the future – and the new head of MI5, Ken McCallum, who takes over this week, has said one of his priorities will be to make greater use of technology, including machine learning.
However, the authors believe that AI will be of only “limited value” in “predictive intelligence” in fields such as counter-terrorism.
The often-cited fictional reference is the film Minority Report, in which technology is used to predict who is about to commit a crime before they have done so. But the report argues this is far less likely to be viable in real national-security situations.

Acts of terrorism are too rare to provide enough historical data to look for patterns – they happen far less often than other criminal acts, such as burglary. And even within that limited data set, the perpetrators’ backgrounds and ideologies vary so much that it is hard to build a model of a terrorist profile.
There are too many variables to make prediction straightforward, and new events may be radically different from those that came before, the report says. Any such profiling can also be discriminatory and give rise to new human-rights concerns.

In practice, in areas such as counter-terrorism, the report argues that “augmented” – rather than artificial – intelligence will be the norm: technology helping human analysts sift and prioritise ever-growing amounts of data, leaving people to make their own judgements.
It will be important to ensure that human operators remain accountable for decisions and that AI does not act as a “black box” whose reasoning people cannot understand, the report says.

The authors are also wary of some of the hype surrounding AI, and of talk that it will soon be transformative. Instead, they expect it to augment existing processes rather than deliver new futuristic capabilities.
They believe the UK is in a strong position to take a global lead, with a concentration of capability in GCHQ – and more broadly in the private sector and in bodies such as the Alan Turing Institute and the Centre for Data Ethics and Innovation. This could allow the UK to position itself at the forefront of AI use, but within a clear ethical framework, they say.

The deployment of AI by intelligence agencies may require new guidance to ensure safeguards are in place and that any intrusion into privacy is necessary and proportionate, the report says.
One of the thorny legal and ethical issues facing spy agencies, especially since the Edward Snowden disclosures, is how justified it is to collect large amounts of data from ordinary people in order to sift and analyse it in search of those who might be involved in terrorism or other criminal activity.
And there is the related question of how far privacy is infringed when data is collected and analysed by a machine, compared with when a human sees it.
Privacy advocates fear that AI will require collecting and analysing far larger volumes of data from ordinary people in order to understand and search for patterns, creating a new level of intrusion.
The report’s authors believe new rules will be needed. But overall, they say, it will be important not to become overly preoccupied with the technology’s potential downsides.
“There is a risk of stifling innovation if we become too focused on hypothetical worst cases and speculation about a dystopian future AI-driven surveillance network,” argues Mr Babuta. “Legitimate ethical concerns will be overshadowed unless we focus on likely and realistic uses of AI in the short to medium term.”