GCHQ has set out how it wants to use artificial intelligence (AI) in the fight against increasingly sophisticated criminal activity.
The security service believes the technology could be a key tool in foiling child sex abuse and trafficking, quickly sifting through ever-growing masses of complex data.
This could mean mapping international networks that carry out human, drugs and weapons trafficking, which are currently concealing their crimes using encryption tools and virtual currencies such as bitcoin.
Advanced systems could scan online chat rooms for evidence of grooming in ways humans struggle to uncover quickly, and hunt down hidden people and illegal services on the dark web.
In cyberspace, AI could also help identify malicious software capable of crippling a business's ability to operate, causing lost revenue or damage to assets.
Almost half of UK firms and a quarter of charities reported having a security breach or cyber attack in the last 12 months, with one in five of these leading to significant loss of money or data.
Human analysts would remain at the heart of investigations, but AI offers a chance to filter vast amounts of data and point them towards the most relevant fragments.
In a new report, the agency acknowledges there are ethical considerations that require attention before such technology is deployed further.
“AI, like so many technologies, offers great promise for society, prosperity and security,” said Jeremy Fleming, GCHQ director.
“Its impact on GCHQ is equally profound. AI is already invaluable in many of our missions as we protect the country, its people and way of life.
“It allows our brilliant analysts to manage vast volumes of complex data and improves decision-making in the face of increasingly complex threats – from protecting children to improving cyber security.
“While this unprecedented technological evolution comes with great opportunity, it also poses significant ethical challenges for all of society, including GCHQ.
“Today we are setting out our plan and commitment to the ethical use of AI in our mission.
“I hope it will inspire further thinking at home and abroad about how we can ensure fairness, transparency and accountability to underpin the use of AI.”
The paper – titled Ethics of AI: Pioneering a New National Security – details how GCHQ plans to devise an AI ethical code of practice and hire more diversely, in a bid to ensure the technology is used appropriately.
This includes the creation of an AI Lab in Manchester which will focus on testing projects.
GCHQ warns that a growing number of states are turning to AI as a means of spreading disinformation to shape public perceptions and undermine trust.
Further, the agency’s own AI could be adopted to block botnets on social media, as well as to spot so-called “troll farms”.
The report comes as the Government prepares to publish its Integrated Review into security, defence, development and foreign policy.