
AI: NCSC’s Cyber Threat Assessment

Bola Ogbara
2 min. read

The Government Communications Headquarters (GCHQ), a United Kingdom organization famous for breaking the Enigma code in World War Two, has turned its focus in 2024 to the cybersecurity threats posed by Artificial Intelligence. The National Cyber Security Centre (NCSC), a part of GCHQ, released a series of resources, analyses, and warnings about the evolution of cybercrime in the age of AI.


Both domestically and abroad, artificial intelligence (AI) is a hot-button topic in the cybersecurity field because it can both strengthen defenses and empower attackers. Lindy Cameron, CEO of the NCSC, spoke on the issue at the AI Safety Summit in November 2023: “We must ensure that we both harness AI technology for its vast potential and manage its risks – including its implications on the cyber threat”. James Babbage, Director General for Threats at the National Crime Agency, discussed the risks AI poses for ransomware: “Ransomware continues to be a national security threat…[that] is likely to increase in the coming years due to advancements in AI and the exploitation of this technology by cybercriminals. AI services lower barriers to entry, increasing the number of cybercriminals, and will boost their capability by improving the scale, speed and effectiveness of existing attack methods.”


Still, worsening ransomware is just one of several possible negative effects of widespread AI adoption. In its 2023 Annual Review, the NCSC delved into the potential risks associated with AI models and discussed how AI could interfere with elections by amplifying the spread of disinformation through bots. Given these concerns, it's no surprise that the organization released another document about AI risks two months later.


On January 24, 2024, the NCSC released a report on “the near-term impact of AI on the cyber threat,” which covers its assessment of how AI will affect cyber operations over the next two years. The statements in the assessment rest on the assumption that there will be no “significant breakthrough in transformative AI in this period”; such a breakthrough could have serious ramifications for malware. The report also notes that AI can be used to “enhance cyber security resilience through detection and improved security by design,” which could limit the risk posed by AI-powered cyber threats, though to what extent remains unknown.


Among the key judgements in the assessment are that cyber threat actors of all types are already using AI to some degree, and that AI “will almost certainly increase the volume and heighten the impact of cyber attacks over the next two years.” On the Professional Head of Intelligence Assessment (PHIA) probability yardstick, “almost certainly” corresponds to a probability of at least 95%.


According to the report, a large part of AI's value to threat actors will be (and already has been) in social engineering. Translation mistakes and grammatical errors are often the clues that give away a fraudulent message from a cyber criminal posing as someone else - but generative AI (Gen AI) and large language models (LLMs) can remove these mistakes, making it harder to distinguish a phishing attempt from the real thing.


Though AI will make social engineering easier for less capable threat actors, the assessment notes that more sophisticated uses of AI will remain the preserve of actors skilled enough to exploit them. Vulnerability research and malware and exploit development still require human expertise, while complex cyber operations that depend on training AI with quality data will likely remain achievable only by highly capable state threat actors.


The battleground of cybersecurity is constantly shifting, and AI is the latest weapon in this evolving war. By acknowledging both its dangers and its potential benefits, we can work towards a more secure future where AI is used to protect, not exploit. This balancing act requires a proactive and collaborative approach from governments, businesses, and individuals alike. The coordination between the US and the UK to create the Guidelines for Secure AI System Development is a good starting point, but we still have a long way to go.