At the AI Safety Summit held at Bletchley Park in November 2023, international leaders came together to discuss the vast potential of AI models in promoting economic growth, propelling scientific advances, and providing a wide range of public benefits.
They also underscored the security risks that could arise from the irresponsible development and use of AI technologies.
Now governments are evaluating and addressing the potential threats and risks associated with AI.
While it is essential to focus on the risks posed by AI, we must also seize the substantial opportunities it presents to cyber defenders.
For example, AI can improve the detection and triage of cyber attacks and identify malicious emails and phishing campaigns, ultimately making them easier to counteract.
The Summit Declaration highlighted the importance of ensuring that AI is designed, developed, deployed, and used in a manner that is safe, human-centric, trustworthy, and responsible for the benefit of all.
The National Cyber Security Centre
The NCSC continues to work with international partners and industry to provide guidance on the secure development and use of AI, publishing the Guidelines for Secure AI System Development in November 2023, so that the benefits AI offers to society can be realised.
NCSC Assessment
NCSC Assessment (NCSC-A) is the authoritative voice on the cyber threat to the UK. It fuses all-source information – classified intelligence, industry knowledge, academic material and open source – to provide independent key judgements that inform policy decision making and improve UK cyber security.
It works closely with government, industry and international partners for expert input into assessments.
NCSC-A is part of the Professional Head of Intelligence Assessment (PHIA) profession. PHIA leads the development of the profession through analytical tradecraft, professional standards, and building and sustaining a cross-government community.
This report uses the formal probabilistic language of NCSC-A products to inform readers about the near-term impact of AI on the cyber threat.
Key judgements
- Artificial intelligence will almost certainly increase the volume and heighten the impact of cyber attacks over the next two years. However, the impact on the cyber threat will be uneven (see table 1).
- The threat to 2025 comes from evolution and enhancement of existing tactics, techniques and procedures (TTPs).
- All types of cyber threat actor – state and non-state, skilled and less skilled – are already using AI, to varying degrees.
- AI provides a capability uplift in reconnaissance and social engineering, almost certainly making both more effective and efficient, and harder to detect.
- More sophisticated uses of AI in cyber operations are highly likely to be restricted to threat actors with access to quality training data, significant expertise (in both AI and cyber), and resources. More advanced uses are unlikely to be realised before 2025.
- AI will almost certainly make cyber attacks against the UK more impactful because threat actors will be able to analyse exfiltrated data faster and more effectively, and use it to train AI models.
- AI lowers the barrier for novice cyber criminals, hackers-for-hire and hacktivists to carry out effective access and information gathering operations. This enhanced access will likely contribute to the global ransomware threat over the next two years.
- Moving towards 2025 and beyond, commoditisation of AI-enabled capability in criminal and commercial markets will almost certainly make improved capability available to cyber crime and state actors.
Context
This assessment focuses on how AI will impact the effectiveness of cyber operations and the implications for the cyber threat over the next two years. It does not address the cyber security threat to AI tools, nor the cyber security risks of incorporating them into system architecture.
The assessment assumes no significant breakthrough in transformative AI in this time period. This assumption should be kept under review, as any breakthrough could have significant implications for malware and zero-day exploit development and therefore the cyber threat.
The impact of AI on the cyber threat will be offset by the use of AI to enhance cyber security resilience through detection and improved security by design. More work is required to understand the extent to which AI developments in cyber security will limit the threat impact.