MIT Technology Review’s 2026 list of the most important trends in artificial intelligence includes a stark warning about AI’s role in the cybersecurity landscape: the same generative AI capabilities that are accelerating legitimate productivity are making cyberattacks, scams, and infiltrations faster, cheaper, and easier than ever before. The result is a threat environment fundamentally changed for every organisation connected to the internet.
The Threat Multiplier Effect
Generative AI has dramatically reduced three key barriers to cyberattack: expertise, time, and cost. Where sophisticated phishing campaigns previously required skilled social engineers writing individually crafted messages, AI can now generate thousands of highly personalised, contextually accurate phishing emails per hour at negligible cost. Where malware development once required deep programming knowledge, AI code generation tools can produce functional exploit code from natural language descriptions.
The same pattern applies to deepfake-based fraud, where AI-generated audio and video of real individuals (executives, family members, public officials) are being used to authorise fraudulent wire transfers, extract sensitive credentials, or manipulate public opinion. MIT Technology Review notes that, driven by improvements in generative AI and the mass production of non-consensual synthetic media, what were once theoretical threats are now documented attack patterns appearing across industries worldwide.
Defensive AI Is Also Improving — But Asymmetrically
Security researchers and AI companies are deploying AI defensively as well — for anomaly detection, threat intelligence synthesis, and automated incident response. The challenge is that the offensive use of AI scales more easily than the defensive use. Attackers need to succeed once; defenders need to succeed every time.
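To make the defensive side concrete, the sketch below shows one common pattern from that list, unsupervised anomaly detection over authentication telemetry, using scikit-learn's IsolationForest. The feature columns, synthetic data, and contamination setting are illustrative assumptions for this example, not details drawn from the MIT Technology Review report or any vendor's product.

```python
# Minimal sketch: unsupervised anomaly detection over login telemetry.
# Feature schema (failed logins, distinct source IPs, MB transferred per
# account-hour) is an assumption chosen for illustration only.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Simulate "normal" hourly activity for 1,000 account-hours.
normal = np.column_stack([
    rng.poisson(1.0, 1000),        # occasional failed logins
    rng.poisson(1.2, 1000) + 1,    # usually one or two source IPs
    rng.gamma(2.0, 5.0, 1000),     # modest data transfer (MB)
])

# Inject a few suspicious account-hours: credential stuffing
# (many failures from many IPs) and bulk exfiltration (heavy transfer).
suspicious = np.array([
    [40, 25, 10.0],
    [55, 30, 12.0],
    [2, 1, 900.0],
])
events = np.vstack([normal, suspicious])

# Fit an unsupervised detector; `contamination` is the assumed fraction
# of anomalous rows and would be tuned against real data in practice.
detector = IsolationForest(contamination=0.01, random_state=0)
labels = detector.fit_predict(events)        # -1 = anomaly, 1 = normal
scores = detector.decision_function(events)  # lower = more anomalous

for idx in np.where(labels == -1)[0]:
    print(f"flagged row {idx}: features={events[idx]}, score={scores[idx]:.3f}")
```

Even a toy detector like this illustrates the asymmetry: the defender must continuously model what "normal" looks like across every account and system, while the attacker only needs one event that slips past the threshold.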
Anthropic’s decision to restrict its Mythos model, specifically cited for its advanced cybersecurity capabilities, and to build automated cybersecurity safeguards directly into Claude Opus 4.7 reflects the AI industry’s own recognition that the most capable models represent a meaningful dual-use risk in the cybersecurity domain. The question for organisations in 2026 is not whether AI-powered attacks are coming but how to build defences that operate at AI speed.