A Reality Check on AI in Ransomware

One of the most common questions I get asked in media interviews is “how has AI changed ransomware?”. The reality is that AI has not changed the fundamentals of ransomware, but it has changed the tempo.

Ransomware operations still rely on the same core disciplines: initial access, privilege escalation, lateral movement, evasion, data theft, disruption and extortion. What AI has done is make each of those stages faster, cheaper and easier to scale. It is less like inventing a new weapon and more like putting an existing criminal supply chain on a conveyor belt and upping the voltage on the motor.

One of the clearest examples is phishing. Attackers can now generate far more convincing lures at volume, with better grammar, more believable tone and tighter tailoring to the target’s role, geography, supplier relationships and current projects. The cyber awareness training advice to look for poor spelling and awkward phrasing matters less when an attacker can produce polished emails in seconds, adapt them for different executives and even localise them for specific regions or business units. In Business Email Compromise attacks, AI and large language models have made fraud significantly more convincing by enabling threat actors to mimic the tone, writing style, vocabulary and conversational patterns of senior executives, so fraudulent requests appear far more authentic to recipients.

AI also helps compress the vulnerability exploitation cycle. When vendors publish patches, advisories or code commits, attackers can use AI to accelerate reverse engineering and identify which security flaw was fixed. That shortens the gap between a patch being released and working exploit paths being developed. In practice, defenders are under even greater pressure, because every delay in patching gives adversaries more opportunity to turn public remediation into operational initial access. I have seen vendor patches released and, within a day, exploits weaponised on Ransomware-as-a-Service platforms with active scanning for vulnerable systems already underway: Microsoft Patch Tuesday → Exploit Wednesday. Who can patch critical systems in under 24 hours?

The same acceleration applies to exploit development more broadly. “Vibe coding” lowers the barrier for less capable operators to build tooling, adapt proof-of-concept code, automate payload generation or stitch scripts together into a workable attack chain. It does not eliminate the need for real expertise at the high end, but it does help mediocre operators produce usable tradecraft more quickly. AI can help assemble the rough draft of an exploit, a loader, a credential harvesting workflow or an orchestration script that links reconnaissance, privilege escalation, remote execution and data staging into a more complete attack path.

AI is also improving operational scale behind the scenes. It can help attackers analyse stolen documents, identify high-value systems and executives, prioritise extortion targets, summarise internal chat logs, map identity relationships and generate follow-on attack actions based on what they find. That matters because ransomware is not just about encryption anymore. It is about understanding the victim quickly enough to apply maximum pressure before defenders regain control.

None of this means AI has made attackers magically sophisticated. It means it has made many of them more efficient. The practical takeaway for defenders is simple: assume the attacker can move faster, write better, learn quicker and automate more of the journey from initial access to impact. In a ransomware context, AI is not the story on its own. The real story is that it is reducing friction across the entire attack lifecycle, and every reduction in friction increases risk for the defender.
