Ransomware Turns Strategy Decks into Real-World Consequences Overnight
There are broadly two types of senior executives I meet when assessing whether an organisation is genuinely prepared for a destructive cyberattack.
The first are the ones who understand, often uncomfortably, that this is a serious problem. They have seen well-funded organisations with bigger teams, better-known brands and apparently mature security programmes still get torn apart by ransomware. They are not under any illusion that resilience appears by magic. Their challenge is not arrogance, it is scale: where to start, how to prioritise, and how to move from today’s posture to something materially more resilient within the limits of budget, regulation and organisational reality. Those are solvable problems. With a clear baseline, agreed business objectives and a pragmatic improvement roadmap, real progress can be made.
Then there is the second type, the ones with misplaced confidence.
The ones who have mistaken green cyber dashboards for evidence, glossy but unrealistic tabletop exercises for preparedness and a growing pile of security tooling for operational capability. They talk about ransomware as though it is just another disruption scenario, one that will be handled so long as backups are immutable and enough money has been spent on the latest silver bullets.
They tend to find out, far too late, that the real problem was never whether they could restore data. It was whether they could restore trust.
Conversations with this kind of executive tend to be calm, measured and almost rehearsed.
“We’ve done a tabletop and we performed well.”
“Our backups are immutable and vaulted.”
“Our cyber risk is green across the board.”
“We’ve received an audit report, and we’re in a good place.”
“Our recovery tests show we can recover within our required RTO.”
To be fair, that confidence hasn’t come from nowhere. It’s been built, gradually, through years of business continuity and disaster recovery (BCDR) thinking.
Deterministic thinking: predictable failure models with a small number of root causes. Systems go down, systems come back. Restore from last known good, resume operations.
That mental model works… right up until the moment it doesn’t.
BCDR Teaches You to Fix Failure, Ransomware Teaches You to Question Reality
In traditional disaster recovery, the causes are few and well known:
Power is lost
Equipment fails
Misconfiguration takes a system or network down
Weather or fire takes a facility down
You know what broke, and you know what “good” looks like. You restore, you validate, and you move on. Recovery is deterministic and can be tested regularly to confirm RTOs and RPOs are achievable.
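That determinism is exactly what makes classic recovery testable: a drill can be scripted and scored against hard numbers. A minimal sketch in Python, where restore_from_backup and validate_service are hypothetical stand-ins for whatever calls your backup platform actually provides:

```python
import time
from datetime import datetime, timedelta, timezone

# Hypothetical stand-ins for your backup platform's real restore and
# health-check calls; the stub values exist only so the drill runs.
def restore_from_backup(system: str) -> datetime:
    time.sleep(1)  # stub: simulate the restore taking time
    return datetime.now(timezone.utc) - timedelta(hours=4)  # stub recovery point

def validate_service(system: str) -> bool:
    return True  # stub: confirm the restored system is serving correctly

def run_recovery_drill(system: str, rto: timedelta, rpo: timedelta) -> bool:
    started = datetime.now(timezone.utc)
    recovery_point = restore_from_backup(system)
    healthy = validate_service(system)
    elapsed = datetime.now(timezone.utc) - started
    data_loss = started - recovery_point
    print(f"{system}: recovered in {elapsed}, data-loss window {data_loss}")
    # Deterministic pass/fail: the failure model has a known-good answer.
    return healthy and elapsed <= rto and data_loss <= rpo

print(run_recovery_drill("erp-prod", rto=timedelta(hours=2), rpo=timedelta(hours=6)))
```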
Ransomware and wiper attacks, and particularly modern, identity-led, living-off-the-land attacks, don’t behave like that. What these scenarios introduce is uncertainty about what exactly can be trusted.
Is Active Directory clean?
Are backups actually free from persistence?
Have all the phishing emails that caused the incident been removed from inboxes?
Have the vulnerabilities that let the attack in been patched on the recovered systems?
Were configurations altered months ago?
Is the attacker still in the environment?
You’re no longer restoring systems, you’re reconstructing truth.
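Some of those questions can at least be bounded with evidence rather than hope. As one small example of “reconstructing truth”, here is a minimal sketch, assuming the ldap3 library, placeholder credentials and a suspected earliest-compromise date established by the investigation, that lists Active Directory accounts created inside the compromise window:

```python
from ldap3 import Server, Connection, SUBTREE, NTLM

# Assumptions: a reachable domain controller, read-only IR credentials
# (placeholders below), and an earliest-compromise date from the investigation.
SUSPECTED_COMPROMISE = "20240301000000.0Z"  # LDAP generalized-time format

server = Server("dc01.example.internal")
conn = Connection(server, user="EXAMPLE\\ir-readonly", password="...",
                  authentication=NTLM, auto_bind=True)

# Accounts created during the compromise window deserve scrutiny before
# anything that trusts AD is brought back online.
conn.search(
    search_base="DC=example,DC=internal",
    search_filter=f"(&(objectClass=user)(whenCreated>={SUSPECTED_COMPROMISE}))",
    search_scope=SUBTREE,
    attributes=["sAMAccountName", "whenCreated"],
)
for entry in conn.entries:
    print(entry.sAMAccountName, entry.whenCreated)
```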
The False Comfort of the “Green” Cyber Dashboard
Many executive teams are being sold a version of cyber risk that looks reassuring:
Patch SLAs are met
EDR coverage is high
MFA is deployed
Risk heatmaps are trending down
Everything looks…green. The problem is that these dashboards often measure control presence, not operational effectiveness under attack.
Attackers don’t play on your dashboard, they play in the gaps between the things your dashboard measures:
Security Operations and tooling
IT Operations
Identity and infrastructure ownership
Third-party dependencies
When they succeed, that reassuring green status flips, very quickly, into a situation where no one can answer the most important question: “What do we trust enough to turn back on?”
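The gap between presence and effectiveness is easy to show with toy numbers. A minimal sketch, using illustrative data rather than any particular product’s API:

```python
# Each endpoint: is the agent installed, and did it pass a live detection
# test (e.g. a harmless simulated attack technique)?
endpoints = [
    {"host": "web01", "edr_installed": True,  "detection_test_passed": True},
    {"host": "web02", "edr_installed": True,  "detection_test_passed": False},
    {"host": "db01",  "edr_installed": True,  "detection_test_passed": False},
    {"host": "hv01",  "edr_installed": False, "detection_test_passed": False},
]

presence = sum(e["edr_installed"] for e in endpoints) / len(endpoints)
effectiveness = sum(e["detection_test_passed"] for e in endpoints) / len(endpoints)

print(f"Dashboard says: EDR coverage {presence:.0%}")               # 75%: looks green
print(f"Under attack: detections fired on {effectiveness:.0%}")     # 25%: the gap
```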
Immutable Backups Are Necessary. They Are Not Sufficient
Let me be clear: immutable, well-protected backups are table stakes. Without them, recovery is a negotiation, and even then janky encryptors mean you’ll lose some of your data even after paying.
The hard truth is that having backups is not the same as having a recovery capability. I may risk sounding like a stuck record, but recovery in a destructive cyberattack scenario is not just about data. It’s about trust.
Establishing this trust depends on answering three critical questions:
Do we understand the attack?
Where did it start?
How did it spread?
What techniques were used?
What persistence mechanisms exist?
What accounts were created?
What vulnerabilities were exploited?
Without this, every restore risks reintroducing the attack.
Can we prove the restored state is clean?
Are we restoring from a point before compromise, or just before detonation?
Has identity infrastructure been rebuilt or just recovered?
Are configurations verified, or assumed?
“Last known good” is often treated as just “last known before encryption”, which isn’t the same as clean.
Have we removed the attack surface that allowed it?
Are vulnerabilities patched or still present?
Have misconfigurations been corrected?
Have identity and privilege problems been addressed?
If not, recovery is just resetting the conditions for reinfection.
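One way to keep those three questions honest is to encode them as an explicit gate that every restore has to pass, with recovery points judged against the earliest evidence of compromise rather than the encryption event. A minimal sketch, with hypothetical field names:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class RestoreCandidate:
    system: str
    recovery_point: datetime  # timestamp of the backup being considered

@dataclass
class Investigation:
    attack_understood: bool        # entry point, spread, techniques, persistence known
    earliest_compromise: datetime  # first evidence of attacker presence, not detonation
    attack_surface_removed: bool   # vulns patched, misconfigs fixed, identity cleaned up

def safe_to_restore(c: RestoreCandidate, inv: Investigation) -> tuple[bool, str]:
    if not inv.attack_understood:
        return False, "attack not understood: restore risks reintroducing it"
    if c.recovery_point >= inv.earliest_compromise:
        return False, "'before encryption' is not the same as clean"
    if not inv.attack_surface_removed:
        return False, "restoring would reset the conditions for reinfection"
    return True, "evidence supports restore"

inv = Investigation(True, datetime(2024, 3, 1), attack_surface_removed=False)
print(safe_to_restore(RestoreCandidate("erp-prod", datetime(2024, 2, 15)), inv))
```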
The Executive Shock: From Recovery to Paralysis
This is the moment most tabletop exercises never simulate, especially when they’re run by people with no practical experience of being inside organisations that have just fallen victim to a ransomware attack.
What executives and operational teams need to be prepared for is not the encryption event, not the ransom demand, but the hours and days after, when:
Systems are technically recoverable
Backups are intact
Infrastructure is available
…and yet:
Nothing can be safely brought back online, because:
Identity is untrusted
The network is compromised
Security controls have been evaded
Logs are incomplete or tampered with
No one can assert that persistence is gone
You can’t get physical access to buildings
Playbooks and contact lists aren’t available
This is where deterministic thinking breaks down. Executives primed to expect a BCDR event expect a simple sequence:
Fail…Recover…Resume
Instead, they encounter a decision vacuum:
Unknown…Unknown…High-risk decision under pressure…Reinfection that causes more downtime and loss of confidence
This is where many organisations stall, not because they lack technology but because they have not turned that technology into an operational capability.
Why Most Tabletop Exercises Fail Executives
Many tabletop exercises are designed, intentionally or not, to protect confidence rather than challenge it. They will often:
Assume clean recovery points
Skip over identity compromise
Ignore degraded security tooling
Fast-forward through investigation and eradication
They simulate a bad day, not the loss of control typical of the destructive cyberattacks I’ve spent the past decade dealing with. As a result, executives rehearse:
Communication
Escalation
Decision-making in a structured environment
All important aspects of dealing with such an attack, but they don’t experience:
Conflicting technical advice
Absence of reliable data
Trade-offs between speed and safety
The reality that every option carries risk
So when the real event happens, the first time they face these conditions…is during the incident itself.
Cyber Resilience Is Not a Product, It’s a Capability Under Stress
Let me put that bluntly: cyber resilience is not something you can simply buy off the shelf. As someone who works for a vendor, albeit running an operations consulting and incident response function, I appreciate the irony in saying that, but it remains true.
A capable, well-protected data management platform is the keystone of the whole thing. Without it, you are missing the one piece that makes secure response and recovery realistically achievable. But even the keystone does not build the arch by itself. It still takes operational discipline, coordination, investigation, remediation and clear decision-making to turn technology into a genuine resilience capability.
You can invest in:
Better EDR
More tooling
Stronger backup platforms
Automation and orchestration
And all of that matters, but none of it replaces the need for:
A shared operational model between Security, IT, and the business
Clear ownership of trust decisions
Predefined workflows for investigation, containment, eradication and recovery
The ability to rebuild, not just restore, critical systems
The ability to restore trust at every level that could have been compromised: firmware, network, hypervisor, operating system, application, configuration and identity
Cyber resilience is not what you have, it’s what you can execute under duress, when everything you rely on is in question.
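One way to make that executable rather than aspirational is to treat the layers from the list above as an ordered checklist, where nothing higher comes online until everything beneath it has been verified. A minimal sketch, with an illustrative (not prescriptive) ordering:

```python
# The trust layers from the list above, ordered bottom-up.
LAYERS = ["firmware", "network", "hypervisor", "operating system",
          "application", "configuration", "identity"]

def next_layer_to_verify(verified: set[str]) -> str | None:
    """Nothing higher comes into scope until every layer beneath it is verified."""
    for layer in LAYERS:
        if layer not in verified:
            return layer
    return None  # all layers verified: safe to operate again

print(next_layer_to_verify({"firmware", "network"}))  # -> 'hypervisor', not identity
```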
Preparing Executives for the Reality They’ll Actually Face
If you want to avoid the “rude awakening,” executive preparation needs to change. It needs to move from:
Confidence-building to assumption-breaking
Deterministic scenarios to ambiguous, degraded conditions
Technology focus to decision-making under uncertainty
Executives need to experience:
The moment when identity cannot be trusted
The delay required to investigate before recovery
The trade-off between restoring quickly and restoring safely
The impact of incomplete or conflicting information
Because in a real ransomware event, the hardest problem isn’t technical. It’s deciding what to do when you don’t have enough certainty to be comfortable.
The Real Question
Most organisations are asking the wrong question: “Can we recover our data?”
The question executives should be asking is: “Can we prove we’re safe to operate again?”
Because until you can answer that - backed by evidence, not assumption - you’re not recovering, you’re guessing.