Your Backup Platform Is Probably the Fastest Vulnerability Scanner You Are Not Using
In my recent post on Anthropic Mythos, I referenced the Cloud Security Alliance’s recommendations for reducing the risk posed by AI-enabled attacks through stronger vulnerability management. If AI-assisted attackers can accelerate reconnaissance, exploit selection and attack-path development, then defenders need to accelerate the way they find, prioritise, remediate and validate vulnerabilities. The answer is not another dashboard or another theoretical risk score; it is making better use of the data organisations already protect every day.
In this post, I want to explore how a modern data management platform can help vulnerability management programmes move faster, improve coverage and provide stronger evidence that systems being restored or kept in production can actually be trusted.
The Current State
Most organisations treat vulnerability management as a production scanning problem:
Find the asset.
Vulnerability scan the asset.
Raise the ticket for remediation.
Wait for the owner to respond.
Argue with them for several iterations about attack surface vs. downtime.
Rescan the asset to validate remediation.
Close the ticket.
Rinse and repeat.
That model works, up to a point, but it has a very obvious weakness: it depends on being able to interrogate production systems safely, frequently and completely. In the real world, that is often where the process starts to break down.
Fragile systems cannot be scanned aggressively; critical legacy applications often fall over if you look at them too hard. Critical databases sit behind layers of change control. Crown-jewel workloads are excluded from scans because “the business will not accept the risk.” Servers are missing agents. Network paths are blocked. Cloud assets appear and disappear faster than governance processes can keep up.
Then, during a destructive cyberattack, the problem becomes even harder. You are no longer asking, “What vulnerabilities exist in production?” You are asking a much more important question: “If we restore this system, are we restoring the same exposure that allowed the attacker in?”
That is where use cases for a modern data management platform become much more interesting.
Not as a replacement for Tenable, Qualys, Rapid7, cloud security posture management or patch management tooling, but as an acceleration layer that makes those existing investments more useful. Protected backup data, immutable snapshots and historical system state give security teams another vantage point: one that can improve scanning coverage, sharpen remediation priorities and validate that recovered systems are safer than the systems that failed.
In other words, backups should not simply help you recover. They should help you understand what you are recovering, where the risk still exists, and whether you are about to put the same exposure back into production.
Solutions like Cohesity already support this direction through capabilities such as CyberScan, powered by Tenable, which allows organisations to scan backup snapshots for vulnerabilities without placing additional load or risk on production systems. That is important, but the operational value is broader than any single integration. The real opportunity is to change the speed, safety and confidence of vulnerability management by turning protected data into an active source of security evidence.
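To make the pattern concrete, here is a minimal sketch of what scanning a protected copy can look like, assuming the platform has already mounted a snapshot read-only and that an open-source filesystem scanner such as Trivy is available locally. The mount path and snapshot name are hypothetical, and this illustrates the technique rather than describing any vendor’s integration:

```python
import json
import subprocess
from pathlib import Path

def scan_mounted_snapshot(mount_point: str, report_path: str) -> list[dict]:
    """Run a filesystem-level vulnerability scan against a read-only
    snapshot mount, so the production workload is never touched."""
    mount = Path(mount_point)
    if not mount.is_dir():
        raise FileNotFoundError(f"snapshot is not mounted at {mount}")

    # Trivy's rootfs mode scans an arbitrary directory tree, which makes
    # it suitable for a mounted backup image.
    subprocess.run(
        ["trivy", "rootfs", "--format", "json", "--output", report_path, str(mount)],
        check=True,
    )

    report = json.loads(Path(report_path).read_text())
    findings: list[dict] = []
    for result in report.get("Results", []):
        findings.extend(result.get("Vulnerabilities") or [])
    return findings

if __name__ == "__main__":
    # Hypothetical mount point exposed by the backup platform.
    vulns = scan_mounted_snapshot("/mnt/snapshots/erp-prod-nightly", "report.json")
    print(f"{len(vulns)} known vulnerabilities found in the snapshot copy")
```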
Production Scanning Has a Blast Radius
Security teams like to pretend vulnerability scanning is clean and clinical; sometimes it isn’t. Authenticated scans can be noisy, credentialed checks can create load, some scans trigger EDR false-positive alerts, others annoy application owners and fragile legacy systems can behave unpredictably. As a result, critical and exposed systems often end up on exclusion lists or are subject to weakened scan policies. This is especially common in verticals like manufacturing, healthcare, financial services, telco, transport and government. The very systems that matter most are often the ones scanned least aggressively.
That creates a dangerous illusion: the report going to senior executives says the vulnerability management programme is running well, and the board sees vulnerability trend lines, but the most important assets may have the least reliable evidence. A data management platform changes the geometry. By allowing you to scan a protected copy, mount a snapshot, interrogate a backup image, or examine historical filesystem state without touching production, it reduces the operational fear that slows the whole process down.
It is like inspecting the aircraft in the hangar instead of trying to crawl across the wing while it is still flying.
Use Case 1: Scan Fragile Systems Without Touching Production
The first practical use case is simple: scan backup snapshots of systems that are too fragile, sensitive or politically difficult to scan in production.
Think of the ageing ERP server nobody wants to restart. The manufacturing jump box running on an operating system that should have been retired during the London Olympics. The clinical system wrapped in compensating controls because the vendor went out of business. The domain controller that everyone is too terrified to touch. Traditional vulnerability management struggles here because the scan itself is seen as the biggest operational risk.
By using backup data, the organisation can perform vulnerability discovery against a copy rather than the live workload. That does not remove the need to fix the issue, but it does remove one of the excuses for not knowing.
The output is not theoretical. It can identify missing patches, vulnerable packages, exposed software versions, risky configurations and known exposures that would otherwise sit in the “too hard” pile. That matters because attackers do not care that the system was too fragile to scan; they care that it was vulnerable.
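Even without a commercial scanner, the same idea can be approximated by reading package metadata straight out of the snapshot’s filesystem. A minimal sketch for a Debian or Ubuntu image, assuming a hypothetical mount path; the hard-coded advisory list is a stand-in for a real vulnerability feed:

```python
from pathlib import Path

def installed_packages(snapshot_root: str) -> dict[str, str]:
    """Build an offline inventory of installed packages by parsing dpkg's
    status database inside a mounted snapshot. The fragile production
    host is never queried."""
    status = Path(snapshot_root) / "var/lib/dpkg/status"
    packages: dict[str, str] = {}
    current = None
    for line in status.read_text(errors="replace").splitlines():
        if line.startswith("Package: "):
            current = line.removeprefix("Package: ").strip()
        elif line.startswith("Version: ") and current:
            packages[current] = line.removeprefix("Version: ").strip()
            current = None
    return packages

# Illustrative advisory data; in practice this comes from a feed.
KNOWN_BAD = {"openssl": {"1.0.1f-1ubuntu2"}}

if __name__ == "__main__":
    for pkg, version in installed_packages("/mnt/snapshots/legacy-erp").items():
        if version in KNOWN_BAD.get(pkg, set()):
            print(f"VULNERABLE: {pkg} {version}")
```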
Use Case 2: Prioritise Remediation Based on Recoverability
Most vulnerability programmes prioritise using some combination of CVSS, exploitability, asset criticality and internet exposure. That is necessary, but it is not the complete picture. In a destructive attack, another question matters: “If this system is encrypted, wiped or rebuilt tomorrow, will we restore it with the same vulnerability?” That is a different lens.
A vulnerability on a low-criticality server may be less urgent in normal operations, but if that server is part of the recovery chain, hosts privileged tooling, stores automation scripts, provides authentication dependencies, manages backups, supports DNS, is a jump box or controls deployment pipelines, it may be highly significant during recovery.
A data management platform gives security and IT teams a way to assess vulnerability exposure inside the recovery estate itself.
That means asking:
Which protected workloads contain exploitable vulnerabilities?
Which snapshots are safer recovery candidates?
Which golden images are already stale?
Which recovery points pre-date compromise but still contain exploitable weaknesses?
Which rebuild images would immediately reintroduce attacker opportunity?
This is where vulnerability management stops being a hygiene function and becomes part of cyber resilience.
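One way to express that lens in code is to weight findings by whether the affected asset sits in the recovery chain. The multipliers below are invented for illustration; the point is the shape of the calculation, not the numbers:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    asset: str
    cvss: float                 # base severity, 0-10
    exploited_in_wild: bool
    in_recovery_chain: bool     # auth, DNS, backup, jump box, pipelines...

def recovery_aware_priority(f: Finding) -> float:
    """Boost priority when the asset is a recovery dependency: a
    'low-criticality' box can be critical mid-recovery."""
    score = f.cvss
    if f.exploited_in_wild:
        score *= 1.5            # illustrative weighting
    if f.in_recovery_chain:
        score *= 2.0            # illustrative weighting
    return score

findings = [
    Finding("web-frontend-07", cvss=9.8, exploited_in_wild=True, in_recovery_chain=False),
    Finding("jump-box-01", cvss=6.5, exploited_in_wild=True, in_recovery_chain=True),
]
for f in sorted(findings, key=recovery_aware_priority, reverse=True):
    print(f"{recovery_aware_priority(f):5.1f}  {f.asset}")
```

On those invented weightings, the jump box outranks the CVSS 9.8 web server, which is exactly the inversion a recovery-aware programme needs to be able to make.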
Use Case 3: Validate Clean Recovery Points Before Restoration
During ransomware recovery, speed creates pressure: executives want systems back, operations teams want access restored, application owners want their platforms online, customers want products and services to resume.
That pressure creates a dangerous temptation: find the most recent usable backup and start restoring. In the incidents my team regularly supports response and recovery for, that approach can be risky. A backup may be clean from a malware perspective, but still vulnerable. It may pre-date encryption, but still contain the exposed service, unpatched application, weak configuration or vulnerable driver that enabled the attacker to get in.
That distinction matters. In more than a third of the destructive cyberattacks my team deals with, initial access involved exploitation of a vulnerability. So restoring the last available backup may bring the business back, but it may also put the original attack path straight back into production. That is not recovery to a trusted state; it is a reset button for the adversary.
For example:
Snapshot A is the most recent but contains the vulnerable application version used for initial access.
Snapshot B is older but can be patched before being exposed to the network.
Snapshot C contains suspicious binaries identified through threat hunting.
Snapshot D is clean from malware scanning but still contains vulnerable middleware.
This gives the recovery team options; it also gives leadership evidence. Instead of saying, “We restored from Tuesday because it was available,” the team can say, “We selected this recovery point because it was pre-impact, scanned for known vulnerabilities, checked for malware indicators and remediated before reconnection.”
That empowers a very different conversation with regulators, cyber insurers and the board.
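The selection logic behind that statement is simple enough to sketch. Assuming each candidate snapshot already carries the results of malware and vulnerability checks (the fields and timings below are illustrative, loosely mirroring snapshots A to D above):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class RecoveryPoint:
    name: str
    taken_at: datetime
    malware_clean: bool
    exploitable_vulns: int    # findings exploitable once reconnected
    patchable_offline: bool   # can it be remediated before reconnection?

def choose_recovery_point(candidates: list[RecoveryPoint], impact: datetime):
    """Prefer pre-impact, malware-clean points that are either already free
    of exploitable findings or can be patched before reconnection; among
    those, take the most recent."""
    viable = [
        rp for rp in candidates
        if rp.taken_at < impact and rp.malware_clean
        and (rp.exploitable_vulns == 0 or rp.patchable_offline)
    ]
    return max(viable, key=lambda rp: rp.taken_at, default=None)

impact = datetime(2024, 6, 3, 2, 0)
points = [
    RecoveryPoint("A", datetime(2024, 6, 2, 23, 0), True, 3, False),   # most recent, initial-access vuln
    RecoveryPoint("B", datetime(2024, 5, 30, 23, 0), True, 2, True),   # older, patchable offline
    RecoveryPoint("C", datetime(2024, 5, 28, 23, 0), False, 0, True),  # suspicious binaries
    RecoveryPoint("D", datetime(2024, 6, 1, 23, 0), True, 1, False),   # vulnerable middleware
]
print(choose_recovery_point(points, impact).name)  # -> B
```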
Use Case 4: Validate That Remediation Actually Happened
I’ll let you into one of the dirtiest secrets in vulnerability management: ticket closure does not always equal risk reduction. A patch ticket is marked complete, a configuration change is recorded, an exception expires, a server owner says the fix was applied, or the scanner does not immediately rescan because the maintenance window has passed or the system is unavailable. Everyone gets on with their day job, until the same exposure appears again during an incident.
Backup-derived scanning can provide an independent validation layer. If snapshots are being taken regularly, those snapshots can help confirm whether the vulnerable software, file, package, driver or configuration persists after the supposed remediation date. This is particularly useful for:
Validating patch campaigns across distributed estates.
Checking whether emergency fixes were applied consistently.
Confirming remediation on systems with limited production scan windows.
Detecting regression when old images or templates are reused.
Verifying that rebuilt systems do not reintroduce known exposures.
The key point here is evidence: vulnerability management needs more than workflow status; it needs proof to be effective. Backups can provide another independent source of that proof.
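As a sketch of that independent check: if the platform can report a package’s version as observed in each snapshot, every snapshot taken after the ticket closed becomes a witness. The `version_in_snapshot` helper below is hypothetical; in practice it would query the backup index or a mounted copy:

```python
from datetime import date

def version_in_snapshot(host: str, snapshot_day: date, package: str) -> str:
    """Hypothetical helper: return the package version recorded in the
    backup index for this host on this day."""
    raise NotImplementedError("query your data platform's index here")

def remediation_validated(host: str, package: str, fixed_version: str,
                          ticket_closed: date, snapshot_days: list[date]):
    """Return (ok, evidence): ok is False if any snapshot taken after the
    ticket closed still shows a pre-fix version. Real code would use a
    proper version comparison rather than string equality."""
    stale = []
    for day in sorted(snapshot_days):
        if day <= ticket_closed:
            continue
        observed = version_in_snapshot(host, day, package)
        if observed != fixed_version:
            stale.append((day, observed))
    return len(stale) == 0, stale
```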
Use Case 5: Find Vulnerable Software That Asset Management Missed
Since I have already touched on one uncomfortable truth in vulnerability management, here is another one I have learned over three decades in incident response and cybersecurity leadership: asset inventories are almost always wrong. This is not because people are incompetent or malicious, but because modern estates are messy: systems move, applications are installed manually, packages are bundled inside vendor products, shadow IT appears, old software persists because nobody knows it is there. Backup data can expose what is actually present across the estate.
That can include:
Legacy Java versions
Server-side JavaScript library vulnerabilities, such as React2Shell, embedded in application templates or deployment bundles
Vulnerable OpenSSL libraries
Exposed web shells
Old remote management tools
Forgotten database clients
Vulnerable VPN or file transfer components
Risky scripts and binaries
Unsupported operating system artefacts
Old installers that keep being reused
This is especially useful after a major vulnerability disclosure. When something like Log4Shell, MOVEit, React2Shell, CitrixBleed or a critical VPN vulnerability lands, the immediate problem is often not patching: it is knowing where the affected component exists.
Production telemetry will usually find the obvious deployments. Backup data can help find the ones that are easier to miss: the old application server, the forgotten test environment, the archived deployment package, the dormant VM, or the file share still holding a vulnerable installer.
I saw this first-hand during Log4Shell, where I worked with multiple customers to use backup data to identify vulnerable systems that other discovery methods had missed. And when you are dealing with a CVSS 10.0 vulnerability like React2Shell, search speed matters.
A data management platform gives you a historical map of the estate. It is not perfect, but it is often far richer than the live inventory because it reflects what has actually existed across systems, snapshots and time.
Just as importantly, it can be queried across multiple platforms from a single location, giving security teams a faster way to search for vulnerable components, legacy software, exposed packages and risky artefacts without relying solely on live production telemetry.
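As a sketch of that single-location search, assuming the platform exposes some form of file-index query (the `search_file_index` function and its result fields are hypothetical stand-ins for whatever API your platform provides; the filename pattern loosely matches Log4Shell-era log4j-core jars and is illustrative only):

```python
import re
from datetime import datetime
from typing import Iterator

# Illustrative pattern: log4j-core 2.0 through 2.14.x jar filenames.
VULNERABLE = re.compile(r"log4j-core-2\.(?:\d|1[0-4])(?:\.\d+)?\.jar$")

def search_file_index(pattern: re.Pattern) -> Iterator[tuple[str, str, datetime]]:
    """Hypothetical platform API: yield (host, path, snapshot_time) for
    every indexed file whose name matches the pattern."""
    raise NotImplementedError("call your data platform's search API here")

def map_exposure() -> None:
    latest: dict[str, tuple[str, datetime]] = {}
    for host, path, snapped_at in search_file_index(VULNERABLE):
        # Keep the most recent sighting per host: a hit in last night's
        # snapshot means the component is still present today.
        if host not in latest or snapped_at > latest[host][1]:
            latest[host] = (path, snapped_at)
    for host, (path, snapped_at) in sorted(latest.items()):
        print(f"{host}: {path} (last seen {snapped_at:%Y-%m-%d})")
```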
Use Case 6: Support Safer Golden Image and Template Management
Many organisations assume their golden images are golden because someone named them that way. In reality, build images, VM templates and recovery images often drift. They accumulate old agents, outdated packages, insecure defaults and inherited configuration weaknesses. In a destructive attack, those images become the foundation of recovery. If the foundation is weak, everything built on top of it inherits the weakness.
A practical use case is to regularly scan protected copies of:
VM templates
Server build images
Domain controller recovery images
Clean room images
Jump box images
Application deployment bundles
Infrastructure-as-code repositories stored in protected shares
Recovery automation packages
This helps answer a basic but critical question: “Are the things we plan to recover from actually safe to recover with?”
That question is still asked far too late in many organisations.
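A simple way to operationalise that question is a recurring freshness check over each image’s most recent scan results. The thresholds below are illustrative policy choices, not a standard:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class ImageScan:
    image: str
    last_scanned: date
    critical_findings: int

MAX_SCAN_AGE = timedelta(days=30)   # illustrative policy
MAX_CRITICALS = 0                   # illustrative policy

def stale_images(scans: list[ImageScan], today: date):
    """Flag 'golden' images that are golden in name only: unscanned for
    too long, or carrying critical findings into every future rebuild."""
    for s in scans:
        reasons = []
        age = today - s.last_scanned
        if age > MAX_SCAN_AGE:
            reasons.append(f"not scanned for {age.days} days")
        if s.critical_findings > MAX_CRITICALS:
            reasons.append(f"{s.critical_findings} critical findings")
        if reasons:
            yield s.image, reasons

for image, reasons in stale_images(
    [ImageScan("win2019-golden", date(2024, 3, 1), 2),
     ImageScan("rhel9-golden", date(2024, 5, 28), 0)],
    today=date(2024, 6, 5),
):
    print(f"{image}: {'; '.join(reasons)}")
```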
Use Case 7: Accelerate DFIR and Vulnerability Management Collaboration
During an incident, vulnerability management and DFIR teams often operate at different cadences:
DFIR wants to understand the intrusion path.
Vulnerability teams want to know what to patch.
Infrastructure teams want to restore service.
Executives want a recovery timeline.
Everyone is waiting for evidence, and backup data can become a shared evidence layer.
If the backup platform can expose historical snapshots, vulnerability findings, malware scan results, suspicious file locations and recovery-point comparisons, teams can work from the same map.
For example:
DFIR identifies a vulnerable appliance or server as the likely initial access path.
Vulnerability scanning across snapshots identifies where the same exposure exists elsewhere.
Backup history shows when suspicious files first appeared.
Remediation teams patch or rebuild affected systems.
Recovery teams validate selected restore points before reconnection.
Security operations monitor for reappearance of the same indicators.
That is how cyber recovery works in the workflows my team designs for customers: not as isolated workstreams throwing spreadsheets at each other, but as a coordinated, integrated operational process.
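One lightweight way to keep those workstreams on the same map is a shared, per-system evidence record that each team appends to as the incident progresses. The field names below are illustrative, not a schema from any product:

```python
import json
from dataclasses import asdict, dataclass, field

@dataclass
class RecoveryEvidence:
    """Shared record that DFIR, vulnerability, backup and recovery teams
    read and write, instead of trading spreadsheets."""
    host: str
    candidate_snapshot: str
    dfir_notes: list[str] = field(default_factory=list)       # intrusion-path findings
    vuln_findings: list[str] = field(default_factory=list)    # exposures in the snapshot
    malware_indicators: list[str] = field(default_factory=list)
    remediated_before_reconnect: bool = False
    approved_for_restore: bool = False

record = RecoveryEvidence(host="app-server-12", candidate_snapshot="2024-06-01T23:00")
record.dfir_notes.append("initial access via exposed appliance (illustrative)")
record.vuln_findings.append("vulnerable middleware version present (illustrative)")
record.remediated_before_reconnect = True
record.approved_for_restore = True

# The serialised record doubles as an auditable artefact for regulators,
# insurers and the board.
print(json.dumps(asdict(record), indent=2))
```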
This Does Not Replace Vulnerability Management
My intent is not to overstate the role of the data management platform; it is not a magic vulnerability management wand. It does not remove the need for production scanning, exposure management, patch orchestration, asset inventory, change control or good operational hygiene. It can, however, make those functions faster and safer.
The best way to think about it is this: traditional vulnerability tooling tells you what is exposed in the live estate; a data management platform can help tell you what exists across time, what is hiding in recovery data, and what you are about to put back into production. That time dimension is the real differentiator that many organisations fail to leverage.
Attackers operate across time: they compromise, persist, stage, modify, wait and detonate. Recovery teams need the same temporal advantage: they need to look backwards, compare states and choose safer paths forward. Backups are not just copies for recovery; they are time-series evidence.
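That temporal advantage can be as simple as diffing system state between two recovery points. A minimal sketch, reusing the kind of offline package inventory shown under Use Case 1 (the example inventories are invented):

```python
def diff_snapshots(older: dict[str, str], newer: dict[str, str]):
    """Compare package inventories from two snapshots of the same host to
    see what appeared, vanished or changed between recovery points."""
    added = {p: v for p, v in newer.items() if p not in older}
    removed = {p: v for p, v in older.items() if p not in newer}
    changed = {p: (older[p], newer[p])
               for p in older.keys() & newer.keys() if older[p] != newer[p]}
    return added, removed, changed

added, removed, changed = diff_snapshots(
    {"openssl": "3.0.2", "netcat": "1.10"},
    {"openssl": "3.0.13", "unknown-agent": "0.1"},
)
print("added:", added)      # new binaries can be attacker staging artefacts
print("removed:", removed)  # e.g. tooling the attacker cleaned up
print("changed:", changed)  # patched, upgraded or downgraded packages
```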
What Good Looks Like
What I look for in mature organisations, when assessing their operational cyber resiliency capability, is whether backup-derived vulnerability scanning has become part of normal security and resilience operations, not something improvised during an emergency. That means:
Routine scanning of protected snapshots for critical workloads
Prioritisation of vulnerabilities that affect recovery dependencies
Validation of golden images and clean room assets
Scanning of candidate recovery points before restoration
Integration of findings into existing vulnerability management workflows
Evidence capture for regulators, auditors and insurers
Integrated procedures between Security Operations, IT Operations, Backup, Infrastructure and DFIR partners
Testing through ransomware and destructive attack exercises
This is not about adding another dashboard for the sake of it; it is about making sure the organisation can answer the uncomfortable questions when the pressure is highest:
What are we restoring?
Do we trust it?
What vulnerabilities does it contain?
Can we remediate before reconnecting it?
Are we about to give the attacker the same path back in?
The Bottom Line
Vulnerability management has spent years trying to become more continuous, more risk-based and more business-aligned. Yet many programmes still struggle with the same operational bottlenecks: fragile systems, incomplete asset inventories, limited scan windows, weak remediation validation and poor integration with recovery planning.
A modern data management platform gives organisations another way to attack those problems. It lets them scan where production scanning is risky. It helps identify vulnerable recovery points before they are restored. It provides historical evidence of exposure. It supports safer remediation validation. It turns protected data into an operational security asset.
Backups are no longer just insurance against data loss. Used properly, they become a vulnerability intelligence source, a recovery assurance layer and a way to reduce the chance that recovery simply rebuilds the conditions of compromise.
In destructive cyberattacks, restoring fast is not enough:
You have to restore clean.
You have to restore patched.
You have to restore to a state the business can actually trust.
As my mantra says: disaster recovery is about restoring data, cyber resiliency is about restoring trust.