Incident White Paper

SolarWinds 2020: When the Monitoring Tool Was the Compromise

What failed, why the trust model that software supply chains depend on became the attack surface, and what boards and resilience leaders should be rehearsing now.

June 10, 2025
9 min read
By Frank Kahle

Executive Summary

Beginning in September 2019, Russia's Foreign Intelligence Service (SVR) gained access to SolarWinds' internal development environment. By February 2020, they had implanted a tool called SUNSPOT into the Orion platform's build process — a tool that silently replaced legitimate source code with a backdoored version every time the software was compiled. Between March and June 2020, SolarWinds distributed trojanised Orion updates to approximately 18,000 organisations, including the US Treasury, the Department of Homeland Security, the Department of Commerce, and private sector companies including Microsoft, Intel, Cisco, and Deloitte.

The backdoor — codenamed SUNBURST — went undetected for nine months. It was not discovered by SolarWinds, by any government agency, or by any of the 18,000 recipients of the compromised update. It was found by FireEye, a cybersecurity firm, while investigating a separate breach of its own systems. The GAO called it one of the most widespread and sophisticated hacking campaigns ever conducted against the federal government. The SEC subsequently fined four public companies a combined $7 million for inadequate disclosure of the breach's impact.

What Failed

SolarWinds' Orion platform is a network monitoring and management tool used by IT teams to watch over their infrastructure — servers, applications, network devices, and more. It is, by design, a tool that sees everything. It has privileged access to every system it monitors. It is trusted implicitly by the networks it sits on. And in 2020, it became the delivery mechanism for a nation-state intelligence operation.

The SVR did not attack SolarWinds' source code repository. They attacked something harder to detect: the build process itself. A custom tool called SUNSPOT was deployed into SolarWinds' build environment, where it monitored for Orion compilation processes. When it detected one, it intercepted the build in real time, swapped a single source file — InventoryManager.cs — with a backdoored version, allowed the compiler to process it, then replaced the original file. The resulting binary was then signed with SolarWinds' legitimate code-signing certificate, making it indistinguishable from a genuine update.
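The gap SUNSPOT exploited can be made concrete. A build step that hashes every source file against a manifest recorded at commit time, immediately before compilation, would flag exactly this kind of in-flight swap. The sketch below is illustrative, not SolarWinds' actual pipeline; the manifest format and file layout are hypothetical:

```python
# Sketch: verifying build inputs against a trusted manifest recorded at
# commit time, run immediately before compilation. Hypothetical manifest
# format and paths, for illustration only.
import hashlib
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Hex SHA-256 digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def verify_build_inputs(source_dir: Path, manifest: dict[str, str]) -> list[str]:
    """Return the source files whose on-disk hash no longer matches the
    hash recorded in the manifest. A non-empty result means something
    altered the inputs between checkout and compile."""
    mismatches = []
    for rel_path, expected in manifest.items():
        actual = sha256_of(source_dir / rel_path)
        if actual != expected:
            mismatches.append(rel_path)
    return mismatches
```

A check like this only helps if it runs inside the build itself and its manifest comes from outside the build host; a verifier that the attacker can also modify adds nothing.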

This was not a smash-and-grab. The operation included a two-week dormancy period after installation before SUNBURST would activate. It checked for the presence of security tools and sandboxes. It communicated with its command-and-control infrastructure through DNS requests disguised as legitimate Orion traffic. And it was designed to blend so completely into normal network operations that even organisations actively looking for anomalies would not find it — because the anomalous traffic was coming from the monitoring tool itself.

The attackers operated inside the SolarWinds build environment from at least September 2019. They tested their code injection capability before deploying it. Their backdoor sat inside the networks of roughly 18,000 organisations for months before anyone noticed. And when the breach was finally discovered in December 2020, it was not found by any of the monitoring, detection, or intelligence capabilities those organisations had invested in. It was found because FireEye noticed that its own red team tools had been stolen — and traced the intrusion back to a SolarWinds update.

The Decision Timeline That Leadership Actually Faced

What made SolarWinds uniquely disorienting for leadership teams was the inversion of trust. The compromised component was not an obscure library or an unpatched server. It was the platform responsible for monitoring the health and security of the network. Removing it meant losing visibility. Keeping it meant the attacker retained access. Every organisation faced this paradox simultaneously.

The Critical Windows

Mar – Jun 2020

SolarWinds distributes trojanised Orion updates (versions 2019.4 HF 5 through 2020.2.1). Approximately 18,000 organisations install the backdoored software. SUNBURST activates after a dormancy period and begins communicating with SVR command-and-control infrastructure. No alerts. No anomalies. The monitoring platform itself is the compromise.

Jun – Dec 2020

The SVR conducts follow-on operations against high-value targets. Of the 18,000 organisations that received the trojanised update, approximately 50 are subjected to deeper exploitation — including multiple US federal agencies and major technology companies. The attackers move laterally, escalate privileges, and exfiltrate data. Six months pass without detection.

Dec 8 – 13, 2020

FireEye discloses that its own red team tools have been stolen. Its investigation traces the breach to a compromised SolarWinds Orion update. FireEye notifies SolarWinds and the US government. On December 13, CISA issues Emergency Directive 21-01, ordering all federal agencies to disconnect or power down affected Orion products immediately.

Dec 13, 2020+

Every affected organisation now faces the same question: what do we do with the tool that monitors everything, now that we know it has been compromised? Removing Orion eliminates the backdoor but also eliminates network visibility. Leadership teams must make containment decisions with incomplete information about what the attacker accessed, while simultaneously managing regulatory notification, board communication, and public disclosure.

Why It Went Undetected for Nine Months

The SolarWinds compromise was not a failure of detection technology. It was a failure of the trust model that detection technology depends on. Every security architecture has a set of assumptions about what is trusted. Signed software updates from established vendors sit at the very top of that trust hierarchy. The SVR understood this and designed the operation around it.

The update was signed with a legitimate certificate. Code signing exists to verify that software has not been tampered with. In this case, the tampering occurred before the signing. Because SUNSPOT modified the source code during compilation, the resulting binary was built and signed through SolarWinds' normal release process. Every verification check passed. The certificate was real. The update was genuine — it just contained a backdoor.

The backdoor impersonated legitimate traffic. SUNBURST communicated with its command-and-control infrastructure using DNS requests that mimicked normal Orion telemetry. To a network monitoring team — or to automated detection tools — the traffic looked like their monitoring platform doing its job. The attackers weaponised the fact that Orion was expected to communicate across the network constantly.
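To see why this traffic blended in, consider how easily arbitrary data rides inside an ordinary DNS lookup. The sketch below uses an invented domain and a simple base32 encoding for illustration; SUNBURST's real scheme, DGA-generated subdomains of avsvmcloud.com, was considerably more elaborate:

```python
# Sketch: how arbitrary data can ride inside an ordinary-looking DNS query.
# The parent domain and the encoding are invented for illustration.
import base64


def encode_as_dns_query(data: bytes, parent_domain: str) -> str:
    """Pack data into a DNS label. Base32 is case-insensitive and uses
    only characters legal in hostnames, so a resolver just sees a lookup.
    (Real DNS labels cap out at 63 characters; small payloads only.)"""
    label = base64.b32encode(data).decode("ascii").rstrip("=").lower()
    return f"{label}.{parent_domain}"


def decode_from_dns_query(query: str) -> bytes:
    """Recover the payload from the first label of the query name."""
    label = query.split(".", 1)[0].upper()
    padded = label + "=" * (-len(label) % 8)  # restore base32 padding
    return base64.b32decode(padded)
```

The defensive corollary: a resolver log shows these lookups, but only a detector that models what the monitoring platform's DNS traffic *should* look like has any chance of flagging them.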

The monitoring tool was the blind spot. Organisations invest in network monitoring specifically to detect anomalies. But when the monitoring tool itself is compromised, it creates a structural blind spot. SUNBURST was running inside the tool that was supposed to catch exactly this kind of activity. The more an organisation relied on Orion for visibility, the less likely it was to detect the compromise.

The operation was designed for patience, not speed. Unlike ransomware campaigns that operate on a timeline of hours or days, the SVR operated on a timeline of months. The initial access was in September 2019. The trojanised update was not distributed until March 2020. Follow-on exploitation continued through December 2020. At every stage, the attackers prioritised stealth over speed. They were not trying to monetise access quickly. They were building a long-term intelligence collection capability.

What the Incident Exposed

Build environments are as critical as source code. The security industry has invested heavily in source code scanning, repository access controls, and code review processes. The SolarWinds compromise bypassed all of these. The source code repository was never modified. The attack targeted the build pipeline — the automated process that compiles source code into the binaries that are distributed to customers. Most organisations, even those with mature security programmes, had limited visibility into the integrity of their build environments.

Code signing provides authenticity, not safety. A signed binary tells you who built it. It does not tell you whether the build process was compromised. SolarWinds demonstrated that the code-signing trust model can be exploited without ever touching the signing key itself. If the compromise happens before the signature is applied, the signature validates the compromise.
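The ordering problem is easy to demonstrate. In the sketch below an HMAC stands in for a code-signing certificate (purely illustrative; real code signing uses asymmetric keys), and the release pipeline signs whatever the build produced:

```python
# Sketch: a signature applied after tampering validates the tampered
# artifact. HMAC stands in for a code-signing certificate; the point is
# the ordering, not the crypto. All names here are illustrative.
import hashlib
import hmac

SIGNING_KEY = b"vendor-release-key"  # the key itself is never stolen


def sign(artifact: bytes) -> bytes:
    return hmac.new(SIGNING_KEY, artifact, hashlib.sha256).digest()


def verify(artifact: bytes, signature: bytes) -> bool:
    return hmac.compare_digest(sign(artifact), signature)


clean_build = b"orion-binary"
backdoored_build = b"orion-binary+sunburst"

# SUNSPOT swapped the source *before* this step, so the release
# pipeline signs the backdoored output with the genuine key.
shipped_signature = sign(backdoored_build)

assert verify(backdoored_build, shipped_signature)  # every check passes
assert not verify(clean_build, shipped_signature)   # signing proved origin, not safety
```

The signature is cryptographically perfect and operationally worthless here: it faithfully attests that the vendor's pipeline produced the artifact, which is exactly what happened.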

Vendor trust is inherited risk. When an organisation installs a vendor's software and grants it privileged network access, it inherits every vulnerability in that vendor's development, build, and distribution pipeline. The 18,000 organisations that installed the trojanised Orion update did not make a security mistake. They followed their vendor's normal update process. The risk was upstream, invisible, and entirely outside their control.

The SEC treated inadequate disclosure as a governance failure. In October 2024, the SEC fined Unisys ($4 million), Avaya, Check Point, and Mimecast a combined $7 million — not for being breached, but for downplaying the impact in their public disclosures. The message to boards was explicit: discovering you were compromised through a supply chain attack is not the governance failure. Failing to disclose it accurately is.

No one was watching the watchers. SolarWinds Orion had privileged access to the networks it monitored. It was trusted precisely because its function was to provide visibility. But that trust was unverified. No independent mechanism validated that Orion itself was operating with integrity. The tool that was supposed to detect threats became the threat, and the absence of independent verification meant there was no second line of detection to catch it.

The Resilience Lens

The SolarWinds breach reframes a question that most resilience programmes are not built to answer: what happens when a trusted tool, operating exactly as expected, is itself the compromise?

This is not a question about patching, or access controls, or network segmentation. It is a question about the assumptions that sit underneath every security architecture. Every organisation has a set of tools and vendors it trusts implicitly — endpoint protection, identity providers, monitoring platforms, cloud infrastructure. These are the components that security teams rely on to detect and respond to threats. But if any one of those components is compromised at the source, the entire detection model inverts. The tool designed to find threats becomes the threat's delivery mechanism.

The SolarWinds attack was a state-sponsored intelligence operation, but the structural vulnerability it exploited — implicit trust in the software supply chain — is not unique to nation-state threats. The MOVEit breach of 2023 demonstrated that criminal groups can exploit the same trust architecture at industrial scale, targeting managed file transfer platforms to exfiltrate data from thousands of organisations simultaneously. The attack surface is not the specific vendor. It is the trust model itself.

For boards and resilience leaders, the uncomfortable reality is this: the tools your organisation trusts most are the tools an attacker would most want to compromise. And the more privileged access a tool has, the more damage a compromise of that tool can do — and the harder it will be to detect, because the tool's normal behaviour already involves seeing and touching everything on the network.

What Boards Should Be Asking

The typical post-SolarWinds response was to review vendor security questionnaires, add supply chain risk to the corporate risk register, and request more frequent vulnerability scans. These are necessary. They are also insufficient. The questions that matter are the ones that test whether the organisation could survive — and even detect — a compromise of its most trusted tools.

  • Which tools in our environment have the most privileged access — and do we have any independent mechanism to verify that those tools are operating with integrity?
  • If our network monitoring platform, our endpoint protection, or our identity provider were compromised at the vendor level, how would we detect it? Who would notice, and how long would it take?
  • Have our leadership teams ever rehearsed a scenario where a trusted security tool is the attack vector — where the containment action requires removing a tool we depend on for visibility?
  • Do our vendor risk assessments evaluate the security of the vendor's development and build pipeline, or do they stop at certifications and questionnaire responses?
  • If we received a CISA emergency directive tomorrow instructing us to disconnect a critical platform immediately, could we maintain operational continuity while we do it?

If these questions produce hesitation rather than clear answers, the organisation has the same structural gap that made SolarWinds so devastating. The gap is not in technology. It is in the assumption that trusted tools will always behave as expected — and in the absence of any plan for what happens when they do not.

Conclusion

The SolarWinds breach was not just a cybersecurity incident. It was a demonstration of what happens when the trust model that underpins modern software distribution is turned against the organisations that depend on it. Eighteen thousand organisations installed a backdoor because they did what they were supposed to do: they kept their software up to date.

The SVR spent over a year inside SolarWinds' environment before the first trojanised update shipped. They spent another nine months inside their targets' networks before anyone noticed. They were not detected by any of the security tools, threat intelligence feeds, or monitoring capabilities that those organisations had invested in. They were found by accident — because FireEye noticed something else and pulled the thread.

The lesson is not that SolarWinds was uniquely vulnerable. It is that every software vendor with a build pipeline and a distribution mechanism presents the same structural risk. The question for every board is not whether a vendor they trust will be compromised. It is whether they would know if it already has been — and whether they have ever practised what happens next.

That readiness cannot be assumed. It has to be rehearsed.

Rehearse This Scenario

CrisisLoop builds structured executive exercises around real-world incidents like this one. If your leadership team has never rehearsed a supply chain compromise where the trusted tool is the threat and containment means losing visibility, that gap is worth closing before it plays out under real pressure.

Talk to Us About Resilience Rehearsal