Why Healthcare Breaches Keep Looking Like Insiders Even When They Aren’t

In healthcare, some of the most damaging breaches do not look like break-ins. They look like ordinary access.

The login is valid. The system is familiar. The workflow appears routine. The account may even have permission to be there. What is wrong is not the existence of access, but the pattern of behavior: the timing, scope, sequence, and purpose.

That is why so many healthcare breaches resemble insider incidents even when they are not. The actor may be external. The credentials may be stolen. The access path may be trusted. But to defenders, the activity often presents the same way: as behavior that appears authorized until the impact is unmistakable.

This is the core detection problem. And it is one that signature-centric security controls repeatedly struggle to solve.

Healthcare’s visibility problem is different

Healthcare environments are built for continuity, coordination, and broad operational access. Clinicians, billing teams, administrators, vendors, and partner systems all interact with sensitive workflows throughout the day. Electronic health records, revenue-cycle systems, imaging platforms, remote access services, and support tooling must remain available.

That makes healthcare fundamentally different from quieter, more tightly bounded environments. A large amount of sensitive access is normal. A large amount of privileged movement is normal. External connectivity is normal. Shared workflows are normal.

In that kind of environment, malicious activity does not always stand out because it is unauthorized. It often stands out only when compared against established patterns of behavior.

An attacker who gains valid credentials does not need to act loudly. If they can move through trusted systems using expected tools, they can resemble a legitimate user, contractor, or administrator long enough to stage ransomware, exfiltrate data, or expand their foothold. By the time the activity is recognized as malicious, the damage may already be operational.

Recent healthcare breaches keep reinforcing the same lesson

Recent healthcare incidents show the same structural pattern: trusted access is often the first layer of camouflage.

The Change Healthcare breach remains one of the clearest examples. Reporting indicated that the attackers used stolen credentials to access the company’s Citrix remote access service, which reportedly did not have multi-factor authentication enabled. That detail matters for more than the usual “enable MFA” lesson. It highlights the deeper issue: once attackers gain valid access, they can begin to operate through pathways that look legitimate. The breach went on to affect roughly 100 million individuals and cause severe operational and financial disruption, but the entry dynamic itself was deceptively ordinary.

The CareCloud incident reflects a similar problem from another angle. The company disclosed unauthorized access to its IT infrastructure, including one environment containing patient health records, along with temporary disruption. Even when full forensic details are not yet public, the challenge is clear. Defenders are not simply trying to spot malware in the abstract. They are trying to distinguish malicious use of access inside systems where normal operations are already sensitive and complex.

The TriZetto breach reinforces the problem of dwell time. Reporting indicated that unauthorized access began long before the incident was publicly disclosed. That kind of persistence is exactly what happens when activity blends into trusted workflows long enough to avoid early escalation. In healthcare and healthcare-adjacent systems, long-running access may not produce a dramatic signature. It may instead produce a subtle but behaviorally inconsistent pattern over time.

These incidents differ in mechanics and scope, but they point to the same reality: many serious healthcare breaches do not begin with obviously malicious behavior. They begin with behavior that looks normal enough to survive initial scrutiny.

Why these breaches look like insider events

The phrase “insider threat” is sometimes too narrow to describe what defenders are actually seeing. The more useful frame is “insider-like behavior.”

That distinction matters because the visibility problem is often the same whether the actor is a malicious insider, an external attacker using stolen credentials, or a compromised account moving through familiar systems.

From a detection standpoint, these scenarios create the same challenge. The actions may fit the environment at a surface level while still being behaviorally wrong.

This is especially dangerous in healthcare because legitimate work already includes sensitive file access, privileged administration, remote connections, and interaction with critical clinical or financial systems. A single login or file access event rarely tells the whole story. The real signal often lives in the pattern.
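The difference between judging a single event and judging a pattern can be made concrete. The sketch below is purely illustrative: the account name, hosts, and thresholds are invented, and the checks are deliberately minimal. It shows how a sequence of individually permitted events can still trip a pattern-level check that no single-event rule would catch.

```python
# Hypothetical access events for one account during a single shift.
# Each event on its own looks routine: a valid login, a permitted read.
events = [
    {"user": "svc_admin", "action": "login", "host": "ehr-01", "hour": 2},
    {"user": "svc_admin", "action": "read_records", "host": "ehr-01", "hour": 2, "count": 40},
    {"user": "svc_admin", "action": "read_records", "host": "ehr-02", "hour": 3, "count": 55},
    {"user": "svc_admin", "action": "read_records", "host": "ehr-03", "hour": 3, "count": 60},
]

def single_event_rule(event):
    """A signature-style check: flags only explicitly forbidden actions."""
    return event["action"] in {"disable_logging", "mass_delete"}

def pattern_check(events, hosts_threshold=2, records_threshold=100):
    """A pattern-level check: many distinct hosts touched and many records
    read in one window, even though every individual event was allowed."""
    hosts = {e["host"] for e in events if e["action"] == "read_records"}
    total = sum(e.get("count", 0) for e in events)
    return len(hosts) > hosts_threshold or total > records_threshold

print(any(single_event_rule(e) for e in events))  # False: no single event is "bad"
print(pattern_check(events))                      # True: the pattern is anomalous
```

The single-event rule passes everything, because nothing in the sequence is forbidden on its own. Only the aggregate view, three hosts and 155 records in two hours, surfaces the signal.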

Why signature-centric controls keep arriving late

Traditional security tooling is effective at recognizing known bad artifacts: malware families, hashes, exploits, IOCs, and rule-triggering events. But healthcare breaches increasingly depend on something else: malicious use of legitimate access.

That is where many defenders lose time.

If an attack enters through a known exploit and immediately detonates in a visible way, signatures and rules can help. But if the attacker uses a real account, a familiar remote service, an ordinary admin tool, or a normal-seeming data path, then the initial signal is weak. The event may be technically valid even though the behavior is not.

Those are not signature questions. They are behavioral questions.

Why behavior-based cohort analysis matters

This is where cohort-based behavior analysis becomes more useful than static detection alone.

A valid account can still behave abnormally. A trusted tool can still be part of malicious staging. A technically approved workflow can still be used in a way that does not fit the user, device, peer group, or operational context.

Behavior-based detection is not just about flagging anomalies. On its own, anomaly detection can become noisy and hard to operationalize. The better approach is continuous baselining combined with peer-aware comparison.

Personam evidence points to the same conclusion

This pattern is not theoretical.

In Personam’s Insider Threat Lab work, behavioral profiling and peer-group comparison helped reduce investigative focus to roughly 2% of the population while achieving strong recall on complex insider-style scenarios. In Personam’s IP law firm case study, low-and-slow data exfiltration looked routine until viewed behaviorally. In the government contractor case study, Personam identified compromised credentials being used in ways that diverged from the user’s historical and peer behavior.

Together, these cases reinforce the same lesson healthcare defenders are learning from recent breaches: the most important signal is often not unauthorized access in the traditional sense, but authorized-looking behavior that is out of character.

What CISOs should take from this

Healthcare leaders should assume that many of the most consequential attacks will not present as loud, obviously foreign intrusions. They will present as trusted-access abuse.

That requires continuous baselining, peer-aware behavioral comparison, network-layer visibility, and prioritization based on meaningful behavioral divergence instead of raw alert volume.

Healthcare environments are too complex, too interconnected, and too operationally sensitive to rely only on signatures and static indicators. The defenders who reduce dwell time will be the ones who can distinguish routine activity from behavior that only looks routine at first glance.

The bottom line

Healthcare breaches keep looking like insiders because many of them exploit the same blind spot: trusted access used in untrusted ways.

Not every breach is caused by an insider. But many of the worst ones create an insider-like detection problem. They move through real accounts, real systems, and real workflows, which is exactly why they are so easy to miss early.

That is why defenders should stop asking only whether access was allowed. They should ask whether the behavior was normal.

Sources:
– Change Healthcare: https://www.bleepingcomputer.com/news/security/unitedhealth-says-data-of-100-million-stolen-in-change-healthcare-breach/
– CareCloud: https://www.bleepingcomputer.com/news/security/healthcare-tech-firm-carecloud-says-hackers-stole-patient-data/
– TriZetto / Cognizant: https://www.bleepingcomputer.com/news/security/cognizant-trizetto-breach-exposes-health-data-of-34-million-patients/
– Personam internal case studies: Insider Threat Lab, IP Law Firm, Government Contractor

See how Personam reveals the behaviors other tools miss: https://personam.ai/