What Is SANS Incident Response?
A ransomware payload executes across your environment at 2:47 AM. Your SOC analyst sees the alert, but the playbook is outdated, the escalation path is unclear, and the containment procedure lives in a PDF nobody has opened since last year's tabletop exercise. The difference between a contained incident and a front-page breach often comes down to how well your team executes a structured response under pressure. That execution depends on preparation-phase investments made long before the alert arrives.
SANS incident response is the six-phase framework developed by the SANS Institute to give security teams that structure. Known as the PICERL model, it breaks incident handling into sequential phases: Preparation, Identification, Containment, Eradication, Recovery, and Lessons Learned. The GCIH certification validates practitioner competency across all six phases.
The SANS incident response framework focuses on operational execution. It tells your team what to do, when to do it, and how to hand off between phases so nothing falls through the cracks during a live incident.
How the SANS Framework Relates to Cybersecurity
The SANS PICERL model connects your tools, your people, and your processes into a repeatable workflow. Your SIEM generates an alert during Identification. Your EDR solution executes isolation during Containment. Your forensic tooling supports root cause analysis during Eradication. Each phase maps directly to the security tools and team roles already in your SOC. The framework also aligns with NIST SP 800-61 guidance on incident handling, though the two frameworks differ in structure and audience, which we compare in detail below.
Knowing how the SANS incident response framework works in theory is a starting point. Executing it under real-world conditions requires a closer look at each phase.
The 6 Phases of SANS Incident Response
The SANS framework operates as a sequential cycle. You move through each phase in order during an incident, then loop back to Preparation based on what you learned. One principle spans every phase: document everything. That means capturing a time-stamped record of actions taken, systems touched, commands run, evidence collected, and decisions made. This record supports forensics, legal and regulatory needs, and a credible after-action review.
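The "document everything" principle is easier to uphold when the record is captured by tooling rather than memory. A minimal sketch of an append-only, time-stamped action log; the field names are illustrative assumptions, not part of the SANS framework:

```python
from datetime import datetime, timezone
import json

def log_action(log, actor, phase, action, target, notes=""):
    """Append a time-stamped record of a single response action."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,    # who performed the action
        "phase": phase,    # PICERL phase, e.g. "Containment"
        "action": action,  # what was done
        "target": target,  # system, account, or artifact touched
        "notes": notes,
    }
    log.append(entry)
    return entry

# Usage: build the record as the incident unfolds, then export for review.
incident_log = []
log_action(incident_log, "analyst1", "Identification", "validated IOC", "host-web-01")
log_action(incident_log, "analyst2", "Containment", "isolated endpoint", "host-web-01")
print(json.dumps(incident_log, indent=2))
```

An append-only structure like this doubles as the timeline input for the Lessons Learned review.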
Phase 1: Preparation
Preparation builds your foundation before an incident occurs. You establish IR policies and procedures, deploy and tune security tools (SIEM, EDR, threat intelligence platforms), and develop playbooks for common attack scenarios like ransomware, credential compromise, and cloud intrusion. Preparation also includes establishing communication templates for internal escalation and external notification, since delays in either can compound incident damage.
This phase includes forming your tiered Incident Response Team:
- Tier 1 analysts handle initial alert triage and event monitoring.
- Tier 2 analysts conduct in-depth investigation and threat hunting.
- Tier 3 analysts lead complex investigations.
Security engineering and SOC management maintain tooling and coordinate response operations.
Defining tiers this way clarifies escalation paths before the first high-severity alert arrives. Preparation also means establishing relationships with external stakeholders you may need during an incident: legal counsel, public relations, law enforcement contacts, and any third-party forensic or IR retainer services. Teams that wait until a crisis to build these relationships waste critical hours on logistics instead of containment.
Phase 2: Identification
Identification is where you confirm that a security event is actually an incident. Your SOC analysts perform continuous monitoring through SIEM platforms, triage alerts, validate indicators of compromise, and prioritize the incident by severity level. Effective identification depends on correlating data from multiple sources: endpoint telemetry, network flow data, identity logs, and threat intelligence feeds.
Assign at least two responders to confirmed incidents, one as the primary handler making decisions and the other supporting investigation and evidence gathering. During this phase, your team should also begin scoping the incident: which systems are affected, what data may be at risk, and whether the attacker still has active access. The faster and more accurately you scope an incident, the less damage your organization absorbs and the more targeted your containment actions can be.
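Cross-source correlation is the mechanical core of scoping. A hedged sketch of joining endpoint alerts with identity telemetry to surface affected hosts and the accounts active on them; the log record shapes here are assumptions for illustration:

```python
def scope_incident(endpoint_alerts, identity_events):
    """Correlate endpoint alerts with identity logs to scope an incident.

    endpoint_alerts: list of {"host": ..., "indicator": ...}
    identity_events: list of {"host": ..., "account": ...}
    Returns the affected hosts and the accounts seen active on them.
    """
    affected_hosts = {a["host"] for a in endpoint_alerts}
    at_risk_accounts = {
        e["account"] for e in identity_events if e["host"] in affected_hosts
    }
    return affected_hosts, at_risk_accounts

hosts, accounts = scope_incident(
    [{"host": "web-01", "indicator": "malicious-hash"}],
    [{"host": "web-01", "account": "svc-backup"},
     {"host": "db-02", "account": "admin"}],
)
# svc-backup was active on the affected host and is in scope;
# admin was only seen on an unaffected host.
```

In production the same join runs across SIEM, EDR, and identity-provider data rather than in-memory lists, but the scoping logic is the same.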
Phase 3: Containment
Containment splits into short-term and long-term actions. Short-term containment stops the bleeding. The priority is to limit the blast radius while preserving evidence for later phases:
- Isolate affected endpoints from the network.
- Disable compromised accounts and revoke active sessions.
- Block malicious network traffic at the firewall or proxy.
- Capture forensic images before making any changes to affected systems.
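Short-term containment steps like these are strong candidates for pre-authorized automation. A sketch that turns confirmed scope into an ordered, auditable action plan; the API paths are hypothetical placeholders, not any vendor's real interface:

```python
def build_containment_actions(hosts, accounts):
    """Translate confirmed incident scope into ordered containment actions.

    The API paths below are hypothetical; substitute your EDR and
    identity provider's real endpoints before wiring this to a client.
    """
    actions = []
    for host in hosts:
        actions.append({"method": "POST",
                        "path": f"/api/edr/hosts/{host}/isolate",  # hypothetical
                        "reason": "short-term containment"})
    for account in accounts:
        actions.append({"method": "POST",
                        "path": f"/api/idp/accounts/{account}/disable",  # hypothetical
                        "reason": "revoke compromised credentials"})
    return actions

plan = build_containment_actions(["web-01"], ["svc-backup"])
for step in plan:
    print(step["method"], step["path"])
```

Generating the plan as data before executing it gives responders a review point and feeds the same evidence trail the documentation principle requires.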
Long-term containment applies temporary patches, implements enhanced monitoring, and establishes persistent network controls while you prepare for eradication. This may include standing up clean parallel systems so business operations can continue while compromised systems remain isolated. A key decision point in this phase is determining the acceptable level of business disruption: complete network segmentation is more secure but can halt operations, while targeted isolation preserves uptime but risks incomplete containment. Define these tradeoffs in your playbooks before the incident forces you to improvise.
Phase 4: Eradication
Eradication removes the attacker's foothold from your environment:
- Eliminate malware and malicious tooling from all affected systems.
- Close exploited vulnerabilities that enabled initial access or lateral movement.
- Remove persistence mechanisms such as backdoor accounts, scheduled tasks, or rogue services.
- Reimage compromised systems where cleaning is insufficient to guarantee integrity.
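One practical way to hunt persistence mechanisms is to diff current autostart and scheduled-task entries against a known-good baseline. A minimal sketch, where the baseline and entry names are assumptions for illustration:

```python
def find_unexpected_persistence(current_entries, baseline):
    """Return autostart/scheduled-task entries absent from the baseline.

    Anything not in the approved baseline is a candidate persistence
    mechanism: investigate it rather than auto-deleting it, so evidence
    survives for root cause analysis.
    """
    return sorted(set(current_entries) - set(baseline))

baseline = {"backup-nightly", "log-rotate", "av-update"}
current = {"backup-nightly", "log-rotate", "av-update", "svchost-update"}
suspects = find_unexpected_persistence(current, baseline)
print(suspects)  # ['svchost-update']
```

The approach generalizes to services, cron jobs, registry run keys, and cloud automation: capture the baseline during Preparation, diff during Eradication.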
Root cause analysis is critical here: if you eradicate artifacts without understanding the initial access vector, re-compromise is likely. Your team should trace the full attack chain from initial access through lateral movement to understand every system the attacker touched. Partial eradication is one of the most common causes of re-compromise, and it typically happens when teams rush to restore services before confirming the full scope of attacker activity. Detailed phase guidance is available through SANS training courses such as SEC504, FOR508, and FOR608.
Phase 5: Recovery
Recovery brings systems back to production:
- Restore from clean, validated backups.
- Rebuild compromised systems where restoration is insufficient.
- Implement stronger security controls based on what you learned during eradication.
- Validate that restored images are clean and no residual attacker artifacts remain.
- Reintegrate systems gradually with extended monitoring for recurring indicators of compromise.
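Validating that restored images are clean can be partly automated by comparing file hashes against a known-good manifest captured before the incident. A hedged sketch, with the manifest format as an assumption:

```python
import hashlib

def verify_restore(file_contents, manifest):
    """Compare restored file contents against known-good SHA-256 hashes.

    file_contents: {path: bytes} read from the restored system
    manifest: {path: expected_sha256_hex} captured pre-incident
    Returns paths whose hash does not match (possible residual tampering).
    """
    mismatches = []
    for path, expected in manifest.items():
        actual = hashlib.sha256(file_contents.get(path, b"")).hexdigest()
        if actual != expected:
            mismatches.append(path)
    return mismatches

manifest = {"/etc/ssh/sshd_config": hashlib.sha256(b"clean-config").hexdigest()}
restored = {"/etc/ssh/sshd_config": b"clean-config"}
print(verify_restore(restored, manifest))  # an empty list means all hashes match
```

Hash validation catches tampered configuration and binaries but not everything; pair it with the extended monitoring described above before declaring recovery complete.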
Define clear criteria for when monitoring can return to baseline levels. Recovery is also when you implement the security improvements that address root cause: if the attacker exploited a misconfigured VPN, the fix deploys during Recovery, not after.
Phase 6: Lessons Learned
Lessons Learned closes the loop. You conduct a formal post-incident review within two weeks of resolution, document the complete timeline, and analyze what worked and what failed. The review should include everyone who participated in the response, not just the IR team, since communication breakdowns and escalation delays often originate outside the SOC.
Findings feed back into Preparation through specific, assigned action items with deadlines and owners. The goal is to identify what enabled the incident and ensure the same pathways cannot be used again. Vague recommendations like "improve monitoring" are not actionable. Effective action items are specific: "deploy identity-based detection rules for service account lateral movement by Q2" gives your team a clear deliverable. Teams that skip or delay this phase tend to repeat the same containment failures across incidents.
Mapping these phases to the tools your analysts use every day ensures your SANS incident response workflow holds up under pressure.
Tool Integration Across Phases
Your SIEM and EDR platforms provide distinct capabilities across the response lifecycle. SIEM platforms deliver centralized log management, event correlation, and historical analysis. EDR tools deliver real-time endpoint visibility, targeted response including endpoint isolation, and deep process-level forensics.
In practice, teams get the best results when SIEM, EDR, identity telemetry, and cloud logs can be investigated together without manual correlation. For a deeper look at how unified telemetry supports faster scoping, see this XDR overview.
Understanding the six phases gives your team a shared language and sequence for handling incidents. The next step is turning that sequence into a documented, testable plan your team can execute under pressure.
How to Build an Incident Response Plan Using PICERL
A plan that lives in a shared drive and never gets tested is a compliance artifact, not an operational tool. The goal is a living document that maps each PICERL phase to your specific environment, tools, and team structure so your responders can execute it under pressure without improvising.
Start by mapping each PICERL phase to your current environment:
- Preparation: Document your IR team roster with roles, contact methods, and escalation authority. Define who can authorize containment actions like network isolation or account suspension without waiting for executive approval.
- Identification through Recovery: Build phase-specific playbooks that reference your actual tooling: which SIEM queries to run, which EDR actions to trigger, and which communication templates to send.
- Lessons Learned: Establish a post-incident review template with required fields for timeline, root cause, and assigned action items.
CISA playbook templates provide a strong baseline you can customize rather than building from scratch.
Your plan should also include a severity classification matrix that maps incident types to response tiers. A credential compromise affecting a single user and a ransomware outbreak affecting production servers require different escalation paths, different team compositions, and different containment timelines. Define those differences in advance.
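A severity matrix works best as data rather than prose, so playbooks and ticketing can reference it programmatically. A sketch with illustrative incident types, tiers, and containment SLAs; the specific values are examples to adapt, not SANS-mandated mappings:

```python
# Illustrative severity matrix: adapt incident types, tiers, and
# containment SLAs to your own environment and risk tolerance.
SEVERITY_MATRIX = {
    "single-user credential compromise": {"tier": 1, "sla_minutes": 240},
    "multi-user credential compromise":  {"tier": 2, "sla_minutes": 60},
    "ransomware on workstation":         {"tier": 2, "sla_minutes": 60},
    "ransomware on production servers":  {"tier": 3, "sla_minutes": 15},
}

def classify(incident_type):
    """Look up the response tier and containment SLA for an incident type."""
    # Fail closed: unknown incident types escalate to the highest tier.
    return SEVERITY_MATRIX.get(incident_type, {"tier": 3, "sla_minutes": 15})

print(classify("ransomware on production servers"))
```

Defaulting unknown types to the highest tier is a deliberate design choice: an unclassified incident should over-escalate, not sit in a Tier 1 queue.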
Once drafted, stress-test the plan through tabletop exercises at least quarterly. Run scenarios that target your weakest phases. If your team has never practiced Eradication procedures for a supply chain incident, that gap will appear during a real event. Assign a plan owner responsible for quarterly reviews, contact updates, and tooling alignment. Plans that lack ownership drift out of date within months.
Your plan is only as effective as the people executing it. SANS provides a structured training path to build and validate IR competency.
SANS Incident Response Certifications and Training Paths
The SANS Institute validates IR competency through GIAC certifications and hands-on training courses. If you are building or scaling an IR team, these credentials map directly to PICERL phase responsibilities.
SEC504: Hacker Tools, Techniques, and Incident Handling
- Coverage: All six PICERL phases.
- Certification: GCIH (GIAC Certified Incident Handler), validating hands-on ability to detect, respond to, and resolve security incidents.
- Best for: Tier 1 and Tier 2 analysts moving into dedicated IR roles. This is typically the starting point for teams building IR capability.
FOR508: Advanced Incident Response, Threat Hunting, and Digital Forensics
- Coverage: Identification, Containment, and Eradication phases, with emphasis on memory forensics, timeline analysis, and advanced threat hunting.
- Certification: GCFA (GIAC Certified Forensic Analyst).
- Best for: Tier 2 and Tier 3 analysts who lead complex investigations.
For teams building their training roadmap, start with SEC504 for broad PICERL coverage, then advance to FOR508 for deeper forensic and hunting capability. Pair certifications with regular tabletop exercises to ensure classroom knowledge translates to operational readiness.
Even with trained teams and documented plans, organizations still encounter structural challenges and execution errors during live incidents.
Common Challenges and Mistakes in SANS Incident Response
The SANS incident response model itself is sound. The challenge is implementation. Organizations that adopt PICERL often find that the framework's value depends entirely on how well their tools, staffing, and processes support each phase in real time.
The Autonomy Deficit
Most teams still rely on manual or partially integrated workflows during the phases where speed matters most. Fragmented security stacks delay threat discovery, add manual steps for evidence collection, and slow investigations through human-driven correlation across consoles. Every manual step you fail to eliminate adds time to containment.
The Remediation Speed Mismatch
Vulnerability exploitation frequently outpaces organizational remediation cycles. When patching and configuration changes move slower than adversary activity, containment decisions become more disruptive. Segmentation, isolation, and service shutdowns become necessary because the window for low-impact fixes has already passed.
Staffing and Executive Accountability Gaps
Building a full-time, dedicated IR team remains difficult. Many organizations rely on part-time or borrowed resources, and when IR responsibilities are split across team members who also carry operational workloads, response quality degrades under pressure. Compounding this, incident response breaks down when escalation paths, decision rights, and external communications are unclear. If executives are not aligned on who can authorize containment actions, downtime, or disclosure, Preparation phase gaps cascade through every subsequent phase.
Treating Preparation as a One-Time Activity
You built your incident response plan last year. Your playbooks reference tools you have since replaced. Your escalation contacts have changed roles. Preparation requires quarterly updates, tabletop exercises, and ongoing integration testing with your current toolset.
BC/DR Integration Gap
When your incident response process does not align with your business continuity and disaster recovery (BC/DR) plan, Recovery phase decisions get improvised instead of executed from a tested procedure.
These challenges are structural, not theoretical. Each one maps back to a specific PICERL phase where preparation, tooling, or process broke down.
SANS Incident Response Best Practices
Knowing what goes wrong is half the equation. These practices address the challenges above through concrete operational improvements.
Align with NIST and CISA Playbook Standards
Use CISA response playbooks as your template. These playbooks align with NIST incident handling guidance and provide step-by-step procedures, decision trees, and notification requirements. Customize them for your environment rather than building from scratch.
Target Autonomous Response and Behavioral Detection
A growing share of security teams now target autonomy in their identification and response workflows. Extend that focus to containment actions, evidence collection, and alert triage. Complement automation with behavioral analysis of process activity, user patterns, and network anomalies to catch attacks that use valid credentials and living-off-the-land techniques that signature-based methods miss. Extend your playbook coverage to edge infrastructure, VPN compromise, supply chain incidents, and cloud-specific scenarios.
Measure MTTC and Drive Quarterly Improvement
Track Mean Time to Contain (MTTC) as your primary effectiveness metric alongside Mean Time to Detect (MTTD), ticket escalation rates, and autonomy coverage. Tie outcomes back to specific playbooks, tooling gaps, and approval bottlenecks. Feed every post-incident review back into playbook refinements and rule updates on a quarterly cycle.
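MTTC is straightforward to compute once detection and containment timestamps are captured consistently per incident. A minimal sketch, where the record field names are assumptions:

```python
from datetime import datetime

def mean_time_to_contain(incidents):
    """Average hours from confirmed detection to containment."""
    durations = [
        (i["contained_at"] - i["detected_at"]).total_seconds() / 3600
        for i in incidents
        if i.get("contained_at")  # skip incidents that are still open
    ]
    return sum(durations) / len(durations) if durations else None

incidents = [
    {"detected_at": datetime(2024, 3, 1, 2, 47),
     "contained_at": datetime(2024, 3, 1, 6, 47)},   # 4 hours
    {"detected_at": datetime(2024, 3, 9, 10, 0),
     "contained_at": datetime(2024, 3, 9, 12, 0)},   # 2 hours
]
print(mean_time_to_contain(incidents))  # 3.0
```

Segmenting the same calculation by incident type or playbook is what lets you tie MTTC changes back to specific tooling gaps and approval bottlenecks.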
Applied consistently, these practices compound over time. Each improvement cycle tightens the gap between alert and containment. Real-world incidents show what happens when that gap stays wide.
Real-World Attack Examples That Map to PICERL
Even if your environment looks nothing like a critical infrastructure operator or a global casino, the failure modes are the same: unclear decision rights, slow containment, and incomplete scoping.
- Colonial Pipeline (2021, ransomware): The incident triggered a shutdown of pipeline operations and led to a $4.4 million ransom payment, illustrating how containment and recovery decisions can become business-wide continuity events.
- Kaseya VSA (2021, supply chain ransomware): Attackers used a managed service software platform to push ransomware to downstream customers, impacting up to 1,500 organizations. This is a direct reminder to build third-party and edge access playbooks in Preparation, not during the incident.
- MGM Resorts (2023, social engineering and ransomware): MGM reported a negative financial impact of $100 million for the quarter tied to the cyber incident, demonstrating why executive escalation paths and identity-focused containment actions matter.
Across these incidents, the pattern is consistent: preparation quality determines whether containment stays technical or becomes a business-wide crisis.
Organizations evaluating their response strategy often compare SANS PICERL against the NIST framework. Understanding where each one fits helps you apply the right model to the right problem.
SANS vs. NIST: Understanding the Key Differences
SANS PICERL is built for practitioners who need operational guidance during a live incident. NIST SP 800-61 is built for organizations that need policy alignment, compliance mapping, and governance structures.
| Aspect | SANS PICERL | NIST SP 800-61 Rev. 3 |
| --- | --- | --- |
| Phases | 6 (Preparation, Identification, Containment, Eradication, Recovery, Lessons Learned) | 6 (Govern, Identify, Protect, Detect, Respond, Recover) mapped to NIST Cybersecurity Framework 2.0 |
| Focus | Operational execution for practitioners with granular tactical guidance | Policy development, federal compliance, and detailed communication structures |
| Granularity | Separates containment, eradication, and recovery into distinct operational phases | Combines incident response into higher-level framework functions aligned with organizational governance |
| Validation | GIAC GCIH certification demonstrating practitioner competency | Federal agency adoption and compliance mapping to organizational governance frameworks |
Use SANS PICERL for granular operational execution and NIST SP 800-61 for compliance alignment, communication structures, and enterprise governance. You can operationalize the combined framework through CISA playbook templates, which provide executive order-compliant procedures aligned to NIST's structure.
Regardless of the framework you choose, your real constraint is execution speed. That depends on whether your platform can unify telemetry and support autonomous actions during Identification and Containment.
Strengthen Incident Response with SentinelOne
Executing the SANS PICERL framework at the speed modern threats demand calls for more than technology alone. SentinelOne Wayfinder Incident Readiness & Response gives you an expert incident response program that supports you before, during, and after a breach.
Wayfinder Incident Readiness & Response is part of SentinelOne’s managed services portfolio and runs on SentinelOne telemetry together with Google Threat Intelligence, so your team can move from ad hoc reaction to prepared, repeatable response.
Before an incident, Wayfinder specialists run readiness assessments, playbooks, tabletop exercises, and purple-team style drills to test controls and close gaps so your plans work under pressure.
During an incident, SentinelOne responders investigate active threats, contain affected systems, and coordinate digital forensics, root cause analysis, and IOC analysis so you can control impact and shorten disruption.
After an incident, the team guides recovery, provides executive level reporting, supports legal and compliance needs, and tunes your environment so lessons learned turn into stronger defenses for the next event.
To connect managed services with your existing controls, Wayfinder uses data from the Singularity™ Platform across endpoints, cloud, and identities, giving analysts and responders a unified view throughout the incident lifecycle. Storyline technology correlates process, file, and network activity into a complete attack narrative so your analysts can see incident scope without pivoting between tools.
Purple AI accelerates phases from Identification through Lessons Learned by letting your analysts query security data using natural language and reconstruct incident timelines faster. Customers report up to 55% faster threat remediation with Purple AI.
SentinelOne also reduces queue pressure before an incident becomes a full-scale response. In MITRE ATT&CK Evaluations, SentinelOne generated 12 alerts compared to 178,000 in a referenced comparison, drastically reducing analyst triage volume.
Singularity AI SIEM ingests and normalizes telemetry from native and third-party sources, providing centralized visibility and hot-stored data for historical analysis across incidents.
Request a demo with SentinelOne to see how autonomous response and unified telemetry reduce your mean time to contain.
Key Takeaways
The SANS PICERL framework gives your team a proven six-phase structure for handling incidents. The challenge is not the framework itself but operationalizing it with proper autonomy, tool integration, and staffing. Teams that reduce manual work and execute consistent playbooks contain incidents faster and reduce breach impact.
Prioritize MTTC as your primary metric, build playbooks for emerging attack vectors, and invest in platforms that unify telemetry and enable autonomous response across every phase.
FAQs
What is SANS incident response?
SANS incident response is a six-phase framework developed by the SANS Institute called PICERL. It stands for Preparation, Identification, Containment, Eradication, Recovery, and Lessons Learned. The framework provides operational guidance for security teams to handle incidents in a structured, repeatable way.
It maps each phase to specific team roles, tools, and actions so your SOC can execute consistently under pressure.
How does SANS PICERL differ from NIST SP 800-61?
SANS PICERL uses six phases with granular operational guidance for practitioners. NIST SP 800-61 Rev. 2 uses four phases focused on policy development and federal compliance, while Rev. 3 maps incident response to the NIST Cybersecurity Framework 2.0's six functions.
SANS separates containment, eradication, and recovery into distinct phases. Many teams use SANS for daily operations and NIST for regulatory alignment.
How long does it take to implement the SANS framework?
Implementation timelines vary based on your maturity, existing tooling, and staffing. Most teams start their incident response plan by formalizing roles, escalation paths, and a minimum set of playbooks for ransomware, identity compromise, and cloud incidents, then stress-test workflows through tabletop exercises.
The fastest programs treat implementation as a rolling quarterly cycle, not a one-time project.
Which metrics should you track for incident response?
Mean Time to Contain (MTTC) is one of the most operationally relevant metrics because it captures how quickly you stop impact after confirming an incident. Track MTTC alongside Mean Time to Identify (MTTI) and re-compromise rates. Tie changes to specific playbooks and tooling gaps so you can prove which investments improved execution.
How does autonomous AI improve SANS incident response?
Autonomous AI speeds up response across PICERL by reducing manual correlation and repetitive tasks. During Identification, it connects endpoint, identity, and cloud activity to help you scope incidents faster.
During Containment, you can pre-authorize routine actions like isolating endpoints or disabling accounts to remove approval delays. During Lessons Learned, natural language queries and summarized timelines help document what happened and update playbooks.

