# IT Disaster Recovery Runbook

**VantagePoint Networks** | Production-Ready DR Plan Template

---

## Document Control

| Field | Value |
|---|---|
| Document ID | DR-RB-001 |
| Version | 1.0 |
| Author | `<DR_COORDINATOR_NAME>` |
| Owner | `<IT_DIRECTOR_NAME>` |
| Classification | Confidential - Internal Use Only |
| Last Reviewed | `<YYYY-MM-DD>` |
| Next Review | `<YYYY-MM-DD>` (annual + post-incident) |
| Approval | `<EXEC_SPONSOR_NAME>`, `<DATE>` |

### Distribution List
- Executive sponsor
- IT Director
- DR Coordinator
- IT Operations team leads
- Facilities Manager
- Legal counsel
- Insurance broker contact
- Key vendor account managers (secure vault copy)

### Revision History
| Version | Date | Author | Changes |
|---|---|---|---|
| 0.1 | `<DATE>` | `<AUTHOR>` | Initial draft |
| 1.0 | `<DATE>` | `<AUTHOR>` | Approved for production |

---

## 1. DR Policy Statement

### Purpose
This runbook defines the procedures for recovering IT services at `<COMPANY_NAME>` following a significant disruption. It provides the IT team with a clear, tested sequence of actions to restore operations while minimising data loss and downtime.

### Scope
Covers all production IT systems including:
- Network infrastructure (routers, switches, firewalls, wireless)
- Server infrastructure (physical and virtual)
- Storage systems (SAN, NAS, backup)
- Cloud workloads
- Business applications and databases
- End-user services (email, collaboration, file services)
- Identity and access services (Active Directory, SSO)

Out of scope: individual workstation recovery, non-production dev/test environments (unless explicitly listed in BIA).

### Objectives
- **RTO (Recovery Time Objective)**: Maximum acceptable time to restore a service after an incident
- **RPO (Recovery Point Objective)**: Maximum acceptable data loss, measured in time
- **MTPD (Maximum Tolerable Period of Disruption)**: The duration of disruption beyond which the organisation cannot survive

### DR Tiers
| Tier | RTO | RPO | Cost | Approach |
|---|---|---|---|---|
| 1 - Critical | < 1 hour | < 15 min | High | Active/active or hot standby |
| 2 - Important | < 4 hours | < 1 hour | Medium | Warm standby, fast restore |
| 3 - Standard | < 24 hours | < 24 hours | Low | Cold standby, daily backup |
| 4 - Deferred | > 24 hours | Weekly | Minimal | Restore from weekly backup |
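Where tooling consumes these targets (monitoring alerts, post-incident scoring), the tier table can be encoded directly. A minimal sketch in Python; the dictionary mirrors the table above, and `meets_targets` is an illustrative helper, not part of any standard library:

```python
from datetime import timedelta

# RTO/RPO targets per DR tier, taken from the table above.
TIER_TARGETS = {
    1: {"rto": timedelta(hours=1), "rpo": timedelta(minutes=15)},
    2: {"rto": timedelta(hours=4), "rpo": timedelta(hours=1)},
    3: {"rto": timedelta(hours=24), "rpo": timedelta(hours=24)},
    4: {"rto": None, "rpo": timedelta(weeks=1)},  # Deferred: RTO > 24 h, best effort
}

def meets_targets(tier: int, actual_rto: timedelta, actual_rpo: timedelta) -> bool:
    """Return True if the measured recovery time and data loss met the tier's targets."""
    targets = TIER_TARGETS[tier]
    rto_ok = targets["rto"] is None or actual_rto <= targets["rto"]
    rpo_ok = actual_rpo <= targets["rpo"]
    return rto_ok and rpo_ok
```

A post-incident review can then record the result alongside the RPO/RTO section of the PIR.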

---

## 2. Business Impact Analysis (BIA)

Complete this table for your organisation. Example rows shown.

| System | Owner | Tier | RTO | RPO | MTPD | Dependencies |
|---|---|---|---|---|---|---|
| Active Directory | IT Ops | 1 | 1 hr | 15 min | 4 hr | DNS, NTP, certificates |
| Internet connectivity | Network | 1 | 1 hr | N/A | 4 hr | ISP, firewall, router |
| Email (M365) | IT Ops | 1 | 2 hr | 1 hr | 8 hr | Internet, MFA, identity |
| ERP system | Finance | 1 | 2 hr | 30 min | 8 hr | Database, app server |
| Web storefront | Digital | 1 | 1 hr | 15 min | 4 hr | CDN, DNS, payment gateway |
| File server | IT Ops | 2 | 4 hr | 1 hr | 24 hr | AD, backup, storage |
| CRM | Sales | 2 | 4 hr | 4 hr | 24 hr | Database, SSO |
| `<APP_NAME>` | `<OWNER>` | `<TIER>` | `<RTO>` | `<RPO>` | `<MTPD>` | `<DEPS>` |
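Because the recovery sequence must respect the Dependencies column, a topological sort of the dependency graph yields a safe boot order. A sketch using the standard-library `graphlib`; the system names are an illustrative subset of the table, and a real BIA export would feed the same structure:

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# System -> set of systems it depends on (illustrative subset of the BIA table).
dependencies = {
    "dns": set(),
    "active_directory": {"dns"},
    "database": {"active_directory"},
    "erp": {"database"},
    "file_server": {"active_directory"},
}

def recovery_order(deps: dict[str, set[str]]) -> list[str]:
    """Return a boot order in which every system starts after its dependencies."""
    return list(TopologicalSorter(deps).static_order())
```

`TopologicalSorter` raises `CycleError` if the BIA contains circular dependencies, which is itself a useful validation of the table.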

---

## 3. DR Team Structure

### Core DR Team
| Role | Primary | Deputy | Phone (Primary) | Phone (After Hours) | Email |
|---|---|---|---|---|---|
| DR Coordinator | `<NAME>` | `<NAME>` | `<PHONE>` | `<MOBILE>` | `<EMAIL>` |
| IT Director | `<NAME>` | `<NAME>` | `<PHONE>` | `<MOBILE>` | `<EMAIL>` |
| Network Lead | `<NAME>` | `<NAME>` | `<PHONE>` | `<MOBILE>` | `<EMAIL>` |
| Server / Virt Lead | `<NAME>` | `<NAME>` | `<PHONE>` | `<MOBILE>` | `<EMAIL>` |
| Security / IR Lead | `<NAME>` | `<NAME>` | `<PHONE>` | `<MOBILE>` | `<EMAIL>` |
| Database Lead | `<NAME>` | `<NAME>` | `<PHONE>` | `<MOBILE>` | `<EMAIL>` |
| Application Lead | `<NAME>` | `<NAME>` | `<PHONE>` | `<MOBILE>` | `<EMAIL>` |
| Comms Lead | `<NAME>` | `<NAME>` | `<PHONE>` | `<MOBILE>` | `<EMAIL>` |
| Facilities | `<NAME>` | `<NAME>` | `<PHONE>` | `<MOBILE>` | `<EMAIL>` |
| Legal | `<NAME>` | `<NAME>` | `<PHONE>` | `<MOBILE>` | `<EMAIL>` |
| HR | `<NAME>` | `<NAME>` | `<PHONE>` | `<MOBILE>` | `<EMAIL>` |
| Executive Sponsor | `<NAME>` | `<NAME>` | `<PHONE>` | `<MOBILE>` | `<EMAIL>` |

### External Contacts
| Vendor | Product | Account # | Support Phone | Support Portal | Escalation |
|---|---|---|---|---|---|
| Primary ISP | Internet | `<ACCT>` | `<PHONE>` | `<URL>` | `<NAME>` |
| Secondary ISP | Internet | `<ACCT>` | `<PHONE>` | `<URL>` | `<NAME>` |
| Cloud provider | IaaS | `<ACCT>` | `<PHONE>` | `<URL>` | `<NAME>` |
| Data centre / colo | Hosting | `<ACCT>` | `<PHONE>` | `<URL>` | `<NAME>` |
| Cisco TAC | Network | `<CONTRACT>` | `<PHONE>` | support.cisco.com | Escalation Mgr |
| Microsoft | M365 | `<TENANT>` | `<PHONE>` | portal.azure.com | Premier TAM |
| Backup vendor | Backup | `<ACCT>` | `<PHONE>` | `<URL>` | `<NAME>` |
| Insurance broker | Cyber | `<POLICY>` | `<PHONE>` | N/A | `<NAME>` |
| Legal / cyber counsel | Advisory | N/A | `<PHONE>` | N/A | `<NAME>` |
| Forensics firm (retainer) | IR | `<CONTRACT>` | `<PHONE>` | N/A | `<NAME>` |

---

## 4. Activation Criteria

DR activation is triggered by any of the scenarios below and must be authorised by the **DR Coordinator** (or their deputy in their absence). The executive sponsor must be notified within 30 minutes of activation.

### Triggering Events
| Event | Minimum Scale | Decision Authority |
|---|---|---|
| Primary data centre failure | Total loss > 1 hour | DR Coordinator |
| Network core failure | Total LAN down > 30 min | Network Lead |
| Cyber attack / ransomware | Confirmed encryption / exfil | CISO / DR Coord |
| Cloud provider region outage | Major services down > 30 min | DR Coordinator |
| Total Internet outage | All WAN links down > 15 min | DR Coordinator |
| Building loss / evacuation | HQ unusable > 4 hours | Exec Sponsor |
| Pandemic / public health | Government mandated | Exec Sponsor |
| Utility failure | Power/cooling lost > 15 min | Facilities Mgr |
| Key personnel unavailable | > 50% of IT team | DR Coordinator |
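The duration-based triggers above can be encoded as a simple decision helper for monitoring integration. This is a sketch: the names and thresholds mirror the table, only time-based triggers are covered, and the final call always rests with the listed decision authority.

```python
from dataclasses import dataclass

@dataclass
class OutageEvent:
    kind: str          # "datacentre", "lan_core", "cloud_region", "wan", "utility"
    duration_min: int  # how long the condition has persisted
    total_loss: bool   # True if the failure is total rather than partial

# Minimum sustained duration (minutes) of a total failure before DR activation,
# taken from the triggering-events table above.
ACTIVATION_THRESHOLDS_MIN = {
    "datacentre": 60,
    "lan_core": 30,
    "cloud_region": 30,
    "wan": 15,
    "utility": 15,
}

def should_activate_dr(event: OutageEvent) -> bool:
    """True when the event exceeds the table's minimum scale for DR activation."""
    threshold = ACTIVATION_THRESHOLDS_MIN.get(event.kind)
    if threshold is None or not event.total_loss:
        return False  # unknown kinds and partial failures go to standard IR
    return event.duration_min > threshold
```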

### Non-Triggering Events
These are handled via standard incident response, not DR activation:
- Single user / group impact
- Individual application bug
- Minor network segment issue
- Brief (< 15 min) outages
- Scheduled maintenance

---

## 5. Notification Cascade

### Phase 1 - Internal Declaration (T+0 to T+15 min)
```
Person discovering incident
    v
IT on-call engineer
    v
DR Coordinator (formal activation decision)
    v
(Parallel) IT Director, Security Lead, Executive Sponsor
    v
DR team assembly (call bridge or Teams/Zoom)
```

### Phase 2 - Team Activation (T+15 to T+30 min)
DR Coordinator opens the war room (physical or virtual) and sends the activation message:

```
SUBJECT: DR ACTIVATION - <INCIDENT_TYPE> - <YYYY-MM-DD HH:MM>

This is an official disaster recovery activation.

Incident type: <DDoS / Ransomware / DC Failure / Network / Cloud / Building>
Initial impact: <BRIEF_DESCRIPTION>
Declared by: <DR_COORDINATOR>
Activation time: <TIMESTAMP>

War room: <LOCATION / URL>
Conference bridge: <PHONE / URL>
Status page: <INTERNAL_URL>

All DR team members report in immediately.
Ops teams: begin containment per runbook.
Comms lead: prepare stakeholder notifications.
Do not communicate externally until comms lead approves.

Next update: <TIMESTAMP + 30 min>
```
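The activation message can be generated programmatically so that no field is forgotten under pressure. A sketch using a condensed version of the template above; the field names are illustrative:

```python
from datetime import datetime, timedelta

# Condensed version of the activation message template above.
ACTIVATION_TEMPLATE = """\
SUBJECT: DR ACTIVATION - {incident_type} - {activated_at:%Y-%m-%d %H:%M}

Incident type: {incident_type}
Initial impact: {impact}
Declared by: {declared_by}
War room: {war_room}
Next update: {next_update:%Y-%m-%d %H:%M}
"""

def build_activation_message(incident_type: str, impact: str, declared_by: str,
                             war_room: str, activated_at: datetime) -> str:
    """Fill in the activation template; the next update is due 30 minutes later."""
    return ACTIVATION_TEMPLATE.format(
        incident_type=incident_type,
        impact=impact,
        declared_by=declared_by,
        war_room=war_room,
        activated_at=activated_at,
        next_update=activated_at + timedelta(minutes=30),
    )
```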

### Phase 3 - Stakeholder Notification (T+30 min to T+2 hr)
- Internal staff via email/SMS/intranet
- Customers via status page, support portal
- Regulators if legally required (GDPR: 72 hrs for personal data breach)
- Press via pre-approved statement (after legal review)

---

## 6. Recovery Procedures by Scenario

### Scenario A - Primary Data Centre Failure

**Assumption**: Secondary site (or cloud DR region) is warm/hot standby with recent data replica.

1. **T+0 to T+15 min** - Confirm failure
   - Verify with facilities (power, cooling, physical access)
   - Check monitoring: ICMP, SNMP, backup sync status
   - Document last successful replication timestamp (RPO measurement)
2. **T+15 to T+30 min** - Declare and mobilise
   - DR Coordinator activates DR per section 5
   - All teams acknowledge in war room
3. **T+30 to T+60 min** - Network cutover
   - Lower DNS TTLs (to 60s days ahead of a planned cutover; to 30s in an emergency, noting that already-cached records still expire at the old TTL)
   - Update public DNS A records to DR site public IPs
   - Activate BGP announcements from DR site (if multi-homed)
   - Confirm routing convergence globally (use external looking glass)
4. **T+60 to T+120 min** - Tier 1 application recovery
   - Validate AD replica in DR site (DCdiag, replication status)
   - Bring up identity services
   - Start email gateway / M365 failover (hybrid connector in DR)
   - Recover ERP and other Tier 1 apps in documented boot order
5. **T+120 to T+240 min** - Tier 2 services
   - File server restore from latest snapshot
   - CRM failover
   - Intranet / wiki
6. **T+240+ min** - Validate and communicate
   - End-to-end tests with business owners
   - Update status page to "services restored"
   - Begin heightened monitoring for residual issues

### Scenario B - Ransomware / Cyber Attack

Priority: **Preserve evidence, contain, then recover from a known-clean state**.

1. **Immediate containment**
   - Disconnect affected network segments from rest of network (do not power off - preserves volatile data)
   - Isolate affected endpoints via EDR quarantine
   - Disable affected user accounts
   - Pause all backup jobs targeting affected systems (prevents clean backups being overwritten)
   - Engage retained forensics firm within first hour
2. **Do NOT**
   - Pay the ransom without executive, legal, and law enforcement consultation (varies by jurisdiction)
   - Re-image without forensic imaging first
   - Restore directly back to production without validation environment
3. **Assessment (with forensics firm)**
   - Identify initial access vector (phishing, exposed RDP, unpatched vuln, supply chain)
   - Scope: which systems, which accounts, which data
   - Look for persistence mechanisms (scheduled tasks, services, webshells)
   - Identify exfiltrated data (DNS tunneling, cloud storage uploads)
4. **Clean restore**
   - Stand up isolated rebuild VLAN (no Internet, no production)
   - Rebuild domain controllers from offline backup (prefer immutable backup)
   - Reset the krbtgt account password TWICE, waiting for full AD replication between resets (the KDC honours the previous password, so a single reset leaves forged golden tickets usable)
   - Rebuild application servers from known-clean image
   - Restore user data only after AV/YARA scan
   - Apply all patches before joining to production
5. **Communication**
   - Legal counsel leads regulatory and customer notifications
   - Consider mandatory breach notifications (GDPR 72h, state laws, sector-specific)
   - Document every decision with timestamp and approver

### Scenario C - Total Network Outage

1. Verify outage scope
   - Check primary and secondary ISP circuits
   - Test from outside (mobile hotspot) to public services
   - Check SD-WAN controller status
2. Failover to backup circuits
   - Activate 4G/5G failover routers at sites
   - Switch to secondary ISP on SD-WAN
   - Reroute critical traffic via VPN over LTE
3. Vendor escalation
   - Primary ISP: open P1 ticket with outage scope
   - Secondary ISP: confirm they are up
   - Escalate to named account manager after 30 min
4. Customer services
   - If e-commerce down, update status page
   - Activate cached read-only mode on website if supported
   - Redirect inbound voice calls to mobile numbers

### Scenario D - Cloud Provider Region Outage

1. Confirm via provider status page and third-party sources (Downdetector, status site aggregators)
2. Execute multi-region failover
   - DNS: update to secondary region public endpoints
   - Database: promote secondary region replica to primary
   - Compute: scale up in secondary region
   - Storage: verify replication lag acceptable
3. If multi-region not configured
   - Serve cached / static content
   - Communicate estimated recovery based on provider announcements
   - Consider emergency provisioning in alternate region
4. After primary restored, plan failback during change window (not during business hours unless required)

### Scenario E - Physical Building Loss

1. Ensure life safety first (evacuation, headcount, injuries)
2. Activate remote work for all staff
   - VPN capacity check (can it handle 100% of staff connecting concurrently?)
   - SSO / MFA capacity
   - Collaboration tools (M365, Teams, Zoom)
3. Critical equipment procurement
   - Replacement laptops from spare pool or vendor express
   - Temporary office space if building loss > 1 week
4. Insurance engagement
   - Contact broker within 24 hours
   - Document damage with photos/video
   - Preserve damaged equipment for adjuster

---

## 7. System Recovery Priority Matrix

Recovery sequence enforces dependency order:

### Tier 1 - Foundation (RTO: 1 hour)
1. Power / cooling / physical access
2. Core network (router, core switch, firewall)
3. Internet connectivity
4. DNS and NTP
5. Certificate services (internal CA)
6. Active Directory / LDAP (primary DC)

### Tier 2 - Core Services (RTO: 2 hours)
7. Additional domain controllers
8. Identity / SSO (Azure AD / Okta / Ping)
9. MFA services
10. Certificate revocation / OCSP responder
11. Monitoring and syslog
12. Backup management plane

### Tier 3 - Business Critical (RTO: 4 hours)
13. Email (M365 / Exchange)
14. ERP / accounting
15. Customer-facing web / e-commerce
16. Payment processing integrations
17. Primary database servers

### Tier 4 - Important (RTO: 8 hours)
18. CRM
19. File servers / SharePoint / OneDrive
20. Collaboration platform (Teams / Slack)
21. Ticketing / helpdesk
22. Intranet / wiki

### Tier 5 - Standard (RTO: 24 hours)
23. Reporting / BI
24. Development and test environments
25. Training systems
26. Archive systems

---

## 8. Backup Verification

### Schedule
| Frequency | Verification Type | Owner |
|---|---|---|
| Daily | Automated backup job status check | Backup admin |
| Weekly | File-level restore test (sample) | Backup admin |
| Monthly | Database restore to DR environment | DBA |
| Quarterly | Full system bare-metal restore | IT Ops |
| Annual | DR site full failover simulation | DR Coordinator |

### Backup Health Dashboard (track these)
- Job success rate (target > 99%)
- RPO compliance (last successful backup within RPO window)
- Media integrity (periodic verify reads)
- Off-site copy age (should not exceed replication SLA)
- Encryption key access (test access quarterly)
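The first two dashboard metrics can be computed automatically from backup job records. A sketch (the function names are illustrative):

```python
from datetime import datetime, timedelta

def rpo_compliant(last_backup: datetime, rpo: timedelta, now: datetime) -> bool:
    """True when the most recent successful backup falls inside the RPO window."""
    return (now - last_backup) <= rpo

def success_rate(results: list[bool]) -> float:
    """Fraction of successful jobs over a reporting window (dashboard target > 0.99)."""
    return sum(results) / len(results) if results else 0.0
```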

### Common Backup Strategies
- **3-2-1 Rule**: 3 copies of data, on 2 media types, with 1 off-site
- **3-2-1-1-0**: Above + 1 immutable/air-gapped + 0 errors on verify
- **Immutable storage**: WORM, S3 Object Lock, tape, or backup appliance with immutability
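The 3-2-1-1-0 rule can likewise be checked mechanically against a backup inventory. A sketch, assuming each copy is described by its media type, location, immutability, and last verify result (an illustrative schema, not any vendor's API):

```python
from dataclasses import dataclass

@dataclass
class BackupCopy:
    media: str         # e.g. "disk", "tape", "object_storage"
    offsite: bool      # stored off-site?
    immutable: bool    # WORM / object lock / air-gapped?
    verify_errors: int # errors from the last verify pass

def satisfies_3_2_1_1_0(copies: list[BackupCopy]) -> bool:
    """3 copies, 2 media types, 1 off-site, 1 immutable, 0 verify errors."""
    return (len(copies) >= 3
            and len({c.media for c in copies}) >= 2
            and any(c.offsite for c in copies)
            and any(c.immutable for c in copies)
            and all(c.verify_errors == 0 for c in copies))
```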

### Failure Response
If a backup job fails:
- Retry immediately (transient issues)
- Investigate within 1 business day
- Escalate to next tier if 3 consecutive failures
- Maintain compensating control (additional snapshot) during investigation
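The three-consecutive-failures escalation rule above can be expressed as a small check over recent job results (an illustrative helper):

```python
def needs_escalation(job_history: list[bool], threshold: int = 3) -> bool:
    """Escalate when the most recent `threshold` runs all failed."""
    recent = job_history[-threshold:]
    return len(recent) == threshold and not any(recent)
```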

---

## 9. DR Testing

### Test Types (increasing rigour)

#### Tabletop Exercise (quarterly)
- Read-only walk-through of a scenario
- DR team discusses actions without touching systems
- 1-2 hour meeting
- Facilitator raises unexpected complications
- Output: action items for plan improvement

#### Walkthrough / Simulation (semi-annual)
- Step through recovery procedures with pre-staged test data
- Touch non-production clones of systems
- 4-8 hour exercise
- Validate runbook accuracy and tool access
- Output: timing data, missing dependencies

#### Parallel Test (annual)
- Bring up DR site in parallel with production
- Validate DR site can carry load
- No production impact (DR site does not take live traffic)
- 1-2 day exercise

#### Full Failover (annual or biennial)
- Cut over production to DR site for real
- Run business from DR site for defined period (typically 4-24 hours)
- Fail back during agreed change window
- Highest value, highest risk test

### Tabletop Scenario Template
```
Scenario: <Ransomware encrypts file servers at 10am Monday>

Facilitator guides discussion:
- First 5 minutes: Who is aware? Who do they call?
- First hour: Containment decisions, communication
- First day: Scope assessment, recovery approach
- First week: Full recovery, lessons learned

Injects (facilitator adds mid-exercise):
- "The backup admin is on holiday and phone is off"
- "Legal requires breach notification drafted within 2 hours"
- "A customer tweets about system being down"

Debrief:
- What went well?
- What gaps were exposed?
- What plan changes are needed?
- Action items with owners and deadlines
```

---

## 10. Communication Templates

### Internal Staff Notification
```
SUBJECT: [ACTION REQUIRED] IT Systems Update - <YYYY-MM-DD>

Team,

At <TIME>, we experienced <BRIEF_DESCRIPTION>. Our IT team has activated our
disaster recovery plan and is working to restore services.

Current status: <DESCRIPTION>
Expected restoration: <ESTIMATE> (will update hourly)

What you should do:
- <ACTION 1 - e.g., Do not use the VPN until further notice>
- <ACTION 2 - e.g., Use mobile email only>
- <ACTION 3 - e.g., Direct all customer enquiries to the support team>

Status updates will be posted to <INTRANET_URL> every hour.

For urgent technical issues, contact <HELPDESK_PHONE> or <HELPDESK_EMAIL>.

<SIGNATURE>
```

### Customer Notification (Public)
```
SUBJECT: Service Update - <DATE>

Dear <CUSTOMER_NAME>,

At <TIME>, we identified an issue affecting <SERVICE_NAME>. Our team
activated incident response procedures immediately.

Impact: <WHAT IS / IS NOT AFFECTED>
Status: <CURRENT STATE>
Next update: <TIMESTAMP>

We will continue to post updates at <STATUS_PAGE_URL>.

We apologise for the inconvenience and appreciate your patience.

<COMPANY_NAME>
```

### Management / Board Update
```
SUBJECT: [EXEC UPDATE] DR Activation - <INCIDENT> - Update #<N>

Executive Team,

INCIDENT: <DESCRIPTION>
DECLARED: <TIMESTAMP>
CURRENT RTO PROGRESS: <% complete> / <ETA>
CURRENT RPO: <TIME OF LAST GOOD DATA>

BUSINESS IMPACT:
- Users affected: <NUMBER>
- Services impacted: <LIST>
- Revenue impact (estimated): <AMOUNT>
- Regulatory exposure: <YES/NO - if yes, who is notifying>

KEY ACTIONS IN PROGRESS:
- <ACTION 1>
- <ACTION 2>
- <ACTION 3>

DECISIONS NEEDED:
- <DECISION 1 - if any>

NEXT UPDATE: <TIMESTAMP>

<DR_COORDINATOR>
```

### Regulator / Supervisory Authority (GDPR Art. 33)
```
To: <SUPERVISORY_AUTHORITY>
From: <DPO_NAME>, Data Protection Officer
RE: Personal data breach notification

Controller: <COMPANY_LEGAL_NAME>
Date/time of breach discovery: <TIMESTAMP>
Date/time of this notification: <TIMESTAMP>

Nature of breach: <DESCRIPTION>
Categories of data: <TYPES OF PERSONAL DATA>
Approximate number of data subjects: <NUMBER>
Approximate number of records: <NUMBER>

Likely consequences: <ASSESSMENT>

Measures taken / proposed:
- <TECHNICAL MEASURES>
- <ORGANISATIONAL MEASURES>
- <NOTIFICATIONS TO DATA SUBJECTS (if applicable)>

DPO contact: <NAME>, <EMAIL>, <PHONE>
```

---

## 11. Return to Normal (Failback)

Failback is as important as failover. Do not assume the DR site can run indefinitely.

### Pre-Failback Checklist
- [ ] Primary site fully operational (infrastructure, cooling, power)
- [ ] Root cause fully addressed and documented
- [ ] All patches applied to rebuilt systems
- [ ] Security scan passed on primary site
- [ ] Backup of current DR state taken (so rollback possible)
- [ ] Change window approved by CAB
- [ ] Stakeholders notified of failback window
- [ ] Runbook reviewed by operating team
- [ ] Rollback plan documented

### Failback Sequence
1. Freeze writes on DR site (read-only mode if possible)
2. Perform final data sync DR -> Primary
3. Validate data consistency
4. Stop applications on DR site
5. Start applications on Primary site
6. Update DNS (reverse of failover)
7. Validate end-to-end (test transactions)
8. Reopen to users
9. Monitor with heightened alerting for 24 hours
10. Update documentation and metrics

### Data Reconciliation
- Identify writes that occurred on DR that must be replicated back
- Handle conflicts (duplicate IDs, version mismatches)
- Run application-level consistency checks
- Get business owner sign-off before considering failback complete
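Conflict identification during reconciliation can be sketched as a set comparison, assuming per-record version counters. This is an illustrative schema only; real systems need application-specific conflict rules:

```python
def find_conflicts(primary_writes: dict[str, int],
                   dr_writes: dict[str, int]) -> tuple[set[str], set[str]]:
    """Split DR-site writes into those safe to replay on the primary and
    those that conflict (the same record changed differently on both sides).

    Keys are record IDs; values are version numbers (illustrative schema).
    """
    conflicts = {rid for rid in dr_writes
                 if rid in primary_writes and primary_writes[rid] != dr_writes[rid]}
    safe = set(dr_writes) - conflicts
    return safe, conflicts
```

Conflicting records go to the business owner for manual resolution before failback sign-off.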

---

## 12. Post-Incident Review

Within 5 business days of recovery, conduct a blameless post-mortem.

### PIR Template
```
Incident: <TITLE>
Date: <YYYY-MM-DD>
Duration: <HOURS>
Facilitator: <NAME>
Attendees: <LIST>

TIMELINE
<TIMESTAMP> - <EVENT>
<TIMESTAMP> - <EVENT>
...

IMPACT
- Users affected: <NUMBER>
- Duration of impact: <DURATION>
- Data loss: <NONE / AMOUNT>
- Revenue impact: <ESTIMATE>
- Reputation impact: <ASSESSMENT>

RPO / RTO ACHIEVED
- Target RPO: <MIN>  | Actual: <MIN>
- Target RTO: <HRS>  | Actual: <HRS>

ROOT CAUSE
<DESCRIPTION - five whys>

CONTRIBUTING FACTORS
- <FACTOR 1>
- <FACTOR 2>

WHAT WENT WELL
- <ITEM 1>
- <ITEM 2>

WHAT DID NOT GO WELL
- <ITEM 1>
- <ITEM 2>

ACTION ITEMS
| # | Action | Owner | Due | Status |
|---|---|---|---|---|
| 1 | <ACTION> | <NAME> | <DATE> | Open |
| 2 | <ACTION> | <NAME> | <DATE> | Open |

LESSONS LEARNED / PLAN UPDATES
- Runbook changes needed: <LIST>
- Tool gaps identified: <LIST>
- Training gaps: <LIST>

SIGN-OFF
DR Coordinator: <NAME> / <DATE>
IT Director: <NAME> / <DATE>
Executive Sponsor: <NAME> / <DATE>
```

---

## Appendix A - Tools and Access List

| Tool | Purpose | Access Location | Owner |
|---|---|---|---|
| DR Runbook | This document | `<SHAREPOINT_URL>`, printed in DR binder | DR Coord |
| Configuration backups | Device configs | `<BACKUP_SERVER>`, off-site S3 | Network Lead |
| Out-of-band mgmt | Console access | `<OOB_ADDR>` (separate Internet) | Network Lead |
| Cloud console | IaaS | `<URL>` - MFA required | Cloud Lead |
| Monitoring | Status | `<URL>` | Ops Lead |
| Password vault | Emergency creds | `<URL>` - break-glass account | DR Coord |
| Conference bridge | War room | `<DIAL_IN / URL>` | Comms Lead |
| Status page | Public comms | `<URL>` (external hosting) | Comms Lead |
| Ticketing | Incident tracking | `<URL>` | Ops Lead |
| Forensics retainer | IR firm | `<EMAIL>` | CISO |

## Appendix B - Insurance Details

| Item | Value |
|---|---|
| Cyber policy | `<POLICY_NUMBER>` |
| Underwriter | `<COMPANY>` |
| Broker | `<BROKER_NAME / PHONE>` |
| Coverage limit | `<AMOUNT>` |
| Retention (deductible) | `<AMOUNT>` |
| Notification deadline | `<HOURS FROM DISCOVERY>` |
| Pre-approved forensics | `<FIRM>` |
| Pre-approved legal | `<FIRM>` |

## Appendix C - Equipment Inventory (DR-Relevant Spares)

| Item | Qty | Location | Owner |
|---|---|---|---|
| Spare firewall (same model as prod) | `<N>` | DR site / colo | Network |
| Spare core switch | `<N>` | DR site | Network |
| Spare ISR / WAN router | `<N>` | MDF spare rack | Network |
| Spare WLC | `<N>` | MDF | Network |
| Spare APs | `<N>` | MDF | Network |
| 4G/LTE backup routers | `<N>` | Per-site | Network |
| Spare console cables / kits | `<N>` | Go-bag | Network |
| Laptops (imaged, ready) | `<N>` | IT spare pool | IT Ops |
| Mobile hotspots | `<N>` | IT spare pool | IT Ops |

## Appendix D - Related Documents

- Incident Response Runbook: `<LINK>`
- Change Management Runbook: `<LINK>`
- SOC Playbook Bundle: `<LINK>`
- Information Security Policy: `<LINK>`
- Business Continuity Plan: `<LINK>`
- Network Topology Diagrams: `<LINK>`
- Backup and Recovery Standard: `<LINK>`
- Acceptable Use Policy: `<LINK>`

---

**VantagePoint Networks** - vantagepointnetworks.com

End of document
