
DPDP Breach Notification: The 72-Hour Playbook

Manish Garg, CISSP · RingSafe
April 19, 2026

The DPDP Act gives Data Fiduciaries 72 hours from awareness of a personal-data breach to notify the Data Protection Board and affected Data Principals. That clock is the single hardest operational requirement in the Act, and the one most Indian organizations are not ready for. Ninety-five percent of organizations believe they would detect a breach in time; only twenty percent have actually tested the detection-to-notification pipeline end-to-end. Post-enforcement, the gap between those two numbers is where penalty exposure lives.

This is the 72-hour DPDP breach notification playbook β€” what the clock requires, how awareness is defined, the first-hour decisions, and the runbook we use when clients engage us during active incidents.

When the clock starts

The Rules specify 72 hours from “awareness.” The key operational question is what constitutes awareness. Interpretation guidance from early Board communications and regulatory commentary suggests:

  • Awareness begins at the earliest point where a responsible person in the organization has sufficient information to reasonably conclude that a personal-data breach has occurred or is likely to have occurred
  • You do not need forensic certainty to trigger the clock
  • “Responsible person” is interpreted broadly β€” SOC analyst, engineering on-call, DPO, IT manager
  • The clock is not paused by internal escalation delays; if the SOC knew on Monday and the DPO heard about it on Thursday, the clock started Monday

This is the trap. Organizations optimize for certainty before declaring a breach internally. That optimization routinely costs days. Under DPDP, it costs penalty exposure.
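The clock mechanics are worth making concrete. A minimal sketch, assuming timezone-aware timestamps and illustrative function names (nothing here comes from the Rules themselves):

```python
from datetime import datetime, timedelta, timezone

NOTIFICATION_WINDOW = timedelta(hours=72)

def notification_deadline(awareness: datetime) -> datetime:
    """The 72-hour clock runs from earliest credible awareness,
    not from escalation to the DPO."""
    if awareness.tzinfo is None:
        raise ValueError("use timezone-aware timestamps for the audit trail")
    return awareness + NOTIFICATION_WINDOW

def time_remaining(awareness: datetime, now: datetime) -> timedelta:
    """Time left on the clock; negative means the window has already lapsed."""
    return notification_deadline(awareness) - now

# SOC flagged a credible signal Monday 09:00 IST; even if the DPO only
# hears about it Thursday, the clock started Monday.
IST = timezone(timedelta(hours=5, minutes=30))
soc_alert = datetime(2026, 4, 13, 9, 0, tzinfo=IST)
print(notification_deadline(soc_alert))  # 2026-04-16 09:00:00+05:30
```

The important design point is the input: the function takes the SOC's timestamp, not the DPO's, because that is when awareness began.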

The first hour

What to do in the first 60 minutes after a credible breach signal:

  1. Initial triage β€” is there a plausible path from the signal to personal-data exposure? Not certainty; plausibility.
  2. Notify the DPO or equivalent role. This is an explicit, synchronous notification β€” not an email to be read when someone gets around to it. If you have no DPO, notify the CEO or senior legal counsel.
  3. Assemble the response team β€” engineering lead, security lead, legal, communications, executive sponsor. 30-minute synchronous meeting.
  4. Begin containment β€” whatever access paths are involved get cut. Rotate credentials. Revoke sessions. Isolate affected systems. Containment begins before full investigation.
  5. Preserve evidence β€” logs, memory captures, disk images of affected systems. Critical for later forensics and for demonstrating reasonable response.
  6. Decision gate β€” is this likely a personal-data breach? If yes, the clock is running.
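The first-hour flow above, and in particular the step-6 decision gate, can be sketched as a small triage record. The field and function names are illustrative, not prescribed anywhere:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class BreachSignal:
    source: str                          # e.g. "SOC alert", "engineering on-call"
    description: str
    received_at: datetime
    plausible_personal_data_path: bool   # triage answer: plausibility, not certainty

def decision_gate(signal: BreachSignal) -> dict:
    """Step 6: if personal-data exposure is plausible, treat the clock as running."""
    clock_running = signal.plausible_personal_data_path
    return {
        "clock_running": clock_running,
        "clock_start": signal.received_at if clock_running else None,
        "next_actions": (
            ["notify DPO synchronously", "assemble response team",
             "begin containment", "preserve evidence"]
            if clock_running
            else ["log, monitor, and re-triage on new information"]
        ),
    }
```

Note that the gate keys off plausibility, not confirmed exfiltration; waiting for certainty is exactly the failure mode described above.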

The 72-hour runbook

Hours 0–24 β€” Containment and scope

  • Full containment β€” all identified access paths cut
  • Forensic investigation launched; IR provider engaged if not in-house
  • Data-scope determination β€” what personal data is in the affected systems, even if exfiltration is not yet confirmed
  • External counsel engaged for regulatory and disclosure advice
  • Internal communications β€” decide who inside the organization knows and when
  • Customer communication strategy drafted
  • Decision on notification scope (which Principals, which Board authorities)

Hours 24–48 β€” Investigation and decisions

  • Forensic findings consolidated β€” what happened, how, when, what was accessed
  • Affected-Principal list compiled
  • Notification content drafted β€” both Board notification and Principal notifications
  • Legal review of notification content
  • Regulator notification format prepared per Board rules
  • Principal notification channels selected (email primary, SMS for high-priority, app notification for in-product)
  • Executive briefing, decision on external communications (public statement, press)

Hours 48–72 β€” Notification

  • Formal notification to Data Protection Board via prescribed channel
  • Notifications to affected Principals begin
  • Public statement issued if breach is likely to become public anyway
  • Customer-success and support teams briefed, prepared for inbound questions
  • Ongoing forensic work continues β€” notification is not the end of investigation
  • Additional notifications planned as scope expands (amendment notifications are permitted)
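The three phases above can be encoded as a simple milestone tracker. The phase boundaries come from the runbook; the rest is an illustrative sketch:

```python
from datetime import datetime, timedelta

# Phase boundaries taken from the runbook; names match the section headings.
PHASES = [
    ("containment and scope", timedelta(hours=0), timedelta(hours=24)),
    ("investigation and decisions", timedelta(hours=24), timedelta(hours=48)),
    ("notification", timedelta(hours=48), timedelta(hours=72)),
]

def current_phase(awareness: datetime, now: datetime) -> str:
    """Which runbook phase the team should be in, given time since awareness."""
    elapsed = now - awareness
    if elapsed < timedelta(0):
        return "pre-awareness"
    for name, start, end in PHASES:
        if start <= elapsed < end:
            return name
    return "window lapsed"
```

Wiring this into the incident channel as an hourly bot post is a cheap way to keep the whole team anchored to the same clock.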

What to put in the notification

The Rules specify notification content. Paraphrased:

  • Nature of the breach β€” what happened
  • Personal data affected (categories and approximate volume)
  • Approximate number of Data Principals affected
  • Probable consequences for affected Principals
  • Measures being taken or proposed to mitigate possible adverse effects
  • Contact details of the DPO or designated grievance officer
  • Any other information prescribed
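The required content maps naturally onto a structured record that can back both the Board notification and a pre-submission completeness check. The field names below paraphrase the list above and are illustrative, not the Board's prescribed format:

```python
from dataclasses import dataclass

@dataclass
class BreachNotification:
    nature_of_breach: str                # what happened
    data_categories: list[str]           # categories of personal data affected
    approx_record_volume: int
    approx_principals_affected: int
    probable_consequences: str
    mitigation_measures: str             # taken or proposed
    dpo_contact: str                     # or designated grievance officer
    additional_info: str = ""            # any other prescribed information

    def missing_fields(self) -> list[str]:
        """Mandatory fields still empty; run before submission to the Board."""
        required = ["nature_of_breach", "probable_consequences",
                    "mitigation_measures", "dpo_contact"]
        missing = [f for f in required if not getattr(self, f)]
        if not self.data_categories:
            missing.append("data_categories")
        return missing
```

A check like this belongs in the pre-drafted template workflow, so a rushed 3 a.m. submission cannot go out with a blank mitigation section.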

Principal notifications must be understandable to non-technical readers. The letter a user reads on their phone must be clear about what was exposed, what it means for them, and what to do about it.

Common failure modes

  1. Awareness lag from detection to DPO. The SOC knows on Monday; the DPO hears about it on Thursday. Three days of the 72-hour clock burned on internal escalation.
  2. Certainty-before-notification delays. Waiting for forensics to confirm exfiltration before starting the notification process. 72 hours is not enough time for most forensic investigations; notification must happen with the information available and be updated as more is learned.
  3. Incomplete Principal list. Notifying some affected Principals but not all because the organization does not have a complete list of what data was in which systems.
  4. Panic communications. Over-notifying (including Principals not actually affected), under-notifying (missing categories of data), or mis-notifying (wrong details) β€” all driven by rushed decisions.
  5. No pre-drafted templates. Writing notification content from scratch during an incident. Templates should be drafted and legal-reviewed in advance.

Pre-breach preparation β€” what to do before anything happens

  • Breach-response playbook documented and accessible during an incident (not locked solely behind SSO-gated documentation that may be unreachable mid-incident)
  • Named response team with current contact details; backup for every role
  • Pre-drafted notification templates (Board, Principals, public statement, customer support FAQ) reviewed by legal
  • Relationship with a forensic-capable IR provider β€” retainer or pre-agreed terms β€” not scrambling to find one during an incident
  • Tabletop exercise at least annually, ideally semi-annually, with rotating scenarios
  • Data inventory accurate enough to quickly identify affected Principals
  • Detection infrastructure capable of surfacing credible breach signals within hours, not days


If you are mid-incident and need breach response support, contact us β€” we can engage within hours on emergency IR retainers and have walked Indian clients through multiple DPDP-era breach notifications.