Is Your AI Caller DPDP-Compliant? A 2026 Field Guide for Indian Businesses

The Digital Personal Data Protection Act 2023 received presidential assent on 11 August 2023. Two and a half years later, most Indian businesses making outbound calls are non-compliant — and most of them don't know it.
This is not a fringe problem. It applies to every D2C brand confirming a COD order, every NBFC sending an EMI reminder, every clinic calling patients about lab results, every real estate developer following up on a portal lead. If you are dialling a customer in 2026, you are processing their personal data. If you are doing that with an AI voice agent, you are processing it at scale, with recordings, transcripts, and structured extracted fields. The DPDP Act has something to say about all of this.
Most founders we speak to react to "DPDP" the way they once reacted to GST — eyes glaze over, somebody on the team is supposed to handle it, the assumption is that enforcement is years away and the law is mostly aspirational. That is a misread. The Data Protection Board is being constituted. Notification of the rules is happening in phases. And the penalty ceilings — up to ₹250 crore for a single failure of security safeguards — were not chosen to be ignored.
This guide is a field manual. Not a legal opinion, not a compliance checklist that pretends every business is identical. It is a structured walk through what the law actually says about phone calls, where AI calling deployments fail in practice, and what a compliant calling stack looks like in 2026 India.
What the DPDP Act Actually Says About Phone Calls
Most of the public discussion of DPDP focuses on websites, cookies, and SaaS data flows. The law itself is broader. It governs the processing of personal data by anyone collecting it from a "data principal" — which is regulator-speak for "the person whose data it is."
A phone call is data processing. A phone call recorded is more data processing. A phone call where an AI extracts the customer's name, intent, sentiment, and transaction history into a CRM is large-scale, automated, structured data processing. The DPDP Act applies to all of it.
The sections that matter most for a calling programme are these.
Section 4 establishes the foundation: personal data may only be processed for a lawful purpose with valid consent or under a defined legitimate use. There is no "we've always called our customers" exemption. If you cannot point to a specific lawful purpose and the consent or legitimate-use ground that authorises it, you are processing data without authority.
Section 6 defines what valid consent looks like. It must be free, specific, informed, unconditional, and unambiguous, expressed by clear affirmative action. The crucial words are specific and informed. A buried checkbox at signup that says "I agree to receive communications" does not authorise a marketing call eighteen months later about a different product. Consent has to name the purpose. If it doesn't, you don't have it.
Section 7 is the section that saves most transactional calling programmes from collapse. It defines "legitimate uses" — situations where processing is permitted without fresh consent. The most relevant of these for outbound calling: where the data principal has voluntarily provided their data for a specified purpose, and the processing is for that purpose. A customer who placed a COD order has voluntarily provided a phone number for the order. Calling that customer to confirm the order is processing for the purpose for which the data was provided. That is a legitimate use. You do not need a separate consent record for the COD confirmation call.
The same logic does not extend to: a marketing call about a new product, a cross-sell call for a different category, an upsell embedded inside the confirmation call, or a feedback survey about something other than this specific transaction. Those are not "for that purpose." For those, you need consent under Section 6.
Sections 11 to 14 establish data principal rights: access, correction and erasure, grievance redressal, and nomination. The right that matters most operationally for a calling programme, though, sits back in Section 6(4): the right to withdraw consent at any time, with ease comparable to the ease with which it was given. If a customer can opt in with one tap on a checkout page, they must be able to opt out with comparable friction. They cannot be forced to email a generic mailbox, fill in a six-field form, or "speak to your relationship manager." The opt-out cascade — what happens after a customer says "remove me from your list" — is the single area where AI calling programmes most commonly fail.
Section 8(9) requires data fiduciaries to publish the contact details of a Data Protection Officer, or of a person able to answer questions about data processing, and Section 13 gives the data principal a right to grievance redressal within a prescribed timeline. For an AI calling programme, this means the customer must have a way to reach a human if the AI cannot resolve their concern.
A separate strand to track: sensitive personal data. The DPDP Act does not use the same explicit "sensitive data" category as GDPR, but the rules being notified create heightened obligations for data of children, financial information, health information, and biometric data. Health-tech companies, NBFCs, and insurance companies need to be particularly careful — their AI calls are routinely handling exactly the data the regulator is most protective of.
The Most Important Distinction in Indian Calling Compliance
If you take only one thing from this article, take this: there is a sharp legal distinction between transactional and promotional calls in India, and the distinction is not optional.
A transactional call is one that directly relates to a transaction the customer initiated. Examples: a delivery confirmation, a COD verification, an OTP for a payment, an EMI due reminder on an existing loan, an appointment confirmation for a service the patient booked, a shipment status update. These calls fall under Section 7 of DPDP (legitimate use, no fresh consent required) and are exempt from TRAI's Do Not Disturb registry. They can be placed to any customer who has an active service relationship, on 160-series numbers, at reasonable hours, and without DLT scrubbing on each campaign.
A promotional call is marketing, upsell, cross-sell, abandoned cart recovery, win-back, or any communication where the customer has not initiated a transaction. These calls require explicit consent under DPDP Section 6, must be DND-scrubbed under TRAI rules before every campaign, must use 140-series numbers, and are restricted to the 9am-9pm window.
The trap — and almost every consumer brand falls into it eventually — is mixing the two in the same call. A delivery confirmation call that ends with "and by the way, would you like to upgrade to the premium plan?" is not a transactional call. The presence of the upsell converts the entire call to promotional. Now it needs DND scrubbing, explicit promotional consent, a 140-series number, and DLT template approval. Most brands do not realise this until a customer files a complaint.
The cleanest architecture is to keep them separate: confirmation calls are a service workflow, upsell calls are a marketing workflow, and they live in different campaigns with different consent ground-rules, different number series, and different operational teams.
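This separation can be enforced in code rather than left to campaign managers' memories. A minimal sketch in Python, under stated assumptions: the purpose labels, the `CampaignSpec` shape, and the controls mapping are hypothetical illustrations rather than TRAI-issued values, and the classification of any real campaign still needs legal review.

```python
from dataclasses import dataclass

@dataclass
class CampaignSpec:
    purpose: str                         # e.g. "cod_confirmation", "cross_sell"
    contains_promotional_content: bool   # any upsell/offer anywhere in the script

# Illustrative purpose whitelist; a real one is maintained with legal sign-off.
TRANSACTIONAL_PURPOSES = {
    "cod_confirmation", "delivery_update",
    "emi_reminder", "appointment_confirmation",
}

def classify(spec: CampaignSpec) -> dict:
    """Classify a campaign and return the controls it must run under.
    Any promotional content converts the whole call to promotional."""
    is_transactional = (
        spec.purpose in TRANSACTIONAL_PURPOSES
        and not spec.contains_promotional_content
    )
    if is_transactional:
        return {"class": "transactional", "number_series": "160",
                "dnd_scrub": False, "consent_ground": "section_7_legitimate_use"}
    return {"class": "promotional", "number_series": "140",
            "dnd_scrub": True, "consent_ground": "section_6_consent",
            "calling_window": ("09:00", "21:00")}
```

The key design choice is that any promotional content anywhere in the script flips the whole campaign to promotional, mirroring the upsell-in-confirmation trap described above.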
The Five Things Your AI Caller Must Actually Do
Compliance with DPDP for an AI calling programme reduces, in practice, to five operational requirements. Each one looks simple. Each one is where deployments fail.
One: identify itself as an AI within the first thirty seconds of the call. This is not yet a hard DPDP rule, but the trajectory is unmistakable — TRAI has signalled it, the EU AI Act requires it, and the regulator's question of "did the customer know they were speaking to an AI?" is becoming the gating test in complaint resolution. Deploy this now. A line as simple as "Namaste, main Caller Digital ki taraf se ek AI assistant bol raha hoon" covers the disclosure, and customer acceptance rates remain in the 82-88% range when it is made naturally.
Two: state the purpose of the call clearly, in the same opening. The customer must know within the first thirty seconds what the call is about. "Aapne kal ek order place kiya tha — main yeh confirm karne ke liye call kar raha hoon" is purpose-specific. "Hum aapse ek important baat karna chahte hain" is not. The latter creates a Section 6 problem because the customer cannot give informed consent to a call whose purpose is undisclosed.
Three: log consent at the right touchpoint, which is almost never the AI call itself. The mistake most teams make is treating the AI call as the consent moment — "the customer didn't hang up, so they consented." That is not what the law requires. Consent must be collected at the original data-collection event: the form submission, the checkout, the loan application, the service signup. The CRM record must store: what was consented to, when, by what mechanism, and against what version of the terms. The AI call is downstream of that record. If the consent record is missing or vague, the call is not authorised, regardless of how the customer behaves on the call.
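The consent record described above can be sketched as a small data structure. Everything here, from the class name to the field names and purpose strings, is a hypothetical illustration of the shape such a record might take:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ConsentRecord:
    principal_id: str            # internal customer identifier
    purpose: str                 # the specific purpose consented to
    granted_at: datetime         # timestamp of the affirmative action
    mechanism: str               # e.g. "checkout_checkbox_v3"
    notice_version: str          # privacy-notice version in force that day
    withdrawn_at: Optional[datetime] = None

    def authorises(self, call_purpose: str) -> bool:
        """A call is authorised only if the purpose matches exactly
        and the consent has not since been withdrawn."""
        return self.withdrawn_at is None and self.purpose == call_purpose
```

The AI call then carries a reference to an existing record rather than creating one: if `authorises()` returns `False` for the campaign's purpose, the dial should never happen.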
Four: make opt-out frictionless and propagate it fast. If a customer says "don't call me again," that statement must (a) be detected reliably by the AI, (b) flow into the CRM as a do-not-call flag, (c) update the campaign suppression list, and (d) propagate across all channels — voice, SMS, WhatsApp, email — within a defined window. Twenty-four hours is the operational benchmark; the law's "as easy as giving" standard is increasingly being read as "near-real-time." A customer who opts out today and gets called again tomorrow is a complaint waiting to happen, and the complaint's evidentiary core is your own call recording.
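The four-step cascade (detect, flag, suppress, propagate) can be sketched with in-memory stand-ins for the real CRM and channel gateways. All names here are hypothetical; a production version would call real integrations and persist the timestamps for audit:

```python
from datetime import datetime, timezone

CHANNELS = ("voice", "sms", "whatsapp", "email")

# Minimal in-memory stand-ins for real CRM / gateway integrations.
crm_dnc_flags: set[str] = set()
suppression_list: set[str] = set()
channel_suppressions: dict[str, set[str]] = {c: set() for c in CHANNELS}

def propagate_opt_out(phone: str) -> dict[str, datetime]:
    """Fan an opt-out out to every system that could otherwise dial,
    message, or mail the customer again, timestamping each hop so the
    elapsed time to full propagation is auditable."""
    stamps: dict[str, datetime] = {}
    crm_dnc_flags.add(phone)                       # (b) CRM do-not-call flag
    stamps["crm"] = datetime.now(timezone.utc)
    suppression_list.add(phone)                    # (c) campaign suppression list
    stamps["suppression"] = datetime.now(timezone.utc)
    for channel in CHANNELS:                       # (d) every outbound channel
        channel_suppressions[channel].add(phone)
        stamps[channel] = datetime.now(timezone.utc)
    return stamps
```

Returning a timestamp per hop makes the "elapsed time to full propagation" metric directly measurable against the 24-hour benchmark.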
Five: store data in India, particularly if you are processing health, financial, or other heightened-risk data. The DPDP Act gives the central government the power to designate "significant data fiduciaries" with additional obligations including data localisation. Even before that designation arrives, sectoral regulators (RBI, IRDAI, MoHFW for ABDM) already require Indian data residency for relevant data types. For AI calling, this means: call recordings, transcripts, extracted personal fields, consent records, and the underlying language model fine-tuning data should all sit on Indian infrastructure. A US-hosted call recording of a patient discussing a lab result is, in 2026, indefensible.
Where AI Calling Deployments Quietly Fail
The compliance failure modes we see most often are not dramatic. They are quiet. They look fine until a complaint arrives.
The first is stale consent. A D2C brand collected consent at checkout in 2022 that said "I agree to receive order updates and offers." In 2024, that same number is being called for a new product launch in a different category, marketed by a sister brand under the same parent. The original consent does not authorise the second call. Section 6's "specific" and "informed" requirements were not met. The brand never thought about it.
The second is data residency drift. A health-tech startup signed up for a global voice AI platform whose recordings are stored on US-based infrastructure. The recordings include patients discussing diabetes management, mental health, prenatal care. The product team didn't notice; they were focused on call quality. The compliance team didn't notice because they assumed "voice" was outside the data-storage scope. Both were wrong. Health data on non-Indian servers is a problem that grows with every passing call.
The third is the broken opt-out cascade. Customer says "remove me from your list" to the AI. The AI marks the call as "DNC requested." The disposition flows to the CRM. The CRM has a do-not-call flag. The flag exists. But the next campaign, run by a different team using a different segmentation logic, doesn't read the flag because their query joins on a different field. The customer gets called the next day. Two weeks later, the complaint arrives.
The fourth is the missing grievance pathway. The AI call ends. The customer is unhappy with the resolution offered. They want to complain. The IVR they are routed to plays a thirty-second corporate jingle and then asks them to "please visit our website." The website's contact page lists a customer support email. The grievance officer is buried three clicks deep. The "shall publish" and "shall respond" obligations of Sections 8(9) and 13 are not satisfied by "shall make findable with sufficient effort."
The fifth is the upsell-in-confirmation problem. A logistics partner calls to confirm a delivery time and at the end of the call offers a "5% off" coupon for the next order. The promotional content converts the call's classification. The campaign was never DND-scrubbed because it was filed as transactional. The next time TRAI runs an audit, the violation is sitting in the call recording.
None of these are exotic. All of them are routine. All of them are fixable with operational discipline before the regulator arrives.
Sectoral Obligations Stack on Top
DPDP is the floor. For most regulated sectors, there is more on top.
BFSI carries the heaviest overlay. The RBI Fair Practices Code requires caller identity disclosure within thirty seconds, restricts collection calls to 8am-7pm, prohibits intimidation in language and tone, and requires that calls be recorded and retained. The DPDP layer adds: financial data is heightened-risk, consent must specifically name the financial product, and any call about a different product needs fresh consent. For EMI reminder calls and BFSI calling programmes, this means the consent record at loan origination must be granular — collection calls, marketing calls, cross-sell calls all separately authorised — and the AI call must be classifiable against one of those granular consents.
Healthcare is the sector where the gap between current practice and compliance is widest. Health data is sensitive in every framework that matters. ABDM (Ayushman Bharat Digital Mission) creates a separate consent layer for ABHA-linked records. The DPDP rules being notified are likely to require explicit opt-in for any call where a health condition or diagnostic result is referenced. Most clinics are calling patients about lab results today on the strength of "we always have." That is not enough.
Insurance has IRDAI's regulatory framework on top of DPDP. Mandatory disclosures must be made on every sales call regardless of consent — the company name, the agent or AI identifier, the policy details being discussed, the recording notice, the cooling-off period. IRDAI's regulations on misselling have teeth, and the AI script's word-for-word fidelity is actually an asset here: the AI cannot deviate from the approved disclosures the way a human agent might.
D2C and e-commerce have the simplest compliance picture. Order and delivery calls are transactional. Cart recovery and upsell are promotional. Keep them separate, and most of the work is done. The exception: D2C brands with subscription models, where billing reminders, renewals, and cross-category offers blur the line. Treat each communication as a separate compliance question.
Real estate is the trickiest because RERA and DPDP create different focal points. RERA requires script-level disclosures (project name, registration number, no claims that differ from filings). DPDP focuses on consent and data flow. The AI calling script must satisfy both — and because the same call may discuss a project the customer enquired about (Section 7 legitimate use) and a different project the developer wants to push (Section 6 consent required), the line-walking is delicate.
Collections and recovery is sectorally regulated by the RBI Fair Practices Code and the recently strengthened BNS (Bharatiya Nyaya Sanhita) provisions on harassment. The DPDP layer adds consent specificity. A consent record for "loan servicing communications" arguably covers reminders; it does not cover the kind of language that crosses into harassment, regardless of consent.
The Consent Architecture That Actually Works
A compliant AI calling programme rests on five layers, each visible to a regulator on inspection.
Layer one — at acquisition. Granular consent checkboxes at the point of data collection. Service communications, marketing, third-party sharing, profiling — each separately opted in. No bundling. No pre-ticked boxes. The privacy notice that the consent is given against must be versioned, archived, and recoverable.
Layer two — in the CRM. Every consent record is tagged with the purpose it covers, the date and timestamp, the channel (web form, app onboarding, in-store), and the version of the privacy notice in force at that moment. When the consent is later withdrawn, the withdrawal is recorded against the same record with its own timestamp and channel.
Layer three — at the calling platform. Before any campaign runs, it is classified: transactional or promotional. Promotional campaigns are filtered through the DND registry and the brand's internal suppression list before dialling. Transactional campaigns skip the DND filter (they are exempt) but still respect the internal suppression list. Each call placed is tagged with the consent record ID that authorises it.
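Layer three reduces to a single gate evaluated before every dial. A sketch with hypothetical argument names; note that for a transactional campaign, the "consent record" tagged here would typically reference the Section 7 legitimate-use basis rather than a Section 6 consent:

```python
def pre_dial_gate(phone: str, campaign_class: str,
                  dnd_registry: set, internal_suppression: set,
                  consent_index: dict) -> tuple[bool, str]:
    """Decide before dialling whether this number may be called, and
    if so, which consent/legitimate-use record the call is tagged with."""
    if phone in internal_suppression:               # always respected
        return False, "internal suppression list"
    if campaign_class == "promotional" and phone in dnd_registry:
        return False, "DND registry"                # transactional calls are exempt
    record_id = consent_index.get(phone)
    if record_id is None:
        return False, "no authorising record on file"
    return True, record_id
```

The internal suppression check comes first by design: an opt-out recorded by the brand overrides everything, including a transactional classification.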
Layer four — in the call itself. The AI's opening identifies the entity, identifies itself as an AI, states the purpose, discloses the recording. Mid-call, the AI listens for opt-out triggers — "don't call me," "remove my number," "stop these calls" — and treats the trigger as binding. End-of-call disposition includes any consent or opt-out actions taken.
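Mid-call opt-out detection can start as simple pattern matching on the live transcript. The trigger list below is an illustrative assumption; a production system would combine ASR output with an intent model and err on the side of treating ambiguous phrases as binding:

```python
import re

# Illustrative trigger phrases only; a real list is larger and multilingual.
OPT_OUT_PATTERNS = [
    r"don'?t call me",
    r"remove my number",
    r"stop (these )?calls",
    r"mujhe call mat karo",   # Hindi: "don't call me"
]

def detect_opt_out(utterance: str) -> bool:
    """Return True if the customer's utterance contains an opt-out trigger."""
    text = utterance.lower()
    return any(re.search(pattern, text) for pattern in OPT_OUT_PATTERNS)
```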
Layer five — in the audit log. Every call is queryable: who was called, when, by what campaign, against what consent record, with what disposition, and where the recording lives. When the Data Protection Board sends an inspection notice, this log is what determines whether you are compliant in two hours or compliant in two weeks.
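A minimal version of that queryable log, sketched here with SQLite and hypothetical column names, shows what "compliant in two hours" looks like: one key, one row, every inspection question answered.

```python
import sqlite3

# A minimal call-audit table: every dial joined back to the record
# that authorised it and the location of its recording.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE call_log (
        call_id TEXT PRIMARY KEY,
        phone TEXT, called_at TEXT, campaign_id TEXT,
        consent_record_id TEXT, disposition TEXT, recording_uri TEXT
    )""")
conn.execute("INSERT INTO call_log VALUES (?,?,?,?,?,?,?)",
    ("c-001", "+919800000000", "2026-01-15T10:30:00+05:30",
     "cod-confirm-jan", "consent-7781", "order_confirmed",
     "s3://in-mumbai-bucket/recordings/c-001.wav"))
conn.commit()

def audit_call(call_id: str) -> dict:
    """Answer the inspection question for one call: who, when, which
    campaign, under what authorising record, and where the recording lives."""
    cur = conn.execute("SELECT * FROM call_log WHERE call_id = ?", (call_id,))
    columns = [d[0] for d in cur.description]
    row = cur.fetchone()
    return dict(zip(columns, row)) if row else {}
```

Note that a recording URI pointing at India-region storage answers the residency question in the same query as the consent question.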
Caller Digital's platform is built around this architecture by default — Indian data residency, consent-record linkage on every dial, DND scrubbing integrated, transactional and promotional flows kept separate, opt-out propagation across CRM touchpoints. None of this is a feature add-on. It is the floor.
The Penalty Picture
The DPDP Act sets penalties via the schedule of penalties annexed to the Act. The numbers worth remembering:
- Up to ₹250 crore for failure to take reasonable security safeguards to prevent a personal data breach.
- Up to ₹200 crore for failure to give the Board notice of a breach.
- Up to ₹200 crore for breach of the obligations relating to children's data.
- Up to ₹150 crore for breach of the additional obligations placed on significant data fiduciaries.
- Up to ₹50 crore for breach of any other provision of the Act.
These are ceilings, not minimums. The Board has discretion to assess based on the nature, gravity, and duration of the breach, the harm caused, and whether the entity took mitigating steps. The regulatory direction in India consistently favours proportionate penalties for first violations and escalating penalties for repeat or systemic failures. Treat the ceilings as a sense of how seriously the law was drafted, not as the expected outcome of any single inspection.
The non-monetary risk is more immediate. Reputational damage from a high-profile complaint travels fast on Indian consumer Twitter. Loss of customer trust translates directly into churn. And in regulated sectors, repeat compliance failures invite a different kind of regulatory attention — RBI, IRDAI, or the Drug Controller — whose tools are licence conditions and not just monetary fines.
A One-Hour Audit You Can Do Today
If your AI calling programme is live and you are not certain it is DPDP-compliant, here is the one-hour audit. Print it out, walk through it with your ops lead and your legal counsel, and write down which boxes you can tick honestly.
- Can you produce, for any specific call placed in the last 30 days, the consent record that authorised it? If you can't, you have a Section 6 problem.
- Is your privacy notice published, dated, and version-controlled? Can you produce the version of the notice that was in force on the day a given consent was collected?
- Are your transactional and promotional campaigns operationally separate — different campaign IDs, different script templates, different number series, different teams? If they share infrastructure, you have a classification risk.
- Where are your call recordings stored, by physical server location? If the answer is anything other than "India," you have a residency exposure on regulated data types.
- When a customer says "don't call me again" to your AI, what is the elapsed time before that opt-out is propagated to all your outbound channels? If the answer is more than 24 hours, you have a Section 6(4) problem.
- Is the contact information of your grievance officer published on the website at a click depth of 1 or 2 from the homepage? If it requires effort to find, Section 8(9) is not satisfied.
- Have you classified your data processing under DPDP — are you a "data fiduciary," and if your scale is large enough, have you considered whether the central government may designate you a "significant data fiduciary" with additional obligations?
- Is your AI's opening line compliant — entity identification, AI disclosure, purpose statement, recording notice, all within 30 seconds?
- For your sectoral overlay (BFSI, healthcare, insurance, etc.), have you mapped the additional sector-specific call requirements onto the AI script? Is the mapping documented?
- Do you have a quarterly internal review of consent records, opt-out propagation latency, recording residency, and complaint volumes? If compliance is not on a recurring cadence, it will drift.
If you ticked seven or more boxes, your programme is in better shape than most. If you ticked four or fewer, you have work to do, and the work is best done before a complaint forces it.
What to Build Now, What to Build Later
The pragmatic prioritisation, in our experience working with Indian businesses through this transition: build the consent architecture first, then the opt-out cascade, then data residency, then the audit log. Sectoral overlays come last because they are sector-specific and well-understood by the teams that already operate in the regulated space.
The one piece of advice we give every founder asking us about DPDP: do not wait for full enforcement before getting compliant. The first wave of Data Protection Board inspections will look at the most visible offenders, but the second and third waves will sweep everyone. Compliance built quickly under regulatory pressure is expensive, ugly, and disruptive. Compliance built carefully over a six-month window is none of those things, and it positions you to use compliance itself as a competitive signal — particularly with enterprise customers who are themselves under DPDP scrutiny and want their vendors to have already done the work.
For Indian D2C brands, NBFCs, healthcare providers, and insurance companies looking at AI calling in 2026, the compliance question is not whether to deploy. It is which platform deploys with the regulatory architecture already built in. The cost of retrofitting compliance onto a non-compliant calling stack — across consent records, residency, audit logs, and opt-out cascades — typically exceeds the cost of switching platforms. Choose accordingly.
Caller Digital's AI caller platform for India is designed around exactly this premise: that compliance for Indian businesses is not a feature you add later but the foundation everything else is built on. If your current AI calling vendor cannot answer the ten audit questions above to your satisfaction in a single conversation, it is worth a second look.