
VAPT Services in India: The Complete Buyer's Guide (2026)

Manish Garg, Associate CISSP · RingSafe
April 19, 2026
12 min read

Every 90 days, a new vulnerability class breaks something that your last VAPT called "secure." That is not a failure of testing. It is the shape of the discipline. What separates a useful penetration test from an expensive PDF is whether the engagement is designed around that reality – and whether the people running it understand your business, not just your open ports.

This is the page we wish existed when Indian founders, CTOs, and security leads sit down to scope a VAPT for the first time. It covers what VAPT actually is, what it should cost, what a good report looks like, how it differs from a vulnerability scan, how to avoid checkbox-only engagements that satisfy auditors but not attackers, and how to choose a testing partner without getting burned. If you only have ten minutes, skim the section headings – they are written as the questions buyers ask us in sales calls, in the order they ask them.

What VAPT is – and what it is not

VAPT stands for Vulnerability Assessment and Penetration Testing. In practice the acronym bundles two activities that serve different goals and should be priced, scoped, and consumed differently.

A vulnerability assessment is a breadth exercise. It enumerates weaknesses – missing patches, insecure configurations, weak ciphers, outdated libraries – across a defined asset list. It is largely tool-driven, repeatable, and produces a list of findings scored by severity. Its job is coverage.

A penetration test is a depth exercise. It starts from a defined scope and adversary model and asks: given these constraints, can a skilled attacker compromise something that matters? It chains findings into attack paths, demonstrates real impact, and produces evidence an engineering team can act on without debate. Its job is proof.

A proper VAPT engagement runs both and integrates them. The assessment identifies the terrain; the pentest walks it. Engagements that skip the assessment often miss low-severity findings that matter only in combination. Engagements that stop at the assessment produce lists no one acts on, because "268 medium-severity findings" is not a narrative a CTO can prioritize.

When you actually need a VAPT

There are four legitimate reasons to run one. All of them should be named in the statement of work, because the reason determines the scope, the depth, and what a "successful" engagement means.

  • Pre-release or post-release assurance – you have built something and want independent confirmation that it does not have trivial-to-exploit flaws before it meets the internet (or has met it).
  • Compliance – PCI DSS 4.0 Requirement 11.4, ISO 27001:2022 A.8.8, SOC 2 CC4.1, RBI's cybersecurity framework, or SEBI's System Audit Report obligations all require some form of security testing. DPDP Act §8(5) obliges Data Fiduciaries to implement "reasonable security safeguards" – regulators have not yet prescribed VAPT by name, but after the first enforcement actions land it will be table stakes.
  • Customer / procurement pressure – your enterprise buyer sent you a vendor security questionnaire that asks for the VAPT report date and scope, and you cannot close the deal without one.
  • Post-incident – you had a breach (or a near-miss) and you need to know whether the blast radius extended beyond what you already found.

If none of these apply, a VAPT is probably premature. Run a vulnerability scan with Nuclei or OpenVAS, fix the critical findings, and revisit when one of the four triggers actually applies.
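
As a concrete starting point for that interim step, here is a minimal sketch of what "run a vulnerability scan" can look like, assuming the nuclei binary is installed and on your PATH; the target URL is a placeholder.

```python
import subprocess

# Minimal sketch: scan a single target with Nuclei and keep only the
# findings worth fixing immediately. Assumes `nuclei` is installed and
# its templates are current; the URL below is a placeholder.
target = "https://app.example.com"

subprocess.run(
    ["nuclei", "-u", target, "-severity", "critical,high", "-o", "findings.txt"],
    check=True,
)
print("Scan complete - triage findings.txt, criticals first.")
```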

Types of VAPT engagements

Web application penetration testing

The most common engagement. Modern web app tests go beyond the OWASP Top 10. Business-logic flaws – race conditions in payment flows, IDOR chains across tenant boundaries, broken authorization on GraphQL resolvers – dominate the findings that matter commercially. A tester who cannot read your code or reason about your domain model will miss these and charge you the same.
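
To make "race conditions in payment flows" concrete, the sketch below fires the same single-use coupon redemption concurrently and counts the successes. It is an illustrative probe, not a definitive methodology; the endpoint, payload, and token are hypothetical.

```python
import requests
from concurrent.futures import ThreadPoolExecutor

# Hypothetical single-use coupon endpoint; a correctly locked flow should
# accept exactly one of these concurrent redemptions.
URL = "https://api.example.com/v1/coupons/redeem"
HEADERS = {"Authorization": "Bearer <test-user-token>"}

def redeem(_):
    resp = requests.post(URL, json={"code": "WELCOME50"}, headers=HEADERS, timeout=10)
    return resp.status_code

with ThreadPoolExecutor(max_workers=20) as pool:
    statuses = list(pool.map(redeem, range(20)))

# More than one 200 means the redemption check and the write are not atomic.
print(statuses.count(200), "successful redemptions out of", len(statuses))
```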

API penetration testing

APIs now carry more attack surface than UI in most SaaS products. The OWASP API Security Top 10 (2023 edition, revised 2025) is the baseline, but the real work is scoping: REST, GraphQL, gRPC, event-driven streams, and SDK-embedded internal APIs all require different methodologies. If your provider quotes you "API testing" as a line item without asking which protocols and which authentication models, assume they're going to run Postman against a Swagger file and stop.
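
The most common class of API finding is broken object-level authorization (API1 in the OWASP list). A minimal sketch of the core check – authenticate as one user, request another user's object – with hypothetical endpoint, ID, and token:

```python
import requests

# Hypothetical endpoint and IDs: authenticate as user A, then request an
# order that belongs to user B's test account.
BASE = "https://api.example.com/v1"
TOKEN_A = "Bearer <user-a-token>"
USER_B_ORDER = "ord_98231"  # object owned by a different test account

resp = requests.get(
    f"{BASE}/orders/{USER_B_ORDER}",
    headers={"Authorization": TOKEN_A},
    timeout=10,
)

# Anything other than a 403/404 means object-level authorization failed.
print(resp.status_code, "- expected 403 or 404")
```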

Mobile application penetration testing

Android and iOS are different engagements. Android covers reverse engineering of the APK, manifest analysis, traffic interception, storage-at-rest, and runtime hooking with Frida. iOS is constrained by the platform but requires jailbreak-on-device or emulator work for depth. OWASP MASVS L2 is the credible standard; anything described as "MASVS L1" is a surface-level pass.
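
To illustrate what "runtime hooking with Frida" means in practice, here is a minimal sketch using Frida's Python bindings to replace a certificate-pinning check at runtime on a connected Android device. The package, class, and method names are hypothetical placeholders, and it assumes the app is already running with frida-server on the device.

```python
import frida

# JavaScript injected into the app's process. CertPinner.verify is a
# hypothetical pinning check; real targets are found by reversing the APK.
JS = """
Java.perform(function () {
    var Pinner = Java.use("com.example.app.net.CertPinner");
    Pinner.verify.implementation = function (chain) {
        console.log("CertPinner.verify hooked - forcing true");
        return true;
    };
});
"""

device = frida.get_usb_device()              # requires frida-server on the device
session = device.attach("com.example.app")   # app must already be running
script = session.create_script(JS)
script.on("message", lambda message, data: print(message))
script.load()
input("Hook installed - intercept traffic now, press Enter to detach\n")
```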

Network penetration testing

External network tests enumerate the internet-facing perimeter and attempt to find footholds through exposed services. Internal network tests assume a foothold (a phished laptop, a rogue contractor) and ask how far laterally an attacker can move. For most Indian SMEs on AWS or Azure, the "internal network" is a VPC and the engagement blurs with cloud security testing – more on that below.

Cloud configuration review

Not strictly VAPT but commonly bundled. Reviews AWS/Azure/GCP accounts for misconfigurations – overly permissive IAM roles, public S3 buckets, internet-reachable RDS instances, unencrypted data stores, missing CloudTrail coverage. Scoped correctly, this is one of the highest-ROI engagements an Indian startup can run, because almost every major cloud breach of the last five years traced back to configuration, not code.
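
One example of what such a review automates, as a minimal boto3 sketch: flag S3 buckets with a missing or partial public access block. It assumes AWS credentials able to list buckets and read their public-access configuration, and it is one check out of the hundreds a full review covers.

```python
import boto3
from botocore.exceptions import ClientError

# One configuration check out of many: flag S3 buckets that have no
# public access block at all, or only a partial one.
s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        conf = s3.get_public_access_block(Bucket=name)[
            "PublicAccessBlockConfiguration"
        ]
        if not all(conf.values()):
            print(f"{name}: public access block only partially enabled: {conf}")
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            print(f"{name}: no public access block configured")
        else:
            raise
```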

Red team engagements

Distinct from pentesting. A red team engagement simulates a specific adversary (objective: "gain access to the production database") over a longer timeline (weeks to months), with realistic opsec, across people, process, and technology. Premature for most organizations. If your blue team has not yet failed a tabletop, a red team will produce findings you cannot act on.

Black-box vs grey-box vs white-box

A recurring sales-call question. The short answer: grey-box is almost always the right default.

Black-box engagements – where the tester starts with no knowledge – are intuitive but expensive per finding. Testers burn hours on reconnaissance and enumeration that a two-minute architecture walkthrough would collapse. The only legitimate use case is specifically testing "what can an internet attacker see from zero" – which is a narrow and usually secondary question.

White-box engagements – where the tester has full source code, credentials, and architecture documentation – are the inverse. They produce the deepest findings per hour but require engineering time to provide the access, and they are not what a customer questionnaire expects when it asks for a "pentest report."

Grey-box engagements give the tester realistic starting conditions: valid user accounts at several privilege levels, a high-level architecture diagram, and documentation. This mirrors the posture of a real attacker who has done reconnaissance or compromised a low-privilege account. You get depth on the findings that actually matter and efficient use of the billable hours.

What it costs

VAPT pricing in India spans a genuinely wide range, and most of that spread reflects wildly different quality. We wrote a separate deep-dive on this: How Much Does a VAPT Cost in India? A 2026 Pricing Guide. The short version:

  • ₹25,000–₹60,000: automated scan with a branded cover page. You are buying a Nessus export. Useful for the very early-stage startup that needs a document, not a finding.
  • ₹75,000–₹2,50,000: scoped boutique engagement – single web app or API, grey-box, 5–15 billable days, senior tester, real report. This is the band where most Indian SaaS companies should be.
  • ₹3,00,000–₹12,00,000: multi-asset or deep-scope engagement with a mid-tier consulting firm. Includes web, API, mobile, cloud review, and retest.
  • ₹15,00,000+: enterprise engagements with Big Four-adjacent firms. Heavy methodology, heavy documentation, often underwhelming depth-per-rupee. You are paying for the logo on the cover of the report because your enterprise buyer requires it.

The anti-pattern to avoid: the firm that will not commit to a scope before quoting a number, or will not show you a redacted prior report before you sign. Both are strong indicators you will receive a Nessus export regardless of what is written in the statement of work.

What a good VAPT report actually contains

A report is not just a deliverable. It is evidence, an instruction manual, and a compliance artefact rolled into one. It should serve three audiences: the CTO who needs to prioritize, the engineer who needs to fix, and the auditor who needs to verify. A good report has:

  • Executive summary – one page, plain language, named risks ranked by business impact, not CVSS. CVSS belongs in the findings table, not in the summary for the board.
  • Scope, methodology, and limitations – explicit. What was tested, what was not, what authentication contexts, what date range, what tools, and what the tester could not cover. Testers who omit the limitations section are selling you certainty they cannot deliver.
  • Findings – each with: unique ID, title, severity (CVSS 3.1 or 4.0), affected component, reproduction steps, evidence (screenshots, curl commands, HTTP traces), impact written in business terms, and specific remediation. "Implement input validation" is not remediation. "Parameterize the tenant_id query in orders_controller.rb:142, replace the direct string interpolation on line 145 with a bound parameter, and add a regression test that asserts no tenant-id-bearing query constructs the WHERE clause via string concatenation" is remediation. (A minimal illustration of the parameterization pattern follows this list.)
  • Attack chains – any non-trivial compromise should be narrated as a chain, not a set of independent findings. An IDOR plus a missing rate limit plus a password-reset email oracle is the story. Three separate low-severity tickets is the anti-story.
  • Strategic observations – patterns across the findings. If authorization is broken in five places, the root cause is probably not five separate coding mistakes. It is probably the absence of a central authorization layer in the framework. A good report names the pattern.
  • Retest section – every finding gets verified after the engineering team has shipped fixes. If the retest is sold as a separate engagement, you are being upsold.
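
The parameterization pattern referenced in the findings bullet, shown as a minimal Python sketch (the original finding example referenced Ruby; the table, column, and hostile input here are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, tenant_id TEXT)")

tenant_id = "acme' OR '1'='1"  # the kind of hostile input a tester sends

# Vulnerable: string interpolation lets the input rewrite the WHERE clause.
# conn.execute(f"SELECT * FROM orders WHERE tenant_id = '{tenant_id}'")

# Remediated: a bound parameter keeps the input as data, never as SQL.
rows = conn.execute(
    "SELECT * FROM orders WHERE tenant_id = ?", (tenant_id,)
).fetchall()
print(rows)  # [] - the hostile string matches nothing
```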

How long a good engagement takes

For the standard Indian SaaS case (one web app, one API, grey-box, two user roles): 10 to 15 working days of active testing, plus three to five days of report writing, plus a one-week retest window after fixes. Anything faster is a scan. Anything slower, without scope to justify it, is billable-hour inflation.

Choosing a VAPT partner – the five questions

We are a VAPT firm, so assume bias. With that disclosure: these are the five questions we tell buyers to ask every provider they evaluate, including us. Providers who dodge more than one should be disqualified.

  1. Who, specifically, will test my application? Ask for the tester's name, certifications, and a link to public work – CVE disclosures, CTF profiles, blog posts, conference talks. Firms that staff engagements from a pool of interchangeable juniors do not want to answer this question.
  2. Can I see a redacted prior report in my industry? A competent firm has redacted samples it will send. Insist on seeing one before signing. If the finding descriptions look like ChatGPT output, the engagement will produce the same.
  3. What happens on day one? A credible provider can describe the first day's activities concretely – kickoff, scope confirmation, access provisioning, tool setup, initial reconnaissance. Vague answers mean there is no repeatable methodology.
  4. How do you handle findings during the engagement? Critical-severity findings should be communicated within hours of discovery, not held for the final report. If the provider does not have a same-day escalation path, they are optimizing for their reporting cadence, not your risk.
  5. What is your retest policy? Unlimited retests within 30 days of final report delivery is the standard any serious firm should meet. "Retest available for additional fee" tells you their business model depends on findings not being fixed.

Compliance mapping for Indian regulators

A single VAPT engagement, scoped correctly, can satisfy multiple overlapping regulatory requirements. The most common combinations we see in Indian engagements:

  • RBI Cyber Security Framework (for regulated entities) – requires annual VAPT of critical applications. Scope must include the internet-facing perimeter, core banking integrations, and customer-facing channels.
  • SEBI System Audit Report (for market intermediaries) – mandates VAPT of trading and related systems at a specified cadence.
  • PCI DSS 4.0 – application-layer testing under Requirement 6.4.1 and penetration testing under Requirement 11.4. External vulnerability scans quarterly by an ASV (Requirement 11.3.2); internal and external penetration tests annually and after significant change.
  • DPDP Act 2023 §8(5) – what counts as "reasonable security safeguards" has not yet been prescribed, but post-enforcement we expect VAPT to be positioned as the de facto evidence standard, particularly for Significant Data Fiduciaries.
  • ISO 27001:2022 A.8.29 – security testing in development and acceptance, with explicit reference to penetration testing for internet-exposed systems.
  • SOC 2 Trust Services Criteria CC4.1 and CC7.1 – continuous monitoring and assurance; VAPT reports are the standard evidence used to satisfy these criteria.

Frequency – how often to test

The regulator-minimum answer is "annually." The risk-minimum answer depends on how fast you ship. A reasonable heuristic for a product under active development:

  • Annual full-scope VAPT for audit coverage and comprehensive baseline.
  • Targeted VAPT on any material architectural change – new authentication flow, new tenant model, migration to a new cloud provider, first exposure of an internal API.
  • Continuous vulnerability scanning between engagements – Nuclei, Burp Enterprise, or a managed scanner running weekly against production.
  • Dependency and SBOM monitoring continuously – Snyk, Dependabot, or equivalent. Most of what breaks next year will be in your dependencies, not your code.

RingSafe's VAPT methodology

We run grey-box by default. Our standard engagement structure is: kickoff and scope confirmation on day 1, architecture walkthrough with the engineering team on day 2, active testing days 3–12, daily critical-finding escalations via the shared Slack or Teams channel, draft report delivery on day 15, retest window days 16–22, final report with retest-verified statuses on day 23. All engagements include up to 30 days of unlimited retests on the findings we raised, at no additional cost.

Our deliverables are the full report, the evidence archive (traffic captures, screenshots, scripts we built during the engagement), a Jira-ingestable CSV of findings, and a one-hour debrief with your engineering team after delivery. Compliance sign-off letters for specific auditors are included on request.

Start here

If you are scoping your first VAPT and want a second opinion on whether the quotes you are receiving match the work you actually need, book a 30-minute scoping call. We will review your architecture, the proposed scopes, and what is and is not worth paying for – no obligation to engage us.