A VAPT report is the tangible deliverable that decides whether the engagement produced value. Auditors, boards, enterprise buyers, and engineering teams all read it, each for different reasons. A report that serves every one of those readers is hard to write and easy to distinguish from one that does not. This is what a good VAPT report contains, with an annotated structural template you can use to evaluate any report you receive.
The six audiences
- The CTO or engineering leader: needs to understand business risk, prioritize fix effort, and track remediation
- The engineer who will fix a specific finding: needs reproduction steps, root cause, and specific remediation guidance
- The security team: needs attack-chain narrative, methodology transparency, and detection/prevention recommendations
- The compliance officer or auditor: needs methodology, scope, coverage, and evidence for regulatory submissions
- The board or investor: needs a one-page executive view framed in risk and readiness
- The enterprise buyer: reviewing during a procurement cycle, needs to confirm scope, date, and competence of tester
A report serving only one audience fails the others. A report serving all six is specifically structured.
The structural template
1. Cover page and executive summary (2 pages)
Report title, client name, engagement dates, scope summary, tester names. Executive summary follows on a single page: plain-language description of what was tested, how many findings by severity, the top 3 business risks, the overall posture assessment (typically on a 0–5 or letter-grade scale), and the recommended priority actions. No CVSS scores on this page: this is for the board, not the security team.
2. Scope and methodology (3–5 pages)
Explicit scope: in-scope assets, in-scope user roles, authenticated vs unauthenticated coverage, environments tested (production, staging, dev). Out-of-scope items explicitly listed. Methodology: standards followed (OWASP Top 10, OWASP API Top 10, MASVS, PTES), techniques used, tools employed, hours allocated by phase. Limitations: what the tester could not cover, and why, whether due to time, access, or scope constraints.
Sign this section: a tester who will not commit to methodology in writing will not commit to it in practice.
3. Findings summary table (1–2 pages)
Every finding in a single table. Columns: ID (e.g., RSF-01), title, severity (Critical/High/Medium/Low/Informational), CVSS score, affected component, status (New/Fixed/Risk Accepted/Not Exploited in Retest). Sortable and filterable for engineering team use. This is the table that goes to Jira.
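A row in that table can be thought of as a small structured record; the sketch below is a hypothetical schema (the field names and sample values are illustrative, not a standard) showing why a consistent table sorts and filters cleanly for the engineering team:

```python
from dataclasses import dataclass

# Hypothetical finding record mirroring the summary-table columns.
# Field names and sample data are illustrative, not a standard schema.
@dataclass
class Finding:
    id: str          # e.g. "RSF-01"
    title: str
    severity: str    # Critical / High / Medium / Low / Informational
    cvss: float
    component: str
    status: str      # New / Fixed / Risk Accepted / Not Exploited in Retest

findings = [
    Finding("RSF-01", "IDOR on order lookup", "High", 8.1,
            "orders API", "New"),
    Finding("RSF-02", "Missing rate limit on login", "Medium", 5.3,
            "auth service", "Fixed"),
]

# The engineering-team view: open items, highest risk first.
open_by_risk = sorted((f for f in findings if f.status == "New"),
                      key=lambda f: f.cvss, reverse=True)
```

Any report whose findings follow one fixed column set can be exported this way, which is what makes the table importable into Jira without manual rework.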
4. Attack chain narratives (2–5 pages)
For compromises that required chaining multiple findings, the narrative: starting state, finding 1 exploited, state after, finding 2 exploited, state after, and so on. This is where non-trivial engagements produce the most valuable output. An IDOR plus a missing rate limit plus an email oracle is a story. Three separate tickets are not.
5. Detailed findings (the bulk of the report)
One subsection per finding, following a consistent template:
- ID and title (concise, descriptive)
- Severity, CVSS 3.1 or 4.0 score
- Affected component (specific: file, endpoint, class)
- Description: what the finding is, in plain language
- Business impact: what happens if exploited, in business terms
- Reproduction steps: numbered, with every command, HTTP request, and tool invocation
- Evidence: screenshots, response captures, tool output; redacted where sensitive
- Root cause: why this exists (framework default, missing validation, design flaw, etc.)
- Remediation: specific fix at code or configuration level; not “implement input validation” but “in OrdersController.rb, replace the string interpolation at line 142 with a bound parameter and add a regression test”
- References: relevant CWE, OWASP, vendor advisories
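The interpolation-versus-bound-parameter distinction in that remediation bullet is worth seeing concretely. This is a generic illustration in Python with sqlite3, standing in for whatever stack the real report would cite; the table and function names are invented:

```python
import sqlite3

# Illustrative data, not from any real engagement.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, owner TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, "alice"), (2, "bob")])

def get_order_vulnerable(order_id: str):
    # BAD: string interpolation puts user input into the SQL grammar --
    # the pattern a finding would flag.
    return conn.execute(
        f"SELECT id, owner FROM orders WHERE id = {order_id}").fetchall()

def get_order_fixed(order_id: str):
    # GOOD: a bound parameter keeps the input as data -- the level of
    # specificity the remediation bullet asks for.
    return conn.execute(
        "SELECT id, owner FROM orders WHERE id = ?", (order_id,)).fetchall()
```

Passing a payload like `"1 OR 1=1"` dumps every row through the vulnerable path and matches nothing through the fixed one; a remediation section that shows both sides of a diff like this is one an engineer can act on without a follow-up call.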
6. Strategic observations (1–2 pages)
Patterns across the findings. If authorization is broken in 5 places, the root cause is probably an absent authorization layer, not 5 independent bugs. Good reports name the pattern and recommend architectural changes, not just individual fixes.
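The architectural fix for the broken-in-5-places authorization pattern is typically a single chokepoint every handler passes through, rather than per-endpoint ownership checks. A minimal sketch, with hypothetical names throughout:

```python
from functools import wraps

# Illustrative in-memory data; a real system would query a store.
ORDERS = {1: {"owner": "alice"}, 2: {"owner": "bob"}}

class Forbidden(Exception):
    pass

def requires_ownership(lookup):
    """One authorization layer: every decorated handler goes through
    the same ownership check instead of reimplementing it."""
    def decorator(handler):
        @wraps(handler)
        def wrapper(user, resource_id, *args, **kwargs):
            resource = lookup(resource_id)
            if resource is None or resource["owner"] != user:
                raise Forbidden(f"{user} cannot access {resource_id}")
            return handler(user, resource_id, *args, **kwargs)
        return wrapper
    return decorator

@requires_ownership(ORDERS.get)
def get_order(user, order_id):
    return ORDERS[order_id]
```

Five IDOR findings become one ticket against the missing layer plus a sweep to route existing endpoints through it, which is the kind of recommendation a strategic-observations section should make.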
7. Retest results (1–2 pages)
After the engineering team has shipped fixes and the retest window has completed, status for every finding: Fixed, Partially Fixed, Not Fixed, Risk Accepted. Retest evidence for each. A report without a retest section is a report sold half-complete.
8. Appendices
Tools and techniques detail. Tester certifications. Supporting evidence archives. Compliance-specific mapping tables (PCI DSS, ISO 27001, SOC 2) as appropriate.
Red flags in reports you receive
- Finding descriptions that are identical across reports: cut-and-paste from scanner output, not manual analysis
- No scope or methodology section, or a one-paragraph version: the tester does not have a repeatable methodology
- No retest section, or retests sold separately: the business model relies on findings not being fixed
- No attack chains, with every finding presented as independent: the tester may not have tried to chain exploits
- Business impact described only in CVSS terms: the tester cannot translate technical findings to business risk
- Remediation guidance at the level of “implement proper input validation”: not actionable; real remediation is specific
- No named tester, or a tester with unverifiable credentials: accountability and skill unclear
Where to see redacted examples
We maintain sample reports in varying formats (web app, API, cloud, mobile) and share them on request during scoping conversations. Every firm that takes its reporting seriously has these; firms that refuse to show them are signaling that you will not like what you receive.
Related reading
- VAPT Services in India: The Complete Buyer’s Guide
- How Much Does a VAPT Cost in India?
- VAPT vs Vulnerability Scan
To see a redacted sample of our VAPT reports, book a scoping call.