Almost every web vulnerability is, at its core, a trust-boundary bug. The developer believed something was trustworthy and wrote code that relied on that belief. The attacker proved otherwise. This module is about learning to see trust boundaries before you learn to attack them — because once you can see them, the specific techniques (SQLi, XSS, SSRF) become variations on a single idea.
Why this happens
Trust is cognitive, not technical. When developers write code, they carry implicit assumptions about what is “inside” and what is “outside” their system. Inside things are trusted — internal function calls, database rows, environment variables, co-located microservices. Outside things are not — user input, third-party APIs, incoming HTTP.
These mental maps get things wrong in two directions. First, the boundary is drawn in the wrong place: the browser is assumed to enforce something it doesn’t, the internal microservice is assumed to authenticate when it doesn’t, the HTTP header is assumed to come from the load balancer when it can be forged. Second, the boundary drifts: a field originally user-controlled becomes “stored and re-displayed” and is no longer treated as untrusted on the way back out.
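The second failure mode, boundary drift, can be shown in a few lines. This is a minimal sketch with hypothetical function names, using an in-memory list to stand in for a database: the value is attacker-controlled on the way in, but by the time it is rendered back out it is treated as "our data".

```python
import html

comments = []  # stands in for a database table of stored user comments

def save_comment(text: str) -> None:
    # On the way in, the value is "just a string" -- nothing has gone wrong yet.
    comments.append(text)

def render_unsafe() -> str:
    # Boundary drift: stored data is treated as trusted on the way back out,
    # so attacker markup becomes live HTML (stored XSS).
    return "".join(f"<p>{c}</p>" for c in comments)

def render_safe() -> str:
    # The stored value is still attacker-controlled; re-encode for the
    # output context every time it crosses back over the boundary.
    return "".join(f"<p>{html.escape(c)}</p>" for c in comments)

save_comment("<script>steal()</script>")
```

The fix is not "sanitize harder on input" but "re-establish the boundary on every output path", because storage does not launder trust.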
The attacker’s first job on any target is to find where the developer’s mental map of the trust boundary differs from reality. Every mismatch is a vulnerability waiting to be exploited.
How it happens
Concretely, trust-boundary bugs manifest in four patterns:
- Boundary missing entirely. Input from an untrusted source reaches a privileged sink without any check. Classic SQLi, command injection, deserialization RCE.
- Boundary enforced in the wrong place. Validation happens on the client; the server accepts whatever arrives. Hidden-field tampering, price manipulation in mobile apps.
- Boundary bypassed via alternate path. The main flow checks permissions; the API endpoint behind it doesn’t. Mobile clients call endpoints web clients don’t touch.
- Boundary assumed one-way when it’s not. Data sanitized on input for one context is later emitted into a different context where that sanitization means nothing — HTML-escaping does not protect a SQL query or a shell command. Or: an internal service trusts a user-controlled header because “the load balancer strips it” (except it doesn’t).
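The first pattern — no boundary at all between untrusted input and a privileged sink — can be sketched with sqlite3. Function names and the toy schema are illustrative, not from any real codebase:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")
conn.execute("INSERT INTO users VALUES ('bob', 1)")

def lookup_unsafe(name: str):
    # Untrusted input is concatenated straight into the query:
    # the attacker controls the query's structure, not just its data.
    return conn.execute(
        f"SELECT * FROM users WHERE name = '{name}'"
    ).fetchall()

def lookup_safe(name: str):
    # Parameterization is the boundary: the driver keeps the value
    # strictly on the data side of the query.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "' OR '1'='1"
lookup_unsafe(payload)  # returns every row -- the WHERE clause collapsed
lookup_safe(payload)    # returns nothing -- the payload is just a string
```

The same shape recurs in command injection and deserialization: the fix is always a mechanism that separates structure (trusted) from data (untrusted), not string-level filtering.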
Why we look
Seeing trust boundaries is the methodology that makes pentest reports useful instead of noisy. A report that says “Burp found XSS at /search” is less valuable than one that says “the search endpoint accepts untrusted input, stores it without output encoding, and the trust boundary between ‘stored data’ and ‘rendered HTML’ is missing — here is the proof, and here are seven other places in the same codebase where the same pattern exists.” The first finds one bug. The second finds a class of bugs.
What we find
- Code that trusts any field from request.body without schema validation
- Mobile-only or internal-only APIs that skip auth because “the main app calls them”
- HTTP headers used for authorization without verifying their integrity or origin (X-User-Id, X-Authenticated-User)
- Inter-service calls that accept any incoming claim (ambient trust inside a VPC)
- Stored data re-used in a different context without re-encoding (SQL column value rendered as HTML, JSON field used in shell)
- Client-side-enforced business rules (price, role, quantity) without server-side matching enforcement
- Reverse-proxy / CDN headers that the backend trusts but can be spoofed if the frontend is misconfigured
The mental model to adopt
When reviewing any piece of code or any API endpoint, ask three questions:
- Where is the data coming from? Not just the immediate parameter — the full chain. User → browser → proxy → load balancer → gateway → service → database → cache → back to service → back to browser. At every hop, does the next hop trust the previous one for integrity, authenticity, authorization?
- Where is the data going? Not just the immediate sink — every downstream consumer. If this value lands in a log, the log search is a new context. If it lands in a template, the template engine is a new context. If it lands in a shell command, the shell is a new context. Each context has its own escaping rules.
- What assumptions is this code making that reality doesn’t enforce? Often the vulnerability is a one-line comment saying “this is pre-validated” that turns out not to be true.
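The second question — each downstream context has its own escaping rules — is worth making concrete. The same untrusted value needs a different encoding for each sink it reaches; the value below is an arbitrary example:

```python
import html
import shlex

user_value = 'report"; rm -rf / #'  # one untrusted value, three contexts

# HTML template context: escape markup metacharacters.
as_html = html.escape(user_value)

# Shell command context: quote so the value is a single literal argument.
as_shell = shlex.quote(user_value)

# Log context: strip newlines so an attacker can't forge log entries.
as_log_line = user_value.replace("\n", "\\n").replace("\r", "\\r")
```

Applying the HTML escaper and then passing the result to a shell would still be exploitable — the encodings are not interchangeable, which is exactly why “sanitized once on input” is a trap.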
Mindset takeaway
A senior pentester doesn’t find bugs by running tools — they find bugs by walking the data flow and asking “what would happen if this weren’t what the code expects?” The tools are accelerators for hypotheses. The hypotheses come from understanding trust boundaries. Learn to see them first; every other web attack technique is an instance of “I found a place where the developer assumed trust that isn’t there.”
The rest of this track applies this lens to specific vulnerability classes: injection, auth bypass, SSRF, access control, XSS, business logic, file upload, API trust, session management, and the overlooked class where defenders assume “the framework handles it.” Each module starts with “why does this happen?” — because the technique is downstream of the mindset.