Security Guides

API Threat Modeling: From OpenAPI Spec to Attack Surface Map

Manish Garg
Associate CISSP · RingSafe
April 20, 2026
7 min read

APIs are where most SaaS breaches happen, and threat modeling is where most SaaS teams stop before reaching APIs. Developers who understand STRIDE at the architecture level often stall when asked to threat-model a specific API endpoint. The reason is not conceptual; it is operational. An OpenAPI specification contains hundreds of endpoints, thousands of parameters, dozens of authentication flows. Threat-modeling each by hand is unappealing. So nobody does it, and the API attack surface is inferred only during pentests, months after the risky endpoints shipped.

This post shows how to go from an OpenAPI specification to an attack surface map that is actionable. The method applies to REST APIs; GraphQL and gRPC variants are covered briefly at the end. The output is a prioritized set of endpoints and authorization paths that need attention.

Why OpenAPI is the right starting point

Security teams sometimes reject the OpenAPI spec as “developer documentation.” That dismissal misses an opportunity. The spec is a structured, machine-readable description of every endpoint, every parameter, every authentication requirement, and every response schema. It is the closest thing to an authoritative attack surface inventory a SaaS team usually has. If the spec is incomplete or inaccurate, fixing it is step zero of API security; you cannot defend what you cannot enumerate.

Step 1: Treat the spec as ground truth, then verify

Before modeling, check that the spec matches reality. Common discrepancies:

  • Undocumented endpoints. Admin or internal endpoints omitted from the customer-facing spec but reachable on the same host.
  • Documented-but-removed endpoints. Endpoints deprecated in code but still in the spec, routing to error handlers that may leak stack traces.
  • Parameter drift. New optional parameters added in code without spec updates, or parameters renamed in spec but not code.
  • Authentication drift. Endpoints marked as requiring authentication in the spec but reachable without it due to routing misconfigurations.

A quick verification loop: spider the running service using an authenticated browser session with a proxy, diff the observed endpoints against the spec. Anything observed but not in the spec is a candidate for undocumented attack surface. Anything in the spec but not observed is either dead code or out-of-scope for this exercise.
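The diff itself is cheap to script once both sides are collected. A minimal sketch, assuming the spec's paths and the proxy's observed paths are already available as strings, and that observed concrete URLs have been normalized back to their templated form (for example, /v1/accounts/42 back to /v1/accounts/{id}):

```python
def diff_surface(spec_paths, observed_paths):
    """Compare endpoints declared in the OpenAPI spec against endpoints
    observed while spidering the running service."""
    spec = set(spec_paths)
    observed = set(observed_paths)
    return {
        # Observed but undocumented: candidate hidden attack surface.
        "undocumented": sorted(observed - spec),
        # Documented but never observed: dead code or out of scope.
        "unobserved": sorted(spec - observed),
    }

report = diff_surface(
    spec_paths=["GET /v1/accounts/{id}", "POST /v1/accounts"],
    observed_paths=["GET /v1/accounts/{id}", "GET /internal/debug"],
)
# report["undocumented"] surfaces the internal endpoint for review
```

The normalization step is the hard part in practice; without it, every concrete ID in an observed URL produces a false "undocumented" hit.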

Step 2: Enumerate the attack surface

From the spec, produce a spreadsheet or table with one row per endpoint. Columns:

  • Method and path: e.g. GET /v1/accounts/{id}
  • Authentication requirement: none, session, API key, or OAuth scope
  • Authorization requirement: tenant-scoped, role-gated, or object-owner-only
  • Accepts user input: path, query, body, headers; highlight JSON bodies and file uploads
  • Returns data: data classes in the response, such as personal data, secrets, or tokens
  • State-changing: yes/no, and whether the effect is reversible
  • External side effects: third-party calls, emails, webhooks
  • Rate limit: per-user, per-IP, or per-tenant, with the current limit if any

For a typical SaaS with 200 endpoints, this table takes two engineers a day to build. Shortcuts exist: scripted extraction from the OpenAPI JSON can populate most columns; the authorization and rate limit columns usually require reading code.
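The scripted part of that extraction can be sketched as follows, assuming the spec has been loaded from JSON into a dict. The authorization and rate-limit columns are deliberately left as TODOs, since they require reading code:

```python
def enumerate_endpoints(spec: dict) -> list[dict]:
    """Flatten an OpenAPI document into one row per (method, path),
    pre-filling the columns that can be read mechanically."""
    rows = []
    for path, item in spec.get("paths", {}).items():
        for method in ("get", "post", "put", "patch", "delete"):
            op = item.get(method)
            if op is None:
                continue
            rows.append({
                "endpoint": f"{method.upper()} {path}",
                # In OpenAPI 3, an explicit empty security array disables auth.
                "auth": "NONE" if op.get("security") == [] else "see spec",
                "accepts_input": bool(op.get("parameters") or op.get("requestBody")),
                "state_changing": method != "get",
                "authorization": "TODO: read code",
                "rate_limit": "TODO: read code",
            })
    return rows

spec = {"paths": {"/v1/accounts/{id}": {"get": {}, "delete": {"security": []}}}}
rows = enumerate_endpoints(spec)
```

Note the "NONE" flag on auth: an endpoint that explicitly opts out of security is exactly the kind of row you want to jump off the page.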

Step 3: Apply threat lenses to each endpoint

Rather than brute-force STRIDE across every endpoint, apply lenses that correlate with observed API breach patterns. The OWASP API Security Top 10 is a useful reference; so are the following five lenses:

Lens 1: Broken object-level authorization (BOLA / IDOR)

Every endpoint that takes a resource ID in the path or query. Question: if an authenticated user at tenant A changes the ID to a value belonging to tenant B, is the request rejected? If the enforcement is done in the controller rather than in the data access layer, the risk is systemic across the codebase.
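Because the attack surface table already lists every endpoint, generating the cross-tenant probes is mechanical. A sketch, where the {id} placeholder convention and the expect-404 behaviour are assumptions about your API:

```python
def bola_probes(endpoints: list[str], foreign_ids: list[str]) -> list[tuple[str, str]]:
    """For each endpoint that takes a resource ID, emit one request per
    foreign (other-tenant) ID. Send each probe with tenant A's
    credentials; any response other than 403/404 is a BOLA finding."""
    probes = []
    for ep in endpoints:
        method, path = ep.split(" ", 1)
        if "{id}" not in path:
            continue  # no object reference to tamper with
        for fid in foreign_ids:
            probes.append((method, path.replace("{id}", fid)))
    return probes

probes = bola_probes(
    endpoints=["GET /v1/accounts/{id}", "GET /v1/health"],
    foreign_ids=["42"],
)
```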

Lens 2: Broken authentication

Endpoints that accept tokens, including session cookies, bearer tokens, API keys. Questions: are tokens scoped appropriately? Do they expire? Is there rate limiting on authentication endpoints? Is there protection against brute force on password reset or login?

Lens 3: Broken property-level authorization (mass assignment)

Endpoints that accept a JSON body representing an object. Question: does the server accept arbitrary keys and apply them to the object, or is there an explicit allowlist? Mass assignment bugs let a user set “is_admin”: true on their own profile.
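The standard defense is an explicit allowlist applied before the payload reaches the model layer. A minimal sketch with hypothetical field names:

```python
# Hypothetical allowlist for a profile-update endpoint.
ALLOWED_PROFILE_FIELDS = {"display_name", "email", "timezone"}

def sanitize_profile_update(payload: dict) -> dict:
    """Drop any key not on the allowlist, so a request body containing
    "is_admin": true is stripped rather than applied to the object."""
    return {k: v for k, v in payload.items() if k in ALLOWED_PROFILE_FIELDS}

clean = sanitize_profile_update({"display_name": "Asha", "is_admin": True})
# "is_admin" never reaches the model layer
```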

Lens 4: Excessive data exposure

Endpoints that return entities. Question: does the response include more fields than the UI or legitimate client needs? Are sensitive fields (password hashes, internal IDs, tokens) included and relied on to be ignored by the client?

Lens 5: Lack of resources and rate limiting

Every endpoint. Question: is there a rate limit? Is there a resource consumption limit (result count, pagination cap, file size, expensive query cost)? Can an authenticated user abuse an expensive endpoint to deny service to other tenants?

An endpoint that passes all five lenses is low risk for the common API breach patterns. An endpoint that fails any one lens goes into the prioritization queue.

Step 4: Authentication and authorization mapping

API security fails most often at the edges of the authorization graph. Produce a second artifact: the authorization map.

  • List every role or scope in the system.
  • For each endpoint, record which roles or scopes permit access.
  • Look for endpoints that allow access to roles that should not reach them (for example, read-only scopes able to write).
  • Look for endpoints that require no role check beyond authentication.
  • Look for role combinations that create effective admin (two roles that each grant half of an administrative action).

The authorization map is where implicit elevation paths surface. In one assessment, we found that the combination of “support impersonation” and “export data” roles allowed any support engineer to exfiltrate full tenant datasets without any customer interaction. Neither role individually was dangerous; the combination was.
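Checking for such combinations scales badly by eye but is cheap to script. A sketch that flags role pairs whose combined grants cover an administrative action set that neither role covers alone (role and action names are illustrative, not from the assessment above):

```python
from itertools import combinations

def elevation_pairs(role_grants: dict, admin_actions: set) -> list:
    """role_grants maps role name -> set of permitted actions. Return
    role pairs whose union covers admin_actions even though neither
    role does on its own."""
    flagged = []
    for a, b in combinations(sorted(role_grants), 2):
        combined = role_grants[a] | role_grants[b]
        alone = (admin_actions <= role_grants[a]
                 or admin_actions <= role_grants[b])
        if admin_actions <= combined and not alone:
            flagged.append((a, b))
    return flagged

grants = {
    "support_impersonation": {"impersonate_user"},
    "data_export": {"export_tenant_data"},
    "viewer": {"read_dashboard"},
}
risky = elevation_pairs(grants, {"impersonate_user", "export_tenant_data"})
```

The pairwise check is quadratic in the number of roles, which is fine for the dozens of roles a typical SaaS has; triples are rarely worth the noise.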

Step 5: Score and prioritize

Score each endpoint on two axes: likelihood of exploitation (how easy is it to find and exploit?) and impact (what is the blast radius?). Plot on a 2-by-2 matrix. Focus engineering time on the high-likelihood, high-impact quadrant first.
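The scoring does not need to be sophisticated; a coarse 1-to-5 scale per axis is enough to sort the queue. A sketch, with quadrant labels as an assumption about how your team triages:

```python
def quadrant(likelihood: int, impact: int, cutoff: int = 3) -> str:
    """Place a finding on the 2-by-2 matrix using 1-5 scores per axis."""
    hi_l, hi_i = likelihood >= cutoff, impact >= cutoff
    if hi_l and hi_i:
        return "fix now"
    if hi_i:
        return "plan this quarter"
    if hi_l:
        return "cheap wins"
    return "accept or monitor"

# A resource download endpoint failing the BOLA lens:
bucket = quadrant(likelihood=5, impact=5)
```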

Typical findings cluster in:

  • Object-level authorization on resource download endpoints (high likelihood, high impact).
  • Mass assignment on profile or account update endpoints (high likelihood, medium to high impact).
  • Unauthenticated endpoints that accept user input with insufficient validation (high likelihood, variable impact).
  • Administrative endpoints reachable from customer-facing hosts (medium likelihood, very high impact).

From map to backlog

Every high and medium risk entry converts to a backlog item with:

  • Description of the specific endpoint and lens failure.
  • Acceptance criteria that are testable (for example: “an authenticated user at tenant A receives 404 when requesting any resource belonging to tenant B, verified by automated test”).
  • Owner, target sprint, and test coverage expectation.

Do not treat this as a one-off. Re-run the process each quarter, and on every major API version bump. The OpenAPI spec is a living document; so is the attack surface map.

GraphQL and gRPC considerations

For GraphQL, the equivalent of the spec is the schema. The modeling approach shifts: instead of enumerating endpoints, enumerate types and fields, and apply authorization lenses to each resolver. Query complexity and depth limiting become first-class concerns. N+1 resolver abuse is a common DoS vector.
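Depth limiting is worth illustrating because it has no REST analogue. A sketch that measures nesting depth on a query modeled as nested dicts; a real implementation would walk the parsed AST from a library such as graphql-core instead:

```python
def selection_depth(selection: dict) -> int:
    """Depth of a selection set modeled as {field: sub_selection_or_None}."""
    if not selection:
        return 0
    return 1 + max(
        selection_depth(sub) if isinstance(sub, dict) else 0
        for sub in selection.values()
    )

MAX_DEPTH = 6  # illustrative server-side limit

# viewer -> posts -> comments -> author nests four levels deep.
query = {"viewer": {"posts": {"comments": {"author": None}}}}
depth = selection_depth(query)
# a server would reject queries with depth > MAX_DEPTH before executing them
```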

For gRPC, the spec is the Protobuf definition. Endpoints become service methods. Streaming endpoints warrant special attention for resource exhaustion.

If your API threat model does not result in changes to specific endpoints, it was a design review, not a threat model. The measurable output is the PR history.

Integrating with automated testing

Many BOLA and mass assignment findings are amenable to automated regression testing. Once identified, encode the expected behaviour as an integration test that runs in CI: a request as user A to a resource of tenant B returns 404. This prevents regression as the codebase evolves and creates evidence for compliance audits.
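A sketch of what such a regression test looks like, using an in-memory stand-in for the endpoint; your real test would drive your HTTP test client, and the names here are hypothetical:

```python
# In-memory stand-in for a tenant-scoped lookup (assumption: the real
# handler filters by the caller's tenant in the data access layer).
RESOURCES = {"r1": "tenant_a", "r2": "tenant_b"}

def get_resource_status(resource_id: str, caller_tenant: str) -> int:
    """Return the HTTP status the endpoint should produce."""
    owner = RESOURCES.get(resource_id)
    if owner != caller_tenant:
        return 404  # deliberately identical for missing and foreign IDs
    return 200

def test_cross_tenant_read_is_rejected():
    assert get_resource_status("r2", caller_tenant="tenant_a") == 404

def test_own_resource_is_readable():
    assert get_resource_status("r1", caller_tenant="tenant_a") == 200
```

Returning 404 rather than 403 for foreign IDs avoids confirming that the resource exists at all, which is why the acceptance criteria in the backlog section specify 404.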

Work with RingSafe

RingSafe runs API threat modeling and authorization reviews for Indian SaaS companies. We start with your OpenAPI spec, verify it against the running service, and produce a prioritized attack surface map with backlog-ready findings. Founder Manish Garg (Associate CISSP, CEH, CCNP Enterprise) leads engagements focused on OWASP API Top 10 risk classes.

If your spec has drifted or your API has never been threat-modeled at the endpoint level, book a scoping call and we will map your attack surface before an attacker does.