Most B2B SaaS companies we work with have heard of STRIDE. Far fewer have actually threat-modeled a real product feature under production load, with real engineers, producing real backlog items. The gap is not a lack of method. STRIDE is simple. The gap is that teams read the framework, draw a data flow diagram, enumerate threats in a meeting room, and never revisit it. The output becomes a Confluence page nobody reads. The mitigations never ship.
This post walks through STRIDE applied to a representative B2B SaaS product. The goal is not academic completeness; it is to show what a practitioner-level threat model looks like when the output is measured by backlog items that reach production, not by the size of the document.
The product under the microscope
Consider a B2B SaaS we will call “ContractFlow,” a realistic composite of products we assess. It manages legal contracts for mid-market companies. Customers upload contracts, annotate them, share internally and externally, and use workflow features to route for approval and signature. The architecture is typical: React SPA, REST and GraphQL APIs, PostgreSQL, S3 for document storage, a background worker tier, third-party integrations (Salesforce, DocuSign, Slack), and OIDC-based SSO via customer IdPs.
Step 1: Draw the data flow diagram
A threat model without a data flow diagram is a conversation, not a model. The DFD has four element types:
- External entities: customer users, customer IdP, third-party integrations, unauthenticated attackers.
- Processes: the web app, API gateway, document processing worker, notification worker.
- Data stores: application database, document blob storage, audit log store, cache.
- Data flows: arrows labelled with the protocol, direction, and kind of data.
Trust boundaries are the most important lines on the DFD. They show where data crosses from one trust context to another: from unauthenticated internet to TLS-terminating load balancer; from the customer’s IdP to our session; from one tenant’s scope to the shared database tier; from our backend to DocuSign.
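One lightweight way to keep the DFD reviewable and diffable is to capture it as data in version control. A minimal sketch using the ContractFlow elements named above; the schema and the specific flows shown are illustrative, not a standard.

```python
# DFD-as-data sketch. Element and boundary names come from the
# ContractFlow example; the dict layout is a hypothetical convention,
# not a required format.
DFD = {
    "external_entities": ["customer user", "customer IdP", "DocuSign", "attacker"],
    "processes": ["web app", "API gateway", "document worker", "notification worker"],
    "data_stores": ["application DB", "document blob store", "audit log store", "cache"],
    "flows": [
        # (source, destination, protocol, kind of data)
        ("customer user", "API gateway", "HTTPS", "contract uploads"),
        ("customer IdP", "web app", "OIDC", "identity assertions"),
        ("API gateway", "application DB", "SQL/TLS", "tenant data"),
    ],
    "trust_boundaries": [
        # (lower-trust side, higher-trust side)
        ("internet", "API gateway"),
        ("customer IdP", "web app session"),
        ("tenant scope", "shared DB tier"),
        ("backend", "DocuSign"),
    ],
}
```

A structure like this makes "did the DFD change in this PR?" a code-review question rather than a diagram-archaeology exercise.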
Step 2: Apply STRIDE per element
STRIDE is an acronym for six canonical threat categories. For each DFD element, ask whether each category applies.
- S: Spoofing (identity impersonation)
- T: Tampering (unauthorized modification)
- R: Repudiation (actions that cannot be attributed)
- I: Information disclosure (confidentiality breach)
- D: Denial of service (availability breach)
- E: Elevation of privilege (unauthorized escalation)
A typical DFD for ContractFlow has 15 to 25 elements. With six categories per element, you are looking at 90 to 150 threat candidates. You will not enumerate every one; you will focus on the high-signal ones. This is where experience matters.
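The per-element pass is mechanical enough to sketch in code. This is a minimal illustration with a hypothetical, abbreviated element list; the point is the size of the candidate space, not the data structure.

```python
from itertools import product

# Abbreviated, hypothetical element list; a real ContractFlow DFD
# would carry 15 to 25 elements plus flows and boundaries.
ELEMENTS = [
    "customer user", "customer IdP", "API gateway", "web app",
    "document worker", "notification worker", "application DB",
    "document blob store", "audit log store",
]

STRIDE = [
    "Spoofing", "Tampering", "Repudiation",
    "Information disclosure", "Denial of service", "Elevation of privilege",
]

# Every (element, category) pair is a candidate to triage, not a
# threat to document: most pairs are dismissed in seconds.
candidates = list(product(ELEMENTS, STRIDE))  # 9 elements x 6 categories = 54
```

Even this toy list yields 54 candidates, which is why the triage step, not the enumeration step, is where experience shows.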
Step 3: High-signal threats, walked through
Spoofing: the customer IdP boundary
ContractFlow accepts OIDC assertions from customer IdPs. Spoofing threats cluster around trust in the assertion.
- Is the OIDC issuer URL validated against a fixed allowlist per tenant?
- Is the JWKS fetched over TLS with caching and rotation?
- Is the audience claim validated?
- Can a user at one customer craft an assertion that lands them in another customer’s tenant?
A practical finding in past assessments: the code path parsed the OIDC issuer from the token itself rather than looking it up by tenant. A malicious IdP could assert any email address and land in any tenant. Mitigation: enforce per-tenant issuer pinning before parsing the token.
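The pinning check can be sketched in a few lines. This is a simplified illustration: `TENANT_ISSUERS` stands in for a tenant configuration store, and the `claims` dict stands in for a JWT payload whose signature has already been verified against the JWKS of the pinned issuer (never a JWKS URL taken from the token).

```python
# Hypothetical per-tenant issuer registry; in production this lives
# in tenant configuration, not in code.
TENANT_ISSUERS = {
    "acme": "https://login.acme.example/oidc",
    "globex": "https://sso.globex.example",
}

def validate_assertion(tenant_id: str, claims: dict, expected_aud: str) -> bool:
    """Accept the assertion only if its issuer matches the issuer
    pinned for this tenant and the audience is ours. The issuer is
    looked up from our config, never read from the token."""
    pinned = TENANT_ISSUERS.get(tenant_id)
    if pinned is None:
        return False                      # unknown tenant: fail closed
    if claims.get("iss") != pinned:
        return False                      # issuer is not this tenant's IdP
    if claims.get("aud") != expected_aud:
        return False                      # token was minted for someone else
    return True
```

The key design choice is the lookup direction: tenant first, then issuer. The vulnerable pattern reads the issuer from the token and trusts whatever it finds.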
Tampering: document integrity in S3
Documents are stored in S3. Tampering threats:
- Can a user or an attacker modify a document after upload without leaving a trail?
- Is S3 bucket versioning enabled? Is Object Lock configured?
- Does the application store a content hash at upload time and verify on read?
- Can a developer with S3 write access tamper with a contract that is being relied on as evidence?
Mitigations that actually ship: a SHA-256 content hash computed at upload and stored in the database alongside the object key; a periodic integrity-check job; Object Lock with a legal hold on signed contracts; and separation between the application role (read) and the signing workflow role (write).
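The hash-at-upload, verify-on-read pattern is a few lines of standard library code. A minimal sketch; the function names are illustrative.

```python
import hashlib
import hmac

def content_hash(data: bytes) -> str:
    """SHA-256 of the uploaded object, stored in the database next to
    the object key at upload time."""
    return hashlib.sha256(data).hexdigest()

def verify_on_read(data: bytes, stored_hash: str) -> bool:
    """Recompute and compare on read (and in the periodic integrity
    job). A mismatch means the object changed outside the application
    write path and should page someone."""
    return hmac.compare_digest(content_hash(data), stored_hash)
```

Storing the hash in the database rather than in S3 metadata matters: an attacker with S3 write access can rewrite object metadata along with the object, but cannot reach the hash recorded in a separate store.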
Repudiation: audit log adequacy
Customers using ContractFlow for contracts need audit logs that survive scrutiny. Repudiation threats:
- Is every action recorded: who, what, when, from what IP, on what object?
- Can an admin modify or delete audit log entries?
- Is the audit log stored on tamper-evident infrastructure?
- How long are logs retained?
Mitigation: append-only audit log store separate from the application database, with retention aligned to customer contractual needs (often 7 years for contracts), with cryptographic chaining between entries to detect tampering.
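The cryptographic chaining mentioned above is simple to sketch: each entry stores a hash over its own content plus the previous entry's hash, so rewriting any entry breaks every hash after it. A minimal illustration, assuming JSON-serializable entries; a production store would also anchor the chain head externally.

```python
import hashlib
import json

GENESIS = "0" * 64  # chain anchor for the first entry

def chain_entry(prev_hash: str, entry: dict) -> dict:
    """Return the entry extended with prev_hash and a hash over
    (prev_hash + canonical entry body)."""
    payload = json.dumps(entry, sort_keys=True)
    digest = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    return {**entry, "prev_hash": prev_hash, "hash": digest}

def verify_chain(entries: list) -> bool:
    """Walk the log recomputing each hash; any edit, deletion, or
    reordering after the fact makes verification fail."""
    prev = GENESIS
    for e in entries:
        body = {k: v for k, v in e.items() if k not in ("prev_hash", "hash")}
        payload = json.dumps(body, sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if e["prev_hash"] != prev or e["hash"] != expected:
            return False
        prev = e["hash"]
    return True
```

This detects tampering; it does not prevent it. Prevention comes from the append-only store and the separation from the application database.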
Information disclosure: cross-tenant IDOR
Multi-tenant SaaS lives and dies on isolation. The most common breach pattern is the Insecure Direct Object Reference (IDOR): an authenticated user from tenant A requests a resource belonging to tenant B, and the server hands it over.
- Does every API endpoint authorize based on the tenant ID derived from the session, not from the request?
- Is tenant isolation enforced at the data access layer, not just the controller?
- Are document links signed with tenant-scoped signatures?
- Does the search index enforce tenant isolation?
A practical finding: the document download endpoint validated that the user was authenticated and that the document existed, but not that the document belonged to the user’s tenant. Change one ID in the URL and you had another customer’s contract. The fix is a tenant-scoped repository pattern that makes this class of bug architecturally impossible.
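The tenant-scoped repository idea is worth seeing concretely: the tenant ID is bound at construction time from the session, so no individual call site can forget the filter. A minimal sketch with an in-memory store standing in for the database layer; class and method names are illustrative.

```python
class DocumentRepository:
    """Every query through this repository is tenant-filtered by
    construction. Call sites cannot express a cross-tenant lookup."""

    def __init__(self, db: dict, tenant_id: str):
        self._db = db
        # Bound from the authenticated session, never from the request.
        self._tenant_id = tenant_id

    def get(self, document_id: str):
        # A document belonging to another tenant simply does not
        # exist from this repository's point of view.
        return self._db.get((self._tenant_id, document_id))
```

The vulnerable pattern checks "is the user authenticated?" and "does the document exist?" as separate questions; the repository pattern fuses "exists" and "belongs to this tenant" into a single lookup.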
Denial of service: worker queue poisoning
Background workers process document uploads. DoS threats:
- Can a customer upload a file that takes hours to process, blocking the queue?
- Is there per-tenant rate limiting on queue submissions?
- Can a malformed file crash the worker, then crash it again on every retry?
- Are dead-letter queues in place with alerting on buildup?
Mitigations: per-tenant quotas on queue depth and processing time, worker timeouts with graceful handling, DLQs for poison messages, tenant isolation in queues so one customer cannot starve another.
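The per-tenant queue-depth quota is the simplest of these mitigations to sketch. An illustrative in-memory version; `MAX_DEPTH` is a hypothetical limit, and a real implementation would track depth in the queue infrastructure itself.

```python
from collections import defaultdict

class TenantQueue:
    """Per-tenant submission quota: one tenant's backlog cannot
    monopolize the shared worker pool."""

    MAX_DEPTH = 100  # hypothetical per-tenant limit

    def __init__(self):
        self._depth = defaultdict(int)

    def submit(self, tenant_id: str, job) -> bool:
        """Reject (surface a 429 upstream) once the tenant's
        outstanding depth hits the quota."""
        if self._depth[tenant_id] >= self.MAX_DEPTH:
            return False
        self._depth[tenant_id] += 1
        return True

    def done(self, tenant_id: str):
        """Called by the worker when a job finishes or dead-letters."""
        self._depth[tenant_id] -= 1
    ```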
Elevation of privilege: admin role drift
Every SaaS accumulates admin roles. Elevation threats:
- Can a regular user invite themselves to an admin group?
- Does role assignment require two-person approval?
- Are service accounts separated from user accounts?
- Can a support engineer impersonate a customer admin without a logged justification?
Mitigation: separation of identity provider admin from product admin, SCIM-based group management with audit, break-glass admin roles with time-bound access, and a clear support impersonation workflow that requires both customer consent and internal approval, with audit log entries in both the support tool and the application.
Step 4: Prioritize and backlog
Enumerating threats is the easy part. Converting them into a prioritized backlog is where teams fail.
A threat-rating rubric we use that correlates well with actual incident data:
| Factor | Low | Medium | High |
|---|---|---|---|
| Ease of exploitation | Requires specialized skill | Requires authenticated user | Requires only network access |
| Blast radius | Single user | Single tenant | All tenants |
| Existing mitigation | Defense in depth | Single control | None |
| Detection | Alerted today | Investigable post-hoc | Invisible |
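The rubric reduces to a small scoring function. The equal weights here are illustrative, not calibrated values; the point is that every threat gets a comparable number so the backlog ordering is an argument about facts, not volume.

```python
SCORES = {"low": 1, "medium": 2, "high": 3}

def priority(ease: str, blast_radius: str,
             existing_mitigation: str, detection: str) -> int:
    """Sum the four rubric factors; higher means fix sooner.
    Each argument is 'low', 'medium', or 'high' per the table."""
    factors = (ease, blast_radius, existing_mitigation, detection)
    return sum(SCORES[f] for f in factors)
```

For example, the cross-tenant IDOR finding above would score high on blast radius (all tenants) and existing mitigation (none), putting it near the top of any honest backlog.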
Convert each prioritized threat to a backlog item with an owner, acceptance criteria, and a target sprint. If the threat model does not produce backlog items, it was a conversation, not a model.
Step 5: Keep it alive
A threat model is a living artifact. Re-run it when:
- A new trust boundary is introduced (new third-party integration, new data class).
- A major architectural change ships (new storage backend, new auth flow).
- A new threat actor group emerges targeting your segment.
- Annually, as a baseline hygiene exercise.
The best threat models are thin, current, and produce change. Thick, stale threat models are compliance theater.
Integrating with VAPT and bug bounty
A threat model is a hypothesis about what can go wrong. VAPT and bug bounty are the test. Good teams use threat model outputs as penetration test scope inputs and use pentest findings to update the threat model. The feedback loop is where security engineering matures.
Related reading
- API threat modeling from OpenAPI spec
- Threat modeling multi-tenant SaaS – isolation
- Web app pentest checklist – OWASP 2026
- API security – OWASP API Top 10
- VAPT services in India – buyer’s guide
Work with RingSafe
RingSafe runs STRIDE-based threat modeling workshops for SaaS product teams, typically across two to four working sessions with engineering, and produces prioritized backlogs rather than wall-art diagrams. Founder Manish Garg (Associate CISSP, CEH, CCNP Enterprise) and the team work with Indian B2B SaaS companies at Series A and later.
If your product has never been threat-modeled or your last threat model is collecting dust, book a scoping call and we will run a focused modeling session on one high-value feature to show you what useful output looks like.