
Module 1 · DevSecOps Fundamentals 🔒

Manish Garg · Associate CISSP · RingSafe
April 22, 2026
5 min read

DevSecOps is the practice of embedding security throughout the software delivery lifecycle rather than treating it as a gate at the end. It is not a team, a tool, or a framework – it is a way of working. This module covers the mental model, where security checkpoints go in a modern SDLC, and the mistakes that prevent DevSecOps programs from landing.

Why “shift left” is necessary but not sufficient

The industry mantra “shift security left” means catching issues during development rather than after release. True, useful, but incomplete. A pure shift-left program produces:

  • Thousands of low-severity SAST findings that developers ignore
  • Slow pipelines as scans balloon
  • False confidence – you caught code bugs, but the runtime environment is untested

Modern DevSecOps is “shift left AND extend right”: catch issues early and monitor them in production. Different controls live at different stages.

The full SDLC security map

PLAN      →  Threat modeling, abuse cases, security requirements in stories
CODE      →  IDE security plugins, pre-commit hooks, secret scanning
BUILD     →  SAST, SCA (dependencies), license compliance
TEST      →  DAST, API fuzzing, IaC scanning
PACKAGE   →  Container image scanning, SBOM generation, signing
RELEASE   →  Policy-as-code enforcement (OPA, Kyverno), admission controllers
DEPLOY    →  Infrastructure posture scanning, secret rotation validation
OPERATE   →  Runtime protection, EDR, SIEM detections, WAF
MONITOR   →  Vulnerability management, threat detection, incident response

Not every organisation runs every control. The minimum viable DevSecOps for a small team: secret scanning + SCA + container scanning + a DAST scan of staging. Everything else is a maturity step up.

Security requirements in the plan stage

Most security bugs are specification bugs. If the requirements say “users can upload files” but not “files must be scanned for malware and size-limited to 10MB,” the developer has been given permission to build insecurely.

Introduce security into requirements via:

  • Abuse cases alongside user stories. For every “As a user, I can do X,” add “As an attacker, I can abuse X by…”
  • Threat model per major feature. STRIDE or PASTA – an hour of diagramming catches issues that SAST never will
  • Security acceptance criteria per story. The Definition of Done includes security checks
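The abuse-case pairing can be checked mechanically. A minimal sketch, with hypothetical story text, that flags user stories shipped without any abuse case:

```python
# Sketch: pair each user story with at least one abuse case, and flag
# stories that have none. Story and abuse-case text is illustrative.
stories = {
    "As a user, I can upload a profile picture": [
        "As an attacker, I can upload a 2GB file to exhaust disk",
        "As an attacker, I can upload an SVG containing script to run XSS",
    ],
    "As a user, I can reset my password": [],  # no abuse case written yet
}

def stories_missing_abuse_cases(stories):
    """Return the user stories that have no paired abuse case."""
    return [story for story, abuses in stories.items() if not abuses]

for story in stories_missing_abuse_cases(stories):
    print(f"MISSING ABUSE CASE: {story}")
```

Wiring a check like this into backlog tooling is what turns "write abuse cases" from a guideline into a Definition-of-Done gate.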

Code stage – developers’ hands

  • IDE plugins: Snyk Code / Checkmarx IDE / Semgrep / Sonar. Surface issues as the developer types
  • Pre-commit hooks: gitleaks or trufflehog for secrets; make it fast (<5 seconds) or developers will bypass it
  • Signed commits: gpg or Sigstore gitsign – ties commits to identity, prevents commit-identity forgery
  • Branch protection: required reviews, status checks, no direct push to main
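The pre-commit secret check can be sketched in miniature. Real scanners like gitleaks and trufflehog ship large, tuned rulesets; the two patterns below (AWS access key IDs and PEM private-key headers) only illustrate the shape of the check:

```python
import re

# Minimal sketch of a pre-commit secret scan. These two rules are
# illustrative only; production scanners carry hundreds of tuned patterns.
SECRET_PATTERNS = {
    "aws-access-key-id": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private-key-block": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
}

def find_secrets(text):
    """Return (rule_name, matched_text) pairs for every hit in the text."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group()))
    return hits

# AWS's documented example key, used here as safe test data
staged_diff = "aws_key = 'AKIAIOSFODNN7EXAMPLE'\n"
hits = find_secrets(staged_diff)
if hits:
    print(f"blocked: {len(hits)} potential secret(s) found")
```

The hook exits non-zero on any hit, which is what blocks the commit; speed comes from scanning only the staged diff, not the whole repository.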

Build stage – CI-driven

The build pipeline is where automated checks run without developer friction. Minimum viable:

  • SAST (Static Application Security Testing) – Semgrep (fast), SonarQube, Checkmarx, Veracode. Language-specific rulesets
  • SCA (Software Composition Analysis) – Snyk Open Source, Dependabot, OWASP Dependency-Check. Vulnerabilities in libraries you depend on
  • Secret scan on every push (as a backup to pre-commit)
  • License compliance – catch GPL in a commercial codebase before shipping
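The SCA gate reduces to a severity filter over the scanner's findings. A minimal sketch, assuming a simplified findings schema (each real tool emits its own report format):

```python
# Sketch of a severity gate over SCA output. The findings schema is a
# simplified stand-in for real scanner reports.
FAIL_ON = {"CRITICAL", "HIGH"}

def gate(findings, fail_on=FAIL_ON):
    """Return (passed, blocking) where blocking lists the failing findings."""
    blocking = [f for f in findings if f["severity"].upper() in fail_on]
    return (not blocking, blocking)

findings = [
    {"id": "CVE-2021-44228", "package": "log4j-core", "severity": "CRITICAL"},
    {"id": "CVE-2020-8908", "package": "guava", "severity": "LOW"},
]
passed, blocking = gate(findings)
print("PASS" if passed else f"FAIL: {[f['id'] for f in blocking]}")
```

In CI this script exits non-zero on FAIL, which is what actually stops the build; everything below the threshold lands in the backlog rather than blocking merges.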

Test stage – running software

  • DAST (Dynamic Application Security Testing) – OWASP ZAP, Burp Enterprise, StackHawk. Run against deployed staging
  • API fuzzing – Postman/Schemathesis generating invalid payloads from OpenAPI specs
  • IaC scanning – Checkov, tfsec, kube-score for Terraform/Kubernetes manifests
  • Container scanning – Trivy, Grype. Scan every image built by the pipeline
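Gating on the container scan means walking the scanner's report. The `Results[].Vulnerabilities[].Severity` shape below mirrors Trivy's JSON output in simplified form; verify the field names against the report your Trivy version actually emits. CVE IDs here are placeholders:

```python
# Sketch: extract gate-relevant CVEs from a container scan report.
# Structure is modelled on Trivy's JSON output, simplified.
def high_severity_cves(report, threshold=("HIGH", "CRITICAL")):
    cves = []
    for result in report.get("Results", []):
        # Trivy omits or nulls Vulnerabilities for clean targets
        for vuln in result.get("Vulnerabilities") or []:
            if vuln.get("Severity") in threshold:
                cves.append(vuln["VulnerabilityID"])
    return cves

report = {"Results": [{"Target": "app:1.0",
                       "Vulnerabilities": [
                           {"VulnerabilityID": "CVE-2024-1111", "Severity": "HIGH"},
                           {"VulnerabilityID": "CVE-2024-2222", "Severity": "MEDIUM"}]}]}
print(high_severity_cves(report))  # ['CVE-2024-1111']
```

The same filter is what `trivy image --severity HIGH,CRITICAL --exit-code 1` does natively; the script form is useful when you need custom logic, such as exempting a baseline of accepted CVEs.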

Release stage – policy enforcement

By now you have signals. Release-stage controls decide what passes:

  • Quality gates: fail the build if SAST finds high-severity issues in new code (not legacy baseline – that is what makes the gate survivable)
  • Signed artifacts: Sigstore Cosign for container images. Kubernetes admission controller verifies signatures
  • SBOM generation: Syft or CycloneDX CLI. Ship the SBOM alongside the release – and in 2026 you often have to for regulatory reasons
  • Policy-as-code: OPA Gatekeeper, Kyverno. Deny deployment of containers without required labels, signatures, or resource limits
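What an admission policy actually evaluates can be written out in plain code. The sketch below shows the logic a Kyverno or OPA Gatekeeper rule would express declaratively; the required labels and resource-limit rule are example policy, not defaults of either tool:

```python
# Sketch of an admission-control check: deny pods missing required labels
# or container resource limits. Policy requirements are illustrative.
REQUIRED_LABELS = {"team", "app"}

def admission_violations(pod):
    """Return a list of policy violations; empty list means admit."""
    violations = []
    labels = pod.get("metadata", {}).get("labels", {})
    for label in sorted(REQUIRED_LABELS - labels.keys()):
        violations.append(f"missing required label: {label}")
    for container in pod.get("spec", {}).get("containers", []):
        if not container.get("resources", {}).get("limits"):
            violations.append(f"container {container['name']} has no resource limits")
    return violations

pod = {"metadata": {"labels": {"app": "web"}},
       "spec": {"containers": [{"name": "web", "image": "web:1.0"}]}}
print(admission_violations(pod))
```

In a real cluster this runs inside the admission webhook, so a non-empty violation list rejects the deployment before the workload ever schedules.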

Runtime – the monitoring half

  • Runtime security (CNAPP): Wiz, Orca, Falco – detects anomalous behaviour in deployed workloads
  • Secrets rotation: Vault, AWS Secrets Manager – scheduled rotations, monitored access
  • WAF: Cloudflare, AWS WAF, ModSecurity – blocks known-bad requests before they hit your app
  • SIEM: ingestion of application logs for detection (see the Blue Team track)
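A SIEM detection is, at its core, a rule over ingested logs. A toy sketch of one such rule (the log format, field names, and threshold are invented for illustration):

```python
# Sketch of a SIEM-style detection: flag source IPs with repeated failed
# logins. Real detections run over parsed, normalised events, not raw lines.
from collections import Counter

def failed_login_ips(log_lines, threshold=5):
    """Return IPs with at least `threshold` failed logins in the lines."""
    fails = Counter()
    for line in log_lines:
        if "LOGIN_FAILED" in line:
            ip = line.rsplit("ip=", 1)[-1].strip()
            fails[ip] += 1
    return [ip for ip, n in fails.items() if n >= threshold]

# Six failures from one TEST-NET address trips the threshold
logs = [f"2026-04-22T10:00:{i:02d} LOGIN_FAILED user=admin ip=203.0.113.7"
        for i in range(6)]
print(failed_login_ips(logs))  # ['203.0.113.7']
```

The Blue Team track covers how rules like this are written in practice (correlation windows, suppression, enrichment); the point here is only that application logs must reach the SIEM for any of it to work.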

Metrics that actually tell you the program works

Avoid vanity metrics. The useful ones:

  • Mean time to remediate for critical findings. If it is weeks, nothing else matters
  • Percentage of builds passing security gates on first try. Trending up = controls are tuned; trending down = friction is too high
  • Number of production vulnerabilities > 30 days old (by severity). The actual risk backlog
  • Secret leaks caught pre-commit vs post-commit. Pre-commit share should trend toward 100%
  • Coverage: % of services with SAST, SCA, DAST, container scanning active. 100% is the goal; 80% is where most programs actually live
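Two of these metrics fall out directly once findings carry opened/closed dates. The record shape below is a stand-in; pull the real fields from your scanner or ticketing system:

```python
# Sketch: mean time to remediate and the >30-day open backlog, computed
# from an illustrative findings list.
from datetime import date

def mttr_days(findings, severity="critical"):
    """Mean time to remediate, in days, over closed findings of a severity."""
    closed = [f for f in findings
              if f["severity"] == severity and f.get("closed")]
    if not closed:
        return None
    return sum((f["closed"] - f["opened"]).days for f in closed) / len(closed)

def overdue(findings, today, max_age_days=30):
    """Open findings older than max_age_days: the actual risk backlog."""
    return [f for f in findings
            if not f.get("closed") and (today - f["opened"]).days > max_age_days]

findings = [
    {"id": "F-1", "severity": "critical",
     "opened": date(2026, 3, 1), "closed": date(2026, 3, 8)},
    {"id": "F-2", "severity": "critical", "opened": date(2026, 2, 1)},  # still open
]
print(mttr_days(findings), [f["id"] for f in overdue(findings, date(2026, 4, 22))])
```

Trend both numbers weekly; a flat MTTR with a growing overdue list tells you remediation capacity, not detection, is the bottleneck.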

Where DevSecOps programs fail

  • Tools without process: scanners are configured but nobody owns remediation SLAs. Findings pile up; dashboards ignored
  • Gates without exceptions: strict gates with no escape valve create either huge backlogs or routine disable. Tune acceptable risk thresholds with Engineering
  • Shift-left religion: all energy on SAST/pre-commit, production runtime blind. Attackers don’t care about your pipeline coverage
  • Security team gating everything: scaling model doesn’t work. Security must enable, automate, and measure β€” not approve every ticket
  • No threat modeling: automated scanning catches known patterns. Threat modeling catches the business-logic bugs no scanner will

Starting small – the first 30 days

  1. Week 1: Enable secret scanning on every repo (GitHub Advanced Security, or Trufflehog CI job)
  2. Week 2: Add SCA to CI (Dependabot on by default, alert-only)
  3. Week 3: Container image scan on image build, block high-severity CVEs in new images
  4. Week 4: DAST against staging on every deploy, surface findings in ticketing system with SLAs

Four weeks, four controls. That baseline catches 60%+ of routine issues. Everything else (threat modeling, policy-as-code, runtime) layers on top over the following quarters.

What the next modules cover

Module 2 takes the tooling layer hands-on – SAST, DAST, SCA tool selection and CI wiring. Module 3 does IaC security for Terraform and Kubernetes. Module 4 hardens the CI/CD pipeline itself (it is often the weakest link). Module 5 goes deep on supply-chain security – SBOM, signing, SLSA, the things regulators increasingly require.