SecureScan
Self-hosted security-scanning orchestrator: 14 scanners across code, dependencies, IaC, containers, secrets, DAST, and network targets, normalized into a single deterministic finding stream — fronted by a CLI, a GitHub Action, and a web dashboard.
This is the operator + developer reference for SecureScan v0.11.0. If you have not used SecureScan before, Quick start: your first scan is the place to begin.
What you can do with SecureScan
- Run diff-aware scans on every PR. The `Metbcy/securescan@v1` GitHub Action wraps `securescan diff`, posts a single upserted PR comment of NEW findings, and uploads SARIF to GitHub's Security tab. See GitHub Action.
- Triage findings across rescans. Each finding has a stable `fingerprint`, so verdicts (`false_positive`, `accepted_risk`, `fixed`, …) and per-finding comments survive `DELETE /scans/{id}` and reappear on every later scan of the same target. See Triage workflow.
- Watch scans in real time. The dashboard's scan-detail page streams live per-scanner progress over Server-Sent Events. See Real-time scan progress.
- Fan out events to your tools. Outbound webhooks deliver HMAC-signed `scan.complete` / `scan.failed` / `scanner.failed` events to Slack, Discord, or any HTTP receiver, with a durable retry queue. See Webhooks.
- Issue scoped, hashed API keys. v0.8.0 replaces the single shared env-var key with DB-backed keys carrying explicit `read` / `write` / `admin` scopes per route. See API keys.
Audience
This documentation is for three readers, in roughly that order:
| Reader | What they need | Start here |
|---|---|---|
| Operator | Install, configure, harden, deploy, and maintain SecureScan in their org. Health probes, env vars, signed artifacts. | Install → Production checklist |
| Developer | Talk to the API. Ship a PR scan. Verify webhook signatures. | API overview → Webhook payloads |
| Security team | Understand what SecureScan covers, what it deliberately does not, and how findings are scored / suppressed / triaged. | Scan types → Supported scanners |
What this is not
SecureScan is intentionally not a SaaS, not an SBOM database, and not a vulnerability database in its own right. It orchestrates the open-source scanners you already trust (Semgrep, Bandit, Trivy, Checkov, ZAP, nmap, …), normalizes their output into a single shape, and adds diff-awareness, signed artifacts, and a deterministic serialization contract on top. See Architecture overview for the full picture.
How does this compare?
If you're evaluating SecureScan against tools you already use or are also considering, the Compare section has factual side-by-side write-ups:
- vs DefectDojo — different problem (vuln management hub vs PR-loop scanner); many teams use both.
- vs Trivy — SecureScan wraps Trivy and adds 13 more scanners plus a diff-aware PR loop.
- vs Snyk — OSS, self-hosted, deterministic vs SaaS with reachability analysis.
Project links
- Source: github.com/Metbcy/securescan
- Container image: `ghcr.io/metbcy/securescan`
- Wheel + sdist + sigstore bundles: attached to every GitHub Release
- Changelog: reference/changelog
- Release process: reference/release-process
This site documents the stable public API surface and the
operational behavior. For the full request/response schema of every
endpoint — including the schemas you will not find here — point your
browser at the running server's /docs (FastAPI Swagger UI) or
/redoc. See API endpoints for the entry point.
Install
SecureScan ships three install paths. Pick the one that matches how you intend to run it, not necessarily where it ends up.
1. Container (recommended for production)
The image is multi-arch (amd64 + arm64), comes with all 14 scanners pre-installed at pinned versions, and is what the GitHub Action falls back to when wheel-mode prerequisites are not met.
docker pull ghcr.io/metbcy/securescan:v0.11.0
docker run --rm -v "$PWD:/work" -w /work \
ghcr.io/metbcy/securescan:v0.11.0 \
diff . --base-ref origin/main --head-ref HEAD --output github-pr-comment
To run the dashboard backend:
docker run --rm -p 8000:8000 \
-e SECURESCAN_API_KEY="$(openssl rand -hex 32)" \
ghcr.io/metbcy/securescan:v0.11.0 \
serve --host 0.0.0.0 --port 8000
Production deployments must verify the image signature with
cosign before running. See
Verifying signed artifacts.
The image follows the release schedule documented in
Release process. All tags from
v0.2.0 onward are signed.
2. Wheel from PyPI
pip install securescan # latest stable
pip install securescan==0.11.0 # exact pin
# Or, isolated, via pipx:
pipx install securescan
PDF reports (securescan scan ... --output report-pdf) require the
optional [pdf] extra, which pulls in WeasyPrint and its Cairo /
Pango / GObject system-library chain:
pip install 'securescan[pdf]'
The container image ships weasyprint pre-installed, so PDF reports
work out of the box there. Without the extra, requesting
--output report-pdf raises a clear RuntimeError pointing back at
this install step.
The wheel only ships SecureScan itself. The underlying scanner CLIs
(semgrep, bandit, safety, pip-licenses, checkov, trivy,
npm, nmap, ZAP, …) need to be installed separately and on PATH
for the scanners that wrap them to run. Use securescan status to
see which ones are detected:
$ securescan status
Scanner Type Available Version
semgrep code yes 1.71.0
bandit code yes 1.7.5
trivy dependency yes 0.49.1
checkov iac no (run: pip install checkov)
zap dast no (run: brew install zaproxy)
nmap network yes 7.94
...
If you do not want to manage scanner installs yourself, use the container instead.
Verify the wheel signature
Every tagged release is signed with sigstore-python. To verify the wheel:
RELEASE=v0.11.0
gh release download $RELEASE -R Metbcy/securescan \
-p 'securescan-*.whl' -p 'securescan-*.whl.sigstore.json'
pip install sigstore
sigstore verify identity \
--cert-identity "https://github.com/Metbcy/securescan/.github/workflows/release.yml@refs/tags/${RELEASE}" \
--cert-oidc-issuer 'https://token.actions.githubusercontent.com' \
--bundle securescan-${RELEASE#v}-py3-none-any.whl.sigstore.json \
securescan-${RELEASE#v}-py3-none-any.whl
The *.sigstore.json bundles ship as GitHub Release assets. PyPI
itself does not host them.
3. GitHub Action (CI/CD)
The composite action wraps securescan diff, posts the upserted PR
comment, and uploads SARIF. It tries the wheel first and falls back
to the pinned container image when scanner binaries are not on
PATH.
# .github/workflows/securescan.yml
on: pull_request
permissions:
contents: read
pull-requests: write # required for the upserted PR comment
security-events: write # required for SARIF upload
jobs:
securescan:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0 # diff needs both base and head commits
- uses: Metbcy/securescan@v1
with:
scan-types: code,dependency
fail-on-severity: high
See GitHub Action for the full input reference, inline-review mode, and permission requirements.
From source (development)
Only needed if you are contributing to SecureScan itself.
git clone https://github.com/Metbcy/securescan
cd securescan/backend
python3 -m venv venv && source venv/bin/activate
pip install -e .
pip install semgrep bandit safety pip-licenses checkov # plus any others you want
securescan serve --host 127.0.0.1 --port 8000
In a second shell:
cd securescan/frontend
npm install
npm run dev # http://localhost:3000
See Contributing for the test/lint/release loop.
What gets installed where
| Path | Contents |
|---|---|
| `securescan` (binary on PATH) | Python entry point. Routes to `serve`, `scan`, `diff`, `compare`, … |
| `~/.config/securescan/.env` | Optional persisted env vars (ZAP creds, etc). Local config. |
| SQLite DB (default `~/.securescan/scans.db`) | Scans, findings, triage state, API keys, webhooks, deliveries. |
| `/tmp/securescan-backend.log` (when serving) | Structured scan-lifecycle log lines. |
The dashboard frontend is a separate Next.js app. The container ships
only the backend; deploy the frontend independently or use
docker compose up from the repo root for an all-in-one local stack.
Next
- Quick start: your first scan — end-to-end walkthrough.
- Production checklist — when you go past `localhost`.
Quick start: your first scan
This walks through running SecureScan against the
~/Documents/securescan repo itself — backend + frontend up, an
end-to-end scan, and reading the result on the dashboard.
It assumes you have:
- Python 3.12+
- Node.js 20+
- The repo cloned at
~/Documents/securescan
1. Bring up the backend
cd ~/Documents/securescan/backend
python3 -m venv venv && source venv/bin/activate
pip install -e .
pip install semgrep bandit safety pip-licenses checkov
securescan serve --host 127.0.0.1 --port 8000
You should see something like:
INFO SECURESCAN_API_KEY not set; API is unauthenticated (dev mode).
INFO Started server process [12345]
INFO Waiting for application startup.
INFO Application startup complete.
INFO Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
Confirm liveness and readiness:
$ curl -s http://127.0.0.1:8000/health
{"status":"ok"}
$ curl -s http://127.0.0.1:8000/ready | jq .
{
"status": "ready",
"checks": {
"database": "ok",
"scanner_registry": "ok"
}
}
Dev mode means no authentication required. The startup banner
warns you. For anything past localhost, set SECURESCAN_API_KEY
or create DB-backed keys — see API keys and
Production checklist.
2. Bring up the frontend
In a second shell:
cd ~/Documents/securescan/frontend
npm install
npm run dev
Open http://localhost:3000. The topbar API
status indicator should be green: the dashboard is talking to the
backend at http://localhost:8000.
3. Kick off a scan via the API
You can use the dashboard, but the API path is the easiest to show
in a guide. Request a code + dependency scan of the SecureScan
repo itself:
curl -s -X POST http://127.0.0.1:8000/api/v1/scans \
-H 'Content-Type: application/json' \
-d '{
"target_path": "/home/you/Documents/securescan",
"scan_types": ["code", "dependency"]
}' | jq .
Response:
{
"id": "0f1a93cb-44c2-4c8e-9f92-0a7c5a2e1b51",
"target_path": "/home/you/Documents/securescan",
"scan_types": ["code", "dependency"],
"status": "pending",
"started_at": "2026-04-29T20:11:05.123456",
"completed_at": null,
"scanners_run": [],
"scanners_skipped": []
}
The backend immediately starts running the requested scanners as a
background asyncio task. Save the id — call it $SCAN_ID.
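If you want a script to block until the scan finishes (instead of watching the SSE stream in the next step), a minimal poll loop might look like this. It assumes a `GET /api/v1/scans/{id}` read endpoint returning the scan row shown above, which this guide implies but does not show directly.

```python
import json
import time
import urllib.request


def wait_for_scan(base_url: str, scan_id: str, timeout_s: float = 600) -> dict:
    # Poll the scan row until it reaches a terminal status. The endpoint
    # shape follows this guide's examples; the loop itself is a sketch.
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        with urllib.request.urlopen(f"{base_url}/api/v1/scans/{scan_id}") as resp:
            scan = json.load(resp)
        if scan["status"] in ("completed", "failed", "cancelled"):
            return scan
        time.sleep(2)
    raise TimeoutError(f"scan {scan_id} still running after {timeout_s}s")
```

In an authenticated deployment you would also attach the `X-API-Key` header to each request.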
4. Watch progress live
Open the dashboard at
http://localhost:3000/scan/<SCAN_ID>. Above the StatLine you will
see <ScanProgressPanel> with one row per scanner, each going
queued → running → complete as the orchestrator drives them. This is
the v0.7.0 SSE stream; on the wire it looks like:
curl -N "http://127.0.0.1:8000/api/v1/scans/$SCAN_ID/events"
event: scan.start
data: {"scan_types":["code","dependency"]}
event: scanner.start
data: {"name":"semgrep"}
event: scanner.complete
data: {"name":"semgrep","duration_s":4.31,"findings_count":7}
event: scanner.start
data: {"name":"bandit"}
...
event: scan.complete
data: {"findings_count":12,"risk_score":34.2}
In an authenticated deployment, browsers cannot send X-API-Key on an
EventSource. The dashboard exchanges the API key for a short-lived
signed event token via POST /api/v1/scans/{id}/event-token.
See SSE event tokens.
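The wire format above is plain Server-Sent Events, so a script can consume it without a browser. A minimal line-oriented parser — enough for this stream, handling only the `event:` and `data:` fields — could look like:

```python
def parse_sse(lines):
    # Yield (event, data) pairs from an iterable of text lines.
    # A blank line terminates each event, per the SSE framing rules.
    event, data = None, []
    for line in lines:
        line = line.rstrip("\n")
        if line.startswith("event:"):
            event = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data.append(line[len("data:"):].strip())
        elif line == "" and event is not None:
            yield event, "\n".join(data)
            event, data = None, []
```

Feed it the response body of `GET /api/v1/scans/{id}/events` line by line and stop once you see `scan.complete`, `scan.failed`, or `scan.cancelled`.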
5. Read the results
Once status flips to completed, the scan-detail page shows:
- A PageHeader with the target path, scan id, and total finding count.
- A StatLine with risk score, severity counts, scanners run, and total duration.
- A scanner-chip strip showing which scanners ran, which were skipped, and the install hint for skipped ones.
- A findings table with columns:
Column What it shows Severity critical/high/medium/low/infowith a colored dot prefix.Title One-line finding summary from the scanner. File:line Mono-spaced; click to expand the row for the matched line and AI explanation. Rule Scanner-specific rule id ( B106,python.lang.security.audit.eval-detected).Scanner Origin scanner ( semgrep,bandit,trivy, …).Compliance Tag chips: OWASP-A03,PCI-DSS-6.5.1,SOC2-CC7.1, etc. (Compliance)Status Triage verdict pill — new(default),triaged,false_positive,accepted_risk,fixed,wont_fix. (Triage)
The same data is available over the API:
curl -s "http://127.0.0.1:8000/api/v1/scans/$SCAN_ID/findings" | jq '.[0]'
{
"id": "f-2c1...",
"scanner": "semgrep",
"scan_type": "code",
"severity": "high",
"title": "Use of eval()",
"description": "...",
"file_path": "backend/securescan/cli.py",
"line": 142,
"rule_id": "python.lang.security.audit.eval-detected",
"fingerprint": "9d2f...",
"compliance_tags": ["OWASP-A03"],
"state": null,
"metadata": { "suppressed_by": null }
}
state is null until you set a triage verdict — see
Triage workflow.
6. Triage one finding
Suppose row 1 is a false positive in test code. Set the verdict:
FP=$(curl -s "http://127.0.0.1:8000/api/v1/scans/$SCAN_ID/findings" \
| jq -r '.[0].fingerprint')
curl -s -X PATCH \
"http://127.0.0.1:8000/api/v1/findings/$FP/state" \
-H 'Content-Type: application/json' \
-d '{"status":"false_positive","note":"intentional in test fixture","updated_by":"alice"}'
Response:
{
"fingerprint": "9d2f...",
"status": "false_positive",
"note": "intentional in test fixture",
"updated_at": "2026-04-29T20:14:22.000000",
"updated_by": "alice"
}
The default findings filter hides false_positive (along with
accepted_risk and wont_fix); the row disappears on next reload.
This verdict survives every later scan of the same target —
fingerprints are cross-scan stable.
7. Clean up
curl -X DELETE "http://127.0.0.1:8000/api/v1/scans/$SCAN_ID"
# 204 — scan + findings rows cascade-deleted.
# Triage verdicts persist (keyed on fingerprint, not scan id).
What you just touched
- The scan engine — see How scans work.
- The API — see API overview.
- The SSE stream — see Real-time scan progress.
- The triage workflow — see Triage.
Where to next
- Run a CI scan: GitHub Action.
- Deploy past `localhost`: Production checklist.
- Hook another tool to scan completion: Webhooks.
Architecture overview
SecureScan is three pieces that talk over a stable JSON API:
- A FastAPI backend that schedules scanner subprocesses, persists findings to SQLite, and exposes the REST API.
- A Next.js dashboard that consumes that API.
- A CLI / GitHub Action that runs scans either directly or against a backend over HTTP.
Everything centers on the finding: a normalized record produced by one of the 14 scanners, deduplicated across scanners, fingerprinted for cross-scan identity, and serialized deterministically.
Component diagram
flowchart LR
subgraph Client[Client surfaces]
CLI[securescan CLI]
Dash[Next.js dashboard]
Hook[Outbound HTTP receiver]
GHA[GitHub Action]
end
subgraph Server[FastAPI backend single uvicorn worker]
Auth[auth.py + api_keys.py]
Scans[/api/v1/scans/]
Triage[/api/v1/findings/state/]
Hooks[/api/v1/webhooks/]
Notify[/api/v1/notifications/]
Bus[(in-process event bus)]
Disp[Webhook dispatcher async task]
Pipe[Scanner orchestrator pipeline.py]
end
DB[(SQLite scans.db)]
Scanners[14 scanner subprocesses]
Dash <--> Scans
CLI --> Scans
GHA --> Scans
Scans --> Auth
Scans --> Pipe
Pipe --> Scanners
Pipe --> Bus
Bus --> Dash
Bus --> Notify
Bus --> Disp
Disp --> Hook
Scans --> DB
Triage --> DB
Hooks --> DB
Notify --> DB
Disp --> DB
Scan lifecycle
POST /api/v1/scans returns immediately with a pending scan row.
The scanners run as a background asyncio task on the same uvicorn
worker; the request handler does not block on them.
sequenceDiagram
autonumber
participant C as Client
participant API as FastAPI handler
participant DB as SQLite
participant O as Orchestrator (_run_scan)
participant S as Scanner subprocess
participant Bus as Event bus
C->>API: POST /api/v1/scans
API->>DB: INSERT scans (status=pending)
API-->>C: 200 {id, status: pending}
Note over API,O: asyncio.create_task(_run_scan(id))
O->>Bus: publish scan.start
loop for each requested scanner
O->>Bus: publish scanner.start
O->>S: spawn subprocess (semgrep / trivy / ...)
S-->>O: stdout JSON / SARIF
O->>Bus: publish scanner.complete (duration_s, findings_count)
end
O->>DB: UPDATE scans (status=completed) + INSERT findings
O->>Bus: publish scan.complete
Bus->>Bus: side-effect: enqueue webhook delivery
Bus->>Bus: side-effect: insert in-app notification
Every event in this flow is published to the in-process event bus, which fans out to:
- The dashboard's SSE subscribers (live progress).
- The outbound webhook dispatcher (webhook_dispatcher.py).
- The notifications table (bell-icon in-app feed).
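The fan-out pattern itself can be sketched in a few lines: each subscriber gets its own queue, and every published event lands in all of them. This is illustrative only, not SecureScan's actual bus implementation.

```python
import asyncio


class EventBus:
    # Minimal in-process pub/sub. Each subscriber owns an asyncio.Queue;
    # publish() fans the event out to every queue without blocking.
    def __init__(self):
        self.subscribers = []

    def subscribe(self) -> asyncio.Queue:
        q = asyncio.Queue()
        self.subscribers.append(q)
        return q

    def publish(self, event: str, payload: dict) -> None:
        for q in self.subscribers:
            q.put_nowait((event, payload))
```

Because the bus lives in one process, every subscriber (SSE handler, webhook dispatcher, notifications writer) must run on the same worker — which is exactly why the single-worker constraint below exists.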
Data model
erDiagram
scans ||--o{ findings : has
scans ||--o{ scanner_skips : has
finding_states ||--o{ finding_comments : has
webhooks ||--o{ webhook_deliveries : has
api_keys ||--|| principals : authenticates
notifications
scans {
text id PK
text target_path
text scan_types
text status
timestamp started_at
timestamp completed_at
}
findings {
text id PK
text scan_id FK
text fingerprint
text scanner
text severity
text rule_id
text file_path
int line
text metadata_json
}
finding_states {
text fingerprint PK
text status
text note
timestamp updated_at
}
finding_comments {
text id PK
text fingerprint FK
text text
}
api_keys {
text id PK
text name
text prefix
text key_hash
text scopes_json
timestamp created_at
timestamp last_used_at
timestamp revoked_at
}
webhooks {
text id PK
text url
text secret
text event_filter_json
boolean enabled
}
webhook_deliveries {
text id PK
text webhook_id FK
text event
text payload_json
text status
int attempt
timestamp next_attempt_at
}
notifications {
text id PK
text severity
text title
text body
timestamp created_at
timestamp read_at
}
The cross-cutting identity in this schema is the fingerprint: a
SHA-256 over (scanner, rule_id, file_path, normalized_line_context, cwe)
that stays stable across scans of the same target. Triage state and
comments are keyed on the fingerprint, not the scan id, which is why
deleting a scan does not lose the verdict — see
Triage workflow for the full story.
Authentication topology
flowchart LR
    Req[Incoming request] --> Mw[require_api_key]
    Mw -->|X-API-Key matches env| Env[env Principal: all scopes]
    Mw -->|ssk_* matches DB row| DB[DB Principal: row.scopes]
    Mw -->|?event_token=...| Tok[Validate HMAC + rehydrate Principal]
    Mw -->|none + AUTH_REQUIRED=0| Dev[None: dev mode passthrough]
    Mw -->|none + AUTH_REQUIRED=1| Block[401]
    Env --> Scope[require_scope]
    DB --> Scope
    Tok --> Scope
    Dev --> Scope
    Scope -->|scope OK| Handler[Route handler]
    Scope -->|missing scope| Forbid[403]
See Authentication overview for the full path
through auth.py, including event-token auth on the SSE route.
Process boundaries
| Process | Responsibility | Restart safe? |
|---|---|---|
| `uvicorn` worker (single) | Serve API, run orchestrator, dispatch webhooks, drive event bus. | Yes — pending webhook deliveries resume from DB. |
| Scanner subprocesses | One per scanner per scan, spawned by the orchestrator. | Killed when scan is cancelled (POST /cancel). |
| Frontend (Next.js) | Pure read/write client of the API. Stateless. | Yes — page reload re-subscribes the SSE stream. |
The event bus and the webhook dispatcher are both in-process
singletons. Run uvicorn with --workers 1. To scale horizontally
today, run multiple separate single-worker instances behind a
sticky-session load balancer keyed on scan_id. Multi-process pubsub
(Redis backplane) is on the roadmap.
See Single-worker constraint.
Determinism contract
For the diff-aware PR comment and the SARIF Security-tab dedup to work, the renderer must produce byte-identical output for the same inputs. SecureScan enforces this by:
- Sorting findings by a canonical key (`severity_rank desc, scanner, rule_id, file_path, line, title`).
- Excluding wall-clock timestamps from byte-identity-sensitive sections. `SECURESCAN_FAKE_NOW` pins the only time-derived field.
- Deduplicating + ordering rule lists in SARIF.
- Computing each finding's `partialFingerprints.primaryLocationLineHash` from the stable per-finding fingerprint.
- Auto-disabling AI enrichment when `CI=true` (it is non-deterministic).
Without these properties, every PR push would post a new PR comment instead of upserting the existing one, and SARIF re-uploads would look like a wave of new alerts. See Findings & severity for the fingerprint construction.
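The canonical sort reduces to an ordinary sort key. A sketch, using the finding shape from the Quick start — the numeric rank values here are assumptions, only their relative order matters:

```python
SEVERITY_RANK = {"critical": 4, "high": 3, "medium": 2, "low": 1, "info": 0}


def canonical_key(finding: dict) -> tuple:
    # Most severe first (hence the negated rank), then stable
    # tie-breakers, so two runs over the same findings always
    # emit byte-identical order.
    return (
        -SEVERITY_RANK[finding["severity"]],
        finding["scanner"],
        finding["rule_id"],
        finding["file_path"],
        finding["line"],
        finding["title"],
    )
```

`sorted(findings, key=canonical_key)` then yields the canonical order regardless of which scanner finished first.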
Source layout
| Directory | Contents |
|---|---|
| `backend/securescan/api/` | FastAPI routers (scans, triage, webhooks, notifications, keys, …). |
| `backend/securescan/scanners/` | One module per scanner; all subclass `BaseScanner`. |
| `backend/securescan/auth.py` | `Principal`, `require_api_key`, `require_scope`. |
| `backend/securescan/api_keys.py` | Key generation + salted-SHA-256 hashing. |
| `backend/securescan/event_tokens.py` | SSE event-token mint/verify. |
| `backend/securescan/webhook_dispatcher.py` | Durable webhook delivery worker. |
| `backend/securescan/pipeline.py` | The scan orchestrator (`_run_scan` lives in `api/scans.py`). |
| `backend/securescan/fingerprint.py` | Cross-scan finding identity. |
| `backend/securescan/dedup.py` | Cross-scanner deduplication. |
| `backend/securescan/scoring.py` | Risk-score formula (severity rank × scanner confidence). |
| `frontend/src/app/` | Next.js app router pages — see Dashboard tour. |
| `action/` | The composite GitHub Action. |
Next
- Run a scan from scratch: Quick start.
- Operate it past `localhost`: Production checklist.
- Wire it into CI: GitHub Action.
How scans work
A scan is an orchestrated sweep of one or more scanner subprocesses
against a target — almost always a directory in the local filesystem,
sometimes a URL (DAST) or hostname (network). The result is a row in
the scans table plus N rows in findings, normalized into a
single shape regardless of which scanner produced them.
Lifecycle
stateDiagram-v2
[*] --> pending : POST /api/v1/scans
pending --> running : orchestrator picks it up
running --> completed : every scanner returned (or skipped)
running --> failed : orchestrator raised before terminal
running --> cancelled : POST /api/v1/scans/{id}/cancel
completed --> [*]
failed --> [*]
cancelled --> [*]
Every transition emits a structured log line on the
securescan.scan logger AND publishes to the in-process event bus
(Real-time scan progress). On the bus,
each scanner has its own sub-lifecycle:
scan.start
scanner.start # one per scanner that will run
scanner.complete # OR
scanner.skipped # tool not on PATH; payload includes install_hint
scanner.failed # tool crashed; error truncated to 200 chars
scan.complete # OR
scan.failed # OR
scan.cancelled
Source: see _log_scan_event in
backend/securescan/api/scans.py.
What runs, in what order
POST /api/v1/scans accepts:
{
"target_path": "/abs/path/to/repo",
"scan_types": ["code", "dependency"],
"target_url": null,
"target_host": null
}
The orchestrator looks up scanners by scan_type
(see Supported scanners) and runs them
in registry order. Each scanner is a Python class with an async run()
that shells out to the underlying tool, parses stdout, and returns a
list of Finding objects.
scanners_run and scanners_skipped are persisted on the scan row,
so a 404 for a scanner whose binary is not installed never silently
disappears — the dashboard shows it under "Skipped (N)" with the
install hint surfaced from the scanner's install_hint property.
Cancellation
curl -X POST -H "X-API-Key: $K" \
http://127.0.0.1:8000/api/v1/scans/$SCAN_ID/cancel
- Returns 200 with `status: cancelled` if the scan was `running` or `pending`.
- Returns 409 if the scan is already `completed` / `failed` / `cancelled`.
- Returns 404 for an unknown id.
The orchestrator's asyncio task is cancelled, which propagates
CancelledError into the currently-running subprocess wrapper and
asks it to terminate.
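That propagation pattern can be sketched as follows — a simplified version of the idea, not SecureScan's exact subprocess wrapper:

```python
import asyncio


async def run_scanner(cmd: list[str]) -> int:
    # Spawn the scanner subprocess; if the surrounding asyncio task is
    # cancelled, terminate the child before re-raising so no orphaned
    # scanner process outlives the scan.
    proc = await asyncio.create_subprocess_exec(*cmd)
    try:
        return await proc.wait()
    except asyncio.CancelledError:
        proc.terminate()   # ask the tool to exit cleanly (SIGTERM)
        await proc.wait()  # reap it before propagating the cancellation
        raise
```

Cancelling the orchestrator task thus cascades: the `await proc.wait()` raises `CancelledError`, the handler terminates the child, and the cancellation keeps propagating up to mark the scan `cancelled`.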
Deletion
curl -X DELETE -H "X-API-Key: $K" \
http://127.0.0.1:8000/api/v1/scans/$SCAN_ID
- Returns 204 on success — findings are cascade-deleted.
- Returns 409 if the scan is `running` or `pending` (cancel first).
- Returns 404 for an unknown id.
Deleting a scan does not delete triage verdicts (`finding_states`) or per-finding comments. Those are keyed on the cross-scan fingerprint, so they reactivate when a matching finding reappears in a later scan. This is deliberate: a "false positive" verdict outlives the scan that produced the original finding. See Triage workflow.
Determinism
Two scans of the same git ref produce the same findings, in the same order, with the same fingerprints. This is foundational — without it, the v0.2.0 PR-comment upsert would behave like "post a new comment every push", and SARIF re-uploads to GitHub's Security tab would look like a flood of new alerts.
The relevant guarantees are:
- Findings are sorted canonically: `severity_rank desc, scanner, rule_id, file_path, line, title`.
- AI enrichment is auto-disabled when `CI=true` is set (it is non-deterministic). Pass `--ai` to override.
- Wall-clock timestamps are excluded from byte-identity-sensitive payload sections. `SECURESCAN_FAKE_NOW` pins the one time-derived field that exists.
- SARIF rule lists are deduplicated and ordered.
See Findings & severity for the fingerprint construction and Architecture overview for the end-to-end determinism contract.
CLI mode (no backend)
securescan scan ./your-repo runs the same orchestrator without
involving the backend or DB. The output goes to stdout in whatever
format you ask for:
securescan scan ./your-repo \
--type code --type dependency \
--output sarif --output-file results.sarif
securescan scan ./your-repo \
--type code \
--output json --output-file findings.json
CLI mode is what securescan diff uses internally — it scans both
sides of the diff and classifies findings into NEW / FIXED /
UNCHANGED. See CLI commands.
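The NEW / FIXED / UNCHANGED classification falls out of fingerprint set arithmetic. A sketch of the idea behind `securescan diff`, not its exact implementation:

```python
def classify_diff(base_fps: set[str], head_fps: set[str]) -> dict[str, set[str]]:
    # Fingerprints present only on the head side are NEW, only on the
    # base side are FIXED, and on both sides UNCHANGED.
    return {
        "new": head_fps - base_fps,
        "fixed": base_fps - head_fps,
        "unchanged": head_fps & base_fps,
    }
```

This is why fingerprint stability matters so much: if fingerprints drifted between the two scans, every finding would be classified as simultaneously NEW and FIXED.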
Failure modes
| Symptom | Likely cause | Where to look |
|---|---|---|
| Scan stuck in `running` past the longest scanner | A subprocess hung; the orchestrator awaits its asyncio future. | `tail /tmp/securescan-backend.log`; cancel the scan; check nmap/ZAP. |
| All scanners come back as `skipped` | None of the underlying CLIs are on PATH. | `securescan status` |
| `scan.failed` with `error: ...` | Orchestrator-level exception (DB write failed, target invalid). | Backend log, `securescan.scan` logger. |
| Specific scanner repeatedly `scanner.failed` | Tool present but crashed on input; the `error` field has details. | Re-run the tool by hand on the same target. |
tail -f /tmp/securescan-backend.log | grep securescan.scan shows
the scan-lifecycle line for every event. The same data is on the
SSE stream at /api/v1/scans/{id}/events — use whichever fits your
debugging surface.
Next
- Scan types — what each `scan_type` covers.
- Supported scanners — what each tool finds.
- Findings & severity — the finding shape.
- Suppression — three ways to silence a finding.
Scan types
A scan_type selects which family of scanners runs. The orchestrator
expands ["code", "dependency"] into the union of every scanner whose
scan_type matches.
You can pass any subset; if none are passed the CLI defaults to code
for fast PR feedback. There are six families.
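The expansion is a straightforward union over the registry, preserving registry order. A sketch — the registry shape here is an assumption, not the real `ALL_SCANNERS` structure:

```python
from dataclasses import dataclass


@dataclass
class Scanner:
    name: str
    scan_type: str


# Illustrative subset of the registry, in registry order.
REGISTRY = [
    Scanner("semgrep", "code"),
    Scanner("bandit", "code"),
    Scanner("trivy", "dependency"),
    Scanner("nmap", "network"),
]


def select_scanners(scan_types: list[str]) -> list[Scanner]:
    # Union of every registered scanner whose scan_type was requested.
    wanted = set(scan_types)
    return [s for s in REGISTRY if s.scan_type in wanted]
```

So `["code", "dependency"]` selects semgrep, bandit, and trivy in that order, and an unrequested family (network) is never touched.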
Type table
| Type | Default for securescan diff | Scanners | Typical target |
|---|---|---|---|
| code | ✅ yes | semgrep, bandit, secrets, git-hygiene | Source tree |
| dependency | | trivy, safety, npm-audit, licenses | Source tree (manifests) |
| iac | | checkov, dockerfile | Source tree |
| baseline | | baseline (built-in) | Host or filesystem |
| dast | | builtin_dast, zap | URL |
| network | | nmap | Hostname / IP / range |
The mapping lives in
backend/securescan/scanners/__init__.py
(ALL_SCANNERS registry) and each scanner's scan_type class
attribute.
code
Static analysis of source files in the target tree. Picks up:
- SAST issues (SQL injection, XSS, command injection, path traversal) via Semgrep with `--config auto` plus any custom rule packs you declare in `.securescan.yml`.
- Python-specific insecure imports and bandit's signatures.
- Secrets (hardcoded API keys, tokens, private keys) via the built-in regex bank and Gitleaks.
- Git hygiene — sensitive files committed to the repo, missing `.gitignore` entries.
Example:
securescan scan ./your-repo --type code --output text
[HIGH] semgrep backend/api.py:42 Use of eval()
[HIGH] bandit backend/db.py:12 SQL injection via str.format
[MEDIUM] secrets config/local.yml:5 AWS access key
The same call via the API:
curl -X POST http://127.0.0.1:8000/api/v1/scans \
-H 'Content-Type: application/json' \
-H "X-API-Key: $K" \
-d '{"target_path":"/abs/path/to/repo","scan_types":["code"]}'
dependency
Manifest + lockfile vulnerability scanning:
- `trivy` — handles `requirements.txt`, `package.json`, `Gemfile.lock`, `Cargo.lock`, `go.sum`, `composer.lock`, `Pipfile.lock`, etc.
- `safety` — Python dependencies against the safety DB.
- `npm-audit` — npm advisories on transitive deps.
- `licenses` — copyleft / unknown-license risks via `pip-licenses`.
Example:
securescan scan ./node-project --type dependency --output sarif \
--output-file deps.sarif
The licenses scanner reports compliance findings (unknown / GPL / AGPL detected), not CVEs. It is part of the dependency family because the data source is the manifest. Filter it out with `.securescan.yml`'s `ignored_rules` if your org has explicit copyleft approval.
iac
Infrastructure-as-code misconfigurations:
- `checkov` — Terraform, Kubernetes manifests, Helm charts, CloudFormation, Dockerfiles. Hundreds of policies out of the box.
- `dockerfile` — opinionated checks for `:latest` base images, running as root, `curl | sh` patterns, secrets in `ENV`.
securescan scan ./infra --type iac --output text
The dockerfile scanner is fast and runs even when checkov is not installed; checkov is the heavyweight, broader source.
baseline
Host-config audit: SSH daemon settings, /etc/passwd /
/etc/shadow perms, ~/.ssh perms, kernel parameters, password
policy.
The behavior depends on `target_path`:
- `target_path = "/"` — host-wide probes (the default behavior).
- Anything else — probes `<target>/etc/ssh/sshd_config`, `<target>/etc/passwd`, `<target>/etc/shadow`. Skips host-only checks like `~/.ssh` perms. If none of those files are present, emits one info-severity finding pointing at `--baseline-host-probes`.
# Audit the running host (requires read access to /etc/...)
securescan scan / --type baseline
# Audit a chrooted filesystem
securescan scan /mnt/snapshot --type baseline
# Force host-scope probes alongside a target scan
securescan scan ./my-config --type code --baseline-host-probes
Every baseline finding gets a metadata.baseline_scope tag of
host or target so the audit trail records which mode produced
the finding.
dast
Dynamic application security testing — runs against a live URL:
- `builtin_dast` — header / cookie / info-disclosure checks. No external dependency. Fast.
- `zap` — full ZAP active+passive scan. Requires a running ZAP daemon at `SECURESCAN_ZAP_ADDRESS`.
securescan scan https://staging.example.com \
--type dast \
--output text
For the ZAP scanner, set credentials in
~/.config/securescan/.env:
SECURESCAN_ZAP_ADDRESS=http://127.0.0.1:8090
SECURESCAN_ZAP_API_KEY=your-key
Only run DAST against systems you own or have explicit authorization
to test. ZAP active mode is intrusive. The default securescan diff
in CI does not include dast — you have to opt in with
--type dast (or scan-types: code,dast on the GitHub Action).
network
Network-perimeter probe via nmap. Reports open ports, detected
service banners, and a coarse risk classification (telnet, RDP, SMB,
exposed databases, etc.).
securescan scan 10.0.0.1 --type network --output text
Or a CIDR / hostname:
securescan scan example.com --type network
securescan scan 10.0.0.0/24 --type network --output sarif --output-file net.sarif
Combining types
Comma-separated list — all are unioned together:
securescan scan ./your-repo --type code --type dependency --type iac
Or in .securescan.yml:
scan_types:
- code
- dependency
- iac
The PR-mode default is scan-types: code because it produces fast
feedback on every push. Adding dependency is the most common
upgrade for a busy repo.
Picking what to run
- Scanning a PR diff? `code` (default) — add dependency / iac as your team adopts them.
- Scanning a release tag before publishing? `code,dependency,iac`.
- Auditing a production host? `baseline` against `/`.
- Verifying a deployed service? `dast` against the URL.
- Surveying a subnet? `network` (with authorization).
Next
- Supported scanners — what each tool produces.
- Suppression — silencing rules across types.
- Compliance — how findings map to OWASP / SOC 2 / PCI-DSS.
Supported scanners
SecureScan ships 14 scanners. Each is a Python class that subclasses
BaseScanner and shells out to the underlying tool. The registry is
backend/securescan/scanners/__init__.py;
each module is named after the scanner.
Registry
| Scanner | Module | scan_type | What it finds |
|---|---|---|---|
| semgrep | scanners/semgrep.py | code | SQLi, XSS, command injection, hardcoded secrets via Semgrep's rule library. |
| bandit | scanners/bandit.py | code | Python-specific security issues, insecure imports. |
| secrets | scanners/secrets.py | code | Hardcoded credentials, API keys, tokens, private keys. |
| git-hygiene | scanners/gitleaks.py | code | Sensitive files committed to repo, gitleaks rules, missing .gitignore protections. |
| trivy | scanners/trivy.py | dependency | Known CVEs in package manifests and lockfiles across many ecosystems. |
| safety | scanners/safety.py | dependency | Python dependency vulnerabilities from the safety DB. |
| npm-audit | scanners/npm_audit.py | dependency | npm package advisories and transitive vulns. |
| licenses | scanners/license_checker.py | dependency | Copyleft / unknown / restricted license findings via pip-licenses and the license field of npm packages. |
| checkov | scanners/checkov.py | iac | Terraform, Kubernetes, Helm, CloudFormation, Dockerfile misconfigurations. |
| dockerfile | scanners/dockerfile.py | iac | Insecure Docker patterns: :latest, root user, curl | sh, secrets in ENV. |
| baseline | scanners/baseline.py | baseline | SSH config, /etc/passwd perms, password policy, kernel params. |
| builtin_dast | scanners/dast_builtin.py | dast | Missing security headers, info disclosure, insecure cookie flags. No external dep. |
| zap | scanners/zap_scanner.py | dast | OWASP ZAP active + passive scan against a URL. |
| nmap | scanners/nmap_scanner.py | network | Open ports, service detection, risk classification. |
To see what is installed and reachable on the current host:
$ securescan status
Scanner Type Available Version Notes
semgrep code yes 1.71.0
bandit code yes 1.7.5
trivy dependency yes 0.49.1
safety dependency yes 2.3.5
checkov iac no pip install checkov
npm-audit dependency yes npm 10.x uses ambient `npm` on PATH
zap dast no /usr/share/zaproxy/zap.sh; recommended port 8090
nmap network yes 7.94
...
The same data is at GET /api/v1/dashboard/status — the dashboard's
/scan page reads it on mount and disables categories whose scanners
are all unavailable. See Dashboard tour.
Per-scanner notes
Semgrep
- Uses --config auto by default. To override, set semgrep_rules in .securescan.yml:

      semgrep_rules:
        - .securescan/rules/secrets.yml
        - .securescan/rules/unsafe-deserialization.yml

  When set, this replaces --config auto with one --config <path> per entry. Paths are relative to the config file.
- Rule IDs surface as python.lang.security.audit.eval-detected etc. Use them in severity_overrides: and ignored_rules: to tune.
Bandit
- Runs against Python files only. Rule IDs are B<NNN> (e.g. B106 = hardcoded password).
- One Bandit gotcha: it scans __init__.py and test files too. Use an inline # securescan: ignore B106 on test fixtures to silence intentional-by-design findings.
Trivy
- The heavyweight dependency scanner. Picks up most ecosystems out of the box (requirements.txt, package-lock.json, Cargo.lock, go.sum, composer.lock, Pipfile.lock).
- Updates its DB on first run; allow ~30s extra latency on a cold cache.
ZAP
- Requires a separately running ZAP daemon. The scanner connects to the daemon's HTTP API:

      # ~/.config/securescan/.env
      SECURESCAN_ZAP_ADDRESS=http://127.0.0.1:8090
      SECURESCAN_ZAP_API_KEY=your-key

- The Arch Linux launcher is auto-detected at /usr/share/zaproxy/zap.sh. The scanner's install_hint recommends port 8090 because 8080 is commonly busy.
nmap
- Default scan is non-intrusive (TCP connect, top 1000 ports, service banners). Risk classification flags exposed databases (3306, 5432, 6379, 27017), unencrypted protocols (telnet 23, FTP 21), and SMB (445).
nmap is not passive. Only run it against networks you own or have explicit written authorization to scan. SecureScan does not enforce scope authorization — that is your responsibility.
baseline
- The only built-in scanner (no external CLI). Implements every probe directly in Python.
- Probes are categorised as host or target scope; see Scan types for how target_path selects between them.
- Surfaces metadata.baseline_scope = "host" | "target" on every finding for the audit trail.
Adding scanners
Adding a new scanner means dropping a new module under
backend/securescan/scanners/ that subclasses BaseScanner and
appending an instance to ALL_SCANNERS. The base class handles:
- Subprocess spawn + cancellation.
- Stdout / stderr capture.
- install_hint + availability detection.
- Wrapping returned dicts into Finding instances with the right scan_type, scanner name, and severity normalization.
The BaseScanner interface lives in
backend/securescan/scanners/base.py.
That said, the v0.9.0 contract treats the registry as fixed — new
scanners should land via PR rather than runtime registration.
Next
- Findings & severity — the normalized shape every scanner outputs.
- Suppression — silencing a noisy scanner / rule.
- Compliance — mapping rule IDs to frameworks.
Findings & severity
Every scanner output is normalized to the same finding shape. That
shape is what the API returns, what securescan diff compares, what
SARIF / JSON / CSV / JUnit exporters serialize, and what the dashboard
renders.
The Finding shape
{
"id": "f-2c1a93cb",
"scan_id": "0f1a93cb-44c2-4c8e-9f92-0a7c5a2e1b51",
"scanner": "semgrep",
"scan_type": "code",
"severity": "high",
"title": "Use of eval()",
"description": "Detected use of eval(); evaluating arbitrary input is dangerous.",
"file_path": "backend/securescan/cli.py",
"line": 142,
"column": 8,
"rule_id": "python.lang.security.audit.eval-detected",
"cwe": "CWE-95",
"fingerprint": "9d2f3a1b8c4e5f6a7b8c9d0e1f2a3b4c5d6e7f8a9b0c1d2e3f4a5b6c7d8e9f0a1",
"compliance_tags": ["OWASP-A03", "PCI-DSS-6.5.1"],
"metadata": {
"suppressed_by": null,
"original_severity": null
}
}
The Pydantic model is Finding in
backend/securescan/models.py.
The dashboard's findings endpoint (GET /api/v1/scans/{id}/findings)
returns FindingWithState — every field above plus an optional
state: FindingState | null for the triage verdict
(see Triage workflow).
Severity
Five levels. The severity palette is a single tonal ramp around a warm hue, not stoplight RGB:
| Level | Meaning | Default --fail-on-severity behavior |
|---|---|---|
| critical | Drop-everything. Active exploitation likely. | Fail |
| high | Real risk. Fix before release. | Fail |
| medium | Should fix; not blocking. | Fail when --fail-on-severity=medium |
| low | Nice to fix. | |
| info | Informational; not actionable on its own. | |
Severity is per-scanner but normalized here. Different scanners report on different scales (Trivy uses CVSS, Bandit uses LOW/MEDIUM/HIGH, Semgrep uses INFO/WARNING/ERROR), and they all map into this five-level common denominator.
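As an illustration of that normalization, here is a minimal sketch. The mapping tables below are assumptions for illustration only — the actual tables live inside each scanner class, not in this shape:

```python
# Hypothetical normalization tables — illustrative, not SecureScan's code.
LEVELS = ["info", "low", "medium", "high", "critical"]

BANDIT_MAP = {"LOW": "low", "MEDIUM": "medium", "HIGH": "high"}
SEMGREP_MAP = {"INFO": "info", "WARNING": "medium", "ERROR": "high"}

def from_cvss(score: float) -> str:
    """Map a CVSS base score (the scale Trivy reports) onto the five levels."""
    if score >= 9.0:
        return "critical"
    if score >= 7.0:
        return "high"
    if score >= 4.0:
        return "medium"
    if score > 0.0:
        return "low"
    return "info"
```

Whatever the scanner-native scale, the output is always one of the five strings in `LEVELS`, so downstream filtering and --fail-on-severity comparisons stay uniform.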
You can override severity per rule via .securescan.yml:
severity_overrides:
python.lang.security.audit.dangerous-system-call: medium
python.lang.security.audit.eval-detected: low
When an override applies, the original severity is preserved on
metadata.original_severity, and the dashboard renders
severity (was: original) in the row so the audit trail stays
visible.
Risk score
The scan summary (GET /api/v1/scans/{id}/summary) carries a
risk_score field, a single number aimed at trend lines and
quarterly reviews. Roughly, it is weighted by severity rank
(critical/high count for far more than low/info) and scanner confidence.
The exact formula lives in
backend/securescan/scoring.py
and is intentionally not documented in detail here — the score is
useful as a trend indicator, not as a precise metric to negotiate.
For decisions, look at the severity counts directly.
For the dashboard Overview page's trend chart and the scan-detail
StatLine, severity counts are the primary metric. risk_score is a
single rolled-up number for headline use.
Fingerprints — cross-scan identity
Every finding gets a deterministic fingerprint:
sha256(
scanner | rule_id | file_path | normalized_line_context | cwe
)
Construction is in
backend/securescan/fingerprint.py.
The normalized_line_context is the matched line with whitespace
collapsed and trivial reformat normalized — so renaming a variable
shifts the fingerprint, but reformatting the file does not.
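A minimal sketch of the recipe above. The real construction is in backend/securescan/fingerprint.py; the helper names here are hypothetical, and the only normalization assumed is whitespace collapsing:

```python
import hashlib
import re

def normalize_context(line: str) -> str:
    """Collapse whitespace runs so pure reformatting keeps the hash stable."""
    return re.sub(r"\s+", " ", line).strip()

def fingerprint(scanner, rule_id, file_path, line_context, cwe) -> str:
    """sha256(scanner | rule_id | file_path | normalized_line_context | cwe)."""
    material = "|".join(
        [scanner, rule_id, file_path, normalize_context(line_context), cwe or ""]
    )
    return hashlib.sha256(material.encode("utf-8")).hexdigest()

# Reformatting (extra spaces) does not move the fingerprint;
# renaming a variable would.
a = fingerprint("semgrep", "eval-detected", "api.py", "data = eval(payload)", "CWE-95")
b = fingerprint("semgrep", "eval-detected", "api.py", "data  =  eval(payload)", "CWE-95")
```

Note the hash covers the matched line's content, not its line number — so a finding that merely shifts down when code is added above it keeps its identity.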
This identity is what keeps:
- Triage verdicts sticky across rescans (finding_states is keyed on fingerprint, not (scan_id, finding_id)).
- PR comment threads stable across re-runs (the inline-review poster looks up existing comments by fingerprint and PATCHes them rather than posting duplicates).
- SARIF re-uploads clean — partialFingerprints.primaryLocationLineHash is set from the same value, so GitHub's Security tab dedupes.
- securescan compare sane — NEW / STILL_PRESENT / DISAPPEARED is computed by fingerprint set difference.
Practically: if you triage a finding as false_positive once, it
stays a false positive across every later scan of the same target,
even after DELETE /scans/{id}. See Triage.
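The NEW / STILL_PRESENT / DISAPPEARED classification is plain set arithmetic over fingerprints — a sketch of the idea, not the securescan diff implementation itself:

```python
def classify(base_fps: set, head_fps: set) -> dict:
    """Diff two scans by fingerprint set difference."""
    return {
        "NEW": head_fps - base_fps,              # only in the head scan
        "STILL_PRESENT": head_fps & base_fps,    # in both
        "DISAPPEARED": base_fps - head_fps,      # fixed (or moved) since base
    }

buckets = classify({"fp-a", "fp-b"}, {"fp-b", "fp-c"})
```

Because the inputs are fingerprints rather than (file, line) pairs, the classification is stable across reformatting and unrelated code motion.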
Deduplication
Multiple scanners can find the same underlying issue from different
angles — Bandit and Semgrep both flag eval(). The orchestrator runs
dedup_key from
backend/securescan/dedup.py
across the union of scanner outputs and keeps the higher-confidence
finding (the one whose scanner is more authoritative for that rule
class).
The dropped findings still show up in the scan's lifecycle log — they are filtered before persistence, so the database only stores the canonical finding for each underlying issue.
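A sketch of that keep-the-more-authoritative-finding logic. The dedup key and the authority ranking below are assumptions for illustration; the real rule lives in backend/securescan/dedup.py:

```python
# Assumed dedup key: (file_path, line, cwe). Assumed authority ranking —
# illustrative only, not SecureScan's actual per-rule-class table.
AUTHORITY = {"semgrep": 2, "bandit": 1}

def dedupe(findings):
    """Keep one canonical finding per underlying issue."""
    best = {}
    for f in findings:
        key = (f["file_path"], f["line"], f["cwe"])
        cur = best.get(key)
        if cur is None or AUTHORITY.get(f["scanner"], 0) > AUTHORITY.get(cur["scanner"], 0):
            best[key] = f
    return list(best.values())

findings = [
    {"scanner": "bandit", "file_path": "api.py", "line": 42, "cwe": "CWE-95"},
    {"scanner": "semgrep", "file_path": "api.py", "line": 42, "cwe": "CWE-95"},
]
canonical = dedupe(findings)
```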
Severity badges
The dashboard renders severity as a colored dot prefix + the level text:
● critical coral background
● high burnt orange
● medium saffron
● low dusty teal (NOT bright blue)
● info ash
The exact OKLCH values live in
frontend/src/app/globals.css.
There is no neon red / yellow / green — see DESIGN.md for
the rationale.
Compliance tags
Each finding can carry one or more compliance_tags — strings like
OWASP-A03, PCI-DSS-6.5.1, SOC2-CC7.1. The mapping engine
(backend/securescan/compliance.py) matches by CWE, rule_id, or
keyword and the dashboard renders chips per finding plus a coverage
summary on the Overview page. See Compliance for
which frameworks are mapped and how.
Suppression metadata
When a finding is suppressed (inline comment, config rule, or
baseline), metadata.suppressed_by is set to one of:
"inline"—# securescan: ignore RULE-IDon the line."config"—.securescan.ymlignored_rules."baseline"— present in the saved baseline JSON.
By default, suppressed findings are hidden from CI output (PR
comments, SARIF) but rendered on a TTY (and in the dashboard via the
"Show suppressed" toggle) with a [SUPPRESSED:<reason>] prefix so
you can audit the breakdown without re-running. Force visibility
everywhere with --show-suppressed.
See Suppression.
Output formats
| Format | Where it lives | Use |
|---|---|---|
| text | CLI default for TTY runs. | Human-readable terminal output. |
| json | --output json — finding records as a JSON array. | Snapshot mode, downstream tools, baselines. |
| sarif | --output sarif — SARIF v2.1.0 with partialFingerprints. | GitHub Security tab, third-party SARIF readers. |
| csv | --output csv — one row per finding. | Spreadsheet import, compliance reports. |
| junit | --output junit — failures = findings. | CI test-result tabs. |
| github-pr-comment | --output github-pr-comment — markdown with <!-- securescan:diff --> upsert marker. | The default for securescan diff. |
| github-review | --output github-review — payload for GitHub's Reviews API. | Inline-review mode of the GitHub Action. |
All formats produce byte-identical output for the same inputs — see Architecture: determinism contract.
Next
- Suppression — three ways to silence a finding.
- Triage — verdicts that survive across scans.
- Compliance — framework mapping.
Suppression
Three independent suppression mechanisms, with a fixed precedence:
inline > config > baseline
The most-local mechanism wins; an inline comment defeats a config
rule, a config rule defeats a baseline entry. Each mechanism stamps
metadata.suppressed_by on the finding so the audit trail records
which path silenced it.
1. Inline ignore comments
Place a comment on the finding's line, or the line above:
data = eval(payload) # securescan: ignore python.lang.security.audit.eval-detected
# securescan: ignore-next-line python.lang.security.audit.eval-detected
result = eval(other_payload)
Recognized comment styles: #, //, --. Multiple rule IDs are
comma-separated. * is a wildcard.
// securescan: ignore-next-line javascript.lang.security.detect-eval, javascript.lang.security.detect-non-literal-regex
const fn = new Function(userCode);
-- securescan: ignore * (wildcard — silences EVERY rule on this line)
SELECT * FROM users WHERE id = '${user_id}';
When this mechanism applies, metadata.suppressed_by = "inline".
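The documented syntax (#, //, -- markers, comma-separated rule IDs, the * wildcard, ignore vs. ignore-next-line) can be matched with a small sketch like the following — an illustration, not the backend's actual parser:

```python
import re

# Matches "# securescan: ignore R1, R2" and the //, -- comment styles.
IGNORE_RE = re.compile(r"(?:#|//|--)\s*securescan:\s*(ignore(?:-next-line)?)\s+(.+)")

def ignored_rules_on(line: str):
    """Return (directive, rule-id set) for a line, or (None, empty set)."""
    m = IGNORE_RE.search(line)
    if not m:
        return None, set()
    directive, ids = m.groups()
    return directive, {r.strip() for r in ids.split(",")}

def is_suppressed(rule_id: str, line: str, prev_line: str = "") -> bool:
    d, ids = ignored_rules_on(line)
    if d == "ignore" and ("*" in ids or rule_id in ids):
        return True
    d, ids = ignored_rules_on(prev_line)
    return d == "ignore-next-line" and ("*" in ids or rule_id in ids)
```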
The Metbcy/securescan@v1 GitHub Action with pr-mode: inline and
inline-suggestions: true posts each finding with a one-click
suggestion block adding the # securescan: ignore RULE-ID
comment above the line. Reviewers can apply it with the GitHub UI's
"Commit suggestion" button. See GitHub Action.
2. Config-driven rules (.securescan.yml)
Repo-wide rule list. Best for rules that fire too often to silence inline.
# .securescan.yml at the repo root
ignored_rules:
- python.lang.security.audit.dynamic-django-attribute
- B106 # bandit: hardcoded password (we use a vault)
- python.django.security.audit.csrf-exempt-without-csrf-token
The config file is auto-detected: securescan walks up from the
target directory until it finds .securescan.yml or hits a .git/
boundary. To validate:
$ securescan config validate
.securescan.yml: OK
scan_types: code, dependency
ignored_rules: 3
severity_overrides: 2
Validation catches typos in severity values, missing rule-pack
paths, and ignored_rules ↔ severity_overrides collisions before
they bite at scan time.
When this mechanism applies, metadata.suppressed_by = "config".
Severity overrides (related)
Not strictly suppression, but adjacent: severity_overrides rewrites
a rule's severity post-scan, preserving the original on
metadata.original_severity:
severity_overrides:
python.lang.security.audit.dangerous-system-call: medium
some.noisy.rule: low
The dashboard renders severity (was: high) so reviewers know the
override was applied. See Findings & severity.
3. Baseline (legacy findings)
Adopt-on-an-existing-codebase mechanism. Snapshot the current set of findings; subsequent scans suppress everything in the snapshot.
# Once, when adopting SecureScan:
securescan baseline # writes .securescan/baseline.json (deterministic, git-friendly)
git add .securescan/baseline.json
git commit -m "chore: SecureScan baseline"
# Every PR / CI run:
securescan diff . --base-ref main --head-ref HEAD \
--baseline .securescan/baseline.json
The baseline file is canonicalized and byte-deterministic: no timestamps, relative target_path, sorted finding entries. Two identical scans produce two identical baseline files, so it diffs cleanly in code review.
When this mechanism applies, metadata.suppressed_by = "baseline".
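The determinism property is easy to see in a sketch: sort entries, fix key order, drop anything non-reproducible. Function and field names here are assumptions; the real writer is the securescan baseline command:

```python
import json

def canonical_baseline(findings) -> str:
    """Byte-deterministic baseline blob: sorted entries, sorted keys,
    no timestamps, trailing newline — so reruns diff cleanly in git."""
    entries = sorted(
        ({"fingerprint": f["fingerprint"], "rule_id": f["rule_id"]} for f in findings),
        key=lambda e: e["fingerprint"],
    )
    return json.dumps({"findings": entries}, indent=2, sort_keys=True) + "\n"
```

Input order does not matter: scanning the same target twice, even with scanners finishing in a different order, yields the identical file.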
Refreshing the baseline
When findings get fixed, refresh:
securescan baseline # overwrites .securescan/baseline.json
# What disappeared since the last baseline?
securescan compare .securescan/baseline.json
The compare subcommand reports DISAPPEARED findings — your
remediation drift. Useful at the end of a sprint to confirm the
work landed.
Precedence in detail
flowchart TD
F[Finding from scanner] --> Q1{Has inline ignore<br>on this line?}
Q1 -->|yes| Inline[suppressed_by: inline]
Q1 -->|no| Q2{Rule listed in<br>ignored_rules?}
Q2 -->|yes| Config[suppressed_by: config]
Q2 -->|no| Q3{Fingerprint in<br>baseline.json?}
Q3 -->|yes| Baseline[suppressed_by: baseline]
Q3 -->|no| Kept[kept — visible in output]
Inline --> Out[output filtering]
Config --> Out
Baseline --> Out
Kept --> Out
The "winner" is the most-local applicable mechanism. If a finding is
both inline-ignored AND in the baseline, suppressed_by = "inline".
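The flowchart reduces to three ordered checks — a sketch assuming the suppression inputs are already resolved into simple lookups (the function name is hypothetical):

```python
def suppressed_by(finding, inline_ignores, ignored_rules, baseline_fps):
    """Return the winning (most-local) suppression source, or None.
    inline_ignores: {(file_path, line): set of rule IDs}
    ignored_rules:  set of rule IDs from .securescan.yml
    baseline_fps:   set of fingerprints from baseline.json
    """
    loc = (finding["file_path"], finding["line"])
    if finding["rule_id"] in inline_ignores.get(loc, set()):
        return "inline"          # most local — wins over everything
    if finding["rule_id"] in ignored_rules:
        return "config"
    if finding["fingerprint"] in baseline_fps:
        return "baseline"        # least local — only if nothing else applied
    return None                  # kept — visible in output
```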
What gets hidden
By default:
| Surface | Suppressed shown? | How to override |
|---|---|---|
| securescan diff PR comment | No | --show-suppressed |
| SARIF (--output sarif) | No | --show-suppressed |
| CSV (--output csv) | No | --show-suppressed |
| securescan diff on TTY | Yes (with prefix) | --no-suppress to disable suppression entirely |
| Dashboard findings table | No | Toggle: "Show suppressed findings" (per-page) |
| --fail-on-severity counting | No | Suppressed findings never gate the build by design |
--fail-on-severity intentionally counts only kept findings.
A baseline-suppressed critical does not turn a clean PR red — that is
the whole point of baselining.
Audit: what was suppressed?
When you want to see the breakdown:
$ securescan diff . --base-ref main --head-ref HEAD --show-suppressed
[NEW]
[HIGH] semgrep backend/api.py:42 Use of eval()
[MEDIUM] secrets config.yml:5 AWS access key
[SUPPRESSED:inline]
[HIGH] semgrep tests/fixtures/eval.py:3 intentional eval in fixture
[SUPPRESSED:config]
[LOW] semgrep migrations/0001.py:12 ignored_rules: noisy-rule
[SUPPRESSED:baseline]
[HIGH] bandit backend/legacy.py:88 (legacy finding)
Summary
NEW: 2
Suppressed: 3 (inline=1, config=1, baseline=1)
fail-on-severity: none
The PR comment summary table also includes a
"Suppressed: N (inline=I, config=C, baseline=B)" row when
--show-suppressed is set so reviewers can audit without re-running.
Disabling suppression entirely (kill switch)
Sometimes you want every finding visible — a review pass, an audit, and so on:
securescan diff . --base-ref main --head-ref HEAD --no-suppress
--no-suppress ignores inline comments, config rules, AND the
baseline. Useful but rarely the right CI default.
Picking the right mechanism
- One bad fixture line → inline comment.
- A scanner rule that fires hundreds of times → ignored_rules.
- Adopting SecureScan on an old codebase → securescan baseline.
- A rule that fires "right" on production code but is wrong here → severity_overrides to lower its weight first; ignored_rules only if it is genuinely wrong.
- An entire scan family that does not apply to your repo → just don't pass that --type. Suppression is for findings, not scanners.
Next
- Triage workflow — durable verdicts beyond suppression.
- Findings & severity — the underlying model.
- GitHub Action — inline-suggestions: true for one-click ignores.
Triage workflow
Triage is how you record what you decided about a finding, durably, across rescans of the same target. Introduced in v0.7.0, it sits one layer above Suppression: suppression silences a finding, triage records the human verdict on a finding — including whether it was actually fixed, accepted as risk, or won't be fixed.
Triage statuses
new (default — no verdict recorded)
triaged (acknowledged; under investigation)
false_positive (scanner is wrong)
accepted_risk (real, but accepted by the team)
fixed (the underlying code was changed; verify on rescan)
wont_fix (real, but out of scope to fix)
The dashboard's findings table renders a colored status pill in the
"Status" column. The default filter hides
{false_positive, accepted_risk, wont_fix} — those are decisions, the
table no longer needs to draw attention to them.
Scope: per-fingerprint, not per-scan
Triage state is keyed on the cross-scan fingerprint — not the scan id. Three consequences:
- A verdict survives every later scan of the same target. Mark a finding false_positive once; the next scan, and the one after, keep that verdict.
- A verdict survives DELETE /scans/{id}. The triage row stays; it reactivates when a matching fingerprint appears again.
- A "fixed" finding shows up loud on regression. If it reappears in a later scan, the row is rendered with the fixed pill + strikethrough — the team can see at a glance that something came back.
This is by design. Triage state and per-finding comments are an audit trail, not scan metadata.
Setting a verdict
FP="9d2f3a1b8c4e5f6a..."
curl -X PATCH \
"http://127.0.0.1:8000/api/v1/findings/$FP/state" \
-H "X-API-Key: $K" \
-H 'Content-Type: application/json' \
-d '{
"status": "false_positive",
"note": "intentional eval in test fixture",
"updated_by": "alice"
}'
Response:
{
"fingerprint": "9d2f3a1b8c4e5f6a...",
"status": "false_positive",
"note": "intentional eval in test fixture",
"updated_at": "2026-04-29T20:14:22.000000",
"updated_by": "alice"
}
PATCH semantics here are replace, not merge. Omitting note
clears the prior note. Use the comments thread for incremental
remarks; reserve note for a one-line "why" on the verdict.
The endpoint requires the write scope — see
Scopes. Source:
backend/securescan/api/triage.py.
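For scripting verdicts without curl, a stdlib-only sketch that builds the same PATCH request (the function name is hypothetical; base URL and key are whatever your deployment uses):

```python
import json
import urllib.request

def build_verdict_request(base_url, api_key, fingerprint, status, note, updated_by):
    """Build (but don't send) the PATCH /findings/{fp}/state request."""
    body = json.dumps(
        {"status": status, "note": note, "updated_by": updated_by}
    ).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/api/v1/findings/{fingerprint}/state",
        data=body,
        method="PATCH",
        headers={"X-API-Key": api_key, "Content-Type": "application/json"},
    )

# Send with: urllib.request.urlopen(build_verdict_request(...))
```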
Comments
Each finding has a per-fingerprint comments thread. Like the verdict, comments are durable across scans.
# List comments
curl "http://127.0.0.1:8000/api/v1/findings/$FP/comments" \
-H "X-API-Key: $K"
# Add a comment
curl -X POST "http://127.0.0.1:8000/api/v1/findings/$FP/comments" \
-H "X-API-Key: $K" \
-H 'Content-Type: application/json' \
-d '{"text":"Reviewed with security team — green-lit.","author":"bob"}'
# Delete a comment by id
curl -X DELETE \
"http://127.0.0.1:8000/api/v1/findings/$FP/comments/c-12ab" \
-H "X-API-Key: $K"
Comments require the write scope (add and delete); listing
requires read.
The dashboard renders the comments thread in the expanded row panel, lazy-loaded the first time you open it. Add / delete are inline.
In the dashboard
The Scan Detail page's findings table:
- New Status column shows the pill (color-coded; fixed adds strikethrough + accent-green).
- Sticky filter bar has a status filter chip strip in addition to severity chips. Triage and suppression filters are independent and AND-combined.
- Default-hide set is {false_positive, accepted_risk, wont_fix} — toggle them via the filter to see them.
- Expand a row → Triage panel (status dropdown + note textarea) and Comments panel (lazy-loaded; add / delete inline).
See Dashboard tour.
Endpoints
| Method | Path | Scope | Notes |
|---|---|---|---|
| PATCH | /api/v1/findings/{fingerprint}/state | write | Set / replace verdict + note. Server stamps updated_at. |
| GET | /api/v1/findings/{fingerprint}/comments | read | Returns [] when no comments. ASC by created_at. |
| POST | /api/v1/findings/{fingerprint}/comments | write | 201 with the persisted row. |
| DELETE | /api/v1/findings/{fingerprint}/comments/{comment_id} | write | 204 on success, 404 if id is unknown. |
The fingerprint is not validated against any current scan — it's intentional that you can set state on a fingerprint that does not correspond to any current finding. The state row reactivates when the finding reappears.
Workflow with rescans
sequenceDiagram
autonumber
participant Dev as Developer
participant API as SecureScan
participant DB as DB
Dev->>API: POST /api/v1/scans (target=./repo)
API-->>Dev: scan_1 (5 findings)
Dev->>API: PATCH /findings/FP-1/state {status:false_positive}
API->>DB: INSERT finding_states (FP-1, false_positive)
Dev->>API: DELETE /api/v1/scans/scan_1
Note over DB: finding_states (FP-1) survives
Dev->>API: POST /api/v1/scans (target=./repo)
API-->>Dev: scan_2 (same 5 findings)
Dev->>API: GET /findings (filtered)
API->>DB: JOIN findings(scan_2) ON finding_states by FP
API-->>Dev: scan_2 findings — FP-1 hidden by default filter
A fix workflow:
sequenceDiagram
autonumber
participant Dev as Developer
participant API as SecureScan
Dev->>API: PATCH /findings/FP-2/state {status:fixed,note:"fixed in PR 42"}
Note over Dev: ... time passes, rescan ...
Dev->>API: POST /api/v1/scans
API-->>Dev: scan_3 — FP-2 not present (truly fixed)
Note over Dev: regression check passes
Note over Dev: ... time passes, refactor ...
Dev->>API: POST /api/v1/scans
API-->>Dev: scan_4 — FP-2 present again
Note over Dev: dashboard shows FP-2 with strikethrough fixed pill — regression signal
CLI access
There is no first-class CLI for triage in v0.9.0; use curl or the
dashboard. A securescan triage subcommand is on the roadmap.
Triage vs. suppression: when to use which
| Question | Mechanism |
|---|---|
| "Stop printing this finding in PR comments forever" | Suppression (ignored_rules) |
| "I've reviewed this; record my decision" | Triage (PATCH /findings/{fp}/state) |
| "I want this finding gone but I want to know if it comes back" | Triage fixed (NOT default-hidden — regression loud) |
| "Adopt SecureScan on an old codebase" | Suppression (securescan baseline) |
| "Track who decided what, when" | Triage (updated_by + comments) |
In short: suppression silences output, triage records decisions. You can use both — a finding can be triaged AND suppressed.
Next
- Findings & severity — the fingerprint that keys triage state.
- Suppression — for output filtering, not decisions.
- API endpoints — full request/response shape.
Compliance
SecureScan ships a compliance-mapping engine that tags each finding
with framework references — OWASP Top 10, CIS, PCI-DSS, SOC 2 — by
matching the finding's CWE, rule_id, or scanner output keywords
against per-framework data files.
This is a coverage indicator, not a certification. SecureScan does not certify your org against any framework; it surfaces which findings would map to which controls so you can demonstrate coverage and prioritize remediation.
Mapped frameworks
| Framework | Where rules come from |
|---|---|
| OWASP Top 10 | CWE → A01–A10 mapping. Tags look like OWASP-A03, OWASP-A07. |
| CIS Controls | Control category mapping (CIS-3, CIS-5, …) by rule_id keyword. |
| PCI-DSS | Specific requirement IDs: PCI-DSS-6.5.1 (injection), PCI-DSS-2.2 (config), etc. |
| SOC 2 | Trust service criteria: SOC2-CC6.1 (logical access), SOC2-CC7.1 (change management), etc. |
The mapper lives in
backend/securescan/compliance.py
and uses simple data files under
backend/securescan/data/.
Adding a framework means dropping a JSON file with the rule mappings
— no code change.
How tags are computed
For each finding the mapper checks:
- CWE (cwe field, e.g. CWE-89). Direct lookup against the framework's CWE → control table.
- rule_id. Keyword match against per-framework lists (B106 → SOC2-CC6.1; eval-detected → OWASP-A03; etc.).
- Scanner output keywords. Falls back to substring match against title + description for rules without a CWE or specific mapping.
Results are deduplicated and sorted alphabetically. A finding can carry multiple tags from multiple frameworks:
{
"rule_id": "python.lang.security.audit.eval-detected",
"cwe": "CWE-95",
"compliance_tags": [
"OWASP-A03",
"PCI-DSS-6.5.1",
"SOC2-CC7.1"
]
}
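The three-step matching above can be sketched as follows. The mapping tables here are made-up fragments for illustration; the real ones are the JSON data files under backend/securescan/data/:

```python
# Illustrative fragments — NOT SecureScan's shipped mappings.
CWE_TABLE = {"CWE-95": ["OWASP-A03", "PCI-DSS-6.5.1"]}
RULE_KEYWORDS = {"eval-detected": ["SOC2-CC7.1"]}
TEXT_KEYWORDS = {"hardcoded password": ["SOC2-CC6.1"]}

def compliance_tags(finding: dict) -> list:
    """CWE lookup, then rule_id keywords, then title/description fallback."""
    tags = set(CWE_TABLE.get(finding.get("cwe"), []))
    for kw, t in RULE_KEYWORDS.items():
        if kw in finding.get("rule_id", ""):
            tags.update(t)
    text = (finding.get("title", "") + " " + finding.get("description", "")).lower()
    for kw, t in TEXT_KEYWORDS.items():
        if kw in text:
            tags.update(t)
    return sorted(tags)  # deduplicated, alphabetical — as documented
```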
In the dashboard
Per-finding chips
Each finding row in the table renders a Compliance column with chip icons for each tag. Hover a chip to see the framework + control name.
Coverage cards
The Overview page renders one tokenized coverage card per framework:
┌─ OWASP Top 10 ─────────────────────────┐
│ 6 / 10 controls observed │
│ ●●●●●●○○○○ │
│ Last seen: A03, A07 (this scan) │
└────────────────────────────────────────┘
The cards link to a per-framework drill-down with the matching findings grouped by control.
API
List findings filtered by compliance tag
curl "http://127.0.0.1:8000/api/v1/scans/$SCAN_ID/findings?compliance=OWASP-A03" \
-H "X-API-Key: $K"
Returns only findings whose compliance_tags includes
OWASP-A03. Multiple filters can be combined with the severity /
scan_type query params (AND semantics).
Coverage summary
curl "http://127.0.0.1:8000/api/v1/compliance/coverage?scan_id=$SCAN_ID" \
-H "X-API-Key: $K"
{
"frameworks": {
"OWASP": {
"controls": ["A01", "A02", "A03", "A04", "A05", "A06", "A07", "A08", "A09", "A10"],
"observed": ["A03", "A07"],
"coverage_pct": 20.0
},
"PCI-DSS": {
"controls": ["1.2", "2.2", "6.5.1", ...],
"observed": ["6.5.1"],
"coverage_pct": 12.5
}
}
}
Use in PR review
Compliance tags are included in the SARIF output's
properties.tags per result:
{
"ruleId": "python.lang.security.audit.eval-detected",
"level": "warning",
"properties": {
"tags": ["OWASP-A03", "PCI-DSS-6.5.1", "SOC2-CC7.1"],
"suppressed_by": null
}
}
When uploaded to GitHub's Security tab, those become searchable tags on the alert.
In the PR comment (github-pr-comment output), tags are rendered
inline next to each finding so reviewers see compliance impact at a
glance:
[HIGH] Use of eval() · semgrep · OWASP-A03 PCI-DSS-6.5.1 SOC2-CC7.1
backend/api.py:42
What this is not
The compliance engine is a coverage indicator. It tags findings that touch a control. It does not:
- Audit your org against the framework.
- Generate a control matrix on its own.
- Prove the absence of issues for unobserved controls.
- Replace the work of a qualified auditor.
What it gives you is a useful prioritization signal — a critical finding that touches OWASP-A03 + PCI-DSS-6.5.1 + SOC2-CC7.1 should be above one that touches none.
Customizing mappings
To add or override a mapping, edit the per-framework JSON file under
backend/securescan/data/. Each entry is a (rule_id | cwe | keyword)
→ [control_id, ...] mapping. Restart the backend after edits.
A custom-framework PR is welcome; please include rationale and at least one known-good test case.
Next
- Findings & severity — finding shape with compliance_tags.
- API endpoints — coverage / per-tag filtering endpoints.
Dashboard tour
The dashboard is the secondary surface — the CLI is the source of
truth. You go to the dashboard when you want to browse historical
scans, interactively triage backlog, or show somebody else
what's going on. It is a Next.js app under
frontend/;
all pages talk only to the backend's REST API.
This tour walks the v0.6.0 redesign. The screenshots-in-words below match what you see on a fresh local install.
App shell
Persistent layout, applied to every page (defined in
frontend/src/app/layout.tsx):
- Sidebar (220px, left). Page nav grouped into Scans, Reports, Settings. Collapses to icon strip below 1024px.
- Topbar (56px, sticky). Breadcrumb-style page label (left), command palette trigger (⌘K, center), and on the right: notifications bell, API health indicator, theme toggle.
- Main content. Page-specific.
The shell is intentionally calm — refined neutrals, single accent color (moss green), no neon. See DESIGN.md for the rules.
Overview (/)
The home page. Layout:
- PageHeader — title "Overview", scan count + scanner count metadata, "New scan" primary action.
- StatLine — running totals: total scans, total findings, critical / high counts, last scan timestamp.
- Latest scan two-column section — the most recent scan's status, target, scanner-chip strip, and severity counts.
- Recent scans compact table — last ~10 rows.
- Compliance coverage cards — one per framework. See Compliance.

Source: frontend/src/app/page.tsx.
New scan (/scan)
Two-column wizard with a sticky preview panel showing what is about to run.
LEFT (form) RIGHT (sticky preview)
───────────────────────────────── ──────────────────────────
Target path [browse...] Will run:
☑ semgrep
Scan types ☑ bandit
☑ Code (4 scanners) ☑ secrets
☐ Dependency (3 scanners) ☑ git-hygiene
☐ IaC (2 scanners)
☐ Baseline (1 scanner) Skipped (1)
☐ DAST (URL required) checkov: pip install checkov
☐ Network (host required)
Quick presets Severity threshold
[code-only] [dep-only] [full] Fail at: high
Recently scanned
/home/me/proj-a
/home/me/proj-b [ Start scan ]
The page reads GET /api/v1/dashboard/status on mount and disables
categories whose scanners are all unavailable, with inline install
hints. Default selection adapts to availability — a host with no
DAST tools won't pre-tick the DAST checkbox.
Source: frontend/src/app/scan/page.tsx.
Scan detail (/scan/[id])
The page everybody spends the most time on.
PageHeader
/home/me/proj-a · scan_id 0f1a93cb · [...] [Cancel] [Re-run] [Delete]
StatLine
Risk score 34.2 · ●3 critical · ●5 high · ●2 medium · 12 scanners · 1m 22s
ScanProgressPanel (only while running/pending — v0.7.0 SSE)
●semgrep ✓ complete 124ms 7 findings
●bandit ✓ complete 62ms 3 findings
●trivy running...
●safety queued
Scanner chip strip
Ran: semgrep · bandit · trivy · safety · secrets Skipped (1): zap
Sticky filter bar
Severity: [● critical 3] [● high 5] [● medium 2] [● low 0] [● info 0]
Status: [new] [triaged] [false_positive] [accepted_risk] [fixed] [wont_fix]
[search...] [▼ Show suppressed]
Findings table (compact, sortable, severity-tinted left edge)
┃ ● critical Use of eval() backend/api.py:42 semgrep ⌃
┃ ● critical SQL injection backend/db.py:12 bandit ⌃
┃ ● high Missing X-Frame-Options (https://...) dast ⌃
...

Expand a row to reveal:
- Matched line (mono, 5-line context).
- AI explanation + remediation hint (when --ai was on; off by default in CI).
- Triage panel — status dropdown + note textarea.
- Comments panel — thread, lazy-loaded.

The live progress panel above the findings table only shows while a
scan is pending or running — it animates each scanner from
queued → running → complete over Server-Sent Events.

See Real-time scan progress for the SSE flow and Triage workflow for the verdict mechanics.
Source: frontend/src/app/scan/[id]/page.tsx.
History (/history)
A real data table, not a card grid (the v0.6.0 redesign:
frontend/src/app/history/page.tsx).
- Sortable columns: target, started, duration, status, finding count.
- Status icons inline (●completed / ●running / ●cancelled / ●failed).
- Mono target paths.
- Scanner chip strip per row with overflow indicator (+3 more).
- Kebab action menu per row: re-run, delete, copy link.
- URL-persisted sort + page-size — the URL is the truth, so sharing a filtered view is a copy-paste.

Scanners (/scanners)
Categorized scanner directory.
Code analysis ●semgrep ●bandit ●secrets ●git-hygiene
Dependencies ●trivy ●safety ●npm-audit ●licenses
Containers / IaC ●checkov ●dockerfile
Network ●nmap
Web (DAST) ●builtin_dast ●zap
- Sticky status legend + search at the top.
- Each card: name, category, version (if installed), install hint or install button (for scanners that can be pip installed).
- "Install all available" bulk action.

Diff (/diff)
PR-style scan-vs-scan comparison (FEAT1 from v0.6.0).
Base [scan picker ▾] ↔ Head [scan picker ▾]
Summary chips
▲ 3 new ▼ 2 resolved = 14 unchanged Risk Δ +12.4
Tabs: [ New (3) ] [ Resolved (2) ] [ Unchanged (14) ]
(table per tab, same columns as scan-detail)
See Diff & compare.
SBOM (/sbom)
Software Bill of Materials viewer (CycloneDX or SPDX).
- Segmented format toggle: CycloneDX / SPDX.
- Scan picker card.
- Component table with ecosystem stats (npm vs PyPI vs Crates …).
See SBOM.
Compare (/compare)
Same shape as /diff but framed for "current scan vs saved baseline"
rather than "scan A vs scan B".
Notifications (/notifications)
Full feed of in-app notifications. See Notifications.
Settings
- /settings/keys — list, create, revoke API keys. See API keys.
- /settings/webhooks — list, create, edit, test webhooks. See Webhooks.

Topbar widgets
Notifications bell
Live unread badge, polled every 30s. Click → 360px popover with the 10 most recent (severity dot + title + relative timestamp). See Notifications.
API health indicator
Pings GET /ready every ~10s. Color codes:
- ● green — ready (200).
- ● amber — degraded (200 with one of the checks failing — rare).
- ● red — unreachable / 503.
Hover for the underlying check breakdown.
Theme toggle
next-themes integration. Dark default; persists to localStorage and
to a cookie so SSR doesn't flash.
Command palette (⌘K)
Mounted at app root. Searches: pages, recent scans, scanners. Keyboard driven. The primary nav affordance for power users.
Auth & the dashboard
The frontend client (frontend/src/lib/api.ts) injects
X-API-Key: <NEXT_PUBLIC_SECURESCAN_API_KEY> on every request when
the env var is set at build time. For DB-backed keys (v0.8.0+), the
flow is the same — set the key value as NEXT_PUBLIC_SECURESCAN_API_KEY.
For SSE streams (/scans/{id}/events), the dashboard exchanges that
key for a short-lived event token first — see
SSE event tokens.
NEXT_PUBLIC_* env vars are baked into the build and shipped to the
browser. Do not put a high-trust admin key there; use a read-scope
key. For dashboards exposed beyond your laptop, terminate the dashboard
behind your own auth (SSO, mTLS) and treat its key as a service
identity, not a user identity.
Next
- Real-time scan progress — the SSE flow under the hood.
- Notifications — the bell icon's data path.
- Webhooks — outbound delivery of the same events.
- API keys — /settings/keys in detail.
Real-time scan progress
While a scan is running (or pending), the dashboard's scan-detail
page does not poll. It opens a Server-Sent Events stream and renders
each scanner's lifecycle as it happens — queued → running → complete / failed / skipped.
Introduced in v0.7.0; the SSE-with-auth path was finalized in v0.9.0 (SSE event tokens).
Endpoint
GET /api/v1/scans/{scan_id}/events
Accept: text/event-stream
# Authenticated deployments:
GET /api/v1/scans/{scan_id}/events?event_token=<token>
Browsers cannot send X-API-Key (or any custom header) on an
EventSource. For authenticated deployments, the FE first calls
POST /api/v1/scans/{id}/event-token to mint a short-lived token,
then opens EventSource with ?event_token=.... See
SSE event tokens.
Events emitted
| Event | When | Payload (selected fields) |
|---|---|---|
| scan.start | Orchestrator begins. | {scan_types} |
| scanner.start | A specific scanner is about to run. | {name} |
| scanner.complete | A scanner returned successfully. | {name, duration_s, findings_count} |
| scanner.skipped | A scanner is unavailable. | {name, reason, install_hint} |
| scanner.failed | A scanner crashed. | {name, error} (error truncated to 200 chars) |
| scan.complete | All scanners returned (or were skipped); status is completed. | {findings_count, risk_score, scanners_run, scanners_skipped} |
| scan.failed | Orchestrator-level failure. | {error} |
| scan.cancelled | POST /scans/{id}/cancel succeeded. | (no fields) |
scan.complete, scan.failed, and scan.cancelled are terminal
events — the server closes the stream after emitting one. Source:
TERMINAL constant in
backend/securescan/events.py.
On the wire
curl -N "http://127.0.0.1:8000/api/v1/scans/$SCAN_ID/events"
: keepalive
event: scan.start
data: {"scan_types":["code","dependency"]}
event: scanner.start
data: {"name":"semgrep"}
event: scanner.complete
data: {"name":"semgrep","duration_s":4.31,"findings_count":7}
event: scanner.start
data: {"name":"bandit"}
event: scanner.complete
data: {"name":"bandit","duration_s":1.04,"findings_count":2}
event: scan.complete
data: {"findings_count":9,"risk_score":24.0,"scanners_run":["semgrep","bandit"],"scanners_skipped":[]}
15-second keepalive comments (: keepalive\n\n) keep idle proxies
from killing the connection.
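The framing above is simple enough to parse by hand. A minimal sketch in Python (assuming one data: line per event, which matches the stream SecureScan emits; comment lines like `: keepalive` are ignored per the SSE format):

```python
import json

def parse_sse(stream_text: str):
    """Parse a text/event-stream transcript into (event, data) pairs.

    Comment lines (": keepalive") are skipped. Assumes one "data:"
    line per event, as in the SecureScan stream shown above.
    """
    events = []
    event_name = None
    for line in stream_text.splitlines():
        if line.startswith(":"):            # keepalive comment — ignore
            continue
        if line.startswith("event:"):
            event_name = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data = json.loads(line[len("data:"):].strip())
            events.append((event_name or "message", data))
        elif line == "":
            event_name = None               # blank line ends a frame
    return events
```

Feeding it the transcript above yields `("scan.start", {...})`, `("scanner.start", {"name": "semgrep"})`, and so on, in order.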
Replay buffer
A late subscriber — e.g. a tab refresh mid-scan — gets a 200-event replay buffer so the dashboard can reconstruct the full state even though it missed the early events. The buffer is retained for 30 seconds after a terminal event, then dropped.
Terminal events (scan.complete / scan.failed / scan.cancelled)
are never dropped on subscriber backpressure. If the buffer is
full, the oldest non-terminal event is evicted.
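The eviction rule can be sketched as a bounded list that never evicts terminal events — a simplification of the real buffer in backend/securescan/events.py:

```python
TERMINAL = {"scan.complete", "scan.failed", "scan.cancelled"}

class ReplayBuffer:
    """Bounded per-scan event buffer; terminal events are never evicted.

    Illustrative sketch of the behavior described above, not the
    shipped implementation.
    """
    def __init__(self, maxlen: int = 200):
        self.maxlen = maxlen
        self.events: list[tuple[str, dict]] = []

    def append(self, name: str, data: dict) -> None:
        if len(self.events) >= self.maxlen:
            # Evict the oldest NON-terminal event to make room.
            for i, (n, _) in enumerate(self.events):
                if n not in TERMINAL:
                    del self.events[i]
                    break
        self.events.append((name, data))

    def replay(self) -> list[tuple[str, dict]]:
        return list(self.events)
```

A late subscriber calls `replay()` on connect, then consumes live events from the same position.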
Frontend wiring
sequenceDiagram
autonumber
participant FE as Dashboard
participant ES as EventSource
participant API as FastAPI
participant Bus as Event bus
FE->>API: POST /api/v1/scans/{id}/event-token
API-->>FE: { token, expires_in }
FE->>ES: new EventSource("/events?event_token=…")
ES->>API: GET /events?event_token=…
API->>Bus: subscribe(scan_id)
loop each lifecycle event
Bus-->>API: event
API-->>ES: data: {…}
ES-->>FE: onmessage
FE->>FE: update <ScanProgressPanel>
end
API-->>ES: terminal event
API->>API: close stream
ES-->>FE: onerror (close)
FE->>FE: GET /scans/{id}/findings (one-shot final reload)
The component is <ScanProgressPanel> in
frontend/src/app/scan/[id]/.
While status is running or pending it renders one row per
scanner with state dots (queued | running | complete | failed | skipped) and updates findings counts as they come in. On
scan.complete it triggers a one-shot
GET /api/v1/scans/{id}/findings for the final state.
The SSE stream carries lifecycle events (counts, durations, status
flips), not the findings themselves. Re-fetching findings on
scan.complete keeps the wire small and the rendering predictable.
Token rotation timeline
For authenticated deployments, event tokens are valid for 5 minutes. The frontend rotates at half-life (~2.5 minutes) and tries one re-mint on error before falling back to status polling.
sequenceDiagram
autonumber
participant FE as Dashboard
participant API as FastAPI
Note over FE: Long scan (8 min)
FE->>API: POST /event-token (T=0:00)
API-->>FE: token-1, ttl=300s
FE->>API: open EventSource (token-1)
Note over FE: T=2:30 — half-life rotation
FE->>API: POST /event-token (T=2:30)
API-->>FE: token-2, ttl=300s
FE->>API: close EventSource (token-1)
FE->>API: open EventSource (token-2)
Note over FE: T=5:00 — half-life rotation
FE->>API: POST /event-token (T=5:00)
API-->>FE: token-3
FE->>FE: close (token-2) → open (token-3)
Note over FE: T=7:30 — scan still running, rotate again
FE->>API: POST /event-token (T=7:30)
API-->>FE: token-4
FE->>FE: close (token-3) → open (token-4)
If a rotation fails (the API returned 401 because the underlying key
was just revoked), the FE tries one re-mint, then closes the SSE
and falls back to GET /api/v1/scans/{id} polling every 2 seconds —
matching the v0.6.1 status-only path.
See SSE event tokens for the token format and verification.
Fallback to polling
EventSource has no API surface for custom headers; it accepts basic
auth embedded in the URL, but that doesn't carry X-API-Key either.
So in v0.7.0, the dashboard's behavior was:
- Unauthenticated: open SSE.
- Authenticated: skip SSE, poll GET /scans/{id} every 2 seconds.
v0.9.0 closes that gap with event tokens. Polling is now reserved for the fallback path:
- EventSource errors out → close SSE → poll status.
- Token mint fails twice → close SSE → poll status.
Polling never fetches findings during a running scan; the v0.6.1 fix moved findings to a load-on-mount-and-on-status-flip pattern.
Deployment notes
The event bus is an in-process module-level singleton. A scan
that lands on uvicorn worker A and an SSE subscriber on worker B
will never see each other. Run --workers 1.
Multi-process pubsub (Redis backplane) is on the roadmap. To scale
horizontally today, run multiple separate single-worker instances
behind a sticky-session load balancer keyed on scan_id. See
Single-worker constraint.
Source code
- Endpoint: GET /api/v1/scans/{id}/events in backend/securescan/api/scans.py.
- Event bus: backend/securescan/events.py.
- Token mint endpoint: POST /api/v1/scans/{id}/event-token (same file).
- Token logic: backend/securescan/event_tokens.py.
- Frontend wiring: frontend/src/app/scan/[id]/.
Next
- SSE event tokens — auth on the SSE stream.
- Webhooks — out-of-process delivery of the same events.
- Notifications — durable record of completed scans.
Notifications
The dashboard's topbar bell icon (added in v0.9.0) shows an unread
badge and a popover of the most recent notifications. Clicking
through goes to /notifications, the full feed.
Notifications are a durable in-app record — the SSE stream (Real-time scan progress) shows scans as they happen, notifications stay around after the SSE has closed.
What gets a notification
Auto-created by the orchestrator on:
| Trigger event | Condition | Severity |
|---|---|---|
| scan.complete | only when findings_count > 0 | Critical / high / medium / low / info — derived from the highest-severity finding in the scan |
| scan.failed | always | high |
| scanner.failed | always | medium |
Successful zero-finding scans don't spam the bell. If your CI runs SecureScan on every push of a clean repo, you don't get 50 notifications a day saying "all clear." A failing scan still notifies regardless — silent failure is the bug.
The filter logic lives in
backend/securescan/api/scans.py
under _create_notification_for_event.
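The rules in the table reduce to a small function. A sketch, not the shipped code — `worst_severity` stands in for the highest-severity finding in the scan (computed elsewhere), and both the signature and the return shape are illustrative:

```python
def notification_for(event: str, data: dict, worst_severity: str = "info"):
    """Map a lifecycle event to a notification, per the table above.

    Hypothetical helper mirroring _create_notification_for_event's
    documented behavior; not the real implementation.
    """
    if event == "scan.complete":
        if data.get("findings_count", 0) == 0:
            return None  # clean scans stay silent — no bell spam
        return {"severity": worst_severity,
                "title": f"{data['findings_count']} findings"}
    if event == "scan.failed":
        return {"severity": "high", "title": "Scan failed"}
    if event == "scanner.failed":
        return {"severity": "medium",
                "title": f"Scanner failed: {data.get('name', '?')}"}
    return None  # all other lifecycle events: no notification
```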
Bell icon
┌──────────────────────────────────────┐
│ ●3 Notifications │ <- bell with unread count badge
└──────────────────────────────────────┘
↓ click
┌──────────────────────────────────────────────────────────┐
│ Notifications Mark all │
├──────────────────────────────────────────────────────────┤
│ ● critical 3 critical findings on /home/me/proj-a │
│ scan 0f1a93cb · 2 minutes ago │
├──────────────────────────────────────────────────────────┤
│ ● high Scan failed: nmap — connection timed out │
│ scan 0d2c... · 5 minutes ago │
├──────────────────────────────────────────────────────────┤
│ ● medium Scanner skipped: zap — install /usr/share/... │
│ scan 0d2c... · 5 minutes ago │
├──────────────────────────────────────────────────────────┤
│ See all notifications → │
└──────────────────────────────────────────────────────────┘
- Polled every 30s via GET /api/v1/notifications/unread-count.
- 360px popover, 10 most recent.
- Severity dot prefix per row.
- Click a row → marks read, navigates to the scan detail.
Full feed (/notifications)
A page with the same rows, no truncation, plus filter chips:
- All — every notification.
- Unread — read_at IS NULL.
- Read — read_at IS NOT NULL.
Sorted newest-first. Shows the same severity / title / scan id / timestamp. Bulk action: "Mark all as read".
API
| Method | Path | Scope | Notes |
|---|---|---|---|
| GET | /api/v1/notifications | read | Newest first. Query: unread_only=true, limit=50 (silently capped at 200). |
| GET | /api/v1/notifications/unread-count | read | Returns {"count": N}. Index-only query, cheap to poll. |
| PATCH | /api/v1/notifications/{id}/read | write | Returns the updated row. 404 if the id is unknown. |
| PATCH | /api/v1/notifications/read-all | write | Returns {"marked_read": N}. Idempotent: a second call returns {"marked_read": 0}. |
Source:
backend/securescan/api/notifications.py.
Notification shape
{
"id": "n-9d2f3a1b",
"severity": "critical",
"title": "3 critical findings on /home/me/proj-a",
"body": "Scanners run: semgrep, bandit, trivy",
"scan_id": "0f1a93cb-44c2-4c8e-9f92-0a7c5a2e1b51",
"created_at": "2026-04-29T20:11:09.123456",
"read_at": null
}
Examples
Poll for unread count
$ curl -s -H "X-API-Key: $K" \
http://127.0.0.1:8000/api/v1/notifications/unread-count
{"count":3}
List the latest 10 unread
$ curl -s -H "X-API-Key: $K" \
"http://127.0.0.1:8000/api/v1/notifications?unread_only=true&limit=10" \
| jq '.[].title'
"3 critical findings on /home/me/proj-a"
"Scan failed: nmap — connection timed out"
"Scanner skipped: zap — install /usr/share/zaproxy/zap.sh"
Mark one read
$ curl -s -X PATCH \
-H "X-API-Key: $K" \
"http://127.0.0.1:8000/api/v1/notifications/n-9d2f3a1b/read" | jq .
{
"id": "n-9d2f3a1b",
"severity": "critical",
"title": "3 critical findings on /home/me/proj-a",
...,
"read_at": "2026-04-29T20:14:00.000000"
}
Mark all read
$ curl -s -X PATCH -H "X-API-Key: $K" \
http://127.0.0.1:8000/api/v1/notifications/read-all
{"marked_read":3}
A second call returns {"marked_read": 0} — idempotent.
Retention
Read notifications older than 30 days are pruned at backend startup. Unread notifications are kept indefinitely. The pruning is defensive — a long-running deployment that never restarts would accumulate forever; the on-startup sweep is enough for the typical operator-managed lifecycle.
If you need different retention, run periodic restarts or open an issue to discuss a configurable knob.
Multi-tenant note
v0.9.0 is single-tenant: every authenticated browser session sees
the same notifications. There is no per-user scoping. The schema and
endpoints are shaped so a user_id query param can be added later
without breaking existing callers, but the v0.9.0 contract is "one
queue per deployment."
If you have an internal-tooling stack with a single SecureScan deployment serving a small team, single-tenant is the right shape. For SaaS-style multi-tenancy, look at the v1.0 roadmap.
How notifications relate to webhooks
| Trigger | In-app bell | Webhook |
|---|---|---|
| scan.complete | ✓ (when findings_count > 0) | ✓ |
| scan.failed | ✓ | ✓ |
| scanner.failed | ✓ | ✓ |
| scan.start / scanner.start / scanner.complete / scanner.skipped / scan.cancelled | — | — |
Both surfaces consume the same SSE event stream. Notifications are "the operator's inbox"; webhooks are "external integrations." See Webhooks.
Next
- Webhooks — external delivery of the same events.
- Real-time scan progress — the live SSE stream.
- Dashboard tour — where the bell lives.
Webhooks
Outbound HMAC-signed deliveries to your incident server, Slack, Discord, or any HTTP receiver. Introduced in v0.9.0; durability is the headline feature — every delivery is persisted to SQLite before the HTTP call, so a backend restart resumes pending retries.
Quick start
Create a subscription:
curl -X POST http://127.0.0.1:8000/api/v1/webhooks \
-H "X-API-Key: $ADMIN_KEY" \
-H 'Content-Type: application/json' \
-d '{
"name": "ops-pager",
"url": "https://hooks.example.com/securescan",
"event_filter": ["scan.complete", "scan.failed", "scanner.failed"]
}'
Response (the secret is returned once):
{
"id": "ddc22f0a-3a8f-4f1c-86e9-2b4b4ab0a8e0",
"name": "ops-pager",
"url": "https://hooks.example.com/securescan",
"event_filter": ["scan.complete","scan.failed","scanner.failed"],
"enabled": true,
"created_at": "2026-04-29T20:00:00",
"secret": "Hk8wQpz8QH4...32-chars..."
}
Save the secret immediately. Subsequent reads strip it; rotating
means delete + recreate. Use it to verify HMAC signatures on the
receiver side (see below).
Fire a synthetic test event right away:
curl -X POST http://127.0.0.1:8000/api/v1/webhooks/$WID/test \
-H "X-API-Key: $ADMIN_KEY"
The synthetic event flows through the exact same dispatcher path as a real one — same retry, same signature contract — so a green test proves the receiver wiring end-to-end.
What gets delivered
Per-delivery, the receiver gets:
POST <your-url>
Content-Type: application/json
User-Agent: SecureScan-Webhook/0.9
X-SecureScan-Event: scan.complete
X-SecureScan-Webhook-Id: ddc22f0a-3a8f-4f1c-86e9-2b4b4ab0a8e0
X-SecureScan-Signature: t=1730230285,v1=8d3f0a7c...
{"event":"scan.complete","data":{"scan_id":"...","findings_count":3,...},"delivered_at":"..."}
For hooks.slack.com and discord.com/api/webhooks URLs, the body is
reshaped to the receiver's expected format (Slack blocks, Discord
embed). Generic JSON otherwise. See Slack/Discord shape
below.
The full payload schema for every event lives in Webhook payloads.
Signature verification
The signature is over the literal request body bytes, prefixed with the timestamp:
v1 = HEX( HMAC_SHA256( secret, f"{t}." + raw_body ) )
The dispatcher serializes the body with
json.dumps(payload, separators=(",", ":")) — whitespace-free, key
order preserved. Sign the bytes you receive on the wire, not a
re-parsed/re-serialized JSON object. If you parse, mutate, then
re-serialize before signing on the receiver side, the signatures
will not match.
Reject requests where t is more than 5 minutes old to defeat
replays.
Python
import hmac, hashlib
def verify(secret: str, header: str, raw_body: bytes) -> bool:
"""
header is request.headers['X-SecureScan-Signature'],
e.g. "t=1730230285,v1=8d3f0a7c...".
"""
parts = dict(p.split("=", 1) for p in header.split(","))
expected = hmac.new(
secret.encode(),
f"{parts['t']}.".encode() + raw_body,
hashlib.sha256,
).hexdigest()
return hmac.compare_digest(expected, parts["v1"])
A FastAPI receiver:
from fastapi import FastAPI, Header, HTTPException, Request
import time
app = FastAPI()
SECRET = "Hk8wQpz8QH4...32-chars..."
@app.post("/securescan")
async def receive(
request: Request,
x_securescan_signature: str = Header(...),
x_securescan_event: str = Header(...),
):
raw_body = await request.body()
parts = dict(p.split("=", 1) for p in x_securescan_signature.split(","))
ts = int(parts["t"])
if abs(time.time() - ts) > 300:
raise HTTPException(401, "stale signature")
if not verify(SECRET, x_securescan_signature, raw_body):
raise HTTPException(401, "bad signature")
payload = (await request.json()) # safe AFTER signature verify
print(f"event={x_securescan_event} data={payload['data']}")
return {"ok": True}
Node.js
import crypto from "node:crypto";
export function verify(secret, header, rawBody) {
const parts = Object.fromEntries(
header.split(",").map((p) => {
const [k, v] = p.split("=");
return [k, v];
}),
);
const expected = crypto
.createHmac("sha256", secret)
.update(`${parts.t}.`)
.update(rawBody) // rawBody MUST be a Buffer
.digest("hex");
return crypto.timingSafeEqual(
Buffer.from(expected, "hex"),
Buffer.from(parts.v1, "hex"),
);
}
Express receiver (note express.raw to keep the body bytes intact):
import express from "express";
const app = express();
const SECRET = process.env.SECURESCAN_SECRET;
app.post(
"/securescan",
express.raw({ type: "application/json" }),
(req, res) => {
const sig = req.header("X-SecureScan-Signature");
if (!verify(SECRET, sig, req.body)) return res.sendStatus(401);
const payload = JSON.parse(req.body.toString("utf8"));
console.log(req.header("X-SecureScan-Event"), payload.data);
res.json({ ok: true });
},
);
Go
package main
import (
"crypto/hmac"
"crypto/sha256"
"encoding/hex"
"io"
"net/http"
"strings"
"time"
)
const secret = "Hk8wQpz8QH4...32-chars..."
func verify(header string, body []byte) bool {
parts := map[string]string{}
for _, p := range strings.Split(header, ",") {
kv := strings.SplitN(p, "=", 2)
if len(kv) == 2 {
parts[kv[0]] = kv[1]
}
}
mac := hmac.New(sha256.New, []byte(secret))
mac.Write([]byte(parts["t"] + "."))
mac.Write(body)
expected := hex.EncodeToString(mac.Sum(nil))
return hmac.Equal([]byte(expected), []byte(parts["v1"]))
}
func handler(w http.ResponseWriter, r *http.Request) {
body, _ := io.ReadAll(r.Body)
if !verify(r.Header.Get("X-SecureScan-Signature"), body) {
http.Error(w, "bad signature", http.StatusUnauthorized)
return
}
// Optional: reject stale timestamps to defeat replays.
_ = time.Now
w.WriteHeader(http.StatusOK)
}
func main() {
http.HandleFunc("/securescan", handler)
http.ListenAndServe(":8080", nil)
}
Retry & state machine
Every delivery is persisted as a webhook_deliveries row before
the HTTP call. On any non-2xx (or transport error), the row's
next_attempt_at is set with full-jitter exponential backoff capped
at 5 minutes; max delivery age is 30 minutes — past that, the row is
marked failed.
stateDiagram-v2
[*] --> pending : insert before HTTP
pending --> delivering : worker claims (atomic UPDATE)
delivering --> succeeded : 2xx response
delivering --> pending : non-2xx / timeout, age <= 30m, schedule retry
delivering --> failed : non-2xx / timeout, age > 30m
succeeded --> [*]
failed --> [*]
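The retry schedule can be sketched as standard full-jitter exponential backoff. Only the 5-minute cap comes from the text above; the base delay and attempt numbering here are illustrative assumptions:

```python
import random

def next_retry_delay(attempt: int, base: float = 1.0, cap: float = 300.0) -> float:
    """Full-jitter backoff: uniform draw from [0, min(cap, base * 2^attempt)].

    cap=300 s matches the documented 5-minute ceiling; base and attempt
    numbering are assumptions, not the dispatcher's exact constants.
    """
    return random.uniform(0.0, min(cap, base * (2 ** attempt)))
```

The 30-minute max delivery age is enforced separately: a row older than that is marked failed instead of rescheduled.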
Receivers MUST be idempotent. The retry policy is at-least-once.
Use the (t, v1) pair plus the event-specific data (e.g.
scan_id) to dedupe. A duplicate scan-complete is a no-op for a
correct receiver.
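A minimal dedupe sketch for a receiver, keyed as suggested above (in-memory seen-set for illustration only — a real receiver should persist it):

```python
seen: set[tuple[str, str]] = set()

def handle_once(data: dict, signature_v1: str) -> bool:
    """Process a delivery at most once, keyed on (v1 signature, scan_id).

    At-least-once delivery means duplicates will arrive; a correct
    receiver treats a repeat as a no-op.
    """
    key = (signature_v1, data.get("scan_id", ""))
    if key in seen:
        return False  # duplicate delivery: skip
    seen.add(key)
    # ... real processing goes here ...
    return True
```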
The dispatcher retries 4xx responses too — a misconfigured receiver that returns 401 for a few seconds while it loads its keys should not lose deliveries within the 30-minute window.
FIFO ordering per webhook
Deliveries for the same webhook id are processed in created_at
order. Two layers ensure this:
- In-process guard. A per-webhook set (_inflight_per_webhook) skips any pending row whose webhook is already inflight; the next poll tick picks it up once the prior delivery finishes.
- Atomic claim. mark_delivery_delivering runs an UPDATE ... WHERE status='pending' with a rowcount check; a second worker that races for the same row bails.
Different webhooks dispatch concurrently. Ordering is per-webhook, not global.
Crash recovery
On startup the dispatcher resets stale delivering rows back to
pending (left behind by a crash mid-delivery). Combined with the
"persist before HTTP" rule, a backend restart resumes any in-flight
retries cleanly.
Source:
backend/securescan/webhook_dispatcher.py.
Slack / Discord detection
The URL is matched against two prefixes:
https://hooks.slack.com/services/... → Slack-shape body
https://discord.com/api/webhooks/... → Discord-shape body
(anything else) → generic SecureScan envelope
The reshaper lives in
backend/securescan/webhook_formatters.py.
Slack and Discord webhook URLs are unauthenticated — anyone with the URL can post. The HMAC headers are still sent (so you can route through a proxy and verify there), but the receiver itself does not. Treat the Slack/Discord URL itself as the secret. Don't share it.
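The prefix routing amounts to a three-way branch — a sketch; the real reshaper lives in backend/securescan/webhook_formatters.py:

```python
def body_shape(url: str) -> str:
    """Choose the outbound body shape from the destination URL prefix."""
    if url.startswith("https://hooks.slack.com/services/"):
        return "slack"      # Slack blocks
    if url.startswith("https://discord.com/api/webhooks/"):
        return "discord"    # Discord embed
    return "generic"        # SecureScan envelope
```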
See Webhook payloads → Slack/Discord shape for the exact body each provider receives.
CRUD endpoints
All require the admin scope.
| Method | Path | Description |
|---|---|---|
| POST | /api/v1/webhooks | Create. Returns secret once. |
| GET | /api/v1/webhooks | List. No secrets. |
| GET | /api/v1/webhooks/{id} | Fetch one. No secret. |
| PATCH | /api/v1/webhooks/{id} | Edit name / url / event_filter / enabled. Cannot rotate secret. |
| DELETE | /api/v1/webhooks/{id} | Cascades all deliveries. |
| GET | /api/v1/webhooks/{id}/deliveries | Last 100 delivery rows, newest first. Status, attempt, response code. |
| POST | /api/v1/webhooks/{id}/test | Enqueue a webhook.test event. Returns 202 + delivery_id. |
Source:
backend/securescan/api/webhooks.py.
Webhooks can leak finding data, and an attacker with write access
could redirect events to a sink they control. The read scope does
NOT see webhooks; only admin does. The /settings/webhooks
dashboard page is the only intended consumer.
In the dashboard
/settings/webhooks:
PageHeader: Webhooks · [+ New webhook]
Table
● enabled ops-pager hooks.example.com/securescan 3 events Last: 2m ago [Test] [⋮]
● enabled team-slack hooks.slack.com/services/... 1 event Last: 1h ago [Test] [⋮]
○ disabled audit-archive https://archive... 4 events Last: never [Test] [⋮]
Click a row to expand:
Delivery log (auto-refreshes every 5s)
● succeeded scan.complete delivery_id 204 attempt 1 T-2m
● succeeded webhook.test delivery_id 204 attempt 1 T-3m
● failed scan.complete delivery_id 500 attempt 6 T-1h
Source code
- API: backend/securescan/api/webhooks.py
- Dispatcher: backend/securescan/webhook_dispatcher.py
- Slack/Discord shaper: backend/securescan/webhook_formatters.py
Next
- Webhook payloads — every event's full payload schema.
- Notifications — the in-app counterpart.
- Real-time scan progress — same events on the SSE stream.
SBOM
SecureScan generates a Software Bill of Materials in two formats:
- CycloneDX 1.5 — JSON; the default for tooling integrations.
- SPDX 2.3 — JSON; the format some compliance auditors require.
The generator runs against the same scanned target as the dependency
scanners, so the SBOM and the dependency findings reference the
same component set.
API
curl -H "X-API-Key: $K" \
"http://127.0.0.1:8000/api/v1/scans/$SCAN_ID/sbom?format=cyclonedx" \
> sbom.cyclonedx.json
curl -H "X-API-Key: $K" \
"http://127.0.0.1:8000/api/v1/scans/$SCAN_ID/sbom?format=spdx" \
> sbom.spdx.json
format defaults to cyclonedx. Both endpoints require the read
scope.
Source:
backend/securescan/api/sbom.py.
Component shape (CycloneDX excerpt)
{
"bomFormat": "CycloneDX",
"specVersion": "1.5",
"serialNumber": "urn:uuid:...",
"version": 1,
"components": [
{
"bom-ref": "pkg:pypi/requests@2.31.0",
"type": "library",
"name": "requests",
"version": "2.31.0",
"purl": "pkg:pypi/requests@2.31.0",
"licenses": [{"license": {"id": "Apache-2.0"}}]
},
{
"bom-ref": "pkg:npm/axios@1.6.7",
"type": "library",
"name": "axios",
"version": "1.6.7",
"purl": "pkg:npm/axios@1.6.7"
}
]
}
The component set is unioned across:
- Python: requirements*.txt, Pipfile.lock, poetry.lock, pyproject.toml.
- Node: package-lock.json, npm-shrinkwrap.json, yarn.lock.
- Go: go.sum.
- Rust: Cargo.lock.
- Ruby: Gemfile.lock.
- PHP: composer.lock.
Detection follows what the underlying scanners (Trivy, Safety, npm-audit) already parse — adding a manifest format is a follow-up that lands first in those scanners.
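The per-ecosystem stats shown on the /sbom page can be derived from each component's purl prefix (`pkg:<ecosystem>/name@version`). A sketch, assuming a CycloneDX-style component list:

```python
from collections import Counter

def ecosystem_counts(components: list[dict]) -> Counter:
    """Count components per ecosystem by parsing the purl prefix.

    Hypothetical helper; illustrates the idea, not the dashboard's code.
    """
    counts: Counter = Counter()
    for c in components:
        purl = c.get("purl", "")
        if purl.startswith("pkg:"):
            # "pkg:pypi/requests@2.31.0" -> "pypi"
            counts[purl[4:].split("/", 1)[0]] += 1
    return counts
```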
Dashboard view
/sbom:
PageHeader: SBOM · scan_id [picker ▾]
[ CycloneDX ] [ SPDX ] <- segmented format toggle
Ecosystem stats
●●● PyPI 21 components
●● npm 14 components
● Crates 3 components
Component table (paginated; sortable)
Name Version Ecosystem License bom-ref
requests 2.31.0 PyPI Apache-2.0 pkg:pypi/requests@2.31.0
axios 1.6.7 npm MIT pkg:npm/axios@1.6.7
...
The format toggle is a cheap re-render — the API call returns the chosen format, and the dashboard parses it into the same component table either way.
Source: frontend/src/app/sbom/page.tsx.
Use in CI
The SBOM is most useful as an artifact attached to a release build:
- name: Run SecureScan
run: securescan scan . --type dependency --output json --output-file findings.json
- name: Upload SBOM
run: |
curl -H "X-API-Key: $K" \
"https://securescan.internal/api/v1/scans/${SCAN_ID}/sbom" \
> sbom.cyclonedx.json
- uses: actions/upload-artifact@v4
with:
name: sbom
path: sbom.cyclonedx.json
For supply-chain attestations (sigstore-cosign attestation), the CycloneDX file is the right input — the format has stable canonicalization rules.
What this is not
SecureScan's SBOM generator is a convenience surface built on top
of the dependency scanners. For production-grade SBOMs that need to
be canonical for compliance audits, use a dedicated tool —
Syft and
cyclonedx-cli are the
references. SecureScan deliberately does not try to be an SBOM
generator (see README "Non-goals").
The trade-off: SecureScan's SBOM is "good enough" for human review and for tying to the same scan that produced your vulnerability findings. It is not a replacement for Syft when an auditor requires a canonical SBOM.
Next
- Findings & severity — the matched dependency vulns.
- Compliance — license risk surfaces here too.
- API endpoints — full list.
Diff & compare
Two related surfaces, both built on the cross-scan fingerprint identity:
- securescan diff — what's NEW between two git refs (or two pre-scanned snapshots). The CI workhorse.
- securescan compare — what's drifted since a saved baseline. An auditing / triage surface.
The dashboard renders both at /diff and /compare.
diff (CLI)
# Ref mode — refs must exist in the local clone
securescan diff . --base-ref main --head-ref HEAD
# Snapshot mode — recommended for CI; no second checkout required
securescan diff . \
--base-snapshot before.json \
--head-snapshot after.json \
--output github-pr-comment
The classifier produces three buckets keyed on fingerprint:
- NEW — present in head, absent from base.
- FIXED — present in base, absent from head.
- UNCHANGED — fingerprint in both.
Only NEW is reported by default in the github-pr-comment output — that is the diff-aware-PR-comment property.
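Because fingerprints are stable identifiers, the classification is a pure set difference. Sketched here with dicts mapping fingerprint → finding (an assumed shape, not the CLI's internal one):

```python
def classify(base: dict, head: dict) -> dict:
    """Bucket findings by fingerprint into NEW / FIXED / UNCHANGED.

    base and head map fingerprint -> finding; the three buckets are
    plain set differences over the fingerprint keys.
    """
    b, h = set(base), set(head)
    return {
        "new":       [head[f] for f in h - b],   # in head only
        "fixed":     [base[f] for f in b - h],   # in base only
        "unchanged": [head[f] for f in b & h],   # in both
    }
```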
Each side of the diff runs securescan scan ... --output json
independently — possibly on different runners — and a single
classification step does the diff without re-checking-out the tree.
This decouples the heavy work from the diff logic and lets you cache
each side's snapshot.
See CLI commands.
diff (dashboard)
/diff: a PR-style scan-vs-scan comparison.
PageHeader: Diff
Base: [ scan picker ▾ ] ↔ Head: [ scan picker ▾ ]
0d2c... · 2026-04-29 0f1a... · 2026-04-29
Summary chips
▲ 3 new ▼ 2 resolved = 14 unchanged Risk Δ +12.4
[ New (3) ] [ Resolved (2) ] [ Unchanged (14) ] <- tabs
Findings table (severity-tinted, expandable rows)
● critical Use of eval() backend/api.py:42 semgrep ⌃
● high SQL injection via str.format backend/db.py:12 bandit ⌃
● medium Missing X-Frame-Options (https://...) dast ⌃
Source: frontend/src/app/diff/page.tsx (FEAT1 from v0.6.0).
compare (CLI)
# What disappeared since the last baseline?
securescan compare .securescan/baseline.json
compare classifies findings into:
- NEW — in current scan, not in baseline.
- DISAPPEARED — in baseline, not in current scan.
- STILL_PRESENT — in both.
The PR-comment marker is <!-- securescan:compare --> so a comment
upserter can keep this on a separate thread from the
<!-- securescan:diff --> PR-diff comment.
compare (dashboard)
/compare: same shape as /diff, framed for "current scan vs saved
baseline" rather than "scan A vs scan B". Useful at end-of-sprint to
confirm legacy findings were actually remediated.
API: scan-vs-scan compare
curl -H "X-API-Key: $K" \
"http://127.0.0.1:8000/api/v1/scans/compare?scan_a=$BASE&scan_b=$HEAD" \
| jq .
Response:
{
"scan_a": "0d2c...",
"scan_b": "0f1a...",
"new": [ /* findings present in scan_b only */ ],
"fixed": [ /* findings present in scan_a only */ ],
"unchanged": [ /* fingerprints in both */ ]
}
Source:
backend/securescan/api/scans.py::compare_scans.
CI integration
The Metbcy/securescan@v1 action runs securescan diff automatically
on pull_request events, posts the upserted PR comment, and uploads
SARIF — see GitHub Action. To wire diff
into a custom CI:
- name: Snapshot base
run: |
git checkout ${{ github.base_ref }}
securescan scan . --type code --output json --output-file before.json
- name: Snapshot head
run: |
git checkout ${{ github.head_ref }}
securescan scan . --type code --output json --output-file after.json
- name: Diff
run: |
securescan diff . \
--base-snapshot before.json \
--head-snapshot after.json \
--output github-pr-comment \
--output-file diff.md
How fingerprints handle reformats
A reformat that does not change the matched line's meaning should
not reclassify findings as NEW. The fingerprint's
normalized_line_context collapses whitespace and trivial
reformatting before hashing, so:
| Change | Fingerprint |
|---|---|
| Reflow eval(payload) → eval(\n payload\n) | Stable |
| Replace tabs with spaces | Stable |
| Rename a variable used in the line | Changes (semantic shift) |
| Move the line to a different file | Changes (file_path is in the hash) |
For the few cases where this is wrong (e.g. you move a function file
that the scanner re-flags), the inline securescan: ignore comment
travels with the code — the suppression survives the rename.
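The idea is easy to sketch (illustrative only — the real normalized_line_context recipe lives in the fingerprint code; this crude version simply strips all whitespace before hashing):

```python
import hashlib
import re

def fingerprint(file_path: str, rule_id: str, line_context: str) -> str:
    """Illustrative fingerprint: hash of (file_path, rule_id,
    whitespace-normalized line context). Reflows and tab/space
    changes hash identically; renaming a variable or moving the
    line to another file changes the digest."""
    normalized = re.sub(r"\s+", "", line_context)  # crude normalization
    payload = "|".join([file_path, rule_id, normalized])
    return hashlib.sha256(payload.encode()).hexdigest()
```

With this scheme, reflowing `eval(payload)` across three lines produces the same digest, while renaming `payload` or moving the finding to a different file does not — matching the table above.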
Determinism
Both diff and compare are byte-deterministic given the same inputs:
the underlying securescan scan is deterministic
(Findings & severity),
and the classification step is a pure set difference. So the same PR
push twice posts the same comment body; if the body has not changed,
the upsert is a no-op.
Next
- GitHub Action — wires diff into PRs.
- Suppression — particularly the baseline mechanism (securescan baseline).
- CLI commands.
Authentication overview
SecureScan supports three credential types and one fallthrough mode:
| Source | Format | Scopes | Where it lives |
|---|---|---|---|
| Env-var (legacy) | Any string | read + write + admin | SECURESCAN_API_KEY env var |
| DB-issued key | ssk_<id>_<secret> | Per-key (declared) | api_keys table, salted-SHA-256 hashed |
| SSE event token | base64url(...) | Inherits caller's | ?event_token=... on /scans/{id}/events only |
| Dev mode | (none) | All — fail-open | When neither of the first two is configured |
The auth path is in
backend/securescan/auth.py.
Decision tree
flowchart TD
Req[Incoming request to /api/*] --> SSE{SSE route<br>and ?event_token=...?}
SSE -->|yes| Tok[Validate HMAC, expiry,<br>scan_id binding,<br>rehydrate principal]
SSE -->|no| Hdr{X-API-Key or<br>Authorization: Bearer set?}
Hdr -->|yes| Match{Matches<br>env or DB?}
Match -->|env match| EnvP[Principal: env, all scopes]
Match -->|DB match, not revoked| DbP[Principal: db, row.scopes]
Match -->|no match| F401[401 Invalid API key]
Hdr -->|no| Has{any creds<br>configured?}
Has -->|yes| F401b[401 X-API-Key required]
Has -->|no| Required{AUTH_REQUIRED=1?}
Required -->|yes| F503[503 unreachable in practice;<br>startup raised SystemExit]
Required -->|no| Dev[None: dev mode passthrough]
Tok --> Scope[require_scope]
EnvP --> Scope
DbP --> Scope
Dev --> Scope
Scope -->|scope OK| H[Route handler]
Scope -->|missing| F403[403 Requires scope: ...]
Three things to set in production
At minimum, a production deployment must:
- Configure credentials. Either set SECURESCAN_API_KEY (legacy single-key) OR create at least one DB key with admin scope via POST /api/v1/keys. See API keys.
- Set SECURESCAN_AUTH_REQUIRED=1. Without it, an empty DB plus an unset env var silently falls back to dev mode (every request passes through). With it, the backend exits with status code 2 at startup if no credentials exist.
- Set SECURESCAN_EVENT_TOKEN_SECRET. Required when AUTH_REQUIRED=1. Without it, every backend restart breaks any in-flight SSE tokens and the live-progress dashboard goes blind. See SSE event tokens.
The full pre-flight is on the Production checklist.
Public endpoints
These never require auth — they are for Kubernetes / load-balancer probes and the API discovery surface:
| Endpoint | Purpose |
|---|---|
| GET / | API root: {name, status, docs, health}. No DB read. |
| GET /health | Liveness — process up. Always 200 unless crashing. |
| GET /ready | Readiness — DB openable + scanner registry loaded. 200 or 503. |
| GET /docs | FastAPI Swagger UI (auto-generated from the route schemas). |
| GET /redoc | FastAPI ReDoc UI. |
| GET /openapi.json | OpenAPI document. |
/docs, /redoc, and /openapi.json describe every route on
the server, including admin-scope ones. They do not expose data, but
they do expose the surface. If your threat model includes that, put
the dashboard behind a TLS-terminating proxy with IP allowlisting —
SecureScan's auth model is intentionally simple and assumes the API
is not directly internet-facing. The legacy unprefixed /api/*
paths additionally carry deprecation headers, which leak nothing
sensitive but do tell a probe the version. See
Production checklist.
Header formats
Every authenticated route accepts either of:
X-API-Key: ssk_5a7c8f9e_abc123def...
Authorization: Bearer ssk_5a7c8f9e_abc123def...
The Bearer form exists because some HTTP clients and load balancers
handle the standard Authorization header more ergonomically than
custom headers. The two forms are equivalent.
For the SSE event-token path on /scans/{id}/events, neither header
is used — the token rides in the query string:
GET /api/v1/scans/0f1a93cb/events?event_token=eyJzY2FuX2lkI...
This is the only route on which event_token is honored. Any
other URL with a query-string token gets 401.
Dev mode
When no env var is set AND no DB keys exist AND
AUTH_REQUIRED=0 (the default), the backend logs a startup banner:
SECURESCAN_API_KEY not set; API is unauthenticated (dev mode).
Every /api/* request passes through. request.state.principal is
None; require_scope(...) fails open in this case so route-level
scope checks do not block local development.
This is convenient for one-machine local development and unacceptable for anything else. The startup banner is a warning, not a request-time block. See Production checklist.
What happens to a revoked key
- An explicit-but-bogus key — even one that was valid yesterday — always returns 401. It does not fall through to dev mode just because no other DB keys remain.
- An SSE event token bound to a now-revoked DB key fails the rehydrate step at connect time and returns 401, even if the token HMAC and TTL are still valid. Revocation is immediate.
- The last_used_at touch only writes on a successful auth, so a stream of 401s does not pound the api_keys table.
The "explicit-but-bogus → always 401" path was a v0.8.0 bug fix:
without it, revoking your only key would silently flip the system
back to dev mode and the revoked key would keep working until at
least one other key was created. Regression test in
test_revoked_db_key_rejected_when_no_env_var.
What's authenticated, what isn't
| Surface | Auth required (when configured)? |
|---|---|
| Every /api/* route | Yes, with explicit per-route scope |
| /health, /ready | No — public for probes |
| /docs, /redoc, /openapi.json | No — schema is the API surface, not data |
| Static dashboard assets (Next.js) | Out of scope; deploy frontend behind your own auth |
| Frontend → API requests | Yes — frontend injects the NEXT_PUBLIC_SECURESCAN_API_KEY |
| SSE /scans/{id}/events | Yes, via short-lived event token (browsers can't send headers) |
Source
- auth.py — Principal, require_api_key, require_scope, assert_auth_credentials_configured.
- api_keys.py — key generation, salted-SHA-256 hashing, parse_key_id.
- event_tokens.py — SSE token mint/verify.
- api/keys.py — CRUD endpoints for DB keys.
Next
- API keys — DB-issued keys, the v0.8.0 way.
- Scopes — read / write / admin per route.
- SSE event tokens — auth on EventSource.
- Production checklist — the pre-flight.
API keys
Hashed, scoped, DB-issued API keys — introduced in v0.8.0. Replaces the v0.5.0 single-shared-env-var contract for production deployments. The legacy env var still works as a break-glass / dev-mode fallback.
Format
ssk_<10-char-id>_<32-char-secret>
- Prefix ssk_ — fixed; lets the auth path quickly reject obviously malformed keys without a DB roundtrip.
- id — 10 base64url chars, ~60 bits of entropy. The id is what surfaces in API responses (the id field on ApiKeyView) and what the dashboard shows as the key's stable identifier.
- secret — 32 base64url chars, ~192 bits of entropy. Combined with the id, total ≈ 250 bits — brute-forcing is infeasible even without a memory-hard KDF.
Example: ssk_5a7c8f9eab_abcdefghij1234567890klmnopqrstuv.
The DB only ever stores a salted SHA-256 hash. The plaintext is returned exactly once at creation. Lose it → revoke and re-issue.
Why salted SHA-256, not bcrypt / argon2?
The keys are 192-bit random secrets. Brute-forcing the hash is
already infeasible without a memory-hard KDF. Adding bcrypt buys
nothing except a hard dependency and per-request CPU cost on the
hot auth path. The hash is <salt-hex>$<sha256-hex> so it's
self-contained — no separate salt column, no runtime config knob.
This is the rationale documented in
backend/securescan/api_keys.py.
If your threat model is different (e.g. you let users pick weak keys), use longer keys, not a slower KDF. The strength is in the secret, not the hash.
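For illustration, a self-contained version of the <salt-hex>$<sha256-hex> scheme might look like this (a sketch; the salt length and the salt-then-key concatenation order in api_keys.py are assumptions):

```python
import hashlib
import hmac
import secrets

def hash_key(plaintext: str) -> str:
    """Store as "<salt-hex>$<sha256-hex>" — self-contained, so no
    separate salt column is needed. Fresh random salt per key."""
    salt = secrets.token_bytes(16)
    digest = hashlib.sha256(salt + plaintext.encode()).hexdigest()
    return f"{salt.hex()}${digest}"

def verify_key(plaintext: str, stored: str) -> bool:
    """Recompute the digest with the embedded salt and compare in
    constant time (compare_digest avoids timing side channels)."""
    salt_hex, digest = stored.split("$", 1)
    candidate = hashlib.sha256(
        bytes.fromhex(salt_hex) + plaintext.encode()
    ).hexdigest()
    return hmac.compare_digest(candidate, digest)
```

One SHA-256 per auth attempt keeps the hot path cheap; the security margin comes entirely from the 192-bit random secret, as the rationale above argues.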
Issuing a key
Via the API (curl)
$ curl -s -X POST http://127.0.0.1:8000/api/v1/keys \
-H "X-API-Key: $ADMIN_KEY" \
-H 'Content-Type: application/json' \
-d '{
"name": "ci-runner",
"scopes": ["read", "write"]
}' | jq .
{
"id": "5a7c8f9eab",
"name": "ci-runner",
"prefix": "ssk_5a7c8f9eab",
"scopes": ["read", "write"],
"created_at": "2026-04-29T20:00:00",
"last_used_at": null,
"revoked_at": null,
"key": "ssk_5a7c8f9eab_abcdefghij1234567890klmnopqrstuv"
}
Save the key field immediately — write it to a secrets manager,
inject it into a CI environment, whatever your workflow is. The
secret is never returned again. Subsequent reads strip it.
Via the dashboard
The /settings/keys page lists existing keys (name, prefix, scopes,
created, last used, status) and has a New key modal:
- Click "New key".
- Fill in name + scopes.
- Modal shows the secret with a copy button. Close button is disabled for 1 second to prevent reflexively dismissing.
- Closing without an explicit "I saved it" confirmation triggers a "discard without saving the key?" dialog (Esc / outside-click).
- Once closed, the secret is gone from memory. The list-row updates.
Source: frontend/src/app/settings/keys/page.tsx.
Default scopes
A new-key request without an explicit scopes list defaults to
["read", "write"]. admin is never granted by default — you
have to ask for it. This matches the principle of least privilege
and means the common case (a CI runner that posts scans and reads
results) is handled without thinking about permissions.
See Scopes for what each scope grants.
Listing & introspection
# List all keys (admin only)
curl -H "X-API-Key: $ADMIN_KEY" http://127.0.0.1:8000/api/v1/keys
# Introspect the calling key (any DB key)
curl -H "X-API-Key: $K" http://127.0.0.1:8000/api/v1/keys/me
GET /me is useful from a CI step to confirm the right key is
plumbed in:
$ curl -s -H "X-API-Key: $K" http://127.0.0.1:8000/api/v1/keys/me | jq .
{
"id": "5a7c8f9eab",
"name": "ci-runner",
"prefix": "ssk_5a7c8f9eab",
"scopes": ["read", "write"],
"created_at": "2026-04-29T20:00:00",
"last_used_at": "2026-04-29T20:14:22",
"revoked_at": null
}
Revoking a key
curl -X DELETE -H "X-API-Key: $ADMIN_KEY" \
http://127.0.0.1:8000/api/v1/keys/5a7c8f9eab
# 204 — revocation is recorded with revoked_at = now
A revoked key is never un-revoked. To rotate, create a new key and revoke the old one.
A second DELETE on the same id is a no-op 204 (idempotent: the
caller's intent — "this id should be revoked" — is satisfied).
Lockout protection
Revoking the last unrevoked admin key when AUTH_REQUIRED=1 and no
SECURESCAN_API_KEY env var is set returns:
HTTP/1.1 409 Conflict
Content-Type: application/json
{"detail": "Cannot revoke last admin key without an env-var fallback"}
The check is in count_admin_keys_active. The point: you cannot lock
yourself out of your own deployment via the API. To intentionally
revoke the last admin key, either:
- Set SECURESCAN_API_KEY first (so an env-var fallback exists), then revoke; OR
- Issue a replacement admin key first, then revoke the original.
If AUTH_REQUIRED=0 (or the env var is set), revoking the last admin
key is allowed — the system will fall back to env-var auth or, with
neither, dev mode.
Lifecycle: rotate a key
sequenceDiagram
    autonumber
    participant Op as Operator
    participant API as SecureScan
    Op->>API: POST /api/v1/keys (name=ci-runner-2026q2, scopes=[read,write])
    API-->>Op: ssk_NEWID_NEWSECRET
    Op->>Op: deploy ssk_NEWID_NEWSECRET to CI
    Note over Op,API: ... drain in-flight CI runs ...
    Op->>API: DELETE /api/v1/keys/OLDID
    API-->>Op: 204
    Note over API: ssk_OLDID_OLDSECRET now 401s on every call
The rotation is fast (no proxying period) because both keys are
valid in parallel until the old one is revoked. There is no caching
in the auth path beyond last_used_at.
Migration from env-var-only (v0.7.0 and earlier)
If you are running v0.7.0 or earlier and want to move to DB-backed keys, the path is:
flowchart LR
    Start([env-var only<br>SECURESCAN_API_KEY set]) --> A[Upgrade to v0.8.0+]
    A --> B[Backend boots; env-var path still works]
    B --> C[Issue an admin DB key:<br>POST /api/v1/keys with scopes=admin]
    C --> D[Test the new admin key against /keys/me]
    D --> E[Issue scoped keys for CI/dashboard:<br>POST /api/v1/keys]
    E --> F[Update CI/dashboard to the new key]
    F --> G[Set AUTH_REQUIRED=1 + EVENT_TOKEN_SECRET]
    G --> H[Restart backend]
    H --> I([Optional: unset SECURESCAN_API_KEY<br>now that DB keys carry the system])
Don't unset SECURESCAN_API_KEY until you have verified at least
one admin DB key works. If you remove the env var and your only
DB keys are read/write, you've locked yourself out of issuing more
keys. Lockout protection on DELETE /keys/{id} catches the symmetric
case but not "operator removed the env var manually."
DB schema
CREATE TABLE api_keys (
id TEXT PRIMARY KEY,
name TEXT NOT NULL,
prefix TEXT NOT NULL, -- ssk_<id>; safe to display
key_hash TEXT NOT NULL, -- "<salt-hex>$<sha256-hex>"
scopes TEXT NOT NULL, -- JSON array of scope strings
created_at TEXT NOT NULL,
last_used_at TEXT, -- updated on each successful auth
revoked_at TEXT -- non-null = revoked
);
A revoked key is kept (not deleted) so the audit trail records who created it and when it was revoked.
API reference
| Method | Path | Scope | Body / Notes |
|---|---|---|---|
| POST | /api/v1/keys | admin | {name, scopes?} → 201 ApiKeyCreated (the only response that includes the full secret) |
| GET | /api/v1/keys | admin | ApiKeyView[] — no secret |
| GET | /api/v1/keys/me | any DB key | ApiKeyView of the caller; useful for CI sanity checks |
| DELETE | /api/v1/keys/{id} | admin | 204; 409 with lockout-protection message if revoking would zero out admin credentials |
The same routes are mounted at /api/keys (legacy, with deprecation
headers) and /api/v1/keys. See API versioning.
Source:
backend/securescan/api/keys.py.
Next
- Scopes — what read / write / admin actually grant.
- SSE event tokens — how the SSE stream auths once you have a key.
- Production checklist — full pre-flight.
Scopes
Three scopes, declared per-route and checked with OR semantics by the
require_scope dependency. Introduced in v0.8.0; every /api/* route
declares its required scope explicitly, and a regression-guard test
fails CI if a new route ships without one.
The scopes
| Scope | Grants | Typical caller |
|---|---|---|
| read | Read scans, findings, summaries, SBOMs, notifications, compliance. | Read-only dashboard, monitoring tooling. |
| write | Create / cancel / delete scans. Set triage state. Add comments. Mark notifications read. | CI runners, the dashboard. |
| admin | All of the above + manage API keys, manage webhooks. | Operator-only (one admin key per deployment). |
Default new-key scopes are ["read", "write"]. admin must be
explicitly requested. See API keys.
Per-route mapping
The source of truth is the regression-guard test:
backend/tests/test_scopes.py::test_all_routes_have_explicit_scope.
That test enumerates app.routes and asserts every non-public route
declares a scope via Depends(require_scope(...)). A new route
without a scope fails CI.
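The core of such a guard is framework-agnostic and fits in a few lines (a sketch over an assumed route-record shape, not FastAPI's actual internals):

```python
# Routes that are deliberately public (probes, API discovery).
PUBLIC_PATHS = {"/", "/health", "/ready", "/docs", "/redoc", "/openapi.json"}

def routes_missing_scope(routes):
    """routes: [{"path": str, "methods": [str], "scope": str or None}]
    Returns the non-public routes that ship without an explicit scope,
    so the test can assert the list is empty."""
    return [
        f"{','.join(r['methods'])} {r['path']}"
        for r in routes
        if r["path"] not in PUBLIC_PATHS and not r.get("scope")
    ]
```

The real test walks app.routes and looks for the scope marker on each route's dependencies; the useful property is the same — the assertion output names exactly which new route forgot its scope.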
The mapping below is the v0.9.0 surface, condensed:
Read scope
| Method | Path | Notes |
|---|---|---|
| GET | /api/v1/scans | List scans |
| GET | /api/v1/scans/{id} | Scan details |
| GET | /api/v1/scans/{id}/findings | Findings + triage state |
| GET | /api/v1/scans/{id}/summary | Severity counts + risk score |
| GET | /api/v1/scans/{id}/sbom | CycloneDX or SPDX |
| GET | /api/v1/scans/{id}/report | HTML / PDF report |
| GET | /api/v1/scans/{id}/events | SSE — accepts event token; see event tokens |
| GET | /api/v1/scans/compare | Scan-vs-scan diff |
| GET | /api/v1/findings/{fp}/comments | List comments on a fingerprint |
| GET | /api/v1/dashboard/status | Scanner availability |
| GET | /api/v1/dashboard/stats | Aggregate statistics |
| GET | /api/v1/dashboard/trends | Risk / finding trend data |
| GET | /api/v1/compliance/coverage | Per-framework coverage |
| GET | /api/v1/notifications | List notifications |
| GET | /api/v1/notifications/unread-count | Unread count |
| POST | /api/v1/scans/{id}/event-token | Mint SSE token (read-only operation) |
Write scope
| Method | Path | Notes |
|---|---|---|
| POST | /api/v1/scans | Start a new scan (rate-limited) |
| DELETE | /api/v1/scans/{id} | Delete a scan + cascade findings |
| POST | /api/v1/scans/{id}/cancel | Cancel a running scan |
| PATCH | /api/v1/findings/{fp}/state | Set / replace triage verdict |
| POST | /api/v1/findings/{fp}/comments | Add a comment |
| DELETE | /api/v1/findings/{fp}/comments/{id} | Delete a comment |
| PATCH | /api/v1/notifications/{id}/read | Mark one notification read |
| PATCH | /api/v1/notifications/read-all | Bulk mark notifications read |
| POST | /api/v1/dashboard/install/{scanner} | Install a supported scanner |
Admin scope
| Method | Path | Notes |
|---|---|---|
| POST | /api/v1/keys | Issue a new API key |
| GET | /api/v1/keys | List all keys |
| DELETE | /api/v1/keys/{id} | Revoke a key (lockout-protected) |
| POST | /api/v1/webhooks | Create a webhook subscription |
| GET | /api/v1/webhooks | List webhooks |
| GET | /api/v1/webhooks/{id} | Fetch one webhook |
| PATCH | /api/v1/webhooks/{id} | Edit a webhook (cannot rotate secret) |
| DELETE | /api/v1/webhooks/{id} | Delete a webhook + cascade deliveries |
| GET | /api/v1/webhooks/{id}/deliveries | Last 100 delivery rows |
| POST | /api/v1/webhooks/{id}/test | Fire a synthetic webhook.test |
There is no read-scope view of webhooks in v0.9.0. An attacker
with read cannot see webhook URLs or delivery history; an
attacker with write cannot redirect events to a sink they
control. Only admin does. The /settings/webhooks dashboard page
is the only intended consumer.
Special cases
GET /api/v1/keys/me
Carries no require_scope dependency — any authenticated DB key can
introspect itself. The handler returns the calling key's metadata
(no secret, just the prefix + scopes + timestamps).
GET /api/v1/scans/{id}/events
Declared with Depends(require_scope("read")), but the auth path
also accepts ?event_token=... because browsers can't send
X-API-Key on EventSource. The token is bound to the caller's
key_id; the rehydrated principal carries that key's scopes. See
SSE event tokens.
OR semantics
Depends(require_scope("read", "admin")) accepts a key with either
read or admin. If you need AND semantics (key must have both),
declare two separate dependencies — but the v0.9.0 surface does not
use AND anywhere.
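The accept/reject logic behind OR semantics can be sketched without the FastAPI plumbing (illustrative; the real dependency factory is auth.py::require_scope):

```python
class ScopeError(Exception):
    """Maps to the API's 403 {"detail": "Requires scope: ..."} response."""

def check_scope(principal_scopes, *accepted):
    """OR semantics: pass if the caller holds ANY accepted scope.

    principal_scopes is None in dev mode (no credentials configured),
    in which case the check fails open, as described above."""
    if principal_scopes is None:  # dev mode passthrough
        return
    if set(principal_scopes).isdisjoint(accepted):
        raise ScopeError(f"Requires scope: {accepted[0]}")
```

AND semantics would simply be two such checks in sequence — which is why the real surface expresses it as two separate dependencies rather than a second mode.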
Scope check failure
HTTP/1.1 403 Forbidden
Content-Type: application/json
{"detail": "Requires scope: admin"}
403 (not 401) because the caller is authenticated; they just don't
have the right permissions. Returns Requires scope: <scope> so
operators can quickly diagnose missing-scope issues.
Dev mode behavior
When the system has no env-var key AND no DB keys, every request
arrives with request.state.principal = None. require_scope(...)
fails open in this case — local development is not blocked by
scope checks.
require_api_key has already enforced the
"AUTH_REQUIRED=1 with no creds" case as 503; so when
require_scope sees principal is None it knows the system is
genuinely in dev mode, not in a misconfigured production state.
Changing scopes on an existing key
Not supported. Scopes are set at issuance. To change scopes:
- Issue a new key with the right scopes.
- Update the consumer to use the new key.
- Revoke the old one.
This is deliberate. A "PATCH key scopes" endpoint would be a fast privilege-escalation path: with a leaked admin key, an attacker could quietly elevate an innocuous-looking read/write key in place, with no new key appearing in the list to give it away.
Source
- auth.py::require_scope — dependency factory + the __securescan_scope__ marker that the regression test uses.
- tests/test_scopes.py::test_all_routes_have_explicit_scope — the enforcement.
Next
- API keys — issuing, revoking, scoping at issuance.
- Production checklist.
SSE event tokens
Browsers cannot attach custom headers to an EventSource. Without
this mechanism the dashboard's live-progress stream would either:
- Punch a hole in require_api_key for /events (no), or
- Fall back to 2-second polling in any authenticated deployment (the v0.7.0 reality).
v0.9.0 closes that gap. The dashboard exchanges its X-API-Key for
a short-lived signed token and rides on ?event_token=....
Token format
base64url( "<scan_id>|<key_id>|<expires_at>|<sig_b64>" )
where sig_b64 = base64url( HMAC_SHA256( secret, "<scan_id>|<key_id>|<expires_at>" ) )
- scan_id — the scan the token authorizes. The token works on only this scan's /events route; replaying it against scan B 401s.
- key_id — the DB key id (5a7c8f9eab), the literal string "env" for the legacy env-var path, or "dev" for dev mode. Used to rehydrate the principal at connect time, so revoking a DB key invalidates outstanding tokens immediately — even before their TTL expires.
- expires_at — unix seconds. Default TTL is 300 seconds (5 minutes).
- sig — HMAC-SHA-256 over the body, base64url, no padding.
The signing secret is SECURESCAN_EVENT_TOKEN_SECRET. Required when
SECURESCAN_AUTH_REQUIRED=1. In dev mode the backend auto-generates
an ephemeral 32-byte secret on first use.
Set SECURESCAN_EVENT_TOKEN_SECRET explicitly in production.
Without it, every backend restart picks a new ephemeral secret —
in-flight SSE tokens from the FE silently 401, the dashboard's live
progress goes blind, and the FE falls back to polling. The startup
hook will exit with code 2 if AUTH_REQUIRED=1 and the secret is
unset (defense in depth).
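Given the format above, mint and verify are a few lines of stdlib Python (a sketch that follows the documented token layout; function names and check ordering are illustrative):

```python
import base64
import hashlib
import hmac
import time

def _b64(data: bytes) -> str:
    # base64url, no padding — as the token format specifies
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def mint(secret: bytes, scan_id: str, key_id: str, ttl: int = 300) -> str:
    """Token = base64url("<scan_id>|<key_id>|<expires_at>|<sig_b64>")."""
    body = f"{scan_id}|{key_id}|{int(time.time()) + ttl}"
    sig = _b64(hmac.new(secret, body.encode(), hashlib.sha256).digest())
    return _b64(f"{body}|{sig}".encode())

def verify(secret: bytes, token: str, url_scan_id: str) -> str:
    """Return the bound key_id, or raise ValueError.
    Checks: HMAC (constant-time), expiry, scan-id binding."""
    raw = base64.urlsafe_b64decode(token + "=" * (-len(token) % 4)).decode()
    scan_id, key_id, expires_at, sig = raw.split("|")
    body = f"{scan_id}|{key_id}|{expires_at}"
    expected = _b64(hmac.new(secret, body.encode(), hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    if time.time() > int(expires_at):
        raise ValueError("expired")
    if scan_id != url_scan_id:
        raise ValueError("Token does not match scan id")
    return key_id
```

Note that the third check — scan-id binding — is what makes a token leak bounded: even a valid, unexpired token is useless against any other scan's /events route. The real verifier additionally rehydrates the principal from key_id, which this sketch omits.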
End-to-end flow
sequenceDiagram
autonumber
participant FE as Dashboard
participant API as FastAPI
participant DB as DB
FE->>API: POST /api/v1/scans/{id}/event-token (X-API-Key)
API->>DB: validate key (or env-var match)
API-->>FE: { token, expires_in: 300 }
FE->>API: GET /scans/{id}/events?event_token=… (no header)
API->>API: HMAC verify + expiry check
API->>API: scan_id binding check
API->>DB: rehydrate principal from key_id
alt key revoked since mint
API-->>FE: 401 Bound key is revoked or missing
else principal OK
API->>API: subscribe to bus
loop scan progress
API-->>FE: event: scanner.complete
end
API-->>FE: event: scan.complete (terminal)
API-->>FE: close stream
end
Three layers of defense at verify time
The auth path is in
backend/securescan/auth.py::_authenticate_via_event_token.
- HMAC + expiry. The event_tokens.verify(...) helper rejects forged or expired tokens. Constant-time comparison.
- Scan-id binding. The scan_id baked into the token must match the one in the URL path. A token minted for scan A cannot be replayed against scan B.
- Principal rehydrate. The bound key_id is looked up in the api_keys table; if the row is revoked or missing, the connection is refused — even if the token's HMAC and TTL are still valid. Revocation is immediate.
Frontend rotation timeline
The FE rotates tokens at half-life and tries one re-mint on error before falling back to polling.
gantt
title SSE token rotation for an 8-minute scan
dateFormat HH:mm:ss
axisFormat %M:%S
section Tokens
token-1 (5m TTL) :active, t1, 00:00:00, 5m
token-2 (rotate at half-life) :t2, 00:02:30, 5m
token-3 (rotate at half-life) :t3, 00:05:00, 5m
token-4 (rotate at half-life) :t4, 00:07:30, 5m
section EventSource
open with token-1 :crit, e1, 00:00:00, 2m30s
swap to token-2 :e2, 00:02:30, 2m30s
swap to token-3 :e3, 00:05:00, 2m30s
swap to token-4 :e4, 00:07:30, 1m
scan.complete (close) :milestone, 00:08:00, 0s
The rotation is implemented in frontend/src/app/scan/[id]/:
- On EventSource.onopen, schedule a rotation timer at TTL / 2.
- On rotation: POST /event-token, then close the old EventSource, then open a new one with the fresh token. The order matters — the browser's auto-reconnect on the old EventSource would otherwise race with the new connection.
- On EventSource.onerror: try one re-mint; if that also errors, close the EventSource and start polling GET /scans/{id} every 2s.
Manually minting a token
$ curl -s -X POST -H "X-API-Key: $K" \
http://127.0.0.1:8000/api/v1/scans/$SCAN_ID/event-token | jq .
{
"token": "MGYxYTkzY2J8NWE3YzhmOWVhYnwxNzMwMjMwNTg1fGFi....",
"expires_in": 300
}
The response includes Cache-Control: no-store so the token does
not stick in proxies or access logs longer than necessary.
Use it on the events stream:
$ TOKEN="MGYxYTkzY2J8NWE3YzhmOWVhYnwxNzMwMjMwNTg1..."
$ curl -N "http://127.0.0.1:8000/api/v1/scans/$SCAN_ID/events?event_token=$TOKEN"
event: scan.start
data: {"scan_types":["code"]}
...
A token bound to scan A replayed against scan B:
$ curl -i -N "http://127.0.0.1:8000/api/v1/scans/OTHER/events?event_token=$TOKEN"
HTTP/1.1 401 Unauthorized
{"detail":"Token does not match scan id"}
Dev-mode tokens
When the system has no env-var key AND no DB keys (dev mode), the
mint endpoint still works — it issues a token bound to key_id="dev".
The verifier accepts dev tokens only while the system remains in
dev mode. If the operator subsequently sets SECURESCAN_API_KEY
or creates a DB key, every outstanding dev-mode token immediately
401s with Dev-mode token no longer valid (auth has been enabled).
This is by design. The alternative — keeping dev-mode tokens valid
forever — would let a stale browser tab bypass auth after the
operator hardens the deployment. Regression tests:
test_sse.py::test_dev_mode_token_round_trips and
test_dev_mode_token_invalidated_when_auth_enabled.
v0.9.0 introduced this "dev" sentinel. An earlier draft minted
dev-mode tokens with key_id="env", then verification rejected them
because no env-var was actually configured. Caught in integration;
fixed before the v0.9.0 ship. See the v0.9.0
CHANGELOG entry.
Path-spoofing safety
The auth path checks request.url.path to decide whether the
?event_token= is honored:
is_sse_route = (
request.url.path.endswith("/events")
and "/scans/" in request.url.path
)
request.url.path is the post-routing ASGI path that
FastAPI/Starlette already used to dispatch. A caller cannot lie
about being on the SSE route to win this check while actually
hitting a different handler — the path FastAPI sees is the path
that determines the handler, full stop.
In addition, the token's scan_id is compared against the path's
scan_id. So even within the SSE family, a token leaks only to its
own scan id.
Token TTL trade-off
5 minutes is short enough that a leaked token (browser extension, shared screenshot) is not a long-lived risk. It is long enough that a normal scan completes inside one token. For long scans (multi-hour nmap sweeps, ZAP active scans), the half-life rotation is the mechanism — the FE just keeps re-minting.
If your operational profile demands a different TTL, the constant
is TOKEN_TTL_SECONDS in
backend/securescan/event_tokens.py.
We do not currently expose it as an env var because no operator has
asked.
What this is not
Event tokens are scoped to a single scan's SSE route. They are not:
- General-purpose API auth (use the API key for anything else).
- Long-lived (5-minute TTL).
- Cookie-able (they ride in the query string by design).
- A delegation mechanism (they bind to the minting key's id).
If you need browser auth for the rest of the API, use the API key
directly via X-API-Key. SSE is the special case because of the
EventSource browser-API limitation, not a deliberate auth design.
Source
- Token mint/verify: backend/securescan/event_tokens.py.
- Mint endpoint + auth integration: backend/securescan/api/scans.py (create_event_token).
- Verify path: backend/securescan/auth.py::_authenticate_via_event_token.
Next
- Real-time scan progress — what the SSE stream actually delivers.
- API keys — the issuer of the keys these tokens bind to.
- Production checklist — SECURESCAN_EVENT_TOKEN_SECRET is on it.
Auth production checklist
A pre-flight specifically for the authentication surface. The broader Production checklist includes this plus the rest (rate limits, single-worker, signed artifacts, health probes).
Before exposing the API
- At least one credential exists. Either:
  - Set SECURESCAN_API_KEY to a strong random string (e.g. openssl rand -hex 32); OR
  - Create a DB-backed admin key via POST /api/v1/keys (see API keys).
- SECURESCAN_AUTH_REQUIRED=1 is set. Without it, dev-mode fallback applies if all credentials are removed (silent unauthenticated state). With it, the backend exits with code 2 at startup if no credentials exist — fail-closed.
- SECURESCAN_EVENT_TOKEN_SECRET is set (required when AUTH_REQUIRED=1). Without it, every backend restart breaks in-flight SSE tokens. See SSE event tokens.
# Suggested env-var generation (Linux / macOS)
export SECURESCAN_AUTH_REQUIRED=1
export SECURESCAN_EVENT_TOKEN_SECRET="$(openssl rand -hex 32)"
# Either keep the legacy env-var key as a break-glass...
export SECURESCAN_API_KEY="$(openssl rand -hex 32)"
# ...or rely on DB keys exclusively (post-migration).
DB key issuance
- Issue at least one admin key via the API or the dashboard before turning off SECURESCAN_API_KEY. Otherwise revoking the last admin key would lock you out — lockout protection catches the symmetric case but not "operator manually removed env var."
- One admin key, scoped down per consumer. A CI runner gets ["read", "write"]. A monitoring dashboard gets ["read"]. Only the operator's break-glass identity gets admin.
- Save the secret to your secrets manager immediately. The plaintext is returned exactly once.
- Document key ownership. Set name to something identifiable (ci-runner, read-only-monitoring, breakglass-2026q2). When you list keys later, you'll be able to tell which is which.
Network exposure
- Terminate TLS in front of SecureScan. The bundled uvicorn serves plain HTTP. nginx, Traefik, AWS ALB, or Caddy are all reasonable.
- Forward X-Request-ID through the proxy so client correlation works end-to-end.
- Restrict access to /docs, /redoc, /openapi.json if your threat model includes "an unauthenticated actor learning the API surface." These describe every route — including admin — but expose no data. For most internal deployments, leaving them open to authenticated network paths is fine.
SecureScan does not ship its own SSO / OIDC integration. If you
need user-mapped auth, put it in front of SecureScan (oauth2-proxy,
Cloudflare Access, AWS ALB OIDC) and treat the backend as a service
that authenticates via API keys. The dashboard's
NEXT_PUBLIC_SECURESCAN_API_KEY is then a service identity for the
proxy, not a user identity.
Frontend wiring
- `NEXT_PUBLIC_SECURESCAN_API_KEY` is a read-scope key, not admin. The value is baked into the build and shipped to the browser. Anyone who can hit the dashboard can read its key off the network tab. Only deploy the dashboard somewhere already protected by your front-line auth (SSO, mTLS).
- The dashboard is not exposed publicly. If it is, every visitor automatically has `read` (and any other scope granted to the key) on the backend. That's a deliberately simple model — keep it inside your network perimeter.
Operational
- Rotation procedure documented. Know in advance how to issue a new key and revoke an old one. See API keys → Lifecycle: rotate a key.
- Alarm on auth failures. Tail the structured logs for `securescan.request` lines with `status: 401` and graph them. A spike past your normal noise floor means something has gone wrong (key expired, brute-force attempt, misconfigured caller).
- Periodic key audit. `GET /api/v1/keys` lists every key with `last_used_at`. Anything not used in 90 days is a candidate for revocation.
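The "graph them" step can start as a small script over captured logs. A sketch that counts the 401 lines (field names match the `securescan.request` log example elsewhere in these docs; pipe `journalctl -u securescan-backend --output=cat` through it):

```python
import json

def count_auth_failures(lines):
    """Count securescan.request log lines with status 401."""
    n = 0
    for line in lines:
        try:
            rec = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip non-JSON lines (startup banners etc.)
        if rec.get("logger") == "securescan.request" and rec.get("status") == 401:
            n += 1
    return n
```

Feed it any iterable of log lines — a file object works as-is.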
SSE / real-time progress
- `SECURESCAN_EVENT_TOKEN_SECRET` is set and stable across restarts. Rotating this secret invalidates every outstanding event token; your dashboard's live progress will go blind for ~5 minutes while clients re-mint.
- Run `--workers 1`. The event bus is in-process; multi-worker uvicorn fragments the bus and SSE breaks silently. See Single-worker constraint.
- Sticky sessions on `scan_id` if you scale horizontally. Each scan's SSE subscribers must land on the same backend instance that runs the scan.
Incident response
- Revoke a leaked key immediately. `DELETE /api/v1/keys/{id}` takes effect on the next request. No cache, no propagation delay.
- Rotate `SECURESCAN_EVENT_TOKEN_SECRET` if an event token was specifically leaked. All outstanding tokens become invalid; legitimate clients re-mint within seconds.
- `SECURESCAN_API_KEY` rotation requires a backend restart. The process environment is fixed at startup, so a new value isn't picked up until the process restarts. To do a zero-downtime rotation, issue a DB key first, switch the consumer, then update and restart.
Verifying a deployment is hardened
```shell
# 1. Auth is required:
$ curl -i http://127.0.0.1:8000/api/v1/scans
HTTP/1.1 401 Unauthorized
{"detail":"X-API-Key header required"}

# 2. Bogus key 401s (does not fall through to dev mode):
$ curl -i -H "X-API-Key: nope" http://127.0.0.1:8000/api/v1/scans
HTTP/1.1 401 Unauthorized
{"detail":"Invalid API key"}

# 3. Read-only key cannot create a scan:
$ curl -i -X POST -H "X-API-Key: $READ_KEY" \
    -d '{"target_path":"/tmp","scan_types":["code"]}' \
    http://127.0.0.1:8000/api/v1/scans
HTTP/1.1 403 Forbidden
{"detail":"Requires scope: write"}

# 4. Write key cannot list webhook subscriptions:
$ curl -i -H "X-API-Key: $WRITE_KEY" http://127.0.0.1:8000/api/v1/webhooks
HTTP/1.1 403 Forbidden
{"detail":"Requires scope: admin"}

# 5. /health and /ready are public:
$ curl -s http://127.0.0.1:8000/health
{"status":"ok"}

# 6. Lockout protection refuses removing the last admin key:
$ curl -i -X DELETE -H "X-API-Key: $ADMIN_KEY" \
    http://127.0.0.1:8000/api/v1/keys/$ONLY_ADMIN
HTTP/1.1 409 Conflict
{"detail":"Cannot revoke last admin key without an env-var fallback"}
```
If those six are green, the auth surface is correctly configured.
Next
- API keys — issuance, rotation, lockout protection.
- Scopes — what each scope grants, route by route.
- SSE event tokens — auth on the live stream.
- Production checklist — the broader pre-flight.
API overview
SecureScan exposes a single REST API. Every route is mounted at both
/api/... (legacy) and /api/v1/... (current); the legacy paths
return Deprecation / Sunset response headers. See
Versioning & deprecation.
This page is the entry point. For the interactive schema with
every parameter, look at the running server's /docs (FastAPI
Swagger UI) or /redoc.
Where each endpoint group lives
| Group | Prefix | Source |
|---|---|---|
| Scans | /api/v1/scans | backend/securescan/api/scans.py |
| Findings | /api/v1/scans/{id}/findings | (same file as Scans) |
| Triage | /api/v1/findings | backend/securescan/api/triage.py |
| Keys | /api/v1/keys | backend/securescan/api/keys.py |
| Webhooks | /api/v1/webhooks | backend/securescan/api/webhooks.py |
| Notifications | /api/v1/notifications | backend/securescan/api/notifications.py |
| SBOM | /api/v1/scans/{id}/sbom | backend/securescan/api/sbom.py |
| Compliance | /api/v1/compliance | backend/securescan/api/compliance.py |
| Dashboard | /api/v1/dashboard | backend/securescan/api/dashboard.py |
| Health probes | /health, /ready | backend/securescan/api/__init__.py |
Auth
Every authenticated route accepts X-API-Key: <key> or
Authorization: Bearer <key>. Per-route scope (read / write /
admin) is declared via Depends(require_scope(...)). See
Authentication overview and
Scopes.
The SSE route additionally accepts ?event_token=... because
browsers cannot send custom headers on EventSource — see
SSE event tokens.
Common request / response patterns
Request ID correlation
Every request carries a request id end-to-end. If you don't pin one, the server generates a UUID and echoes it back via the same header:
```shell
$ curl -i -H "X-Request-ID: my-trace-12345" \
       -H "X-API-Key: $K" \
       http://127.0.0.1:8000/api/v1/scans
HTTP/1.1 200 OK
X-Request-ID: my-trace-12345
...
```
In server logs, the same id appears on the securescan.request
structured log line:
```json
{"timestamp": "...", "logger": "securescan.request", "request_id": "my-trace-12345",
 "method": "GET", "path": "/api/v1/scans", "status": 200, "latency_ms": 4.13}
```
Rate limit headers
POST /api/v1/scans is rate-limited per API key (or per IP in dev
mode). Successful responses include:
```http
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 47
X-RateLimit-Reset: 1730230885
```
Exceeded responses are 429 with Retry-After and a structured body.
See Rate limits.
Error shape
```http
HTTP/1.1 404 Not Found
Content-Type: application/json

{"detail": "Scan not found"}
```
detail is a single string for most errors; on 422 (Pydantic
validation failure), it is a structured list:
```json
{
  "detail": [
    {"type": "value_error", "loc": ["body", "url"], "msg": "URL must be http(s)", "input": "..."}
  ]
}
```
The most-used endpoints
These are documented in detail; the rest live in /docs.
Start a scan
```shell
curl -X POST http://127.0.0.1:8000/api/v1/scans \
  -H "X-API-Key: $K" \
  -H 'Content-Type: application/json' \
  -d '{
    "target_path": "/abs/path",
    "scan_types": ["code", "dependency"]
  }'
```
→ 200 with a Scan row (status starts as pending). Background
asyncio task starts immediately; subscribe to events to watch.
Requires write scope.
Stream live progress
```shell
curl -N -H "X-API-Key: $K" \
  "http://127.0.0.1:8000/api/v1/scans/$SCAN_ID/events"
```
→ SSE stream of scan.start, scanner.start, scanner.complete,
etc. Terminal events close the stream. Requires read scope (or a
valid ?event_token=).
See Real-time scan progress and SSE event tokens.
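Outside a browser you don't need EventSource — the wire format is plain text. A minimal parser for the `event:`/`data:` fields (a sketch that handles only those two fields plus blank-line dispatch; the full SSE format also defines `id:` and `retry:`):

```python
def parse_sse(lines):
    """Collect SSE (event, data) pairs; a blank line dispatches the event."""
    events, name, data = [], "message", []
    for line in lines:
        if line.startswith("event:"):
            name = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data.append(line[len("data:"):].strip())
        elif line == "" and data:
            events.append((name, "\n".join(data)))
            name, data = "message", []
    return events
```

Feed it `resp.iter_lines(decode_unicode=True)` from requests, or any other line iterator over the stream.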
Read findings
```shell
curl "http://127.0.0.1:8000/api/v1/scans/$SCAN_ID/findings" \
  -H "X-API-Key: $K"
```
→ Array of FindingWithState. Filter via query: ?severity=high,
?scan_type=code, ?compliance=OWASP-A03. Multiple filters are
AND-combined. Requires read scope.
Set a triage verdict
```shell
curl -X PATCH "http://127.0.0.1:8000/api/v1/findings/$FP/state" \
  -H "X-API-Key: $K" \
  -H 'Content-Type: application/json' \
  -d '{"status": "false_positive", "note": "..."}'
```
→ The persisted state row. Requires write scope. See
Triage workflow.
Compare two scans
```shell
curl "http://127.0.0.1:8000/api/v1/scans/compare?scan_a=$BASE&scan_b=$HEAD" \
  -H "X-API-Key: $K"
```
→ {new, fixed, unchanged} arrays of findings. Requires read
scope.
Issue an API key
```shell
curl -X POST http://127.0.0.1:8000/api/v1/keys \
  -H "X-API-Key: $ADMIN_KEY" \
  -H 'Content-Type: application/json' \
  -d '{"name": "ci-runner", "scopes": ["read", "write"]}'
```
→ 201 with ApiKeyCreated, including the plaintext key field
once. Requires admin scope. See API keys.
Create a webhook subscription
```shell
curl -X POST http://127.0.0.1:8000/api/v1/webhooks \
  -H "X-API-Key: $ADMIN_KEY" \
  -H 'Content-Type: application/json' \
  -d '{
    "name": "ops-pager",
    "url": "https://hooks.example.com/...",
    "event_filter": ["scan.complete", "scan.failed"]
  }'
```
→ 201 with WebhookCreated, including the plaintext secret field
once. Requires admin scope. See Webhooks
and Webhook payloads.
OpenAPI / Swagger
The full machine-readable schema is at:
| Path | What it is |
|---|---|
/openapi.json | The OpenAPI 3.1 document. Feed it to your client generator. |
/docs | FastAPI Swagger UI — try-it-now interactive tool. |
/redoc | FastAPI ReDoc — read-only schema documentation. |
Each operation has its own description (the handler docstring) and
declared response models. For the complete parameter list per
endpoint, the /docs UI is the source of truth.
Versioning & stability
- Route paths under `/api/v1/...`, the response shapes documented in this site, and the `Finding` / `Scan` / `ApiKey` Pydantic models are stable. Additions (new fields, new optional query params, new endpoints) happen in minor releases without breaking existing callers.
- Internal-only routes — anything not listed in Endpoints or Scopes — may change without notice.
- `/api/...` (legacy unprefixed) is deprecated. Migrate to `/api/v1/...` by Dec 31, 2026. See Versioning & deprecation.
Next
- OpenAPI specification — auto-generated schema, Swagger UI, ReDoc, and tooling imports.
- Versioning & deprecation — the legacy / v1 split.
- Rate limits — `POST /scans` rate limiting.
- Endpoints — complete list with grouping.
- Webhook payloads — full schema for outbound events.
OpenAPI specification
The full SecureScan REST API is auto-generated as an OpenAPI 3.1 document at runtime. Two ways to consume it:
Live spec (running backend)
```shell
curl http://localhost:8000/api/v1/openapi.json | jq .
```
The spec is regenerated on every backend startup, so it always matches the running version.
Interactive docs
```text
http://localhost:8000/docs    # Swagger UI
http://localhost:8000/redoc   # ReDoc alternative
```
Both surfaces are protected by the same auth as the rest of the API. In dev mode (no SECURESCAN_API_KEY, no DB keys) they're accessible without credentials.
Importing into tooling
Postman
File → Import → paste the URL http://localhost:8000/api/v1/openapi.json. Postman auto-creates a collection from the spec.
Insomnia
Application → Preferences → Data → Import Data → From URL → paste the URL.
Backstage
Use the openapi entity definition:
```yaml
apiVersion: backstage.io/v1alpha1
kind: API
metadata:
  name: securescan
spec:
  type: openapi
  lifecycle: production   # required by the API kind; adjust to yours
  owner: security-team    # required by the API kind; adjust to yours
  definition:
    $text: http://localhost:8000/api/v1/openapi.json
```

The `$text` placeholder makes Backstage fetch the spec from the URL when it refreshes the entity.
Endpoint count
As of v0.11.0, the auto-generated spec covers ~30 endpoints across:
- `/api/v1/scans/*` — scan lifecycle + SSE events + delete
- `/api/v1/findings/*` — triage state + comments
- `/api/v1/sbom/*` — SBOM generation + retrieval
- `/api/v1/dashboard/*` — scanner status + stats
- `/api/v1/keys/*` — API key CRUD (admin)
- `/api/v1/webhooks/*` — webhook CRUD (admin)
- `/api/v1/notifications/*` — in-app notifications
- `/health`, `/ready` — liveness/readiness probes
- `/openapi.json`, `/docs`, `/redoc` — meta
Versioning & deprecation
SecureScan v0.6.0 introduced an /api/v1/... mount that mirrors every
existing /api/... route. The legacy unprefixed paths continue to
work — the v0.5.0 CLIs, old GitHub Actions, and third-party scripts do
not break — but their responses now carry RFC 9745-style deprecation
headers so callers know where to migrate.
The two prefixes
| Prefix | Status | Notes |
|---|---|---|
/api/v1/... | Current | Use for all new code. No deprecation headers. |
/api/... | Deprecated | Identical handler. Adds Deprecation: true, Link: ..., Sunset headers. |
Source: `backend/securescan/api/versioning.py`.
Deprecation headers
Every response under /api/... (but not /api/v1/...,
/health, /ready, /docs, /openapi.json, /) carries:
```http
Deprecation: true
Link: </api/v1/scans/abc>; rel="successor-version"
Sunset: Wed, 31 Dec 2026 23:59:59 GMT
```
The Link header points at the matching /api/v1/ path so a smart
client can auto-migrate. The Sunset date is fixed: Dec 31, 2026,
23:59:59 GMT. That gives v0.5.0 callers roughly 18 months to migrate.
The Sunset date is the upper bound for planning, not a hard
EOL. The legacy paths will keep working past it; the date just
tells callers when SecureScan considers itself free to drop them.
We will revisit the date in a future release before any actual
removal.
Why mount both
The handlers are shared between the two prefixes. alias_router_at_v1
walks the legacy router's routes and re-registers each on a fresh /api/v1/
router pointing at the same callable. So:
- A bug fix in `create_scan` affects `/api/scans` AND `/api/v1/scans` in the same release.
- The OpenAPI document lists each operation under both paths.
- There is no "version drift" possible — there is only one handler.
Migrating from /api/ to /api/v1/
Three patterns:
1. Hardcoded base URL
If your code does:
```python
BASE = "https://securescan.internal/api"
```
Change to:
```python
BASE = "https://securescan.internal/api/v1"
```
That's it. No request body changes; no auth changes.
2. Auto-follow Link: rel="successor-version"
A more robust client follows the deprecation hint:
```python
import requests
from urllib.parse import urljoin

resp = requests.get("https://securescan.internal/api/scans",
                    headers={"X-API-Key": KEY})
if resp.headers.get("Deprecation") == "true":
    successor = resp.links.get("successor-version", {}).get("url")
    if successor:
        # Optional: log a warning, then retry against the v1 path.
        # The Link header carries a path, so resolve it against resp.url.
        resp = requests.get(urljoin(resp.url, successor),
                            headers={"X-API-Key": KEY})
```
This is overkill for most callers, but useful in libraries that want to be self-correcting.
3. The CLI / Action — already on v1
securescan (the CLI) and Metbcy/securescan@v1 (the GitHub Action)
both already talk /api/v1/. No migration required if those are your
entry points.
What does not change
- Request bodies, headers, response shapes — identical between the two prefixes.
- Auth — the same API keys / scopes apply on both.
- Rate limits — same per-key bucket regardless of which prefix.
- Error responses — same status codes, same `detail` shape.
Detecting deprecation usage
Tail your structured request log for lines under /api/... that do
not start with /api/v1/:
```shell
journalctl -u securescan-backend --output=cat \
  | jq -c 'select(.path | startswith("/api/") and (startswith("/api/v1/") | not))'
```
A spike of legacy-prefix calls indicates an unmigrated caller. Fix the caller, not the server.
v1.x → v2.x policy
When a new major version of the API ships (v2.x), the v1 paths will
get the same deprecation treatment v0 got — /api/v2/... mounted
alongside /api/v1/..., with Deprecation / Sunset headers on
the v1 paths and a date at least 18 months out. Callers will get the
same migration window.
We do not ship breaking changes inside a major version — that is the SemVer contract. New optional fields, new endpoints, new query params: yes. Renamed fields, removed endpoints, changed shapes: only in a major-version bump.
Source
- `backend/securescan/api/versioning.py` — `alias_router_at_v1` and `DeprecationHeaderMiddleware`.
- The middleware is registered in `backend/securescan/api/__init__.py`.
Next
- API overview — what's at `/api/v1`.
- Endpoints — the actual list.
- Rate limits — applies regardless of prefix.
Rate limits
POST /api/scans (and the forward-compatible POST /api/v1/scans
mount) is rate-limited with an in-memory token bucket. Read endpoints
(list scans, get findings, dashboard, sbom) are not rate-limited
— they are cheap and benefit from being responsive during incident
triage.
Defaults
- 60 requests per minute sustained.
- Burst of 10 — a client that's been silent can fire 10 immediately before the bucket starts metering.
- Per API key when `SECURESCAN_API_KEY` is set or DB keys are in use; per client IP in dev mode.
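These defaults describe a standard token bucket: capacity 10, refilling at 60/min (one token per second). A reference sketch of the semantics — illustrative only, not SecureScan's actual implementation:

```python
import time

class TokenBucket:
    """60/min sustained with a burst of 10: capacity 10, refill 1 token/s."""
    def __init__(self, rate_per_min=60, burst=10, clock=time.monotonic):
        self.capacity = burst
        self.refill_per_s = rate_per_min / 60.0
        self.tokens = float(burst)   # a silent client starts full
        self.clock = clock
        self.last = clock()

    def allow(self):
        now = self.clock()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_s)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

Every request pops one token; an idle client accumulates up to `burst` tokens, which is exactly the "fire 10 immediately" behavior described above.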
Override
```shell
export SECURESCAN_RATE_LIMIT_PER_MIN=60    # sustained rate
export SECURESCAN_RATE_LIMIT_BURST=10      # burst capacity
export SECURESCAN_RATE_LIMIT_ENABLED=true  # set to false to disable
```
The env-var-driven knobs let an operator tune without code changes. They are read once at backend startup; restart to apply.
Successful response headers
Every successful POST /scans response carries:
```http
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 47
X-RateLimit-Reset: 1730230885
```
X-RateLimit-Reset is the unix timestamp when the bucket fully
refills. Clients can watch Remaining to back off proactively.
When the bucket is empty
```http
HTTP/1.1 429 Too Many Requests
Content-Type: application/json
Retry-After: 7
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1730230885

{
  "detail": "Rate limit exceeded",
  "retry_after": 7,
  "limit_per_min": 60
}
```
Retry-After is in seconds — wait that long, retry, succeed.
Well-behaved clients honor the header rather than guessing.
A handcrafted exponential backoff will under- or over-shoot the
bucket refill time; the server's Retry-After is the exact
duration to the next available slot. Particularly important on
periodic CI runs that all start near the top of the hour.
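That advice can be wrapped in a small helper that sleeps exactly what the server says. A sketch — `do_post` is any zero-argument callable returning an object with `status_code` and `headers`, e.g. a `functools.partial` around `requests.post`:

```python
import time

def post_with_retry(do_post, max_attempts=5, sleep=time.sleep):
    """Call do_post(), sleeping exactly Retry-After seconds on each 429."""
    for _ in range(max_attempts):
        resp = do_post()
        if resp.status_code != 429:
            return resp
        # The server knows the exact time to the next token; honor it.
        sleep(int(resp.headers.get("Retry-After", "1")))
    raise RuntimeError(f"still rate limited after {max_attempts} attempts")
```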
Why only POST /scans?
Starting a scan kicks off a CPU-and-IO-heavy background task —
fork the scanner subprocesses, spin the orchestrator, write findings.
A flood of POST /scans is a real DoS vector.
GET endpoints are read-only, indexed, and bounded by the size of
the existing data. They are cheap. Rate-limiting them would mostly
hurt incident triage (when a SecureScan operator is hammering the
findings table to find the regression).
Per-key isolation
The bucket key is the principal:
- `key_id` for DB-issued keys.
- The string `"env"` for the legacy env-var path (a single shared bucket).
- The client IP in dev mode (no auth configured).
So a misbehaving CI runner with one key cannot starve another runner using a different key. The single shared env-var key, however, is one bucket — switch to per-CI DB keys for isolation.
Bounded memory
The bucket store has hard limits:
- Max 10K live keys — a key-rotation or DoS pattern with many unique keys can't grow memory without limit.
- 1h idle TTL with LRU eviction — buckets that haven't been hit in an hour are dropped. They re-initialize at full capacity if the key reappears.
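Those two limits compose into an LRU map with idle-TTL sweeping. An illustrative sketch (the real store lives in `backend/securescan/middleware/rate_limit.py` and may differ):

```python
import time
from collections import OrderedDict

class BucketStore:
    """Bounded principal->bucket map: LRU-capped at max_keys, swept at ttl_s idle."""
    def __init__(self, max_keys=10_000, ttl_s=3600, clock=time.monotonic):
        self.max_keys, self.ttl_s, self.clock = max_keys, ttl_s, clock
        self._entries = OrderedDict()  # key -> (bucket, last_seen); oldest first

    def get(self, key, default_factory):
        now = self.clock()
        # Sweep idle entries from the least-recently-used end.
        while self._entries:
            oldest_key, (_, seen) = next(iter(self._entries.items()))
            if now - seen > self.ttl_s:
                del self._entries[oldest_key]
            else:
                break
        if key in self._entries:
            bucket, _ = self._entries.pop(key)
        else:
            if len(self._entries) >= self.max_keys:
                self._entries.popitem(last=False)  # hard cap: evict LRU
            bucket = default_factory()             # fresh bucket at full capacity
        self._entries[key] = (bucket, now)         # re-insert at the MRU end
        return bucket
```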
Disabling rate limiting
Set SECURESCAN_RATE_LIMIT_ENABLED=false to turn it off. Useful in
test fixtures and when you've put SecureScan behind a smarter rate
limiter (envoy / nginx / Traefik) that handles this concern at the
edge.
Do not disable rate limiting on a deployment that allows
unauthenticated POST /scans (i.e. dev mode + AUTH_REQUIRED=0). The
bucket is the only thing standing between a curl loop and a
fork-bombed orchestrator. Either keep rate limiting on, or require
auth.
In the dashboard
The dashboard's New Scan page does not poll POST /scans — it
fires once per click. The 60/min default is generous enough that a
human triggering scans manually never hits it.
For a CI runner, 60/min with a burst of 10 supports about one scan
every second sustained, which is far above what most teams produce.
If you have a fleet of CI runners hitting the same backend on the
same key, increase SECURESCAN_RATE_LIMIT_PER_MIN to match.
Source
- Rate limit middleware: `backend/securescan/middleware/rate_limit.py`.
- Configuration: `backend/securescan/config.py`.
Next
- Configuration reference — full env-var list.
- Production checklist — rate limits item.
Endpoints
A condensed listing of every public endpoint, its scope requirement,
and where it's documented. For the full request/response schema of
each, look at the running server's auto-generated /docs (FastAPI
Swagger UI) or /redoc.
This page is the navigation, not the complete reference.
Public
No auth required.
| Method | Path | Purpose |
|---|---|---|
| GET | / | API root info: {name, status, docs, health} |
| GET | /health | Liveness — process up. Always 200 unless crashing. |
| GET | /ready | Readiness — DB openable + scanner registry loaded. |
| GET | /docs | Swagger UI |
| GET | /redoc | ReDoc |
| GET | /openapi.json | OpenAPI 3.1 document |
Scans
Prefix: /api/v1/scans (and /api/scans legacy).
Source: backend/securescan/api/scans.py.
| Method | Path | Scope | Purpose |
|---|---|---|---|
| POST | / | write | Start a scan. Rate-limited. See How scans work. |
| GET | / | read | List scans. |
| GET | /{id} | read | Scan details. |
| DELETE | /{id} | write | Delete scan + cascade findings. 409 if running/pending. |
| POST | /{id}/cancel | write | Cancel a running scan. 409 if already terminal. |
| GET | /compare | read | ?scan_a=&scan_b= → {new, fixed, unchanged}. See Diff & compare. |
| GET | /{id}/findings | read | Findings + triage state. Filter ?severity=, ?scan_type=, ?compliance=. |
| GET | /{id}/summary | read | Severity counts, risk score, scanners run / skipped, timing. |
| GET | /{id}/sbom | read | ?format=cyclonedx|spdx. See SBOM. |
| GET | /{id}/report | read | HTML / PDF report. |
| GET | /{id}/events | read* | SSE stream. Accepts ?event_token=. See Real-time scan progress. |
| POST | /{id}/event-token | read | Mint short-lived SSE token. See SSE event tokens. |
* SSE route also accepts ?event_token= from browsers; see
Authentication overview.
Triage
Prefix: /api/v1/findings.
Source: backend/securescan/api/triage.py.
| Method | Path | Scope | Purpose |
|---|---|---|---|
| PATCH | /{fingerprint}/state | write | Set / replace triage verdict + note. |
| GET | /{fingerprint}/comments | read | List comments, oldest first. |
| POST | /{fingerprint}/comments | write | Add a comment. |
| DELETE | /{fingerprint}/comments/{comment_id} | write | Delete one comment by id. |
See Triage workflow.
API keys
Prefix: /api/v1/keys.
Source: backend/securescan/api/keys.py.
| Method | Path | Scope | Purpose |
|---|---|---|---|
| POST | / | admin | Issue a key. Returns plaintext secret once. |
| GET | / | admin | List keys (no secret). |
| GET | /me | any DB key | Calling key's introspection. |
| DELETE | /{id} | admin | Revoke. Lockout-protected (409 if would zero admin credentials). |
See API keys.
Webhooks
Prefix: /api/v1/webhooks.
Source: backend/securescan/api/webhooks.py.
| Method | Path | Scope | Purpose |
|---|---|---|---|
| POST | / | admin | Create. Returns secret once. |
| GET | / | admin | List. |
| GET | /{id} | admin | Fetch one. |
| PATCH | /{id} | admin | Edit name / url / event_filter / enabled. Cannot rotate secret. |
| DELETE | /{id} | admin | Cascades deliveries. |
| GET | /{id}/deliveries | admin | Last 100 delivery rows, newest first. |
| POST | /{id}/test | admin | Enqueue a synthetic webhook.test. Returns 202 + delivery_id. |
See Webhooks and Webhook payloads.
Notifications
Prefix: /api/v1/notifications.
Source: backend/securescan/api/notifications.py.
| Method | Path | Scope | Purpose |
|---|---|---|---|
| GET | / | read | List. ?unread_only=, ?limit= (capped at 200). |
| GET | /unread-count | read | {count} for the bell badge. Index-only query. |
| PATCH | /{id}/read | write | Mark one read. |
| PATCH | /read-all | write | Bulk mark read. Returns {marked_read: N}. |
See Notifications.
Dashboard
Prefix: /api/v1/dashboard.
Source: backend/securescan/api/dashboard.py.
| Method | Path | Scope | Purpose |
|---|---|---|---|
| GET | /status | read | Per-scanner availability + version. |
| GET | /stats | read | Aggregate counts. |
| GET | /trends | read | Risk / finding trends over time. |
| GET | /browse | read | Filesystem directory picker data (for the New Scan UI). |
| POST | /install/{scanner} | write | Install a supported scanner via the system package manager. |
Compliance
Prefix: /api/v1/compliance.
Source: backend/securescan/api/compliance.py.
| Method | Path | Scope | Purpose |
|---|---|---|---|
| GET | /coverage | read | Per-framework coverage with ?scan_id=. |
See Compliance.
Quick examples
Get all critical findings on a scan

```shell
curl -s -H "X-API-Key: $K" \
  "http://127.0.0.1:8000/api/v1/scans/$SCAN_ID/findings?severity=critical" | jq .
```

List webhook delivery history

```shell
curl -s -H "X-API-Key: $ADMIN_KEY" \
  "http://127.0.0.1:8000/api/v1/webhooks/$WID/deliveries" | jq '.[].status'
```

Mark every notification read

```shell
curl -s -X PATCH -H "X-API-Key: $K" \
  "http://127.0.0.1:8000/api/v1/notifications/read-all"
```

Pin a request id (for log correlation)

```shell
curl -s -H "X-API-Key: $K" \
  -H "X-Request-ID: my-debug-trace-2026-04-29" \
  "http://127.0.0.1:8000/api/v1/dashboard/stats"
```
Where to look for the parameters
For the full set of query parameters, request body fields, and
response schemas — including the ones we don't repeat on this site
because they're auto-derived from Pydantic models — open
http://<your-backend>/docs in a browser. The "Try it out"
panel of Swagger UI lets you exercise any endpoint with your API
key plugged in.
Next
- API overview — auth and request/response patterns.
- Versioning & deprecation — `/api/` vs `/api/v1/`.
- Webhook payloads — outbound event schemas.
Webhook payloads
Full schemas for every event SecureScan delivers to outbound webhooks. The headers, signature contract, and retry policy are documented in Webhooks; this page is the payload reference.
Envelope
Every event is wrapped in a stable envelope:
```json
{
  "event": "<event-type>",
  "data": { /* per-event payload */ },
  "delivered_at": "2026-04-29T20:11:09.123456Z"
}
```
The literal bytes on the wire come from
json.dumps(payload, separators=(",", ":")) — whitespace-free,
key-order preserved. Sign these literal bytes, not a re-parsed
JSON object. See Webhooks → signature verification.
For Slack and Discord URLs, the envelope is replaced with the provider-specific shape — see Slack shape and Discord shape.
Headers
Every delivery, every event, same set:
```http
POST <your-url>
Content-Type: application/json
User-Agent: SecureScan-Webhook/0.9
X-SecureScan-Event: <event-type>
X-SecureScan-Webhook-Id: <subscription-id>
X-SecureScan-Signature: t=<unix-ts>,v1=<hex-hmac-sha256>
```
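A receiver can verify the signature with the stdlib only. This sketch follows the header format above and HMACs the literal request body with the subscription secret, per the envelope note; the timestamp-tolerance check is an extra replay guard we add here, not something SecureScan mandates:

```python
import hashlib
import hmac

def verify_signature(secret, body, header, now=None, tolerance_s=300):
    """Check an X-SecureScan-Signature header: t=<unix-ts>,v1=<hex-hmac-sha256>."""
    try:
        fields = dict(part.split("=", 1) for part in header.split(","))
        ts, candidate = int(fields["t"]), fields["v1"]
    except (KeyError, ValueError):
        return False  # malformed header
    if now is not None and abs(now - ts) > tolerance_s:
        return False  # stale delivery: possible replay
    expected = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, candidate)  # constant-time compare
```

Pass the raw request body bytes, never a re-serialized JSON object — re-serialization changes whitespace and breaks the signature.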
scan.complete
Fires when a scan completes successfully (status flipped to
completed).
```json
{
  "event": "scan.complete",
  "data": {
    "scan_id": "0f1a93cb-44c2-4c8e-9f92-0a7c5a2e1b51",
    "target_path": "/home/me/proj-a",
    "scan_types": ["code", "dependency"],
    "scanners_run": ["semgrep", "bandit", "trivy", "safety"],
    "scanners_skipped": [
      {"name": "checkov", "reason": "binary not on PATH", "install_hint": "pip install checkov"}
    ],
    "findings_count": 12,
    "severity_counts": {
      "critical": 1,
      "high": 3,
      "medium": 5,
      "low": 2,
      "info": 1
    },
    "risk_score": 34.2,
    "duration_s": 81.3,
    "started_at": "2026-04-29T20:09:48.123456Z",
    "completed_at": "2026-04-29T20:11:09.456789Z"
  },
  "delivered_at": "2026-04-29T20:11:10.001234Z"
}
```
scan.failed
Fires when the orchestrator fails the scan before reaching completed
(database write failure, target validation, internal error).
```json
{
  "event": "scan.failed",
  "data": {
    "scan_id": "0d2c3a8f-4f1c-46e9-8b4b-2b4b4ab0a8e0",
    "target_path": "/home/me/missing",
    "scan_types": ["code"],
    "error": "ValueError: target_path does not exist",
    "scanners_run": [],
    "scanners_skipped": [],
    "started_at": "2026-04-29T20:00:00.000000Z",
    "completed_at": "2026-04-29T20:00:00.250000Z"
  },
  "delivered_at": "2026-04-29T20:00:00.500000Z"
}
```
The error field is a single-line description, truncated for safety
(stack traces are kept in the backend log, not pushed to webhooks).
scanner.failed
Fires when an individual scanner crashed mid-scan. The scan itself may still complete successfully via the other scanners; the failed scanner's failure is recorded on the scan row.
```json
{
  "event": "scanner.failed",
  "data": {
    "scan_id": "0f1a93cb-44c2-4c8e-9f92-0a7c5a2e1b51",
    "scanner": "nmap",
    "scan_type": "network",
    "error": "subprocess timed out after 600s",
    "duration_s": 600.0
  },
  "delivered_at": "2026-04-29T20:10:00.123456Z"
}
```
error is truncated to 200 chars (the constant _SCAN_ERROR_TRUNCATE
in backend/securescan/api/scans.py) so a multi-MB stack trace
doesn't blow up your receiver.
webhook.test
Fires only when an operator clicks "Test" in the dashboard or calls
POST /api/v1/webhooks/{id}/test. The synthetic event flows through
the identical dispatcher path as a real one — same retry, same
signature contract — so a green test proves end-to-end wiring.
```json
{
  "event": "webhook.test",
  "data": {
    "message": "Test from SecureScan",
    "timestamp": "2026-04-29T20:00:00.000000Z"
  },
  "delivered_at": "2026-04-29T20:00:00.000000Z"
}
```
What is not in the payload
The payload is deliberately small so the public webhook contract stays stable:

- Findings. The full finding list is not delivered — it can be thousands of rows. To get findings, hit `GET /api/v1/scans/{id}/findings` with the `scan_id` from the webhook payload.
- Per-scanner lifecycle events. `scanner.start`, `scanner.complete`, `scanner.skipped`, `scan.start`, and `scan.cancelled` are NOT delivered to webhooks. They stay on the internal SSE event bus only — too noisy for outbound delivery. The allowlist is `WEBHOOK_RELEVANT_EVENTS` in `backend/securescan/api/scans.py`.
- Triage state. Webhooks fire on scan/scanner lifecycle. Triage state changes are dashboard actions, not lifecycle events.
- API key / webhook secret values. Never delivered. Plaintext credentials are returned exactly once on creation and never travel through any other surface.
Slack shape
For URLs matching https://hooks.slack.com/services/..., the body is
reshaped to Slack's expected format. The reshaper is
backend/securescan/webhook_formatters.py.
For scan.complete:
```json
{
  "blocks": [
    {
      "type": "header",
      "text": {"type": "plain_text", "text": ":shield: Scan complete: /home/me/proj-a"}
    },
    {
      "type": "section",
      "fields": [
        {"type": "mrkdwn", "text": "*Findings:*\n12 (●1 critical, ●3 high)"},
        {"type": "mrkdwn", "text": "*Risk score:*\n34.2"},
        {"type": "mrkdwn", "text": "*Duration:*\n1m 21s"},
        {"type": "mrkdwn", "text": "*Scanners:*\nsemgrep, bandit, trivy, safety"}
      ]
    },
    {
      "type": "context",
      "elements": [{"type": "mrkdwn", "text": "Scan ID `0f1a93cb-...`"}]
    }
  ]
}
```
For scan.failed:
```json
{
  "blocks": [
    {
      "type": "header",
      "text": {"type": "plain_text", "text": ":warning: Scan failed: /home/me/missing"}
    },
    {
      "type": "section",
      "text": {"type": "mrkdwn", "text": "*Error:* `ValueError: target_path does not exist`"}
    }
  ]
}
```
scanner.failed and webhook.test get analogous Slack-shape blocks.
Discord shape
For URLs matching https://discord.com/api/webhooks/...:
```json
{
  "embeds": [
    {
      "title": "Scan complete: /home/me/proj-a",
      "color": 7654321,
      "fields": [
        {"name": "Findings", "value": "12 (●1 critical, ●3 high)", "inline": true},
        {"name": "Risk score", "value": "34.2", "inline": true},
        {"name": "Duration", "value": "1m 21s", "inline": true}
      ],
      "footer": {"text": "Scan 0f1a93cb-..."},
      "timestamp": "2026-04-29T20:11:09.456789Z"
    }
  ]
}
```
Embed color is set per severity:
| Severity bucket | Color decimal |
|---|---|
| critical | red-ish |
| high | orange-ish |
| medium | yellow-ish |
| low / info | blue-ish |
Both Slack and Discord webhook URLs are unauthenticated — anyone with the URL can post. The HMAC headers are still sent (so you could route through a proxy and verify there), but the receivers don't. Treat the URL itself as the secret. Don't share it; rotate it (create a new one at the provider, update SecureScan's subscription) if it leaks.
Versioning of the payload schema
The shapes above are stable for v0.9.x. New optional fields may be added in minor releases. Receivers should:
- Treat unknown top-level fields as additive (don't crash on new keys).
- Pin to `User-Agent: SecureScan-Webhook/0.9` if you want to detect a major-version transition.
- Use `event` (not URL pattern) to dispatch.
When the major version of the payload changes, the User-Agent
will increment and the prior shape will continue working for at
least one minor cycle alongside the new one.
Source
- Payload construction: `_log_scan_event` and helpers in `backend/securescan/api/scans.py`.
- Slack/Discord shaper: `backend/securescan/webhook_formatters.py`.
- Dispatch + signing: `backend/securescan/webhook_dispatcher.py`.
Next
- Webhooks — verification, retry, FIFO ordering.
- API endpoints — full route list including the webhook CRUD.
- Real-time scan progress — internal SSE events not delivered to webhooks.
Docker
The container image is the recommended production deployment path — multi-arch (amd64 + arm64), all 14 scanners pre-installed, signed with cosign on every tagged release.
Image
```text
ghcr.io/metbcy/securescan:<tag>
```
| Tag | Meaning |
|---|---|
v0.11.0 | Specific tagged release. Immutable, signed with cosign. |
v1 | Floating major-version tag. Auto-tracks v1.x.y. |
Pin to a tag in production: `v0.11.0` for an immutable, signed reference, or `v1` if you accept auto-tracking within the major. The `:latest` tag is not published; `cosign verify` only works against tagged releases, so an unsigned floating reference is not something we ship.
Run the backend
Minimum:
docker run --rm -p 8000:8000 \
-e SECURESCAN_API_KEY="$(openssl rand -hex 32)" \
ghcr.io/metbcy/securescan:v0.11.0 \
serve --host 0.0.0.0 --port 8000
Production-shape:
docker run -d \
--name securescan-backend \
-p 127.0.0.1:8000:8000 \
-e SECURESCAN_AUTH_REQUIRED=1 \
-e SECURESCAN_API_KEY="$(cat /run/secrets/securescan-api-key)" \
-e SECURESCAN_EVENT_TOKEN_SECRET="$(cat /run/secrets/securescan-event-token-secret)" \
-e SECURESCAN_LOG_FORMAT=json \
-e SECURESCAN_RATE_LIMIT_PER_MIN=120 \
-e SECURESCAN_IN_CONTAINER=1 \
-v securescan-data:/data \
-v securescan-config:/root/.config/securescan \
--restart unless-stopped \
ghcr.io/metbcy/securescan:v0.11.0 \
serve --host 0.0.0.0 --port 8000 --workers 1
Notes:
- Bind to `127.0.0.1:8000` and put a TLS-terminating reverse proxy (nginx, Traefik) in front. The bundled uvicorn serves plain HTTP.
- `--workers 1` is required for SSE and the in-process webhook dispatcher. See Single-worker constraint.
- The `securescan-data` volume holds the SQLite DB. Back it up.
- `securescan-config` persists `~/.config/securescan/.env` for ZAP credentials etc. See Local config.
Run a one-shot scan from the CLI
docker run --rm \
-v "$PWD:/work" -w /work \
ghcr.io/metbcy/securescan:v0.11.0 \
diff . --base-ref origin/main --head-ref HEAD \
--output github-pr-comment
The image's entry point routes to the same `securescan` CLI that `pip install securescan` provides. Anything you can do with `securescan` directly works inside the container.
docker compose
The repo ships a docker-compose.yml for local development that
brings up backend + frontend together:
cd securescan   # wherever you cloned the repo
docker compose up
Visit http://localhost:3000 for the dashboard,
http://localhost:8000/docs for the API.
This stack is not production-shape — the frontend is the dev build, the backend has no auth, no TLS, no rate limit tuning. Use it to evaluate, then build a real deploy from this page.
Kubernetes (sketch)
apiVersion: apps/v1
kind: Deployment
metadata: { name: securescan }
spec:
replicas: 1 # NOT >1; see single-worker constraint
selector: { matchLabels: { app: securescan } }
template:
metadata: { labels: { app: securescan } }
spec:
containers:
- name: securescan
image: ghcr.io/metbcy/securescan:v0.11.0
args: [ "serve", "--host", "0.0.0.0", "--port", "8000", "--workers", "1" ]
ports: [{ containerPort: 8000 }]
envFrom:
- secretRef: { name: securescan-secrets }
env:
- { name: SECURESCAN_AUTH_REQUIRED, value: "1" }
- { name: SECURESCAN_LOG_FORMAT, value: "json" }
- { name: SECURESCAN_IN_CONTAINER, value: "1" }
livenessProbe:
httpGet: { path: /health, port: 8000 }
initialDelaySeconds: 5
periodSeconds: 10
readinessProbe:
httpGet: { path: /ready, port: 8000 }
initialDelaySeconds: 2
periodSeconds: 5
volumeMounts:
- name: data
mountPath: /data
volumes:
- name: data
persistentVolumeClaim: { claimName: securescan-data }
The securescan-secrets Secret should contain at minimum:
SECURESCAN_API_KEY=...
SECURESCAN_EVENT_TOKEN_SECRET=...
Keep `replicas: 1`. SecureScan is single-worker because the event bus and webhook dispatcher are in-process. To scale horizontally, deploy multiple separate Deployments behind a sticky-session ingress keyed on `scan_id`. See Single-worker constraint.
Image contents
The bundled scanners are pinned at build time. To see versions:
docker run --rm ghcr.io/metbcy/securescan:v0.11.0 status
| Tool | How it ships |
|---|---|
semgrep | pip wheel |
bandit | pip wheel |
safety | pip wheel |
pip-licenses | pip wheel |
checkov | pip wheel |
trivy | apt + GitHub release |
npm-audit | bundled npm |
nmap | apt |
zap | NOT bundled — too large; install on the host or run separately |
gitleaks | apt + GitHub release |
ZAP is the only scanner not in the image. For DAST scans, run ZAP as
a separate container and point SecureScan at it via
SECURESCAN_ZAP_ADDRESS.
Container vs wheel
| Concern | Container | Wheel (PyPI) |
|---|---|---|
| Reproducible scanner versions | ✅ pinned at image build | ❌ depends on host |
| Easy install | ✅ docker run | ✅ pip install securescan |
| Easy upgrade | ✅ image bump | ✅ pip install -U securescan |
| Smaller install | ❌ ~600MB | ✅ ~10MB plus whatever scanners you install |
| Run ZAP / nmap | Need separate ZAP; nmap inside | Run on host |
| Signed artifact | ✅ cosign | ✅ sigstore-python (*.sigstore.json bundle) |
The GitHub Action picks the right one for you: tries the wheel first, falls back to the container if scanner binaries are missing.
Verifying the image
Before running in production, verify the cosign signature:
cosign verify ghcr.io/metbcy/securescan:v0.11.0 \
--certificate-identity 'https://github.com/Metbcy/securescan/.github/workflows/release.yml@refs/tags/v0.11.0' \
--certificate-oidc-issuer 'https://token.actions.githubusercontent.com'
Full guide: Verifying signed artifacts.
Next
- Production checklist — the full pre-flight.
- Configuration reference — every env var.
- Single-worker constraint — why and how to scale.
- Verifying signed artifacts — cosign + sigstore.
Production checklist
A literal pre-flight before exposing SecureScan past localhost.
Every box is a real configuration step; skipping any of them has a
known consequence noted in the linked page.
Auth
- Set `SECURESCAN_API_KEY`, or create at least one DB-backed admin key via `POST /api/v1/keys`. Without either, the backend runs in dev mode and serves every request unauthenticated. → API keys
- Set `SECURESCAN_AUTH_REQUIRED=1`. Without this flag, an empty DB plus an unset env var silently falls back to dev mode. With it, the backend exits with code 2 at startup if no credentials exist — fail-closed. → Authentication overview
- Set `SECURESCAN_EVENT_TOKEN_SECRET` (required when `AUTH_REQUIRED=1`). Without it, every backend restart picks a new ephemeral signing secret and any in-flight SSE token from the dashboard 401s — live progress goes blind. → SSE event tokens
- Use scoped DB keys, not the env-var key, for every consumer. The env var is full-trust by design. CI runners get `["read", "write"]`; monitoring gets `["read"]`. Reserve `admin` for one operator break-glass identity. → Scopes
Rate limits
- Set `SECURESCAN_RATE_LIMIT_PER_MIN` and `SECURESCAN_RATE_LIMIT_BURST` if the defaults (60/min, burst 10) don't fit your scan cadence. Higher for a CI fleet sharing a key; lower for a multi-tenant proxy. → Rate limits
- Do not disable rate limiting unless you have a smarter rate limiter in front of SecureScan (envoy / nginx). The bucket is the only thing standing between a curl loop and a fork-bombed orchestrator.
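For intuition on what those defaults mean, here is a toy token bucket with the documented numbers (60/min sustained, burst 10). It is a model of the behavior, not SecureScan's limiter code:

```python
class TokenBucket:
    """Burst tokens refill at the sustained rate (per_min / 60 per second)."""

    def __init__(self, per_min: int = 60, burst: int = 10):
        self.rate = per_min / 60.0    # tokens refilled per second
        self.capacity = float(burst)  # instantaneous headroom
        self.tokens = self.capacity
        self.last = 0.0

    def allow(self, now: float) -> bool:
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket()
allowed = sum(bucket.allow(0.0) for _ in range(15))
assert allowed == 10        # the burst passes instantly, the rest 429
assert bucket.allow(1.0)    # one token refilled after a second
```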
Single-worker constraint
- Confirm `--workers 1` on the uvicorn invocation (the default). The event bus and webhook dispatcher are in-process; multi-worker uvicorn fragments them and SSE / webhooks break silently. → Single-worker constraint
- Use sticky sessions on `scan_id` if you scale horizontally. Each scan's SSE subscribers must land on the same backend instance that runs the scan. Multi-process pub/sub (Redis) is on the roadmap.
Local config persistence
- Persist `~/.config/securescan/.env` (or `$XDG_CONFIG_HOME/securescan/.env`) across deploys / restarts. That is where ZAP credentials and other secrets live; without persistence you re-export them on every boot. → Local config (.env)
Signed artifacts
- Verify the wheel signature with sigstore-python before installing in a CI image:

  ```bash
  sigstore verify identity \
    --cert-identity 'https://github.com/Metbcy/securescan/.github/workflows/release.yml@refs/tags/v0.11.0' \
    --cert-oidc-issuer 'https://token.actions.githubusercontent.com' \
    --bundle securescan-0.11.0-py3-none-any.whl.sigstore.json \
    securescan-0.11.0-py3-none-any.whl
  ```

- Verify the container image with cosign before pulling into a production registry:

  ```bash
  cosign verify ghcr.io/metbcy/securescan@<digest> \
    --certificate-identity 'https://github.com/Metbcy/securescan/.github/workflows/release.yml@refs/tags/v0.11.0' \
    --certificate-oidc-issuer 'https://token.actions.githubusercontent.com'
  ```

  → [Verifying signed artifacts](./verifying-artifacts.md)
Health probes
- Confirm `/health` and `/ready` are reachable from your load balancer. Both are public regardless of API-key configuration. Sample Kubernetes fragment:

  ```yaml
  livenessProbe:
    httpGet: { path: /health, port: 8000 }
    initialDelaySeconds: 5
    periodSeconds: 10
  readinessProbe:
    httpGet: { path: /ready, port: 8000 }
    initialDelaySeconds: 2
    periodSeconds: 5
  ```

  `/health` returns 200 unless the process is crashing. `/ready` returns 200 with checks JSON when DB + scanner registry are OK; 503 with details otherwise.
TLS / reverse proxy
- Terminate TLS in front of SecureScan. The bundled uvicorn serves plain HTTP. nginx, Traefik, AWS ALB, Caddy all work.
- Forward `X-Request-ID` through the proxy so client correlation works end-to-end. Clients can pin a request id; the server echoes it back on the response.
- Set `SECURESCAN_CORS_ORIGINS` to your dashboard origin(s) (comma-separated) if the frontend is on a different host than the backend. Defaults are localhost-only.
Logging
- Set `SECURESCAN_LOG_FORMAT=json` in containers (auto-set when `SECURESCAN_IN_CONTAINER=1`). Each request emits one structured log line on `securescan.request` with `request_id`, `method`, `path`, `status`, `latency_ms`. Scan lifecycle events go on `securescan.scan`.
- Aggregate logs centrally. Filter by `logger: securescan.scan` to track scan-level events; by `logger: securescan.request` for HTTP traffic.
Database
- Persist the SQLite DB volume. Default path is `~/.securescan/scans.db` (or under `/data` in the container). Loss of this DB loses scans, findings, triage state, API keys, webhooks, and notifications.
- Back it up. SQLite's `.backup` command works while the backend is running — use it on a cron schedule.
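A minimal cron-able backup script using the sqlite3 stdlib binding of SQLite's online backup API (paths below are illustrative; point it at your `SECURESCAN_DB_PATH`):

```python
import sqlite3

def backup_db(src_path: str, dst_path: str) -> None:
    """Copy a live SQLite DB page-by-page; safe while the backend writes."""
    src = sqlite3.connect(src_path)
    dst = sqlite3.connect(dst_path)
    with dst:
        src.backup(dst)   # SQLite's online backup API, consistent mid-write
    dst.close()
    src.close()

# Example (hypothetical paths):
# backup_db("/data/scans.db", "/backups/scans-2026-04-29.db")
```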
Frontend
- Make `NEXT_PUBLIC_SECURESCAN_API_KEY` a `read`-scope key, not `admin`. The value is baked into the build and shipped to the browser. Anyone hitting the dashboard automatically inherits that key's scopes.
- Do not expose the dashboard publicly. SecureScan does not ship its own SSO / OIDC integration. Put the dashboard behind your front-line auth (oauth2-proxy, Cloudflare Access, AWS ALB OIDC). The dashboard's API key becomes a service identity, not a user identity.
Webhooks (optional, if you use them)
- Make your receiver verify HMAC and reject stale timestamps (>5 minutes old). At-least-once delivery is the contract; receivers must be idempotent. → Webhooks
- Treat Slack / Discord URLs as secrets. Those receivers do not verify HMAC; the URL itself is the authorization. Don't share them, and rotate (delete + recreate) on suspicion of a leak.
Operational
- Document the rotation procedure for `SECURESCAN_API_KEY`, DB keys, `SECURESCAN_EVENT_TOKEN_SECRET`, and webhook secrets.
- Alarm on spikes of 401 / 403 in the request log. They indicate an expired key, a brute-force attempt, or a misconfigured caller.
- Alarm on spikes of 5xx. Crashes don't auto-recover; a restart loop in liveness deserves attention.
- Periodically audit `GET /api/v1/keys` for keys with no `last_used_at` in the last 90 days — candidates for revocation.
Smoke test
After deploy, run these as a quick sanity check:
# 1. Liveness + readiness
$ curl -fs https://securescan.example.com/health
{"status":"ok"}
$ curl -fs https://securescan.example.com/ready
{"status":"ready","checks":{...}}
# 2. Auth is required (no key = 401)
$ curl -i -s https://securescan.example.com/api/v1/scans
HTTP/1.1 401 Unauthorized
# 3. Bogus key = 401 (does not fall through to dev mode)
$ curl -i -s -H "X-API-Key: nope" https://securescan.example.com/api/v1/scans
HTTP/1.1 401 Unauthorized
# 4. Read-only key cannot start a scan
$ curl -i -s -X POST -H "X-API-Key: $READ_KEY" \
-d '{"target_path":"/tmp","scan_types":["code"]}' \
https://securescan.example.com/api/v1/scans
HTTP/1.1 403 Forbidden
# 5. /docs renders
$ curl -fs https://securescan.example.com/openapi.json | jq '.info.title'
"SecureScan"
If those five are green, the deploy is safe to take traffic.
Next
- Configuration reference — every env var.
- Auth production checklist — narrower auth-only checklist.
- Verifying signed artifacts — full sigstore + cosign guide.
- Single-worker constraint — what fails on `--workers 2`.
Configuration reference
Every environment variable SecureScan reads, what it controls, and its default. All variables are optional unless flagged otherwise.
Authentication
| Variable | Default | Description |
|---|---|---|
SECURESCAN_API_KEY | (unset) | Legacy single-shared-key auth. Treated as a synthetic principal with all scopes. When unset and no DB keys exist, runs in dev mode. |
SECURESCAN_AUTH_REQUIRED | 0 | When 1, backend exits with code 2 at startup if no credentials configured. Required for hardened deploys. |
SECURESCAN_EVENT_TOKEN_SECRET | (auto) | HMAC signing secret for SSE event tokens. Required when AUTH_REQUIRED=1. Ephemeral random in dev mode. |
→ Authentication overview, API keys, SSE event tokens.
Rate limiting
| Variable | Default | Description |
|---|---|---|
SECURESCAN_RATE_LIMIT_PER_MIN | 60 | Sustained requests per minute on POST /scans (per principal / IP). |
SECURESCAN_RATE_LIMIT_BURST | 10 | Burst capacity above the sustained rate. |
SECURESCAN_RATE_LIMIT_ENABLED | true | Set to false to disable. Only do this if you have a smarter rate limiter in front. |
→ Rate limits.
Logging
| Variable | Default | Description |
|---|---|---|
SECURESCAN_LOG_LEVEL | INFO | DEBUG / INFO / WARNING / ERROR. |
SECURESCAN_LOG_FORMAT | text (or json if in container) | text for dev TTY; json for container / aggregator-friendly. |
SECURESCAN_IN_CONTAINER | 0 | When 1, default LOG_FORMAT flips to json. |
Each request emits one structured line on securescan.request with
request_id, method, path, status, latency_ms. Scan
lifecycle events go on securescan.scan with event + per-event
fields.
CORS / network
| Variable | Default | Description |
|---|---|---|
SECURESCAN_CORS_ORIGINS | localhost:3000,127.0.0.1:3000,localhost:3003,127.0.0.1:3003 | Comma-separated CORS origins. |
When the frontend and backend are on different hosts, set this to the frontend's origin(s) so the browser doesn't get CORS-blocked.
Frontend
| Variable | Default | Description |
|---|---|---|
NEXT_PUBLIC_SECURESCAN_API_KEY | (unset) | API key the dashboard injects on every request. Baked into the build. |
NEXT_PUBLIC_SECURESCAN_API_URL | http://localhost:8000 | Backend URL the dashboard talks to. |
Both are NEXT_PUBLIC_* (Next.js convention) so they end up in the
browser bundle. Use a read scope key, not admin — see
Production checklist.
Scanner-specific
| Variable | Default | Description |
|---|---|---|
SECURESCAN_ZAP_ADDRESS | http://127.0.0.1:8090 | URL of the ZAP daemon for the zap scanner. |
SECURESCAN_ZAP_API_KEY | (unset) | API key the ZAP daemon expects. |
SECURESCAN_GROQ_API_KEY | (unset) | Groq API key for AI enrichment. AI is auto-disabled in CI (CI=true). |
These can — and should — live in ~/.config/securescan/.env rather
than the shell environment, so they persist across reboots. See
Local config (.env).
CI determinism
| Variable | Default | Description |
|---|---|---|
CI | (unset) | Set to true by GitHub Actions / GitLab CI / etc. SecureScan auto-disables AI enrichment when set. |
SECURESCAN_FAKE_NOW | (unset) | Pin the only time-derived field in output. Set in tests / CI replays for byte-identical output. |
→ How scans work → Determinism.
Database
| Variable | Default | Description |
|---|---|---|
SECURESCAN_DB_PATH | ~/.securescan/scans.db | SQLite DB file path. Persists scans, findings, triage state, keys, webhooks, notifications. |
In containers, mount a volume at this path. See Docker.
Examples
Minimum production env
export SECURESCAN_AUTH_REQUIRED=1
export SECURESCAN_EVENT_TOKEN_SECRET="$(openssl rand -hex 32)"
export SECURESCAN_API_KEY="$(openssl rand -hex 32)" # break-glass
export SECURESCAN_LOG_FORMAT=json
Then issue scoped DB keys for actual consumers (CI, dashboard) via the API; reserve the env-var key for emergencies.
Container env-file
# /etc/securescan/env
SECURESCAN_AUTH_REQUIRED=1
SECURESCAN_EVENT_TOKEN_SECRET=replace-me-with-openssl-rand-hex-32
SECURESCAN_LOG_FORMAT=json
SECURESCAN_IN_CONTAINER=1
SECURESCAN_RATE_LIMIT_PER_MIN=120
SECURESCAN_RATE_LIMIT_BURST=20
SECURESCAN_CORS_ORIGINS=https://securescan.example.com
docker run --env-file /etc/securescan/env \
-v securescan-data:/data \
ghcr.io/metbcy/securescan:v0.11.0 \
serve --host 0.0.0.0 --port 8000 --workers 1
Tuning rate limits for a CI fleet
A team with 30 CI runners hitting the same key would saturate the default 60/min. Bump it:
export SECURESCAN_RATE_LIMIT_PER_MIN=300
export SECURESCAN_RATE_LIMIT_BURST=30
Or — better — issue one DB key per runner so they have isolated buckets.
Source
- Most env vars resolve in `backend/securescan/config.py`.
- Auth-related: `backend/securescan/auth.py`.
- Logging: `backend/securescan/logging_config.py`.
- Local `.env` loader: `backend/securescan/config_loader.py`.
Next
- Local config (.env) — the user-scoped env file.
- Production checklist — how to use these together.
- Authentication overview.
Single-worker constraint
SecureScan must run with --workers 1 on uvicorn. This page explains
why, what breaks if you ignore it, and how to scale horizontally
when you outgrow one worker.
What's in-process
Two systems are module-level singletons that live inside the uvicorn worker:
- The event bus — the in-process pub/sub powering live scan-progress SSE. Each scan has subscribers (the dashboard tab, notification creator, webhook enqueuer); each scanner publishes lifecycle events to that bus. Source: `backend/securescan/events.py`.
- The webhook dispatcher — a long-lived asyncio task that polls the `webhook_deliveries` table, sends, retries, and marks deliveries succeeded / failed. Source: `backend/securescan/webhook_dispatcher.py`.
Both are bound to the worker that booted them. There is no shared memory, no cross-process pubsub, no leader election.
What breaks on --workers 2+
flowchart LR
subgraph W1[uvicorn worker 1]
Bus1[event bus]
Sub1[SSE subscriber A]
end
subgraph W2[uvicorn worker 2]
Bus2[event bus]
Run2[scan running on W2]
end
Run2 -->|publishes events| Bus2
Sub1 -->|subscribes| Bus1
Bus1 -.never sees Run2's events.-> Sub1
If a POST /scans lands on worker 2 and a GET /scans/{id}/events
lands on worker 1:
- Worker 2's bus gets every lifecycle event.
- Worker 1's bus gets nothing.
- The dashboard tab silently sees no progress events; the live panel hangs on `pending`.
The webhook dispatcher has a related failure: each worker runs its
own dispatcher task. They both poll the same webhook_deliveries
table. The atomic mark_delivery_delivering claim prevents
double-sending — but you've doubled the polling load and lost FIFO
per webhook (different workers might race, claim, and deliver out
of order). The system appears to work, but the v0.9.0 contract
is violated.
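The atomic claim is worth seeing concretely. A sketch with an illustrative table shape (not SecureScan's actual schema): because the `UPDATE` is guarded by the current status, when two dispatchers poll the same row exactly one wins.

```python
import sqlite3

def claim(conn: sqlite3.Connection, delivery_id: int) -> bool:
    """Atomically move a delivery from 'pending' to 'delivering'."""
    cur = conn.execute(
        "UPDATE webhook_deliveries SET status = 'delivering' "
        "WHERE id = ? AND status = 'pending'",
        (delivery_id,),
    )
    conn.commit()
    return cur.rowcount == 1  # exactly one poller flips the row

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE webhook_deliveries (id INTEGER PRIMARY KEY, status TEXT)")
conn.execute("INSERT INTO webhook_deliveries VALUES (1, 'pending')")
assert claim(conn, 1)        # first dispatcher claims the delivery
assert not claim(conn, 1)    # second dispatcher loses the race
```

Nothing in this guard preserves *ordering* across pollers, which is exactly the FIFO contract that breaks with two workers.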
The default uvicorn invocation in the bundled Docker entrypoint and
in securescan serve is --workers 1. Do not override it.
Why not multi-process pubsub right now
A Redis backplane is on the v0.7.x+ roadmap. We chose to ship the in-process bus first because:
- It is simple — no external dependency, no failure mode where the bus can be down while the API is up.
- Single-worker is enough for the typical SecureScan deployment size (one team, dozens of scans per hour).
- Multi-process correctness for the webhook dispatcher requires more than just pubsub — it needs leader election or partitioned queue ownership to keep FIFO per webhook.
We will add a Redis-backed bus + a pubsub-friendly dispatcher when real users hit the throughput ceiling. So far, nobody has.
Scaling horizontally today
If you need more throughput than one worker can give:
flowchart LR
  Client --> LB[Sticky-session LB<br>keyed on scan_id]
  LB -->|scan-id hashed to bucket A| InstA[Backend instance A]
  LB -->|scan-id hashed to bucket B| InstB[Backend instance B]
  LB -->|scan-id hashed to bucket C| InstC[Backend instance C]
  InstA --> DB[(SQLite db.<br>Or migrate to a per-instance DB.)]
  InstB --> DB
  InstC --> DB
Run multiple separate backend deployments, each with its own
single uvicorn worker. Put a sticky load balancer in front,
keyed on scan_id (or path-prefixed: scans starting with 0a* go
to instance A, etc).
Each scan's lifecycle (POST /scans → GET /events → SSE → webhook
dispatch) lives on the same instance, so the in-process bus
behaves correctly.
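Any deterministic hash of `scan_id` gives the stickiness this needs. A sketch (instance names are hypothetical, and real load balancers have their own hashing config):

```python
import hashlib

# Hypothetical backend instances behind the sticky LB.
INSTANCES = ["backend-a", "backend-b", "backend-c"]

def route(scan_id: str) -> str:
    """Stable hash of scan_id modulo the instance list."""
    h = int(hashlib.sha256(scan_id.encode()).hexdigest(), 16)
    return INSTANCES[h % len(INSTANCES)]

# Every request for the same scan lands on the same instance,
# so its SSE subscribers share a bus with the scan that runs there.
assert route("0f1a93cb") == route("0f1a93cb")
```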
This pattern handles the SSE / webhook problem but introduces a
new one: the SQLite DB is per-instance. If you want a unified
finding history across instances, use a shared SQLite (NFS — not
recommended for write-heavy load) or migrate to PostgreSQL
(SecureScan's aiosqlite layer is the only thing that depends on
SQLite, and it has been intentionally kept thin to ease a future
swap).
A common compromise: scale the scanning workload horizontally, keep a single instance for the dashboard (which is read-mostly). The dashboard reads via the API; you don't have to share the DB to share findings — you can query each scanner instance and merge.
Read endpoints scale fine
Read endpoints (GET /scans, GET /findings, etc.) do not depend
on the in-process bus. A multi-instance read-replica pattern with
its own SQLite (rsync'd from the writer) works for read-heavy
auditing workloads.
What not to do
- Do NOT run `uvicorn ... --workers 4` behind a single load balancer. SSE breaks silently.
- Do NOT run two `uvicorn` processes pointing at the same SQLite DB without sticky routing. The webhook dispatchers will fight (correctness preserved by the atomic claim, but FIFO ordering broken).
- Do NOT use `gunicorn -w 4` as a wrapper. Same failure mode.
How to verify your deploy is single-worker
$ docker exec securescan-backend ps aux | grep uvicorn
root 7 1.2 ... uvicorn securescan.api:app --host 0.0.0.0 --port 8000 --workers 1
Or hit `/api/v1/dashboard/status` from two different terminals and confirm both requests were served by the same process: the `request_id` differs per request, but the PID in the structured `securescan.request` log lines should be identical.
Roadmap
| Item | Status |
|---|---|
| In-process event bus (single worker) | ✅ shipped v0.7.0 |
| In-process webhook dispatcher | ✅ shipped v0.9.0 |
| Redis pubsub backplane (cross-worker bus) | Roadmap |
| Distributed queue + leader election for dispatcher | Roadmap |
| PostgreSQL-backed `webhook_deliveries` | Roadmap (after Redis) |
When the roadmap items land, this page will become "horizontal scaling guide" rather than "constraint."
Source
- Event bus: `backend/securescan/events.py`
- Webhook dispatcher: `backend/securescan/webhook_dispatcher.py`
- Bus subscription on the SSE route: `event_stream` in `backend/securescan/api/scans.py`
Next
- Real-time scan progress — the consumer of the in-process bus.
- Webhooks — the dispatcher's job.
- Production checklist — `--workers 1` is on it.
Local config (.env)
SecureScan auto-loads a user-scoped .env file at backend startup so
you can persist scanner credentials and other secrets between
reboots without re-exporting them every shell session.
Introduced in v0.6.1.
Path
$XDG_CONFIG_HOME/securescan/.env
(falls back to)
~/.config/securescan/.env
The file is optional. If it does not exist, the startup loader is a no-op.
What goes in it
Anything that you'd otherwise export in your shell. The most
common cases:
# ~/.config/securescan/.env
# ZAP daemon credentials
SECURESCAN_ZAP_ADDRESS=http://127.0.0.1:8090
SECURESCAN_ZAP_API_KEY=zap-api-key-from-zap-ui
# Groq API key for AI enrichment
SECURESCAN_GROQ_API_KEY=gsk_...your-key...
# Override the SQLite DB location
SECURESCAN_DB_PATH=/var/lib/securescan/scans.db
Any config var is fair game. Auth-related vars
(SECURESCAN_API_KEY, SECURESCAN_AUTH_REQUIRED,
SECURESCAN_EVENT_TOKEN_SECRET) work too — but you almost certainly
want those in a secrets manager / systemd env file, not a dotfile.
Precedence
shell environment (highest priority — wins)
~/.config/securescan/.env
(unset = use built-in default)
If both the shell and the file set the same variable, the shell wins. This makes ad-hoc overrides easy without editing the file:
$ cat ~/.config/securescan/.env
SECURESCAN_ZAP_ADDRESS=http://127.0.0.1:8090
# Override for one run:
$ SECURESCAN_ZAP_ADDRESS=http://10.0.0.5:8090 securescan scan https://staging --type dast
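The precedence rule can be modeled in a few lines. This is a sketch of the behavior, not the actual `config_loader.py` implementation:

```python
def load_env_file(path: str, environ: dict) -> None:
    """Apply file values only where the environment has no value (shell wins)."""
    try:
        lines = open(path).read().splitlines()
    except FileNotFoundError:
        return  # file is optional: the loader is a no-op
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks, comments, malformed lines
        key, _, value = line.partition("=")
        # setdefault = shell environment takes priority over the file
        environ.setdefault(key.strip(), value.strip())
```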
Format
Standard .env syntax:
- `KEY=value` per line.
- `#` starts a comment.
- Whitespace around `=` is ignored.
- Values are read literally — quoting is preserved (so don't quote unless the value needs the quotes).
# Good
SECURESCAN_ZAP_API_KEY=abc123def456
# Wrong (the quotes become part of the value)
SECURESCAN_ZAP_API_KEY="abc123def456"
Permissions
The file commonly contains secrets (ZAP API key, Groq token, maybe a SecureScan API key). Lock it down:
chmod 600 ~/.config/securescan/.env
Anything more permissive will leak credentials to other users on the host.
The loader does not enforce permissions — that's the operator's responsibility. A future release may add a startup warning.
Container deploys
In containers, mount the file as a volume:
docker run --rm -p 8000:8000 \
-v ~/.config/securescan:/root/.config/securescan:ro \
-e SECURESCAN_API_KEY="$(cat /run/secrets/securescan-api-key)" \
ghcr.io/metbcy/securescan:v0.11.0 \
serve --host 0.0.0.0 --port 8000
Mount read-only (:ro); the backend never writes to the file,
and read-only mounts prevent a compromised process from rewriting
secrets back in place.
For Kubernetes, prefer a Secret mounted as files:
volumes:
- name: securescan-config
secret:
secretName: securescan-env-file
items:
- key: .env
path: .env
volumeMounts:
- name: securescan-config
mountPath: /root/.config/securescan
readOnly: true
Verifying it loaded
The backend logs each loaded var (key only, never the value) at INFO on startup:
INFO securescan.config_loader loaded SECURESCAN_ZAP_ADDRESS from /home/me/.config/securescan/.env
INFO securescan.config_loader loaded SECURESCAN_GROQ_API_KEY from /home/me/.config/securescan/.env
INFO securescan.config_loader loaded SECURESCAN_DB_PATH from /home/me/.config/securescan/.env
A line per var that was actually set from the file (i.e. not already in the shell environment).
Source
- Loader: `backend/securescan/config_loader.py`.
Next
- Configuration reference — every supported variable.
- Production checklist — persistence is on it.
Verifying signed artifacts
Every tagged release of SecureScan publishes signed artifacts:
- Wheel + sdist — signed with `sigstore-python`; the bundle ships as a GitHub Release asset.
- Container image — signed by digest with `cosign` keyless (Sigstore via OIDC).
Both identities are pinned to refs/tags/<tag> — that is why the
release workflow is tag-triggered only and does not offer
workflow_dispatch (a manual run would publish under a
refs/heads/... identity and break these verification commands).
Wheel + sdist (sigstore)
Install sigstore:
pip install sigstore
Download the wheel and its sigstore bundle from the GitHub Release page (both ship as Release assets):
gh release download v0.11.0 \
--repo Metbcy/securescan \
--pattern 'securescan-0.11.0-py3-none-any.whl' \
--pattern 'securescan-0.11.0-py3-none-any.whl.sigstore.json'
Verify:
sigstore verify identity \
--cert-identity 'https://github.com/Metbcy/securescan/.github/workflows/release.yml@refs/tags/v0.11.0' \
--cert-oidc-issuer 'https://token.actions.githubusercontent.com' \
--bundle securescan-0.11.0-py3-none-any.whl.sigstore.json \
securescan-0.11.0-py3-none-any.whl
You should see:
OK: securescan-0.11.0-py3-none-any.whl
Same shape for the sdist:
sigstore verify identity \
--cert-identity 'https://github.com/Metbcy/securescan/.github/workflows/release.yml@refs/tags/v0.11.0' \
--cert-oidc-issuer 'https://token.actions.githubusercontent.com' \
--bundle securescan-0.11.0.tar.gz.sigstore.json \
securescan-0.11.0.tar.gz
Container image (cosign keyless)
Install cosign (≥ v2.0).
Verify by tag:
cosign verify ghcr.io/metbcy/securescan:v0.11.0 \
--certificate-identity 'https://github.com/Metbcy/securescan/.github/workflows/release.yml@refs/tags/v0.11.0' \
--certificate-oidc-issuer 'https://token.actions.githubusercontent.com'
Verify by digest (the recommended pattern for production — tags can be re-pointed; digests cannot):
DIGEST="$(crane digest ghcr.io/metbcy/securescan:v0.11.0)"
cosign verify "ghcr.io/metbcy/securescan@${DIGEST}" \
--certificate-identity 'https://github.com/Metbcy/securescan/.github/workflows/release.yml@refs/tags/v0.11.0' \
--certificate-oidc-issuer 'https://token.actions.githubusercontent.com'
Successful output is JSON:
[
{
"critical": {
"identity": {
"docker-reference": "ghcr.io/metbcy/securescan"
},
"image": {
"docker-manifest-digest": "sha256:..."
},
"type": "cosign container image signature"
},
"optional": {
"Bundle": { "...": "..." },
"Issuer": "https://token.actions.githubusercontent.com",
"Subject": "https://github.com/Metbcy/securescan/.github/workflows/release.yml@refs/tags/v0.11.0"
}
}
]
What the verification proves
- The artifact was produced by the SecureScan release workflow at the v0.11.0 tag running on GitHub Actions.
- The Sigstore transparency log (Rekor) has an immutable record of the signature.
- The artifact has not been tampered with since signing.
It does not prove:
- That the v0.11.0 source tree itself is bug-free or malware-free.
- That the artifact you have was downloaded from the official source (verify the registry / release URL too).
Pinning in production
Tags are mutable references. A registry compromise (or a careless
re-push) could re-point v0.11.0 to a different image. Pin by
digest in production manifests:
image: ghcr.io/metbcy/securescan@sha256:abcdef...
The cosign verify ...@sha256:... command above ties the running
image to the v0.11.0 release identity, regardless of what a tag now
points at.
For Kubernetes, an admission controller like
Sigstore Policy Controller
or Kyverno's verifyImages
can enforce verification at admission time so an unsigned image
never starts.
Why these specific identities
The cert identity is the workflow file path at the tagged ref:
https://github.com/Metbcy/securescan/.github/workflows/release.yml@refs/tags/v0.11.0
That URL is self-describing:
| Segment | Meaning |
|---|---|
Metbcy/securescan | The repository. |
.github/workflows/release.yml | The workflow that signed the artifact. |
refs/tags/v0.11.0 | The git ref the workflow ran against. |
The OIDC issuer is GitHub Actions' fixed token issuer:
https://token.actions.githubusercontent.com
Together they prove "this artifact was signed by Metbcy/securescan's release workflow when run against the v0.11.0 tag." Re-running the workflow against a different tag, branch, or repository would produce a different identity that fails the verification.
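Since the identity is purely mechanical (repo, workflow path, git ref), release tooling can construct it rather than hard-code it; a tiny helper:

```python
def cert_identity(repo: str, workflow: str, tag: str) -> str:
    """Build the expected Fulcio certificate identity for a tagged release."""
    return f"https://github.com/{repo}/{workflow}@refs/tags/{tag}"

assert cert_identity("Metbcy/securescan",
                     ".github/workflows/release.yml",
                     "v0.11.0") == (
    "https://github.com/Metbcy/securescan/.github/workflows/release.yml"
    "@refs/tags/v0.11.0"
)
```

With this, bumping the verified version in CI is a one-line change to the tag argument instead of an edit inside a long flag string.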
Troubleshooting
error: tlog entry not found
Sigstore cached transparency entries can lag a few seconds after signing. Retry. If it persists past a minute, the artifact may have been signed against an unsupported transparency log; check the Sigstore status page.
subject mismatch
The --cert-identity does not match the actual signature. Most
common causes:
- Wrong tag in the URL (`v0.11.0` vs `v0.10.4`).
- Verifying an image that isn't a tagged release — only tagged releases are signed via the release path (`:latest` is not published).
- Forking SecureScan and re-running `release.yml` from your fork. Your fork's signatures will use your org/repo in the identity.
error: bundle does not match
The --bundle file does not correspond to the securescan-*.whl
on disk. Make sure you downloaded both from the same GitHub Release.
Source
- Release workflow: `.github/workflows/release.yml`
- The exact verification commands are also appended to each GitHub Release's notes (auto-generated from the `release.yml` template), with `<tag>` and `<version>` substituted in.
Next
- Release process — what produces these signatures.
- Production checklist — verification is on it.
- Docker — the image we just verified.
CLI overview
The securescan CLI is the source of truth for the data model:
the same fingerprints, severities, scan types, and suppression
mechanics that the dashboard exposes are first defined here. The web
UI is a mirror.
There are three primary modes:
- One-shot scan — `securescan scan` against a directory.
- Diff scan — `securescan diff` against two refs (or two pre-scanned snapshots). The CI workhorse.
- Server — `securescan serve` runs the FastAPI dashboard backend.
Install
The CLI is the same binary regardless of install path:
# From PyPI:
pip install securescan
# Or, the container (everything pre-installed):
docker run --rm -v "$PWD:/work" -w /work \
ghcr.io/metbcy/securescan:v0.11.0 \
scan . --type code
See Install for full details.
Subcommands
| Command | What it does |
|---|---|
securescan scan <path> | Full scan of a directory. Outputs findings in any format. |
securescan diff <path> | Diff-aware scan: only NEW findings introduced since the base ref. |
securescan compare <path> <baseline> | Compare current scan against a saved baseline; report drift (NEW / DISAPPEARED / STILL_PRESENT). |
securescan baseline [-o <path>] | Write a canonical baseline JSON of current findings (deterministic; check into git). |
securescan config validate [<path>] | Lint .securescan.yml for typos, bad severities, missing rule-pack paths. |
securescan history | List past saved scans. |
securescan status | List which scanners are installed and reachable. |
securescan serve | Run the FastAPI dashboard backend. |
Detailed examples for each: Commands.
Output formats
Pick with --output <format>:
| Format | Use case |
|---|---|
text (TTY default) | Human-readable terminal output. |
json | Downstream tools, baselines, snapshot-mode diff inputs. |
sarif | GitHub Code Scanning / Security tab; emits partialFingerprints so re-uploads dedup cleanly. |
csv | Spreadsheet import, compliance reports. |
junit | CI test-result tabs. |
github-pr-comment | The default for securescan diff. Markdown with <!-- securescan:diff --> upsert marker. |
github-review | GitHub Reviews API JSON payload (used by the inline-review action mode). |
--output-file <path> writes to a file instead of stdout.
Determinism
Every CLI run is byte-deterministic for the same inputs:
- Findings are sorted by a canonical key.
- AI enrichment is auto-disabled in CI (`CI=true`).
- Wall-clock timestamps are excluded from byte-identity-sensitive sections; `SECURESCAN_FAKE_NOW` pins the only time-derived field that exists.
This is the property that makes the GitHub Action's "single PR comment, upserted on every push" work — and makes SARIF re-uploads to GitHub's Security tab dedup cleanly. See Architecture: determinism contract.
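The contract can be sketched in a few lines of Python. The field names here (`scanned_at`, `duration_s`, `file`, `line`, `rule_id`) are illustrative assumptions, not SecureScan's actual schema:

```python
import json

def canonical_findings_json(findings: list[dict]) -> bytes:
    """Byte-deterministic serialization sketch: drop wall-clock fields,
    sort by a canonical key, and emit stable JSON bytes."""
    VOLATILE = {"scanned_at", "duration_s"}          # assumed volatile fields
    cleaned = [{k: v for k, v in f.items() if k not in VOLATILE}
               for f in findings]
    cleaned.sort(key=lambda f: (f.get("file", ""),
                                f.get("line", 0),
                                f.get("rule_id", "")))
    # sort_keys + fixed separators => identical bytes for identical inputs
    return json.dumps(cleaned, sort_keys=True, separators=(",", ":")).encode()
```

Two runs over the same findings — in any input order — produce identical bytes, which is exactly what upsert markers and SARIF dedup rely on.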
Most-used flags
These are the flags you'll reach for most. The full list is
securescan --help (and per-subcommand --help).
Global
| Flag | Default | Notes |
|---|---|---|
--type <t> | code (for diff); code,dependency typical for scan | Repeatable. code / dependency / iac / baseline / dast / network. |
--output <fmt> | text (TTY) / json (pipe) | One of the formats above. |
--output-file <path> | stdout | Write to file. |
--ai / --no-ai | auto | Force AI enrichment on / off (auto = off in CI). |
--fail-on-severity <s> | none | Exit non-zero if findings ≥ this severity exist. |
--show-suppressed | off | Include suppressed findings in output (with [SUPPRESSED:reason] prefix). |
--no-suppress | off | Disable suppression entirely. Kill switch. |
diff-specific
| Flag | Notes |
|---|---|
--base-ref <ref> | Git ref for the "before" side. Resolved to a sha. |
--head-ref <ref> | Git ref for the "after" side. Resolved to a sha. |
--base-snapshot <file> | Pre-scanned JSON for the base side (CI-friendly path). |
--head-snapshot <file> | Pre-scanned JSON for the head side. |
--baseline <file> | Path to baseline JSON to suppress legacy findings. |
--repo, --sha, --base-sha | For --output github-review: GitHub coordinates. Auto-resolved from git when omitted. |
securescan diff accepts ref mode (--base-ref / --head-ref) or
snapshot mode (--base-snapshot / --head-snapshot), never both.
Snapshot mode is the recommended CI path: each side runs
securescan scan ... --output json independently, then a single
classification step does the diff without re-checking out the tree.
serve-specific
| Flag | Default | Notes |
|---|---|---|
--host | 127.0.0.1 | Bind interface. Use 0.0.0.0 in containers. |
--port | 8000 | |
--workers | 1 | Must stay 1. See Single-worker. |
Auth from the CLI
When SecureScan's backend has auth configured, the CLI's commands that hit the backend need a key. Two options:
- Set `SECURESCAN_API_KEY` in the environment.
- Pass `--api-key <key>` on the command line.
The env path is preferred because it doesn't end up in shell history.
The relevant commands are: history, status (when probing a
remote backend), and the inline-review modes that POST to the
backend. scan / diff / compare / baseline work entirely
locally and do not need backend auth.
Examples
Scan a directory, fail on high
securescan scan ./your-repo --type code --type dependency \
--fail-on-severity high \
--output sarif --output-file results.sarif
Diff for a PR (snapshot mode)
git checkout main                 # base side
securescan scan . --type code --output json --output-file before.json
git checkout -                    # back to the head ref
securescan scan . --type code --output json --output-file after.json
securescan diff . \
--base-snapshot before.json --head-snapshot after.json \
--output github-pr-comment --output-file pr-comment.md
Refresh a baseline
securescan baseline # writes .securescan/baseline.json
securescan compare .securescan/baseline.json # what disappeared since the last baseline?
Validate a config
securescan config validate .securescan.yml
Status check
securescan status
Source
- Entry point: `backend/securescan/cli.py` (Typer-based).
This page covers the most-used flags. For the complete flag
list per subcommand, run securescan <subcommand> --help. The CLI
is built with Typer; help text is auto-generated from the function
signatures.
Next
- Commands — detailed examples for every subcommand.
- GitHub Action — `securescan diff` wrapped for CI.
- Suppression — for the `--show-suppressed` / `--no-suppress` flags.
Commands
A subcommand-by-subcommand walkthrough with realistic examples.
For the full flag list per command, run securescan <cmd> --help
— that's the source of truth.
scan
Full scan of a directory (or URL / hostname for DAST / network). Outputs findings in any output format.
# Default: code + dependency
securescan scan ./your-repo
# Multiple types
securescan scan ./your-repo --type code --type dependency --type iac
# DAST against a URL
securescan scan https://staging.example.com --type dast
# Network probe
securescan scan example.com --type network
# Specify output file + format
securescan scan ./your-repo --type code \
--output sarif --output-file results.sarif
# Fail the build on high
securescan scan ./your-repo --fail-on-severity high
# Force AI enrichment on (off by default in CI)
securescan scan ./your-repo --ai
Sample text output:
Scanning ./your-repo (code, dependency)
semgrep: 7 findings (4.31s)
bandit: 2 findings (1.04s)
trivy: 3 findings (12.7s)
safety: 0 findings (0.6s)
[HIGH] semgrep backend/api.py:42 Use of eval()
[HIGH] bandit backend/db.py:12 SQL injection via str.format
[MEDIUM] trivy requirements.txt CVE-2024-12345 in requests<2.32.0
...
Summary
Total: 12 findings (1 critical, 3 high, 5 medium, 2 low, 1 info)
Risk score: 34.2
fail-on-severity: none
diff
Diff-aware scan: only NEW findings between two refs. The CI workhorse.
# Ref mode — refs must exist in the local clone
securescan diff . --base-ref main --head-ref HEAD
# Snapshot mode — recommended for CI
securescan diff . \
--base-snapshot before.json \
--head-snapshot after.json \
--output github-pr-comment
# Output as a GitHub review JSON (for inline-review mode)
securescan diff . --base-ref main --head-ref HEAD \
--output github-review --repo Metbcy/securescan \
--output-file review.json
Sample github-pr-comment output:
<!-- securescan:diff -->
### SecureScan diff
3 new findings (●1 critical, ●1 high, ●1 medium) · 0 fixed · 14 unchanged
fail-on-severity: high
| Severity | Scanner | Title | Where |
| --- | --- | --- | --- |
| ● critical | semgrep | Use of eval() | `backend/api.py:42` |
| ● high | bandit | SQL injection via str.format | `backend/db.py:12` |
| ● medium | secrets | Possible AWS access key | `config/local.yml:5` |
<sub>Run `securescan diff` locally to reproduce. Markers identify this comment for upsert.</sub>
Each side of the diff runs securescan scan ... --output json
independently — possibly on different runners — and the diff step
is a single deterministic classification. This decouples the heavy
work from the diff and lets you cache snapshots across runs. See
Diff & compare.
compare
Compare current scan against a saved baseline; report drift.
# What disappeared since the baseline?
securescan compare .securescan/baseline.json
# As a PR comment
securescan compare .securescan/baseline.json \
--output github-pr-comment --output-file compare.md
The classifier produces:
| Bucket | Meaning |
|---|---|
NEW | In current scan, not in baseline. |
DISAPPEARED | In baseline, not in current scan. (Your remediations.) |
STILL_PRESENT | In both. |
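The three buckets are plain set algebra over stable fingerprints — a minimal sketch of the classifier's contract (not its actual code):

```python
def classify(current: set[str], baseline: set[str]) -> dict[str, set[str]]:
    """Bucket finding fingerprints into the three compare outcomes."""
    return {
        "NEW": current - baseline,            # in current scan, not in baseline
        "DISAPPEARED": baseline - current,    # your remediations
        "STILL_PRESENT": current & baseline,  # in both
    }
```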
The PR-comment marker is <!-- securescan:compare --> so a comment
upserter can keep this on a separate thread from the
<!-- securescan:diff --> PR-diff comment.
baseline
Write a canonical baseline JSON of current findings.
# Writes to .securescan/baseline.json (default)
securescan baseline
# Custom path
securescan baseline -o /path/to/baseline.json
The output is byte-deterministic: no timestamps, relative
target_path, sorted entries. Two identical scans produce two
identical baseline files — diffs cleanly in code review.
Common usage on adoption:
securescan baseline # snapshot every existing finding
git add .securescan/baseline.json
git commit -m "chore: SecureScan baseline"
Then in CI:
securescan diff . --base-ref main --head-ref HEAD \
--baseline .securescan/baseline.json # only NEW findings appear
config validate
Lint .securescan.yml:
$ securescan config validate
.securescan.yml: OK
scan_types: code, dependency
ignored_rules: 3
severity_overrides: 2
semgrep_rules: .securescan/rules/secrets.yml
Catches:
- Typos in severity values (`hgih` instead of `high`).
- Missing rule-pack paths.
- Collisions between `ignored_rules` and `severity_overrides` (a rule that is both ignored and severity-pinned).
Exit code 0 on OK, non-zero on validation failure.
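The ignored-vs-pinned collision check amounts to a set intersection — a sketch of the lint's contract, not its implementation:

```python
def find_collisions(ignored_rules: list[str],
                    severity_overrides: dict[str, str]) -> list[str]:
    """A rule that is both ignored and severity-pinned is contradictory:
    the override can never take effect. Report the overlap, sorted."""
    return sorted(set(ignored_rules) & set(severity_overrides))
```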
history
List past scans (talks to the backend if serve is running, otherwise
reads the local DB):
$ securescan history
ID Target Started Status Findings
0f1a93cb-44c2-4c8e-9f92-0a7c5a2e1b51 /home/me/proj-a 2026-04-29 20:11:09 completed 12
0d2c3a8f-4f1c-86e9-2b4b4ab0a8e0 /home/me/proj-a 2026-04-28 18:00:02 completed 14
2b3a93bc-8f4f-1c86-e92b-4b4ab0a8e0e1 https://staging 2026-04-28 17:30:10 failed -
status
List which scanners are installed and reachable. Read this first when results look thinner than expected:
$ securescan status
Scanner Type Available Version Notes
semgrep code yes 1.71.0
bandit code yes 1.7.5
trivy dependency yes 0.49.1
safety dependency yes 2.3.5
checkov iac no pip install checkov
npm-audit dependency yes npm 10.x uses ambient npm on PATH
zap dast no /usr/share/zaproxy/zap.sh; recommended port 8090
nmap network yes 7.94
licenses dependency yes 4.3.4
secrets code yes
git-hygiene code yes
dockerfile iac yes
baseline baseline yes
builtin_dast dast yes
The same data is at GET /api/v1/dashboard/status — the dashboard's
/scan page reads it on mount.
serve
Run the FastAPI dashboard backend:
# Default
securescan serve
# Bind on all interfaces (in a container)
securescan serve --host 0.0.0.0 --port 8000
# Single worker is the default and required
securescan serve --workers 1
Inside the container, the entry point is the same — serve is the
command that the bundled image runs.
See Docker and Production checklist.
Less-used / power-user flags
Inline review mode integration
securescan diff . --base-ref main --head-ref HEAD \
--output github-review \
--repo Metbcy/securescan \
--sha "$GITHUB_SHA" \
--base-sha "$GITHUB_BASE_SHA" \
--review-event COMMENT \
--no-suggestions
These flags exist so the GitHub Action's post-review.sh can drive
the inline-review path, but they're useful for local debugging too:
the JSON payload is what the Reviews API expects, so you can
inspect it without posting.
Baseline host probes
securescan scan / --type baseline --baseline-host-probes
For power users who want host-scope baseline probes alongside target-scope scans. See Scan types → baseline.
Pinning the time field
SECURESCAN_FAKE_NOW="2026-04-29T20:00:00Z" \
securescan scan ./your-repo --output json --output-file findings.json
Pins the only time-derived field in the output for byte-identical test fixtures. Used in the SecureScan test suite; useful for any CI replay that needs reproducible bytes.
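A time source honoring the variable might look like this; the exact parsing the CLI performs is an assumption (only the ISO-8601 `Z` form shown above is documented):

```python
import os
from datetime import datetime, timezone

def scan_time(env: dict = os.environ) -> datetime:
    """Return the pinned time from SECURESCAN_FAKE_NOW when set,
    else the real current UTC time."""
    pinned = env.get("SECURESCAN_FAKE_NOW")
    if pinned:
        # datetime.fromisoformat doesn't accept 'Z' on older Pythons;
        # normalize it to an explicit UTC offset first.
        return datetime.fromisoformat(pinned.replace("Z", "+00:00"))
    return datetime.now(timezone.utc)
```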
Source
- All commands route through `backend/securescan/cli.py` (Typer-based).
- The `serve` command bridges to `backend/securescan/api/__init__.py`.
Next
- GitHub Action — `securescan diff` wrapped for CI.
- CLI overview — flags + output formats reference.
- Suppression — `--show-suppressed`, `--no-suppress`, baseline.
GitHub Action
The Metbcy/securescan@v1 composite action wraps securescan diff,
posts the upserted PR comment, and uploads SARIF to GitHub's Security
tab. It tries the wheel first and falls back to the pinned container
image when scanner binaries are not on PATH.
Action source: `action/`.
Minimum example
# .github/workflows/securescan.yml
on: pull_request
permissions:
contents: read
pull-requests: write # required for the upserted PR comment
security-events: write # required for SARIF upload to the Security tab
jobs:
securescan:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0 # diff needs both base and head commits
- uses: Metbcy/securescan@v1
with:
scan-types: code,dependency
fail-on-severity: high
That's the full integration. The action:
- Checks out both refs (base + head).
- Runs `securescan diff` against them.
- Posts a PR comment with NEW findings (upserted via the `<!-- securescan:diff -->` marker — one comment per PR, updated in place).
- Uploads SARIF to the Security tab.
- Exits non-zero if NEW findings exist at `>= fail-on-severity`.
Inputs
| Input | Default | Description |
|---|---|---|
base-ref | PR base sha | Git ref to diff from. Auto-resolved from the PR event payload. |
head-ref | PR head sha | Git ref to diff to. Auto-resolved from the PR event payload. |
scan-types | code | Comma-separated: code,dependency,iac,baseline,dast,network. |
fail-on-severity | none | Exit non-zero if NEW findings >= this severity. none|critical|high|medium|low. |
comment-on-pr | true | Post the diff as a PR comment. |
upload-sarif | true | Upload SARIF to the Security tab. |
image-tag | latest | Tag of ghcr.io/metbcy/securescan to use when falling back to the container. |
prefer-image | false | Skip the wheel install path; always run the container. |
baseline | (none) | Path to a baseline JSON to suppress legacy findings. |
github-token | GITHUB_TOKEN | Token used for PR comment upsert. |
pr-mode | summary | summary (one PR comment) / inline (inline-anchored review) / both. |
review-event | COMMENT | When pr-mode includes inline: COMMENT / REQUEST_CHANGES / APPROVE. |
inline-suggestions | true | Include `suggestion` blocks for one-click inline-ignore / severity-pin. |
The full `inputs:` declaration is in `action/action.yml`.
PR mode: summary, inline, both
summary (default) is one upserted PR comment listing every
NEW finding. Best for: dashboards, single-reviewer PRs, finding
counts that fit in one comment.
inline posts a GitHub Review with one inline comment anchored
on each affected line. Best for: many reviewers triaging
independently, larger PRs where individual resolution matters,
teams that already use GitHub Review threads.
both posts both surfaces. The summary lives in the
conversation tab; inline comments live in files-changed. Use this
when you want the "what's the headline finding count" surface AND
per-finding resolution.
Inline mode example
- uses: Metbcy/securescan@v1
with:
pr-mode: inline
review-event: COMMENT # COMMENT | REQUEST_CHANGES | APPROVE
inline-suggestions: true # one-click ignore / severity-pin
How inline mode behaves:
- Diff resolution. SecureScan reads `git diff <base>..<head>` to compute each finding's `position` — GitHub's offset-into-the-PR-diff coordinate, not the source line number.
- Findings outside the diff fall back to the review body so they're not silently dropped.
- Suggestion blocks (when `inline-suggestions: true`):
  - For findings the reviewer can suppress, SecureScan offers a one-click `suggestion` block adding `# securescan: ignore <rule_id>` above the line.
  - For findings whose severity is wrong for this codebase, SecureScan shows a copy-paste `severity_overrides:` snippet for `.securescan.yml`.
- Idempotent re-runs. Each comment carries a hidden `<!-- securescan:fp:<prefix> -->` marker. On re-runs, SecureScan PATCHes existing comments instead of posting duplicates — reviewer reply threads survive.
- Resolved findings are marked, not deleted. When a finding disappears from a re-run, its comment is patched to `**Resolved in <sha7>** — finding no longer present` with the original body struck through. Manual resolution by the reviewer is honored — we do NOT call GraphQL `resolveReviewThread`.
Summary vs inline at a glance
| summary | inline | both | |
|---|---|---|---|
| Comment count | 1 (upserted) | 1 review with N inline comments | summary + inline |
| Reviewer can resolve per-finding | No | Yes | Yes (inline) |
| Findings on touched code only | All | Only lines in PR's diff | summary covers all |
| Findings outside touched code | In the comment | Review body fallback | covered both ways |
| Suggestion blocks | No | Yes (when enabled) | Yes (inline only) |
Permissions
The action's permissions are controlled by the workflow YAML — set the right ones at the workflow level:
permissions:
contents: read
pull-requests: write # both summary comment AND inline review submission
security-events: write # for SARIF upload (if upload-sarif is true)
pull-requests: write is required for both summary and inline
modes. Without it, the action will fail at the comment / review POST
step.
Pinning
Metbcy/securescan@v1 is the floating major-version tag —
auto-tracks the latest v1.x.y stable release. Recommended for most
users.
Metbcy/securescan@v0.11.0 (or any specific vX.Y.Z) is the
immutable per-release pin — use it when you want reproducible
CI behavior and explicit upgrades:
- uses: Metbcy/securescan@v0.11.0 # pinned; you control upgrades
Examples
Multi-type scan with custom baseline
- uses: Metbcy/securescan@v1
with:
scan-types: code,dependency,iac
fail-on-severity: high
baseline: .securescan/baseline.json
Inline mode, request-changes on critical
- uses: Metbcy/securescan@v1
with:
pr-mode: inline
review-event: REQUEST_CHANGES # blocks merge if branch protection requires reviews
fail-on-severity: critical
Force the container path (slower; reproducible)
- uses: Metbcy/securescan@v1
with:
prefer-image: true
image-tag: v0.9.0
Summary + inline together
- uses: Metbcy/securescan@v1
with:
pr-mode: both
inline-suggestions: true
Local development of the inline-review path
To inspect what would be posted without running CI:
securescan diff . --base-ref main --head-ref HEAD \
--output github-review --repo Metbcy/securescan \
--output-file review.json
cat review.json | jq .
The CLI requires --repo, --sha, and --base-sha (auto-resolved
from --base-ref/--head-ref in a git working tree). It does NOT
post to GitHub on its own — that's the action's job.
Source
- Action: `action/action.yml`.
- Entry point script: `action/entrypoint.sh`.
- Summary poster: `action/post-pr-comment.sh`.
- Inline review poster: `action/post-review.sh`.
- Examples: `examples/github-action.yml`.
Next
- Commands — what `securescan diff` does on the wire.
- Diff & compare — the model behind the action.
- Findings & severity — fingerprints stabilize comment threads.
pre-commit hook
SecureScan ships a pre-commit hook for the
fast pre-commit feedback loop. Add to your .pre-commit-config.yaml:
repos:
- repo: https://github.com/Metbcy/securescan
rev: v0.11.0
hooks:
- id: securescan
Then pre-commit install and pre-commit run --all-files. From here
on, every git commit will run SecureScan on the staged changes.
What it scans
Only files in git diff --cached --name-only. The full repo is NOT
re-scanned on every commit; for that, run securescan scan . directly
or use the GitHub Action.
Performance
The hook is pure Python and skips heavyweight scanners when no
staged file matches their target type. A typical run is sub-3s on small
projects. If yours runs slowly, narrow `scan_types` in your `.securescan.yml`.
Suppression
Triage state, inline securescan: ignore comments, and the baseline
file all apply to the hook the same way they apply to securescan scan. See Suppression.
Changelog
This is a mirror of the project's
CHANGELOG.md
for the post-v0.5.0 entries that the rest of this site references.
For the canonical list back to v0.1.0, follow the link above.
The format is based on Keep a Changelog, and the project adheres to Semantic Versioning.
[0.10.0] - 2026-04-30
A non-feature minor release: full product documentation website, Apache 2.0 relicensing, and a NOTICE file crediting the third-party scanners SecureScan orchestrates.
Added
- Documentation website at https://metbcy.github.io/securescan/.
41 pages covering install, all features (v0.6.0 → v0.9.0), API
reference, deployment, security, and CLI usage. Built with mdBook,
auto-deployed on every push to `main` that touches `docs/**` via a new `.github/workflows/docs.yml` workflow.
- `LICENSE` file (Apache 2.0, full text) and `NOTICE` file crediting the third-party scanners, which are invoked as subprocesses, not redistributed.
Changed
- Project relicensed to Apache 2.0. Previously declared MIT in
`README.md` and `backend/pyproject.toml`, but no `LICENSE` file existed in the repo. Apache 2.0 fits a security-tooling project better — explicit patent grant, NOTICE convention.
No code changes
This release ships zero changes to the API, database schema, scanner behavior, or dashboard UX. v0.9.0 callers can upgrade safely.
0.9.0 - 2026-04-29
A workflow / observability release: dashboards now have a notification bell, operators can issue outbound webhooks to Slack, Discord, or any HTTP receiver with HMAC-signed deliveries and a durable retry queue, and the v0.7.0 SSE live-progress stream now works in authenticated deployments via short-lived signed event tokens (the v0.8.0 deferral is closed).
Added
- SSE event tokens. New `POST /api/v1/scans/{id}/event-token` (read scope) returns a 5-minute HMAC-signed token bound to `scan_id` + the caller's `key_id`. The SSE endpoint `GET /api/v1/scans/{id}/events` now accepts `?event_token=…` as an alternative to `X-API-Key` so EventSource can authenticate. Token verification rehydrates the principal at connect time, so a revoked DB key invalidates outstanding tokens immediately. The frontend rotates tokens at half-life, with a single re-mint on error before falling back to polling. The signing secret comes from `SECURESCAN_EVENT_TOKEN_SECRET`; required when `SECURESCAN_AUTH_REQUIRED=1`. → SSE event tokens.
POST /api/v1/webhooks(admin) creates durable subscriptions to scan lifecycle events. Each delivery is persisted inwebhook_deliveriesBEFORE the HTTP call, so a backend restart resumes any pending retries. Retry policy: full- jitter exponential backoff capped at 5 minutes, max delivery age 30 minutes. Payloads are HMAC-SHA256 signed viaX-SecureScan-Signature: t=<unix-ts>,v1=<hex-hmac>overf"{t}.{raw_body}"(Stripe-style). FIFO ordering per webhook (different webhooks dispatch concurrently). Slack and Discord URLs auto-detected and reshaped; generic JSON otherwise. New/settings/webhookspage lists webhooks with a delivery log drawer that auto-refreshes every 5s.POST /webhooks/{id}/testfires a synthetic event through the exact dispatcher path so users can verify receivers end-to-end. → Webhooks. - In-app notifications. New
notificationstable; bell icon in the topbar with unread count badge (poll every 30s); 360px popover showing 10 most recent with severity dots. Notifications are auto-created onscan.complete(only whenfindings_count > 0— successful zero-finding scans don't spam the bell),scan.failed,scanner.failed. New/notificationspage lists everything with All / Unread / Read filter chips. Read notifications older than 30 days are pruned at backend startup. → Notifications.
Changed
- `_log_scan_event` now triggers three side-effects per emission: the v0.6.1 logger line, the v0.7.0 ScanEventBus publish, and (new) two side-effect hooks for webhook enqueue + notification create. Hooks run via `asyncio.create_task` and swallow DB errors so a failed side-effect can't break a live scan.
- `require_api_key` now accepts `?event_token=` for the SSE route specifically. The path-match check makes a leaked token usable only on `/scans/{id}/events`; any other route falls through to strict `X-API-Key` validation.
Bug fixed during integration
- Dev-mode SSE event tokens were being minted with `key_id="env"`, then verification rejected them because no env var was actually configured. Tokens minted in dev mode now use a `"dev"` sentinel that's accepted only while the system remains in dev mode; if credentials are added later, dev-mode tokens are invalidated. Regression tests in `test_sse.py::test_dev_mode_token_round_trips` and `test_dev_mode_token_invalidated_when_auth_enabled`.
Tests
- 790 → 863 (+73): 9 event-token unit + 12 SSE token integration (including 2 dev-mode regression), 31 webhooks, 21 notifications.
Deployment notes
- The webhook dispatcher runs as an asyncio task on the same uvicorn worker as the API. Single-worker constraint from v0.7.0 still applies (multi-worker pubsub backplane is a future feature). → Single-worker constraint.
0.8.0 - 2026-04-29
A production-readiness release: API authentication is no longer a
single shared env-var key. Operators can now issue, scope, and revoke
hashed API keys through the dashboard or the API, and the
existing endpoints are gated behind explicit read / write / admin
scopes. The legacy SECURESCAN_API_KEY env var still works as a
break-glass / dev-mode fallback.
Added
- DB-backed API keys with scopes. A new `api_keys` table stores salted-SHA-256 hashes — plaintext keys are returned exactly once at creation. Key format: `ssk_<10-char id>_<32-char secret>` (~250 bits of entropy). Three scopes: `read`, `write`, `admin`. Default new-key scopes are `["read", "write"]`; `admin` must be explicitly granted. → API keys.
  - `POST /api/v1/keys` (admin) — `{name, scopes}` → 201 `ApiKeyCreated` (the only response that includes the full secret)
  - `GET /api/v1/keys` (admin) → `ApiKeyView[]` (no secret)
  - `GET /api/v1/keys/me` (any authenticated DB key) → caller's own key info
  - `DELETE /api/v1/keys/{id}` (admin) → 204; 409 if revoking the target would leave the system with zero admin credentials and `SECURESCAN_AUTH_REQUIRED=1` is set (lockout protection)
- Per-route scope enforcement. Every `/api/*` route now declares a required scope via `Depends(require_scope(...))`. A new regression-guard test (`test_all_routes_have_explicit_scope`) enumerates `app.routes` and fails if any non-public route is missing a scope — preventing future scope-coverage holes. → Scopes.
- `SECURESCAN_AUTH_REQUIRED=1` startup safety. When set with no configured credentials, the backend logs CRITICAL and exits with code 2. Catches misconfigured deploys before they accept their first request unauthenticated.
- Lockout protection. Revoking the last admin DB key when `AUTH_REQUIRED=1` and no env-var key is set returns 409 with a human-readable message. Operators can still delete admin keys freely when an env-var fallback exists.
- `/settings/keys` dashboard page. Lists keys (name, prefix, scopes, created, last used, status), with a "New key" modal that enforces the one-shot secret-reveal contract.
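The documented key shape and salted-hash storage can be sketched as follows; the id/secret alphabets and the salt handling are assumptions for illustration:

```python
import hashlib
import secrets

def mint_key() -> tuple[str, str, str]:
    """Mint a key in the documented `ssk_<10-char id>_<32-char secret>`
    shape and compute its salted-SHA-256 storage digest. Only the digest
    and salt are stored; the plaintext is shown exactly once."""
    key_id = secrets.token_hex(5)           # 10 hex chars
    secret = secrets.token_urlsafe(24)      # 24 bytes -> exactly 32 chars
    plaintext = f"ssk_{key_id}_{secret}"
    salt = secrets.token_hex(8)
    digest = hashlib.sha256((salt + plaintext).encode()).hexdigest()
    return plaintext, salt, digest
```

Verification re-hashes the presented key with the stored salt and compares digests, so a database leak never exposes usable credentials.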
Changed
- `auth.py` was rewritten to support both the legacy env-var path and DB keys. Bug fix: an explicit-but-bogus key now always fails with 401 — even when no DB keys remain unrevoked. The previous logic would fall back to dev mode in that scenario, letting a revoked key keep working until at least one other key was created. Caught during integration; regression test in `test_revoked_db_key_rejected_when_no_env_var`.
- The `Principal` resolved by `require_api_key` is stashed on `request.state.principal`, so per-route scope checks don't re-trigger DB writes.
Tests
- 738 → 790 (+52).
Backward compatibility
- The `SECURESCAN_API_KEY` env var still works exactly as before; it is treated as a synthetic principal with all scopes.
- Dev mode (no env var, no DB keys) is unchanged: every request passes through and scope checks fail open.
0.7.0 - 2026-04-29
A workflow + observability release: the dashboard now lets you triage individual findings (status + comments) with verdicts that survive across scans, and replaces 2-second polling with a live event stream so a running scan shows real-time per-scanner progress instead of a frozen "running" badge.
Added
- Findings triage workflow. Each finding now has an optional triage state (`new`, `triaged`, `false_positive`, `accepted_risk`, `fixed`, `wont_fix`) and a per-finding comments thread. State is keyed on the stable `fingerprint`, so a "false positive" verdict on a finding survives subsequent rescans of the same target — and even survives `DELETE /scans/{id}`. New API:
  - `PATCH /api/v1/findings/{fingerprint}/state`
  - `GET /api/v1/findings/{fingerprint}/comments`
  - `POST /api/v1/findings/{fingerprint}/comments`
  - `DELETE /api/v1/findings/{fingerprint}/comments/{comment_id}`

  → Triage workflow.
- Triage UI. New compact "Status" column in the findings table.
  The default-hide set is `{false_positive, accepted_risk, wont_fix}`; `fixed` is intentionally NOT default-hidden, so a "fixed" finding reappearing in a later scan stays visible.
- Real-time scan progress (SSE). `GET /api/v1/scans/{scan_id}/events` streams scan + per-scanner lifecycle events. Late subscribers get a 200-event replay buffer. Terminal events are never dropped on subscriber backpressure. → Real-time scan progress.
- Scan-detail page goes live. A new `<ScanProgressPanel>` sits above the StatLine while a scan is `running` / `pending`. `EventSource` replaces the 2-second poll, with a polling fallback when an API key is configured (the v0.9.0 SSE-with-auth path closes that gap).
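A stable fingerprint is what lets triage verdicts outlive rescans. The field set SecureScan actually hashes is not documented here; this sketch only shows the shape — hash identity fields that survive rescans, exclude line numbers and timestamps that shift:

```python
import hashlib

def fingerprint(scanner: str, rule_id: str, path: str, snippet: str) -> str:
    """Hypothetical stable fingerprint: a SHA-256 over fields that
    identify the finding across rescans. Field choice is an assumption."""
    material = "\x1f".join((scanner, rule_id, path, snippet))
    return hashlib.sha256(material.encode()).hexdigest()
```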
Changed
- `GET /api/v1/scans/{id}/findings` now returns `FindingWithState` objects — every existing field, plus an optional `state` payload. The bare `Finding` model is unchanged, so SARIF / JSON / baseline / CLI exporters keep their existing contract.
Deployment notes
- The SSE event bus is a module-level singleton. SecureScan now
requires `--workers 1` for `/api/v1/scans/{id}/events` to work correctly. → Single-worker constraint.
Tests
- 709 → 738 (+29).
0.6.1 - 2026-04-29
A polish release focused on production readiness on real (large)
scans. The 20k-finding scan that shipped during v0.6.0 testing
exposed three issues — a stale-running UI badge, a janky search box,
and a missing delete-scan endpoint — all fixed here. We also added
structured scan lifecycle logs, a user-scoped .env loader so
credentials persist across restarts, and a smarter ZAP install hint.
Added
- `DELETE /api/v1/scans/{id}` removes a scan and cascades its findings. Returns 204 on success, 409 if the scan is `running` / `pending` (cancel first), 404 otherwise.
- Structured INFO logging for the scan lifecycle on the `securescan.scan` logger: `scan.start`, `scanner.start`, `scanner.complete` (with `duration_s` and `findings_count`), `scanner.skipped`, `scanner.failed`, `scan.complete`, `scan.failed`, `scan.cancelled`. Tail `/tmp/securescan-backend.log` to debug a scan in flight.
- `~/.config/securescan/.env` (or `$XDG_CONFIG_HOME/securescan/.env`) is auto-loaded at backend startup. → Local config (.env).
Changed
- `frontend/src/app/scan/[id]/page.tsx` polling no longer refetches the entire findings array every 2 seconds. While a scan is running, only the lightweight scan-status record is polled; findings and summary load once on mount and once when status flips to `completed`.
- `FindingsTable` is responsive again on 20k-finding scans. The search input uses React 19's `useDeferredValue`; a single memoized projection replaces per-keystroke string normalization.
Tests
- 690 → 709 (+19).
0.6.0 - 2026-04-29
This release pairs an end-to-end frontend redesign with two backend
durability features. The dashboard moves off neon traffic-light
colors and ad-hoc card grids onto an OKLCH design system with a
single-hue severity ramp, dense data-table layouts, a new app shell
(sidebar + sticky topbar + ⌘K command palette), and a brand-new
/diff page for PR-style scan comparison. On the API side, all
routes are now mounted under /api/v1/... (legacy /api/... paths
still work, with Deprecation / Sunset response headers), and
POST /scans is protected by an in-memory per-key token-bucket rate
limiter.
Added
- `/api/v1` versioning prefix; legacy `/api/*` paths return `Deprecation`, `Link`, and `Sunset` (Dec 31 2026) response headers. → Versioning & deprecation.
- In-memory rate limiting on `POST /api/scans` and `POST /api/v1/scans`; per-API-key token bucket, configurable via `SECURESCAN_RATE_LIMIT_PER_MIN` / `_BURST` / `_ENABLED`. → Rate limits.
- New `/diff` dashboard page — PR-style scan-vs-scan diff with base/head pickers, summary chips, and tabbed findings. → Diff & compare.
- Command palette (⌘K) for navigation, recent scans, and quick actions.
- Theme toggle and `next-themes` integration; dark default with light theme support.
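The token-bucket mechanics behind the rate limiter can be sketched in a few lines (illustrative only — this is not the backend's implementation; the parameter names echo the env vars, not real code):

```python
import time

class TokenBucket:
    """Per-API-key token bucket: refill at `rate_per_min` tokens/min, `burst` capacity."""

    def __init__(self, rate_per_min: float, burst: int):
        self.rate_per_min = rate_per_min
        self.burst = burst
        self.tokens = float(burst)        # start full: a new key gets its burst
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate_per_min / 60)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False                      # the limiter maps this to HTTP 429

bucket = TokenBucket(rate_per_min=10, burst=3)
print([bucket.allow() for _ in range(4)])  # → [True, True, True, False]
```

The key property: a quiet key accumulates up to `burst` tokens, so short spikes pass, while sustained traffic is held to the per-minute rate.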
Changed
- Frontend redesigned end-to-end. New OKLCH design tokens, single-hue
severity ramp (replaces neon traffic-light coloring), Geist
Sans/Mono typography, restrained color strategy per the new
DESIGN.md. → Dashboard tour.
Earlier releases
For v0.5.0, v0.4.0, v0.3.0, v0.2.0, and v0.1.0, see
CHANGELOG.md
on GitHub. Highlights:
- 0.5.0 — API key auth, structured JSON logging, request-ID correlation, `/ready` distinct from `/health`, `Scan.scanners_run` / `scanners_skipped` persisted.
- 0.4.0 — `pr-mode: inline` GitHub Action mode with one inline comment per finding, `suggestion` blocks, idempotent re-runs via fingerprint markers.
- 0.3.0 — `.securescan.yml` config, inline `# securescan: ignore` comments, baselines, `securescan compare`, `severity_overrides`, `--show-suppressed` / `--no-suppress`, `--ai` / `--no-ai`.
- 0.2.0 — `securescan diff`, deterministic SARIF output, GitHub Action, container image, signed releases (cosign + sigstore-python), per-finding fingerprints.
- 0.1.0 — Initial public release. 14 scanners, FastAPI backend, Next.js dashboard, SBOM (CycloneDX + SPDX), AI enrichment, OWASP / CIS / PCI-DSS / SOC 2 compliance mapping.
Release process
How a SecureScan release happens — from the tag push to the published,
signed artifacts. The full pipeline is in
.github/workflows/release.yml.
Trigger
Releases are strictly tag-triggered:
on:
push:
tags:
- 'v[0-9]+.[0-9]+.[0-9]+'
Pre-release tags (v0.9.0-rc1) and arbitrary v* tags are
deliberately excluded. Manual workflow_dispatch is also
intentionally NOT offered — its OIDC identity is branch-based, not
tag-based, which would break the cosign / sigstore verification
commands published in the release notes (those identities are
pinned to refs/tags/<tag>).
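The accepted tag shapes can be sanity-checked locally; this is an approximate regex translation of the workflow's glob pattern (GitHub's own filter-pattern matcher is the authority):

```python
import re

# Regex equivalent (an approximation) of the workflow filter
# 'v[0-9]+.[0-9]+.[0-9]+' — '.' is literal in GitHub's glob syntax,
# escaped explicitly here, and the match covers the full tag name.
TAG_RE = re.compile(r"^v[0-9]+\.[0-9]+\.[0-9]+$")

def is_release_tag(tag: str) -> bool:
    return TAG_RE.fullmatch(tag) is not None

print(is_release_tag("v0.11.0"))     # → True  (stable release: triggers)
print(is_release_tag("v0.9.0-rc1"))  # → False (pre-release: excluded)
print(is_release_tag("v1.2"))        # → False (arbitrary v* tag: excluded)
```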
The cosign verification command in Verifying signed artifacts includes:
--certificate-identity 'https://github.com/Metbcy/securescan/.github/workflows/release.yml@refs/tags/v0.11.0'
If the workflow could be re-run via workflow_dispatch, that
identity would have a different ref and the verification would
fail. Tag-trigger only ensures the published verification commands
always work against the published artifacts.
Pipeline
flowchart LR
    Tag["Tag pushed"] --> Pre["preflight"]
    Pre --> Wheel["build-wheel<br/>sigstore-python signed"]
    Pre --> Image["build-image<br/>cosign keyless signed"]
    Wheel --> PyPI["publish-pypi"]
    Wheel --> Rel["publish-release"]
    Image --> Rel
Five jobs:
1. Preflight (cheap, fast-fail)
Verifies:
- The pushed tag matches `pyproject.toml`'s `version`.
- `CHANGELOG.md` has a section for that version.
Fails the whole release before spending time on builds if the metadata is out of sync.
2. Build-wheel
- Builds the wheel + sdist on a single Linux runner (pure-Python; no native code, no per-platform matrix needed).
- Signs each artifact with `sigstore-python` keyless via the workflow's OIDC identity. The bundle is the `*.sigstore.json` file alongside the artifact.
- Uploads the artifacts and bundles as workflow outputs for the PyPI + release jobs.
3. Build-image
- Builds the multi-arch container (amd64 + arm64) and pushes to `ghcr.io/metbcy/securescan` with the immutable per-release tag `v<version>` (e.g. `v0.11.0`). `:latest` is not published — pin to a tag.
- Signs by digest with `cosign` keyless (Sigstore via OIDC). The signature attests the `(digest, identity)` pair, not the tag, so re-pointing a tag never changes what was signed.
4. Publish-pypi (OIDC Trusted Publishers)
- Uploads the signed wheel + sdist to PyPI via OIDC Trusted Publishers. PyPI verifies the GitHub Actions OIDC token signed for this `repo + workflow file + environment` combination and mints a short-lived upload token; no `PYPI_TOKEN` secret is required.
- Runs in the `pypi` GitHub Environment so PyPI's Trusted Publisher configuration can scope the trust narrowly.
- `skip-existing: true` makes re-runs of the same tag idempotent.
- Note: PyPI does not host the `*.sigstore.json` bundles. They are attached to the GitHub Release instead.
One-time setup (configure a pending publisher on PyPI) is documented
in docs/PUBLISHING.md.
5. Publish-release
- Extracts the matching `CHANGELOG.md` section.
- Appends signature-verification instructions (the literal commands from Verifying signed artifacts with `<tag>` and `<version>` substituted).
- Creates the GitHub Release with the signed wheel, sdist, sigstore bundles, and SBOM attached.
What artifacts ship
Per release, the GitHub Release page hosts:
| Artifact | Format / signing |
|---|---|
| `securescan-<version>-py3-none-any.whl` | wheel (sigstore-python signed) |
| `securescan-<version>-py3-none-any.whl.sigstore.json` | sigstore bundle for the wheel |
| `securescan-<version>.tar.gz` | sdist (sigstore-python signed) |
| `securescan-<version>.tar.gz.sigstore.json` | sigstore bundle for the sdist |
The container image is published separately to GHCR:
ghcr.io/metbcy/securescan:v<version> (e.g. v0.11.0 — immutable, signed)
:latest is not published. Always pin to a vX.Y.Z tag (or, in
production, by digest — see
Verifying signed artifacts).
Cosign signature attestations are stored alongside the image in GHCR
(use cosign verify to check, see
Verifying signed artifacts).
Concurrency
concurrency:
group: release-${{ github.ref }}
cancel-in-progress: true
A second push of the same tag (rare; e.g. force-push after a fixup) cancels the in-flight run rather than racing it. Different tags release concurrently.
Permissions
permissions:
contents: write # create release, upload assets
id-token: write # cosign + sigstore-python keyless OIDC
packages: write # ghcr.io push
The id-token: write permission is what makes keyless signing work —
the runner's OIDC token is exchanged with Sigstore's Fulcio for a
short-lived signing certificate. No long-lived key material is
involved.
Manual reruns
If a release job fails (transient network error, PyPI rate limit), re-run the failed job from the GitHub Actions UI on the original tag-push event. The same OIDC identity is re-used — verification commands remain valid.
Do not:
- Push a new tag for the same version. SemVer says tags are immutable; treating them otherwise will break consumer pins.
- Manually run the workflow via
workflow_dispatch. Disabled on purpose — see Trigger.
Pre-flight checklist (for the maintainer)
When cutting a release:
- Bump `backend/pyproject.toml` `version`.
- Move the `[Unreleased]` section in `CHANGELOG.md` to a new `[<version>] - <date>` section.
- Update the version-compare links at the bottom of `CHANGELOG.md`.
- Update `README.md`'s "What's new in vX.Y.Z" callout (optional for patch releases).
- Commit + merge to `main`.
- Tag: `git tag v<version>` and `git push --tags`.
The release workflow handles everything else.
How to verify a release as a downstream user
End-to-end:
# 1. Wheel
pip download securescan==0.11.0 --no-deps -d ./out
gh release download v0.11.0 --repo Metbcy/securescan \
--pattern 'securescan-0.11.0-py3-none-any.whl.sigstore.json' --dir ./out
pip install sigstore
sigstore verify identity \
--cert-identity 'https://github.com/Metbcy/securescan/.github/workflows/release.yml@refs/tags/v0.11.0' \
--cert-oidc-issuer 'https://token.actions.githubusercontent.com' \
--bundle ./out/securescan-0.11.0-py3-none-any.whl.sigstore.json \
./out/securescan-0.11.0-py3-none-any.whl
# 2. Container image
cosign verify ghcr.io/metbcy/securescan:v0.11.0 \
--certificate-identity 'https://github.com/Metbcy/securescan/.github/workflows/release.yml@refs/tags/v0.11.0' \
--certificate-oidc-issuer 'https://token.actions.githubusercontent.com'
Both should succeed. See Verifying signed artifacts for failure-mode troubleshooting.
Source
- Workflow: `.github/workflows/release.yml`.
- Container build: `.github/workflows/container.yml`.
- Default test run: `.github/workflows/securescan.yml`.
Next
- Release cadence — when to expect new minor / patch / major releases.
- Verifying signed artifacts — the consumer side.
- Changelog — the per-release feature record.
- Contributing — the path from PR to release.
Release cadence
SecureScan releases predictably so adopters can plan upgrades.
Schedule
- Minor releases (`vX.Y.0`): first Monday of every month. New features, deprecations, larger refactors.
- Patch releases (`vX.Y.Z`, Z>0): as needed, typically same-day for fixes worth shipping.
- Major releases (`vX.0.0`): when breaking API/CLI changes accumulate. No fixed schedule; called out at least one minor release in advance.
What's a breaking change?
Anything that requires a user to change a command, config file, or CI workflow to keep working:
- Removed or renamed CLI flags
- Removed or renamed `.securescan.yml` keys (gated by the `version:` field)
- Removed REST API endpoints (we keep `/api/v1/...` stable; `/api/...` legacy is `Deprecation`-headered for 1 year before removal)
- Container image entrypoint changes
Adding new options, new endpoints, or new env vars is NOT breaking.
Pinning
- `Metbcy/securescan@v1` — floating major; safe for most users.
- `Metbcy/securescan@v0.11.0` — exact pin; safe for fully-deterministic CI.
- `pip install securescan` — pulls the latest stable from PyPI.
- `pip install securescan==0.11.0` — exact pin.
Deprecation policy
When a feature is deprecated:
- Documented in CHANGELOG under a `### Deprecated` section.
- Runtime warning emitted (Python: `DeprecationWarning`; CLI: stderr message).
- Removal scheduled at least 2 minors out (≥60 days notice).
Glossary
Terms used across the SecureScan documentation, codebase, and PRs.
A — D
Admin scope. The highest of the three scope levels. Grants API-key management and webhook management in addition to read+write. Reserve for one operator break-glass identity.
Audit trail. The structured record of who did what to a finding —
metadata.suppressed_by for suppression, finding_states.updated_by
for triage verdicts, comments thread for discussion. SecureScan does
not delete this record on rescan or on DELETE /scans/{id}.
Backplane. The hypothetical multi-process pubsub layer (Redis or similar) that would let SecureScan run multiple uvicorn workers / instances without losing SSE or webhook FIFO ordering. Roadmap; v0.9.0 is in-process only. See Single-worker constraint.
Baseline. A canonicalized, byte-deterministic JSON snapshot of a scan's findings, used to suppress legacy findings on later runs. See Suppression → baseline. Distinct from the baseline scanner, the host-config audit family.
Compliance tag. A string like OWASP-A03, PCI-DSS-6.5.1,
SOC2-CC7.1 attached to a finding. Computed by the compliance mapper
from CWE / rule_id / keywords. See Compliance.
DAST. Dynamic Application Security Testing. Runs against a
live URL. SecureScan ships builtin_dast (header / cookie /
info-disclosure) and zap (full ZAP active+passive). Contrast with
SAST, which runs against source code.
Determinism contract. SecureScan's promise that every renderer produces byte-identical output for the same inputs. Foundational for the PR-comment upsert and SARIF Security-tab dedup. See Architecture: determinism contract.
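A minimal sketch of how a renderer can honor such a contract — stable sort key plus canonical JSON serialization (the sort-key fields here are illustrative, not SecureScan's actual key):

```python
import json

def render_json(findings: list[dict]) -> str:
    """Byte-deterministic rendering: stable total order + canonical serialization."""
    # Sort by a total order so input order never leaks into the output.
    ordered = sorted(
        findings,
        key=lambda f: (f["severity_rank"], f["file_path"], f["fingerprint"]),
    )
    # sort_keys + fixed separators => byte-identical output for equal inputs.
    return json.dumps(ordered, sort_keys=True, separators=(",", ":"), ensure_ascii=False)

a = [
    {"fingerprint": "b", "file_path": "x.py", "severity_rank": 1},
    {"fingerprint": "a", "file_path": "x.py", "severity_rank": 1},
]
print(render_json(a) == render_json(list(reversed(a))))  # → True
```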
Dev mode. Backend mode when no env-var key AND no DB keys are
configured AND SECURESCAN_AUTH_REQUIRED=0. Every request passes
through; scope checks fail-open. Convenient for local dev,
unacceptable for anything else. See
Authentication overview.
E — H
Event bus. The in-process pub/sub powering SSE live progress.
Module-level singleton; one per uvicorn worker. Source:
backend/securescan/events.py. See
Real-time scan progress.
Event token. A short-lived (5-minute) HMAC-signed token that
authorizes one specific scan's SSE stream. Exists because browsers
cannot send custom headers on EventSource. See
SSE event tokens.
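A sketch of how such a token could be minted and verified — the encoding and field layout here are assumptions; only the shape (HMAC over scan id + expiry, constant-time compare) tracks the description above:

```python
import base64
import hashlib
import hmac
import time

SECRET = b"server-side-secret"  # illustrative; not SecureScan's actual key material

def mint_event_token(scan_id: str, ttl: int = 300) -> str:
    """Token authorizing one scan's SSE stream for `ttl` seconds (default 5 min)."""
    expires = str(int(time.time()) + ttl)
    sig = hmac.new(SECRET, f"{scan_id}:{expires}".encode(), hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(f"{scan_id}:{expires}:{sig}".encode()).decode()

def verify_event_token(token: str, scan_id: str) -> bool:
    scan, expires, sig = base64.urlsafe_b64decode(token).decode().rsplit(":", 2)
    expected = hmac.new(SECRET, f"{scan}:{expires}".encode(), hashlib.sha256).hexdigest()
    return (
        scan == scan_id                       # scoped to one specific scan
        and int(expires) > time.time()        # short-lived
        and hmac.compare_digest(sig, expected)
    )

tok = mint_event_token("scan-42")
print(verify_event_token(tok, "scan-42"))  # → True
print(verify_event_token(tok, "scan-99"))  # → False (wrong scan)
```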
Fingerprint. A SHA-256 over
(scanner | rule_id | file_path | normalized_line_context | cwe),
stable across scans of the same target. The cross-scan identity for
findings; what triage state, comments, and SARIF
partialFingerprints are keyed on. See
Findings & severity.
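A sketch of the recipe, with a deliberately naive whitespace-collapsing normalizer (the real normalizer handles more cases):

```python
import hashlib
import re

def fingerprint(scanner: str, rule_id: str, file_path: str,
                line_context: str, cwe: str) -> str:
    """SHA-256 over the glossary's tuple; normalization shown here is illustrative."""
    normalized = re.sub(r"\s+", " ", line_context).strip()
    material = "|".join([scanner, rule_id, file_path, normalized, cwe])
    return hashlib.sha256(material.encode()).hexdigest()

fp_a = fingerprint("bandit", "B105", "app/config.py", "PASSWORD =  'hunter2'", "CWE-259")
fp_b = fingerprint("bandit", "B105", "app/config.py", "PASSWORD = 'hunter2'", "CWE-259")
fp_c = fingerprint("bandit", "B105", "app/config.py", "PASSWD = 'hunter2'", "CWE-259")
print(fp_a == fp_b)  # → True  (whitespace-only reformat: same identity)
print(fp_a == fp_c)  # → False (renamed variable: new identity)
```

The second comparison is why a variable rename re-opens a finding — see the FAQ entry on reformats below.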
FIFO ordering (per webhook). SecureScan's promise that two
deliveries to the same webhook subscription are processed in
created_at order. Different webhooks dispatch concurrently. See
Webhooks.
Health probe. /health (liveness — process up) and /ready
(readiness — DB + scanners loaded). Both public regardless of auth.
I — L
IaC. Infrastructure as Code. SecureScan's iac scan type covers
Terraform, Kubernetes, Helm, CloudFormation, and Dockerfiles via
checkov and dockerfile scanners.
Inline ignore. A # securescan: ignore RULE-ID (or // securescan: ignore-next-line ...)
comment on the line a finding fires for, suppressing it. The most
local of the three suppression mechanisms. See
Suppression.
Inline review mode. The pr-mode: inline GitHub Action setting
that posts findings as inline review comments anchored on the
affected lines, instead of a single summary comment. See
GitHub Action.
Lockout protection. The 409 response from DELETE /api/v1/keys/{id}
when revoking would zero out admin credentials and the env-var
fallback is unset. Prevents the operator from locking themselves out.
M — P
OKLCH. The OKLab cylindrical color space used for SecureScan's
design tokens. The --accent, --bg, severity-ramp colors are all
expressed in OKLCH for predictable contrast. See
DESIGN.md.
Orchestrator. The asyncio task started by POST /api/v1/scans
that drives scanner subprocesses, captures their output, persists
findings, and emits lifecycle events. Source: _run_scan in
backend/securescan/api/scans.py.
Principal. The authenticated caller's identity. A dataclass with
(id, scopes, source) where source is "env", "db", or "dev".
Stashed on request.state.principal for downstream use. See
backend/securescan/auth.py.
R — S
Read scope. The lowest of the three scopes. Lets the caller list scans, read findings, view SBOM, see notifications. Cannot start a scan or set triage. See Scopes.
Replay buffer. A 200-event buffer per scan that lets a late SSE subscriber (tab refresh mid-scan) reconstruct full state. Retained 30s after a terminal event. See Real-time scan progress → replay buffer.
SAST. Static Application Security Testing. Runs against source
files (no execution). SecureScan's code scan type — semgrep,
bandit, secrets, git-hygiene.
SBOM. Software Bill of Materials. SecureScan generates CycloneDX 1.5 and SPDX 2.3. See SBOM.
Scope. A capability declaration on an API key — read, write,
or admin. Each route declares which scopes it accepts. See
Scopes.
SSE. Server-Sent Events. SecureScan's mechanism for streaming
live scan progress to the dashboard. One-way server-to-client;
compatible with the browser's EventSource API.
Sticky session. A load-balancer pattern that hashes a request
attribute (e.g. scan_id) to consistently route to one backend
instance. Required when scaling SecureScan horizontally because the
event bus is per-instance.
Suppression. Filtering a finding out of CI output. Three mechanisms with fixed precedence: inline > config > baseline. See Suppression. Distinct from triage, which records a verdict.
T — Z
Triage. Recording a human verdict on a finding (new,
triaged, false_positive, accepted_risk, fixed, wont_fix).
Per-fingerprint, durable across rescans. See
Triage workflow.
Upsert marker. An HTML comment in a PR comment body
(<!-- securescan:diff -->) that lets the action find and update
its existing comment instead of posting a new one each push. See
GitHub Action.
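The upsert can be sketched against an in-memory comment list (the real action issues GitHub API calls; this function is illustrative):

```python
MARKER = "<!-- securescan:diff -->"

def upsert_comment(existing_comments: list[dict], new_body: str) -> list[dict]:
    """Find the comment carrying the marker and edit it in place;
    post a fresh one only if none exists yet."""
    body = f"{MARKER}\n{new_body}"
    for comment in existing_comments:
        if MARKER in comment["body"]:
            comment["body"] = body        # update → no comment spam on re-push
            return existing_comments
    existing_comments.append({"body": body})
    return existing_comments

comments = [{"body": "unrelated review comment"}]
upsert_comment(comments, "3 NEW findings")
upsert_comment(comments, "1 NEW finding")   # second push edits, doesn't append
print(len(comments))  # → 2
```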
Webhook. An outbound HMAC-signed HTTP delivery of a scan lifecycle event. v0.9.0 feature. See Webhooks.
Webhook delivery. A row in webhook_deliveries. Persisted
before the HTTP call so retries survive backend restarts.
Write scope. The middle scope. Adds: start / cancel / delete
scans, set triage state, mark notifications read. The default for a
new key alongside read. See Scopes.
ZAP. OWASP Zed Attack Proxy. SecureScan's zap scanner connects
to a separately-running ZAP daemon. Not bundled in the container
because of size; install on the host or run as a sidecar.
Acronyms
| Acronym | Stands for |
|---|---|
| CWE | Common Weakness Enumeration |
| CVE | Common Vulnerabilities and Exposures |
| DAST | Dynamic Application Security Testing |
| HMAC | Hash-based Message Authentication Code |
| IaC | Infrastructure as Code |
| OIDC | OpenID Connect |
| OKLCH | OKLab cylindrical color space |
| PCI-DSS | Payment Card Industry Data Security Standard |
| OWASP | Open Worldwide Application Security Project |
| SARIF | Static Analysis Results Interchange Format |
| SAST | Static Application Security Testing |
| SBOM | Software Bill of Materials |
| SOC 2 | Service Organization Control 2 |
| SSE | Server-Sent Events |
| TLS | Transport Layer Security |
Next
FAQ
Common questions, with links into the rest of the documentation for the long-form answers.
General
Is SecureScan a SaaS?
No. It is a self-hosted Python package + container image. Run it on your laptop, your CI runner, your internal server, your Kubernetes cluster. There is no cloud-hosted instance to point a browser at. See Install.
What scanners are included?
14 across code, dependency, iac, baseline, dast, network.
Semgrep, Bandit, Trivy, Checkov, Safety, Gitleaks, nmap, ZAP, plus
built-ins for secrets / DAST / Dockerfile / baseline / git-hygiene /
licenses / npm-audit. Full list in
Supported scanners.
Does SecureScan have its own vulnerability database?
No. It orchestrates the open-source scanners you already trust. Trivy and Safety bring their own DBs; SecureScan does not maintain or duplicate them. This is a deliberate non-goal — see the README's "Non-goals" section.
Does it work in CI?
Yes — that's the primary use case. The Metbcy/securescan@v1 GitHub
Action wraps securescan diff, posts an upserted PR comment of NEW
findings only, and uploads SARIF. See GitHub Action.
For non-GitHub CI (GitLab, Jenkins, CircleCI), use securescan diff
directly with --output github-pr-comment or --output sarif.
Determinism
Why is AI enrichment off in CI?
Because it is non-deterministic, and the v0.2.0 contract (single upserted PR comment, deduped SARIF) depends on byte-identical output for the same inputs. AI explanations vary run-to-run, so they would break the upsert.
CI=true (set automatically by GitHub Actions, GitLab CI, etc.)
flips AI off. Pass --ai to force it on; --no-ai to be explicit.
How do I get reproducible CI output?
- Don't enable AI (`CI=true` handles this).
- Pin the SecureScan version (`Metbcy/securescan@v0.11.0`, not `@v1`).
- Pin scanner versions inside your runner (use the container — `prefer-image: true`).
- Use snapshot-mode diff: `securescan scan ... --output json` on each side, then `securescan diff ... --base-snapshot ... --head-snapshot ...`.
See How scans work → Determinism.
A trivial reformat made every finding "new". Why?
Almost certainly something changed in the
normalized_line_context of the matched line — e.g. you renamed a
variable used on that line, or moved the line to a different file.
The fingerprint hash includes those, so the identity changes.
If the change should NOT have re-opened the finding, file an issue with a reproducer; we tune the normalization to handle real-world reformats.
Auth
Do I need API keys for local dev?
No. With SECURESCAN_API_KEY unset and SECURESCAN_AUTH_REQUIRED=0
(both defaults), the backend runs in dev mode — every request passes
through. The startup banner warns you. See
Authentication overview.
Can I rotate the env-var key without downtime?
Not exactly. The env-var key is compared on every request, but the process only sees the environment it started with — a rotated value isn't picked up until the process restarts.
For zero-downtime rotation, issue a DB key first, switch your consumer, then update / restart at your leisure. See API keys → Lifecycle.
Why salted SHA-256 and not bcrypt / argon2?
Because the keys are 192-bit random secrets — brute-forcing the hash is already infeasible without a memory-hard KDF. Adding bcrypt buys nothing except a hard dependency and per-request CPU cost on the auth path. See API keys → Why salted SHA-256.
If you have a different threat model — e.g. you let users pick weak keys — use longer keys, not a slower KDF.
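The scheme can be sketched as follows (field layout and helper names are illustrative, not SecureScan's schema):

```python
import hashlib
import hmac
import secrets

def new_api_key() -> tuple[str, str, str]:
    """A 192-bit random key plus its salted SHA-256 record.
    Store (salt, digest); show the plaintext key exactly once."""
    key = secrets.token_urlsafe(24)   # 24 bytes = 192 bits of entropy
    salt = secrets.token_hex(16)
    digest = hashlib.sha256((salt + key).encode()).hexdigest()
    return key, salt, digest

def check_key(presented: str, salt: str, digest: str) -> bool:
    candidate = hashlib.sha256((salt + presented).encode()).hexdigest()
    return hmac.compare_digest(candidate, digest)   # constant-time compare

key, salt, digest = new_api_key()
print(check_key(key, salt, digest))      # → True
print(check_key("guess", salt, digest))  # → False
```

Since the key space is 2^192, a fast hash is fine: there is no dictionary of likely keys to grind through, which is the scenario bcrypt/argon2 exist for.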
How do I lock out a leaked key immediately?
curl -X DELETE -H "X-API-Key: $ADMIN_KEY" \
http://your-backend/api/v1/keys/<id>
The next request with that key returns 401. No cache, no propagation delay. SSE event tokens bound to the now-revoked key also fail at the rehydrate step at connect time, even if the token's HMAC and TTL are still valid. See SSE event tokens.
Why doesn't the API have OAuth / SSO?
Because SecureScan is intentionally an internal-tools / SRE shape. Adding OIDC / SSO inside the backend would force every operator through a much heavier integration than they need. Instead: terminate authentication in front of SecureScan with oauth2-proxy / Cloudflare Access / AWS ALB OIDC, and treat SecureScan's API keys as service identities behind the proxy.
Scaling
Can I run multiple uvicorn workers?
No. The event bus and webhook dispatcher are in-process singletons.
--workers 2+ silently breaks SSE and breaks webhook FIFO ordering.
See Single-worker constraint.
How do I scale horizontally?
Run multiple separate backend deployments behind a sticky-session
load balancer keyed on scan_id. Each deployment is single-worker;
all of one scan's lifecycle happens on the same instance.
A Redis pubsub backplane is on the roadmap to remove this constraint.
My SSE connections drop after 60s. Why?
Some load balancers / reverse proxies close idle HTTP connections after a default idle timeout. SecureScan emits a 15-second keepalive comment on the SSE stream specifically to defeat this — but if your proxy aggressively closes HTTP/1.1 streams regardless, raise its idle timeout to at least 60s.
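The keepalive can be sketched as an async generator (illustrative — not the backend's actual SSE route):

```python
import asyncio

async def sse_stream(events: asyncio.Queue, keepalive: float = 15.0):
    """Yield SSE frames; when no event arrives within `keepalive` seconds,
    emit a comment frame. Lines starting with ':' are ignored by EventSource
    but count as traffic, so the proxy doesn't see an idle connection."""
    while True:
        try:
            event = await asyncio.wait_for(events.get(), timeout=keepalive)
        except asyncio.TimeoutError:
            yield ": keepalive\n\n"
            continue
        yield f"event: {event['type']}\ndata: {event['data']}\n\n"

async def demo():
    # Empty queue + tiny timeout so the demo returns a keepalive immediately.
    gen = sse_stream(asyncio.Queue(), keepalive=0.01)
    return await gen.__anext__()

print(repr(asyncio.run(demo())))  # → ': keepalive\n\n'
```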
Webhooks
Why is at-least-once the contract?
Because some receivers accept the request, succeed at their internal work, and then crash before responding 2xx. SecureScan retries because it cannot tell the difference between "you didn't receive it" and "you received it but failed to ack it." Receivers must be idempotent — see Webhooks → at-least-once.
Why retry 4xx?
A misconfigured receiver that returns 401 for a few seconds while it loads its keys should not lose deliveries within the 30-minute window. The cost (a few extra HTTP calls during a transient misconfig) is far smaller than the cost (lost notifications) of giving up immediately. See Webhooks.
Can I rotate a webhook secret?
Not in place. To rotate: delete + recreate the webhook subscription. The secret is returned only on creation; there is no "reveal current secret" or "PATCH secret" endpoint by design.
If you need rotation without lost deliveries, create a new subscription pointing at the same URL, switch your receiver to accept both the old and new secret for a transition window, then delete the old subscription.
Slack/Discord don't verify HMAC. Is that a problem?
It's a property of those receivers, not SecureScan. Both Slack and Discord webhook URLs are unauthenticated — anyone with the URL can post. Treat the URL itself as the secret. Don't share it; rotate it (regenerate at the provider, recreate the SecureScan subscription) on suspicion of leak.
If you need cryptographic verification end-to-end, route through a proxy you control that verifies HMAC and forwards into Slack / Discord with their URL.
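A receiver-side verification sketch for such a proxy — the header format and `sha256=` prefix are assumptions here; check the Webhooks page for the exact scheme your version uses:

```python
import hashlib
import hmac

def verify_webhook(secret: str, body: bytes, signature_header: str) -> bool:
    """HMAC-SHA256 over the raw request body, compared in constant time."""
    expected = "sha256=" + hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)

secret = "whsec_example"  # returned once at subscription creation
body = b'{"event": "scan.complete", "scan_id": "scan-42"}'
sig = "sha256=" + hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()

print(verify_webhook(secret, body, sig))               # → True
print(verify_webhook(secret, b'{"tampered":1}', sig))  # → False
```

Verify against the raw bytes as received — re-serializing the parsed JSON before hashing is the classic way to get spurious failures.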
Data
How do I back up my SecureScan data?
The SQLite DB at ~/.securescan/scans.db (or whatever
SECURESCAN_DB_PATH points at) holds everything: scans, findings,
triage state, API keys, webhooks, notifications. Use SQLite's
.backup command on a cron — it works while the backend is
running.
Don't forget ~/.config/securescan/.env for the ZAP credentials
and any other persisted env vars.
Can I delete old scans?
Yes:
curl -X DELETE -H "X-API-Key: $K" http://your-backend/api/v1/scans/$SCAN_ID
Findings are cascade-deleted. Triage verdicts and per-finding comments persist because they're keyed on cross-scan fingerprint, not scan id. They reactivate when the same finding reappears in a later scan. See Triage workflow.
Can I bulk-export findings?
Yes:
- `securescan scan ... --output csv` — one row per finding.
- `securescan scan ... --output json` — full record set.
- `securescan scan ... --output sarif` — SARIF v2.1.0.
Or query the API: GET /api/v1/scans/{id}/findings returns the
JSON shape directly.
Behavior
Why doesn't DELETE /scans/{id} clear my triage verdicts?
Because triage state is keyed on the cross-scan fingerprint, not the scan id. The whole point of fingerprinted triage is that "this finding is a false positive" outlives the scan that produced the original instance. See Triage workflow.
If you want to clear a verdict explicitly, set the status back to
new (PATCH the state with {"status": "new"}).
Why isn't every scan emitting a notification?
scan.complete only creates a notification when findings_count > 0.
This is on purpose: zero-finding scans are the common case in a
healthy CI pipeline, and we don't want to spam the bell with 50
"all clear" notifications a day. scan.failed and scanner.failed
notify regardless. See Notifications.
My CI run is slow because Trivy is downloading its DB. What do I do?
The first Trivy run on a fresh runner downloads the vulnerability DB (~30 seconds). Speed it up by:
- Caching `~/.cache/trivy` between runs (the GitHub Action can use `actions/cache`).
- Using the SecureScan container image (`prefer-image: true`) — the pre-built image ships with a recent DB baked in.
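An illustrative `actions/cache` step for the first option (the key naming is a choice, not a SecureScan convention; Trivy's DB refreshes frequently, so a prefix `restore-keys` entry keeps a recent copy warm across runs):

```yaml
- uses: actions/cache@v4
  with:
    path: ~/.cache/trivy
    # Exact key misses on each run; the prefix restore falls back to the
    # most recent cached DB, which Trivy then updates incrementally.
    key: trivy-db-${{ runner.os }}-${{ github.run_id }}
    restore-keys: trivy-db-${{ runner.os }}-
```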
Can I customize the severity ramp colors?
Yes, but it's a frontend change. Edit the --sev-* OKLCH custom
properties in
frontend/src/app/globals.css.
The dashboard tokenizes its severity rendering, so changing the
tokens is enough — no per-component override.
The deliberate constraint is that severity is a single tonal ramp,
not a traffic light. Going back to neon red/yellow/green is
discouraged; see
DESIGN.md.
Roadmap
When will multi-process pubsub land?
After v0.9.x. The plan is a Redis backplane behind a feature flag, followed by leader election for the webhook dispatcher (so FIFO ordering survives the multi-instance world). No firm date.
Will there be SaaS hosting?
No plans. SecureScan is positioned as an internal/SRE tool with the opposite shape of a SaaS — single-tenant, opinionated, deployed inside your perimeter. If a hosted offering ever ships it will be under a different name.
Can I get a feature added?
Open an issue with the use case. The
PRODUCT.md
captures the strategic principles that govern what goes in;
features that align (deterministic, density-favoring, CLI-as-source-of-truth)
are easier to land.
Next
- Glossary — precise definitions for terms used here.
- Changelog — what changed in each release.
- Contributing — how to ship a fix.
vs DefectDojo
TL;DR
DefectDojo and SecureScan solve different problems. DefectDojo is a vulnerability-management hub that ingests findings from many tools. SecureScan is a PR-loop scanner that runs the tools and posts a diff-aware PR comment. Many teams use both.
What each tool is
DefectDojo is a vulnerability management platform. Its job starts once findings already exist: import them from 150+ scanners, deduplicate, assign owners, track remediation SLAs, and report across products and engagements. It does not run scanners itself in any first-class way; it consumes their output.
SecureScan is a scan orchestrator with PR feedback. Its job
is to run 14 scanners (Semgrep, Bandit, Trivy, Checkov, ZAP, nmap, and
others) against a target, classify the resulting findings as
NEW / FIXED / UNCHANGED against the PR's base ref, and upsert a
single GitHub PR comment so the developer who opened the PR sees only
what their change introduced.
Where they overlap
Both surface findings, both have a web UI, both speak SARIF, and both support a triage workflow with status and comments (SecureScan's triage shipped in v0.7.0). The overlap is shallow: they sit at different points in the security lifecycle.
Where they don't
| Capability | DefectDojo | SecureScan |
|---|---|---|
| Aggregate findings from external tools | ✅ first-class | ❌ runs scanners directly |
| Diff-aware NEW/FIXED/UNCHANGED on PRs | ❌ | ✅ |
| Single upserted PR comment | ❌ | ✅ |
| Triage workflow (status + comments) | ✅ mature | ✅ v0.7.0+ |
| User/role management | ✅ first-class | ❌ single-tenant + API keys |
| Stable across-runs fingerprints | ❌ | ✅ |
| OSS license | BSD-3 | Apache-2.0 |
Using both
The two tools compose cleanly. SecureScan emits deterministic SARIF on every scan; DefectDojo has a SARIF importer. A common arrangement: the GitHub Action runs SecureScan on every PR (developer-facing PR loop), and a nightly job re-imports the latest scan's SARIF into DefectDojo for portfolio-level tracking, SLA reporting, and cross-product views. The PR loop stays fast and local; the long-term ledger lives in DefectDojo.
When to pick which
- Just SecureScan: small or mid-size engineering org, dev-first PR feedback is the priority, no existing central vuln-management practice yet, single team or single product.
- Just DefectDojo: large engineering org with established scanners already wired into CI, a security team that owns triage centrally, and an existing PR-comment story that the team is happy with.
- Both: SecureScan owns the dev-time PR loop (NEW/FIXED on every push), DefectDojo owns the long-term portfolio view (SLAs, engagements, cross-product reporting).
The choice is not adversarial. SecureScan does not aim to replace DefectDojo, and DefectDojo does not aim to replace the PR-loop. Pick the one that fits the gap you actually have today.
vs Trivy
TL;DR
SecureScan wraps Trivy. If you already use Trivy and just want a unified PR loop on top of it, SecureScan is the right next step. If you only need Trivy's coverage (SCA, IaC, container, secrets), use Trivy alone — it's a single binary and it's excellent at what it does.
What Trivy does
Trivy is the de-facto open-source scanner for software composition analysis (SCA), infrastructure-as-code (IaC), container images, filesystem scans, and basic secrets detection. It ships as a single Go binary, has a large vulnerability database, and is fast. For many teams whose only need is "scan our containers and lockfiles", Trivy alone is enough.
What SecureScan adds
SecureScan runs Trivy as one of 14 scanners. Around it, SecureScan adds
code SAST (Semgrep, Bandit), dedicated secrets detection,
infrastructure-as-code policies (Checkov), DAST against live web apps
(OWASP ZAP), network discovery (nmap), and others. On top of that
layer, SecureScan adds a diff-aware PR loop: every finding is
classified NEW / FIXED / UNCHANGED against the PR's base, and a
single PR comment is upserted (not appended) on every push. Findings
have stable fingerprints (v0.6.0+) so triage state survives rescans
(v0.7.0+), and SARIF output is byte-deterministic for CI use.
Capability matrix
| Capability | Trivy alone | SecureScan (wraps Trivy) |
|---|---|---|
| SCA / IaC / container scan | ✅ | ✅ (via Trivy) |
| Code SAST (Python, JS, Go, …) | ❌ | ✅ (via Semgrep, Bandit) |
| Secrets detection | ✅ basic | ✅ (Trivy + dedicated scanner) |
| DAST (live web app) | ❌ | ✅ (via OWASP ZAP) |
| Network scan | ❌ | ✅ (via nmap) |
| Diff-aware PR comment | ❌ | ✅ |
| Single upserted PR comment | ❌ | ✅ |
| Web dashboard | ❌ | ✅ |
| Triage state + comments | ❌ | ✅ (v0.7.0+) |
| Determinism (sorted, stable fingerprints) | partial | ✅ |
| OSS license | Apache-2.0 | Apache-2.0 |
When to pick Trivy alone
Pick Trivy alone if your CI surface is intentionally minimal, you only need SCA/IaC/container/secrets coverage, you don't want to host a dashboard, you prefer a single static binary, and you don't need diff-aware classification on PRs. Trivy is a sharp tool that does its job well; adding SecureScan to that picture is overhead you don't need.
When to pick SecureScan
Pick SecureScan if you want diff-aware PR comments that show only what the change introduced, code SAST in addition to SCA, a dashboard for triage and history, and a triage workflow that survives rescans. The trade-off is operating one more service.
Using both intentionally
Nothing wrong with running both. A common split: Trivy inside the container image build as a hard gate on the final artifact, SecureScan at PR-time across the whole repo for developer feedback. Different cadences, different audiences, no real overlap.
vs Snyk
TL;DR
Snyk is a SaaS application security platform with reachability analysis, polished UX, and per-seat pricing. SecureScan is self-hosted, deterministic, OSS Apache-2.0, and free. Use SecureScan when SaaS is a non-starter or determinism matters; use Snyk when reachability analysis or a managed product is the priority.
What Snyk does well
Snyk is a mature commercial product, and it deserves an honest accounting. It has reachability analysis on top of SCA, which materially reduces noise and is a real ASPM differentiator today. It has Snyk Code (proprietary SAST) with its own ML models, a large curated vulnerability database that is often ahead of public feeds, polished triage and reporting UI, and auto-generated fix PRs for many ecosystems. For teams that want a turnkey managed product and have the budget, Snyk is a defensible choice.
What SecureScan does differently
SecureScan is OSS Apache-2.0 and self-hosted. No source code, no scan
results, and no findings leave your infrastructure. The serialization
contract is byte-deterministic: re-running the same scan against the
same input produces SARIF that is identical down to the byte, which
matters for cache-friendly CI and for reproducible audits. The PR loop
classifies findings as NEW / FIXED / UNCHANGED against the PR
base and upserts a single comment per PR. You can read the source code
that scans your source code.
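The byte-determinism contract boils down to two rules: sort findings by a stable key, and serialize with fixed key order and fixed separators. This is a minimal sketch under those assumptions — the field names and function are illustrative, not SecureScan's actual serializer.

```python
import json

# Hedged sketch: deterministic serialization. Sorting by fingerprint
# removes scanner-ordering noise; sort_keys and fixed separators remove
# dict-ordering and whitespace noise. Same input -> identical bytes.
def to_stable_bytes(findings: list[dict]) -> bytes:
    ordered = sorted(findings, key=lambda f: f["fingerprint"])
    return json.dumps(ordered, sort_keys=True, separators=(",", ":")).encode()

a = to_stable_bytes([{"fingerprint": "b", "sev": "high"},
                     {"fingerprint": "a", "sev": "low"}])
b = to_stable_bytes([{"fingerprint": "a", "sev": "low"},
                     {"fingerprint": "b", "sev": "high"}])
assert a == b  # input order no longer matters
```

The payoff in CI is that an unchanged scan produces a byte-identical SARIF artifact, so content-addressed caches hit and audit diffs stay empty.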
Capability matrix
| Capability | Snyk | SecureScan |
|---|---|---|
| SCA + container + IaC | ✅ proprietary db | ✅ via Trivy + others |
| Code SAST | ✅ Snyk Code | ✅ via Semgrep + Bandit |
| Reachability analysis | ✅ | ❌ (tracked) |
| Auto-fix PRs | ✅ | partial (suggestions only) |
| Diff-aware PR comments | ✅ | ✅ |
| Determinism (byte-stable output) | ❌ | ✅ |
| Self-hosted | enterprise tier only | ✅ default |
| OSS license | proprietary | Apache-2.0 |
| Cost | per-seat | free |
When SecureScan isn't the answer
Be honest about the trade-offs. If reachability analysis is your top requirement, Snyk wins today — SecureScan does not have a reachability layer yet. If your team will use a polished UI but won't operate a self-hosted service, Snyk wins. If you need a 24/7 support contract with an SLA, a vendor-curated vuln database with same-day triage, or auto-fix PRs across a wide ecosystem out of the box, Snyk wins. These are real gaps, and pretending otherwise wastes everyone's time.
When SecureScan wins
Pick SecureScan in regulated or air-gapped environments where SaaS ingestion of source code is not allowed; in cost-sensitive teams where per-seat pricing does not scale; in CI pipelines where deterministic, byte-stable output is a hard requirement (cache hits, reproducible audits, no spurious diffs); and in organizations that, on principle, want the tool that scans their source code to itself be open source they can read, fork, and audit.
Contributing
SecureScan is an Apache-2.0-licensed open-source project. Contributions are welcome — particularly bug reports with reproducers, scanner integrations, and documentation fixes.
Bug reports
Open an issue at github.com/Metbcy/securescan/issues with:
- SecureScan version (`securescan --version` or the container tag).
- The command / API call you ran.
- Expected vs. actual behavior.
- A minimal reproducer if you can — even a small repo that exhibits the bug helps enormously.
- Relevant log lines from `/tmp/securescan-backend.log` (the `securescan.scan` and `securescan.request` loggers).
Pull requests
```mermaid
flowchart LR
    Fork[Fork the repo] --> Branch[git checkout -b your-fix]
    Branch --> Code[Make changes]
    Code --> Test[pytest tests/ -v]
    Test --> Lint[Lint + typecheck]
    Lint --> Push[Push to your fork]
    Push --> PR[Open PR against main]
    PR --> CI[CI runs]
    CI --> Review[Code review]
    Review --> Merge[Squash merge]
```
1. Set up a dev environment
```bash
git clone https://github.com/Metbcy/securescan
cd securescan/backend
python3 -m venv venv && source venv/bin/activate
pip install -e ".[dev]"
pip install semgrep bandit safety pip-licenses checkov  # for tests that exercise scanners
```
For frontend changes:
```bash
cd ../frontend
npm install
npm run dev  # http://localhost:3000
```
2. Run the tests
```bash
cd backend
pytest tests/ -v
```
The test suite is comprehensive — 863 tests as of v0.9.0. Most of them run in seconds; a few that exercise real scanner subprocesses are slower. Skip the slow tier with `pytest -m "not slow"`.
For the frontend:
```bash
cd frontend
npm run lint
npx tsc --noEmit
npm run build
```
3. Lint + typecheck
```bash
cd backend
ruff check .           # linter
ruff format --check .  # formatter
mypy securescan/       # type check
```
4. Add a changelog entry
Append a bullet under `## [Unreleased]` in `CHANGELOG.md`. Match the existing style: short, present-tense, scoped. Reference the PR number when known.
5. Open the PR
- Title starting with `feat:`, `fix:`, `docs:`, etc. (Conventional Commits — used to drive the changelog and the GitHub Action's default commit messages).
- Description: what changed, why, and what you tested.
- Link the issue if there is one.

CI runs automatically on `pull_request`: see `.github/workflows/securescan.yml`.
6. Address review
Direct, specific feedback. Address every comment; if you disagree, say so with reasoning. Reviewers will resolve threads they're satisfied with.
7. Merge
Maintainers squash-merge to keep history linear. The squash commit's title becomes the changelog reference.
Adding a scanner
A new scanner is a Python class that subclasses `BaseScanner`:

```python
# backend/securescan/scanners/your_scanner.py
from .base import BaseScanner, ScanType
from ..models import Finding


class YourScanner(BaseScanner):
    name = "your-tool"
    scan_type = ScanType.CODE
    binary = "your-tool"
    install_hint = "pip install your-tool"

    async def run(self, target_path: str) -> list[Finding]:
        result = await self._run_subprocess(
            ["your-tool", "--json", target_path]
        )
        return [self._parse(item) for item in result.json()]
```
Then register it in `backend/securescan/scanners/__init__.py`:

```python
from .your_scanner import YourScanner

ALL_SCANNERS = [
    ...existing...,
    YourScanner(),
]
```
Add tests in `backend/tests/scanners/test_your_scanner.py`. Use the existing scanners as templates; the test file pattern is to mock the subprocess call and assert the parser handles representative output.
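That test pattern can be sketched as below. This is a self-contained illustration, not a copy of the real tests: the stand-in class, `_parse` behavior, and field names are all hypothetical; the real `BaseScanner` lives in `backend/securescan/scanners/base.py`.

```python
import asyncio
from unittest.mock import AsyncMock, MagicMock

# Hypothetical stand-in so the sketch runs on its own; in a real test
# you would import YourScanner and patch its inherited _run_subprocess.
class YourScanner:
    def _parse(self, item):
        # Illustrative parser: map one JSON item to a (rule, path) tuple.
        return (item["rule_id"], item["path"])

    async def run(self, target_path):
        result = await self._run_subprocess(["your-tool", "--json", target_path])
        return [self._parse(item) for item in result.json()]

def test_run_parses_representative_output():
    scanner = YourScanner()
    # Mock the subprocess result so no real binary is needed.
    fake_result = MagicMock()
    fake_result.json.return_value = [{"rule_id": "R001", "path": "app.py"}]
    scanner._run_subprocess = AsyncMock(return_value=fake_result)

    findings = asyncio.run(scanner.run("/repo"))
    assert findings == [("R001", "app.py")]
```

Mocking at the subprocess boundary keeps the test fast and deterministic while still exercising the parser against representative tool output.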
Update Supported scanners with the new entry.
Documentation
This documentation site lives in `docs/`. To preview locally:

```bash
cd docs
mdbook serve --port 3001  # avoids the frontend's port 3000
# Open http://localhost:3001
```
Edits hot-reload. The site builds in CI on every push to main and deploys to GitHub Pages — see `.github/workflows/docs.yml`.
Style guidelines:
- Concrete examples on every page. Don't say "you can configure X" without showing X being configured.
- Cross-link aggressively. Make navigation easy.
- No marketing fluff. SecureScan is an internal tool, not a SaaS sell-page. Tone is precise + practical.
- Code blocks have language tags: ` ```bash `, ` ```python `, ` ```yaml `, etc.
- Use admonitions sparingly: `note`, `warning`, `tip`, `important`. Don't sprinkle them.
- Cite the source code when explaining behavior, e.g., "see `backend/securescan/scanners/zap_scanner.py`".
Frontend changes
Read `DESIGN.md` first. The frontend has hard rules:

- OKLCH tokens only — no hardcoded hex colors.
- Single-hue severity ramp — no neon traffic-light.
- Tables, not card grids, for finding-dense surfaces.
- shadcn/ui primitives where available.

Every PR that touches `frontend/`:

- `pnpm tsc --noEmit` passes.
- `pnpm build` passes.
- Visual review against `DESIGN.md`.
- Both themes (dark + light) render without missing tokens.
- No banned-list items introduced.
Release flow
Maintainers cut releases by:
- Bumping the `backend/pyproject.toml` version.
- Moving the `[Unreleased]` changelog section to a new `[<version>]` section.
- Pushing a tag matching `v[0-9]+.[0-9]+.[0-9]+`.
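The tag pattern above can be checked locally before pushing. This is an illustrative sketch only — it anchors and escapes the documented pattern as a Python regex; the actual CI trigger syntax lives in the release workflow.

```python
import re

# Anchored, dot-escaped form of the documented tag pattern
# v[0-9]+.[0-9]+.[0-9]+ (illustrative; not the CI trigger itself).
TAG_RE = re.compile(r"^v[0-9]+\.[0-9]+\.[0-9]+$")

assert TAG_RE.match("v0.11.0")       # a valid release tag
assert not TAG_RE.match("v0.11")     # missing patch component
assert not TAG_RE.match("0.11.0")    # missing the leading "v"
```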
See Release process for the full pipeline.
Code of conduct
Be respectful, be specific, prefer evidence over preference. The project tone in PRs and issues mirrors the product tone in the codebase: calm, direct, opinionated, kind to the next maintainer.
License
Apache-2.0 — see `LICENSE`. By contributing, you agree your contributions are licensed under the same terms.
Source
- Repo: github.com/Metbcy/securescan.
- Issues: github.com/Metbcy/securescan/issues.
- Pull requests: github.com/Metbcy/securescan/pulls.