Scheduled Scans with Diff Mode: Get Notified Only When Something New Appears
Pentestas Team
Security Analyst

Diff mode = signal preservation. Full scan = baseline. New finding = alert.
Diff mode fixes the noise problem: every scheduled scan reports only what's new, fixed, or regressed since the last run of the same schedule. The signal-to-noise ratio goes up by an order of magnitude, and the alerts your team sees become things they actually read.
How diff mode works
Each scheduled scan has a unique schedule ID. Pentestas computes diffs between consecutive runs of the same schedule:
- new — findings present this run that weren't in the previous run of the same schedule.
- fixed — findings present in the previous run that aren't in this run (the oracle no longer triggers).
- regressed — findings previously marked resolved that are now re-appearing.
- unchanged — everything else (suppressed from notifications).
Finding matching is by (endpoint, vuln_type, parameter) tuple — not by DB finding ID. This catches a re-appearing finding even if the DB row is a new one.
Alerts fire on new + regressed. fixed is an informational line; unchanged is silent.
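The matching and diff logic above can be sketched in a few lines of Python. The `Finding` shape, its field names, and the `resolved` set are illustrative assumptions, not Pentestas' actual data model:

```python
from dataclasses import dataclass

# Hypothetical finding shape -- field names are illustrative only.
@dataclass(frozen=True)
class Finding:
    endpoint: str
    vuln_type: str
    parameter: str
    severity: str

def finding_key(f: Finding) -> tuple:
    # Match on (endpoint, vuln_type, parameter), not on DB row ID,
    # so a re-appearing finding is caught even with a fresh DB row.
    return (f.endpoint, f.vuln_type, f.parameter)

def diff(previous: list[Finding], current: list[Finding], resolved: set[tuple]) -> dict:
    """Bucket current findings against the previous run of the same schedule.

    `resolved` holds keys of findings previously marked resolved; if one of
    those shows up again, it is a regression rather than a new finding.
    """
    prev_keys = {finding_key(f) for f in previous}
    curr_keys = {finding_key(f) for f in current}
    return {
        "new": curr_keys - prev_keys - resolved,
        "fixed": prev_keys - curr_keys,
        "regressed": curr_keys & resolved,
        "unchanged": (curr_keys & prev_keys) - resolved,
    }
```

Alerts would then be driven off the `new` and `regressed` buckets only.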
Configuration
Scans → Schedule → Edit (or create new):
```yaml
schedule:
  cadence: weekly       # daily | weekly | monthly | cron
  cron: "0 2 * * 1"     # 2 AM UTC every Monday
  mode: diff            # full | diff
target:
  target_url: https://app.example.com
  scan_types: [web, api, auth, authz]
  config: {...}
delivery:
  - type: slack
    webhook_url: https://hooks.slack.com/services/...
    filter: "new CRITICAL+HIGH"   # only new CRITICAL/HIGH fire
  - type: webhook
    url: https://api.example.com/pentestas
    events: [scan.completed, finding.new, finding.regressed]
  - type: email
    recipients: [secops@example.com]
    filter: "new CRITICAL"
```

The filter string is a tiny DSL:
- new / fixed / regressed / unchanged — diff state
- CRITICAL+ / HIGH+ — severity threshold (that severity and above)
- severity:CRITICAL — exact severity
- vuln_type:SQLI / vuln_type:IDOR — vulnerability-class filter

Combine terms with + or AND. Omitting the filter means "every finding in every state".
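A matcher for that DSL could look like the sketch below. It is an interpretation, not Pentestas' parser: space-separated (or AND-joined) terms are ANDed, while a + inside one term (as in CRITICAL+HIGH) is read as a set of alternatives of the same kind:

```python
SEVERITIES = ["LOW", "MEDIUM", "HIGH", "CRITICAL"]
STATES = {"new", "fixed", "regressed", "unchanged"}

def matches(filter_str: str, state: str, severity: str, vuln_type: str) -> bool:
    """Illustrative filter-DSL matcher. An empty filter matches everything."""
    if not filter_str.strip():
        return True
    for token in filter_str.replace(" AND ", " ").split():
        alts = [a for a in token.split("+") if a]
        if token.endswith("+") and token.rstrip("+") in SEVERITIES:
            # "HIGH+" -- severity threshold: that severity and above
            if SEVERITIES.index(severity) < SEVERITIES.index(token.rstrip("+")):
                return False
        elif all(a in STATES for a in alts):
            if state not in alts:              # diff-state filter
                return False
        elif all(a in SEVERITIES for a in alts):
            if severity not in alts:           # e.g. "CRITICAL+HIGH"
                return False
        elif all(a.startswith("severity:") for a in alts):
            if severity not in {a.split(":", 1)[1] for a in alts}:
                return False
        elif all(a.startswith("vuln_type:") for a in alts):
            if vuln_type not in {a.split(":", 1)[1] for a in alts}:
                return False
        else:
            raise ValueError(f"unrecognised filter term: {token}")
    return True
```

With this reading, "new CRITICAL+HIGH" admits a new HIGH finding but rejects a fixed one.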
Running the baseline
The first run of a new schedule has no prior data to diff against. Default behaviour: send the full initial report (so you can triage the baseline before continuous mode kicks in). Optional: set baseline_mode: silent to suppress the first report if you've already baselined manually.
Alert ergonomics
Good diff-mode alerts look like:
```
📋 Pentestas weekly scan — production

3 new findings:
🚨 CRITICAL IDOR — GET /api/users/{id} — reveals other users' email
🔥 HIGH Missing rate limit — POST /api/login
⚠️ MEDIUM Mass-assignment candidate — PATCH /api/profile

1 fixed:
✅ XSS on /search (previously CRITICAL, now unreachable)

0 regressed.
38 unchanged (not listed).

→ View full scan: https://app.pentestas.com/scan-detail/...
```

Every line is actionable. Nobody skims. Compare with the full-report version's 42-row dump.
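An alert in that shape is easy to assemble from the diff buckets. A minimal formatter sketch, assuming an illustrative dict shape for findings (the icons mirror the sample above; nothing here is Pentestas' payload format):

```python
SEVERITY_ICON = {"CRITICAL": "🚨", "HIGH": "🔥", "MEDIUM": "⚠️", "LOW": "ℹ️"}

def format_alert(schedule_name, new, fixed, regressed_count, unchanged_count, scan_url):
    # `new` and `fixed` are lists of dicts with severity/title/endpoint keys
    # (an assumed shape, for illustration only).
    lines = [f"📋 Pentestas weekly scan — {schedule_name}", ""]
    lines.append(f"{len(new)} new findings:")
    for f in new:
        lines.append(f"{SEVERITY_ICON[f['severity']]} {f['severity']} {f['title']} — {f['endpoint']}")
    lines.append(f"{len(fixed)} fixed:")
    for f in fixed:
        lines.append(f"✅ {f['title']} — {f['endpoint']}")
    lines.append(f"{regressed_count} regressed.")
    # `unchanged` findings are counted but never itemised -- that is the point.
    lines.append(f"{unchanged_count} unchanged (not listed).")
    lines.append(f"→ View full scan: {scan_url}")
    return "\n".join(lines)
```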
Why this matters for programme sustainability
Security engineers burn out triaging noise. Every false alert erodes a little of their attention for the real ones. A programme that consistently fires real alerts gets taken seriously; a programme that fires mostly ambient updates gets muted.
Diff mode is the operational discipline that keeps the programme sustainable over multi-year horizons. The programmes that work at 3-year scale are the ones where every alert is a real event and every silence is a confirmed-clean interval.
Diff trends over time
Enterprise tier adds a trend view of diff history:
- New-findings-per-scan (rolling average) — expect this to decline as the programme matures.
- Fix-time-per-severity — the median time from finding.new to finding.fixed.
- Regression rate — how often a fixed finding re-appears.
These three metrics are the closest thing to a KPI the AppSec programme has. Low new-rate + low fix-time + low regression-rate = mature programme. The numbers are board-visible.
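Under an assumed per-scan export shape, the three KPIs could be rolled up as below. The field names are illustrative, and a production version would bucket fix times by severity rather than pooling them:

```python
from statistics import median

def programme_kpis(scans: list[dict]) -> dict:
    """Roll up diff history into the three programme KPIs.

    Each entry in `scans` is one completed run, e.g.
    {"new": 3, "fixed": 2, "regressed": 0, "fixed_durations_days": [4.0, 11.5]}
    -- an assumed shape, not Pentestas' real export format.
    """
    new_per_scan = sum(s["new"] for s in scans) / len(scans)
    durations = [d for s in scans for d in s["fixed_durations_days"]]
    total_fixed = sum(s["fixed"] for s in scans)
    total_regressed = sum(s["regressed"] for s in scans)
    return {
        "new_per_scan": new_per_scan,  # expect this to trend down over time
        "median_fix_days": median(durations) if durations else None,
        "regression_rate": total_regressed / total_fixed if total_fixed else 0.0,
    }
```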
Industry applications
Fintech
Weekly diff-mode scan against every CDE-in-scope service. Slack alerts on new CRITICAL / HIGH fire to #pci-alerts. Board dashboards track median-time-to-fix for CRITICAL findings — a PCI-auditor-relevant metric.
Medtech
Daily diff-mode scan against every PHI-handling endpoint. On any new finding on an endpoint handling patient data, fire a PagerDuty page. Quarterly HIPAA programme review pulls from the diff-trend dashboard.
Legaltech
Weekly diff-mode scan against the client-facing platform. PagerDuty on new CRITICAL (regulator-adjacent data classes). Quarterly trend exported as client-facing attestation of security programme maturity.
Banks + insurance
Daily diff-mode scan against internal admin panels via agents. Any new finding fires to the internal-risk-management system with auto-ticket creation. Monthly diff-trend report feeds the cybersecurity committee packet.
Diff + CI scans
Per-merge CI scans have a different diff model: the base is the previous successful scan of the same target-URL family, not of the same schedule. So a per-PR scan against https://staging-${SHA}.example.com diffs against the previous merge's scan. The result: the PR comment shows only findings introduced by the PR, not baseline findings inherited from main.
This is what makes per-merge AI pentest in CI actually sustainable. The PR author sees only new findings, not a 42-line history they have to scroll past.
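The "target-URL family" grouping could be implemented by normalising the ephemeral per-PR host back to its stable form before looking up the baseline. The hyphen-plus-commit-SHA convention below is an assumption matching the staging-${SHA} pattern above, not a documented Pentestas rule:

```python
import re
from urllib.parse import urlsplit

def target_family(url: str) -> str:
    """Collapse ephemeral per-PR hosts into one baseline family.

    Heuristic (assumed): strip a trailing "-<hex commit sha>" suffix from the
    first host label, so staging-3fa9c2d1.example.com and
    staging-8be07a44.example.com share the baseline family staging.example.com.
    """
    host = urlsplit(url).hostname or url
    first, _, rest = host.partition(".")
    first = re.sub(r"-[0-9a-f]{7,40}$", "", first)  # drop short-to-full SHA suffix
    return f"{first}.{rest}" if rest else first
```

Two PR scans mapping to the same family would then diff against the same previous-merge baseline.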
Caveats
- **Target URL drift.** If a scheduled scan's target URL changes (subdomain rotation, new staging environment), the diff baseline resets. The first scan after the drift sends a full report.
- **Scan-config drift.** Changing scan_types or rules invalidates the diff baseline for the same schedule. The first scan after the change sends a full report.
- **First-scan noise.** The first run is always full. Plan to baseline in a week where your team has capacity for full-report triage.
Set up a diff-mode schedule
Pro+ plan. 30-minute setup. Silent between real events.
Start your AI pentest
Alexander Sverdlov
Founder of Pentestas. Author of two information security books, cybersecurity speaker at the largest cybersecurity conferences in Asia, and a United Nations conference panelist. Former Microsoft security consulting team member and external cybersecurity consultant at the Emirates Nuclear Energy Corporation.