
Scheduled Scans with Diff Mode: Get Notified Only When Something New Appears


Pentestas Team

Security Analyst

4/21/2026

A weekly scan that reports the same 40 findings every week is noise. Diff mode reports only what's new since last run — signal without the fatigue.

Noisy weekly scan (no diff):

  • Week 1 report: 42 findings
  • Week 2 report: 42 findings (same 42)
  • Week 3 report: 43 findings (same + 1)
  • Week 4 report: 43 findings (same 43)

The Slack channel becomes background noise. Nobody notices the week-3 new finding.

Diff-mode weekly scan:

  • Week 1: 42 baseline (full report)
  • Week 2: 0 new (silent)
  • Week 3: 1 new (alert: read it)
  • Week 4: 0 new (silent)

The channel stays quiet until something matters. Every alert is a real event.

Diff mode = signal preservation. Full scan = baseline. New finding = alert.

Diff mode fixes this. Every scheduled scan reports only what's new, fixed, or regressed since the last run of the same schedule. The signal-to-noise ratio goes up an order of magnitude, and the alerts your team sees become things they actually read.

⚙️

How It Works

How diff mode works

Each scheduled scan has a unique schedule ID. Pentestas computes diffs between consecutive runs of the same schedule:

  • new — findings present this run that weren't in the previous run of the same schedule.
  • fixed — findings present in the previous run that aren't in this run (the oracle no longer triggers).
  • regressed — findings previously marked resolved that are now re-appearing.
  • unchanged — everything else (suppressed from notifications).

Finding matching is by (endpoint, vuln_type, parameter) tuple — not by DB finding ID. This catches a re-appearing finding even if the DB row is a new one.

Alerts fire on new + regressed. fixed is an informational line; unchanged is silent.
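The tuple-matching rule above can be sketched with plain set operations. This is an illustrative model, not Pentestas internals: the FindingKey class and the separate resolved set are assumed names for this sketch.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FindingKey:
    """Identity for matching across runs: the (endpoint, vuln_type,
    parameter) tuple, deliberately not the DB finding ID."""
    endpoint: str
    vuln_type: str
    parameter: str

def diff_findings(current, previous, resolved):
    """Classify this run's findings against the previous run of the
    same schedule.

    `resolved` holds keys previously marked resolved; a key re-appearing
    from that set counts as regressed rather than merely new.
    """
    return {
        "new": current - previous - resolved,
        "regressed": current & resolved,
        "fixed": previous - current,
        "unchanged": current & previous,
    }
```

Alerts would then fire on the union of the "new" and "regressed" buckets, while "unchanged" is suppressed from notifications.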

⚙️

Configuration

Scans → Schedule → Edit (or create new):

schedule:
  cadence: weekly         # daily | weekly | monthly | cron
  cron: "0 2 * * 1"       # 2 AM UTC every Monday
  mode: diff              # full | diff
  target:
    target_url: https://app.example.com
    scan_types: [web, api, auth, authz]
    config: {...}
  delivery:
    - type: slack
      webhook_url: https://hooks.slack.com/services/...
      filter: "new CRITICAL+HIGH"   # only new CRITICAL/HIGH fire
    - type: webhook
      url: https://api.example.com/pentestas
      events: [scan.completed, finding.new, finding.regressed]
    - type: email
      recipients: [secops@example.com]
      filter: "new CRITICAL"
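The webhook delivery above can be consumed with a small HTTP handler. A minimal sketch using Python's standard library; the payload shape (the event and finding fields) is an assumption for illustration, not the documented Pentestas webhook schema.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Events worth surfacing; matches the `events:` list in the config above.
ALERT_EVENTS = {"finding.new", "finding.regressed"}

def summarize(payload):
    """Return an alert line for page-worthy events, else None.

    Payload field names (event, finding.severity, ...) are assumed
    for this sketch.
    """
    if payload.get("event") not in ALERT_EVENTS:
        return None
    f = payload.get("finding", {})
    return f"[{f.get('severity')}] {f.get('vuln_type')} at {f.get('endpoint')}"

class PentestasWebhook(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        line = summarize(json.loads(self.rfile.read(length) or b"{}"))
        if line:
            print(line)  # in production: queue it, don't block the request
        self.send_response(204)  # acknowledge fast; process asynchronously
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), PentestasWebhook).serve_forever()
```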

The filter string is a tiny DSL:

  • new / fixed / regressed / unchanged — diff state
  • CRITICAL+ / HIGH+ — severity threshold
  • severity:CRITICAL — exact severity
  • vuln_type:SQLI / vuln_type:IDOR — class filter

Combine with + / AND. Omitting a filter means "every finding at every state".
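One plausible evaluator for that DSL, sketched in Python. The combinator semantics here are assumptions, not the documented grammar: clauses of the same kind (state, severity, vuln_type) are OR'd together, different kinds are AND'd, and a trailing "+" marks a severity floor.

```python
import re

SEVERITY_ORDER = ["LOW", "MEDIUM", "HIGH", "CRITICAL"]
STATES = {"new", "fixed", "regressed", "unchanged"}

def matches(filter_str, state, severity, vuln_type):
    """Evaluate a filter string like "new CRITICAL+HIGH" against one finding."""
    # Tokenize: key:value pairs, thresholds like CRITICAL+, plain words.
    tokens = re.findall(r"[A-Za-z_]+:[A-Za-z_]+|[A-Za-z_]+\+?", filter_str)
    tokens = [t for t in tokens if t != "AND"]
    if not tokens:
        return True  # no filter: every finding at every state

    state_ok = sev_ok = type_ok = None  # None = kind unconstrained
    for tok in tokens:
        if tok in STATES:
            state_ok = bool(state_ok) or (state == tok)
        elif tok.startswith("severity:"):
            sev_ok = bool(sev_ok) or (severity == tok.split(":")[1])
        elif tok.startswith("vuln_type:"):
            type_ok = bool(type_ok) or (vuln_type == tok.split(":")[1])
        elif tok.endswith("+"):  # severity threshold, e.g. HIGH+
            floor = tok[:-1]
            sev_ok = bool(sev_ok) or (
                SEVERITY_ORDER.index(severity) >= SEVERITY_ORDER.index(floor))
        else:  # bare severity name, e.g. the HIGH in "CRITICAL+HIGH"
            sev_ok = bool(sev_ok) or (severity == tok)
    # An unconstrained kind (still None) matches everything.
    return all(ok is not False for ok in (state_ok, sev_ok, type_ok))
```

Under this reading, "new CRITICAL+HIGH" fires only for new findings at CRITICAL or HIGH, which matches the Slack example in the config above.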

📈

In Detail

Running the baseline

The first run of a new schedule has no prior data to diff against. Default behaviour: send the full initial report (so you can triage baseline before continuous mode kicks in). Optional: set baseline_mode: silent to suppress the first report if you've already baselined manually.

📦

In Detail

Alert ergonomics

Good diff-mode alerts look like:

📋 Pentestas weekly scan — production

  3 new findings:
    🚨 CRITICAL  IDOR — GET /api/users/{id} — reveals other users' email
    🔥 HIGH       Missing rate limit — POST /api/login
    ⚠️  MEDIUM    Mass-assignment candidate — PATCH /api/profile

  1 fixed:
    ✅ XSS on /search (previously CRITICAL, now unreachable)

  0 regressed.
  38 unchanged (not listed).

  → View full scan: https://app.pentestas.com/scan-detail/...

Every line is actionable, so nothing gets skimmed past. Compare that with the full-report version's 42-row dump.

💡

The Problem

Why this matters for programme sustainability

Security engineers burn out triaging noise. Every false alert erodes a little of their attention for the real ones. A programme that consistently fires real alerts gets taken seriously; a programme that fires mostly ambient updates gets muted.

Diff mode is the operational discipline that keeps the programme sustainable over multi-year horizons. The programmes that work at 3-year scale are the ones where every alert is a real event and every silence is a confirmed-clean interval.

📊

In Detail

Diff trends over time

Enterprise tier adds a trend view of diff history:

  • New-findings-per-scan (rolling average) — expect this to decline as the programme matures.
  • Fix-time-per-severity — the median time from finding.new to finding.fixed.
  • Regression rate — how often a fixed finding re-appears.

The three metrics are the closest thing to a KPI the AppSec programme has. Low new-rate + low fix-time + low regression-rate = mature programme. The numbers are board-visible.
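The three metrics are straightforward to compute from a diff history. A minimal sketch; the input shapes (per-scan new counts, per-finding fix durations, cumulative fixed/regressed totals) are assumptions for illustration.

```python
from statistics import mean, median

def rolling_new_rate(new_counts, window=4):
    """Rolling average of new findings per scan over the last `window` runs.
    A maturing programme should see this decline toward zero."""
    return [mean(new_counts[max(0, i - window + 1):i + 1])
            for i in range(len(new_counts))]

def median_fix_days(fix_durations_days):
    """Median time from finding.new to finding.fixed, in days."""
    return median(fix_durations_days)

def regression_rate(fixed_total, regressed_total):
    """Share of fixed findings that later re-appeared."""
    return regressed_total / fixed_total if fixed_total else 0.0
```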

💼

By Industry

Industry applications

Fintech

Weekly diff-mode scan against every CDE-in-scope service. Slack alerts on new CRITICAL / HIGH fire to #pci-alerts. Board dashboards track median-time-to-fix for CRITICAL findings — a PCI-auditor-relevant metric.

Medtech

Daily diff-mode scan against every PHI-handling endpoint. On any new finding on an endpoint handling patient data, fire a PagerDuty page. Quarterly HIPAA programme review pulls from the diff-trend dashboard.

Legaltech

Weekly diff-mode scan against the client-facing platform. PagerDuty on new CRITICAL (regulator-adjacent data classes). Quarterly trend exported as client-facing attestation of security programme maturity.

Banks + insurance

Daily diff-mode scan against internal admin panels via agents. Any new finding fires to the internal-risk-management system with auto-ticket creation. Monthly diff-trend report feeds the cybersecurity committee packet.

🔍

Scanning

Diff + CI scans

Per-merge CI scans have a different diff model: the base is the previous successful scan of the same target-URL family, not of the same schedule. So per-PR scan against https://staging-${SHA}.example.com diffs against the previous merge's scan. Result: the PR comment shows only findings introduced by the PR, not baseline findings inherited from the previous main.

This is what makes per-merge AI pentest in CI actually sustainable. The PR author sees only new findings, not a 42-line history they have to scroll past.
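Grouping per-PR hosts into one "target-URL family" can be sketched as a normalization step. The host pattern here (a trailing "-&lt;sha&gt;" suffix on the first label, as in the staging example above) is an assumed convention; real deployments would configure their own.

```python
import re
from urllib.parse import urlparse

def target_family(url):
    """Collapse ephemeral per-PR hosts into one diff-baseline family.

    Assumes hosts like staging-<sha>.example.com should all diff against
    the same baseline, keyed as 'staging.example.com'.
    """
    host = urlparse(url).hostname or ""
    first, _, rest = host.partition(".")
    # Strip a trailing -<hex sha> segment from the first host label.
    first = re.sub(r"-[0-9a-f]{7,40}$", "", first)
    return f"{first}.{rest}" if rest else first
```

With this key, a scan of a fresh per-PR URL diffs against the previous successful scan of the same family, so the PR comment reports only findings the PR introduced.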

⚠️

Watch Out

Caveats

  • **Target URL drift.** If a scheduled scan's target URL changes (subdomain rotation, new staging environment), the diff baseline resets. The first scan after the drift sends a full report.
  • **Scan-config drift.** Changing scan_types or rules invalidates the diff baseline for the same schedule. The first scan after the change sends a full report.
  • **First-scan noise.** The first run is always full. Plan to baseline in a week when your team has capacity for full-report triage.

Set up a diff-mode schedule

Pro+ plan. 30-minute setup. Silent between real events.

Start your AI pentest
📚

More Reading

Further reading

Alexander Sverdlov

Founder of Pentestas. Author of two information security books, cybersecurity speaker at the largest cybersecurity conferences in Asia, and a United Nations conference panelist. Former member of Microsoft's security consulting team and external cybersecurity consultant at the Emirates Nuclear Energy Corporation.