How to Build a Patch Risk Scoring Model (Step-by-Step)
March 17, 2026 · PatchWatch Team · 14 min read
Most organizations use CVSS scores to decide what to patch first.
This is a mistake.
Not because CVSS is wrong.
But because CVSS was never designed to answer the question: "What should I patch first in my environment?"
CVSS scores what a vulnerability is.
Risk scoring answers what it means to you.
This guide shows you how to build a practical, composite patch risk scoring model that you can implement without a PhD in security.
Why CVSS Alone Fails as a Prioritization Tool
CVSS (Common Vulnerability Scoring System) measures the intrinsic severity of a vulnerability.
It considers:
- Attack vector (network vs local)
- Attack complexity
- Privileges required
- Impact on confidentiality, integrity, availability
What it does not consider:
- Whether the vulnerability is actively being exploited
- Whether your systems are exposed to the internet
- Whether a patch is even available
- Whether the affected asset is business-critical
The result?
A CVSS 9.8 vulnerability on an air-gapped internal system that handles non-sensitive data gets the same score as a CVSS 9.8 vulnerability on your public-facing customer portal.
They are not the same risk.
Your scoring model needs to reflect that difference.
The Four Dimensions of Real Patch Risk
A practical risk scoring model combines four inputs:
1. Base Severity (CVSS)
The intrinsic severity of the vulnerability itself.
2. Exploit Availability
Is working exploit code publicly available? Is it actively being used in attacks in the wild?
3. Asset Exposure
Is the affected system internet-facing? Is it accessible from untrusted networks? Is it segmented?
4. Business Impact
What is the operational and financial consequence if this asset is compromised?
These four dimensions, combined, produce a score that actually means something.
The Composite Risk Formula
Here is a practical formula you can implement immediately:
Composite Risk Score = (CVSS × 0.3) + (Exploit Score × 0.35) + (Exposure Score × 0.20) + (Business Impact Score × 0.15)
Each component is scored on a normalized 1–10 scale.
The weights can be adjusted based on your organization's risk priorities.
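As a minimal sketch, the formula fits in a few lines of Python (the function name and default weights here are illustrative, not a standard):

```python
def composite_risk_score(cvss, exploit, exposure, impact,
                         weights=(0.30, 0.35, 0.20, 0.15)):
    """Weighted composite risk score on a 1-10 scale.

    Each input is assumed to already be normalized to 1-10;
    weights follow the CVSS / exploit / exposure / impact order above.
    """
    w_cvss, w_exploit, w_exposure, w_impact = weights
    return round(cvss * w_cvss + exploit * w_exploit
                 + exposure * w_exposure + impact * w_impact, 2)
```

Keeping the weights as a parameter lets you rebalance the model for your organization without touching the scoring logic.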
Component 1: Base Severity (CVSS)
Use the NVD CVSS v3.1 base score directly.
It is already on a 0–10 scale, so it needs no real normalization; for consistency with the other components, treat anything below 1 as 1.
| CVSS Range | Label | Normalized Score |
|---|---|---|
| 9.0 – 10.0 | Critical | 9 – 10 |
| 7.0 – 8.9 | High | 7 – 8.9 |
| 4.0 – 6.9 | Medium | 4 – 6.9 |
| 0.1 – 3.9 | Low | 1 – 3.9 |
Weight in formula: 30%
CVSS is foundational, but it lacks context.
Component 2: Exploit Availability Score
Score based on known exploit status:
| Exploit Status | Score |
|---|---|
| Actively exploited in the wild (KEV listed) | 10 |
| Public exploit code available | 8 |
| Exploit details published | 5 |
| Theoretical exploit only | 2 |
| No public exploit information | 1 |
Key data sources:
- CISA KEV catalog
- Exploit Database
- VulnCheck / GreyNoise
- NVD enrichment
Weight in formula: 35%
Component 3: Asset Exposure Score
| Exposure Level | Description | Score |
|---|---|---|
| Internet-facing, no authentication | Public endpoint | 10 |
| Internet-facing, authenticated | Login required | 8 |
| Internal, reachable from DMZ | Partial segmentation | 6 |
| Internal, segmented | VLAN restricted | 3 |
| Air-gapped | No network | 1 |
Weight in formula: 20%
Component 4: Business Impact Score
| Business Impact | Description | Score |
|---|---|---|
| Mission-critical | Revenue impact | 10 |
| Sensitive data | PII / financial | 8 |
| Internal system | Ops disruption | 5 |
| Dev / staging | Limited impact | 2 |
| Test system | No impact | 1 |
Weight in formula: 15%
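The three contextual components are straight lookups, so they translate naturally into tables in code. A sketch, with hypothetical key names, assuming you tag each CVE and asset with one of the categories from the tables above:

```python
# Hypothetical category keys mirroring the component tables above;
# rename to match however your scanner / CMDB labels things.
EXPLOIT_SCORES = {
    "kev_listed": 10,         # actively exploited in the wild
    "public_exploit": 8,      # working exploit code available
    "details_published": 5,   # technical write-up exists
    "theoretical": 2,
    "none": 1,
}

EXPOSURE_SCORES = {
    "internet_unauthenticated": 10,
    "internet_authenticated": 8,
    "dmz_reachable": 6,
    "internal_segmented": 3,
    "air_gapped": 1,
}

IMPACT_SCORES = {
    "mission_critical": 10,
    "sensitive_data": 8,
    "internal": 5,
    "dev_staging": 2,
    "test": 1,
}
```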
Worked Example
CVE-A: high severity, low context
- CVSS: 9.1
- Exploit: 2 (theoretical only)
- Exposure: 3 (internal, segmented)
- Business Impact: 5 (internal system)
Score = (9.1 × 0.30) + (2 × 0.35) + (3 × 0.20) + (5 × 0.15) = 2.73 + 0.70 + 0.60 + 0.75 = 4.78 / 10
CVE-B: moderate severity, high context
- CVSS: 7.2
- Exploit: 10 (KEV listed)
- Exposure: 8 (internet-facing, authenticated)
- Business Impact: 10 (mission-critical)
Score = (7.2 × 0.30) + (10 × 0.35) + (8 × 0.20) + (10 × 0.15) = 2.16 + 3.50 + 1.60 + 1.50 = 8.76 / 10
Despite its lower CVSS, CVE-B is the real priority.
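The same arithmetic can be checked in a couple of lines, using the weights from the composite formula:

```python
# Weighted sum for each CVE: (CVSS, exploit, exposure, business impact)
weights = (0.30, 0.35, 0.20, 0.15)
cve_a = (9.1, 2, 3, 5)
cve_b = (7.2, 10, 8, 10)

score_a = round(sum(v * w for v, w in zip(cve_a, weights)), 2)
score_b = round(sum(v * w for v, w in zip(cve_b, weights)), 2)
print(score_a, score_b)  # 4.78 8.76
```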
Translating Scores to SLAs
| Score | Risk | SLA |
|---|---|---|
| 8.5 – 10 | Critical | 24–48 hrs |
| 6.5 – 8.4 | High | 3–7 days |
| 4.5 – 6.4 | Medium | 14–30 days |
| 1.0 – 4.4 | Low | Scheduled |
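Mapping a composite score to an SLA tier is a simple threshold check; a sketch using the bands above (the function name is illustrative):

```python
def sla_for_score(score):
    """Map a 1-10 composite risk score to a remediation SLA tier."""
    if score >= 8.5:
        return ("Critical", "24-48 hours")
    if score >= 6.5:
        return ("High", "3-7 days")
    if score >= 4.5:
        return ("Medium", "14-30 days")
    return ("Low", "Scheduled maintenance window")
```

With the worked example, CVE-B (8.76) lands in the Critical band while CVE-A (4.78) lands in Medium.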
Maintaining the Model
Review when:
- Exploit trends change
- Infrastructure changes
- Compliance updates
- Incidents expose gaps
Cadence:
- Quarterly review
- Post-incident tuning
Automating Risk Scoring
- Pull CVSS from NVD
- Enrich exploit data (KEV, threat intel)
- Map assets (CMDB/RMM)
- Assign business tiers
- Calculate composite score
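A skeleton of that pipeline might look like the following. The three data-source functions are hypothetical stubs; real implementations would query the NVD API, the CISA KEV feed, and your CMDB or RMM inventory.

```python
WEIGHTS = (0.30, 0.35, 0.20, 0.15)  # CVSS, exploit, exposure, impact

def fetch_cvss(cve_id):
    # Stub: replace with an NVD API lookup.
    return {"CVE-2026-0001": 7.2}.get(cve_id, 0.0)

def fetch_exploit_score(cve_id):
    # Stub: replace with a CISA KEV / threat-intel check.
    return {"CVE-2026-0001": 10}.get(cve_id, 1)

def asset_context(asset_id):
    # Stub: replace with CMDB/RMM data -> (exposure score, impact score).
    return {"web-01": (8, 10)}.get(asset_id, (3, 5))

def score(cve_id, asset_id):
    """Composite risk score for one CVE on one asset."""
    exposure, impact = asset_context(asset_id)
    parts = (fetch_cvss(cve_id), fetch_exploit_score(cve_id),
             exposure, impact)
    return round(sum(v * w for v, w in zip(parts, WEIGHTS)), 2)
```

The point of the structure is that each enrichment source is swappable: as long as a stub returns a 1–10 value, the composite calculation never changes.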
Common Mistakes
- Treating all production systems as equally critical
- Using exploit data that is never refreshed (KEV status and exploit availability change weekly)
- Ignoring compensating controls, such as WAFs or segmentation, that lower effective exposure
- Building a model too complex to maintain or explain
The Goal of Risk Scoring
- Prioritize real risk
- Enable consistent decisions
- Improve auditability
- Reduce exposure efficiently
Key Takeaways
- CVSS measures severity, not risk
- Add exploit, exposure, business impact
- Use weighted scoring
- Map to SLAs
- Automate inputs
- Continuously refine
Start Monitoring Security Patches Today
PatchWatch automatically tracks CVEs and security patches across Windows, Linux, browsers, and open-source libraries. Get instant alerts via Slack, Teams, or email.
