One Does Not Simply Get a List of Unpatchable CVEs
A brief, high-level, least-complicated-possible attempt to describe the absurdity of modern Vulnerability Management
You know that feeling when someone asks what seems like a simple question, and suddenly you're staring into the abyss of everything that's wrong with cybersecurity? That's exactly what happened when someone innocently asked: "What's the easiest way to get a list of CVEs with no patch available? Preferably exploitable ones."
CVEs represent only a tiny fraction of known vulnerabilities, and when a CVE is brand new and wasn't part of a coordinated disclosure, there's probably no upstream patch at all.
You ordered a custom meal and expect it not only to be ready immediately, but that the kitchen already had the ingredients, knowledge, equipment, and time to make it for you
The upstream repository has a patch committed to git, but that build hasn't been published yet.
The equivalent of knowing the cure for your headache exists but it's locked in a safe that won't open until Thursday
The git patch is merged and a build is published, but no downstream package repositories have it yet, and the upstream maintainer hasn't put it in a registry themselves because they use GitHub releases.
You have a prescription filled but the pharmacy is in another dimension
You're running RedHat, and while a patch exists in some registry because the package maintainer pushed it themselves (woo hoo), RedHat hasn't taken it from the registry or the git source and made it available to customers in their repository, because that happens in a nightly build, or may require a person to do work because they redistribute a fork or some customised configuration
It's in patch purgatory and may never be seen again, unless customers go on search-and-rescue missions - assuming a customer noticed the missing patch AND knew to seek out RedHat rather than rage-post on GitHub at the dude in China who maintains it, or the original creator hasn't sold it to China yet but also doesn't check GitHub for a few months.
Your application runs in a community container, and while the package repository has published the patch, the base image maintainer hasn't built and pushed a new container image with it yet
Essentially waiting for someone to rebuild your entire house because they need to replace one brick - good luck demanding they do that for you, for free
Even if patches exist at every level, version locking somewhere in the supply chain means automation fails spectacularly. A human has to notice, then write code and open a merge request. This reality stands in stark contrast to the raw tool output suggesting thousands of "Critical" vulnerabilities requiring immediate patching, when in reality most organizations simply can't patch due to these practical constraints.
Known Exploited Vulnerabilities (KEV)
"But surely," you think, "I can just look at the KEV catalog for CVEs without patches?"
Every KEV is guaranteed to be patchable; having a patch is an entry requirement.
The KEV requirements are delightfully specific:
Must have a patch
Must have proof of exploitation
The US government must use the product
If the US government doesn't use your product, even if the CVE is actively being exploited and causing global mayhem, it will never make it into KEV.
KEV is a fire department that only responds to fires at government buildings
The Actually Useful (But Painfully Complex) Solution
So how do you actually get a meaningful list of CVEs that deserve attention but won't show up in dependabot because there's no patch? Here's the battle-tested, scar-earned process that actually works:
Step 1: Manifest Hunting
Identify all your package manager manifests. And when I say "all," I mean prepare for disappointment. You think you have a Python project with one requirements.txt? Think again. You've got:
requirements.txt (the obvious one)
requirements-dev.txt (because developers have feelings)
pyproject.toml (because Poetry is trendy)
setup.py (legacy code that nobody dares touch)
Pipfile (someone installed pipenv once)
environment.yml (the data scientist strikes back)
And that's just Python. Don't get me started on the JavaScript nightmare of package.json, package-lock.json, yarn.lock, pnpm-lock.yaml, and whatever new package manager emerged while you were reading this sentence.
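If you want to at least start the hunt programmatically, the sketch below is the crude first pass I'd reach for: walk a checkout and flag anything that looks like a manifest. The filename set and the find_manifests helper are mine and purely illustrative - your ecosystems will add plenty more, and it will still miss the curl | bash specials, which is rather the point.

#!/usr/bin/env python3
"""Crude first-pass manifest hunt. Illustrative only: the filename set
below is a starter list, not an exhaustive catalogue of ecosystems."""
from pathlib import Path

MANIFEST_NAMES = {
    # Python
    "requirements.txt", "requirements-dev.txt", "pyproject.toml",
    "setup.py", "Pipfile", "environment.yml",
    # JavaScript
    "package.json", "package-lock.json", "yarn.lock", "pnpm-lock.yaml",
    # Everything else hiding in the same repo
    "go.mod", "Cargo.lock", "Gemfile.lock", "pom.xml",
    "Dockerfile", "Chart.yaml", "requirements.yml",
}

def find_manifests(repo_root: str) -> list[Path]:
    """Return every file under repo_root whose name matches a known manifest."""
    root = Path(repo_root)
    return sorted(
        p for p in root.rglob("*")
        if p.is_file() and p.name in MANIFEST_NAMES
    )

if __name__ == "__main__":
    for path in find_manifests("."):
        print(path)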
The infrastructure-as-code revolution has blessed us with an explosion of new places to hide dependencies. Beyond the traditional language-specific manifests, modern applications are digital hydras with tentacles reaching into every corner of the technology stack.
Your Dockerfile looks innocent enough with its FROM ubuntu:20.04 base image, but that single line pulls in 1,847 packages you never asked for. Each RUN apt-get install command adds more mystery meat to your container. That curl | bash installation script? It just downloaded and executed code from seventeen different sources, none of which appear in any manifest anywhere.
Multi-stage builds make this exponentially worse. Your final production image might not contain the build tools, but the vulnerabilities from the build stage can leak through in compiled binaries, cached files, or environment variables that reference vulnerable paths.
Helm charts reference other charts, which reference Docker images, which contain packages, which have dependencies. Your innocent helm install command just deployed 47 different container images across 23 namespaces. The Chart.yaml only shows the direct chart dependencies, but each subchart pulls its own universe of containers.
I once traced a single Helm deployment that transitively depended on 312 different container images. The vulnerability scanner took 8 hours to process everything and generated a 600MB report that nobody read.
Ansible playbooks are where systematic dependency tracking goes to die. Your tasks/main.yml file contains gems like:
- name: Install "definitely secure" software
  shell: |
    wget -qO- https://sketchy-vendor.com/install.sh | sudo bash
    pip install $(curl -s https://api.vendor.com/latest-deps)
    npm install -g $(cat /tmp/mystery-packages.txt)
Ansible Galaxy roles add another layer of chaos. That requirements.yml file references roles that download packages using methods that would make a security auditor cry. Dynamic package lists generated at runtime? Check. Package versions determined by environment variables? Double check. Installing packages based on the current phase of the moon? I've seen worse.
The Cloud Provider's "Patched" it, right?
AWS Lambda functions are beautiful examples of how "managed" infrastructure can become vulnerability archaeological sites. Your Lambda runtime environment ships with a curated collection of ancient binaries that would make a museum curator weep with joy.
That Node.js 18 runtime? Sure, it's running Node 18.x, but the underlying Amazon Linux container image contains:
ImageMagick from 2019 (with 47 known CVEs at last count, but check again, it changes often)
Ghostscript 9.25 (because PostScript processing is rampant and essential for serverless functions, in case you didn't realise; it's the main boilerplate AWS themselves throw in your face in the Console, the blog, everywhere)
Various system utilities that haven't seen updates since "remote work" still meant working somewhere physically remote
SSL/TLS libraries that are "patched" with backported fixes rather than version updates, so even if there was a CVE the scanners can't actually match to them - or to the 100 other binaries rebuilt from source to produce unique, never-before-reported signatures against said vulnerability advisories (ahem, Chainguard zero-CVE, I see what you did there)
The compliance artifacts proudly declare everything "patched and up-to-date" because AWS applies security patches to their base images. What they don't mention is that "patching" often means an auditor saw a policy once.
Cloud providers have perfected the art of security theater through backporting. When CVE-2023-12345 affects libpng 1.2.50, instead of updating to libpng 1.6.x (which would break compatibility), they:
Take the specific fix for CVE-2023-12345
Apply it to their ancient libpng 1.2.50 fork
Call it "libpng 1.2.50-aws-patched-v47"
Update compliance documentation to show "all CVEs resolved"
Your vulnerability scanners, meanwhile, see libpng 1.2.50 and scream about 200+ known vulnerabilities. The scanner doesn't know about AWS's custom patches because there's no standard way to communicate "we fixed CVE X, Y, and Z but not the other 197 issues in this ancient codebase." So even when AWS does good work, it doesn't help you much, because scanners suck and AWS loves custom ways over standardisation (they have smarter people, apparently, than the groups that take input from thousands of companies and communities to decide on reality-adhering standards).
Google Cloud Functions run on Debian base images that contain fossils from the early 2010s. That Python 3.9 runtime includes:
System Python libraries from 2018
curl 7.64 (with "security patches" that address some but not all known issues, and a maintainer who acts to make your prioritisation harder because they now have dogma about security since the LLMs started harassing them)
A collection of Perl modules that predate the invention of container security, go figure
GNU utilities that remember when Y2K was a legitimate concern; time ironically forgot to check whether they have vulnerabilities (but hackers exist outside time and space, at least as enterprises measure time)
Google's approach to "patching" involves maintaining their own Debian derivative with cherry-picked security fixes. The result is a Frankenstein's monster of an operating system where the package versions suggest ancient vulnerabilities, but some (emphasis on "some") of the actual security issues have been addressed through non-standard patches. Oh, and scanners aren't finding anything - not because the issues don't exist, but because these are now custom builds that don't match the known-bad signatures.
Azure Functions running on Windows contain libraries and components that span decades of Microsoft development:
.NET Framework components from multiple eras, some patched, some "deprecated but still present" that never get patches yet somehow don't affect the auditor's verdict on patching
Windows system DLLs with version numbers that make security scanners confused and angry, like the devs who evolve just enough to peek outside the IDE and see how software actually works
PowerShell modules that reference assemblies from the .NET 2.0 era; to be generous this may actually be a good thing, because only three people still know how to hack that, and they've had their identities erased and live in lab bunkers with excellent healthcare
COM components that were last updated when Internet Explorer was considered cutting-edge technology, because that was definitely true for a while there, right?
Microsoft's patching strategy creates version number chaos. The same DLL might exist in multiple versions simultaneously, with some functions redirected to patched implementations and others calling legacy code paths that contain known vulnerabilities.
Amazon's "security-focused" container OS ships with a carefully curated selection of vulnerabilities disguised as "minimal attack surface." The read-only root filesystem doesn't prevent vulnerabilities; it just makes them harder to exploit and impossible to fix without redeploying everything.
Google's COS includes ancient kernel versions with "stability patches" that address some security issues while ignoring others. The automatic update mechanism ensures you get new vulnerabilities as soon as Google decides they're ready, typically 6-18 months after upstream patches are available.
Red Hat CoreOS combines the excitement of cutting-edge container technology with the stability of enterprise-grade legacy vulnerabilities. The immutable filesystem means you can't fix problems locally, you must wait for Red Hat to decide which CVEs are worth addressing in the next image release. So far they are actually doing a good job, but for how long?
EKS, GKE, and AKS worker nodes run "hardened" operating systems, and the Kubernetes control plane might be patched and managed, but the worker nodes are vulnerability theme parks where your containers run alongside ancient system components that haven't been meaningfully updated in years.
Lambda Layers let you share code across functions, creating a distributed vulnerability management nightmare. Popular layers contain:
FFmpeg builds from 2019 with 30+ known vulnerabilities
ImageMagick compiled with "security features disabled for performance"
Python packages with pinned dependencies from the previous decade
Node.js native modules compiled against OpenSSL versions that predate modern TLS standards
Layer maintainers rarely update dependencies because it might break existing functions. The result is a sharing economy for vulnerabilities where one popular layer can expose thousands of Lambda functions to the same security issues.
Cloud providers' managed services are vulnerability black boxes:
RDS instances run database versions with known vulnerabilities that "don't apply to managed environments"
ElasticSearch clusters use versions with security issues that are "mitigated by AWS networking controls" from the public internet - but what about the zero-friction load balancer entry points that hit them as if they were public?
The beautiful irony is that moving to "secure, managed cloud infrastructure" often means trading known, patchable vulnerabilities for unknown, unpatchable ones. At least when you managed your own servers, you could see what was broken and attempt to fix it. In the cloud, you get the security theater of "fully patched systems" running ancient, vulnerable software that you can neither inspect nor remediate - and which is, of course, compliantly unpatched.
Your compliance checklist shows green across the board while your actual attack surface includes decades of accumulated technical debt, courtesy of your cloud provider's commitment to stability over security.
Terraform modules are dependency launchers disguised as infrastructure code. Your main.tf references a module that provisions EC2 instances with AMIs that contain who-knows-what software. The Terraform Registry module you're using was last updated in 2019 and references container images from Docker Hub accounts that no longer exist.
Cloud provider modules are particularly delightful. That AWS EKS module? It automatically deploys the AWS Load Balancer Controller, which runs specific versions of container images, which contain specific versions of Go libraries, none of which appear in your Terraform files.
Your cloud-init.yaml files are manifest-free zones where dependencies are... what are dependencies again?
runcmd:
  - curl -fsSL https://get.docker.com | sh
  - docker run --rm $(curl -s https://api.vendor.com/latest-image)
  - pip install --upgrade $(wget -qO- https://vendor.com/requirements.txt)
Every cloud-init script is a potential Pandora's box of untracked dependencies. They execute at boot time, often with root privileges, downloading and installing software from the internet with no version controls, integrity hash checks, or vulnerability tracking.
YUM/DNF (Red Hat ecosystem): Your yum install httpd command doesn't just install Apache. It pulls in 37 dependencies, some of which have optional sub-dependencies that get installed based on what other packages are already present. The yum history command will show you what was installed, but good luck mapping that back to which Dockerfile or Ansible playbook triggered it.
APT (Debian/Ubuntu ecosystem): apt-get install operations modify /var/lib/dpkg/status in ways that make dependency tracking a nightmare. Package post-installation scripts can download additional software.
PKG (FreeBSD): FreeBSD's pkg system has ports that compile from source, and the compilation process can pull in build dependencies that remain on the system. Your "minimal" FreeBSD container somehow contains development headers for packages you've never heard of.
And then there's vendoring… the practice of copying third-party code directly into your repository. Vendored dependencies are the ghosts in your codebase:
Go vendor directories: Your vendor/ folder contains copies of dependencies with their own dependencies. A CVE in a sub-sub-dependency might affect you, but it won't show up in your go.mod file. Ask me how I know: I write a lot of Go, but raising this with any other Go dev makes me sound like I spat in their face and hate Go. It's a cult, and Go devs are the most dogmatic bunch I have ever encountered; they're even worse if they also do pentesting, with that ego of breaking stuff and complaining it never gets fixed despite never coherently explaining what they found. I digress... the dogma in Go is absurd.
Node.js bundled packages: That bundle.js file? It's webpack's greatest spew of minified, concatenated code from 400+ npm packages. Good luck figuring out which specific version of lodash is embedded in that 2MB blob instead of being declared in a package manager's manifest.
Copied source files: Developers copy individual files from, well, random anywhere. That "simple utility function" might be vulnerable code from a package that would have been flagged if it had been declared through a package manager.
Git submodules: Submodules pin to specific commits, not versions. The commit you're using might be vulnerable, but there's no package manager metadata to query. Despite being my preferred way to manage git repositories, it is extremely frustrating to do any kind of vulnerability management!
Here's the REALLY uncomfortable truth! Most of the code in your production systems never appears in any manifest file or gets surfaced by scanners... why?
System libraries: Your application links against libc, OpenSSL, and dozens of other system libraries. These come from your base OS image and aren't tracked in language-specific manifests.
Statically compiled binaries: That Go binary you downloaded? It contains copies of all its dependencies compiled into a single file. A vulnerability in any of those embedded libraries affects you, but no manifest will ever show it. The binary can carry rich metadata, but it bloats the file and is most often stripped out on purpose.
Browser JavaScript: Your web application loads JavaScript libraries from CDNs at runtime. The version that gets loaded depends on when your users visit the page and what the CDN is serving. This has definitely never been abused! (We of course know it is abused often.)
So you decide to fingerprint every single file in your deployed systems. Congratulations, you've chosen the path of maximum suffering. File-based vulnerability detection tries to match file hashes, magic numbers, or string patterns against vulnerability databases.
This approach fails spectacularly because:
Compiled binaries change based on compiler flags, optimization levels, and build environments; most binaries are not reproducible, so your build produces distinct checksums that nobody but you has ever seen
Minified/compressed files have different hashes than the source versions you're looking for
Patched files might have local modifications, like the time the file was created, that change fingerprints too
Dynamic linking means vulnerable code might be loaded from shared libraries at runtime that you can never match, no matter how much you want to
I've seen fingerprinting tools confidently report that a system contains 147 different versions of OpenSSL because they detected string patterns in log files, documentation, and source code comments without file extensions, so they looked like binaries to a dumb scanner that was coded like a Word-doc user instead of a software developer.
Even perfect fingerprinting only tells you what files are present, not whether those files are actually used by your application. That vulnerable function might exist in your binary, but if it's never called, does it really represent a risk? (your security tools will still report it as critical, good luck saying they are wrong and you are right)
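For completeness, here's roughly what the file-fingerprinting approach boils down to - a minimal sketch, assuming you somehow obtained a feed of known-bad SHA-256 hashes (the KNOWN_BAD_SHA256 set below is an empty placeholder, not a real source). Every limitation listed above applies: a locally rebuilt or backport-patched binary simply won't produce a hash anyone has ever published.

import hashlib
from pathlib import Path

# Placeholder: in practice this would come from some vendor feed of
# hashes for known-vulnerable files. Locally built binaries won't match.
KNOWN_BAD_SHA256: set[str] = set()

def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 without loading it all into memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def fingerprint_scan(root: str):
    """Yield files whose bytes match a known-bad hash. A hit proves the
    bytes are present, not that the vulnerable code is ever executed."""
    for p in Path(root).rglob("*"):
        if p.is_file() and sha256_of(p) in KNOWN_BAD_SHA256:
            yield p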
STEP 1 is wild, right!?
The manifest hunting game is rigged from the start. By the time you've tracked down dependencies across all these systems, new ones have been deployed, containers have been updated, and someone has definitely run another curl | bash command somewhere in your infrastructure.
I once worked with a "simple" microservice that had 47 different manifest files across 12 different package managers. The DevOps engineer who documented this achievement left the company shortly after that "work" became their job to do for the remaining 4,000 repositories. Coincidence? I think not.
Step 2: SBOM Generation Done Right
So you know what to feed into an SBOM, great!
Use CycloneDX utilities to generate actually good SBOMs that capture the entire dependency tree, not just the dependencies your developers remember adding. This is where you discover that your "lightweight" React app somehow depends on 8,247 packages, including three different implementations of left-pad forks.
The CycloneDX tools will cheerfully generate a 50MB JSON file that maps every transitive dependency back to the heat death of the universe. You'll learn that your simple REST API somehow depends on a package for reading QR codes, which depends on an image processing library, which depends on a legacy C library that was last updated when flip phones were cutting-edge technology.
Maybe.
Remember all the places things live that aren't useful to you for automation? Yeah, that's hand-crafted SBOM data now. Good luck.
Even when you aren't hand-crafting an SBOM like you do your OpenAPI specs like a caveman, if the SBOM generation takes longer than your actual build process, you know you're in dependency hell.
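Once a CycloneDX JSON SBOM exists (however you generated it), the useful bit for the next step is the list of PURLs. A minimal sketch, assuming the standard top-level components array; nested components and the metadata.component entry are left out for brevity:

import json

def purls_from_cyclonedx(sbom_path: str) -> list[str]:
    """Extract the package URLs (PURLs) from a CycloneDX JSON SBOM."""
    with open(sbom_path) as f:
        bom = json.load(f)
    return sorted({
        component["purl"]
        for component in bom.get("components", [])
        if component.get("purl")
    })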
Step 3: The OSV.dev Reality
So you have the power of a CycloneDX SBOM and highly accurate dependency data, finally! Congratulations, you have just saved $$$ on SCA tools that don't come close to this level of accuracy. Use their license fees to donate to the open source you consume, which is what those SCA tools were scanning anyway.
Back to the task at hand. Query the OSV.dev API using each PURL (Package URL) from your SBOM. This is more accurate than most SCA vendors, who are still partying like it's 1999 with a CPE-based approach whose false positive rate would embarrass a magic 8-ball.
Here's where the fun begins. OSV.dev will return results for packages you didn't know existed. That innocent-looking @types/node dependency? It has 47 security advisories. The utility library you added "just for one function"? It's apparently a known cryptocurrency miner.
The API calls will take approximately forever because you're querying for every single package in your bloated dependency tree. I've seen SBOM queries that required 3,000+ API calls to OSV.dev. One particularly masochistic team automated this with awesome parallelism and promptly got rate-limited for what OSV.dev's monitoring systems probably flagged as a DDoS attack.
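A minimal sketch of that query loop, using OSV.dev's documented querybatch endpoint, chunked and throttled so you don't become the next entry in their abuse logs. The batch response only carries vulnerability IDs; fetching full records per ID (https://api.osv.dev/v1/vulns/{id}) is another pile of calls on top. The helper name and the one-second sleep are my choices, not anything OSV prescribes.

import json
import time
import urllib.request

OSV_QUERYBATCH = "https://api.osv.dev/v1/querybatch"

def osv_vulns_by_purl(purls: list[str], chunk_size: int = 100) -> dict[str, list[str]]:
    """Map each PURL to the OSV vulnerability IDs that affect it."""
    hits: dict[str, list[str]] = {}
    for i in range(0, len(purls), chunk_size):
        chunk = purls[i:i + chunk_size]
        body = json.dumps(
            {"queries": [{"package": {"purl": p}} for p in chunk]}
        ).encode()
        request = urllib.request.Request(
            OSV_QUERYBATCH, data=body,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(request) as response:
            results = json.load(response)["results"]
        for purl, result in zip(chunk, results):
            hits[purl] = [v["id"] for v in result.get("vulns", [])]
        time.sleep(1)  # crude politeness; see the rate-limiting anecdote above
    return hits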
Step 4: The Great CVE Filter
Take the OSV results and filter out anything that's not a CVE (security, hahahahaha). Prepare for an existential crisis as you watch 70-90% of your security findings evaporate, because that was the task, CVE only, right? This should tell you something profound about how incomplete the knowledge of the industry's "talking heads" really is around vulnerability management, parroting one another about CVEs, compared to the actual known vulnerabilities in the ecosystem. Anywho…
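Mechanically, the filter itself is trivial, which is exactly what makes the evaporation so uncomfortable. A sketch that keeps only identifiers that are themselves CVE IDs; a stricter pass would pull each OSV record and harvest CVE IDs from its aliases field too.

def cve_only(vulns_by_purl: dict[str, list[str]]) -> dict[str, list[str]]:
    """Drop every GHSA-*, PYSEC-*, RUSTSEC-*, etc. finding and keep
    only identifiers that start with CVE-. Everything discarded here
    is still a real, numbered vulnerability - just not a CVE."""
    filtered: dict[str, list[str]] = {}
    for purl, ids in vulns_by_purl.items():
        cves = sorted({vuln_id for vuln_id in ids if vuln_id.startswith("CVE-")})
        if cves:
            filtered[purl] = cves
    return filtered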
What remains after filtering looks like this:
Before filtering: 2,847 potential security issues
After CVE filtering: 312 actual CVEs
After checking the vulnerable versions, correctly: 103 unique CVEs
After removing duplicates from shared repositories: 49 unique CVEs
After removing CVEs for packages you don't actually use: 11 CVEs
After removing CVEs that don't actually affect your code: 3 CVEs
You'll spend most of your time explaining to management why the security dashboard went from "CRITICAL: 2,847 VULNERABILITIES" to "Informational: 3 items for review." They'll assume you broke something. You didn’t.
That is the quality of this ecosystem today.
Having 3 INFORMATIONAL remaining CVEs does not mean you have no problems; it means you have no known vulnerabilities that could be detected.
Now, pick just one random package. It doesn't matter which, because so far I have had a 100% hit rate on this little task.
Go look at the package's issue tracker and search for key terms like security, exploit, vuln, etc. Realise there are far more unnumbered vulnerabilities than will ever get a CVE. They are found, fixed, and never reported. Guess how I know? I found them and used them in my pentesting days. It wasn't even that uncommon; I heard the same from many peer testers, often.
A CVE is a snowflake that landed on the tip of the iceberg of knowable vulnerabilities. The tip of the iceberg was all the numbered issues you filtered out because they did not start with CVE.
Step 5: NVD is Dead, Long Live CVE.org
Use the CVE ID to look up the CVE.org API directly, because NVD is essentially a digital graveyard at this point. The transition from NVD to CVE.org is still an ongoing saga for security scanners; few even know they need to make it, and many have it sitting in a backlog.
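A sketch of the lookup itself, against the CVE Services record endpoint (cveawg.mitre.org at the time of writing - verify before you depend on it). The defensive .get() chain exists because the CVE JSON 5.x layout varies wildly between records.

import json
import urllib.request

CVE_RECORD_API = "https://cveawg.mitre.org/api/cve/{cve_id}"

def fetch_cve_record(cve_id: str) -> dict:
    """Fetch a CVE record (CVE JSON 5.x) straight from CVE.org's services."""
    with urllib.request.urlopen(CVE_RECORD_API.format(cve_id=cve_id)) as response:
        return json.load(response)

def cna_description(record: dict) -> str:
    """Best-effort extraction of the English CNA description, if any."""
    descriptions = record.get("containers", {}).get("cna", {}).get("descriptions", [])
    for entry in descriptions:
        if entry.get("lang", "").lower().startswith("en"):
            return entry.get("value", "")
    return descriptions[0].get("value", "") if descriptions else ""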
The CVE.org API will return JSON that looks like it was designed by someone who really, really loves nested objects. You'll discover that CVEs have:
Multiple CVSSv3 scores (because one score isn't confusing enough)
References that lead to 404 pages (the internet never forgets, except when it does)
Descriptions written by people who apparently learned English from reading assembly code comments. They have words, technically, but the words mean little unless you've read CWE descriptions hundreds of times (and even having done that, I still struggle to comprehend CVE descriptions)
My favorite CVE description I've encountered: "A vulnerability exists." That's it.
That's the entire description. It's like a vulnerability haiku I wrote once.
Step 6: Vulnerable Paths
Analyze the vulnerable paths if they exist in the CVE data (spoiler: they usually don't; 95% of them are empty), or attempt to derive them from the text description. One by one.
"An issue exists in the handling of user input in versions prior to 2.1.3 when certain conditions are met."
What are those "certain conditions"? Nobody knows. Which user input? All of it, apparently. What specific code path is vulnerable? The CVE author suggests you "see the commit for details," but the commit message is "fix bug." The actual diff shows 47 files changed with no comments explaining which change fixed the security issue.
You'll spend hours tracing through commit histories, reading changelogs that say things like "various security improvements," and trying to reverse-engineer vulnerability details from patch files that change seemingly random lines of code.
Or am I the first to actually follow these threads and do this work in all of the history of vulnerability management and triage?
Step 7: Version Voodoo
So you want to analyze the vulnerable versions (again, if available), or rather attempt to reconstruct version information from cryptic text descriptions. Pre-2021 CVE data is particularly "helpful" in this regard.
CVE version information is an art form of deliberate obfuscation. You'll see:
"Versions prior to 2.1.3" (but 2.1.0-rc.42 and commit a3b3x30c203a0ax023bax0bc3 is vulnerable)
"All versions" (except 1.0.0, which predates the vulnerability)
"See vendor advisory" (vendor advisory says "see CVE")
CVEs are special.
They'll reference version ranges using notation that would confuse a mathematics professor. I once spent three days, coming back to one of these again and again, trying to figure out whether version "2.0.0-rc1+build.123.git.abc123" was vulnerable according to a CVE that said "versions 2.x before 2.0.1 are affected," after seeing the PoC exploit work while the scanners found nothing.
I learned two things that day: scanners suck, and exploit code is king.
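Once you've translated the CVE prose into an unambiguous range (which is the actual three-day job), checking a weird version string against it takes milliseconds. A sketch using the packaging library and PEP 440 semantics - which is itself an assumption, since plenty of ecosystems don't version that way.

from packaging.specifiers import SpecifierSet  # pip install packaging
from packaging.version import InvalidVersion, Version

def is_affected(version_str: str, affected_spec: str) -> bool:
    """Check a version string against a range like ">=2.0.0rc1,<2.0.1".
    Translating "versions 2.x before 2.0.1" into that specifier -
    including whether release candidates count as "2.x" - is the
    judgment call the CVE text dumps on you."""
    try:
        version = Version(version_str)
    except InvalidVersion:
        return True  # can't parse it? assume affected and investigate by hand
    return version in SpecifierSet(affected_spec, prereleases=True)

# The build that took three days by hand:
print(is_affected("2.0.0-rc1+build.123.git.abc123", ">=2.0.0rc1,<2.0.1"))  # True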
Step 8: Patch Availability Reality Check
The moment truth hits
Check if patched versions are actually available in the registries you use. This is where your carefully curated list of "fixable" vulnerabilities meets the brutal reality of software distribution.
You'll discover:
The patch exists in git but was never tagged as a release
The release exists but wasn't published to your package registry
The release was published but your corporate proxy hasn't synced it yet
The release exists in the registry but breaks everything else in your application
The patch was reverted three hours after release due to "unforeseen issues"
Most organizations maintain 20-50 or sometimes over 70 distinct security tools, yet 82% of breaches still occur through mismanagement of known vulnerabilities. The disconnect becomes clear when you realize that "patch available" and "patch installable" are completely different concepts. Segue…
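A per-ecosystem reality check helps here. The sketch below asks PyPI's JSON API whether the "fixed" version has actually been published with installable files; npm, Maven, and especially your corporate proxy each need their own equivalent, and the helper name is mine.

import json
import urllib.error
import urllib.request

def pypi_has_release(package: str, fixed_version: str) -> bool:
    """Is the fixed version actually downloadable from PyPI?
    A version key with zero files usually means a yanked or ghost release."""
    url = f"https://pypi.org/pypi/{package}/json"
    try:
        with urllib.request.urlopen(url) as response:
            releases = json.load(response).get("releases", {})
    except urllib.error.HTTPError:
        return False  # the package (or your proxy's mirror of it) isn't there at all
    return len(releases.get(fixed_version, [])) > 0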
Step 9: Version Lock Inspection
Determine if you or anything in your dependency tree has version locking that prevents updates. Welcome to dependency prison, where every package is a warden.
Version locks appear in the most creative places:
Hard-coded version pins in requirements files (because "it worked on my machine")
Docker base images locked to specific tags (because the latest tag burned them once)
Transitive dependencies with incompatible version requirements (A needs B>=2.0, C needs B<1.9)
Corporate policies that require approval for any dependency updates (good luck getting that meeting scheduled)
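Finding the most obvious wardens is at least scriptable. A rough sketch that only catches exact '==' pins in requirements-style files - lockfiles, Docker image tags, and conflicting transitive constraints all need their own hunts.

import re
from pathlib import Path

PIN_PATTERN = re.compile(r"^\s*([A-Za-z0-9._-]+)\s*==\s*([^\s;#]+)")

def find_hard_pins(repo_root: str) -> list[tuple[str, str, str]]:
    """Return (file, package, pinned_version) for every exact pin found."""
    pins: list[tuple[str, str, str]] = []
    for req_file in Path(repo_root).rglob("requirements*.txt"):
        for line in req_file.read_text(errors="ignore").splitlines():
            match = PIN_PATTERN.match(line)
            if match:
                pins.append((str(req_file), match.group(1), match.group(2)))
    return pins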
I've seen codebases where updating a single vulnerability required getting approval from seven different teams because the version bump might affect their integration tests. The vulnerability was a critical RCE. The approval process took four months, and only moved that fast through sheer hair-on-fire escalation on my part, because the exploit was one line I wrote from memory after reading the description - a whole new meaning to TRIVIAL.
This specific example actually COULD NOT BE PATCHED at all, at least at first. Why?
Step 10: Cascade Failure Analysis
Verify that each package in the tree applies patches correctly throughout the entire chain. This recursive validation is dangerous code if there's any opportunity for an unbounded code path, and anything recursive demands respect and careful attention.
This is where you discover the true horror of modern software dependency management. Your security fix needs:
Library A to update to version 2.1 (fixes the CVE)
Framework B to accept Library A v2.1 (currently pinned to 1.x)
Framework B to release a new version (the maintainer is an anonymous online handle with a yopmail email address)
Your application to update Framework B (breaks 12 other things)
Those 12 other things to be compatible with the new Framework B (they're probably not)
Lucky for me, I am known as the Jenga master at home.
Pulling out one block brings down the entire tower. You'll create dependency update branches that touch 200+ files, break half your tests, and get rejected in code review because "this seems risky," despite the trivially exploitable vulnerability being the undeniably risky thing to ignore here.
Two more months later, you'll still be running the vulnerable version because updating it requires architectural changes that would take a full quarter to implement properly.
The final insult? While you're locked in this dependency management nightmare, the CVE you're trying to fix gets added to active exploit kits and frameworks like Metasploit. Your security scanning tools continue to alert you daily about this "easily fixable" critical vulnerability, blissfully unaware of the Kafkaesque update hell you're trapped in.
And this, dear reader, is why the simple question "what CVEs have no patches?" opens a portal to madness. The patches exist, technically speaking.
Whether you can actually apply them is an entirely different circle of digital hell.
Exploitability?
Notice how we haven't even addressed the "exploitable ones" part of the original question? That's because exploitability prediction is like weather forecasting, but for chaos.
The key lies in understanding context rather than counts. When organizations shift their metrics to focus on business risk rather than vulnerability counts, they often discover entire business-critical systems operating outside their security controls.
For exploitability predictions, just use EPSS (Exploit Prediction Scoring System) and Coalition ESS. They're free, "good enough," and will save you from the existential crisis of trying to manually assess exploitability for thousands of CVEs. Not to mention the rest of our Vulnerability “Iceberg”.
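Pulling EPSS scores is mercifully simple compared to everything above. A sketch against FIRST.org's public EPSS API (endpoint correct at the time of writing); it returns the modelled probability of exploitation in the next 30 days.

import json
import urllib.request

EPSS_API = "https://api.first.org/data/v1/epss?cve={ids}"

def epss_scores(cve_ids: list[str]) -> dict[str, float]:
    """Map each CVE ID to its EPSS probability (0.0 to 1.0)."""
    url = EPSS_API.format(ids=",".join(cve_ids))
    with urllib.request.urlopen(url) as response:
        payload = json.load(response)
    return {row["cve"]: float(row["epss"]) for row in payload.get("data", [])}

# e.g. epss_scores(["CVE-2021-44228"]) comes back very close to 1.0 for Log4Shell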
Modern applications typically consist of 80-90% consumed code and only 10-20% custom-written code.
The real issue isn't finding unpatchable CVEs – it's that the entire system is fundamentally broken. Only about one in a million security findings becomes a numbered advisory, leaving most organizations vulnerable to issues their peers have already addressed.
We're essentially playing a game where:
The rulebook is incomplete. Someone once said there should be a rulebook, I think; a poser may have published one full of LLM content on Amazon with a "doctor"-saluted co-author. Not specific enough for you? Should I share their name too? Got it now?
The scoring system is arbitrary. Or rather, you used a score from someone who doesn't know you, your organization, or how you built your systems - because of course such a score is perfect for you to use. Oh, did I mention they were incentivized to make the score as high as possible, for bragging rights or a bounty payout? No? That's because they often didn't bother scoring at all: the default CVE score is 9.8 Critical if all you do is claim high impact in just one of the Confidentiality, Integrity, or Availability metrics - just one - and you get a 9.8 with near-zero effort.
Most of the pieces are missing. This has been described already ad nauseam; just remember, you are "vulnerable" if the tool matches a CPE. The tools don't actually have a clue whether you are really vulnerable, not even slightly, and they do not care to do that work - that's your job.
Everyone pretends this is fine
The Path Forward
Instead of chasing unpatchable CVEs, perhaps it's time to focus on what actually matters: understanding which business outcomes define success, mapping technical metrics to business value, identifying coverage gaps, and measuring improvement in ways meaningful to stakeholders.
The future of vulnerability management isn't about finding more vulnerabilities – it's about building systems that automatically understand context, prioritize based on actual business risk, and focus human attention where it can make a real difference.
But if you absolutely must have that list of unpatchable CVEs, well... now you know how to do that.
Go forth and list them, and please share how you did it publicly so I am not the only one to do so.