Beyond CVE: Building a Decentralized Future for Vulnerability Intelligence
From Numbered Databases (a Security Illusion) to Real-Time Security Information Sharing, High-Confidence Remediation, and Gamification
Josh Bressers' recent article "Can we trust CVE?" artfully dissects the fragile state of vulnerability management following CVE's funding crisis. His conclusion—that we simply cannot trust CVE anymore—rings painfully true. But where Josh ends with uncertainty, I see an opening for fundamental transformation that goes beyond merely replacing one centralized system with another.
The Numbered Vulnerability Illusion
The current ecosystem of vulnerability identification suffers from what I call the "numbered vulnerability illusion"—the false belief that simply assigning identifiers to vulnerabilities somehow makes us safer. Whether it's CVE, OSV, EUDB, CSA GSD, or GCVE, a numbered database alone doesn't solve the core problem.
Consider what happens when a vulnerability is discovered:
Someone finds a security issue: maybe a pentester, maybe an engineer running performance benchmarks
It gets assigned a number, maybe, if it's lucky (only about 1 in a million findings ever receives a formal identifier), and only after plenty of disclosure drama that often prevents embarrassing assignments (ask me how I know)
Vendors scramble to write detection rules for their scanners, if they even notice that the numbered issue is a candidate for their tool to cover (thousands go unnoticed for years)
Organizations run those scanners and discover they're vulnerable, often with high false-positive rates and long exposure windows between discovery and scanner alert
Everyone duplicates the same investigation work independently, often across teams within the same organisation
“This process is fundamentally broken at scale.” Sean Marshall, CEO of Vulnetix - Summer'25 Demo Day, Sydney, May 1st 2025
As Steve Springett correctly points out in his OWASP call to action, the system was designed for a different era—when software was primarily commercial, automation was in its infancy, and the pace of development was measured in years rather than minutes.
Merely decentralizing the numbering system won't fix this. If we examine vendors who have already shifted to alternative systems like OSV (an API for a database of independent data, not decentralized), we don't see dramatic improvements in efficacy, speed, or coverage.
If there were substantial benefits, wouldn't vendors like Aqua (Trivy), who openly use OSV data, or Snyk, who remain non-transparent on such topics, be trumpeting those advantages in their marketing materials? "Switching to OSV improved our speed, efficacy, and coverage by 9999%!" No. They saw no change in these core problems, because the flaw is the system itself: one that relies on numbered vulnerabilities.
The Continued Value and Limitations of Numbered Vulnerabilities
Vulnerability numbering systems like CVE have provided crucial standardisation for security communications. When professionals reference CVE-2021-44228 (Log4Shell), there's immediate clarity about which vulnerability is being discussed. This standardisation remains valuable and should not be discarded entirely.
However, these systems suffer from structural limitations:
Coverage gaps: According to NIST's own analysis, less than 5% of known software vulnerabilities ever receive a CVE identifier.
Temporal delays: Historical examples demonstrate this issue—Log4Shell was identified and remediated by multiple organisations nearly two years before receiving a CVE identifier and becoming widely known.
Scalability challenges: The manual processes underlying CVE assignment cannot keep pace with modern development velocity. As noted in CISA's 2024 "State of Software Security" report, modern applications are composed of 80-90% third-party code, creating a combinatorial explosion of shared vulnerable applications.
Detection-remediation disconnect: Current vulnerability management separates detection (the domain of scanners and databases) from remediation (left to individual organisations), causing duplicated investigation efforts.
Merely decentralising the numbering system, whether through OSV, EUDB, or other initiatives, addresses the governance challenge but not these fundamental limitations. Even the governance solution itself is, in concept, the same as it has always been.
The Real Problem: Information Sharing, Not Numbering
The real challenge isn't assigning unique identifiers—it's sharing actionable vulnerability intelligence in real-time across organizational boundaries. It's ensuring that when one team discovers and fixes a vulnerability, that knowledge instantaneously benefits everyone else using the same code.
Take Log4Shell, which Josh mentioned. This catastrophic vulnerability was discovered and fixed by several organizations nearly two years before it received a CVE identifier and became widely known. Imagine if those early discoveries had been automatically shared across the ecosystem: how many breaches might have been prevented?
The Transparency Exchange
What we need isn't another numbered database, centralized or otherwise—it's a fundamentally different approach to vulnerability intelligence sharing. This is where the Transparency Exchange API (TEA) enters the picture.
TEA represents a (yes, I'll say it) paradigm shift: moving from centralized vulnerability databases to a distributed, privacy-preserving network of vulnerability intelligence. Unlike traditional approaches that rely on central authorities, TEA enables real-time, machine-readable sharing of vulnerability data across organizational boundaries while maintaining appropriate confidentiality.
The concept is elegantly simple: When one organization discovers a vulnerability through a penetration test, bug bounty program, or internal security review, that knowledge can immediately benefit every other organization using the affected code—without requiring manual intervention or duplicate investigations.
TEA implements several key technical safeguards:
Metadata separation: Public logs contain only cryptographic hashes of vulnerability details, not the details themselves
Cryptographic attestation: Digital signatures verify the source and integrity of vulnerability reports
Verifiable timestamps: Tamper-evident timestamps document when vulnerabilities were discovered and shared
Selective disclosure: Organisations control which details are shared publicly versus privately
This architecture allows for global visibility of vulnerability existence without compromising sensitive details—similar to how Signal's private contact discovery works.
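To make the metadata-separation and attestation ideas concrete, here is a minimal sketch in Python using the widely available `cryptography` package. All field names and the log-entry shape are illustrative assumptions of mine; TEA does not prescribe this exact code.

```python
import hashlib
import json
from datetime import datetime, timezone

from cryptography.hazmat.primitives.asymmetric import ed25519

# Full vulnerability details stay private to the discovering organisation.
# (Illustrative fields, not a TEA-mandated schema.)
finding = {
    "component": "pkg:maven/org.example/auth-service@2.3.1",
    "weakness": "SQL injection in login handler",
    "discovered": datetime.now(timezone.utc).isoformat(),
}

# Metadata separation: only a cryptographic hash of the canonicalised
# details is published, never the details themselves.
canonical = json.dumps(finding, sort_keys=True).encode()
public_digest = hashlib.sha256(canonical).hexdigest()

# Cryptographic attestation: sign the digest so anyone can later verify
# the source and integrity of the report.
private_key = ed25519.Ed25519PrivateKey.generate()
signature = private_key.sign(public_digest.encode())

# What a TEA-style public log entry could carry: a hash, a signature,
# and a timestamp (tamper-evidence comes from the log, not this dict).
public_log_entry = {
    "digest": public_digest,
    "signature": signature.hex(),
    "logged_at": datetime.now(timezone.utc).isoformat(),
}
print(public_log_entry)
```

Anyone holding the full finding can recompute the digest and prove it matches the logged entry, while outsiders learn only that *something* was attested at that time.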
There are, of course, many additional data sources needed to achieve this; some would appropriately remain commercial-in-confidence and never be subject to transparency. Public and open-source data are just pieces of the solution: working solutions will build upon TEA with proprietary methodologies that suit organisations' existing workflows and are tailored to each customer's unique context.
Ending the Duplicate Work Cycle
Consider this real-world scenario: A financial services company's application security team conducts a penetration test and discovers a critical SQL injection vulnerability in their authentication service. The remediation is not a change to the authentication service's code, but a change to how every consumer of that code works with the authentication service as a dependency. Currently, this finding remains siloed within that single team, or at best within their organisation if external parties were involved.
With a TEA-based approach:
The vulnerability is automatically documented in a standardized Vulnerability Disclosure Report (VDR) format
Privacy-preserving metadata about the vulnerability is published to the TEA network
Other organizations using similar code patterns are immediately notified
When a fix is developed and verified, that remediation intelligence is also shared as a VEX (a minimal sketch follows this list)
Teams across different organizations can confidently implement fixes without duplicating investigation efforts
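As an illustration of the VEX step above, a simplified CycloneDX-style VEX payload might look like the following, expressed here as a Python dictionary. The field names follow my reading of the CycloneDX vulnerability schema; consult the official specification before relying on this exact shape.

```python
# A simplified, illustrative CycloneDX-style VEX payload communicating
# that a known vulnerability has been remediated in a given component.
vex = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "vulnerabilities": [
        {
            "id": "CVE-2021-44228",
            "source": {
                "name": "NVD",
                "url": "https://nvd.nist.gov/vuln/detail/CVE-2021-44228",
            },
            "analysis": {
                # The consuming team learns not just "vulnerable" but
                # what was done about it and how.
                "state": "resolved",
                "response": ["update"],
                "detail": "Upgraded log4j-core to 2.17.1.",
            },
            "affects": [
                {"ref": "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1"}
            ],
        }
    ],
}
```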
This isn't just theoretical. These are already mature OWASP CycloneDX standards, with TEA on its way through ECMA TC54 (the same body that standardized JavaScript), to which I, the earlier-quoted Steve Springett, and many others have contributed to make this a reality.
Building Trust Through Verification
The most powerful aspect of this approach is how it transforms vulnerability management from a trust-based to a verification-based model. Instead of trusting that a vulnerability exists because someone assigned it a number, organizations can cryptographically verify the existence and impact of vulnerabilities through attestations.
These attestations create a chain of evidence that can be independently validated:
Who discovered the vulnerability
How it was verified
Which code paths are affected, even in complicated dependency graphs
What remediation was applied, because most open source is never patched
Whether the fix was effective, and in which context
This verification-based approach addresses Josh's core concern about trust. Rather than asking "Can we trust CVE?" (or any other centralized authority), we can shift to asking "Can we verify this vulnerability?" The answer, with proper cryptographic attestations, becomes a definitive yes.
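Here is a sketch of what "verify this vulnerability" could look like in practice, assuming the signing scheme from the earlier example. This is an illustrative function of my own, not a prescribed TEA API.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519


def verify_attestation(public_key: ed25519.Ed25519PublicKey,
                       digest: str, signature: bytes) -> bool:
    """Return True if the attestation over `digest` checks out.

    In a full chain of evidence, separate attestations of this form
    would cover discovery, verification, affected code paths, the
    applied remediation, and its effectiveness.
    """
    try:
        # Raises InvalidSignature if the digest was tampered with or
        # the claimed discoverer did not actually sign it.
        public_key.verify(signature, digest.encode())
        return True
    except InvalidSignature:
        return False
```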
The Recognition Economy
A fascinating side effect of this approach is the creation of what I call the "recognition economy" for security contributions. By tracking who discovers, validates, and remediates vulnerabilities, we can create powerful incentives for security professionals.
We have always had this for offense: bug bounty programs reward researchers, and having a CVE assigned carries prestige. But what about our defenders? Blue teams and software developers deserve the same recognition, right? They fix the problems.
Imagine a world where security engineers gain public recognition (if they opt in) for their contributions to fixing vulnerable code. Like a bug bounty leaderboard in reverse, but for our hero developers, this system could transform how organizations value and reward security expertise.
The Path Forward
To make this vision a reality requires three key components:
Standards adoption: Embracing formats like SARIF, xBOM, and OCSF for scanners (see the SARIF ingestion sketch after this list)
Infrastructure investment: Building out the TEA network for secure, privacy-preserving intelligence sharing
Organizational participation: Integrating these capabilities into existing security and software-maker workflows (like JIRA, ServiceNow, Teams, etc.)
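To illustrate the standards-adoption point, here is a small sketch that flattens a SARIF 2.1.0 log, as produced by any compliant scanner, into uniform finding records regardless of vendor. The function name and output shape are my own assumptions.

```python
import json


def load_sarif_findings(path: str) -> list[dict]:
    """Flatten a SARIF 2.1.0 log into simple finding records.

    SARIF's runs -> results structure is standard across compliant
    scanners, which is what makes vendor-neutral ingestion possible.
    """
    with open(path) as fh:
        sarif = json.load(fh)

    findings = []
    for run in sarif.get("runs", []):
        tool = run["tool"]["driver"]["name"]
        for result in run.get("results", []):
            findings.append({
                "tool": tool,
                "rule": result.get("ruleId"),
                "message": result["message"]["text"],
            })
    return findings
```

Because the shape is the same whichever scanner produced the file, swapping vendors does not mean rewriting the ingestion pipeline.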
The good news is that all three are already underway. The standards exist, early implementations of TEA are being deployed, and forward-thinking organizations are beginning to demand vulnerability reports in standardized formats from their vendors—primarily to avoid disparate data ingestion patterns and vendor lock-in for the vulnerability management program and its reporting requirements.
Practical Implementation Challenges
Any transformative approach faces significant implementation barriers that must be acknowledged:
Regulatory alignment
Many regulated industries operate under specific vulnerability management requirements:
PCI-DSS: Requires organisations to maintain an up-to-date vulnerability management program
HIPAA: Mandates regular security assessments for healthcare organisations
FedRAMP: Prescribes specific vulnerability scanning and remediation timeframes
A verification-based approach must demonstrate compliance with these frameworks. The structured, attestation-based nature of VDR and VEX documentation provides superior evidence for auditors compared to current approaches, but regulatory acceptance will require industry coordination.
Data sensitivity concerns
Organisations have legitimate concerns about sharing vulnerability information. This approach addresses these through:
Opt-in sharing models: Organisations control what information is shared
Anonymisation options: Critical details can be shared without identifying the source
Delayed disclosure: Timing controls allow organisations to patch before information becomes widely available
Chicken & Egg for Network Effects
Unlike previous security intelligence sharing initiatives that required broad ecosystem adoption to deliver value, this approach provides immediate benefits within individual enterprises. The network effect begins within the organisation itself, where the problem of duplicated security efforts is already acute.

Based on direct experience implementing these solutions within major financial institutions (including one of the "Big 4" Australian banks) and government agencies, the internal value proposition is compelling:
Immediate cross-team intelligence sharing: When the retail banking application security team identifies and remediates a vulnerability in a shared authentication component, that intelligence immediately benefits the wealth management, corporate banking, and insurance divisions using the same component.
Single source of remediation truth: Rather than each team independently investigating the same vulnerability in isolation, the first team to develop a solution creates a verified remediation pattern that other teams can confidently implement.
Quantifiable efficiency gains: Internal metrics from these implementations demonstrate that subsequent teams spend less time addressing vulnerabilities once a remediation pattern has been established by the first team.
Development velocity improvement: Vulnerability remediation is integrated directly into the developer workflow, reducing context switching and administrative overhead.
Reporting efficiencies: No more quarterly pushes that disrupt the entire organisation to collect evidence about security controls related to risk management, patching, and vulnerabilities. With cryptographically verifiable attestations as evidence, and all reporting in standardised formats that are resilient to changing vendors, there is no disruption at all: you can simply generate an evidence pack prepared in the correct format for each auditor.
The value then expands organically as organisations connect with trusted partners, subsidiaries, and eventually broader public and private ecosystem participants—but crucially, this expansion is built upon a foundation of proven internal value, not theoretical future benefits.
Conclusion: From Centralized Failure to Distributed Resilience
Josh is absolutely right that we can't trust CVE anymore—but that's actually liberating. Instead of trying to fix a fundamentally flawed centralized model or patch in a replacement with the same flaws, we can realize Steve’s dream to build something better: a distributed network of vulnerability intelligence that scales with our increasingly complex software ecosystem.
By shifting from centralized vulnerability numbering to distributed vulnerability intelligence sharing, we can transform how we manage software security. The future isn't about who maintains the database of vulnerabilities, or what happens if they disappear—it's about how quickly we can share knowledge about vulnerabilities and their remediation across organizational boundaries.
This isn't just theoretical: companies implementing these approaches today are seeing dramatic improvements in their security posture, with faster remediation times, lower administrative overhead, and more effective risk reduction. Just last week, the Australian government recognised me with an award for my 2024 work delivering a vulnerability management program across hundreds of developers and thousands of repositories.
The CVE crisis isn't the end of effective vulnerability management—it's the beginning of something much better.
This article draws on technical expertise from multiple standardisation efforts, including the OWASP CycloneDX project, ECMA TC54, and various industry initiatives aimed at improving vulnerability intelligence sharing. For organisations interested in implementing these approaches, multiple open-source implementations of these standards are available, including those supported by Vulnetix and other security vendors focused on next-generation vulnerability management.