The Inflation of "Security Researchers" and Its Consequences for Open Source

March 31, 2025 — Security, Web

As an open-source maintainer, I deeply appreciate the importance of cybersecurity. Security is a shared responsibility — both for users who rely on software to be secure and for developers who build and maintain open-source projects. Responsible vulnerability reporting strengthens the ecosystem, helping us all build better, safer software. However, in recent years, the term "security researcher" has been stretched to the point where it is becoming counterproductive.

The Problem with False Positive CVEs

Security should always be taken seriously. When genuine vulnerabilities are discovered and responsibly reported, it leads to stronger software. However, the rise of false positive CVEs (Common Vulnerabilities and Exposures) is eroding trust in the entire system. When reports are filled with exaggerated, misclassified, or outright incorrect vulnerabilities, security research turns from a net positive into a liability.

The more false positives flood the system, the harder it becomes to separate real threats from noise. If every minor misconfiguration, theoretical issue, or misunderstanding of an application’s context results in a CVE, security teams — and open-source maintainers like me — will struggle to prioritize what truly matters. This leads to a dangerous situation: if everything is a "critical" vulnerability, nothing is.

A second, equally concerning risk is that the increasing burden of handling false or exaggerated vulnerability reports could discourage open-source maintainers from continuing their work. Many of us develop and maintain open-source projects in our free time, often without compensation. When the combination of bad-faith reports, inflated CVE claims, and a lack of appreciation becomes overwhelming, it’s not hard to imagine that some maintainers might choose to step away from their projects. If that happens on a larger scale, the entire open-source ecosystem could suffer — not because of real security threats, but because the noise is drowning out legitimate concerns.

Understanding the Domain Before Reporting

One key issue in today's security research landscape is the lack of context in vulnerability reporting. Not all software is the same, and not all theoretical vulnerabilities are real-world threats.

A security vulnerability in a desktop application may not be relevant in a cloud-based system. A theoretical attack vector that requires unrealistic conditions — like admin access already being compromised — may not be a true risk at all. Yet, reports often do not consider these nuances. As a maintainer of Automad, a content management system, I have seen reports that claim a severe risk without understanding how the system actually works in a real deployment.

Without proper domain knowledge, security researchers risk creating unnecessary panic over non-issues. Worse, this can lead to maintainers being overwhelmed with bad reports, which in turn slows down the response to actual security concerns.

A Real-World Example

A perfect example of this issue happened last year when I received multiple CVE notifications about a supposed cross-site scripting (XSS) vulnerability in Automad. The reports claimed that a non-sanitized input field could be exploited to inject JavaScript. However, these reports completely misunderstood the nature of the project. Automad is designed as a single-user content management system, meaning there are no user sessions to steal, and the only person with access is the site owner — who already has full control over the server. While it is possible to add other trusted collaborators, Automad does not include role-based access management or a permission system — this is intentional. As a minimalistic CMS, it is designed for simplicity rather than complex user management. This fact alone eliminates any meaningful attack vector for XSS.

Even more ironic is that the actual purpose of this field is to modify the template, which includes the ability to add JavaScript. It is an essential feature, not a vulnerability. Filtering out JavaScript in this context would make Automad less functional for its intended users. Yet, these reports reflect an extremely low intellectual quality, reducing the complexity of web security to an oversimplified "JavaScript = bad" narrative. This kind of thinking ignores the fact that JavaScript is a fundamental part of modern web development and that context matters. Not every instance of JavaScript is a security risk, and not every field that allows JavaScript input is a vulnerability. This level of reasoning results in false reports that waste time and distract from real security threats.
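
To make the distinction concrete, here is a minimal sketch (written in TypeScript purely for illustration; it is not Automad's actual code, and the function names are hypothetical). It shows where escaping actually belongs: untrusted visitor input is escaped before rendering, while content authored by the site owner, such as a template field, is rendered verbatim, because the owner already controls the server and there is no privilege boundary to cross.

```typescript
// Minimal sketch, not Automad's implementation: the security boundary is
// who authors the content, not whether JavaScript can appear in it.

// Untrusted input (e.g. a visitor-submitted comment) must be HTML-escaped
// before it is rendered into a page.
function renderUntrustedInput(input: string): string {
  return input
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;");
}

// Trusted content authored by the site owner (e.g. a template field) is
// rendered as-is. Allowing <script> here is a feature: the author already
// has full control over the server.
function renderOwnerTemplate(template: string): string {
  return template;
}

// The same string is a threat in one context and a feature in the other.
const payload = '<script>alert("hi")</script>';
console.log(renderUntrustedInput(payload)); // escaped, inert markup
console.log(renderOwnerTemplate(payload));  // intentional, executable template code
```

A report that flags the second case as an XSS sink mistakes an intentional trust decision for missing sanitization.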

Following the logic of these reports, one could argue that all static websites on all servers are, by definition, a security threat — since they allow JavaScript to be embedded freely. But that would be an absurd claim. An admin is an admin, and with those privileges comes responsibility. If we start labeling every instance of an admin performing admin tasks as a security flaw, we will render security research meaningless. Despite this, the CVEs were still registered, adding unnecessary noise to security databases and demonstrating how poorly contextualized vulnerability reports can lead to misleading conclusions. The situation escalated to the point where even INCIBE, Spain's national cybersecurity institute, reached out and asked me to respond — further illustrating how these flawed reports can create unnecessary bureaucracy and pressure on open-source maintainers. It was clear that nobody involved had taken a proper look at the issue, or that those who had were simply unable to understand the affected code.

Fortunately, at least the GitHub Security Advisory team reviewed the reported CVEs listed there and, after assessing the claims, decided to withdraw most of them (CVE-2024-40111, CVE-2023-7036, and CVE-2023-7035). This demonstrates that not all security platforms blindly accept reports and that careful review processes can help mitigate the spread of misleading vulnerability claims. However, the fact that these CVEs were published in the first place still highlights a major problem: maintainers must invest time and effort into defending their projects against false positives rather than focusing on actual development and security improvements.

The Data: False Positives on the Rise

Studies and industry reports indicate that a significant percentage of reported CVEs are now false positives. This is a troubling trend because, instead of strengthening security, it weakens it. When organizations and maintainers are forced to sift through countless invalid reports, real threats can be missed.

For example, a study by JFrog found that 78% of reported CVEs in popular DockerHub images were not actually exploitable [source]. Similarly, discussions in the open-source community have highlighted how many CVEs are assigned without proper validation, leading to unnecessary panic and wasted resources [LWN article].

A well-documented case that illustrates this problem is CVE-2020-19909, which was registered years after the fact and initially scored as a "critical" vulnerability in Curl, despite being a long-fixed, non-exploitable bug. This case exposes the systemic flaws in how CVEs are assigned and scored [Daniel Stenberg’s blog][Hacker News discussion].

This is not just an open-source problem — it affects the entire cybersecurity field. If security research continues to focus on quantity over quality, the long-term effect will be a loss of credibility. Users, developers, and security teams will start ignoring vulnerability reports altogether, assuming they are just more noise. This is the exact opposite of what cybersecurity research should achieve.

The Way Forward

For security research to be effective, we need to emphasize responsible disclosure and domain expertise. Researchers should take the time to understand a system before reporting vulnerabilities. The industry needs better guidelines to filter out false positives before they are assigned CVEs. Open-source maintainers, in turn, need to be given the space to address real issues rather than constantly responding to invalid reports.

Cybersecurity is too important to be diluted by low-effort, context-free reports. If we want to improve security, we must ensure that vulnerability reports are meaningful, accurate, and relevant. Otherwise, we risk making the internet a less secure place — despite the best of intentions.
