
Why Most Security Alerts Are Noise (And How to Fix It)

Your SAST tool flagged 120 issues. Your team spent 20 hours triaging them. Fifteen turned out to be real. This is the problem of alert fatigue, and there is a way to fix it.
January 2026 · 9 min read
Jost
SAST · False Positives · Alert Fatigue · Application Security · DevSecOps · Semantic Analysis · Security Automation

Metric                           Value
False positive rate              87%
Team time wasted per scan        20 hours
Real vulnerabilities per scan    15
Cost wasted per scan             $2,000


87% is a number that should scare every engineering leader.

That's the false positive rate we measured when we ran traditional SAST tools across 10 open-source repositories: almost seven false alarms for every real vulnerability found.

The maths is brutal. If your security scanner reports 1,000 findings, only 130 of them are real. Your team spends days investigating phantoms while genuine vulnerabilities slip into production.

This is the problem of alert fatigue, and it's destroying trust in security tools.


The Real Price of False Positives

Let's make this real.

A senior engineer costs about $100 an hour, fully loaded. Security triage needs experienced developers, because junior engineers can't reliably tell real vulnerabilities from noise.

The economics of traditional SAST triage:

Security teams spend around 10 minutes per alert on triage. At $100/hr, each project wastes around 20 hours, or $2,000, investigating alerts that are 85-90% noise when using legacy SAST scanners like SonarQube, Snyk, Checkmarx, and Aikido.
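As a back-of-the-envelope check of those figures:

# Triage cost per scan, using the numbers above.
ALERTS_PER_SCAN = 120      # typical alert volume per repo
MINUTES_PER_ALERT = 10     # average triage time
HOURLY_RATE = 100          # fully loaded senior engineer cost, USD

hours_wasted = ALERTS_PER_SCAN * MINUTES_PER_ALERT / 60  # 20.0
cost_wasted = hours_wasted * HOURLY_RATE                 # 2000.0

print(f"{hours_wasted:.0f} hours, ${cost_wasted:,.0f} per scan")  # 20 hours, $2,000 per scan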

But the cost doesn't stop there. After the tool cries wolf a few times, developers stop trusting it. Real issues get dismissed as "probably another false positive." Critical vulnerabilities ship to production because the signal is lost in the noise.


Why Traditional Tools Make So Much Noise

Traditional SAST tools rely on pattern matching. They look for code structures that might indicate vulnerabilities:

// Pattern: string concatenation in SQL query + userInput
// Flagged as SQL injection

The problem? Context matters.

// This is flagged...
const query = "SELECT * FROM users WHERE id = " + userId;

// But so is this...
const sanitizedQuery = "SELECT * FROM users WHERE id = " + sanitizedUserId;

// And this...
const logMessage = "User query: " + queryName; // Not even SQL!

Pattern matching can't tell the difference. It sees string concatenation next to a word that looks like "query" and marks everything.
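To see why, here's a toy version of a pattern-matching rule in Python. The check is a deliberate oversimplification for illustration, not any real tool's rule, but it captures the same blindness to context:

import re

# Toy SAST rule: flag any line that concatenates strings near
# something that looks like a query.
looks_like_query = re.compile(r"query", re.IGNORECASE)

lines = [
    'const query = "SELECT * FROM users WHERE id = " + userId;',
    'const sanitizedQuery = "SELECT * FROM users WHERE id = " + sanitizedUserId;',
    'const logMessage = "User query: " + queryName;',
]

for line in lines:
    if looks_like_query.search(line) and "+" in line:
        print("FLAGGED:", line)  # all three lines get flagged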

The "Cascade Effect"

When a tool cries wolf constantly, teams follow a predictable cycle:

  1. First, they carefully triage every alert (high cost, slow delivery)

  2. Then they start suppressing entire rule categories (creating security blind spots)

  3. Next, they silence the alerts altogether (the tool is now useless)

  4. Finally, they abandon the tool (back to no security coverage)

We've seen this cycle play out at company after company.


Real Numbers from Real Codebases

When we scanned NocoDB with traditional SAST and semantic analysis, this is what we found:

Metric                  Traditional SAST    Semantic Analysis
Total findings          222                 5
True vulnerabilities    0                   5
False positives         208                 0
Needs review            14                  0
Signal rate             0%                  100%

Please read that again. Traditional SAST found zero of the real vulnerabilities but sent out 222 alerts. Semantic analysis found all five critical issues, including a SQL injection in the Oracle client that could have led to a complete database breach.

Here's what the side-by-side comparison showed:

  • Traditional tools: 1,183 initial findings, of which 12.9% were actionable. That's 120 alerts to surface 15 real vulnerabilities per repo.

  • Semantic analysis: 346 findings, every one of them actionable.

  • Net result: 1,030 false positives eliminated.

That's not a small improvement. That's a different kind of tool.

These figures are from the 10 repositories where we ran both approaches head-to-head. Across all 45 repositories, semantic analysis found 225 real vulnerabilities with a 90.24% maintainer acceptance rate.


How to Get Rid of the Noise

So how do you get from 87% false positives to nearly zero? Here's how we do it at Kolega.dev:

Context-Aware Filtering

Not all code paths are equal. A SQL concatenation in dead code, a test file, or behind a sanitisation function is not the same as one sitting in a production endpoint.

We look at:

  • Can this code be reached from an entry point?

  • Does this value actually receive user input?

  • Is there a sanitisation step in between?

# Traditional SAST marks both of these the same way:

# 1. Real vulnerability: direct user input
user_id = request.args.get('id')
db.execute(f"SELECT * FROM users WHERE id = {user_id}")

# 2. False positive: hardcoded value in a migration
db.execute(f"SELECT * FROM users WHERE id = {SYSTEM_USER_ID}")

Context-aware analysis knows the difference. The first gets flagged. The second gets filtered out.
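Here's a minimal sketch of that filtering logic. The Finding fields and the is_actionable helper are hypothetical illustrations, not Kolega.dev's actual API:

from dataclasses import dataclass

@dataclass
class Finding:
    reachable_from_entry_point: bool  # can a request actually hit this code?
    receives_user_input: bool         # does tainted data flow to this sink?
    sanitised_on_path: bool           # is there a sanitiser between source and sink?

def is_actionable(finding: Finding) -> bool:
    """Keep only findings that are reachable, tainted, and unsanitised."""
    return (finding.reachable_from_entry_point
            and finding.receives_user_input
            and not finding.sanitised_on_path)

# The request.args.get('id') query above: reachable, tainted, unsanitised.
print(is_actionable(Finding(True, True, False)))   # True  -> report it
# The SYSTEM_USER_ID migration: no user input ever reaches it.
print(is_actionable(Finding(True, False, False)))  # False -> filter it out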

Keeping Track of Data Flow

Pattern matching examines each line in isolation. Semantic analysis tracks data as it flows through the entire application.

// File: api/users.js
export async function getUser(req, res) {
  const userId = sanitize(req.params.id); // Sanitised here
  const user = await userService.find(userId);
  res.json(user);
}

// File: services/userService.js
export async function find(id) {
  return db.query(`SELECT * FROM users WHERE id = ${id}`); // SAST flags this
}

SAST sees string interpolation in the query and flags it. Semantic analysis follows the data from req.params.id through sanitize() to userId, then to id, and finally into the query, and concludes the value is sanitised before it ever reaches the database. No alert.
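Conceptually, the tracker attaches a taint label at the source and clears it at sanitisers; an alert only fires if the label survives to the sink. A toy version, with the source and sanitiser names assumed from the example above:

# Toy interprocedural taint tracking for the getUser -> find flow.
TAINT_SOURCES = {"req.params.id"}
SANITISERS = {"sanitize"}

def tainted_at_sink(flow: list[str]) -> bool:
    """Return True if user-controlled data reaches the sink unsanitised."""
    tainted = False
    for step in flow:
        if step in TAINT_SOURCES:
            tainted = True   # user-controlled data enters here
        elif step in SANITISERS:
            tainted = False  # sanitiser clears the taint label
    return tainted

# req.params.id -> sanitize() -> userId -> id -> db.query(...)
flow = ["req.params.id", "sanitize", "userId", "id", "db.query"]
print(tainted_at_sink(flow))  # False: the query is safe, so no alert fires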

Grouping by Root Cause

A lot of "separate" findings are really just the same problem reported over and over:

  • Finding 1: SQL injection in UserService.find(), line 42

  • Finding 2: SQL injection in UserService.find(), line 45

  • Finding 3: SQL injection in UserService.find(), line 48

  • Finding 4: SQL injection in UserService.findAll(), line 67

  • Finding 5: SQL injection in UserService.findById(), line 89

All five trace back to the same weak database utility function. One fix resolves them all. We group findings by root cause, turning 50 alerts into one actionable issue. Instead of playing whack-a-mole, your team fixes the pattern once.
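Here's a sketch of that grouping step; the finding shape and the shared sink name are hypothetical:

from collections import defaultdict

# Toy findings: five alerts that all flow through one vulnerable helper.
findings = [
    {"rule": "sql-injection", "sink": "db.rawQuery", "where": "UserService.find:42"},
    {"rule": "sql-injection", "sink": "db.rawQuery", "where": "UserService.find:45"},
    {"rule": "sql-injection", "sink": "db.rawQuery", "where": "UserService.find:48"},
    {"rule": "sql-injection", "sink": "db.rawQuery", "where": "UserService.findAll:67"},
    {"rule": "sql-injection", "sink": "db.rawQuery", "where": "UserService.findById:89"},
]

# Group by (rule, shared sink): one issue per root cause, not per call site.
groups = defaultdict(list)
for f in findings:
    groups[(f["rule"], f["sink"])].append(f["where"])

for (rule, sink), sites in groups.items():
    print(f"1 issue: {rule} via {sink} ({len(sites)} call sites)")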


Don't Let Noise Drown You

Across 45 repositories, we eliminated more than 1,000 false positives while surfacing 225 real vulnerabilities. Project maintainers accepted our findings 90.24% of the time.

That's the difference between security tools that help and those that get in the way.


This analysis is based on our ongoing security research across more than 45 open-source repositories. "Why We Found 225 Vulnerabilities That SAST Missed in 45 Open Source Projects" has the full methodology and results.

Simple three-click setup.

Deploy Kolega.dev.

Find and fix your technical debt.