Security Engine Too Many Alerts
The Engine Too Many Alerts issue appears when your Security Engine generates an abnormally high volume of alerts. This usually indicates a misconfigured scenario or an ongoing large-scale attack.
What Triggers This Issue
- Trigger condition: More than 250,000 alerts in 6 hours
- Criticality: ⚠️ High
- Impact: May indicate misconfiguration, performance issues, or a real large-scale attack
Common Root Causes
- Misconfigured or overly sensitive scenario: A scenario with thresholds set too low or matching too broadly can trigger excessive alerts.
- Parser creating duplicate events: A parser issue causing the same log line to generate multiple events.
- Actual large-scale attack: A genuine distributed attack (DDoS, brute force campaign) targeting your infrastructure.
- Custom scenario missing blackhole parameter: A custom scenario without a proper blackhole setting may generate alert spam.
Diagnosis & Resolution
Misconfigured or Overly Sensitive Scenario
CrowdSec's default scenarios are rarely the cause. This section mostly applies to custom or third-party scenarios, or to scenarios you have modified.
If you only run default scenarios, still investigate, but the issue is more likely upstream (acquisition, profiles, or logging).
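A quick upstream check is to compare how many log lines each datasource feeds into the engine; an unexpectedly high line count points at acquisition rather than at a scenario:
# Show per-datasource line counts
sudo cscli metrics show acquisition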
🔎 Identify problematic scenarios
- Identify which scenarios are generating the most alerts:
sudo cscli alerts list -l 100
For Docker or Kubernetes, run the equivalent:
# Docker
docker exec crowdsec cscli alerts list -l 100
# Kubernetes
kubectl exec -n crowdsec -it $(kubectl get pods -n crowdsec -l type=lapi -o name) -- cscli alerts list -l 100
- Look for patterns:
- Is one scenario dominating the alert count?
- Are the same IPs repeatedly triggering alerts?
- Are alerts legitimate threats or false positives?
- Check metrics for scenario overflow:
sudo cscli metrics show scenarios
For Docker or Kubernetes, run the equivalent:
# Docker
docker exec crowdsec cscli metrics show scenarios
# Kubernetes
kubectl exec -n crowdsec -it $(kubectl get pods -n crowdsec -l type=lapi -o name) -- cscli metrics show scenarios
Look for scenarios with extremely high "Overflow" counts or "Current count" numbers.
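If one scenario appears to dominate, you can confirm it by aggregating recent alerts by scenario name. A minimal sketch using cscli's global JSON output flag and jq (the scenario field name matches the alert JSON; adjust the limit to your volume):
# Count recent alerts per scenario (requires jq)
sudo cscli alerts list -l 1000 -o json | jq -r '.[].scenario' | sort | uniq -c | sort -rn | head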
🛠️ Tune or disable the scenario
If you identify a problematic scenario, try the following:
Tuning the scenario threshold
If the scenario is triggering too easily, you can create a custom version with adjusted thresholds. See the scenario documentation for details on customizing scenarios.
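As an illustration only, a local copy of a scenario might raise the bucket capacity so short bursts no longer overflow. All names and values below are placeholders, not recommended settings:
type: leaky
name: my-org/my-scenario-tuned   # placeholder name for your local copy
description: "Local copy with a higher threshold"
filter: "evt.Meta.service == 'my-service'"
capacity: 20       # raised threshold: more events tolerated before overflow
leakspeed: "10s"   # events drain from the bucket at this rate
blackhole: 5m
labels:
  remediation: true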
Disabling the scenario temporarily
There are several ways to do this; two common options, both sketched below, are:
- Removing the scenario
- Whitelisting its overflows in a postoverflow
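To remove the scenario (it can be reinstalled from the Hub later; author/scenario-name is a placeholder):
# Remove the noisy scenario and reload
sudo cscli scenarios remove author/scenario-name
sudo systemctl reload crowdsec
Alternatively, a postoverflow whitelist drops the overflow before it becomes an alert. A minimal sketch, assuming the file is placed under /etc/crowdsec/postoverflows/s01-whitelist/ (the name and reason are illustrative):
name: my-org/silence-noisy-scenario
description: "Drop overflows from a noisy scenario while tuning it"
whitelist:
  reason: "noisy scenario under investigation"
  expression:
    - 'evt.Overflow.Alert.GetScenario() == "author/scenario-name"'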
Parser Creating Duplicate Events
🔎 Test parsing with sample log lines
Use cscli explain to test parsing:
sudo cscli explain --log "<sample log line>" --type <type>
Check if the log line generates multiple events incorrectly.
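For instance, to check how a single SSH failure line is parsed (the log line and type are illustrative; use a line from your own logs):
sudo cscli explain --log "Dec  1 12:00:00 host sshd[1234]: Failed password for root from 192.0.2.10 port 55555 ssh2" --type syslog
If the output shows the same line producing several parsed events or feeding the same bucket multiple times, the parser chain is the likely culprit.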
🛠️ Review parser configuration or report issue
Review parser configuration or report the issue to the CrowdSec Hub.
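Parser-level metrics can help confirm duplication before you file a report: if a parser's hit count greatly exceeds the number of lines read at acquisition, something is emitting extra events:
# Compare parser hits against acquisition line counts
sudo cscli metrics show parsers
sudo cscli metrics show acquisition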
Legitimate Large-Scale Attack
🔎 Review alert patterns to confirm genuine attack
Review alert patterns to confirm a genuine attack:
# On host
sudo cscli alerts list -l 100
# Docker
docker exec crowdsec cscli alerts list -l 100
# Kubernetes
kubectl exec -n crowdsec -it $(kubectl get pods -n crowdsec -l type=lapi -o name) -- cscli alerts list -l 100
Look for:
- Multiple different source IPs targeting the same services
- Realistic attack patterns (brute force, scanning, etc.)
- Alerts matching known attack signatures
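To drill into a suspected campaign, you can filter the alert list and inspect individual alerts (crowdsecurity/ssh-bf and the alert ID are examples; substitute your own):
# Alerts from one scenario over the last 6 hours
sudo cscli alerts list --scenario crowdsecurity/ssh-bf --since 6h -l 50
# Full detail for a single alert, including the offending events
sudo cscli alerts inspect -d <alert_id>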
🛠️ Verify remediation is blocking attackers
If you're experiencing a real attack:
- Verify your remediation components are working to block attackers
- Check that decisions are being applied:
sudo cscli decisions list
- Consider increasing ban durations in your profiles if attackers keep returning (see the sketch after this list)
- Subscribe to Community Blocklist for proactive blocking of known malicious IPs
- Monitor your infrastructure for the attack's impact
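For example, the stock default_ip_remediation profile ships with a 4h ban; raising the duration keeps repeat attackers out longer. A sketch of the relevant part of /etc/crowdsec/profiles.yaml (keep the rest of your profile as-is):
name: default_ip_remediation
filters:
  - Alert.Remediation == true && Alert.GetScope() == "Ip"
decisions:
  - type: ban
    duration: 24h   # raised from the default 4h
on_success: break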
Custom Scenario Missing Blackhole Parameter
🔎 Check scenario bucket configuration
If you have custom scenarios, verify they have a proper blackhole setting so a bucket that overflows is silenced for a while instead of alerting again on every new event:
# Check custom scenarios
sudo cscli scenarios list | grep -i local
# Inspect scenario configuration
sudo cat /etc/crowdsec/scenarios/my-custom-scenario.yaml
Look for the blackhole parameter in the scenario configuration. This parameter silences a bucket for the given duration after it overflows, so the same bucket cannot immediately trigger again.
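To quickly list local scenario files that never define the parameter (assuming your custom scenarios live in /etc/crowdsec/scenarios/, as above):
# grep -L prints files that contain no match
grep -L 'blackhole' /etc/crowdsec/scenarios/*.yaml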
🛠️ Add blackhole parameter to your custom scenarios
If your custom scenario is missing the blackhole parameter, add it to stop repeated overflows from the same bucket:
type: leaky
name: my-org/my-custom-scenario
description: "Custom scenario description"
filter: "evt.Meta.service == 'my-service'"
leakspeed: "10s"
capacity: 5
blackhole: 5m # Add this: silence the bucket for 5 minutes after each overflow
labels:
  remediation: true
The blackhole parameter silences a bucket for the configured duration after it overflows, discarding repeat overflows from the same source. Without it, the same bucket can trigger again as soon as new events arrive, generating a stream of duplicate alerts.
Verify Resolution
After making changes:
- Restart or reload CrowdSec:
sudo systemctl restart crowdsec
- Monitor alert generation for 30 minutes:
watch -n 30 'cscli alerts list | head -20'
- Check metrics:
sudo cscli metrics show scenarios
- Verify alert volume has returned to normal levels
Performance Impact
Excessive alerts can impact performance:
- High memory usage: Each active scenario bucket consumes memory
- Database growth: Large numbers of alerts increase database size
- API latency: Bouncers may experience slower decision pulls
If performance is degraded, consider:
- Cleaning old alerts (after investigation):
sudo cscli alerts delete --all
- Reviewing database maintenance: see the Database documentation
Related Issues
- Security Engine Troubleshooting - General Security Engine issues
- Log Processor No Logs Parsed - If parsing is creating unusual events
Getting Help
If you need assistance analyzing alert patterns, reach out through the CrowdSec community channels.