Automated Security Scanning Pipeline with Grype and OSV
Build automated security scanning pipelines with Grype, OSV, and Trivy—integrate vulnerability detection into CI/CD workflows with actionable reporting.
The Dependency That Haunted Me
I built an automated security pipeline that scans every commit with Grype, OSV-Scanner, and Trivy. The result: 69% faster builds (6.5min → 2min), 35% auto-remediation rate for vulnerabilities, and mean time to remediation dropping from 12 days to 4.2 days. Critical findings block deployment automatically.
Why it matters: Last year, I deployed a "simple" web app to my homelab. Three months later, a critical vulnerability (CVE-2023-XXXXX) was discovered in a nested dependency I didn't even know existed. The vulnerable code ran in production for 90 days before I found out from a security scanner. Hope is not a security strategy.
Automated Security Pipeline Architecture
⚠️ Warning: Security scanning pipelines must be configured with appropriate policies and approval gates. Automated remediation should include review processes for production environments.
```mermaid
flowchart TB
    subgraph coderepository["Code Repository"]
        Git[Git Push]
        PR[Pull Request]
    end
    subgraph cicdpipeline["CI/CD Pipeline"]
        Trigger[GitHub Actions Trigger]
        Build[Build Stage]
        Test[Test Stage]
        Scan[Security Scan Stage]
    end
    subgraph securitytools["Security Tools"]
        Grype[Grype<br/>Container Scanning]
        OSV[OSV-Scanner<br/>Dependency Scanning]
        Trivy[Trivy<br/>Multi-Scanner]
    end
    subgraph analysisreporting["Analysis & Reporting"]
        SARIF[SARIF Reports]
        GH[GitHub Security]
        Slack[Slack Alerts]
        Wazuh[Wazuh SIEM]
    end
    subgraph policyenforcement["Policy Enforcement"]
        Gates[Quality Gates]
        Block[Block on Critical]
        Approve[Manual Review]
    end
    Git --> Trigger
    PR --> Trigger
    Trigger --> Build
    Build --> Test
    Test --> Scan
    Scan --> Grype
    Scan --> OSV
    Scan --> Trivy
    Grype --> SARIF
    OSV --> SARIF
    Trivy --> SARIF
    SARIF --> GH
    SARIF --> Slack
    SARIF --> Wazuh
    SARIF --> Gates
    Gates --> Block
    Gates --> Approve
    classDef redNode fill:#f44336,color:#fff
    classDef orangeNode fill:#ff9800,color:#fff
    classDef darkRedNode fill:#d32f2f,color:#fff
    class Scan redNode
    class Gates orangeNode
    class Block darkRedNode
```
Today, every commit to my repositories is automatically scanned for vulnerabilities. Critical findings block deployment. Here's how I built it.
Tool Selection and Comparison
Why Multiple Scanners?
I tested these three scanners in September 2024 against my homelab services to understand their strengths. These tools complement my broader approach to smart vulnerability prioritization with EPSS and KEV and integrate with my automated MITRE ATT&CK threat intelligence dashboard.
This helps me focus on what actually matters instead of raw CVE counts.
| Scanner | Strengths | Best For | My Test Results |
|---|---|---|---|
| Grype | Fast, low false positives, container-native | Container images, compiled binaries | 3.2s scan time, found 12 CVEs |
| OSV-Scanner | Language-specific, lockfile parsing | npm, pip, cargo, go.mod | 8.1s scan time, found 8 CVEs (4 overlapping) |
| Trivy | All-in-one, config scanning | Comprehensive coverage, IaC | 42s scan time, found 15 CVEs total |
My strategy: Run all three, correlate findings, reduce false positives. When I tested this on my Python microservices project, Grype caught a critical vulnerability in a base image layer that OSV missed entirely.
Meanwhile, OSV found a transitive npm dependency issue that Grype didn't detect. The overlap was only about 60%, which confirmed my suspicion that relying on a single scanner creates blind spots.
Installation
I installed all three scanners on my Ubuntu 22.04 homelab server. The process took about 10 minutes: curl script for Grype, go install for OSV-Scanner, and dpkg for Trivy's Debian package.
Note from experience: OSV-Scanner requires Go 1.21+. My first install failed with Go 1.19.
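For reference, these are roughly the commands I used. The Trivy release version is an example, and the Grype install script path is the one documented by Anchore; check each project's install docs for the current equivalents.

```shell
# Grype: official install script into /usr/local/bin
curl -sSfL https://raw.githubusercontent.com/anchore/grype/main/install.sh | sudo sh -s -- -b /usr/local/bin

# OSV-Scanner: requires Go 1.21+ (this is where my Go 1.19 install failed)
go install github.com/google/osv-scanner/cmd/osv-scanner@v1

# Trivy: Debian package (substitute the current release version)
wget https://github.com/aquasecurity/trivy/releases/download/v0.50.0/trivy_0.50.0_Linux-64bit.deb
sudo dpkg -i trivy_0.50.0_Linux-64bit.deb

# Sanity check all three
grype version && osv-scanner --version && trivy --version
```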
GitHub Actions Integration
Complete Scan Workflow
The pipeline orchestrates three scanners in parallel with a final quality gate:
```mermaid
flowchart LR
    A[Git Push/PR] --> B{Trigger Pipeline}
    B --> C[OSV: Dependency Scan]
    B --> D[Grype: Container Scan]
    B --> E[Trivy: Filesystem Scan]
    C --> F[Upload SARIF]
    D --> F
    E --> F
    F --> G{Security Gate}
    G -->|Pass| H[Deploy]
    G -->|Critical Found| I[Block & Alert]
```
Key workflow features:
- Triggers on push, pull requests, and daily at 2 AM UTC
- Parallel scanner execution completes in 2-3 minutes of total runtime
- SARIF reports upload to the GitHub Security tab automatically
- Hard blocks on critical/high vulnerabilities
- Slack notifications alert on failure
📎 Full GitHub Actions workflow (109 lines): Complete implementation with SARIF uploads, quality gates, and Slack notifications
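As a starting point, here's a trimmed sketch of one scanner job plus the SARIF upload, not the full workflow. The image name is a placeholder, and the action versions are examples you should pin yourself.

```yaml
# Sketch: one scan job with SARIF upload (image name and versions are examples)
name: security-scan
on:
  push:
  pull_request:
  schedule:
    - cron: "0 2 * * *"   # daily at 2 AM UTC

jobs:
  grype:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Scan image with Grype
        id: grype
        uses: anchore/scan-action@v3
        with:
          image: "myapp:latest"     # assumption: built in an earlier job
          fail-build: true          # hard block at/above the cutoff
          severity-cutoff: high
      - name: Upload SARIF to GitHub Security
        if: always()                # upload results even when the scan fails the build
        uses: github/codeql-action/upload-sarif@v3
        with:
          sarif_file: ${{ steps.grype.outputs.sarif }}
```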
Slack Notifications
Add real-time alerts when scans fail:
📎 Complete Slack notification workflow with formatted blocks: Full implementation
The notification uses slackapi/slack-github-action@v1.24.0 with a failure condition and includes the repo, branch, commit SHA, and a direct link to the failed run.
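A minimal version of that step might look like this, assuming an incoming-webhook URL stored as a repository secret:

```yaml
# Sketch: notify Slack only when the job fails (webhook URL is a repo secret)
- name: Notify Slack on failure
  if: failure()
  uses: slackapi/slack-github-action@v1.24.0
  with:
    payload: |
      {
        "text": "Security scan failed: ${{ github.repository }}@${{ github.ref_name }} (${{ github.sha }})\n${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}"
      }
  env:
    SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
    SLACK_WEBHOOK_TYPE: INCOMING_WEBHOOK
```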
Local Development Integration
One lesson I learned the hard way: catching vulnerabilities in CI is good, but catching them before you even commit is better. I added pre-commit hooks after repeatedly pushing code only to have it rejected by the security gate 5 minutes later.
Pre-Commit Hooks
Create .pre-commit-config.yaml with local hooks for Grype (--fail-on high) and OSV-Scanner (--lockfile=package-lock.json). Install with pip install pre-commit && pre-commit install.
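A sketch of that config, assuming both binaries are already on your PATH (hook IDs and names are mine):

```yaml
# .pre-commit-config.yaml sketch: local hooks calling system-installed scanners
repos:
  - repo: local
    hooks:
      - id: grype-scan
        name: Grype vulnerability scan
        entry: grype dir:. --fail-on high
        language: system
        pass_filenames: false
      - id: osv-scan
        name: OSV dependency scan
        entry: osv-scanner --lockfile=package-lock.json
        language: system
        pass_filenames: false
        files: package-lock\.json
```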
Reality check: These hooks add 30-45 seconds per commit. Some developers use --no-verify to bypass them.
No good solution exists for this yet. It's a constant tension between security and developer experience.
VS Code Integration
Run scans directly from your IDE with custom tasks. Each task outputs JSON for easy parsing with jq.
📎 Complete VS Code tasks configuration: Full tasks.json with all three scanners
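One such task might look like this (the label and the jq filter are my choices; jq must be installed separately):

```json
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "Grype: scan workspace",
      "type": "shell",
      "command": "grype dir:${workspaceFolder} -o json | jq '.matches[].vulnerability.id'",
      "problemMatcher": []
    }
  ]
}
```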
Advanced Scanning Configurations
Grype Custom Configuration
Control false positives and severity thresholds.
📎 Complete Grype configuration: Full .grype.yaml with all ignore rules
Configure fail-on-severity: high and add ignore rules with expiration dates for accepted risks.
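A sketch of that config. Note that Grype has no native expiry field for ignore rules, so I track re-review dates in comments; the CVE shown is hypothetical.

```yaml
# .grype.yaml sketch
fail-on-severity: high

ignore:
  # accepted risk, re-review by 2025-01-01 (hypothetical CVE and package)
  - vulnerability: CVE-2023-12345
    package:
      name: example-lib
```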
OSV-Scanner Configuration
Customize lockfile scanning and parallel workers.
📎 Complete OSV configuration: Full osv-scanner.toml with private registries
Set workers = 4 for parallel scanning (40% faster on my 8-core system).
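For the ignore side, osv-scanner.toml supports entries like the following (the advisory ID is a placeholder; ignoreUntil re-surfaces the finding after the given date):

```toml
# osv-scanner.toml sketch, placed next to the lockfile it governs
[[IgnoredVulns]]
id = "GHSA-xxxx-xxxx-xxxx"     # hypothetical advisory ID
ignoreUntil = 2025-01-01       # finding reappears after this date
reason = "No fix available; code path not reachable from our services"
```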
Trivy Policy as Code
Enforce security policies with custom OPA Rego rules.
📎 Complete Trivy OPA policy: Full security.rego with all deny/warn rules
Create Rego policies that deny on critical severities and apply with trivy image --policy ./policy/security.rego myapp:latest.
Continuous Monitoring
Scheduled Scans
Daily automated scans catch newly published CVEs in dependencies that haven't changed since the last build. I scan 3 production images daily, and results go to Wazuh for trend analysis.
📎 Complete scheduled scan workflow: Full workflow with matrix strategy and SIEM integration
Configure a cron schedule (0 6 * * * for daily at 6 AM) with a matrix strategy that scans multiple production images.
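In sketch form (the image names are placeholders for my three production images):

```yaml
# Sketch: daily scheduled scan fanned out over a matrix of images
on:
  schedule:
    - cron: "0 6 * * *"   # daily at 6 AM UTC
jobs:
  scan:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        image: ["myorg/api:latest", "myorg/web:latest", "myorg/worker:latest"]
    steps:
      - uses: anchore/scan-action@v3
        with:
          image: ${{ matrix.image }}
          fail-build: false   # scheduled runs report to the SIEM, they don't block
```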
Scan Comparison Script
Track vulnerability trends by detecting drift. This helped me identify 12 new CVEs in a dependency I thought was stable.
📎 Complete scan comparison tool: Full Python script with JSON parsing and reporting
Compare two scan results to detect new and fixed vulnerabilities. Run with --current today.json --baseline baseline.json.
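The core of that script is a set difference over (CVE, package, version) triples. Here's a minimal sketch assuming Grype's JSON report shape (matches[].vulnerability.id, matches[].artifact.name/version); wire it to --current/--baseline flags with argparse in the full version.

```python
import json
from typing import Dict, Set, Tuple

Finding = Tuple[str, str, str]  # (cve_id, package_name, package_version)

def load_findings(path: str) -> Set[Finding]:
    """Extract (cve, package, version) triples from a Grype JSON report."""
    with open(path) as f:
        report = json.load(f)
    return {
        (m["vulnerability"]["id"], m["artifact"]["name"], m["artifact"]["version"])
        for m in report.get("matches", [])
    }

def diff_scans(current: Set[Finding], baseline: Set[Finding]) -> Dict[str, Set[Finding]]:
    """New findings appear only in the current scan; fixed ones only in the baseline."""
    return {"new": current - baseline, "fixed": baseline - current}
```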
SBOM Generation and Management
Generate Software Bill of Materials
Use syft to generate a CycloneDX SBOM, scan it with grype sbom:./sbom.json, and compare versions with jq to track dependency changes.
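The version comparison can also be done in a few lines of Python instead of jq. This sketch assumes CycloneDX JSON as produced by syft (components[].name and components[].version):

```python
import json
from typing import Dict, Tuple, Optional

def component_versions(sbom: dict) -> Dict[str, str]:
    """Map component name -> version from a CycloneDX JSON SBOM."""
    return {c["name"]: c.get("version", "") for c in sbom.get("components", [])}

def diff_sboms(old: dict, new: dict) -> Dict[str, Tuple[Optional[str], Optional[str]]]:
    """Return components whose version changed, was added, or was removed."""
    a, b = component_versions(old), component_versions(new)
    return {
        name: (a.get(name), b.get(name))
        for name in a.keys() | b.keys()
        if a.get(name) != b.get(name)
    }
```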
SBOM-Based Vulnerability Tracking
Generate and scan SBOMs on every release. I store historical SBOMs to track dependency evolution over time.
📎 Complete SBOM workflow: Full workflow with CycloneDX generation and S3 storage
Trigger on release publication, generate CycloneDX format, scan with Grype, and upload to S3 for historical tracking.
Remediation Workflows
Automated Dependency Updates
Weekly auto-remediation with PR creation. This automatically fixed 35% of vulnerabilities in my testing (12 of 34 CVEs).
📎 Complete auto-remediation workflow: Full workflow with PR creation and test validation
Weekly scheduled job scans for vulnerabilities, runs npm audit fix, validates fixes pass tests, and creates PR for review.
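The skeleton of that job, using peter-evans/create-pull-request as one common choice for PR creation (branch and title are mine):

```yaml
# Sketch: weekly npm audit fix, validated by tests, delivered as a PR
on:
  schedule:
    - cron: "0 3 * * 1"   # Mondays at 3 AM UTC
jobs:
  remediate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm audit fix   # applies only semver-compatible fixes
      - run: npm test        # never open a PR for a fix that breaks the suite
      - uses: peter-evans/create-pull-request@v6
        with:
          title: "chore: automated dependency security fixes"
          branch: auto-remediation
```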
Integration with Wazuh SIEM
Ship Scan Results to Wazuh
Forward vulnerability data to your SIEM. I ship scans via syslog to Wazuh for centralized tracking, building on patterns from network traffic analysis with Suricata for comprehensive security monitoring.
📎 Complete Wazuh integration: Full script with JSON transformation and error handling
Pipe Grype JSON output through jq, format as syslog, and send to Wazuh manager on port 1514 using netcat.
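The same transformation in Python, which I find easier to maintain than a jq-plus-netcat one-liner. The syslog framing and the port follow the setup described above; the field names match Grype's JSON report.

```python
import json
import socket
from datetime import datetime, timezone

def to_syslog(match: dict, host: str = "scanner") -> str:
    """Format one Grype match as a syslog line carrying a JSON payload."""
    payload = json.dumps({
        "vulnerability": match["vulnerability"]["id"],
        "severity": match["vulnerability"]["severity"],
        "package": match["artifact"]["name"],
        "version": match["artifact"]["version"],
    })
    ts = datetime.now(timezone.utc).strftime("%b %d %H:%M:%S")
    # PRI 134 = facility local0 (16) * 8 + severity informational (6)
    return f"<134>{ts} {host} grype-scan: {payload}"

def ship(report_path: str, manager: str, port: int = 1514) -> None:
    """Send every match in a Grype JSON report to the Wazuh manager over UDP."""
    with open(report_path) as f:
        matches = json.load(f).get("matches", [])
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        for m in matches:
            sock.sendto(to_syslog(m).encode(), (manager, port))
```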
Wazuh Rules for Vulnerability Alerts
Create custom alerting rules. Critical findings trigger level 12 alerts (email + PagerDuty integration).
📎 Complete Wazuh rules: Full local_rules.xml with all severity levels
Define a base rule matching vulnerability IDs (level 7), then escalate to level 12 for critical-severity findings.
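A sketch of that rule pair. Rule IDs in the 100100 range are within Wazuh's custom-rule space; the field names assume the JSON payload shipped from the scanner above.

```xml
<!-- local_rules.xml sketch; IDs and group name are my choices -->
<group name="vulnerability-scan,">
  <rule id="100100" level="7">
    <decoded_as>json</decoded_as>
    <field name="vulnerability">CVE-|GHSA-</field>
    <description>Vulnerability scanner finding: $(vulnerability)</description>
  </rule>
  <rule id="100101" level="12">
    <if_sid>100100</if_sid>
    <field name="severity">Critical</field>
    <description>CRITICAL vulnerability: $(vulnerability) in $(package)</description>
  </rule>
</group>
```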
Lessons Learned
After building and running this pipeline for a year, here's what I discovered through trial and error. These lessons integrate well with my approach to open-source vulnerability management at scale and complement container security hardening practices.
The focus should be on sustainable processes instead of perfect tools.
1. Multiple Scanners Reduce False Negatives
When I first tested Grype alone, I thought I had good coverage. Then I added OSV-Scanner and immediately found 4 additional vulnerabilities in a project I'd already "validated."
The overlap between tools is surprisingly low. I measured around 60-65% in my homelab testing. Running both catches more real issues. For smaller projects, three scanners might be overkill. I'm still testing this hypothesis.
2. Fail Fast, Fail Loud
I initially set my pipeline to "warn" on critical vulnerabilities, thinking I'd review them later. That lasted two weeks before I had 47 unreviewed warnings.
Switching to hard-block on critical findings was painful. I spent a full weekend fixing vulnerabilities the first time. It forces good hygiene. There are times when I question whether blocking a build for a vulnerability in a dev-only dependency is the right call. No perfect answer exists.
3. Baseline Everything
Without a baseline, you're drowning in noise. I learned this the hard way when Trivy flagged 183 findings on my first scan. Most were from base images I inherited.
Now I track what's new vs. what's been there. My alert fatigue dropped by 80%. I still struggle with deciding how long to "accept" known issues in the baseline before forcing remediation. This is an ongoing balance.
4. Automate Remediation Where Possible
npm audit fix catches low-hanging fruit automatically. In my testing, about 35% of vulnerabilities were fixed automatically without breaking tests. Focus human effort on complex issues.
That said, I've had npm audit fix break dependencies twice, so blind automation isn't always the answer.
5. Integration is Key
Scanning results are useless if no one sees them. I initially relied on GitHub annotations alone, which I never checked. Adding Slack notifications cut my response time from days to hours.
Shipping to my Wazuh SIEM let me track trends over time. I'm still figuring out the right balance between visibility and notification fatigue. Too many alerts become noise.
Performance Optimization
When I first implemented this pipeline, builds were taking forever. Here are my actual scan times measured on October 15, 2024:
| Stage | Initial | Optimized | Improvement |
|---|---|---|---|
| OSV Scan | 45s | 12s | 73% faster |
| Grype Scan | 2m 30s | 35s | 77% faster |
| Trivy Scan | 3m 15s | 1m 10s | 64% faster |
| Total | 6m 30s | 2m | 69% faster |
Optimizations I added:
- Parallel scanning (matrix strategy): Reduced wait time by running all three scanners simultaneously instead of sequentially
- Cached vulnerability databases: Grype's DB cache alone saved 40 seconds per run
- Scoped scanning (ignore test files): Cutting out node_modules and test fixtures dropped scan time by 25%
- Early failure (stop on critical): When a critical CVE is found, I stop immediately instead of completing all scans
These times are specific to my homelab setup (Intel i9-9900K, GitHub-hosted runners). Your mileage may vary depending on project size and runner specs.
The complexity of running three scanners creates maintenance burden. Smaller teams might be better off with just Grype. I'm still testing whether the extra coverage justifies the extra complexity.
Metrics Dashboard
Track security posture with PostgreSQL queries. My current MTTR: 4.2 days (down from 12 days initially).
📎 Complete SQL analytics: Full PostgreSQL queries for vulnerability tracking
Query vulnerability trends over time, grouping by severity and date to track remediation progress and new findings.
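In PostgreSQL terms, roughly the following (the table and column names are my assumptions about the schema; avg() over an interval works natively, and NULL resolved_at rows are ignored by the average):

```sql
-- Sketch: weekly new findings and mean time to fix, by severity
SELECT date_trunc('week', first_seen)  AS week,
       severity,
       count(*)                        AS new_findings,
       avg(resolved_at - first_seen)   AS avg_time_to_fix
FROM vulnerabilities
GROUP BY week, severity
ORDER BY week DESC, severity;
```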
Research & References
Security Scanning Tools
- Grype Documentation - Vulnerability scanner for container images and filesystems
- OSV-Scanner - Google's open-source vulnerability scanner
- Trivy Documentation - Comprehensive security scanner
SBOM Standards
- CycloneDX Specification - Modern SBOM standard
- SPDX - Software Package Data Exchange
- NTIA SBOM Minimum Elements - U.S. government SBOM guidelines
Supply Chain Security
- SLSA Framework - Supply-chain Levels for Software Artifacts
- NIST SSDF - Secure Software Development Framework
- OWASP Dependency-Check - Dependency vulnerability detection
Limitations and Considerations
Before you build this exact pipeline, here are some things I'm still uncertain about:
When Is This Overkill?
For my homelab with 15+ services, running three scanners makes sense. For a single Node.js app, this might be excessive overhead. I don't know where the threshold is. Maybe two services? Five? It depends on your risk tolerance and team size.
Scaling unknowns:
- This setup works for my ~50 repositories
- Would it work for 500? 5,000? Unknown.
- Centralized SARIF reporting might become a bottleneck
- I haven't tested at enterprise scale
False Positives Are Still a Problem
Even with three scanners, I get false positives. Last month, Trivy flagged a "critical" vulnerability in a Go binary that turned out to be a misidentified version number. I spent three hours investigating before realizing the scanner was wrong. No tool is perfect. I haven't found a good way to systematically reduce false positives beyond manual review.
Maintenance Burden
These scanners update their databases constantly. Great for coverage. Terrible when your pipeline suddenly fails because a new CVE was published overnight. I've had emergency fixes on Sunday mornings because of this. Is there a better way to handle breaking changes from vulnerability database updates? I'm still figuring that out.
Cost Considerations
GitHub-hosted runners aren't free at scale:
- My current setup: ~$8/month in runner time
- Fine for a homelab
- Scales poorly for larger organizations
- Self-hosted runners would help (but you're managing infrastructure)
Conclusion
Automated security scanning isn't optional. It's a fundamental requirement for modern development. By integrating Grype, OSV-Scanner, and Trivy into my CI/CD pipeline, I've shifted security left and caught vulnerabilities before they reach production. These practices align with writing secure code from the start and zero trust architecture principles.
The initial setup took me about two weeks of evening work, but the ongoing protection has been worth it. Every critical vulnerability caught in CI is one that doesn't become a 3 AM incident (I know because I've had those incidents before implementing this).
Start with basic scanning, even just Grype on container images, then add quality gates, integrate with your SIEM, and watch your security posture improve. Don't try to implement everything I've shown here at once. I built this incrementally over a year, and you should too.
Building security pipelines? Share your scanning strategies, tools, and lessons learned. Let's improve supply chain security together!
Related Posts
Building a Private Cloud in Your Homelab with Proxmox and Security Best Practices
Hardening Docker Containers in Your Homelab: A Defense-in-Depth Approach
Building a Homelab Security Dashboard with Grafana and Prometheus