AI Code Guardian

Catch security vulnerabilities before you commit

cargo install ai-code-guardian
10+  Vulnerability Types
10x  Faster than Node.js alternatives
100% Local & Private

Powerful Features

⚡

Lightning Fast

Written in Rust. Scans entire codebases in seconds. 10x faster than Node.js alternatives.

🎯

Interactive TUI

Navigate issues with arrow keys, mark false positives, and view detailed information in a beautiful terminal UI.

👀

Watch Mode

Auto-scan on file changes during development. Catch issues as you code.

🔧

Auto-Fix Suggestions

Every vulnerability includes actionable fix suggestions. Don't just find issues, solve them.

📊

Risk Scoring

Numerical risk scores (0-100) for every vulnerability. Prioritize what matters most.
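As an illustration, a score-to-severity mapping might look like the Python sketch below. The 70/40 cutoffs are hypothetical, inferred from the sample output (85 reports as HIGH, 50 as MEDIUM); the scanner's actual thresholds may differ.

```python
def severity(risk: int) -> str:
    """Map a 0-100 risk score to a severity bucket.

    The thresholds are illustrative guesses, not the scanner's
    real configuration.
    """
    if not 0 <= risk <= 100:
        raise ValueError("risk score must be between 0 and 100")
    if risk >= 70:
        return "HIGH"
    if risk >= 40:
        return "MEDIUM"
    return "LOW"
```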

🎨

Custom Rules

Define your own security patterns with .guardian.rules.json. Extend the scanner to fit your needs.
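For illustration, a minimal rule file might look like this. The field names below are hypothetical, so consult the tool's documentation for the actual schema:

```json
{
  "rules": [
    {
      "id": "custom-debug-secret",
      "pattern": "DEBUG_SECRET\\s*=",
      "severity": "high",
      "risk": 80,
      "message": "Debug secret assigned in source; load it from the environment instead."
    }
  ]
}
```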

📦

Dependency Checking

Scan requirements.txt, package.json, and Cargo.toml for known CVEs using the OSV.dev API.
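The OSV.dev lookup behind this feature is a simple JSON POST. Here is a minimal standard-library Python sketch of the same check; the endpoint and request shape follow OSV.dev's public /v1/query API, but this is an independent illustration, not the scanner's actual code:

```python
import json
import urllib.request

def build_osv_query(name: str, version: str, ecosystem: str = "PyPI") -> dict:
    """Build the request body for OSV.dev's /v1/query endpoint."""
    return {"version": version, "package": {"name": name, "ecosystem": ecosystem}}

def check_package(name: str, version: str, ecosystem: str = "PyPI") -> list:
    """Return the list of known vulnerabilities for one package version."""
    body = json.dumps(build_osv_query(name, version, ecosystem)).encode()
    req = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # OSV returns {"vulns": [...]} or {} when nothing is known.
        return json.loads(resp.read()).get("vulns", [])
```

Given network access, `check_package("litellm", "1.82.8")` should return any advisories published for that version.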

🔀

Git Integration

Scan only changed or staged files. Perfect for CI/CD pipelines and pre-commit hooks.
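A pre-commit hook wiring this up might look like the sketch below. The --staged flag is an assumption about the CLI's interface; verify the real flag names with the tool's help output before using it:

```shell
#!/bin/sh
# .git/hooks/pre-commit - block commits that introduce security issues.
# NOTE: --staged is a hypothetical flag; check `ai-code-guardian --help`.
if ! ai-code-guardian --staged; then
    echo "ai-code-guardian found issues in staged files; commit aborted." >&2
    exit 1
fi
```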

🚫

.guardianignore

Exclude files and patterns from scanning. Full control over what gets scanned.
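A typical .guardianignore might look like the following. The gitignore-style glob syntax is an assumption about the format:

```
# Dependencies and build output
node_modules/
dist/
target/

# Minified bundles and test fixtures
*.min.js
tests/fixtures/
```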

🔒

100% Local

No data leaves your machine. Complete privacy. No API calls, no telemetry.

See It In Action

Basic Scan
Interactive Mode
Watch Mode
🛡️ AI Code Guardian - Security Scan

Scanning: ./src

❌ HIGH (Risk: 85): Hardcoded API Key
   File: api.js:12
   Code: const API_KEY = "sk-1234567890abcdef"
   Risk: API key found in source code. Store in environment variables instead.
   Fix: Use process.env.API_KEY or import from .env file

❌ HIGH (Risk: 85): SQL Injection Risk
   File: db.js:45
   Code: query = "SELECT * FROM users WHERE id = " + userId
   Risk: String concatenation in SQL query. Use parameterized queries.
   Fix: Use parameterized queries: db.query('SELECT * FROM users WHERE id = ?', [userId])

❌ MEDIUM (Risk: 50): Insecure HTTP Connection
   File: api.js:8
   Code: fetch("http://api.example.com/data")
   Risk: Using HTTP instead of HTTPS. Data transmitted in plain text.
   Fix: Change to HTTPS: https://...

Scan complete: 3 issues found (2 high, 1 medium, 0 low)
Scanned 15 files
🛡️ AI Code Guardian - Interactive Mode

┌─ Issues ────────────────────────────────────────────────┐
│ HIGH   - Hardcoded API Key        - api.js:12           │
│ HIGH   - SQL Injection Risk       - db.js:45            │
│ MEDIUM - Insecure HTTP Connection - api.js:8            │
└─────────────────────────────────────────────────────────┘
┌─ Details ───────────────────────────────────────────────┐
│ File: api.js:12                                         │
│ Code: const API_KEY = "sk-1234567890abcdef"             │
│ Risk: API key found in source code                      │
│ Fix:  Use process.env.API_KEY or import from .env       │
└─────────────────────────────────────────────────────────┘
↑/k: Up | ↓/j: Down | f: Mark False Positive | q: Quit
🛡️ AI Code Guardian - Watch Mode

Watching: ./src
Press Ctrl+C to stop

Running initial scan...
✅ No security issues found!
Scanned 15 files

👀 Watching for changes...
📝 File changed, rescanning...

❌ HIGH (Risk: 85): Hardcoded API Key
   File: api.js:12
   Code: const API_KEY = "sk-1234567890abcdef"
   Fix: Use process.env.API_KEY or import from .env file

👀 Watching for changes...
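The parameterized-query fix suggested in the scans above can be sketched in Python with the standard library's sqlite3 module; the table and data here are made up for the demo:

```python
import sqlite3

# In-memory database standing in for a real user store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

user_id = "1 OR 1=1"  # attacker-controlled input

# Vulnerable pattern the scanner flags:
#   query = "SELECT * FROM users WHERE id = " + user_id
# The concatenated string would return every row in the table.

# Safe: the ? placeholder binds user_id as data, never as SQL.
rows = conn.execute("SELECT * FROM users WHERE id = ?", (user_id,)).fetchall()
print(rows)  # [] - the malicious string matches no id

rows_ok = conn.execute("SELECT * FROM users WHERE id = ?", (1,)).fetchall()
print(rows_ok)  # [(1, 'alice')]
```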

Real-World Example: LiteLLM Supply Chain Attack

On March 24, 2026, compromised versions of LiteLLM were published to PyPI. Here's what AI Code Guardian found.

Code Scan Results
Dependency Check
🛡️ AI Code Guardian - Scanning LiteLLM Repository

Scanning: /tmp/litellm-scan

❌ HIGH (Risk: 85): Hardcoded Secret
   File: tests/test_litellm.py:87
   Code: langfuse_secret="global_secret"
   Risk: Secret or password found in source code.
   Fix: Use environment variables: process.env.SECRET_KEY

❌ HIGH (Risk: 85): AWS Access Key
   File: tests/router_unit_tests/test_router_helper_utils.py:173
   Code: "aws_access_key_id": "AKIAIOSFODNN7EXAMPLE"
   Risk: AWS access key found. Never commit AWS credentials.
   Fix: Store in AWS credentials file or use IAM roles

❌ HIGH (Risk: 85): Dangerous eval() Usage
   File: litellm/proxy/guardrails/guardrail_hooks/custom_code/code_validator.py:12
   Code: (r"\beval\s*\(", "eval() is not allowed")
   Risk: eval() can execute arbitrary code. Avoid if possible.
   Fix: Use JSON.parse() for data or refactor to avoid eval()

❌ MEDIUM (Risk: 50): Insecure HTTP Connection
   File: litellm/llms/docker_model_runner/chat/transformation.py:92
   Code: api_base="http://model-runner.docker.internal/engines/llama.cpp"
   Risk: Using HTTP instead of HTTPS. Data transmitted in plain text.
   Fix: Change to HTTPS: https://...

Scan complete: 1614 issues found (1035 high, 579 medium, 0 low)
Scanned 5407 files

⚠️ Note: These are code-level issues. The actual backdoor was injected at
package time and wouldn't appear in the source repository.
🛡️ AI Code Guardian - Dependency Check

Checking: requirements.txt
Found 1 dependency, checking for vulnerabilities...

❌ CRITICAL: GHSA-xxxx-xxxx-xxxx
   Package: litellm@1.82.8 (PyPI)
   Summary: Malicious code in litellm 1.82.7-1.82.8 - credential stealer
   References:
   - https://github.com/BerriAI/litellm/issues/24512
   - https://futuresearch.ai/blog/litellm-pypi-supply-chain-attack
   - https://nvd.nist.gov/vuln/detail/CVE-2026-XXXXX

Found 1 vulnerability in 1 package

✅ This would have been detected within hours of the CVE being published!

Key Takeaways

  • Code scanning found 1614 security issues in LiteLLM's source
  • Dependency checking would flag compromised versions after CVE disclosure
  • ⚠️ Supply chain attacks require package-time detection (not just source scanning)
  • 🛡️ Defense in depth: Use both code scanning AND dependency checking

Ready to Secure Your Code?

Install in seconds. Start scanning immediately.