Secure your LLM traffic

On-prem. Single binary. No cloud dependency.

Three-tier detection that gets smarter with every attack. Enterprise auth, tamper-evident audit logs, and compliance reporting — all in a single Rust binary.

Get Started View on GitHub
curl -sSL https://securellm.axiomworks.ai/install | bash Copy

Inline proxy. Zero code changes.

SecureLLM sits between your application and the LLM provider. Every request is scanned before it leaves. Every response is validated before it arrives.

Your App
Request scan
→→→
SecureLLM
Response scan
←←←
LLM Provider
Self-hardening: confirmed threats improve detection automatically. Every blocked attack makes the system smarter.

Three tiers. Sixty patterns. Always learning.

Pattern matching catches the known. Semantic analysis catches the subtle. AI judgment catches the novel. Each tier feeds the next.

PII & Secrets
60+ patterns detect API keys, tokens, private keys, and connection strings before they reach the model.
Prompt Injection
Three-tier detection — pattern matching, semantic analysis, and AI judgment — catches attacks that bypass simple filters.
Output Validation
Blocks XSS, SSRF, SQL injection, and dangerous code patterns in LLM responses before they reach your application.
System Prompt Protection
Detects when models leak or paraphrase your system prompt in responses.
Agentic Security
Validates MCP tool calls, detects goal hijacking, and scans RAG retrieval results for injected instructions.
Multi-Language
Injection detection across English, Spanish, French, German, and Portuguese.

Built for teams that can't afford to be wrong.

SSO, role-based access, tamper-evident audit logs, SIEM integration, compliance reporting, and cost controls. All included.

Enterprise
SAML 2.0 SSO
Integrate with Okta and Azure AD. SP-initiated flow with XML signature validation.
Enterprise
Role-Based Access
Five roles with per-team policy scoping. Engineering allows code, legal blocks PII. Least-privilege default.
Enterprise
Audit Log
HMAC-SHA256 hash chain. Tamper-evident. Configurable retention. Chain anchor verification.
Enterprise
SIEM Integration
Syslog RFC 5424, JSON event streams, webhooks with HMAC signatures. Never blocks scanning.
Enterprise
Compliance
Generate evidence artifacts for SOC 2, HIPAA, CJIS, and NIST frameworks from audit data.
Enterprise
Token Budgets
Per-user and per-team cost controls with automatic enforcement and reset intervals.

Secure the proxy that secures everything else.

SecureLLM is built with the same rigor we expect from the systems it protects.

FIPS 140-3 Crypto
Post-quantum algorithms (ML-KEM, ML-DSA). Validated cryptographic primitives for all signing and encryption.
Memory-Safe Rust
No buffer overflows. No use-after-free. No null pointer dereferences. Compile-time safety guarantees.
Fail-Closed Design
If scanning fails, the request is blocked. No silent pass-through. Security defaults are always on.
Self-Hardening Detection
Confirmed threats are automatically incorporated into detection rules. Every attack improves coverage.

A note on honesty. SecureLLM is actively developed and improving. We close gaps as we find them. No security tool is perfect — maintain your own vigilance alongside SecureLLM. Defense in depth means SecureLLM is one layer, not the only layer.

Running in three steps.

From zero to scanning LLM traffic in under five minutes. No containers, no dependencies, no cloud accounts.

1
Install
curl -sSL https://securellm.axiomworks.ai/install | bash
2
Configure
# ~/.config/securellm/config.toml [proxy] listen = "127.0.0.1:8080" upstream = "https://api.anthropic.com" [detection] pii = true injection = true output_scan = true [audit] enabled = true path = "/var/log/securellm/audit.jsonl" hmac_chain = true
3
Run
securellm serve