Show HN: Guardrails – A Contextual Security Layer for Agentic AI Systems
Hey HN, we are a small team from Europe working on agent security. We have just released Invariant Guardrails, our open-source system for enforcing contextual security in AI agents and MCP-powered applications.

Guardrails acts as a transparent layer between your LLM/MCP server and your agent. It lets you define deterministic rules that block risky behavior: secret leakage, unsafe tool use, PII exposure, malicious code patterns, jailbreaks, loops, and more.

Rules are written in a Python-inspired DSL, enabling powerful contextual logic like the example below. The origins of this idea go back to OPA/Rego, i.e. policy languages used for authorization.

  raise "PII leakage in email" if:

    (out: ToolOutput) -> (call: ToolCall)

    any(pii(out.content))

    call is tool:send_email({ to: "^(?!.*@ourcompany.com$).*$" })
The rule above blocks send_email calls addressed outside ourcompany.com whenever the tool output feeding them contains PII. Guardrails is fast (low-latency, pipelined execution), supports both hosted and local deployments, and integrates via simple proxies, so you keep your agent code unchanged.
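Integration is typically just pointing your existing client at the proxy instead of the upstream API. Here is a minimal sketch using the OpenAI Python client; the proxy URL below is a placeholder rather than the actual endpoint, see the docs for the real configuration:

  # Minimal sketch: route an existing OpenAI client through a guardrails
  # proxy by overriding base_url. The URL is a placeholder, not the actual
  # Invariant endpoint.
  from openai import OpenAI

  client = OpenAI(
      base_url="https://<your-guardrails-proxy>/v1",  # placeholder proxy address
      api_key="sk-...",                               # your usual provider key
  )

  # The agent code itself is unchanged: requests pass through the proxy,
  # where guardrail rules are evaluated before the response is returned.
  response = client.chat.completions.create(
      model="gpt-4o",
      messages=[{"role": "user", "content": "Email the report to alice@example.org"}],
  )
  print(response.choices[0].message.content)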

Let us know what you think. We've found it quite helpful for MCP debugging and security analysis so far. Happy to answer questions!

Docs: https://explorer.invariantlabs.ai/docs

Repo: https://github.com/invariantlabs-ai/invariant

Blog post: https://invariantlabs.ai/blog/guardrails

Playground: https://explorer.invariantlabs.ai/playground
