We built OneCLI because AI agents are being given raw API keys, and it's going about as well as you'd expect. We figured the answer isn't "don't give agents access," it's "give them access without giving them secrets."

OneCLI is an open-source gateway that sits between your AI agents and the services they call. You store your real credentials once in OneCLI's encrypted vault and give your agents placeholder keys. When an agent makes an HTTP call through the proxy, OneCLI matches the request by host/path, verifies the agent should have access, swaps the placeholder for the real credential, and forwards the request. The agent never touches the actual secret; it just uses CLI or MCP tools as normal.

Try it in one line:
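The match-and-swap step described above can be sketched roughly as follows. This is a minimal illustration of the idea, not OneCLI's actual code; the types, matching rules, and vault layout are all assumptions.

```rust
/// Illustrative vault entry (hypothetical shape): which host the real
/// credential belongs to, and the placeholder the agent was given instead.
struct VaultEntry {
    host: String,
    placeholder: String,
    real_credential: String,
}

/// Replace the placeholder in an outgoing Authorization header with the
/// real credential, but only when the request host matches the entry.
fn swap_credential(host: &str, auth_header: &str, vault: &[VaultEntry]) -> Option<String> {
    vault.iter().find_map(|e| {
        if e.host == host && auth_header.contains(&e.placeholder) {
            Some(auth_header.replace(&e.placeholder, &e.real_credential))
        } else {
            None // no match: reject or forward unchanged, per policy
        }
    })
}

fn main() {
    let vault = vec![VaultEntry {
        host: "api.example.com".into(),
        placeholder: "onecli-ph-123".into(),
        real_credential: "sk-real-secret".into(),
    }];
    // The agent only ever sees "onecli-ph-123"; the proxy substitutes the secret.
    let swapped = swap_credential("api.example.com", "Bearer onecli-ph-123", &vault);
    assert_eq!(swapped.as_deref(), Some("Bearer sk-real-secret"));
    // A request to a different host never receives the real secret.
    assert!(swap_credential("evil.example.com", "Bearer onecli-ph-123", &vault).is_none());
}
```

The key property is that the substitution happens only in the proxy process, so nothing in the agent's context window or logs ever contains the real credential.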
docker run --pull always -p 10254:10254 -p 10255:10255 -v onecli-data:/app/data ghcr.io/onecli/onecli

The proxy is written in Rust, the dashboard is Next.js, and secrets are AES-256-GCM encrypted at rest. Everything runs in a single Docker container with an embedded Postgres (PGlite), no external dependencies. It works with any agent framework (OpenClaw, NanoClaw, IronClaw, or anything that can set an HTTPS_PROXY).

We started with what felt most urgent: agents shouldn't be holding raw credentials.
The next layer is access policies and audit: defining what each agent can call, logging everything, and requiring human approval before sensitive actions go through.

It's Apache-2.0 licensed. We'd love feedback on the approach, and we're especially curious how people are handling agent auth today.

GitHub: https://github.com/onecli/onecli
Site: https://onecli.sh
Show HN: OneCLI – Vault for AI Agents in Rust
A critical security solution that allows AI agents to access external services without directly handling sensitive API keys, thereby preventing credential exposure and enabling secure agent operations.
Product Positioning & Context
AI Executive Synthesis
OneCLI addresses a critical and rapidly escalating security vulnerability within the burgeoning AI agent ecosystem: the direct exposure of raw API keys to autonomous agents. As AI agents gain more sophisticated capabilities and broader access to external services, the risk of credential compromise becomes a significant impediment to their widespread and secure adoption. OneCLI positions itself as an essential security layer, acting as an intelligent proxy and encrypted vault that mediates agent-to-service interactions.
Developers will find OneCLI compelling because it offers a practical, low-friction solution to a complex problem. Instead of wrestling with custom credential management for each agent or service, developers can centralize secrets in OneCLI and provide agents with secure, temporary placeholders. This approach significantly reduces the attack surface and simplifies compliance, allowing agents to operate effectively without ever touching sensitive data. The "single Docker container" deployment with "no external dependencies" further lowers the barrier to entry, making it accessible for rapid prototyping and production environments.
This project represents a crucial trend: the maturation and operationalization of AI agent technology. Early AI agent development focused on functionality; now, the industry is shifting towards robust, secure, and auditable deployments. OneCLI embodies this by applying established security patterns—like credential vaults and proxy-based access control—to the unique challenges of AI agents. It anticipates future needs like granular access policies and audit trails, indicating a move towards enterprise-grade AI agent governance. By providing a foundational security primitive, OneCLI helps pave the way for more trustworthy and scalable AI agent applications, mitigating risks that could otherwise stifle innovation and adoption in this transformative field.
Community Voice & Feedback
wow very nice - just in time for agents. I love the idea of agents not holding the credentials. Who knows how they'd log it into some .md file somewhere where it can be further exploited.
This is a smart approach, giving agents access without exposing secrets is definitely needed.
Curious how you handle dynamic access policies for agents that need temporary elevated permissions, or if you integrate with existing IAM systems.
Also, do you track or enforce agent-level audit logs for requests that go through the proxy?
Curious how you handle dynamic access policies for agents that need temporary elevated permissions, or if you integrate with existing IAM systems.
Also, do you track or enforce agent-level audit logs for requests that go through the proxy?
It’s an approach that works, and I’ve thought of implementing the same thing but stopped short because I feel it just pushes the underlying problem around. Now I have to share my creds with a black box that I know very little about, and it’s not a real vault.

This should be solved by the vaults (HashiCorp Vault / AWS Secrets Manager).

The one thing that I did build was based on a service that AWS provides (AWS STS), which handles temporary, time-bound creds out of the box.

https://timebound-iam.com
I don't get the benefit. Yes, agents should not have access to API keys because they can easily be fooled into giving up those API keys. But what's to prevent a malicious agent from re-using the honest agent's fake API key that it exfiltrates via prompt injection? The gateway can't tell that the request is coming from the malicious agent. If the honest agent can read its own proxy authorization token, it can give that up as well.

It seems the only sound solution is to have a sidecar attached to the agent and have the sidecar authenticate with the gateway using mTLS. The sidecar manages its own TLS key; the agent never has access to it.
This is slick, but the only thing it prevents is agents directly sharing the credentials through git or something.

And that’s not the biggest risk of giving credentials to agents. If they can still make arbitrary API calls, they can still cost money, cause security problems, or delete production.

If you’re worried about creds leakage only because your credentials are static and permanent, well, time to upgrade your secrets architecture.
Secret and credential sprawl is a real problem in agent pipelines specifically -- each agent needs its own scoped access and the blast radius of a leaked credential is much larger when an agent can act autonomously. We ended up with a tiered secret model: agents get short-lived derived tokens scoped to exactly the tools they need for a given task, not broad API keys. Revocation on task completion, not on schedule. More ops overhead upfront but caught two misuse cases that would have been invisible otherwise.
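The tiered model in the comment above (short-lived derived tokens scoped to specific tools, revoked on task completion rather than on a schedule) can be sketched like this. All names here are illustrative assumptions, not from any real system mentioned in the thread.

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

/// A derived token scoped to exactly the tools a task needs.
struct ScopedToken {
    tools: Vec<String>,  // the only tools this token authorizes
    expires_at: Instant, // short TTL as a safety net
    revoked: bool,       // set explicitly on task completion
}

struct TokenIssuer {
    tokens: HashMap<String, ScopedToken>,
}

impl TokenIssuer {
    fn new() -> Self {
        Self { tokens: HashMap::new() }
    }

    /// Issue a short-lived token scoped to a specific tool list.
    fn issue(&mut self, id: &str, tools: &[&str], ttl: Duration) {
        self.tokens.insert(
            id.to_string(),
            ScopedToken {
                tools: tools.iter().map(|t| t.to_string()).collect(),
                expires_at: Instant::now() + ttl,
                revoked: false,
            },
        );
    }

    /// A call is allowed only if the token exists, is unexpired,
    /// unrevoked, and covers the requested tool.
    fn allow(&self, id: &str, tool: &str) -> bool {
        self.tokens.get(id).map_or(false, |t| {
            !t.revoked && Instant::now() < t.expires_at && t.tools.iter().any(|x| x == tool)
        })
    }

    /// Revoke on task completion, not on a schedule.
    fn revoke(&mut self, id: &str) {
        if let Some(t) = self.tokens.get_mut(id) {
            t.revoked = true;
        }
    }
}

fn main() {
    let mut issuer = TokenIssuer::new();
    issuer.issue("task-42", &["github.read"], Duration::from_secs(300));
    assert!(issuer.allow("task-42", "github.read"));
    assert!(!issuer.allow("task-42", "github.write")); // out of scope
    issuer.revoke("task-42");
    assert!(!issuer.allow("task-42", "github.read")); // revoked on completion
}
```

The point of revoking on completion is that a leaked token is useless once its task ends, regardless of how much TTL remains.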
This problem+solution, like many others in the agentic space, has nothing agent-specific. Giving a "box" API keys was always considered a risk, and auth-proxying has existed as a solution forever. See tokenizer [0] by the fly.io team, which makes it a stateless service, e.g. no database or dashboard. Or the BuzzFeed SSO proxy, which lets you do the same via an OAuth2 dance at the frontend and an upstream config at the backend that injects secrets: https://github.com/buzzfeed/sso/blob/549155a64d6c5f8916ed909....

[0]: https://github.com/superfly/tokenizer
This can also be done using existing vaults or secrets managers. HashiCorp Vault can do this, and agents can be instructed to get secrets, which are set without the agent's knowledge. I use these two simple scripts with OpenClaw to achieve this, along with time-scoped expiration. The call to vault_get.sh is inside the agent's skill script so that the secrets are not leaked to LLMs or in any trace logs:

vault_get.sh: https://gist.github.com/sathish316/1ca3fe1b124577d1354ee254a...

vault_set.sh: https://gist.github.com/sathish316/1f4e6549a8f85ac5c5ac8a088...

Blog about the full setup for OpenClaw: https://x.com/sathish316/status/2019496552419717390
This is the right approach. I built a similar system (https://github.com/airutorg/airut); a couple of learnings to share:

1) Not all systems respect HTTP_PROXY. Node in particular is very uncooperative in this regard.

2) AWS access keys can’t be handled by a simple credential swap; the requests need to be re-signed with the real keys. Replicating SigV4 and SigV4A exactly was a bit of a pain.

3) To be secure, this system needs to run outside of the execution sandbox so that the agent can’t just read the keys from the proxy process.

For Airut I settled on a transparent (mitm)proxy, running in a separate container, and injecting the proxy cert into the cert store in the container where the agent runs. This solved 1 and 3.
IronClaw seems to do this natively. I like the idea in general, so it's good to see this pulled out. I have a few questions:

- How can a proxy inject stuff if it's TLS encrypted? (same for IronClaw and others)

- Any adapters for existing secret stores? Maybe my fake credential could be a 1Password entry path (like 1Password:vault-name/entry/field) and it would pull from 1P, instead of having yet another place for me to store secrets.
Related Early-Stage Discoveries
Discovery Source
Hacker News. Aggregated via automated community intelligence tracking.
Tech Stack Dependencies
No direct open-source NPM package mentions detected in the product documentation.
Media Tractions & Mentions
No mainstream media stories specifically mentioning this product name have been intercepted yet.
Deep Research & Science
No direct peer-reviewed scientific literature matched with this product's architecture.
Market Trends