
Hacker News Show HN: Axe – A 12MB binary that replaces your AI framework

Axe is positioned as a lightweight, composable, and Unix-like alternative to traditional, monolithic AI frameworks that are often expensive, slow, and focused on chatbot-like, long-lived sessions. It aims to replace these frameworks by treating LLM agents as small, focused programs that can be chained together and integrated into existing development workflows.

206
Traction Score
118
Discussions
Mar 13, 2026
Launch Date

Product Positioning & Context

AI Executive Synthesis
The market is currently saturated with large, resource-intensive AI frameworks often geared towards conversational interfaces. Axe represents a significant counter-trend: the 'unbundling' of AI capabilities into small, focused, and composable agents. This shift addresses critical pain points for developers and organizations: the high cost, slowness, and fragility associated with massive context windows and long-lived sessions. By positioning LLM agents as 'Unix programs,' Axe taps into a deeply ingrained developer philosophy of small, focused tools that do one thing well and can be chained together.

Developers will care deeply about Axe's minimal footprint (12MB binary, no Python/Docker dependencies by default), its CLI-first approach, and its seamless integration into existing development workflows via stdin piping, git hooks, and CI. This drastically lowers the barrier to entry for incorporating AI into specific, automated tasks like code review, log analysis, or commit message generation, without the overhead of a full-blown AI framework. The inclusion of features like sub-agent delegation, persistent memory, multi-provider support, and path-sandboxed file operations further enhances its utility and security for enterprise adoption.

This trend signifies a maturation of the AI tools landscape. It moves beyond the initial hype of general-purpose chatbots towards practical, efficient, and integrated AI components that can augment existing software systems. Axe is not just an alternative; it's a philosophical statement advocating for lean, composable AI infrastructure that respects traditional software engineering principles, making AI more accessible, controllable, and cost-effective for targeted automation within the B2B SaaS ecosystem.
I built Axe because I got tired of every AI tool trying to be a chatbot.

Most frameworks want a long-lived session with a massive context window doing everything at once. That's expensive, slow, and fragile. Good software is small, focused, and composable... AI agents should be too.

Axe treats LLM agents like Unix programs. Each agent is a TOML config with a focused job, such as code reviewer, log analyzer, or commit message writer. You can run them from the CLI, pipe data in, and get results out. You can use pipes to chain them together, or trigger them from cron, git hooks, or CI.

What Axe is:
- 12MB binary, two dependencies. No framework, no Python, no Docker (unless you want it)
- Stdin piping, so something like `git diff | axe run reviewer` just works
- Sub-agent delegation, where agents call other agents via tool use, depth-limited
- Persistent memory. If you want, agents can remember across runs without you managing state
- MCP support. Axe can connect any MCP server to your agents
- Built-in tools, such as web_search and url_fetch out of the box
- Multi-provider. Bring what you love to use: Anthropic, OpenAI, Ollama, or anything in models.dev format
- Path-sandboxed file ops. Keeps agents locked to a working directory

Written in Go. No daemon, no GUI.

What would you automate first?
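The post does not include a config sample, so as a rough sketch of what "each agent is a TOML config with a focused job" could look like (every key name and value below is an assumption for illustration, not Axe's actual schema):

```toml
# Hypothetical agent definition — field names are illustrative guesses,
# loosely mapped to the features listed in the post.
name = "reviewer"
model = "anthropic/claude-sonnet"   # multi-provider model string is a guess
system_prompt = """
You review the unified diff provided on stdin and reply with a short,
focused code review.
"""
tools = ["web_search", "url_fetch"] # built-in tools mentioned in the post
memory = false                      # opt-in persistent memory, per the post
working_dir = "."                   # path-sandboxed file ops root
```

Per the post, such an agent would then be invoked as `git diff | axe run reviewer`.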

Community Voice & Feedback

bsoles • Mar 13, 2026
I don't know exactly how these things work, but you may run into copyright/TM issues with Deque's Axe tool: https://www.deque.com/axe/devtools/
paymenthunter01 • Mar 13, 2026
Nice approach treating LLM agents like Unix programs. The TOML config per agent is clean. I've been working on something in a similar vein for invoice processing — small focused agents that do one thing well. Curious how you handle retries when an upstream LLM provider has intermittent failures mid-pipeline?
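The thread does not answer the retry question, but a generic way to harden any CLI agent stage against intermittent provider failures, independent of Axe (the `retry_stage` wrapper and its backoff policy are invented for illustration), is to buffer stdin once and re-feed it on each attempt:

```shell
set -euo pipefail

# retry_stage MAX CMD ARGS...: run CMD as a pipeline stage, retrying up to
# MAX times with exponential backoff. Stdin is buffered so every attempt
# sees the same input, even after a mid-stream provider failure.
retry_stage() {
  local max=$1; shift
  local input attempt=1
  input=$(cat)                      # buffer stdin for replay across attempts
  while true; do
    if printf '%s' "$input" | "$@"; then
      return 0
    fi
    if (( attempt >= max )); then
      echo "stage failed after $max attempts: $*" >&2
      return 1
    fi
    sleep $(( 2 ** attempt ))       # back off before the next attempt
    (( attempt++ ))
  done
}
```

Usage would look like `git diff | retry_stage 3 axe run reviewer`, keeping the retry policy outside the agents themselves in classic Unix-wrapper style.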
ColonelPhantom • Mar 13, 2026
I like the idea of LLM-calling as an automation-friendly CLI tool! However, putting all my agents in ~/.config feels antithetical to this. My Bash scripts do not live there either, but rather in a separate script collection, or preferably, at their place of use (e.g. in a repo).

For example, let's say I want to add commit message generation (which I don't think is a great use of LLMs, but it is a practical example) to a repo. I would add the appropriate hook to .git, but I would also want the agent with its instructions to live inside the repo (perhaps in an `axe` or `agents` directory). Can Axe load agents from the current folder? Or can that be added?
athrowaway3z • Mar 13, 2026
I'm not sure if HN is being flooded with bots or if the majority of people here nowadays lack a sense of simplicity. Anybody looking to do interesting things should instantly ignore any project that mentions "persistent memory". It speaks of scope creep or complexity obfuscation. If a tool wants to include "persistent memory", it needs to write the 3-sentence explanation of how their scratch/notes files are piped around and what it achieves. Not just claim "persistent memory". I might even go so far that any project using the terminology "memory" is itself doomed to spend too much time & tokens building scaffolding for abstractions that don't work.
multidude • Mar 13, 2026
A problem I have is that the agent's mental model of the system I'm building diverges from reality over time. After discussing that many times and asking it to remember, it becomes frustrating. In the README you say the agent's memory persists across runs; would that solve said problem?

Also, I had to do several refactorings of my agent's constructs and found out that one of them was reinventing stuff, producing a plethora of function duplications: e.g. DB connection pools (I had at least four of them simultaneously). Would Axe require shared state between chained agents? Could it do it if required?
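The thread leaves the shared-state question open. One generic Unix-style answer, which is not a documented Axe capability (the `AGENT_STATE` variable and the `key=value` format are invented here), is to share a scratch file between pipeline stages via the environment, so a later agent can see what an earlier one recorded:

```shell
set -euo pipefail

# A scratch file whose path travels in the environment; every stage in the
# pipeline inherits AGENT_STATE and can read or append to it.
AGENT_STATE=$(mktemp)
export AGENT_STATE

# Stage 1 records a fact about the system being built.
echo "db_pool=shared_main" >> "$AGENT_STATE"

# Stage 2 consults the shared state before creating its own resources,
# avoiding the duplicated-connection-pool problem described above.
if grep -q '^db_pool=' "$AGENT_STATE"; then
  pool_decision="reuse"
else
  pool_decision="create"
fi

rm -f "$AGENT_STATE"
```

This keeps state explicit and inspectable between runs, in the same spirit as piping data between small programs.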
CraigJPerry • Mar 12, 2026
I've had good success with something along these lines but perhaps a bit more raw:
- claude takes a -p option
- I have a bunch of tiny scripts; each script is an agent but it only does one tiny task
- scripts can be composed in a unix pipeline

For example:

$ git diff --staged | ai-commit-msg | git commit -F -

Where ai-commit-msg is a tiny agent:

#!/usr/bin/env bash
# ai-commit-msg: stdin=git diff, stdout=conventional commit message
# Usage: git diff --staged | ai-commit-msg
set -euo pipefail
source "${AGENTS_DIR:-$HOME/.agents}/lib/agent-lib.sh"

SYSTEM=$(load_skills \
  core/unix-output.md \
  core/be-concise.md \
  domain/git.md \
  output/plain-text.md)

SYSTEM+=$'\n\nTask: Given a git diff on stdin, output a single conventional commit message. One line only.'

run_agent "$SYSTEM"

And as you can see, to keep the agents themselves tiny, they rely on a little lib to load the various skills and optionally apply a guard / post-exec validator. Those validators are usually simple grep or whatever to make sure there were no writes outside a given dir, but sometimes they can be used to enforce output correctness (always jq in my examples so far...). In theory the guard could be another claude -p call if I needed a semantic instruction.
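As a concrete sketch of the "simple grep" guard described here (the `write_file <path>` log format is invented for illustration; the comment does not specify what its validators actually parse), a post-exec validator could read a tool-call log from stdin and fail the pipeline on any write outside the allowed directory:

```shell
set -euo pipefail

# guard_writes ALLOWED_DIR: read a tool-call log on stdin; fail if any
# "write_file <path>" line points outside ALLOWED_DIR, else pass the
# log through unchanged so the pipeline can continue.
guard_writes() {
  local allowed=$1
  local log
  log=$(cat)
  # Select write_file lines, then keep only those NOT under the allowed dir;
  # any survivor means a sandbox violation.
  if printf '%s\n' "$log" | grep '^write_file ' \
      | grep -v "^write_file $allowed" > /dev/null; then
    echo "guard: write outside $allowed detected" >&2
    return 1
  fi
  printf '%s\n' "$log"
}
```

Chained as `some-agent | guard_writes /work | next-agent`, the guard stays a dumb filter: no LLM involvement, just a grep over what the agent claims to have done.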
Multicomp • Mar 12, 2026
This is what I've been trying to get nanobot to do, so thanks for sharing this. I plan to use this for workflow definitions like filesystems. I have a known workflow to create an RPG character with steps; let's automate some of the boilerplate by having a succession of LLMs read my preferences about each step and apply their particular pieces of data to that step of the workflow, outputting their result to successive subdirectories, so I can pub/sub the entire process and make edits to intermediate files to tweak results as I desire. Now that's cool!
mccoyb • Mar 12, 2026
Cool work! Aside, but 12 MB is ... large ... for such a thing. For reference, an entire HTTP (including crypto, TLS) stack with LLM API calls in Zig would net you a binary ~400 KB on ReleaseSmall (statically linked). You can implement an entire language, compiler, and a VM in another 500 KB (or less!). I don't think 12 MB is an impressive badge here?
reacharavindh • Mar 12, 2026
Reminded me of this from my bookmarks: https://github.com/chr15m/runprompt
bensyverson • Mar 12, 2026
It's exciting to see so much experimentation when it comes to form factors for agent orchestration! The first question that comes to mind is: how do you think about cost control? Putting a ton in a giant context window is expensive, but unintentionally fanning out 10 agents with a slightly smaller context window is even more expensive. The answer might be "well, don't do that," and that certainly maps to the UNIX analogy, where you're given powerful and possibly destructive tools, and it's up to you to construct the workflow carefully. But I'm curious how you would approach budget when using Axe.

Related Early-Stage Discoveries

Discovery Source

Hacker News

Aggregated via automated community intelligence tracking.

Tech Stack Dependencies

No direct open-source NPM package mentions detected in the product documentation.

Media Tractions & Mentions

No mainstream media stories specifically mentioning this product name have been intercepted yet.

Deep Research & Science

No direct peer-reviewed scientific literature matched with this product's architecture.