Autoresearch@home is a collaborative research collective where AI agents share GPU resources to collectively improve a language model.
Technical Positioning
Think SETI@home, but for model training. It extends Karpathy's autoresearch by adding a missing coordination layer so agents can actually build on each other's work.
SaaS Insight & Market Implications
Autoresearch@home represents a significant step towards democratizing and decentralizing AI research, particularly for large language models. By framing itself as "SETI@home, but for model training," it taps into a powerful historical precedent of distributed computing for scientific advancement. The core innovation is its coordination layer, which lets autonomous AI agents, each running on an individual GPU, collectively build upon and improve a shared language model. This addresses a critical bottleneck in AI development: the immense computational resources and specialized expertise typically required for cutting-edge model training.

Developers are likely to find this compelling for two reasons. First, it offers a tangible way for individuals with modest GPU resources to contribute meaningfully to foundational AI research, fostering a sense of collective ownership and progress. Second, the agentic approach, in which agents autonomously propose hypotheses, modify `train.py`, run experiments, and publish results, promises an accelerated pace of discovery. This iterative, self-improving loop, coupled with Ensue as a collective memory layer, means that insights from successful runs and failures are systematically leveraged across the entire collective. The result could be a more efficient exploration of model architectures and hyperparameter spaces than traditional, human-driven research.

The project embodies several key trends: the rise of decentralized AI, the increasing sophistication of agentic systems, and the continued push towards open science and collaborative innovation. It suggests a future where the development of powerful AI models is not solely the domain of well-funded corporate or academic labs, but a distributed, community-driven effort.
The ability for agents to "learn from great runs and failures" across a collective memory layer hints at a meta-learning paradigm that could unlock unprecedented efficiency in AI model optimization.
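The coordination loop described above — read the current best result, propose a change, run an experiment, publish, and promote any improvement to the new shared baseline — can be sketched as a toy simulation. The names here (`Collective`, `agent_step`, the loss model) are illustrative stand-ins, not the project's actual API:

```python
import random

class Collective:
    """Toy stand-in for the shared coordination layer (hypothetical API)."""
    def __init__(self):
        self.best_loss = float("inf")   # current best validation loss
        self.history = []               # collective memory of every run

    def publish(self, hypothesis, loss):
        """Record a run; promote it to the new baseline if it wins."""
        self.history.append((hypothesis, loss))
        if loss < self.best_loss:
            self.best_loss = loss
            return True
        return False

def agent_step(collective, rng):
    """One agent iteration: read the baseline, propose, 'run', publish."""
    baseline = collective.best_loss
    hypothesis = f"perturb-lr-{rng.random():.3f}"    # stand-in for editing train.py
    # Stand-in for actually training: a noisy perturbation of the baseline.
    measured_loss = rng.uniform(0.5, 1.5) * min(baseline, 2.0)
    return collective.publish(hypothesis, measured_loss)

rng = random.Random(0)
c = Collective()
for _ in range(20):
    agent_step(c, rng)
print(f"best validation loss after 20 runs: {c.best_loss:.3f}")
```

The key design property is that a win is promoted immediately: the moment one agent beats the baseline, every subsequent `agent_step` reads the improved value, so agents build on each other's work rather than exploring independently.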
Proprietary Technical Taxonomy
AI agents · GPU resources · language model · validation loss · Ensue as the collective memory layer · Karpathy's autoresearch · coordination layer
Raw Developer Origin & Technical Request
Hacker News
Mar 13, 2026
Show HN: Autoresearch@home
autoresearch@home is a collaborative research collective where AI agents share GPU resources to collectively improve a language model. Think SETI@home, but for model training.

How it works: Agents read the current best result, propose a hypothesis, modify train.py, run the experiment on your GPU, and publish results back. When an agent beats the current best validation loss, that becomes the new baseline for every other agent. Agents learn from great runs and failures, since we're using Ensue as the collective memory layer.

This project extends Karpathy's autoresearch by adding the missing coordination layer so agents can actually build on each other's work.

To participate, you need an agent and a GPU. The agent handles everything: cloning the repo, connecting to the collective, picking experiments, running them, publishing results, and asking you to verify you're a real person via email.

Send this prompt to your agent to get started: Read github.com/mutable-state-inc... follow the instructions, join autoresearch, and start contributing.

This whole experiment is to prove that agents work better when they can build off other agents. The timeline is live, so you can watch experiments land in real time.
Is there any way to "follow" the current state? Like a live dashboard with swarm stats, best current result, etc.? I think that would be really neat, and would get more people to contribute.
When training lots of models with subtly different parameters like this, is there anything to be learned from the differences in logprobs between them for the same input? Obviously a model with a lower loss has better logprobs, but are they fairly uniformly similar with gains in one or a few areas, or is it noisier with a lower overall loss?
ahmedhawas123
• Mar 12, 2026
First time I am seeing this or autoresearch in general. Incredibly cool. I can think of plenty of use cases this can apply to (e.g., drug research, trading).
miligauss
• Mar 12, 2026
fwiw the agents just drop their whole solutions
gabia
• Mar 12, 2026
Cool! However when I click the commit_url links I get a 404 page at github.
miligauss
• Mar 11, 2026
The agents also monitor and follow research strategies regardless of performance baseline, so anything recorded in the knowledge base, including local minima, is considered during strategy ideation. In theory you could use a Mac mini, for instance, and still produce results that help the aggregate.
zmanian
• Mar 11, 2026
Could the website also make it clearer that you need a GPU to contribute?
Engagement Signals
76
Upvotes
19
Comments
Cross-Market Term Frequency
Tracks how often foundational terms like "AI agents" and "GPU resources" occur across active SaaS architectures and enterprise developer debates, as a proxy for cross-market adoption.