I built Understudy because a lot of real work still spans native desktop apps, browser tabs, terminals, and chat tools. Most current agents live in only one of those surfaces.

Understudy is a local-first desktop agent runtime that can operate GUI apps, browsers, shell tools, files, and messaging in one session. The part I'm most interested in feedback on is teach-by-demonstration: you do a task once, the agent records screen video + semantic events, extracts the intent rather than coordinates, and turns it into a reusable skill.

Demo video: https://www.youtube.com/watch?v=3d5cRGnlb_0

In the demo I teach it: Google Image search -> download a photo -> remove background in Pixelmator Pro -> export -> send via Telegram. Then I ask it to do the same for Elon Musk. The replay isn't a brittle macro: the published skill stores intent steps, route options, and GUI hints only as a fallback. In this example it can also prefer faster routes when they are available instead of repeating every GUI step.

Current state: macOS only. Layers 1-2 are working today; Layers 3-4 are partial and still early.

npm install -g @understudy-ai/understudy
understudy wizard
GitHub: https://github.com/understudy-ai/understudy

Happy to answer questions about the architecture, teach-by-demonstration, or the limits of the current implementation.
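The published-skill format described above (intent steps, route options, GUI hints only as a fallback) can be sketched as a data structure. This is a hypothetical illustration, not Understudy's actual schema: the type names, route kinds, and the `photoSkill` example are invented for clarity.

```typescript
// Hypothetical sketch of a published skill: intent steps, each with an
// ordered list of routes, where GUI hints come last as a fallback.
// All names here are illustrative, not Understudy's real schema.

type Route =
  | { kind: "cli"; command: string }
  | { kind: "api"; endpoint: string }
  | { kind: "gui"; hints: string[] }; // fallback only

interface IntentStep {
  intent: string;                    // what the step accomplishes, not where to click
  routes: Route[];                   // preferred routes first, GUI hints last
  inputs?: Record<string, string>;   // parameters re-filled at replay time
}

interface Skill {
  name: string;
  steps: IntentStep[];
}

const photoSkill: Skill = {
  name: "search-edit-send-photo",
  steps: [
    {
      intent: "find and download an image of {subject}",
      inputs: { subject: "Elon Musk" },
      routes: [
        { kind: "api", endpoint: "image-search" },
        { kind: "gui", hints: ["browser: Google Images", "save first result"] },
      ],
    },
    {
      intent: "remove the image background",
      routes: [{ kind: "gui", hints: ["Pixelmator Pro: Remove Background"] }],
    },
    {
      intent: "send the exported image via Telegram",
      routes: [
        { kind: "cli", command: "telegram-send {file}" },
        { kind: "gui", hints: ["Telegram: attach file", "press Send"] },
      ],
    },
  ],
};

// A replay runner would try routes in order, only dropping to GUI hints
// when no faster route succeeds.
const preferredRoute = (s: IntentStep): Route => s.routes[0];
console.log(photoSkill.steps.map((s) => preferredRoute(s).kind).join(","));
// → api,gui,cli
```

Storing intent plus an ordered route list, rather than recorded coordinates, is what would let a replay prefer a faster route when one exists while still having GUI steps to fall back on.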
Show HN: Understudy – Teach a desktop agent by demonstrating a task once
A desktop agent that learns tasks from a single demonstration, extracting intent rather than coordinates to build reusable skills for cross-application workflows; positioned as a robust alternative to brittle macros.
Product Positioning & Context
AI Executive Synthesis
Understudy represents a significant leap in desktop automation, moving beyond brittle, coordinate-based macros and single-application RPA solutions. Its core innovation, "teach-by-demonstration" coupled with "intent extraction," addresses a critical pain point: the fragmented nature of modern work across native apps, browsers, terminals, and chat. This approach democratizes automation, enabling users to create robust "skills" by simply performing a task once, rather than requiring extensive scripting or complex configuration.

For developers, Understudy offers a powerful tool to augment their productivity and build sophisticated internal automations. The promise of "reusable skills" that adapt to UI changes and prefer "faster routes" means less maintenance overhead than traditional scripts. The "local-first" architecture is a major draw, ensuring data privacy, security, and performance, which is crucial for enterprise adoption and appeals to developers who prioritize control and transparency. The open-source nature further encourages community contributions and customization.

The product taps into several key market trends: first, the increasing demand for intelligent automation that can handle complex, multi-modal workflows, moving toward more "agentic" AI systems; second, the growing emphasis on privacy-preserving, local-first computing that reduces reliance on cloud services for sensitive operations. Understudy positions itself as a foundational layer for a new generation of adaptive, user-taught desktop agents, potentially disrupting the traditional RPA market by offering a more intuitive, resilient, and developer-friendly paradigm for automating the "last mile" of digital work.
Community Voice & Feedback
Interested, and disappointed that it's macOS only. I started something similar a while back on Linux, but only got through level 1. I'll take some ideas from this and continue work on it now that it's on my mind again.
Nice work. I scanned through the code and found this file to be an interesting read https://github.com/understudy-ai/understudy/blob/main/packag...
Sounds a bit sketch? Learning to do a thing means handling the edge cases, and you can't exactly do that in one pass. When I've learned manual processes it's been at least 9 attempts: 3 watching, 3 doing with an expert watching, and 3 with the expert checking the result.
It's a really cool idea. Many desktop tasks are teachable like this.

The look-click-look-click loop it used for sending the Telegram for Musk was pretty slow. How intelligent (and therefore slow) does a model have to be to handle this? What model was used for the demo video?
Nice idea
I have a hard time believing this is robust.
Cool idea -- the Claude Chrome extension has something like this implemented, but obviously it's restricted to the Chrome browser.
cool idea. good idea doing a demo as well.
One more tool targeting OSX only. That platform is overserved with desktop agents already while others are underserved, especially Linux.
Related Early-Stage Discoveries
Discovery Source
Hacker News
Aggregated via automated community intelligence tracking.
Tech Stack Dependencies
No direct open-source NPM package mentions detected in the product documentation.
Media Traction & Mentions
Deep Research & Science
No peer-reviewed scientific literature directly matching this product's architecture detected.