Hi, Ted here, creator of Mog.

- Mog is a statically typed, compiled, embedded language (think statically typed Lua) designed to be written by LLMs -- the full spec fits in 3,200 tokens.
- An AI agent writes a Mog program, compiles it, and dynamically loads it as a plugin, script, or hook.
- The host controls exactly which functions a Mog program can call (capability-based permissions), so permissions propagate from agent to agent-written code.
- Compiled to native code for low-latency plugin execution -- no interpreter overhead, no JIT, no process startup cost.
- The compiler is written in safe Rust so the entire toolchain can be audited for security. Even without a full security audit, Mog is already useful for agents extending themselves with their own code.
- MIT licensed, contributions welcome.

Motivations for Mog:

1. Syntax Only an AI Could Love: Mog is written for AIs to write, so the spec fits easily in context (~3,200 tokens), and it's intended to minimize foot-guns and lower the error rate when generating Mog code. This is why Mog has no operator precedence: non-associative operations have to use parentheses, e.g. (a + b) * c. It's also why there's no implicit type coercion, which I've found over the decades to be an annoying source of runtime bugs. There's also less support in Mog for generics, and there's no support at all for metaprogramming, macros, or syntactic abstraction.

If you asked people to write code in such a language, these restrictions could be onerous. But LLMs don't care, and the less expressivity you trust them with, the better.

2. Capability-Based Permissions: There's a paradox in existing security models for AI agents. If you give an agent like OpenClaw unfettered access to your data, that's insecure and you'll get pwned. But if you sandbox it, it can't do most of what you want. Worse, if you run scripts the agent wrote, those scripts don't inherit the permissions that constrain the agent's own bash tool calls, which leads to pwnage and other chaos. And that's before you even run one of the many OpenClaw plugins with malware.

Mog tries to solve this by taking inspiration from embedded languages. It compiles all the way to machine code, ahead of time, but the compiler doesn't output any dangerous code (at least it shouldn't -- Mog is quite new, so that could still be buggy). This allows a host program, such as an AI agent, to generate Mog source code, compile it, and load it into itself using dlopen(), while maintaining security guarantees.

The main trick is that a Mog program on its own can't do much. It has no direct access to syscalls, libc, or memory. It can basically call functions, do heap allocations (but only within the arena the host gives it), and return something.
If the host wants the Mog program to be able to do I/O, it has to supply the functions that the Mog program will call. A core invariant is that a Mog program should never be able to crash the host program, corrupt its state, or consume more resources than the host allows.

This lets the host inspect the arguments to any potentially dangerous operation the Mog program attempts, since that check runs in the host. For example, a host agent could give a Mog program a function to run a bash command, then enforce its own session-level permissions on that command, even though the command was dynamically generated by a plugin written without prior knowledge of those permission settings.

(There are a couple of other tricks that PL people might find interesting. One is that the host can limit the execution time of the guest program. It does this using cooperative interrupt polling, i.e. the compiler inserts runtime checks that test whether the host has asked the guest to stop. This causes roughly a 10% drop in performance on extremely tight loops, which are the worst case. It could almost certainly be optimized.)

3. Self-Modification Without Restart: When I try to modify my OpenClaw from my phone, I have to restart the whole agent. Mog fixes this: an agent can compile and run new plugins without interrupting a session, which makes it dynamically responsive to user feedback (e.g., you tell it to always ask you before deleting a file, and without any interruption it compiles and loads the code to... actually do that).

Async support is built into the language, by adapting LLVM's coroutine lowering to our Rust port of the QBE compiler, which is what Mog uses for compilation. The Mog host library can be slotted into an async event loop (tested with Bun), so Mog async calls get scheduled seamlessly by the agent's event loop.
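The host-supplied-function model and the cooperative interrupt polling can be sketched in Rust. This is a hand-rolled illustration, not Mog's actual host API: `HostApi`, `run_bash`, the allow-list, and the simulated `guest_main` are all hypothetical names standing in for a compiled Mog plugin and its capability table.

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Hypothetical host-side capability table handed to a compiled plugin.
struct HostApi {
    // Flag that compiler-inserted checks would poll cooperatively.
    interrupt: AtomicBool,
    // Session-level allow-list the host enforces on any bash request.
    allowed_prefixes: Vec<&'static str>,
}

impl HostApi {
    // The host inspects the argument before doing anything dangerous;
    // the plugin never touches syscalls directly.
    fn run_bash(&self, cmd: &str) -> Result<(), String> {
        if self.allowed_prefixes.iter().any(|p| cmd.starts_with(p)) {
            // A real host would actually spawn the command here.
            Ok(())
        } else {
            Err(format!("permission denied: {cmd}"))
        }
    }
}

// Stand-in for a compiled Mog plugin: it can only call what the host supplies.
fn guest_main(host: &HostApi) -> Result<(), String> {
    for i in 0..1000 {
        // Cooperative interrupt poll the compiler would insert in loops.
        if host.interrupt.load(Ordering::Relaxed) {
            return Err("host requested stop".into());
        }
        if i == 0 {
            host.run_bash("ls -l")?; // allowed by the host's policy
        }
    }
    host.run_bash("rm -rf /") // blocked by the host's policy
}

fn main() {
    let host = HostApi {
        interrupt: AtomicBool::new(false),
        allowed_prefixes: vec!["ls", "cat"],
    };
    println!("{:?}", guest_main(&host));
}
```

The point of the shape, not the names: the policy check lives in host code, so it applies even to commands the plugin generated dynamically.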
Another trick is that the Mog program uses a stack inside the memory arena that the host provides for it to run in, rather than the system stack. There is a guard page between the stack and the heap. This design prevents stack overflows without runtime overhead.

Lots of work still needs to be done to make Mog a "batteries-included" experience like Python. Most of that work involves fleshing out a standard library with things like JSON, CSV, SQLite, and HTTP. One high-impact addition would be an `llm` library that lets the guest make LLM calls through the agent; it should support multiple models and token budgeting, so the host could prevent the plugin from burning too many tokens.

I suspect we'll also want to do more work to make the program lifecycle operations more ergonomic. And finally, there should be a more fully featured library for integrating a Mog host into an AI agent like OpenClaw or OpenAI's Codex CLI.
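A minimal sketch of that arena layout, assuming a 4 KiB page size. The names and numbers are illustrative, not Mog's actual runtime: a real host would additionally `mprotect` the guard page to `PROT_NONE` so a stack overflow faults instead of silently corrupting the heap.

```rust
// Illustrative arena layout: the guest stack grows down from the top of the
// host-provided arena, the heap grows up from the bottom, and one unmapped
// guard page sits between them.
const PAGE: usize = 4096;

struct ArenaLayout {
    heap_base: usize,  // heap allocations grow upward from here
    guard_page: usize, // host would mprotect(PROT_NONE) this page
    stack_top: usize,  // guest stack grows downward from here
}

fn layout(arena_base: usize, arena_len: usize, stack_len: usize) -> ArenaLayout {
    assert!(arena_len % PAGE == 0 && stack_len % PAGE == 0);
    assert!(stack_len + PAGE < arena_len);
    let stack_top = arena_base + arena_len;
    let stack_bottom = stack_top - stack_len;
    ArenaLayout {
        heap_base: arena_base,
        guard_page: stack_bottom - PAGE,
        stack_top,
    }
}

fn main() {
    let l = layout(0x10000, 16 * PAGE, 4 * PAGE);
    println!(
        "heap=0x{:x} guard=0x{:x} stack_top=0x{:x}",
        l.heap_base, l.guard_page, l.stack_top
    );
}
```

Because the fault is caught by page protection rather than by per-call checks, the guest pays nothing on the happy path.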
Show HN: The Mog Programming Language
A statically typed, compiled, embedded language (think statically typed Lua) designed to be written by LLMs; it resolves the security paradox in existing AI-agent security models and enables self-modification without restart for agents like OpenClaw.
Product Positioning & Context
AI Executive Synthesis
Mog addresses critical security and operational challenges in AI agent development, specifically for agents generating and executing their own code. Its core innovation is a statically typed, compiled, embedded language designed for LLM generation, featuring capability-based permissions and native code execution. This directly tackles the 'pwnage and chaos' resulting from agents having unfettered access or insecure script execution. The ability for agents to self-modify without restarting, leveraging async support and a controlled memory model, significantly enhances dynamic responsiveness and operational efficiency.

Market implications: Mog targets a nascent but rapidly expanding market of AI agent developers and enterprises deploying autonomous systems. It provides a foundational security and performance layer for agent self-extension, a critical enabler for advanced AI applications. The 'security paradox' it resolves is a major barrier to enterprise adoption of agentic workflows. Its success depends on developer adoption and proving its security guarantees in real-world, complex agent environments.
Community Voice & Feedback
This is a fascinating approach, solving the problem at the language level. The capability-based permission model is elegant.

A complementary angle we've been exploring with LucidShark (lucidshark.com) is attacking the same problem from the workflow layer rather than the language layer: instead of constraining what the LLM can write, you run SAST, SCA, and linting automatically after every generation step, before anything touches CI or production.

The nice thing about that approach is it works with existing languages today -- Python, TypeScript, Go, etc. -- and plugs directly into Claude Code or Cursor as a pre-commit gate. The downside is it's catching issues after generation vs. preventing them structurally like Mog aims to.

I suspect the long-term solution is both layers: safer languages for greenfield AI-native projects + robust static analysis for the 99% of existing codebases where you can't change the language.
Very cool!

The permission model is almost identical to Roc's - https://www.roc-lang.org/platforms - although Roc isn't designed for "syntax only an AI could love" (among many other differences between the two languages - but still, there are very few languages taking this approach to permissions).

If you're curious, I've talked about details of how Roc's permission model works in other places, most recently in this interview: https://youtu.be/gs7OLhdZJvk?si=wTFI7Ja85qdXJWiW
Coding agents gain a lot of power from being able to download specialized utility programs off the Internet, using apt-get or whatever. So it seems like running in a VM is going to be more popular?

A limited plugin API is interesting in some ways, but it has "rewrite it in Rust" energy. Maybe it's easier to flesh out a new library ecosystem using a coding agent, though?
I think the AI labs need to be the ones to build AI-specific languages, so they can include a huge corpus in the model training dataset and then do RL on it producing useful and correct programs in that language.

If Anthropic makes "claude-script", it'll outmog this language with massive RL-maxing. I hope your cortisol is ready for that.

If you want to try and mog Claude with moglang, I think you need to make a corpus of several terabytes of valid, useful "mog" programs, and wait for that to get included in the training dataset.
One nitpick I noticed:

> String Slicing
> You can extract a substring using bracket syntax with a range: s[start:end]. Both start and end are byte offsets. The slice includes start and excludes end.

Given that all strings are UTF-8, I note that there's not a great way to iterate over strings by _code point_. Using byte offsets is certainly more performant, but I could see this being a common request if you're expecting a lot of string manipulation to happen in these programs.

Other than that, this looks pretty cool. Unlike other commenters, I kinda like the lack of operator precedence. I wouldn't be surprised if it turns out not to be a huge problem, since LLMs generating code in this language will be pattern-matching on existing code, which will always have explicit parentheses.
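To make the byte-offset vs. code-point distinction concrete, here is a small illustration in Rust (not Mog -- the post doesn't show Mog syntax; Rust's string slicing is analogously byte-indexed, though how Mog handles a slice that lands mid-character isn't specified):

```rust
fn main() {
    // "café" is 5 bytes in UTF-8: 'é' encodes as two bytes (0xC3 0xA9).
    let s = "café";
    assert_eq!(s.len(), 5);

    // Byte-offset slicing on a char boundary works fine:
    assert_eq!(&s[0..3], "caf");

    // Iterating by code point gives 4 items, not 5:
    assert_eq!(s.chars().count(), 4);

    // char_indices pairs each code point with its starting byte offset,
    // showing where byte and code-point indices diverge:
    let offsets: Vec<usize> = s.char_indices().map(|(i, _)| i).collect();
    assert_eq!(offsets, vec![0, 1, 2, 3]); // 'é' starts at byte 3,
    // and the next boundary after it is byte 5, not 4.
    // In Rust, &s[0..4] would panic at runtime (not a char boundary).
    println!("ok");
}
```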
For me this is Gleam. Fairly small lang, type safe, compiled, NO NULLS (very important IMO), good FFI, code is readable, and... you get the BEAM!

Agents can pretty much iterate on their own.

The most important thing for me, at least for now (and IMO the foreseeable future), is being able to review and read the output code clearly. I am the bottleneck in the agent -> human loop, so optimizing for that by producing clear and readable code is a massive priority. Gleam eliminates a ton of errors automatically, so my reviews are focused mostly on business logic (I also need to explicitly call out redundant code often enough).

I could see an argument for full-on Erlang too, but I like the static typing.
> it's intended to minimize foot-guns to lower the error rate when generating Mog code. This is why Mog has no operator precedence: non-associative operations have to use parentheses, e.g. (a + b) * c.

Almost all the code LLMs have been trained on uses operator precedence, so no operator precedence seems like a massive foot-gun.
> When asking people to write code in a language, these restrictions could be onerous. But LLMs don't care, and the less expressivity you trust them with, the better.

But LLMs very much do care. They are measurably worse when writing code in languages with non-standard or nonexistent operator precedence. This is not surprising given how they learn programming.
> Compiled to native code for low-latency plugin execution -- no interpreter overhead, no JIT, no process startup cost.

If you're running the compiled code in-process, how is that not JIT? And isn't that higher-latency than interpreting? Tiered JIT (a la V8) solves exactly this problem.

Edit: Although the example programs show traditional AOT compile/execute steps, so "no process startup cost" is presumably a lie?
I like the looks of this, and the idea behind it, but TypeScript via Deno is an audited language with a good security model, a good type system, and sandboxing in an extremely well-hardened runtime. It's also a language that LLMs are exceptionally well trained on. What does Mog offer that's meaningfully superior in an agent context?

I see that Deno requires a subprocess, which introduces some overhead, and I might be naive to think so, but that doesn't seem like it would matter much when agent round-trip and inference time is way, way longer than any inefficiency a subprocess would introduce. (Edit: I realized in some cases the round-trip time may be negligible if the agent is local, but inference is still very slow.)

I admittedly do prefer the syntax here, but I'm asking these questions more from a point of pragmatism than idealism. I already use Deno because it's convenient, practical, and efficient rather than ideal.
Discovery Source
Hacker News. Aggregated via automated community intelligence tracking.