Product Positioning & Context
AI Executive Synthesis
like Protocol Buffer but better
Skir targets a fundamental developer pain point: data serialization across diverse language environments. Protocol Buffers are established, indicating a mature market with entrenched solutions. Skir's 'better' claim, without specific technical differentiators in the provided text beyond a single YML config, requires validation. The focus on 'teams running mixed-language stacks' highlights a specific enterprise segment facing interoperability challenges. Market implication: Skir aims to disrupt a foundational infrastructure component, a high-value but difficult proposition given the stability requirements for such tools. Success hinges on demonstrating tangible performance, ease-of-use, or feature advantages over existing, widely adopted standards.
Why I built Skir: https://medium.com/@gepheum/i-spent-15-years-with-protobuf-t...

Quick start: `npx skir init`. All the config lives in one YML file.

Website: https://skir.build
GitHub: https://github.com/gepheum/skir

Would love feedback, especially from teams running mixed-language stacks.
Community Voice & Feedback
This seems like a Chesterton's fence fail. Protobuf solved serialization with schema evolution and back/forward compatibility. Skir seems to have great devex for the codegen part, but that's the least interesting aspect of protobufs. I don't see how the serialization this proposes fixes that without a numerical-tagging equivalent.
> For optional types, 0 is decoded as the default value of the underlying type (e.g. string? decodes 0 as "", not null).

In the "dense JSON" format, isn't representing removed/absent struct fields with `0` rather than `null` backwards-incompatible? If you remove or are unaware of an `int32?` field, old consumers will suddenly think the value is present as a "default" value rather than absent.
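A minimal sketch of the concern above, assuming a decoder that follows the quoted rule (`decodeOptionalString` is my own illustrative function, not Skir's generated code):

```typescript
// Sketch of the quoted rule: for an optional string in dense JSON,
// a wire value of 0 decodes to the underlying default "", not null.
function decodeOptionalString(wire: unknown): string | null {
  if (wire === 0) return "";        // the docs' rule: default, not null
  if (wire === null) return null;   // explicit absence
  return String(wire);
}

// A newer producer drops the field and writes 0 for "absent"; an old
// consumer now sees a present-looking default instead of absence.
const seenByOldConsumer = decodeOptionalString(0);
console.log(seenByOldConsumer === ""); // true: absence silently became ""
```

Under this rule an old consumer cannot distinguish "field removed upstream" from "field present with an empty value", which is exactly the compatibility hazard the comment describes.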
Did you look at other formats like Avro, Ion, etc.? Some feedback:

1. Dense JSON: interesting idea. You could also just keep the compact binary if you tag each payload with a schema id (see Avro). This also allows a generic reader to decode any binary payload by reading the schema and then interpreting the bytes, which is really useful. A secondary benefit is that you never misinterpret a payload. I have seen bugs where protobufs were misinterpreted, since there is no connection handshake and interpretation is akin to a 'cast'.

2. Compatibility checks: +100, there's no reason to allow breaking changes by default.

3. Adding fields to a type: should you have to update all call sites? I'm not so sure this is the right default. If I add a field to a core type used by 10 services, this requires rebuilding and deploying all of them.

4. Enums look great. What about backcompat when adding new enum fields? Or when you sometimes need to 'upgrade' an atomic to an enum?
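The Avro-style schema-id idea in point 1 can be sketched as follows (the envelope and registry here are illustrative, not any real Avro or Skir API):

```typescript
// Illustrative Avro-style envelope: every binary payload carries the id
// of the schema it was written with, so a generic reader can look the
// schema up before interpreting a single byte.
interface Envelope {
  schemaId: number;
  payload: Uint8Array;
}

const schemaRegistry = new Map<number, string>([
  [42, '{"type":"record","name":"User","fields":[{"name":"id","type":"int"}]}'],
]);

function resolveSchema(env: Envelope): string {
  const schema = schemaRegistry.get(env.schemaId);
  if (schema === undefined) {
    // Unlike a bare protobuf payload, a mismatch here is a loud error,
    // not a silent "cast" to the wrong message type.
    throw new Error(`unknown schema id: ${env.schemaId}`);
  }
  return schema;
}

const env: Envelope = { schemaId: 42, payload: new Uint8Array([0x02]) };
console.log(resolveSchema(env).includes('"name":"User"')); // true
```

The point is that the reader resolves the schema before decoding, so a payload can never be interpreted under the wrong type by accident.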
Like this but zero copy, easy migration/versioning, Rust and WASM support.
That "compact JSON" format reminds me of the special protobuf JSON format that Google uses in their APIs, which has very little public documentation. Does anyone happen to know why Google uses that? And to OP: were you inspired by that format?
I spent some time in the actual compiler source. There's real work here, genuinely good ideas.

The best thing Skir does is strict generated constructors. You add a field, and every construction site lights up. Protobuf's "silently default everything" model has caused production incidents at real companies. This is a legitimately better default.

Dense JSON is interesting, but the docs gloss over the tradeoff: your serialized data is [3, 4, "P"]. If you ever lose your schema, or a human needs to read a payload in a log, you're staring at unlabeled arrays. Protobuf binary has the same problem, but nobody markets binary as "easy to inspect with standard tools."
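The tradeoff described above is easy to see side by side (the field names are my own guess at what [3, 4, "P"] might label):

```typescript
// The same record serialized two ways. The "readable" form carries its
// own field names; the "dense" form relies entirely on the schema to
// give positions meaning.
const readable = { x: 3, y: 4, label: "P" };
const dense = [3, 4, "P"];

console.log(JSON.stringify(readable)); // {"x":3,"y":4,"label":"P"}
console.log(JSON.stringify(dense));    // [3,4,"P"] — opaque in a log
```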
The "serialize now, deserialize in 100 years" claim has a real asterisk. Compatibility checking requires you to opt into stable record IDs and maintain snapshots. If you skip that (and the docs' own examples often do), the CLI literally warns you: "breaking changes cannot be detected." So it's less "built-in safety" and more "safety available if you follow the discipline." Which is... also what Protobuf offers.

The Rust-style enum unification is genuinely cleaner than Protobuf's enum/oneof split. No notes there; that's just better language design.

Minor thing that bothered me disproportionately: the constant syntax in the docs (x = 600) doesn't match what the parser actually accepts (x: 600).

The weirdest thing that bugged the heck out of me was the tagline, "like protos but better". That's doing the project no favors. I think this would land better if it were positioned as "Protobuf, but fresh" rather than "Protobuf, but better." The interesting conversation is which opinions are right, not whether one tool is universally superior.

Quite frankly, I don't use Protobuf because it seems like an unapproachable monolith, and I'm not at a FAANG anymore, just a solo dev. No one's gonna complain if I don't. But I do love the idea of something simpler that's easy to wrap my mind around.

That's why "but fresh" hits nice to me, and I have a feeling it might be more appealing than you'd think. For example, it's hard to believe a 2-month-old project is strictly better than whatever mess and history Protobuf has gone through, with tons of engineers paid to use and work on it. It is easy to believe it covers 99% of what Protobuf does already, and any crazy edge cases that pop up (they always do, eventually :) will be easy to understand and fix.
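The enum-unification point can be approximated with a TypeScript discriminated union (my own sketch of the idea, not Skir's actual generated code):

```typescript
// Protobuf forces a split: plain enums for payload-less choices, oneof
// for choices that carry data. A Rust-style enum unifies both, which a
// discriminated union approximates:
type Status =
  | { kind: "active" }                    // payload-less variant
  | { kind: "suspended"; until: string }  // variant with data
  | { kind: "banned"; reason: string };

function describe(s: Status): string {
  switch (s.kind) {
    case "active":
      return "active";
    case "suspended":
      return `suspended until ${s.until}`;
    case "banned":
      return `banned: ${s.reason}`;
  }
}

console.log(describe({ kind: "suspended", until: "2026-01-01" }));
// "suspended until 2026-01-01"
```

With one construct, adding a payload to a previously bare variant is a local schema change instead of a migration from enum to oneof.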
https://capnproto.org/ has been my go-to since forever. Made by the protobuf inventor.
> Skir is a universal language for representing data types, constants, and RPC interfaces. Define your schema once in a .skir file and generate idiomatic, type-safe code in TypeScript, Python, Java, C++, and more.

Maybe I'm missing some additional features, but that's exactly what https://buf.build/plugins/typescript does for Protobuf already, with the advantage that you can just keep Protobuf and all the battle-hardened tooling that comes with it.
I would recommend exploring OpenRPC for those who have not yet seen it. It brings protocol-buffer-like definitions (components), RPC definitions and centralised error definitions.
Related Early-Stage Discoveries
Discovery Source
Hacker News. Aggregated via automated community intelligence tracking.
Tech Stack Dependencies
No direct open-source NPM package mentions detected in the product documentation.
Media Tractions & Mentions
No mainstream media stories specifically mentioning this product name have been intercepted yet.
Deep Research & Science
No direct peer-reviewed scientific literature matched with this product's architecture.
Market Trends
This is a big one I wish proto had in the first place. The entire idea of a proto registry feels reactive to me when, ideally, you want to pull in a versioned shared file to import that is verified by the compiler long before the server or client verifies the payload schema.

Schema validation and compatibility checks in CI: again, a big one, and critical for catching issues early.

Enums done right... no further comment required.

I think with some more attention to detail (e.g. hammering out the gaps some other comments have identified) and more language support (e.g. Rust, Go, C#), this can actually work out over time.

Here is an idea to contemplate as a side gig with your favorite AI assistant: a tool to convert proto to Skir, or at least as much of it as possible. As someone who has had to maintain large and complex proto files, a lot of proto-specific pain points are addressed here.

The only concern I have is timing. Ten years ago this would have been a smash hit. These days we have Thrift and similar, meaning the bar is definitely higher. That's not necessarily bad, but one needs to be mindful about differentiation from the existing proto alternatives.

I hope this project gains trajectory and community, especially from the frustrated proto folks.
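The CI-time compatibility check praised above can be sketched in miniature (a toy diff over field maps, purely illustrative, not the actual Skir CLI logic):

```typescript
// Toy schema-compatibility check: diff a proposed schema against a
// committed snapshot and report breaking changes before anything ships.
type Schema = Record<string, string>; // field name -> wire type

function breakingChanges(snapshot: Schema, proposed: Schema): string[] {
  const problems: string[] = [];
  for (const [field, type] of Object.entries(snapshot)) {
    if (!(field in proposed)) {
      problems.push(`removed field: ${field}`);
    } else if (proposed[field] !== type) {
      problems.push(`retyped field ${field}: ${type} -> ${proposed[field]}`);
    }
  }
  return problems; // empty array means the change is backward compatible
}

const snapshot: Schema = { id: "int32", name: "string" };
const proposed: Schema = { id: "int64", name: "string" };
console.log(breakingChanges(snapshot, proposed));
// ["retyped field id: int32 -> int64"]
```

Running a check like this against a versioned snapshot in CI is what lets the compiler, rather than a runtime registry, catch incompatible schema changes.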