Gemini Executive Synthesis
Rudel – Claude Code Session Analytics
Technical Positioning
An analytics layer for Claude Code sessions, providing visibility into efficiency, abandonment, and improvement over time, offered as a free and fully open-source tool.
SaaS Insight & Market Implications
The emergence of Rudel highlights a critical and rapidly expanding blind spot in the modern developer workflow: the lack of observability and analytics for AI agent interactions. As tools like Claude Code become integral to daily coding tasks, developers and engineering managers are left without metrics to assess efficiency, identify bottlenecks, or quantify the true ROI of these assistants. Rudel addresses this by providing an "analytics layer" specifically for AI code sessions, surfacing insights such as surprisingly low skill utilization (4%), high abandonment rates (26%, most within the first 60 seconds), and significant performance variation across task types.

This product signals a nascent but important market trend: "AI workflow observability." Just as traditional software required APM and logging to understand system performance, the new paradigm of human-AI collaboration demands specialized tools to measure agent behavior, user engagement, and overall productivity. Developers care deeply about this because their time is valuable, and they need to know whether their AI tools are genuinely enhancing, not hindering, their output. The ability to identify "error cascade patterns" that predict abandonment, or to establish a meaningful benchmark for "good" agentic session performance, offers tangible value for optimizing personal and team-wide AI adoption.

Rudel's open-source nature further underscores the community-driven need for transparent, measurable AI integration, paving the way for a new category of tools focused on maximizing the effectiveness and efficiency of AI in software development. The move from qualitative assessment to data-driven optimization of AI interactions is a natural and necessary evolution in the enterprise adoption of generative AI.
Proprietary Technical Taxonomy
Claude Code sessions
analytics layer
agentic session performance
Error cascade patterns
tokens
interactions
Raw Developer Origin & Technical Request
Hacker News
Mar 13, 2026
Show HN: Rudel – Claude Code Session Analytics
We built rudel.ai after realizing we had no visibility into our own Claude Code sessions. We were using it daily but had no idea which sessions were efficient, why some got abandoned, or whether we were actually improving over time. So we built an analytics layer for it.

After connecting our own sessions, we ended up with a dataset of 1,573 real Claude Code sessions, 15M+ tokens, and 270K+ interactions. Some things we found that surprised us:
- Skills were only being used in 4% of our sessions
- 26% of sessions are abandoned, most within the first 60 seconds
- Session success rate varies significantly by task type (documentation scores highest, refactoring lowest)
- Error cascade patterns appear in the first 2 minutes and predict abandonment with reasonable accuracy
- There is no meaningful benchmark for 'good' agentic session performance; we are building one.

The tool is free to use and fully open source. Happy to answer questions about the data or how we built it.
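The headline metrics above (skill utilization, abandonment rate, share of abandonments in the first 60 seconds) could be derived from session logs along these lines. This is a minimal editor's sketch: the `Session` fields and the `summarize` helper are hypothetical illustrations, not Rudel's actual schema or API.

```python
from dataclasses import dataclass

# Hypothetical session record; field names are illustrative only,
# not Rudel's real data model.
@dataclass
class Session:
    duration_s: float   # wall-clock length of the session in seconds
    used_skill: bool    # whether any skill was invoked during the session
    abandoned: bool     # user quit without completing the task

def summarize(sessions: list[Session]) -> dict:
    """Compute aggregate metrics over a list of sessions."""
    n = len(sessions)
    abandoned = [s for s in sessions if s.abandoned]
    return {
        # fraction of sessions in which at least one skill was used
        "skill_utilization": sum(s.used_skill for s in sessions) / n,
        # fraction of sessions the user abandoned
        "abandonment_rate": len(abandoned) / n,
        # share of abandoned sessions that ended within 60 seconds
        "abandoned_under_60s": (
            sum(s.duration_s < 60 for s in abandoned) / len(abandoned)
            if abandoned else 0.0
        ),
    }

# Toy data to exercise the metrics.
sessions = [
    Session(duration_s=45, used_skill=False, abandoned=True),
    Session(duration_s=600, used_skill=True, abandoned=False),
    Session(duration_s=30, used_skill=False, abandoned=True),
    Session(duration_s=1200, used_skill=False, abandoned=False),
]
stats = summarize(sessions)
```

On the toy data this yields a 50% abandonment rate with all abandonments under 60 seconds and 25% skill utilization; on real logs, the same aggregation over per-session event streams would produce the kind of figures the post reports.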
Developer Debate & Comments