
Product Hunt traceAI

Open-source LLM tracing that speaks GenAI, not HTTP.

225
Traction Score
31
Discussions
Apr 1, 2026
Launch Date

Product Positioning & Context

traceAI is OTel-native LLM tracing that actually works with your existing observability stack.
✓ Captures prompts, completions, tokens, retrievals, and agent decisions
✓ Follows the OpenTelemetry GenAI semantic conventions correctly
✓ Routes to any OTel backend - Datadog, Grafana, Jaeger, anywhere
✓ Python, TypeScript, Java, and C# with full parity
✓ 35+ frameworks: OpenAI, Anthropic, LangChain, CrewAI, DSPy, and more
✓ Two lines of code to instrument your entire app
No new vendor. No new dashboard. Open source (MIT).
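The "speaks GenAI, not HTTP" pitch refers to the OpenTelemetry GenAI semantic conventions: an LLM call is recorded with model-aware attributes rather than as a generic HTTP client span. A minimal stdlib sketch of what such span attributes look like (the `gen_ai.*` keys follow the published conventions; the helper function and values are illustrative, not traceAI's actual output):

```python
# Illustrative attributes for one chat-completion span, following the
# OpenTelemetry GenAI semantic conventions (gen_ai.* namespace).
def llm_call_attributes(model, temperature, input_tokens, output_tokens):
    return {
        "gen_ai.operation.name": "chat",
        "gen_ai.request.model": model,
        "gen_ai.request.temperature": temperature,
        "gen_ai.usage.input_tokens": input_tokens,
        "gen_ai.usage.output_tokens": output_tokens,
    }

attrs = llm_call_attributes("claude-sonnet", 0.7, 812, 164)

# Any OTel backend can filter or aggregate on these keys directly
# (e.g. total tokens per model) without parsing HTTP request bodies.
total_tokens = attrs["gen_ai.usage.input_tokens"] + attrs["gen_ai.usage.output_tokens"]
print(total_tokens)  # prints 976
```

Because the keys are standardized, a Grafana or Datadog query written against them works the same regardless of which instrumented framework produced the span.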
Open Source · Developer Tools · Artificial Intelligence

Community Voice & Feedback

[Redacted] • Apr 1, 2026
Since this is fully OpenTelemetry-native, I assume it should work seamlessly with backends like SigNoz as well? If yes, I might try it there too; seems like a cool tool.
[Redacted] • Apr 1, 2026
Open-source LLM tracing is exactly what was missing. I run Claude API calls in a Celery worker: two calls per job, one at temperature=0 (deterministic analysis), one at temperature=0.7 (generative rewrites). Right now I log both manually with structlog, but correlating a specific trace across the two calls when something fails in production is still painful. Does traceAI handle multi-step pipelines where the same job triggers two separate LLM calls with different parameters?
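The scenario in this comment is what OTel-style context propagation is designed for: both LLM calls inherit the trace context of the job that spawned them. A hypothetical stdlib sketch of the mechanism (simplified span dicts, not traceAI's real data model), assuming traceAI parents LLM spans under whatever span is current:

```python
import contextvars
import uuid

# Current active span, propagated implicitly like OTel's context.
_current = contextvars.ContextVar("current_span", default=None)

def start_span(name, **attrs):
    """Create a span; inherit trace_id and parent from the active span."""
    parent = _current.get()
    return {
        "name": name,
        "trace_id": parent["trace_id"] if parent else uuid.uuid4().hex,
        "span_id": uuid.uuid4().hex[:16],
        "parent_id": parent["span_id"] if parent else None,
        "attributes": attrs,
    }

# One Celery job, two LLM calls with different parameters.
job = start_span("celery.job")
token = _current.set(job)
analysis = start_span("llm.chat", temperature=0.0)  # deterministic pass
rewrite = start_span("llm.chat", temperature=0.7)   # generative pass
_current.reset(token)

# Both calls share the job's trace_id, so a backend groups them
# into one trace even though their parameters differ.
print(analysis["trace_id"] == rewrite["trace_id"] == job["trace_id"])  # True
```

In a real setup the backend would then show both `llm.chat` spans, each with its own `gen_ai.request.temperature`, as children of the job span, which is exactly the correlation the manual structlog approach makes painful.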
[Redacted] • Apr 1, 2026
The OTel-native approach is the right call here. Most LLM tracing tools force you into a new dashboard and a new vendor relationship. The fact that this routes to Datadog, Grafana, and Jaeger means teams can use what they already have instead of adding yet another pane of glass to monitor. Curious about one thing: how does traceAI handle tracing across multi-agent workflows where one agent calls another? Do the traces compose into a single parent span, or do they stay isolated per agent? Congrats on the launch.
[Redacted] • Apr 1, 2026
Much needed! Since you're positioning traceAI as a semantic layer over OpenTelemetry, do you see it becoming a standard like OTel itself, or staying a developer-focused tool?
[Redacted] • Apr 1, 2026
Hey traceAI team, great product. I was able to get started in a single day by giving Claude your documentation. We use this with our internal Grafana server, so it was a small setup, but loving it! Thanks!
[Redacted] • Apr 1, 2026
The OTel-native approach is the right call imo. Every time I've tried an LLM observability tool it wants me to install yet another dashboard, and I'm already drowning in Grafana tabs lol. Two lines of code to instrument is bold. Does it handle multi-step agent chains well? Like if I have a LangChain agent that calls tools that call other models, does the trace show the full tree or does it flatten everything?
[Redacted] • Apr 1, 2026
Two lines is impressive, but I'm curious: how does it handle agent decision tracking when you have nested tool calls 3-4 levels deep? I'm running a bunch of AI agents for project management workflows, and the traces get messy fast. The GenAI semantic conventions piece is what's interesting here - most OTel solutions just treat LLM calls as HTTP, and you lose all the context about what the model was actually doing.
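The "full tree vs. flattened" question in these comments comes down to whether each nested call records its parent span. A sketch of the difference (hypothetical span dicts linked by `parent` ids; not traceAI's real data model), assuming standard OTel parent/child relationships:

```python
# A 4-level agent chain: agent -> tool -> model -> tool.
spans = [
    {"id": "a", "parent": None, "name": "agent.run"},
    {"id": "b", "parent": "a",  "name": "tool.search"},
    {"id": "c", "parent": "b",  "name": "llm.chat"},
    {"id": "d", "parent": "c",  "name": "tool.calculator"},
]

def render(spans, parent=None, depth=0):
    """Render the span list as an indented tree via parent links."""
    lines = []
    for s in spans:
        if s["parent"] == parent:
            lines.append("  " * depth + s["name"])
            lines.extend(render(spans, s["id"], depth + 1))
    return lines

tree = render(spans)
print("\n".join(tree))
# agent.run
#   tool.search
#     llm.chat
#       tool.calculator
```

With parent links preserved, a backend's trace view shows the nesting at any depth; a tool that drops them would render the same four spans as a flat, uncorrelated list.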
[Redacted] • Apr 1, 2026
How does traceAI handle long-running tasks or non-standard loops? Does it capture any reasoning steps as well?
[Redacted] • Apr 1, 2026
Really enjoyed building this solution for AI pros. It gives you a clear look at how your AI agents are performing, without any vendor lock-in.
[Redacted] • Mar 26, 2026
Hey Product Hunt! 👋 I'm Nikhil from Future AGI, and I'm excited to share traceAI with you today.

The Problem We're Solving

If you're building with LLMs, you know the pain: your agent made 34 API calls, burned through your token budget, and returned the wrong answer. You have no idea why.

Existing LLM tracing tools force you into a new vendor dashboard. But most teams already have observability infrastructure - Datadog, Grafana, Jaeger. Why add another?

OpenTelemetry is the industry standard for application observability, but it was designed before AI existed. It understands HTTP latency. It has no concept of prompts, tokens, or reasoning chains.

What traceAI Does

traceAI is the proper GenAI semantic layer on top of OpenTelemetry. It captures everything that matters in your AI application:
- Full prompts and completions
- Token usage per call
- Model parameters and settings
- RAG retrieval steps and sources
- Agent decisions and tool executions
- Errors with full context
- Latency at every layer

And sends it to whatever observability backend you already use.

Two lines of code:

    from traceai import trace_ai
    trace_ai.init()

Your entire GenAI app is now traced automatically.

Works with everything:
- Languages: Python, TypeScript, Java, C# (with full parity)
- Frameworks: OpenAI, Anthropic, LangChain, LlamaIndex, CrewAI, DSPy, Bedrock, Vertex AI, MCP, Vercel AI SDK, and 35+ more
- Backends: Datadog, Grafana, Jaeger, or any OpenTelemetry-compatible tool
- Actually follows the GenAI semantic conventions. Not approximately. Correctly. So your traces are readable in any OTel backend without custom dashboards or parsing.
- Zero lock-in. Your data goes where you want it. Switch backends anytime. We don't even collect your traces.
- Open source. Forever. MIT licensed. Community-owned. We're not building a walled garden.

Who Should Use This

- AI engineers debugging complex LLM pipelines
- Platform teams who refuse to adopt another vendor
- Anyone already running OTel who wants AI traces alongside application telemetry
- Teams building agentic systems who need production-grade observability

What's Next

We're actively working on:
- Go language support
- Expanded framework coverage

Try It Now

⭐ GitHub: https://shorturl.at/gKG7E
📖 Docs: https://shorturl.at/AlyjC
💬 Discord: https://shorturl.at/v4llu

We'd love your feedback! What observability challenges are you facing with your AI applications?
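The "routes to any OTel backend" claim rests on the standard OTLP exporter settings rather than anything traceAI-specific. Assuming traceAI emits ordinary OTel spans, pointing it at an existing collector would look like the usual environment-variable configuration (the endpoint and service name below are placeholders):

```python
import os

# Standard OpenTelemetry exporter knobs: any OTLP-compatible backend
# (Datadog Agent, Grafana Tempo, Jaeger, SigNoz) can receive the traces.
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "http://localhost:4317"  # placeholder
os.environ["OTEL_SERVICE_NAME"] = "my-genai-app"                     # placeholder

print(os.environ["OTEL_SERVICE_NAME"])  # prints my-genai-app
```

Because these variables are part of the OTel SDK specification, switching backends later means changing the endpoint, not re-instrumenting the application.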


Discovery Source

Product Hunt

Aggregated via automated community intelligence tracking.

Tech Stack Dependencies

No direct open-source NPM package mentions detected in the product documentation.

Media Tractions & Mentions

No mainstream media stories specifically mentioning this product name have been intercepted yet.

Deep Research & Science

No direct peer-reviewed scientific literature matched with this product's architecture.