# [Feature] Streaming output for agent execution
## Summary
Support real-time streaming output during agent execution, so users can see LLM responses as they are generated.
## Motivation
Currently, `AgentRunner` uses `adapter.chat()`, which blocks until the full response is available. For long-running tasks, users have no visibility into what the agent is doing until it finishes. Streaming would enable:
- Real-time progress feedback in CLI or web UI
- Lower perceived latency
- Early termination if the agent goes off track
## Proposed Approach
- Add a `stream` option to `AgentRunner.run()`
- Use `adapter.stream()` instead of `adapter.chat()` when the option is enabled
- Emit events via a callback or an `AsyncIterable` so consumers can integrate (see the sketch after this list)
- Handle tool calls within the streaming context
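
A minimal sketch of what the `AsyncIterable` shape could look like. Only `AgentRunner`, `run()`, and `adapter.stream()` come from this issue; the event types (`TextDelta`, `ToolCallStarted`, `RunFinished`), the `run_stream()` method name, and the assumption that `adapter.stream()` yields text chunks are all hypothetical:

```python
# Sketch only: event types and run_stream() are hypothetical names,
# not part of the current AgentRunner API.
from dataclasses import dataclass
from typing import AsyncIterator


@dataclass
class TextDelta:
    text: str          # incremental chunk of the LLM response


@dataclass
class ToolCallStarted:
    name: str          # tool the model requested mid-run


@dataclass
class RunFinished:
    result: str        # final accumulated output


AgentEvent = TextDelta | ToolCallStarted | RunFinished


class AgentRunner:
    def __init__(self, adapter):
        self.adapter = adapter

    async def run_stream(self, prompt: str) -> AsyncIterator[AgentEvent]:
        """Yield events as the model produces them, instead of blocking on chat()."""
        parts: list[str] = []
        # Assumption: adapter.stream() is an async iterator of str chunks.
        async for delta in self.adapter.stream(prompt):
            parts.append(delta)
            yield TextDelta(delta)
        yield RunFinished("".join(parts))
```

A consumer (CLI or web UI) would then drive the loop itself, which is what gives the real-time feedback described in the motivation:

```python
async for event in runner.run_stream("summarize the report"):
    if isinstance(event, TextDelta):
        print(event.text, end="", flush=True)  # render chunks as they arrive
```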
## Open Questions
- Should streaming be opt-in per agent or per team?
- How to handle tool execution interleaved with streaming output? (One possible shape is sketched below.)
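
One possible (not settled) answer to the interleaving question, reusing the hypothetical event types from the sketch above: pause the text stream when the model emits a tool call, execute the tool, feed the result back into the conversation, and resume streaming. `stream_step()`, `ToolCall`, and `execute_tool()` are likewise hypothetical names:

```python
# Inside AgentRunner, continuing the sketch above. One possible
# interleaving pattern: each stream_step() covers a single model turn.
async def run_stream(self, prompt: str) -> AsyncIterator[AgentEvent]:
    messages = [{"role": "user", "content": prompt}]
    while True:
        tool_call = None
        # Assumption: stream_step() yields str chunks or a ToolCall object.
        async for chunk in self.adapter.stream_step(messages):
            if isinstance(chunk, ToolCall):
                tool_call = chunk                # pause text output here
                yield ToolCallStarted(chunk.name)
            else:
                yield TextDelta(chunk)           # ordinary text delta
        if tool_call is None:
            break                                # no tool call -> run is done
        result = await self.execute_tool(tool_call)
        messages.append({"role": "tool", "content": result})
```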