larksuite/cli
The official Lark/Feishu CLI tool, maintained by the larksuite team — built for humans and AI Agents. Covers core business domains including Messenger, Docs, Base, Sheets, Calendar, Mail, Tasks, Meetings, and more, with 200+ commands and 19 AI Agent Skills.
Product Positioning & Context
AI Executive Synthesis
Clarifying the architectural and strategic choices for integrating `lark-cli` into the AI agent ecosystem, specifically regarding its role as a "Skills" provider.
This issue questions the architectural choice of packaging `lark-cli` as a "Skills package" rather than an "MCP server," especially given that Lark offers no official Claude Code MCP server. The user seeks the rationale behind this decision, which points to broader confusion about the optimal integration strategy for AI agents. Market implication: clarity on integration paradigms is crucial for developers building AI agents, since ambiguity around "Skills" versus "MCP" can lead to suboptimal architectural choices or hesitation in adoption. Articulating the strategic advantages of the current packaging, or addressing the perceived gap, is essential for guiding developers and maximizing `lark-cli`'s utility within the AI agent ecosystem.
Active Developer Issues (GitHub)
Logged: Mar 30, 2026
Logged: Mar 28, 2026
Community Voice & Feedback
Our company forbids exporting any documents, so the approval request gets auto-rejected. Can this behavior be customized?
> One important advantage of a CLI is progressive context disclosure. While MCP, Skills, or typical HTTP calls often require sending a relatively complete context in each request, a CLI lets you incrementally reveal only what’s necessary at each step. This not only improves control over execution flow, but also significantly reduces token usage and overhead in LLM-driven workflows.

> Is the incremental disclosure feature you mentioned the same as the incremental disclosure feature of Skills?

Not exactly.

With MCP/Skills, every time the LLM connects, it still needs to reason about whether to call a tool and how to call it. That decision-making process itself introduces extra context and token overhead.

A CLI, on the other hand, doesn’t require that upfront reasoning loop. It exposes capabilities progressively — for example via --help or command-specific introspection — only when they’re actually needed.

So you can achieve a similar outcome, but with much tighter control over co...
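The trade-off described above can be sketched as a toy token-budget model: an MCP-style integration pays the cost of every tool schema at connect time, while a CLI-style agent pays only for the `--help` screens it actually visits. All numbers here are illustrative assumptions, not measurements of `lark-cli`.

```python
# Toy model of context cost: upfront tool-schema loading (MCP-style)
# vs. progressive disclosure via --help (CLI-style).
# All constants are illustrative assumptions, not lark-cli measurements.

SCHEMA_TOKENS_PER_COMMAND = 150   # assumed tokens to describe one tool/command
NUM_COMMANDS = 200                # lark-cli advertises 200+ commands
HELP_SCREEN_TOKENS = 300          # assumed tokens per --help screen

def upfront_context(num_commands: int = NUM_COMMANDS) -> int:
    """MCP-style: every command's schema enters the context at connect time."""
    return num_commands * SCHEMA_TOKENS_PER_COMMAND

def progressive_context(drill_down_steps: int = 3) -> int:
    """CLI-style: only the --help screens actually visited enter the context."""
    return drill_down_steps * HELP_SCREEN_TOKENS

print(upfront_context())      # full schema for all commands
print(progressive_context())  # a three-level --help drill-down
```

Even with generous assumptions, the progressive path stays one to two orders of magnitude cheaper per task, which is the "much tighter control" the commenter is pointing at.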
I'm curious how the developers tested this themselves. Is ByteDance internally really okay with a bot having permissions this broad?
+1. The default permissions are too broad and can't be modified. I'd like to be able to adjust the permissions myself, or reuse an existing bot's permissions.
> Following the configuration steps, a single authorization immediately sent an approval request straight to my leader.

This doesn't make sense. During `config init` I had already applied for and gotten the permissions approved; `auth` shouldn't modify permissions, it should only grant authorization.
Following the configuration steps, a single authorization immediately sent an approval request straight to my leader.
Related Early-Stage Discoveries
Discovery Source
GitHub Open Source, aggregated via automated community intelligence tracking.
Tech Stack Dependencies
No direct open-source NPM package mentions detected in the product documentation.
Media Tractions & Mentions
No mainstream media stories specifically mentioning this product name have been detected yet.
Deep Research & Science
No peer-reviewed scientific literature directly matching this product's architecture has been found.
Market Trends