Application of MemPalace's AAAK compression for inter-LLM communication to save tokens.
Raw Developer Origin & Technical Request
GitHub Issue
Apr 7, 2026
I'm not a technical guy, so this could be a very dumb question, but I was thinking: right now there's a "token issue" going on, especially with Claude.
Would it be possible to use AAAK for talking and receiving tokens between LLMs? Similar to what the RTK repo does with some commands to save tokens.
Some way to grab the sentences before they reach Claude and send them in AAAK, then receive the replies in AAAK as well and translate them back via some translation mechanism, or at the very least send them.
---
Of course, I asked my AI how AAAK works and whether this could be possible, but it told me this language is great for facts, not intent, so I'm not sure this would work.
But if it could work, and it saves that percentage of tokens, you may have solved a very unique problem for everyone.
Cheers! Big fan!
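The pipeline the issue describes is a compress-send-decompress wrapper around an LLM call. AAAK's actual encoding is not documented here, so the sketch below stands in a toy dictionary-based shortener for it; `compress`, `decompress`, `call_llm`, and the `SHORTHAND` table are all hypothetical names for illustration, not MemPalace APIs.

```python
# Hypothetical sketch of the proposed flow: shorten the prompt before it
# reaches the model, expand the reply on the way back. A trivial phrase
# dictionary stands in for AAAK, whose real scheme is not public.

SHORTHAND = {
    "please summarize": "smz",
    "the following text": "txt",
}

def compress(message: str) -> str:
    """Replace known phrases with short codes to spend fewer tokens."""
    for phrase, code in SHORTHAND.items():
        message = message.replace(phrase, code)
    return message

def decompress(message: str) -> str:
    """Expand short codes back into full phrases for the reader."""
    for phrase, code in SHORTHAND.items():
        message = message.replace(code, phrase)
    return message

def call_llm(compressed_prompt: str) -> str:
    # Stand-in for a real API call; a real model would need to be
    # taught the shorthand (e.g. via a system prompt) to reply in it.
    return "smz: txt is a greeting."

def ask(prompt: str) -> str:
    reply = call_llm(compress(prompt))  # send the compressed form
    return decompress(reply)            # translate the reply back

print(ask("please summarize the following text: hello"))
```

This also makes the issue's caveat concrete: a fixed phrase table compresses recurring factual boilerplate well, but free-form intent rarely matches the dictionary, which is consistent with the "great for facts, not intent" observation quoted above.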
Developer Debate & Comments
No active discussions extracted for this entry yet.
Adjacent Repository Pain Points
Other highly discussed features and pain points extracted from milla-jovovich/mempalace.
Engagement Signals
Cross-Market Term Frequency
Quantifies cross-market adoption of foundational terms such as "AAAK" and "token issue" by tracking their occurrence frequency across active SaaS architectures and enterprise developer discussions.
SaaS Metrics