For years the industry has talked about persistent AI memory like it is coming someday. Meanwhile every chat ends and the model forgets. Every API call is stateless. Every agent re-learns its own project from scratch.
We built CLM at TUNC.AI because we got tired of waiting.
CLM is a long-term, verifiable memory standard for AI assistants. Not a product. A public format with a parser, a validator, and three reference implementations. Open spec. MIT license. Works across model families.
What you get
- Append-only handoff threads. Each session adds, never overwrites.
- Linear growth, constant-cost writes. Appending a session costs the same at any depth; total thread size grows linearly with session count.
- Parser-validated structure in Rust, Python, TypeScript.
- Audit lineage per handoff ritual. Every prior author's contribution is preserved verbatim.
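The spec defines the actual wire format; as a rough illustration of the idea behind the list above (append-only entries, verbatim preservation, tamper-evident lineage), here is a minimal sketch in Python. All names here (`HandoffThread`, `entry_digest`) are hypothetical, not the CLM API:

```python
import hashlib
import json


def entry_digest(entry: dict) -> str:
    """Stable digest of one handoff entry via canonical JSON."""
    canonical = json.dumps(entry, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()


class HandoffThread:
    """Append-only thread sketch: each session adds an entry, never overwrites.

    Each entry records the digest of the previous entry, so any later edit
    to an earlier author's text breaks the chain and is detectable.
    """

    def __init__(self):
        self.entries: list[dict] = []

    def append(self, author: str, body: str) -> dict:
        prev = entry_digest(self.entries[-1]) if self.entries else None
        entry = {"author": author, "body": body, "prev": prev}
        self.entries.append(entry)  # O(1) write regardless of thread depth
        return entry

    def verify(self) -> bool:
        """Re-walk the chain; True iff every prev-digest still matches."""
        prev = None
        for entry in self.entries:
            if entry["prev"] != prev:
                return False
            prev = entry_digest(entry)
        return True
```

Usage: append a couple of sessions, then mutate an earlier entry and watch verification fail.

```python
thread = HandoffThread()
thread.append("session-1", "set up repo")
thread.append("session-2", "added parser")
assert thread.verify()

thread.entries[0]["body"] = "rewritten history"
assert not thread.verify()  # tampering with a prior author is detectable
```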
What it isn't
- A token compression format.
- A cloud product.
- Anyone's proprietary memory layer.
The long-term verifiable memory standard for AI is here. Adopt it on your next agent project.