New: AI Memory for AI Agents and AI Sessions
We just shipped one of the features I’ve been most excited about for the AI Sessions platform: persistent Memory, available to both automated AI Agents (D&R-driven sessions) and interactive User Sessions.
What it is
AI Memory is a per-agent key/value store for content that should outlive a single session. Where Skills capture how an agent works, Memory captures what it has learned — facts about your environment, prior decisions, ongoing investigations, anything the agent should be able to recall next time it runs.
Each agent owns one record, keyed by an agent identifier you pick. Inside that record, individual memories are addressed by filesystem-style names: inventory/subdomains.md, feedback/known-benign.md, cases/INC-123/timeline, runtime/last-seen-host, etc. Use a consistent layout (inventory/, feedback/, state/, cache/, notes/) and a session a week from now can list and skim what it already knows.
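For example, seeding that layout from the CLI could look like this (the triage-bot key and the contents are purely illustrative; the commands themselves are covered under Try it below):
# Illustrative: seed a couple of entries under the suggested layout
limacharlie ai-memory set --key triage-bot \
  --memory-name inventory/subdomains.md --content "api.example.com, docs.example.com"
limacharlie ai-memory set --key triage-bot \
  --memory-name feedback/known-benign.md --content "nightly backup on HOST-7 is expected"
# A later session can skim what the agent already knows
limacharlie ai-memory list --key triage-bot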
Why this matters
Without memory, every AI session starts from scratch. That has real costs:
- Cost — re-discovering the same asset inventory every day burns Claude tokens on work that produced the same answer 24 hours ago.
- Stability — the same underlying data can yield different reports day-over-day, depending on what the model stumbles across first.
- Continuity — operator feedback (“this is fine, stop alerting”) is lost the moment the session ends, unless someone bakes it into a D&R rule or lookup.
- Coordination — agents that run on a schedule can finally tell themselves “I covered X yesterday, focus on Y today.”
Partial-merge writes
Memory uses a server-side partial-merge model. Setting one entry doesn’t read or rewrite the rest of the record. Two parallel turns writing to disjoint keys never clobber each other. This makes Memory safe under concurrency and cheap for incremental note-taking (progress/step-1, progress/step-2 …).
set {"name": "content"}replaces just that entry. Other memories preserved.set {"name": null}drops just that entry. Other memories preserved.- Delete the whole record to wipe an agent in one call.
Try it
# Enumerate every agent that has memory stored
limacharlie ai-memory list-records
# List the memory entries on one agent
limacharlie ai-memory list --key triage-bot
# Read one memory entry
limacharlie ai-memory get --key triage-bot --memory-name notes/today
# Write one entry (others preserved)
limacharlie ai-memory set --key triage-bot \
  --memory-name notes/today --content "wrote the cli wrapper"
# Pipe content from a file
cat findings.md | limacharlie ai-memory set \
  --key triage-bot --memory-name cases/INC-123/timeline
# Drop one entry (others preserved)
limacharlie ai-memory delete --key triage-bot \
  --memory-name notes/today --confirm
A REST API on the ai_memory Hive and a Python SDK (AiMemory) are both available, with the same partial-merge semantics.
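As a rough sketch, reading an agent's record over REST might look like this; note that the path shape and bearer-JWT auth here are assumptions based on how other Hive record types are addressed, so check the docs for the authoritative endpoint:
# Assumed endpoint shape: /v1/hive/{hive}/{oid}/{key}/data with a JWT
curl -s \
  -H "Authorization: bearer $LC_JWT" \
  "https://api.limacharlie.io/v1/hive/ai_memory/$OID/triage-bot/data"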
Permissions: ai_memory.get, ai_memory.set, ai_memory.del on the API key.
Limits: 1024 memories per agent record, 256 chars per name, 10 MB total per record.
Full docs: AI Memory - LimaCharlie Documentation
A handful of memory-enabled agents are already in our lc-ai repo — the Exposure Monitor team is the flagship example: three agents (asset discovery, exposure scanner, risk analyst) coordinating through shared memory to deliver true diff-based daily reports. Worth a read if you want to see the pattern in action.
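If you just want the shape of that diff-based pattern without reading the repo, here is a hypothetical sketch (the exposure-monitor key and memory name are invented for illustration, and the discovery step is elided):
# Hypothetical: diff today's discovery against what the agent remembers
limacharlie ai-memory get --key exposure-monitor \
  --memory-name inventory/subdomains.md > previous.md
# ... run today's asset discovery into current.md ...
diff previous.md current.md > report.md   # report only what changed
cat current.md | limacharlie ai-memory set \
  --key exposure-monitor --memory-name inventory/subdomains.md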
Curious what you’d use Memory for — drop ideas in thread.