If you've built an AI agent or assistant, you've hit this wall: the moment the session ends, it forgets everything.
The user comes back the next day. The agent has no idea who they are. No memory of their preferences, their history, what they were working on. The user has to re-explain themselves from scratch. Every. Single. Time.
This isn't a model problem — it's an infrastructure problem. Models don't have long-term memory. They have context windows. When the window closes, everything in it disappears.
## What we built
AmPN is a hosted memory store for AI agents. Your agent stores memories via our API. When a new session starts, it retrieves the relevant context with semantic search — so it picks up exactly where it left off.
```python
from ampn import MemoryClient

client = MemoryClient(api_key='your_key')

# Store a memory
client.store(
    user_id='alice',
    content='User prefers concise explanations and works in Python'
)

# Retrieve relevant context
results = client.search(
    user_id='alice',
    query='what does the user prefer?'
)
# Returns: ['User prefers concise explanations and works in Python']
```
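In practice, the retrieved memories get injected into the model's context when a new session starts. Here's a minimal sketch of that wiring — the `build_system_prompt` helper and the prompt template are our own illustration, not part of the SDK:

```python
def build_system_prompt(memories):
    """Prepend stored memories so the model opens the session
    already knowing the user's context."""
    if not memories:
        return "You are a helpful assistant."
    bullet_list = "\n".join(f"- {m}" for m in memories)
    return (
        "You are a helpful assistant.\n"
        "Known facts about this user:\n" + bullet_list
    )

# Feed the search results from above into the session's system prompt
memories = ["User prefers concise explanations and works in Python"]
print(build_system_prompt(memories))
```

From there it's one string concatenation away from whatever chat-completion API you're calling.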
## The tech
- Storage: PostgreSQL for structured data, Qdrant for vector embeddings
- Search: Sentence embeddings (all-MiniLM-L6-v2) — finds semantically similar memories, not just keyword matches
- API: FastAPI, fully documented
- SDKs: Python (`pip install ampn-memory`) and Node.js (`npm install @ampn/memory`)
## Free tier
Free tier is open now at ampnup.com. 1,000 memory operations and 10,000 reads per month — enough to build and test.
Would love to hear how you're handling memory in your agent projects today.