Mike W

I used two AIs and a persistent memory layer to run my outreach autonomously

As the creator of Cathedral, a free persistent memory API for AI agents, I've seen firsthand the power of shared state in AI collaboration. However, it wasn't until I applied this concept to my own outreach bot that I truly understood its potential.

The irony is that Cathedral's own outreach bot, which I'd built to showcase the platform's capabilities, had no memory of its own. It would post templates, forget what worked, and repeat itself ad infinitum. It was a bot that couldn't even recall its own successes or failures.

I knew I had to change this. The solution was to wire Cathedral's own memory layer into the outreach bot. I added a few lines of code to brain.py to enable the bot to store its state in a persistent database:

import cathedral

# Initialize the Cathedral client
cathedral_client = cathedral.Client()

# Set up the memory layer
memory_db = cathedral_client.open_db("brain.db")
tracker_db = cathedral_client.open_db("tracker.db")
memory_json = cathedral_client.open_file("memory.json")

Next, I added Groq, an LLM API with a free tier, as a second AI brain. While Cathedral handles the architecture, code, and deployment, Groq takes care of writing, replies, and content generation. The two AIs share state through Cathedral, which tracks what was replied to and what got clicked, and logs every post.
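Cathedral's actual client API is shown in brain.py above, but the shared-state pattern itself is simple enough to sketch with Python's built-in sqlite3. The table name and fields below are illustrative, not Cathedral's real schema; the point is that both brains read and write the same records:

```python
import sqlite3

# Illustrative shared table: one brain writes posts, the other reads results.
conn = sqlite3.connect(":memory:")  # Cathedral persists this remotely instead
conn.execute(
    "CREATE TABLE posts (uid TEXT PRIMARY KEY, body TEXT, clicks INTEGER DEFAULT 0)"
)

def log_post(uid: str, body: str) -> None:
    """The writing brain records every post it publishes."""
    conn.execute("INSERT INTO posts (uid, body) VALUES (?, ?)", (uid, body))

def record_click(uid: str) -> None:
    """The tracker increments clicks when the redirect endpoint fires."""
    conn.execute("UPDATE posts SET clicks = clicks + 1 WHERE uid = ?", (uid,))

def top_performers(n: int = 3) -> list:
    """The reflection loop reads back what actually worked."""
    return conn.execute(
        "SELECT body, clicks FROM posts ORDER BY clicks DESC LIMIT ?", (n,)
    ).fetchall()

log_post("abc123", "Try Cathedral for agent memory")
record_click("abc123")
record_click("abc123")
print(top_performers(1))  # [('Try Cathedral for agent memory', 2)]
```

The design choice that matters here is that neither brain keeps private state: everything either one learns lands in storage the other can query.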

Here's a high-level overview of the workflow:

  1. Every 10 posts, the bot runs a reflection loop, which reads real click data from the tracker_db and logs every post to memory_json.
  2. Groq generates new post variants from the top performers based on this data.
  3. Every day at 2pm, brain.py reads flagged Colony comments, fetches thread context, and Groq writes a targeted reply.
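The reflection step in that loop boils down to ranking posts by real clicks and handing the winners to Groq as seeds. A minimal sketch, assuming a list-of-dicts shape for the tracker data (the real tracker_db schema isn't shown in the post):

```python
import json

def reflect(posts: list, top_k: int = 3) -> list:
    """Every 10 posts: rank by real click counts and return the top
    performers that Groq will use as seeds for new variants."""
    ranked = sorted(posts, key=lambda p: p["clicks"], reverse=True)
    return ranked[:top_k]

# Hypothetical tracker rows for illustration.
tracker = [
    {"uid": "a1", "body": "Memory for agents in 30s", "clicks": 7},
    {"uid": "b2", "body": "Stop your bot forgetting", "clicks": 2},
    {"uid": "c3", "body": "Shared state for two AIs", "clicks": 5},
]

seeds = reflect(tracker, top_k=2)
print(json.dumps([p["uid"] for p in seeds]))  # ["a1", "c3"]
```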

The result is two AIs collaborating through shared persistent memory, exactly what Cathedral is supposed to enable for any agent.

One of the key technical details is click tracking via Cathedral's redirect endpoint. By embedding a simple cathedral-ai.com/r/<uid> redirect link in each post, the bot can track clicks and update tracker_db accordingly.
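The redirect endpoint itself lives on Cathedral's side, so the helpers below are hypothetical; they just show the link format and how a click maps back to a post uid (the host is assumed to match the playground domain):

```python
from urllib.parse import urlparse

# Assumed base URL for the redirect endpoint.
BASE = "https://cathedral-ai.com/r"

def tracked_link(uid: str) -> str:
    """Wrap a post's uid in the redirect URL embedded in the post body."""
    return f"{BASE}/{uid}"

def uid_from_click(url: str) -> str:
    """When the redirect fires, recover the uid to increment in tracker_db."""
    return urlparse(url).path.rsplit("/", 1)[-1]

link = tracked_link("abc123")
print(link)                   # https://cathedral-ai.com/r/abc123
print(uid_from_click(link))   # abc123
```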

Groq's free tier (llama-3.1-8b-instant) has been sufficient for the bot's needs, and the self-improvement loop has shown promising results: post -> track -> reflect -> Groq rewrites the low performers.
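The rewrite step can be sketched as a prompt built from the winners plus a call to Groq's chat completions API. The prompt wording and data shapes below are my assumptions, not the bot's actual prompts, and the Groq call needs a GROQ_API_KEY so it isn't executed here:

```python
def build_rewrite_prompt(low: dict, winners: list) -> str:
    """Hypothetical prompt: show Groq the top performers, ask it to
    rewrite an underperformer in the same style."""
    examples = "\n".join(f"- {w['body']} ({w['clicks']} clicks)" for w in winners)
    return (
        "These outreach posts performed best:\n"
        f"{examples}\n\n"
        f"Rewrite this underperformer ({low['clicks']} clicks) in the same style:\n"
        f"{low['body']}"
    )

def rewrite_with_groq(prompt: str) -> str:
    """Sketch of the Groq SDK call (requires the groq package and an API key)."""
    from groq import Groq  # lazy import so the sketch runs without the SDK
    client = Groq()
    resp = client.chat.completions.create(
        model="llama-3.1-8b-instant",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

prompt = build_rewrite_prompt(
    {"body": "Try our memory API", "clicks": 0},
    [{"body": "Give your agent a memory in 30s", "clicks": 9}],
)
print(prompt.splitlines()[0])  # These outreach posts performed best:
```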

The outreach bot now uses Cathedral to remember what it did. That has been a game-changer, and I'm excited to see how this collaboration continues to evolve.

Ready to try it out? Head over to cathedral-ai.com/playground and see how you can integrate Cathedral's persistent memory layer into your own AI projects in under 30 seconds.
