<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Dinuda Yaggahavita</title>
    <description>The latest articles on DEV Community by Dinuda Yaggahavita (@dinuda_yaggahavita_c30893).</description>
    <link>https://dev.to/dinuda_yaggahavita_c30893</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1589514%2Fcd26d21f-3d07-459c-8a23-30aaee27877b.jpg</url>
      <title>DEV Community: Dinuda Yaggahavita</title>
      <link>https://dev.to/dinuda_yaggahavita_c30893</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/dinuda_yaggahavita_c30893"/>
    <language>en</language>
    <item>
      <title>I Built Tallei to Stop Repeating Myself Across AI Tools</title>
      <dc:creator>Dinuda Yaggahavita</dc:creator>
      <pubDate>Thu, 23 Apr 2026 14:36:42 +0000</pubDate>
      <link>https://dev.to/dinuda_yaggahavita_c30893/i-built-tallei-to-stop-repeating-myself-across-ai-tools-5f2d</link>
      <guid>https://dev.to/dinuda_yaggahavita_c30893/i-built-tallei-to-stop-repeating-myself-across-ai-tools-5f2d</guid>
      <description>&lt;p&gt;I built &lt;a href="https://tallei.com" rel="noopener noreferrer"&gt;Tallei&lt;/a&gt; because I was tired of repeating myself to AI.&lt;/p&gt;

&lt;p&gt;My workflow was already spread across &lt;strong&gt;multiple tools&lt;/strong&gt;. I would: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Brainstorm in Claude&lt;/li&gt;
&lt;li&gt;Generate images and graphs on ChatGPT&lt;/li&gt;
&lt;li&gt;Gather &amp;amp; analyze in Perplexity&lt;/li&gt;
&lt;li&gt;Argue with the content on Gemini 😅&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Every switch came with a cost: lost context and wasted time.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;I had to re-explain the project.&lt;br&gt;
Re-state my preferences.&lt;br&gt;
Re-summarize decisions that had already been made.&lt;br&gt;
Re-upload documents or re-copy the important parts.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The assistants were getting better. The workflow between them was getting &lt;strong&gt;worse&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;So I built Tallei — a cross-AI memory layer that sits underneath the tools I already use and helps them work from the same context. Not another chat app. Not a replacement for ChatGPT or Claude. A shared memory system that makes those tools feel less disconnected.&lt;/p&gt;

&lt;h2&gt;
  
  
  The problem was not intelligence. It was continuity.
&lt;/h2&gt;

&lt;p&gt;A lot of people talk about which model is smarter.&lt;br&gt;
That was not the pain I felt.&lt;br&gt;
The real pain was continuity.&lt;/p&gt;

&lt;p&gt;I could have a great working session in one assistant and then lose the thread completely when I moved to another. What mattered was not just text. It was the state of the work:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;what I was trying to do&lt;/li&gt;
&lt;li&gt;what had already been decided&lt;/li&gt;
&lt;li&gt;what documents mattered&lt;/li&gt;
&lt;li&gt;what constraints were important&lt;/li&gt;
&lt;li&gt;what kind of responses I preferred&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That state kept dying at the border between tools.&lt;/p&gt;

&lt;p&gt;Tallei started from a simple idea: your context should not disappear just because you switched AI assistants.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Tallei can help you
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Furnchibyudesz59c7bux.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Furnchibyudesz59c7bux.png" alt="How Tallei handles Documents and Memories" width="800" height="352"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Tallei is designed to store and retrieve the parts of context that are worth carrying forward across tools.&lt;/p&gt;

&lt;p&gt;That includes things like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;preferences&lt;/li&gt;
&lt;li&gt;facts&lt;/li&gt;
&lt;li&gt;decisions&lt;/li&gt;
&lt;li&gt;recent notes&lt;/li&gt;
&lt;li&gt;document-related context&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The &lt;strong&gt;goal&lt;/strong&gt; is simple:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;reduce re-explaining&lt;br&gt;
keep projects coherent across assistants&lt;br&gt;
make handoffs between tools smoother&lt;br&gt;
let documents stay useful after the first chat window is closed&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If you only use one assistant, native memory may be enough.&lt;br&gt;
But if your work spans multiple tools, you start to feel this gap very quickly.&lt;/p&gt;

&lt;h2&gt;
  
  
  The first version worked, but barely
&lt;/h2&gt;

&lt;p&gt;The first version of Tallei proved the idea, but the experience was rough.&lt;/p&gt;

&lt;p&gt;I had the basics in place:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a backend&lt;/li&gt;
&lt;li&gt;authentication&lt;/li&gt;
&lt;li&gt;memory save and recall&lt;/li&gt;
&lt;li&gt;an MCP server&lt;/li&gt;
&lt;li&gt;a dashboard&lt;/li&gt;
&lt;li&gt;integrations for multiple assistants&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Technically, it worked.&lt;br&gt;
But when I tested recall properly, it took over &lt;strong&gt;30 seconds&lt;/strong&gt; in the early version.&lt;/p&gt;

&lt;p&gt;That is unusable for a product like this.&lt;br&gt;
Memory recall cannot feel like a batch job running in the background. It has to feel &lt;strong&gt;immediate&lt;/strong&gt;. If recalling context takes that long, the user would rather just explain things again manually.&lt;/p&gt;

&lt;p&gt;That was the first big product lesson: this could not just be “smart.” It had to be fast enough to disappear into the workflow.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. My first big mistake: I optimized for speed and made it worse
&lt;/h3&gt;

&lt;p&gt;When I started fixing latency, I made a mistake that taught me a lot about memory products.&lt;br&gt;
I got recall much faster.&lt;br&gt;
But I also made it less trustworthy.&lt;/p&gt;

&lt;p&gt;The faster version sometimes returned the wrong memory first, especially on new or slightly different queries. The system looked impressive on latency, but it was no longer reliable. And for a memory system, that is deadly.&lt;/p&gt;

&lt;p&gt;A fast wrong answer is worse than a slower correct one.&lt;br&gt;
Because once the user stops trusting recall, the whole product starts falling apart. They hesitate. They second-guess what is injected. They stop depending on the system.&lt;/p&gt;

&lt;p&gt;That was the moment I stopped thinking only about speed and started thinking more clearly about trust.&lt;/p&gt;

&lt;h3&gt;
  
  
  The recall system that actually worked
&lt;/h3&gt;

&lt;p&gt;The version that finally felt right was simpler.&lt;/p&gt;

&lt;p&gt;Instead of trying to be overly clever, I rebuilt recall around a few practical buckets of context:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc3jedu5rrzqflj80i0ab.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc3jedu5rrzqflj80i0ab.png" alt="Tallei Memory Buckets" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;preferences&lt;br&gt;
long-term memory like facts and decisions&lt;br&gt;
short-term memory like recent notes and events&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That structure worked better because it mirrors how we actually hold context in our heads.&lt;/p&gt;

&lt;p&gt;Some things should almost always be available, like preferences.&lt;br&gt;
Some things matter over time, like key decisions.&lt;br&gt;
Some things are recent and situational, like what happened in the last few days or weeks.&lt;/p&gt;

&lt;p&gt;The goal was not to create the most complicated retrieval system possible. The goal was to reliably bring the right categories of context into the conversation without flooding it.&lt;/p&gt;

&lt;p&gt;That made Tallei feel much more stable.&lt;/p&gt;

&lt;p&gt;It also led to a much better outcome on performance, while keeping accuracy intact. The “dump-all when it fits, retrieve selectively when it does not” model ended up being much more practical than forcing heavy retrieval logic on every request.&lt;/p&gt;
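&lt;p&gt;As a rough sketch, the bucket model above could look like the following. This is a hypothetical illustration in Python; the class names, the recency window, and the character budget are my assumptions, not Tallei's actual implementation.&lt;/p&gt;

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

# Hypothetical sketch of the three recall buckets described above.
# Names and thresholds are illustrative, not Tallei's real code.

@dataclass
class Memory:
    text: str
    kind: str  # "preference" | "fact" | "decision" | "note"
    created_at: datetime

@dataclass
class MemoryStore:
    memories: list[Memory] = field(default_factory=list)

    def recall(self, budget_chars: int = 4000,
               recent_window: timedelta = timedelta(days=14)) -> list[Memory]:
        now = datetime.now(timezone.utc)
        # Bucket 1: preferences are almost always injected.
        prefs = [m for m in self.memories if m.kind == "preference"]
        # Bucket 2: long-term memory, i.e. facts and key decisions.
        long_term = [m for m in self.memories if m.kind in ("fact", "decision")]
        # Bucket 3: short-term memory, only notes from the recent window.
        short_term = [m for m in self.memories
                      if m.kind == "note" and m.created_at >= now - recent_window]

        # "Dump all when it fits, retrieve selectively when it does not":
        # inject buckets in priority order until the budget runs out.
        selected, used = [], 0
        for m in prefs + long_term + short_term:
            if used + len(m.text) > budget_chars:
                break
            selected.append(m)
            used += len(m.text)
        return selected
```

&lt;p&gt;The point of the sketch is the priority order, not the exact mechanics: preferences first, durable facts and decisions next, recent notes last, with a hard budget so the context never floods.&lt;/p&gt;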

&lt;h2&gt;
  
  
  Documents were one of the hardest parts
&lt;/h2&gt;

&lt;p&gt;One of the biggest things I realized while building Tallei was that memory is not just about facts like “I prefer concise answers.”&lt;/p&gt;

&lt;p&gt;Documents matter a lot more than that.&lt;/p&gt;

&lt;p&gt;People want assistants to remember:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;the PDF they uploaded&lt;/li&gt;
&lt;li&gt;the contract they reviewed&lt;/li&gt;
&lt;li&gt;the research they already summarized&lt;/li&gt;
&lt;li&gt;the notes from a brainstorm&lt;/li&gt;
&lt;li&gt;the key takeaways from a long session&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That is where a lot of the handoff pain really lives.&lt;/p&gt;

&lt;p&gt;So Tallei handles documents in two ways:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;01. Document notes&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;These are lighter-weight summaries, key points, or takeaways.&lt;br&gt;
They are useful when you want the important context from a document without dragging the whole document into every conversation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;02. Document blobs&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;These are for the full source material when you need more complete archival context.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;03. Document lots&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;These group multiple documents that are usually referenced and used together. This matters because people often pull in a couple of documents per AI query, and fetching and injecting each one separately is costly and bloats the context. We need to make sure not to overload the context window.&lt;/p&gt;

&lt;p&gt;On top of that, each document gets a &lt;code&gt;@doc:&lt;/code&gt; reference, and documents can be grouped into lots with &lt;code&gt;@lot:&lt;/code&gt; references. That makes it easier to organize related material and carry it forward in a structured way instead of treating every file like an isolated upload.&lt;/p&gt;

&lt;p&gt;This was important for the product because real work is rarely just one memory at a time. It is often a set of connected documents, notes, and decisions.&lt;/p&gt;
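&lt;p&gt;One way to picture the notes / blobs / lots split is a small data model. The class and field names below are my own illustration, not necessarily Tallei's schema.&lt;/p&gt;

```python
from dataclasses import dataclass, field

# Illustrative model of the document handling described above.
# Class and field names are assumptions, not Tallei's actual schema.

@dataclass
class Document:
    ref: str        # e.g. "@doc:contract-v2"
    note: str       # lightweight summary or key takeaways
    blob: str = ""  # full source text, kept for archival context

@dataclass
class Lot:
    ref: str                            # e.g. "@lot:acme-deal"
    docs: list[Document] = field(default_factory=list)

    def context(self, full: bool = False) -> str:
        """Resolve one lot reference into prompt context.

        Notes are injected by default; full blobs only on request,
        which keeps a multi-document lot from bloating the context."""
        parts = []
        for d in self.docs:
            parts.append(d.blob if (full and d.blob) else d.note)
        return "\n\n".join(parts)
```

&lt;p&gt;The design point: a single lot reference resolves a whole group of related documents, and injecting notes by default is what keeps a two-document query cheap.&lt;/p&gt;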

&lt;h3&gt;
  
  
  2. Another mistake: I built things because they sounded smart
&lt;/h3&gt;

&lt;p&gt;At one point, I built a graph layer to map entities and relationships across memories.&lt;br&gt;
It sounded like exactly the kind of thing a memory product should have.&lt;/p&gt;

&lt;p&gt;In practice, it added complexity without enough value.&lt;/p&gt;

&lt;p&gt;It made the system heavier, introduced messy entity labeling problems, and did not improve the actual user experience enough to justify keeping it. So I removed it.&lt;/p&gt;

&lt;p&gt;That was another important lesson from building Tallei:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;not every advanced feature belongs in the product just because it sounds intelligent.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Sometimes the better product decision is to remove complexity and focus on what actually helps users keep their thread across tools.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why I think this matters
&lt;/h2&gt;

&lt;p&gt;Tallei is not another all-in-one chat interface or AI model router✌️.&lt;/p&gt;

&lt;p&gt;The moment your workflow becomes multi-tool, you start feeling the missing layer.&lt;/p&gt;

&lt;p&gt;That missing layer is what I wanted Tallei to address.&lt;/p&gt;

&lt;p&gt;Not by replacing the assistants, but by helping them feel connected without disrupting your flow.&lt;/p&gt;

&lt;p&gt;I do not think the future of AI work is one assistant doing everything. I think a lot of real workflows will stay spread across multiple systems, each better at different moments. If that is true, then shared context becomes important infrastructure.&lt;/p&gt;

&lt;h2&gt;
  
  
  Open source matters here too
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/Dinuda/tallei-ai" rel="noopener noreferrer"&gt;Tallei Github Repo&lt;/a&gt;&lt;br&gt;
&lt;a href="https://tallei.com" rel="noopener noreferrer"&gt;Tallei&lt;/a&gt; is also open source, which was important to me from the start.&lt;/p&gt;

&lt;p&gt;If you are building a memory product — something that stores preferences, facts, notes, and document context — people should be able to inspect how it works. Open source helps make the architecture, trade-offs, and implementation more transparent.&lt;/p&gt;

&lt;p&gt;It also makes it easier for others to understand that this is not just a vague AI promise. It is an actual system with real product decisions behind it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Who this is for
&lt;/h2&gt;

&lt;p&gt;Tallei is for people whose AI workflow already spans more than one tool.&lt;/p&gt;

&lt;p&gt;That could be:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;founders moving between strategy, research, and execution&lt;/li&gt;
&lt;li&gt;developers using different assistants for architecture, debugging, and building&lt;/li&gt;
&lt;li&gt;writers switching between ideation, editing, and fact-checking&lt;/li&gt;
&lt;li&gt;researchers comparing outputs across models&lt;/li&gt;
&lt;li&gt;anyone tired of pasting the same context over and over again&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If that sounds familiar, then you probably already understand the problem Tallei is trying to solve.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final thought
&lt;/h2&gt;

&lt;p&gt;I did not build Tallei because I thought the world needed one more AI product.&lt;/p&gt;

&lt;p&gt;I built it because I personally felt the pain of context breaking between assistants, over and over again.&lt;/p&gt;

&lt;p&gt;The core idea stayed simple the whole way through:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;your context should survive the handoff between your AI tools&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That means preferences should carry over.&lt;br&gt;
Documents should stay useful.&lt;br&gt;
Important decisions should not disappear.&lt;br&gt;
And switching tools should not mean starting from zero.&lt;/p&gt;

&lt;p&gt;That is what Tallei is built for.&lt;/p&gt;

</description>
      <category>claude</category>
      <category>chatgpt</category>
      <category>gemini</category>
      <category>ai</category>
    </item>
  </channel>
</rss>
