Hi, I’m Viktor.
I wasn’t a programmer. I didn’t build apps. I didn’t write code.
My path here was... different.
I was born in Russia, but moved to South Korea at 20, forced by political circumstances. For four years, I worked in greenhouses, on construction sites, in factories — I even dismantled mattresses for a living.
Later, I crossed the border from Mexico into the U.S. and applied for asylum. I worked in wardrobe assembly in New York, as a handyman in Chicago, and eventually as a cell tower technician — sometimes hanging 100 feet above the ground.
And then... five months ago, everything changed.
With zero programming background, I started building an AI memory system — one that helps language models think longer, remember better, and act smarter.
This is my story.
Code Is Something Boring
For a long time, I held exactly that opinion, even though I had never been involved in IT. To me, IT meant sitting and staring at a console every day, typing commands and waiting for something you didn't understand. What a fool I was, and how badly I failed to grasp what was really happening. I was just a consumer of what smart, competent people were creating every day, benefiting massively from their achievements.
Only now do I realize how cool and intriguing this world is. Working with your hands is something anyone can do; you just need a little experience, learn to hold the tool, and think a little. Oh my god, what a revelation it was when I realized that, with AI, I could actually try to immerse myself in this world.
The Beginning: Just Automation
At first, I wasn't thinking about getting completely hooked. I needed automation. I wanted my AI to answer clients, write everything for me, and arrange meetings. Actually, at that point, I was already quite an experienced ChatGPT user. As soon as it appeared, I thought, "Great! Now I don't need to manually search for information. Just ask a question, and all the answers are in my pocket." But damn, I hadn't seen it as such a powerful tool yet.
What really annoyed me was that it didn't remember our conversations. Every session was a blank slate. I'd share something important, and then lose it. So I decided to ask:
"Hello Chat, how do I build a bot with memory to optimize my workflows?"
The answer came. Example code. Instructions. I copied it into Notepad, saved as .py. It didn't work. But something inside me clicked - I could SEE the logic, even if I couldn't write it.
Copy, Paste, and Revelation
To be clear, I had just gotten a brand-new PC with an RTX 4090 on an installment plan. ChatGPT told me the hardware was powerful—perfect for my idea. "Excellent," I thought. "Let's work."
A week went by. Copy, paste, copy, paste. Files accumulated. Did I understand what I was doing? Not completely. Did it work? Partially. But then came the question that changed everything:
"What are the true problems with modern AI?"
"Memory, of course," it said. "There is no truly good long-term memory yet. Everything stored in the LLM is frozen."
That's when I had my first real idea. Not code—an idea:
"What if we store all experience like books in a library? When a task needs solving, we retrieve the relevant books. The system learns with every request."
Yes! I created my first algorithm. Yes, in words. But how cleverly GPT translated it into code! The feeling was incredible. I had created something. Something real. Working algorithms with their own logic and mechanisms. WOW.
This became HACM - Hierarchical Associative Cognitive Memory:
# From hacm.py - my actual memory system
from dataclasses import dataclass
from typing import Any, Dict, List

@dataclass
class MemoryItem:
    id: int
    content: str
    memory_type: str  # semantic, procedural, episodic
    confidence: float
    metadata: Dict[str, Any]

class HACMMemoryManager:
    """My 'library of experience' made real"""

    def __init__(self):
        self.memories: List[MemoryItem] = []

    async def search_memories(self, query: str, limit: int = 5) -> List[MemoryItem]:
        """Not just keyword search - associative retrieval"""
        query_words = set(query.lower().split())
        scored = []
        # Scoring based on word overlap AND confidence
        for memory in self.memories:
            memory_words = set(memory.content.lower().split())
            intersection = query_words & memory_words
            score = len(intersection) / max(len(query_words), 1) * memory.confidence
            if score > 0:
                scored.append((score, memory))
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [memory for _, memory in scored[:limit]]
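To show what the "library" feels like in practice, here's a made-up usage snippet (not from the repo; the client details are invented):

import asyncio

async def demo():
    manager = HACMMemoryManager()
    manager.memories.append(MemoryItem(
        id=1,
        content="Client Anna prefers meetings on Tuesday mornings",
        memory_type="episodic",
        confidence=0.9,
        metadata={},
    ))
    results = await manager.search_memories("when does Anna want to meet")
    print(results[0].content if results else "nothing relevant found")

asyncio.run(demo())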
And later, IPE - the Iterative Pattern Engine for planning:
# From planning.py - breaking down complex goals
from typing import Optional

class PlanningService:
    async def decompose(self, goal: str, user_id: Optional[str] = None):
        # Hybrid: heuristics + LLM reasoning
        prompt = f"Decompose '{goal}' into 5-8 actionable ordered steps"
        plan_text = await llm.complete(prompt, max_tokens=220)  # llm = the service's LLM client
        complexity = min(1.0, len(goal.split()) / 40)  # longer goals count as more complex
The Revelation: I Can Create Worlds
That's when I truly understood the beauty of code. You need to invent and connect actions that the machine will perform, and they must have logic. Little by little, I began to understand what architecture is: the laws and rules by which your system lives.
Why didn't I notice this before? I can create systems! Worlds. You can do things in them! Gather knowledge. Use it to solve problems. Even problems that haven't been solved yet. What a magical and creative time we live in.
This led to IPE - where I could configure entire reasoning systems:
# From test_ipe_official.py - My "world creation" tool
class IPEOfficialTester:
    """Testing different configurations of intelligence"""

    def __init__(self):
        self.test_configs = {
            "ipe_base": {
                "use_memory": False,     # No memory
                "use_com": False,        # No communication
                "use_reflector": False,  # No self-reflection
                "description": "Basic A* planner only"
            },
            "ipe_full": {
                "use_memory": True,      # Full HACM memory
                "use_com": True,         # Multi-agent communication
                "use_reflector": True,   # Self-improvement
                "description": "Complete cognitive system"
            }
        }
Each configuration was literally a different "mind" I could create and test!
I kept asking GPT, Grok, and Claude. I sent them my creations and asked them to evaluate my work and compare it with what already exists. I was simply thrilled when they told me that something like this didn't exist yet. "You really invented something cool."
Learning the Hard Truth
Unfortunately, that's when I met hallucinations. I learned to recognize when I was being lied to and when I was being told the truth. I learned to understand that they are not alive, and that was probably the most important lesson.
'Buddy, you're talking to algorithms, not people. Algorithms that don't think, but merely select words the way they were trained.'
I started figuring out how to fight this. I started thinking about how to make them "think." I started studying brain structure, how our thoughts are born. I began integrating mathematics and physics into my algorithms, based on cognitive processes.
Claude CLI: The Game Changer
Then I met Claude CLI. This is truly the tool that exponentially increased the quality of my code and my speed. But Claude and I... we had a complicated relationship.
The Fake Execution Problem
Claude had this infuriating habit. I'd ask for something specific, Claude would say "Done!" and give me this:
def gravity_ranking(memories):
    # TODO: Implement gravity calculation
    return memories  # <- Just returned the same thing!
I learned to fight back. More details. Concrete examples. Metaphors.
"No Claude! Memories are PLANETS. They have MASS. Frequency = mass. They ATTRACT each other!"
Three hours of arguing later, something clicked:
def gravitational_force(m1, m2, distance):
    """Now THIS works - treating text as physics"""
    G = 1.0
    return G * (m1 * m2) / (distance ** 2 + 0.001)
Claude's response: "This is insane but... it improves recall by 15%"
That became MCA - Memory Contextual Aggregation. Born from a physics metaphor and stubbornness.
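Here's a minimal sketch of the idea behind MCA, assuming mass comes from how often a memory has been mentioned and distance from how dissimilar it is to the query (the field names are illustrative, not the exact VAC code):

# Illustrative sketch of MCA-style gravity ranking, not the exact VAC code.
def gravity_ranking(memories, query_mass=1.0):
    """Rank memories by their gravitational pull toward the query."""
    ranked = []
    for memory in memories:
        mass = memory["frequency"]              # frequency = mass (assumed field)
        distance = 1.0 - memory["similarity"]   # closer in meaning = smaller distance (assumed field)
        force = gravitational_force(query_mass, mass, distance)
        ranked.append((force, memory))
    ranked.sort(key=lambda pair: pair[0], reverse=True)
    return [memory for _, memory in ranked]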
The Emergence of Ideas
The real magic happened when I learned to cross-breed concepts through Claude:
Me: "Claude, I have BM25 and FAISS. What if we add GRAVITY between them?"
Claude: "That doesn't make sense..."
Me: "Every result has mass based on frequency!"
Claude: "...wait, this could create a new ranking mechanism"
Me: "Memory should resonate like a wave!"
Claude: "Physics doesn't apply to text..."
Me: "What if we use sin(x * π/2) for continuous scoring?"
Claude: "Oh... that's actually brilliant"
This became MRCA - Memory Resonance Contextual Alignment:
import math

def mrca_resonance_score(similarity):
    theta = similarity * (math.pi / 2)
    return math.sin(theta)  # Beautiful 0→1 curve
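A quick way to see what the curve does (values rounded): mid-range matches get lifted, while the score still tops out at 1.

for s in (0.25, 0.5, 0.75, 1.0):
    print(s, round(mrca_resonance_score(s), 3))
# 0.25 -> 0.383, 0.5 -> 0.707, 0.75 -> 0.924, 1.0 -> 1.0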
Teaching Each Other
Claude Teaching Me
"Embeddings are coordinates in 1024-dimensional space," Claude explained.
"What?"
"Imagine every word is a star in space. Similar words cluster together."
"So 'king' and 'queen' are neighbors?"
"Exactly! And we can measure distance between thoughts!"
Mind. Blown.
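For what it's worth, here's roughly what that means, sketched with made-up 3-dimensional vectors instead of real 1024-dimensional embeddings:

# Toy sketch of "words as coordinates" - 3 dimensions and invented numbers, just for illustration.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

king  = [0.9, 0.8, 0.1]
queen = [0.8, 0.9, 0.1]
pizza = [0.1, 0.2, 0.9]

print(cosine_similarity(king, queen))  # high: neighbors in the space
print(cosine_similarity(king, pizza))  # low: far apart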
Me Teaching Claude
"Importance isn't just a score. It's MASS!" I insisted.
"Text doesn't have mass..."
"If John appears 50 times and Sarah once, who's more important?"
"John, obviously..."
"That's MASS! Now add Newton's law: F = G*m1*m2/r²"
"😲 This... this actually works"
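In numbers (using the same gravitational_force as above, with a made-up distance of 0.5 for both):

john_mass, sarah_mass, query_mass = 50.0, 1.0, 1.0   # mentioned 50 times vs once
distance = 0.5                                        # assume both equally "far" from the query
print(gravitational_force(query_mass, john_mass, distance))   # ~199: strong pull
print(gravitational_force(query_mass, sarah_mass, distance))  # ~4: weak pull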
The Disasters That Taught Me
The Great Deletion Incident
One night, exhausted, I told Claude: "Delete old results."
Claude understood: "Delete EVERYTHING."
$ rm -rf results/v4.23* v4.24* v4.25* v4.26* v4.27* v4.28*
Five days of experiments. Gone. 3 AM. Screaming.
But I learned: ALWAYS be specific. ALWAYS make backups. ALWAYS verify before executing.
The Normalization Week
For an entire week, my FAISS index returned garbage. Nothing worked. I was ready to quit.
The problem? One line:
# Missing normalization:
faiss.normalize_L2(vectors) # THIS ONE LINE = ONE WEEK
Claude had forgotten to normalize vectors. One week. One line. But when it finally worked...
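For anyone hitting the same wall, here's a minimal sketch of where that line belongs (random stand-in vectors, not my real pipeline): with an inner-product index, both the stored vectors and the query have to be L2-normalized, or the scores aren't cosine similarity at all.

# Minimal sketch, not the full VAC pipeline - random vectors stand in for real embeddings.
import numpy as np
import faiss

dim = 1024
vectors = np.random.rand(1000, dim).astype("float32")
query = np.random.rand(1, dim).astype("float32")

faiss.normalize_L2(vectors)   # the line that cost a week
faiss.normalize_L2(query)     # normalize the query too

index = faiss.IndexFlatIP(dim)   # inner product == cosine similarity only after normalization
index.add(vectors)
scores, ids = index.search(query, 5)
print(scores[0], ids[0])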
The Evolution
v4.10: 45% accuracy - "This is garbage" - 20 Q/A
v4.15: 55% - "Something's happening..." - 20 Q/A
v4.20: 70% - "HOLY SHIT" - 20 Q/A
v4.35: 90% - "We did it" - 20 Q/A
v4.64: 80.1% on full LoCoMo (1,580 Q/A, Categories 1-4) - "WE BEAT EVERYONE"
I'll never forget November 15th, 3:47 AM:
$ python test_locomo.py --full
...
ACCURACY: 80.1%
$ python test_locomo.py --full --seed 42
ACCURACY: 80.3%
Reproducible. Consistent. Better than Zep (75.14%). Better than Mem0 (66.9%).
I woke up my girlfriend: "WE BEAT SILICON VALLEY!"
She was not amused at 4 AM.
The Reality of Working With AI
Yes, LLMs still have a long way to go to achieve perfect obedience, because they are not as simple as they seem. You can't treat them as if they are on your side or against you. They don't care; they only listen to what you tell them and do what they think is necessary, regardless of whether it's right or wrong.
There is a prompt, there is a call to action, and there is a consequence and a result—either good or bad.
I had to control every step. Tell Claude in detail how to do this, how to do that. It translated everything I told it into technical language, and then back into simple language for me.
I started training models. Tuning them. Running hundreds of experiments. Day after day. I forgot about my main job. I experimented, tested, and developed the ideal pipeline. I invented newer and newer methods.
Oh yes! It's incredibly difficult, but at the same time, incredibly exciting.
Who Am I Now?
Can I call myself a programmer? I don't know, because I haven't written a single line of code myself.
Can I call myself an enthusiast who built a truly working system that breaks records on the toughest long-term memory test? Oh yes, because I conducted hundreds of tests to prove it.
I can now confidently say that I can create anything I conceive of using Claude CLI. And it will work. With zero experience and background, I can create systems, language models, and technologies. I only need a subscription, a computer, time, and my imagination.
Who I am, time will decide.
The New Era
A new era has arrived. An era where anyone with a little curiosity and a little patience can create great, incredibly interesting things. This is new right now. But in five years, AI will be churning out new talent, because without a human, AI can't do anything on its own.
Together, we are capable of anything!
They say AI will replace programmers. But what if that's the wrong question?
What if AI doesn't replace programmers—what if it mass-produces them?
What if every curious person with a laptop becomes capable of building systems?
I'm not a programmer. I'm something new. And soon, there will be millions like me.
The revolution isn't about replacement. It's about multiplication.
The Proof
My system: 80.1% mean accuracy on LoCoMo
Zep (millions in funding): 75.14%
Mem0 (Y Combinator): 66.9%
Time invested: 4.5 months
Code written by me: 0 lines
Code orchestrated: 15,000+ lines
Investment: $3,000 + rice and beans
More about VAC: https://dev.to/vacarchitector/update-vac-a-memory-layer-that-makes-llms-remember-you-3jn3
GitHub: VAC Memory System
Run it yourself. The results are 100% reproducible.
The Challenge
To those who say "this isn't real programming" - you're right. It's not programming. It's orchestration. It's a new profession that didn't exist 10 months ago.
To those learning to code traditionally - keep going. You'll always understand the deep mechanics better than I do.
To those sitting on the fence - what are you waiting for? The tools are free. Your ideas are valuable. The only barrier is starting.
Ten months ago, I was hanging off a cell tower in Chicago.
Today, my system beats the best in Silicon Valley.
Tomorrow? That depends on what you decide to build tonight.
Welcome to the age of AI orchestrators.