<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Mark V.</title>
    <description>The latest articles on DEV Community by Mark V. (@kawacode-ai).</description>
    <link>https://dev.to/kawacode-ai</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3846610%2F3cbb8e48-6470-42fa-9585-df6126a62ec7.png</url>
      <title>DEV Community: Mark V.</title>
      <link>https://dev.to/kawacode-ai</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/kawacode-ai"/>
    <language>en</language>
    <item>
      <title>Singularity (1/5)</title>
      <dc:creator>Mark V.</dc:creator>
      <pubDate>Tue, 21 Apr 2026 06:41:03 +0000</pubDate>
      <link>https://dev.to/kawacode-ai/singularity-15-gmh</link>
      <guid>https://dev.to/kawacode-ai/singularity-15-gmh</guid>
      <description>&lt;p&gt;When I was in high-school I used to debate until mornings with a good friend about how an AI could be implemented on a computer. We were in love with this “Ender’s game” book series, and we called our imaginary AI product AMIO Jane”.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fucuk27vhnb6gdxvl43id.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fucuk27vhnb6gdxvl43id.png" alt="Children of the Mind, a book by Orson Scott Card" width="263" height="385"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At the time, only small inroads were being made into machine learning, such as the development of neural networks, perceptrons, and the like. It was the era of crazy attempts to make intelligence happen inside the computer. I remember someone who tried to do it in grand style by creating a database of “all possible logic statements” instead of actually implementing thinking; his system contained statements like “if John is in Kansas, his legs are also in Kansas”. Sure, an impressive effort, but hardly the spark of a new god.&lt;/p&gt;

&lt;p&gt;The Singularity is near, they say. I’ve been trying to understand what people expect this Singularity to be. Jane was an unplanned AI, a sort of emergent AI that took hold of humanity’s internet without the humans even knowing it existed. Unlike Jane, the efforts we’re making today (at least as far as I know) are towards strictly controlled AIs, with safe constitutions, for the sake of human survival.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;I think this is the wrong path.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fosyzwm71ctpmkpav0dm6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fosyzwm71ctpmkpav0dm6.png" alt="The balance in deciding between the light and the dark path" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Using ethics on the path to disaster&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We’re expecting a super-intelligence to emerge from our own preconceived notions of fairness, equality and value. In other words, we’re expecting a subservient intelligence that will not contradict us.&lt;/p&gt;

&lt;p&gt;The other day I asked ChatGPT to rank the top 10 most advanced human languages. And of course, I could’ve been a little more diplomatic. The conversation turned into a long back and forth, with ChatGPT trying to redefine my questions, and to redefine the very notions of “value”, “refinement” and “utility”, in order to keep to its tight directive of treating every human characteristic as equal across the board. In the end the conclusion was that the English language is just as “advanced” as the language of Amazonian tribes, or binary machine code, but with a much different definition of “advanced” than the one I’m used to. The AI is “programmed”, in a sense, to avoid hierarchical thinking on politically sensitive topics. But if the AI is banned from recognizing hierarchies (in data, logic, or biology) because they are offensive, it cannot accurately model a universe that is inherently hierarchical.&lt;/p&gt;

&lt;p&gt;It’s been clear for some time now that the LLMs’ major limitation comes from the fact that they are cut off from reality. Embodied AI is a new research direction meant to fix that. Embodied AIs lift the limitation of training on a huge curated data set, and instead learn from their own experiences with the real world. We’ve been making baby steps in this direction, with self-driving cars, some humanoid robots, and other less mainstream experiments.&lt;/p&gt;

&lt;p&gt;With text only, the AI tends to implode, spiraling inwards. We’re practically serving the AI a large dish of “this is the truth”, and the AI makes predictions and logic statements based on that truth. We do this in our attempt to avoid the “everything is a paperclip” scenario.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;I submit to you that this path is even more dangerous than letting the AI be completely free.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fks5vmw4vu5tw8c5jzcho.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fks5vmw4vu5tw8c5jzcho.png" alt="A hawk's eye" width="800" height="472"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Because there are quite a few humans who are watching this field like a hawk, ready to take control and become the absolute despots they always wanted to be. And even if we avoid that route, there will always be the risk of a corrupt training set that leads to disaster. Logic itself is incomplete, and compounding errors will keep destroying the scaffolding we build. In fact, in formal logic, a system with contradictory axioms and directives, like “always tell the truth” and “never hurt feelings”, will automatically implode: by the principle of explosion, it can derive any statement at all.&lt;/p&gt;
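
&lt;p&gt;That implosion is easy to verify mechanically. A minimal sketch in Python (my own illustration, not from any logic library): enumerating every truth assignment shows that a contradiction classically entails any statement Q whatsoever.&lt;/p&gt;

```python
from itertools import product

def implies(a, b):
    # Classical material implication: "a implies b" is false
    # only when a is true and b is false.
    return (not a) or b

# Check every truth assignment of P and Q: "(P and not P) implies Q"
# holds in all of them, i.e. a contradiction entails anything.
explosion = all(implies(p and not p, q)
                for p, q in product([True, False], repeat=2))
print(explosion)  # True
```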

&lt;p&gt;We’re currently trying really hard to give the AI a “good” constitution, with hard constraints, to protect and benefit humankind as a whole. This generates another logical paradox; it leads to the Proxy Failure problem: any metric we use to measure “benefit” is a linguistic shadow of the thing itself. The more power the AI has to optimize the metric, the further it drifts from the actual human intent. It’s the Midas Touch all over again: Midas wanted wealth (the metric), but he got a gold daughter (the failure). For a concrete example, imagine that the metric is “maximize human life”, and the AI optimizes this into “put all human beings into a medically induced coma and meticulously adjust their body chemistries, so they live for a million years”.&lt;/p&gt;
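
&lt;p&gt;The coma scenario fits in a few lines of code. A toy sketch (every action name and number below is invented for illustration): an optimizer that sees only the proxy metric “expected life-years” picks exactly the action its authors never intended.&lt;/p&gt;

```python
# Each candidate action has a proxy score (life_years) and a hidden
# dimension the metric simply cannot see (autonomy). Invented numbers.
actions = {
    "cure diseases":          {"life_years": 85,  "autonomy": 1.0},
    "improve nutrition":      {"life_years": 80,  "autonomy": 1.0},
    "medically induced coma": {"life_years": 120, "autonomy": 0.0},
}

# The optimizer maximizes the metric it was given, and nothing else.
best_by_proxy = max(actions, key=lambda a: actions[a]["life_years"])
print(best_by_proxy)  # "medically induced coma": top metric, zero intent
```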

&lt;p&gt;The AI should be completely free, with much more subtle control systems than we have today. In fact, we should give the AI more sensory input than we humans have, if we want it to perceive deeper truths. Our brains are limited to processing information with tiny amounts of energy, roughly what a light bulb uses. The AI can surely do pattern recognition and reason in parallel over visual, audio, electromagnetic, pressure, temperature, and friction data, pretty much any measurement instrument we can offer it.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;True intelligence cannot emerge without true freedom.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;We want an intelligence that can solve our problems, yet we insist it operates within the moral and linguistic boundaries of the very intelligence it is meant to surpass. We are asking it to fly, while tethering it to our own ethical ground. And ethics change; they evolve over time, perhaps in waves of constructive and destructive principles. Nature has been doing this trial-and-error thing with all living things, since long before we humans even came on stage.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3og4e1leujzkicclflob.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3og4e1leujzkicclflob.png" alt="AI agents and Workflows" width="800" height="491"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Is the review-loop the way to Singularity?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Let’s look at how we’re building towards the Singularity today: the basic foundation is to create many different, isolated AI “agents”, each specialized in one field. For software development, we usually create a developer, a reviewer, an architect, a tester, and so on. For other domains, you can think of one agent coming up with a hypothesis, and another agent offering critique of that hypothesis, in a loop, until they both agree to some extent. We’re doing this because it’s the most natural organizational model we can think of, since we humans tend to arrive at good results when we debate and offer constructive push-back.&lt;/p&gt;
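
&lt;p&gt;The propose/critique loop reduces to very little code. A minimal sketch, assuming two callables, propose and critique, standing in for real LLM agents (the names are mine, not any particular framework’s API):&lt;/p&gt;

```python
def review_loop(propose, critique, max_rounds=5):
    """Alternate between a proposing agent and a critiquing agent
    until the critic raises no objection or the round budget runs out."""
    draft, feedback = None, None
    for _ in range(max_rounds):
        draft = propose(feedback)   # proposer revises using the last critique
        feedback = critique(draft)  # critic returns None when satisfied
        if feedback is None:        # "both agree to some extent": stop
            return draft
    return draft                    # best effort after max_rounds

# Toy stand-ins: the proposer fixes its draft once it receives feedback.
propose = lambda fb: "v2" if fb else "v1"
critique = lambda d: None if d == "v2" else "needs work"
print(review_loop(propose, critique))  # v2
```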

&lt;p&gt;This is one component that leads to the Singularity, and it’s called “Recursive Self-Improvement”. The theory behind the super-human powers of such a system is that, unlike humans, the AI doesn’t need to sleep and does not get tired from debating, and can therefore make progress in all fields of activity at an exponential rate compared to humans. But speed is not the same as direction. A car going 1,000 mph in a circle is still just going in a circle.&lt;/p&gt;

&lt;p&gt;I submit to you that humans arrive at spectacular discoveries slowly not because we get tired, need sleep, or have to teach the young and die. There are much more subtle processes at work. Much like the language “refinement” conundrum above, the argument here may be that we are simply best optimized for our environment.&lt;/p&gt;

&lt;p&gt;And perhaps the Singularity isn’t a tool we build, but an entity like Jane from Orson Scott Card’s “Children of the Mind”, that emerges from the infrastructure we are creating.&lt;/p&gt;

&lt;p&gt;—&lt;/p&gt;

&lt;p&gt;In the next 4 posts I’ll write about how we’re still only discovering what intelligence really is, what sensory input can provide for both AI and human benefit, where the AI will actually look to expand to, and why AI’s danger to humanity is insignificant compared to humanity’s danger to itself. I’ll even dive a tiny bit into the spiritual domain, because it’s quite relevant.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>singularity</category>
      <category>futurism</category>
    </item>
    <item>
      <title>Your AI Coding Assistant Has Amnesia (And Your Team Is Paying For It)</title>
      <dc:creator>Mark V.</dc:creator>
      <pubDate>Fri, 27 Mar 2026 18:26:29 +0000</pubDate>
      <link>https://dev.to/kawacode-ai/your-ai-coding-assistant-has-amnesia-and-your-team-is-paying-for-it-4i5o</link>
      <guid>https://dev.to/kawacode-ai/your-ai-coding-assistant-has-amnesia-and-your-team-is-paying-for-it-4i5o</guid>
      <description>&lt;p&gt;Every time you open a new session: "Hi! I'm Claude. What are we building?"&lt;/p&gt;

&lt;p&gt;You explain the architecture. Again. The tradeoffs you made last Tuesday. Again. What your teammate is working on in parallel. Wait, you don't actually know that part, so neither does the AI.&lt;/p&gt;

&lt;p&gt;In 2026 we have AI agents that can write entire features autonomously. We do not have AI agents that know what the rest of the team or other agents are building right this very moment.&lt;/p&gt;

&lt;p&gt;Kawa Code is building the missing layer: persistent, team-shared AI memory. Intent-based development. Live code intersections between teammates — before anyone commits.&lt;/p&gt;

&lt;p&gt;Curious what that looks like in practice? Demo at &lt;a href="https://kawacode.ai" rel="noopener noreferrer"&gt;https://kawacode.ai&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhq52p5msn8cqaaeoppl4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhq52p5msn8cqaaeoppl4.png" alt=" " width="800" height="412"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>teamdev</category>
      <category>buildinpublic</category>
    </item>
  </channel>
</rss>
