<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ed de Almeida</title>
    <description>The latest articles on DEV Community by Ed de Almeida (@eddealmeidajr).</description>
    <link>https://dev.to/eddealmeidajr</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F60456%2F101233b6-989f-477d-8f03-6bbfd117af3e.png</url>
      <title>DEV Community: Ed de Almeida</title>
      <link>https://dev.to/eddealmeidajr</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/eddealmeidajr"/>
    <language>en</language>
    <item>
      <title>When Codes of Conduct Clash with Code: The Algorithmic Hypocrisy We're All Building</title>
      <dc:creator>Ed de Almeida</dc:creator>
      <pubDate>Sat, 25 Oct 2025 12:47:07 +0000</pubDate>
      <link>https://dev.to/eddealmeidajr/when-codes-of-conduct-clash-with-code-the-algorithmic-hypocrisy-were-all-building-5a41</link>
      <guid>https://dev.to/eddealmeidajr/when-codes-of-conduct-clash-with-code-the-algorithmic-hypocrisy-were-all-building-5a41</guid>
      <description>&lt;p&gt;Hey dev community 👋&lt;/p&gt;

&lt;p&gt;I've been wrestling with something that might resonate with you all. Recently, I was debugging a recommendation algorithm and had this uncomfortable realization: we're writing moral frameworks into our systems without even realizing it.&lt;/p&gt;

&lt;p&gt;Think about it. Every time we choose:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;What data to train our models on&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;What "success" looks like in our metrics&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Which edge cases to prioritize&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;How to handle controversial content&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;...we're making ethical decisions. But here's the kicker: these embedded moral choices often directly contradict the beautiful Codes of Conduct our companies proudly display.&lt;/p&gt;

&lt;p&gt;The Pinterest example hit me hard: searching "beautiful black woman" vs "beautiful white woman" returns dramatically different results. The algorithm learned society's biases, then amplified them. Meanwhile, their Code of Conduct promises inclusivity and diversity.&lt;/p&gt;

&lt;p&gt;This isn't about bad actors—it's about unconscious moral debt piling up in our systems. We're so focused on shipping features that we forget we're shipping value judgments too.&lt;/p&gt;

&lt;p&gt;Some questions I've been asking myself:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;How many of our "optimizations" are actually ethical compromises?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Are we auditing our algorithms with the same rigor we audit our security?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;When did "move fast and break things" include breaking social trust?&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I dove deeper into this paradox between stated ethics and embedded ethics in a piece that explores solutions beyond just writing better Codes of Conduct. Not sharing it to self-promote, but because I genuinely think we need to have this conversation as a community: &lt;a href="https://blog.thecodejedi.online/2025/10/code-of-conduct-hidden-moral-frameworks.html" rel="noopener noreferrer"&gt;https://blog.thecodejedi.online/2025/10/code-of-conduct-hidden-moral-frameworks.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What ethical dilemmas have you encountered in your work? Have you ever had to push back against a feature that felt morally questionable? Let's talk about the real-world ethics of building systems that shape human behavior.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Is AI failing? (Part 2)</title>
      <dc:creator>Ed de Almeida</dc:creator>
      <pubDate>Fri, 03 Jan 2025 06:02:58 +0000</pubDate>
      <link>https://dev.to/eddealmeidajr/is-ai-failing-part-2-234f</link>
      <guid>https://dev.to/eddealmeidajr/is-ai-failing-part-2-234f</guid>
      <description>&lt;p&gt;In &lt;a href="https://dev.to/eddealmeidajr/is-ai-failing-part-1-3418"&gt;part 1 of this series&lt;/a&gt;, I reflected on how the hype surrounding AI mirrors past technological waves, such as the rise and fall of Expert Systems in the 1980s.&lt;/p&gt;

&lt;p&gt;Back then, the promise of AI transforming industries and solving complex problems was met with enthusiasm — and eventual disillusionment. Today, we find ourselves in a similar cycle, but the stakes are higher, and the misunderstandings are more widespread. In this article, let's dig deeper into what has changed since then and explore whether we're truly learning from history or simply repeating it.&lt;/p&gt;

&lt;p&gt;In Part 1, we explored how public fascination with tools like ChatGPT has led to unrealistic expectations about AI’s capabilities. This disconnect stems from the way LLMs function, which often gives the illusion of intelligence while lacking its true essence. To understand why this distinction matters, we need to revisit what makes LLMs so captivating—and so misunderstood.  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pattern Predictors, Not Thinkers&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;In the first article, I highlighted how people marveled at ChatGPT’s ability to generate coherent responses. However, this "intelligence" is no more than advanced pattern prediction. LLMs don’t "think"; they predict the next word based on patterns in their training data. This is a far cry from the intentional, goal-driven reasoning we associate with human intelligence.  &lt;/p&gt;
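
&lt;p&gt;To make "pattern prediction" concrete, here is a minimal sketch — a toy bigram model, nothing like a production LLM's architecture — that "predicts" the next word purely by counting which word follows which in a corpus, with no notion of meaning:&lt;/p&gt;

```python
from collections import Counter, defaultdict

# Toy "training data": a tiny corpus of words.
corpus = "the fire is hot the fire can burn the sun is hot".split()

# Count which word follows which: pure co-occurrence statistics.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word`, or None if unseen."""
    counts = following[word]
    if counts:
        return counts.most_common(1)[0][0]
    return None

# The model associates "hot" with its most frequent successor purely
# from word order; it has no concept of fire, heat, or anything else.
print(predict_next("hot"))
```

&lt;p&gt;Real LLMs replace this frequency table with a neural network over subword tokens and far longer contexts, but the underlying principle is the same: predict the next token from statistical patterns in the training data.&lt;/p&gt;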

&lt;p&gt;&lt;strong&gt;The Role of Expectations&lt;/strong&gt;  &lt;/p&gt;

&lt;p&gt;As discussed earlier, inflated expectations have always plagued AI advancements, from Expert Systems in the 1980s to today’s generative models. Ordinary people, unfamiliar with the nuances of AI, interpret fluency in language as evidence of deep understanding. But LLMs don’t understand—they merely mimic understanding, drawing from pre-existing data without grasping meaning.  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Absence of Intentionality and Goals&lt;/strong&gt;  &lt;/p&gt;

&lt;p&gt;True intelligence requires intentionality — the ability to form goals, pursue them, and adapt based on experience. When ChatGPT or similar models provide an answer, it’s not because they "want" to help; they’re simply following statistical algorithms. This lack of purpose underscores a fundamental limitation that separates these systems from human cognition.  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;No Understanding of Meaning&lt;/strong&gt;  &lt;/p&gt;

&lt;p&gt;In Part 1, I reflected on Marvin Minsky’s insight that intelligence cannot be modeled without first understanding its nature. LLMs embody this limitation — they process symbols without understanding their meaning. For instance, they know that "fire" often appears with words like "hot" or "burn," but they don’t comprehend the concept of fire as a physical or cultural phenomenon.  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dependence on Training Data&lt;/strong&gt;  &lt;/p&gt;

&lt;p&gt;We’ve seen how AI mirrors the data it’s trained on, for better or worse. In the same way that the limitations of Expert Systems were tied to hardware constraints in the 1980s, today’s LLMs are bound by the biases, inaccuracies, and gaps in their training datasets. Unlike humans, who can question and adapt, LLMs lack critical thinking and moral reasoning.  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;No Self-Awareness or Emotion&lt;/strong&gt;  &lt;/p&gt;

&lt;p&gt;In Part 1, I questioned whether replicating human intelligence is desirable, given our flaws. LLMs, devoid of self-awareness or emotion, avoid some of our pitfalls but also miss out on the richness that emotions and consciousness bring to decision-making and creativity.  &lt;/p&gt;

&lt;p&gt;Human intelligence isn’t just about processing information — it’s about understanding, adapting, and creating meaning. In this sense, LLMs are remarkable tools, but they are not intelligent beings. They reflect the strengths and weaknesses of their programming and data, and their "creativity" is limited to recombining existing information in new ways.  &lt;/p&gt;

&lt;p&gt;To build something truly intelligent, as Marvin Minsky suggested, we must first understand our own minds. And as I posed in the first article: given the state of humanity — with our wars, hunger, and environmental destruction — are we even ready to replicate intelligence?  &lt;/p&gt;

&lt;p&gt;The limitations of LLMs remind us that AI is still a long way from true intelligence. For now, they remain what they were always meant to be: tools for augmenting human capabilities, not replacing them.&lt;/p&gt;

&lt;p&gt;The story of Artificial Intelligence has always been one of both potential and caution. As we marvel at the capabilities of tools like LLMs, we must also remain grounded in their limitations. They are powerful allies but far from the sentient beings we often imagine them to be.&lt;/p&gt;

&lt;p&gt;In Part 1, I reflected on how our inflated expectations can lead to disillusionment. In this article, we’ve explored why LLMs, while extraordinary, cannot yet fulfill the dream of true intelligence. Perhaps that’s a good thing. Before we strive to replicate our minds in machines, let us first strive to understand — and improve — our own intelligence.&lt;/p&gt;

&lt;p&gt;AI has the potential to amplify our best qualities, but it can just as easily reflect our worst. If we proceed thoughtfully, we might find that AI doesn’t need to replace our intelligence to revolutionize our world. It simply needs to help us use the intelligence we already have, more wisely and compassionately.&lt;/p&gt;

&lt;p&gt;Maybe the real question isn’t whether AI is failing — it’s whether we’re ready to succeed alongside it.&lt;/p&gt;

&lt;p&gt;See you in the next article!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Is AI failing? (Part 1)</title>
      <dc:creator>Ed de Almeida</dc:creator>
      <pubDate>Fri, 30 Aug 2024 22:13:22 +0000</pubDate>
      <link>https://dev.to/eddealmeidajr/is-ai-failing-part-1-3418</link>
      <guid>https://dev.to/eddealmeidajr/is-ai-failing-part-1-3418</guid>
      <description>&lt;p&gt;Statistics are out there and people say numbers don't lie. About 80% of all AI projects fail.&lt;/p&gt;

&lt;p&gt;Is AI a complete failure? What is the problem with it?&lt;/p&gt;

&lt;p&gt;There is no problem at all. Not with AI anyway.&lt;/p&gt;

&lt;p&gt;The problem lies in the high expectations that have been built since ChatGPT was launched.&lt;/p&gt;

&lt;p&gt;Up until this point in human history, Artificial Intelligence was a subject for academics. Ordinary people had never had such direct contact with this topic.&lt;/p&gt;

&lt;p&gt;Suddenly ChatGPT was there, talking to them. Giving coherent answers. Writing organized, rational texts. And ordinary people thought, "Hey, that's real intelligence!"&lt;/p&gt;

&lt;p&gt;Academics knew, of course, it was just an LLM, a large language model trained to give those answers and write those texts. But who listens to academics anyway? Fantasy was much more fascinating, much more romantic, than reality.&lt;/p&gt;

&lt;p&gt;Within a week of ChatGPT’s appearance, YouTube was filled with videos about how to get rich with AI, how to write the perfect prompts, how to discover life, the universe, and everything. And the answer was no longer a simple “42”. Forget that, Douglas Adams! The answer now came in pages and pages of text written by that LLM.&lt;/p&gt;

&lt;p&gt;Soon entrepreneurs (and who isn't an entrepreneur these days?) started creating their own wonderful projects using Artificial Intelligence. Fanciful ideas, based only on the hype about ChatGPT. These are the projects that are failing now!&lt;/p&gt;

&lt;p&gt;This is not the first time this has happened. I am 56 years old now and have been working in software since 1984. Exactly 40 years. And this means that I was in the market in the late 80s, when another shock wave of Artificial Intelligence hit the world: Expert Systems.&lt;/p&gt;

&lt;p&gt;I still remember articles like "In five years we will no longer need doctors, because all medical knowledge will be condensed into computers and Expert Systems will perform consultations and diagnose illnesses."&lt;/p&gt;

&lt;p&gt;Almost 40 years later, we still need doctors. And no one talks about Expert Systems anymore.&lt;/p&gt;

&lt;p&gt;In those pre-Internet days, we ran into the limits of available computing power. We had the right idea at the wrong time. We knew it was possible to create elaborate knowledge bases, and we knew how to fill them with information about a specific branch of knowledge. But we had to work with old PC-XTs: an 8088 processor, 640 kilobytes of RAM, and 10-megabyte hard drives.&lt;/p&gt;

&lt;p&gt;We are now in the age of distributed computing. Memory has become cheap. Hard drives have become cheap. And who cares about them when we have server farms and the cloud? Theoretical modeling has also evolved significantly. What on earth is going wrong?&lt;/p&gt;

&lt;p&gt;Our expectations have grown too high! No one wants anything less than the solution to life, the universe and everything. And no one accepts "42" as an answer anymore!&lt;/p&gt;

&lt;p&gt;It turns out that, with all due respect to the creators of ChatGPT, and its imitators that spring up from the ground every day, true Artificial Intelligence is still a long way off. It is much more than Machine Learning and LLMs. And the researchers who work on this know what I'm talking about. Those who don't know are the ordinary people, who were excited by ChatGPT's answers, because they discovered that ChatGPT could write better than they could. Which, by the way, wasn't that hard.&lt;/p&gt;

&lt;p&gt;Assuming that one day we can truly define what intelligence is and that, based on this definition, we can create something similar, we will still have to face the reality that this artificial intelligence will be like our own. And when I think that we are destroying our planet and reducing the chances of survival of our species day by day, I wonder if we are really that intelligent, and if it is really worth reproducing our intelligence in machines.&lt;/p&gt;

&lt;p&gt;Yes, if we ever create a true artificial intelligence, it will be based on our own intelligence, with its strengths and weaknesses. Of course, it will have access to more data than an average person can gather in a lifetime, but it will also have to process it with the equivalent of a human mind.&lt;/p&gt;

&lt;p&gt;I had the opportunity to meet one of the pioneers of Artificial Intelligence research, Professor Marvin Minsky.&lt;/p&gt;

&lt;p&gt;In recent decades, Professor Minsky was more concerned with defining intelligence and understanding its processes than with modeling it computationally. For him, the limitation of research in this area was that we were not sure what we were trying to model: we needed to understand the human mind before we could replicate it.&lt;/p&gt;

&lt;p&gt;With all due respect to Professor Minsky, a brilliant man and a truly worthy human being, I will go a little further.&lt;/p&gt;

&lt;p&gt;I believe that we need to improve our intelligence before it becomes worthy of being replicated.&lt;/p&gt;

&lt;p&gt;As long as we have hunger, wars, terrorism by groups and states, children dying of starvation and other evils, are we really that intelligent? Is it really interesting to replicate something that is not working well? That is not solving the problems of our world?&lt;/p&gt;

&lt;p&gt;Yes, I know. This all got a little too philosophical. That's how I feel today. But since this is the first article in a series I plan to write on the subject here, I've allowed myself the freedom to address general issues before getting to the heart of the matter.&lt;/p&gt;

&lt;p&gt;I hope this introduction has given you food for thought. And I would be very happy to receive comments and criticism. As I said, I am 56 years old and therefore I am from the generation when criticism was not seen as a personal attack and we did not get depressed when we were criticized.&lt;/p&gt;

&lt;p&gt;See you in the next article!&lt;/p&gt;

&lt;p&gt;Ed de Almeida&lt;br&gt;
&lt;a href="mailto:edvaldoajunior@gmail.com"&gt;edvaldoajunior@gmail.com&lt;/a&gt;&lt;br&gt;
+55(41)99620-8429&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
