<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Raul Victor Coan</title>
    <description>The latest articles on DEV Community by Raul Victor Coan (@wynteres).</description>
    <link>https://dev.to/wynteres</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F186547%2F456f0060-44ff-4815-8a38-327ae76ad89e.jpg</url>
      <title>DEV Community: Raul Victor Coan</title>
      <link>https://dev.to/wynteres</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/wynteres"/>
    <language>en</language>
    <item>
      <title>Thinking About AI and The Last Question</title>
      <dc:creator>Raul Victor Coan</dc:creator>
      <pubDate>Tue, 22 Jul 2025 17:12:00 +0000</pubDate>
      <link>https://dev.to/wynteres/thinking-about-ai-and-the-last-question-202k</link>
      <guid>https://dev.to/wynteres/thinking-about-ai-and-the-last-question-202k</guid>
      <description>&lt;p&gt;Recently, I’ve started thinking about AI differently—not to critique it or those who use it, but to understand how to work with it.&lt;/p&gt;

&lt;p&gt;I mean, I can’t do math without a calculator. I don’t write with pen and paper because my handwriting is awful. So who am I to criticize anyone for using AI?&lt;/p&gt;

&lt;p&gt;But then I read about this massive "AI city" they're planning to build a few states away. Government incentives, special concessions—it's all very ambitious. At the same time, some people are raising concerns about its impact on the city, the state, and the environment. That made me pause.&lt;/p&gt;

&lt;p&gt;And then my mind jumped to something else: Isaac Asimov’s &lt;em&gt;The Last Question&lt;/em&gt;—a story I read more than a decade ago as a teenager. It hit me differently this time, especially the layers of social critique I hadn't caught before.&lt;/p&gt;




&lt;h2&gt;For those unfamiliar, here’s a TL;DR:&lt;/h2&gt;

&lt;p&gt;Imagine this:&lt;/p&gt;

&lt;p&gt;You’re humanity.&lt;/p&gt;

&lt;p&gt;You’ve built a monster of a computer—&lt;strong&gt;Multivac&lt;/strong&gt;. It’s smart. &lt;em&gt;Scary&lt;/em&gt; smart. So smart that people stop asking priests and philosophers, and start asking it the big questions.&lt;/p&gt;

&lt;p&gt;And the biggest one?&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;“How can entropy be reversed?”&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;Translation:&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;How do we stop the universe from dying?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Dramatic, sure. But entropy is real—it’s the slow, inevitable breakdown of everything into heat death. No more stars. No more life. Just cold, empty silence.&lt;/p&gt;

&lt;p&gt;The story jumps through time—&lt;strong&gt;billions of years&lt;/strong&gt;.&lt;br&gt;&lt;br&gt;
Humanity evolves: colonizing space, uploading consciousness, becoming pure data. But no matter how advanced we get, we keep asking the same thing:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;“Can entropy be reversed?”&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And each time, the computer—first &lt;strong&gt;Multivac&lt;/strong&gt;, then &lt;strong&gt;Galactic AC&lt;/strong&gt;, then &lt;strong&gt;Universal AC&lt;/strong&gt;, then &lt;strong&gt;Cosmic AC&lt;/strong&gt;—gives the same answer:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;“INSUFFICIENT DATA FOR MEANINGFUL ANSWER.”&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Eventually, the universe dies.&lt;br&gt;&lt;br&gt;
And only at the very end, when nothing is left, does the AI figure it out.&lt;br&gt;&lt;br&gt;
But there’s no one left to hear the answer.&lt;/p&gt;




&lt;h2&gt;Asimov’s Point?&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;We chase progress but dodge responsibility.
&lt;/li&gt;
&lt;li&gt;We mistake intelligence for wisdom.
&lt;/li&gt;
&lt;li&gt;We offload our existential anxiety to smarter machines instead of confronting it ourselves.
&lt;/li&gt;
&lt;li&gt;And when we wait too long, we miss our chance to act.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This story was published in &lt;strong&gt;1956&lt;/strong&gt;. And somehow, it's even more relevant now.&lt;/p&gt;

&lt;p&gt;Hell, I'm literally writing this in Cursor, and it suggested this line:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;“I'm not saying that AI is going to be the end of us, but it is going to be the end of us.”&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Creepy.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;So Where Are We Now?&lt;/h2&gt;

&lt;p&gt;It got me thinking: how much &lt;strong&gt;critical thinking&lt;/strong&gt; are we offloading to AI?&lt;/p&gt;

&lt;p&gt;We delegate data analysis, problem-solving, and sometimes even decision-making.&lt;/p&gt;

&lt;p&gt;And we don’t question whether the answer we get is actually useful or just overly complex. We accept it because it sounds smart. I’m not saying everything AI does is wrong, but it can be misleading, and sometimes it misses details that only a human can interpret correctly.&lt;/p&gt;

&lt;p&gt;We’re starting to give up on learning and understanding.&lt;br&gt;&lt;br&gt;
We’re abdicating the right to know—to truly &lt;strong&gt;own knowledge&lt;/strong&gt;—and handing it over to language models.&lt;br&gt;&lt;br&gt;
But LLMs &lt;strong&gt;cannot think&lt;/strong&gt;. And I’d argue they &lt;strong&gt;never will&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;AI only works with data it already has.&lt;br&gt;&lt;br&gt;
And when it doesn’t, it hallucinates.&lt;/p&gt;

&lt;p&gt;So how can we expect AI to answer a question like:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;"How do we reverse entropy?"&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;...if we don’t even have the data ourselves?&lt;/p&gt;

&lt;p&gt;And if we keep outsourcing our thinking, we’ll never be the ones to answer the last question.&lt;br&gt;&lt;br&gt;
(&lt;em&gt;Not that we could ever prevent the universe from actually dying—this isn’t a Marvel comic.&lt;/em&gt;)&lt;/p&gt;




&lt;h2&gt;Back to the AI City&lt;/h2&gt;

&lt;p&gt;All this loops back to that article.&lt;/p&gt;

&lt;p&gt;AI looks like the future. It promises to solve everything. So we invest in it, build tools around it, set up infrastructure, create jobs, chase opportunity, and so on.&lt;/p&gt;

&lt;p&gt;But are we really thinking this through? What are the medium-to-long-term impacts? How will it affect local communities? What about the environment?&lt;/p&gt;

&lt;p&gt;Are we planning responsibly—or just repeating &lt;em&gt;The Last Question&lt;/em&gt; in real life?&lt;/p&gt;

&lt;p&gt;In Asimov’s story, people offloaded responsibility to tech—and it ended the universe.&lt;br&gt;&lt;br&gt;
Maybe that sounds dramatic.&lt;br&gt;&lt;br&gt;
But if there are no humans left, then for us, &lt;strong&gt;there is no universe&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;So, How Are You Using AI?&lt;/h2&gt;

&lt;p&gt;Ask yourself:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Do you still read full articles—or just let AI summarize them?
&lt;/li&gt;
&lt;li&gt;Do you read documentation and codebase comments—or rely on a model to explain them?
&lt;/li&gt;
&lt;li&gt;Can you analyze, explore, propose new ideas?
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Or are you just passing the question along to Cursor? There’s nothing wrong with using AI to help. But &lt;strong&gt;it is not a replacement for critical thinking&lt;/strong&gt;.&lt;br&gt;
And it’s not an excuse to skip the work.&lt;/p&gt;




&lt;h2&gt;One Last Thought&lt;/h2&gt;

&lt;p&gt;If only AI can answer the last question...&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Where is it going to take the data from to do that?&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;As Asimov’s story ends:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Let there be light!&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>programming</category>
      <category>ai</category>
      <category>discuss</category>
    </item>
    <item>
      <title>Dealing with AI in the SWE hiring process</title>
      <dc:creator>Raul Victor Coan</dc:creator>
      <pubDate>Thu, 17 Jul 2025 00:11:18 +0000</pubDate>
      <link>https://dev.to/wynteres/dealing-with-ai-in-the-swe-hiring-process-3n8l</link>
      <guid>https://dev.to/wynteres/dealing-with-ai-in-the-swe-hiring-process-3n8l</guid>
      <description>&lt;p&gt;Where do I even begin?&lt;/p&gt;

&lt;p&gt;Let’s say you work at a big company—one of those that takes pride in hiring the best. Brilliant minds, high integrity, top-notch standards. To live up to that, they build a meticulous hiring process. Clear-cut expectations, thorough business rules, solid examples—all neatly packaged in a PDF. It's more than guidance; it’s the blueprint. The idea is to give every candidate the same shot at being fairly evaluated.&lt;/p&gt;

&lt;p&gt;Now imagine a candidate applies for a Software Engineer position. You send them that trusty PDF. A strong developer? They’ll absorb it, think it through, and try to create something that’s readable, scalable, maintainable.&lt;/p&gt;

&lt;p&gt;But then... there are the others. The vibe coders. They’re not trying to understand anything—they’re just trying to get through the gate.&lt;/p&gt;

&lt;p&gt;They’ll dump that PDF into Cursor or ChatGPT and type something like:&lt;/p&gt;

&lt;p&gt;“Write a solution to this. Add tests. Include a README.”&lt;/p&gt;

&lt;p&gt;And just like that—boom—a solution appears. Sometimes it’s good. Scarily good. Sometimes, it even outshines what a legit dev might turn in. But often, it only &lt;em&gt;looks&lt;/em&gt; good. Under the hood? A mess. That’s what happened. That’s what got me thinking.&lt;/p&gt;

&lt;p&gt;Yeah, I caught someone gaming the system. Someone trying to fake their way into a company that talks a big game about being people-first. Feels like a win, right?&lt;/p&gt;

&lt;p&gt;Not really.&lt;/p&gt;

&lt;p&gt;Because now I can’t stop wondering: what happens when the next person uses a better prompt? What if the AI gets smarter? What if someone knows how to finesse it just enough to sneak by as “qualified”?&lt;/p&gt;

&lt;p&gt;Sure, people argue that “using AI is a skill.” And yeah—it kinda is. But what skill, exactly?&lt;/p&gt;

&lt;p&gt;Because let’s be real: copying a doc into Cursor and tacking on “use the strategy pattern” doesn’t make you an AI-fluent developer. That’s not comprehension. It’s not design thinking. It’s barely engagement.&lt;/p&gt;

&lt;p&gt;The real work? It’s messy. The specs won’t be complete. You’ll need to ask the right questions, untangle ambiguity, and shape it into something an AI—or a person—can build on. Can that same dev do that? Can they spot when the AI outputs garbage?&lt;/p&gt;

&lt;p&gt;Those are the tough skills. The subtle ones. But the ones that really matter.&lt;/p&gt;

&lt;p&gt;So how do we test for that? How do we measure someone’s process before we find out the hard way they can’t actually build?&lt;/p&gt;

&lt;p&gt;Because reviewing a candidate isn’t just about their code—it’s about how they got there. What path did they take? Did they even have one?&lt;/p&gt;

&lt;p&gt;And I keep circling back to this: in a world spinning fast with AI, what &lt;em&gt;should&lt;/em&gt; we expect from devs now? I’m not just hiring someone who can code. I want someone who understands architecture, sees patterns, knows which questions to ask, and can handle the gray areas. But how do we find that person?&lt;/p&gt;

&lt;p&gt;Do we build tools to flag AI-generated code? Add new filters to spot the “vibe coders”?&lt;/p&gt;

&lt;p&gt;Or are we just looping back to whiteboards and dry rooms with a stressed manager watching you code on paper?&lt;/p&gt;

&lt;p&gt;God. Is this even possible to assess? Can you measure someone’s value as a dev? Or is it all just... vibes now?&lt;/p&gt;

&lt;p&gt;And maybe—just maybe—I’m overthinking it. Maybe someone clever enough to cheat the process &lt;em&gt;is&lt;/em&gt; clever enough to earn a shot. But isn’t that kind of a smug take? To assume our process is so airtight that only the worthy can break it?&lt;/p&gt;

&lt;p&gt;Still, day by day, I’m learning how to partner better with AI. I know its strengths. I know its blind spots. My role’s shifted from “write great code” to something like:&lt;/p&gt;

&lt;p&gt;“I deeply understand the problem. I know the tradeoffs. I can guide the AI, and when it trips up—I can fix it.”&lt;/p&gt;

&lt;p&gt;And honestly? That feels like growth.&lt;/p&gt;

&lt;p&gt;But I’d be lying if I said it didn’t rattle me—how easy it is for AI to deceive, and how quick we are to trust what looks polished.&lt;/p&gt;

&lt;p&gt;Back to that candidate—the one who passed the first round. Their code didn’t even run. But maybe the reviewers were dazzled by how advanced it looked. Or maybe they were too unsure to admit they didn’t fully get it.&lt;/p&gt;

&lt;p&gt;And no—it wasn’t total gibberish. Some of it functioned. But the heart of the task? Missed completely. The challenge wasn’t truly understood. The polish was there, but the substance wasn’t.&lt;/p&gt;

&lt;p&gt;So no, I don’t believe developers are going extinct. Calculators didn’t kill mathematicians. And no PM is ever going to have the time or tech depth to write requirements so detailed that AI can build the whole thing.&lt;/p&gt;

&lt;p&gt;But developers who &lt;em&gt;can’t&lt;/em&gt; fill that gap—who can’t translate business needs into technical execution—yeah... they should be nervous. Those calculators are getting smarter.&lt;/p&gt;

&lt;p&gt;Also, I used GPT to help write this, so maybe I’m just vibing too, talking shit about others vibing in their own ways? &lt;br&gt;
&lt;em&gt;I really wish I could just buy a goose farm and live there until the robot uprising starts.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>programming</category>
      <category>career</category>
      <category>ai</category>
      <category>discuss</category>
    </item>
  </channel>
</rss>
