<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Juan Gómez</title>
    <description>The latest articles on DEV Community by Juan Gómez (@longor).</description>
    <link>https://dev.to/longor</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F472617%2F6b7f3a2d-b8bd-4794-920f-74d6c5395a44.jpg</url>
      <title>DEV Community: Juan Gómez</title>
      <link>https://dev.to/longor</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/longor"/>
    <language>en</language>
    <item>
      <title>Fault-Tolerant Quantum Computers for Muggles</title>
      <dc:creator>Juan Gómez</dc:creator>
      <pubDate>Wed, 02 Jul 2025 01:41:28 +0000</pubDate>
      <link>https://dev.to/longor/fault-tolerant-quantum-computers-for-muggles-58id</link>
      <guid>https://dev.to/longor/fault-tolerant-quantum-computers-for-muggles-58id</guid>
      <description>&lt;p&gt;"Fault-Tolerant Quantum Computers” might sound like jargon, but what we’re really talking about are universally useful quantum computers—machines that can do something meaningful for real-world problems. &lt;br&gt;
Let me share the excitement about what have been recently published  because I genuinely believe it is big. Bottom line: &lt;strong&gt;We know how to build a quantum computer that is going to be universally useful&lt;/strong&gt;. One that will be able to function despite all the problems that naturally arise from the quantum world. One that can correct information fast enough so it can be used to solve a non-negligible range of problems way better than our classical counterparts.&lt;/p&gt;

&lt;p&gt;This is an attempt to explain all the nitty-gritty details in plain English, so you can explain them to your colleagues over a few cañas (beers) on a Thursday after work.&lt;/p&gt;

&lt;h2&gt;
  
  
  Correcting noise
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://dev.to/longor/the-quantum-computing-race-for-muggles-1bgi"&gt;I’ve already talked about it in the past&lt;/a&gt;, but let me insist, because this is the key to understanding why we are doing what we are doing.&lt;/p&gt;

&lt;p&gt;Whenever we try to act upon our qubits to take computational advantage of their quantum properties, the qubits tend to disorientate over time, and we can’t get anything meaningful out of them. So yes, we need a way to fix this; otherwise these expensive machines are worthless. So far, we haven’t found a way to keep them completely focused for long enough, and it’s probably not even possible, but that is ok. Did you know that our classical transistor-based hardware is actually not perfect either? Yes, it makes mistakes from time to time too. The difference, though, is that we found a way long ago to correct those mistakes, which is why computers, phones and fridges are useful nowadays.&lt;br&gt;
We think there’s a way to fix our qubits, too. We have run experiments, and we are confident that this can be done. That was not the case until very recently; we kept the faith, but faith was never going to fix the qubits - science will.&lt;/p&gt;

&lt;p&gt;Even more interesting, the techniques used for correcting errors in our qubits are heavily inspired by those used in classical hardware. It all comes down to using several qubits to represent one piece of information, plus some auxiliary qubits to help in the process. This is why you hear about “logical qubits” versus “physical qubits”: a logical qubit is error-corrected and useful, but it requires several physical qubits - each playing a different role - to make that possible.&lt;/p&gt;
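&lt;p&gt;To make the logical-vs-physical idea tangible, here is a toy sketch. Real quantum error correction relies on stabilizer measurements rather than direct copies (you cannot clone a qubit), but the classical three-bit repetition code below captures the core trade: store one logical bit redundantly across several physical bits, and recover it by majority vote. Everything here is illustrative, not any vendor's actual scheme.&lt;/p&gt;

```python
import random

def encode(bit):
    # One "logical" bit stored redundantly across three "physical" bits.
    return [bit, bit, bit]

def apply_noise(bits, p):
    # Each physical bit flips independently with probability p.
    return [b ^ 1 if random.random() < p else b for b in bits]

def decode(bits):
    # Majority vote: the logical bit survives if at most one physical bit flipped.
    return 1 if sum(bits) >= 2 else 0

random.seed(0)
p, trials = 0.05, 100_000
raw_errors = sum(apply_noise([0], p)[0] for _ in range(trials))
logical_errors = sum(decode(apply_noise(encode(0), p)) for _ in range(trials))
print(raw_errors / trials)      # close to p = 0.05
print(logical_errors / trials)  # close to 3p^2 - 2p^3, under 0.01
```

&lt;p&gt;With a 5% physical error rate, the logical error rate lands below 1%: redundancy buys reliability, at the price of several physical bits per logical bit - which is exactly the price logical qubits pay too.&lt;/p&gt;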

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnaule265whze7v2ah9i7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnaule265whze7v2ah9i7.png" alt="A logical qubits is not more than a collection of physical qubits with different purposes in order to correct errors" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Surface Codes
&lt;/h2&gt;

&lt;p&gt;There’s a myriad of techniques for correcting qubit errors, each with its pros and cons. Most of the industry has focused on a family of error-correction techniques called “surface codes”. They work very well with noisy qubits, but their major drawback is the huge number of physical qubits required to protect a single logical qubit: something in the order of thousands.&lt;/p&gt;
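&lt;p&gt;Where do the thousands come from? A back-of-the-envelope estimate, using the commonly quoted surface-code scaling: a distance-d patch needs about 2*d*d - 1 physical qubits, and the logical error rate shrinks roughly as (p/p_th)**((d+1)/2), with a threshold p_th of around 1%. The constants below are illustrative assumptions, not figures for any particular machine.&lt;/p&gt;

```python
def distance_needed(p_phys, p_th, p_logical_target):
    # Smallest odd code distance whose estimated logical error rate
    # falls below the target, under the rough suppression formula above.
    d = 3
    while (p_phys / p_th) ** ((d + 1) / 2) > p_logical_target:
        d += 2  # surface-code distances are conventionally odd
    return d

def physical_per_logical(d):
    # Rotated surface code: d*d data qubits plus d*d - 1 measurement qubits.
    return 2 * d * d - 1

# Physical error rate 0.1%, threshold 1%, target logical error rate 1e-12.
d = distance_needed(p_phys=1e-3, p_th=1e-2, p_logical_target=1e-12)
print(d, physical_per_logical(d))  # on the order of a thousand physical qubits
```

&lt;p&gt;Even with these rather optimistic numbers, you already need a code distance in the twenties and over a thousand physical qubits for a single good logical qubit, which is where the “order of thousands” intuition comes from.&lt;/p&gt;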

&lt;h2&gt;
  
  
  qLDPC &amp;amp; Gross Codes
&lt;/h2&gt;

&lt;p&gt;Scalability (in number of qubits) matters if the goal is to build a quantum computer that can solve real-world practical problems. One promising family of codes for building large-scale fault-tolerant quantum computers is qLDPC (quantum low-density parity-check) codes. Within this family there’s a notable member known as the “gross code”, so called because it encodes 12 logical qubits into 144 physical qubits - and 144, a dozen dozen, is a gross.&lt;br&gt;
It offers several attractive properties, but also comes with tradeoffs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It doesn’t require too many physical qubits per logical qubit - roughly 12 (or 24, counting the auxiliary check qubits) - so it scales very well &lt;/li&gt;
&lt;li&gt;It’s relatively easy to build specialized hardware that reduces latency.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;On the flip side:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Detecting and interpreting errors is more challenging &lt;/li&gt;
&lt;li&gt;It requires much lower error rates in the physical qubits &lt;/li&gt;
&lt;li&gt;The hardware architecture must also be tailored to this technique&lt;/li&gt;
&lt;/ul&gt;
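&lt;p&gt;To put the scalability argument into numbers, here is an illustrative comparison based on the publicly reported parameters of IBM's [[144, 12, 12]] gross code (144 data qubits plus 144 check qubits, so 288 physical qubits hosting 12 logical qubits) against one comparable-distance surface-code patch per logical qubit. The honest comparison is far subtler (decoding cost, cycle time, connectivity), so treat this purely as an order-of-magnitude sketch.&lt;/p&gt;

```python
LOGICAL_QUBITS = 12

# One gross-code block: 144 data + 144 check qubits for all 12 logical qubits.
gross_total = 288

# Surface-code alternative: one distance-13 patch (2*d*d - 1 qubits) per logical qubit.
d = 13
surface_total = LOGICAL_QUBITS * (2 * d * d - 1)

print(gross_total, surface_total)          # 288 vs 4044
print(round(surface_total / gross_total))  # roughly a 14x difference
```

&lt;p&gt;An order of magnitude fewer physical qubits for the same logical capacity is exactly the kind of saving that makes a large machine plausible sooner.&lt;/p&gt;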

&lt;p&gt;It’s a very strong commitment, so it had better work as expected!&lt;br&gt;
As you can imagine, making such an important decision requires extensive experimentation, prototyping, and an organization designed for rapid hypothesis validation. This gives us high confidence in the chosen path. More importantly, it allows us to quickly adjust course if needed.&lt;/p&gt;

&lt;p&gt;These new fault-tolerant quantum computers are not the end of the journey–they're the beginning of the truly useful era for quantum computing.&lt;br&gt;
They are not initially going to solve all kinds of problems… I know what you are thinking… Don't worry, &lt;a href="https://thequantuminsider.com/2025/05/24/google-researcher-lowers-quantum-bar-to-crack-rsa-encryption/" rel="noopener noreferrer"&gt;your bitcoins are going to be safe&lt;/a&gt; for many years to come.&lt;br&gt;
They are definitely going to solve some of today's intractable problems, though.&lt;br&gt;
I can’t wait to see how this technology unfolds—and how it shapes science, industry, and everyday life. &lt;br&gt;
For me, it’s already having a great impact :).&lt;/p&gt;

</description>
    </item>
    <item>
      <title>The Quantum Computing Race for Muggles</title>
      <dc:creator>Juan Gómez</dc:creator>
      <pubDate>Wed, 15 Jan 2025 03:16:21 +0000</pubDate>
      <link>https://dev.to/longor/the-quantum-computing-race-for-muggles-1bgi</link>
      <guid>https://dev.to/longor/the-quantum-computing-race-for-muggles-1bgi</guid>
      <description>&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;p&gt;This is an attempt to explain in plain English, from an insider, what’s going on with the quantum computing race that has been giving investors a lot of mixed feelings lately. Our own particular singularity will be reached when someone builds an error-corrected quantum computer capable of &lt;strong&gt;universal computation&lt;/strong&gt; (something apparently forgotten amid the media hysteria). But until then, there are key milestones we must conquer.&lt;/p&gt;

&lt;h2&gt;
  
  
  AI enters the room
&lt;/h2&gt;

&lt;p&gt;AI is the new kid on the block, everyone is paying attention to it, and for good reason, so it's time for us, the quantum computing community, to refocus on what is really important. The new kid has astronomical potential to dramatically change everything we know, even in the short-to-mid term. This leaves no room for us to make empty promises anymore. &lt;strong&gt;We need to perform now or watch investors shift their focus to AI.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The prize is mouth-watering and the economic reward is beyond expectations, but what really motivates most of the people who have embraced this odyssey is the opportunity to realize one of humanity's most ambitious goals. We all want to be on the team that earns a place in the pantheon of humanity's most impactful achievements.&lt;/p&gt;

&lt;p&gt;With such stakes, it’s no surprise that some contenders muddy the waters to grab attention. It’s a strategy like any other: publish a paper, let the marketing team do their job, and wait for the media to spin epic, click-bait stories no one will bother to fact-check. After all, who spends their Sunday investigating why error-corrected non-Clifford gates are so hard to implement? Muggles certainly don’t. It’s easier—and more profitable—to talk about machines that open portals to multiple universes. (Don’t get me wrong—I love science fiction too!)&lt;/p&gt;

&lt;p&gt;It took immense effort to move beyond the “Quantum Supremacy” nonsense, and now we’ve kind of agreed that &lt;strong&gt;Quantum Advantage&lt;/strong&gt; is the next big thing before the &lt;em&gt;real&lt;/em&gt; next big thing.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Quantum Advantage
&lt;/h2&gt;

&lt;p&gt;I won’t answer this without some dramatic context first. If there’s one thing we’ve learned from past “quantum supremacy” delusions, it’s that finding a problem worth solving is actually the hard part. If the chosen problem is so niche that only a few mathematicians with supernatural powers can understand it, then it’s not really quantum advantage. It’s progress, sure, and that’s welcome. But classical computers won’t be in any real danger.&lt;/p&gt;

&lt;p&gt;Drawing attention to yourself is tempting, especially when you want to captivate investors and keep the research going. Quantum computing is so unintuitive and complex that progress announcements are easily misinterpreted—and some players exploit this. It’s unfair, but hey, who are we to spoil the party for the Muggles? Let them have their fun!&lt;/p&gt;

&lt;p&gt;Now, back to the big question: What is Quantum Advantage?&lt;br&gt;
Here’s a somewhat disappointing answer: The community hasn’t reached a consensus on its definition yet.&lt;/p&gt;

&lt;p&gt;What we have agreed upon so far: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; The problem must be a &lt;strong&gt;real-life problem&lt;/strong&gt; (™).&lt;/li&gt;
&lt;li&gt; There’s currently no classical way to solve it efficiently (though eventually, someone might approximate it using classical methods—there are plenty of brilliant minds out there).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The quantum computing community must align on a clear definition to send a strong and unified message to the world: &lt;strong&gt;Be ready. This is coming.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;But what is preventing us from achieving Quantum Advantage today?&lt;/p&gt;

&lt;p&gt;Ok, let’s talk about the elephant in the room: &lt;strong&gt;Noise&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Noise everywhere
&lt;/h2&gt;

&lt;p&gt;Qubits are noisy, qubits are sensitive, qubits are ephemeral. This is their nature, at least from our macro-world perspective. We are building a machine that provides qubits with a home where they feel calm, relaxed and open-minded, so they are willing to converse and chat about love, music, Frodo Baggins. But there are forces outside this home actively trying to disrupt the enjoyable moment: cosmic rays, heat, vibrations, magnetic fields, photons, fake news, northern lights (yes, really!), the full moon, Elon, you name it. There are many distractions out there that turn our conversations into some sort of unspeakable gibberish where nothing makes sense anymore. This is what we call error accumulation. There’s no practical computation when all these errors stack up, and they do so very quickly! An unmatched engineering challenge, if you ask me. &lt;/p&gt;
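&lt;p&gt;A one-line model shows how fast the gibberish takes over: if each gate goes wrong with probability p, an n-gate circuit finishes cleanly with probability (1 - p)**n, which collapses exponentially. The numbers below are illustrative, not measurements from any device.&lt;/p&gt;

```python
def success_probability(n_gates, p_error):
    # Probability that every one of n_gates independent gates succeeds.
    return (1 - p_error) ** n_gates

# Even a very good 0.1% error rate per gate ruins long computations.
for n in (10, 100, 1000, 10_000):
    print(n, success_probability(n, 0.001))
```

&lt;p&gt;At a thousand gates you are already below a 37% chance of a clean run, and by ten thousand gates the result is essentially always corrupted - hence the need to correct errors faster than they accumulate.&lt;/p&gt;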

&lt;p&gt;This is a battle to the death: either we keep these errors under control or we die trying. &lt;/p&gt;

&lt;p&gt;We will only win this battle once we know how to keep the noise and the outer-world distractions away from the qubits. Error-corrected qubits are where the millions are being spent. &lt;/p&gt;

&lt;p&gt;I have some good news for you. We’re starting to see &lt;strong&gt;green shoots&lt;/strong&gt; of progress. Confidence in the field is higher than ever, and this is a much-needed boost. Quantum computing &lt;strong&gt;will&lt;/strong&gt; happen, but we’re not there yet. We’re still far from declaring victory—but we’re visualizing it. (Okay, I’m biased—I’ve been working in this field for eight years! Let me be excited!)&lt;/p&gt;

&lt;p&gt;So now we know that quantum advantage is only going to happen once we have error-corrected qubits… right? Hmmm… Not entirely. While &lt;strong&gt;error-corrected qubits&lt;/strong&gt; are critical, there’s also room for creative approaches. We’re exploring &lt;strong&gt;noise mitigation&lt;/strong&gt; as a workaround for specific problems. This doesn’t fully solve the noise problem, but it allows us to clean up noisy results through post-processing and extract meaningful data. This approach works for small, specific problems—real-life ones, no less—but it doesn’t scale well. The ultimate goal remains a &lt;strong&gt;fault-tolerant quantum computer&lt;/strong&gt; (FTQC).&lt;/p&gt;
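&lt;p&gt;For a concrete flavor of noise mitigation, here is a toy sketch of one such technique, zero-noise extrapolation (ZNE): run the circuit at deliberately amplified noise levels, then extrapolate the measured expectation value back to the zero-noise limit. The "hardware" below is faked with a simple linear decay, purely for illustration; real devices need many shots and more careful fitting.&lt;/p&gt;

```python
def noisy_expectation(scale, ideal=1.0, decay=0.1):
    # Stand-in for a hardware measurement: the observed expectation value
    # shrinks linearly as the noise is amplified by `scale`.
    return ideal - decay * scale

# "Measure" at amplified noise scales 1x, 2x, 3x.
scales = [1.0, 2.0, 3.0]
values = [noisy_expectation(s) for s in scales]

# Least-squares linear fit y = a + b*x, evaluated at x = 0 (zero noise).
n = len(scales)
sx, sy = sum(scales), sum(values)
sxx = sum(s * s for s in scales)
sxy = sum(s * v for s, v in zip(scales, values))
b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
a = (sy - b * sx) / n
print(a)  # extrapolated zero-noise estimate, recovering roughly 1.0
```

&lt;p&gt;We never run the circuit noise-free; we infer what the noise-free answer would have been. That's the post-processing spirit of mitigation in a nutshell.&lt;/p&gt;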

&lt;h2&gt;
  
  
  Let's talk about Bitcoins
&lt;/h2&gt;

&lt;p&gt;No, your &lt;strong&gt;Bitcoins are not in danger&lt;/strong&gt; anytime soon.&lt;br&gt;
By the time a quantum computer can break cryptography, all major industries will have been enjoying the benefits of quantum computation for years. There's plenty of time to get ready and fix the situation... You don't have to believe me, of course, but do let me know, because Bitcoin is too expensive for me right now; when we announce quantum advantage in the coming years, I'll be ready for the investment of my life.&lt;/p&gt;

&lt;p&gt;Finally, let’s touch on why recent claims of quantum advantage fall short: &lt;strong&gt;Universal Computation.&lt;/strong&gt; But that’s a discussion for another day. 🙂&lt;/p&gt;

&lt;p&gt;Quantum advantage is coming, even if the definition is still evolving. We’re closer than ever, but there’s still much work to be done. Let’s keep separating the noise from the signal and moving forward.&lt;/p&gt;

&lt;p&gt;Stay tuned.&lt;/p&gt;

</description>
      <category>quantumcomputing</category>
    </item>
    <item>
      <title>Complexity Determines Everything</title>
      <dc:creator>Juan Gómez</dc:creator>
      <pubDate>Thu, 12 Dec 2024 02:58:59 +0000</pubDate>
      <link>https://dev.to/longor/complexity-determines-everything-12m</link>
      <guid>https://dev.to/longor/complexity-determines-everything-12m</guid>
      <description>&lt;p&gt;Not too long ago, a few years back, I started building a perception I've had for a very long time, 25 years (of professional career), which sounds quick to say. I've built a mental model that allows me to approach software projects with an extremely high degree of confidence. Extrapolating from the wonderful Cynefin framework, I could say that in most cases, I can move from the domain of the complex to the domain of the complicated, and from there, to the domain of the simple. I'm going to try an unprecedented communication exercise to convey this somewhat abstract mental model and see what comes out.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Finnsigm4cfnoxeca8ed5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Finnsigm4cfnoxeca8ed5.png" alt="Cynefin is a framework and a mental model" width="591" height="472"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  A Developer’s Reality
&lt;/h2&gt;

&lt;p&gt;If you've been working for a few years in the wonderful field of software development, you'll have realized a great truth:&lt;br&gt;
Formal training (a college degree, a bootcamp, etc.) is rarely enough. In fact, I can confirm something that might be a great fear or a great motivation: you're going to be studying and adapting for the rest of your professional career. That's just how it is; trying to survive without continuous learning only leads to frustration.&lt;/p&gt;

&lt;h2&gt;
  
  
  Learn to identify complexity
&lt;/h2&gt;

&lt;p&gt;One of the biggest errors I've seen repeated countless times is related to the vague interpretation of many so-called best practices, principles, and methodologies. And it's not a common mistake for nothing; I myself have committed it hundreds of times. Often this misinterpretation ends up manifesting as a very serious problem: over-engineering, or, as Fred Brooks brilliantly exposed in his paper "No Silver Bullet", accidental complexity.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9v3y8k6ugqgtpfhobnjc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9v3y8k6ugqgtpfhobnjc.png" alt="Essential and Accidental complexity over Effort and Time" width="800" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  On Over-engineering
&lt;/h2&gt;

&lt;p&gt;My mistake was failing to assess the complexity of the problem, or misjudging the domain of complexity I was operating in, which inevitably led to over-engineering. You only realize how much pain it causes when you have to maintain your own creation and start worrying about things like delivery performance or your team's health (developer experience).&lt;/p&gt;

&lt;p&gt;I'll give you an example that was on everyone's lips for a while (until ChatGPT diverted all our attention): microservices! What madness has everyone gotten into with microservices architecture! No, microservices architecture is not bad. Yes, your experience with this architecture was horrific, because the essential complexity of the problem you wanted to solve probably did not justify using it. And yes, microservices architecture adds a great deal of complexity, and it is crushing you because you failed to evaluate your domain of complexity.&lt;/p&gt;

&lt;p&gt;Another example: yes, Domain-Driven Design truly marks a before and after in how we build products in this industry. But Eric Evans himself says it explicitly, even in the subtitle of his extremely boring book: "Tackling Complexity in the Heart of Software". You don't need to implement each and every tactical pattern Evans proposes if what you're building is a CRUD app where you can tell me from memory how many tables you'll use. In fact, you don't even need to decouple Domain from Infrastructure! You have 1,000 lines of business code, Juan! The cognitive load is minimal! You don't need to pay for another abstraction; it only adds extra complexity. Remember: all indirections have a cost.&lt;/p&gt;
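&lt;p&gt;To make that concrete, here is a deliberately boring, hypothetical sketch (every name is made up): for a small CRUD feature, one direct function against the database is often all the architecture the essential complexity justifies - no repository interface, no domain entity, no dependency-injected unit of work.&lt;/p&gt;

```python
import sqlite3

def create_user(db_path, name, email):
    # One table, one insert, one query. The whole "layer" fits on a screen.
    con = sqlite3.connect(db_path)
    try:
        con.execute("CREATE TABLE IF NOT EXISTS users (name TEXT, email TEXT)")
        con.execute("INSERT INTO users VALUES (?, ?)", (name, email))
        con.commit()
        return con.execute("SELECT COUNT(*) FROM users").fetchone()[0]
    finally:
        con.close()

print(create_user(":memory:", "Juan", "juan@example.com"))  # 1
```

&lt;p&gt;When the table count and the business rules grow, that's your signal that extra abstraction has started to pay for itself - and not a moment before.&lt;/p&gt;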

&lt;h2&gt;
  
  
  Keep learning
&lt;/h2&gt;

&lt;p&gt;There's something even worse and more perverse, though, and I'm going to be very clear and somewhat harsh: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If you don't know how to separate responsibilities&lt;/li&gt;
&lt;li&gt;You ignore the benefits of writing readable code&lt;/li&gt;
&lt;li&gt;You are unaware of architectures that allow you to test your software without severely affecting your mental health&lt;/li&gt;
&lt;li&gt;You name variables with the first thing that comes to your mind&lt;/li&gt;
&lt;li&gt;You are not aware of the consequences of coupling or low cohesion&lt;/li&gt;
&lt;li&gt;You are not able to articulate a coherent testing strategy&lt;/li&gt;
&lt;li&gt;You are unfamiliar with basic design patterns&lt;/li&gt;
&lt;li&gt;You make decisions without being guided by data&lt;/li&gt;
&lt;li&gt;You don't talk to your Domain experts&lt;/li&gt;
&lt;li&gt;You create proofs of concept that drag on for months and then, of course, complain about things like "the code isn't ready for Production!" or "stakeholders won't let me rebuild it from scratch!". Juan! Wake up! Your team just spent four months’ worth of salaries &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I'm sorry, but complexity is going to eat you alive. Every professional programmer who wants a successful career should arm themselves with all the knowledge at their disposal to fight complexity. In fact, if you ask me, I would say that software engineering is precisely about keeping accidental complexity under control, and to fight it you need knowledge. Don't skip the introductory chapters of the top technical books; almost all of them talk precisely about complexity, because it's important, because &lt;strong&gt;it determines everything&lt;/strong&gt;. Otherwise, be prepared for frustration, stress... survival. Well, knowledge isn't a guarantee either, okay? Paraphrasing once more, if you'll allow me, the illustrious Frederick Brooks: there is "No Silver Bullet". But with this arsenal of knowledge at your disposal, you will have maximized your chances of enjoying a much more pleasant and successful professional career.&lt;/p&gt;

&lt;h2&gt;
  
  
  On Agile
&lt;/h2&gt;

&lt;p&gt;One last thing: I really need to tell you how my epic journey toward discovering new ways to fight complexity, from a technical leadership role, has led me back to the world of &lt;strong&gt;Agile&lt;/strong&gt;. Yes, I rejected it for a long time, and I think I wasn't wrong back then, but now? Now it makes a lot of sense. And a personal reflection, along with a prediction: my intuition about why Agile is not the default option in most projects today is that 15 years ago the Rock Star Developa' would save your ass every single time, whereas today you need several teams to support your business. It's now that Agile shines; it's now that it makes sense. The prediction, obvious at this point, is that Agile will return, whether people like it or not, and I believe this is good news.&lt;/p&gt;

&lt;h2&gt;
  
  
  Radical Simplicity
&lt;/h2&gt;

&lt;p&gt;I'm wrapping up. There will come a point in your professional career where you will correctly identify the domain of complexity in which you move, you will start using all that acquired knowledge when necessary, and you'll see how everything starts to flow in a marvelous rhythmic harmony... okay, sorry, it's not that beautiful; there are many other things that will escape your control, but even so, it doesn't matter, because you will know how to react and adapt accordingly. It is at that point that you'll realize that all the principles and concepts you learned are eclipsed by this one: &lt;strong&gt;Radical simplicity&lt;/strong&gt;. Honor it, and you will enjoy your profession much more.&lt;/p&gt;

&lt;p&gt;And that's it. Master complexity, embrace simplicity, and thrive in your career.&lt;/p&gt;

</description>
      <category>softwareengineering</category>
    </item>
  </channel>
</rss>
