<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ali Aldahmani</title>
    <description>The latest articles on DEV Community by Ali Aldahmani (@ali_aldahmani).</description>
    <link>https://dev.to/ali_aldahmani</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3525354%2F959945fd-56a9-499b-9cea-a91f5a2b869d.jpeg</url>
      <title>DEV Community: Ali Aldahmani</title>
      <link>https://dev.to/ali_aldahmani</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ali_aldahmani"/>
    <language>en</language>
    <item>
      <title>Is Learning to Code Still Worth It in the Age of AI?</title>
      <dc:creator>Ali Aldahmani</dc:creator>
      <pubDate>Tue, 10 Mar 2026 01:12:23 +0000</pubDate>
      <link>https://dev.to/ali_aldahmani/is-learning-to-code-still-worth-it-in-the-age-of-ai-45o1</link>
      <guid>https://dev.to/ali_aldahmani/is-learning-to-code-still-worth-it-in-the-age-of-ai-45o1</guid>
      <description>&lt;p&gt;A conversation that changed the way I think about programming.&lt;/p&gt;




&lt;p&gt;I'll be honest, I had a moment of doubt recently. I'm in my last year and a half of university, majoring in AI, and the more I looked at what was happening in the tech world, the more a quiet question kept nagging at me: Is any of this still worth it?&lt;/p&gt;

&lt;p&gt;Every semester, I sit through classes on C++, Java, and Python — OOP concepts, data structures, and design patterns. Meanwhile, I watch people on social media generate entire working applications just by typing a sentence into ChatGPT. "Vibe coding," they call it. And it actually works. So naturally, I started wondering: if AI can write the code, why am I spending hundreds of hours learning to write it myself?&lt;/p&gt;

&lt;p&gt;I needed an answer from someone who actually knew, not an AI, and not a random post online. I needed a real person with real experience. That's when I thought of my old professor: a computer science department chair who has watched this field evolve for decades.&lt;/p&gt;

&lt;p&gt;So I sent him an email.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;What I Asked&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I shared with him what ChatGPT had told me — that programming isn't going anywhere, that AI will just assist developers and make them more efficient, that human creativity and problem-solving will always be needed. It sounded reasonable. But I wanted to know what he thought. Does programming still matter? Will it still matter when I graduate?&lt;/p&gt;

&lt;p&gt;His reply was longer than I expected. And it completely reframed how I was thinking about the whole thing.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcj76wzml1kn99dd8w7ol.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcj76wzml1kn99dd8w7ol.png" alt=" " width="800" height="295"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;The History Lesson I Didn't Know I Needed&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Instead of giving me a straight yes or no, my professor walked me through the entire history of programming, told through one simple task: adding a series of numbers. Each era, the same problem, a totally different world.&lt;/p&gt;

&lt;h2&gt;
  
  
  Stage 1 — Machine Language
&lt;/h2&gt;

&lt;p&gt;It started at the very bottom. Pure binary. Instructions written as raw ones and zeros that the hardware understood directly:&lt;br&gt;
&lt;code&gt;0001 0001 0010&lt;/code&gt;&lt;br&gt;
No abstraction. No human-readable anything. Just bits.&lt;/p&gt;

&lt;h2&gt;
  
  
  Stage 2 — Assembly Language
&lt;/h2&gt;

&lt;p&gt;Then came Assembly, which gave human-readable names to those hardware instructions:&lt;br&gt;
&lt;code&gt;ADD R1, R2 ; R1 = R1 + R2&lt;/code&gt;&lt;br&gt;
A small step in readability, but a massive mental leap for programmers of that era.&lt;/p&gt;

&lt;h2&gt;
  
  
  Stage 3 — Fortran (First High-Level Language)
&lt;/h2&gt;

&lt;p&gt;Then the first high-level language appeared — Fortran — and suddenly code started to look almost like math:&lt;br&gt;
&lt;code&gt;DO 10 I = 1, 10&lt;br&gt;
      SUM = SUM + I&lt;br&gt;
10    CONTINUE&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Stage 4 — Ada (Second-Generation High-Level)
&lt;/h2&gt;

&lt;p&gt;Languages kept evolving. Ada brought cleaner structure and readability:&lt;br&gt;
&lt;code&gt;for I in 1 .. 10 loop&lt;br&gt;
   Sum := Sum + I;&lt;br&gt;
end loop;&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Stage 5 — Java (Object-Oriented)
&lt;/h2&gt;

&lt;p&gt;Then object-oriented programming arrived, with Java letting developers model entire systems around real-world concepts:&lt;br&gt;
&lt;code&gt;for (int i = 1; i &amp;lt;= 10; i++) {&lt;br&gt;
    sum = sum + i;&lt;br&gt;
}&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Stage 6 — Python (Modern High-Level)
&lt;/h2&gt;

&lt;p&gt;Then Python took things even further — doing in a few lines what used to take many, with expressive, readable syntax:&lt;br&gt;
&lt;code&gt;import numpy as np&lt;br&gt;
# Create two arrays&lt;br&gt;
A = np.array([1, 2, 3, 4])&lt;br&gt;
B = np.array([5, 6, 7, 8])&lt;br&gt;
# Add the arrays element-wise&lt;br&gt;
C = A + B&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Stage 7 — Vibe Programming (AI-Generated Code)
&lt;/h2&gt;

&lt;p&gt;And now? You just describe what you want in plain English, and the AI writes it:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;User: Create a Python program using NumPy that multiplies two matrices and prints the result.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;code&gt;import numpy as np&lt;br&gt;
A = np.array([[1, 2],&lt;br&gt;
              [3, 4]])&lt;br&gt;
B = np.array([[5, 6],&lt;br&gt;
              [7, 8]])&lt;br&gt;
C = np.dot(A, B)&lt;br&gt;
print("Matrix A:")&lt;br&gt;
print(A)&lt;br&gt;
print("Matrix B:")&lt;br&gt;
print(B)&lt;br&gt;
print("Result:")&lt;br&gt;
print(C)&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Here's the thing: every single one of those transitions — from machine code to Assembly, from Assembly to Fortran, from procedural to object-oriented — felt radical at the time. People probably asked the same question at each step: "If the new tool does the heavy lifting, do I still need to understand the old way?"&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;The Part That Actually Hit Me&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;After walking through the history, my professor made a point that I keep coming back to.&lt;/p&gt;

&lt;p&gt;He said that vibe programming allows the programmer to think at the level of ideas and design, rather than focusing on the mechanics of writing code. That sounds like pure freedom. And in some ways, it is.&lt;/p&gt;

&lt;p&gt;But then he added the part I wasn't expecting: it is essential that the person writing the prompt actually understands the code that gets produced.&lt;/p&gt;

&lt;p&gt;Why? Because software doesn't just get written once and live forever. It has a lifecycle, and every phase of that lifecycle requires real understanding:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe0v329m5sdpts922955k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe0v329m5sdpts922955k.png" alt=" " width="800" height="292"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That last one — maintenance — is the most important. Real software gets updated, patched, extended, and fixed continuously across many versions. And if you don't understand what the AI generated, you cannot maintain it, debug it, or evolve it with confidence.&lt;/p&gt;

&lt;p&gt;He put it simply: each of those shifts eliminated some jobs that already existed — machine coders, assembly programmers — but it also created new ones. Prompt engineers. Vibe programmers. The field didn't shrink; it shifted.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;The Calculator Analogy&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is the part that really settled the question for me. Think about what happened when calculators arrived.&lt;/p&gt;

&lt;p&gt;Nobody said "math is dead." Nobody stopped teaching arithmetic in schools. What happened instead was that the floor of what you could accomplish rose dramatically — but the ceiling only moved for the people who actually understood what was happening underneath. A calculator in the hands of someone who doesn't understand math is just a machine that produces numbers. In the hands of someone who does, it's a tool that amplifies everything they're capable of.&lt;/p&gt;

&lt;p&gt;AI and code generation are the same. The tools get more powerful. But the person operating them still needs to understand what they're doing — otherwise they're just producing output they can't explain, verify, or fix.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;What I'm Taking Away From This&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I came into that email thread feeling like my curriculum might be obsolete before I even graduated. I came out of it feeling as if I finally understood why my curriculum exists.&lt;/p&gt;

&lt;p&gt;Learning C++, Java, and Python isn't about memorizing syntax that an AI can generate in seconds. It's about building a mental model of how software actually works, how memory is managed, how objects interact, and how algorithms perform at scale. That mental model is what lets me read AI-generated code critically, catch mistakes, ask better questions, and ultimately build better things.&lt;/p&gt;

&lt;p&gt;The programmers who will struggle in an AI-driven world aren't the ones who learned to code. They're the ones who learned to copy-paste without understanding. AI doesn't change that equation — it just raises the stakes.&lt;/p&gt;

&lt;p&gt;So yes, it's still worth it. Not despite AI, but especially because of it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5tnx2x1q8syaby9xhqic.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5tnx2x1q8syaby9xhqic.png" alt=" " width="800" height="857"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>vibecoding</category>
      <category>ai</category>
      <category>programming</category>
      <category>llm</category>
    </item>
    <item>
      <title>I got tired of Googling pandas methods, so I built this</title>
      <dc:creator>Ali Aldahmani</dc:creator>
      <pubDate>Thu, 26 Feb 2026 22:11:09 +0000</pubDate>
      <link>https://dev.to/ali_aldahmani/i-got-tired-of-googling-pandas-methods-so-i-built-this-35pn</link>
      <guid>https://dev.to/ali_aldahmani/i-got-tired-of-googling-pandas-methods-so-i-built-this-35pn</guid>
      <description>&lt;p&gt;I built a VS Code extension that shows Python cheat sheets right next to your code.&lt;/p&gt;

&lt;p&gt;So I got tired of switching between my editor and browser every time I forgot a &lt;code&gt;pandas&lt;/code&gt; method or a &lt;code&gt;numpy&lt;/code&gt; function. You know the drill: you're in the zone, writing code, and then you have to stop, open a new tab, search "pandas groupby example", scroll past three Stack Overflow answers... and by the time you get back to your code, you've lost your train of thought.&lt;/p&gt;

&lt;p&gt;So I built something to fix that.&lt;/p&gt;

&lt;p&gt;DevLens is a VS Code extension that opens a cheat sheet panel right beside your code. It covers HTML, CSS, Tailwind, NumPy, Pandas, Matplotlib, Seaborn, and Scikit-learn (for now).&lt;/p&gt;

&lt;p&gt;The part I'm most happy with is the auto-detection. When you open a &lt;code&gt;.py&lt;/code&gt; file, it scans your imports and automatically switches to the right library. Open a file with &lt;code&gt;import pandas as pd&lt;/code&gt;, and it's already showing you pandas snippets. No clicking, no selecting.&lt;/p&gt;
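&lt;p&gt;Under the hood, that kind of detection doesn't need anything fancy: a regular expression over the file's import lines is enough. Here is a rough Python sketch of the idea (the &lt;code&gt;detect_library&lt;/code&gt; helper and the mapping are illustrative, not DevLens's actual code):&lt;/p&gt;

```python
import re

# Map importable module names to cheat-sheet names.
# Illustrative mapping, not the extension's real one.
KNOWN_LIBS = {
    "pandas": "Pandas",
    "numpy": "NumPy",
    "matplotlib": "Matplotlib",
    "seaborn": "Seaborn",
    "sklearn": "Scikit-learn",
}

def detect_library(source):
    """Return the cheat sheet for the first known library a file imports."""
    # Matches both 'import pandas as pd' and 'from numpy import array'.
    pattern = re.compile(r"^\s*(?:import|from)\s+(\w+)", re.MULTILINE)
    for match in pattern.finditer(source):
        name = match.group(1)
        if name in KNOWN_LIBS:
            return KNOWN_LIBS[name]
    return None

print(detect_library("import pandas as pd"))  # Pandas
```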

&lt;p&gt;Every snippet has two buttons: Insert drops it directly at your cursor, and Copy puts it on your clipboard. Small thing, but it saves a surprising amount of time.&lt;/p&gt;

&lt;p&gt;What's next:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;More Python libraries (requests, os, datetime...)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;JavaScript and TypeScript support&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Java support&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;An "install library" button directly in the panel&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It's open source, still early, and I'd love feedback from other developers.&lt;/p&gt;

&lt;p&gt;👉 GitHub: &lt;a href="https://github.com/Ali-Aldahmani/devlens" rel="noopener noreferrer"&gt;https://github.com/Ali-Aldahmani/devlens&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Would love to hear what libraries you'd want added first. Drop a comment! 🙌&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Introducing AI</title>
      <dc:creator>Ali Aldahmani</dc:creator>
      <pubDate>Thu, 15 Jan 2026 07:40:59 +0000</pubDate>
      <link>https://dev.to/ali_aldahmani/introducing-ai-3no</link>
      <guid>https://dev.to/ali_aldahmani/introducing-ai-3no</guid>
      <description>&lt;h2&gt;
  
  
  &lt;strong&gt;Introduction&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;This article is a simple summary of what I learned while studying the basics of Artificial Intelligence. I focused on understanding the core ideas—what AI is, how machine learning works, why data matters, and how deep learning and generative AI are changing the world.&lt;br&gt;
Even though I studied this through an Amazon program, I chose not to talk about specific Amazon services. My goal was to keep the information general, clear, and useful for anyone who wants to understand AI, no matter which platform they use.&lt;/p&gt;

&lt;p&gt;AI is a broad term. It refers to a field that has been developing since the 1950s, and it includes many different methods and approaches that help machines do things that usually need human intelligence.&lt;/p&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjsydqe5ukfnzbk499kvg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjsydqe5ukfnzbk499kvg.png" alt=" " width="800" height="438"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To truly benefit from AI, we also need to manage its risks.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp41y2f0ky1vcxi9c5odi.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp41y2f0ky1vcxi9c5odi.jpg" alt=" " width="800" height="465"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Benefits&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;AI systems can analyze data and discover insights that traditional programming could never find.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AI can automate tasks and boost efficiency, saving time and effort by handling repetitive or boring work.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Risks&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;If AI systems are designed poorly, they may include biases or mistakes that lead to unfair or wrong decisions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;When AI systems are not transparent or accountable, it becomes hard to fix problems or deal with their negative effects.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp5auiwuejtrh1y71ny82.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp5auiwuejtrh1y71ny82.jpeg" alt=" " width="405" height="124"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The Promise of AI&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;AI can analyze huge amounts of data, including sounds, images, and information from the environment, and find patterns that humans would struggle to see.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AI helps organizations detect problems and respond to them faster, in ways that would normally require large teams of experts.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Today, many organizations use AI to tackle big global challenges such as protecting wildlife, fighting hunger, and helping communities recover after disasters.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhyq4pabn4gyqi77xzewq.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhyq4pabn4gyqi77xzewq.jpeg" alt=" " width="490" height="245"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The Challenges of AI&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Some studies have shown that certain AI systems used in healthcare can show racial bias when suggesting treatments, which means some patients may not receive the care they truly need.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In other cases, lawyers have used AI tools that produced fake or incorrect references, and these mistakes were included in court documents. This has led to serious legal problems and multiple lawsuits.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Responsible AI should be at the heart of every AI system.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp7p2m0g27cvx85mi5075.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp7p2m0g27cvx85mi5075.png" alt=" " width="800" height="343"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;These examples show why practicing responsible AI is so important. Responsible AI includes the rules, best practices, and tools that help make sure AI systems are safe, fair, and protected from the risks that come with imitating human thinking.&lt;/p&gt;

&lt;p&gt;No matter what kind of AI you use or build, making sure it is designed and maintained responsibly should always be a top priority.&lt;/p&gt;

&lt;p&gt;The module called Practicing Generative AI Responsibly focuses more on how these ideas apply specifically to generative AI. Since this field is changing very fast, it’s important to use the provided resources to learn more and stay updated with the latest guidelines and tools.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;&lt;em&gt;Machine learning concepts&lt;/em&gt;&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;ML is a common type of AI.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6v8g8tkz237ea4vaorhz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6v8g8tkz237ea4vaorhz.png" alt=" " width="800" height="345"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Machine learning&lt;/strong&gt; is a group of AI methods that use existing data to train a mathematical model. The model learns patterns from this data so it can make accurate predictions when it sees new data later.&lt;/p&gt;

&lt;p&gt;Traditional machine learning uses algorithms to learn from data. After training, you get a model that can make predictions or decisions based on what it has learned. Over time, models became more advanced, like neural networks, and with more data and stronger computing power, new techniques like generative AI emerged.&lt;/p&gt;

&lt;p&gt;At its core, a model is simply guessing something based on past experience—like checking if a payment is fraud or suggesting a restaurant.&lt;/p&gt;

&lt;p&gt;To make it easy, think of a model like a brain. Instead of giving it strict rules, you show it many examples. Just like a child learns the difference between a dog and a cat by seeing them many times, a model learns from repeated examples in data. The more it sees, the better it gets.&lt;/p&gt;

&lt;p&gt;In real life, data scientists choose the right data and method, train the model, test it, and improve it again and again until it works well.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Trained ML models make inferences&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Inference: the output a model produces when it applies what it learned during training to new data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftqrcs52kd3sy7xwppu0q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftqrcs52kd3sy7xwppu0q.png" alt=" " width="774" height="402"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After a model is trained, improved, and tested, and its results are good, it is deployed so it can be used on real data. For example, a bank may use a trained model in its live system to check credit card transactions and decide whether they are fraudulent.&lt;/p&gt;

&lt;p&gt;Each time the model looks at new data and gives an answer, this is called an inference—it is the model using what it learned to make a new decision.&lt;/p&gt;

&lt;p&gt;Once the model is live, it must be watched and checked regularly. If the data in the real world changes, the model can become less accurate. For example, fraud methods may change. When this happens, the model needs to be retrained with new data so it can learn again.&lt;/p&gt;
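&lt;p&gt;The train, deploy, infer, retrain loop is easier to picture with a deliberately tiny toy. The sketch below "learns" a single spending threshold from labeled transactions and then applies it to new ones; the numbers are made up, and a real fraud model would be far more sophisticated:&lt;/p&gt;

```python
# Toy model: learn one threshold from labeled transaction amounts,
# then use it for inference on transactions it has never seen.

def train(amounts, labels):
    """Learn the midpoint between the average normal and fraud amounts."""
    fraud = [a for a, y in zip(amounts, labels) if y == "fraud"]
    normal = [a for a, y in zip(amounts, labels) if y == "ok"]
    return (sum(fraud) / len(fraud) + sum(normal) / len(normal)) / 2

def infer(threshold, amount):
    """Inference: apply what was learned in training to new data."""
    return "fraud" if amount > threshold else "ok"

# Training on historical, labeled data:
threshold = train([20, 35, 900, 1200], ["ok", "ok", "fraud", "fraud"])

# Inference on live transactions:
print(infer(threshold, 15))    # ok
print(infer(threshold, 2000))  # fraud

# If real-world fraud patterns change, retrain on fresh data:
threshold = train([20, 35, 300, 450], ["ok", "ok", "fraud", "fraud"])
```

&lt;p&gt;The point is the shape of the workflow, not the model: training produces the threshold once, inference reuses it many times, and monitoring decides when it needs retraining.&lt;/p&gt;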

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Determining if ML is the right approach&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffo6klkimlenyfyku5the.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffo6klkimlenyfyku5the.png" alt=" " width="800" height="381"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Before choosing machine learning, you should first ask: do I really need ML for this problem?&lt;/p&gt;

&lt;p&gt;Some problems can be solved easily with traditional data analytics. If the rules are simple and can be written clearly in code, analytics is often the better choice. For example, a store can use past sales data and basic statistics to decide how much stock to keep each month—no ML needed.&lt;/p&gt;
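&lt;p&gt;For a decision like that, plain statistics really is the whole program. A minimal sketch (the sales figures and the 20% safety margin are invented for illustration):&lt;/p&gt;

```python
# Decide next month's stock from past sales: no model, no training,
# just an average plus a safety margin.
past_sales = [120, 135, 110, 150, 140, 125]  # units sold per month (made up)

average = sum(past_sales) / len(past_sales)  # 130.0
stock_to_keep = round(average * 1.2)         # 20% margin, arbitrary choice

print(stock_to_keep)  # 156
```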

&lt;p&gt;But if the problem is more complex, like grouping customers by behavior and giving them personalized product recommendations in real time, then ML can be a good choice.&lt;/p&gt;

&lt;p&gt;Even then, you must check a few things:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Do you have enough good-quality data to train a model?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Can your problem tolerate predictions that may not be 100% exact?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Do you need clear explanations for every decision?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Does your system need very fast answers?&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If ML fits your needs, you also have to think about cost. Training, running, and maintaining ML models is not free. The benefit of the solution must be worth the effort and money. Cloud tools make it easier to test and find a cost-effective way to use ML.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Good data is critical to high-value, responsible outcomes&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Choosing data sources&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Do you have enough data?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Is the data high quality?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Is the data biased or not truly representative?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Is the data recent enough?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;What type of data is it?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;How much effort is needed to prepare and use this data?&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Managing data&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Does the data include personal or sensitive information that must be protected?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;How will you keep the data safe during development and after deployment?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;What rules or systems will you use to avoid duplicate or inconsistent data?&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Data is the fuel that runs machine learning. It helps models find patterns, make predictions, and drive decisions. The better and more diverse the data, the better the model will perform.&lt;/p&gt;

&lt;p&gt;Since ML learns from data, bad or limited data leads to bad results. If the data is biased, outdated, or too small, the model can become inaccurate or unfair. The more important the decision is, the more important it is to use high-quality and unbiased data.&lt;/p&gt;

&lt;p&gt;Think of it like teaching a child the difference between dogs and cats:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Volume: How many examples do they see?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Quality: Are the examples clear or blurry?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Bias: Are they seeing only one type of dog?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Timeliness: Is the information still relevant?&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If the data is weak, experts clean it, fix it, and prepare it before using it. This step takes a lot of time but is very important.&lt;/p&gt;

&lt;p&gt;When choosing an ML method, two big things matter:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;What type of data you have (structured or not)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Whether the data is labeled or not&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;General data types that ML might use&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbfh0knnhox65ble9pdpv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbfh0knnhox65ble9pdpv.png" alt=" " width="800" height="364"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Data usually falls into three main types: structured, semistructured, and unstructured.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Structured&lt;/strong&gt; data is the kind you see in tables, like in Excel sheets or databases. Everything has a fixed place—rows, columns, and clear rules. This makes it easy to search and analyze, but not very flexible.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Semistructured&lt;/strong&gt; data is more flexible. It uses tags or labels to organize information, but not in strict tables. For example, emails have parts like sender, subject, and body, but the content can vary a lot. Formats like JSON and XML are also semistructured.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Unstructured&lt;/strong&gt; data has no fixed format at all. Things like text files, photos, audio, and videos don’t follow a clear structure. This makes them harder to analyze, but with the right tools, they can be very powerful.&lt;/p&gt;

&lt;p&gt;Depending on your problem, you either choose a model that can handle your data type, or you first prepare the data so the model can use it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Labeled data identifies the target of an ML prediction&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
&lt;em&gt;Target&lt;/em&gt;: Is the image a dog or a cat?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz0xemej0442ihsi1pzjg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz0xemej0442ihsi1pzjg.png" alt=" " width="800" height="340"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One important thing in machine learning is whether your data is labeled or not.&lt;/p&gt;

&lt;p&gt;Labeled data means each example already has an answer. Like a photo that is marked “dog” or “cat,” or a transaction marked “fraud” or “not fraud.” This helps the model learn faster and more clearly.&lt;/p&gt;

&lt;p&gt;Unlabeled data has no answers. The model must figure out patterns by itself, which is harder and usually needs more data and stronger models.&lt;/p&gt;

&lt;p&gt;If data is not labeled, experts—or even other AI tools—may first label it before training the main model.&lt;/p&gt;
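&lt;p&gt;The difference is easy to see in code. With labeled data, each example already carries its answer; with unlabeled data, any grouping has to be discovered. A toy sketch (splitting around the mean is a crude stand-in for a real clustering algorithm):&lt;/p&gt;

```python
# Labeled data: each example comes with its answer (the target).
labeled = [(15.0, "ok"), (22.5, "ok"), (980.0, "fraud")]
for amount, answer in labeled:
    print(amount, "is already marked", answer)

# Unlabeled data: just raw examples, no answers attached.
unlabeled = [15.0, 22.5, 980.0]

# The model has to find structure itself. Here we split around
# the mean, a crude stand-in for real clustering.
mean = sum(unlabeled) / len(unlabeled)
high = [a for a in unlabeled if a > mean]
low = [a for a in unlabeled if not a > mean]
print("discovered groups:", low, high)
```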

&lt;p&gt;&lt;strong&gt;Basic ML paradigms and common problem types&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwibtxed5vomn92xxsy7w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwibtxed5vomn92xxsy7w.png" alt=" " width="800" height="350"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Machine learning, deep learning, and generative AI&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Deep learning is a type of ML that uses neural networks.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fytbkx4aeug7ykrlvzsnj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fytbkx4aeug7ykrlvzsnj.png" alt=" " width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Neural networks&lt;/em&gt; are built from layers: an input layer, one or more hidden layers where thinking happens, and an output layer that gives the final answer. When a model has many layers, we call it deep learning.&lt;/p&gt;
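&lt;p&gt;A forward pass through those layers fits in a few lines. The sketch below pushes two input values through one hidden layer and an output layer; the weights are made up, and a real network would learn them from data:&lt;/p&gt;

```python
import math

def layer(inputs, weights, biases):
    """One layer: each neuron takes a weighted sum, then a sigmoid."""
    outputs = []
    for w_row, b in zip(weights, biases):
        total = sum(i * w for i, w in zip(inputs, w_row)) + b
        outputs.append(1 / (1 + math.exp(-total)))  # sigmoid activation
    return outputs

x = [0.5, 0.2]                                      # input layer
h = layer(x, [[0.4, 0.7], [0.1, 0.9]], [0.0, 0.1])  # hidden layer
y = layer(h, [[0.3, 0.8]], [0.0])                   # output layer
print(y)  # a single value between 0 and 1
```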

&lt;p&gt;&lt;em&gt;Deep learning&lt;/em&gt; stands out because it uses huge amounts of different data and very deep models. These models are great at working with messy, unstructured data like images, sound, and text, and at finding complex patterns.&lt;/p&gt;

&lt;p&gt;Traditional ML usually learns from a clear dataset to answer one specific question, for instance: is this transaction fraud or not?&lt;/p&gt;

&lt;p&gt;Deep learning, on the other hand, handles bigger and more complex problems. For example, a self-driving car must understand roads, signs, people, and traffic rules—all at once—to decide how to drive. It has one main goal, but it makes many small decisions to reach it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Generative AI uses deep learning to create new content&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fihfp7li6aafp0l2m8wbr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fihfp7li6aafp0l2m8wbr.png" alt=" " width="800" height="341"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Generative AI&lt;/em&gt; is built on &lt;em&gt;deep learning&lt;/em&gt; models called &lt;em&gt;foundation models&lt;/em&gt;. These models are trained on huge amounts of data, so they can handle many different types of requests—even ones they were not trained on directly.&lt;/p&gt;

&lt;p&gt;Instead of just choosing an answer, they create new content, like writing text, chatting naturally, or generating images from a description.&lt;/p&gt;

&lt;p&gt;Because they are so flexible, foundation models can be used in many situations, even for ideas and applications that were not planned from the start.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Learning about AI showed me that it is not just about machines—it is about data, decisions, responsibility, and impact. From simple models that predict outcomes, to deep learning systems that understand images and language, AI is becoming part of everyday life.&lt;/p&gt;

&lt;p&gt;But with this power comes responsibility. Building AI is not only about making it smart—it must also be fair, safe, transparent, and trustworthy. Good data, good design, and good intentions matter.&lt;/p&gt;

&lt;p&gt;This is just the beginning of my journey in AI. What I learned here gave me a strong foundation, and I am excited to keep learning, building, and using AI in ways that truly help people and make a positive difference.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
      <category>deeplearning</category>
      <category>programming</category>
    </item>
  </channel>
</rss>
