<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: David Kabera</title>
    <description>The latest articles on DEV Community by David Kabera (@dave_kabera).</description>
    <link>https://dev.to/dave_kabera</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2646659%2F10060f91-748c-411a-adb1-70f8732a4380.png</url>
      <title>DEV Community: David Kabera</title>
      <link>https://dev.to/dave_kabera</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/dave_kabera"/>
    <language>en</language>
    <item>
      <title>Irony in My AI Engineering Journey</title>
      <dc:creator>David Kabera</dc:creator>
      <pubDate>Fri, 24 Jan 2025 09:34:15 +0000</pubDate>
      <link>https://dev.to/dave_kabera/irony-in-my-ai-engineering-journey-4amj</link>
      <guid>https://dev.to/dave_kabera/irony-in-my-ai-engineering-journey-4amj</guid>
      <description>&lt;p&gt;As I continue my journey into AI Engineering, I can't help but laugh at the ironic twists along the way. Like many beginners, I started with uninformed optimism—believing that if I just learned the right models and algorithms, fed them enough data, everything would magically fall into place. But, much like a novice boxer stepping into the ring, reality quickly delivered a punch that forced me to reassess.&lt;/p&gt;




&lt;h2&gt;The Hospital Data Challenge&lt;/h2&gt;

&lt;p&gt;My latest project involved hospital data, with the goal of improving resource management. The dataset included various patient metrics, with readmission as the target variable. I followed the standard playbook:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Data Inspection and Cleaning&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Exploratory Data Analysis (EDA)&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Feature Engineering&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Model Selection and Training&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Model Evaluation&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I started with simple algorithms and gradually increased complexity. However, despite my efforts, the results plateaued or, worse, declined. Ultimately, the model’s performance barely exceeded random guessing (an AUC-ROC of about 0.61, where 0.5 is chance level).&lt;/p&gt;
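
&lt;p&gt;For context, the evaluation step looked roughly like this. Here is a minimal scikit-learn sketch, using a hypothetical file name and column names rather than the actual project code:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

# Hypothetical file and column names, for illustration only.
df = pd.read_csv("hospital_data.csv")
X = df.drop(columns=["readmitted"])  # patient metrics
y = df["readmitted"]                 # binary target: readmitted or not

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Start simple; swap in more complex models later.
model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# AUC-ROC is computed from predicted probabilities, not hard labels.
probs = model.predict_proba(X_test)[:, 1]
print(f"AUC-ROC: {roc_auc_score(y_test, probs):.2f}")  # ~0.61 in my case
&lt;/code&gt;&lt;/pre&gt;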

&lt;p&gt;Naturally, this was frustrating. I had hoped the data would yield better insights, potentially offering valuable real-world applications. Instead, I was left scratching my head.&lt;/p&gt;




&lt;h2&gt;Key Takeaways&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Practice Makes Perfect (Eventually)&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Every time I preprocess data, engineer features, and build models, I get a little faster and a little better. There’s real skill-building happening, even when the end result isn’t stellar.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Leverage the Community&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Platforms like Kaggle and GitHub are invaluable. Viewing others’ work not only offers new techniques and perspectives but also shows you what doesn’t work—helping us collectively push the boundaries.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Domain Knowledge is Gold&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
The data you have might not contain all the answers you need. Sometimes the most critical features are outside your dataset, and they’re not always easy to capture. This is where understanding the domain deeply becomes essential.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cultivate Resilience&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Machine learning is as much about learning from failures as it is about celebrating successes. Just like a loss function provides feedback to a model, these disappointing results were feedback for me. While I thought I was refining the model, the model was actually refining my approach.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;Continuing the Journey&lt;/h2&gt;

&lt;p&gt;These experiences have shown me that AI Engineering is a constant dance of iteration and humility. While the results from this hospital dataset may not have been jaw-dropping, the process itself was a powerful learning experience. Each challenge nudges me to explore deeper, refine my methods, and remind myself that sometimes the most significant progress is made through the harshest failures.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Onward to the next challenge...&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>ai</category>
    </item>
    <item>
      <title>Diving Into Convolutional Neural Networks</title>
      <dc:creator>David Kabera</dc:creator>
      <pubDate>Sat, 04 Jan 2025 10:17:44 +0000</pubDate>
      <link>https://dev.to/dave_kabera/diving-into-convolutional-neural-networks-5ggd</link>
      <guid>https://dev.to/dave_kabera/diving-into-convolutional-neural-networks-5ggd</guid>
      <description>&lt;p&gt;As I delve deeper into the fascinating world of Artificial Intelligence and Machine Learning, I am captivated by the possibilities of machines replicating human intelligence and productivity. This journey feels akin to the excitement of the Industrial Revolution over a century ago. But let’s save that broader discussion for another day and dive into what I’ve learned about Convolutional Neural Networks (CNNs) this past week.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Understanding Overfitting&lt;/strong&gt;&lt;br&gt;
One challenge I encountered was overfitting. Simply put, overfitting happens when a model performs exceptionally well on training data but struggles to generalize to new, unseen data. Imagine training a model to differentiate between sailors and civilians. If the model learns that sailors often wear hats, it might incorrectly classify a construction worker in a hard hat or someone wearing a sunhat as a sailor. Overfitting limits the model’s utility in real-world scenarios, but there are strategies to address it.&lt;/p&gt;
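
&lt;p&gt;A quick way to spot overfitting in practice is to watch training and validation metrics diverge. This is a minimal Keras sketch, using a placeholder dataset and model rather than my actual project:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import tensorflow as tf

# Placeholder dataset: 28x28 grayscale digits, 10 classes.
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# validation_split holds out 20% of the data. If training accuracy
# keeps climbing while validation accuracy stalls or drops, the model
# is memorizing the training set instead of generalizing.
history = model.fit(x_train, y_train, epochs=10, validation_split=0.2)
&lt;/code&gt;&lt;/pre&gt;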

&lt;p&gt;&lt;strong&gt;Performance Enhancement Methods&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Data Augmentation&lt;/strong&gt;&lt;br&gt;
Data augmentation is the process of creating new training data from existing samples by applying random transformations. For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Flipping and Rotations:&lt;/strong&gt; Teach the model to recognize objects regardless of orientation.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Zooming and Translations:&lt;/strong&gt; Ensure the model handles close-ups and objects in varied locations within images.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Contrast Adjustments:&lt;/strong&gt; Train the model to identify objects in different lighting conditions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This technique not only increases data diversity but also helps the model generalize better without requiring new data.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Dropout Regularization&lt;/strong&gt;&lt;br&gt;
Dropout is a technique where certain neurons in the network are randomly “dropped” (i.e., ignored) during training. This forces the model to rely on multiple neurons to make predictions, discouraging over-dependence on any single one. Imagine a team project where different members are temporarily unavailable, requiring everyone to learn all tasks to some extent. Similarly, dropout helps neural networks develop robust, distributed representations.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Transfer Learning&lt;/strong&gt;&lt;br&gt;
Why reinvent the wheel when you can build on existing knowledge? Transfer learning lets us reuse models pre-trained on massive datasets and fine-tune them for our own tasks, saving time and resources. This approach has been transformative in my work, enabling faster and more efficient training (see the sketch after this list).&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
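
&lt;p&gt;To make these three techniques concrete, here is a hedged Keras sketch of how they fit together. The base model, augmentation strengths, and class count are assumptions for illustration, not my exact setup:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import tensorflow as tf
from tensorflow.keras import layers

NUM_CLASSES = 5  # assumed; set this to your own task

# 1. Data augmentation: random transformations applied on the fly.
augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),
    layers.RandomZoom(0.1),
    layers.RandomTranslation(0.1, 0.1),
    layers.RandomContrast(0.2),
])

# 3. Transfer learning: reuse a network pre-trained on ImageNet.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet"
)
base.trainable = False  # freeze pre-trained weights; unfreeze later to fine-tune

model = tf.keras.Sequential([
    augment,
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),  # 2. Dropout: randomly ignore 30% of units in training
    layers.Dense(NUM_CLASSES, activation="softmax"),  # multi-class output
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
&lt;/code&gt;&lt;/pre&gt;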

&lt;p&gt;&lt;strong&gt;Reflections and Insights&lt;/strong&gt;&lt;br&gt;
Beyond these methods, I explored additional tools and concepts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Softmax Activation:&lt;/strong&gt; A reliable choice for multi-class classification.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Data Management:&lt;/strong&gt; I’m improving at organizing directories and handling APIs.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Model Compilation:&lt;/strong&gt; Combining pre-trained layers with custom layers and watching the metrics improve is deeply satisfying.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Cloud Computing:&lt;/strong&gt; A game-changer for handling computationally intensive tasks.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Surprisingly, building the model itself often requires minimal code. The bulk of the work lies in managing and visualizing data—a vital skill I’m developing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Looking Ahead&lt;/strong&gt;&lt;br&gt;
I’m thrilled to dive into Natural Language Processing (NLP) next week. I’ll share updates on my progress and the challenges I encounter. Stay tuned!&lt;/p&gt;

</description>
      <category>machinelearning</category>
    </item>
  </channel>
</rss>
