<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: anesmeftah</title>
    <description>The latest articles on DEV Community by anesmeftah (@anesmeftah).</description>
    <link>https://dev.to/anesmeftah</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3538365%2F3df9cb9d-5265-4297-ae6c-5da93f7514f5.png</url>
      <title>DEV Community: anesmeftah</title>
      <link>https://dev.to/anesmeftah</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/anesmeftah"/>
    <language>en</language>
    <item>
      <title>RIP Fine-Tuning — The Rise of Self-Tuned AI</title>
      <dc:creator>anesmeftah</dc:creator>
      <pubDate>Fri, 10 Oct 2025 12:09:54 +0000</pubDate>
      <link>https://dev.to/anesmeftah/rip-fine-tuning-the-rise-of-self-tuned-ai-1cj5</link>
      <guid>https://dev.to/anesmeftah/rip-fine-tuning-the-rise-of-self-tuned-ai-1cj5</guid>
      <description>&lt;p&gt;Stanford just dropped a paper called Agentic Context Engineering (ACE) — and it might just end fine-tuning as we know it.&lt;/p&gt;

&lt;p&gt;No retraining. No weights touched.&lt;br&gt;
The model literally rewrites and evolves its own prompt, learning from every mistake and success.&lt;/p&gt;

&lt;p&gt;🔥 The results:&lt;/p&gt;

&lt;p&gt;🚀 +10.6% better than GPT-4 agents&lt;/p&gt;

&lt;p&gt;💰 +8.6% on finance reasoning&lt;/p&gt;

&lt;p&gt;⚡ −86.9% cost &amp;amp; latency&lt;/p&gt;

&lt;p&gt;🧠 No labels. Just feedback.&lt;/p&gt;

&lt;p&gt;Everyone’s been chasing short, clean prompts.&lt;br&gt;
ACE flips that idea completely.&lt;/p&gt;

&lt;p&gt;Turns out, LLMs don’t want simplicity — they want context density.&lt;br&gt;
They perform better when surrounded by rich, evolving information.&lt;/p&gt;

&lt;p&gt;So yeah, the next wave of AI won’t be fine-tuned.&lt;br&gt;
It’ll be self-tuned.&lt;/p&gt;

&lt;p&gt;Welcome to the era of living prompts.&lt;/p&gt;
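&lt;p&gt;A toy sketch of what such a loop could look like (hypothetical helper names, not the paper's actual code): the "prompt" is a playbook of lessons that grows from feedback, while the weights never change.&lt;/p&gt;

```python
# Toy sketch of an ACE-style loop (hypothetical helper names, not the paper's
# implementation). The "prompt" is a playbook of lessons that grows from
# feedback; model weights are never touched.

def run_agent(playbook, task):
    # Stand-in for an LLM call: succeeds once the playbook covers this pitfall.
    return task["pitfall"] in playbook

def reflect(task):
    # Stand-in for the reflection step: turn a failure into a reusable lesson.
    return task["pitfall"]

def self_tune(tasks, max_rounds=3):
    playbook = []  # the "living prompt": it evolves, the model does not
    for _ in range(max_rounds):
        failures = [t for t in tasks if not run_agent(playbook, t)]
        if not failures:
            break
        for t in failures:
            lesson = reflect(t)
            if lesson not in playbook:  # add context density, skip duplicates
                playbook.append(lesson)
    return playbook

tasks = [{"pitfall": "check units"}, {"pitfall": "validate dates"}]
print(self_tune(tasks))  # ['check units', 'validate dates']
```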

</description>
      <category>ai</category>
      <category>llms</category>
      <category>research</category>
    </item>
    <item>
      <title>Python Tip: Use else with Loops Like a Pro</title>
      <dc:creator>anesmeftah</dc:creator>
      <pubDate>Mon, 06 Oct 2025 17:19:31 +0000</pubDate>
      <link>https://dev.to/anesmeftah/python-tip-use-else-with-loops-like-a-pro-39h6</link>
      <guid>https://dev.to/anesmeftah/python-tip-use-else-with-loops-like-a-pro-39h6</guid>
<description>&lt;p&gt;Most people think &lt;code&gt;else&lt;/code&gt; is only for &lt;code&gt;if&lt;/code&gt;. But in Python, &lt;code&gt;for&lt;/code&gt; and &lt;code&gt;while&lt;/code&gt; loops can have an &lt;code&gt;else&lt;/code&gt; clause too!&lt;/p&gt;

&lt;p&gt;&lt;code&gt;for i in range(5):&lt;br&gt;
    if i == 10:&lt;br&gt;
        print("Found it!")&lt;br&gt;
        break&lt;br&gt;
else:&lt;br&gt;
    print("Not found 😅")&lt;br&gt;
&lt;/code&gt;&lt;br&gt;
&lt;strong&gt;Output:&lt;/strong&gt; Not found 😅&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why it’s powerful:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Runs only if the loop didn’t hit &lt;code&gt;break&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Perfect for search algorithms&lt;/li&gt;
&lt;li&gt;Cleaner than flags or extra variables&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Use it to write more Pythonic, elegant code. Your loops just got smarter.&lt;/p&gt;
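&lt;p&gt;The same pattern makes a tidy search helper. A minimal sketch:&lt;/p&gt;

```python
def find_first(items, predicate):
    """Return the first item matching predicate, or None. No flag variable."""
    for item in items:
        if predicate(item):
            break
    else:
        return None  # loop finished without break: nothing matched
    return item

print(find_first([3, 8, 12], lambda x: x > 10))  # 12
print(find_first([3, 8], lambda x: x > 10))      # None
```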

</description>
      <category>algorithms</category>
      <category>python</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>5 Free Books for Data Science &amp; Machine Learning</title>
      <dc:creator>anesmeftah</dc:creator>
      <pubDate>Sat, 04 Oct 2025 20:32:45 +0000</pubDate>
      <link>https://dev.to/anesmeftah/5-free-books-for-data-science-machine-learning-7ne</link>
      <guid>https://dev.to/anesmeftah/5-free-books-for-data-science-machine-learning-7ne</guid>
      <description>&lt;p&gt;Looking to learn Data Science or Machine Learning for free? Here are &lt;strong&gt;5 excellent books&lt;/strong&gt; you can access online — covering Python, ML, Deep Learning, Bayesian methods, and Statistics.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Python Data Science Handbook
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;What it covers:&lt;/strong&gt; NumPy, Pandas, Matplotlib, and introductory Machine Learning&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Author:&lt;/strong&gt; Jake VanderPlas&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Link:&lt;/strong&gt; &lt;a href="https://jakevdp.github.io/PythonDataScienceHandbook/" rel="noopener noreferrer"&gt;jakevdp.github.io/PythonDataScienceHandbook&lt;/a&gt;
A must-read for anyone starting with Python for data analysis.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0k5b7lnmpptccdfotsms.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0k5b7lnmpptccdfotsms.png" alt="data science handbook" width="378" height="499"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Deep Learning
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;What it covers:&lt;/strong&gt; Full deep learning theory and practice&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Authors:&lt;/strong&gt; Ian Goodfellow, Yoshua Bengio, Aaron Courville&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Link:&lt;/strong&gt; &lt;a href="https://www.deeplearningbook.org/" rel="noopener noreferrer"&gt;deeplearningbook.org&lt;/a&gt;
This is the definitive deep learning book, used in many university courses.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe7zpb1vkl83f6tf7plag.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe7zpb1vkl83f6tf7plag.png" alt="deep learning book" width="292" height="385"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Probabilistic Programming &amp;amp; Bayesian Methods for Hackers
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;What it covers:&lt;/strong&gt; Bayesian thinking with Python and PyMC&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Author:&lt;/strong&gt; Cameron Davidson-Pilon&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Link:&lt;/strong&gt; &lt;a href="https://github.com/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;
A fun, practical guide to understanding Bayesian statistics through Python.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2x8ibg26o8fjgqmhe2mq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2x8ibg26o8fjgqmhe2mq.png" alt="Probabilistic Programming &amp;amp; Bayesian Methods for Hackers book" width="496" height="648"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Dive into Deep Learning
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;What it covers:&lt;/strong&gt; Hands-on deep learning with PyTorch and MXNet&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Authors:&lt;/strong&gt; Aston Zhang, Zachary C. Lipton, Mu Li, Alex Smola&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Link:&lt;/strong&gt; &lt;a href="https://d2l.ai/" rel="noopener noreferrer"&gt;d2l.ai&lt;/a&gt;
Perfect for learners who want practical coding examples alongside theory.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5bruwmattfp21xhco4hf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5bruwmattfp21xhco4hf.png" alt="Dive into Deep Learning book" width="518" height="647"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  5. Think Stats
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;What it covers:&lt;/strong&gt; Introduction to probability and statistics using Python&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Author:&lt;/strong&gt; Allen B. Downey&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Link:&lt;/strong&gt; &lt;a href="https://greenteapress.com/wp/think-stats-2e/" rel="noopener noreferrer"&gt;greenteapress.com/wp/think-stats-2e/&lt;/a&gt;
Great for beginners to understand statistical thinking with Python examples.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fadwkg7pl9pb3bb8ehj08.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fadwkg7pl9pb3bb8ehj08.png" alt="Think Stats book" width="286" height="375"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;💡 &lt;strong&gt;Tip:&lt;/strong&gt; Bookmark these books and follow along with the exercises — learning by doing is the fastest way to become proficient.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>ai</category>
      <category>tutorial</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Vision Transformer (ViT) from Scratch in PyTorch</title>
      <dc:creator>anesmeftah</dc:creator>
      <pubDate>Thu, 02 Oct 2025 11:32:56 +0000</pubDate>
      <link>https://dev.to/anesmeftah/vision-transformer-vit-from-scratch-in-pytorch-3l3m</link>
      <guid>https://dev.to/anesmeftah/vision-transformer-vit-from-scratch-in-pytorch-3l3m</guid>
      <description>&lt;p&gt;For years, &lt;strong&gt;Convolutional Neural Networks (CNNs)&lt;/strong&gt; ruled computer vision. But since the paper &lt;em&gt;“An Image is Worth 16x16 Words”&lt;/em&gt;, the &lt;strong&gt;Vision Transformer (ViT)&lt;/strong&gt; has challenged CNNs by treating an image as a sequence of patches—similar to how words form a sentence.&lt;/p&gt;

&lt;p&gt;In this post, we’ll walk through a &lt;strong&gt;PyTorch implementation of ViT&lt;/strong&gt;, trained on a small food classification dataset (&lt;code&gt;pizza&lt;/code&gt;, &lt;code&gt;steak&lt;/code&gt;, &lt;code&gt;sushi&lt;/code&gt;).&lt;/p&gt;




&lt;h2&gt;
  
  
  Core Idea
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnw2p67xnppufh33zfvu3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnw2p67xnppufh33zfvu3.png" alt="Architecture of The ViT" width="800" height="442"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Split an image into fixed-size patches (e.g., 16×16).&lt;/li&gt;
&lt;li&gt;Flatten patches into vectors → feed them as tokens.&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Add:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;[CLS] Token&lt;/strong&gt; → represents the entire image for classification.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Positional Embeddings&lt;/strong&gt; → retain spatial info.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Process the sequence with a &lt;strong&gt;Transformer Encoder&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;

&lt;/ul&gt;
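&lt;p&gt;The patch-splitting step is commonly implemented as a single strided convolution. A minimal sketch with ViT-Base numbers (a standard formulation, not the post's exact code):&lt;/p&gt;

```python
import torch
from torch import nn

# Minimal patch embedding for ViT-Base: a stride-16 Conv2d slices the image
# into 16x16 patches and projects each to a 768-dim token in one shot.
class PatchEmbed(nn.Module):
    def __init__(self, img_size=224, patch_size=16, in_ch=3, dim=768):
        super().__init__()
        self.proj = nn.Conv2d(in_ch, dim, kernel_size=patch_size, stride=patch_size)
        num_patches = (img_size // patch_size) ** 2          # 14 * 14 = 196
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, dim))

    def forward(self, x):
        x = self.proj(x)                   # (B, 768, 14, 14)
        x = x.flatten(2).transpose(1, 2)   # (B, 196, 768): patches as tokens
        cls = self.cls_token.expand(x.shape[0], -1, -1)
        x = torch.cat([cls, x], dim=1)     # prepend [CLS] token: (B, 197, 768)
        return x + self.pos_embed          # add positional information

tokens = PatchEmbed()(torch.randn(2, 3, 224, 224))
print(tokens.shape)  # torch.Size([2, 197, 768])
```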




&lt;h2&gt;
  
  
  ViT-Base Config
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Image size&lt;/strong&gt;: 224×224&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Patch size&lt;/strong&gt;: 16×16 → 196 patch tokens (197 with [CLS])&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Embedding dim&lt;/strong&gt;: 768&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Layers&lt;/strong&gt;: 12&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Attention heads&lt;/strong&gt;: 12&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Params&lt;/strong&gt;: ~85.8M&lt;/li&gt;
&lt;/ul&gt;
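&lt;p&gt;Quick arithmetic check on where those numbers come from (note that a flattened 3×16×16 patch happens to contain exactly 768 values, matching the embedding dim):&lt;/p&gt;

```python
# Where the 196 tokens come from: non-overlapping 16x16 patches
# tile a 224x224 image in a 14x14 grid.
img_size, patch_size = 224, 16

grid = img_size // patch_size            # 14 patches per side
num_patches = grid * grid                # 196 tokens
seq_len = num_patches + 1                # +1 for the [CLS] token
patch_dim = 3 * patch_size * patch_size  # each flattened patch: 768 values

print(grid, num_patches, seq_len, patch_dim)  # 14 196 197 768
```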




&lt;h2&gt;
  
  
  Dataset
&lt;/h2&gt;

&lt;p&gt;We used a &lt;strong&gt;3-class dataset&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;🍕 Pizza&lt;/li&gt;
&lt;li&gt;🥩 Steak&lt;/li&gt;
&lt;li&gt;🍣 Sushi&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All images resized to &lt;strong&gt;224×224&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  Training Setup
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Parameter&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Optimizer&lt;/td&gt;
&lt;td&gt;Adam&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Loss&lt;/td&gt;
&lt;td&gt;CrossEntropyLoss&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;LR&lt;/td&gt;
&lt;td&gt;0.001&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Batch Size&lt;/td&gt;
&lt;td&gt;32&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Epochs&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Device&lt;/td&gt;
&lt;td&gt;GPU (CUDA)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
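&lt;p&gt;A generic PyTorch loop matching these settings; a tiny linear model and random tensors stand in for the ViT and the food dataset so the sketch runs anywhere:&lt;/p&gt;

```python
import torch
from torch import nn

# Generic training loop with the table's settings (Adam, CrossEntropyLoss,
# lr=0.001, batch size 32, 10 epochs). A tiny linear model and random tensors
# stand in for the ViT and the pizza/steak/sushi loader.
torch.manual_seed(0)
device = "cuda" if torch.cuda.is_available() else "cpu"

model = nn.Linear(768, 3).to(device)   # stand-in for the ViT
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
loss_fn = nn.CrossEntropyLoss()

X = torch.randn(64, 768, device=device)          # fake "dataset"
y = torch.randint(0, 3, (64,), device=device)

losses = []
for epoch in range(10):
    for i in range(0, len(X), 32):               # batch size 32
        logits = model(X[i:i+32])
        loss = loss_fn(logits, y[i:i+32])
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    losses.append(loss.item())                   # last batch loss per epoch

print(f"first={losses[0]:.3f} last={losses[-1]:.3f}")
```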




&lt;h2&gt;
  
  
  Results
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fruo4af5l7qfj06iuzhi4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fruo4af5l7qfj06iuzhi4.png" alt="plots of the training and testing loss and accuracy" width="800" height="375"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Training Loss&lt;/strong&gt; → drops quickly (ViT has more than enough capacity to memorize a small dataset).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Validation Loss&lt;/strong&gt; → may plateau or rise (overfitting risk).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Accuracy&lt;/strong&gt; → Training near 100%, validation reflects true performance.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;ViTs are large models. On small datasets, they overfit quickly. For real use, try &lt;strong&gt;pretrained ViTs&lt;/strong&gt; + fine-tuning.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;ViT proves attention works for vision, not just text.&lt;/li&gt;
&lt;li&gt;Even a scratch implementation highlights the shift from &lt;strong&gt;pixels → patches → tokens&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Next steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Try on larger datasets (CIFAR-100, ImageNet subset).&lt;/li&gt;
&lt;li&gt;Use pretrained weights (HuggingFace, timm).&lt;/li&gt;
&lt;li&gt;Experiment with augmentations (Mixup, CutMix).&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;




</description>
      <category>deeplearning</category>
      <category>pytorch</category>
      <category>machinelearning</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Python Tip: Quickly Comment or Uncomment Multiple Lines</title>
      <dc:creator>anesmeftah</dc:creator>
      <pubDate>Tue, 30 Sep 2025 14:52:23 +0000</pubDate>
      <link>https://dev.to/anesmeftah/python-tip-quickly-comment-or-uncomment-multiple-lines-24jc</link>
      <guid>https://dev.to/anesmeftah/python-tip-quickly-comment-or-uncomment-multiple-lines-24jc</guid>
      <description>&lt;p&gt;Tired of typing # on every line in Python? Most editors (VS Code, PyCharm, Jupyter) have a shortcut that makes this instant:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Windows/Linux&lt;/strong&gt;: Ctrl + /&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Mac&lt;/strong&gt;: Cmd + /&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How it works:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Highlight the lines you want to comment or uncomment.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Press the shortcut.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Done! Press again to remove the comments.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;code&gt;x = 10&lt;br&gt;
y = 20&lt;br&gt;
print(x + y)&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Highlight and hit Ctrl + / → all lines get commented out in one go. Press again → they’re back!&lt;/p&gt;

&lt;p&gt;A tiny shortcut, but a huge time-saver when debugging or experimenting.&lt;/p&gt;

</description>
      <category>python</category>
      <category>coding</category>
      <category>cleancode</category>
      <category>programming</category>
    </item>
    <item>
      <title>Struggling to figure out what to learn next in tech?</title>
      <dc:creator>anesmeftah</dc:creator>
      <pubDate>Mon, 29 Sep 2025 22:05:09 +0000</pubDate>
      <link>https://dev.to/anesmeftah/struggling-to-figure-out-what-to-learn-next-in-tech-3b6b</link>
      <guid>https://dev.to/anesmeftah/struggling-to-figure-out-what-to-learn-next-in-tech-3b6b</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx2tjiicoymsz4rqcp9jl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx2tjiicoymsz4rqcp9jl.png" alt=" " width="800" height="783"&gt;&lt;/a&gt;&lt;br&gt;
Check out &lt;a href="https://roadmap.sh/" rel="noopener noreferrer"&gt;roadmap.sh&lt;/a&gt;&lt;br&gt;
Visual roadmaps for any role, from Frontend to AI Engineer.&lt;br&gt;
Plan your path, track your progress, and never get lost in tutorials again!&lt;/p&gt;

</description>
      <category>career</category>
      <category>learningpath</category>
      <category>coding</category>
      <category>tutorial</category>
    </item>
  </channel>
</rss>
