<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Adarsh Sriuma</title>
    <description>The latest articles on DEV Community by Adarsh Sriuma (@tsadarsh).</description>
    <link>https://dev.to/tsadarsh</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F531979%2F5f06b955-3157-469d-a367-04b492104052.JPEG</url>
      <title>DEV Community: Adarsh Sriuma</title>
      <link>https://dev.to/tsadarsh</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/tsadarsh"/>
    <language>en</language>
    <item>
      <title>My Model Cheated: How Grad-CAM Exposed a 95% Accuracy Lie</title>
      <dc:creator>Adarsh Sriuma</dc:creator>
      <pubDate>Sun, 30 Nov 2025 21:40:05 +0000</pubDate>
      <link>https://dev.to/tsadarsh/my-model-cheated-how-grad-cam-exposed-a-95-accuracy-lie-2n81</link>
      <guid>https://dev.to/tsadarsh/my-model-cheated-how-grad-cam-exposed-a-95-accuracy-lie-2n81</guid>
      <description>&lt;h2&gt;
  
  
  The Project
&lt;/h2&gt;

&lt;p&gt;This week I worked on a simple deep learning project to get familiar with the PyTorch APIs. Since I had just returned from a staycation where I drove a rental car for over 600 miles, I decided to build a "Car Damage Classifier." I kept the goal simple: the model classifies whether an uploaded image of a car is "DAMAGED" or "UNDAMAGED".&lt;/p&gt;

&lt;p&gt;I couldn't find a ready-to-use dataset, so I created one by merging a damaged car dataset (source: [lplenka/coco-car-damage-detection-dataset]) and a new car dataset (source: [yamaerenay/100-images-of-top-50-car-brands]). I made sure each class had roughly the same number of images in my training set to avoid class imbalance.&lt;/p&gt;
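&lt;p&gt;A quick way to sanity-check the balance after merging is to count files per class folder. Here is a minimal sketch, assuming an ImageFolder-style layout (the folder names are hypothetical, not the exact structure of this project):&lt;/p&gt;

```python
from pathlib import Path

def count_images_per_class(root):
    """Count image files in each class subfolder of an ImageFolder-style dataset."""
    exts = {".jpg", ".jpeg", ".png"}
    counts = {}
    for class_dir in sorted(Path(root).iterdir()):
        if class_dir.is_dir():
            n = sum(1 for p in class_dir.rglob("*") if p.suffix.lower() in exts)
            counts[class_dir.name] = n
    return counts
```

&lt;p&gt;Running this on something like &lt;code&gt;dataset/train&lt;/code&gt; with &lt;code&gt;damaged&lt;/code&gt; and &lt;code&gt;undamaged&lt;/code&gt; subfolders shows at a glance whether one class dominates the other.&lt;/p&gt;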

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhmkjguwd6uzrmlxmglh9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhmkjguwd6uzrmlxmglh9.png" alt="Image showing number of images in dataset" width="501" height="165"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I began training with ResNet-18 as the backbone. After just 10 epochs, my code reported a validation accuracy of 95.08%. I was thrilled. I tested the model with random images from the internet, and it looked like it had truly learned to classify the cars with high confidence.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs3jpm0ztudi2kluvcmz0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs3jpm0ztudi2kluvcmz0.png" alt="Image showing the model correctly classifying a damaged car" width="608" height="603"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Discovery
&lt;/h2&gt;

&lt;p&gt;Instead of calling it a day, I got curious about where the model was looking when it made its decision. I implemented Grad-CAM (Gradient-weighted Class Activation Mapping) to draw a heatmap over the image, highlighting the pixels that triggered the classification.&lt;/p&gt;
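&lt;p&gt;For reference, a minimal Grad-CAM can be implemented with PyTorch hooks. This is an illustrative reconstruction rather than my exact project code; &lt;code&gt;target_layer&lt;/code&gt; is whichever convolutional layer you want to inspect:&lt;/p&gt;

```python
import torch
import torch.nn.functional as F

def grad_cam(model, image, target_layer, class_idx):
    """Grad-CAM heatmap for a single image tensor of shape [1, 3, H, W]."""
    activations, gradients = [], []

    def fwd_hook(module, inp, out):
        activations.append(out)

    def bwd_hook(module, grad_in, grad_out):
        gradients.append(grad_out[0])

    h1 = target_layer.register_forward_hook(fwd_hook)
    h2 = target_layer.register_full_backward_hook(bwd_hook)
    try:
        model.eval()
        logits = model(image)
        model.zero_grad()
        logits[0, class_idx].backward()   # gradient of the chosen class score
    finally:
        h1.remove()
        h2.remove()

    acts = activations[0]                            # [1, C, h, w]
    grads = gradients[0]                             # [1, C, h, w]
    weights = grads.mean(dim=(2, 3), keepdim=True)   # global-average-pool the gradients
    cam = F.relu((weights * acts).sum(dim=1))        # weighted sum over channels
    cam = cam / (cam.max() + 1e-8)                   # normalize to [0, 1]
    return cam.detach()
```

&lt;p&gt;For a ResNet-18 backbone, &lt;code&gt;model.layer4[-1]&lt;/code&gt; is a common choice of target layer; the resulting &lt;code&gt;cam&lt;/code&gt; is upsampled to the image size and overlaid as the heatmap.&lt;/p&gt;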

&lt;p&gt;I was expecting a red "hot spot" on the dents, scratches, or broken bumpers.&lt;/p&gt;

&lt;p&gt;But the results were the direct opposite.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fezulofsz4d6yfk9hotnv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fezulofsz4d6yfk9hotnv.png" alt="Image of a heatmap overlayed on the picture of a damaged car showing red hotspots in the background and blue spots in the damaged parts of the car" width="794" height="240"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I observed the background of the image was red, while the car itself was often blue (ignored).&lt;/p&gt;

&lt;p&gt;This was a big "Aha!" moment. The model was cheating, and it was very good at it. Taking a closer look at my data, I realized the bias:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Damaged cars:&lt;/strong&gt; Most images were taken in junkyards or at accident sites, meaning the backgrounds were cluttered with debris, dirt, and fences.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgoclxpurlpooii1lk81s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgoclxpurlpooii1lk81s.png" alt="Row of 4 images showing a sample batch from dataset" width="552" height="173"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Undamaged cars:&lt;/strong&gt; Most images were professional stock photos with clean, smooth backgrounds (showrooms, open roads, nature).&lt;/p&gt;

&lt;p&gt;The model wasn't looking at the car at all. It was just checking: "Is the background messy? If yes, then Damaged."&lt;/p&gt;

&lt;h2&gt;
  
  
  The "Showroom" Test
&lt;/h2&gt;

&lt;p&gt;To verify this, I took a photo of a damaged car that the model correctly classified as "DAMAGED." I used Photoshop to remove the messy background and replaced it with a smooth white gradient (mimicking a showroom).&lt;/p&gt;

&lt;p&gt;Sure enough, the model instantly changed its prediction to "UNDAMAGED."&lt;/p&gt;

&lt;p&gt;How do I fix this? I am brainstorming ways to force the model to look at the car, not the scenery:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Architecture change:&lt;/strong&gt; Employ an object-detection model (like YOLO) to force the classifier to look only inside a bounding box around the car.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Preprocessing:&lt;/strong&gt; Remove the background of all images before training (though this is computationally expensive).&lt;/p&gt;
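&lt;p&gt;For the bounding-box route, the preprocessing is essentially: detect the car, expand the box slightly for context, clamp it to the image bounds, and crop before classifying. A small sketch of the box handling (the detector itself, e.g. YOLO, is assumed to supply &lt;code&gt;box&lt;/code&gt;):&lt;/p&gt;

```python
def expand_and_clamp_box(box, img_w, img_h, margin=0.1):
    """Expand an (x1, y1, x2, y2) car box by a relative margin, clamped to the image."""
    x1, y1, x2, y2 = box
    dw = (x2 - x1) * margin
    dh = (y2 - y1) * margin
    x1 = max(0, x1 - dw)          # keep the expanded box inside the image
    y1 = max(0, y1 - dh)
    x2 = min(img_w, x2 + dw)
    y2 = min(img_h, y2 + dh)
    return (x1, y1, x2, y2)
```

&lt;p&gt;Cropping to this box before classification would at least prevent the model from seeing the junkyard fence behind the car.&lt;/p&gt;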

&lt;p&gt;I am curious to learn from other deep learning practitioners: how do you usually solve this "background bias" problem in your projects?&lt;/p&gt;

</description>
      <category>deeplearning</category>
      <category>pytorch</category>
      <category>todayilearned</category>
      <category>beginners</category>
    </item>
    <item>
      <title>How I wrote a working compensation algorithm in one sitting</title>
      <dc:creator>Adarsh Sriuma</dc:creator>
      <pubDate>Mon, 24 Nov 2025 19:36:26 +0000</pubDate>
      <link>https://dev.to/tsadarsh/how-i-wrote-a-working-compensation-algorithm-in-one-sitting-35gh</link>
      <guid>https://dev.to/tsadarsh/how-i-wrote-a-working-compensation-algorithm-in-one-sitting-35gh</guid>
      <description>&lt;p&gt;I am working on a research project at my university. I was tasked to implement an algorithm that modifies the analog readout of a sensor depending on the sensor location. This algorithm involved looking up two separate look up tables and computing percentage offset required for the sensor readout. &lt;/p&gt;

&lt;p&gt;This new feature needed to be integrated with the existing firmware running on the microcontroller. When I took a look at the existing firmware I knew I was in trouble. The code was poorly documented with no unit tests. I was short on time and I needed my algorithm to work.&lt;/p&gt;

&lt;p&gt;I decided to try the Test-Driven Development (TDD) workflow because I had recently taken the Clean Code course by Robert C. Martin as part of my internship requirements.&lt;/p&gt;

&lt;p&gt;The workflow was simple. Write a "failing" unit test. Code up the new feature just enough to make the test pass. Add a new "failing" unit test. Jump back to the feature to make it pass. Repeat until you have a fully working unit test suite and module. &lt;/p&gt;
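&lt;p&gt;To illustrate the loop, here is a hypothetical first pair of tests and the minimal code that makes them pass (&lt;code&gt;percent_offset&lt;/code&gt; and both tables are invented for illustration, not the real research firmware):&lt;/p&gt;

```python
import unittest

# Hypothetical look-up tables keyed by sensor location; not the real project data.
GAIN_TABLE = {"A": 1.0, "B": 1.5}
BIAS_TABLE = {"A": 0.0, "B": 2.0}

def percent_offset(location, readout):
    """Percentage offset to apply to a readout, from two location-keyed tables."""
    gain = GAIN_TABLE[location]
    bias = BIAS_TABLE[location]
    return (readout * gain + bias - readout) / readout * 100.0

class TestPercentOffset(unittest.TestCase):
    # Step 1: written first, watched fail, then the function above was filled in.
    def test_location_a_has_no_offset(self):
        self.assertEqual(percent_offset("A", 50.0), 0.0)

    # Step 2: the next failing test drives support for the second table entry.
    def test_location_b_scales_and_biases(self):
        self.assertAlmostEqual(percent_offset("B", 100.0), 52.0)
```

&lt;p&gt;Each test name states the behavior it pins down, which is also where the "free documentation" benefit comes from.&lt;/p&gt;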

&lt;p&gt;Contrary to what I assumed, following this procedure took less time to implement the feature, and it was easy to spot when something broke as I changed parts of the new code. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If you have not given TDD a chance or have not heard about it, this is your sign to give it a try. TDD forces you to write testable code. And you get free documentation if you give your unit tests verbose, descriptive names.&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>programming</category>
      <category>tdd</category>
    </item>
    <item>
      <title>Lifelogging: An interesting read 📖</title>
      <dc:creator>Adarsh Sriuma</dc:creator>
      <pubDate>Fri, 11 Nov 2022 16:56:18 +0000</pubDate>
      <link>https://dev.to/tsadarsh/lifelogging-an-intresting-read-4odn</link>
      <guid>https://dev.to/tsadarsh/lifelogging-an-intresting-read-4odn</guid>
      <description>&lt;p&gt;I was trying to find out ways to integrate the Intel Realsense d435i camera into the ROS2 navigation stack that I am building when I stumbled upon the term VIO (Visual Inertial Odometry). This emerging technology uses CMOS (Complementary Metal Oxide Semiconductors) sensors for vision and IMU (Inertial Measurement Unit) sensors to sense balance and orientation. VIO tries to emulate our natural behavior to localize and map an unknown environment.&lt;/p&gt;

&lt;p&gt;This &lt;a href="https://dev.intelrealsense.com/docs/intel-realsensetm-visual-slam-and-the-t265-tracking-camera" rel="noopener noreferrer"&gt;article&lt;/a&gt; by Anders Grunnet-Jepsen, Michael Harville, Brian Fulkerson, Daniel Piro, Shirit Brook, Jim Radford gives a detailed explanation and insights about Visual SLAM (Simultaneous Localization and Mapping) and how the Intel T265 Tracking Camera uses this technology.&lt;/p&gt;

&lt;h2&gt;
  
  
  Egocentric vision
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;... examples of using such an embedded sensor for &lt;strong&gt;ego-centric&lt;/strong&gt; tracking of 6DOF for a robot and AR/VR headset respectively. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The term, "ego-centric" seemed very peculiar to me, so I did a Google search, and I am happy that I did. While going through &lt;a href="https://en.wikipedia.org/wiki/Egocentric_vision" rel="noopener noreferrer"&gt;this Wikipedia article&lt;/a&gt; I came across the hyperlink to &lt;a href="https://en.wikipedia.org/wiki/Microsoft_SenseCam" rel="noopener noreferrer"&gt;Microsoft Sensecam&lt;/a&gt;. Now this got me very fascinated. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Microsoft's SenseCam is a lifelogging camera with fisheye lens and trigger sensors, such as accelerometers, heat sensing, and audio, invented by Lyndsay Williams, patent[1] granted in 2009. Usually worn around the neck, Sensecam is used for the &lt;strong&gt;MyLifeBits&lt;/strong&gt; project, a lifetime storage database. Early developers were James Srinivasan and Trevor Taylor.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  MyLifeBits
&lt;/h2&gt;

&lt;p&gt;What's this project about? Wikipedia, here I come.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;MyLifeBits is a life-logging experiment begun in 2001.[1] It is a Microsoft Research project inspired by Vannevar Bush's hypothetical Memex computer system. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;After reading a little about Vannevar Bush, I started to understand the purpose of this project. The MyLifeBits project was an attempt to fulfill Vannevar Bush's vision of a system that complements our memory. I was surprised to learn that the chief experimental subject, Gordon Bell, wore a camera around his neck for close to 8 years!&lt;/p&gt;

&lt;p&gt;The collected data from the camera, along with the books, emails, and articles read by Gordon Bell, was later compiled and &lt;a href="http://totalrecallbook.com/" rel="noopener noreferrer"&gt;published&lt;/a&gt; as a paperback! It was a little disappointing to find out that Gordon stopped wearing the camera in 2009. The article titled "&lt;a href="https://www.computerworld.com/article/3048497/lifelogging-is-dead-for-now.html" rel="noopener noreferrer"&gt;Lifelogging is dead (for now)&lt;/a&gt;" gives more details on why the project ended. It is an interesting read. &lt;/p&gt;

&lt;p&gt;This is an excerpt from the "Lifelogging is dead (for now)" article which made me super excited:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;We'll interact with our data using future versions of Siri-like virtual assistants. Instead of searching through terabytes of data, we'll simply ask our assistant: "Hey, what was the name of that restaurant I enjoyed in London a few years ago?"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;em&gt;Back to learning on how to integrate my Intel Realsense camera in my navigation stack.&lt;/em&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>xmlrpc.client.Fault: &lt;Fault 1: "&lt;class 'rclpy._rclpy_pybind11.InvalidHandle'&gt;:cannot use Destroyable because destruction was...</title>
      <dc:creator>Adarsh Sriuma</dc:creator>
      <pubDate>Thu, 10 Nov 2022 06:27:08 +0000</pubDate>
      <link>https://dev.to/tsadarsh/xmlrpcclientfault-fault-1-class-rclpyrclpypybind11invalidhandlecannot-use-destroyable-because-destruction-was-3f35</link>
      <guid>https://dev.to/tsadarsh/xmlrpcclientfault-fault-1-class-rclpyrclpypybind11invalidhandlecannot-use-destroyable-because-destruction-was-3f35</guid>
      <description>&lt;p&gt;After sourcing &lt;code&gt;. /opt/ros/humble/setup.bash&lt;/code&gt; and running &lt;code&gt;ros2 topic list&lt;/code&gt;, I got the following error:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Traceback (most recent call last):
  File "/opt/ros/humble/bin/ros2", line 33, in &amp;lt;module&amp;gt;
    sys.exit(load_entry_point('ros2cli==0.18.3', 'console_scripts', 'ros2')())
  File "/opt/ros/humble/lib/python3.10/site-packages/ros2cli/cli.py", line 89, in main
    rc = extension.main(parser=parser, args=args)
  File "/opt/ros/humble/lib/python3.10/site-packages/ros2topic/command/topic.py", line 41, in main
    return extension.main(args=args)
  File "/opt/ros/humble/lib/python3.10/site-packages/ros2topic/verb/list.py", line 56, in main
    topic_names_and_types = get_topic_names_and_types(
  File "/opt/ros/humble/lib/python3.10/site-packages/ros2topic/api/__init__.py", line 41, in get_topic_names_and_types
    topic_names_and_types = node.get_topic_names_and_types()
  File "/usr/lib/python3.10/xmlrpc/client.py", line 1122, in __call__
    return self.__send(self.__name, args)
  File "/usr/lib/python3.10/xmlrpc/client.py", line 1464, in __request
    response = self.__transport.request(
  File "/usr/lib/python3.10/xmlrpc/client.py", line 1166, in request
    return self.single_request(host, handler, request_body, verbose)
  File "/usr/lib/python3.10/xmlrpc/client.py", line 1182, in single_request
    return self.parse_response(resp)
  File "/usr/lib/python3.10/xmlrpc/client.py", line 1354, in parse_response
    return u.close()
  File "/usr/lib/python3.10/xmlrpc/client.py", line 668, in close
    raise Fault(**self._stack[0])
xmlrpc.client.Fault: &amp;lt;Fault 1: "&amp;lt;class 'rclpy._rclpy_pybind11.InvalidHandle'&amp;gt;:cannot use Destroyable because destruction was requested"&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This happens when a node crashed unexpectedly at some earlier point. Although the &lt;code&gt;ros2 launch&lt;/code&gt; and &lt;code&gt;ros2 run&lt;/code&gt; commands still work, &lt;code&gt;ros2 topic list&lt;/code&gt; no longer does.&lt;/p&gt;

&lt;h2&gt;
  
  
  Solution
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;code&gt;ros2 daemon stop&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;ros2 daemon start&lt;/code&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This fixed the issue for me.&lt;br&gt;
&lt;a href="https://lightrun.com/answers/ros2-ros2cli-ros2-node-list-stops-working-after-unrelated-node-crashes" rel="noopener noreferrer"&gt;Source&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ros2</category>
      <category>humble</category>
    </item>
    <item>
      <title>Migrate your free Heroku dynos to Fly.io now!</title>
      <dc:creator>Adarsh Sriuma</dc:creator>
      <pubDate>Thu, 10 Nov 2022 06:07:02 +0000</pubDate>
      <link>https://dev.to/tsadarsh/migrate-your-free-heroku-dynos-to-flyio-now-5g6f</link>
      <guid>https://dev.to/tsadarsh/migrate-your-free-heroku-dynos-to-flyio-now-5g6f</guid>
      <description>&lt;p&gt;So I had this blog website of mine hosted in Heroku. I was using one of their free dynos. Life was good. And then I get a mail from Heroku saying that:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Reminder Upgrade now: Heroku free product plans end November 28th&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I was also using their free Heroku Postgres to store my blog entries, so I was not keen on migrating my &lt;em&gt;cool&lt;/em&gt; blog away from Heroku. I decided to upgrade my free dyno to a basic (paid) dyno. It was, after all, only $5 per month. Wait, $5 a month! That is no small amount for me. So I decided not to upgrade (tbh, I went ahead and put in my card details only to learn that Indian banks have put a hold on recurring payments, so my card became invalid for the Heroku subscription). &lt;/p&gt;

&lt;p&gt;Fortunately on September 27th PyCoder's Weekly &lt;a href="https://pycoders.com/issues/544" rel="noopener noreferrer"&gt;Issue #544&lt;/a&gt; had an &lt;a href="https://testdriven.io/blog/heroku-alternatives/" rel="noopener noreferrer"&gt;article&lt;/a&gt; about Heroku alternatives. It is from this article that I came to know about &lt;a href="https://fly.io/" rel="noopener noreferrer"&gt;Fly.io&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;Fly.io turned out to be cooler than I expected. This is what they say:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If you can build it into a Dockerfile, we can run it. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;and more importantly:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If you have a Heroku app you'd like to bring closer to your users, give the Fly &lt;a href="https://fly.io/launch/heroku" rel="noopener noreferrer"&gt;Turboku Launcher&lt;/a&gt; a spin.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;So guess what I did. I opened up &lt;a href="https://fly.io/launch/heroku" rel="noopener noreferrer"&gt;Turboku Launcher&lt;/a&gt; and followed the steps mentioned there. It's only 3 simple steps. &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Connect to Heroku&lt;/li&gt;
&lt;li&gt;Configure Heroku (choose app-name, location, heroku-app-name).&lt;/li&gt;
&lt;li&gt;Launch on fly.io (On a click of button).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That's it. Fly.io takes care of everything else. The only extra step was entering my card details for verification, which surprisingly worked here (Stripe gateway).&lt;/p&gt;

&lt;p&gt;Up and running 🚀. My &lt;a href="https://tsadarsh.fly.dev/" rel="noopener noreferrer"&gt;blog website&lt;/a&gt; was migrated in less than 5 minutes. Happy again!&lt;/p&gt;

</description>
      <category>heroku</category>
      <category>flyio</category>
      <category>deployment</category>
      <category>migration</category>
    </item>
    <item>
      <title>LXD for running different ROS2 versions</title>
      <dc:creator>Adarsh Sriuma</dc:creator>
      <pubDate>Wed, 09 Nov 2022 18:01:01 +0000</pubDate>
      <link>https://dev.to/tsadarsh/lxd-for-runing-different-ros2-versions-362d</link>
      <guid>https://dev.to/tsadarsh/lxd-for-runing-different-ros2-versions-362d</guid>
      <description>&lt;p&gt;This post gives brief details on how to user LXD to run multiple ROS/ROS2 versions in a Ubuntu machine.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;LXD is a next generation system container and virtual machine manager. It offers a unified user experience around full Linux systems running inside containers or virtual machines.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In my use case I am running ROS2 Humble on my LXD host machine (Ubuntu 22.04). I want to run ROS2 Foxy (which needs Ubuntu 20.04) in an LXD container in order to use the &lt;code&gt;realsense-ros&lt;/code&gt; &lt;a href="https://github.com/IntelRealSense/realsense-ros" rel="noopener noreferrer"&gt;package&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Install LXD
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;snap install lxd&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  First time configuration
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;lxd init&lt;/code&gt; # defaults are fine&lt;/p&gt;

&lt;h2&gt;
  
  
  Launch the desired Ubuntu version
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;lxc launch ubuntu:20.04 ros-foxy&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Run a login shell as the default ubuntu user:
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;lxc exec ros-foxy -- sudo -iu ubuntu bash&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Do a &lt;code&gt;sudo apt update&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;If the packages fail to download and you get a bunch of "W: Failed to fetch ..." warnings, there is no internet access inside the container. &lt;/p&gt;

&lt;p&gt;If you have Docker installed, the Docker daemon is most probably causing this problem. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Yep, it is. Docker made your default FORWARD policy to be DROP which will eat all traffic that’s not meant for Docker itself…&lt;/p&gt;

&lt;p&gt;iptables -P FORWARD ACCEPT should temporarily fix that&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;So run &lt;code&gt;iptables -P FORWARD ACCEPT&lt;/code&gt; to fix this issue, then retry &lt;code&gt;sudo apt update&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Run RViz and other GUI applications
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;lxc profile create gui&lt;/code&gt;
Note your display variable by running &lt;code&gt;echo $DISPLAY&lt;/code&gt; in your terminal. My terminal returned &lt;code&gt;:1&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;lxc profile edit gui&lt;/code&gt;
Paste the following config and change the display variable (in places where "#change here" is mentioned):
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;config:
  environment.DISPLAY: :1 #change here
  raw.idmap: both 1000 1000
  user.user-data: |
    #cloud-config
    runcmd:
      - 'sed -i "s/; enable-shm = yes/enable-shm = no/g" /etc/pulse/client.conf'
      - 'echo export PULSE_SERVER=unix:/tmp/.pulse-native | tee --append /home/ubuntu/.profile'
    packages:
      - x11-apps
      - mesa-utils
      - pulseaudio
description: GUI LXD profile
devices:
  PASocket:
    path: /tmp/.pulse-native
    source: /run/user/1000/pulse/native
    type: disk
  X0:
    path: /tmp/.X11-unix/X1 #change here
    source: /tmp/.X11-unix/X1 # change here
    type: disk
  mygpu:
    type: gpu
name: gui
used_by:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Apply this profile to the existing container and restart using:
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;lxc profile add ros-foxy gui&lt;/code&gt;&lt;br&gt;
&lt;code&gt;lxc restart ros-foxy&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Launch the &lt;code&gt;ros-foxy&lt;/code&gt; container and &lt;code&gt;lxc exec&lt;/code&gt; into it using &lt;code&gt;lxc exec ros-foxy -- sudo -iu ubuntu bash&lt;/code&gt;. Now all GUI applications will be X11-forwarded.&lt;/p&gt;

&lt;h2&gt;
  
  
  Unsupported version 0 of Verneed record
&lt;/h2&gt;

&lt;p&gt;If this error shows up when trying to &lt;code&gt;lxc exec&lt;/code&gt;, try restarting your device. This should temporarily fix it. More information can be found &lt;a href="https://forum.snapcraft.io/t/unsupported-version-0-of-verneed-record-linux-6-0/32160" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://ubuntu.com/blog/ros-development-with-lxd" rel="noopener noreferrer"&gt;ROS Development with LXD&lt;/a&gt;&lt;br&gt;
&lt;a href="https://discuss.linuxcontainers.org/t/no-internet-access-inside-container-but-container-is-able-to-ping-to-host/13168" rel="noopener noreferrer"&gt;No internet access inside container but container is able to ping to host&lt;/a&gt;&lt;/p&gt;

</description>
      <category>lxd</category>
      <category>ros2</category>
      <category>containerapps</category>
      <category>foxy</category>
    </item>
    <item>
      <title>Run Nav2 in a Docker Container</title>
      <dc:creator>Adarsh Sriuma</dc:creator>
      <pubDate>Thu, 27 Oct 2022 20:57:17 +0000</pubDate>
      <link>https://dev.to/tsadarsh/run-nav2-in-a-docker-container-33ge</link>
      <guid>https://dev.to/tsadarsh/run-nav2-in-a-docker-container-33ge</guid>
      <description>&lt;p&gt;Nav2 is the Navigation stack for ROS2 (foxy). This post lists out the step required to setup and test Nav2 stack inside a Docker container.&lt;/p&gt;

&lt;p&gt;Install Docker by going through the Docker docs.&lt;/p&gt;

&lt;p&gt;Pull the ROS2 docker image from hub.docker.com:&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker pull tsadarsh/ros2-foxy&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;To view the GUI applications like RViz and Gazebo, enable clients to access your device XServer:&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;xhost +&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;Set DISPLAY variable and mount X11 unix-domain socket to pipe the GUI applications outside the container:&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker run -it -e DISPLAY=$DISPLAY -v /tmp/.X11-unix/:/tmp/.X11-unix/ tsadarsh/ros2-foxy&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;Run a &lt;code&gt;sudo apt update&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Follow the Getting started guide from the official ROS2 &lt;a href="https://navigation.ros.org/getting_started/index.html" rel="noopener noreferrer"&gt;docs&lt;/a&gt; and install:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt install ros-foxy-navigation2
sudo apt install ros-foxy-nav2-bringup
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Install the Turtlebot 3 packages:&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo apt install ros-foxy-turtlebot3*&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;To run the example, first source the foxy &lt;code&gt;setup.bash&lt;/code&gt; and set the environment variables:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;source /opt/ros/&amp;lt;ros2-distro&amp;gt;/setup.bash
export TURTLEBOT3_MODEL=waffle
export GAZEBO_MODEL_PATH=$GAZEBO_MODEL_PATH:/opt/ros/&amp;lt;ros2-distro&amp;gt;/share/turtlebot3_gazebo/models
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally launch the example turtlebot3 simulation:&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ros2 launch nav2_bringup tb3_simulation_launch.py headless:=False&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;Note: Gazebo initially takes 5-10 minutes to start, download, and cache all the required models. Until then, the screen appears to be frozen.&lt;/p&gt;

&lt;p&gt;If you encounter this error: &lt;code&gt;libGL error: MESA-LOADER: failed to open amdgpu: /usr/lib/dri/amdgpu_dri.so: cannot open shared object file: No such file or directory (search paths /usr/lib/x86_64-linux-gnu/dri:\$${ORIGIN}/dri:/usr/lib/dri, suffix _dri)&lt;/code&gt;, then mount &lt;code&gt;/dev/dri&lt;/code&gt; to your container:&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker run -it -e DISPLAY=$DISPLAY -v /tmp/.X11-unix/:/tmp/.X11-unix/ -v /dev/dri:/dev/dri tsadarsh/ros2-foxy&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

</description>
      <category>ros2</category>
      <category>gazebo</category>
      <category>docker</category>
    </item>
    <item>
      <title>2022</title>
      <dc:creator>Adarsh Sriuma</dc:creator>
      <pubDate>Mon, 03 Jan 2022 18:15:55 +0000</pubDate>
      <link>https://dev.to/tsadarsh/2022-3gm4</link>
      <guid>https://dev.to/tsadarsh/2022-3gm4</guid>
      <description>&lt;p&gt;Here is another vent-out from me. I felt like writing today so here I am. It is roughly 72 hours past 2022 and I am still how to utilize my time effectively in this new year. I had high hopes in the last few weeks of Dec'21 that I would make the next year a perfect one. But to be honest it hasn't been a great start.&lt;/p&gt;

&lt;p&gt;So where is the problem? I suspect it is the lack of plan. I never planned anything solid to do in 2022. All I wanted was to be a better version of myself by doing more things that are productive. What activities do I classify as "productive"? That is a good starting question. If I can answer this question then I will get to know if my activities are aligned with my goals. Let me make it clear once again that my goal is to "make myself a better version by doing things that are productive".&lt;/p&gt;

&lt;p&gt;I am doing an undergraduate course in Electrical and Electronics Engineering. Though I used to work/learn in Computer Science projects before I even took up the undergrad course it only makes sense to put more of my energy into projects related to my current course of study - Electrical and Electronics. The reason why I chose the Electrical and Electronics course instead of a CS major is a separate topic by itself. I did try to involve myself simultaneously in both CS and EE projects in the previous year only to end up in a hectic schedule and frequent burn-outs. So I am now clear to delve into only one area of interest at a time. &lt;/p&gt;

&lt;p&gt;Okay, so it is fixed that my primary focus is on doing projects and learning skills aligned with Electrical and Electronics. What exactly do I do in this field? I can think of a few pathways. One, I could do a little extra on top of what I learn in my professional course: if I get to learn about some circuit in today's class, I could come back in the evening and try building a prototype of it. Two, I could learn things that are not taught in the course but are still related to the field of Electrical or Electronics. The third pathway can be crossed out, as it would involve me in CS projects after my day in college.&lt;/p&gt;

&lt;p&gt;I am still not clear which I should choose: pathway 1 or 2. But writing out my thought process did bring more clarity. Maybe I will reach a concrete decision in some time. Meanwhile, the comments section is open for your thoughts and for how your New Year started. Did you start the year in a well-planned manner? How many of you are still holding on to your resolutions? &lt;/p&gt;

</description>
      <category>newyear</category>
      <category>aboutme</category>
      <category>life</category>
    </item>
    <item>
      <title>Can we help the people of Kabul?</title>
      <dc:creator>Adarsh Sriuma</dc:creator>
      <pubDate>Mon, 16 Aug 2021 08:01:36 +0000</pubDate>
      <link>https://dev.to/tsadarsh/worsening-situation-in-kabul-jan</link>
      <guid>https://dev.to/tsadarsh/worsening-situation-in-kabul-jan</guid>
      <description>&lt;p&gt;It deeply pains me to hear the news about the situation in Kabul. As developers are we capable of doing anything constructive regarding this matter? &lt;/p&gt;

&lt;p&gt;I am aware that governments and big, cruel organizations are involved in this. But I personally feel helpless, not being able to do anything. Can we developers come together to render help in any possible way?&lt;/p&gt;

</description>
      <category>help</category>
    </item>
    <item>
      <title>Tracking every minute of my day.</title>
      <dc:creator>Adarsh Sriuma</dc:creator>
      <pubDate>Wed, 04 Aug 2021 03:02:24 +0000</pubDate>
      <link>https://dev.to/tsadarsh/tracking-every-minute-of-my-day-4ad8</link>
      <guid>https://dev.to/tsadarsh/tracking-every-minute-of-my-day-4ad8</guid>
      <description>&lt;p&gt;I became more aware of where I was spending my time. It was like someone watching me from behind. I became my own boss. Here I will be sharing my experience in tracking every minute of day for two weeks.&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Even before I started tracking my time, I was well aware that I was spending a lot of it on unproductive activities. I used to hop between YouTube, coding, college classes and Instagram whenever I felt like it. There was no accountability.&lt;/p&gt;

&lt;p&gt;Every now and then I also used to read blogs on &lt;a href="//dev.to"&gt;Dev&lt;/a&gt;. On July 18 I happened to read a very interesting blog by &lt;a class="mentioned-user" href="https://dev.to/dragosbln"&gt;@dragosbln&lt;/a&gt; titled &lt;strong&gt;"I tracked every minute of my time for the last 4 months. Here are 7 totally unexpected results"&lt;/strong&gt;; I'm linking the article &lt;a href="https://dev.to/dragosbln/i-tracked-every-minute-of-my-time-for-the-last-4-months-here-are-7-totally-unexpected-results-2dna"&gt;here&lt;/a&gt;. It was an eye-opener for me. The desire to track my own time became so strong that I started tracking it from that very minute.&lt;/p&gt;

&lt;p&gt;I am grateful to &lt;a class="mentioned-user" href="https://dev.to/dragosbln"&gt;@dragosbln&lt;/a&gt; for sharing his experience in such a detailed manner. The Toggl timer application proved to be extremely easy and fun to use, and it syncs across devices. Though I mostly use the mobile app to track my time, the web app gives better analyses and plots of the tracked time.&lt;/p&gt;

&lt;h2&gt;
  
  
  About Toggl
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://toggl.com/" rel="noopener noreferrer"&gt;Toggl&lt;/a&gt; timer provides a month of free premium trial after which one need to pay to continue using the &lt;strong&gt;Premium&lt;/strong&gt; plan. The &lt;strong&gt;free plan&lt;/strong&gt; packs enough features for getting most of things done. &lt;/p&gt;

&lt;p&gt;I started by creating some projects: &lt;strong&gt;Leisure, Online class, Sleep, Routines, Fitness, Spiritual and Misc&lt;/strong&gt;. These &lt;strong&gt;projects&lt;/strong&gt; are similar to the concept of a &lt;em&gt;class&lt;/em&gt; in the object-oriented programming paradigm. Every activity I track falls into one of the above-mentioned projects.&lt;/p&gt;
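&lt;p&gt;To make the analogy concrete, here is a loose Python sketch of how I think of it. This is just my mental model, not Toggl's actual data model or API:&lt;/p&gt;

```python
from dataclasses import dataclass

# Each Toggl "project" acts like a class/category, and every tracked
# activity is an entry tagged with exactly one project.
# (My own illustration -- not Toggl's real data model or API.)

@dataclass
class Entry:
    activity: str
    minutes: int
    project: str  # one of: Leisure, Online class, Sleep, Routines, ...

day = [
    Entry("YouTube", 45, "Leisure"),
    Entry("Circuits lecture", 90, "Online class"),
    Entry("Evening run", 30, "Fitness"),
]

# The daily analysis boils down to total minutes per project
totals = {}
for e in day:
    totals[e.project] = totals.get(e.project, 0) + e.minutes
print(totals)  # {'Leisure': 45, 'Online class': 90, 'Fitness': 30}
```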

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffw8tix7hgoi4l0bfiqw4.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffw8tix7hgoi4l0bfiqw4.PNG" alt="track" width="800" height="544"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It is definitely a pain to track time at the start. I often used to forget to start/stop the timer when I moved on to the next activity. Furthermore, it is not pleasant to learn that you have spent more than 40% of the day on &lt;strong&gt;Leisure&lt;/strong&gt; activities on some days.&lt;/p&gt;

&lt;p&gt;But yes, this is exactly where we can seek improvement. I was able to cut down a lot of "switching-between" activities. Checking WhatsApp in between tasks became a strict no-no unless I made an entry in the Toggl timer.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;It felt simply great to know that I had control over my time. After two weeks of tracking it, I am now beginning to develop this as a habit. The daily and weekly analyses give me enough information to know which activities consume most of my time.&lt;/p&gt;

&lt;p&gt;Though I haven't made any significant changes to my daily activities, I made a point of getting comfortable with tracking my time by just recording it and not focusing too much on optimizing it. Some of the comments on the OP mentioned methods like "time-blocking" to improve efficiency and productivity. Now that I can comfortably track my time, I will try to implement these methods in the coming weeks.&lt;/p&gt;

&lt;p&gt;I now believe that every person needs to have control over their time. Tracking how that time is spent is probably the best way to get started.&lt;/p&gt;

</description>
      <category>productivity</category>
      <category>career</category>
      <category>motivation</category>
    </item>
    <item>
      <title>Why you should cold mail?</title>
      <dc:creator>Adarsh Sriuma</dc:creator>
      <pubDate>Sun, 23 May 2021 15:59:15 +0000</pubDate>
      <link>https://dev.to/tsadarsh/journey-to-the-center-of-the-hiss-1jd6</link>
      <guid>https://dev.to/tsadarsh/journey-to-the-center-of-the-hiss-1jd6</guid>
      <description>&lt;p&gt;I could easily transition from Python2 to Python3. I didn't have to unlearn a lot. I took the Codecadamy's new course on Python3. After a few days in I realized that I needed to buy a subscription to continue the next chapter.&lt;/p&gt;

&lt;h2&gt;
  
  
  Teens have credit cards?
&lt;/h2&gt;

&lt;p&gt;Either provide us teens with a credit card and a monthly stipend, or provide the services for free. No, seriously, how am I supposed to pay for the subscription? "Adarsh, why don't you ask your parents to buy you the subscription? After all, it's for educational purposes."&lt;/p&gt;

&lt;p&gt;They wouldn't have agreed. At least, that is what I thought at the time. I would have been asked to provide a long list of justifications for why it was fruitful to invest money in this "new" endeavor of mine. I had started learning programming with no long-term or specific goals, just as a hobby.&lt;/p&gt;

&lt;h2&gt;
  
  
  Dejected but then enlightened by Mr. Jobs
&lt;/h2&gt;

&lt;p&gt;I felt really dejected. What can a teen do? How was I to pay for the subscription? Sure, I could learn Python from some other source that doesn't involve "credit cards". But I had fallen in love with &lt;a href="https://www.codecademy.com/" rel="noopener noreferrer"&gt;Codecademy&lt;/a&gt;. I was hoping for a miracle to happen.&lt;/p&gt;

&lt;p&gt;And the miracle happened! I had read the biography of Steve Jobs some time back, and I recalled an incident where teen Steve wanted some parts for a project, so he looked everywhere before deciding to directly contact Bill Hewlett of Hewlett-Packard! Teen Steve found the big businessman's number in the phone book and made the call. And to Steve's surprise, Mr. Hewlett answered! Steve got the parts that he wanted, plus a summer job. How crazy is that?&lt;/p&gt;

&lt;p&gt;So I did just that. No, I didn't make a phone call! But I mailed Codecademy expressing my desire to learn and my inability to pay for the course. I also said how super awesome Codecademy was. I went to bed with my fingers crossed. Guess what happened when I opened Codecademy the next morning? A violet &lt;strong&gt;Pro&lt;/strong&gt; label beamed after my name on the dashboard. I had been given a free Pro subscription for three months! The world isn't such a bad place after all.&lt;/p&gt;

&lt;h2&gt;
  
  
  Training to become a Pro with the Pro account
&lt;/h2&gt;

&lt;p&gt;I was super pumped after getting the &lt;strong&gt;Pro&lt;/strong&gt; subscription. I completed the Python 3 course and built some solid confidence in the language. I began to realize the beauty of programming. Though I couldn't build anything new with the knowledge I had, the process of learning gave me immense enjoyment.&lt;/p&gt;

&lt;p&gt;What did I do with my newly learnt skills? I will write about that soon.&lt;/p&gt;

</description>
      <category>howilearnt</category>
      <category>experience</category>
      <category>beginners</category>
    </item>
    <item>
      <title>"Hello, World!" for Communication</title>
      <dc:creator>Adarsh Sriuma</dc:creator>
      <pubDate>Sun, 16 May 2021 16:27:47 +0000</pubDate>
      <link>https://dev.to/tsadarsh/hello-world-for-communication-586g</link>
      <guid>https://dev.to/tsadarsh/hello-world-for-communication-586g</guid>
      <description>&lt;p&gt;This is my first post. I wanted to write about myself and the stuff I do for a long time. After reading the post by &lt;a class="mentioned-user" href="https://dev.to/danytulumidis"&gt;@danytulumidis&lt;/a&gt; on &lt;a href="https://dev.to/danytulumidis/communication-is-key-4h48"&gt;Communication is Key&lt;/a&gt; I decided to finally do this.&lt;/p&gt;

&lt;h1&gt;
  
  
  A little about me
&lt;/h1&gt;

&lt;p&gt;I'm doing my undergraduate in Electrical and Electronics Engineering at SRMIST, Chennai. In my high school days, programming was not something my friends or the people I knew were involved in. After dabbling in some languages, I finally fell in love with Python.&lt;/p&gt;

&lt;p&gt;I had taken a Computer Science course to learn C before I knew Python. It was horrible! All I learnt was boilerplate code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight c"&gt;&lt;code&gt;&lt;span class="cp"&gt;#include&lt;/span&gt; &lt;span class="cpf"&gt;&amp;lt;stdio.h&amp;gt;&lt;/span&gt;&lt;span class="cp"&gt;
#include&lt;/span&gt; &lt;span class="cpf"&gt;&amp;lt;conio.h&amp;gt;&lt;/span&gt;&lt;span class="cp"&gt;
&lt;/span&gt;&lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;My teacher did a very poor job of explaining what these lines actually meant (maybe I never asked). I couldn't wrap my mind around these repeating lines in every piece of C code I wrote. For a beginner like me back then, they were blocks of missing key information.&lt;/p&gt;

&lt;p&gt;I completed the course somehow but never opened the C reference book again. I was over with programming then. Fin.&lt;/p&gt;

&lt;p&gt;But things changed. After a few months I tried to learn programming once again, but this time from a good resource. The top Google result brought me to &lt;a href="https://www.codecademy.com/" rel="noopener noreferrer"&gt;Codecademy&lt;/a&gt;. After reading some articles on which language I should learn, I decided to start a journey with JavaScript. I took &lt;strong&gt;notes&lt;/strong&gt; and became more aware of programming practices. The most important takeaway for me was to &lt;strong&gt;refer to the documentation&lt;/strong&gt;; I had always thought one needed to know all the nitty-gritty of the language by heart!&lt;/p&gt;

&lt;p&gt;I didn't last long in the course. I don't know why I left it mid-way, and I never used JS after that (till now!). My next encounter with programming was when I entered 11th grade. Until then, the 11th-grade CS course had been &lt;strong&gt;Introduction to C++&lt;/strong&gt;, but the education system decided to give us a choice between C++ and Python. And Python was chosen.&lt;/p&gt;

&lt;p&gt;I wanted to have a head-start, so I took up a Python course on Codecademy. I fell in love with Python from the very first print statement.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;print "Hello, World!"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It was so easy to learn Python. The language syntax was very close to spoken English, and I didn't have to wade through technical jargon. A few weeks in, my CS teacher told us that we were learning the older version of Python (Python 2.7) and that we needed to update to the latest Python 3.6. A lot of syntax changes had taken place in the newer version. Notably, my favorite print statement was now:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;print("Hello, World!")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
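&lt;p&gt;The print statement was not the only thing that changed. Here is a short Python 3 snippet showing a couple of other common differences; these are my own illustrations, not examples from the course:&lt;/p&gt;

```python
# A few of the Python 2 -> 3 changes, written for Python 3.
print("Hello, World!")  # print is a function now, so parentheses are required
print(7 / 2)            # true division: 3.5 (Python 2 printed 3)
print(7 // 2)           # floor division: 3 (the old behaviour of / in Python 2)
# raw_input() from Python 2 was also renamed to plain input() in Python 3
```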



&lt;p&gt;Did I take up the challenge to learn Python3.6?&lt;/p&gt;

&lt;p&gt;P.S. Writing this post gave me more insight into myself. I've never shared this journey of mine with anyone. I will try to write more soon.&lt;/p&gt;

</description>
      <category>introduction</category>
      <category>aboutme</category>
      <category>howilearntprogramming</category>
      <category>beginners</category>
    </item>
  </channel>
</rss>
