<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Darren Broderick (DBro)</title>
    <description>The latest articles on DEV Community by Darren Broderick (DBro) (@iamdbro).</description>
    <link>https://dev.to/iamdbro</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F499938%2Fbbe67961-0a29-4637-9827-4d53d30225bd.png</url>
      <title>DEV Community: Darren Broderick (DBro)</title>
      <link>https://dev.to/iamdbro</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/iamdbro"/>
    <language>en</language>
    <item>
      <title>Unleashing the Future: AWS-sponsored STEAM Event Ignites Passion for Machine Learning</title>
      <dc:creator>Darren Broderick (DBro)</dc:creator>
      <pubDate>Wed, 13 Mar 2024 16:03:10 +0000</pubDate>
      <link>https://dev.to/iamdbro/unleashing-the-future-aws-sponsored-steam-event-ignites-passion-for-machine-learning-52pe</link>
      <guid>https://dev.to/iamdbro/unleashing-the-future-aws-sponsored-steam-event-ignites-passion-for-machine-learning-52pe</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fno7xsgdmoe6bpc4sly7w.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fno7xsgdmoe6bpc4sly7w.jpg" alt="Image description" width="800" height="385"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On January 23rd, 2024, the Mid Antrim Museum in Ballymena hosted an inspiring AWS-sponsored event, immersing students in robotics and machine learning. Physicist Brian Cox inspired four groups to explore the captivating world of machine learning, focusing on the AWS DeepRacer League—an intersection of technology and education. &lt;/p&gt;

&lt;p&gt;A key highlight for me was demystifying machine learning and mathematics for the students attending the track, using the AWS DeepRacer kit to demonstrate real-world applications in autonomous racing. &lt;/p&gt;

&lt;p&gt;The event’s comprehensive curriculum provided students with a holistic understanding of machine learning and robotics. They engaged with the AWS DeepRacer kit, designing and racing models on the track, bridging theory and application. &lt;/p&gt;

&lt;p&gt;Professor Brian Cox, renowned for science education, enthusiastically joined the DeepRacer fun, winning second place in a friendly competition. &lt;/p&gt;

&lt;p&gt;I had a fantastic day representing OpenData Belfast and AWS by teaching students more about opportunities in data and machine learning. On the track we saw strong engagement throughout the day, with a personal highlight of Professor Brian Cox getting hands-on and even asking a few questions about ODB! &lt;/p&gt;

&lt;p&gt;The Mid Antrim Museum transformed into an innovation hub, underscoring the connection between technology and education. This AWS-supported event delivered an exhilarating day of learning and discovery, spotlighting the dedicated support for STEAM education!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6l5z25x06elkm7emqzgi.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6l5z25x06elkm7emqzgi.jpg" alt="Image description" width="800" height="408"&gt;&lt;/a&gt;  &lt;/p&gt;

</description>
    </item>
    <item>
      <title>2024 : AWS DeepRacer Local Training On DRFC</title>
      <dc:creator>Darren Broderick (DBro)</dc:creator>
      <pubDate>Wed, 13 Mar 2024 14:29:50 +0000</pubDate>
      <link>https://dev.to/iamdbro/aws-deepracer-local-training-drfc-l3l</link>
      <guid>https://dev.to/iamdbro/aws-deepracer-local-training-drfc-l3l</guid>
      <description>

&lt;p&gt;Training Locally on DRFC -&amp;gt; Troubleshooting&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Handy Commands&lt;/strong&gt;&lt;br&gt;
dr-upload-model -b -f -i&lt;br&gt;
dr-upload-model -b -f -I "name of model"&lt;br&gt;
If you run into disk-space problems, run "docker system prune"&lt;/p&gt;

&lt;p&gt;So you’re training locally on DRFC for AWS DeepRacer, great!&lt;/p&gt;

&lt;p&gt;But it’s not always straightforward. It’s easy to forget how to run or update your stack after the initial setup, or you hit new errors when starting training again, especially after a season break.&lt;/p&gt;

&lt;p&gt;This article can be used as a supplement to the main DRFC guide:&lt;br&gt;
&lt;a href="https://aws-deepracer-community.github.io/deepracer-for-cloud" rel="noopener noreferrer"&gt;https://aws-deepracer-community.github.io/deepracer-for-cloud&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It’s a list of commands and steps I follow, plus problems and solutions I’ve faced when training locally.&lt;/p&gt;

&lt;p&gt;Hopefully it can help you too, but keep in mind it is tailored to how I run things.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Contents&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Handy monthly items&lt;/li&gt;
&lt;li&gt;General Training Starting Steps&lt;/li&gt;
&lt;li&gt;Virtual DRFC Upload&lt;/li&gt;
&lt;li&gt;Physical DRFC Upload&lt;/li&gt;
&lt;li&gt;Container Update Links&lt;/li&gt;
&lt;li&gt;Open GL Robomaker&lt;/li&gt;
&lt;li&gt;New Sagemaker -&amp;gt; M40 Tagging&lt;/li&gt;
&lt;li&gt;Log Analysis&lt;/li&gt;
&lt;li&gt;Run Second DRFC Instance&lt;/li&gt;
&lt;li&gt;Steps for fresh DRFC&lt;/li&gt;
&lt;li&gt;Troubleshooting DRFC (List of issues &amp;amp; solutions)&lt;/li&gt;
&lt;li&gt;Miscellaneous&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Handy monthly items&lt;/strong&gt;&lt;br&gt;
Latest Robomaker Container (For Training)&lt;br&gt;
&lt;a href="https://hub.docker.com/r/aws" rel="noopener noreferrer"&gt;https://hub.docker.com/r/aws&lt;/a&gt; deep racercommunity/deepracer-robomaker/tags?page=1&amp;amp;ordering=last_updated&lt;/p&gt;

&lt;p&gt;All Track Files &amp;amp; Details (For DR_WORLD_NAME &amp;amp; Log Analysis)&lt;br&gt;
&lt;a href="https://github.com/aws-deepracer-community/deepracer-race-data/tree/main/raw_data/tracks" rel="noopener noreferrer"&gt;https://github.com/aws-deepracer-community/deepracer-race-data/tree/main/raw_data/tracks&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Commands&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;docker ps -a&lt;/li&gt;
&lt;li&gt;docker images&lt;/li&gt;
&lt;li&gt;docker service ls&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;General Training Starting Steps&lt;/strong&gt;&lt;br&gt;
These are commands I run if starting from a reboot:&lt;/p&gt;

&lt;p&gt;source bin/activate.sh&lt;br&gt;
sudo liquidctl set fan1 speed 30 (this is my own fan setting)&lt;br&gt;
dr-increment-training -f&lt;br&gt;
dr-update OR dr-update-env (I tend to favour -env)&lt;br&gt;
dr-start-training OR dr-start-training -w&lt;br&gt;
dr-start-viewer OR dr-update-viewer&lt;br&gt;
&lt;a href="http://127.0.0.1:8100" rel="noopener noreferrer"&gt;http://127.0.0.1:8100&lt;/a&gt; OR &lt;a href="http://localhost:8100" rel="noopener noreferrer"&gt;http://localhost:8100&lt;/a&gt;&lt;br&gt;
dr-logs-robomaker (dr-logs-robomaker -n2 for worker 2, etc.)&lt;br&gt;
dr-logs-sagemaker&lt;br&gt;
nvidia-smi (check temperatures)&lt;br&gt;
htop to check threads and memory usage (I try to maximise my worker count, but keep usage under 75%)&lt;br&gt;
dr-start-evaluation -c &amp;amp; dr-stop-evaluation&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Virtual DRFC Upload&lt;/strong&gt;&lt;br&gt;
aws configure&lt;br&gt;
dr-upload-model -b -f&lt;br&gt;
Uploads the best checkpoint to S3&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Physical DRFC Upload&lt;/strong&gt;&lt;br&gt;
dr-upload-car-zip -f&lt;br&gt;
Sagemaker must be running for this to work&lt;br&gt;
Only uses the last checkpoint, not the best&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Container Update Links&lt;/strong&gt;&lt;br&gt;
Check your version with the command "docker images"&lt;/p&gt;

&lt;p&gt;Run docker service ls to make sure you see s3_minio.&lt;/p&gt;
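To avoid kicking off training against a half-up stack, that check can be wrapped in a small pre-flight script. This is just a sketch of my own habit, not part of DRFC; the only DRFC-specific detail is the s3_minio service name, and check_minio is an illustrative helper.

```shell
# Pre-flight sketch before dr-start-training: confirm the minio service is up.
check_minio() {
  # $1: output of `docker service ls`
  echo "$1" | grep -q 's3_minio'
}

# Usage (with Docker running):
#   if check_minio "$(docker service ls)"; then
#     echo "minio is up - safe to start training"
#   else
#     echo "minio missing - try dr-reload first"
#   fi
```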

&lt;p&gt;&lt;strong&gt;Sagemaker&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://hub.docker.com/r/awsdeepracercommunity/deepracer-sagemaker/tags?page=1&amp;amp;ordering=last_updated" rel="noopener noreferrer"&gt;https://hub.docker.com/r/awsdeepracercommunity/deepracer-sagemaker/tags?page=1&amp;amp;ordering=last_updated&lt;/a&gt;&lt;br&gt;
For new Sagemaker images follow this guide:&lt;br&gt;
&lt;a href="https://github.com/aws-deepracer-community/deepracer-for-cloud/blob/master/docs/multi_gpu.md" rel="noopener noreferrer"&gt;https://github.com/aws-deepracer-community/deepracer-for-cloud/blob/master/docs/multi_gpu.md&lt;/a&gt;&lt;br&gt;
Robomaker&lt;br&gt;
&lt;a href="https://hub.docker.com/r/aws" rel="noopener noreferrer"&gt;https://hub.docker.com/r/aws&lt;/a&gt; deep racercommunity/deepracer-robomaker/tags?page=1&amp;amp;ordering=last_updated&lt;br&gt;
RL Coach&lt;br&gt;
&lt;a href="https://hub.docker.com/r/awsdeepracercommunity/deepracer-rlcoach/tags" rel="noopener noreferrer"&gt;https://hub.docker.com/r/awsdeepracercommunity/deepracer-rlcoach/tags&lt;/a&gt;&lt;br&gt;
The Linux terminal startup script is called ".bashrc"&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Open GL Robomaker&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://aws-deepracer-community.github.io/deepracer-for-cloud/opengl.html" rel="noopener noreferrer"&gt;https://aws-deepracer-community.github.io/deepracer-for-cloud/opengl.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxugz21fe15qoq5onv7le.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxugz21fe15qoq5onv7le.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;example -&amp;gt; docker pull awsdeepracercommunity/deepracer-robomaker:4.0.12-gpu-gl&lt;/li&gt;
&lt;li&gt;Then in system.env:&lt;/li&gt;
&lt;li&gt;DR_HOST_X=True; uses the local X server rather than starting one inside the Docker container.&lt;/li&gt;
&lt;li&gt;DR_ROBOMAKER_IMAGE; choose the tag for an OpenGL-enabled image - e.g. cpu-gl-avx for an image where Tensorflow will use the CPU, or gpu-gl for an image where Tensorflow will also use the GPU.&lt;/li&gt;
&lt;li&gt;Run echo $DISPLAY and note the value; it should be :0 but might be :1&lt;/li&gt;
&lt;li&gt;Set the DR_DISPLAY value in system.env to match that echo output&lt;/li&gt;
&lt;li&gt;dr-reload&lt;/li&gt;
&lt;li&gt;source utils/setup-xorg.sh&lt;/li&gt;
&lt;/ul&gt;
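Putting the bullet points above together, the relevant system.env lines might look like this. It's a sketch: the image tag is the 4.0.12-gpu-gl example from above, and the display value has to match your own echo $DISPLAY output.

```shell
# Illustrative system.env fragment for OpenGL Robomaker (values are examples)
DR_HOST_X=True                      # use the local X server, not one in the container
DR_ROBOMAKER_IMAGE=4.0.12-gpu-gl    # an OpenGL-enabled image tag
DR_DISPLAY=:0                       # must match your `echo $DISPLAY` (:0 or :1)
```

After editing, run dr-reload so DRFC picks up the changes.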

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzzz05gc1c5fwisa9lu76.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzzz05gc1c5fwisa9lu76.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;source utils/start-xorg.sh&lt;/li&gt;
&lt;li&gt;You should see the Xorg processes in nvidia-smi once you run the start-xorg.sh script&lt;/li&gt;
&lt;li&gt;sudo pkill x11vnc&lt;/li&gt;
&lt;li&gt;sudo pkill Xorg&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;New Sagemaker -&amp;gt; M40 Tagging (redundant from v5.1.1)&lt;/strong&gt;&lt;br&gt;
With the latest images you don’t need to compile a specific image (like your -m40 image)&lt;/p&gt;

&lt;p&gt;run -&amp;gt; docker tag 2b4e84b8c10a awsdeepracercommunity/deepracer-sagemaker:gpu-m40&lt;/p&gt;
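Rather than copying the image ID by hand (the 2b4e84b8c10a above is from my machine; yours will differ), you can pull it out of docker images. image_id below is an illustrative helper, not a DRFC command:

```shell
# Pick the image ID for a repository out of `docker images` output.
image_id() {
  # $1: output of `docker images`, $2: repository name to match
  echo "$1" | awk -v repo="$2" '$1 ~ repo {print $3; exit}'
}

# Usage:
#   docker tag "$(image_id "$(docker images)" deepracer-sagemaker)" \
#     awsdeepracercommunity/deepracer-sagemaker:gpu-m40
```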

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpm8g5p0j16776rsu33ho.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpm8g5p0j16776rsu33ho.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Log Analysis&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;run -&amp;gt; dr-start-loganalysis&lt;/li&gt;
&lt;li&gt;The only change needed is model_logs_root&lt;/li&gt;
&lt;li&gt;e.g. ‘minio/bucket/model-name/0’&lt;/li&gt;
&lt;li&gt;All track files &amp;amp; details: &lt;a href="https://github.com/aws-deepracer-community/deepracer-race-data/tree/main/raw_data/tracks" rel="noopener noreferrer"&gt;https://github.com/aws-deepracer-community/deepracer-race-data/tree/main/raw_data/tracks&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;You might have to upload the new track to the tracks folder&lt;/li&gt;
&lt;li&gt;Repo for all racer data: &lt;a href="https://github.com/aws-deepracer-community/deepracer-race-data/tree/main/raw_data/leaderboards" rel="noopener noreferrer"&gt;https://github.com/aws-deepracer-community/deepracer-race-data/tree/main/raw_data/leaderboards&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Run Second DRFC Instance&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create 2 different run.env files or use 2 folders&lt;/li&gt;
&lt;li&gt;The DR_RUN_ID keeps things separate&lt;/li&gt;
&lt;li&gt;Only 1 minio should be running&lt;/li&gt;
&lt;li&gt;Use a unique model name&lt;/li&gt;
&lt;li&gt;Run source bin/activate.sh run-1.env to activate a separate environment&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Steps for fresh DRFC&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://aws-deepracer-community.github.io/deepracer-for-cloud/installation.html" rel="noopener noreferrer"&gt;https://aws-deepracer-community.github.io/deepracer-for-cloud/installation.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;./bin/prepare.sh &amp;amp;&amp;amp; sudo reboot&lt;/li&gt;
&lt;li&gt;docker start&lt;/li&gt;
&lt;li&gt;ARCH=gpu&lt;/li&gt;
&lt;li&gt;Run LARS script -&amp;gt; source bin/lars_one.sh&lt;/li&gt;
&lt;li&gt;docker swarm init (if this errors, run ifconfig -a to grab your IP, then run docker swarm init --advertise-addr with it, as in the next steps)&lt;/li&gt;
&lt;li&gt;ifconfig -a&lt;/li&gt;
&lt;li&gt;docker swarm init&lt;/li&gt;
&lt;li&gt;docker swarm init --advertise-addr 000.000.0.000&lt;/li&gt;
&lt;li&gt;sudo ./bin/init.sh -a gpu -c local&lt;/li&gt;
&lt;li&gt;docker images&lt;/li&gt;
&lt;li&gt;docker tag xxxxxxx awsdeepracercommunity/deepracer-sagemaker:gpu-m40&lt;/li&gt;
&lt;li&gt;source bin/activate.sh&lt;/li&gt;
&lt;li&gt;vim run.env&lt;/li&gt;
&lt;li&gt;vim system.env&lt;/li&gt;
&lt;li&gt;dr-update&lt;/li&gt;
&lt;li&gt;aws configure --profile minio&lt;/li&gt;
&lt;li&gt;aws configure (use real AWS IAM details to allow upload of models)&lt;/li&gt;
&lt;li&gt;dr-reload&lt;/li&gt;
&lt;li&gt;docker ps -a&lt;/li&gt;
&lt;li&gt;Set up multiple GPUs&lt;/li&gt;
&lt;li&gt;cd custom-files&lt;/li&gt;
&lt;li&gt;vim the 3 files&lt;/li&gt;
&lt;li&gt;dr-upload-custom-files&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;A different editor option to vim is gedit.&lt;/p&gt;
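The swarm-init fallback in the steps above (grab an IP from ifconfig -a, then pass it to docker swarm init) can be scripted. first_ipv4 is an illustrative helper, under the assumption that the first non-loopback IPv4 address is the one you want to advertise:

```shell
# Extract the first non-loopback IPv4 address from `ifconfig -a` output.
first_ipv4() {
  # $1: output of `ifconfig -a`
  echo "$1" | grep -oE 'inet (addr:)?([0-9]{1,3}\.){3}[0-9]{1,3}' \
            | grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}' \
            | grep -v '^127\.' | head -n 1
}

# Usage:
#   docker swarm init --advertise-addr "$(first_ipv4 "$(ifconfig -a)")"
```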

&lt;p&gt;&lt;strong&gt;Troubleshooting DRFC (List of issues &amp;amp; solutions)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;General Tip&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpnaoktrqk757csh0voka.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpnaoktrqk757csh0voka.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It’s always worth checking if you are missing anything new that might have been added to the default files that DRFC would then be expecting.&lt;/p&gt;

&lt;p&gt;In particular, check the system.env and template-run.env files and compare them with your own.&lt;/p&gt;
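One way to act on that tip is to diff the keys in the shipped template against your own file. missing_keys is an illustrative helper, and the template path in the usage note is an assumption based on a standard DRFC checkout; adjust it to wherever your copy lives.

```shell
# Print env keys that exist in the template but not in your own file.
missing_keys() {
  # $1: template env file, $2: your env file
  comm -23 \
    <(grep -oE '^[A-Z0-9_]+' "$1" | sort -u) \
    <(grep -oE '^[A-Z0-9_]+' "$2" | sort -u)
}

# Usage (paths assume a standard DRFC checkout):
#   missing_keys defaults/template-system.env system.env
```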

&lt;p&gt;&lt;strong&gt;Troubleshooting Docker Start&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Docker failed to start? Try these diagnostic commands:&lt;/p&gt;

&lt;p&gt;docker ps -a&lt;br&gt;
docker service ls&lt;br&gt;
sudo service docker status&lt;br&gt;
sudo service --status-all&lt;br&gt;
sudo systemctl status docker.service&lt;/p&gt;

&lt;p&gt;If you see "Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?", try:&lt;/p&gt;

&lt;p&gt;sudo systemctl stop docker&lt;br&gt;
sudo systemctl start docker&lt;br&gt;
sudo systemctl enable docker&lt;br&gt;
sudo systemctl restart docker&lt;br&gt;
sudo service docker restart&lt;br&gt;
snap list&lt;br&gt;
sudo su THEN apt-get install docker.io&lt;br&gt;
Re-run Installing Docker (From Lars)&lt;br&gt;
cat /etc/docker/daemon.json&lt;br&gt;
apt-cache policy docker-ce&lt;br&gt;
sudo tail /var/log/syslog&lt;br&gt;
sudo cat /var/log/syslog | grep dockerd | tail&lt;br&gt;
"For me it was a missing file"&lt;/p&gt;

&lt;p&gt;sudo gedit /etc/docker/daemon.json&lt;br&gt;
Make /etc/docker/daemon.json look like below:&lt;br&gt;
{&lt;br&gt;
"runtimes": {&lt;br&gt;
"nvidia": {&lt;br&gt;
"path": "nvidia-container-runtime",&lt;br&gt;
"runtimeArgs": []&lt;br&gt;
}&lt;br&gt;
},&lt;br&gt;
"default-runtime": "nvidia"&lt;br&gt;
}&lt;/p&gt;
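A stray comma or curly quote in daemon.json will stop Docker from starting at all, so it's worth validating the file before restarting. This sketch uses jq, which appears in the installation list further down:

```shell
# Validate daemon.json before restarting Docker; jq exits non-zero on bad JSON.
jq . /etc/docker/daemon.json > /dev/null 2>&1 \
  && echo "daemon.json is valid JSON" \
  || echo "daemon.json is missing or has a syntax error - fix it before restarting"
```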

&lt;p&gt;Then restart Docker: sudo systemctl stop docker, then sudo systemctl start docker&lt;br&gt;
Test with -&amp;gt; docker images&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Troubleshooting Docker Swarm&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Could not connect to the endpoint URL: "&lt;a href="http://localhost:9000/bucket" rel="noopener noreferrer"&gt;http://localhost:9000/bucket&lt;/a&gt;"&lt;/p&gt;

&lt;p&gt;Error response from daemon: This node is not a swarm manager. Use “docker swarm init” or “docker swarm join” to connect this node to swarm and try again.&lt;/p&gt;

&lt;p&gt;You might have to disable ipv6 to stop docker pulling from multiple addresses&lt;/p&gt;

&lt;p&gt;Here’s how to disable IPv6 on Linux if you’re running a Red Hat-based system:&lt;/p&gt;

&lt;p&gt;Open the terminal window.&lt;br&gt;
Change to the root user.&lt;br&gt;
Type these commands:&lt;br&gt;
sudo sysctl -w net.ipv6.conf.all.disable_ipv6=1&lt;br&gt;
sudo sysctl -w net.ipv6.conf.default.disable_ipv6=1&lt;br&gt;
sudo sysctl -w net.ipv6.conf.lo.disable_ipv6=1&lt;br&gt;
To re-enable IPv6, type these commands:&lt;/p&gt;

&lt;p&gt;sudo sysctl -w net.ipv6.conf.all.disable_ipv6=0&lt;br&gt;
sudo sysctl -w net.ipv6.conf.default.disable_ipv6=0&lt;br&gt;
sudo sysctl -w net.ipv6.conf.lo.disable_ipv6=0&lt;br&gt;
sysctl -p&lt;br&gt;
run -&amp;gt; ./bin/init.sh (Resets run, system.env, hyperparam, RF &amp;amp; model_metadata)&lt;/p&gt;

&lt;p&gt;run -&amp;gt; docker pull minio/minio:RELEASE.2022-10-24T18-35-07Z&lt;/p&gt;

&lt;p&gt;Check DR_MINIO_IMAGE in system.env and make sure it’s set to:&lt;br&gt;
RELEASE.2022-10-24T18-35-07Z&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Useful Links&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Full Guide — &lt;a href="https://aws-deepracer-community.github.io/deepracer-for-cloud" rel="noopener noreferrer"&gt;https://aws-deepracer-community.github.io/deepracer-for-cloud&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Sudo — &lt;a href="https://phpraxis.wordpress.com/2016/09/27/enable-sudo-without-password-in-ubuntudebian" rel="noopener noreferrer"&gt;https://phpraxis.wordpress.com/2016/09/27/enable-sudo-without-password-in-ubuntudebian&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Training on multiple GPU — &lt;a href="https://github.com/aws-deepracer-community/deepracer-for-cloud/blob/master/docs/multi_gpu.md" rel="noopener noreferrer"&gt;https://github.com/aws-deepracer-community/deepracer-for-cloud/blob/master/docs/multi_gpu.md&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;nvidia monitor — &lt;a href="https://stackoverflow.com/questions/8223811/a-top-like-utility-for-monitoring-cuda-activity-on-a-gpu" rel="noopener noreferrer"&gt;https://stackoverflow.com/questions/8223811/a-top-like-utility-for-monitoring-cuda-activity-on-a-gpu&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Tesla M40 24GB specs — &lt;a href="https://www.microway.com/hpc-tech-tips/nvidia-tesla-m40-24gb-gpu-accelerator-maxwell-gm200-close" rel="noopener noreferrer"&gt;https://www.microway.com/hpc-tech-tips/nvidia-tesla-m40-24gb-gpu-accelerator-maxwell-gm200-close&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Complex shutdown — &lt;a href="https://www.maketecheasier.com/schedule-ubuntu-shutdown" rel="noopener noreferrer"&gt;https://www.maketecheasier.com/schedule-ubuntu-shutdown&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Sudo shutdown — &lt;a href="https://sdet.ro/blog/shutdown-ubuntu-with-timer" rel="noopener noreferrer"&gt;https://sdet.ro/blog/shutdown-ubuntu-with-timer&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Video trimmer — &lt;a href="https://launchpad.net/%7Ekdenlive/+archive/ubuntu/kdenlive-stable" rel="noopener noreferrer"&gt;https://launchpad.net/~kdenlive/+archive/ubuntu/kdenlive-stable&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Flatpak — &lt;a href="https://flatpak.org/setup/Ubuntu" rel="noopener noreferrer"&gt;https://flatpak.org/setup/Ubuntu&lt;/a&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Installation commands&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;sudo snap install jupyter&lt;/li&gt;
&lt;li&gt;sudo apt install git&lt;/li&gt;
&lt;li&gt;sudo apt install nvidia-cuda-toolkit&lt;/li&gt;
&lt;li&gt;sudo apt install curl&lt;/li&gt;
&lt;li&gt;sudo apt install jq&lt;/li&gt;
&lt;li&gt;sudo pip install liquidctl (to install fan controller globally)&lt;/li&gt;
&lt;li&gt;sudo apt install net-tools&lt;/li&gt;
&lt;li&gt;sudo apt install vim&lt;/li&gt;
&lt;li&gt;sudo apt-get install htop&lt;/li&gt;
&lt;li&gt;sudo apt install hddtemp&lt;/li&gt;
&lt;li&gt;sudo apt install lm-sensors&lt;/li&gt;
&lt;li&gt;pip install --user pipenv&lt;/li&gt;
&lt;li&gt;sudo apt install pipenv&lt;/li&gt;
&lt;li&gt;pipenv install jupyterlab&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Installing Docker&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;sudo su (run from root)&lt;/li&gt;
&lt;li&gt;curl -fsSL &lt;a href="https://download.docker.com/linux/ubuntu/gpg" rel="noopener noreferrer"&gt;https://download.docker.com/linux/ubuntu/gpg&lt;/a&gt; | sudo apt-key add -&lt;/li&gt;
&lt;li&gt;sudo add-apt-repository "deb [arch=amd64] &lt;a href="https://download.docker.com/linux/ubuntu" rel="noopener noreferrer"&gt;https://download.docker.com/linux/ubuntu&lt;/a&gt; $(lsb_release -cs) stable"&lt;/li&gt;
&lt;li&gt;sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -y --no-install-recommends docker-ce docker-ce-cli containerd.io&lt;/li&gt;
&lt;li&gt;sudo apt-get install -y --no-install-recommends nvidia-docker2 nvidia-container-toolkit nvidia-container-runtime&lt;/li&gt;
&lt;li&gt;sudo apt-get upgrade&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Steps for CUDA upgrade&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;First remove the existing driver:&lt;/li&gt;
&lt;li&gt;sudo dpkg -P $(dpkg -l | grep nvidia-driver | awk '{print $2}')&lt;/li&gt;
&lt;li&gt;sudo apt autoremove&lt;/li&gt;
&lt;li&gt;then add the new one:&lt;/li&gt;
&lt;li&gt;wget &lt;a href="https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-ubuntu2004.pin" rel="noopener noreferrer"&gt;https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-ubuntu2004.pin&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;sudo mv cuda-ubuntu2004.pin /etc/apt/preferences.d/cuda-repository-pin-600&lt;/li&gt;
&lt;li&gt;sudo apt-key adv --fetch-keys &lt;a href="https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/7fa2af80.pub" rel="noopener noreferrer"&gt;https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/7fa2af80.pub&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;sudo add-apt-repository "deb &lt;a href="https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/" rel="noopener noreferrer"&gt;https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/&lt;/a&gt; /"&lt;/li&gt;
&lt;li&gt;sudo apt update&lt;/li&gt;
&lt;li&gt;sudo apt -y install cuda&lt;/li&gt;
&lt;li&gt;then reboot and run nvidia-smi&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>deepracer</category>
      <category>localtraining</category>
      <category>drfc</category>
    </item>
    <item>
      <title>DeepRacer Virtual Racing</title>
      <dc:creator>Darren Broderick (DBro)</dc:creator>
      <pubDate>Thu, 23 Feb 2023 16:12:02 +0000</pubDate>
      <link>https://dev.to/iamdbro/aws-deepracer-local-training-drfc-2nm3</link>
      <guid>https://dev.to/iamdbro/aws-deepracer-local-training-drfc-2nm3</guid>
      <description>&lt;p&gt;Latest Track File(For Log Analysis)&lt;br&gt;
&lt;a href="https://github.com/aws-deepracer-community/deepracer-simapp/tree/master/bundle/deepracer_simulation_environment/share/deepracer_simulation_environment/routes"&gt;https://github.com/aws-deepracer-community/deepracer-simapp/tree/master/bundle/deepracer_simulation_environment/share/deepracer_simulation_environment/routes&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Latest Robomaker Container (For Training)&lt;br&gt;
&lt;a href="https://hub.docker.com/r/aws"&gt;https://hub.docker.com/r/aws&lt;/a&gt; deep racercommunity/deepracer-robomaker/tags?page=1&amp;amp;ordering=last_updated&lt;/p&gt;




&lt;h2&gt;
  
  
  General Training Starting Steps
&lt;/h2&gt;

&lt;p&gt;These are commands I run if starting from a reboot&lt;br&gt;
source bin/activate.sh&lt;br&gt;
sudo liquidctl set fan1 speed 30 &lt;br&gt;
(This is my own fan setting)&lt;br&gt;
dr-increment-training -f&lt;br&gt;
dr-update OR dr-update-env (I tend to favour -env)&lt;br&gt;
dr-start-training OR dr-start-training -w&lt;br&gt;
dr-start-viewer OR dr-update-viewer&lt;br&gt;
&lt;a href="http://127.0.0.1:8100"&gt;http://127.0.0.1:8100&lt;/a&gt; OR &lt;a href="http://localhost:8100"&gt;http://localhost:8100&lt;/a&gt;&lt;br&gt;
dr-logs-robomaker (dr-logs-robomaker -n2) for worker 2 etc&lt;br&gt;
dr-logs-sagemaker&lt;br&gt;
nvidia-smi (check temperatures)&lt;br&gt;
htop to check threads and memory usage &lt;br&gt;
(Try to maximise my worker count, but keep to &amp;lt;75%)&lt;br&gt;
dr-start-evaluation -c &amp;amp; dr-stop-evaluation&lt;/p&gt;




&lt;h2&gt;
  
  
  Virtual DRFC Upload
&lt;/h2&gt;

&lt;p&gt;aws configure&lt;br&gt;
dr-upload-model -b -f&lt;br&gt;
Uploads best checkpoint to s3&lt;/p&gt;




&lt;h2&gt;
  
  
  Physical DRFC Upload
&lt;/h2&gt;

&lt;p&gt;dr-upload-car-zip -f&lt;br&gt;
Sagemaker must be running for this to work&lt;br&gt;
Only uses last checkpoint, not best&lt;/p&gt;




&lt;h2&gt;
  
  
  Container Update Links
&lt;/h2&gt;

&lt;p&gt;Check your version with command "docker images"&lt;/p&gt;

&lt;p&gt;Sagemaker&lt;br&gt;
&lt;a href="https://hub.docker.com/r/awsdeepracercommunity/deepracer-sagemaker/tags?page=1&amp;amp;ordering=last_updated"&gt;https://hub.docker.com/r/awsdeepracercommunity/deepracer-sagemaker/tags?page=1&amp;amp;ordering=last_updated&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For new Sagemaker images follow this guide: &lt;br&gt;
&lt;a href="https://github.com/aws-deepracer-community/deepracer-for-cloud/blob/master/docs/multi_gpu.md"&gt;https://github.com/aws-deepracer-community/deepracer-for-cloud/blob/master/docs/multi_gpu.md&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Robomaker&lt;br&gt;
&lt;a href="https://hub.docker.com/r/aws"&gt;https://hub.docker.com/r/aws&lt;/a&gt; deep racercommunity/deepracer-robomaker/tags?page=1&amp;amp;ordering=last_updated&lt;/p&gt;

&lt;p&gt;RL Coach&lt;br&gt;
&lt;a href="https://hub.docker.com/r/awsdeepracercommunity/deepracer-rlcoach/tags"&gt;https://hub.docker.com/r/awsdeepracercommunity/deepracer-rlcoach/tags&lt;/a&gt;&lt;br&gt;
Linux terminal startup script is called ".bashrc"&lt;/p&gt;




&lt;h2&gt;
  
  
  Open GL Robomaker
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://aws-deepracer-community.github.io/deepracer-for-cloud/opengl.html"&gt;https://aws-deepracer-community.github.io/deepracer-for-cloud/opengl.html&lt;/a&gt;&lt;br&gt;
example -&amp;gt; docker pull awsdeepracercommunity/deepracer-robomaker:4.0.12-gpu-gl&lt;br&gt;
system.env: (Below bullet points)&lt;/p&gt;

&lt;p&gt;DR_HOST_X=True; uses the local X server rather than starting one within the docker container.&lt;/p&gt;

&lt;p&gt;DR_ROBOMAKER_IMAGE; choose the tag for an OpenGL-enabled image - e.g. cpu-gl-avx for an image where Tensorflow will use the CPU, or gpu-gl for an image where Tensorflow will also use the GPU.&lt;br&gt;
Do echo $DISPLAY and see what that is, should be :0 but might be :1&lt;br&gt;
Make system.env dr_display value same as echo value&lt;br&gt;
dr-reload&lt;/p&gt;

&lt;p&gt;source utils/setup-xorg.sh&lt;br&gt;
source utils/start-xorg.sh&lt;br&gt;
you should see the xorg stuff in nvidia-smi once you run the start-xorg.sh script&lt;br&gt;
sudo pkill x11vnc&lt;br&gt;
sudo pkill Xorg&lt;/p&gt;

&lt;h2&gt;
  
  
  New Sagemaker - M40 Tagging
&lt;/h2&gt;

&lt;p&gt;run -&amp;gt; docker tag 2b4e84b8c10a awsdeepracercommunity/deepracer-sagemaker:gpu-m40&lt;/p&gt;




&lt;h2&gt;
  
  
  Log Analysis
&lt;/h2&gt;

&lt;p&gt;run -&amp;gt; dr-start-loganalysis&lt;br&gt;
Only change needed is for model_logs_root &lt;br&gt;
e.g. 'minio/bucket/model-name/0'&lt;br&gt;
Tracks&lt;br&gt;
&lt;a href="https://github.com/aws-deepracer-community/deepracer-simapp/tree/master/bundle/deepracer_simulation_environment/share/deepracer_simulation_environment/routes"&gt;https://github.com/aws-deepracer-community/deepracer-simapp/tree/master/bundle/deepracer_simulation_environment/share/deepracer_simulation_environment/routes&lt;/a&gt;&lt;br&gt;
Might have to upload the new track to tracks folder&lt;br&gt;
Repo for all racer data&lt;br&gt;
&lt;a href="https://github.com/aws-deepracer-community/deepracer-race-data/tree/main/raw_data/leaderboards"&gt;https://github.com/aws-deepracer-community/deepracer-race-data/tree/main/raw_data/leaderboards&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Run Second DRFC Instance
&lt;/h2&gt;

&lt;p&gt;Create 2 different run.env or use 2 folders&lt;br&gt;
The DR_RUN_ID keeps things separate&lt;br&gt;
Only 1 minio should be running&lt;br&gt;
Use a unique model name&lt;br&gt;
Run source bin/activate.sh run-1.env to activate a separate environment&lt;/p&gt;




&lt;h2&gt;
  
  
  Steps for fresh DRFC
&lt;/h2&gt;

&lt;p&gt;./bin/prepare.sh &amp;amp;&amp;amp; sudo reboot&lt;br&gt;
docker start&lt;br&gt;
ARCH=gpu&lt;br&gt;
Run LARS script -&amp;gt; source bin/lars_one.sh&lt;br&gt;
docker swarm init (If issues run step 7 and grab IP, run step 8, check bottom for example)&lt;br&gt;
ifconfig -a&lt;br&gt;
docker swarm init&lt;br&gt;
docker swarm init --advertise-addr 000.000.0.000&lt;br&gt;
sudo ./bin/init.sh -a gpu -c local&lt;br&gt;
docker images&lt;br&gt;
docker tag xxxxxxx awsdeepracercommunity/deepracer-sagemaker:gpu-m40&lt;br&gt;
source bin/activate.sh&lt;br&gt;
vim run.env&lt;br&gt;
vim system.env&lt;br&gt;
dr-update&lt;br&gt;
aws configure --profile minio&lt;br&gt;
aws configure &lt;br&gt;
(use real AWS IAM details below to allow upload of models)&lt;br&gt;
dr-reload&lt;br&gt;
docker ps -a&lt;br&gt;
Setup multiple GPU&lt;br&gt;
cd custom-files&lt;br&gt;
vim on the 3 files&lt;br&gt;
dr-upload-custom-files&lt;/p&gt;

&lt;p&gt;Different editor option to vim&lt;br&gt;
gedit&lt;/p&gt;




&lt;h2&gt;
  
  
  Troubleshooting DRFC
&lt;/h2&gt;

&lt;p&gt;General Tip&lt;br&gt;
It's always worth checking whether you are missing anything new that has been added to the default files, which DRFC would then be expecting.&lt;br&gt;
In particular, compare the system.env and template-run.env files with your own.&lt;/p&gt;
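One way to do that comparison is to diff the variable names only, so value differences don't drown out genuinely missing keys. The file names and contents below are illustrative stand-ins for your system.env and DRFC's template file.

```shell
# Sketch: list variables the template defines that your own env file lacks.
# File names/contents are illustrative; in DRFC you would compare the shipped
# template (e.g. template-run.env / template-system.env) against your own copy.
cat > template-system.env <<'EOF'
DR_CLOUD=local
DR_MINIO_IMAGE=RELEASE.2022-10-24T18-35-07Z
DR_WORKERS=1
EOF
cat > system.env <<'EOF'
DR_CLOUD=local
DR_WORKERS=2
EOF
cut -d= -f1 template-system.env | sort > template-keys.txt
cut -d= -f1 system.env | sort > my-keys.txt
# Keys present only in the template are settings you may be missing:
comm -23 template-keys.txt my-keys.txt
```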




&lt;h2&gt;
  
  
  Troubleshooting Docker Start
&lt;/h2&gt;

&lt;p&gt;Docker failed to start&lt;br&gt;
docker ps -a&lt;br&gt;
docker service ls&lt;br&gt;
sudo service docker status&lt;br&gt;
sudo service --status-all&lt;br&gt;
sudo systemctl status docker.service&lt;/p&gt;

&lt;p&gt;Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?&lt;br&gt;
sudo systemctl stop docker&lt;br&gt;
sudo systemctl start docker&lt;br&gt;
sudo systemctl enable docker&lt;br&gt;
sudo systemctl restart docker&lt;br&gt;
sudo service docker restart&lt;br&gt;
snap list&lt;br&gt;
sudo su THEN apt-get install docker.io&lt;br&gt;
Re-run Installing Docker (From Lars)&lt;br&gt;
cat /etc/docker/daemon.json&lt;br&gt;
apt-cache policy docker-ce&lt;br&gt;
sudo tail /var/log/syslog&lt;br&gt;
sudo cat /var/log/syslog | grep dockerd | tail&lt;/p&gt;

&lt;p&gt;"For me it was a missing file"&lt;br&gt;
udo gedit /etc/docker/daemon.json&lt;br&gt;
Make /etc/docker/daemon.json look like below:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "runtimes": {
    "nvidia": {
      "path": "nvidia-container-runtime",
      "runtimeArgs": []
    }
  },
  "default-runtime": "nvidia"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Then run sudo systemctl stop docker followed by sudo systemctl start docker&lt;br&gt;
test with -&amp;gt; docker images&lt;/p&gt;
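Before restarting Docker it's worth sanity-checking the file. A small sketch, written to a scratch file here so the real /etc/docker/daemon.json is untouched:

```shell
# Sketch: write the daemon.json contents shown above to a scratch file and
# verify the nvidia default runtime is set (real path: /etc/docker/daemon.json).
cat > daemon.json <<'EOF'
{
  "runtimes": {
    "nvidia": {
      "path": "nvidia-container-runtime",
      "runtimeArgs": []
    }
  },
  "default-runtime": "nvidia"
}
EOF
grep -q '"default-runtime": "nvidia"' daemon.json && echo "nvidia is the default runtime"
```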




&lt;h2&gt;
  
  
  Troubleshooting Docker Swarm
&lt;/h2&gt;

&lt;p&gt;Could not connect to the endpoint URL: "&lt;a href="http://localhost:9000/bucket"&gt;http://localhost:9000/bucket&lt;/a&gt;"&lt;br&gt;
Error response from daemon: This node is not a swarm manager. Use "docker swarm init" or "docker swarm join" to connect this node to swarm and try again.&lt;br&gt;
You might have to disable IPv6 to stop Docker pulling from multiple addresses.&lt;br&gt;
Here's how to disable IPv6 on Linux if you're running a Red Hat-based system:&lt;br&gt;
Open the terminal window.&lt;br&gt;
Change to the root user.&lt;br&gt;
Type these commands:&lt;br&gt;
sysctl -w net.ipv6.conf.all.disable_ipv6=1&lt;br&gt;
sysctl -w net.ipv6.conf.default.disable_ipv6=1&lt;br&gt;
sysctl -w net.ipv6.conf.tun0.disable_ipv6=1&lt;br&gt;
To re-enable IPv6, type these commands:&lt;br&gt;
sysctl -w net.ipv6.conf.all.disable_ipv6=0&lt;br&gt;
sysctl -w net.ipv6.conf.default.disable_ipv6=0&lt;br&gt;
sysctl -w net.ipv6.conf.tun0.disable_ipv6=0&lt;br&gt;
sysctl -p&lt;/p&gt;
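Note the sysctl -w changes above do not survive a reboot. To persist them, the same keys can be appended to /etc/sysctl.conf; a sketch follows, written to a scratch file here (the tun0 line only applies if that interface exists on your machine).

```shell
# Sketch: persist the IPv6-off settings across reboots by appending them to
# sysctl.conf (scratch file here; the real path is /etc/sysctl.conf).
cat >> sysctl.conf <<'EOF'
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.tun0.disable_ipv6 = 1
EOF
# Then apply without rebooting: sudo sysctl -p
grep disable_ipv6 sysctl.conf
```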

&lt;p&gt;run -&amp;gt; ./bin/init.sh&lt;br&gt;
run -&amp;gt; docker pull minio/minio:RELEASE.2022-10-24T18-35-07Z&lt;br&gt;
For DR_MINIO_IMAGE in system.env, make sure it's set to:&lt;br&gt;
RELEASE.2022-10-24T18-35-07Z&lt;/p&gt;
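Pinning that tag can be done with a one-liner; the sed pattern assumes a DR_MINIO_IMAGE line already exists, and the file below is a scratch copy rather than your real system.env.

```shell
# Sketch: pin DR_MINIO_IMAGE in a scratch copy of system.env
# (in DRFC you would edit the real system.env, then run dr-reload).
cat > system.env <<'EOF'
DR_CLOUD=local
DR_MINIO_IMAGE=latest
EOF
sed -i 's/^DR_MINIO_IMAGE=.*/DR_MINIO_IMAGE=RELEASE.2022-10-24T18-35-07Z/' system.env
grep '^DR_MINIO_IMAGE=' system.env
```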




&lt;h2&gt;
  
  
  Other Fixes That Might Work for minio
&lt;/h2&gt;

&lt;p&gt;run -&amp;gt; docker swarm init&lt;br&gt;
Error response from daemon: This node is already part of a swarm. Use "docker swarm leave" to leave this swarm and join another one.&lt;br&gt;
run -&amp;gt; docker swarm leave&lt;br&gt;
run -&amp;gt; docker swarm init&lt;br&gt;
Error response from daemon: could not choose an IP address to advertise since this system has multiple addresses on interface&lt;br&gt;
run -&amp;gt; docker network ls&lt;br&gt;
sagemaker-local should appear in the network&lt;br&gt;
IF NOT&lt;br&gt;
There's a new fix script for this called "lars_swarm_fix.sh" in the bin folder.&lt;br&gt;
run-&amp;gt; docker swarm leave --force&lt;br&gt;
run -&amp;gt; source bin/lars_swarm_fix.sh&lt;/p&gt;

&lt;p&gt;The script might need an address; the error message will say: This node is not a swarm manager. Use "docker swarm init"&lt;br&gt;
run -&amp;gt; docker swarm init (and grab the first addr, example below)&lt;br&gt;
docker swarm init --advertise-addr 2a00:23c8::d6c3:4a71:9adb:87ad&lt;/p&gt;

&lt;p&gt;Swarm initialized: current node (wv3eqpslrstc6hm7n65744z) is now a manager.&lt;br&gt;
ifconfig -a&lt;br&gt;
You don't need to run the join-token command&lt;br&gt;
dr-start-training&lt;/p&gt;

&lt;p&gt;Swarm is a Docker concept: you can theoretically connect multiple machines together and run DRFC across them, with sagemaker on one PC and the robomakers spread out. Once you have cloned DRFC you can run bin/init.sh -a gpu -c local.&lt;/p&gt;




&lt;p&gt;Issue - Minio kept making new containers every 10 seconds&lt;/p&gt;

&lt;p&gt;Fix: &lt;a href="https://github.com/aws-deepracer-community/deepracer-for-cloud/pull/102/commits/a2db4df0a624ace87b89afcc7ff27f35fe9751fe"&gt;https://github.com/aws-deepracer-community/deepracer-for-cloud/pull/102/commits/a2db4df0a624ace87b89afcc7ff27f35fe9751fe&lt;/a&gt;&lt;br&gt;
docker service rm s3_minio&lt;br&gt;
source bin/activate.sh&lt;/p&gt;

&lt;p&gt;Issue - Minio containers kept exiting within 7 seconds&lt;br&gt;
docker ps -a&lt;br&gt;
docker service rm s3_minio&lt;br&gt;
docker-compose -f $DR_DIR/docker/docker-compose-local.yml -p s3 up&lt;br&gt;
docker ps&lt;br&gt;
ls -l data&lt;br&gt;
ls -l&lt;br&gt;
Issue was I ran the init script as root&lt;br&gt;
Fix -&amp;gt; chown -R dbro:dbro .&lt;br&gt;
docker-compose -f $DR_DIR/docker/docker-compose-local.yml -p s3 up&lt;br&gt;
docker ps -a showed there were now two minio containers running&lt;br&gt;
docker-compose -f $DR_DIR/docker/docker-compose-local.yml -p s3 down&lt;br&gt;
docker stack rm s3&lt;br&gt;
dr-reload&lt;br&gt;
docker ps&lt;br&gt;
dr-upload-custom-files&lt;/p&gt;

&lt;h2&gt;
  
  
  General Notes
&lt;/h2&gt;

&lt;p&gt;The m40 runs the sagemaker docker container&lt;br&gt;
System RAM runs robomaker&lt;br&gt;
You can offload some of the robomaker work to the GPU by using the opengl image&lt;br&gt;
Basically the model lives inside the GPU memory&lt;br&gt;
Training checkpoints are in -&amp;gt; cd data/minio/bucket&lt;/p&gt;
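Since the checkpoints live under data/minio/bucket, a quick way to see the most recently written file is to sort by modification time. The directory layout and file names below are dummies for illustration.

```shell
# Sketch: find the newest file under a bucket-style directory.
# Layout and names are dummies; in DRFC this would be data/minio/bucket/...
mkdir -p bucket/model/checkpoint
echo a > bucket/model/checkpoint/1_Step-100.ckpt
echo b > bucket/model/checkpoint/2_Step-200.ckpt
touch -t 202401010000 bucket/model/checkpoint/1_Step-100.ckpt
touch -t 202401020000 bucket/model/checkpoint/2_Step-200.ckpt
# ls -t lists newest first, so the latest checkpoint is the first entry
ls -t bucket/model/checkpoint | head -n 1
```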

&lt;p&gt;Wouldn't go any higher than what "htop" shows because you're at 80% on all threads.&lt;/p&gt;

&lt;h2&gt;
  Additional Scripts
&lt;/h2&gt;

&lt;h2&gt;
  
  
  Create script "lars_one.sh"
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;if [[ "${ARCH}" == "gpu" ]];
then
    distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
    curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
    curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
    sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -y --no-install-recommends nvidia-docker2 nvidia-container-toolkit nvidia-container-runtime
    # Rewrite via a temp file: piping cat back into tee on the same file can truncate it before jq reads it
    jq 'del(."default-runtime") + {"default-runtime": "nvidia"}' /etc/docker/daemon.json &gt; /tmp/daemon.json &amp;amp;&amp;amp; sudo mv /tmp/daemon.json /etc/docker/daemon.json
fi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Miscellaneous
&lt;/h2&gt;

&lt;p&gt;Sensors&lt;/p&gt;

&lt;p&gt;"FRONT_FACING_CAMERA"&lt;br&gt;
"SECTOR_LIDAR"&lt;br&gt;
"LIDAR"&lt;br&gt;
"STEREO_CAMERAS"&lt;/p&gt;




&lt;p&gt;Check temperature commands&lt;/p&gt;

&lt;p&gt;nvidia-smi&lt;br&gt;
nvidia-smi -l 60&lt;br&gt;
watch -n900 nvidia-smi (auto-refreshes every 15 minutes)&lt;br&gt;
sensors&lt;/p&gt;




&lt;p&gt;Set fan speed commands&lt;/p&gt;

&lt;p&gt;sudo liquidctl set fan1 speed 30&lt;br&gt;
sudo liquidctl set fan1 speed 0&lt;/p&gt;




&lt;p&gt;Check specs / stats commands&lt;/p&gt;

&lt;p&gt;nvidia-smi -L&lt;br&gt;
GeForce GTX 1650 -&amp;gt; nvidia-smi -a -i 0&lt;br&gt;
M40 Specs -&amp;gt; nvidia-smi -a -i 1&lt;br&gt;
lspci -k | grep -EA3 'VGA|3D|Display'&lt;br&gt;
top (checks processors to help see worker limits)&lt;br&gt;
free -m&lt;br&gt;
htop&lt;br&gt;
docker stats&lt;br&gt;
docker run --rm --gpus all nvidia/cuda:11.6.0-base-ubuntu20.04 nvidia-smi&lt;/p&gt;




&lt;h2&gt;
  
  
  Useful Links
&lt;/h2&gt;

&lt;p&gt;Full Guide - &lt;a href="https://aws-deepracer-community.github.io/deepracer-for-cloud"&gt;https://aws-deepracer-community.github.io/deepracer-for-cloud&lt;/a&gt;&lt;br&gt;
Sudo - &lt;a href="https://phpraxis.wordpress.com/2016/09/27/enable-sudo-without-password-in-ubuntudebian"&gt;https://phpraxis.wordpress.com/2016/09/27/enable-sudo-without-password-in-ubuntudebian&lt;/a&gt;&lt;br&gt;
Training on multiple GPU - &lt;a href="https://github.com/aws-deepracer-community/deepracer-for-cloud/blob/master/docs/multi_gpu.md"&gt;https://github.com/aws-deepracer-community/deepracer-for-cloud/blob/master/docs/multi_gpu.md&lt;/a&gt;&lt;br&gt;
nvidia monitor - &lt;a href="https://stackoverflow.com/questions/8223811/a-top-like-utility-for-monitoring-cuda-activity-on-a-gpu"&gt;https://stackoverflow.com/questions/8223811/a-top-like-utility-for-monitoring-cuda-activity-on-a-gpu&lt;/a&gt;&lt;br&gt;
Tesla M40 24GB specs - &lt;a href="https://www.microway.com/hpc-tech-tips/nvidia-tesla-m40-24gb-gpu-accelerator-maxwell-gm200-close"&gt;https://www.microway.com/hpc-tech-tips/nvidia-tesla-m40-24gb-gpu-accelerator-maxwell-gm200-close&lt;/a&gt;&lt;br&gt;
Complex shutdown - &lt;a href="https://www.maketecheasier.com/schedule-ubuntu-shutdown"&gt;https://www.maketecheasier.com/schedule-ubuntu-shutdown&lt;/a&gt;&lt;br&gt;
Sudo shutdown - &lt;a href="https://sdet.ro/blog/shutdown-ubuntu-with-timer"&gt;https://sdet.ro/blog/shutdown-ubuntu-with-timer&lt;/a&gt;&lt;br&gt;
Video trimmer - &lt;a href="https://launchpad.net/%7Ekdenlive/+archive/ubuntu/kdenlive-stable"&gt;https://launchpad.net/~kdenlive/+archive/ubuntu/kdenlive-stable&lt;/a&gt;&lt;br&gt;
Flatpak - &lt;a href="https://flatpak.org/setup/Ubuntu"&gt;https://flatpak.org/setup/Ubuntu&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;Installation commands&lt;/p&gt;

&lt;p&gt;sudo snap install jupyter&lt;br&gt;
sudo apt install git&lt;br&gt;
sudo apt install nvidia-cuda-toolkit&lt;br&gt;
sudo apt install curl&lt;br&gt;
sudo apt install jq&lt;br&gt;
sudo pip install liquidctl (to install fan controller globally)&lt;br&gt;
sudo apt install net-tools&lt;br&gt;
sudo apt install vim&lt;br&gt;
sudo apt-get install htop&lt;br&gt;
sudo apt install hddtemp&lt;br&gt;
sudo apt install lm-sensors&lt;br&gt;
pip install --user pipenv&lt;br&gt;
sudo apt install pipenv&lt;br&gt;
pipenv install jupyterlab&lt;/p&gt;




&lt;p&gt;Installing Docker&lt;/p&gt;

&lt;p&gt;sudo su (run from root)&lt;br&gt;
curl -fsSL &lt;a href="https://download.docker.com/linux/ubuntu/gpg"&gt;https://download.docker.com/linux/ubuntu/gpg&lt;/a&gt; | sudo apt-key add -&lt;br&gt;
sudo add-apt-repository "deb [arch=amd64] &lt;a href="https://download.docker.com/linux/ubuntu"&gt;https://download.docker.com/linux/ubuntu&lt;/a&gt; $(lsb_release -cs) stable"&lt;br&gt;
sudo apt-get update &amp;amp;&amp;amp; sudo apt-get install -y --no-install-recommends docker-ce docker-ce-cli containerd.io&lt;br&gt;
sudo apt-get install -y --no-install-recommends nvidia-docker2 nvidia-container-toolkit nvidia-container-runtime&lt;br&gt;
sudo apt-get upgrade&lt;/p&gt;




&lt;p&gt;Steps for a CUDA upgrade&lt;br&gt;
First remove the existing driver:&lt;br&gt;
sudo dpkg -P $(dpkg -l | grep nvidia-driver | awk '{print $2}')&lt;br&gt;
sudo apt autoremove&lt;/p&gt;

&lt;p&gt;Then add the new repo:&lt;br&gt;
wget &lt;a href="https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-ubuntu2004.pin"&gt;https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-ubuntu2004.pin&lt;/a&gt;&lt;br&gt;
sudo mv cuda-ubuntu2004.pin /etc/apt/preferences.d/cuda-repository-pin-600&lt;br&gt;
sudo apt-key adv --fetch-keys &lt;a href="https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/7fa2af80.pub"&gt;https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/7fa2af80.pub&lt;/a&gt;&lt;br&gt;
sudo add-apt-repository "deb &lt;a href="https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/"&gt;https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/&lt;/a&gt; /"&lt;br&gt;
sudo apt update&lt;br&gt;
sudo apt -y install cuda&lt;/p&gt;

&lt;p&gt;Then reboot and run nvidia-smi:&lt;/p&gt;

&lt;p&gt;NVIDIA-SMI 510.47.03&lt;br&gt;&lt;br&gt;
Driver Version: 510.47.03&lt;br&gt;&lt;br&gt;
CUDA Version: 11.6&lt;/p&gt;




&lt;p&gt;Thank you!&lt;/p&gt;

</description>
      <category>deepracer</category>
      <category>reinforcement</category>
      <category>machinelearning</category>
      <category>awsdeepracer</category>
    </item>
    <item>
      <title>Beginner’s Guide To Machine Learning - Types of Machine Learning</title>
      <dc:creator>Darren Broderick (DBro)</dc:creator>
      <pubDate>Thu, 02 Dec 2021 17:48:45 +0000</pubDate>
      <link>https://dev.to/iamdbro/beginners-guide-to-machine-learning-part-1-types-of-machine-learning-581n</link>
      <guid>https://dev.to/iamdbro/beginners-guide-to-machine-learning-part-1-types-of-machine-learning-581n</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fffatj9fg1r6cu6zoqv0o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fffatj9fg1r6cu6zoqv0o.png" alt="Image description" width="700" height="542"&gt;&lt;/a&gt;&lt;br&gt;
Source: chatbotslife&lt;/p&gt;

&lt;p&gt;As a beginner in the world of Machine Learning (ML) I want to document the learnings I come across and explain them in a clear manner. Over a series of articles I plan to walk through different elements of Machine Learning from a beginner level up to intermediate.&lt;/p&gt;

&lt;p&gt;This short article will look at only 2 things;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;What is Machine learning?&lt;/li&gt;
&lt;li&gt;Four Main Training processes (Supervised, Unsupervised, Semi-Supervised &amp;amp; Reinforcement).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;What is Machine Learning (ML)?&lt;br&gt;
ML is the process of solving a problem through the following steps;&lt;/p&gt;

&lt;p&gt;Gathering a dataset (tabular data)&lt;br&gt;
Building a statistical model based on the dataset&lt;br&gt;
Using that model (inference) to solve a practical problem.&lt;/p&gt;

&lt;p&gt;A Small ML Example&lt;/p&gt;

&lt;p&gt;Traffic data is gathered from a central server&lt;br&gt;
A prediction model is built from each car’s dataset row, with location and speed as input data&lt;/p&gt;

&lt;p&gt;To estimate upcoming congestion areas so that they can be prevented, a “model” is created.&lt;/p&gt;

&lt;p&gt;A model is simply a file that identifies patterns (makes a prediction) based on the input data.&lt;/p&gt;

&lt;p&gt;Machine learning in such scenarios helps to estimate the regions where congestion could be found.&lt;/p&gt;

&lt;p&gt;Daily traffic experiences (data) are used to train a model to essentially see the future.&lt;/p&gt;

&lt;p&gt;Machine Learning Types&lt;br&gt;
Here are the four types of ML and a short description that helps us identify the differences later on.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft2yjok9z6exklpil1wdr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft2yjok9z6exklpil1wdr.png" alt="Image description" width="700" height="232"&gt;&lt;/a&gt;&lt;br&gt;
Source: Simplilearn&lt;/p&gt;

&lt;p&gt;Supervised&lt;br&gt;
Uses labelled data; when a dataset is labelled, each row of data is called a feature vector.&lt;/p&gt;

&lt;p&gt;Unsupervised&lt;br&gt;
Data here is unlabeled; during training the model discovers patterns and new information on its own.&lt;/p&gt;

&lt;p&gt;Semi-supervised&lt;br&gt;
Only a small amount of the data here is labelled, with the rest largely unlabelled; as a hybrid of the two above, it is popular for classifying text documents.&lt;/p&gt;

&lt;p&gt;Reinforcement&lt;br&gt;
No data is given to the model in this type; training is very different for reinforcement, as the process starts out random and the model makes sequences of decisions to try to get the highest return value possible. Very popular in gaming and a staple of AWS DeepRacer.&lt;/p&gt;

&lt;p&gt;Supervised Learning&lt;br&gt;
We mentioned “feature vector” for supervised but as an example, just imagine this data as the height, weight and gender of a person.&lt;br&gt;
The most important part here is that each data row has the same labels so examples can be differentiated by their features.&lt;/p&gt;

&lt;p&gt;An example of this would be spam detection for emails, you have 2 labels here; spam and not_spam.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fty467cpjaruzxbfftu5c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fty467cpjaruzxbfftu5c.png" alt="Image description" width="700" height="304"&gt;&lt;/a&gt;&lt;br&gt;
Source: TowardsDataScience&lt;/p&gt;

&lt;p&gt;So the goal of supervised learning is to use a dataset to produce a model that takes a feature vector as an input and outputs a prediction.&lt;br&gt;
Unsupervised Learning&lt;br&gt;
The difference from supervised learning is that the data is unlabeled, and the goal is to find unknown patterns within your dataset.&lt;br&gt;
An example of this would be fraud detection (or anomaly detection, as the umbrella term).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F93tezk5dpyho6fxloxui.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F93tezk5dpyho6fxloxui.png" alt="Image description" width="700" height="248"&gt;&lt;/a&gt;&lt;br&gt;
Source: Cloudera&lt;/p&gt;

&lt;p&gt;It’s effective at detecting unseen events or rare occurrences using models trained, unsupervised, on non-anomalous training examples (data rows), i.e. how different is event x from a “typical” example in the dataset.&lt;br&gt;
It’s worth showing the differences between these 2 techniques before explaining semi-supervised.&lt;br&gt;
Semi-Supervised Learning&lt;br&gt;
So for semi the dataset contains both labeled and unlabeled examples. Usually unlabeled examples will make up a much higher percentage than labeled ones. The outcome of semi is the same as supervised, that is, to use the labeled data to produce a model.&lt;br&gt;
Then what’s the point of semi and having more unlabeled data?&lt;br&gt;
You might think that extra unlabeled data will harm your model training but look at it this way; you’re actually adding more information about your problem.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkfp7js0o3szz09d0vd1c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkfp7js0o3szz09d0vd1c.png" alt="Image description" width="217" height="180"&gt;&lt;/a&gt;&lt;br&gt;
Source: MLMinds&lt;/p&gt;

&lt;p&gt;Given that the labeled data came from a similar sample set as the unlabeled, you’ve in effect improved the probability distribution your entire dataset can leverage.&lt;br&gt;
If the data you are working with is challenging to label, consider semi-supervised learning to help ease the process.&lt;br&gt;
Reinforcement Learning&lt;br&gt;
Reinforcement learning (RL) is the training of machine learning models to make a sequence of decisions.&lt;br&gt;
It is a subfield of ML where the machine, or “agent” in this learning context, learns in an environment whose state it can ingest as a vector of features.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fek0y3k6ggzyxqj5wrh2y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fek0y3k6ggzyxqj5wrh2y.png" alt="Image description" width="700" height="455"&gt;&lt;/a&gt;&lt;br&gt;
Source: MathWorks&lt;/p&gt;

&lt;p&gt;A policy is a function (similar to models in supervised learning)&lt;/p&gt;

&lt;p&gt;Taking a feature vector of the state supplied by the environment, the agent produces an action to achieve maximum reward (usually a float value). As the process is sequential, so too is the decision making. The long-term goal of the RL lifecycle is to continually optimise the actions taken to achieve better rewards.&lt;/p&gt;

&lt;p&gt;The agent learns to achieve a goal in an uncertain, potentially complex environment. In RL, an artificial intelligence faces a game-like situation.&lt;/p&gt;

&lt;p&gt;The most popular gamified example of RL is AWS DeepRacer.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi3aou2gf6gokdlqcd2u9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi3aou2gf6gokdlqcd2u9.png" alt="Image description" width="700" height="350"&gt;&lt;/a&gt;&lt;br&gt;
Source: AWS&lt;/p&gt;

&lt;p&gt;For DeepRacer, the goal is to;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Navigate the virtual track (environment)&lt;/li&gt;
&lt;li&gt;Gather info like car position, distance from centerline (state)&lt;/li&gt;
&lt;li&gt;Try a random speed and direction (action)&lt;/li&gt;
&lt;li&gt;Log the result from the policy (reward)&lt;/li&gt;
&lt;li&gt;Repeat the process until the termination count (episodes)&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>machinelearning</category>
      <category>beginners</category>
    </item>
    <item>
      <title>AWS Community Builders Program and Why You Should Apply</title>
      <dc:creator>Darren Broderick (DBro)</dc:creator>
      <pubDate>Fri, 19 Mar 2021 16:12:48 +0000</pubDate>
      <link>https://dev.to/iamdbro/aws-community-builders-program-and-why-you-should-apply-2olk</link>
      <guid>https://dev.to/iamdbro/aws-community-builders-program-and-why-you-should-apply-2olk</guid>
      <description>&lt;p&gt;Deadline to apply for this program is March 31st&lt;br&gt;
The program accepts a limited number of members per year&lt;br&gt;
Apply -&amp;gt; &lt;a href="https://amazonmr.au1.qualtrics.com/jfe/form/SV_8wR79G4spteIvhX" rel="noopener noreferrer"&gt;https://amazonmr.au1.qualtrics.com/jfe/form/SV_8wR79G4spteIvhX&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This short article will explain;&lt;/p&gt;

&lt;p&gt;-What the AWS Community Program is&lt;br&gt;
-Why you should apply&lt;br&gt;
-Tips for your application&lt;/p&gt;

&lt;p&gt;I work for Liberty IT in Belfast, there are 4 people in my workplace who are also community builders and hopefully you can be as well!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo6ql5vyqierxlop1bobl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo6ql5vyqierxlop1bobl.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;What is the AWS Community Program?&lt;br&gt;
"The AWS Community Builders program offers technical resources, mentorship, and networking opportunities to AWS enthusiasts and emerging thought leaders who are passionate about sharing knowledge and connecting with the technical community."&lt;br&gt;
It is a massive area of connections, information, tech talks, soft skill building, beta opportunities, support for innovations and certifications.&lt;br&gt;
All of this is mostly contained in 2 areas, a private Slack Channel and a blog post site known as Dev.To, mine is &lt;a href="https://dev.to/iamdbro"&gt;https://dev.to/iamdbro&lt;/a&gt;&lt;br&gt;
Membership in the program lasts one year; you can re-apply at the end of your term.&lt;br&gt;
The program is free for all members.&lt;br&gt;
Expectations&lt;br&gt;
Program members are expected to join virtual calls, participate in mentorship opportunities, continue to share or produce educational and technical content, actively engage with and help build the AWS community, and demonstrate continued interest in learning more about AWS.&lt;/p&gt;




&lt;p&gt;Why you should apply&lt;br&gt;
For the many reasons listed below;&lt;br&gt;
To increase all your AWS skills in general, or better yet find your niche focus and get expert help from an SME (subject matter expert)&lt;br&gt;
Connections - you have access to AWS tech leads in almost any tech or discipline; I've mainly made use of ML deployments and CDK.&lt;br&gt;
Beta AWS service tests (I'm currently under NDA so can't disclose)&lt;br&gt;
New writing experiences and a new audience of readers - dev.to&lt;br&gt;
Great CDK tutorials&lt;br&gt;
Opportunity to speak at Indy Meetup (Nov 2020)&lt;br&gt;
Opportunity to speak at NYC ML - March 2021&lt;br&gt;
AWS Cert Voucher - take any AWS cert exam for free&lt;br&gt;
A free subscription to Cloud Academy&lt;br&gt;
A swag welcome kit including $500 AWS credits (see picture below)&lt;br&gt;
A re:Invent ticket at a discount&lt;br&gt;
An asset pack of logos and banners for your social media&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw4n35r7gvi1zcxynd3ln.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw4n35r7gvi1zcxynd3ln.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Tips for your application&lt;/p&gt;

&lt;p&gt;Talk about why you want to apply. &lt;/p&gt;

&lt;p&gt;It's not about getting "the free stuff" but growing your personal skill set, expanding your career ambitions or taking a leap into a brand new area of tech you always wanted to explore.&lt;br&gt;
Write about your current or completed projects to date (it helps to add your personal GitHub or selected repos).&lt;br&gt;
Note what future areas of tech you are looking to learn more about or greatly upskill in.&lt;/p&gt;

&lt;p&gt;Mention any certifications you completed or are currently studying for.&lt;br&gt;
Provide any example project ideas you have that you could get potential collaboration or feedback on.&lt;/p&gt;

&lt;p&gt;Finally I just want to wish you good luck if you choose to apply and I hope to see you in the community.&lt;/p&gt;

&lt;p&gt;Application Details - &lt;a href="https://aws.amazon.com/developer/community/community-builders/" rel="noopener noreferrer"&gt;https://aws.amazon.com/developer/community/community-builders/&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Building Modern Applications with AWS CDK (Session 1)</title>
      <dc:creator>Darren Broderick (DBro)</dc:creator>
      <pubDate>Mon, 15 Feb 2021 22:08:02 +0000</pubDate>
      <link>https://dev.to/iamdbro/building-modern-applications-with-aws-cdk-session-1-21o1</link>
      <guid>https://dev.to/iamdbro/building-modern-applications-with-aws-cdk-session-1-21o1</guid>
      <description>&lt;p&gt;This article is concise material from "AWS Dev Hour"&lt;br&gt;
&lt;a href="https://www.twitch.tv/videos/892197005"&gt;https://www.twitch.tv/videos/892197005&lt;/a&gt; lasts just over an hour, this article will go through the main notes and has a separate working GitHub I'm building into.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/DarrenBro/modernApp-AWS-CDK"&gt;https://github.com/DarrenBro/modernApp-AWS-CDK&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There were however some issues I came across not covered by the video that I've included fixes for or tasks you should complete on your end.&lt;/p&gt;

&lt;p&gt;Node Version 15+ had issues with deploying lambda assets &lt;a href="https://github.com/aws/aws-cdk/issues/12536"&gt;https://github.com/aws/aws-cdk/issues/12536&lt;/a&gt;&lt;br&gt;
Fix -&amp;gt; Downgrading node and deleting cdk.out/.cache should stop this issue from occurring. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://stackoverflow.com/questions/8191459/how-do-i-update-node-js"&gt;https://stackoverflow.com/questions/8191459/how-do-i-update-node-js&lt;/a&gt;&lt;br&gt;
If you continue without completing steps not shown in the video for getting pillow(PIL) into your directory (steps shown below title Pillow) you'll get the following CloudWatch log error;&lt;br&gt;
Fix -&amp;gt; I've included the necessary extracted zip file and put it in the project under the folder "reklayer".&lt;/p&gt;

&lt;p&gt;The video misses out showing the creation of the lambda layer. This allows the lambda to import 'PIL' from your attached zip file you created.&lt;/p&gt;

&lt;p&gt;Fix -&amp;gt; I've attached code snippets that need to be included in cdkMainStacks.ts. &lt;/p&gt;

&lt;p&gt;Otherwise you'll get the below error in your CloudWatch logs.&lt;/p&gt;

&lt;p&gt;'index' refers to the main body of the rekognition lambda. I've flooded cdkMainStacks.ts with comments better explaining the different components; this file is the main area where most of the code gets added.&lt;/p&gt;

&lt;p&gt;For me this has been a great way to get more hands-on with CDK, with good progression intervals. Building it out yourself is the best way to get familiar with any tech, and finding solutions for these errors has helped commit it to memory.&lt;/p&gt;

&lt;p&gt;Tech Stack&lt;/p&gt;

&lt;p&gt;aws-cli&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/install-macos.html"&gt;https://docs.aws.amazon.com/cli/latest/userguide/install-macos.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Node.js to run the project (Version 10.3.0–14.15.3)&lt;br&gt;
&lt;a href="https://nodejs.org/en/download/"&gt;https://nodejs.org/en/download/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;IDE that can run any of these languages; &lt;br&gt;
Typescript, Javascript, Java, C# or Python&lt;br&gt;
(I'm using IntelliJ with Typescript)&lt;/p&gt;

&lt;p&gt;AWS CDK toolkit and bootstrap&lt;/p&gt;

&lt;p&gt;Details of the App&lt;/p&gt;

&lt;p&gt;Built using CDK&lt;/p&gt;

&lt;p&gt;User will be able to upload a photo through a react JS UI, stored in s3 (upcoming sessions).&lt;/p&gt;

&lt;p&gt;It will then trigger a lambda upon the s3 action and store it in dynamoDB (which stores metadata and labels, done this session)&lt;/p&gt;

&lt;p&gt;We can then query (scan) the db to check that labels for the image have been added after going through the rekognition service.&lt;/p&gt;

&lt;p&gt;All graphics are taken from the AWS run through video.&lt;/p&gt;

&lt;p&gt;Session 1 will consist of:&lt;br&gt;
Installing prerequisites&lt;br&gt;
Explaining what AWS CDK is&lt;br&gt;
Creating a new project and getting dependencies installed&lt;br&gt;
Building 3 components -&amp;gt; S3, Lambda &amp;amp; DynamoDB&lt;br&gt;
Copying an image to scan, and scanning the table&lt;/p&gt;

&lt;p&gt;What is AWS CDK?&lt;br&gt;
A framework we can use to declare our AWS resources&lt;br&gt;
Sits on top of CloudFormation&lt;br&gt;
(CDK produces the CloudFormation template after synthesis: "cdk synth")&lt;br&gt;
Simplifies the process of building out these resource templates&lt;/p&gt;

&lt;p&gt;This is what an application looks like in the CDK: starting with the App, broken into stacks and constructs (which is what will be built later on).&lt;br&gt;
Steps to get started:&lt;br&gt;
"mkdir cdk-project"&lt;br&gt;
"cdk init" (shows the templates available)&lt;br&gt;
"cdk init app -l=typescript" (go with TypeScript)&lt;br&gt;
Install aws-iam: "npm i @aws-cdk/aws-iam"&lt;/p&gt;

&lt;p&gt;Notes&lt;br&gt;
The app is initialised in bin/cdk-project.ts&lt;br&gt;
cdk.json tells the CDK Toolkit how to run the project&lt;/p&gt;




&lt;p&gt;Pillow&lt;br&gt;
The AWS Lambda function uses the Pillow library to generate thumbnail images. This library needs to be added to our project so that the CDK can package it and create an AWS Lambda Layer for us. To do this, use the following steps.&lt;br&gt;
Launch an Amazon EC2 instance (t2.micro) using the Amazon Linux 2 AMI&lt;br&gt;
SSH into your instance and run the following commands:&lt;/p&gt;

&lt;p&gt;sudo yum install -y python3-pip python3 python3-setuptools&lt;br&gt;
python3 -m venv my_app/env&lt;br&gt;
source ~/my_app/env/bin/activate&lt;br&gt;
cd my_app/env&lt;br&gt;
pip3 install pillow&lt;br&gt;
cd /home/ec2-user/my_app/env/lib/python3.7/site-packages&lt;br&gt;
mkdir python &amp;amp;&amp;amp; cp -R ./PIL ./python &amp;amp;&amp;amp; cp -R ./Pillow-8.1.0.dist-info ./python &amp;amp;&amp;amp; cp -R ./Pillow.libs ./python &amp;amp;&amp;amp; zip -r pillow.zip ./python&lt;br&gt;
Copy the resulting archive 'pillow.zip' to your development environment (we used an Amazon S3 bucket for this)&lt;br&gt;
Extract the archive into the 'reklayer' folder in your project directory&lt;/p&gt;

&lt;p&gt;Your project structure should look something like this:&lt;br&gt;
project-root/reklayer/python/PIL&lt;br&gt;
project-root/reklayer/python/Pillow-8.1.0.dist-info&lt;br&gt;
project-root/reklayer/python/Pillow.libs&lt;br&gt;
Remove the pillow.zip file to clean up&lt;br&gt;
Terminate the Amazon EC2 Instance that you created to build the archive&lt;/p&gt;




&lt;p&gt;1st component to build is S3&lt;br&gt;
npm i @aws-cdk/aws-s3&lt;br&gt;
In cdkMainStack.ts we'll be adding:&lt;br&gt;
import s3 = require('@aws-cdk/aws-s3');&lt;br&gt;
const imageBucket = new s3.Bucket(this, imageBucketName, {&lt;br&gt;
  removalPolicy: cdk.RemovalPolicy.DESTROY&lt;br&gt;
})&lt;br&gt;
new cdk.CfnOutput(this, 'imageBucket', {value: imageBucket.bucketName});&lt;br&gt;
A quick note on the constructs here: each one takes a scope (this), an ID, and a props object. The S3 code above, and every construct that follows, uses the same principle.&lt;/p&gt;




&lt;p&gt;2nd component to build is DynamoDB, for storing the image labels&lt;br&gt;
npm i @aws-cdk/aws-dynamodb&lt;br&gt;
import dynamodb = require('@aws-cdk/aws-dynamodb');&lt;br&gt;
const imageTable = new dynamodb.Table(this, 'ImageLabels', {&lt;br&gt;
  partitionKey: {name: 'image', type: dynamodb.AttributeType.STRING},&lt;br&gt;
  removalPolicy: cdk.RemovalPolicy.DESTROY&lt;br&gt;
});&lt;br&gt;
new cdk.CfnOutput(this, 'cdkTable', {value: imageTable.tableName});&lt;/p&gt;




&lt;p&gt;3rd component to build is the Lambda layer and the Lambda&lt;br&gt;
The layer is needed to import the PIL library when the Rekognition Lambda executes.&lt;br&gt;
npm i @aws-cdk/aws-lambda @aws-cdk/aws-lambda-event-sources&lt;br&gt;
import lambda = require('@aws-cdk/aws-lambda');&lt;br&gt;
const layer = new lambda.LayerVersion(this, 'pil', {&lt;br&gt;
  code: lambda.Code.fromAsset('reklayer'),&lt;br&gt;
  compatibleRuntimes: [lambda.Runtime.PYTHON_3_7],&lt;br&gt;
  license: 'Apache-2.0',&lt;br&gt;
  description: 'A layer to enable the PIL library in our Rekognition Lambda',&lt;br&gt;
});&lt;br&gt;
The Lambda's job is to pull an image from the S3 bucket and send it to the Rekognition service for label detection. (resizedBucket below is a second S3 bucket, created the same way as imageBucket, which holds the generated thumbnails.)&lt;br&gt;
const rekognitionLambdaFunc = new lambda.Function(this, 'rekognitionFunction', {&lt;br&gt;
  code: lambda.Code.fromAsset('rekognitionlambda'),&lt;br&gt;
  runtime: lambda.Runtime.PYTHON_3_7,&lt;br&gt;
  handler: 'index.handler',&lt;br&gt;
  timeout: Duration.seconds(30),&lt;br&gt;
  memorySize: 1024,&lt;br&gt;
  layers: [layer],&lt;br&gt;
  environment: {&lt;br&gt;
    "TABLE": imageTable.tableName,&lt;br&gt;
    "BUCKET": imageBucket.bucketName,&lt;br&gt;
    "RESIZEDBUCKET": resizedBucket.bucketName&lt;br&gt;
  }&lt;br&gt;
});&lt;/p&gt;




&lt;p&gt;The last part to add is permissions and roles&lt;br&gt;
import event_sources = require('@aws-cdk/aws-lambda-event-sources');&lt;br&gt;
import iam = require('@aws-cdk/aws-iam');&lt;br&gt;
To trigger the Lambda when an object (image) is created in S3:&lt;br&gt;
rekognitionLambdaFunc.addEventSource(new event_sources.S3EventSource(imageBucket, {events: [s3.EventType.OBJECT_CREATED]}))&lt;br&gt;
Permission to read from S3, and to write thumbnails to the resized bucket:&lt;br&gt;
imageBucket.grantRead(rekognitionLambdaFunc);&lt;br&gt;
resizedBucket.grantPut(rekognitionLambdaFunc);&lt;br&gt;
Permission to store the Rekognition result for the sent image in DynamoDB:&lt;br&gt;
imageTable.grantWriteData(rekognitionLambdaFunc);&lt;br&gt;
Permission policy to allow label detection from Rekognition across all resources:&lt;br&gt;
rekognitionLambdaFunc.addToRolePolicy(new iam.PolicyStatement({&lt;br&gt;
  effect: iam.Effect.ALLOW,&lt;br&gt;
  // permission policy to allow label detection from rekognition across all resources&lt;br&gt;
  actions: ['rekognition:DetectLabels'],&lt;br&gt;
  resources: ['*']&lt;br&gt;
}))&lt;/p&gt;

&lt;p&gt;And make sure you've updated your bucket name to match the stack. (I'd recommend changing the prefix to your name; the stack will add a unique suffix ID to the template after synthesising.)&lt;/p&gt;




&lt;p&gt;To create the template for your stacks run:&lt;br&gt;
cdk synth&lt;br&gt;
A set of files will be produced in a folder called "cdk.out", containing assets and your JSON template file.&lt;br&gt;
You can re-run cdk synth to pick up any code updates.&lt;br&gt;
synth compiles a stack (our components) down into a fully formed JSON/YAML CloudFormation template, which includes all the resources we added above.&lt;br&gt;
When you see the generated template you'll notice it is already quite large; CDK has already saved us a bunch of effort!&lt;/p&gt;

&lt;p&gt;To deploy the resources that we just created run the below command.&lt;/p&gt;

&lt;p&gt;cdk deploy&lt;br&gt;
When prompted to approve the IAM changes, just hit 'y'.&lt;br&gt;
This should take a few minutes to complete.&lt;br&gt;
Result of "cdk deploy" -&amp;gt; ✅ cdkMainStack&lt;/p&gt;

&lt;p&gt;Output example&lt;br&gt;
cdkMainStack.cdkTable = cdkMainStack-ImageLabelsE524135D-104WIEO86Q2JP&lt;/p&gt;

&lt;p&gt;cdkMainStack.imageBucket = cdkmainstack-dbrocdkimagebucketb661dc68-uok6ax6q62sh&lt;/p&gt;

&lt;p&gt;You can also view your CF stacks in AWS console straight away, pretty nice!&lt;/p&gt;

&lt;p&gt;So that's your S3, Lambda and DynamoDB all created for you.&lt;br&gt;
A quick note on the Lambda: its clients are instantiated outside the handler, so we don't have to re-instantiate them every time the Lambda is invoked; the execution environment can stay 'hot', which helps performance.&lt;/p&gt;
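&lt;p&gt;The handler code itself comes from the AWS run-through and isn't reproduced in this post, but a rough sketch of its shape (hypothetical names; the Rekognition and DynamoDB calls are commented out so the sketch runs without AWS credentials) might look like this in rekognitionlambda/index.py:&lt;/p&gt;

```python
import json
import urllib.parse

# Clients would normally be created here, at module level, so that warm
# invocations reuse them (the 'hot' note above). Commented out so this
# sketch runs without boto3 or AWS credentials:
# import boto3
# rekognition = boto3.client('rekognition')
# dynamodb = boto3.resource('dynamodb')

def extract_s3_object(event):
    """Pull (bucket, key) out of an S3 trigger event; keys arrive URL-encoded."""
    record = event['Records'][0]['s3']
    bucket = record['bucket']['name']
    key = urllib.parse.unquote_plus(record['object']['key'])
    return bucket, key

def handler(event, context):
    bucket, key = extract_s3_object(event)
    # labels = rekognition.detect_labels(
    #     Image={'S3Object': {'Bucket': bucket, 'Name': key}}, MaxLabels=10)
    # ...generate the thumbnail with PIL, then put_item the labels into DynamoDB...
    return {'statusCode': 200, 'body': json.dumps({'image': key})}
```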

&lt;p&gt;Test the image upload (now that s3 resources have been created using cdk deploy)&lt;/p&gt;

&lt;p&gt;You'll need to use your unique bucket name, e.g.&lt;br&gt;
aws s3 cp testimage.jpg s3://cdkmainstack-dbrocdkimagebucketb661dc68-uok6ax6q62sh&lt;/p&gt;

&lt;p&gt;Logs&lt;/p&gt;

&lt;p&gt;Check CloudWatch logs for events and for any errors&lt;br&gt;
In log group "/aws/lambda/cdkMainStack……" -&amp;gt; click latest log stream to see the image being processed.&lt;/p&gt;

&lt;p&gt;Or you can check your new dynamoDB table in the console.&lt;/p&gt;

&lt;p&gt;These labels are from an image of Stargate Atlantis' wormhole after it was sent through AWS Rekognition.&lt;br&gt;
Or you can scan the DynamoDB table, using the table name from the deploy output, e.g.&lt;/p&gt;

&lt;p&gt;aws dynamodb scan --table-name cdkMainStack-ImageLabelsE524135D-104WIEO86Q2JP&lt;/p&gt;

&lt;p&gt;And you should see a single item for your image in the terminal&lt;/p&gt;

&lt;p&gt;To get rid of all resources and stop any future running costs run:&lt;br&gt;
cdk destroy&lt;br&gt;
(which is also why we added the following to DynamoDB)&lt;br&gt;
removalPolicy: cdk.RemovalPolicy.DESTROY&lt;/p&gt;

&lt;p&gt;However, even with this policy added to S3, running cdk destroy will leave the stack with status "DELETE_FAILED" if the bucket still has contents.&lt;br&gt;
Without the policy added you will see that the CloudFormation stack is deleted successfully but the S3 bucket remains. Why?&lt;/p&gt;

&lt;p&gt;By default, the construct that comes from the S3 package has a default prop of removalPolicy: cdk.RemovalPolicy.RETAIN.&lt;br&gt;
(CloudFormation does not destroy buckets that are not empty.)&lt;/p&gt;

&lt;p&gt;Option 1: Manually clean the bucket contents before destroying the stack&lt;/p&gt;

&lt;p&gt;You can do this from the AWS S3 user interface or through the command line, using the AWS CLI:&lt;/p&gt;

&lt;h1&gt;Cleanup bucket contents without removing the bucket itself&lt;/h1&gt;

&lt;p&gt;aws s3 rm s3://bucket-name --recursive&lt;/p&gt;

&lt;h1&gt;Then run:&lt;/h1&gt;

&lt;p&gt;cdk destroy&lt;/p&gt;

&lt;p&gt;Then cdk destroy will proceed without errors. However, this can quickly become a tedious activity if your stacks contain multiple S3 buckets, or if you use stacks as temporary resources, so some automation would help (option 2).&lt;/p&gt;

&lt;p&gt;Option 2: Automatically clear bucket contents and delete the bucket&lt;/p&gt;

&lt;p&gt;A third-party package called @mobileposse/auto-delete-bucket provides a custom CDK construct that wraps the standard S3 construct and internally uses the CloudFormation Custom Resources framework to trigger an automated cleanup of the bucket contents when a stack destroy is triggered.&lt;/p&gt;

&lt;p&gt;Install the package:&lt;br&gt;
npm i @mobileposse/auto-delete-bucket&lt;/p&gt;

&lt;p&gt;Use the new CDK construct instead of the standard one:&lt;br&gt;
import { AutoDeleteBucket } from '@mobileposse/auto-delete-bucket'&lt;br&gt;
const bucket = new AutoDeleteBucket(this, 'my-data-bucket')&lt;/p&gt;




&lt;p&gt;Summary&lt;/p&gt;

&lt;h1&gt;Add components / build / deploy&lt;/h1&gt;

&lt;p&gt;cdk synth&lt;br&gt;
cdk deploy&lt;br&gt;
(If you make any changes, you can run another 'cdk deploy')&lt;/p&gt;

&lt;h1&gt;Take note of outputs (example)&lt;/h1&gt;

&lt;p&gt;cdkMainStack.cdkTable = cdkMainStack-ImageLabels&lt;br&gt;
cdkMainStack.imageBucket = cdkmainstack-dbrocdkimagebucket&lt;/p&gt;

&lt;h1&gt;Copy test image&lt;/h1&gt;

&lt;p&gt;aws s3 cp testimage.jpg s3://{bucketName}&lt;/p&gt;

&lt;h1&gt;Check the table has been populated; if not, check the logs&lt;/h1&gt;

&lt;p&gt;aws dynamodb scan --table-name {tableName}&lt;/p&gt;

&lt;h1&gt;Cleanup tasks&lt;/h1&gt;

&lt;p&gt;aws s3 rm s3://{bucketName} --recursive&lt;br&gt;
cdk destroy&lt;/p&gt;

&lt;p&gt;That's all for this session, take care!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>AWS DeepRacer</title>
      <dc:creator>Darren Broderick (DBro)</dc:creator>
      <pubDate>Wed, 28 Oct 2020 13:46:59 +0000</pubDate>
      <link>https://dev.to/iamdbro/aws-deepracer-4n4k</link>
      <guid>https://dev.to/iamdbro/aws-deepracer-4n4k</guid>
      <description>&lt;p&gt;The DeepRacer car or ‘agent’ as it’s also referred to is a fully autonomous race car, programmed by us in python and trained across many iterations in AWS SageMaker on a simulation environment spun up by AWS RoboMaker.&lt;br&gt;
We don’t provide the training data upfront like in supervised and unsupervised machine learning, neither do we apply any labels initially.&lt;br&gt;
Instead the agent supplies its own timed delay label, known as the ‘reward’.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--m-D4iNYL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/d7kz3nymc9jwx50ijg6j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--m-D4iNYL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/d7kz3nymc9jwx50ijg6j.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The data is gathered through the agent’s camera lens; the images, taken of the simulated track, are converted to greyscale. The agent tries an ‘action’ (JSON format, with properties of speed and steering angle, which you set before training), then analyses the rewards received for its attempts, and repeats the process with different actions to look for greater rewards; the reward is a float number returned by your reward function (discussed later).&lt;br&gt;
In short, the agent’s single focus is to return the maximum rewards possible.&lt;/p&gt;

&lt;p&gt;Race League and Rules&lt;/p&gt;

&lt;p&gt;The rules are simple. Everyone gets 4 minutes to achieve their best lap time on the re:Invent track. You’re allowed to come off the track a maximum of 3 times in order to qualify a lap. But each “off course” must be fixed by manually re-plotting the car back on the track, eating into your lap time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tTI1efuQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/kmp2l56wbd0uswqkfgks.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tTI1efuQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/kmp2l56wbd0uswqkfgks.jpeg" alt="1"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---alzkk0w--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/yczxqgka1eco06ifnaym.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---alzkk0w--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/yczxqgka1eco06ifnaym.gif" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Reinforcement Learning&lt;/p&gt;

&lt;p&gt;Reinforcement Learning (RL) is a type of machine learning technique that enables an agent to learn in an interactive environment by trial and error, using feedback from its own actions and experiences.&lt;br&gt;
Unlike supervised learning, where the feedback provided to the agent is the correct set of actions for performing a task, reinforcement learning uses rewards and punishment as signals for positive and negative behaviour.  &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZPkYaUOL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/2c4ktf0famfyamby3nra.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZPkYaUOL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/2c4ktf0famfyamby3nra.png" alt="3"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Compared to unsupervised learning, reinforcement learning differs in terms of goals. While the goal in unsupervised learning is to find similarities and differences between data points, in reinforcement learning the goal is to find a suitable action model that maximises the total cumulative reward of the agent. The figure below represents the basic idea and the elements involved in a reinforcement learning model. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wWSjKH9v--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/coghbaayrxc2gey0n79d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wWSjKH9v--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/coghbaayrxc2gey0n79d.png" alt="4"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Environment: Physical world in which the agent operates.&lt;br&gt;
State: Current situation of the agent.&lt;br&gt;
Reward: Feedback from the environment.&lt;br&gt;
Policy: Method to map agent’s state to actions.&lt;br&gt;
Value: Future reward that an agent would receive by taking an action in a particular state.&lt;br&gt;
SageMaker: With each batch of experiences from RoboMaker, SageMaker updates the neural network, “and hopefully your model has improved.”&lt;/p&gt;
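&lt;p&gt;The interaction between these elements can be sketched as a toy loop. This is purely illustrative Python (a made-up one-dimensional "track", not DeepRacer's actual training code), but it shows how state, action, reward and policy fit together:&lt;/p&gt;

```python
# Toy 1-D "track": states 0..4, goal at state 4. Illustrates the elements
# above (environment, state, action, reward, policy); not DeepRacer code.

GOAL = 4

def step(state, action):
    """Environment: apply an action, return (next_state, reward)."""
    next_state = max(0, min(GOAL, state + action))
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward

def run_episode(policy, max_steps=20):
    """The agent acts until it reaches the goal or runs out of steps."""
    state, total_reward = 0, 0.0
    for _ in range(max_steps):
        action = policy(state)            # policy maps state to action
        state, reward = step(state, action)
        total_reward += reward            # the agent maximises cumulative reward
        if state == GOAL:
            break
    return total_reward

always_right = lambda state: 1            # a trivial deterministic policy
```

&lt;p&gt;A real RL agent would update its policy from the rewards it collects; here the policy is fixed, which is enough to show the loop.&lt;/p&gt;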

&lt;p&gt;Tips and Tricks&lt;/p&gt;

&lt;p&gt;Keep your models simple. The centre-line model, which rewards the car for staying close to the middle of the track, is a great place to start; as a beginner I wouldn’t change a thing about it.&lt;/p&gt;
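&lt;p&gt;A centre-line reward function of this kind looks roughly like the sketch below (adapted from the well-known AWS example; track_width and distance_from_center are real parameters from the DeepRacer input dictionary, while the marker values are just the conventional choices):&lt;/p&gt;

```python
def reward_function(params):
    """Reward the agent for staying close to the centre line."""
    track_width = params['track_width']
    distance_from_center = params['distance_from_center']

    # Markers at increasing distance from the centre line
    marker_1 = 0.1 * track_width
    marker_2 = 0.25 * track_width
    marker_3 = 0.5 * track_width

    if distance_from_center > marker_3:
        reward = 1e-3   # likely off track
    elif distance_from_center > marker_2:
        reward = 0.1
    elif distance_from_center > marker_1:
        reward = 0.5
    else:
        reward = 1.0    # hugging the centre line
    return float(reward)
```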

&lt;p&gt;Don’t reward with a negative float: it can push the car to finish episodes early to avoid the penalty, cutting short valuable training episodes. Punish instead by multiplying the reward by a decimal value.&lt;/p&gt;
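&lt;p&gt;Punishing by multiplication can look like the following sketch (steering_angle is a real DeepRacer input parameter; the threshold and the 0.5 factor are arbitrary illustrative choices):&lt;/p&gt;

```python
ABS_STEERING_THRESHOLD = 15.0   # degrees; an illustrative threshold

def apply_steering_penalty(reward, steering_angle):
    """Halve the reward for sharp steering instead of going negative."""
    if abs(steering_angle) > ABS_STEERING_THRESHOLD:
        reward *= 0.5           # punish by multiplying, never with a negative float
    return float(reward)
```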

&lt;p&gt;Take a look at the logs of your training. They are also in CloudWatch, but not very readable there.&lt;br&gt;
&lt;a href="https://codelikeamother.uk/using-jupyter-notebook-for-analysing-deepracer-s-logs"&gt;https://codelikeamother.uk/using-jupyter-notebook-for-analysing-deepracer-s-logs&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;Train your models for 1–2 hours. You can clone them to continue further training, but 1–2 hours gives a good indication of whether you are making progress on the track.&lt;/p&gt;

&lt;p&gt;My Experience&lt;/p&gt;

&lt;p&gt;I’ve really enjoyed the DeepRacer experience: a fun competition, but more a way into understanding RL and machine learning in general. It’s taken a lot of time to get through the vast material, but it has been well worth the learning journey; if you are interested in learning more just let me know!&lt;br&gt;
The best way to get involved is to race a model you’ve made yourself; then you’re hooked, which is a good thing.&lt;br&gt;
I plan to write an enhanced DeepRacer guide in the future, focused on ways to be competitive and efficient with your time; hopefully the tips work for us at the end of October :)&lt;br&gt;
Thank you!&lt;/p&gt;

&lt;p&gt;Useful Resources&lt;/p&gt;

&lt;p&gt;Getting Started&lt;br&gt;
&lt;a href="https://aws.amazon.com/deepracer/getting-started/"&gt;https://aws.amazon.com/deepracer/getting-started/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AWS Guide&lt;br&gt;
&lt;a href="https://github.com/aws-samples/aws-deepracer-workshops/blob/master/Workshops/2019-AWSSummits-AWSDeepRacerService/Lab1/Readme.md"&gt;https://github.com/aws-samples/aws-deepracer-workshops/blob/master/Workshops/2019-AWSSummits-AWSDeepRacerService/Lab1/Readme.md&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Build your own track &lt;a href="https://medium.com/@autonomousracecarclub/guide-to-creating-a-full-re-invent-2018-deepracer-track-in-7-steps-979aff28a6f5"&gt;https://medium.com/@autonomousracecarclub/guide-to-creating-a-full-re-invent-2018-deepracer-track-in-7-steps-979aff28a6f5&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AWS Training&lt;br&gt;
&lt;a href="https://www.aws.training/Details/eLearning?id=32143"&gt;https://www.aws.training/Details/eLearning?id=32143&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
