<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Sathyajith Bhat</title>
    <description>The latest articles on DEV Community by Sathyajith Bhat (@sathyabhat).</description>
    <link>https://dev.to/sathyabhat</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F27999%2Ff357a5e6-1a69-4386-99d7-1ea2f2a8c058.png</url>
      <title>DEV Community: Sathyajith Bhat</title>
      <link>https://dev.to/sathyabhat</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/sathyabhat"/>
    <language>en</language>
    <item>
      <title>Bulk tagging all instances in an Auto Scaling Group (ASG) using AWS CLI and JMESPath Expressions</title>
      <dc:creator>Sathyajith Bhat</dc:creator>
      <pubDate>Mon, 04 Sep 2023 00:00:00 +0000</pubDate>
      <link>https://dev.to/aws-heroes/bulk-tagging-all-instances-in-an-auto-scaling-group-asg-using-aws-cli-and-jmespath-expressions-4h0o</link>
      <guid>https://dev.to/aws-heroes/bulk-tagging-all-instances-in-an-auto-scaling-group-asg-using-aws-cli-and-jmespath-expressions-4h0o</guid>
      <description>&lt;p&gt;&lt;em&gt;Cover image generated by &lt;a href="https://stablediffusionweb.com/"&gt;Stable Diffusion Online&lt;/a&gt; with the prompt: applying name tags to a server in a data center.&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Why?
&lt;/h3&gt;

&lt;p&gt;I wanted to tag all the instances in an &lt;a href="https://aws.amazon.com/ec2/autoscaling/"&gt;AWS Auto Scaling Group (ASG)&lt;/a&gt; with some tags that would be used for reporting. The first order of business was to update the ASG with the relevant tags. This was done by applying the tag to the ASG using Infrastructure as Code (Terraform, in this case) so that all new instances launched would get the tag. However, the existing nodes would not inherit the tags.&lt;/p&gt;

&lt;p&gt;While I could have recycled the nodes (and &lt;a href="https://docs.aws.amazon.com/autoscaling/ec2/userguide/asg-instance-refresh.html"&gt;instance refresh&lt;/a&gt; makes that much easier), I figured bulk tagging the existing instances in the ASG was a simpler way to set the tags.&lt;/p&gt;

&lt;h3&gt;
  
  
  Filtering with AWS CLI
&lt;/h3&gt;

&lt;p&gt;Since I wanted to tag instances of a specific ASG, I tried to use &lt;code&gt;jq&lt;/code&gt; on the output of the &lt;code&gt;aws autoscaling describe-auto-scaling-groups&lt;/code&gt; command to slice and dice the data. This was getting too painful, so I started to dig into the AWS CLI’s docs and found out that the AWS CLI can &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-usage-filter.html#cli-usage-filter-client-side"&gt;apply client-side filtering&lt;/a&gt; using the &lt;code&gt;--query&lt;/code&gt; option and &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-usage-filter.html#cli-usage-filter-server-side"&gt;server-side filtering&lt;/a&gt; using the &lt;code&gt;--filters&lt;/code&gt; option.&lt;/p&gt;

&lt;h3&gt;
  
  
  Querying Auto Scaling Group Instances
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;aws autoscaling describe-auto-scaling-groups&lt;/code&gt; command returns a response with a top-level &lt;code&gt;AutoScalingGroups&lt;/code&gt; key containing an array of groups, each structured as shown below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"AutoScalingGroupName"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"AutoScalingGroupARN"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"arn"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"LaunchConfigurationName"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"lc-name"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"MinSize"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"MaxSize"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"DesiredCapacity"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"DefaultCooldown"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;300&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"AvailabilityZones"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="s2"&gt;"ap-southeast-1a"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="s2"&gt;"ap-southeast-1c"&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"LoadBalancerNames"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[],&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"TargetGroupARNs"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[],&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"HealthCheckType"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"EC2"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"HealthCheckGracePeriod"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"Instances"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="nl"&gt;"InstanceId"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;""&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="nl"&gt;"InstanceType"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;""&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="nl"&gt;"AvailabilityZone"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;""&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="nl"&gt;"LifecycleState"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;""&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="nl"&gt;"HealthStatus"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;""&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="nl"&gt;"LaunchConfigurationName"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;""&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="nl"&gt;"ProtectedFromScaleIn"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"CreatedTime"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;""&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"SuspendedProcesses"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[],&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"VPCZoneIdentifier"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;""&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"EnabledMetrics"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[],&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"Tags"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[],&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"TerminationPolicies"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="s2"&gt;"Default"&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"NewInstancesProtectedFromScaleIn"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"ServiceLinkedRoleARN"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"arn"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"TrafficSources"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For the &lt;code&gt;--query&lt;/code&gt; option, the AWS CLI accepts expressions written in &lt;a href="https://jmespath.org/"&gt;JMESPath syntax&lt;/a&gt;. From the response above, we want to select the instance IDs of a specific Auto Scaling Group. In a JMESPath query, the question mark &lt;code&gt;?&lt;/code&gt; starts a filter expression that selects matching elements. Thus, to filter on the Auto Scaling Group name, the JMESPath expression would be as shown below, replacing &lt;code&gt;asg-name&lt;/code&gt; with the actual name of the Auto Scaling Group:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;AutoScalingGroups[?AutoScalingGroupName&lt;span class="o"&gt;==&lt;/span&gt;&lt;span class="sb"&gt;`&lt;/span&gt;asg-name&lt;span class="sb"&gt;`&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Filtering Instances Based on Auto Scaling Group
&lt;/h3&gt;

&lt;p&gt;Since I only want the instance IDs, the JMESPath projection &lt;code&gt;.Instances[*].InstanceId&lt;/code&gt; can be chained onto the filter above. The command to fetch the instance IDs of a specific Auto Scaling Group is shown below, taking care to replace &lt;code&gt;region&lt;/code&gt; and &lt;code&gt;asg-name&lt;/code&gt; with the region name and the ASG name, respectively.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws autoscaling describe-auto-scaling-groups &lt;span class="nt"&gt;--region&lt;/span&gt; region &lt;span class="nt"&gt;--query&lt;/span&gt; &lt;span class="s1"&gt;'AutoScalingGroups[?AutoScalingGroupName==`asg-name`].Instances[*].InstanceId'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
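&lt;p&gt;To make what the expression selects concrete, here is a small sketch that emulates the filter and the projection by hand on a trimmed sample response. This is an illustration in plain Python (run from the shell), not the AWS CLI's actual JMESPath engine, and the instance IDs are made up.&lt;/p&gt;

```shell
# Illustration only (not the AWS CLI): emulate what the JMESPath expression
#   AutoScalingGroups[?AutoScalingGroupName==`asg-name`].Instances[*].InstanceId
# selects, using a trimmed sample response and python3's standard library.
MATCHED=$(python3 -c '
sample = {"AutoScalingGroups": [
    {"AutoScalingGroupName": "asg-name",  "Instances": [{"InstanceId": "i-0aaa"}]},
    {"AutoScalingGroupName": "other-asg", "Instances": [{"InstanceId": "i-0bbb"}]},
]}
# The ? filter keeps only the groups where the comparison holds, and the
# trailing projection pulls out each InstanceId.
matched = [g for g in sample["AutoScalingGroups"]
           if g["AutoScalingGroupName"] == "asg-name"]
print(" ".join(i["InstanceId"] for g in matched for i in g["Instances"]))
')
echo "$MATCHED"
```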



&lt;p&gt;Now that there’s a list of instance IDs, these IDs can be passed over to the &lt;code&gt;aws ec2 create-tags&lt;/code&gt; command to apply the tag to all the instances.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws ec2 create-tags &lt;span class="nt"&gt;--region&lt;/span&gt; ap-southeast-1 &lt;span class="nt"&gt;--tags&lt;/span&gt; &lt;span class="nv"&gt;Key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;tagKey,Value&lt;span class="o"&gt;=&lt;/span&gt;tagValue &lt;span class="nt"&gt;--resources&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;aws autoscaling describe-auto-scaling-groups &lt;span class="nt"&gt;--region&lt;/span&gt; region &lt;span class="nt"&gt;--query&lt;/span&gt; &lt;span class="s1"&gt;'AutoScalingGroups[?AutoScalingGroupName==`asg-name`].Instances[*].InstanceId'&lt;/span&gt; &lt;span class="nt"&gt;--output&lt;/span&gt; text&lt;span class="si"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
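&lt;p&gt;The nested command substitution works, but it can be hard to read. Here is the same flow in two steps as a sketch. Note that the &lt;code&gt;aws&lt;/code&gt; shell function at the top is a stub that fakes the CLI's output, so the sketch can be dry-run without credentials; delete it (and replace &lt;code&gt;region&lt;/code&gt; and &lt;code&gt;asg-name&lt;/code&gt;) to run it for real.&lt;/p&gt;

```shell
# Stub that fakes AWS CLI output so this sketch can be dry-run without
# credentials; delete this function to run against a real account.
aws() {
  case "$1 $2" in
    "autoscaling describe-auto-scaling-groups") echo "i-0aaa i-0bbb" ;;
    "ec2 create-tags") echo "create-tags called" ;;
  esac
}

# Step 1: collect the instance IDs of the ASG (replace region/asg-name).
INSTANCE_IDS=$(aws autoscaling describe-auto-scaling-groups \
  --region region \
  --query 'AutoScalingGroups[?AutoScalingGroupName==`asg-name`].Instances[*].InstanceId' \
  --output text)

# Step 2: tag every collected instance in one call.
aws ec2 create-tags --region region \
  --tags Key=tagKey,Value=tagValue \
  --resources $INSTANCE_IDS
```

&lt;p&gt;Splitting it up also makes it easy to eyeball the instance IDs before tagging them.&lt;/p&gt;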



&lt;h3&gt;
  
  
  Verifying the results
&lt;/h3&gt;

&lt;p&gt;Applying what I learned above, I verified that all the instances had been correctly tagged using the command below, again taking care to replace the region and &lt;code&gt;asg-name&lt;/code&gt; with the region name and the ASG name, respectively.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws ec2 describe-instances &lt;span class="nt"&gt;--region&lt;/span&gt; ap-southeast-1 &lt;span class="nt"&gt;--filters&lt;/span&gt; &lt;span class="s2"&gt;"Name=tag:aws:autoscaling:groupName,Values='asg-name'"&lt;/span&gt; &lt;span class="nt"&gt;--query&lt;/span&gt; &lt;span class="s2"&gt;"Reservations[].Instances[].{Instance:InstanceId,Name:Tags[?Key=='Name']|[0].Value}"&lt;/span&gt; &lt;span class="nt"&gt;--output&lt;/span&gt; table

&lt;span class="nt"&gt;----------------------&lt;/span&gt;
| DescribeInstances  |
+--------------------+--------------+
| Instance           | Name         |
+-------------------+---------------+
| i-0xdeadbeef456242 | asg-nme-1234 |
| i-0xdeadbeef454142 | asg-nme-1235 |
+-----------------------------------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the above command, the &lt;code&gt;--filters&lt;/code&gt; option applies server-side filtering and fetches the instances whose &lt;code&gt;aws:autoscaling:groupName&lt;/code&gt; tag (which AWS attaches automatically to instances launched by an ASG) has &lt;code&gt;asg-name&lt;/code&gt; as its value. This is combined with the client-side &lt;code&gt;--query&lt;/code&gt; option to display each instance ID along with the instance name, fetched from the tag with key &lt;code&gt;Name&lt;/code&gt;.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>ec2</category>
      <category>tagging</category>
      <category>devops</category>
    </item>
    <item>
      <title>Self-hosting FreshRSS (for free) on Fly.io in under 10 minutes</title>
      <dc:creator>Sathyajith Bhat</dc:creator>
      <pubDate>Wed, 04 Jan 2023 00:00:00 +0000</pubDate>
      <link>https://dev.to/sathyabhat/self-hosting-freshrss-for-free-on-flyio-in-under-10-minutes-acd</link>
      <guid>https://dev.to/sathyabhat/self-hosting-freshrss-for-free-on-flyio-in-under-10-minutes-acd</guid>
      <description>&lt;p&gt;&lt;a href="https://www.freshrss.org/"&gt;FreshRSS&lt;/a&gt; is a free, self-hostable RSS feeds aggregator. &lt;a href="https://fly.io/"&gt;Fly.io&lt;/a&gt; is a super amazing platform that runs application servers close to end users. &lt;/p&gt;

&lt;p&gt;Fly.io can take Docker images (or Dockerfiles, or buildpacks) and boot them into &lt;a href="https://aws.amazon.com/blogs/aws/firecracker-lightweight-virtualization-for-serverless-computing/"&gt;Firecracker&lt;/a&gt;-powered microVMs. When you deploy with Fly.io, you get an Anycast IP, with TLS termination handled for you. The Firecracker microVMs can scale up and down with traffic, and the platform comes with a &lt;a href="https://fly.io/docs/about/pricing/"&gt;pretty generous free tier&lt;/a&gt; that includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Up to 3 shared-cpu-1x 256MB VMs&lt;/li&gt;
&lt;li&gt;3GB of total persistent volume storage &lt;/li&gt;
&lt;li&gt;160GB outbound data transfer&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Since FreshRSS comes with a Docker image, adapting it to Fly.io was as straightforward as pointing Fly.io at that image. These are the steps if you wish to run FreshRSS on your own Fly.io account:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Sign up for &lt;a href="https://fly.io/"&gt;Fly.io&lt;/a&gt;, or sign in&lt;/li&gt;
&lt;li&gt;Install &lt;a href="https://fly.io/docs/hands-on/install-flyctl/"&gt;flyctl&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Sign in to your Fly.io account by typing &lt;code&gt;flyctl auth login&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Create a &lt;code&gt;fly.toml&lt;/code&gt; file similar to the one in &lt;a href="https://github.com/SathyaBhat/freshrss-on-fly-io"&gt;my repo&lt;/a&gt;. Make sure to update the app name to a unique name.&lt;/li&gt;
&lt;li&gt;Review the environment variables listed in the &lt;code&gt;[env]&lt;/code&gt; section and update if required. You can check &lt;a href="https://github.com/FreshRSS/FreshRSS/blob/edge/Docker/freshrss/example.env"&gt;FreshRSS' docs&lt;/a&gt; on the environment variables that can be updated &lt;/li&gt;
&lt;li&gt;Create a volume for persisting data using the command &lt;code&gt;fly volumes create freshrss_data --size 1&lt;/code&gt;

&lt;ul&gt;
&lt;li&gt;This command creates a volume of 1 GB. flyctl expects the size in whole gigabytes and doesn't accept fractional values, so I picked the smallest possible size; the volume can be grown later.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Save the &lt;code&gt;fly.toml&lt;/code&gt; file. From the same directory where &lt;code&gt;fly.toml&lt;/code&gt; is present, deploy the FreshRSS application using &lt;code&gt;fly launch&lt;/code&gt;. Fly.io will pull the Docker image and launch the VM. The output should be as shown below:
&lt;/li&gt;
&lt;/ul&gt;
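&lt;p&gt;For reference, a minimal &lt;code&gt;fly.toml&lt;/code&gt; along these lines could look like the sketch below. Treat it as an outline rather than a copy of the file in my repo: the app name is a placeholder, and the env values and mount destination should be checked against FreshRSS' and Fly.io's current docs.&lt;/p&gt;

```toml
# Sketch of a minimal fly.toml for FreshRSS; the app name is a placeholder
# and values should be verified against FreshRSS/Fly.io documentation.
app = "my-freshrss"          # must be globally unique

[build]
  image = "freshrss/freshrss:latest"

[env]
  TZ = "UTC"                 # see FreshRSS' example.env for more variables

[mounts]
  source = "freshrss_data"   # the volume created earlier
  destination = "/var/www/FreshRSS/data"

[[services]]
  internal_port = 80
  protocol = "tcp"

  [[services.ports]]
    handlers = ["http"]
    port = 80

  [[services.ports]]
    handlers = ["tls", "http"]
    port = 443
```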

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
fly launch
An existing fly.toml file was found &lt;span class="k"&gt;for &lt;/span&gt;app freshrss
App is not running, deploy...
Deploying freshrss
&lt;span class="o"&gt;==&amp;gt;&lt;/span&gt; Validating app configuration
&lt;span class="nt"&gt;--&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; Validating app configuration &lt;span class="k"&gt;done
&lt;/span&gt;Services
TCP 80/443 ⇢ 80
Searching &lt;span class="k"&gt;for &lt;/span&gt;image &lt;span class="s1"&gt;'freshrss/freshrss'&lt;/span&gt; remotely...
image found: img_lj9x4d7jkwe4wo1k
Image: registry-1.docker.io/freshrss/freshrss:latest
Image size: 75 MB
&lt;span class="o"&gt;==&amp;gt;&lt;/span&gt; Creating release
Release v1 created

You can detach the terminal anytime without stopping the deployment
Monitoring Deployment

1 desired, 1 placed, 1 healthy, 0 unhealthy
&lt;span class="nt"&gt;--&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; v1 deployed successfully
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it! FreshRSS should be accessible at &lt;code&gt;&amp;lt;appname&amp;gt;.fly.dev&lt;/code&gt;. You can put a custom domain in front of it as well: head over to the Certificates tab in Fly.io's dashboard, or use the &lt;code&gt;flyctl&lt;/code&gt; tool to provision a certificate by typing &lt;code&gt;flyctl certs add &amp;lt;domain&amp;gt;&lt;/code&gt;. Add the CNAME that Fly tells you about, or an A record for the (sub)domain pointing to the Anycast IP you got when deploying FreshRSS. See &lt;a href="https://fly.io/docs/app-guides/custom-domains-with-fly/#creating-a-custom-domain-on-fly-manually"&gt;Fly.io's docs&lt;/a&gt; on how to do this.&lt;/p&gt;

&lt;h3&gt;
  
  
  Caveats
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;FreshRSS extensions need to be available under &lt;code&gt;/var/www/FreshRSS/extensions&lt;/code&gt;; however, Fly.io doesn't support having the &lt;a href="https://community.fly.io/t/mount-multiple-destinations-to-the-same-source-volume/5298"&gt;same mount for multiple directories&lt;/a&gt;, or having &lt;a href="https://community.fly.io/t/multiple-mounts-in-one-app/4701"&gt;multiple mounts&lt;/a&gt;.

&lt;ul&gt;
&lt;li&gt;Trying to mount &lt;code&gt;/var/www/FreshRSS&lt;/code&gt; itself caused the microVM to throw kernel panics, mainly because the underlying Docker container expects &lt;em&gt;something&lt;/em&gt; there (probably the webserver config), and mounting an empty volume over it causes errors.&lt;/li&gt;
&lt;li&gt;You could try to play with Fly's internal networking, including &lt;a href="https://community.fly.io/t/how-to-copy-files-off-a-vm/1651/13"&gt;setting up a WireGuard tunnel&lt;/a&gt;, but I didn't try this.&lt;/li&gt;
&lt;li&gt;Another (icky) option is to build a custom image with a Dockerfile that copies the required extensions in during the build step.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;I haven't run this setup for long; I'll update this post if I run into issues&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Automated updates and deploys
&lt;/h3&gt;

&lt;p&gt;Fly.io's CLI-oriented approach makes automating updates via GitHub Actions easy. While I haven't set this up for FreshRSS yet, you can look at &lt;a href="https://github.com/SathyaBhat/sathyabh.at/pull/37"&gt;this PR&lt;/a&gt; where I add a GitHub Action to deploy my blog, &lt;a href="https://sathyabh.at"&gt;sathyabh.at&lt;/a&gt; (a static site powered by Hugo), to Fly.io.&lt;/p&gt;

&lt;p&gt;Fly.io's &lt;a href="https://community.fly.io/"&gt;community Discourse&lt;/a&gt; is quite active if you run into any problems. Hope this gives you a good taste of what Fly.io can offer.&lt;/p&gt;

&lt;p&gt;See also: the &lt;a href="https://github.com/SathyaBhat/freshrss-on-fly-io"&gt;GitHub repo&lt;/a&gt; with the Fly.io config file.&lt;/p&gt;

</description>
      <category>selfhosting</category>
      <category>flyio</category>
      <category>devops</category>
      <category>rss</category>
    </item>
    <item>
      <title>Second Edition of Practical Docker With Python is now available</title>
      <dc:creator>Sathyajith Bhat</dc:creator>
      <pubDate>Mon, 20 Dec 2021 10:00:00 +0000</pubDate>
      <link>https://dev.to/sathyabhat/second-edition-of-practical-docker-with-python-is-now-available-3ijn</link>
      <guid>https://dev.to/sathyabhat/second-edition-of-practical-docker-with-python-is-now-available-3ijn</guid>
      <description>&lt;p&gt;About 3 years ago, I wrote my first book - "&lt;a href="https://dev.to/2018/10/02/so-i-wrote-a-book-presenting-practical-docker-with-python/"&gt;Practical Docker with Python&lt;/a&gt;". The book focused on the fundamentals of containerization and getting started with Docker.  A few months back, I received an email from my editor at Apress asking me if I'd be interested in doing a second edition of the book. &lt;/p&gt;

&lt;p&gt;Looking back, the book had received good feedback and had been cited in several publications, and there was feedback I'd always wished I could fold back into it, so this was the perfect chance. After many months of work, the second edition of Practical Docker With Python is available on &lt;a href="https://bit.ly/practical-docker-2e"&gt;SpringerLink&lt;/a&gt;, Amazon (&lt;a href="https://amzn.to/32dTOyD"&gt;US&lt;/a&gt;, &lt;a href="https://amzn.to/32dTOyD"&gt;India&lt;/a&gt;), &lt;a href="https://learning.oreilly.com/library/view/practical-docker-with/9781484278154/"&gt;O'Reilly Learning&lt;/a&gt; (formerly Safari Books Online), and probably every other online bookstore.&lt;/p&gt;

&lt;p&gt;The second edition, much like the first, targets people who are new to containerization and want a guided approach to containerizing their application. I start by setting up a Python Telegram bot, building and running it as a regular program, and then containerize the same bot: writing the Dockerfile, adding volumes to persist data, setting up Docker networks for container networking, and finally orchestrating multiple containers with Docker Compose.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7GEAPsbR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.sathyabh.at/ss/practical-docker-with-python-2e.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7GEAPsbR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.sathyabh.at/ss/practical-docker-with-python-2e.jpg" alt="Holding my copy of my book - Practical Docker with Python" width="880" height="1173"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What's new in this edition is a blurb on getting started with &lt;a href="https://docs.microsoft.com/en-us/windows/wsl/about"&gt;Windows Subsystem for Linux 2 (WSL2)&lt;/a&gt;, an entire chapter dedicated to Continuous Integration, and a look at how a container image goes from being a local artifact to being deployed on an orchestrator like Kubernetes. Writing about Kubernetes was the hardest part: entire books have been written on it, and I certainly didn't want to make it the focus, but I hope the chapter gives a taste of what you can do with k8s.&lt;/p&gt;

&lt;p&gt;Among the things left out are "docker compose" and multi-arch builds using buildx. The next-gen docker compose (note: NOT "docker-compose"!) was just too new to be included in the book. There is, however, a reference to the Compose spec, and "docker compose" is meant to be a drop-in replacement, so you should be able to swap out the commands. Not including docker compose also meant I couldn't cover deploying to ECS and other container orchestrators with it.&lt;/p&gt;

&lt;p&gt;I hope the second edition is a value add and you like it as much as I enjoyed writing it. Feedback, of course, is always welcome. My Twitter DMs are always open, or you can reach out to me on the channels mentioned in the &lt;a href="https://sathyabh.at/contact/"&gt;contact page&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>python</category>
      <category>docker</category>
      <category>containers</category>
    </item>
    <item>
      <title>Checking for connectivity of ElastiCache Redis instances across peering connections using VPC Reachability Analyzer</title>
      <dc:creator>Sathyajith Bhat</dc:creator>
      <pubDate>Mon, 13 Sep 2021 00:00:00 +0000</pubDate>
      <link>https://dev.to/aws-heroes/checking-for-connectivity-of-elasticache-redis-instances-across-peering-connections-using-vpc-reachability-analyzer-2pd3</link>
      <guid>https://dev.to/aws-heroes/checking-for-connectivity-of-elasticache-redis-instances-across-peering-connections-using-vpc-reachability-analyzer-2pd3</guid>
      <description>&lt;p&gt;It's quite common to work in an environment where you have your VPC connected to multiple other VPCs, either via VPC Peering or Transit Gateway, it's no different for me. I was working on deploying an application on our in-house container platform. This container platform is hosted in another account/VPC, and it had to connect to the ElastiCache Redis instances hosted in our VPC/account. I was looking into timeouts connecting to the Redis instance from the application containers, and I figured this would be a good time to test out the &lt;a href="https://aws.amazon.com/blogs/aws/new-vpc-insights-analyzes-reachability-and-visibility-in-vpcs/"&gt;VPC Reachability Analyzer&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The VPC Reachability Analyzer is a network diagnostic tool from AWS, accessible from the VPC section of the AWS Console. Click on Reachability Analyzer, enter the source, the destination, and any optional intermediaries, then click "Analyze Path" to have AWS trace the path between the source and the destination.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--U-sMeR-D--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://images.sbhat.me/ss/reachability-analyzer/reachability-analyzer.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--U-sMeR-D--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://images.sbhat.me/ss/reachability-analyzer/reachability-analyzer.png" alt="Finding Reachability Analyzer in the AWS Console"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZRmC8VQ7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://images.sbhat.me/ss/reachability-analyzer/create-analyze-path.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZRmC8VQ7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://images.sbhat.me/ss/reachability-analyzer/create-analyze-path.png" alt="Creating a Path Analysis"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I wanted to test the connection from the peering connection to the ElastiCache instance in the private subnet. I noticed the drop-downs for the destination (and the source) did not list individual AWS services. That threw me off, as I couldn't select ElastiCache there, but &lt;a href="https://aws.amazon.com/developer/community/heroes/ben-bridts/"&gt;Ben Bridts&lt;/a&gt; pointed out that each ElastiCache node has an ENI attached, so providing the ENI as the destination should work. That said, finding the ENI was not straightforward: I had to head to ElastiCache, pick up the URL of a node, do a &lt;code&gt;dig&lt;/code&gt; on the node DNS to find the IP address, and then search for that IP address on the ENI page in the AWS Console (Elastic Network Interfaces are listed under EC2, which adds to the confusion). There is certainly room to simplify this workflow.&lt;/p&gt;
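&lt;p&gt;The dig-and-search dance can be scripted. The sketch below is illustrative: the node endpoint and IP are made-up placeholders, and the &lt;code&gt;aws&lt;/code&gt; call is guarded so the snippet is a no-op where the CLI isn't configured.&lt;/p&gt;

```shell
# Map an ElastiCache node's DNS name to its ENI without console spelunking.
# The endpoint and IP below are made-up placeholders.
NODE_DNS="my-redis-001.abc123.0001.apse1.cache.amazonaws.com"

# With DNS access you would resolve it live:  NODE_IP=$(dig +short "$NODE_DNS")
# Here we use a sample answer so the sketch runs offline.
NODE_IP="10.0.12.34"

# A server-side filter on the private IP returns the owning ENI directly.
if command -v aws >/dev/null; then
  aws ec2 describe-network-interfaces \
    --filters "Name=addresses.private-ip-address,Values=$NODE_IP" \
    --query 'NetworkInterfaces[0].NetworkInterfaceId' \
    --output text || true   # tolerate missing credentials in a dry run
fi
```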

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--oQU79JZI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://images.sbhat.me/ss/reachability-analyzer/eni-ec2-feature.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--oQU79JZI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://images.sbhat.me/ss/reachability-analyzer/eni-ec2-feature.png" alt="Finding the Elastic Network Interface (ENI)"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I created some test paths to check how it works, and the experience was mostly positive. &lt;/p&gt;

&lt;p&gt;For the first test, I created a path analysis with the peering connection as the source and the ENI of the ElastiCache Redis as the destination. However, I accidentally selected the wrong peering connection. When the test was completed, it showed a failure, and I was pleasantly surprised by the detailed error message, which outlined why the test failed:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_S3FCvnK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://images.sbhat.me/ss/reachability-analyzer/failed-no-path-no-vpc-peering.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_S3FCvnK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://images.sbhat.me/ss/reachability-analyzer/failed-no-path-no-vpc-peering.png" alt="Error message due to lack of peering and no direct path"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For the next attempt, I analyzed the path between the correct peering connection and the Redis cluster, and the path trace failed - and yet again, the feedback and error messages were pretty detailed in indicating where things were going wrong. They showed the two places where the connection was failing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The route table didn't have the correct entries to route traffic from the peering into the private subnet.&lt;/li&gt;
&lt;li&gt;The security group rules were incorrectly configured and were not allowing connections from the peering connection to the Redis port.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DVMOsaaQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://images.sbhat.me/ss/reachability-analyzer/destination-unreachable.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DVMOsaaQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://images.sbhat.me/ss/reachability-analyzer/destination-unreachable.png" alt="Error message due to lack of peering and no direct path"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Clicking on the details shows the full information about which route table and which security group are preventing the network communication from happening.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--l9p-RpfE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://images.sbhat.me/ss/reachability-analyzer/details-of-failure.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--l9p-RpfE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://images.sbhat.me/ss/reachability-analyzer/details-of-failure.png" alt="Error message due to lack of peering and no direct path"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once I corrected the errors, the reachability analyzer said all was good! &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5DbkBo_W--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://images.sbhat.me/ss/reachability-analyzer/path-is-reachable.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5DbkBo_W--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://images.sbhat.me/ss/reachability-analyzer/path-is-reachable.png" alt="Reachability Analyzer says reachable"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I was still seeing the timeout errors from the application container. And therein lies the trouble with the VPC Reachability Analyzer: if you have multiple routes to a destination, the analyzer seems to consider only the fastest path and ignores the failing routes. This is mentioned briefly when you create an analysis - you can specify a particular intermediate component filter to analyze the alternate paths. The intermediate components can be load balancers, NAT gateways and peering connections, but not security groups, ACLs, network interfaces or route tables. In this particular case, there doesn't seem to be a way to show alternate paths. I wish the Reachability Analyzer could use different subnets as an intermediate component filter: where there are multiple routes via different subnets in different Availability Zones, this would show all the applicable paths and which ones are failing. &lt;/p&gt;

&lt;p&gt;VPC Reachability Analyzer is still a fantastic tool for debugging network connectivity issues, and I can see myself using it often. Path analysis is charged at the rate of $0.10 per analysis, and the fact that this is done without sending any network traffic along the path is pretty awesome. For more details on how AWS does this using automated reasoning, you can refer to this re:Invent talk. &lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/6DX7p-OirGU"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

</description>
      <category>aws</category>
      <category>vpc</category>
      <category>cloud</category>
      <category>devops</category>
    </item>
    <item>
      <title>Migrating my WordPress blogs to Hugo</title>
      <dc:creator>Sathyajith Bhat</dc:creator>
      <pubDate>Fri, 28 Aug 2020 00:00:00 +0000</pubDate>
      <link>https://dev.to/sathyabhat/migrating-my-wordpress-blogs-to-hugo-53k2</link>
      <guid>https://dev.to/sathyabhat/migrating-my-wordpress-blogs-to-hugo-53k2</guid>
      <description>&lt;p&gt;I started blogging with &lt;a href="https://sathyasays.com/2007/05/27/hello-world-2/"&gt;WordPress about 13 years ago&lt;/a&gt;. I had some free time since the joining date for my first job was about a month or so away. Armed with boredom, an Internet connection and an ample amount of free time, I started &lt;a href="https://sathyasays.wordpress.com/"&gt;Sathya Says&lt;/a&gt; on WordPress.com hosting. Soon after, I came to know about domains, shared hosting and self-hosted WordPress and with my first ever salary, purchased &lt;a href="//sathyasays.com"&gt;sathyasays.com&lt;/a&gt;, shared hosting and started writing about Linux experiences. The shared hosting served me quite well, enough to get tens of thousands of views per month(which surprised me quite a lot since I focused on Desktop Linux, a tiny niche). I experimented with AdSense and managed to hit the $100 threshold in about 4 months. I removed Adsense soon after, there was just no point with such abysmal CTRs. &lt;/p&gt;

&lt;p&gt;Fast forward a few years, and I was spending more and more time keeping my WordPress install secure. The tipping point was when my DigitalOcean droplet was disconnected from their network as it was detected to be a contributor to a DDoS attack - and it took DigitalOcean more than 2 days to re-enable networking so I could recover the data and fix it! I took this opportunity to disable WordPress, export the data and import it into Hugo, to be hosted with &lt;a href="https://www.netlify.com/"&gt;Netlify&lt;/a&gt;. Here's how I went about exporting the blogs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Hugo
&lt;/h3&gt;

&lt;p&gt;Hugo is a static site generator. Unlike systems like WordPress, which dynamically build a response page on every visit, Hugo builds the pages when the content is created or updated. This doesn't mean you'll have to painstakingly build HTML for each page - Hugo provides a framework for building the pages. Your pages will be Markdown files which store the content of the posts. Hugo also supports frontmatter, which lets you keep the metadata of the page within the page itself. When you build a page in Hugo, it parses the Markdown file and generates a static HTML file. This static HTML can be hosted anywhere - Amazon S3, Netlify etc. There's a lot of documentation on how to get started with Hugo, but not a lot on how to migrate - so I thought I'd post on how I did the migration.&lt;/p&gt;
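&lt;p&gt;As a quick illustration, a Hugo post is just a Markdown file with a frontmatter block at the top. The fields below are typical examples (title, date, tags), not an exhaustive list:&lt;/p&gt;

```markdown
---
title: "My first post"
date: 2020-08-28
tags:
  - linux
  - blogging
---

The post content follows in plain Markdown.
```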

&lt;h3&gt;
  
  
  So how do we export the data from WordPress?
&lt;/h3&gt;

&lt;p&gt;Before starting the migration, I had two major goals:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Data export from WordPress and import to Hugo should be done with as little work as possible&lt;/li&gt;
&lt;li&gt;There should be no broken links - in other words, the permalinks, tags, category links etc should not change&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;While WordPress comes with a data export tool, it's an XML dump of all the content and I'd have to spend a lot of time importing it. The Tools page on Hugo's website lists some tools for working with WordPress XML backups, but I didn't try them out.&lt;/p&gt;

&lt;p&gt;With these two goals in mind, I decided to use SchumacherFM's &lt;a href="https://github.com/SchumacherFM/wordpress-to-hugo-exporter"&gt;wordpress-to-hugo exporter&lt;/a&gt;. The exporter is actually a WordPress plugin and comes in two forms: a CLI, as well as a proper plugin that you can activate from the WordPress plugins page. I would recommend the CLI approach - the data export is quite intensive and you might end up with a 504/timeout/failed export if the export doesn't complete within the timeout settings.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step by step process
&lt;/h3&gt;

&lt;p&gt;Below is the step-by-step process to export the data and import it into Hugo:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Clone the &lt;code&gt;wordpress-to-hugo&lt;/code&gt; exporter repo on the server where your WordPress instance is running, in the &lt;code&gt;wp-content/plugins&lt;/code&gt; directory&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/SchumacherFM/wordpress-to-hugo-exporter.git
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Change into the &lt;code&gt;wordpress-to-hugo-exporter&lt;/code&gt; directory&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd wordpress-to-hugo-exporter
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Run the exporter &lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;php hugo-export-cli.php
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Depending on the size of your data and your server specifications, the export might take anywhere from a few seconds to a few minutes to complete, and the exporter will display the path where the file is saved.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;hugo-export-cli.php
[INFO] tmp folder not found, use default. You could invoke php hugo-export-cli.php with an extra argument as the temporary folder path if needful.
This is your file!
/tmp/user/0/wp-hugo.zip
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Download the file from the server using &lt;code&gt;scp&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;scp user@remotehost:/path/to/wp-hugo.zip .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The compressed file will contain your blog data - posts, pages and images, along with the metadata, in the structure and &lt;code&gt;config.yaml&lt;/code&gt; file that Hugo expects. Just download a Hugo theme of your choice and update the config file accordingly. &lt;/p&gt;

&lt;p&gt;You can take a look at my blog repos for examples on structuring the directory:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/SathyaBhat/sathyasays.com"&gt;Sathya Says&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/SathyaBhat/sathyabh.at"&gt;My World&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Some tips and points to note
&lt;/h3&gt;

&lt;p&gt;While the export is almost seamless, here are some pointers to get the cleanest export before you start:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Disable all your plugins, especially Jetpack. The plugins can often mess up the data exports with unnecessary cruft, making the Markdown exports bulkier than needed&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;wordpress-to-hugo-exporter&lt;/code&gt; keeps any HTML content within the page (for example, embedded content, image galleries etc.) as inline HTML. This wasn't a problem earlier, but as of &lt;a href="https://gohugo.io/news/0.60.0-relnotes/"&gt;Hugo version 0.60&lt;/a&gt;, Hugo's default Markdown rendering library, Goldmark, no longer renders inline HTML out of the box and you will have to enable &lt;code&gt;unsafe&lt;/code&gt; mode as below:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;config.yaml&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  markup:
    goldmark:
      renderer:
        unsafe: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;config.toml&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  [markup]
    [markup.goldmark]
      [markup.goldmark.renderer]
        unsafe = true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;config.json&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  {
    "markup": {
        "goldmark": {
          "renderer": {
              "unsafe": true
          }
        }
    }
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If you don't add these settings, the inline HTML will be omitted and replaced by this message:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  &amp;lt;!-- raw HTML omitted --&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;With the plugin, special characters may be exported as HTML entities and may not be rendered correctly by Hugo, especially in tags.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Hosting and Deploys
&lt;/h3&gt;

&lt;p&gt;For hosting the imported blogs, I used &lt;a href="https://www.netlify.com/"&gt;Netlify&lt;/a&gt;. Netlify can connect &lt;a href="https://gohugo.io/hosting-and-deployment/hosting-on-netlify/"&gt;with your GitHub repo&lt;/a&gt; and deploy the changes the moment a commit is pushed, making it quite easy to write, review, edit and rework the posts if needed. &lt;a href="https://docs.netlify.com/site-deploys/overview/#deploy-preview-controls"&gt;Netlify's deploy previews&lt;/a&gt; mean that you can write a blog post, cut a pull request, and Netlify will automatically publish a preview version - making editing and peer reviews easy, typically as comments on the pull request itself. Netlify is doing well as of now but has gradually started introducing more constraints, and it is quite possible that Netlify might not be feasible in the near future; in that case, I might just move the entire blog to GitHub Pages, as described by Shantanu Goel in &lt;a href="https://shantanugoel.com/2020/01/05/migrate-hugo-blog-gitlab-s3-github-pages-actions/"&gt;his post here&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Handling images
&lt;/h3&gt;

&lt;p&gt;While using Git as the backend store for text was ideal, I wasn't happy with storing the blog images in the repo. I thought of using &lt;a href="https://git-lfs.github.com/"&gt;Git LFS&lt;/a&gt; (which Netlify Large Media uses), but I ultimately didn't, since it's metered &lt;a href="https://www.netlify.com/pricing/#large-media"&gt;under Netlify billing&lt;/a&gt; and Netlify's billing is not very granular. Instead, I set up a CloudFront CDN in front of an S3 bucket and started serving images off that. You can check &lt;a href="https://dev.to/aws-heroes/self-hosting-secured-static-web-site-using-s3-route-53-acm-cloudfront-411a"&gt;Bhuvana's blog post on how to do this&lt;/a&gt;, or refer to &lt;a href="https://github.com/SathyaBhat/cdk-cdn"&gt;my GitHub repo for the CDK code&lt;/a&gt; that I used to build the entire infra.&lt;/p&gt;
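&lt;p&gt;To give an idea of the publishing flow with that setup, pushing images and refreshing the CDN cache looks roughly like this - the bucket name and distribution ID below are hypothetical placeholders:&lt;/p&gt;

```shell
# Placeholders - your own bucket and CloudFront distribution
IMAGE_BUCKET="s3://my-blog-images"
DISTRIBUTION_ID="E1234567890ABC"

# Sync local images up to the S3 bucket that sits behind the CDN
aws s3 sync images/ "$IMAGE_BUCKET"

# Invalidate cached copies so CloudFront serves the fresh versions
aws cloudfront create-invalidation \
    --distribution-id "$DISTRIBUTION_ID" --paths "/*"
```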

&lt;h3&gt;
  
  
  Commenting
&lt;/h3&gt;

&lt;p&gt;For comments, there are a bunch of open-source self-hosted solutions - &lt;a href="https://utteranc.es/"&gt;Utterances&lt;/a&gt; looks like the coolest of the lot, with comments powered by GitHub issues. Utterances requires a bit of theme hacking, though, so I didn't enable it. For now, I've stuck with my existing Disqus account and might revisit this later.&lt;/p&gt;

</description>
      <category>tutorial</category>
      <category>beginners</category>
      <category>hugo</category>
      <category>wordpress</category>
    </item>
    <item>
      <title>Running Folding@Home on AWS with AWS CDK</title>
      <dc:creator>Sathyajith Bhat</dc:creator>
      <pubDate>Sun, 26 Apr 2020 06:16:24 +0000</pubDate>
      <link>https://dev.to/aws-heroes/running-folding-home-on-aws-with-aws-cdk-2dp0</link>
      <guid>https://dev.to/aws-heroes/running-folding-home-on-aws-with-aws-cdk-2dp0</guid>
      <description>&lt;p&gt;&lt;a href="https://foldingathome.org/about/"&gt;Folding@Home&lt;/a&gt;(aka FAH) is a distributed computing project. To quote from their website, &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;FAH is a distributed computing project for simulating protein dynamics, including the process of protein folding and the movements of proteins implicated in a variety of diseases. Folding@Home involves you donating your spare computing power by running a small client on your computer. The client then contacts the Folding@Home work assignment server, gets some work units and runs them. You can choose to have it run only when your system is idle, or have it run all the time. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;While I used to run FAH long, long back - dating back to my &lt;a href="https://sathyasays.com/about/"&gt;forum days&lt;/a&gt; - I eventually stopped due to the lack of proper computing equipment. Recent events around the COVID-19 situation and FAH's projects on it (see &lt;a href="https://foldingathome.org/2020/03/15/coronavirus-what-were-doing-and-how-you-can-help-in-simple-terms/"&gt;Coronavirus - What we're doing&lt;/a&gt; and &lt;a href="https://foldingathome.org/2020/03/30/covid-19-free-energy-calculations"&gt;COVID-19 Small Molecule Screening Simulation&lt;/a&gt; for details), along with the relatively &lt;a href="https://sathyabh.at/2020/01/19/hellforge-remastered-home-desktop/"&gt;powerful computer I built recently&lt;/a&gt;, meant that I could run FAH on my desktop computer.&lt;/p&gt;

&lt;p&gt;Now, I had some extra AWS credits that were due to expire soon, and I figured that instead of letting them go to waste, I could spin up some EC2 instances and run Folding@Home on them. I started looking at the pricing of the GPU instances - they were a bit pricier than what I could sustain. Considering this, I selected the c5n.large instance type, as I didn't need instance storage and EBS-backed disks would be handy in setting up an Auto Scaling Group.&lt;/p&gt;

&lt;p&gt;To reduce expenses further, I started looking at Spot prices, and it turned out the spot prices were about 68% cheaper than the on-demand prices. Since we don't really care about what happens when a spot termination occurs - the ASG will bring the instance count back up - I went with this option. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hjsvrz_---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://sathyasays.com/images/spot-savings.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hjsvrz_---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://sathyasays.com/images/spot-savings.png" alt="Spot Savings" width="880" height="365"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The spot pricing trend revealed that the prices had remained stable, and just to ensure the spot bids would be fulfilled, I kept the max spot price a couple of cents above the maximum price going at the time. Initially, the instances were brought up by manually launching them from the AWS Console. Since I'd long been meaning to use &lt;a href="https://aws.amazon.com/cdk/"&gt;AWS CDK&lt;/a&gt;, this was the perfect opportunity to learn and use it. &lt;/p&gt;

&lt;p&gt;The CDK code will bring up a new VPC, a couple of subnets, an ASG, and attach a security group to allow SSH into the instances. The code is not the best - there's a bunch of hard-coding of regions, AMIs and SSH key names - but pull requests to clean it up and make it more generic are more than welcome! Check out the code on my &lt;a href="https://github.com/SathyaBhat/folding-aws"&gt;GitHub Repo&lt;/a&gt;&lt;/p&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--566lAguM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/SathyaBhat"&gt;
        SathyaBhat
      &lt;/a&gt; / &lt;a href="https://github.com/SathyaBhat/folding-aws"&gt;
        folding-aws
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Bring up a complete AWS Compute stack with VPC, EC2, and other dependencies using AWS CDK. Set up a Folding @ Home stack with couple of commands
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;h1&gt;
Folding on AWS&lt;/h1&gt;
&lt;p&gt;This is a CDK project which configures a multi-instance ASG. As an example, there are two sets of configs predefined: the first config creates a two-node ASG pointed to an AMI which is pre-configured to run &lt;a href="https://foldingathome.org/" rel="nofollow"&gt;Folding@Home&lt;/a&gt;, while the second config is for a single-node ASG running a base install of Ubuntu with some extras (check the &lt;a href="https://github.com/SathyaBhat/folding-aws/packer/generic_base.json"&gt;packer/generic_base config&lt;/a&gt; for details)&lt;/p&gt;
&lt;p&gt;The AMIs are configured and built using &lt;a href="https://www.packer.io/" rel="nofollow"&gt;HashiCorp's Packer&lt;/a&gt;. These AMIs can then be updated in the config file.&lt;/p&gt;
&lt;h2&gt;
How to run&lt;/h2&gt;
&lt;h3&gt;
Preparing the AMI&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Install &lt;a href="https://www.packer.io/intro/getting-started/install.html" rel="nofollow"&gt;packer&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Generate &lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html#Using_CreateAccessKey" rel="nofollow"&gt;AWS access keys&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Set the following variables in your shell's environment: &lt;code&gt;AWS_ACCESS_KEY_ID&lt;/code&gt;, &lt;code&gt;AWS_SECRET_ACCESS_KEY&lt;/code&gt;, &lt;code&gt;AWS_DEFAULT_REGION&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Change into the &lt;code&gt;packer&lt;/code&gt; sub directory: &lt;code&gt;cd packer&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Build the Folding at Home Amazon Machine Image that will be used to create the virtual machines&lt;/p&gt;
&lt;div class="snippet-clipboard-content notranslate position-relative overflow-auto"&gt;&lt;pre class="notranslate"&gt;&lt;code&gt;packer build -var 'fah_user=your_username' -var 'fah_passkey=your_passkey' \
            -var 'fah_team=your_team_id' fah_ami.json
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;If you have the &lt;code&gt;jq&lt;/code&gt; program…&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/SathyaBhat/folding-aws"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


</description>
      <category>aws</category>
      <category>tutorial</category>
      <category>foldingathome</category>
      <category>python</category>
    </item>
  </channel>
</rss>
