<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ryan Tiffany</title>
    <description>The latest articles on DEV Community by Ryan Tiffany (@greyhoundforty).</description>
    <link>https://dev.to/greyhoundforty</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F21479%2Fd105125c-77e1-4b4f-8fa4-6ac892637cd9.png</url>
      <title>DEV Community: Ryan Tiffany</title>
      <link>https://dev.to/greyhoundforty</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/greyhoundforty"/>
    <language>en</language>
    <item>
      <title>Getting Started with IBM Cloud CLI Search</title>
      <dc:creator>Ryan Tiffany</dc:creator>
      <pubDate>Thu, 27 Jan 2022 16:29:27 +0000</pubDate>
      <link>https://dev.to/greyhoundforty/getting-started-with-ibm-cloud-cli-search-3hoc</link>
      <guid>https://dev.to/greyhoundforty/getting-started-with-ibm-cloud-cli-search-3hoc</guid>
      <description>&lt;h1&gt;
  
  
  Overview
&lt;/h1&gt;

&lt;p&gt;This guide will show you how to get started using the IBM Cloud CLI &lt;a href="https://cloud.ibm.com/docs/cli?topic=cli-ibmcloud_commands_resource#ibmcloud_resource_search"&gt;resource search&lt;/a&gt; function to locate resources on your account.  &lt;/p&gt;

&lt;p&gt;The examples are divided into two sections:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cloud = IBM Cloud resources&lt;/li&gt;
&lt;li&gt;IaaS = Classic Infrastructure (SoftLayer) resources&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;If you do not have the IBM Cloud CLI installed, refer to this &lt;a href="https://cloud.ibm.com/docs/cli?topic=cli-install-ibmcloud-cli"&gt;doc&lt;/a&gt; for installation instructions. Alternatively, you can use &lt;a href="https://cloud.ibm.com/docs/cloud-shell?topic=cloud-shell-getting-started"&gt;IBM Cloud Shell&lt;/a&gt;, which has all of the tools we need out of the box.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Search Cloud Resources (Cloud)
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;ibmcloud resource search &lt;span class="s1"&gt;'name:devcluster'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Search by resource name and return CRN  (Cloud)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ibmcloud resource search 'name:devcluster' --output json | jq -r '.items[].crn'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
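To see what that jq filter does without querying a live account, you can run it against a mocked payload. The JSON below only imitates the shape of the CLI's `--output json` response, and the CRN value is invented for illustration:

```shell
# Mocked search result; the shape mirrors 'ibmcloud resource search --output json'.
# The CRN below is a made-up example value, not a real resource.
payload='{"items":[{"name":"devcluster","crn":"crn:v1:bluemix:public:containers:us-south:a/abc123:devcluster::"}]}'

# Same filter as above: print one CRN per matched item.
printf '%s' "$payload" | jq -r '.items[].crn'
```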



&lt;h2&gt;
  
  
  Search by resource tag (Cloud)
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;ibmcloud resource search &lt;span class="s1"&gt;'tags:ryantiffany'&lt;/span&gt; &lt;span class="nt"&gt;--output&lt;/span&gt; json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Return resource names (Cloud)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;ibmcloud resource search &lt;span class="s1"&gt;'tags:ryantiffany'&lt;/span&gt; &lt;span class="nt"&gt;--output&lt;/span&gt; json | jq &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="s1"&gt;'.items[].name'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Return resource CRNs (Cloud)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;ibmcloud resource search &lt;span class="s1"&gt;'tags:ryantiffany'&lt;/span&gt; &lt;span class="nt"&gt;--output&lt;/span&gt; json | jq &lt;span class="nt"&gt;-r&lt;/span&gt;  &lt;span class="s1"&gt;'.items[].crn'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Return resource types  (Cloud)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;ibmcloud resource search &lt;span class="s1"&gt;'tags:ryantiffany'&lt;/span&gt; &lt;span class="nt"&gt;--output&lt;/span&gt; json | jq &lt;span class="nt"&gt;-r&lt;/span&gt;  &lt;span class="s1"&gt;'.items[].type'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
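A tag search often returns several resources of the same type, so piping the jq output through `sort -u` collapses the list to the distinct types. The payload here is mocked so the pipeline can be tried offline; with a live account you would feed it from the `ibmcloud resource search 'tags:ryantiffany' --output json` command above:

```shell
# Mocked payload with a duplicate type (shape only; values invented).
payload='{"items":[{"type":"cf-application"},{"type":"k8-cluster"},{"type":"cf-application"}]}'

# List each distinct resource type once.
printf '%s' "$payload" | jq -r '.items[].type' | sort -u
```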



&lt;h2&gt;
  
  
  Search classic infrastructure (IaaS)
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;ibmcloud resource search &lt;span class="nt"&gt;-p&lt;/span&gt; classic-infrastructure &lt;span class="nt"&gt;--output&lt;/span&gt; json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Search classic infrastructure by tag (IaaS)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;ibmcloud resource search &lt;span class="s2"&gt;"tagReferences.tag.name:ryantiffany"&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; classic-infrastructure &lt;span class="nt"&gt;--output&lt;/span&gt; json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Return resource types (IaaS)
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;ibmcloud resource search &lt;span class="s2"&gt;"tagReferences.tag.name:ryantiffany"&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; classic-infrastructure &lt;span class="nt"&gt;--output&lt;/span&gt; json | jq &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="s1"&gt;'.items[].resourceType'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Search by tag and filter on virtual instances (IaaS)
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;ibmcloud resource search &lt;span class="s2"&gt;"tagReferences.tag.name:ryantiffany _objectType:SoftLayer_Virtual_Guest"&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; classic-infrastructure &lt;span class="nt"&gt;--output&lt;/span&gt; json 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Search IaaS Virtual instances by Tag and return FQDNs
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;ibmcloud resource search &lt;span class="s2"&gt;"tagReferences.tag.name:ryantiffany _objectType:SoftLayer_Virtual_Guest"&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; classic-infrastructure &lt;span class="nt"&gt;--output&lt;/span&gt; json | jq &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="s1"&gt;'.items[].resource.fullyQualifiedDomainName'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Search IaaS Virtual instances by Tag and return instance IDs
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
shell
$ ibmcloud resource search "tagReferences.tag.name:&amp;lt;tag&amp;gt; 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

</description>
      <category>ibmcloud</category>
      <category>cli</category>
    </item>
    <item>
      <title>Deploy a Consul cluster to an IBM Cloud VPC using Terraform and Ansible</title>
      <dc:creator>Ryan Tiffany</dc:creator>
      <pubDate>Fri, 04 Dec 2020 14:46:38 +0000</pubDate>
      <link>https://dev.to/greyhoundforty/deploy-a-consul-cluster-to-an-ibm-cloud-vpc-using-terraform-and-ansible-556d</link>
      <guid>https://dev.to/greyhoundforty/deploy-a-consul-cluster-to-an-ibm-cloud-vpc-using-terraform-and-ansible-556d</guid>
      <description>&lt;p&gt;Lately I have been on a little bit of an &lt;strong&gt;Ansible + Terraform&lt;/strong&gt; kick so I thought I would throw together a code example for deploying a &lt;a href="https://www.consul.io/"&gt;Consul&lt;/a&gt; cluster in to an IBM Cloud VPC using these tools. &lt;/p&gt;

&lt;p&gt;Consul is a service mesh control plane with baked-in service discovery, configuration, and segmentation functionality. As more of our deployed applications and services are spread across clouds, Consul gives us a secure communication layer regardless of where our infrastructure is hosted. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;You can get $500 (USD) in credit towards VPC resources in IBM Cloud by adding the code &lt;code&gt;VPC500&lt;/code&gt; to your account. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://tfswitch.warrensbox.com/"&gt;Tfswitch&lt;/a&gt; installed &lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html"&gt;Ansible&lt;/a&gt; installed &lt;/li&gt;
&lt;li&gt;An &lt;a href="https://cloud.ibm.com/docs/account?topic=account-userapikey#manage-user-keys"&gt;IBM Cloud API Key&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;
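One way to supply the API key is through an environment variable rather than writing it into `terraform.tfvars`. `IC_API_KEY` is the variable the IBM Cloud Terraform provider reads; whether this repository's configuration picks it up this way is an assumption, so fall back to the tfvars file if the plan cannot authenticate:

```shell
# Keep the IBM Cloud API key out of tfvars by exporting it instead.
# Replace the placeholder with your actual key before running terraform.
export IC_API_KEY="replace-with-your-api-key"
```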

&lt;h2&gt;
  
  
  Use Terraform to Create Infrastructure
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.terraform.io/"&gt;Terraform&lt;/a&gt; is an &lt;code&gt;infrastructure as code&lt;/code&gt; tool that allows you to provision and manage a wide range of clouds, infrastructure, and services. Using Terraform allows us to create consistent, repeatable deployments. &lt;/p&gt;

&lt;h3&gt;
  
  
  Steps
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Clone repository:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;git clone https://github.com/cloud-design-dev/ibm-vpc-consul-terraform-ansible.git
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;ibm-vpc-consul-terraform-ansible
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol&gt;
&lt;li&gt;Copy &lt;code&gt;terraform.tfvars.template&lt;/code&gt; to &lt;code&gt;terraform.tfvars&lt;/code&gt;:
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cp &lt;/span&gt;terraform.tfvars.template terraform.tfvars
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Edit &lt;code&gt;terraform.tfvars&lt;/code&gt; to match your environment.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Run &lt;code&gt;tfswitch&lt;/code&gt; to point to the right Terraform version for this solution:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;tfswitch
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol&gt;
&lt;li&gt;Deploy all resources:
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;terraform init
&lt;span class="nv"&gt;$ &lt;/span&gt;terraform plan &lt;span class="nt"&gt;-out&lt;/span&gt; default.tfplan 
&lt;span class="nv"&gt;$ &lt;/span&gt;terraform apply default.tfplan
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;If the plan completes successfully you should see something like the following output:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;Apply &lt;span class="nb"&gt;complete&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt; Resources: 27 added, 0 changed, 0 destroyed.

The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the &lt;span class="nb"&gt;complete &lt;/span&gt;state
use the &lt;span class="sb"&gt;`&lt;/span&gt;terraform show&lt;span class="sb"&gt;`&lt;/span&gt; command.

State path: terraform.tfstate

Outputs:

bastion_instance_ip &lt;span class="o"&gt;=&lt;/span&gt; 10.242.0.36
bastion_public_ip &lt;span class="o"&gt;=&lt;/span&gt; x.y.x.y
consul_instance_ip &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;
  &lt;span class="s2"&gt;"10.242.0.4"&lt;/span&gt;,
  &lt;span class="s2"&gt;"10.242.0.6"&lt;/span&gt;,
  &lt;span class="s2"&gt;"10.242.0.5"&lt;/span&gt;,
&lt;span class="o"&gt;]&lt;/span&gt;
consul_names &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;
  &lt;span class="s2"&gt;"default-041430-eu-gb-1-consul1"&lt;/span&gt;,
  &lt;span class="s2"&gt;"default-041430-eu-gb-1-consul2"&lt;/span&gt;,
  &lt;span class="s2"&gt;"default-041430-eu-gb-1-consul3"&lt;/span&gt;,
&lt;span class="o"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
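The outputs shown above can also be read back later with `terraform output`; its `-json` flag makes them easy to feed into jq. The snippet below demonstrates the jq half against a mocked value, since the real list only exists after an apply (there you would pipe from `terraform output -json consul_instance_ip`):

```shell
# Mocked 'terraform output -json consul_instance_ip' result (values invented).
ips='["10.242.0.4","10.242.0.6","10.242.0.5"]'

# Print one IP per line, e.g. for use in a loop or an inventory file.
printf '%s' "$ips" | jq -r '.[]'
```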


&lt;p&gt;Our Terraform deployment has also generated:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An Ansible inventory file&lt;/li&gt;
&lt;li&gt;A variables file that will be used by the Ansible playbook&lt;/li&gt;
&lt;li&gt;A temporary ansible.cfg file for use with our playbook&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After the plan completes we can move on to deploying Consul using Ansible. &lt;/p&gt;
&lt;h2&gt;
  
  
  Run Ansible Playbook to Create the Consul Cluster
&lt;/h2&gt;

&lt;p&gt;Whereas Terraform is best suited for the deployment of infrastructure, when it comes to configuration management I prefer &lt;a href="https://www.ansible.com/overview/it-automation"&gt;Ansible&lt;/a&gt;. In this example Ansible will be used to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Update the base operating system&lt;/li&gt;
&lt;li&gt;Add the consul public key to the server&lt;/li&gt;
&lt;li&gt;Install the consul binary&lt;/li&gt;
&lt;li&gt;Bootstrap a 3 node cluster using Ansible templates
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;ansible 
&lt;span class="nv"&gt;$ &lt;/span&gt;ansible-playbook &lt;span class="nt"&gt;-i&lt;/span&gt; inventory playbooks/consul-cluster.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;If you would like a little more insight into what Ansible is doing behind the scenes, add &lt;code&gt;-vv&lt;/code&gt; to your ansible-playbook command:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;ansible-playbook &lt;span class="nt"&gt;-vv&lt;/span&gt; &lt;span class="nt"&gt;-i&lt;/span&gt; inventory playbooks/consul-cluster.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  Verify that the cluster is running
&lt;/h2&gt;

&lt;p&gt;Since we bound the Consul agent to the main private IP of the VPC instances, we first need to set the &lt;code&gt;CONSUL_HTTP_ADDR&lt;/code&gt; environment variable. Take one of the Consul instance IPs and run the following command:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;ansible &lt;span class="nt"&gt;-m&lt;/span&gt; shell &lt;span class="nt"&gt;-b&lt;/span&gt; &lt;span class="nt"&gt;-a&lt;/span&gt; &lt;span class="s2"&gt;"CONSUL_HTTP_ADDR=&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;http://CONSUL_INSTANCE_IP:8500&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt; consul members"&lt;/span&gt; CONSUL_INSTANCE_NAME &lt;span class="nt"&gt;-i&lt;/span&gt; inventory
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  Example output
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ansible &lt;span class="nt"&gt;-m&lt;/span&gt; shell &lt;span class="nt"&gt;-b&lt;/span&gt; &lt;span class="nt"&gt;-a&lt;/span&gt; &lt;span class="s2"&gt;"CONSUL_HTTP_ADDR=&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;http://10.241.0.36:8500&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt; consul members"&lt;/span&gt; dev-011534-us-east-1-consul1 &lt;span class="nt"&gt;-i&lt;/span&gt; inventory
dev-011534-us-east-1-consul1 | CHANGED | &lt;span class="nv"&gt;rc&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0 &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt;

Node                          Address           Status  Type    Build  Protocol  DC       Segment
dev-011534-us-east-1-consul1  10.241.0.36:8301  alive   server  1.9.0  2         us-east  &amp;lt;all&amp;gt;
dev-011534-us-east-1-consul2  10.241.0.38:8301  alive   server  1.9.0  2         us-east  &amp;lt;all&amp;gt;
dev-011534-us-east-1-consul3  10.241.0.37:8301  alive   server  1.9.0  2         us-east  &amp;lt;all&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  Asciinema Recording of a Test Run
&lt;/h3&gt;


&lt;div class="ltag_asciinema"&gt;
  
&lt;/div&gt;



</description>
      <category>ibmcloud</category>
      <category>consul</category>
      <category>terraform</category>
      <category>ansible</category>
    </item>
    <item>
      <title>Site to Site IPsec Tunnel to IBM Cloud VPC</title>
      <dc:creator>Ryan Tiffany</dc:creator>
      <pubDate>Wed, 07 Oct 2020 14:27:15 +0000</pubDate>
      <link>https://dev.to/greyhoundforty/site-to-site-ipsec-tunnel-to-ibm-cloud-vpc-2ld4</link>
      <guid>https://dev.to/greyhoundforty/site-to-site-ipsec-tunnel-to-ibm-cloud-vpc-2ld4</guid>
      <description>&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;This article will walk you through the process of connecting an on-premises IPsec tunnel to the IBM Cloud VPC VPN-as-a-service offering. This will allow you to communicate from your local machine to the private IP addresses assigned to your VPC compute instances. This guide walks through the following steps: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Provisioning an instance of VPC VPN as a Service&lt;/li&gt;
&lt;li&gt;Adding a peer connection to the VPC VPN to connect to your local network&lt;/li&gt;
&lt;li&gt;Installing strongSwan on a local machine/VM&lt;/li&gt;
&lt;li&gt;Configuring the local IPsec peer&lt;/li&gt;
&lt;li&gt;Bringing up the local IPsec tunnel and pinging VPC resources&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Provision and configure the VPC VPNaaS
&lt;/h3&gt;

&lt;p&gt;We’ll start by deploying an instance of VPNaaS. From the main &lt;a href="https://cloud.ibm.com/vpc-ext/overview" rel="noopener noreferrer"&gt;VPC landing page&lt;/a&gt; click on VPN Gateways on the left hand navigation bar:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdsc.cloud%2Fquickshare%2Fvpc-vpn-gateway.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdsc.cloud%2Fquickshare%2Fvpc-vpn-gateway.png" alt="Go to VPN Overview page"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Make sure you select the region where your VPC resides and then click Create VPN. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdsc.cloud%2Fquickshare%2Fcreate-vpn-step1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdsc.cloud%2Fquickshare%2Fcreate-vpn-step1.png" alt="Create VPN Step 1"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At the top of the screen give the VPN a name, select the VPC where you would like the VPN deployed, and then select the subnet to use with the VPN. &lt;strong&gt;Note&lt;/strong&gt;: Only the resources in the same zone as the subnet you choose can connect through this VPN gateway.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdsc.cloud%2Fquickshare%2Fvpn-gateway-name.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdsc.cloud%2Fquickshare%2Fvpn-gateway-name.png" alt="Create VPN Step 2"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With the subnet selected scroll down and make sure the &lt;strong&gt;New VPN Connection for VPC&lt;/strong&gt; option is enabled. Give the new connection a name and provide the local and peer subnets along with the pre-shared key. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: If you need to generate a pre-shared key, launch Cloud Shell by clicking the terminal icon in the upper right of the IBM Cloud portal. &lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdsc.cloud%2Fquickshare%2FShared-Image-2020-09-23-09-26-23.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdsc.cloud%2Fquickshare%2FShared-Image-2020-09-23-09-26-23.png" alt="Launch Cloud Shell"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once your Cloud Shell session starts, run the following command to generate a 32-character pre-shared key:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;tr&lt;/span&gt; &lt;span class="nt"&gt;-dc&lt;/span&gt; &lt;span class="s2"&gt;"[:alpha:][:alnum:]"&lt;/span&gt; &amp;lt; /dev/urandom | &lt;span class="nb"&gt;head&lt;/span&gt; &lt;span class="nt"&gt;-c&lt;/span&gt; 32
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
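If you want to sanity-check the result, capture the key in a variable and confirm it is exactly 32 characters. This variant pipes a fixed chunk of `/dev/urandom` through `tr` instead of redirecting, but produces the same kind of key:

```shell
# Generate a 32-character alphanumeric pre-shared key.
# ([:alnum:] already covers [:alpha:], so one character class is enough.)
psk=$(head -c 4096 /dev/urandom | tr -dc '[:alnum:]' | head -c 32)

printf 'psk:    %s\n' "$psk"
printf 'length: %s\n' "${#psk}"
```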



&lt;p&gt;In this example my VPC utilizes the 10.240.0.0/24 subnet and my local network uses 172.16.0.0/24 addresses. The &lt;em&gt;Peer gateway address&lt;/em&gt; is the public IP of your local network. If you are unsure what this is, pull up a browser and head to &lt;a href="https://www.ipchicken.com/" rel="noopener noreferrer"&gt;IP Chicken&lt;/a&gt;. &lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdsc.cloud%2Fquickshare%2Fadd-vpn-connection.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdsc.cloud%2Fquickshare%2Fadd-vpn-connection.png" alt="Add connection information"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With all the details added click &lt;em&gt;Create VPN Gateway&lt;/em&gt; in the right hand navigation bar to deploy the VPN.  We'll give the new VPN a few moments to deploy and then copy down the Peer address that we'll need for the local tunnel configuration. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdsc.cloud%2Fquickshare%2Fvpc-vpn-peer-address.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdsc.cloud%2Fquickshare%2Fvpc-vpn-peer-address.png" alt="Copy down VPN Peer Address"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Install strongSwan on local machine
&lt;/h3&gt;

&lt;p&gt;In my example I have a local Ubuntu 18 VM that I will be using as the local IPsec peer. The first step is to install &lt;a href="https://www.strongswan.org/" rel="noopener noreferrer"&gt;strongSwan&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;apt-get update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; apt-get &lt;span class="nb"&gt;install &lt;/span&gt;strongswan &lt;span class="nt"&gt;-y&lt;/span&gt; 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With strongSwan installed we need to add IP forwarding to our kernel parameters:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo cat&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; /etc/sysctl.conf &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt; &lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;
net.ipv4.ip_forward = 1 
net.ipv4.conf.all.accept_redirects = 0 
net.ipv4.conf.all.send_redirects = 0
&lt;/span&gt;&lt;span class="no"&gt;EOF

&lt;/span&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;sysctl &lt;span class="nt"&gt;-p&lt;/span&gt; /etc/sysctl.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Configure local ipsec peer
&lt;/h3&gt;

&lt;p&gt;Next we'll update the &lt;code&gt;/etc/ipsec.secrets&lt;/code&gt; file. The syntax for the file is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;LOCAL_PEER_IP VPC_VPN_PEER_IP : PSK "Pre-shared key"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For example if your local public IP was 192.168.20.2, the VPC VPN Peer address was 192.168.30.5, and the pre-shared key was &lt;code&gt;XtemrMYFfmmMCpxgdCwSYoRBKdjQ1ndb&lt;/code&gt; the file would look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;192.168.20.2 192.168.30.5 : PSK "XtemrMYFfmmMCpxgdCwSYoRBKdjQ1ndb"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With the secrets file updated we'll now move on to updating the strongSwan configuration file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# ipsec.conf - strongSwan IPsec configuration file
# basic configuration
config setup
    # strictcrlpolicy=yes
    # uniqueids = no
        charondebug="all"
        uniqueids=yes
        strictcrlpolicy=no

# connection to us-east-vpc
conn home-to-vpc
  authby=secret
  left=%defaultroute
  leftid=&amp;lt;Local Server Public IP&amp;gt;
  leftsubnet=&amp;lt;Local Internal Subnet range&amp;gt;
  right=&amp;lt;VPC VPN Endpoint IP&amp;gt;
  rightsubnet=&amp;lt;VPC Subnet range&amp;gt;,166.8.0.0/14,161.26.0.0/16
  ike=aes256-sha2_256-modp1024!
  esp=aes256-sha2_256!
  keyingtries=0
  ikelifetime=1h
  lifetime=8h
  dpddelay=30
  dpdtimeout=120
  dpdaction=restart
  auto=start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We add in the ranges &lt;code&gt;166.8.0.0/14&lt;/code&gt; and &lt;code&gt;161.26.0.0/16&lt;/code&gt; so that we can communicate with IBM Cloud services over their private IP address space.&lt;/p&gt;

&lt;p&gt;With the ipsec configuration updated, add an iptables rule for post-routing. Again, for my tunnel the VPC subnet is 10.240.0.0/24 and my local internal subnet is 172.16.0.0/24, so adjust the following command to match your networks.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ iptables -t nat -A POSTROUTING -s 10.240.0.0/24 -d 172.16.0.0/24 -J MASQUERADE
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now restart the ipsec service and check the status of the tunnel:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sudo ipsec restart
Stopping strongSwan IPsec...
Starting strongSwan 5.6.2 IPsec [starter]...

$ sudo ipsec status
Security Associations (1 up, 0 connecting):
 home-to-vpc[1]: ESTABLISHED 16 seconds ago, 10.0.0.67[x.x.x.x]...52.y.y.y[52.y.y.y]
 home-to-vpc{1}:  INSTALLED, TUNNEL, reqid 1, ESP in UDP SPIs: c2225f47_i ccc3d826_o
 home-to-vpc{1}:   10.0.0.0/18 === 161.26.0.0/16 166.8.0.0/14 192.168.0.0/18
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Test connectivity to VPC instance
&lt;/h3&gt;

&lt;p&gt;In my VPC I have an instance with a private IP of 10.240.0.6:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ibmcloud is instances --output json | jq -r '.[] | select(.vpc.name=="us-south-vpc-rt") | .network_interfaces[].primary_ipv4_address'
10.240.0.6

$ ping -c2 -q 10.240.0.6
PING 10.240.0.6 (10.240.0.6) 56(84) bytes of data.

--- 10.240.0.6 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 40.836/41.497/42.159/0.692 ms
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>ibmcloud</category>
    </item>
    <item>
      <title>Build container images with Code Engine</title>
      <dc:creator>Ryan Tiffany</dc:creator>
      <pubDate>Wed, 23 Sep 2020 15:31:56 +0000</pubDate>
      <link>https://dev.to/greyhoundforty/build-container-images-with-code-engine-1h57</link>
      <guid>https://dev.to/greyhoundforty/build-container-images-with-code-engine-1h57</guid>
      <description>&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;This guide will show you how to use the experimental &lt;a href="https://cloud.ibm.com/docs/codeengine?topic=codeengine-about"&gt;IBM Code Engine&lt;/a&gt; to build a container image from a source control repository. Behind the scenes, Code Engine uses &lt;a href="https://tekton.dev/"&gt;Tekton&lt;/a&gt; pipelines to pull our source code from a GitHub repository and then create a container image using the supplied Dockerfile. After the build is complete, Code Engine pushes the new container image into the &lt;a href="https://cloud.ibm.com/docs/Registry?topic=Registry-registry_overview"&gt;IBM Cloud Container Registry&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt;: Code Engine is currently an experimental offering and all resources are deleted every 7 days.&lt;/p&gt;

&lt;h2&gt;
  
  
  Start a session in IBM Cloud Shell
&lt;/h2&gt;

&lt;p&gt;In the IBM Cloud console, click the IBM Cloud Shell icon &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mWteHZbX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dsc.cloud/quickshare/Shared-Image-2020-09-23-09-26-23.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mWteHZbX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dsc.cloud/quickshare/Shared-Image-2020-09-23-09-26-23.png" alt="Cloud Shell Icon"&gt;&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;A session starts and automatically logs you in through the IBM Cloud CLI. &lt;/p&gt;

&lt;h2&gt;
  
  
  Target Resource Group
&lt;/h2&gt;

&lt;p&gt;In order to interact with the Code Engine CLI, we first need to target the resource group where the Code Engine project will be created:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;ibmcloud target &lt;span class="nt"&gt;-g&lt;/span&gt; &amp;lt;Your Resource Group&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Create a Code Engine Project
&lt;/h2&gt;

&lt;p&gt;The first step is to create a Code Engine project. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Keep in mind during the Beta phase you are limited to one Code Engine project per region. If you already have a Code Engine project you can simply target that project using the command &lt;code&gt;ibmcloud ce project target -n &amp;lt;name of project&amp;gt;&lt;/code&gt;  &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;We'll specify the &lt;code&gt;--target&lt;/code&gt; option to have the Code Engine CLI automatically target our new project:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ibmcloud ce project create -n &amp;lt;Project Name&amp;gt; --target
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The project creation can take a few minutes, but when it completes you should see something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;ibmcloud ce project create &lt;span class="nt"&gt;-n&lt;/span&gt; ce-demo-project &lt;span class="nt"&gt;--target&lt;/span&gt;
Creating project &lt;span class="s1"&gt;'ce-demo-project'&lt;/span&gt;...
Waiting &lt;span class="k"&gt;for &lt;/span&gt;project &lt;span class="s1"&gt;'ce-demo-project'&lt;/span&gt; to be &lt;span class="k"&gt;in &lt;/span&gt;ready state...
Now selecting project &lt;span class="s1"&gt;'ce-demo-project'&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;
OK
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Create an API Key for Code Engine for Registry Access
&lt;/h2&gt;

&lt;p&gt;As part of our build process we will pull a public GitHub repo and then push the built container into &lt;a href="https://cloud.ibm.com/docs/Registry?topic=Registry-registry_overview"&gt;IBM Cloud Container Registry&lt;/a&gt;. In order for our Code Engine project to push to the registry, we'll need to create an API key.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;ibmcloud iam api-key-create &amp;lt;Project Name&amp;gt;-cliapikey &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s2"&gt;"API Key for talking to Image registry from Code Engine"&lt;/span&gt; &lt;span class="nt"&gt;--file&lt;/span&gt; key_file
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
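&lt;p&gt;The &lt;code&gt;--file&lt;/code&gt; flag writes the key material as JSON. As a runnable sketch of the field the next step extracts, here is the same &lt;code&gt;jq&lt;/code&gt; pattern against a mocked response (the &lt;code&gt;apikey&lt;/code&gt; value is a placeholder and the JSON is trimmed to the relevant field):&lt;/p&gt;

```shell
# Mocked response in the shape `ibmcloud iam api-key-create --file key_file` writes;
# the apikey value is a placeholder, not a real key.
key_json='{"name": "demo-cliapikey", "apikey": "example-api-key-value"}'

# Pull out the apikey field, as the registry step below does with jq
CR_API_KEY=$(printf '%s' "$key_json" | jq -r '.apikey')
echo "$CR_API_KEY"
```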



&lt;h2&gt;
  
  
  Create Code Engine Registry Secret
&lt;/h2&gt;

&lt;p&gt;With our API key created, we will now add the IBM Cloud Container Registry to Code Engine. When using the IBM Container Registry the username will always be &lt;code&gt;iamapikey&lt;/code&gt;. If you would like to push to an alternate IBM Container Registry &lt;a href="https://cloud.ibm.com/docs/Registry?topic=Registry-registry_overview#registry_regions_local"&gt;endpoint&lt;/a&gt; update the &lt;code&gt;--server&lt;/code&gt; flag accordingly.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;CR_API_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sb"&gt;`&lt;/span&gt;jq &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="s1"&gt;'.apikey'&lt;/span&gt; &amp;lt; key_file&lt;span class="sb"&gt;`&lt;/span&gt;

&lt;span class="nv"&gt;$ &lt;/span&gt;ibmcloud ce registry create &lt;span class="nt"&gt;--name&lt;/span&gt; ibmcr &lt;span class="nt"&gt;--server&lt;/span&gt; us.icr.io &lt;span class="nt"&gt;--username&lt;/span&gt; iamapikey &lt;span class="nt"&gt;--password&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CR_API_KEY&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can view all of your registry secrets by running the command: &lt;code&gt;ibmcloud ce registry list&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;ibmcloud ce registry list 
Project &lt;span class="s1"&gt;'demo-rt'&lt;/span&gt; and all its contents will be automatically deleted 7 days from now.
Listing image registry access secrets...
OK

Name   Age  
ibmcr  11s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Create Code Engine Build Definition
&lt;/h2&gt;

&lt;p&gt;With the registry access added we can now create our build definition. If you do not already have a Container Registry namespace to push images to, please follow this guide to create one.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;ibmcloud ce build create &lt;span class="nt"&gt;--name&lt;/span&gt; go-app-example-build &lt;span class="nt"&gt;--source&lt;/span&gt; https://github.com/greyhoundforty/ce-build-example-go &lt;span class="nt"&gt;--strategy&lt;/span&gt; kaniko &lt;span class="nt"&gt;--size&lt;/span&gt; medium &lt;span class="nt"&gt;--image&lt;/span&gt; us.icr.io/&amp;lt;namespace&amp;gt;/go-app-example &lt;span class="nt"&gt;--registry-secret&lt;/span&gt; ibmcr
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;The breakdown of the command:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;name: The name of the build definition &lt;/li&gt;
&lt;li&gt;source: The Source control repository where our code lives&lt;/li&gt;
&lt;li&gt;strategy:  The &lt;a href="https://cloud.ibm.com/docs/codeengine?topic=codeengine-plan-build#build-strategy"&gt;build strategy&lt;/a&gt;  we will use to build the image. In this case since our repository has a Dockerfile we will use &lt;code&gt;kaniko&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;size: The size of the build determines how many CPU cores and how much memory and disk space are assigned to the build&lt;/li&gt;
&lt;li&gt;image: The Container Registry namespace and image name to push our built container image&lt;/li&gt;
&lt;li&gt;registry-secret: The Container Registry secret that allows Code Engine to push and pull images&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Submit the Build Job
&lt;/h2&gt;

&lt;p&gt;Before the build run is submitted (the actual process of building the container image), we’ll want to target the underlying Kubernetes cluster that powers Code Engine. This allows us to see the pods that are spun up for the build and to track its progress. To have &lt;code&gt;kubectl&lt;/code&gt; within Cloud Shell target our cluster, run the following command: &lt;code&gt;ibmcloud ce project target -n &amp;lt;Name of Project&amp;gt; -k&lt;/code&gt;  &lt;/p&gt;

&lt;p&gt;You should see output similar to this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;ibmcloud ce project target &lt;span class="nt"&gt;-n&lt;/span&gt; demo-rt &lt;span class="nt"&gt;-k&lt;/span&gt; 
Selecting project &lt;span class="s1"&gt;'demo-rt'&lt;/span&gt;...
Added context &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="s1"&gt;'demo-rt'&lt;/span&gt; to the current kubeconfig file.
OK
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With &lt;code&gt;kubectl&lt;/code&gt; properly configured we can now launch the actual build of our container image using the &lt;code&gt;buildrun&lt;/code&gt; command. We specify the build definition we created previously with the &lt;code&gt;--build&lt;/code&gt; flag:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;ibmcloud ce buildrun submit &lt;span class="nt"&gt;--name&lt;/span&gt; go-app-buildrun-v1 &lt;span class="nt"&gt;--build&lt;/span&gt; go-app-example-build
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can check the status of the build run using the command &lt;code&gt;ibmcloud ce buildrun get --name &amp;lt;Name of build run&amp;gt;&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;ibmcloud ce buildrun get &lt;span class="nt"&gt;--name&lt;/span&gt; go-app-buildrun-v1
Project &lt;span class="s1"&gt;'demo-rt'&lt;/span&gt; and all its contents will be automatically deleted 7 days from now.
Getting build run &lt;span class="s1"&gt;'go-app-buildrun-v1'&lt;/span&gt;...
OK

Name:          go-app-buildrun-v1
ID:            d378e865-ecf4-4e26-932d-acb437eef0ef
Project Name:  demo-rt
Project ID:    ab07a001-9a77-4fd8-82e8-d4f8395ad735
Age:           36s
Created:       2020-09-23 09:13:33 &lt;span class="nt"&gt;-0500&lt;/span&gt; CDT
Status:
  Reason:      Running
  Registered:  Unknown

Instances:
  Name                                Running  Status   Restarts  Age
  go-app-buildrun-v1-xpqfq-pod-hqchd  2/4      Running  0         34s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can also check on the status of the Kubernetes pods by running &lt;code&gt;kubectl get pods&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pods
NAME                                 READY   STATUS      RESTARTS   AGE
go-app-buildrun-v1-xpqfq-pod-hqchd   2/4     Running     0          41s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the build completes successfully the pods will show &lt;code&gt;Completed&lt;/code&gt; and the build run will show &lt;code&gt;Succeeded&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pods
NAME                                 READY   STATUS      RESTARTS   AGE
go-app-buildrun-v1-xpqfq-pod-hqchd   0/4     Completed   0          4m10s

&lt;span class="nv"&gt;$ &lt;/span&gt;ibmcloud ce buildrun get &lt;span class="nt"&gt;--name&lt;/span&gt; go-app-buildrun-v1
Project &lt;span class="s1"&gt;'demo-rt'&lt;/span&gt; and all its contents will be automatically deleted 7 days from now.
Getting build run &lt;span class="s1"&gt;'go-app-buildrun-v1'&lt;/span&gt;...
OK

Name:          go-app-buildrun-v1
ID:            d378e865-ecf4-4e26-932d-acb437eef0ef
Project Name:  demo-rt
Project ID:    ab07a001-9a77-4fd8-82e8-d4f8395ad735
Age:           4m26s
Created:       2020-09-23 09:13:33 &lt;span class="nt"&gt;-0500&lt;/span&gt; CDT
Status:
  Reason:      Succeeded
  Registered:  True

Instances:
  Name                                Running  Status     Restarts  Age
  go-app-buildrun-v1-xpqfq-pod-hqchd  0/4      Succeeded  0         4m24s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>ibmcloud</category>
    </item>
    <item>
      <title>Use IBM Cloud Code Engine to Sync Object Storage Buckets Between Accounts</title>
      <dc:creator>Ryan Tiffany</dc:creator>
      <pubDate>Thu, 20 Aug 2020 18:44:23 +0000</pubDate>
      <link>https://dev.to/greyhoundforty/use-ibm-cloud-code-engine-to-sync-object-storage-buckets-between-accounts-ned</link>
      <guid>https://dev.to/greyhoundforty/use-ibm-cloud-code-engine-to-sync-object-storage-buckets-between-accounts-ned</guid>
      <description>&lt;h1&gt;
  
  
  Overview
&lt;/h1&gt;

&lt;p&gt;In this guide I will show you how to sync IBM Cloud Object Storage (ICOS) bucket objects between accounts using &lt;a href="https://cloud.ibm.com/docs/codeengine"&gt;Code Engine&lt;/a&gt;. Code Engine provides a platform to unify the deployment of all of your container-based applications on a Kubernetes-based infrastructure. The Code Engine experience is designed so that you can focus on writing code without the need for you to learn, or even know about, Kubernetes.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Code Engine is currently an experimental offering and all resources are deleted every 7 days.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Steps
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Preparing Accounts&lt;/li&gt;
&lt;li&gt;
Source Account

&lt;ul&gt;
&lt;li&gt;Create Service ID&lt;/li&gt;
&lt;li&gt;Create Reader access policy for newly created service ID&lt;/li&gt;
&lt;li&gt;Generate HMAC credentials tied to our service ID&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;
Destination Account

&lt;ul&gt;
&lt;li&gt;Create Service ID&lt;/li&gt;
&lt;li&gt;Create Writer access policy for newly created service ID&lt;/li&gt;
&lt;li&gt;Generate HMAC credentials tied to our service ID&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Create Code Engine Project via Cloud Shell&lt;/li&gt;
&lt;li&gt;Create Code Engine Secrets&lt;/li&gt;
&lt;li&gt;Create Code Engine Project Job definition with environment variables&lt;/li&gt;
&lt;li&gt;Submit Code Engine Job&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Preparing Accounts
&lt;/h2&gt;

&lt;p&gt;We will be using &lt;a href="https://cloud.ibm.com/shell"&gt;Cloud Shell&lt;/a&gt; to generate Service IDs and Object Storage credentials for both the source and destination accounts. &lt;/p&gt;

&lt;h3&gt;
  
  
  Source Account
&lt;/h3&gt;

&lt;p&gt;We will create a service ID on the source account. A service ID identifies a service or application similar to how a user ID identifies a user. We can assign specific access policies to the service ID that restrict permissions for using specific services: in this case it gets read-only access to an IBM Cloud Object Storage bucket. &lt;/p&gt;

&lt;h4&gt;
  
  
  Create Service ID
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;ibmcloud iam service-id-create &amp;lt;name-of-your-service-id&amp;gt; &lt;span class="nt"&gt;--description&lt;/span&gt; &lt;span class="s2"&gt;"Service ID for read-only access to bucket"&lt;/span&gt; &lt;span class="nt"&gt;--output&lt;/span&gt; json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NDK8KRPw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dsc.cloud/quickshare/source-service-id.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NDK8KRPw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dsc.cloud/quickshare/source-service-id.png" alt="Service ID Creation"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Create Reader access policy for newly created service ID
&lt;/h4&gt;

&lt;p&gt;Now we will limit the scope of this service ID to have read-only access to our source Object Storage bucket.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;ibmcloud iam service-policy-create &amp;lt;Service ID&amp;gt; &lt;span class="nt"&gt;--roles&lt;/span&gt; Reader &lt;span class="nt"&gt;--service-name&lt;/span&gt; cloud-object-storage &lt;span class="nt"&gt;--service-instance&lt;/span&gt; &amp;lt;Service Instance GUID&amp;gt; &lt;span class="nt"&gt;--resource-type&lt;/span&gt; bucket &lt;span class="nt"&gt;--resource&lt;/span&gt; &amp;lt;bucket-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Service Instance GUID&lt;/em&gt;  - This is the GUID of the Cloud Object Storage instance. You can retrieve this with the command: &lt;code&gt;ibmcloud resource service-instance &amp;lt;name of icos instance&amp;gt;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3ZbYz3EZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dsc.cloud/quickshare/create-source-service-policy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3ZbYz3EZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dsc.cloud/quickshare/create-source-service-policy.png" alt="Expected Output Example"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Generate HMAC credentials tied to our service ID
&lt;/h4&gt;

&lt;p&gt;In order for the Minio client to talk to each Object Storage instance it will need HMAC credentials (Access Key and Secret Key in S3 parlance).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;ibmcloud resource service-key-create source-icos-service-creds Reader &lt;span class="nt"&gt;--instance-id&lt;/span&gt; &amp;lt;Service Instance GUID&amp;gt; &lt;span class="nt"&gt;--service-id&lt;/span&gt; &amp;lt;Service ID&amp;gt; &lt;span class="nt"&gt;--parameters&lt;/span&gt; &lt;span class="s1"&gt;'{"HMAC":true}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save the &lt;strong&gt;access_key_id&lt;/strong&gt; and &lt;strong&gt;secret_access_key&lt;/strong&gt; as we will be using these in our Code Engine project. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7ZC1XBHP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dsc.cloud/quickshare/source-hmac-credentials.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7ZC1XBHP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dsc.cloud/quickshare/source-hmac-credentials.png" alt="Create HMAC Credentials"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  Destination Account
&lt;/h3&gt;

&lt;p&gt;We will create a service ID on the destination account. A service ID identifies a service or application similar to how a user ID identifies a user. We can assign specific access policies to the service ID that restrict permissions for using specific services: in this case it gets write access to an IBM Cloud Object Storage bucket.  &lt;/p&gt;

&lt;h4&gt;
  
  
  Create Service ID
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;ibmcloud  iam service-id-create &amp;lt;name-of-your-service-id&amp;gt; &lt;span class="nt"&gt;--description&lt;/span&gt; &lt;span class="s2"&gt;"Service ID for write access to bucket"&lt;/span&gt; &lt;span class="nt"&gt;--output&lt;/span&gt; json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6xvoERpz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dsc.cloud/quickshare/destination-service-id.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6xvoERpz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dsc.cloud/quickshare/destination-service-id.png" alt="Expected Output Example"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Create Writer access policy for newly created service ID
&lt;/h4&gt;

&lt;p&gt;Now we will limit the scope of this service ID to have write access to our destination Object Storage bucket.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;ibmcloud iam service-policy-create &amp;lt;Service ID&amp;gt; &lt;span class="nt"&gt;--roles&lt;/span&gt; Writer &lt;span class="nt"&gt;--service-name&lt;/span&gt; cloud-object-storage &lt;span class="nt"&gt;--service-instance&lt;/span&gt; &amp;lt;Service Instance GUID&amp;gt; &lt;span class="nt"&gt;--resource-type&lt;/span&gt; bucket &lt;span class="nt"&gt;--resource&lt;/span&gt; &amp;lt;bucket-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Service Instance GUID&lt;/em&gt;  - This is the GUID of the Cloud Object Storage instance. You can retrieve this with the command: &lt;code&gt;ibmcloud resource service-instance &amp;lt;name of icos instance&amp;gt;&lt;/code&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Generate HMAC credentials tied to our service ID
&lt;/h4&gt;

&lt;p&gt;We'll follow the same procedure as last time to generate the HMAC credentials, but this time on the destination account.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;ibmcloud resource service-key-create destination-icos-service-creds Writer &lt;span class="nt"&gt;--instance-id&lt;/span&gt; &amp;lt;Service Instance GUID&amp;gt; &lt;span class="nt"&gt;--service-id&lt;/span&gt; &amp;lt;Service ID&amp;gt; &lt;span class="nt"&gt;--parameters&lt;/span&gt; &lt;span class="s1"&gt;'{"HMAC":true}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save the &lt;strong&gt;access_key_id&lt;/strong&gt; and &lt;strong&gt;secret_access_key&lt;/strong&gt; as we will be using these with our Code Engine project. &lt;/p&gt;

&lt;h2&gt;
  
  
  Create Code Engine Project via Cloud Shell
&lt;/h2&gt;

&lt;p&gt;In order to create our Code Engine project we need to make sure that our Cloud Shell session is targeting the correct resource group. You can do this by using the &lt;code&gt;target -g&lt;/code&gt; option with the IBM Cloud CLI.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;ibmcloud target &lt;span class="nt"&gt;-g&lt;/span&gt; &amp;lt;Resource Group&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With the correct Resource Group set, we can now create our Code Engine project. We add the &lt;code&gt;--target&lt;/code&gt; flag to ensure that future Code Engine commands are targeting the correct project.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ ibmcloud ce project create -n &amp;lt;project_name&amp;gt; --target
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9ogU09zb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dsc.cloud/quickshare/ce-create-project.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9ogU09zb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dsc.cloud/quickshare/ce-create-project.png" alt="Create Code Engine Project"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Create Code Engine Secrets
&lt;/h2&gt;

&lt;p&gt;In order for our Minio-powered container to sync objects between the accounts, it needs access to the Access and Secret keys we created earlier. We will use the &lt;code&gt;secret create&lt;/code&gt; option to store all of the values in a single secret that we can then reference in our job definition.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;SOURCE_ACCESS_KEY: Access Key generated on Source account&lt;/li&gt;
&lt;li&gt;SOURCE_SECRET_KEY: Secret Key generated on Source account&lt;/li&gt;
&lt;li&gt;SOURCE_REGION: Cloud Object Storage endpoint for the Source bucket&lt;/li&gt;
&lt;li&gt;SOURCE_BUCKET: Name of bucket on Source account&lt;/li&gt;
&lt;li&gt;DESTINATION_ACCESS_KEY: Access Key generated on Destination account&lt;/li&gt;
&lt;li&gt;DESTINATION_SECRET_KEY: Secret Key generated on Destination account&lt;/li&gt;
&lt;li&gt;DESTINATION_REGION: Cloud Object Storage endpoint for the Destination bucket&lt;/li&gt;
&lt;li&gt;DESTINATION_BUCKET: Name of bucket on Destination account
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;ibmcloud ce secret create &lt;span class="nt"&gt;--name&lt;/span&gt; ce-sync-secret &lt;span class="nt"&gt;--from-literal&lt;/span&gt; &lt;span class="nv"&gt;SOURCE_ACCESS_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;VALUE &lt;span class="nt"&gt;--from-literal&lt;/span&gt; &lt;span class="nv"&gt;SOURCE_SECRET_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;VALUE &lt;span class="nt"&gt;--from-literal&lt;/span&gt; &lt;span class="nv"&gt;SOURCE_REGION&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;VALUE &lt;span class="nt"&gt;--from-literal&lt;/span&gt; &lt;span class="nv"&gt;SOURCE_BUCKET&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;VALUE &lt;span class="nt"&gt;--from-literal&lt;/span&gt; &lt;span class="nv"&gt;DESTINATION_ACCESS_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;VALUE &lt;span class="nt"&gt;--from-literal&lt;/span&gt; &lt;span class="nv"&gt;DESTINATION_SECRET_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;VALUE &lt;span class="nt"&gt;--from-literal&lt;/span&gt; &lt;span class="nv"&gt;DESTINATION_REGION&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;VALUE &lt;span class="nt"&gt;--from-literal&lt;/span&gt; &lt;span class="nv"&gt;DESTINATION_BUCKET&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;VALUE
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dS9FDVmd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dsc.cloud/quickshare/ce-create-secret.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dS9FDVmd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dsc.cloud/quickshare/ce-create-secret.png" alt="Create Code Engine Secret"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Create Code Engine Project Job definition with environment variables
&lt;/h2&gt;

&lt;p&gt;Now that our project has been created we need to create our Job definition. In Code Engine terms a job is a stand-alone executable for batch jobs. Unlike applications, which react to incoming HTTP requests, jobs are meant to be used for running container images that contain an executable that is designed to run one time and then exit.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;ibmcloud ce jobdef create &lt;span class="nt"&gt;--name&lt;/span&gt; JOBDEF_NAME &lt;span class="nt"&gt;--image&lt;/span&gt; IMAGE_REF &lt;span class="nt"&gt;--env-from-secret&lt;/span&gt; SECRET_NAME 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Y4abYiLv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dsc.cloud/quickshare/ce-create-jobdef.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Y4abYiLv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dsc.cloud/quickshare/ce-create-jobdef.png" alt="Create Job Definition"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can view the definition of the job using the command &lt;code&gt;ibmcloud ce jobdef get -n &amp;lt;name of job definition&amp;gt;&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;ibmcloud ce jobdef get &lt;span class="nt"&gt;-n&lt;/span&gt; ce-mc-sync-jobdef
Project &lt;span class="s1"&gt;'ce-minio-sync'&lt;/span&gt; and all its contents will be automatically deleted 7 days from now.
Getting job definition &lt;span class="s1"&gt;'ce-mc-sync-jobdef'&lt;/span&gt;...
Name:        ce-mc-sync-jobdef  
Project ID:  1d7514d2-ce89  
Metadata:    
  Creation Timestamp:  2020-08-17 14:51:00 +0000 UTC  
  Generation:          1  
  Resource Version:    223595401  
  Self Link:           /apis/codeengine.cloud.ibm.com/v1alpha1/namespaces/1d7514d2-ce89/jobdefinitions/ce-mc-sync-jobdef  
  UID:                 0fb11f71-a912-44f9-88e3-1d1612f8e8ab  
Spec:        
  Containers:  
    Image:  greyhoundforty/icos-ce-sync:1  
    Name:   ce-mc-sync-jobdef  
    Commands:  
    Arguments:  
    Env:    
      Name:  SOURCE_BUCKET  
      Value From Secret Key Ref:  
        Key:   SOURCE_BUCKET  
        Name:  ce-sync-secret  
    Env:    
      Name:  SOURCE_REGION  
      Value From Secret Key Ref:  
        Key:   SOURCE_REGION  
        Name:  ce-sync-secret  
    Env:    
      Name:  SOURCE_SECRET_KEY  
      Value From Secret Key Ref:  
        Key:   SOURCE_SECRET_KEY  
        Name:  ce-sync-secret  
    Env:    
      Name:  DESTINATION_ACCESS_KEY  
      Value From Secret Key Ref:  
        Key:   DESTINATION_ACCESS_KEY  
        Name:  ce-sync-secret  
    Env:    
      Name:  DESTINATION_BUCKET  
      Value From Secret Key Ref:  
        Key:   DESTINATION_BUCKET  
        Name:  ce-sync-secret  
    Env:    
      Name:  DESTINATION_REGION  
      Value From Secret Key Ref:  
        Key:   DESTINATION_REGION  
        Name:  ce-sync-secret  
    Env:    
      Name:  DESTINATION_SECRET_KEY  
      Value From Secret Key Ref:  
        Key:   DESTINATION_SECRET_KEY  
        Name:  ce-sync-secret  
    Env:    
      Name:  SOURCE_ACCESS_KEY  
      Value From Secret Key Ref:  
        Key:   SOURCE_ACCESS_KEY  
        Name:  ce-sync-secret  
    Resource Requests:  
      Cpu:     1  
      Memory:  128Mi  
OK
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Submit Code Engine Job
&lt;/h2&gt;

&lt;p&gt;It is now time to submit our Job to Code Engine. The maximum time a job can run is 10 hours, but in most cases ICOS syncing takes significantly less time to complete.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;ibmcloud ce job run &lt;span class="nt"&gt;--name&lt;/span&gt; &amp;lt;name of job&amp;gt; &lt;span class="nt"&gt;--jobdef&lt;/span&gt; &amp;lt;name of job definition&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In my testing I am only syncing a few files, so by the time I check the Kubernetes pods the job has already completed. Looking at the logs I am able to verify that the contents have been synced between the Object Storage buckets.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ryan@cloudshell:~&lt;span class="nv"&gt;$ &lt;/span&gt;ibmcloud ce job run &lt;span class="nt"&gt;--name&lt;/span&gt;  ce-mc-sync-jobv1 &lt;span class="nt"&gt;--jobdef&lt;/span&gt; ce-mc-sync-jobdef
Project &lt;span class="s1"&gt;'ce-minio-sync'&lt;/span&gt; and all its contents will be automatically deleted 7 days from now.
Creating job &lt;span class="s1"&gt;'ce-mc-sync-jobv1'&lt;/span&gt;...
OK

ryan@cloudshell:~&lt;span class="nv"&gt;$ &lt;/span&gt;ibmcloud ce project target &lt;span class="nt"&gt;-n&lt;/span&gt; ce-minio-sync &lt;span class="nt"&gt;--kubecfg&lt;/span&gt;
Targeting project &lt;span class="s1"&gt;'ce-minio-sync'&lt;/span&gt;...
Added context &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="s1"&gt;'ce-minio-sync'&lt;/span&gt; to the current kubeconfig file.
OK
Now targeting environment &lt;span class="s1"&gt;'ce-minio-sync'&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;

ryan@cloudshell:~&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl get pods 
NAME                   READY   STATUS      RESTARTS   AGE
ce-mc-sync-jobv1-0-0   0/1     Completed   0          53s

ryan@cloudshell:~&lt;span class="nv"&gt;$ &lt;/span&gt;ibmcloud ce job list
Project &lt;span class="s1"&gt;'ce-minio-sync'&lt;/span&gt; and all its contents will be automatically deleted 7 days from now.
Listing jobs...
Name               Age   
ce-mc-sync-jobv1   2m13s   
OK
Command &lt;span class="s1"&gt;'job list'&lt;/span&gt; performed successfully

ryan@cloudshell:~&lt;span class="nv"&gt;$ &lt;/span&gt;ibmcloud ce job logs &lt;span class="nt"&gt;-n&lt;/span&gt; ce-mc-sync-jobv1
Project &lt;span class="s1"&gt;'ce-minio-sync'&lt;/span&gt; and all its contents will be automatically deleted 7 days from now.
Logging job &lt;span class="s1"&gt;'ce-mc-sync-jobv1'&lt;/span&gt; on pod &lt;span class="s1"&gt;'0'&lt;/span&gt;...
Added &lt;span class="sb"&gt;`&lt;/span&gt;source_acct&lt;span class="sb"&gt;`&lt;/span&gt; successfully.
Added &lt;span class="sb"&gt;`&lt;/span&gt;destination_acct&lt;span class="sb"&gt;`&lt;/span&gt; successfully.
&lt;span class="sb"&gt;`&lt;/span&gt;source_acct/wandering-thunder-68-source/as-policy.png&lt;span class="sb"&gt;`&lt;/span&gt; -&amp;gt; &lt;span class="sb"&gt;`&lt;/span&gt;destination_acct/sparkling-sky-47-destination/as-policy.png&lt;span class="sb"&gt;`&lt;/span&gt;
&lt;span class="sb"&gt;`&lt;/span&gt;source_acct/wandering-thunder-68-source/create-source-service-policy.png&lt;span class="sb"&gt;`&lt;/span&gt; -&amp;gt; &lt;span class="sb"&gt;`&lt;/span&gt;destination_acct/sparkling-sky-47-destination/create-source-service-policy.png&lt;span class="sb"&gt;`&lt;/span&gt;
&lt;span class="sb"&gt;`&lt;/span&gt;source_acct/wandering-thunder-68-source/add-backup-repository.png&lt;span class="sb"&gt;`&lt;/span&gt; -&amp;gt; &lt;span class="sb"&gt;`&lt;/span&gt;destination_acct/sparkling-sky-47-destination/add-backup-repository.png&lt;span class="sb"&gt;`&lt;/span&gt;
&lt;span class="sb"&gt;`&lt;/span&gt;source_acct/wandering-thunder-68-source/direct-link-standard.png&lt;span class="sb"&gt;`&lt;/span&gt; -&amp;gt; &lt;span class="sb"&gt;`&lt;/span&gt;destination_acct/sparkling-sky-47-destination/direct-link-standard.png&lt;span class="sb"&gt;`&lt;/span&gt;
&lt;span class="sb"&gt;`&lt;/span&gt;source_acct/wandering-thunder-68-source/create-workspace.png&lt;span class="sb"&gt;`&lt;/span&gt; -&amp;gt; &lt;span class="sb"&gt;`&lt;/span&gt;destination_acct/sparkling-sky-47-destination/create-workspace.png&lt;span class="sb"&gt;`&lt;/span&gt;
&lt;span class="sb"&gt;`&lt;/span&gt;source_acct/wandering-thunder-68-source/direct-link-byoip.png&lt;span class="sb"&gt;`&lt;/span&gt; -&amp;gt; &lt;span class="sb"&gt;`&lt;/span&gt;destination_acct/sparkling-sky-47-destination/direct-link-byoip.png&lt;span class="sb"&gt;`&lt;/span&gt;
&lt;span class="sb"&gt;`&lt;/span&gt;source_acct/wandering-thunder-68-source/add-scale-out.png&lt;span class="sb"&gt;`&lt;/span&gt; -&amp;gt; &lt;span class="sb"&gt;`&lt;/span&gt;destination_acct/sparkling-sky-47-destination/add-scale-out.png&lt;span class="sb"&gt;`&lt;/span&gt;
&lt;span class="sb"&gt;`&lt;/span&gt;source_acct/wandering-thunder-68-source/filter-vpc.png&lt;span class="sb"&gt;`&lt;/span&gt; -&amp;gt; &lt;span class="sb"&gt;`&lt;/span&gt;destination_acct/sparkling-sky-47-destination/filter-vpc.png&lt;span class="sb"&gt;`&lt;/span&gt;
&lt;span class="sb"&gt;`&lt;/span&gt;source_acct/wandering-thunder-68-source/k8s-storage.png&lt;span class="sb"&gt;`&lt;/span&gt; -&amp;gt; &lt;span class="sb"&gt;`&lt;/span&gt;destination_acct/sparkling-sky-47-destination/k8s-storage.png&lt;span class="sb"&gt;`&lt;/span&gt;
&lt;span class="sb"&gt;`&lt;/span&gt;source_acct/wandering-thunder-68-source/Picture1.png&lt;span class="sb"&gt;`&lt;/span&gt; -&amp;gt; &lt;span class="sb"&gt;`&lt;/span&gt;destination_acct/sparkling-sky-47-destination/Picture1.png&lt;span class="sb"&gt;`&lt;/span&gt;
&lt;span class="sb"&gt;`&lt;/span&gt;source_acct/wandering-thunder-68-source/iks-storage-Page-1.png&lt;span class="sb"&gt;`&lt;/span&gt; -&amp;gt; &lt;span class="sb"&gt;`&lt;/span&gt;destination_acct/sparkling-sky-47-destination/iks-storage-Page-1.png&lt;span class="sb"&gt;`&lt;/span&gt;
&lt;span class="sb"&gt;`&lt;/span&gt;source_acct/wandering-thunder-68-source/dl-copy.png&lt;span class="sb"&gt;`&lt;/span&gt; -&amp;gt; &lt;span class="sb"&gt;`&lt;/span&gt;destination_acct/sparkling-sky-47-destination/dl-copy.png&lt;span class="sb"&gt;`&lt;/span&gt;
&lt;span class="sb"&gt;`&lt;/span&gt;source_acct/wandering-thunder-68-source/rt-us-east.gv.png&lt;span class="sb"&gt;`&lt;/span&gt; -&amp;gt; &lt;span class="sb"&gt;`&lt;/span&gt;destination_acct/sparkling-sky-47-destination/rt-us-east.gv.png&lt;span class="sb"&gt;`&lt;/span&gt;
&lt;span class="sb"&gt;`&lt;/span&gt;source_acct/wandering-thunder-68-source/ns-secrets-cm.png&lt;span class="sb"&gt;`&lt;/span&gt; -&amp;gt; &lt;span class="sb"&gt;`&lt;/span&gt;destination_acct/sparkling-sky-47-destination/ns-secrets-cm.png&lt;span class="sb"&gt;`&lt;/span&gt;
Total: 0 B, Transferred: 2.30 MiB, Speed: 1.66 MiB/s

OK
Command &lt;span class="s1"&gt;'job logs'&lt;/span&gt; performed successfully
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you need to sync the contents from the source bucket to the destination bucket again, simply run another job (with a new name) and Code Engine will take care of it for you. &lt;/p&gt;
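&lt;p&gt;Since job names need to be unique within the project, a simple pattern is to derive the new name from a timestamp. A minimal sketch follows; the actual &lt;code&gt;ibmcloud ce&lt;/code&gt; flags for recreating the job depend on your original job definition, so that command is left as a commented placeholder:&lt;/p&gt;

```shell
# Derive a unique job name from the current timestamp so repeated sync
# runs never collide with an existing job in the project.
NEW_JOB="ce-mc-sync-job-$(date +%Y%m%d%H%M%S)"
echo "New job name: ${NEW_JOB}"

# Hypothetical re-run; reuse the image and environment from the original job:
# ibmcloud ce job create --name "${NEW_JOB}" ...
```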

</description>
      <category>ibmcloud</category>
    </item>
    <item>
      <title>Quick Project Templates in shell</title>
      <dc:creator>Ryan Tiffany</dc:creator>
      <pubDate>Tue, 14 Jul 2020 15:55:45 +0000</pubDate>
      <link>https://dev.to/greyhoundforty/quick-project-templates-in-shell-48l4</link>
      <guid>https://dev.to/greyhoundforty/quick-project-templates-in-shell-48l4</guid>
      <description>&lt;p&gt;As someone that has to jump between a host of Terraform deployment environments having a standard template of files for each environment is a real time saver. Today I want to highlight a shell utility I have been using to help streamline this process: &lt;a href="https://github.com/EivindArvesen/prm"&gt;prm&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;prm&lt;/em&gt; allows you to use the &lt;a href="https://en.wikipedia.org/wiki/Create,_read,_update_and_delete"&gt;CRUD&lt;/a&gt; methodology for projects within your shell. Upon activation, each project runs its associated start-up script, and upon deactivation, it can run scripts to clean up the environment. &lt;/p&gt;

&lt;p&gt;These start and stop scripts can be used for changing directories, setting environment variables, cleanup, etc. &lt;/p&gt;

&lt;h2&gt;
  
  
  Example
&lt;/h2&gt;

&lt;p&gt;One of the IBM Cloud Services I interact with the most is &lt;a href="https://cloud.ibm.com/docs/schematics?topic=schematics-about-schematics"&gt;Schematics&lt;/a&gt;. Schematics is a hosted &lt;a href="https://www.terraform.io/intro/index.html"&gt;Terraform&lt;/a&gt; environment for defining and deploying Infrastructure as Code both within and outside of IBM Cloud. &lt;/p&gt;

&lt;p&gt;Every time I start a new Schematics project I need to do the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a Directory structure in a specified location&lt;/li&gt;
&lt;li&gt;Copy a set of example Terraform files&lt;/li&gt;
&lt;li&gt;Initialize a new Git repository&lt;/li&gt;
&lt;li&gt;Open Visual Studio Code in the newly created project directory&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;prm&lt;/em&gt; allows me to do that with the following start-up script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Command line arguments can be used, $3 would be the first argument after your project name.&lt;/span&gt;
&lt;span class="nv"&gt;PROJECT_NAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$3&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="nv"&gt;TEMPLATE_DIR&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;HOME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/Sync/Code/Terraform/templates/prm/schematics"&lt;/span&gt;
&lt;span class="nv"&gt;TF_DIR&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;HOME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/Sync/Code/Terraform"&lt;/span&gt;

&lt;span class="nv"&gt;dt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;date&lt;/span&gt; &lt;span class="s2"&gt;"+%Y.%m.%d"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[[&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;TF_DIR&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;PROJECT_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;]]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
    &lt;/span&gt;&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;TF_DIR&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;PROJECT_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;cd&lt;/span&gt; &lt;span class="nv"&gt;$_&lt;/span&gt;
    &lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;TF_PROJECT_DIR&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;TF_DIR&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;PROJECT_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="k"&gt;else
    &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Directory already exists, creating new one with date appended"&lt;/span&gt;
    &lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;TF_DIR&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;PROJECT_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;-&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;dt&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;cd&lt;/span&gt; &lt;span class="nv"&gt;$_&lt;/span&gt;
    &lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;TF_PROJECT_DIR&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;TF_DIR&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;PROJECT_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;-&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;dt&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="k"&gt;fi

&lt;/span&gt;&lt;span class="nb"&gt;cp&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;TEMPLATE_DIR&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;/Terraform.gitignore &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;TF_PROJECT_DIR&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;/.gitignore
&lt;span class="nb"&gt;cp&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;TEMPLATE_DIR&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/main.tf"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;TF_PROJECT_DIR&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/main.tf"&lt;/span&gt;
&lt;span class="nb"&gt;cp&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;TEMPLATE_DIR&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/variables.tf"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;TF_PROJECT_DIR&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/variables.tf"&lt;/span&gt;
&lt;span class="nb"&gt;cp&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;TEMPLATE_DIR&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/providers.tf"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;TF_PROJECT_DIR&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/providers.tf"&lt;/span&gt;
&lt;span class="nb"&gt;cp&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;TEMPLATE_DIR&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/install.yml"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;TF_PROJECT_DIR&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/install.yml"&lt;/span&gt;
&lt;span class="nb"&gt;cp&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;TEMPLATE_DIR&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/installer.sh"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;TF_PROJECT_DIR&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/installer.sh"&lt;/span&gt;
&lt;span class="nb"&gt;cp&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;TEMPLATE_DIR&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/data.tf"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;TF_PROJECT_DIR&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/data.tf"&lt;/span&gt;

git init 
code &lt;span class="nb"&gt;.&lt;/span&gt; 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Invoking PRM
&lt;/h2&gt;

&lt;p&gt;To start a &lt;em&gt;prm&lt;/em&gt; Schematics project I simply run &lt;code&gt;prm start schematics &amp;lt;name of project&amp;gt;&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;tycho ◲ prm start schematics testfordevto
Starting project schematics
Initialized empty Git repository &lt;span class="k"&gt;in&lt;/span&gt; /Users/ryan/Sync/Code/Terraform/testfordevto/.git/

~/Sync/Code/Terraform/testfordevto master&lt;span class="k"&gt;*&lt;/span&gt;
tycho ◲ ls -l
&lt;span class="nt"&gt;-rw-r--r--&lt;/span&gt;  1 ryan  staff    68 Jul 14 10:40 data.tf
&lt;span class="nt"&gt;-rw-r--r--&lt;/span&gt;  1 ryan  staff  2440 Jul 14 10:40 install.yml
&lt;span class="nt"&gt;-rw-r--r--&lt;/span&gt;  1 ryan  staff  2259 Jul 14 10:40 installer.sh
&lt;span class="nt"&gt;-rw-r--r--&lt;/span&gt;  1 ryan  staff   599 Jul 14 10:40 main.tf
&lt;span class="nt"&gt;-rw-r--r--&lt;/span&gt;  1 ryan  staff   121 Jul 14 10:40 providers.tf
&lt;span class="nt"&gt;-rw-r--r--&lt;/span&gt;  1 ryan  staff  1136 Jul 14 10:40 variables.tf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When running &lt;code&gt;prm stop schematics&lt;/code&gt;, &lt;em&gt;prm&lt;/em&gt; performs the following actions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Checks whether the latest code changes have been pushed to source control and, if not, prompts the user to do so. &lt;/li&gt;
&lt;li&gt;Clears out any local &lt;code&gt;*.tfplan&lt;/code&gt; or &lt;code&gt;*.tfvars&lt;/code&gt; files.&lt;/li&gt;
&lt;li&gt;Prompts if any new TODO items need to be added to the README file. &lt;/li&gt;
&lt;li&gt;Changes directory back to &lt;code&gt;$HOME&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
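&lt;p&gt;The clean-up behavior above can be sketched as a stop script along these lines. This is a rough illustration, not prm's actual implementation; the README/TODO prompt is omitted, and the throwaway project directory is created only so the sketch is self-contained:&lt;/p&gt;

```shell
# Hypothetical prm stop script sketch. For demonstration, set up a
# throwaway project directory containing stale plan/variable files.
TF_PROJECT_DIR="$(mktemp -d)"
touch "${TF_PROJECT_DIR}/example.tfplan" "${TF_PROJECT_DIR}/example.tfvars"

# 1. Warn if the latest changes have not been committed/pushed.
if [ -d "${TF_PROJECT_DIR}/.git" ] && [ -n "$(git -C "${TF_PROJECT_DIR}" status --porcelain)" ]; then
    echo "Uncommitted changes detected - push before stopping."
fi

# 2. Clear out any local *.tfplan or *.tfvars files.
rm -f "${TF_PROJECT_DIR}"/*.tfplan "${TF_PROJECT_DIR}"/*.tfvars

# 3. Change directory back to $HOME.
cd "${HOME}" || exit 1
```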

</description>
      <category>shell</category>
      <category>prm</category>
    </item>
    <item>
      <title>Deploying IBM Cloud infrastructure using Terraform and Gitlab</title>
      <dc:creator>Ryan Tiffany</dc:creator>
      <pubDate>Fri, 29 May 2020 17:47:27 +0000</pubDate>
      <link>https://dev.to/greyhoundforty/deploying-ibm-cloud-infrastructure-using-terraform-and-gitlab-1bnm</link>
      <guid>https://dev.to/greyhoundforty/deploying-ibm-cloud-infrastructure-using-terraform-and-gitlab-1bnm</guid>
      <description>&lt;p&gt;Today I will be walking you through how to set up Environmental Variables and a &lt;code&gt;.gitlab-ci.yml&lt;/code&gt; file to deploy IBM Cloud resources using Terraform and the Gitlab CI/CD. &lt;/p&gt;

&lt;h3&gt;
  
  
  Setting CI/CD Variables in Gitlab
&lt;/h3&gt;

&lt;p&gt;All of my automated IBM Cloud Terraform projects land under the same &lt;a href="https://docs.gitlab.com/ee/user/group/" rel="noopener noreferrer"&gt;Gitlab group&lt;/a&gt;. There are many reasons to use Groups in Gitlab, but for me it is mainly so that I don't have to set &lt;em&gt;per-project&lt;/em&gt; environment variables.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1:&lt;/strong&gt; From the top navigation bar click on &lt;em&gt;Groups&lt;/em&gt; and select &lt;em&gt;Your Groups&lt;/em&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdsc.cloud%2Fquickshare%2FChooseGroup.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdsc.cloud%2Fquickshare%2FChooseGroup.png" alt="Groups page"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2:&lt;/strong&gt; Go to your Gitlab Group page and from the left hand navigation click Settings &amp;gt; CI/CD. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdsc.cloud%2Fquickshare%2FSetCiCDProjectVars.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdsc.cloud%2Fquickshare%2FSetCiCDProjectVars.png" alt="Configure Group CICD"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3:&lt;/strong&gt; Click &lt;em&gt;Expand&lt;/em&gt; in the Variables section.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdsc.cloud%2Fquickshare%2FExpandGroupVars.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdsc.cloud%2Fquickshare%2FExpandGroupVars.png" alt="Expand Group Vars"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The variables you will want to set are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;IC_API_KEY = Your IBM Cloud API Key&lt;/li&gt;
&lt;li&gt;SL_API_KEY = Your IBM Cloud IaaS (SoftLayer) API Key&lt;/li&gt;
&lt;li&gt;SL_USERNAME = Your IBM Cloud IaaS (SoftLayer) username&lt;/li&gt;
&lt;/ul&gt;
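&lt;p&gt;To mimic the pipeline locally before pushing, you can export the same variables in your shell. The values below are placeholders for your real credentials:&lt;/p&gt;

```shell
# Placeholder credentials - replace with your own values.
export IC_API_KEY="your-ibmcloud-api-key"
export SL_API_KEY="your-classic-infrastructure-api-key"
export SL_USERNAME="your-classic-infrastructure-username"

# Terraform picks these up from the environment, so nothing
# sensitive has to be hard-coded in your *.tf files.
```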

&lt;p&gt;Check the &lt;em&gt;Masked&lt;/em&gt; option for the variables. Setting the masked option means that the values of the variables will be hidden in job logs during CI/CD runs. When your variables are set, click &lt;em&gt;Save&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdsc.cloud%2Fquickshare%2FSetVars.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdsc.cloud%2Fquickshare%2FSetVars.png" alt="Set Variables"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A Note About Remote States&lt;/strong&gt;: It is highly recommended to use a remote state for your Terraform deployments. With remote state, Terraform writes the state data to a remote data store, which can then be shared between all members of a team. If you will be using a remote state for your Terraform backend, make sure you set the appropriate environment variables for the backend provider. For instance, if you are using the &lt;a href="https://www.terraform.io/docs/backends/types/consul.html" rel="noopener noreferrer"&gt;Consul backend provider&lt;/a&gt; you would want to set the &lt;code&gt;CONSUL_TOKEN&lt;/code&gt; and &lt;code&gt;CONSUL_HTTP_ADDR&lt;/code&gt; environment variables. &lt;/p&gt;

&lt;h3&gt;
  
  
  Test Gitlab Automation
&lt;/h3&gt;

&lt;p&gt;To test our Gitlab automation, let's deploy a single Ubuntu 18 virtual instance. The first step is to create a new project in Gitlab. When creating the project, click the &lt;strong&gt;Import project&lt;/strong&gt; tab and click on &lt;em&gt;Repo by URL&lt;/em&gt;. Under the &lt;em&gt;Git Repository URL&lt;/em&gt; section enter &lt;code&gt;https://git.cloud-design.dev/ryan/ibm-tf-gitlab-example.git&lt;/code&gt;. Give your newly imported project a name, set its visibility level, and then click &lt;em&gt;Create Project&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdsc.cloud%2Fquickshare%2FRepoByURL.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdsc.cloud%2Fquickshare%2FRepoByURL.png" alt="Import Project"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After a few moments Gitlab will create the new project. Once the project has been created you can dive into the code to tweak the example deployment. At the very least you will need to do the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Update the &lt;code&gt;main.tf&lt;/code&gt; file with the name of your IaaS SSH Key in the &lt;em&gt;data.ibm_compute_ssh_key&lt;/em&gt; resource.&lt;/li&gt;
&lt;li&gt;Rename the &lt;code&gt;example.gitlab-ci.yml&lt;/code&gt; file to &lt;code&gt;.gitlab-ci.yml&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With your changes complete, go ahead and commit your code to start the Gitlab CI/CD Pipeline. You can watch the progress of the Pipeline by clicking on the &lt;strong&gt;CI/CD&lt;/strong&gt; left hand navigation link and selecting &lt;strong&gt;Pipelines&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdsc.cloud%2Fquickshare%2FWatchPipeline.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdsc.cloud%2Fquickshare%2FWatchPipeline.png" alt="View Pipelines"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From there you can view the progress of the Pipeline.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdsc.cloud%2Fquickshare%2FGitlabPIpeline.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdsc.cloud%2Fquickshare%2FGitlabPIpeline.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Notes
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Remote States&lt;/strong&gt;&lt;br&gt;
If your deployments will be using a remote state make sure to change &lt;code&gt;terraform init&lt;/code&gt; to &lt;code&gt;terraform init -backend-config="lock=true"&lt;/code&gt; in the &lt;em&gt;before_script&lt;/em&gt; section.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Gitlab and SSL&lt;/strong&gt;&lt;br&gt;
If you're using Let's Encrypt generated certificates you may see issues with the certificate not being trusted. To get around this you can add the following to the &lt;code&gt;.gitlab-ci.yml&lt;/code&gt; file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;variables&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;GIT_SSL_NO_VERIFY&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Targeting Gitlab Runners&lt;/strong&gt;&lt;br&gt;
If you need to use specific &lt;a href="https://docs.gitlab.com/runner/" rel="noopener noreferrer"&gt;Gitlab Runners&lt;/a&gt; for your deployments, you will want to add a &lt;em&gt;tag&lt;/em&gt; declaration. For instance, if you are targeting runners with the &lt;code&gt;docker&lt;/code&gt; tag, you would want to add the following to all of the CI/CD stages:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;tags&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;docker&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Alternative
&lt;/h2&gt;

&lt;p&gt;IBM Cloud has recently launched a hosted Terraform offering called &lt;a href="https://cloud.ibm.com/docs/schematics?topic=schematics-about-schematics" rel="noopener noreferrer"&gt;Schematics&lt;/a&gt;. IBM Cloud Schematics supports all IBM Cloud resources that are provided by the &lt;a href="https://ibm-cloud.github.io/tf-ibm-docs/index.html" rel="noopener noreferrer"&gt;IBM Cloud Provider plug-in for Terraform&lt;/a&gt; with the advantage that you don't have to install the Terraform CLI and the IBM Cloud Provider plug-in. You can find some good Schematics example templates &lt;a href="https://github.com/IBM-Cloud/terraform-provider-ibm/tree/master/examples" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>gitlab</category>
      <category>ibmcloud</category>
      <category>terraform</category>
    </item>
    <item>
      <title>Migrating IBM Cloud Object Storage Data Between Accounts</title>
      <dc:creator>Ryan Tiffany</dc:creator>
      <pubDate>Thu, 28 May 2020 15:08:10 +0000</pubDate>
      <link>https://dev.to/greyhoundforty/migrating-ibm-cloud-object-storage-data-between-accounts-30ln</link>
      <guid>https://dev.to/greyhoundforty/migrating-ibm-cloud-object-storage-data-between-accounts-30ln</guid>
      <description>&lt;p&gt;Today I will be showing you how you can migrate the contents of one &lt;a href="https://www.ibm.com/cloud/object-storage"&gt;IBM Cloud Object Storage&lt;/a&gt; bucket to a different instance of ICOS (IBM Cloud Object Storage). We will be using the tool &lt;a href="https://rclone.org/"&gt;rclone&lt;/a&gt; in order to sync the contents between buckets. In this scenario the ICOS instances exist on the same account but the process will work between distinct IBM Accounts as well. &lt;/p&gt;

&lt;h3&gt;
  
  
  Pre-reqs
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;HMAC credentials generated for each instance of Cloud Object Storage. See this &lt;a href="https://cloud.ibm.com/docs/services/cloud-object-storage?topic=cloud-object-storage-uhc-hmac-credentials-main"&gt;guide&lt;/a&gt; for generating ICOS credentials with HMAC.
&lt;/li&gt;
&lt;li&gt;rclone installed. See the official installation docs &lt;a href="https://rclone.org/install/"&gt;here&lt;/a&gt;. &lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Configuring rclone
&lt;/h3&gt;

&lt;p&gt;Once you have rclone installed, you will need to generate a configuration file that defines the two ICOS instances. You can do this by running the command &lt;code&gt;rclone config&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ rclone config
2020/01/16 09:39:33 NOTICE: Config file "/Users/ryan/.config/rclone/rclone.conf" not found - using defaults
No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q&amp;gt; n
name&amp;gt; icos-instance-1
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / 1Fichier
   \ "fichier"
 2 / Alias for an existing remote
   \ "alias"
 3 / Amazon Drive
   \ "amazon cloud drive"
 4 / Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, etc)
   \ "s3"
 5 / Backblaze B2
   \ "b2"
...
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Choose option 4 to get the list of S3-compatible storage providers, then choose &lt;code&gt;IBM COS S3&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Storage&amp;gt; 4
** See help for s3 backend at: https://rclone.org/s3/ **

Choose your S3 provider.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / Amazon Web Services (AWS) S3
   \ "AWS"
 2 / Alibaba Cloud Object Storage System (OSS) formerly Aliyun
   \ "Alibaba"
 3 / Ceph Object Storage
   \ "Ceph"
 4 / Digital Ocean Spaces
   \ "DigitalOcean"
 5 / Dreamhost DreamObjects
   \ "Dreamhost"
 6 / IBM COS S3
   \ "IBMCOS"
 7 / Minio Object Storage
   \ "Minio"
 8 / Netease Object Storage (NOS)
   \ "Netease"
 9 / Wasabi Object Storage
   \ "Wasabi"
10 / Any other S3 compatible provider
   \ "Other"
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Add your HMAC Access Key and Secret Key&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;env_auth&amp;gt; 1
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
access_key_id&amp;gt; xxxxxxxxxxxxxxxxxxxxx
AWS Secret Access Key (password)
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
secret_access_key&amp;gt; xxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Region to connect to.
Leave blank if you are using an S3 clone and you don't have a region.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / Use this if unsure. Will use v4 signatures and an empty region.
   \ ""
 2 / Use this only if v4 signatures don't work, eg pre Jewel/v10 CEPH.
   \ "other-v2-signature"
region&amp;gt; 1
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Next, choose the IBM Cloud Object Storage endpoint and the storage tier for the bucket you will be using. In this instance I am targeting the US Cross Region endpoint and a standard tier bucket:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Endpoint for IBM COS S3 API.
Specify if using an IBM COS On Premise.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / US Cross Region Endpoint
   \ "s3-api.us-geo.objectstorage.softlayer.net"
 2 / US Cross Region Dallas Endpoint
   \ "s3-api.dal.us-geo.objectstorage.softlayer.net"
 3 / US Cross Region Washington DC Endpoint
   \ "s3-api.wdc-us-geo.objectstorage.softlayer.net"
...
endpoint&amp;gt; 1

Location constraint - must match endpoint when using IBM Cloud Public.
For on-prem COS, do not make a selection from this list, hit enter
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / US Cross Region Standard
   \ "us-standard"
 2 / US Cross Region Vault
   \ "us-vault"
 3 / US Cross Region Cold
   \ "us-cold"
 4 / US Cross Region Flex
   \ "us-flex"
...
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;On the next prompt you will need to specify an ACL policy. I am choosing &lt;code&gt;private&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Note that this ACL is applied when server side copying objects as S3
doesn't copy the ACL from the source but rather writes a fresh one.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / Owner gets FULL_CONTROL. No one else has access rights (default). This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise COS
   \ "private"
 2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access. This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise IBM COS
   \ "public-read"
 3 / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access. This acl is available on IBM Cloud (Infra), On-Premise IBM COS
   \ "public-read-write"
 4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access. Not supported on Buckets. This acl is available on IBM Cloud (Infra) and On-Premise IBM COS
   \ "authenticated-read"
acl&amp;gt; 1
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Skip the advanced config and &lt;code&gt;rclone&lt;/code&gt; should present you with your new configuration details. Double-check that everything is correct, confirm with &lt;code&gt;y&lt;/code&gt;, and then select &lt;code&gt;n&lt;/code&gt; to add your second ICOS instance.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Edit advanced config? (y/n)
y) Yes
n) No
y/n&amp;gt; n
Remote config
--------------------
[icos-instance-1]
type = s3
provider = IBMCOS
env_auth = false
access_key_id = xxxxxx
secret_access_key = xxxxxxxxx
endpoint = s3-api.us-geo.objectstorage.softlayer.net
location_constraint = us-standard
acl = private
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d&amp;gt; y
Current remotes:

Name                 Type
====                 ====
icos-instance-1      s3

e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Follow the steps again to add your second ICOS instance, and when you've verified that everything looks correct, choose &lt;code&gt;q&lt;/code&gt; to quit the configuration process. &lt;/p&gt;

&lt;h2&gt;
  
  
  Inspecting our ICOS buckets
&lt;/h2&gt;

&lt;p&gt;With rclone configured we can now move on to the actual sync between our buckets, but first let's list the contents of our source and destination buckets:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Source&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ rclone ls icos-instance-1:source-bucket-on-instance-1
    45338 AddNFSAccess.png
    48559 AddingNFSAccess.png
    66750 ChooseGroup.png
     2550 CloudPakApplications.png
     4643 CloudPakAutomation.png
     4553 CloudPakData.png
     5123 CloudPakIntegration.png
     4612 CloudPakMultiCloud.png
    23755 CompletedAddingNFSAccess.png
   174525 CreateNetworkShare1.png
    69836 CreateNetworkShare2.png
    76863 CreateStoragePool.png
    50489 CreateStoragePool1.png
    56297 CreateStoragePool2.png
     2340 applications-icon.svg
     6979 automation-icon.svg
   120584 cloud-paks-leadspace.png
     9255 data-icon.svg
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Destination&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ rclone ls icos-instance-2:destination-bucket-on-instance-2
$
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h3&gt;
  
  
  Syncing Bucket Objects
&lt;/h3&gt;

&lt;p&gt;In this example I am going to sync the contents of the bucket &lt;code&gt;source-bucket-on-instance-1&lt;/code&gt; from my first instance of ICOS to the bucket &lt;code&gt;destination-bucket-on-instance-2&lt;/code&gt; on my second instance of ICOS. The &lt;code&gt;-P&lt;/code&gt; flag allows us to see the progress of the sync operation.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ rclone sync -P icos-instance-1:source-bucket-on-instance-1 icos-instance-2:destination-bucket-on-instance-2
Transferred:      754.933k / 754.933 kBytes, 100%, 151.979 kBytes/s, ETA 0s
Errors:                 0
Checks:                 0 / 0, -
Transferred:           18 / 18, 100%
Elapsed time:        4.9
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Now if we look at the &lt;code&gt;destination-bucket-on-instance-2&lt;/code&gt; bucket again we'll see our files have synced over:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ rclone ls icos-instance-2:destination-bucket-on-instance-2
    45338 AddNFSAccess.png
    48559 AddingNFSAccess.png
    66750 ChooseGroup.png
     2550 CloudPakApplications.png
     4643 CloudPakAutomation.png
     4553 CloudPakData.png
     5123 CloudPakIntegration.png
     4612 CloudPakMultiCloud.png
    23755 CompletedAddingNFSAccess.png
   174525 CreateNetworkShare1.png
    69836 CreateNetworkShare2.png
    76863 CreateStoragePool.png
    50489 CreateStoragePool1.png
    56297 CreateStoragePool2.png
     2340 applications-icon.svg
     6979 automation-icon.svg
   120584 cloud-paks-leadspace.png
     9255 data-icon.svg
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h2&gt;
  
  
  Taking it Further
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://cloud.ibm.com/docs/services/cloud-object-storage?topic=cloud-object-storage-rclone#rclone-sync"&gt;Sync Options&lt;/a&gt; - The &lt;code&gt;sync&lt;/code&gt; operation makes the source and destination identical, modifying the destination only: the destination is updated to match the source, including deleting files if necessary. If you need to modify this default behavior, take a look at these additional configuration options for the &lt;code&gt;sync&lt;/code&gt; command. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://cloud.ibm.com/docs/services/cloud-object-storage?topic=cloud-object-storage-rclone#rclone-sync-schedule"&gt;Automated Sync&lt;/a&gt; - If you need to set up an automatic &lt;code&gt;sync&lt;/code&gt; between buckets you will need to use a scheduling tool like &lt;em&gt;Task Scheduler&lt;/em&gt; for Windows or &lt;em&gt;crontab&lt;/em&gt; for Linux/macOS. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://cloud.ibm.com/docs/services/cloud-object-storage?topic=cloud-object-storage-rclone#rclone-reference"&gt;Supported rclone commands&lt;/a&gt; - The full list of rclone &lt;code&gt;subcommands&lt;/code&gt; for interacting with Cloud Object Storage.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
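&lt;p&gt;For the crontab route, a nightly one-way sync entry might look like the following. The remote and bucket names match the earlier examples; the rclone binary path and log file location are assumptions you will want to adjust for your system.&lt;/p&gt;

```
# m h dom mon dow  command: run the sync every night at 02:00
0 2 * * * /usr/local/bin/rclone sync icos-instance-1:source-bucket-on-instance-1 icos-instance-2:destination-bucket-on-instance-2 --log-file=/var/log/rclone-sync.log
```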

</description>
      <category>ibmcloud</category>
      <category>cloud</category>
      <category>objectstorage</category>
      <category>rclone</category>
    </item>
    <item>
      <title>Time to cheat in the shell</title>
      <dc:creator>Ryan Tiffany</dc:creator>
      <pubDate>Thu, 28 May 2020 15:00:41 +0000</pubDate>
      <link>https://dev.to/greyhoundforty/time-to-cheat-in-the-shell-5mc</link>
      <guid>https://dev.to/greyhoundforty/time-to-cheat-in-the-shell-5mc</guid>
      <description>&lt;p&gt;Like most people who work in &lt;code&gt;the cloud&lt;/code&gt;, I have some commands that just flow from my fingers without a moment's hesitation, while others inevitably lead to the man pages or googling. Today we're going to look at a handy little utility for when you just need to grab the proper flags for a command or need a quick refresher on the proper syntax. All hail the &lt;a href="https://github.com/cheat/cheat"&gt;cheat&lt;/a&gt; command.  &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;cheat allows you to create and view interactive cheatsheets on the command-line. It was designed to help remind *nix system administrators of options for commands that they use frequently, but not frequently enough to remember.  &lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Installation
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;cheat&lt;/code&gt; has no dependencies. To install it, download the executable from the &lt;a href="https://github.com/cheat/cheat/releases"&gt;releases&lt;/a&gt; page and place it on your &lt;code&gt;PATH&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Adding Community Cheat Sheets
&lt;/h2&gt;

&lt;p&gt;By default &lt;code&gt;cheat&lt;/code&gt; does not come with any cheatsheets; however, there are a large number of community-provided ones hosted on &lt;a href="https://github.com/cheat/cheatsheets"&gt;GitHub&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; ~/.config/cheat/
&lt;span class="nv"&gt;$ &lt;/span&gt;git clone https://github.com/cheat/cheatsheets
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now you will need to edit the default &lt;code&gt;~/.config/cheat/conf.yml&lt;/code&gt; file to ensure that the &lt;em&gt;community&lt;/em&gt; cheatpath points to &lt;code&gt;~/.config/cheat/cheatsheets&lt;/code&gt;. If everything is set up correctly the community cheatpath entry should look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;community&lt;/span&gt;
    &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;~/.config/cheat/cheatsheets&lt;/span&gt;
    &lt;span class="na"&gt;tags&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt; &lt;span class="nv"&gt;community&lt;/span&gt; &lt;span class="pi"&gt;]&lt;/span&gt;
    &lt;span class="na"&gt;readonly&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With the community cheatsheets in the correct location you can use the command &lt;code&gt;cheat -l&lt;/code&gt; to view them:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cheat -l
title:        file:                                               tags:
7z            /Users/ryan/.config/cheat/cheatsheets/7z            community,compression
ab            /Users/ryan/.config/cheat/cheatsheets/ab            community
alias         /Users/ryan/.config/cheat/cheatsheets/alias         community
ansi          /Users/ryan/.config/cheat/cheatsheets/ansi          community
apk           /Users/ryan/.config/cheat/cheatsheets/apk           community,packaging
apparmor      /Users/ryan/.config/cheat/cheatsheets/apparmor      community
apt           /Users/ryan/.config/cheat/cheatsheets/apt           community,packaging
apt-cache     /Users/ryan/.config/cheat/cheatsheets/apt-cache     community,packaging
apt-get       /Users/ryan/.config/cheat/cheatsheets/apt-get       community,packaging
aptitude      /Users/ryan/.config/cheat/cheatsheets/aptitude      community,packaging
aria2c        /Users/ryan/.config/cheat/cheatsheets/aria2c        community
asciiart      /Users/ryan/.config/cheat/cheatsheets/asciiart      community
asterisk      /Users/ryan/.config/cheat/cheatsheets/asterisk      community
....
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Diving in
&lt;/h2&gt;

&lt;p&gt;One of the commands that I always struggle with is &lt;code&gt;sed&lt;/code&gt;. I can never remember the syntax no matter how many times I've tried. Cheat to the rescue: &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cheat sed
# To replace all occurrences of "day" with "night" and write to stdout:
sed 's/day/night/g' file.txt

# To replace all occurrences of "day" with "night" within file.txt:
sed -i 's/day/night/g' file.txt

# To replace all occurrences of "day" with "night" on stdin:
echo 'It is daytime' | sed 's/day/night/g'

# To remove leading spaces
sed -i -r 's/^\s+//g' file.txt

# To remove empty lines and print results to stdout:
sed '/^$/d' file.txt

# To replace newlines in multiple lines
sed ':a;N;$!ba;s/\n//g'  file.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
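&lt;p&gt;As a quick sanity check, the first pattern from that sheet does exactly what it says. The file path below is just a throwaway example:&lt;/p&gt;

```shell
# Create a throwaway file and run the first example from the sheet
printf 'day one\nday two\n' > /tmp/cheat-sed-demo.txt

# Replace every "day" with "night" and capture the result
RESULT="$(sed 's/day/night/g' /tmp/cheat-sed-demo.txt)"
echo "$RESULT"
```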
&lt;h2&gt;
  
  
  Creating Custom CheatSheets
&lt;/h2&gt;

&lt;p&gt;Create your own cheatsheets using the &lt;code&gt;-e&lt;/code&gt; flag. This will open a new file in your default editor and place the new cheatsheet on the &lt;code&gt;personal&lt;/code&gt; cheatsheet path. For instance, at my job I often have to deal with authenticating against our &lt;a href="https://cloud.ibm.com/docs/iam?topic=iam-iamoverview"&gt;IAM&lt;/a&gt; offering. No matter how many times I do this the commands just don't stick for me, so I decided to create a cheatsheet for that reason:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cheat -e ibmcloud-iam
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The cheatsheet is saved as a regular text file on your personal cheatpath, and once saved you can view it like any other sheet:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cheat ibmcloud-iam
# Get IAM Token from API Key
curl -s -k -X POST --header "Content-Type: application/x-www-form-urlencoded" --header "Accept: application/json" --data-urlencode "grant_type=urn:ibm:params:oauth:grant-type:apikey" --data-urlencode "apikey=${IBMCLOUD_API_KEY}" "https://iam.cloud.ibm.com/identity/token"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
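&lt;p&gt;If you only need the token itself out of that JSON response, a small sed one-liner works without any extra dependencies. Here &lt;code&gt;TOKEN_JSON&lt;/code&gt; stands in for the curl response above; the values are made-up placeholders, not a real token:&lt;/p&gt;

```shell
# TOKEN_JSON stands in for the JSON returned by the IAM curl call;
# the values here are placeholders, not a real token
TOKEN_JSON='{"access_token":"abc123","refresh_token":"def456","token_type":"Bearer","expires_in":3600}'

# Pull out just the access_token field with sed (no jq dependency)
ACCESS_TOKEN="$(printf '%s' "$TOKEN_JSON" | sed -n 's/.*"access_token":"\([^"]*\)".*/\1/p')"
echo "$ACCESS_TOKEN"
```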

</description>
      <category>shell</category>
    </item>
    <item>
      <title>Hot and Cold backups using IBM Bluemix Cloud Object Storage</title>
      <dc:creator>Ryan Tiffany</dc:creator>
      <pubDate>Tue, 13 Jun 2017 15:02:47 +0000</pubDate>
      <link>https://dev.to/greyhoundforty/hot-and-cold-backups-using-ibm-bluemix-cloud-object-storage</link>
      <guid>https://dev.to/greyhoundforty/hot-and-cold-backups-using-ibm-bluemix-cloud-object-storage</guid>
      <description>&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;In this walkthrough we will look at utilizing &lt;code&gt;rsnapshot&lt;/code&gt; and &lt;code&gt;s3cmd&lt;/code&gt; to keep local "hot" backups and "cold" backups in IBM Bluemix Cloud Object Storage (S3). The &lt;code&gt;rsnapshot&lt;/code&gt; package will be used to generate the backups of the host system, as well as remote Linux systems if required. The &lt;code&gt;s3cmd&lt;/code&gt; utility is used to push these backups to IBM Bluemix Cloud Object Storage (S3). &lt;/p&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;One or more Linux servers with rsnapshot, rsync, and s3cmd installed. 

&lt;ul&gt;
&lt;li&gt;RHEL/CentOS: &lt;code&gt;yum install s3cmd rsnapshot rsync&lt;/code&gt; (you may need to add the EPEL repository as outlined at &lt;a href="http://www.tecmint.com/how-to-enable-epel-repository-for-rhel-centos-6-5/"&gt;http://www.tecmint.com/how-to-enable-epel-repository-for-rhel-centos-6-5/&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Ubuntu/Debian: &lt;code&gt;apt-get install s3cmd rsnapshot rsync&lt;/code&gt; &lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;SSH keys generated on your main backup server, referred to in this guide as 'Backupserver'&lt;/li&gt;
&lt;li&gt;SSH port open on your server's firewall. Rsnapshot uses rsync, which in turn uses SSH to pull backups from the remote hosts, so you will want to ensure you have the proper port whitelisted. &lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Servers:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Backupserver - the main server running rsnapshot. Local and remote backups are stored on a File Storage mount and pushed to COS once a day&lt;/li&gt;
&lt;li&gt;bck1 &amp;amp; bck2 - Two more servers we want to back up. Our Backupserver will pull the backups using rsync. &lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Generate and copy ssh-key to remote hosts
&lt;/h3&gt;

&lt;p&gt;For a guide on generating your server's public SSH key, as well as how to copy it to your remote servers, please see our KnowledgeLayer article &lt;a href="http://knowledgelayer.softlayer.com/procedure/generating-and-using-ssh-keys-remote-host-authentication"&gt;Generating and using SSH-Keys for remote host authentication&lt;/a&gt;. &lt;/p&gt;

&lt;h2&gt;
  
  
  Setting up and Configuring rsnapshot
&lt;/h2&gt;

&lt;p&gt;The rsnapshot utility will be used to back up our local system as well as remote hosts to a single directory. This will allow us to then compress those backups and send them to Cloud Object Storage (S3) using the &lt;code&gt;s3cmd&lt;/code&gt; utility. &lt;/p&gt;

&lt;p&gt;We are including an example &lt;code&gt;rsnapshot.conf&lt;/code&gt; file that you can use as well. The example file has the following defaults: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Backups are stored in the /backups/ directory. Rsnapshot can create the directory if it does not already exist. &lt;/li&gt;
&lt;li&gt;The retention scheme is set to keep 6 alpha backups, 7 beta backups, and 4 gamma backups. We'll touch on the syntax a little bit further down. &lt;/li&gt;
&lt;li&gt;Rsnapshot will back up /home, /etc, and /usr/local. You will need to adjust this to fit your needs. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://raw.githubusercontent.com/greyhoundforty/COSTooling/master/rsnapshot.conf"&gt;Example rsnapshot.conf&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A Note About rsnapshot.conf&lt;/strong&gt; - The rsnapshot configuration file is very picky when it comes to tabs vs. spaces. Always use tabs when editing the file. If there is an issue, running &lt;code&gt;rsnapshot configtest&lt;/code&gt; will show you the offending line. &lt;/p&gt;

&lt;p&gt;To use the example configuration file run the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ mv /etc/rsnapshot.conf{,.bak}
$ wget -O /etc/rsnapshot.conf https://raw.githubusercontent.com/greyhoundforty/COSTooling/master/rsnapshot.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The syntax here breaks down as follows: all local &lt;code&gt;alpha&lt;/code&gt; backups will go to /backups/alpha.X/localhost, and the &lt;code&gt;alpha&lt;/code&gt; backup for the remote server bck1 will go to /backups/alpha.X/bck1.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;root@backuptest:~# grep snapshot_root /etc/rsnapshot.conf
snapshot_root   /backups/

backup  /home/      localhost/
backup  /etc/       localhost/
backup  /usr/local/ localhost/
backup  root@10.176.18.15:/var/ bck1/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Test the rsnapshot configuration by running the command &lt;code&gt;rsnapshot configtest&lt;/code&gt;. You can also do a dry-run backup that will show you what &lt;code&gt;rsnapshot&lt;/code&gt; will actually do when running the backup job by passing the &lt;code&gt;-t&lt;/code&gt; flag.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;root@backuptest:~# rsnapshot configtest
Syntax OK

root@backuptest:~# rsnapshot -t alpha
echo 14407 &amp;gt; /var/run/rsnapshot.pid
/bin/rm -rf /backups/alpha.5/
mv /backups/alpha.4/ /backups/alpha.5/
mv /backups/alpha.3/ /backups/alpha.4/
mv /backups/alpha.2/ /backups/alpha.3/
mv /backups/alpha.1/ /backups/alpha.2/
/bin/cp -al /backups/alpha.0 /backups/alpha.1
/usr/bin/rsync -a --delete --numeric-ids --relative --delete-excluded \
    /home/ /backups/alpha.0/localhost/
/usr/bin/rsync -a --delete --numeric-ids --relative --delete-excluded /etc/ \
    /backups/alpha.0/localhost/
/usr/bin/rsync -a --delete --numeric-ids --relative --delete-excluded \
    /usr/local/ /backups/alpha.0/localhost/
/usr/bin/rsync -a --delete --numeric-ids --relative --delete-excluded \
    --rsh=/usr/bin/ssh root@10.176.18.15:/var/ /backups/alpha.0/bck1/
touch /backups/alpha.0/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Run a test backup
&lt;/h3&gt;

&lt;p&gt;If the configtest and dry run don't return any errors, proceed to run your first &lt;code&gt;alpha&lt;/code&gt; backup job. &lt;em&gt;Note:&lt;/em&gt; By default the &lt;code&gt;rsnapshot&lt;/code&gt; command will not produce any output when running backup jobs. If you would like to see what it is doing in real time, pass the &lt;code&gt;-v&lt;/code&gt; flag.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;root@backuptest:~# rsnapshot -v alpha
echo 25845 &amp;gt; /var/run/rsnapshot.pid
/bin/rm -rf /backups/alpha.5/
mv /backups/alpha.4/ /backups/alpha.5/
mv /backups/alpha.3/ /backups/alpha.4/
mv /backups/alpha.2/ /backups/alpha.3/
mv /backups/alpha.1/ /backups/alpha.2/
/bin/cp -al /backups/alpha.0 /backups/alpha.1
/usr/bin/rsync -a --delete --numeric-ids --relative --delete-excluded \
    /home/ /backups/alpha.0/localhost/
/usr/bin/rsync -a --delete --numeric-ids --relative --delete-excluded /etc/ \
    /backups/alpha.0/localhost/
/usr/bin/rsync -a --delete --numeric-ids --relative --delete-excluded \
    /usr/local/ /backups/alpha.0/localhost/
/usr/bin/rsync -a --delete --numeric-ids --relative --delete-excluded \
    --rsh=/usr/bin/ssh root@10.176.18.15:/var/ /backups/alpha.0/bck1/
touch /backups/alpha.0/
rm -f /var/run/rsnapshot.pid

root@backuptest:~# ls -l /backups/alpha.0
total 8
drwxr-xr-x 3 root root 4096 Feb 10 17:01 bck1
drwxr-xr-x 5 root root 4096 Feb  8 00:00 localhost
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Add other servers and adjust your schedule
&lt;/h3&gt;

&lt;p&gt;Now that you have tested rsnapshot go ahead and add additional servers to &lt;code&gt;rsnapshot.conf&lt;/code&gt; and configure your backup frequency. The rsnapshot utility uses the terms &lt;code&gt;alpha, beta, gamma, and delta&lt;/code&gt; but you can think of them as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;alpha = hourly backups
beta = daily backups 
gamma = weekly backups 
delta = monthly backups
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By default rsnapshot ships with the following retention scheme.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;retain  alpha   6
retain  beta    7
retain  gamma   4
#retain delta   3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This means that your server will keep 6 hourly backups, 7 daily backups, and 4 weekly backups. For example: when the &lt;code&gt;alpha&lt;/code&gt; backup job runs for the 7th time, the oldest backup is rotated out and deleted so that only 6 &lt;code&gt;alpha&lt;/code&gt; backups are kept. The rsnapshot utility also ships with a default cron.d file on Debian/Ubuntu; for RHEL/CentOS the package does not ship with a default cron file. In either case you will want to run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ wget -O /etc/cron.d/rsnapshot https://raw.githubusercontent.com/greyhoundforty/COSTooling/master/rsnapshotcron
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The downloaded cron file contains the following schedule:&lt;br&gt;
&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;0 */4   * * *       root    /usr/bin/rsnapshot alpha
0 21    * * *       root    /usr/bin/rsnapshot beta
0  3    * * 1       root    /usr/bin/rsnapshot gamma
# 30 2  1 * *       root    /usr/bin/rsnapshot delta
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By default the &lt;code&gt;alpha&lt;/code&gt; job will run every 4 hours, the beta every day at 9pm, and so on. This is where you will want to customize the schedule to suit your needs. With rsnapshot taken care of, we will now move on to configuring &lt;code&gt;s3cmd&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Configuring s3cmd
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;s3cmd&lt;/code&gt; python script is an open-source utility that allows a *nix or macOS box to talk to S3-compatible services. After the utility is installed, all you have to do is download our example &lt;code&gt;.s3cfg&lt;/code&gt; file and update it with your COS access and secret keys:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ wget -O $HOME/.s3cfg https://raw.githubusercontent.com/greyhoundforty/COSTooling/master/s3cfg
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The lines that need to be updated are 2, 30, 31, and 55 (if using our example .s3cfg file). For lines 30 and 31 you will want to replace &lt;code&gt;cos_endpoint&lt;/code&gt; with the Cloud Object Storage (S3) endpoint you are using. Once all the lines have been updated with the COS (S3) details from the Customer portal, you can test the connection by issuing the command &lt;code&gt;s3cmd ls&lt;/code&gt;, which will list all the buckets on the account.&lt;br&gt;
&lt;/p&gt;
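&lt;p&gt;For reference, those line numbers correspond to the following keys in the example &lt;code&gt;.s3cfg&lt;/code&gt;. The values shown are placeholders, with &lt;code&gt;cos_endpoint&lt;/code&gt; left as-is to match the instructions above:&lt;/p&gt;

```ini
access_key = YOUR_COS_ACCESS_KEY
host_base = cos_endpoint
host_bucket = %(bucket)s.cos_endpoint
secret_key = YOUR_COS_SECRET_KEY
```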

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ s3cmd ls                                                                                                                                                  
2017-02-03 14:52  s3://backuptest
2017-02-03 21:23  s3://largebackup
2017-02-07 20:49  s3://po9bmbnem531ehdreyfh-winbackup
2017-02-07 17:44  s3://winbackup
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Creating our backup bucket
&lt;/h3&gt;

&lt;p&gt;To create the bucket that will store our backups, we will use the &lt;code&gt;s3cmd mb&lt;/code&gt; command. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A Few Notes About Buckets&lt;/strong&gt; - Cloud Object Storage (S3) has a 100-bucket limit per account. Keep this in mind if you set up each backup to create its own bucket or use a per-month bucket. Bucket names must be DNS-compliant: between 3 and 63 characters long, made up of lowercase letters, numbers, and dashes, globally unique, and they cannot appear to be an IP address. A common approach to ensuring uniqueness is to append a UUID or other distinctive suffix to bucket names.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ s3cmd mb s3://coldbackups/
Bucket 's3://coldbackups/' created 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
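&lt;p&gt;The uniqueness suggestion can be scripted. Below is a hypothetical helper that builds a bucket name with a random hex suffix; the &lt;code&gt;coldbackups&lt;/code&gt; prefix matches the example above, and the final &lt;code&gt;s3cmd mb&lt;/code&gt; call is commented out since it needs your configured credentials:&lt;/p&gt;

```shell
# Hypothetical helper: append a random hex suffix so the bucket name is
# globally unique while staying DNS-compliant (lowercase, digits, dashes)
SUFFIX="$(od -An -N4 -tx4 /dev/urandom | tr -d ' \n')"
BUCKET="coldbackups-${SUFFIX}"
echo "$BUCKET"

# Then create it (requires a configured ~/.s3cfg):
# s3cmd mb "s3://${BUCKET}"
```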



&lt;h3&gt;
  
  
  Pushing backups to Cloud Object Storage (S3)
&lt;/h3&gt;

&lt;p&gt;To manually push your backups to Cloud Object Storage (S3) I would recommend using tar to compress the backup directory with a date stamp for easier sorting should you need to pull the backups from Cloud Object Storage (S3) for restoration.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;tar -czf $(date "+%F").backup.tar.gz /backups/
s3cmd put $(date "+%F").backup.tar.gz s3://coldbackups/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To automate this process you would likely want to use a cron job to compress the backups and send them to Cloud Object Storage (S3) at regular intervals. &lt;/p&gt;
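&lt;p&gt;A minimal sketch of such a cron-driven script, assuming &lt;code&gt;/backups&lt;/code&gt; is your rsnapshot snapshot_root and &lt;code&gt;coldbackups&lt;/code&gt; is your bucket; the &lt;code&gt;s3cmd&lt;/code&gt; line is commented out since it needs your configured credentials:&lt;/p&gt;

```shell
#!/bin/sh
# Sketch of a cron-driven "cold" push; BACKUP_DIR and the bucket name
# are placeholders to adapt to your own setup.
set -u

BACKUP_DIR="${BACKUP_DIR:-/backups}"
ARCHIVE="/tmp/$(date +%F).backup.tar.gz"

if [ -d "$BACKUP_DIR" ]; then
    # Compress the whole snapshot tree with a date stamp for easy sorting
    tar -czf "$ARCHIVE" -C / "${BACKUP_DIR#/}"
    # Uncomment once ~/.s3cfg is configured to push the archive to COS (S3):
    # s3cmd put "$ARCHIVE" s3://coldbackups/
fi
echo "$ARCHIVE"
```

&lt;p&gt;A crontab entry such as &lt;code&gt;30 1 * * * /usr/local/bin/cold-push.sh&lt;/code&gt; (the script path is hypothetical) would then run it nightly.&lt;/p&gt;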

&lt;p&gt;&lt;strong&gt;A Note About Retention&lt;/strong&gt; - Cloud Object Storage (S3) does not currently support a retention scheme. This means that whatever you push to COS (S3) will remain there until you delete it. &lt;/p&gt;

&lt;h2&gt;
  
  
  Restoring Files from Cloud Object Storage (S3)
&lt;/h2&gt;

&lt;p&gt;To restore a file or directory from Cloud Object Storage (S3) you will need to use the &lt;code&gt;get&lt;/code&gt; command to pull down the backup. Once the file or directory has been downloaded to your server you can use &lt;code&gt;cp&lt;/code&gt;, &lt;code&gt;rsync&lt;/code&gt;, or &lt;code&gt;mv&lt;/code&gt; to restore the file. If the file was from a remote host backed up using &lt;code&gt;rsnapshot&lt;/code&gt; you would use &lt;code&gt;scp&lt;/code&gt; or &lt;code&gt;rsync&lt;/code&gt; to move it back to the original host system.&lt;/p&gt;

&lt;h3&gt;
  
  
  To pull a single file or compressed backup
&lt;/h3&gt;

&lt;p&gt;By default the file you download from Cloud Object Storage (S3) will be stored in the same directory you are currently in.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ sc3md get s3://bucket/path/to/file
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can also specify the directory you would like the downloaded file stored in:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ s3cmd get s3://bucket/path/to/file /local/path/ 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  To pull a directory
&lt;/h3&gt;

&lt;p&gt;In order to pull a directory as well as all sub-items, use the &lt;code&gt;--recursive&lt;/code&gt; flag&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ s3cmd get s3://bucket/ /local/path/ --recursive 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Automating the process
&lt;/h2&gt;

&lt;p&gt;I have created a bash script that can be used to set up both &lt;code&gt;rsnapshot&lt;/code&gt; and &lt;code&gt;s3cmd&lt;/code&gt;. This script is offered as-is and you are free to make any modifications you like: &lt;a href="https://github.com/greyhoundforty/COSTooling/blob/master/backup_script.sh"&gt;backup_script.sh&lt;/a&gt;&lt;/p&gt;

</description>
      <category>bluemix</category>
    </item>
    <item>
      <title>Deploying Hugo on Bluemix</title>
      <dc:creator>Ryan Tiffany</dc:creator>
      <pubDate>Thu, 08 Jun 2017 19:48:03 +0000</pubDate>
      <link>https://dev.to/greyhoundforty/deploying-hugo-on-bluemix</link>
      <guid>https://dev.to/greyhoundforty/deploying-hugo-on-bluemix</guid>
      <description>&lt;p&gt;In today's post we will show how to deploy a &lt;a href="https://gohugo.io/"&gt;Hugo&lt;/a&gt; site to &lt;a href="https://www.ibm.com/cloud-computing/bluemix/"&gt;Bluemix&lt;/a&gt; using the Cloud Foundry command line and build tools. This guide assumes that you already have Hugo installed. If this is not the case, please see the &lt;a href="https://gohugo.io/overview/installing/"&gt;Hugo Documentation&lt;/a&gt; for installing and configuring Hugo.&lt;/p&gt;

&lt;p&gt;You will want to pick a Hugo theme and run a build so that your static files are generated into the &lt;code&gt;public&lt;/code&gt; folder. I am using the heather-hugo theme on my example site, so the command is:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ hugo -t heather-hugo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Once you have generated your static files you can move on to installing and configuring the Cloud Foundry cli to push your app to Bluemix.&lt;/p&gt;

&lt;h2&gt;
  
  
  Install and configure the Bluemix Cloud Foundry CLI
&lt;/h2&gt;

&lt;p&gt;The installation will depend on your specific OS, but you can use &lt;a href="https://github.com/cloudfoundry/cli"&gt;this page&lt;/a&gt; to get the command-line interface installed. Once the install has completed you will need to log in using the &lt;code&gt;cf&lt;/code&gt; command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cf login
API endpoint: https://api.ng.bluemix.net

Email&amp;gt; user@example.com

Password&amp;gt;
Authenticating...
OK

Targeted org dev_test

Select a space (or press enter to skip):
1. dev
2. testingground

Space&amp;gt; 1
Targeted space dev

API endpoint:   https://api.ng.bluemix.net (API version: 2.40.0)
User:           user@example.com
Org:            dev_test
Space:          dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Create your manifest file
&lt;/h2&gt;

&lt;p&gt;One of the most important requirements for a Cloud Foundry app is the &lt;code&gt;manifest.yml&lt;/code&gt; file. This file defines metadata about your application. More information about manifest files can be found here: &lt;a href="https://docs.cloudfoundry.org/devguide/deploy-apps/manifest.html"&gt;Deploying with Application Manifests&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Here is my hugo app &lt;code&gt;manifest.yml&lt;/code&gt; file&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;applications:
- path: public/
  memory: 1024M
  instances: 1
  name: hugo
  host: hugo
  disk_quota: 1024M
  buildpack: https://github.com/cloudfoundry-incubator/staticfile-buildpack.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Here is the breakdown of the file:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;path: Specify what folder gets pushed. If not defined, defaults to the current directory.
&lt;/li&gt;
&lt;li&gt;memory: Specify how much memory your application needs.
&lt;/li&gt;
&lt;li&gt;instances: Specify the number of app instances that you want to start upon push.
&lt;/li&gt;
&lt;li&gt;name: Name of your application.
&lt;/li&gt;
&lt;li&gt;host: This defines the name of the subdomain (yourhost.mybluemix.net).
&lt;/li&gt;
&lt;li&gt;disk_quota: Specify how much disk space is allocated to each instance.
&lt;/li&gt;
&lt;li&gt;buildpack: Specify what kind of buildpack your application needs. More information here: &lt;a href="https://docs.cloudfoundry.org/concepts/stacks.html"&gt;Cloud Foundry stacks&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Push application to Bluemix
&lt;/h2&gt;

&lt;p&gt;Now that we have all our ducks in a row, we can push our application to Bluemix using the Cloud Foundry CLI:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cf push hugo
Using manifest file /Users/ryan/bluemix/cf-apps/hugo/manifest.yml

Updating app hugo in org user@example.com / space dev as user@example.com...
OK

Using route hugo.testingbig.blue
Uploading hugo...
Uploading app files from: /Users/ryan/bluemix/cf-apps/hugo/public
Uploading 78.9K, 23 files
Done uploading
OK

Stopping app hugo in org user@example.com / space tinylab as user@example.com...
OK

Starting app hugo in org user@example.com / space tinylab as user@example.com...
-----&amp;gt; Downloaded app package (40K)
-----&amp;gt; Downloaded app buildpack cache (4.0K)
Cloning into '/tmp/buildpacks/staticfile-buildpack'...
Submodule 'compile-extensions' (https://github.com/cloudfoundry/compile-extensions.git) registered for path 'compile-extensions'
Cloning into 'compile-extensions'...
Submodule path 'compile-extensions': checked out '26a578c06a62c763205833561fec1c5c6d34deb6'
-------&amp;gt; Buildpack version 1.3.1
Downloaded [https://pivotal-buildpacks.s3.amazonaws.com/concourse-binaries/nginx/nginx-1.9.10-linux-x64.tgz]
grep: Staticfile: No such file or directory
-----&amp;gt; Using root folder
-----&amp;gt; Copying project files into public/
-----&amp;gt; Setting up nginx
grep: Staticfile: No such file or directory
-----&amp;gt; Uploading droplet (2.6M)

0 of 1 instances running, 1 starting
1 of 1 instances running

App started
OK

App hugo was started using this command `sh boot.sh`

Showing health and status for app hugo in org user@example.com / space dev as user@example.com...
OK

requested state: started
instances: 1/1
usage: 1G x 1 instances
urls: hugo.mybluemix.net
last uploaded: Thu Feb 25 17:19:03 UTC 2016
stack: cflinuxfs2
buildpack: https://github.com/cloudfoundry-incubator/staticfile-buildpack.git

     state     since                    cpu    memory       disk         details
#0   running   2016-02-25 11:19:36 AM   0.0%   2.6M of 1G   5.6M of 1G
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
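&lt;p&gt;The &lt;code&gt;grep: Staticfile: No such file or directory&lt;/code&gt; lines in the staging output above are harmless, but the staticfile buildpack does look for a marker file named &lt;code&gt;Staticfile&lt;/code&gt; in the pushed folder. If you want to silence the warning, you can create an empty one before pushing:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ touch public/Staticfile
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;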
&lt;h2&gt;
  
  
  Create custom domain in Bluemix
&lt;/h2&gt;

&lt;p&gt;For simple testing and proofs of concept you can certainly keep using the &lt;code&gt;mybluemix.net&lt;/code&gt; domain, but if you want to use this as a full-time site you can configure Bluemix to use a custom domain. The first step is to create the domain in Bluemix and associate it with an organization. The syntax is &lt;code&gt;cf create-domain ORG DOMAIN_NAME&lt;/code&gt;. In my case the org is tinylab and my domain is testingbig.blue:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cf create-domain tinylab testingbig.blue
Creating domain testingbig.blue for org tinylab as user@example.com...
OK
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In order to use your custom domain you have to map the domain to your Bluemix application using the &lt;code&gt;map-route&lt;/code&gt; command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cf map-route hugo testingbig.blue
Creating route testingbig.blue for org tinylab / space Production_US as user@example.com...
OK
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
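&lt;p&gt;To confirm the new route was mapped to the app, you can list the routes in your targeted space:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cf routes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The output should show both the original &lt;code&gt;mybluemix.net&lt;/code&gt; route and the new custom domain bound to the hugo app.&lt;/p&gt;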
&lt;h2&gt;
  
  
  Point your custom domain to Bluemix
&lt;/h2&gt;

&lt;p&gt;My domain testingbig.blue currently uses the SoftLayer DNS service, so I added an A record pointing the domain at Bluemix using the awesome &lt;a href="http://softlayer-python.readthedocs.org/en/latest/cli.html"&gt;SoftLayer CLI&lt;/a&gt;. &lt;em&gt;Disclaimer: &lt;a href="https://www.linkedin.com/in/ryan-tiffany-786a036a"&gt;I work at SoftLayer&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ slcli dns record-add hugo.testingbig.blue @ A 75.126.81.68
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Push to Bluemix to use the Custom domain
&lt;/h2&gt;

&lt;p&gt;To see the site at the new custom domain, push the app again:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ cf push hugo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

</description>
      <category>bluemix</category>
    </item>
  </channel>
</rss>
