<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Jack Moore</title>
    <description>The latest articles on DEV Community by Jack Moore (@jmoore53).</description>
    <link>https://dev.to/jmoore53</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F142563%2Fe81d7dfe-6264-4a20-aa5c-c8ff7d17e573.jpeg</url>
      <title>DEV Community: Jack Moore</title>
      <link>https://dev.to/jmoore53</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/jmoore53"/>
    <language>en</language>
    <item>
      <title>BGP and Cilium Kubernetes</title>
      <dc:creator>Jack Moore</dc:creator>
      <pubDate>Tue, 20 Sep 2022 05:00:00 +0000</pubDate>
      <link>https://dev.to/jmoore53/bgp-and-cilium-kubernetes-9fk</link>
      <guid>https://dev.to/jmoore53/bgp-and-cilium-kubernetes-9fk</guid>
      <description>&lt;p&gt;For the current setup I have with kubernetes I ended up using Cilium as the networking provider configured with BGP on the OPNSense Firewall I have. This setup was a bit different because Cilium didn’t support having Nodes on the same network as the pods.. which makes sense, however it should have allowed for better configuration of the pod CIDR. Below is how I configured Cilium and OPNSense:&lt;/p&gt;

&lt;p&gt;I made an attempt to run clium configured with BGP, but needed to move the nodes to 172.16.0.0/24 network.&lt;/p&gt;

&lt;p&gt;This was the process I used:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a new VLAN on the switch for the 172.16.0.0/12 network;

&lt;ul&gt;
&lt;li&gt;Provisioning of nodes was done with Terraform;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Carve out a space in the network for Kubernetes;&lt;/li&gt;
&lt;li&gt;Add a firewall interface for the new L2 network;&lt;/li&gt;
&lt;li&gt;Update Terraform to move everything to the 172.16.0.0/12 network;&lt;/li&gt;
&lt;li&gt;Update scripts;&lt;/li&gt;
&lt;li&gt;Try Cilium again.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Installing Helm and Cilium
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

helm repo add cilium https://helm.cilium.io/

helm install cilium cilium/cilium --version 1.12.2 \
  --namespace kube-system \
  --set bgp.enabled=true \
  --set bgp.announce.loadbalancerIP=true \
  --set hubble.relay.enabled=true \
  --set hubble.ui.enabled=true
  #--set bgp.announce.podCIDR=true

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After installing Cilium I needed to add a ConfigMap for BGP pointing to OPNsense (note: this may need to be done before installing Cilium):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Note the peer address is my firewall’s IP&lt;/li&gt;
&lt;li&gt;Note the peer ASN is the AS of the firewall&lt;/li&gt;
&lt;li&gt;Note my-asn is the AS of the Kubernetes cluster
&lt;/li&gt;
&lt;/ul&gt;
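
&lt;p&gt;Once the ConfigMap is applied and the peers come up, the session can be checked from both sides. A rough sketch (assuming the cilium CLI is available on a node and FRR’s vtysh is available on the firewall):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# On a cluster node: check the agent and look for BGP messages
cilium status
kubectl -n kube-system logs -l k8s-app=cilium | grep -i bgp

# On the OPNsense shell: ask FRR for the BGP state
vtysh -c "show ip bgp summary"
vtysh -c "show ip bgp neighbors"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;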

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: ConfigMap
metadata:
  name: bgp-config
  namespace: kube-system
data:
  config.yaml: |
    peers:
      - peer-address: 172.16.0.254
        peer-asn: 64512
        my-asn: 64513
    address-pools:
      - name: default
        protocol: bgp
        addresses:
          - 172.16.1.0/24

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  OPNSense Side:
&lt;/h2&gt;

&lt;p&gt;On the OPNsense side I had to install the &lt;code&gt;frr&lt;/code&gt; package, which adds dynamic routing support. From there I had to configure BGP.&lt;/p&gt;

&lt;p&gt;This meant:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;enabling bgp&lt;/li&gt;
&lt;li&gt;Setting the BGP AS Number to 64512&lt;/li&gt;
&lt;li&gt;Adding all the k8s nodes into the neighbors of the BGP config with the correct peer ASN (peer ASNs are 64513)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After everything is configured, LoadBalancer IPs can be used on the 172.16.1.0/24 network.&lt;/p&gt;
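
&lt;p&gt;To sanity-check this, a minimal LoadBalancer Service should get an address from the 172.16.1.0/24 pool (the names below are placeholders, not from my actual cluster):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: test-lb
spec:
  type: LoadBalancer
  selector:
    app: test-app
  ports:
    - port: 80
      targetPort: 8080

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;code&gt;kubectl get svc test-lb&lt;/code&gt; should then show an EXTERNAL-IP from the pool, reachable once the route is advertised.&lt;/p&gt;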

&lt;h2&gt;
  
  
  Links
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.cilium.io/en/v1.12/gettingstarted/bgp/"&gt;BGP Cilium 1.12.2&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://docs.frrouting.org/en/stable-7.4/bgp.html#clicmd-%5Bno%5DneighborPEERupdate-source%3CIFNAME%20%7C%20ADDRESS%3E"&gt;FRR BGP Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://help.accuknox.com/open-source/cilium-vm-k8s/"&gt;Cilium in VMs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://helm.sh/docs/intro/install/"&gt;Helm - Installing Helm&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.cilium.io/en/v1.12/gettingstarted/hubble/#hubble-ui"&gt;Service Map and Hubble UI&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dickingwithdocker.com/2021/05/using-bgp-to-integrate-cilium-with-opnsense/"&gt;Using BGP to Integrate Cilium with OPNSense&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>k8s</category>
      <category>networking</category>
      <category>firewall</category>
    </item>
    <item>
      <title>A Bit of Nginx Load Balancing</title>
      <dc:creator>Jack Moore</dc:creator>
      <pubDate>Wed, 19 Jan 2022 05:00:00 +0000</pubDate>
      <link>https://dev.to/jmoore53/a-bit-of-nginx-load-balancing-4efi</link>
      <guid>https://dev.to/jmoore53/a-bit-of-nginx-load-balancing-4efi</guid>
      <description>&lt;p&gt;Similar to the last post, I wanted to look more into NGINX features including load balancing and geographically serving requests from servers within the same region. In this post I look into load balancing requests with GeoIP2, serving requests based on locations and IPs, and nginx upstream servers for load balancing. I also added the echo module to nginx, and explored other logging formats for nginx.&lt;/p&gt;

&lt;h2&gt;
  
  
  Containers and Docker Networking
&lt;/h2&gt;

&lt;p&gt;After committing the image I created last post (&lt;code&gt;nginx:geocity&lt;/code&gt;), this post will use that same image to create multiple containers to serve requests. I moved the image to a different server so I didn’t accidentally delete any production containers. To copy the image, I used the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker save -o /opt/nginx/nginx-geocity nginx:geocity
scp /opt/nginx/nginx-geocity jack@remoteserver:/opt/nginx/nginx-geocity
docker load -i /opt/nginx/nginx-geocity

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;From the new host, it all starts with a network to connect the docker containers. For this simple POC, I created all the containers on the same host and used docker networking to link them all together. Below is the creation of the containers and network:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create Geocity container to proxy all requests
docker run --name nginx_geocity_1 -p 8081:80 nginx:geocity

# Create network
docker network create --driver bridge nginx-net

# Connect already running container to network
docker network connect nginx-net nginx_geocity_1

# Create new container connected to network
docker run --name nginx_region_1 --network nginx-net -d nginx:geocity
docker run --name nginx_region_2 --network nginx-net -d nginx:geocity
docker run --name nginx_region_3 --network nginx-net -d nginx:geocity

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At this point there are four containers running: one main nginx server to process incoming requests (&lt;code&gt;nginx_geocity_1&lt;/code&gt;) and three “regional” servers to serve them (&lt;code&gt;nginx_region_[1,2,3]&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;Because the default page simply displays “Welcome to Nginx”, I needed a way to distinguish which regional server I was connecting to. Again, because this is a POC, I edited &lt;code&gt;html/index.html&lt;/code&gt; to say “Welcome to Region $regional_server_id”, so regional server 1 displayed “Welcome to Region 1” and regional server 2 displayed “Welcome to Region 2”. Simple, but necessary.&lt;/p&gt;
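
&lt;p&gt;One quick way to stamp each container is with &lt;code&gt;docker exec&lt;/code&gt; (a sketch; the stock nginx image serves from &lt;code&gt;/usr/share/nginx/html&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker exec nginx_region_1 sh -c 'echo "Welcome to Region 1" &amp;gt; /usr/share/nginx/html/index.html'
docker exec nginx_region_2 sh -c 'echo "Welcome to Region 2" &amp;gt; /usr/share/nginx/html/index.html'
docker exec nginx_region_3 sh -c 'echo "Welcome to Region 3" &amp;gt; /usr/share/nginx/html/index.html'

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;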

&lt;h2&gt;
  
  
  Nginx Configuration
&lt;/h2&gt;

&lt;p&gt;After this was set up, I modified the &lt;code&gt;nginx.conf&lt;/code&gt; file in the &lt;code&gt;nginx_geocity_1&lt;/code&gt; server to look like the following to serve requests based on region:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http {
    geoip2 /opt/GeoLite2/GeoLite2-City.mmdb {
        auto_reload 60m;
        $geoip2_metadata_city_build metadata build_epoch;
        $geoip2_data_city_name city names en;
    }
    geoip2 /opt/GeoLite2/GeoLite2-Country.mmdb {
        auto_reload 60m;
        $geoip2_metadata_country_build metadata build_epoch;
        $geoip2_data_country_code country iso_code;
        $geoip2_data_country_name country names en;
        $geoip2_data_continent_code continent code;
    }

    map $geoip2_data_continent_code $nearest_server {
        default all;
        EU eu;
        NA na;
        AS as;
        AF af;
    }

    # geo $geo {
    # default all;
    # 172.17.0.0/24 eu;
    # 127.0.0.1/32 all;
    # 10.0.4.0/24 na;
    # }

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note this is very similar to the last post. Essentially not much has changed, except for the addition of the commented-out &lt;code&gt;geo $geo&lt;/code&gt; block (used later for testing). Continent codes now get mapped to a &lt;code&gt;$nearest_server&lt;/code&gt; variable, and the &lt;code&gt;$geo&lt;/code&gt; variable maps client IP ranges to the same continent codes. These variables determine where to direct each request (ie &lt;code&gt;eu&lt;/code&gt; will go to European servers, &lt;code&gt;na&lt;/code&gt; will go to North American servers, &lt;code&gt;all&lt;/code&gt; is the default).&lt;/p&gt;

&lt;p&gt;Now the &lt;code&gt;default.conf&lt;/code&gt; site has been modified to look like the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;server {

    listen 80;
    listen [::]:80;
    server_name localhost;

    location /echo {
        echo $geo;
    }

    location / {
        proxy_pass http://$nearest_server;
    }
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}

upstream all {
        server nginx_region_1;
}

upstream eu {
        server nginx_region_2;
}

upstream na {
        server nginx_region_3;
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now any request that comes in is mapped to a continent code by the GeoIP2 module, and then the data is served via proxy from the closest geographic server. (I could and really should have used mod_rewrite to reroute the request for better performance instead of proxying requests through the one server, but will look into this later.)&lt;/p&gt;
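
&lt;p&gt;A redirect-based alternative could look something like this (the regional hostnames are hypothetical, and this assumes DNS exists for them):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;location / {
    # Redirect clients to a regional hostname instead of proxying
    if ($nearest_server = eu) {
        return 302 http://eu.example.com$request_uri;
    }
    if ($nearest_server = na) {
        return 302 http://na.example.com$request_uri;
    }
    proxy_pass http://all;
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;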

&lt;h2&gt;
  
  
  Where I ran into problems
&lt;/h2&gt;

&lt;p&gt;The above looks great. Everything looks like it works and all is well except that this is all on one server. It is hard to test out geo codes from localhost and from an internal network because all the requests are going to end up at the same proxy_pass location.&lt;/p&gt;

&lt;p&gt;I tried a few different things, including adding the echo module and setting the real IP for the proxy, but these didn’t work. I even contemplated IP spoofing, but stayed away to avoid tangenting off into networking issues. My first thought was to add more logging, so I added logging to track the upstream requests to the containers that were serving them.&lt;/p&gt;

&lt;p&gt;I added the following to my &lt;code&gt;nginx.conf&lt;/code&gt; configuration within the &lt;code&gt;http&lt;/code&gt; block to keep the regular logging, as well as log the requests sent to the upstream servers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    log_format upstreamlog '$server_name to: $upstream_addr {$request} '
                           'upstream_response_time $upstream_response_time'
                           ' request_time $request_time';

    access_log /var/log/nginx/access.log main;
    access_log /var/log/nginx/access.log upstreamlog;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Although the extra logging was helpful to have, it did not solve my issue; it just showed I was able to process requests &lt;em&gt;fast&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Then I added the echo module to nginx, and this is why there is a location to serve &lt;code&gt;/echo&lt;/code&gt;, which will just return the &lt;code&gt;continent_code&lt;/code&gt; on a request. (Note: I had to re-compile nginx with the &lt;code&gt;echo&lt;/code&gt; module because it is not built into NGINX by default.)&lt;/p&gt;

&lt;p&gt;This is where the NGINX builtin &lt;code&gt;geo&lt;/code&gt; module also came into play. With a few quick modifications I was able to test connections to the geocity server from the network I’m currently on, the local machine, and the docker network to see where requests were being sent. After uncommenting the &lt;code&gt;geo&lt;/code&gt; block shown above, I modified the configuration to proxy_pass to &lt;code&gt;$geo&lt;/code&gt; instead of &lt;code&gt;$nearest_server&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Now when I curled from my local machine I would get different requests back from the server showing which variables were being used:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Curl request from my local machine to the server:
curl remote.ip.addr.here:8081/echo
# response: na

# Curl request from within the container
curl localhost/echo
# response: all

# Curl request from a different docker container on the same subnet
curl nginx_geocity_1/echo
# response: eu

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, after this was complete, I tested whether the &lt;code&gt;/index.html&lt;/code&gt; page returned “Welcome to Region $region_id”, indicating I was in fact getting a response from the correct upstream server. After this verification everything was looking good. I figured this was a good POC and the continent codes would match, but it would likely need to be tested on multiple servers before being moved into production.&lt;/p&gt;

&lt;h2&gt;
  
  
  Adding more servers to the upstream
&lt;/h2&gt;

&lt;p&gt;NGINX upstream blocks allow for multiple servers and load balancing by simply adding more servers within the upstream block. They also allow for different methods of load balancing, including round robin (the default when two or more servers are added to the upstream), least_conn (least connections), ip_hash, and generic hash.&lt;/p&gt;
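
&lt;p&gt;The other methods are selected the same way inside the upstream block; for example, a weighted round robin or ip_hash variant might look like the following (the weights are made up):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;upstream na {
        # Weighted round robin: region_1 gets roughly twice the traffic
        server nginx_region_1 weight=2;
        server nginx_region_2;
}

upstream eu {
        # Pin each client IP to the same backend
        ip_hash;
        server nginx_region_2;
        server nginx_region_3;
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;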

&lt;p&gt;I tested this out by adding all the servers to one upstream and using &lt;code&gt;least_conn&lt;/code&gt;. The upstream block looked like the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;upstream na {
        least_conn;
        server nginx_region_1;
        server nginx_region_2;
        server nginx_region_3;
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Using my browser and refreshing a few times, I was able to see all three region servers were serving the request.&lt;/p&gt;
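
&lt;p&gt;A quick loop over curl shows the rotation as well (8081 is the port published earlier):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;for i in 1 2 3 4 5 6; do
  curl -s localhost:8081/ | grep "Welcome to Region"
done

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;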

&lt;h2&gt;
  
  
  From Here
&lt;/h2&gt;

&lt;p&gt;From here I want to look into IP spoofing for testing, including using the &lt;code&gt;scapy&lt;/code&gt; package and &lt;code&gt;iptables&lt;/code&gt;. I also want to look into docker overlay networks for networking across docker hosts (although I’d like to do this without docker swarm or k8s for right now).&lt;/p&gt;

&lt;p&gt;Also, as I noted above, this is just an OK solution for georouting. It is far from perfect. Realistically, the best georouting would start at the DNS level. From there, if someone from Europe managed to hit the US version of the site at “example.com”, it may be better to use mod_rewrite to send them to the “.eu” version of the site on their first request to better serve them. I may look into these types of solutions in the future.&lt;/p&gt;

&lt;h2&gt;
  
  
  Links:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.nginx.com/nginx/admin-guide/web-server/reverse-proxy/"&gt;Nginx Reverse Proxy&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.nginx.com/nginx/admin-guide/security-controls/controlling-access-by-geoip/"&gt;Nginx Restricting Access by Geographical Location&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://nginx.org/en/docs/http/ngx_http_geo_module.html#geo"&gt;Nginx Geo Module&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.nginx.com/nginx/admin-guide/load-balancer/http-load-balancer/"&gt;Nginx Load Balancing&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://sandilands.info/sgordon/address-spoofing-with-iptables-in-linux"&gt;Address Spoofing with iptables in Linux&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://scapy.readthedocs.io/en/latest/introduction.html"&gt;Scrapy Introduction&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/openresty/echo-nginx-module"&gt;Github - Echo NGINX Module&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://stackoverflow.com/a/58408267"&gt;StackOverflow - Testing Load Balancing in NGINX&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://serverfault.com/questions/743414/how-can-i-check-if-remote-addr-ip-is-not-in-cidr-range-in-nginx"&gt;StackOverflow - Geo Block&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>web</category>
      <category>html</category>
      <category>nginx</category>
      <category>configuration</category>
    </item>
    <item>
      <title>Clustered Storage with DRBD</title>
      <dc:creator>Jack Moore</dc:creator>
      <pubDate>Fri, 04 Dec 2020 05:00:00 +0000</pubDate>
      <link>https://dev.to/jmoore53/clustered-storage-with-drbd-2h3m</link>
      <guid>https://dev.to/jmoore53/clustered-storage-with-drbd-2h3m</guid>
      <description>&lt;h2&gt;
  
  
  Logical Volumes
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://pve.proxmox.com/wiki/Logical_Volume_Manager_(LVM)"&gt;Create an LVM on each server for the NFS Mount&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On both servers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;lvcreate -n nfspoint -V 550G pve/data
mkfs.ext4 /dev/pve/nfspoint
echo '/dev/pve/nfspoint /var/lib/nfspoint ext4 defaults 0 2' &amp;gt;&amp;gt; /etc/fstab
mkdir /var/lib/nfspoint
mount /dev/pve/nfspoint /var/lib/nfspoint/
lvs

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;vim /etc/exports&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/var/lib/nfspoint 10.0.0.0/255.0.0.0(rw,no_root_squash,no_all_squash,sync)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
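
&lt;p&gt;After editing &lt;code&gt;/etc/exports&lt;/code&gt;, the export can be applied and checked (assuming the NFS server package is installed):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;exportfs -ra            # re-read /etc/exports
exportfs -v             # list active exports
showmount -e localhost  # confirm the share is visible

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;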



&lt;h2&gt;
  
  
  DRBD
&lt;/h2&gt;

&lt;p&gt;DRBD is a little tricky. Because the LVM volume was thin provisioned, it looked like some metadata was written to the device &lt;code&gt;/dev/pve/nfspoint&lt;/code&gt;. This meant I had to zero out the nfspoint.&lt;/p&gt;

&lt;p&gt;DRBD Configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;global { usage-count no; }
common { syncer { rate 100M; } }
resource r0 {
        protocol C;
        device /dev/drbd0 minor 0;
        startup {
        wfc-timeout 120;
            degr-wfc-timeout 60;
            become-primary-on both;
        }
        net {
            cram-hmac-alg sha1;
            allow-two-primaries;
            shared-secret "secret";
        }
        on HLPMX1 {
            disk /dev/pve/nfspoint;
            address 10.0.0.2:7788;
            meta-disk internal;
        }
        on HLPMX2 {
            disk /dev/pve/nfspoint;
            address 10.0.0.3:7788;
            meta-disk internal;
        }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Zeroing the device wasn’t a problem because there was no data on it yet. It may become a problem when I expand the volume with LVM or need to make any kind of resource changes to the device.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dd if=/dev/zero of=/dev/pve/nfspoint bs=1M count=200


mkfs.ext4 -b 4096 /dev/drbd0


curl --output drbd9.15.tar.gz https://launchpad.net/ubuntu/+archive/primary/+sourcefiles/drbd-utils/9.15.0-1/drbd-utils_9.15.0.orig.tar.gz
tar -xf drbd9.15.tar.gz
cd drbd9.15
./configure --prefix=/usr --localstatedir=/var --sysconfdir=/etc
make all
make install

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Node 2: (because gcc wasn’t installed)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apt install build-essential
apt install gcc
apt install flex


# Copy config to other server 
sudo drbdadm create-md r0
sudo systemctl start drbd.service
sudo drbdadm -- --overwrite-data-of-peer primary all
mkfs.ext4 /dev/drbd0
mkdir /srv/nfspoint
sudo mount /dev/drbd0 /srv/nfspoint

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Splitbrain
&lt;/h3&gt;

&lt;p&gt;On Split Brain Victim&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;drbdadm disconnect r0
drbdadm secondary r0
drbdadm connect --discard-my-data r0

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;On Split Brain Survivor&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;drbdadm primary r0
drbdadm connect r0

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
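
&lt;p&gt;Once both sides reconnect, the resync can be watched until the resource reports UpToDate:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;drbdadm status r0
drbdsetup status r0 --verbose

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;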



&lt;h3&gt;
  
  
  Inverting Resourcing
&lt;/h3&gt;

&lt;p&gt;On Current Primary:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;umount /srv/nfspoint
drbdadm secondary r0

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;On Secondary:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;drbdadm primary r0
mount /dev/drbd0 /srv/nfspoint

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  DRDB Broken?
&lt;/h2&gt;

&lt;p&gt;Reboot both servers&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;systemctl start drbd.service
drbdadm status

umount /dev/pve/nfspoint
mkfs.ext4 -b 4096 /dev/pve/nfspoint

dd if=/dev/zero of=/dev/drbd0 status=progress


mkfs -t ext4 /dev/drbd0

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Troubleshooting DRBD Issues
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Odd Mountpoint with Loop Device
&lt;/h4&gt;

&lt;p&gt;What should appear off &lt;code&gt;lsblk&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 1.7T 0 disk
├─sda1 8:1 0 1007K 0 part
├─sda2 8:2 0 512M 0 part
└─sda3 8:3 0 1.7T 0 part
  ├─pve-swap 253:0 0 8G 0 lvm [SWAP]
  ├─pve-root 253:1 0 96G 0 lvm /
  ├─pve-data_tmeta 253:2 0 15.6G 0 lvm
  │ └─pve-data-tpool 253:4 0 1.5T 0 lvm
  │ ├─pve-data 253:5 0 1.5T 0 lvm
  │ ├─pve-vm--105--disk--0 253:6 0 20G 0 lvm
  │ └─pve-nfspoint 253:7 0 550G 0 lvm
  │ └─drbd0 147:0 0 550G 0 disk /srv/nfspoint
  └─pve-data_tdata 253:3 0 1.5T 0 lvm
    └─pve-data-tpool 253:4 0 1.5T 0 lvm
      ├─pve-data 253:5 0 1.5T 0 lvm
      ├─pve-vm--105--disk--0 253:6 0 20G 0 lvm
      └─pve-nfspoint 253:7 0 550G 0 lvm
        └─drbd0 147:0 0 550G 0 disk /srv/nfspoint

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What was incorrectly appearing off &lt;code&gt;lsblk&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;loop0 7:0 0 200M 0 loop /srv/nfspoint
sda 8:0 0 1.7T 0 disk
├─sda1 8:1 0 1007K 0 part
├─sda2 8:2 0 512M 0 part
└─sda3 8:3 0 1.7T 0 part
  ├─pve-swap 253:0 0 8G 0 lvm [SWAP]
  ├─pve-root 253:1 0 96G 0 lvm /
  ├─pve-data_tmeta 253:2 0 15.6G 0 lvm
  │ └─pve-data-tpool 253:4 0 1.5T 0 lvm
  │ ├─pve-data 253:5 0 1.5T 0 lvm
  │ ├─pve-vm--100--disk--0 253:6 0 30G 0 lvm
  │ ├─pve-vm--102--disk--0 253:7 0 100G 0 lvm
  │ ├─pve-vm--103--disk--0 253:8 0 20G 0 lvm
  │ ├─pve-vm--104--disk--0 253:9 0 20G 0 lvm
  │ ├─pve-vm--101--disk--0 253:10 0 20G 0 lvm
  │ └─pve-nfspoint 253:11 0 550G 0 lvm
  │ └─drbd0 147:0 0 550G 0 disk
  └─pve-data_tdata 253:3 0 1.5T 0 lvm
    └─pve-data-tpool 253:4 0 1.5T 0 lvm
      ├─pve-data 253:5 0 1.5T 0 lvm
      ├─pve-vm--100--disk--0 253:6 0 30G 0 lvm
      ├─pve-vm--102--disk--0 253:7 0 100G 0 lvm
      ├─pve-vm--103--disk--0 253:8 0 20G 0 lvm
      ├─pve-vm--104--disk--0 253:9 0 20G 0 lvm
      ├─pve-vm--101--disk--0 253:10 0 20G 0 lvm
      └─pve-nfspoint 253:11 0 550G 0 lvm
        └─drbd0 147:0 0 550G 0 disk

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the server is creating a loop device and mounting that as the mount point, try rebooting the server, restarting the drbd service, and clearing out the device on the server having the issue and resyncing it with the primary.&lt;/p&gt;

&lt;p&gt;I still don’t know if it was split brain or what, but these were the commands I ran:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;umount /nfs/sharepoint # This was the location of the mountpoint
drbdadm disconnect r0
drbdadm connect --discard-my-data r0

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I honestly think rebooting the server in question fixed this issue. I think it was something about the &lt;code&gt;/dev/drbd0&lt;/code&gt; device that wasn’t working properly or created properly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Links
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://help.ubuntu.com/community/HighlyAvailableNFS"&gt;NFS Cluster&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/Distributed_Replicated_Block_Device"&gt;DRBD&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.linbit.com/linbit-software-download-page-for-linstor-and-drbd-linux-driver/"&gt;DRBD&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Links 2
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.howtoforge.com/high%5C_availability%5C_nfs%5C_drbd%5C_heartbeat%5C_p2"&gt;https://www.howtoforge.com/high\_availability\_nfs\_drbd\_heartbeat\_p2&lt;/a&gt; &lt;a href="https://pve.proxmox.com/wiki/Logical%5C_Volume%5C_Manager%5C_(LVM)"&gt;https://pve.proxmox.com/wiki/Logical\_Volume\_Manager\_(LVM)&lt;/a&gt; &lt;a href="https://linux.die.net/man/8/lvremove"&gt;https://linux.die.net/man/8/lvremove&lt;/a&gt; &lt;a href="https://access.redhat.com/documentation/en-us/red%5C_hat%5C_enterprise%5C_linux/6/html/logical%5C_volume%5C_manager%5C_administration/lv%5C_remove"&gt;https://access.redhat.com/documentation/en-us/red\_hat\_enterprise\_linux/6/html/logical\_volume\_manager\_administration/lv\_remove&lt;/a&gt; &lt;a href="https://serverfault.com/questions/266697/cant-remove-open-logical-volume"&gt;https://serverfault.com/questions/266697/cant-remove-open-logical-volume&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Links
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://ubuntu.com/server/docs/ubuntu-ha-drbd"&gt;Ubuntu - Configure HA drbd&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://serverfault.com/a/372001"&gt;SO - DRDB Not syncing between my nodes&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://serverfault.com/a/870223"&gt;SO - Split Brain&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://icicimov.github.io/blog/virtualization/Adding-DRBD-shared-volumes-to-Proxmox-to-support-Live-Migration/"&gt;Adding DRBD shared volumes to Proxmox to support Live Migration&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://launchpad.net/ubuntu/+source/drbd-utils/9.15.0-1"&gt;drbd-utils 9.15.0-1 Source&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://wiki.debian.org/BuildingTutorial"&gt;BuildingTutorial&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://tracker.debian.org/pkg/drbd-utils"&gt;drbd-utils&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://manpages.ubuntu.com/manpages/eoan/en/man8/drbdadm.8.html"&gt;drbdadm&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://documentation.suse.com/sle-ha/15-GA/html/SLE-HA-all/art-sleha-nfs-quick.html"&gt;Suse Configuring HA&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Bug Reports
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://bugs.launchpad.net/ubuntu/+source/drbd-utils/+bug/1866458"&gt;Bug Report&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Less useful links
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://stackoverflow.com/a/36347664"&gt;Primary to Primary&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://forum.proxmox.com/threads/how-do-i-delete-an-lvm-configuration-not-just-remove-the-storage.50398/"&gt;Delete an LVM Link&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://pve.proxmox.com/wiki/Logical_Volume_Manager_(LVM)"&gt;Creating Logical Volume in Proxmox&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://serverfault.com/a/863320"&gt;SO - cant run drbdadm up&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/system_design_guide/assembly_configuring-active-passive-nfs-server-in-a-cluster-system-design-guide"&gt;Redhat - configuring an active/passive nfs server in a red hat high availability cluster&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://help.ubuntu.com/community/HighlyAvailableNFS"&gt;Ubuntu HA NFS&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://documentation.suse.com/sle-ha/15-SP1/html/SLE-HA-all/art-sleha-nfs-quick.html"&gt;Suse - HA NFS Storage with DRBD and pacemaker&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>systemconfiguration</category>
      <category>sysadmin</category>
      <category>homelab</category>
      <category>networking</category>
    </item>
    <item>
      <title>The Blog Post about the Blog</title>
      <dc:creator>Jack Moore</dc:creator>
      <pubDate>Tue, 04 Aug 2020 05:00:00 +0000</pubDate>
      <link>https://dev.to/jmoore53/the-blog-post-about-the-blog-1225</link>
      <guid>https://dev.to/jmoore53/the-blog-post-about-the-blog-1225</guid>
      <description>&lt;p&gt;Without getting too meta, this is the post about my blog as it stands and where I see the future of it going.&lt;/p&gt;

&lt;h2&gt;
  
  
  Reasoning for the Meta
&lt;/h2&gt;

&lt;p&gt;I was exploring my Matomo Instance recently and noticed over the past year I have had 800 page views on this site. While this is really quite a small number, I was starting to think about the future of this blog and starting to spread the word on it more for both feedback and to spread more information to those who want/need it.&lt;/p&gt;

&lt;p&gt;I was thinking about mediums I can share these posts on and getting the word out on some of the current projects I am working on. I think it would be a great way to get more involved with the developer/sysadmin community and also spread information for anyone wanting to learn more.&lt;/p&gt;

&lt;h2&gt;
  
  
  Current Blog
&lt;/h2&gt;

&lt;p&gt;With the current blog as it stands, I create posts whenever I want about whatever I want. Nothing about this blog is a chore, and for the most part I enjoy maintaining it and writing these posts. Everything with this blog is easy. Netlify hosts it for free (shoutout Netlify), DNS is configured basically forever, and SSL on the site renews automatically. All three basic components of a blog are there. It’s everything I need right now to be perfect reference material for myself when I run into pesky recurring issues.&lt;/p&gt;

&lt;h3&gt;
  
  
  Material
&lt;/h3&gt;

&lt;p&gt;The material for the blog is about things I run into and the fixes/workarounds for them: issues pertaining to infrastructure, developing Ruby/Rails/Python/whatever, and also some concepts I find interesting. I think this is partially why the material flows and comes out easily for me. I am able to just dump fixes into posts and write about what I learned. As long as I continue to learn, the posts will continue to come. Whether good or bad, I am usually always documenting.&lt;/p&gt;

&lt;h3&gt;
  
  
  Writing Style
&lt;/h3&gt;

&lt;p&gt;The writing in these posts is very: &lt;em&gt;here’s the point, here’s why this works for me, if you want a more in-depth explanation look up the details on your own&lt;/em&gt;, and also very &lt;em&gt;I wrote one draft of this post and here it is&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Unfortunately for the reader, I really don’t read these posts back to myself like a novel in any way. Everything is meant to be technical, but usable. Most of the posts don’t “flow” per se. They more or less just come out in whatever English makes sense to me at the time. Dare I even say, every post is for the most part slapped together in a cryptic way that I will be able to comprehend the next time the issue happens. &lt;em&gt;If it’s happened once, it’s bound to happen again.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This blog is mostly for my reference, but if you are able to take something away, then cheers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Community
&lt;/h2&gt;

&lt;p&gt;I think feedback would be nice and I am all for writing back. This is mostly why I am going to start sharing these to a wider audience.&lt;/p&gt;

&lt;p&gt;I love comments and questions and will always do my best to answer them!&lt;/p&gt;

</description>
      <category>blog</category>
      <category>personalblog</category>
      <category>status</category>
      <category>meta</category>
    </item>
    <item>
      <title>Vim Configuration</title>
      <dc:creator>Jack Moore</dc:creator>
      <pubDate>Tue, 12 May 2020 05:00:00 +0000</pubDate>
      <link>https://dev.to/jmoore53/vim-configuration-4afo</link>
      <guid>https://dev.to/jmoore53/vim-configuration-4afo</guid>
      <description>&lt;p&gt;Vim, my server editor of choice. (I prefer vscode for development on my workstation.)&lt;/p&gt;

&lt;h2&gt;
  
  
  Plugins
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;vim-rainbow&lt;/li&gt;
&lt;li&gt;powerline&lt;/li&gt;
&lt;li&gt;nerdtree&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  ~/.vimrc
&lt;/h2&gt;

&lt;p&gt;This .vimrc is pretty gross looking, but please just note the major configurations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Line numbers/relative line numbers&lt;/li&gt;
&lt;li&gt;Tabs are 4 spaces&lt;/li&gt;
&lt;li&gt;Typing &lt;code&gt;{&lt;/code&gt; automatically generates a closing &lt;code&gt;\n}&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;NerdTree is enabled&lt;/li&gt;
&lt;li&gt;The theme is Solarized&lt;/li&gt;
&lt;li&gt;Parentheses are pretty..
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"VIMRC Configuration

"Default configuration
"Relative Line Number
:set number
:set rnu

"Spacing
:set tabstop=4
:set shiftwidth=4
:set expandtab

"Remap..
inoremap {&amp;lt;cr&amp;gt; {&amp;lt;cr&amp;gt;}&amp;lt;esc&amp;gt;ko
inoremap ( ()&amp;lt;left&amp;gt;

"Powerline for VIM
set rtp+=/home/jack/.local/lib/python3.8/site-packages/powerline/bindings/vim
set laststatus=2
set t_Co=256

"Nerd Tree
autocmd StdinReadPre * let s:std_in=1
autocmd VimEnter * if argc() == 0 &amp;amp;&amp;amp; !exists("s:std_in") | NERDTree | endif

autocmd StdinReadPre * let s:std_in=1
autocmd VimEnter * if argc() == 1 &amp;amp;&amp;amp; isdirectory(argv()[0]) &amp;amp;&amp;amp; !exists("s:std_in") | exe 'NERDTree' argv()[0] | wincmd p | ene | exe 'cd '.argv()[0] | endif

autocmd bufenter * if (winnr("$") == 1 &amp;amp;&amp;amp; exists("b:NERDTree") &amp;amp;&amp;amp; b:NERDTree.isTabTree()) | q | endif

map &amp;lt;C-n&amp;gt; :NERDTreeToggle&amp;lt;CR&amp;gt;

"Solarized Theme
syntax enable
set background=dark
colorscheme solarized
let g:solarized_termcolors=256

"Vim-Rainbow
let g:rainbow_active = 1

let g:rainbow_load_separately = [
    \ ['*' , [['(', ')'], ['\[', '\]'], ['{', '}']] ],
    \ ['*.tex' , [['(', ')'], ['\[', '\]']] ],
    \ ['*.cpp' , [['(', ')'], ['\[', '\]'], ['{', '}']] ],
    \ ['*.{html,htm}' , [['(', ')'], ['\[', '\]'], ['{', '}'], ['&amp;lt;\a[^&amp;gt;]*&amp;gt;', '&amp;lt;/[^&amp;gt;]*&amp;gt;']] ],
    \ ]

let g:rainbow_guifgs = ['RoyalBlue3', 'DarkOrange3', 'DarkOrchid3', 'FireBrick']
let g:rainbow_ctermfgs = ['lightblue', 'lightgreen', 'yellow', 'red', 'magenta']

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Bash Powerline
&lt;/h2&gt;

&lt;p&gt;Install Powerline&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install git+git://github.com/Lokaltog/powerline
wget https://github.com/powerline/powerline/raw/develop/font/PowerlineSymbols.otf
wget https://github.com/powerline/powerline/raw/develop/font/10-powerline-symbols.conf
mv PowerlineSymbols.otf /usr/share/fonts/
fc-cache -vf /usr/share/fonts/
mv 10-powerline-symbols.conf /etc/fonts/conf.d/

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;~/.bashrc&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Powerline
export PATH="$HOME/.local/bin:$PATH"
export POWERLINE_COMMAND=powerline
export POWERLINE_CONFIG_COMMAND=powerline-config
powerline-daemon -q
POWERLINE_BASH_CONTINUATION=1
POWERLINE_BASH_SELECT=1

. /home/jack/.local/lib/python3.8/site-packages/powerline/bindings/bash/powerline.sh

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Getting Glyphs to work when SSHing from Mac to Server
&lt;/h2&gt;

&lt;p&gt;I used FIRA Code to get powerline working on the server.&lt;/p&gt;

&lt;p&gt;In iTerm2, &lt;code&gt;command&lt;/code&gt; + &lt;code&gt;,&lt;/code&gt; opens Preferences; under &lt;code&gt;Profiles&lt;/code&gt; &amp;gt; &lt;code&gt;Text&lt;/code&gt;, set the Non-ASCII font to &lt;code&gt;Fira Code&lt;/code&gt; (Retina), size 12.&lt;/p&gt;

&lt;h2&gt;
  
  
  Links
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/preservim/nerdtree"&gt;Github - NerdTree&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/altercation/vim-colors-solarized"&gt;Github - vim-colors-solarized&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/powerline/powerline"&gt;Github - Powerline&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/frazrepo/vim-rainbow"&gt;Github - vim-rainbow&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://stackoverflow.com/questions/10921441/where-is-my-vimrc-file"&gt;SO - Where is my vimrc?&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/powerline/powerline/issues/850"&gt;Github Issues - Powerline for Bash&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.tecmint.com/powerline-adds-powerful-statuslines-and-prompts-to-vim-and-bash/"&gt;TecMint - Powerline&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>vim</category>
      <category>configuration</category>
      <category>systemadministration</category>
      <category>config</category>
    </item>
    <item>
      <title>Metaprogramming Rails Helper Module for Accessible Attributes</title>
      <dc:creator>Jack Moore</dc:creator>
      <pubDate>Thu, 16 Apr 2020 05:00:00 +0000</pubDate>
      <link>https://dev.to/jmoore53/metaprogramming-rails-helper-module-for-accessible-attributes-2iik</link>
      <guid>https://dev.to/jmoore53/metaprogramming-rails-helper-module-for-accessible-attributes-2iik</guid>
      <description>&lt;p&gt;Dynamically creating instance methods:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module RundeckHelper

    def self.attr_value(*args)
        # Singleton Very Important
        singleton_class = class &amp;lt;&amp;lt; self; self; end
        args.each do |key|
            name = key
            key = key.to_s.upcase
            val = nil
            if !ENV[key].nil?
                if ENV[key] != ""
                    val = ENV[key]
                else
                    val = nil
                end
            elsif !Rails.application.credentials.rundeck.nil?
                key2 = key.downcase
                if !Rails.application.credentials.rundeck[key2].nil?
                    val = Rails.application.credentials.rundeck[key2]
                else
                    val = nil
                end
            elsif !RundeckConfigurationOption.find_by(name: key).nil?
                val = RundeckConfigurationOption.find_by(name: key).value
            else
                val = nil
            end
            # Send the method to the instance
            singleton_class.send(:define_method, name) do
                return val
            end
        end
    end

    attr_value :base_url, :create_job_id, :project_id, :create_environment_id, :create_instance_id
end

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  What is this code and how does it work?
&lt;/h2&gt;

&lt;p&gt;Going line by line: this code lives in a Rails module, which is what I believe made it difficult. Creating custom &lt;code&gt;attr_&lt;/code&gt;-style accessors had to be hacked together because the module is never instantiated the way a class would be; its methods are called directly on the module itself. The module basically provides helpers to the classes that &lt;code&gt;require&lt;/code&gt; it.&lt;/p&gt;

&lt;p&gt;Breaking down the key lines:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;def self.attr_value(*args)&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;singleton_class = class &amp;lt;&amp;lt; self; self; end&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;singleton_class.send(:define_method, name) do&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;attr_value :base_url, :create_job_id, :project_id, :create_environment_id, :create_instance_id&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These lines are crucial to creating custom attributes for the variables passed into the &lt;code&gt;attr_value&lt;/code&gt; function.&lt;/p&gt;

&lt;p&gt;Starting from the top: &lt;code&gt;def self.attr_value(*args)&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This line defines the method; the splat means it accepts any number of arguments.&lt;/li&gt;
&lt;li&gt;Because it is defined on &lt;code&gt;self&lt;/code&gt;, it is a module-level method that can be called from within the module body itself.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Moving on: &lt;code&gt;singleton_class = class &amp;lt;&amp;lt; self; self; end&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This grabs the module’s singleton class (the hidden class that holds methods defined on the module object itself) and stores it in a local variable.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Sending Methods: &lt;code&gt;singleton_class.send(:define_method, name) do&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We take the singleton class we grabbed from the module and send it &lt;code&gt;:define_method&lt;/code&gt; to define a new method on it&lt;/li&gt;
&lt;li&gt;the &lt;code&gt;name&lt;/code&gt; variable, as seen above, is just the symbol that was passed in&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Calling the Attribute: &lt;code&gt;attr_value :base_url, :create_job_id, :project_id, :create_environment_id, :create_instance_id&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This call defines the accessors, letting us use methods like &lt;code&gt;RundeckHelper.base_url&lt;/code&gt; and &lt;code&gt;self.base_url&lt;/code&gt;, which return the value stored in the environment, in the credentials, or in the database.&lt;/li&gt;
&lt;/ul&gt;
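&lt;p&gt;As a minimal sketch of the same pattern outside Rails (the &lt;code&gt;ConfigHelper&lt;/code&gt; module and its &lt;code&gt;CONFIG&lt;/code&gt; hash are made-up stand-ins for the ENV/credentials/database lookup, and Ruby’s built-in &lt;code&gt;singleton_class&lt;/code&gt; method returns the same object as the class-shovel idiom above):&lt;/p&gt;

```ruby
# Minimal, self-contained version of the pattern: define reader
# methods on a module's singleton class so they are callable as
# ConfigHelper.base_url. CONFIG stands in for the real lookup chain.
module ConfigHelper
  CONFIG = { base_url: "https://rundeck.example.com", project_id: "42" }

  def self.attr_value(*args)
    args.each do |name|
      # the real helper resolves ENV, then Rails credentials, then the DB
      val = CONFIG[name]
      # define a reader on the module's singleton class
      singleton_class.send(:define_method, name) { val }
    end
  end

  attr_value :base_url, :project_id
end

ConfigHelper.base_url   # "https://rundeck.example.com"
```

&lt;p&gt;This is only the skeleton; the nil-checking fallthrough in the real module stays the same.&lt;/p&gt;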

&lt;h2&gt;
  
  
  Links
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://stackoverflow.com/a/15025036"&gt;SO Link to Creating Instance Methods Dynamically&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ruby</category>
      <category>attributes</category>
      <category>getters</category>
      <category>setters</category>
    </item>
    <item>
      <title>E2E Modded Minecraft Server</title>
      <dc:creator>Jack Moore</dc:creator>
      <pubDate>Sun, 15 Mar 2020 05:00:00 +0000</pubDate>
      <link>https://dev.to/jmoore53/e2e-modded-minecraft-server-3j39</link>
      <guid>https://dev.to/jmoore53/e2e-modded-minecraft-server-3j39</guid>
      <description>&lt;p&gt;This is the second time I have had to run through install of a minecraft client and server. I decided I need documentation this go around for the next time I want to install a modded minecraft server.&lt;/p&gt;

&lt;p&gt;I decided to use ansible to make deployments easy. My biggest struggle was upgrading from an older version of ansible using python 2.7 to ansible 2.9.6 using python 3.5.&lt;/p&gt;

&lt;h2&gt;
  
  
  Client
&lt;/h2&gt;

&lt;p&gt;Getting the client setup wasn’t too difficult.&lt;/p&gt;

&lt;h3&gt;
  
  
  MultiMc
&lt;/h3&gt;

&lt;p&gt;I am running the Minecraft Client from my XPS workstation which runs Ubuntu 18.04.&lt;/p&gt;

&lt;p&gt;I knew I had run Minecraft from the workstation before, but when I went to find the &lt;code&gt;.jar&lt;/code&gt; client I didn’t know where it was. I couldn’t find it anywhere. I looked recursively in &lt;code&gt;/home/jack&lt;/code&gt; and in &lt;code&gt;/opt&lt;/code&gt;. It was nowhere to be found.&lt;/p&gt;

&lt;p&gt;I started to install the Forge client on my workstation when I stumbled upon MultiMC, which supports Linux.&lt;/p&gt;

&lt;p&gt;I then found MultiMC already installed on my workstation with some older clients set up.&lt;/p&gt;

&lt;h3&gt;
  
  
  Setting RAM
&lt;/h3&gt;

&lt;p&gt;MultiMC lets you set the RAM. I have 16GB of RAM available, and I allocated 8GB to the MC instance/JVM. I haven’t run into any issues thus far.&lt;/p&gt;

&lt;h3&gt;
  
  
  Installing the E2E ModPack
&lt;/h3&gt;

&lt;p&gt;Installing the modpack was relatively simple, aside from the fact that the modpack is recommended to be installed with the Twitch client. I had to find the zip file of the client instead. The version I wanted was &lt;a href="https://www.curseforge.com/minecraft/modpacks/enigmatica2expert/files/2889433"&gt;here&lt;/a&gt;. It was about 20MB.&lt;/p&gt;

&lt;p&gt;The file is a zip, &lt;code&gt;Enigmatica2Expert-1.77.zip&lt;/code&gt;. In MultiMC, &lt;code&gt;add an instance&lt;/code&gt; and then choose a zip file modpack. Also make sure to have the correct Minecraft version installed; this modpack uses MC 1.12.2.&lt;/p&gt;

&lt;h2&gt;
  
  
  Server
&lt;/h2&gt;

&lt;p&gt;The client requires 5GB, but recommends 6GB to 8GB. I wanted chunk loading and a few other compute and ram intensive features of Minecraft to run on a server so my workstation isn’t completely bogged down with processes.&lt;/p&gt;

&lt;p&gt;The server also allows for other players to join.&lt;/p&gt;

&lt;h3&gt;
  
  
  Ansible
&lt;/h3&gt;

&lt;p&gt;I know I will want to spin up a MC server again and since I have done this before I have decided to create an ansible playbook for the service.&lt;/p&gt;

&lt;p&gt;The playbook will need to be revised, but currently everything is in the common role. As I start to add more servers for games I will revise the roles for the different applications.&lt;/p&gt;

&lt;p&gt;The directory service looks like the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;.
| ____ group_vars
| | ____ all
| ____ site.yml
| ____ roles
| | ____ common
| | | ____ files
| | | | ____ forge-1.12.2-14.23.5.2847-installer.jar
| | | | ____ eula.txt
| | | | ____ mc13.service
| | | ____ tasks
| | | | ____ configure_modpack.yml
| | | | ____ main.yml
| | | | ____ install_java.yml
| | | | ____ install_forge.yml
| | | | ____ start_server.yml
| ____ hosts
| ____ host_vars

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;site.yml&lt;/code&gt; in the playbook root directory calls the common role (&lt;code&gt;role/common/tasks/main.yml&lt;/code&gt;).&lt;/p&gt;
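&lt;p&gt;For reference, a minimal &lt;code&gt;site.yml&lt;/code&gt; for a layout like this might look like the following sketch (the &lt;code&gt;minecraft&lt;/code&gt; host group name is an assumption; the post only shows the directory tree):&lt;/p&gt;

```yaml
# Hypothetical site.yml: apply the common role to the game-server hosts.
# The "minecraft" group name is an assumption, not from the post.
- hosts: minecraft
  become: true
  roles:
    - common
```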

&lt;p&gt;The common role runs as root. Really, this is a preliminary playbook that runs everything as root; it is a temporary server meant to be run for fun. A more permanent solution would run as a minecraft user and take more variables into account.&lt;/p&gt;

&lt;p&gt;This example uses a server with 8GB of RAM and 2 CPUs.&lt;/p&gt;

&lt;p&gt;The one note I will make on this ansible setup is that the &lt;code&gt;roles/common/tasks/configure_modpack.yml&lt;/code&gt; file copies files from my user profile up to the server. (I had trouble pulling down &lt;code&gt;zip&lt;/code&gt; server files..) Here is what the task looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: (Configure Modpack) Copy Folder - Mods
  copy:
    src: /home/jack/.local/share/multimc/instances/Enigmatica2Expert-1.77/minecraft/mods
    dest: /root

- name: (Configure Modpack) Copy Folder - Scripts
  copy:
    src: /home/jack/.local/share/multimc/instances/Enigmatica2Expert-1.77/minecraft/scripts
    dest: /root

- name: (Configure Modpack) Copy Folder - Schematics
  copy:
    src: /home/jack/.local/share/multimc/instances/Enigmatica2Expert-1.77/minecraft/schematics
    dest: /root

- name: (Configure Modpack) Copy Folder - Resources
  copy:
    src: /home/jack/.local/share/multimc/instances/Enigmatica2Expert-1.77/minecraft/resources
    dest: /root

- name: (Configure Modpack) Copy Folder - Config
  copy:
    src: /home/jack/.local/share/multimc/instances/Enigmatica2Expert-1.77/minecraft/config
    dest: /root

- name: (Configure Modpack) Copy Folder - Manifest.json
  copy:
    src: /opt/minecraft/manifest.json
    dest: /root

- name: (Configure Modpack) Add Eula
  copy:
    src: eula.txt
    dest: /root

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Forge Installer vs Forge Universal
&lt;/h3&gt;

&lt;p&gt;The Forge installer is required to install the MC server. The Forge universal jar is what actually runs the server.&lt;/p&gt;

&lt;h3&gt;
  
  
  Turning the Minecraft Server into a Service
&lt;/h3&gt;

&lt;p&gt;The service runs java from an absolute path, specifies 8GB of RAM allocated to the JVM, and points to a jar file in the root user’s home directory. The &lt;code&gt;nogui&lt;/code&gt; at the end tells Forge not to open a GUI.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[Unit]
Description=Minecraft Server
After=network.target
StartLimitIntervalSec=0

[Service]
Type=simple
Restart=no
RestartSec=1
User=root
WorkingDirectory=/root
ExecStart=/usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Xms8G -jar /root/forge-1.12.2-14.23.5.2847-universal.jar nogui

[Install]
WantedBy=multi-user.target

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>minecraft</category>
      <category>server</category>
      <category>hosting</category>
      <category>gaming</category>
    </item>
    <item>
      <title>Tacotron-2 - Text to Speech, My Speech - Part 1</title>
      <dc:creator>Jack Moore</dc:creator>
      <pubDate>Tue, 19 Nov 2019 05:00:00 +0000</pubDate>
      <link>https://dev.to/jmoore53/tacotron-2-text-to-speech-my-speech-part-1-2k5d</link>
      <guid>https://dev.to/jmoore53/tacotron-2-text-to-speech-my-speech-part-1-2k5d</guid>
      <description>&lt;p&gt;The gist of this post is that every day for the past few weeks I have gone home and talked to my computer.Yes, you read that correctly. I have gone home and talked to my computer.Also, don’t let the title fool you. The beef of this post is setting up an environment and a process to help me develop a custom text to speech program. I would only call this a mere glance into the Text-To-Speech Algorithms.&lt;/p&gt;

&lt;p&gt;I need the future, now. This project is to build a text to speech system using my own voice as the training model! I am very excited to build this from the ground up with my own voice as the training data.&lt;/p&gt;

&lt;p&gt;I will be using a handful of artificial intelligence libraries to ensure this process goes as smoothly as possible. Some include Mozilla’s TTS, Gentle for speech mapping, SoX for data cutting, and of course Python…&lt;/p&gt;

&lt;h2&gt;
  
  
  Picking a Library
&lt;/h2&gt;

&lt;p&gt;After looking at and evaluating the libraries that are out there, I decided to go with &lt;a href="https://github.com/mozilla/TTS"&gt;Mozilla’s TTS&lt;/a&gt; Library. I felt that their library was relatively easy to put together and felt I could easily reproduce my own voice from text with their library.&lt;/p&gt;

&lt;h2&gt;
  
  
  Building a Dataset
&lt;/h2&gt;

&lt;p&gt;It appears having a good, no scratch that, perfect, dataset is the most important part of building any decent text to speech application.&lt;/p&gt;

&lt;p&gt;I looked at the LJ Speech Dataset and decided about 24 hours (close to ~13,100 utterances/audio clips) of my own time would be needed to record and collect the data. I am sure this will take upwards of 40-50 hours to ensure the data is properly extracted, transformed, and loaded. I plan to use the &lt;a href="https://www.readbeyond.it/aeneas/docs/"&gt;aeneas&lt;/a&gt; library to match my speech to text. To confirm this is the correct way to build a model, I also looked at other datasets. The other popular datasets are from the &lt;a href="http://www.cstr.ed.ac.uk/projects/blizzard/"&gt;Blizzard challenge&lt;/a&gt; and the &lt;a href="https://www.caito.de/2019/01/the-m-ailabs-speech-dataset/"&gt;M-AILABS Speech Dataset&lt;/a&gt;. I could have used these, but the LJ Speech Dataset seemed easier to replicate.&lt;/p&gt;

&lt;p&gt;Mozilla provides a great &lt;a href="https://discourse.mozilla.org/t/custom-voice-tts-not-learning/40897"&gt;article here&lt;/a&gt; on how to build a custom voice Text to Speech Application. The article mentioned will be one of many I will be using to learn more about building my custom TTS program.&lt;/p&gt;

&lt;h3&gt;
  
  
  There’s really just no chance
&lt;/h3&gt;

&lt;p&gt;There is not a fucking chance I sit down and read 13,000 one-liners back to back. I need to find an already existing text broken up into sentences, match it to the &lt;code&gt;wav&lt;/code&gt; file, and break it down into 5-second &lt;code&gt;wav&lt;/code&gt; files. &lt;strong&gt;Enter Python.&lt;/strong&gt; This isn’t too crazy, but my plan is to basically read a chapter a day until I am done with three books. This should get me to ~15,000 sentences, which should be all that’s needed for the training model. I will feed the model more data if I feel it is necessary.&lt;/p&gt;

&lt;p&gt;The process will look like the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Find a full plain-text book online&lt;/li&gt;
&lt;li&gt;Parse the text sentence by sentence into a single data file (Python…)&lt;/li&gt;
&lt;li&gt;Read and record the single file to a single &lt;code&gt;wav&lt;/code&gt; file&lt;/li&gt;
&lt;li&gt;Use the Python library aeneas to match text to speech (still in the bigger file)&lt;/li&gt;
&lt;li&gt;Use Python to break the large &lt;code&gt;wav&lt;/code&gt; file into smaller &lt;code&gt;wav&lt;/code&gt; files using ffmpeg&lt;/li&gt;
&lt;li&gt;Use aeneas to create the LJ &lt;code&gt;wavs&lt;/code&gt; folder and &lt;code&gt;.csv&lt;/code&gt; file&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;em&gt;So I thought…&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Refined Process
&lt;/h3&gt;

&lt;p&gt;The 6-step process above is nice, but almost a little unrealistic and way too time consuming for someone as lazy as myself. If it can be better, I will make it better. Here is the new process:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Find a text/plaintext script (movie scripts are fun to read) -&amp;gt; &lt;code&gt;plaintext&lt;/code&gt; file&lt;/li&gt;
&lt;li&gt;Record the text in GarageBand (I’m on Mac; I couldn’t get my mic to work with Audacity) and save it to wav format -&amp;gt; &lt;code&gt;wav&lt;/code&gt; file&lt;/li&gt;
&lt;li&gt;Upload the text and the unbroken large wav file to Gentle for alignment -&amp;gt; &lt;code&gt;json&lt;/code&gt; file&lt;/li&gt;
&lt;li&gt;Parse the JSON returned from Gentle and break up the large file with sox into an LJSpeech-style dataset -&amp;gt; &lt;code&gt;wavs&lt;/code&gt; folder and &lt;code&gt;csv&lt;/code&gt; mapping to the files&lt;/li&gt;
&lt;li&gt;Pass the LJSpeech-style dataset to the TTS model&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Seems Easy Enough.&lt;/p&gt;
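&lt;p&gt;Step 4 is the only part with any real logic, so here is a hedged Python sketch of it. The &lt;code&gt;"words"&lt;/code&gt;/&lt;code&gt;"start"&lt;/code&gt;/&lt;code&gt;"end"&lt;/code&gt; fields match Gentle’s alignment output, while the sample sentences, the filenames, and the two-field &lt;code&gt;id|text&lt;/code&gt; metadata rows are simplifications for illustration:&lt;/p&gt;

```python
# Sketch: turn a Gentle-style alignment JSON into LJSpeech-style
# metadata rows plus sox commands that cut the big recording into
# per-sentence wav files. The sample data below is made up.
import json

gentle_json = json.dumps({
    "words": [
        {"word": "hello",   "start": 0.10, "end": 0.55},
        {"word": "there",   "start": 0.60, "end": 1.05},
        {"word": "general", "start": 1.80, "end": 2.40},
        {"word": "kenobi",  "start": 2.45, "end": 3.10},
    ]
})
sentences = [["hello", "there"], ["general", "kenobi"]]

words = json.loads(gentle_json)["words"]
rows, cmds = [], []
idx = 0
for n, sent in enumerate(sentences, start=1):
    # take as many aligned words as the sentence has
    chunk = words[idx : idx + len(sent)]
    idx += len(sent)
    start, end = chunk[0]["start"], chunk[-1]["end"]
    name = f"seg_{n:04d}"
    rows.append(f"{name}|{' '.join(sent)}")                 # metadata.csv line
    cmds.append(f"sox big.wav wavs/{name}.wav trim {start} ={end}")
```

&lt;p&gt;Each generated &lt;code&gt;sox ... trim start =end&lt;/code&gt; command cuts one sentence out of the big recording (the &lt;code&gt;=&lt;/code&gt; marks an absolute end position in sox), and the rows become the &lt;code&gt;metadata.csv&lt;/code&gt;.&lt;/p&gt;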

&lt;h3&gt;
  
  
  Dataset
&lt;/h3&gt;

&lt;p&gt;Microsoft’s site says, “the data needs to be a collection (.zip) of audio files (.wav) as individual utterances. Each audio file should be 15 seconds or less in length, paired with a formatted transcript (.txt).” They are basically correct with the information they provide.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/dubreuia/hosting/tree/master/mozilla-tts/custom-dataset-sample"&gt;This dataset example for mozillas TTS&lt;/a&gt; is what the custom dataset example should look like. I found the link on the &lt;a href="https://discourse.mozilla.org/t/training-custom-voice-doesnt-train/42272"&gt;Mozilla Form here&lt;/a&gt;. There is a good forum post &lt;a href="https://discourse.mozilla.org/t/training-custom-voice-doesnt-train/42272"&gt;mostly here&lt;/a&gt; and &lt;a href="https://discourse.mozilla.org/t/custom-voice-tts-not-learning/40897"&gt;here&lt;/a&gt; that goes over training a custom voice.&lt;/p&gt;

&lt;p&gt;In the &lt;code&gt;/custom-dataset-sample/&lt;/code&gt; directory there exists a &lt;code&gt;wavs&lt;/code&gt; directory and a &lt;code&gt;metadata_sample.csv&lt;/code&gt; file. The &lt;code&gt;wavs&lt;/code&gt; directory stores &lt;code&gt;.wav&lt;/code&gt; files and the &lt;code&gt;metadata_sample.csv&lt;/code&gt; is structured to map &lt;code&gt;wavs/file1.wav&lt;/code&gt; to the text inside of the &lt;code&gt;wav&lt;/code&gt; file.&lt;/p&gt;

&lt;h3&gt;
  
  
  Writing a Preprocessor
&lt;/h3&gt;

&lt;p&gt;Because we will be using a similar format to the LJ Dataset, I will need to make sure the preprocessor uses the correct data processor. This could really fuck my model otherwise.&lt;/p&gt;

&lt;h2&gt;
  
  
  Training the Model
&lt;/h2&gt;

&lt;p&gt;Looking at &lt;a href="https://keithito.github.io/audio-samples/"&gt;this Tacotron example&lt;/a&gt;, it appears the LJ Speech Dataset went through 441k steps and the results sound decent. I will be using the Tacotron2 library.&lt;/p&gt;

&lt;h2&gt;
  
  
  Looking Forward
&lt;/h2&gt;

&lt;p&gt;Currently I know the process I am going to follow to achieve this goal of having my voice used by a computer. My plan is to write part 2 of this series after I am done with all the data collection.&lt;/p&gt;

&lt;p&gt;This will allow me to really dive deep into curve fitting and understand the specifics of how ML/AI works. I plan to have a demystified understanding of AI/ML when I return for the second post.&lt;/p&gt;

&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/kkroening/ffmpeg-python"&gt;FFMPEG Python Library&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://www.erogol.com/text-speech-deep-learning-architectures/"&gt;Text To Speech Deep Learning Architectures&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/readbeyond/aeneas"&gt;Github Aeneas - A set of tools to automagically synchronize audio and text&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.readbeyond.it/aeneas/docs/"&gt;Aeneas Docs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/NVIDIA/tacotron2"&gt;Github - Tacotron2 Nvidia&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/Rayhane-mamah/Tacotron-2#tacotron-2"&gt;Github - Tacotron2&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.caito.de/2019/01/the-m-ailabs-speech-dataset/"&gt;The M-AILABS Speech Dataset&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/Rayhane-mamah/Tacotron-2/issues/4"&gt;Tacotron-2 Implementation Status and planned TODOs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/r9y9/wavenet_vocoder"&gt;WaceNet vocoder&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/keithito/tacotron"&gt;Tacotron&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://keithito.github.io/audio-samples/"&gt;Tacotron Audio Example&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/Rayhane-mamah/Tacotron-2/issues/4#issuecomment-378741465"&gt;Tacotron 2 Quick Observations Sharing&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://keithito.com/LJ-Speech-Dataset/"&gt;The LJ Speech Dataset&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.reddit.com/r/MachineLearning/comments/az4rb5/d_what_makes_a_good_tts_dataset/"&gt;What makes a good Dataset&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.reddit.com/r/MachineLearning/comments/87klvo/r_expressive_speech_synthesis_with_tacotron/dwfm9p5/"&gt;Expressive Speech Synthesis with Tacotron&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://news.ycombinator.com/item?id=18490291"&gt;HN - Building a dataset&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/how-to-custom-voice-prepare-data"&gt;Microsoft - Data Types&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/how-to-custom-voice-create-voice"&gt;Microsoft - Create A Custom Voice&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Resources Final?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/mozilla/TTS/wiki/Dataset"&gt;Github - Mozilla - Dataset&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>backend</category>
      <category>tts</category>
      <category>fullstack</category>
    </item>
    <item>
      <title>Mounting S3 as a filesystem with S3FS</title>
      <dc:creator>Jack Moore</dc:creator>
      <pubDate>Wed, 01 May 2019 05:00:00 +0000</pubDate>
      <link>https://dev.to/jmoore53/mounting-s3-as-a-filesystem-with-s3fs-538i</link>
      <guid>https://dev.to/jmoore53/mounting-s3-as-a-filesystem-with-s3fs-538i</guid>
      <description>&lt;h3&gt;
  
  
  S3FS
&lt;/h3&gt;

&lt;p&gt;This month I spent time working on creating a seamless file transfer system between my development machines and the cloud using an AWS S3 bucket and S3FS. I have a Mac laptop and an Ubuntu based desktop that have important files that often get out of sync with one another. These files exist outside of version control and I was looking for a better way than Google Drive or emails with attachments to transfer them among machines. After doing some quick research, I found &lt;a href="https://github.com/s3fs-fuse/s3fs-fuse"&gt;S3FS&lt;/a&gt; as a way to mount S3 as a filesystem and decided this would be the best tool to use.&lt;/p&gt;

&lt;h3&gt;
  
  
  Easy Install
&lt;/h3&gt;

&lt;p&gt;The developers of S3FS make it pretty easy to install the tool across unix based platforms and I had no trouble getting it installed on my Mac using:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;brew cask install osxfuse
brew install s3fs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and on Ubuntu with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt-get install s3fs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Credentials &amp;amp; AWS
&lt;/h3&gt;

&lt;p&gt;The most complicated part of getting the tool up and running is configuring the AWS credentials and the S3 bucket.&lt;/p&gt;

&lt;p&gt;Creating the bucket was pretty much a snap, but be sure to apply the correct settings to the bucket and give it a DNS-compliant name. I named mine &lt;em&gt;itsltns&lt;/em&gt;, as I knew it was DNS-compliant and would be easy to remember.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TO MAKE THIS CLEAR:&lt;/strong&gt; the credentials s3fs is looking for are the &lt;em&gt;Access Key ID&lt;/em&gt; and the &lt;em&gt;Secret Access Key&lt;/em&gt;. These can be found by logging in to AWS and finding the &lt;a href="https://console.aws.amazon.com/iam/home?region=us-east-2#/security_credentials"&gt;&lt;strong&gt;My Security Credentials&lt;/strong&gt;&lt;/a&gt; page. This pair of keys is most commonly used by developers when developing AWS applications. Make sure these keys or your IAM user have the correct permissions for S3; I gave my IAM user full permissions to read and write.&lt;/p&gt;

&lt;p&gt;After generating the key pair, the keys need to go in a file somewhere you can find them. It was &lt;a href="https://github.com/s3fs-fuse/s3fs-fuse#examples"&gt;recommended&lt;/a&gt; to put them in &lt;code&gt;~/.passwd-s3fs&lt;/code&gt; using &lt;code&gt;ACCESS_KEY_ID:ACCESS_KEY_SECRET&lt;/code&gt; as the format, but I put them with my AWS SSH keys in &lt;code&gt;~/.ssh/AWS/passwd-s3fs&lt;/code&gt;. This file must be readable and writable only by its owner, with no permissions for group or others. After running &lt;code&gt;chmod 600 ~/.ssh/AWS/passwd-s3fs&lt;/code&gt;, we are able to run the s3fs command and access the S3 bucket.&lt;/p&gt;
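&lt;p&gt;As a sketch (using the placeholder key names above, not real values), creating and locking down the credentials file could look like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo "ACCESS_KEY_ID:ACCESS_KEY_SECRET" &amp;gt; ~/.ssh/AWS/passwd-s3fs
chmod 600 ~/.ssh/AWS/passwd-s3fs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;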

&lt;h3&gt;
  
  
  Mountpoints
&lt;/h3&gt;

&lt;p&gt;Keeping everything simple and the mountpoint owned by a single user, I opted to create a mountpoint on both machines within my home directory. I created it with &lt;code&gt;mkdir ~/itsltns-s3&lt;/code&gt; and, as a bit of foreshadowing, I also ran &lt;code&gt;chmod a+rwx ~/itsltns-s3&lt;/code&gt; because I ran into permission issues when reading and writing files from Mac to Ubuntu and from Ubuntu to Mac.&lt;/p&gt;

&lt;p&gt;With the mountpoint configured, I was up and running using the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;s3fs itsltns ~/itsltns-s3 -o passwd_file=${HOME}/.ssh/AWS/passwd-s3fs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This allowed me to run a basic &lt;code&gt;ls ~/itsltns-s3&lt;/code&gt;, which returned nothing. To test whether the mount was working, I added a &lt;code&gt;test.txt&lt;/code&gt; file, and sure enough it was copied up to the cloud and appeared in the &lt;code&gt;itsltns&lt;/code&gt; S3 bucket.&lt;/p&gt;

&lt;h3&gt;
  
  
  Issues &amp;amp; Logging
&lt;/h3&gt;

&lt;p&gt;If you run into any issues with mounting or &lt;code&gt;s3fs&lt;/code&gt; at all, I would highly recommend enabling logging with the &lt;code&gt;-o dbglevel=info -f -o curldbg&lt;/code&gt; flags at the end. After adding the debug options, the command looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;s3fs itsltns ~/itsltns-s3 -o passwd_file=${HOME}/.ssh/AWS/passwd-s3fs -o dbglevel=info -f -o curldbg
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  MacOS - launchd - mount on user login
&lt;/h3&gt;

&lt;p&gt;One of the features I wanted was automounting the S3 drive on login, so I opted to create a &lt;a href="https://www.launchd.info/"&gt;launchd&lt;/a&gt; service to run on login.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;When working with launchd services,&lt;/strong&gt; don’t bother installing the LaunchControl application advertised on the launchd website unless you plan on doing serious agent or service development. The application has minimal functionality and requires a license.&lt;/p&gt;

&lt;p&gt;Setting up the launchd service wasn’t terrible, but I did have to debug it with &lt;code&gt;launchctl list | grep itsltns&lt;/code&gt; a couple of times to get the status code of the agent.&lt;/p&gt;

&lt;p&gt;Without getting into too much launchd detail, I created the file &lt;code&gt;~/Library/LaunchAgents/local.itsltns-s3.plist&lt;/code&gt; as my launchd service with the contents:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;?xml version="1.0" encoding="UTF-8"?&amp;gt;
&amp;lt;!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd"&amp;gt;
&amp;lt;plist version="1.0"&amp;gt;
    &amp;lt;dict&amp;gt;
        &amp;lt;key&amp;gt;Label&amp;lt;/key&amp;gt;
        &amp;lt;string&amp;gt;local.itsltns-s3.plist&amp;lt;/string&amp;gt;
        &amp;lt;key&amp;gt;Program&amp;lt;/key&amp;gt;
        &amp;lt;string&amp;gt;/Users/Jack/Library/LaunchAgents/itsltns-s3.sh&amp;lt;/string&amp;gt;
        &amp;lt;key&amp;gt;RunAtLoad&amp;lt;/key&amp;gt;
        &amp;lt;true/&amp;gt;
    &amp;lt;/dict&amp;gt;
&amp;lt;/plist&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you look in the &lt;code&gt;.plist&lt;/code&gt; XML file, you will see the &lt;code&gt;&amp;lt;key&amp;gt;&lt;/code&gt; tags surrounding the word “Program” and &lt;code&gt;&amp;lt;string&amp;gt;&lt;/code&gt; tags surrounding the location of the shell script to be executed.&lt;/p&gt;

&lt;p&gt;With this, I also created the file &lt;code&gt;~/Library/LaunchAgents/itsltns-s3.sh&lt;/code&gt; with the contents:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash
/usr/local/bin/s3fs itsltns /Users/Jack/itsltns-s3/ -o passwd_file=${HOME}/.ssh/aws/passwd-s3fs -o volname="ITSLTNS - S3"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;itsltns-s3.sh&lt;/code&gt; script uses absolute paths for the &lt;code&gt;s3fs&lt;/code&gt; command, the mountpoint, and the s3fs password file; I didn’t bother setting a &lt;code&gt;PATH&lt;/code&gt; in the script.&lt;/p&gt;

&lt;p&gt;The one additional piece you may notice in the above script is the osxfuse option &lt;code&gt;-o volname=""&lt;/code&gt;, which renames the attached drive and makes the volume look prettier in Finder (replace the text between the quotes with the desired drive name).&lt;/p&gt;

&lt;p&gt;With these additions and the launchd service created, my Mac was set up and ready to go at login. I got it configured and everything was up to par.&lt;/p&gt;
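&lt;p&gt;For reference, the agent can also be loaded without logging out. Assuming the paths above, something like the following should work (make sure the script is executable first, and use &lt;code&gt;launchctl list&lt;/code&gt; to check the status afterwards):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;chmod +x ~/Library/LaunchAgents/itsltns-s3.sh
launchctl load ~/Library/LaunchAgents/local.itsltns-s3.plist
launchctl list | grep itsltns
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;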

&lt;h3&gt;
  
  
  Ubuntu - /etc/fstab - mounting on boot
&lt;/h3&gt;

&lt;p&gt;Configuring Ubuntu to mount the S3 bucket was a little more challenging, and I ran into some issues along the way, but it really only took about 20 minutes, a quick Google search, and one reboot.&lt;/p&gt;

&lt;p&gt;S3FS offers an example for automounting using the &lt;code&gt;/etc/fstab&lt;/code&gt; file, and I ended up with a configuration similar to their example file. My &lt;code&gt;/etc/fstab&lt;/code&gt; file had &lt;code&gt;s3fs#mybucket /path/to/mountpoint fuse _netdev,allow_other 0 0&lt;/code&gt; added at the bottom of it.&lt;/p&gt;

&lt;p&gt;I ran &lt;code&gt;sudo mount -a&lt;/code&gt;, and sure enough s3fs couldn’t mount because it had &lt;a href="https://github.com/s3fs-fuse/s3fs-fuse/issues/128"&gt;no idea where my developer credentials were&lt;/a&gt;. The &lt;code&gt;sudo mount -a&lt;/code&gt; command spit out a response along the lines of “cannot find access and secret keys.”&lt;/p&gt;

&lt;p&gt;Entries in &lt;code&gt;/etc/fstab&lt;/code&gt; are mounted as root (the file itself is modified with &lt;code&gt;sudo&lt;/code&gt;), so s3fs looks for a different password file; I used a symbolic link to fix that. Running &lt;code&gt;ln -s /home/jack/Documents/.ssh/aws/passwd-s3fs /etc/passwd-s3fs&lt;/code&gt; was the bandaid that prevented me from doing something dumb.&lt;/p&gt;
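&lt;p&gt;As an alternative to the symlink, s3fs also accepts the password file location as a mount option, so the &lt;code&gt;/etc/fstab&lt;/code&gt; entry could point at it directly; a sketch using my path would look like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;s3fs#mybucket /path/to/mountpoint fuse _netdev,allow_other,passwd_file=/home/jack/Documents/.ssh/aws/passwd-s3fs 0 0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;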

&lt;p&gt;I re-ran &lt;code&gt;sudo mount -a&lt;/code&gt; and sure enough it mounted with ease.&lt;/p&gt;

&lt;p&gt;I rebooted my machine and all seemed to be in proper order.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pricing
&lt;/h3&gt;

&lt;p&gt;I haven’t run this for months on end, actually I just got it set up, but it looks like it will be around $2/month with all the requests that are made and the storage pricing. This was based off 16gb/mo and 600,000 total requests.&lt;/p&gt;

&lt;p&gt;If I have to, I will revert to mounting manually, but for now it is nice to have S3 auto-mounted on login (for Mac) and on boot (for Linux).&lt;/p&gt;

&lt;p&gt;See &lt;a href="https://calculator.s3.amazonaws.com/index.html"&gt;AWS Pricing Calculator&lt;/a&gt; for more on pricing.&lt;/p&gt;

&lt;h3&gt;
  
  
  Notes &amp;amp; Todo
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;MacOS attaches attributes and metadata to files, and it kinda sucks. File attributes can be removed with &lt;code&gt;xattr -c filename&lt;/code&gt;, but I need to make sure every file in the bucket has them stripped. The reason is that Ubuntu is unable to read these attributes (different filesystems), and they also bug me.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;After looking at this post, I need to move the keys on both machines to matching locations. It is not fun managing two computers where &lt;code&gt;~/Documents/.ssh/aws&lt;/code&gt; and &lt;code&gt;~/.ssh/aws&lt;/code&gt; hold similar files. They should really be one or the other.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;I will probably have to revise my s3fs setup to allow for multiple S3 mounts, as I definitely see it as a way to manage personal and work files between machines and AWS accounts. I might also look into &lt;a href="https://wiki.archlinux.org/index.php/autofs"&gt;autofs&lt;/a&gt; with some custom scripts to fix this.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Permissions among the mountpoints on different machines may need to be looked at, but for now I am the only person using my machines so there is no security threat. &lt;em&gt;knocks on wood&lt;/em&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;I also want to find a better place on my Mac for launchd scripts. &lt;code&gt;~/Library/LaunchAgents&lt;/code&gt; seems like a place where files could easily be misplaced and lost, and it appears to be a place that can get messy quickly.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
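&lt;p&gt;For the attribute stripping mentioned in the first note above, a rough one-liner (a sketch I haven’t run against the whole bucket) would be:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;find ~/itsltns-s3 -type f -exec xattr -c {} \;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;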

</description>
      <category>aws</category>
      <category>s3</category>
    </item>
    <item>
      <title>React and Rails with Webpacker in Production</title>
      <dc:creator>Jack Moore</dc:creator>
      <pubDate>Thu, 07 Mar 2019 19:09:45 +0000</pubDate>
      <link>https://dev.to/jmoore53/react-and-rails-with-webpacker-in-production-pnf</link>
      <guid>https://dev.to/jmoore53/react-and-rails-with-webpacker-in-production-pnf</guid>
      <description>&lt;p&gt;I have written a hand full of Rails applications and typically I use the asset pipeline and coffeescript with bootstrap and the likes when deploying. Most of the projects are somewhat barebones from a front end perspective, but that's not to say they aren't functional from the server side. I don't usually have a hard time moving projects from development to production, most of the work ends up being trivial configuration details. &lt;/p&gt;

&lt;p&gt;For my most recent Rails project, I was looking to add a React form walkthrough on sign-up. I found a few Ruby gems that worked with React and decided to pick &lt;a href="https://github.com/reactjs/react-rails"&gt;reactjs/react-rails&lt;/a&gt;. I had no problem getting the development environment up and running, but when I tried to move the project into a test environment and then production, Webpacker gave me a time and a half. I ended up scrapping the project and moving on to something new.&lt;/p&gt;

&lt;p&gt;Anyone have similar experiences with dumping side projects? How much work do you usually put into a problem before cutting the cord and moving forward?&lt;/p&gt;

</description>
      <category>discuss</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Everything I Learned from an Interview that Never Happened</title>
      <dc:creator>Jack Moore</dc:creator>
      <pubDate>Wed, 06 Mar 2019 04:12:53 +0000</pubDate>
      <link>https://dev.to/jmoore53/everything-i-learned-from-an-interview-that-never-happened-5h6p</link>
      <guid>https://dev.to/jmoore53/everything-i-learned-from-an-interview-that-never-happened-5h6p</guid>
      <description>&lt;h1&gt;
  
  
  Interview Background
&lt;/h1&gt;

&lt;p&gt;I was asked to interview with a large tech company, but the interview never took place due to timing issues: they moved forward with another candidate before I had the chance to interview in person. Truthfully, I was neither qualified nor prepared for the job, but I saw no harm in dropping my CV in the bucket and hoping someone saw it.&lt;/p&gt;

&lt;p&gt;For a little background, I am getting ready to graduate with a Bachelor of Science in Business Administration, specializing in Management Information Systems, but the role they were hiring for was Software Engineer. I knew going into the application process that my degree would only get me so far and that I would need to learn a lot of the CompSci fundamentals on my own time to prepare for the position.&lt;/p&gt;

&lt;p&gt;An internet search later landed me with millions of results and things to do. I started to feel in over my head before I had even begun preparation. This is everything I did, and all the concepts I learned in an attempt to prepare for a technical interview. &lt;/p&gt;

&lt;h1&gt;
  
  
  Baseline Coding Ability and Concepts
&lt;/h1&gt;

&lt;p&gt;Coming from a somewhat nontraditional degree for a Software Engineering position, I knew my technical bounds and coding ability would be tested. I don't spend every waking minute writing Python, Java, or *; I spend most of it working on things I want to work on (mostly infrastructure related) and looking at memes.&lt;/p&gt;

&lt;p&gt;I have the fundamentals of Python down and I've built a couple of smaller projects with it, so I picked it as my language of choice for the interview process. This may have come back to haunt me in the end, as with Python you have to roll most of your own data structures. I'll come back to this later, though. For now, all you need to know is that I didn't have a problem with the first coding questions.&lt;/p&gt;

&lt;p&gt;I didn't have a problem with the preliminary weed-out problems, but I did prepare for them as much as I could using &lt;a href="https://leetcode.com/problemset/all/"&gt;Leetcode&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Does it Scale (Big-O)
&lt;/h2&gt;

&lt;p&gt;Something covered in a CompSci degree but not in mine is Big-O and the concept of computational time and space usage. I picked up &lt;a href="http://www.crackingthecodinginterview.com/"&gt;Cracking the Coding Interview&lt;/a&gt;, read the chapter on Big-O, and then watched as many YouTube videos as I could on the concept, to the extent that I can now explain it to someone with a basic knowledge of programming.&lt;/p&gt;

&lt;p&gt;Truth be told, I never ran into the issue of refactoring code to fit within time and space bounds because the interview never took place. For better or worse, I could have used the experience with a practicing Software Engineer.&lt;/p&gt;
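&lt;p&gt;As a quick illustration of the concept (my own toy example, not one from the interview prep), here are two ways to check a Python list for duplicates: one rescans the rest of the list for every element, while the other trades space for time with a set:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# O(n^2) time: each "in" check is itself a linear scan
def has_duplicates_quadratic(items):
    for i, item in enumerate(items):
        if item in items[i + 1:]:
            return True
    return False

# O(n) time, O(n) space: set membership checks are constant time
def has_duplicates_linear(items):
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;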

&lt;h2&gt;
  
  
  Dynamic Programming and Recursion
&lt;/h2&gt;

&lt;p&gt;Dynamic programming and recursion were tricky concepts for me. I was originally exposed to them in a high school Computer Science class, but past that class, these two concepts always turned into an afterthought for me. &lt;/p&gt;

&lt;p&gt;Never in my Flask/Rails/scripting side projects did I think of writing a recursive method, because most of the time I stored everything in a database and the code followed a flow. &lt;/p&gt;

&lt;p&gt;Luckily, &lt;a href="https://www.youtube.com/playlist?list=PLnfg8b9vdpLn9exZweTJx44CII1bYczuk"&gt;YouTube - Stanford CS106B, Data Structures and Algorithms&lt;/a&gt; helped me learn more about recursion and most of the basic data structures. I only watched a handful of the playlist, but it helped me substantially.&lt;/p&gt;

&lt;p&gt;For dynamic programming, I watched a handful of shorter videos and applied recursive techniques to most of the problems. &lt;/p&gt;
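&lt;p&gt;To make the recursion/DP connection concrete (again, my own example rather than an interview question), the classic case is Fibonacci: naive recursion recomputes the same subproblems exponentially many times, while caching each result (memoization) brings it down to linear time:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def fib(n, memo=None):
    # Memoized recursion: store each result the first time we compute it
    if memo is None:
        memo = {}
    if n in (0, 1):
        return n
    if n not in memo:
        memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;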

&lt;h1&gt;
  
  
  Baseline Infrastructure and Systems Concepts
&lt;/h1&gt;

&lt;p&gt;Writing code that scales is important to most major tech companies, but so is infrastructure that scales as users join and use those systems. I didn't make it to the systems design portion, but the concepts were in my head, and I didn't want to be caught off guard by a question I wasn't prepared for.&lt;/p&gt;

&lt;p&gt;I prepared for these with whiteboard sessions and by reviewing the engineering blogs of popular tech companies. I enjoyed reading about the tools and architectures these companies use to build highly scalable systems. &lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;Having only three weeks to prepare, I prioritized my time toward the concepts I thought would be most relevant for the position, as well as the ones I was weakest in. &lt;/p&gt;

&lt;p&gt;When it came down to the end, I truly wish I had applied for the position earlier. Applying so late put me at a disadvantage from the beginning, but I wasn't disappointed with the results, as I had never set expectations in the first place. &lt;/p&gt;

&lt;h1&gt;
  
  
  Recommended Resources
&lt;/h1&gt;

&lt;p&gt;Like I said earlier, everyone is only a search away from the answers they are looking for, but one has to know the questions first. Here are the resources I found most valuable, though I would highly recommend picking up a book on the subject as well. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/jwasham/coding-interview-university"&gt;Github - Coding Interview University&lt;/a&gt;
I would recommend going through the Github coding interview resource initially to prepare for a technical interview. It helped me immensely in grasping some of the basic Software Engineering concepts. &lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/donnemartin/system-design-primer"&gt;Github - The System Design Primer&lt;/a&gt;
Even as I walked through the System Design Primer repo preparing for an interview, I felt it would prepare me for building any type of system. It also links to the engineering blogs of some of the larger tech companies toward the bottom.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/EnterpriseQualityCoding/FizzBuzzEnterpriseEdition"&gt;Github - Fizz Buzz Enterprise Edition&lt;/a&gt;
Excerpt from the &lt;a href="https://github.com/EnterpriseQualityCoding/FizzBuzzEnterpriseEdition/blob/master/README.md"&gt;Readme.md&lt;/a&gt; on github, "This project is an example of how the popular FizzBuzz game might be built were it subject to the high quality standards of enterprise software." I just think this repo is funny.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>interview</category>
      <category>career</category>
    </item>
    <item>
      <title>My experience with Development Cycles</title>
      <dc:creator>Jack Moore</dc:creator>
      <pubDate>Thu, 01 Nov 2018 05:00:00 +0000</pubDate>
      <link>https://dev.to/jmoore53/my-experience-with-development-cycles-g8m</link>
      <guid>https://dev.to/jmoore53/my-experience-with-development-cycles-g8m</guid>
      <description>&lt;p&gt;The first release of any my dev cycle feels like garbage. Whether it’s integrating a new api or library to building on an idea that’s not all the way flushed out, I often find my first iteration of a project gets thrown out.&lt;/p&gt;

&lt;p&gt;This post describes some of the reasons why my first attempt on a proof of concept application isn’t a masterpiece.&lt;/p&gt;

&lt;h2&gt;
  
  
  Unfamiliarity
&lt;/h2&gt;

&lt;p&gt;Being unfamiliar with tools and new libraries has forced me to read more documentation and learn more about deploying multiple types of applications. This unfamiliarity has ultimately made me look at projects in a new way.&lt;/p&gt;

&lt;p&gt;The unfamiliarity with libraries has made my applications delicate rather than hardened, and I often find myself searching deep in GitHub issues attempting to fix mangled-together parts of an application. By the time I am in production, most of the tools are taped together and on the edge of breaking.&lt;/p&gt;

&lt;h2&gt;
  
  
  Library Versions featuring Brokenness
&lt;/h2&gt;

&lt;p&gt;I often stay very up to date on library versions and pull most source code from repositories right off GitHub. With all these dependencies being tossed around, I sometimes find myself and my applications in a broken state because of dependency issues.&lt;/p&gt;

&lt;p&gt;The most recent time I ran into an issue was upgrading a Ruby project and hitting some backwards-compatibility problems: methods my application used were deleted, and I essentially downgraded to the older version to get the application back on track.&lt;/p&gt;

&lt;h2&gt;
  
  
  Perfection
&lt;/h2&gt;

&lt;p&gt;I have a high standard for production applications, and I prefer to have most of my application tested before tossing it into the wild. It also helps to have a web app with decent views, so I tend to spend a lot more time working on frontend pieces than I would prefer.&lt;/p&gt;

&lt;p&gt;This often holds the processes of releases up as I like to ensure the application works before I push it out the door.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pipelines
&lt;/h2&gt;

&lt;p&gt;I was bad about pipelines in the past and didn’t truly value them for what they do and how much time they save compared to &lt;code&gt;scp&lt;/code&gt;-ing code to a machine and praying a deploy works and the libraries are the same. Looking ahead to containers, I now see the value in containerized releases: I am able to push my application to a container and run it on any machine. (Containers today are the Java of yesterday: write once, run anywhere.)&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
