<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ray Smets</title>
    <description>The latest articles on DEV Community by Ray Smets (@rsmets).</description>
    <link>https://dev.to/rsmets</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F246381%2F963ee6f7-a1e8-4e26-b65f-04ddeb5b76c1.jpeg</url>
      <title>DEV Community: Ray Smets</title>
      <link>https://dev.to/rsmets</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/rsmets"/>
    <language>en</language>
    <item>
      <title>How-To: portable, maintainable git and shell configs made easy</title>
      <dc:creator>Ray Smets</dc:creator>
      <pubDate>Sat, 11 Dec 2021 21:57:54 +0000</pubDate>
      <link>https://dev.to/rsmets/how-to-portable-maintainable-git-and-shell-configs-made-easy-41dm</link>
      <guid>https://dev.to/rsmets/how-to-portable-maintainable-git-and-shell-configs-made-easy-41dm</guid>
      <description>&lt;p&gt;In case you have ever had to deal with getting your shell and git config dialed in on a new unix machine, this how-to is for you. It also for those of who you want to keep configs in sync across multiple machines. &lt;/p&gt;

&lt;p&gt;I recently had to set up a brand-new computer without being able to restore from a backup, and this little solution made the setup not only simple and quick but also maintainable. Going forward, it keeps a single source of config truth that is easy to share between machines.&lt;/p&gt;

&lt;p&gt;The (not so) secret to streamlining a repeatable setup, exactly as desired, is leveraging git together with (the fairly secret) symlinks.&lt;/p&gt;

&lt;p&gt;Git is clearly great for version-controlling the relevant config files (&lt;code&gt;~/.gitconfig&lt;/code&gt;, &lt;code&gt;~/.zshrc&lt;/code&gt;, &lt;code&gt;~/.oh-my-zsh/custom&lt;/code&gt;, etc.). Definitely keep these files (sans sensitive information) in version control.&lt;/p&gt;

&lt;p&gt;However, how do you manage config files that span multiple directories? The files listed above all live in the home directory, but what if a config file you want to track lives in &lt;code&gt;~/dev/configs/myawesomeconfig&lt;/code&gt;? This is where one of the advantages of symlinks comes into play.&lt;/p&gt;

&lt;p&gt;By creating a symlink from your config-repo directory to a config's regular path, all of these disparate configs can be managed in one git directory. Furthermore, by using symlinks instead of hard links, you can pull updates from the repo (say, from another machine you work on) and the files symlinked into their default locations are updated as well.&lt;/p&gt;

&lt;p&gt;For example, I create a symlink for my &lt;code&gt;.gitconfig&lt;/code&gt; from my config repo, &lt;code&gt;~/dev/config/.gitconfig&lt;/code&gt;, to its default location, the home directory. This is done by:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ~
ln -s ~/dev/config/.gitconfig .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, if that git-managed &lt;code&gt;.gitconfig&lt;/code&gt; file is ever updated and I pull the changes into the config repo locally, the git config in its "normal" place in the home directory is also updated. And vice versa: if I update &lt;code&gt;~/.gitconfig&lt;/code&gt;, the changes are reflected in the config repo's change log. I use this same setup for my &lt;code&gt;.zshrc&lt;/code&gt; config file.&lt;/p&gt;
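&lt;p&gt;The sync behavior can be sketched in a scratch directory (the paths below are illustrative stand-ins for the config repo and the home directory):&lt;/p&gt;

```shell
# A change to the repo copy (e.g. what a `git pull` would bring in)
# shows through the symlink immediately -- no copying step needed.
mkdir -p /tmp/symlink-demo/config-repo
echo "[user]" > /tmp/symlink-demo/config-repo/.gitconfig
ln -sf /tmp/symlink-demo/config-repo/.gitconfig /tmp/symlink-demo/.gitconfig
echo "  name = Ray" >> /tmp/symlink-demo/config-repo/.gitconfig
cat /tmp/symlink-demo/.gitconfig   # shows both lines
```

&lt;p&gt;The same edit made through the symlinked path lands in the repo copy, which is why commits can be made from either side.&lt;/p&gt;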

&lt;p&gt;This setup also works for my &lt;code&gt;~/.oh-my-zsh/custom&lt;/code&gt; directory. I let each machine's oh-my-zsh installation manage itself, but I symlink the custom directory so I get all my shell aliases and functions everywhere.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ~/.oh-my-zsh
rm -rf custom
ln -s ~/dev/config/oh-my-zsh/custom .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In conclusion, this solution lets you track essential configurations in version control for maintainability and keeps them interoperable between machines. It can of course be extended to any kind of config file or directory. I hope this how-to was informative and helpful.&lt;/p&gt;
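&lt;p&gt;On a brand new machine, the whole setup can be reduced to a short bootstrap loop. This is a sketch only (the file list is an assumption, and it runs against scratch directories; swap in your real repo path and home directory after cloning):&lt;/p&gt;

```shell
#!/bin/sh
# Bootstrap sketch: after cloning the config repo, link every tracked
# dotfile into the home directory. Scratch paths are used here for safety.
REPO=/tmp/configs-demo/config-repo
HOMEDIR=/tmp/configs-demo/home
mkdir -p "$REPO" "$HOMEDIR"
touch "$REPO/.gitconfig" "$REPO/.zshrc"   # stand-ins for the cloned files

# Symlink each dotfile in the repo into the (stand-in) home directory.
for f in "$REPO"/.[!.]*; do
  ln -sf "$f" "$HOMEDIR/$(basename "$f")"
done
```

&lt;p&gt;Re-running the loop is harmless (&lt;code&gt;ln -sf&lt;/code&gt; replaces existing links), so the same script works for adding new configs later.&lt;/p&gt;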

&lt;p&gt;Ray&lt;/p&gt;

</description>
      <category>devops</category>
      <category>productivity</category>
      <category>git</category>
    </item>
    <item>
      <title>AWS Best Practices at Work</title>
      <dc:creator>Ray Smets</dc:creator>
      <pubDate>Fri, 02 Oct 2020 06:04:57 +0000</pubDate>
      <link>https://dev.to/rsmets/aws-best-practices-at-work-278e</link>
      <guid>https://dev.to/rsmets/aws-best-practices-at-work-278e</guid>
      <description>&lt;p&gt;&lt;strong&gt;The Story&lt;/strong&gt;&lt;br&gt;
Nexkey is a smarter access control system, featuring a mobile app and cloud-connected hardware to make any door smart. We allow businesses to manage access instantly, get insights into their space to streamline operations, message users, and automate workflows through a flexible API.&lt;/p&gt;

&lt;p&gt;We take reliable and secure access control seriously. In order to realize our vision of a physically keyless society, these are tenets that we must uphold as an organization. There is a great need for responsive, secure, and scalable cloud infrastructure to provide frictionless key-sharing experiences, real-time alerting, and audit log capabilities. In order to be truly frictionless, users must have a sense of immediacy in their experience, even as the number of devices and users under management scales.&lt;/p&gt;

&lt;p&gt;I joined the company in October 2018 and shortly after had the pleasure of leading the backend re-architecture initiative, taking calculated steps to ensure that we can best serve our growing customer base. Looking back at the last year, we have great empirical data to show for our work, highlighted by a 31ms average response time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fbzy8gjj9fba8gbitqw83.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fbzy8gjj9fba8gbitqw83.jpeg" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
Figure 1. Our SLA report as produced via New Relic.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How it was done&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;Network re-architecture.&lt;/em&gt; Proper use of VPCs, dynamic firewalls, application load balancers&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Database modifications.&lt;/em&gt; More efficient database indexing &amp;amp; changed hosted database service&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Service-oriented app decoupling.&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Lots of code refactoring.&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Network Redesigned&lt;/em&gt;&lt;br&gt;
While this certainly had one of the most drastic effects on our scalability potential and response times, it was relatively easy to improve. The former configuration still used the default Virtual Private Cloud settings, and the database was not part of the VPC at all. We found that leveraging an AWS CloudFormation template made standing up a new VPC, with properly configured public and private subnets and their corresponding NAT gateway configuration, very simple, and it gave us the basis for a reproducible VPC. Like nearly all SaaS orgs, we currently run multi-tenant Production and Staging environments; however, standing up a single-tenant VPC to meet a large enterprise's standards would now be trivial.&lt;/p&gt;
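&lt;p&gt;For a rough idea of the shape of such a template, here is a heavily trimmed CloudFormation sketch. The CIDRs and logical names are my own placeholders, and a real template also needs route tables, subnet associations, and an internet gateway:&lt;/p&gt;

```yaml
AppVpc:
  Type: AWS::EC2::VPC
  Properties:
    CidrBlock: 10.0.0.0/16
PublicSubnet:                 # internet-facing: load balancers, NAT
  Type: AWS::EC2::Subnet
  Properties:
    VpcId: !Ref AppVpc
    CidrBlock: 10.0.0.0/24
    MapPublicIpOnLaunch: true
PrivateSubnet:                # app services and the database
  Type: AWS::EC2::Subnet
  Properties:
    VpcId: !Ref AppVpc
    CidrBlock: 10.0.1.0/24
NatEip:
  Type: AWS::EC2::EIP
  Properties:
    Domain: vpc
NatGateway:                   # outbound-only internet access for the private subnet
  Type: AWS::EC2::NatGateway
  Properties:
    SubnetId: !Ref PublicSubnet
    AllocationId: !GetAtt NatEip.AllocationId
```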

&lt;p&gt;&lt;em&gt;Database Improvements&lt;/em&gt;&lt;br&gt;
We use the non-relational database MongoDB, and in October of 2018 we were using mLab as our hosted database service. One highlight of the now-defunct service was its auto-indexing, which was extremely simple yet powerful, and we used it to our advantage. Upon its acquisition and impending decommissioning, we opted to switch to ScaleGrid, since it lets the database instances run inside our own, newly created VPCs. Many competing hosted database services run in their own third-party cloud, and one must set up peering to get the benefits of private-interface networking. While this is viable, it did not give us the control we wanted over our data layer. ScaleGrid affords that control, plus the peace of mind of managed redundancy and observability tools to keep an eye on our data persistence layer.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Security&lt;/em&gt;&lt;br&gt;
With the migration of our services into a properly configured VPC, we inherently have much greater control over our network. Prior to this work, all of our services and instances were exposed on public interfaces. Now everything is safeguarded behind a Web Application Firewall, load balancers, and very strict security group settings. We have also migrated nearly all of our services to a serverless AWS compute solution, ECS Fargate, which not only decreases operational complexity but also improves our security profile, because the underlying compute is managed, patched, and updated behind the scenes. We will never have to manually apply a server security patch again. Furthermore, Firecracker, the microVM technology under the ECS Fargate hood, allows for extremely fast instance spin-up times, further bolstering our on-demand scalability.&lt;/p&gt;

&lt;p&gt;Taking advantage of the new application load balancers from the VPC migration, we were able to apply off-the-shelf Web Application Firewall configurations with rules against botnets, network scanners, and known-blacklisted IPs, stopping noisy and potentially malicious traffic at the gate before it ever reaches our backend services.&lt;/p&gt;

&lt;p&gt;It is also worth noting that our mobile apps implement certificate pinning as an additional measure to ensure the authenticity of network traffic to our mobile applications.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Service-Oriented Decoupling&lt;/em&gt;&lt;br&gt;
While we still have a mostly monolithic application, some key operations have been decoupled to allow for more elastic scalability. One of the most notable, which halved our response time, was the WebSocket service. Leveraging the new application load balancers from the VPC migration together with our new ECS service configurations, we simply applied load balancer routing rules to divert traffic to separate target groups that can scale independently of one another.&lt;/p&gt;
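&lt;p&gt;The traffic split itself is just a listener rule. A sketch in CloudFormation terms (the path pattern, priority, and resource names are illustrative, not our actual configuration):&lt;/p&gt;

```yaml
WebSocketRule:
  Type: AWS::ElasticLoadBalancingV2::ListenerRule
  Properties:
    ListenerArn: !Ref AppListener     # the ALB's HTTPS listener
    Priority: 10                      # evaluated before the default action
    Conditions:
      - Field: path-pattern
        PathPatternConfig:
          Values:
            - /socket/*
    Actions:
      - Type: forward
        TargetGroupArn: !Ref WebSocketTargetGroup   # scales independently of the main app
```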

&lt;p&gt;So that the formerly tightly coupled WebSocket service and our main app can still communicate with one another, Redis was brought in as a fast, distributed message queue.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Conclusion&lt;/em&gt;&lt;br&gt;
As the culmination of our work, we are now 11x faster while serving 5x more traffic to our web portal and mobile apps, with far more effective security policies in place. This comes down to leveraging the vast number of resources AWS provides and taking the time to implement many operational best practices, and we have really been able to see the benefits. Furthermore, by moving to serverless compute infrastructure, we benefit from both industry-driven cost reductions and easy-to-manage, easy-to-scale characteristics. It’s a fun time over here! We have lots of exciting, well-engineered innovation happening on the hardware and mobile app fronts as well. Thanks for reading, and please follow for updates.&lt;/p&gt;

&lt;p&gt;Ray Smets, Nexkey Lead Backend Engineer&lt;/p&gt;

</description>
      <category>aws</category>
      <category>microservices</category>
      <category>architecture</category>
      <category>distributedsystems</category>
    </item>
  </channel>
</rss>
