<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: DevGraph</title>
    <description>The latest articles on DEV Community by DevGraph (@devgraph).</description>
    <link>https://dev.to/devgraph</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F597359%2F5a9872de-73bb-4142-8c41-b3db3bb77034.jpg</url>
      <title>DEV Community: DevGraph</title>
      <link>https://dev.to/devgraph</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/devgraph"/>
    <language>en</language>
    <item>
      <title>Evolution of Encrypted Credentials in Rails 6.2</title>
      <dc:creator>DevGraph</dc:creator>
      <pubDate>Wed, 27 Oct 2021 10:51:32 +0000</pubDate>
      <link>https://dev.to/devgraph/evolution-of-encrypted-credentials-in-rails-62-3m4</link>
      <guid>https://dev.to/devgraph/evolution-of-encrypted-credentials-in-rails-62-3m4</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--npBwVIc0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z7jiubb3gcabf5lbfga8.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--npBwVIc0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z7jiubb3gcabf5lbfga8.jpg" alt="Encrypted Credentials in Rails 6.2" width="760" height="404"&gt;&lt;/a&gt;&lt;br&gt;
By &lt;a href="https://blog.engineyard.com/author/ritu-chaturvedi"&gt;Ritu Chaturvedi&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The concept of encrypted secrets has evolved and taken better shape with each &lt;a href="https://weblog.rubyonrails.org/releases/"&gt;Rails release&lt;/a&gt;. Recently, Rails 6.2 brought many such updates to the credentials feature. Let us analyze encrypted credentials further: how to read them, their advantages, and finally, how to manage a secret key base.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Evolution of Encrypted credentials&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Encrypted secrets were introduced in Rails 5.1 to bring more security to the way secrets are handled. In this version, they were referred to as ‘&lt;em&gt;secrets&lt;/em&gt;’ and were referenced by&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7OKjKGbY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.engineyard.com/hs-fs/hubfs/image2-4.png%3Fwidth%3D620%26name%3Dimage2-4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7OKjKGbY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.engineyard.com/hs-fs/hubfs/image2-4.png%3Fwidth%3D620%26name%3Dimage2-4.png" alt="Rails Encrypted credentials" width="620" height="42"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;em&gt;secrets.yml.enc&lt;/em&gt; file handles the secrets along with an encryption key.&lt;/p&gt;
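&lt;p&gt;As a conceptual sketch (not the actual Rails implementation), the idea behind an encrypted secrets file can be illustrated with Ruby's standard OpenSSL library: the secrets are symmetrically encrypted with a random key, so the ciphertext is safe to commit while the key stays out of the repository:&lt;/p&gt;

```ruby
require "openssl"
require "securerandom"

# A stand-in for config/secrets.yml.key: a random 256-bit key.
master_key = SecureRandom.random_bytes(32)

# Encrypt the plain-text secrets (a stand-in for secrets.yml.enc).
cipher = OpenSSL::Cipher.new("aes-256-gcm").encrypt
cipher.key = master_key
iv = cipher.random_iv
ciphertext = cipher.update("aws_secret: s3cr3t") + cipher.final
auth_tag = cipher.auth_tag

# Without master_key the ciphertext is just junk bytes; with it,
# the original YAML can be recovered.
decipher = OpenSSL::Cipher.new("aes-256-gcm").decrypt
decipher.key = master_key
decipher.iv = iv
decipher.auth_tag = auth_tag
plaintext = decipher.update(ciphertext) + decipher.final
```

&lt;p&gt;Rails performs the equivalent bookkeeping for you: the &lt;em&gt;key&lt;/em&gt; file is git-ignored, and the &lt;em&gt;enc&lt;/em&gt; file is committed.&lt;/p&gt;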

&lt;h2&gt;
  
  
  &lt;strong&gt;Handling secrets before Rails 5.1&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Before this version of Rails, there were two common ways to manage secrets.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The first method was to store secrets in the &lt;em&gt;secrets.yml&lt;/em&gt; file, read them from environment variables, and commit the &lt;em&gt;secrets.yml&lt;/em&gt; file to the repository. Though easy to operate, this carried a high security risk: any gem in use can dump environment variables, and since the data is plain text and not encrypted, it can be read by anyone with access to the repository.&lt;/li&gt;
&lt;li&gt;The second method was to store all secrets in the &lt;em&gt;secrets.yml&lt;/em&gt; file and not commit them to the repository.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Handling secrets in Rails 5.1&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;From this version of Rails onward, secrets were stored by default in an encrypted file paired with an encryption key. Without this key, the secrets stored in the file look like junk characters. To start using encrypted secrets, the user needs to run:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XPTqO46T--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.engineyard.com/hs-fs/hubfs/image2-Oct-27-2021-10-26-52-86-AM.png%3Fwidth%3D620%26name%3Dimage2-Oct-27-2021-10-26-52-86-AM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XPTqO46T--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.engineyard.com/hs-fs/hubfs/image2-Oct-27-2021-10-26-52-86-AM.png%3Fwidth%3D620%26name%3Dimage2-Oct-27-2021-10-26-52-86-AM.png" alt="encrypted credentias in rails 5.1" width="620" height="42"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This would create two files: &lt;em&gt;config/secrets.yml.key&lt;/em&gt; and &lt;em&gt;config/secrets.yml.enc&lt;/em&gt;. The &lt;em&gt;key&lt;/em&gt; file will hold the secret key to decrypt data in the &lt;em&gt;enc&lt;/em&gt; file.&lt;/p&gt;
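&lt;p&gt;The command in the screenshot above is not reproduced here, but in Rails 5.1 it was most likely the following (a sketch; the exact invocation may differ by setup):&lt;/p&gt;

```shell
# Generates config/secrets.yml.key and config/secrets.yml.enc
bin/rails secrets:setup

# Decrypts the file, opens it in $EDITOR, and re-encrypts it on save
bin/rails secrets:edit
```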

&lt;h2&gt;
  
  
  &lt;strong&gt;Encrypted Credentials in Rails 5.2&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Updating the older secret handling, this Rails version removed plain-text secrets, and only encrypted credentials were allowed. Credentials were stored in &lt;em&gt;config/credentials.yml.enc&lt;/em&gt;, and the key was stored in &lt;em&gt;config/master.key&lt;/em&gt;. Thus users could deploy code and credentials together and store all credentials in one place.&lt;/p&gt;

&lt;p&gt;Here, multi-environment credentials had to be namespaced explicitly within the single credentials file, and a configuration value was accessed by its key, such as &lt;em&gt;access_key_id&lt;/em&gt;.&lt;/p&gt;
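&lt;p&gt;For example, a hypothetical &lt;em&gt;aws&lt;/em&gt; entry edited via &lt;code&gt;bin/rails credentials:edit&lt;/code&gt; might look like this (names and values are illustrative):&lt;/p&gt;

```yaml
# config/credentials.yml.enc (decrypted view)
aws:
  access_key_id: AKIA-EXAMPLE
  secret_access_key: example-secret
production:
  aws:
    access_key_id: AKIA-EXAMPLE-PROD
```

&lt;p&gt;In application code, the value is then read as &lt;code&gt;Rails.application.credentials.aws[:access_key_id]&lt;/code&gt;.&lt;/p&gt;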

&lt;h2&gt;
  
  
  &lt;strong&gt;Encrypted Multi-environment Credentials in Rails 6.1&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;This update introduced separate credential files for each environment. The built-in feature uses a &lt;a href="https://github.com/rails/rails/pull/33521"&gt;separate encryption key&lt;/a&gt; for each credential file, guaranteeing more security.&lt;/p&gt;

&lt;p&gt;A global credentials file is enough for multiple environments. When an environment is passed explicitly, two environment-specific files are created:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--V_WDbk38--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.engineyard.com/hs-fs/hubfs/image3-3.png%3Fwidth%3D626%26name%3Dimage3-3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--V_WDbk38--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.engineyard.com/hs-fs/hubfs/image3-3.png%3Fwidth%3D626%26name%3Dimage3-3.png" alt="global credential file in rails" width="626" height="58"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here’s an example of how it works:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NMWDMiIW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.engineyard.com/hs-fs/hubfs/image5-2.png%3Fwidth%3D628%26name%3Dimage5-2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NMWDMiIW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.engineyard.com/hs-fs/hubfs/image5-2.png%3Fwidth%3D628%26name%3Dimage5-2.png" alt="global credential file in rails" width="628" height="86"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If the environment file is missing or not created, the default &lt;em&gt;credentials.yml.enc&lt;/em&gt; file will be used.&lt;/p&gt;

&lt;p&gt;Also, the &lt;em&gt;config/credentials/prod.yml.enc&lt;/em&gt; file would be committed to the repository, whereas the &lt;em&gt;config/credentials/prod.key&lt;/em&gt; file would not.&lt;/p&gt;
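&lt;p&gt;The per-environment workflow described above boils down to a single command (a sketch; the file names follow Rails 6 conventions):&lt;/p&gt;

```shell
# Creates config/credentials/production.yml.enc
# and config/credentials/production.key
bin/rails credentials:edit --environment production
```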

&lt;h2&gt;
  
  
  &lt;strong&gt;Add-ons in Rails 6&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;It is now possible to store the credentials in a location of your choice. The paths are specified explicitly via:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;config.credentials.content_path&lt;/em&gt; and &lt;em&gt;config.credentials.key_path&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Make sure to save the valid key and the credentials to avoid errors while running the code. &lt;/p&gt;
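&lt;p&gt;A minimal sketch of such a configuration, assuming a hypothetical &lt;em&gt;custom&lt;/em&gt; file name, would go in &lt;em&gt;config/application.rb&lt;/em&gt;:&lt;/p&gt;

```ruby
# config/application.rb -- file names here are illustrative
config.credentials.content_path = Rails.root.join("config", "credentials", "custom.yml.enc")
config.credentials.key_path     = Rails.root.join("config", "credentials", "custom.key")
```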

&lt;ul&gt;
&lt;li&gt;Handle local environment credentials using&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;config/credentials/environment.key&lt;/em&gt; and &lt;em&gt;config/master.key&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The below command tells Rails to look for the credentials file at &lt;em&gt;config/credentials/local.yml.enc&lt;/em&gt; instead of &lt;em&gt;config/credentials/development.yml.enc&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--fWp7kFPr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.engineyard.com/hs-fs/hubfs/image4-4.png%3Fwidth%3D625%26name%3Dimage4-4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fWp7kFPr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.engineyard.com/hs-fs/hubfs/image4-4.png%3Fwidth%3D625%26name%3Dimage4-4.png" alt="Add-ons in rails 6.1" width="625" height="50"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Our latest guide &lt;a href="https://blog.engineyard.com/rails-encrypted-credentials-on-6.2"&gt;Rails encrypted credentials on 6.2&lt;/a&gt; offers an interesting peek into Rails credentials.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Advantages of encrypted credentials&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The main advantages of encrypted multi-environment credentials are as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Safety: a separate encryption key for each environment makes secrets safer.&lt;/li&gt;
&lt;li&gt;Easy deployments: since credentials move along with the code, deployments become easier.&lt;/li&gt;
&lt;li&gt;A single upload of the key is enough.&lt;/li&gt;
&lt;li&gt;The solution applies to any Ruby on Rails application.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Rails is constantly improving the efficiency and scalability of the framework. With multi-environment credentials enabled, applications deployed across multiple platforms and pods find it easier to keep the code simple and secrets accessible.&lt;/p&gt;

&lt;p&gt;Know how to use &lt;a href="https://support.cloud.engineyard.com/hc/en-us/articles/360058941713-How-Kontainers-Works"&gt;Engine Yard Kontainers&lt;/a&gt; to connect to your database, enabling the Rails credentials for improved security.&lt;/p&gt;

&lt;p&gt;To learn more about the older versions of credentials, check out &lt;a href="https://blog.engineyard.com/encrypted-rails-secrets-on-rails-5.1"&gt;Encrypted Rails Secrets on Rails 5.1&lt;/a&gt;.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Rewrite Rules in Nginx</title>
      <dc:creator>DevGraph</dc:creator>
      <pubDate>Thu, 21 Oct 2021 10:31:14 +0000</pubDate>
      <link>https://dev.to/devgraph/rewrite-rules-in-nginx-1d6e</link>
      <guid>https://dev.to/devgraph/rewrite-rules-in-nginx-1d6e</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbquiwx7lg56c0w0hj5n6.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbquiwx7lg56c0w0hj5n6.jpg" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
By &lt;a href="https://blog.engineyard.com/author/ritu-chaturvedi" rel="noopener noreferrer"&gt;Ritu Chaturvedi&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Rewrite rules&lt;/em&gt; modify part or all of a URL. This is done for two reasons: first, to inform clients that resources have moved, and second, to control request flow within Nginx. The two general-purpose methods widely used for rewriting URLs are the &lt;em&gt;return&lt;/em&gt; directive and the &lt;em&gt;rewrite&lt;/em&gt; directive. Of these, the rewrite directive is more powerful. Let's discuss why that is, as well as how to rewrite URLs.&lt;/p&gt;

&lt;p&gt;Having a better &lt;a href="https://www.thegeekstuff.com/2013/11/nginx-vs-apache/" rel="noopener noreferrer"&gt;understanding of NGINX&lt;/a&gt; will make it easier to follow this blog.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Return&lt;/em&gt;&lt;/strong&gt;  &lt;strong&gt;directive&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Return&lt;/em&gt; is the easiest way to rewrite a URL, declared in either the server or location context.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Return&lt;/em&gt;&lt;/strong&gt;  &lt;strong&gt;in Server:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Suppose your site has migrated to a new domain and all existing URLs should be redirected there; use the configuration below to direct any new request to your site.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.engineyard.com%2Fhs-fs%2Fhubfs%2Fimage1-3.png%3Fwidth%3D620%26name%3Dimage1-3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.engineyard.com%2Fhs-fs%2Fhubfs%2Fimage1-3.png%3Fwidth%3D620%26name%3Dimage1-3.png" alt="Return in Server - Rewrite rule in Ngnix"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This directs all requests that hit &lt;em&gt;www.previousdomain.com&lt;/em&gt; to &lt;em&gt;www.currentdomain.com&lt;/em&gt;. Once the above configuration is live, &lt;em&gt;www.previousdomain.com&lt;/em&gt; answers each new request with a '301' (moved permanently) redirect. The two variables, &lt;em&gt;$scheme&lt;/em&gt; and &lt;em&gt;$request_uri&lt;/em&gt;, take their values from the incoming URL. '&lt;em&gt;listen 80&lt;/em&gt;' means this block accepts plain HTTP requests on port 80; handling HTTPS requires a separate &lt;em&gt;listen 443 ssl&lt;/em&gt; directive.&lt;/p&gt;
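&lt;p&gt;The configuration in the screenshot above likely resembles the following (a reconstruction, since only an image is available; domain names match the example):&lt;/p&gt;

```nginx
server {
    listen 80;
    server_name www.previousdomain.com;
    # $scheme and $request_uri carry over the protocol and path
    return 301 $scheme://www.currentdomain.com$request_uri;
}
```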

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Return&lt;/em&gt;&lt;/strong&gt;  &lt;strong&gt;in local&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you want to redirect pages in place of a complete domain, you can use the &lt;em&gt;return&lt;/em&gt; directive under the location block.&lt;/p&gt;
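&lt;p&gt;For instance, a single relocated page could be handled like this (paths are illustrative):&lt;/p&gt;

```nginx
location /old-tutorial {
    return 301 /new-tutorial;
}
```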

&lt;p&gt;Knowing &lt;a href="https://www.nginx.com/blog/creating-nginx-rewrite-rules/" rel="noopener noreferrer"&gt;how to create the Nginx rewrite&lt;/a&gt; rules can save a lot of your effort and time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rewrite directive&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Just like the &lt;em&gt;return&lt;/em&gt; directive, the &lt;em&gt;rewrite&lt;/em&gt; directive can act in both server and location contexts. Compared to the &lt;em&gt;return&lt;/em&gt; directive, the &lt;em&gt;rewrite&lt;/em&gt; directive can handle more complex replacements of URLs. The following is the syntax of &lt;em&gt;rewrite&lt;/em&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.engineyard.com%2Fhs-fs%2Fhubfs%2Fimage3-2.png%3Fwidth%3D627%26name%3Dimage3-2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.engineyard.com%2Fhs-fs%2Fhubfs%2Fimage3-2.png%3Fwidth%3D627%26name%3Dimage3-2.png" alt="rewrite directive - Rewrite rule in Ngnix"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;em&gt;regex&lt;/em&gt; is a regular expression matched against the incoming URI.&lt;/p&gt;

&lt;p&gt;The &lt;em&gt;replacement_url&lt;/em&gt; is the string used to change the requested URI.&lt;/p&gt;

&lt;p&gt;The value of the &lt;em&gt;flag&lt;/em&gt; decides if any more redirection or processing is necessary.&lt;/p&gt;
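&lt;p&gt;Putting those three parts together, the general form shown in the screenshot is:&lt;/p&gt;

```nginx
rewrite regex replacement_url [flag];
```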

&lt;p&gt;&lt;strong&gt;Static page&lt;/strong&gt;  &lt;strong&gt;&lt;em&gt;rewrite&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Suppose you want to redirect the page &lt;em&gt;https&lt;/em&gt;://&lt;em&gt;example&lt;/em&gt;.&lt;em&gt;com/tutorial&lt;/em&gt; to &lt;em&gt;https&lt;/em&gt;://&lt;em&gt;example&lt;/em&gt;.&lt;em&gt;com/new_page&lt;/em&gt;. The directive will be:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.engineyard.com%2Fhs-fs%2Fhubfs%2Fimage2-3.png%3Fwidth%3D622%26name%3Dimage2-3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.engineyard.com%2Fhs-fs%2Fhubfs%2Fimage2-3.png%3Fwidth%3D622%26name%3Dimage2-3.png" alt="static page rewrite - Rewrite rule in Ngnix"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The line &lt;em&gt;location = /tutorial&lt;/em&gt; is an exact match, so only requests for &lt;em&gt;/tutorial&lt;/em&gt; are affected. The &lt;em&gt;rewrite&lt;/em&gt; directive replaces the URI matched between the &lt;em&gt;^&lt;/em&gt; and $ anchors with '&lt;em&gt;new_page.html&lt;/em&gt;', and the &lt;em&gt;break&lt;/em&gt; flag stops further rewrite processing. The '?' notation is a non-greedy modifier, after which the pattern search stops.&lt;/p&gt;
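&lt;p&gt;A plausible reconstruction of the configuration described above (the original is only available as a screenshot):&lt;/p&gt;

```nginx
location = /tutorial {
    # match the whole URI and serve new_page.html instead
    rewrite ^(.*)$ /new_page.html break;
}
```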

&lt;p&gt;&lt;strong&gt;Dynamic page&lt;/strong&gt;  &lt;strong&gt;&lt;em&gt;rewrite&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Consider rewriting the URL &lt;em&gt;https&lt;/em&gt;://&lt;em&gt;www&lt;/em&gt;.&lt;em&gt;sample&lt;/em&gt;.&lt;em&gt;com/user.php?id=11&lt;/em&gt; to &lt;em&gt;https&lt;/em&gt;://&lt;em&gt;www&lt;/em&gt;.&lt;em&gt;sample&lt;/em&gt;.&lt;em&gt;com/user/11&lt;/em&gt;. Here, the &lt;em&gt;id&lt;/em&gt; value changes with each user. Using the static rewrite method would require a separate rewrite line for every user id. Instead, let's handle them all in a single rule.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.engineyard.com%2Fhs-fs%2Fhubfs%2Fimage4-3.png%3Fwidth%3D622%26name%3Dimage4-3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fblog.engineyard.com%2Fhs-fs%2Fhubfs%2Fimage4-3.png%3Fwidth%3D622%26name%3Dimage4-3.png" alt="dynamic page rewrite - Rewrite rule in Ngnix"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;em&gt;location&lt;/em&gt; line asks Nginx to match requests beginning with the &lt;em&gt;/user&lt;/em&gt; prefix. As said earlier, Nginx searches for the pattern between the start and end anchors &lt;em&gt;^&lt;/em&gt; and $. The varying part in our example, the user id, is matched by the character class [0-9]+. The parenthesized group is a back-reference and is referred to by the $1 symbol in the replacement. So, for our example, the rewrite happens for every user automatically. &lt;/p&gt;
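&lt;p&gt;One plausible form of the rule described above (a reconstruction; the conventional direction maps the pretty URL onto the PHP script internally):&lt;/p&gt;

```nginx
location /user {
    # $1 back-references the digits captured by ([0-9]+)
    rewrite ^/user/([0-9]+)$ /user.php?id=$1 break;
}
```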

&lt;p&gt;One special case of dynamic rewriting is the use of multiple back-references, where several parenthesized groups are captured and referenced as $1, $2, and so on. &lt;/p&gt;

&lt;p&gt;Now, we have discussed how to write the rewrite rules for simple and complex URLs.&lt;/p&gt;

&lt;p&gt;Understand the detailed working of rewrite rules through some &lt;a href="https://www.thegeekstuff.com/2017/08/nginx-rewrite-examples/" rel="noopener noreferrer"&gt;examples&lt;/a&gt; handling various scenarios.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Directive Comparison&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Let's analyze both directives by comparing them and find out why the rewrite directive is more powerful.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Return directive&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It is simple to use and understand. &lt;/li&gt;
&lt;li&gt;It can be used in both server and location contexts. &lt;/li&gt;
&lt;li&gt;It explicitly mentions the corrected or updated URL so that the client can use it in the future. &lt;/li&gt;
&lt;li&gt;Return directives can include multiple error codes as well.&lt;/li&gt;
&lt;li&gt;For codes 301, 302, 303, and 307, the URL parameters define the redirect URL. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;return (301 | 302 | 303 | 307) url;&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;For other codes, the text is to be explicitly mentioned by the user. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;return (1xx | 2xx | 4xx | 5xx) ["text"];&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;For example: &lt;em&gt;return 401 "Access denied because the token is expired or invalid";&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This directive fits scenarios where the redirect URL is the same for a whole server or location block, and the rewritten URL can be built from Nginx variables.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Rewrite directive&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It can accommodate more complex URL modifications where capturing elements without Nginx variables or an update in the elements in the path is required.&lt;/li&gt;
&lt;li&gt;It can be used in both server and location contexts. &lt;/li&gt;
&lt;li&gt;The rewrite directive can return only code 301 or 302. To accommodate other codes, explicitly add the return directive after the rewrite directive.&lt;/li&gt;
&lt;li&gt;It may not send the redirect details to the client.&lt;/li&gt;
&lt;li&gt;The Nginx request processing is not halted.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The &lt;em&gt;return&lt;/em&gt; and &lt;em&gt;rewrite&lt;/em&gt; directives can be used to redirect URLs in both server and location contexts. Though the return directive is much simpler, the rewrite directive is widely used as it can also handle complex modifications/updates to the URLs.&lt;/p&gt;

</description>
      <category>nginx</category>
      <category>webdev</category>
      <category>devops</category>
    </item>
    <item>
      <title>Code Concurrency and Two Easy Fixes</title>
      <dc:creator>DevGraph</dc:creator>
      <pubDate>Tue, 12 Oct 2021 07:08:07 +0000</pubDate>
      <link>https://dev.to/devgraph/code-concurrency-and-two-easy-fixes-1bf4</link>
      <guid>https://dev.to/devgraph/code-concurrency-and-two-easy-fixes-1bf4</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Rs4Dwcj---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/avj8tkm86cl30ftck647.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Rs4Dwcj---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/avj8tkm86cl30ftck647.jpg" alt="[Code Concurrency and Two Easy Fixes]"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By &lt;a href="https://blog.engineyard.com/author/ritu-chaturvedi"&gt;Ritu Chaturvedi&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Concurrency is the default in any Rails application. A &lt;a href="https://www.geeksforgeeks.org/multithreaded-servers-in-java/"&gt;threaded web server&lt;/a&gt; serves many HTTP requests simultaneously, and each holds its own controller instance. Threaded Active Job adapters and Action Cable channels also handle multiple requests simultaneously. Even though the global process space is shared, the work for each instance is managed separately. For scenarios where the shared space is not altered, this process runs smoothly.&lt;/p&gt;

&lt;p&gt;Let us see in detail how this is managed in our Rails applications.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Executor:&lt;/strong&gt; Executors separate the framework and application code by wrapping the application code.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The wrapping of code was more complex before Rails 5.0. To protect a code, either separate Rack middleware classes were used or direct wrapping was used. With recent updates in Rails, the executor handles the wrapping in a single step that is easier to understand.&lt;/p&gt;

&lt;p&gt;Call the executor to wrap the application code as you invoke it.&lt;/p&gt;

&lt;p&gt;For example:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--d2Fu1g85--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.engineyard.com/hs-fs/hubfs/image1-2.png%3Fwidth%3D625%26name%3Dimage1-2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--d2Fu1g85--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.engineyard.com/hs-fs/hubfs/image1-2.png%3Fwidth%3D625%26name%3Dimage1-2.png" alt="Code concurrency Executor example"&gt;&lt;/a&gt;&lt;/p&gt;
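&lt;p&gt;The wrapping shown in the screenshot above is most likely the &lt;code&gt;executor.wrap&lt;/code&gt; API documented in the Rails threading guide (a sketch, not runnable outside a Rails app):&lt;/p&gt;

```ruby
# Wrap any application code invoked from a long-running process
Rails.application.executor.wrap do
  # application code goes here
end
```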

&lt;p&gt;One attractive feature of the executor is its reentrancy.&lt;/p&gt;

&lt;p&gt;Two callbacks of the executor are '&lt;code&gt;to_run&lt;/code&gt;' and '&lt;code&gt;to_complete&lt;/code&gt;'. These enable wrapping code in parts when it is not possible to wrap the code as a single block.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---w2rzHoO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.engineyard.com/hs-fs/hubfs/image2-2.png%3Fwidth%3D629%26name%3Dimage2-2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---w2rzHoO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.engineyard.com/hs-fs/hubfs/image2-2.png%3Fwidth%3D629%26name%3Dimage2-2.png" alt="code concurrency fix example"&gt;&lt;/a&gt;&lt;/p&gt;
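&lt;p&gt;Registering the two callbacks likely looks like this (a sketch based on the Rails threading guide; it requires a Rails application to run):&lt;/p&gt;

```ruby
ActiveSupport::Executor.to_run do
  # runs before each wrapped unit of work starts
end

ActiveSupport::Executor.to_complete do
  # runs after each wrapped unit of work finishes
end
```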

&lt;p&gt;The current thread is moved into '&lt;em&gt;running&lt;/em&gt;' mode and temporarily blocked, so it will not be accessible to any other request that tries to use it.&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;&lt;strong&gt;Reloader:&lt;/strong&gt; The Reloader functions similarly to the Executor. The code to be protected is wrapped before another request hits. It is used in scenarios where application code is invoked multiple times by a long-running process. Most often in Rails, web requests and Active Jobs are wrapped by default, so the Reloader is rarely called directly.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;A Reloader is employed when there is a need to reload the application. When the requested condition asks for reloading, the Reloader delays the application reload until it is secure. And for scenarios where application reloads are mandated, the Reloader waits until the current block is executed and allows the reload. Thus, the code is protected from errors.&lt;/p&gt;
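&lt;p&gt;Its usage mirrors the Executor (a sketch based on the Rails guides; it requires a Rails application to run):&lt;/p&gt;

```ruby
Rails.application.reloader.wrap do
  # application code; a pending reload is deferred until this block exits
end
```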

&lt;p&gt;The '&lt;code&gt;to_run&lt;/code&gt;' and '&lt;code&gt;to_complete&lt;/code&gt;' callbacks are used by the Reloader also.&lt;/p&gt;

&lt;p&gt;A &lt;em&gt;class unload&lt;/em&gt; is involved in the Reloader process. Here, all of the auto-loaded classes are removed and made ready to be loaded again. The application reload must happen only before or after the class unload, and so the two additional callbacks of the Reloader are '&lt;code&gt;before_class_unload&lt;/code&gt;' and '&lt;code&gt;after_class_unload&lt;/code&gt;'.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://guides.rubyonrails.org/threading_and_code_execution.html"&gt;Executors and Reloaders&lt;/a&gt; are used by the Rails framework components as well. &lt;code&gt;ActionDispatch::Executor&lt;/code&gt; and &lt;code&gt;ActionDispatch::Reloader&lt;/code&gt; are included in the default application stack. Whenever there is a code change, the Reloader serves a fresh copy of the application for an HTTP request. Active Job also utilizes the Reloader, whereas Action Cable uses the Executor. Action Cable additionally uses the &lt;code&gt;before_class_unload&lt;/code&gt; callback of the Reloader to disconnect all connections.&lt;/p&gt;

&lt;p&gt;As discussed, code concurrency is handled by default with the threaded active jobs and action cables features of Rails codes. With &lt;a href="https://weblog.rubyonrails.org/releases/"&gt;recent Rails updates&lt;/a&gt;, Executors and Reloaders are also added to improve the code concurrency handling.&lt;/p&gt;

&lt;p&gt;Also, check out &lt;a href="https://blog.engineyard.com/7-ruby-gems-to-keep-in-your-toolbox"&gt;7 Ruby Gems to keep in your toolbox&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>concurrency</category>
      <category>rails</category>
      <category>ruby</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Sending iOS Push Notifications via APNs</title>
      <dc:creator>DevGraph</dc:creator>
      <pubDate>Mon, 04 Oct 2021 11:46:05 +0000</pubDate>
      <link>https://dev.to/devgraph/sending-ios-push-notifications-via-apns-4b02</link>
      <guid>https://dev.to/devgraph/sending-ios-push-notifications-via-apns-4b02</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dqGD7Zws--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x8zl14bbmpqmbdsb9all.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dqGD7Zws--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x8zl14bbmpqmbdsb9all.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
By &lt;a href="https://blog.engineyard.com/author/ritu-chaturvedi"&gt;Ritu Chaturvedi&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;User engagement is of the highest importance in today's world, no matter what you sell or offer to your clients. And mobile phone notifications play the masterstroke in this aspect. By regular interactions with your clients through push notifications, it is possible to release timely updates and stay connected with them. So, let us discuss setting up an Apple push notification service (APN) with a Node.js application today.&lt;/p&gt;

&lt;p&gt;In this blog, we will discuss in detail the APN services, enabling and registering for remote notifications, device tokens, and the APN node package, with the help of a sample iOS app. Make sure to have a physical iOS device, as push notifications won't work in the simulator. Also, an Apple Developer Program membership account is a must for creating a Push Notifications certificate.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting to know about the Remote Notification Service
&lt;/h2&gt;

&lt;p&gt;Apple devices use a remote notification service to send and receive notifications on iOS, tvOS, or macOS. The remote notification setup must be configured on your Apple device with proper device tokens and certificates; otherwise, the notification services may not work as expected. &lt;/p&gt;

&lt;p&gt;The main component of this remote notification setup is the Apple Push Notification service, also known as APNs. APNs is, in effect, a collection of services that allows developers to send notifications from their server to targeted iOS devices. It provides a robust and secure channel between the provider server and individual Apple devices. &lt;/p&gt;

&lt;p&gt;APN contains two components, the &lt;em&gt;Gateway component,&lt;/em&gt; and the &lt;em&gt;Feedback component,&lt;/em&gt; both of which are must-haves. &lt;/p&gt;

&lt;p&gt;The &lt;em&gt;Gateway component&lt;/em&gt; establishes the TLS connection from the provider side, enabling the provider server to send messages to Apple, which processes them and forwards them to the respective device. It is recommended to keep this connection in an 'always-on' mode. The &lt;em&gt;Feedback Component&lt;/em&gt; is contacted only occasionally, to identify and remove devices that no longer receive notifications for specific applications. This is a mandatory component on any Apple device.&lt;/p&gt;

&lt;p&gt;To receive and handle remote notifications, your app needs four basic pieces in place:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Remote notifications enabled&lt;/li&gt;
&lt;li&gt;Registration with the Apple Push Notification service (APNs) and a device token&lt;/li&gt;
&lt;li&gt;The device token sent to the notification provider&lt;/li&gt;
&lt;li&gt;Support for handling incoming notifications on the device&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Once the push notification setup is complete in your app and on the provider server side, the provider can send notification requests to APNs, which conveys the notification payload to each targeted device. When a notification arrives, the payload is delivered to the respective app, where user interactions are handled. Notably, APNs holds notifications while a device is powered off and retries delivery until the device powers back on, so neither the user nor the provider misses any important communication.&lt;/p&gt;

&lt;p&gt;It is good to build a clear and strong &lt;a href="https://developer.apple.com/library/archive/documentation/NetworkingInternet/Conceptual/RemoteNotificationsPG/APNSOverview.html#//apple_ref/doc/uid/TP40008194-CH8-SW1"&gt;understanding of APNs&lt;/a&gt; directly from Apple’s guides.&lt;/p&gt;

&lt;p&gt;The provider has several responsibilities. The major ones are below:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Receiving globally unique, app-specific device tokens and other relevant data for your app on users’ devices.&lt;/li&gt;
&lt;li&gt;Determining when remote notifications should be sent to each device.&lt;/li&gt;
&lt;li&gt;Building and sending notification requests to APNs.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;An app can handle remote notifications only with the proper entitlements to talk to APNs; without these entitlements, apps are rejected by the App Store. The entitlements required for your app are detailed in the &lt;a href="https://developer.apple.com/documentation/xcode"&gt;‘Enable push notification’&lt;/a&gt; section of the Xcode Help page on Apple’s website. Once push notifications are enabled, the app must register with APNs every time it is launched. The registration process includes the steps below, on all Apple devices.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your app asks to be registered with APNs&lt;/li&gt;
&lt;li&gt;APNs sends an app-specific device token&lt;/li&gt;
&lt;li&gt;The system delivers the device token to your app by calling a method in your app delegate&lt;/li&gt;
&lt;li&gt;The device token is shared with the app’s associated provider &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Your app calls the &lt;a href="https://developer.apple.com/documentation/uikit/uiapplication/1623078-registerforremotenotifications"&gt;registerForRemoteNotifications&lt;/a&gt; method at launch time. This prompts the app object to contact APNs for an app-specific device token. The code below shows how to fetch your device token:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--AiDGTkKo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://lh6.googleusercontent.com/LKGo7PhKHC-qH2W3x9XQJD017qttXmcv5bHQ8Y2dtOThasw-yLZiTIWAvp98pyujpy_E6NMteiOdmMGA5Qi3l5f8BAPEbYNhjMhn0HP3Tz4y_2rLBflfqzRAAI3q7SgEzKNN5TA%3Ds0" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--AiDGTkKo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://lh6.googleusercontent.com/LKGo7PhKHC-qH2W3x9XQJD017qttXmcv5bHQ8Y2dtOThasw-yLZiTIWAvp98pyujpy_E6NMteiOdmMGA5Qi3l5f8BAPEbYNhjMhn0HP3Tz4y_2rLBflfqzRAAI3q7SgEzKNN5TA%3Ds0" alt="fetch your device token"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Swift: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5Cupedry--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://lh5.googleusercontent.com/9GZhmVgaIZNubrRSs9ZVtagRbQBZUqWZEkHcUHax4Yk6uE0zvuOLONNa6_WlAx12YxPekHr-Rvy5dn3crA1K6jeGJnV0zNe4hDC0TGakhROWvByUi5Smx4lUH5-N5FwdSMrO8lI%3Ds0" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5Cupedry--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://lh5.googleusercontent.com/9GZhmVgaIZNubrRSs9ZVtagRbQBZUqWZEkHcUHax4Yk6uE0zvuOLONNa6_WlAx12YxPekHr-Rvy5dn3crA1K6jeGJnV0zNe4hDC0TGakhROWvByUi5Smx4lUH5-N5FwdSMrO8lI%3Ds0" alt="how to fetch your device token"&gt;&lt;/a&gt;&lt;/p&gt;
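&lt;p&gt;Both snippets above simply render the raw token bytes as a hex string. For reference, the same transformation on the server side in Node.js (a hypothetical helper, not part of the sample app) would be:&lt;/p&gt;

```javascript
// Hypothetical server-side helper (not part of the sample app):
// APNs device tokens are raw bytes, which providers usually store
// and pass around as lowercase hex strings.
function tokenToHex(tokenBuffer) {
  return Buffer.from(tokenBuffer).toString('hex');
}

// A 4-byte token becomes an 8-character hex string.
console.log(tokenToHex(Buffer.from([0x0a, 0x1b, 0x2c, 0x3d]))); // "0a1b2c3d"
```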

&lt;p&gt;Know more about &lt;a href="https://developer.apple.com/library/archive/documentation/NetworkingInternet/Conceptual/RemoteNotificationsPG/HandlingRemoteNotifications.html#//apple_ref/doc/uid/TP40008194-CH6-SW"&gt;Remote and Local notifications&lt;/a&gt; on Apple’s page.&lt;/p&gt;

&lt;p&gt;Thus, the system establishes an encrypted and persistent IP connection between the app and APNs whenever the app starts on a device. This connection enables the device to receive notifications, and it requires configuration on your Apple developer account along with creation of the necessary certificates.&lt;/p&gt;

&lt;p&gt;For our server to communicate with APNs, a number of connections and configurations must be handled in Node.js. It is easier to opt for the &lt;em&gt;apn&lt;/em&gt; package, which ships with preset code for establishing error-free connections. This ready-to-use package is a good choice for the production environment. It is based on the HTTP/2 API, is capable of maximizing processing and notification delivery, and also collects notifications that were not sent when an error occurred. The &lt;a href="https://github.com/node-apn/node-apn"&gt;official repository of the apn package&lt;/a&gt; gives a better understanding of this topic.&lt;/p&gt;
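&lt;p&gt;For a flavor of its API, the provider’s &lt;em&gt;send()&lt;/em&gt; call resolves with lists of successful and failed deliveries (field names as documented in the node-apn README); a small sketch of summarizing such a result:&lt;/p&gt;

```javascript
// node-apn's provider.send() resolves with an object listing which
// device tokens were delivered and which failed. A sketch of
// summarizing such a result (shape per the node-apn README):
function summarizeSendResult(result) {
  const failures = result.failed.map(function (f) {
    return { device: f.device, reason: f.response ? f.response.reason : f.error };
  });
  return { delivered: result.sent.length, failures: failures };
}

// Illustrative result shaped like a node-apn response:
const summary = summarizeSendResult({
  sent: [{ device: 'token-1' }],
  failed: [{ device: 'token-2', response: { reason: 'BadDeviceToken' } }]
});
console.log(summary.delivered); // 1
```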

&lt;h2&gt;
  
  
  Creating Certificates
&lt;/h2&gt;

&lt;p&gt;One main step in setting up the notification service is creating a certificate specific to your app and device. This certificate tells APNs that our server has permission to send and receive notifications for our app; in other words, the SSL certificate is what establishes the secure connection with APNs. The certificate is created by us and is specific to our app alone.&lt;/p&gt;

&lt;p&gt;Proceed with the below steps to create a certificate:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open the project in Xcode and select the Capabilities tab of the target.&lt;/li&gt;
&lt;li&gt;Enable the Push Notifications switch and wait for the automatic configuration to complete.&lt;/li&gt;
&lt;li&gt;In the developer center, find our app in our developer account and tap Edit.&lt;/li&gt;
&lt;li&gt;Open the Push Notifications section on this screen.&lt;/li&gt;
&lt;li&gt;Open ‘Create Certificate’ in the Development SSL Certificate tab.&lt;/li&gt;
&lt;li&gt;Follow the on-screen steps to create the certificate. It will be ready for download once created successfully.&lt;/li&gt;
&lt;li&gt;Download the certificate, aps_development.cer.&lt;/li&gt;
&lt;li&gt;Select the certificate and private key from the list and export a p12 file. Enter a password and remember it, as Apple will require it to send push notifications.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Obtaining device token
&lt;/h2&gt;

&lt;p&gt;In the above section, we saw the general code to fetch device tokens. Let us now consider scenario-specific code that we can use for the Node.js script.&lt;/p&gt;

&lt;p&gt;Our sample app is titled SimpleRTApp. We need the device token in order to push notifications to the device from the sample app. So, to get the device token, add the following methods to our &lt;em&gt;AppDelegate.swift&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--pQwwiYHT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://lh6.googleusercontent.com/tygXGn37MrpeJzTTUBz5scstyVz7XqSPq6-kSKHnzZAg3h9zCqZAikLBDN8KH-eoIleE0gNSdcyZjU1Wa0lKkEv7_XES2c1bAzHwrjBHXZ1Mjk8pTkvC0sbzGmgbzTKnKA1tQv0%3Ds0" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--pQwwiYHT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://lh6.googleusercontent.com/tygXGn37MrpeJzTTUBz5scstyVz7XqSPq6-kSKHnzZAg3h9zCqZAikLBDN8KH-eoIleE0gNSdcyZjU1Wa0lKkEv7_XES2c1bAzHwrjBHXZ1Mjk8pTkvC0sbzGmgbzTKnKA1tQv0%3Ds0" alt="SimplRTApp"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--daRSkzyP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://lh5.googleusercontent.com/cVkmf9gADkhUwFxcaqFI4pRsYyIuuKTJ8Y7qYQOs9HSrQFzvqrofg2-a_tuZtRq6YPyheG9jPiy7HxQ-_dwxgrtl-1qeatS2SJ1gUnXrNhB0ZKkJ0Bv1WEfpbuHlDeIoTwC4UPU%3Ds0" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--daRSkzyP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://lh5.googleusercontent.com/cVkmf9gADkhUwFxcaqFI4pRsYyIuuKTJ8Y7qYQOs9HSrQFzvqrofg2-a_tuZtRq6YPyheG9jPiy7HxQ-_dwxgrtl-1qeatS2SJ1gUnXrNhB0ZKkJ0Bv1WEfpbuHlDeIoTwC4UPU%3Ds0" alt="SimplRTApp"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;View the raw code for the below snippets in the &lt;a href="https://gist.githubusercontent.com/fedejordan/9eda97e5d7b895cdb239ada430f13cab/raw/60382f53919d5293a2870b7f05b792d997e708f2/AppDelegate.swift"&gt;GitHub gist&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Also, check the &lt;a href="https://github.com/fedejordan/SimpleRTApp"&gt;final app code for SimpleRTApp&lt;/a&gt; for further details.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Node.js Script Creation
&lt;/h2&gt;

&lt;p&gt;We now have a p12 file to establish a secure connection with APNs and a device token to send push notifications to this device from our SimpleRTApp. All that remains is to create our Node.js program and test the push notifications.&lt;/p&gt;

&lt;p&gt;To create the Node.js script:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open the terminal app.&lt;/li&gt;
&lt;li&gt;Install the module that handles APNs connections by typing &lt;em&gt;npm install apn&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;Copy or save the p12 file to the same folder as our Node.js script.&lt;/li&gt;
&lt;li&gt;Create the below &lt;a href="https://gist.githubusercontent.com/fedejordan/89cb4fd33a57d18441469748255b1ec6/raw/5972273eae18a920271a4d777f0daefd9841cbcf/send_push.js"&gt;script&lt;/a&gt; (available from GitHub).&lt;/li&gt;
&lt;/ol&gt;
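&lt;p&gt;As a rough sketch of what such a script contains (the passphrase, device token, and bundle identifier below are placeholders, and the actual node-apn calls are shown in comments for clarity):&lt;/p&gt;

```javascript
// Sketch of send_push.js using the node-apn API. The passphrase,
// device token, and bundle identifier are placeholders: substitute
// your own values before sending a real notification.
const options = {
  pfx: 'aps_development.p12',      // the exported p12 certificate file
  passphrase: 'p12-password',      // password chosen when exporting the p12
  production: false                // use the sandbox gateway while developing
};

// Fields node-apn turns into the APNs JSON payload for the device.
const notification = {
  alert: 'Hello from our Node.js server',
  topic: 'com.example.SimpleRTApp' // must match the app bundle identifier
};

const deviceToken = 'device-token-goes-here';

// With the apn package installed, the real calls are:
//   const apn = require('apn');
//   const provider = new apn.Provider(options);
//   provider.send(new apn.Notification(notification), deviceToken)
//     .then(function (result) { provider.shutdown(); });
console.log(notification.topic);
```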

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9GWMVDo---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://lh4.googleusercontent.com/QxUH4Xqht-Whh_RDFP_LFJIm6oZE4Oykl7M4998VnupUmFvKrzmj3UMUfQXdAUHv4-g3Ks8ysPLqG6lRgGXvgfmxV-oY5Cjo5zAKOSxgxC-cOazf_ES8rWr5VZ0lLmz3bamdMcQ%3Ds0" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9GWMVDo---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://lh4.googleusercontent.com/QxUH4Xqht-Whh_RDFP_LFJIm6oZE4Oykl7M4998VnupUmFvKrzmj3UMUfQXdAUHv4-g3Ks8ysPLqG6lRgGXvgfmxV-oY5Cjo5zAKOSxgxC-cOazf_ES8rWr5VZ0lLmz3bamdMcQ%3Ds0" alt="Node.js script"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the script, &amp;lt;device-token&amp;gt; and &amp;lt;p12-password&amp;gt; are replaced by values specific to our app. When triggered, the script does the following to send a real-time push notification:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Establishes the connection&lt;/li&gt;
&lt;li&gt;Instantiates the &lt;em&gt;provider&lt;/em&gt; object&lt;/li&gt;
&lt;li&gt;Creates a &lt;em&gt;Notification&lt;/em&gt; and sends it with the &lt;em&gt;send()&lt;/em&gt; method&lt;/li&gt;
&lt;li&gt;Finishes execution when a success or error response arrives&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With the above script, the push notification setup is complete. Now it is time to test the script: type &lt;em&gt;node send_push.js&lt;/em&gt; in the terminal and wait for the notification to arrive on the target Apple device.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://github.com/fedejordan/SimpleRTAppAPI"&gt;complete script for the push notification&lt;/a&gt; service is checked into GitHub and is available for your reference. &lt;/p&gt;

&lt;p&gt;It is a good practice to include a custom action with notifications, since it allows the user to respond quickly with a preset response. Examples of custom actions include the ‘Reply’ or ‘Mark as read’ buttons on a message notification, or the ‘Install’ and ‘Snooze’ options on an OS update notification. In our case, one easy and simple custom action would be ‘Retweet’.&lt;/p&gt;

&lt;p&gt;Simple edits on the Node.js code shared above will enable these custom actions. Let’s see how to proceed.&lt;/p&gt;

&lt;p&gt;While setting up the notification object, include the below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XqAd6zSw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://lh5.googleusercontent.com/1_BcdX8WAm3OoJPTmZl701u0ZM-ALoXAamdvl-rpC4iaRkxzbL8NK9Y8A8qT8go3UHTqGiFx0yt1Au6km6LLuL4KlEVyToo-zHJ61sasmn4RBQjw8M5IL0ts3MNTd-DDC9Wbr78%3Ds0" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XqAd6zSw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://lh5.googleusercontent.com/1_BcdX8WAm3OoJPTmZl701u0ZM-ALoXAamdvl-rpC4iaRkxzbL8NK9Y8A8qT8go3UHTqGiFx0yt1Au6km6LLuL4KlEVyToo-zHJ61sasmn4RBQjw8M5IL0ts3MNTd-DDC9Wbr78%3Ds0" alt="Ios notification object"&gt;&lt;/a&gt;&lt;/p&gt;
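&lt;p&gt;In node-apn terms, this amounts to setting a category on the notification, which surfaces in the APNs payload. A sketch, where the category name ‘RETWEET’ is illustrative and must match the category the iOS app registers:&lt;/p&gt;

```javascript
// Sketch: adding a category to the payload so iOS can attach custom
// actions. 'RETWEET' is an illustrative identifier and must match the
// category the iOS app registers.
const payload = {
  aps: {
    alert: 'New tweet for you',
    category: 'RETWEET'
  }
};

// With node-apn, this is expressed as: notification.category = 'RETWEET';
console.log(JSON.stringify(payload));
```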

&lt;p&gt;Also, let the iOS app know it is an allowed action with the code below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yob4BIIs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://lh5.googleusercontent.com/0_72kEE7PHcs2XcmHll7wuzZNgXiZMnKLvSLs9gIA955xn3PxW21cGyR7T3he5MRq1TA-mz-XkZ6swWI3JY_W4fAdbS1uhIVVmuiwrUtoFjeLu4irBwEGGjt3E8l3SAPDMwY96A%3Ds0" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yob4BIIs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://lh5.googleusercontent.com/0_72kEE7PHcs2XcmHll7wuzZNgXiZMnKLvSLs9gIA955xn3PxW21cGyR7T3he5MRq1TA-mz-XkZ6swWI3JY_W4fAdbS1uhIVVmuiwrUtoFjeLu4irBwEGGjt3E8l3SAPDMwY96A%3Ds0" alt="IoS notification object"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Thus, for our &lt;em&gt;AppDelegate.swift&lt;/em&gt;, the script would be as follows:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dH_lKsVM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://lh6.googleusercontent.com/Qkie7ZVQ56xynL5k-WByEWoy0OaMuuGOGUsQhERdpOl4MpAmytjB4yHd_xNiKq4jnqNHfuQVRkIJw-GTmHvpLSJbLMRBCj7dR5WP_BKQIpPKCZKbwQvDGLTI9R8ZpWLTylQ5Be0%3Ds0" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dH_lKsVM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://lh6.googleusercontent.com/Qkie7ZVQ56xynL5k-WByEWoy0OaMuuGOGUsQhERdpOl4MpAmytjB4yHd_xNiKq4jnqNHfuQVRkIJw-GTmHvpLSJbLMRBCj7dR5WP_BKQIpPKCZKbwQvDGLTI9R8ZpWLTylQ5Be0%3Ds0" alt="AppDelegate.swift notification object"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HXfJHhwD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://lh5.googleusercontent.com/eZoS38d_5sOnoNo_Btfcj1AWOf17TZfpAg2qqxvcpqDkbuGERVgUUXlv0EXOIZW4afbN5Nisp5CSSH7syylx13Omzu_dfQUmr432z9UhvAkXSqk47wtHdVMEkkofrLxhwnUJ1HQ%3Ds0" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HXfJHhwD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://lh5.googleusercontent.com/eZoS38d_5sOnoNo_Btfcj1AWOf17TZfpAg2qqxvcpqDkbuGERVgUUXlv0EXOIZW4afbN5Nisp5CSSH7syylx13Omzu_dfQUmr432z9UhvAkXSqk47wtHdVMEkkofrLxhwnUJ1HQ%3Ds0" alt="AppDelegate.swift notification object"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--eIFkhNjD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://lh4.googleusercontent.com/pxxXQS5gKsQMYsDf_zC5fY_BYX1FsINT_r-WA8gsR_3VEEZqI-rSK8kg-QBWA7U13vG-7uDSnGXll7Ik46TdAXd-L9Aihc2_qeNz1zwfh-so3jO0TIl59WI_uZAGy1A6n7il9e8%3Ds0" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--eIFkhNjD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://lh4.googleusercontent.com/pxxXQS5gKsQMYsDf_zC5fY_BYX1FsINT_r-WA8gsR_3VEEZqI-rSK8kg-QBWA7U13vG-7uDSnGXll7Ik46TdAXd-L9Aihc2_qeNz1zwfh-so3jO0TIl59WI_uZAGy1A6n7il9e8%3Ds0" alt=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;As we come to the end of this blog, we have discussed the following points in detail: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Remote notifications&lt;/li&gt;
&lt;li&gt;APNs&lt;/li&gt;
&lt;li&gt;Certificate creation&lt;/li&gt;
&lt;li&gt;Fetching device tokens&lt;/li&gt;
&lt;li&gt;Node.js code for delivering push notifications&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ios</category>
      <category>swift</category>
      <category>node</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Building a Vagrant Box: Setting up your Environment</title>
      <dc:creator>DevGraph</dc:creator>
      <pubDate>Mon, 27 Sep 2021 05:50:59 +0000</pubDate>
      <link>https://dev.to/devgraph/building-a-vagrant-box-setting-up-your-environment-o5e</link>
      <guid>https://dev.to/devgraph/building-a-vagrant-box-setting-up-your-environment-o5e</guid>
      <description>&lt;p&gt;By &lt;a href="https://blog.engineyard.com/author/ritu-chaturvedi"&gt;Ritu Chaturvedi&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If setting up a virtual development environment is your goal, here is a guide on how to use a vagrant box and a virtual machine for that purpose. Let’s discuss the process in detail in this blog.&lt;/p&gt;

&lt;p&gt;Here, we will build a vagrant box on top of a virtualization engine, in this case VirtualBox. &lt;/p&gt;

&lt;h2&gt;
  
  
  Creating the vagrant box
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.virtualbox.org/wiki/Downloads"&gt;Download VirtualBox&lt;/a&gt; and create a directory. You will also need a base operating system; let’s use ‘ubuntu/trusty64’ for now. Then initialize the directory for vagrant with ‘vagrant init’ followed by the operating system name. &lt;/p&gt;

&lt;p&gt;So, the commands would be:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;mkdir vagrant_demo&lt;/em&gt; -- create the directory&lt;/p&gt;

&lt;p&gt;&lt;em&gt;cd vagrant_demo&lt;/em&gt; -- enter the directory&lt;/p&gt;

&lt;p&gt;&lt;em&gt;vagrant init ubuntu/trusty64&lt;/em&gt; -- initialize the vagrant operating system&lt;/p&gt;

&lt;p&gt;A &lt;a href="https://friendsofvagrant.github.io/v1/docs/vagrantfile.html"&gt;Vagrantfile&lt;/a&gt;, which describes the configuration of the vagrant environment, is created by that last command. Five settings in this file can be modified to suit your requirements. If no modifications are made, the default setup is used. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;config.vm.box&lt;/em&gt; -- defines the vagrant operating system.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;config.vm.provider&lt;/em&gt; -- defines the base (the virtual machine). With this setting, you can also control the number of CPUs used.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;config.vm.network&lt;/em&gt; -- defines the IP address and ports of the application. Rails applications usually default to port 3000. In our case, the host is our computer, and the guest is the virtual machine.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;config.vm.synced_folder&lt;/em&gt; -- defines how the guest accesses files on the host. Project files can thus be modified on your computer and are automatically synced to the virtual machine.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;config.vm.provision&lt;/em&gt; -- defines the virtual environment setup.&lt;/p&gt;

&lt;p&gt;Thus, the final version of our vagrant file is as follows:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--01glYXI8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.engineyard.com/hs-fs/hubfs/image12.png%3Fwidth%3D849%26name%3Dimage12.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--01glYXI8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.engineyard.com/hs-fs/hubfs/image12.png%3Fwidth%3D849%26name%3Dimage12.png" alt="vagrant file"&gt;&lt;/a&gt;&lt;/p&gt;
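&lt;p&gt;As a text sketch of such a Vagrantfile (box name, CPU count, port, and folder path are illustrative and should match your own setup):&lt;/p&gt;

```ruby
# Sketch of a minimal Vagrantfile along the lines discussed above.
# Values are illustrative; adjust them to your own project.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"

  config.vm.provider "virtualbox" do |vb|
    vb.cpus = 2                       # number of CPUs for the guest
  end

  # Forward the Rails default port from the guest to the host.
  config.vm.network "forwarded_port", guest: 3000, host: 3000

  # Sync the project folder on the host into the guest.
  config.vm.synced_folder ".", "/vagrant_files"
end
```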

&lt;p&gt;Once the Vagrantfile is created, start the machine with &lt;em&gt;vagrant up&lt;/em&gt; and connect with &lt;em&gt;vagrant ssh&lt;/em&gt; to be in a completely active yet isolated OS. Now that the virtual machine is up and running, install everything you need for developing your application. In our example, the installations are as follows:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;install git&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ivCpTXGv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.engineyard.com/hs-fs/hubfs/image10.png%3Fwidth%3D847%26name%3Dimage10.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ivCpTXGv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.engineyard.com/hs-fs/hubfs/image10.png%3Fwidth%3D847%26name%3Dimage10.png" alt="install git"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;install curl&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FJxgiJtc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.engineyard.com/hs-fs/hubfs/image6.png%3Fwidth%3D848%26name%3Dimage6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FJxgiJtc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.engineyard.com/hs-fs/hubfs/image6.png%3Fwidth%3D848%26name%3Dimage6.png" alt="install curls"&gt;&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;load rvm&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Bp_yp3KU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.engineyard.com/hs-fs/hubfs/image2-1.png%3Fwidth%3D847%26name%3Dimage2-1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Bp_yp3KU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.engineyard.com/hs-fs/hubfs/image2-1.png%3Fwidth%3D847%26name%3Dimage2-1.png" alt="load rvm"&gt;&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt; &lt;em&gt;install ruby&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--63HVbCSO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.engineyard.com/hs-fs/hubfs/image13.png%3Fwidth%3D848%26name%3Dimage13.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--63HVbCSO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.engineyard.com/hs-fs/hubfs/image13.png%3Fwidth%3D848%26name%3Dimage13.png" alt="install ruby"&gt;&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;set default ruby version&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Rh2GkJLf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.engineyard.com/hs-fs/hubfs/image14.png%3Fwidth%3D845%26name%3Dimage14.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Rh2GkJLf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.engineyard.com/hs-fs/hubfs/image14.png%3Fwidth%3D845%26name%3Dimage14.png" alt="set default ruby version"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;verify ruby version&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--QrbYqnC---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.engineyard.com/hs-fs/hubfs/image3-1.png%3Fwidth%3D844%26name%3Dimage3-1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QrbYqnC---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.engineyard.com/hs-fs/hubfs/image3-1.png%3Fwidth%3D844%26name%3Dimage3-1.png" alt="verify ruby version"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;install and check your rails version&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Yv3xuAYr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.engineyard.com/hs-fs/hubfs/image8.png%3Fwidth%3D846%26name%3Dimage8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Yv3xuAYr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.engineyard.com/hs-fs/hubfs/image8.png%3Fwidth%3D846%26name%3Dimage8.png" alt="install and check your rails version1"&gt;&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IpwmoHEG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.engineyard.com/hs-fs/hubfs/image9-1.png%3Fwidth%3D846%26name%3Dimage9-1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IpwmoHEG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.engineyard.com/hs-fs/hubfs/image9-1.png%3Fwidth%3D846%26name%3Dimage9-1.png" alt="install and check your rails version2"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;install bundler&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--paLd_mHf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.engineyard.com/hs-fs/hubfs/image4-1.png%3Fwidth%3D846%26name%3Dimage4-1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--paLd_mHf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.engineyard.com/hs-fs/hubfs/image4-1.png%3Fwidth%3D846%26name%3Dimage4-1.png" alt="install bundler"&gt;&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;bundle your gems&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DzAoC5IW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.engineyard.com/hs-fs/hubfs/image1-1.png%3Fwidth%3D847%26name%3Dimage1-1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DzAoC5IW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.engineyard.com/hs-fs/hubfs/image1-1.png%3Fwidth%3D847%26name%3Dimage1-1.png" alt="bundle your gems"&gt;&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt; &lt;em&gt;install nodejs&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NxfCv43e--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.engineyard.com/hs-fs/hubfs/image7.png%3Fwidth%3D849%26name%3Dimage7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NxfCv43e--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.engineyard.com/hs-fs/hubfs/image7.png%3Fwidth%3D849%26name%3Dimage7.png" alt="install nodejs"&gt;&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt; &lt;em&gt;install your database&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--uhGYQNr2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.engineyard.com/hs-fs/hubfs/image5-1.png%3Fwidth%3D1950%26name%3Dimage5-1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--uhGYQNr2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.engineyard.com/hs-fs/hubfs/image5-1.png%3Fwidth%3D1950%26name%3Dimage5-1.png" alt="install your database."&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Make sure to have a properly indented &lt;a href="https://circleci.com/blog/what-is-yaml-a-beginner-s-guide/"&gt;database.yml file&lt;/a&gt; and set the database names in this file. Indentation matters because YAML is whitespace-sensitive, and a misaligned file will be read incorrectly. Another important factor is the database names: the names in this &lt;em&gt;yml&lt;/em&gt; file must match the database names of your Rails application.&lt;/p&gt;

&lt;p&gt;Finally, the vagrant environment configuration is complete, and you can now create your application from scratch.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating the application
&lt;/h2&gt;

&lt;p&gt;Creating an application from scratch is done on your local machine, not inside the vagrant box. &lt;/p&gt;

&lt;p&gt;On your local machine, open a new tab in the same directory as your Vagrantfile. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Now, to create a new application, run:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;rails new your_app_name --database=postgresql&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The application is now created. Configure the database with names and permissions, as discussed above.&lt;/li&gt;
&lt;li&gt;Open the terminal where the vagrant is running. &lt;/li&gt;
&lt;li&gt;Move into your synced directory, which is “&lt;em&gt;/vagrant_files”&lt;/em&gt; in our example. &lt;/li&gt;
&lt;li&gt;Move into the new app’s directory.&lt;/li&gt;
&lt;li&gt;Run ‘&lt;em&gt;bundle install&lt;/em&gt;’.&lt;/li&gt;
&lt;li&gt;Run ‘&lt;em&gt;rails s’&lt;/em&gt; and your app should be up and running at the port or URL as per your definition.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Using the existing application
&lt;/h2&gt;

&lt;p&gt;Open a new terminal in the directory of your Vagrantfile and clone your existing app’s code from GitHub.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6Ss3_8lN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.engineyard.com/hs-fs/hubfs/image11.png%3Fwidth%3D849%26name%3Dimage11.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6Ss3_8lN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://blog.engineyard.com/hs-fs/hubfs/image11.png%3Fwidth%3D849%26name%3Dimage11.png" alt="clone-existing-app"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, update the database with proper names.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open the terminal where the vagrant is running. &lt;/li&gt;
&lt;li&gt;Move into your synced directory, which is “/vagrant_files” in our example. &lt;/li&gt;
&lt;li&gt;Move into the new app’s directory.&lt;/li&gt;
&lt;li&gt;Run ‘&lt;em&gt;bundle install&lt;/em&gt;’.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Run ‘&lt;em&gt;rails s’&lt;/em&gt; and your app should be up and running at the port or URL as per your definition.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;At first, the process of creating and setting up a virtual machine and a vagrant box can be tedious. But once tuned in, there is no going back.&lt;/p&gt;

&lt;p&gt;There are options to use existing vagrant boxes, or to use Packer instead of doing all the setup manually. Either way, it is always better to understand the logic and workings of any predefined scheme, for better understanding and easier troubleshooting. So, try building your own version as discussed above before creating your app. Once you know the detailed flow, you can skip the basic steps by using predefined Packer templates and move up the ladder to further challenges. &lt;/p&gt;

&lt;p&gt;It is also possible to replace virtual machines with Docker containers. Want to know more about virtual machines and Docker? Check out our blog post - &lt;a href="https://blog.engineyard.com/docker-vs-virtual-machines-explained"&gt;Docker Vs Virtual Machines Explained&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To deploy and start your Ruby apps on Engine Yard, start your &lt;a href="https://signup.ey.io/?_ga=2.953649.1268158655.1630289997-1200590696.1626847843"&gt;free trial&lt;/a&gt; now.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Credits:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Images source:&lt;/em&gt; &lt;a href="https://dev.to/denisepen/setting-up-vagrant-for-a-rails-application-kgc"&gt;&lt;em&gt;DEV Community&lt;/em&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ruby</category>
      <category>rails</category>
      <category>webdev</category>
      <category>vagrant</category>
    </item>
    <item>
      <title>Rails encrypted credentials on 6.2</title>
      <dc:creator>DevGraph</dc:creator>
      <pubDate>Mon, 13 Sep 2021 07:18:16 +0000</pubDate>
      <link>https://dev.to/devgraph/rails-encrypted-credentials-on-6-2-2bkm</link>
      <guid>https://dev.to/devgraph/rails-encrypted-credentials-on-6-2-2bkm</guid>
      <description>&lt;p&gt;By Ritu Chaturvedi&lt;/p&gt;

&lt;p&gt;Any Rails application has secrets to store: at minimum the secret key base, along with tokens for third-party APIs. With successive version updates, handling secrets has become easier.&lt;/p&gt;

&lt;p&gt;Initially, there were two methods to handle secrets.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The first method read secrets such as &lt;em&gt;secret_key_base&lt;/em&gt; from environment variables and committed the secrets.yml file to the repository. This required many safety precautions.&lt;/li&gt;
&lt;li&gt;The alternate method saved secrets in the secrets.yml file and avoided committing it to the repository.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In version 5.1, encrypted secrets were introduced, handled by the secrets.yml.enc file together with an encryption key. The encryption key enabled us to commit the secrets to the repository safely.&lt;/p&gt;

&lt;p&gt;With Rails 5.2, plain text credentials became obsolete. Since then, only encrypted credentials have been used, stored and accessed through two files: &lt;em&gt;credentials.yml.enc&lt;/em&gt; and &lt;em&gt;master.key&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Credentials were stored in &lt;em&gt;config/credentials.yml.enc&lt;/em&gt; and the key was stored on &lt;em&gt;config/master.key&lt;/em&gt;. This feature enabled deploying code and credentials together and also storing all credentials in one place.&lt;/p&gt;
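
&lt;p&gt;To make the mechanism concrete, here is a minimal plain-Ruby sketch of how an encrypted credentials file works: a random key (playing the role of &lt;em&gt;master.key&lt;/em&gt;) encrypts the YAML secrets, and only a holder of that key can read them back. This illustrates the idea only; Rails itself does this through ActiveSupport::MessageEncryptor behind the &lt;em&gt;rails credentials:edit&lt;/em&gt; command.&lt;/p&gt;

```ruby
require "openssl"
require "yaml"

# A random 128-bit key plays the role of config/master.key.
key = OpenSSL::Cipher.new("aes-128-gcm").random_key

# The plain-text credentials, as YAML.
plaintext = { "aws" => { "access_key_id" => "123" } }.to_yaml

# Encrypt: this ciphertext is what would live in credentials.yml.enc.
cipher = OpenSSL::Cipher.new("aes-128-gcm").encrypt
cipher.key = key
iv = cipher.random_iv
encrypted = cipher.update(plaintext) + cipher.final
tag = cipher.auth_tag

# Decrypt: only possible with the key from master.key.
decipher = OpenSSL::Cipher.new("aes-128-gcm").decrypt
decipher.key = key
decipher.iv = iv
decipher.auth_tag = tag
secrets = YAML.safe_load(decipher.update(encrypted) + decipher.final)

puts secrets["aws"]["access_key_id"]   # "123"
```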

&lt;h2&gt;
  
  
  &lt;strong&gt;Handling multi-environment credentials before rails 6&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Before Rails 6, credentials and configurations for all environments were saved in a single file, with the environment as the top-level key, and multi-environment credentials were handled by specifying the environment explicitly:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;development:
  aws:
    access_key_id: 123
    secret_access_key: 345
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;and a value was accessed by referencing its key, such as &lt;em&gt;access_key_id&lt;/em&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Handling multi-environment credentials in rails 6
&lt;/h2&gt;

&lt;p&gt;The Rails 6 version has taken further steps to improve the scalability of the &lt;a href="https://www.bigbinary.com/blog/rails-6-adds-support-for-multi-environment-credentials?utm_source=Engineyard.com"&gt;rails framework by including multi-environment credentials.&lt;/a&gt; Instead of one credential file handling the secrets for all environments, separate credential files are created for each environment and point of delivery. Though this necessitates a separate encryption key per environment, the feature brings more safety and clarity. The built-in multi-environment credentials also mean the encryption/decryption key only needs to be uploaded to each server once.&lt;/p&gt;

&lt;p&gt;With this update, a global credential file is still enough for multiple environments. And when an environment is passed, two files are created: &lt;em&gt;config/credentials/#{environment}.yml.enc&lt;/em&gt; and &lt;em&gt;config/credentials/#{environment}.key&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Let us consider an example. The below command&lt;/p&gt;

&lt;p&gt;&lt;em&gt;rails credentials:edit --environment prod&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;would create the credential files for the production environment as &lt;em&gt;config/credentials/prod.yml.enc&lt;/em&gt; and &lt;em&gt;config/credentials/prod.key&lt;/em&gt;. If the environment-specific file is missing or was never created, the default &lt;em&gt;credentials.yml.enc&lt;/em&gt; file will be used.&lt;/p&gt;

&lt;p&gt;Also, the &lt;em&gt;config/credentials/prod.yml.enc&lt;/em&gt; file would be committed to the repository, while the &lt;em&gt;config/credentials/prod.key&lt;/em&gt; file would not be.&lt;/p&gt;
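
&lt;p&gt;The fallback rule described above can be sketched in plain Ruby. The helper below is hypothetical, not Rails internals; it simply shows the lookup order: prefer the environment-specific file, otherwise use the global one.&lt;/p&gt;

```ruby
require "pathname"

# Hypothetical helper mirroring the credentials lookup rule:
# use config/credentials/ENV.yml.enc when it exists,
# otherwise fall back to config/credentials.yml.enc.
def credentials_path(env, root = Pathname.new("config"))
  per_env = root.join("credentials", "#{env}.yml.enc")
  per_env.exist? ? per_env : root.join("credentials.yml.enc")
end

puts credentials_path("prod")
```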

&lt;h2&gt;
  
  
  &lt;strong&gt;Credentials in rails&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Rails knows which credential file to use in a specific environment. If environment-specific credentials are defined, they take precedence over the global credentials.&lt;/p&gt;

&lt;p&gt;For the above example:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;Rails.application.credentials.config
# =&amp;gt; {:aws=&amp;gt;{:access_key_id=&amp;gt;"123", :secret_access_key=&amp;gt;"345"}}
Rails.application.credentials.aws[:access_key_id]
# =&amp;gt; "123"
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Added features in rails 6&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Rails 6 also added a feature to explicitly specify the location where the credential file is stored. Committing to a known path in the repository makes these files easier to handle. To save the files in a specific path of your choice, &lt;em&gt;config.credentials.content_path&lt;/em&gt; and &lt;em&gt;config.credentials.key_path&lt;/em&gt; are used. While using these, make sure to upload both a valid key and the credentials to avoid errors and interruptions. If the matching key is not available at the path, the encrypted credentials remain a bunch of meaningless characters.&lt;/p&gt;

&lt;p&gt;To handle credentials in local environments, &lt;em&gt;config/credentials/#{environment}.key&lt;/em&gt; or the plain &lt;em&gt;config/master.key&lt;/em&gt; is used. The scenario varies in the production environment: there, the &lt;em&gt;RAILS_MASTER_KEY&lt;/em&gt; environment variable can be used to hold the encryption key from the .key file.&lt;/p&gt;

&lt;p&gt;The below setting tells Rails to look for the credentials file at &lt;em&gt;config/credentials/local.yml.enc&lt;/em&gt; instead of &lt;em&gt;config/credentials/development.yml.enc&lt;/em&gt;:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;config.credentials.content_path = ‘config/credentials/local.yml.enc’&lt;/em&gt;&lt;/p&gt;
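
&lt;p&gt;The companion &lt;em&gt;key_path&lt;/em&gt; setting points Rails at the matching key file. A custom pair might look like this (the file paths are illustrative, not defaults):&lt;/p&gt;

```ruby
# config/environments/development.rb (illustrative paths)
config.credentials.content_path = "config/credentials/local.yml.enc"
config.credentials.key_path = "config/credentials/local.key"
```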

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The Rails family puts constant effort into improving the efficiency and scalability of the framework. With multi-environment credentials enabled, applications deployed across multiple platforms and pods find it easier to keep their code simple and their credentials accessible.&lt;/p&gt;

&lt;p&gt;You can use &lt;a href="https://support.cloud.engineyard.com/hc/en-us/articles/360058885853-Introduction-to-Engine-Yard-Kontainers?utm_source=devgraph&amp;amp;utm_medium=internal-link&amp;amp;utm_campaign=Rails-encrypted-credentials-on-6.2&amp;amp;utm_id=Qiworks&amp;amp;_ga=2.136879410.367885970.1629918955-549482685.1629918955"&gt;Engine Yard&lt;/a&gt; to connect to your database with Rails credentials enabled for improved security. Click &lt;a href="https://support.cloud.engineyard.com/hc/en-us/articles/1260801333150-Using-a-Database-with-your-Kontainers-Application"&gt;here&lt;/a&gt; to learn how to proceed.&lt;/p&gt;

</description>
      <category>ruby</category>
      <category>rails</category>
      <category>webdev</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Tutorial on how to use Active Storage on Rails 6.2</title>
      <dc:creator>DevGraph</dc:creator>
      <pubDate>Mon, 13 Sep 2021 07:13:34 +0000</pubDate>
      <link>https://dev.to/devgraph/tutorial-on-how-to-use-active-storage-on-rails-6-2-f86</link>
      <guid>https://dev.to/devgraph/tutorial-on-how-to-use-active-storage-on-rails-6-2-f86</guid>
      <description>&lt;p&gt;By &lt;a href="https://blog.engineyard.com/author/ritu-chaturvedi"&gt;Ritu Chaturvedi&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Understanding Active Storage in Rails 6.2
&lt;/h1&gt;

&lt;p&gt;Active Storage is a built-in Rails gem that developers widely use to handle file uploads. Combined with the encrypted credentials feature in the latest releases of Rails, Active Storage is a safe and easy way to upload, serve, and analyze files on cloud-based storage services as well as local storage.&lt;/p&gt;

&lt;p&gt;To start with, we need to install the &lt;a href="https://guides.rubyonrails.org/active_storage_overview.html"&gt;active storage gem&lt;/a&gt;. This step is followed by declaring attachment associations, uploading attachments, processing attachments, and adding validations.&lt;/p&gt;

&lt;h2&gt;
  
  
  Active storage gem Installation
&lt;/h2&gt;

&lt;p&gt;With any new application, the first step to enable Active Storage is to &lt;a href="https://blog.engineyard.com/how-to-build-your-own-gem-in-ruby"&gt;install the gem&lt;/a&gt;. Run the command below, which copies over a migration that creates the three basic tables automatically:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;bin/rails active_storage:install&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Then run the migration:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;bin/rails db:migrate&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This creates three tables for your application: active_storage_blobs, active_storage_variant_records, and active_storage_attachments. Of these, active_storage_attachments is a polymorphic join table.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The blobs table holds some straightforward details about the uploaded file like filename and content type. It also stores the encoded key that points towards the uploaded file in the active storage service.&lt;/li&gt;
&lt;li&gt;The &lt;em&gt;active_storage_attachments&lt;/em&gt; table is a polymorphic join table that holds references to a &lt;em&gt;blob&lt;/em&gt; and a &lt;em&gt;record&lt;/em&gt;. The polymorphic option is set to true in the &lt;em&gt;t.references :record, null: false, polymorphic: true, index: false&lt;/em&gt; line when this table is created, because the &lt;em&gt;record_id&lt;/em&gt; column can point to records of different types. The type is identified with the help of a foreign key and a class name stored in the table.&lt;/li&gt;
&lt;li&gt;Usually, Active Storage stores the original copies, but it also lets the user make modifications, like resizing images. The &lt;em&gt;active_storage_variant_records&lt;/em&gt; table holds details about all these modified files.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Functionalities of Active Storage
&lt;/h2&gt;

&lt;p&gt;The Active Storage gem is used to attach, remove, serve, and analyze files.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Attaching files:&lt;/strong&gt; Files can be attached singly or in multiples. Use the ‘&lt;em&gt;has_one_attached&lt;/em&gt;’ and ‘&lt;em&gt;has_many_attached&lt;/em&gt;’ macros accordingly.&lt;/p&gt;

&lt;p&gt;Below is sample code to add attachments.&lt;/p&gt;

&lt;p&gt;One attachment:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;class User &amp;lt; ApplicationRecord
  has_one_attached :avatar
end
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Many attachments:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;class Message &amp;lt; ApplicationRecord
  has_many_attached :images
end
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Active Storage enables attaching files and data to records on storage services. If we expand the ‘&lt;em&gt;has_one_attached&lt;/em&gt;’ declaration, we can see that it defines an &lt;em&gt;avatar_attachment&lt;/em&gt; association and an &lt;em&gt;avatar_blob&lt;/em&gt; association reached through it.&lt;/p&gt;
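
&lt;p&gt;Roughly, the macro defines two associations under the hood. The sketch below is a simplification of what &lt;em&gt;has_one_attached :avatar&lt;/em&gt; generates, shown for orientation only (it is not the exact Active Storage source; options are trimmed):&lt;/p&gt;

```ruby
# Simplified sketch of what has_one_attached :avatar defines
# inside the model. (Not the exact Active Storage source.)
#
# has_one :avatar_attachment,
#         -> { where(name: "avatar") },
#         class_name: "ActiveStorage::Attachment",
#         as: :record
#
# has_one :avatar_blob,
#         through: :avatar_attachment,
#         source: :blob
```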

&lt;p&gt;&lt;a href="https://edgeguides.rubyonrails.org/active_storage_overview.html"&gt;Third-party services&lt;/a&gt; are used to open, view, and operate the attached files from storage services. The content type of attachment decides the kind of service to be used. &lt;/p&gt;

&lt;p&gt;If the &lt;em&gt;avatar_attachment&lt;/em&gt; is an image file attachment, here’s how you can upload an image to this model. &lt;/p&gt;

&lt;pre&gt;&lt;code&gt;&amp;lt;%= f.label :avatar %&amp;gt;
&amp;lt;%= f.file_field :avatar %&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;In order to display the uploaded image, run:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;&amp;lt;%= image_tag event.avatar %&amp;gt;&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;It is always advisable to add &lt;a href="https://guides.rubyonrails.org/active_record_validations.html#custom-validators"&gt;custom validations&lt;/a&gt; to the files uploaded since the Active storage feature does not include in-built validations. File type and the file size must be validated before the upload, to avoid errors and complications.&lt;/p&gt;
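
&lt;p&gt;As a plain-Ruby sketch of such a check (the method name, size limit, and allowed types here are made-up example values, not Active Storage defaults; in a real app this logic would live inside a custom model validation):&lt;/p&gt;

```ruby
# Hypothetical pre-upload check: the limits below are
# illustrative, not Active Storage defaults.
MAX_AVATAR_BYTES = 2 * 1024 * 1024          # 2 MB cap
ALLOWED_TYPES = ["image/png", "image/jpeg"] # accepted formats

def avatar_errors(content_type, byte_size)
  errors = []
  errors.push("must be a PNG or JPEG") unless ALLOWED_TYPES.include?(content_type)
  errors.push("must be smaller than 2 MB") if byte_size > MAX_AVATAR_BYTES
  errors
end

puts avatar_errors("image/gif", 3_000_000).inspect
# ["must be a PNG or JPEG", "must be smaller than 2 MB"]
```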

&lt;p&gt;As discussed above, Rails allows modification of the uploaded files and stores the data in the &lt;em&gt;variants&lt;/em&gt; table. For example, to process image files, the &lt;strong&gt;&lt;em&gt;image_processing&lt;/em&gt;&lt;/strong&gt; gem of Rails can be used.&lt;/p&gt;

&lt;p&gt;Remember to encrypt your storage service credentials before uploading to the cloud. Encrypted credentials are a safe way to handle cloud-based storage services like Amazon S3. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Removing files:&lt;/strong&gt; The attached files can also be removed from the records by using the purge command.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;user.avatar.purge&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Serving files:&lt;/strong&gt; The uploaded files can be served by active storage. Two methods are used for this - the redirecting method and the proxying method. The redirect method uses the file’s blob URL to serve the file, and the proxying method will download data in files from the storage service.&lt;/p&gt;

&lt;p&gt;As mentioned, Active Storage uses third-party software to enable file processing. You can download and install &lt;a href="https://github.com/libvips/libvips"&gt;&lt;em&gt;libvips&lt;/em&gt; v8.6+&lt;/a&gt; or &lt;a href="https://imagemagick.org/index.php"&gt;&lt;em&gt;ImageMagick&lt;/em&gt;&lt;/a&gt; for image analysis and transformations, &lt;a href="http://ffmpeg.org/"&gt;&lt;em&gt;ffmpeg&lt;/em&gt; v3.4+&lt;/a&gt; for video/audio analysis and video previews, and &lt;a href="https://poppler.freedesktop.org/"&gt;&lt;em&gt;poppler&lt;/em&gt;&lt;/a&gt; or &lt;a href="https://mupdf.com/"&gt;&lt;em&gt;muPDF&lt;/em&gt;&lt;/a&gt; for PDF previews separately, as Rails will not install this software for you.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Major active storage updates in recent releases&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In the recent &lt;a href="https://weblog.rubyonrails.org/releases/"&gt;releases of Rails&lt;/a&gt;, the active storage gem has seen notable updates. The major ones are as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;With &lt;em&gt;ffmpeg&lt;/em&gt;, the user can configure all the parameters used for generating a video preview image under &lt;em&gt;config.active_storage.video_preview_arguments&lt;/em&gt;.
&lt;/li&gt;
&lt;li&gt;No error is raised when the &lt;em&gt;mime&lt;/em&gt; type is unrecognizable.&lt;/li&gt;
&lt;li&gt;When an image previewer cannot generate a preview image, &lt;em&gt;ActiveStorage::PreviewError&lt;/em&gt; is raised.&lt;/li&gt;
&lt;li&gt;Even with no service selected, Blob creation will not crash.&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Like the MuPDF previewer, the Poppler PDF previewer also uses the original document's crop box to display a preview image. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Configure attachments for the service you want to store them in.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use any URL, private or public, for blobs. The latest Rails update on active storage extends support for this feature and ensures the public URLs are permanent.  &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now that you have seen how to install the Active Storage gem and explored its functionalities, recent updates, and file uploads, we hope this blog helped you gain a better understanding of Active Storage in Rails 6.2.&lt;/p&gt;

</description>
      <category>ruby</category>
      <category>rails</category>
      <category>webdev</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>On what parameters dev.to features stories in its feed?</title>
      <dc:creator>DevGraph</dc:creator>
      <pubDate>Thu, 26 Aug 2021 05:29:26 +0000</pubDate>
      <link>https://dev.to/devgraph/on-what-parameters-dev-to-features-stories-in-its-feed-3dg2</link>
      <guid>https://dev.to/devgraph/on-what-parameters-dev-to-features-stories-in-its-feed-3dg2</guid>
      <description></description>
      <category>discuss</category>
    </item>
    <item>
      <title>ARM-Based Cloud Computing: Inexpensive and Fast</title>
      <dc:creator>DevGraph</dc:creator>
      <pubDate>Wed, 18 Aug 2021 04:13:39 +0000</pubDate>
      <link>https://dev.to/devgraph/arm-based-cloud-computing-inexpensive-and-fast-4244</link>
      <guid>https://dev.to/devgraph/arm-based-cloud-computing-inexpensive-and-fast-4244</guid>
      <description>&lt;p&gt;By Ritu Chaturvedi&lt;/p&gt;

&lt;p&gt;Many of you have heard about &lt;a href="https://www.embeddedcomputing.com/application/consumer/smart-home-tech/intro-to-raspberry-pi-home-automation"&gt;home automation with Raspberry Pi&lt;/a&gt; or that the latest smartphones are more clever than some desktops. You may have been wondering why tiny computers are not used on an industrial scale outside of portable gear.&lt;/p&gt;

&lt;p&gt;They are. The market for alternatively architectured computers is not restricted to devices for private use. Nowadays, it is possible to equip cloud computing facilities with such machines.&lt;/p&gt;

&lt;p&gt;We are talking about ARM technology. You can use it for CI/CD on a corporate level but need to prepare your development routines for this transition.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;An Introduction to ARM Machines&lt;/strong&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;A Brief History of ARM&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;ARM is an acronym that has kept its meaning but changed the underlying abbreviated components. Originally, it stood for Acorn RISC Machine. Acorn Computers Ltd. used to be a British manufacturer of microcomputers founded back in 1978. In 1990, after a few years of cooperative experimental projects with Apple, one section of the company was separated and established as a new firm: Arm Ltd. &lt;/p&gt;

&lt;p&gt;Today, ARM stands for Advanced RISC Machines. &lt;/p&gt;

&lt;p&gt;Arm Ltd. seldom appears in the news. The reason is that Arm Ltd. mainly focuses on the development of the RISC architecture — we will come to that in a moment — and not on the end-user products. It sells licenses to other companies that manufacture computer processors and sell them to third parties or incorporate them into their own products. Raspberry Pis have ARM cores, as do iPads and a wide range of their tablet competitors.&lt;/p&gt;

&lt;p&gt;By the way, the term “ARM core” is applied not only to CPUs manufactured under Arm Ltd.’s license. It can also be a Qualcomm computer chip that was designed independently.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;What Is RISC?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;RISC stands for reduced instruction set computer. To help you understand RISC and its hidden potential for cloud computing, we will directly introduce its counterpart: a complex instruction set computer called CISC. &lt;a href="https://en.wikipedia.org/wiki/Complex_instruction_set_computer"&gt;CISC computers&lt;/a&gt; are basically all personal computers and the biggest part of the data center hardware. In other words, the CISC architecture is what we know to be a usual computer. &lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;What Is the Difference?&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;As we know, a CPU is an electronic circuit driven by an oscillating clock signal; the instructions specified in a software program are executed in step with that clock. When you choose hardware for end users, you usually look at the number accompanied by MHz, standing for megahertz. Hertz measures the number of clock cycles, that is, how often the clock signal oscillates within one second. We are accustomed to taking the higher number for the better.&lt;/p&gt;

&lt;p&gt;This is not completely wrong. But CISC computers made the race for higher clock speed their main direction of development. A CISC processor packs multiple low-level operations into a single complex instruction, which may take several clock cycles to execute. On the contrary, RISC processors execute one simple instruction per clock cycle and, therefore, break complex operations down into sequences of simple instructions. RISC processors impose restrictions on the size of a single instruction, but they do not hunt for the highest clock speed.&lt;/p&gt;

&lt;p&gt;Here comes the main difference between the two architectures.&lt;/p&gt;

&lt;p&gt;RISC architecture aims to use simple hardware and simple software that needs fewer clock cycles to execute. “Fewer” and “simpler” means that you need fewer transistors and, consequently, less electric power to run an ARM-based machine.&lt;/p&gt;

&lt;p&gt;Unfortunately, an ARM-based processor needs more memory, since breaking work into simple instructions means compilers emit more of them. In the early days, manufacturing memory components was difficult and expensive. That’s why the evolution of end-user computers took a different path, and RISC architecture had to wait decades for a new rise.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;RISC vs CISC: Further Advantages and Disadvantages&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;You can run complex applications on simple processors — ARM-based ones — but such a machine will require a larger memory cache. &lt;/p&gt;

&lt;p&gt;In addition to that, two disadvantages have evolved historically. They are not per se drawbacks of the ARM technology, but they are rather the impact of the computer industry following a non-RISC path for years.&lt;/p&gt;

&lt;p&gt;First, RISC remained a less popular niche for programmers. As a result, it is now more difficult to find a good developer who can write applications for ARM-based computers. Second, creating an application for RISC architecture means more effort, since you need to transform complex instructions into simpler ones. And most current software applications are built complex because they target complex CPUs.&lt;/p&gt;

&lt;p&gt;But, in general, RISC architecture means cheaper and smaller hardware components that run faster and consume less energy. Once you’ve managed to adjust the legacy code to the new type of hardware, you can count on a significant cost reduction.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;ARM and Cloud Computing&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Arm Ltd. has willingly done its homework in the field of cloud computing and came up with two convincing arguments that ARM-based data centers can offer:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Single threading&lt;/li&gt;
&lt;li&gt;Linear scalability&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Single threading means that a CPU processes one thread at a time, or that one thread gets one CPU, with no shared cores slowing down performance. Since many web applications open one thread per request, and since most of them run in the cloud, ARM-based machines stand a good chance of replacing their CISC rivals.&lt;/p&gt;

&lt;p&gt;Scalability used to follow a curve: in the beginning, it goes up, but then it slows down and grows in steps. An ARM-based server scales up in a linear manner: the pace remains the same as capacity climbs. You do not need to run complex predictions or prognoses.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Load/Store Architecture&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Apart from this, ARM-based machines seem to have overcome the memory problem by integrating the load/store architecture. &lt;/p&gt;

&lt;p&gt;During any computing operation, the mainstream CISC architecture directly accesses the memory, takes the data from it, changes it, and stores the result.&lt;/p&gt;

&lt;p&gt;RISC architecture separates memory access operations from computing operations, making each calculation a two-step instruction.&lt;/p&gt;

&lt;p&gt;Small pieces of an ARM-based processor’s fast temporary memory are called registers. Store instructions take data from a register and write it to the main memory. Load operations do the opposite: they access the main memory and move data into a register.&lt;/p&gt;

&lt;p&gt;The load/store architecture decreases memory access latency. &lt;/p&gt;
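
&lt;p&gt;The two-step pattern can be illustrated with a toy register machine in Ruby. This is an illustration of the concept only, not real machine code: arithmetic never touches main memory directly, it only operates on registers.&lt;/p&gt;

```ruby
# Toy model of load/store style: computation happens only on
# registers; memory is touched only by explicit load and store.
memory = { 0x10 => 7 }   # main memory, address => value
registers = {}

registers[:r1] = memory[0x10]        # LOAD: memory -> register
registers[:r1] = registers[:r1] + 1  # compute on registers only
memory[0x10] = registers[:r1]        # STORE: register -> memory

puts memory[0x10]   # 8
```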

&lt;h4&gt;
  
  
  &lt;strong&gt;The Future of Computing&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;RISC architecture can change the future of software development. A boom of ARM-based processors would promote applications that are close to the metal and directly access RAM and hardware registers. These applications are programmed in low-level languages, such as the C family, whose commands map closely to the hardware they run on.&lt;/p&gt;

&lt;p&gt;Besides, ARM processors offer up to 40% better price-performance. The cost-saving factor makes them particularly attractive.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The Modern Market of ARM Hardware for Cloud Computers&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;We hope to have made you highly enthusiastic about ARM-based (cloud) computing. By now, you may be asking yourself, “How do I get this magic hardware?” For building your own ARM-based data center, you can consider one of the following options.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Arm Limited Neoverse Family&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;We will briefly describe the official offer of Arm Ltd. As we mentioned before, it basically sells the blueprints that you need to implement into your hardware design and then manufacture.&lt;/p&gt;

&lt;p&gt;Neoverse is a family of systems on a chip (SoC). Contrary to traditional computing, ARM processors do not rely on a motherboard coupling the CPU with other important components on a circuit board. An SoC is an integrated circuit that already contains all those components: a CPU, storage, input/output ports, and sometimes a GPU.&lt;/p&gt;

&lt;p&gt;The ARMv8-A extensions solved the old problem of incompatibility between ARM-based machines and 64-bit applications and operating systems. Besides, the ARMv8-A architecture has enhanced capabilities for performing calculations with integer and float numbers due to Scalable Vector Extensions (SVE). It also allows efficient memory partitioning and monitoring and has various security features.&lt;/p&gt;

&lt;p&gt;According to Arm, Neoverse outperforms comparable Intel chips by around 40% so far. Combined with lower power consumption and smaller sizes that reduce the physical facilities you need to rent, an ARM data center seems to offer a bright future for distributed computing.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Apple M1: Maybe&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;In November 2020, Apple announced its divorce from Intel and the beginning of a new era. It would replace all traditional chips with SoCs in their Mac machines within the next two years. After less than one year, only three Mac models have undergone the transformation.&lt;/p&gt;

&lt;p&gt;But a lot of software developers started to hope for the emergence of M1-based data centers. That would definitely eliminate the second challenge around ARM-based clouds.&lt;/p&gt;

&lt;p&gt;The challenge is that you would not only need to take care of the manufacturing and installation of your hardware, but also to optimize your existing applications to make them run in your new ARM-based data center. With the M1 transformation finished, the applications can be developed on ARM machines and then directly deployed into an ARM-based cloud.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;The Realistic Option: AWS Graviton Processor&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Those who use Amazon Elastic Compute Cloud (Amazon EC2) can see Neoverse cores in action: they are implemented as AWS Graviton2 processors.&lt;/p&gt;

&lt;p&gt;With this option, you simply subscribe to Amazon EC2 and use it as usual, but configure it to run on AWS Graviton. It is highly recommended for applications that can benefit from smaller but faster cores, including but not restricted to web applications, containerized microservices, extract-transform-load (ETL) pipelines, online games, video encoding applications, and high-performance computing in general.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.devgraph.com/2021/05/26/cloud-cost-optimization-10-lessons-learned-from-scanning-45k-aws-accounts/"&gt;AWS Graviton-based EC2 instances open the doors for more cost savings.&lt;/a&gt; Their power allows you to use open-source databases, such as MySQL, MariaDB, and PostgreSQL. They are known to be memory-intensive but, as any open-source, are free to use. You can deploy them on AWS EC2 and let them run on ARM-based capacities.&lt;/p&gt;

&lt;p&gt;AWS Graviton is not limited to memory-consuming technologies. You can use it for burstable general-purpose workloads, as well as compute-intensive workloads including real-time analytics, since, like any ARM processor, it can scale quickly.&lt;/p&gt;

&lt;p&gt;Indeed, if you decide to subscribe to an Amazon pay-as-you-go service, you will need to fit into the Amazon architecture and Amazon pricing model. We are not discouraging you from doing this. But we have a better alternative — or a compromise, depending on your needs. Before we explain it in detail, let us walk you through an important preparation step that lies between you and your dream ARM-based cloud. &lt;/p&gt;

&lt;p&gt;No worries, our alternative solution will include this preparation as well. We still want to show you the standard way of doing things.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;How to Prepare for Leveraging This Technology&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;As with any cloud migration, you will need to do some adjustments to your existing applications before you can move them into the cloud. A new ARM-based environment requires even more fundamental changes since we know that the very CPU architecture of ARM-based servers is different.&lt;/p&gt;

&lt;p&gt;But we do not mean that those are changes you won’t manage to perform. You need to ensure that your applications will be compatible with the new environment. And you do not need to re-build them completely. Instead, use containerization technology.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Your First Container for ARM: A Short Overview of the Steps&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Containers are virtual envelopes for software applications. Each one includes the application source code and a minimal set of environmental elements: libraries and selected operating system components. Together, these allow the application to run under any operating system and in the cloud, making it OS-agnostic.&lt;/p&gt;

&lt;p&gt;Docker is the most popular tool for creating containers. In 2019, Docker and ARM established a partnership to help their users in moving containerized applications to ARM-enabled platforms. Since then, Docker has implemented a multi-architecture imaging feature. It allows you to build containers that can be deployed on any CPU architecture.&lt;/p&gt;

&lt;p&gt;The whole procedure is done with a single “docker build” command, provided that you have Docker installed. To generate a container that is compatible with RISC architecture, use the Buildx plug-in together with the build command and specify which platform you need it for. Buildx must be set as the default builder. This is described in more detail in the &lt;a href="https://docs.docker.com/buildx/working-with-buildx/"&gt;Docker documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Every time you build a container image, it can be stored in a repository, for example on GitHub, from which you then need to download it — or, in GitHub terms, clone the repository — to your local machine.&lt;/p&gt;

&lt;p&gt;Indeed, when you have many applications or small microservices, it starts to look like a lot of manual work prone to human errors that you cannot allow in an enterprise environment.&lt;/p&gt;

&lt;p&gt;Now, let us present the alternative we were talking about before. You can deploy your containerized applications on AWS with a &lt;a href="https://www.engineyard.com"&gt;Platform-as-a-Service solution: Engine Yard Kontainers&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Deploy and Manage Your AWS Applications&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Engine Yard is more than just a platform. We are a team that can help you to perform a transition to the new ARM-based Amazon EC2 infrastructure. At Engine Yard, we assist you in migrating your applications to containers and then to the ARM-based cloud. We share with you predefined templates and detailed documentation. If there are any questions left, you can always approach our expert team.&lt;/p&gt;

&lt;p&gt;Deployments happen with a simple Git push and take only minutes to complete. You can fine-tune regions to deploy and have full control over the memory and your CPU usage. &lt;/p&gt;

&lt;p&gt;With Engine Yard’s log aggregation feature, you can monitor application failures from a single universal cockpit. We enable you to dig into failures and anomalies in resource usage, and to set up notifications so you can act fast on any issues. &lt;/p&gt;

&lt;p&gt;We can also help you in moving your databases into AWS services. Engine Yard offers automated backup and recovery for any kind of database.&lt;/p&gt;

&lt;p&gt;Once the migration is done, you can manage your AWS instances yourself or outsource this task to our team. You can create a clone or copy of your environments to separate production and development stages. &lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Pricing Advantages: Transparency and Full Control&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Although ARM-based cloud computing is generally cheaper, we understand that you still want to have a clear overview of your costs. With Engine Yard, you not only customize your working environments but can also set business rules that allow you to stop worrying about provisioning going out of control. You can scale up your applications without skyrocketing your costs. &lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In this article, we tried to touch on the long story of two types of CPU architectures — RISC and CISC — and how the potential of the former remained dormant for years but is rapidly unfolding now. &lt;/p&gt;

&lt;p&gt;Reduced instruction set computers are back on stage. This is an inspiring trend in cloud computing that promises radical cost reduction and better computing power. &lt;/p&gt;

&lt;p&gt;However, the underlying differences between the two architecture types would require you to modify your applications before moving them into an ARM-based cloud. An easier way to prepare your applications is to containerize them. Containerization preserves the core functionalities and makes your apps OS- and CPU-architecture agnostic.&lt;/p&gt;

&lt;p&gt;For a secure transition, you may need a deployment management platform that is simple to operate and allows you to monitor your cloud costs.&lt;/p&gt;

&lt;p&gt;With Engine Yard, you can move your applications into the ARM-based Amazon cloud and deploy and manage newly built applications with a simple PaaS solution. &lt;/p&gt;

&lt;p&gt;To learn more about ARM support and performance benefits, reach out to &lt;a href="https://developer.arm.com/support"&gt;ARM - Developer support&lt;/a&gt; or the &lt;a href="https://community.arm.com/?_ga=2.65757991.842141114.1629292538-552263405.1629200585"&gt;Arm community&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>arm</category>
      <category>cloud</category>
      <category>aws</category>
      <category>webdev</category>
    </item>
    <item>
      <title>The Developers Guide To Scaling Rails Apps</title>
      <dc:creator>DevGraph</dc:creator>
      <pubDate>Thu, 12 Aug 2021 06:27:05 +0000</pubDate>
      <link>https://dev.to/devgraph/the-developers-guide-to-scaling-rails-apps-3kln</link>
      <guid>https://dev.to/devgraph/the-developers-guide-to-scaling-rails-apps-3kln</guid>
      <description>&lt;p&gt;By Ravi Duddukuru&lt;/p&gt;

&lt;p&gt;From Airbnb to Zendesk, a ton of really great apps were built using the Ruby programming language and the Rails web framework. Although it draws less buzz than front-end frameworks such as React, Angular, and Vue.js, Rails still holds substantial merit in modern software development.&lt;/p&gt;

&lt;p&gt;Ruby on Rails (RoR) is open-source, well-documented, fiercely maintained, and constantly extended with new Gems — community-created open-source libraries, serving as “shortcuts” for standardized configuration and distribution of Ruby code.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://rubygems.org/gems/rails/versions/4.2.6" rel="noopener noreferrer"&gt;Rails&lt;/a&gt; is arguably the biggest RoR gem of them all — a full-stack server-side web application framework that is easy to customize and scale.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Rails? A Quick Refresher
&lt;/h2&gt;

&lt;p&gt;Rails was built on the premise of Model-View-Controller (MVC) architecture.&lt;/p&gt;

&lt;p&gt;This means each Rails app has three interconnected layers, accountable for a respective set of actions:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Model: A data layer, housing the business logic of the app&lt;/li&gt;
&lt;li&gt;Controller: The “brain center”, handling application functions&lt;/li&gt;
&lt;li&gt;View: Defines graphical user interfaces (GUIs) and UI performance&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In essence, the model layer establishes the required data structure and contains the code needed to process incoming data in HTML, PDF, XML, RSS, and other formats. The model layer then communicates updates to the view, which updates the GUI. The controller, in turn, interacts with both models and views. For instance, when receiving an update from a view, it tells the model how to process it. At the same time, it can also tell the view how to display the result to the user.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F53rjyu4frzxjplxyegp7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F53rjyu4frzxjplxyegp7.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
(Basic Rails app architecture. Image source: &lt;a href="https://medium.com/the-renaissance-developer/ruby-on-rails-http-mvc-and-routes-f02215a46a84" rel="noopener noreferrer"&gt;Medium&lt;/a&gt;)&lt;/p&gt;
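&lt;p&gt;To make the flow concrete, here is a minimal plain-Ruby sketch of the MVC round trip described above. It deliberately uses no Rails APIs, and the class names are illustrative only:&lt;/p&gt;

```ruby
# A toy MVC round trip: the controller receives a request, the model
# processes it (the "business logic"), and the view formats the result.
class ArticleModel
  def process(data)
    data.strip.capitalize
  end
end

class ArticleView
  def render(result)
    "Rendered: #{result}"
  end
end

class ArticleController
  def initialize(model, view)
    @model = model
    @view = view
  end

  # Receive the incoming request, delegate to the model, hand off to the view.
  def handle(request)
    @view.render(@model.process(request))
  end
end

controller = ArticleController.new(ArticleModel.new, ArticleView.new)
puts controller.handle("  hello rails ")  # prints "Rendered: Hello rails"
```

&lt;p&gt;In a real Rails app, routing, ActiveRecord models, and ERB templates play these roles, but the division of responsibilities is the same.&lt;/p&gt;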

&lt;p&gt;The underlying MVC architecture lends several important advantages to Rails:&lt;/p&gt;

&lt;p&gt;• Parallel development capabilities — one developer can work on View, while others handle the model subsystem. The above also makes Ruby on Rails a popular choice for rapid application development (RAD) methodology.&lt;/p&gt;

&lt;p&gt;• Reusable code components — controllers, views, and models can be packaged to share and reuse across several features with relative ease. When done right, this results in cleaner, more readable code, as well as faster development timelines. Also, Ruby on Rails is built on the DRY principle (don’t repeat yourself), prompting more frequent code reuse for monotonous functions. &lt;/p&gt;

&lt;p&gt;• Top security — the framework has a host of built-in security-centric features such as protection against SQL injections and XSS attacks, among others. Moreover, there’s plenty of community-shipped gems addressing an array of common and emerging cybersecurity threats.&lt;/p&gt;

&lt;p&gt;• Strong scalability potential — there’s a good reason why jumbo-sized web apps such as GitHub, Twitch, and Fiverr are built on Rails: it scales well when the overall app architecture and deployment strategy are done right. In fact, one of the oldest Rails apps, Shopify, scales to &lt;a href="https://shopify.engineering/write-fast-code-ruby-rails" rel="noopener noreferrer"&gt;processing millions of requests per minute&lt;/a&gt; (RPM).&lt;/p&gt;

&lt;p&gt;In spite of this, many Rails guides still make the arbitrary claim that &lt;a href="https://blog.engineyard.com/5-tips-to-scale-your-ruby-on-rails-application?__hstc=246083099.02435147c8312a5a70348baf53225dcd.1624856368348.1628688292983.1628744259081.56&amp;amp;__hssc=246083099.2.1628744259081&amp;amp;__hsfp=1414547400" rel="noopener noreferrer"&gt;Rails apps are hard to scale&lt;/a&gt;. Is this true? Not entirely, as this post will show.&lt;/p&gt;

&lt;h2&gt;
  
  
  3 Common Problems With Scaling Rails Apps
&lt;/h2&gt;

&lt;p&gt;Legacy lore holds that scaling Rails apps is like passing a camel through the eye of a needle — exasperating and exhausting. &lt;/p&gt;

&lt;p&gt;To better understand where these concerns are coming from, let’s first recap what scalability is for web apps. &lt;/p&gt;

&lt;p&gt;Scalability indicates the application’s architectural capability to handle more user requests per minute (RPM) in the future. &lt;/p&gt;

&lt;p&gt;The keyword here is “architecture”, as your choices of infrastructure configuration, connectivity, and overall layout determine the entire system’s ability to scale. The framework(s) or programming languages you use will have only a marginal (if any) impact on scalability. &lt;/p&gt;

&lt;p&gt;In the case of RoR, developers, in fact, get a slight advantage as the framework promotes clean, modular code that is easy to integrate with more database management systems. Moreover, adding &lt;a href="https://www.devgraph.com/2020/10/09/what-is-a-load-balancer-definition-explanation/?utm_source=EngineYard" rel="noopener noreferrer"&gt;load balancers&lt;/a&gt; for processing a higher number of requests is relatively easy too. &lt;/p&gt;

&lt;p&gt;Yet, the above doesn’t fully eradicate scaling issues on Rails. Let’s keep it real: any app is hard to scale when the underlying infrastructure is subpar.  &lt;/p&gt;

&lt;p&gt;Specifically, Ruby scaling issues often pop up due to:&lt;/p&gt;

&lt;p&gt;• Poor database querying&lt;br&gt;
• Inefficient indexing&lt;br&gt;
• Lack of logging and monitoring &lt;br&gt;
• Subpar database engine selection &lt;br&gt;
• Sluggish caching&lt;br&gt;
• Overly complex and spaghetti code&lt;/p&gt;

&lt;h3&gt;
  
  
  Over-Engineered App Architecture
&lt;/h3&gt;

&lt;p&gt;RoR supports multi-threading. This means the Rails framework can handle concurrent processing of different parts of code. &lt;/p&gt;

&lt;p&gt;On the one hand, multi-threading is an advantage, since it enables you to use CPU time more wisely and ship high-performance apps. &lt;/p&gt;

&lt;p&gt;At the same time, however, the cost of context switching between different threads in highly complex apps can get high. Respectively, performance starts lagging at some point.&lt;/p&gt;
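&lt;p&gt;A quick illustration of the upside: spawning threads in Ruby is cheap and great for overlapping blocking work, though each additional thread also adds scheduling and context-switching overhead. A minimal sketch:&lt;/p&gt;

```ruby
# Four threads each simulate a blocking I/O call; the total wall time is
# roughly one sleep, not four, because the waits overlap.
results = []
mutex = Mutex.new

threads = 4.times.map do |i|
  Thread.new do
    sleep 0.05                          # stand-in for blocking I/O
    mutex.synchronize { results.push(i) }
  end
end
threads.each { |t| t.join }

puts results.sort.inspect               # prints "[0, 1, 2, 3]"
```

&lt;p&gt;With hundreds of threads instead of four, the cost of switching between them starts to erode exactly this benefit, which is the lag the next paragraph warns about.&lt;/p&gt;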

&lt;h4&gt;
  
  
  &lt;em&gt;How to Cope&lt;/em&gt;
&lt;/h4&gt;

&lt;p&gt;By default, Ruby on Rails prioritizes clean, reusable code. Making your Rails app architecture overly complex (think too custom) indeed can lead to performance and scalability issues. &lt;/p&gt;

&lt;p&gt;This was the case with Twitter circa 2007. &lt;/p&gt;

&lt;p&gt;The team developed a Twitter UI prototype on Rails and then decided to further code the back-end on Rails too. And they decided to build a fully custom, novel back-end from scratch rather than modifying tested components. Unsurprisingly, their product behaved weirdly at times, and scaling it was challenging, as the team admitted in a &lt;a href="https://www.slideshare.net/Blaine/scaling-twitter/6-Its_Easy_Really1_Realize_Your" rel="noopener noreferrer"&gt;presentation&lt;/a&gt;. They ended up with a ton of issues when partitioning databases because their code was overly complex and bloated.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fevmeehohyzs8phfezsi9.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fevmeehohyzs8phfezsi9.jpg" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
Image Source: &lt;a href="https://www.slideshare.net/Blaine/scaling-twitter/28-MemCache" rel="noopener noreferrer"&gt;SlideShare&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Interestingly, at the same time, another high-traffic Rails web app called Penny Arcade was doing just fine. Why? Because it had no funky overly-custom code, had clearly mapped dependencies, and played well with its connected databases. &lt;/p&gt;

&lt;p&gt;Remember: Ruby supports multi-processing within apps. In some cases, &lt;a href="https://naturaily.com/blog/multiprocessing-in-ruby" rel="noopener noreferrer"&gt;multi-process apps&lt;/a&gt; can perform better than multi-threaded ones. But the trick with processes is that they consume more memory and have more complex dependencies. A terminated child process whose parent never reaps it lingers as a “zombie” process, and if you inadvertently kill a parent process, its orphaned children are not informed of the termination and may keep running and consuming resources. So watch out for both!&lt;/p&gt;
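&lt;p&gt;The safeguard is simple: always reap your child processes. A minimal sketch using Ruby’s built-in fork (POSIX systems only):&lt;/p&gt;

```ruby
# Fork a child and wait on it. Without the Process.wait2 call, a terminated
# child would linger in the process table as a "zombie" until the parent exits.
pid = fork do
  sleep 0.1          # pretend to do some work in the child
  exit 7
end

_, status = Process.wait2(pid)   # reap the child and collect its exit status
puts status.exitstatus           # prints 7
```

&lt;p&gt;App servers such as Unicorn and Puma do this bookkeeping for their worker processes automatically; it only becomes your problem when you fork processes yourself.&lt;/p&gt;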

&lt;h3&gt;
  
  
  Suboptimal Database Setup
&lt;/h3&gt;

&lt;p&gt;In the early days, Twitter had intensive write workloads and poorly organized read patterns, which were non-compatible with database sharding. &lt;/p&gt;

&lt;p&gt;At present, a lot of Rails developers still skimp on coding proper database indexes and on triple-checking all queries for redundant requests. Slow database queries, lack of caching, and tangled database indexes can throw any good Rails app off the rails (pun intended).&lt;/p&gt;

&lt;p&gt;Sometimes, complex database design is also part of deliberate decisions, as was the case with one of &lt;a href="https://resources.engineyard.com/penny-pop-case-study?__hstc=246083099.02435147c8312a5a70348baf53225dcd.1624856368348.1628688292983.1628744259081.56&amp;amp;__hssc=246083099.2.1628744259081&amp;amp;__hsfp=1414547400" rel="noopener noreferrer"&gt;our clients, PennyPop&lt;/a&gt;. To store app data, the client sends an API request to the Rails application, which stores the data inside DynamoDB and sends a response back. Instead of ActiveRecord, the team created their own data storage layer to enable communication between the app and DynamoDB. &lt;/p&gt;

&lt;p&gt;But the issue they ran into is that DynamoDB has limits on how much information can be stored in one key. This was a technical deal-breaker, but the dev team came up with an interesting workaround — compressing the value of the key to a payload of base64 encoded data. Doing so has allowed the team to exchange bigger records between the app and the database without compromising the user experience or app performance. &lt;/p&gt;
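&lt;p&gt;The exact PennyPop code isn’t public, but the general technique is easy to sketch with Ruby’s standard library: deflate the record, then base64-encode it so it can be stored as a single string value:&lt;/p&gt;

```ruby
# Compress a repetitive JSON record and base64-encode it for storage
# as a single string value (key and field names are illustrative).
require "zlib"
require "base64"
require "json"

record  = JSON.generate({ "player" => "p1", "inventory" => ["sword"] * 200 })
payload = Base64.strict_encode64(Zlib::Deflate.deflate(record))

# The round trip restores the original record byte-for-byte.
restored = Zlib::Inflate.inflate(Base64.strict_decode64(payload))

puts payload.bytesize.to_f / record.bytesize  # well under 1.0 for repetitive data
```

&lt;p&gt;The CPU cost of the deflate/inflate round trip is the tradeoff the paragraph above mentions: you trade compute for staying under the per-item size limit.&lt;/p&gt;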

&lt;p&gt;Sure, the above operation requires more CPU. But since they are using &lt;a href="https://www.engineyard.com/?__hstc=246083099.02435147c8312a5a70348baf53225dcd.1624856368348.1628688292983.1628744259081.56&amp;amp;__hssc=246083099.2.1628744259081&amp;amp;__hsfp=1414547400" rel="noopener noreferrer"&gt;Engine Yard&lt;/a&gt; to help manage and optimize other infrastructure, these costs remain manageable.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;em&gt;How to Cope&lt;/em&gt;
&lt;/h4&gt;

&lt;p&gt;Granted, there are many approaches to improving Rails database performance. Deliberate caching and database partitioning (sharding) is one of the common routes as your app grows more complex. &lt;/p&gt;

&lt;p&gt;What’s even better is that you have a ton of great solutions for resolving RoR database issues, such as:&lt;/p&gt;

&lt;p&gt;• Redis — an open-source in-memory data structure store for Rails apps.&lt;/p&gt;

&lt;p&gt;• ActiveRecord — Rails’ built-in ORM, which standardizes access to popular databases and ships with query caching capabilities.&lt;/p&gt;

&lt;p&gt;• Memcached — distributed memory caching system for Ruby on Rails.&lt;/p&gt;

&lt;p&gt;The above three tools can help you sufficiently shape up your databases to tolerate extra-high loads. &lt;/p&gt;

&lt;p&gt;Moreover, you can:&lt;/p&gt;

&lt;p&gt;• Switch to UUIDs over standard auto-incrementing IDs for primary keys as your databases grow more complex. &lt;/p&gt;

&lt;p&gt;• Try other ORM alternatives to ActiveRecord when your DBs get extra-large. Some good ones include Sequel, DataMapper, and ORM Adapter. &lt;/p&gt;

&lt;p&gt;• Use database profiling gems to diagnose and detect speed and performance issues early on. Popular ones are rack-mini-profiler, bullet, rails_panel, etc.&lt;/p&gt;
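&lt;p&gt;On the first suggestion above: a UUID is generated client-side and is collision-resistant across database nodes, unlike an auto-incrementing integer. Generating one takes a single standard-library call (in a real Rails migration you would typically enable UUID primary keys at the table level instead, with details varying by database adapter):&lt;/p&gt;

```ruby
# Generate a random (version 4) UUID suitable for use as a primary key.
require "securerandom"

uuid = SecureRandom.uuid
puts uuid  # a random 8-4-4-4-12 hex string, e.g. "9b1deb4d-3b7d-4bad-9bdd-2b0d7b3dcb6d"
```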

&lt;h3&gt;
  
  
  Insufficient Server Bandwidth
&lt;/h3&gt;

&lt;p&gt;The last problem is basic but still pervasive. You can’t accelerate your Rails apps to millions of RPMs if you lack resources.&lt;/p&gt;

&lt;p&gt;Granted, with cloud computing, provisioning extra instances is a matter of several clicks. Yet, you still need to understand and account for:&lt;/p&gt;

&lt;p&gt;• Specific apps/subsystems requirements for extra resources&lt;/p&gt;

&lt;p&gt;• &lt;a href="https://www.devgraph.com/2021/05/26/cloud-cost-optimization-10-lessons-learned-from-scanning-45k-aws-accounts/" rel="noopener noreferrer"&gt;Cloud computing costs&lt;/a&gt; (aka the monetary tradeoff for speed)&lt;/p&gt;

&lt;p&gt;Ideally, you need tools to constantly scan your systems and identify cases of slow performance, resources under (and over)-provisioning, as well as overall performance benchmarks for different apps. &lt;/p&gt;

&lt;p&gt;Not having such tools is like driving without a speedometer: you rely on a hunch to determine whether you are going too slow or dangerously fast.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;em&gt;How to Cope&lt;/em&gt;
&lt;/h4&gt;

&lt;p&gt;One of the &lt;a href="https://blog.engineyard.com/10-lessons-learned-from-building-engine-yards-container-platform-on-kubernetes?__hstc=246083099.02435147c8312a5a70348baf53225dcd.1624856368348.1628688292983.1628744259081.56&amp;amp;__hssc=246083099.2.1628744259081&amp;amp;__hsfp=1414547400" rel="noopener noreferrer"&gt;lessons we learned when building and scaling Engine Yard&lt;/a&gt; on Kubernetes was that the container platform sets no default resource limits for hosted containers. Respectively, your apps can consume unlimited CPU and memory, which can create “noisy neighbor” situations, where some apps rack up too many resources and drag down the performance of others. &lt;/p&gt;

&lt;p&gt;The solution: Orchestrate your containers from the get-go. Use Kubernetes Scheduler to right-size nodes for the pods, limit maximum resource allocation, plus define pod preemption behavior. &lt;/p&gt;

&lt;p&gt;Moreover, if you are running containers, always set up your own logging and monitoring since there are no out-of-the-box solutions available. Adding Log Aggregation to Kubernetes provides extra visibility into your apps’ behavior. &lt;/p&gt;

&lt;p&gt;In our case, we use:&lt;/p&gt;

&lt;p&gt;• Fluent Bit for distributed log collection&lt;/p&gt;

&lt;p&gt;• Kibana + Elasticsearch for log analysis&lt;/p&gt;

&lt;p&gt;• Prometheus + Grafana for metrics alerting and visualization&lt;/p&gt;

&lt;p&gt;To sum up: The key to ensuring scalability is weeding out the lagging modules and optimizing different infrastructure and architecture elements individually for a greater cumulative good.&lt;/p&gt;

&lt;h2&gt;
  
  
  Scaling Rails Apps: Two Main Approaches
&lt;/h2&gt;

&lt;p&gt;Similar to others, Rails apps scale in two ways — vertically and horizontally. &lt;/p&gt;

&lt;p&gt;Both approaches have their merit in respective cases.&lt;/p&gt;

&lt;h3&gt;
  
  
  Vertical Scaling
&lt;/h3&gt;

&lt;p&gt;Vertical scaling, i.e., provisioning more server resources to an app, can increase the number of RPMs. The baseline premise is the same as for other frameworks: you add extra processors, RAM, etc., for as long as it is technically feasible and makes financial sense. Understandably, vertical scaling is a temporary “patch” solution. &lt;/p&gt;

&lt;p&gt;Scaling Rails apps vertically makes sense to accommodate linear or predictable growth since cost control will be easy too. Also, vertical scaling is a good option for upgrading database servers. After all, slow databases can be majorly accelerated when placed on better hardware. &lt;/p&gt;

&lt;p&gt;Hardware is the obvious limitation to vertical scaling. But even if you are using cloud resources, scaling Rails apps vertically can still be challenging. &lt;/p&gt;

&lt;p&gt;For example, if you plan to implement Vertical Pod Autoscaling (VPA) on Kubernetes, it comes with several limitations. &lt;/p&gt;

&lt;p&gt;During our experiments with scaling Ruby apps, we found that: &lt;/p&gt;

&lt;p&gt;• VPA is a rather disruptive method, since it evicts the original pod and then recreates a vertically scaled version. This can cause a lot of havoc.&lt;/p&gt;

&lt;p&gt;• You cannot pair VPA with Horizontal Pod Autoscaling.&lt;/p&gt;

&lt;p&gt;So it’s best to prioritize horizontal scaling whenever you can.&lt;/p&gt;

&lt;h3&gt;
  
  
  Horizontal Scaling
&lt;/h3&gt;

&lt;p&gt;Horizontal scaling, i.e., redistributing your workloads across multiple servers, is a more future-proof approach to scaling Rails apps. &lt;/p&gt;

&lt;p&gt;In essence, you convert your app into a three-tier architecture featuring:&lt;/p&gt;

&lt;p&gt;• Web server and load balancer for connected apps &lt;br&gt;
• Rails app instances (on-premises or in the cloud)&lt;br&gt;
• Database instances (also local or cloud-based)&lt;/p&gt;

&lt;p&gt;The main idea is to distribute the load equitably across different machines to obtain optimal performance. &lt;/p&gt;

&lt;p&gt;To effectively reroute Rails processes across server instances, you must select the optimal web server and load balancing solution. Then right-size instances to the newly decoupled workloads. &lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;em&gt;Load balancing&lt;/em&gt;
&lt;/h4&gt;

&lt;p&gt;Load balancers are the key structural element for scale-out architecture. Essentially, they perform a routing function and help optimally distribute incoming traffic across connected instances. &lt;/p&gt;

&lt;p&gt;Most cloud computing services come with native software load balancing solutions (think &lt;a href="https://aws.amazon.com/elasticloadbalancing/" rel="noopener noreferrer"&gt;Elastic Load Balancing&lt;/a&gt; on AWS). Such solutions also support dynamic host port mapping. This helps establish a seamless pairing between registered web balancers and container instances.  &lt;/p&gt;

&lt;p&gt;When it comes to Rails apps, the two most common options are using a combo of web servers and app servers (or a fusion service) to ensure optimal performance.&lt;/p&gt;

&lt;p&gt;• Web servers receive the user’s request to your website and then pass it to a Rails app (if applicable). Essentially, they filter out requests for CSS, SSL, or JavaScript assets (which the server can handle itself), reducing the requests that reach the Rails app to the bare essentials. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Examples of Rails web servers: Nginx and Apache.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;• App servers are programs that keep your app in memory, so that when an incoming request arrives from the web server, it is routed straight to the app for handling. The response is then bounced back to the web server and, subsequently, the user. When paired with a web server in production, such a setup lets you serve requests to multiple apps faster.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Examples of app servers for Rails: Unicorn, Puma, Thin, Rainbows.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Finally, there are also “fusion” services such as Phusion Passenger. This service integrates with popular web servers (Nginx and Apache) and brings in an app server layer — available for standalone use or in combination with web servers. &lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhut91n76xb1v9ba7drtv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhut91n76xb1v9ba7drtv.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
Image Source: &lt;a href="https://www.phusionpassenger.com/library/indepth/integration_modes.html" rel="noopener noreferrer"&gt;Phusion Passenger&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Passenger is an excellent choice if you want to roll out unified app server settings for a bunch of apps in one go without fiddling with a separate app server setup for each.&lt;/p&gt;

&lt;p&gt;In a nutshell, the main idea behind using web and app servers is to spread different Rails processes optimally across different instances. &lt;/p&gt;

&lt;p&gt;Pro tip: What we found when building our product is that AWS Elastic Load Balancer often doesn’t suffice. A major drawback is that ELB can’t handle multiple vhosts.&lt;/p&gt;

&lt;p&gt;In our case, we configured an NGINX-based load balancer and set up auto-scaling on it to supplement ELB. As an alternative, you can also try HAProxy.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;em&gt;App Instances&lt;/em&gt;
&lt;/h4&gt;

&lt;p&gt;The next step of scale-out architecture is configuring communication between different app instances, where your Rails workloads will be allocated. &lt;/p&gt;

&lt;p&gt;App servers (Unicorn, Puma, etc.) help ensure proper communication with web servers and so increase the throughput of requests processed per second. On Rails, you can allocate an app server to handle multiple app instances, which in turn can have separate “worker” processes or threads (depending on which type of app server you are using).  &lt;/p&gt;
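&lt;p&gt;For example, the worker/thread split is usually declared in the app server’s config file. A hypothetical Puma config/puma.rb sketch (the values here are illustrative, not recommendations):&lt;/p&gt;

```ruby
# config/puma.rb (illustrative values)
workers 2            # two separate worker processes
threads 1, 5         # each worker runs between 1 and 5 threads
preload_app!         # load the app once, then fork workers (copy-on-write friendly)
port 3000
```

&lt;p&gt;Multiplying workers by max threads gives the rough ceiling of requests the server can handle concurrently per instance.&lt;/p&gt;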

&lt;p&gt;It’s important, however, to ensure that different app servers can communicate well with the webserver. Rack interface comes in handy here as it helps homogenize communication standards between standalone app servers. &lt;/p&gt;

&lt;p&gt;When it comes to configuring the right instances for containers, keep in mind the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;You have four variables (min/max CPU and min/max memory) to regulate pod size
&lt;/li&gt;
&lt;li&gt;Limit resources using the [minimum requirement + 20%] formula &lt;/li&gt;
&lt;li&gt;Use average CPU utilization and average memory utilization as scaling metrics &lt;/li&gt;
&lt;li&gt;Mind the timing. Pods and clusters take 4 to 12 minutes to scale up on Kubernetes.&lt;/li&gt;
&lt;/ol&gt;
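&lt;p&gt;The sizing rule in point 2 above is plain arithmetic; as a sketch:&lt;/p&gt;

```ruby
# Resource limit = minimum requirement + 20% headroom.
def resource_limit(minimum)
  (minimum * 1.2).round
end

puts resource_limit(500)   # a pod needing 500m CPU gets a 600m limit
puts resource_limit(256)   # 256 MiB of memory gets a 307 MiB limit
```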

&lt;p&gt;P.S. If you don’t want to do the above guesswork every time you are building a new pod/cluster, Engine Yard comes with a predictive cluster scaling feature, which helps you scale your infrastructure just-in-time without ballooning the costs.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;em&gt;Database Scaling&lt;/em&gt;
&lt;/h4&gt;

&lt;p&gt;Transferring databases to a separate server, used by all app instances, is one of the sleekest moves you can make to &lt;a href="https://blog.engineyard.com/5-tips-to-scale-your-ruby-on-rails-application?utm_source=EngineYard&amp;amp;utm_medium=Interlink" rel="noopener noreferrer"&gt;scale Rails apps&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;First of all, this can be a nice exercise in segregating your data and implementing database replication for improving business continuity. Secondly, doing so can reduce the querying time since the request will not have to travel through multiple database instances where different bits of data are stored. Instead, it will go straight to a consolidated repository. &lt;/p&gt;

&lt;p&gt;Thus, consider setting up a dedicated MySQL or PostgreSQL server for your relational databases. Then scrub them clean and ensure optimal instance size to save costs. &lt;/p&gt;

&lt;p&gt;For example, AWS RDS lets you select among 18 types of database instances and codify fine-grained provisioning. Choosing to host your data in a cheaper cloud region can drive substantial cost savings (up to 40% at times!). &lt;/p&gt;

&lt;p&gt;Here’s how on-demand hourly costs differ across AWS regions:&lt;/p&gt;

&lt;p&gt;US East (Ohio)&lt;/p&gt;

&lt;p&gt;• db.t3.small — $0.034 per hour&lt;br&gt;
• db.t3.xlarge — $0.272 per hour&lt;br&gt;
• db.t3.2xlarge — $0.544 per hour&lt;/p&gt;

&lt;p&gt;US West (LA)&lt;/p&gt;

&lt;p&gt;• db.t3.small — $0.0408 per hour&lt;br&gt;
• db.t3.xlarge —$0.3264 per hour&lt;br&gt;
• db.t3.2xlarge — $0.6528 per hour&lt;/p&gt;

&lt;p&gt;Europe (Frankfurt)&lt;/p&gt;

&lt;p&gt;• db.t3.small — $0.04 per hour&lt;br&gt;
• db.t3.large — $0.16 per hour&lt;br&gt;
• db.t3.2xlarge — $0.64 per hour&lt;/p&gt;

&lt;p&gt;Asia Pacific (Seoul) &lt;/p&gt;

&lt;p&gt;• db.t3.small —$0.052 per hour&lt;br&gt;
• db.t3.large — $0.208 per hour&lt;br&gt;
• db.t3.2xlarge — $0.832 per hour&lt;/p&gt;

&lt;p&gt;Another pro tip: opt for reserved instances over on-demand when you can to further slash the hourly costs.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;em&gt;Caching&lt;/em&gt;
&lt;/h4&gt;

&lt;p&gt;Implementing database caching is another core step to accelerating your Rails apps, especially when it comes to database performance. Given that RoR comes with a native query caching feature that caches the result set returned by each query, it’s a shame not to profit from it! &lt;br&gt;
Caching can help you speed up slow queries. But investigate before implementing! Once you’ve found the “offenders”, consider trying out &lt;a href="https://guides.rubyonrails.org/caching_with_rails.html" rel="noopener noreferrer"&gt;different strategies&lt;/a&gt; such as:&lt;/p&gt;

&lt;p&gt;• Low-level caching — a flexible option for caching the result of any expensive computation, including database queries. &lt;br&gt;
• Redis cache store — lets you store key/value pairs up to 512 MB in memory, plus provides native data replication. &lt;br&gt;
• Memcached store — another easy-to-implement in-memory datastore, with values limited to 1 MB. Supports a multi-threaded architecture, unlike Redis.&lt;/p&gt;
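&lt;p&gt;Low-level caching boils down to “compute once, reuse until invalidated”. In Rails this is what Rails.cache.fetch does; the mechanism can be sketched with a plain in-memory store:&lt;/p&gt;

```ruby
# A toy low-level cache: the expensive block runs only on a cache miss.
CACHE = {}

def fetch(key)
  CACHE.fetch(key) { CACHE[key] = yield }
end

calls = 0
2.times do
  fetch("slow_query") do
    calls += 1            # stand-in for an expensive database query
    "result rows"
  end
end

puts calls   # prints 1: the second call was served from the cache
```

&lt;p&gt;Production stores like Redis or Memcached add expiry (TTL) and cross-process sharing on top of this same fetch-or-compute pattern.&lt;/p&gt;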

&lt;p&gt;Ultimately, caching improves data availability and, by proxy, your application’s querying speed and performance.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;em&gt;Database sharding&lt;/em&gt;
&lt;/h4&gt;

&lt;p&gt;Lastly, at some point in your database scaling journey, you’ll inevitably face the decision to shard your relational databases. &lt;/p&gt;

&lt;p&gt;Data sharding means slicing your DB records horizontally or vertically into smaller chunks (shards) and storing them on a cluster of database nodes. The definitive advantage is that querying happens faster, since a large database split in two has twice the memory, I/O, and CPU to run on. &lt;/p&gt;

&lt;p&gt;The tradeoff, however, is that sharding can significantly affect your app’s logic. The scope of each query is now limited to either DB 1 or DB 2 — there’s no commingling. Respectively, when adding new app functions, you need to carefully consider how to access data across shards, how sharding relates to the infrastructure, and the best way to scale out the supporting infrastructure without affecting the app’s logic.&lt;/p&gt;
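&lt;p&gt;At its simplest, the routing layer that sharding forces on your app logic looks like this (the shard names and the modulo scheme are illustrative):&lt;/p&gt;

```ruby
# Route each record to a shard based on its numeric ID.
SHARDS = ["shard_0", "shard_1"]

def shard_for(user_id)
  SHARDS[user_id % SHARDS.size]
end

puts shard_for(42)   # prints "shard_0"
puts shard_for(7)    # prints "shard_1"
```

&lt;p&gt;Every query now has to pass through such a router, and cross-shard operations (joins, aggregates) need explicit fan-out logic — which is exactly why sharding reshapes the app’s logic rather than just its storage.&lt;/p&gt;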

&lt;h2&gt;
  
  
  To Conclude: Is There An Easier Solution to Scaling Rails Apps?
&lt;/h2&gt;

&lt;p&gt;Scaling Rails apps is a careful balancing act of ensuring optimal instance allocation, timely resource provisioning, and careful container orchestration. Keeping tabs on all the relevant metrics across a portfolio of apps and sub-services isn’t an easy task when done manually. And it shouldn’t be.&lt;/p&gt;

&lt;p&gt;You can try Engine Yard Kontainers (EYK) — our &lt;a href="https://support.cloud.engineyard.com/hc/en-us/articles/360058885853-Introduction-to-Engine-Yard-Kontainers" rel="noopener noreferrer"&gt;NoOps PaaS autoscaling services&lt;/a&gt; for containerized apps. In essence, we act as your invisible DevOps team. You code your apps and deploy them to EYK, and we take over auto-scaling implementation, container orchestration, and other infrastructure right-sizing tasks from there.&lt;/p&gt;

&lt;p&gt;Learn more about &lt;a href="https://www.engineyard.com/" rel="noopener noreferrer"&gt;Engine Yard&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ruby</category>
      <category>rails</category>
      <category>webdev</category>
      <category>database</category>
    </item>
    <item>
      <title>Move Your Database to the Cloud With Zero Downtime</title>
      <dc:creator>DevGraph</dc:creator>
      <pubDate>Thu, 05 Aug 2021 05:33:06 +0000</pubDate>
      <link>https://dev.to/devgraph/move-your-database-to-the-cloud-with-zero-downtime-3839</link>
      <guid>https://dev.to/devgraph/move-your-database-to-the-cloud-with-zero-downtime-3839</guid>
      <description>&lt;p&gt;By Ravi Duddukuru&lt;/p&gt;

&lt;p&gt;A downtime-free migration with ScaleArc: decouple your database from the application layer and move it safely to the cloud with our load balancing tool.&lt;/p&gt;

&lt;p&gt;The average cost of database downtime has risen from $300,000 to almost $1,000,000 worldwide between 2014 and 2020, say &lt;a href="https://blogs.gartner.com/andrew-lerner/2014/07/16/the-cost-of-downtime/"&gt;Gartner&lt;/a&gt; and &lt;a href="https://www.statista.com/statistics/753938/worldwide-enterprise-server-hourly-downtime-cost/"&gt;Statista&lt;/a&gt;, respectively.&lt;/p&gt;

&lt;p&gt;Downtimes are one of the main risks that accompany &lt;a href="https://www.devgraph.com/2020/10/28/zero-downtime-database-migration-to-the-cloud/?utm_source=Blog&amp;amp;utm_medium=move-database-to-cloud-with-zero-downtime%2F&amp;amp;utm_id=Internal-link"&gt;database migrations to the cloud&lt;/a&gt;. But with the right tools in your hands, you can eliminate them and safely reap the advantages of cloud computing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cloud Databases Are Key to Your Business Success
&lt;/h2&gt;

&lt;p&gt;By the end of 2021, 59% of all companies will be using the cloud for their main workloads, according to &lt;a href="https://resources.idg.com/download/2020-cloud-computing-executive-summary-rl"&gt;IDG&lt;/a&gt;. This shift is mostly driven by three main factors:&lt;/p&gt;

&lt;p&gt;• Simpler maintenance: you do not need to deal with purchasing and upgrading hardware.&lt;/p&gt;

&lt;p&gt;• Better scalability: if you need more computing power, you can get it in minutes.&lt;/p&gt;

&lt;p&gt;• Cost savings: as a result of the previous two, you have lower personnel costs and far lower hardware expenditures.&lt;/p&gt;

&lt;p&gt;On your way to a cloud database, you should be aware of the obstacles.&lt;/p&gt;

&lt;h3&gt;
  
  
  Database Downtimes
&lt;/h3&gt;

&lt;p&gt;Database downtimes can be unnerving even for your loyal customers and a deal-breaker for new ones. When your application is not showing up-to-date information, users cannot work with it properly. Downtimes may have various causes, for instance, application errors.  &lt;/p&gt;

&lt;h3&gt;
  
  
  Application Code Change
&lt;/h3&gt;

&lt;p&gt;Applications remain bound to their databases through the SQL queries that run in the background. Once you’ve renamed an entity in the application source code, you have to reflect that change in the database or in the queries that the application sends over to the database. &lt;/p&gt;

&lt;p&gt;Such changes may be quickly overlooked, and even if they are not, their implementation takes time. &lt;/p&gt;

&lt;p&gt;Ideally, no adjustment in one layer should ever influence the other, especially during the cloud transition, to prevent database downtimes.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Solution
&lt;/h3&gt;

&lt;p&gt;Thus, the first step on the way to a seamless database migration to the cloud is decoupling the application from the database. It can be done by creating an additional tier between the database and the application, making them independent from each other and more robust against failures.&lt;/p&gt;

&lt;p&gt;But before we proceed with this, let’s take some time to go through the database migration issue in detail and outline possible solutions for the pitfalls that may be waiting for you during the transition period. As a bonus, we’ll show you a shortcut.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cloud Migration: Database-Related Obstacles
&lt;/h2&gt;

&lt;p&gt;If you decide to move your application and/or the underlying database to the cloud, the whole thing may get tricky. First of all, by moving, we obviously mean creating a copy of the original database and moving the data into the new one. &lt;/p&gt;

&lt;p&gt;Basically, there are three widely used scenarios for doing this. Their main differences are the duration of a possible downtime and the amount of business risk you have to accept in exchange for reducing it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Strategies for Migrating a Database to the Cloud
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Offline Copy Migration
&lt;/h4&gt;

&lt;p&gt;As we mentioned before, a database receives many queries that may be updating and reading the same records with a minimal time lag. During the migration, the challenge is to create a complete copy of your data that catches the most recent updates. &lt;/p&gt;

&lt;p&gt;Offline copy migration solves this issue by simply shutting down the database — and, consequently, the application — for the time it takes to generate a copy by means of a simple export and import of the data. During this downtime, the original database is disconnected from the application and the new copy connected in its place. &lt;/p&gt;

&lt;p&gt;In reality, this may mean a few hours of database and application unavailability. It may work if you migrate an internal application used only by your employees and you do it on a weekend or during the night hours. For commercial applications, however, this may lead to revenue reduction and a poor customer experience.&lt;/p&gt;

&lt;h4&gt;
  
  
  Master/Read Replica Switch Migration
&lt;/h4&gt;

&lt;p&gt;In this case, you create a read copy of your original database. The master copy gets updated and sends updates to the replica. Once you are ready to turn on the switch, you swap them, granting write access to the former replica. &lt;/p&gt;

&lt;p&gt;Indeed, in this scenario, you still have some short downtime. You also run the risk that the original database failed to handle a few last-minute queries, so they won’t show up in the new database. This may be acceptable for small businesses, internally used applications, or applications with clear daily peaks and troughs in traffic.&lt;/p&gt;

&lt;h4&gt;
  
  
  Master/Master, or Dual-Write, or Online Migration
&lt;/h4&gt;

&lt;p&gt;Contrary to the previous method, you can write data to both databases and “turn off” one of them at any time. The databases need to be synced to keep identical records. &lt;/p&gt;

&lt;p&gt;Here, a lot can get lost in transition. You never know which one is the most up-to-date. Thus, the synchronization process requires continuous observation. Together with the complexity of the maintenance, it can lead you to postpone the final switch. &lt;/p&gt;

&lt;p&gt;The downtime is close to zero but the risks associated with this strategy are the highest among the three scenarios.&lt;/p&gt;

&lt;p&gt;Still, the temptation of a quick switch is high, and there is a solution for this: incremental batch reading. This method adds a new column to each table in the database that identifies which records were already synced and which were not. This allows you to at least spot the gaps, eliminating them later with a manual one-time synchronization.&lt;/p&gt;
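&lt;p&gt;The idea can be sketched as follows. This is a hypothetical in-memory model, assuming a boolean &lt;code&gt;synced&lt;/code&gt; column per row: each pass copies a batch of unsynced rows to the target and marks them, so leftover gaps are easy to spot and close.&lt;/p&gt;

```ruby
# Sketch of the incremental batch reading idea: each row carries a synced flag,
# and a sync pass copies unsynced rows to the target in batches, marking them
# as it goes. Rows and stores are hypothetical in-memory stand-ins.
Row = Struct.new(:id, :data, :synced)

def sync_batch(source, target, batch_size: 2)
  pending = source.reject { |row| row.synced }
  pending.first(batch_size).each do |row|
    target[row.id] = row.data # copy to the new database
    row.synced = true         # mark so the next pass skips it
  end
  pending.size # how many rows were still unsynced before this pass
end

source = [Row.new(1, "a", false), Row.new(2, "b", false), Row.new(3, "c", false)]
target = {}

sync_batch(source, target)             # first pass copies two rows
remaining = sync_batch(source, target) # second pass finds one gap left
```

&lt;p&gt;Rows whose flag never flips are exactly the gaps you would close later with a manual one-time synchronization.&lt;/p&gt;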

&lt;h2&gt;
  
  
  Strategy-Independent Obstacles and Solutions
&lt;/h2&gt;

&lt;p&gt;Apart from the difficult strategic choice, there are other factors that can influence your application downtime and, therefore, your approach to database migration. We gathered the three most common challenges and a few life hacks for dealing with them.&lt;/p&gt;

&lt;h3&gt;
  
  
  Database Schema Changes
&lt;/h3&gt;

&lt;p&gt;Preparing a database migration is often a good point to do some inventory. For instance, to re-think the way your data is organized in the database and to re-design the latter accordingly. That manifests in the database schema changes.&lt;/p&gt;

&lt;p&gt;As a result, your source and target databases have different schemas. The initial setup of a replica, as well as routine synchronization between the two, will require a schema translation tool. Alternatively, you can use an extract-transform-load (ETL) tool, provided that you selected the first or second scenario.&lt;/p&gt;

&lt;h3&gt;
  
  
  Data Discrepancies
&lt;/h3&gt;

&lt;p&gt;When you move the entire application to the cloud, you may decide to do some re-factoring that leads to changes in how your application works. Consequently, the application data will change. For instance, instead of using UTC in a time column, you start using your local time format. Again, you’ll need a synchronization and ETL solution to transfer the records correctly. &lt;/p&gt;

&lt;h3&gt;
  
  
  Other Database Functionality Differs
&lt;/h3&gt;

&lt;p&gt;Indeed, cloud migration means a functional upgrade. For instance, your old database may have lacked many-to-many relationships between tables, or did not allow triggers or materialized views, or was not so good at partitioning. &lt;/p&gt;

&lt;p&gt;The new cloud database may offer all of this. The question is how to match the data, especially if you choose the master/master scenario, where you have to perform two-way updates all the time. &lt;/p&gt;

&lt;p&gt;In very rare cases, it can be the other way around, and you perform a functional downgrade. If, due to cost factors, you decided to adopt a cloud database that offers only the essentials, then you have to cover the discrepancies before the final switch happens: for instance, solve the many-to-many issue by adding a few more tables and establishing connections between them, or get rid of the old data instead of partitioning the tables.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Smooth Migration: Managing Your Replicas Successfully
&lt;/h2&gt;

&lt;p&gt;The use case with a master and a replica of a database is not limited to database migration. You can use the master (the primary database) and one or more replicas (also called secondaries or slaves) for other purposes. For instance, you might want to split the traffic between them, writing data only to the master and reading data only from the replicas, to enhance your database performance. &lt;/p&gt;

&lt;p&gt;When you manage your master and secondary databases in a clever way, this can help you not only to speed up the routine work of your application but also to secure your cloud migration.&lt;/p&gt;

&lt;p&gt;The magic cure is a &lt;a href="https://www.devgraph.com/database-load-balancer/?utm_source=Blog&amp;amp;utm_medium=move-database-to-cloud-with-zero-downtime%2F&amp;amp;utm_id=Internal-link"&gt;database load balancer&lt;/a&gt;. It can be applied even to databases without a master-slave relationship, and to any database that resides on more than one server.&lt;/p&gt;

&lt;h3&gt;
  
  
  What Is a Database Load Balancer?
&lt;/h3&gt;

&lt;p&gt;Obviously enough, it balances the load of your database. &lt;a href="https://www.devgraph.com/2020/10/09/what-is-a-load-balancer-definition-explanation/?utm_source=Blog&amp;amp;utm_medium=move-database-to-cloud-with-zero-downtime%2F&amp;amp;utm_id=Internal-link"&gt;Load balancers&lt;/a&gt; belong to the middleware layer. They are placed between applications and database server farms.&lt;/p&gt;

&lt;p&gt;With a load balancer, you have a universal endpoint for your application and enable the resource optimization of your database. It accelerates query throughput and lowers latency. &lt;/p&gt;

&lt;p&gt;The load balancer distributes the queries more efficiently using one of the following methods:&lt;/p&gt;

&lt;p&gt;• Round robin sends queries to the servers in rotation: the first query to the first server, the second to the second, and so on. It provides simple queue management for incoming queries.&lt;/p&gt;

&lt;p&gt;• Weight-based balancing means that the balancer takes into account the actual capacity of each server, specified as a proportion of the network traffic, along with its current load. When a new query comes in, the balancer redirects it to a server that still has free capacity.&lt;/p&gt;

&lt;p&gt;• The least connection method only considers how many connections each server has already established, and redirects a newly arrived request to the least burdened one.&lt;/p&gt;
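&lt;p&gt;As an illustration, the least connection method fits in a few lines of Ruby. The balancer and server names below are made up; the point is only the routing rule: always pick the server with the fewest active connections.&lt;/p&gt;

```ruby
# Sketch of the least-connection method: each incoming query goes to the
# server with the fewest established connections. Server names are made up.
class LeastConnectionBalancer
  def initialize(servers)
    # Track active connection counts per server.
    @connections = servers.map { |s| [s, 0] }.to_h
  end

  def route
    server = @connections.min_by { |_name, count| count }.first
    @connections[server] += 1
    server
  end

  def finish(server)
    @connections[server] -= 1
  end
end

lb = LeastConnectionBalancer.new(["db-1", "db-2"])
first  = lb.route  # both idle, so the first server is picked
second = lb.route  # db-2 now has fewer connections, so it is picked
lb.finish(first)
third  = lb.route  # db-1 is the least burdened again
```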

&lt;p&gt;When you want to migrate your database to the cloud, a load balancer can play a central role. But not every load balancer is made for this. Here is how ScaleArc performs this task.&lt;/p&gt;

&lt;h2&gt;
  
  
  How ScaleArc Works During a Migration
&lt;/h2&gt;

&lt;p&gt;Usually, your database is tightly coupled to the application. Instead, you can make the coupling looser and more flexible by putting ScaleArc between the database and the application, and then proceed with the migration. &lt;/p&gt;

&lt;p&gt;Inside ScaleArc, you need to create a cluster or use an existing one for the source and target databases. Then, you can add a read-and-write target database to that cluster. It will be put into standby mode and won’t receive any traffic until the cutover. &lt;/p&gt;

&lt;p&gt;ScaleArc will move all the data to the target database and keep synching the new data. Once you are ready to make the target your primary database, switch the connection between ScaleArc and the new database. The old one will still be there but not receiving any traffic. Your application will read and write data from the former replica.&lt;/p&gt;

&lt;p&gt;Since ScaleArc remains connected to your application during the whole migration period, this method has no downtime.&lt;/p&gt;

&lt;h3&gt;
  
  
  A Short Step-by-Step Tutorial
&lt;/h3&gt;

&lt;p&gt;The ScaleArc console has an intuitive cockpit where you can configure your database clusters before migration. We reduced the number of configuration parameters to the bare minimum. You need just a couple of minutes to fill in the form and start the migration process. &lt;/p&gt;

&lt;p&gt;In the ScaleArc console, do the following to begin your migration:&lt;/p&gt;

&lt;p&gt;• Click on “Set Up ScaleArc Cluster”&lt;br&gt;
• Specify the database type&lt;br&gt;
• Enter user credentials to allow ScaleArc to log in to the database&lt;br&gt;
• Specify source and target database server names&lt;br&gt;
• Wait until ScaleArc validates the host names and creates a new cluster&lt;br&gt;
• Click “Start Migration”&lt;br&gt;
• Specify roles for each database server&lt;br&gt;
• Enter an email address to receive notifications&lt;/p&gt;

&lt;p&gt;While your migration is happening, you will get notified by email about important status changes and the completion of the migration. You can also check the progress manually in the console. You can monitor how many tables have already been written to the new database. &lt;/p&gt;

&lt;p&gt;Last but not least, you can schedule the cutover time.&lt;/p&gt;

&lt;h2&gt;
  
  
  ScaleArc: More Than Just Balancing
&lt;/h2&gt;

&lt;p&gt;With ScaleArc, you have a full grip on your migration risks. A seamless switchover is made possible thanks to the integration with Microsoft SQL Server™ AlwaysOn technology and MySQL™ automatic failover.  &lt;/p&gt;

&lt;p&gt;As we mentioned before, ScaleArc’s main purpose is routine database load balancing. We would like to highlight a few of its advantages to show you why this can be of great help to you even after you’ve finished your migration.&lt;/p&gt;

&lt;p&gt;Load balancing works in three main directions:&lt;/p&gt;

&lt;p&gt;• traffic re-routing&lt;br&gt;
• transaction queueing&lt;br&gt;
• query throttling&lt;/p&gt;

&lt;p&gt;It helps you avoid maintenance downtime, doubles your website performance, and boosts your revenue. ScaleArc stands out among other tools since it offers a few particular features on top of the basic functionality.&lt;/p&gt;

&lt;h3&gt;
  
  
  Automatic Failover
&lt;/h3&gt;

&lt;p&gt;If you have data-heavy applications, such as webshops, or booking or fintech platforms, you may keep one or a few secondary standby databases that can support the main one when it is down. Outside of the database migration, a switch from the primary to the secondary database is called failover.&lt;/p&gt;

&lt;p&gt;ScaleArc offers an automated failover process. It means that not only do you get notified when a database is interrupted, but ScaleArc automatically re-directs the traffic to the secondary databases. The users won’t even notice the difference, and can keep working with your application normally.&lt;/p&gt;

&lt;h3&gt;
  
  
  Surge Queuing
&lt;/h3&gt;

&lt;p&gt;During a failover, the primary and secondary databases are not synchronized. This leads to a replication lag. Once the failover ends, the primary database gets a lot of new requests, resulting in service outages. &lt;/p&gt;

&lt;p&gt;It is important to queue the incoming requests to prevent new troubles. The issue is often addressed by surge queuing, which is a ScaleArc feature.&lt;/p&gt;
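&lt;p&gt;The mechanics of surge queuing can be sketched as follows, assuming a hypothetical in-memory model: a fixed in-flight limit, a waiting queue for the overflow, and a release whenever a query completes.&lt;/p&gt;

```ruby
# Sketch of surge queuing: the balancer lets only a fixed number of queries
# through to the database and parks the rest in a queue, draining it as
# in-flight work completes. Limits and queries here are made up.
class SurgeQueue
  attr_reader :in_flight, :waiting

  def initialize(limit)
    @limit     = limit
    @in_flight = []
    @waiting   = []
  end

  def submit(query)
    if @in_flight.size.between?(0, @limit - 1)
      @in_flight.push(query) # capacity available: forward immediately
    else
      @waiting.push(query)   # database saturated: hold the query back
    end
  end

  def complete(query)
    @in_flight.delete(query)
    # A slot opened up, so release the oldest waiting query.
    @in_flight.push(@waiting.shift) unless @waiting.empty?
  end
end

queue = SurgeQueue.new(2)
queue.submit("q1")
queue.submit("q2")
queue.submit("q3")   # over the limit, q3 waits
queue.complete("q1") # q3 is released to the database
```

&lt;p&gt;The database therefore never sees more than the configured number of concurrent queries, even right after a failover.&lt;/p&gt;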

&lt;h3&gt;
  
  
  Splitting Read and Write
&lt;/h3&gt;

&lt;p&gt;Write requests are very important since they can change the returned results of a read request that follows afterward. That’s why it is often better to separate these two types of requests completely and only perform write operations on the primary database, while using the secondary one for the read requests.&lt;/p&gt;

&lt;p&gt;In this case, the information gets updated quicker. This feature boosts the overall performance of your application.&lt;/p&gt;
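&lt;p&gt;A rough sketch of read/write splitting, with made-up endpoint names: statements that start with SELECT go to a replica, and everything else goes to the primary.&lt;/p&gt;

```ruby
# Sketch of read/write splitting: SELECTs go to a replica, everything else
# (INSERT/UPDATE/DELETE) goes to the primary. Endpoint names are made up.
class ReadWriteSplitter
  def initialize(primary, replicas)
    @primary  = primary
    @replicas = replicas
    @next     = 0
  end

  def route(sql)
    return @primary unless sql.strip.upcase.start_with?("SELECT")
    # Rotate reads across replicas round-robin style.
    replica = @replicas[@next % @replicas.size]
    @next += 1
    replica
  end
end

splitter = ReadWriteSplitter.new("primary-db", ["replica-1", "replica-2"])
write_target = splitter.route("UPDATE users SET name = 'a'")
read_target  = splitter.route("SELECT name FROM users")
```

&lt;p&gt;A production splitter must also handle replication lag, for example by pinning a session to the primary right after it writes; the sketch ignores that concern.&lt;/p&gt;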

&lt;h3&gt;
  
  
  Connection Pooling and Multiplexing
&lt;/h3&gt;

&lt;p&gt;ScaleArc will collect inactive connections and store them temporarily in a pool instead of closing them immediately. This works faster than establishing a completely new connection every time.&lt;/p&gt;

&lt;p&gt;Multiplexing goes beyond that and allows the re-use of a connection for multiple clients.&lt;/p&gt;
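&lt;p&gt;Pooling itself is a simple idea: keep idle connections around and hand them back out instead of opening new ones. A minimal, hypothetical sketch (&lt;code&gt;FakeConnection&lt;/code&gt; stands in for a real database handle):&lt;/p&gt;

```ruby
# Sketch of connection pooling: checked-in connections go back to a pool and
# are reused instead of being reopened. FakeConnection is a stand-in for a
# real database handle; opening one is the expensive step pooling avoids.
class FakeConnection
  @@opened = 0

  def initialize
    @@opened += 1
  end

  def self.opened
    @@opened
  end
end

class ConnectionPool
  def initialize
    @idle = []
  end

  def checkout
    @idle.pop || FakeConnection.new # reuse an idle connection when possible
  end

  def checkin(conn)
    @idle.push(conn) # keep it around instead of closing it
  end
end

pool = ConnectionPool.new
conn = pool.checkout  # opens the first connection
pool.checkin(conn)
again = pool.checkout # reuses it; nothing new is opened
```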

&lt;h3&gt;
  
  
  Query caching
&lt;/h3&gt;

&lt;p&gt;ScaleArc caches — saves — responses to the most common queries. This allows for quicker delivery of user content since the query does not need to be processed anew when it arrives again. This reduces waiting time for your applications, making them more user-friendly. &lt;/p&gt;

&lt;h3&gt;
  
  
  Analytics and Logging
&lt;/h3&gt;

&lt;p&gt;Indeed, ScaleArc is a highly transparent tool that allows you to collect data about your database performance and all failovers and downtimes. &lt;/p&gt;

&lt;p&gt;At ScaleArc, we have thought about your need for a real-time view of all processes. We provide a live monitor for your database load. You can break down query traffic, handpick problematic queries and candidates for caching, and perform bucketing on your databases. &lt;/p&gt;

&lt;p&gt;You have all your log data in one place. You do not have to generate logs for each database and then bring them together in a third-party analytics tool. &lt;/p&gt;

&lt;p&gt;Moreover, you can retrieve historical data at any time and create forecasts based on it.&lt;/p&gt;

&lt;h3&gt;
  
  
  RESTful API Integrations
&lt;/h3&gt;

&lt;p&gt;If you still want to use your own tool for analyzing logs, ScaleArc supports REST API integrations with monitoring and management tools. You can customize your log analytics to catch weak points. &lt;/p&gt;

&lt;p&gt;Browse to &lt;a href="http://www.scalearc.com/?utm_source=Blog&amp;amp;utm_medium=move-database-to-cloud-with-zero-downtime%2F"&gt;www.scalearc.com&lt;/a&gt; to learn more about ScaleArc Database Migrations!&lt;/p&gt;

</description>
      <category>database</category>
      <category>cloud</category>
      <category>webdev</category>
      <category>programming</category>
    </item>
    <item>
      <title>Don’t pass on PaaS</title>
      <dc:creator>DevGraph</dc:creator>
      <pubDate>Thu, 22 Jul 2021 08:58:50 +0000</pubDate>
      <link>https://dev.to/devgraph/don-t-pass-on-paas-5d6d</link>
      <guid>https://dev.to/devgraph/don-t-pass-on-paas-5d6d</guid>
<description>&lt;p&gt;By Darren Broemmer&lt;/p&gt;

&lt;p&gt;Gartner expects the PaaS market to double in size between 2018 and 2022, growing at a 26.6 percent rate to about $58 billion by 2022. As per IDG, almost two-thirds of organizations today use PaaS. Post COVID-19, we expect this momentum to continue due to the shift toward remote work. &lt;/p&gt;

&lt;p&gt;However, the market is highly fragmented. Gartner notes: “As of 2019, the total PaaS market contains more than 360 vendors, offering more than 550 cloud platform services in 21 categories. The market remains short on standardization, established practices, and sustained leadership.” In this scenario, it is very difficult to choose the right PaaS provider.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;PaaS: Under the hood&lt;/strong&gt;&lt;br&gt;
To get an application running in the cloud, you can spend long hours downloading, compiling, installing, configuring, and connecting all sorts of components — and that’s just on a single virtual server instance. Not only is this costly and time-consuming, it takes away from time your team could have spent innovating and improving your application.&lt;/p&gt;

&lt;p&gt;There’s a better way. Technologies between the infrastructure and your application have evolved — the platform layer — making cloud computing easier. Rather than downloading and building all those platform-level technologies on each server instance and then having to repeat the process as you scale, you can go to a simple Web user interface, click a few options, and have your application automatically deployed to a fully provisioned cluster.&lt;/p&gt;

&lt;p&gt;As application usage grows, you can add more capacity using the auto scaling capabilities built into your PaaS. When you need to set up increasingly sophisticated architectures with high availability and disaster recovery, you can do this from the same Web interface.&lt;/p&gt;

&lt;p&gt;As all the constituent platform components evolve, they are automatically updated for you with no effort required. That is what Platform-as-a-Service (PaaS) is all about.&lt;/p&gt;

&lt;p&gt;A PaaS consists of three main components.&lt;/p&gt;

&lt;p&gt;First are the software layers your application runs on — “the stack.” These include the various libraries, frameworks, and services the developer uses to build the application, which are present in the runtime environment. The stack consists of the language interpreter or virtual machine (VM), application framework (e.g. Rails, Lithium), HTTP server, load balancer, caching mechanisms, databases, and container orchestration frameworks. A given PaaS may offer several stack combinations to choose from, such as different stacks for different languages or frameworks. The diagram shows a view of a stack based on Kubernetes and containers.&lt;/p&gt;

&lt;p&gt;Second is the deployment machinery that packages and deploys containers provisioned with your application stack. This machinery operates directly from your CI/CD pipeline via a Git push, and it gets out of the way once deployment is complete and your application is up and running.&lt;/p&gt;

&lt;p&gt;This machinery is itself code, perhaps a combination of scripts and Web services and may use an off-the-shelf technology such as Puppet or Chef. The way this machinery is architected, the particular parameters it exposes, and the functions it makes available to an overlying graphical user interface (GUI) or command line interface (CLI) are important differentiators between a good PaaS and a bad one.&lt;/p&gt;

&lt;p&gt;Third is the user interface and the overall user experience (UX). A particular PaaS may provide a Web GUI, a CLI, or both. The ordering of the screens, the choices, the logic of how multiple applications and environments are organized and presented — all these factors are make-or-break for the usability of a given PaaS. The goal is to make it easy to change the things you care about and hide the things you don’t care about. The right trade-offs between simplicity and flexibility, constraint and freedom, and opaqueness and transparency are critical.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;When you may need a PaaS&lt;/strong&gt;&lt;br&gt;
Those unfamiliar with PaaS options may ask, “What’s the true benefit of using a Platform as a Service?” They elaborate by saying, “I can install Ruby (or Node.js, PHP, MySQL, PostgreSQL, etc.), deploy my application, and monitor the systems myself!” This is definitely true. Thousands of companies are doing their own DevOps today, and that pattern works for them.&lt;/p&gt;

&lt;p&gt;Where a PaaS can really save you money is when the company doesn’t have the developer resources, in-house expertise, or contractor budget to efficiently manage their production infrastructure.&lt;/p&gt;

&lt;p&gt;A PaaS allows a development team of any size to focus on the application instead of the infrastructure, making them more productive and providing more “bang for the buck” with development dollars spent.&lt;/p&gt;

&lt;p&gt;Here are five basic scenarios in which you should consider a PaaS platform to deploy your applications to the cloud.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;You Don’t Have In-house DevOps Resources&lt;/em&gt;: Setting up platform-level software to run your application is time-consuming and complex. Take a look at this video that compares deploying an application directly to AWS vs using a PaaS. By simplifying, automating, and, in many cases, eliminating the steps associated with setting up the foundation for your application, you can get your application deployed much more quickly in the first place, and you can iterate, adapt, and extend it more rapidly over time. Your developers can focus on development and leave the deployment and management to your PaaS provider.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;You Want to Improve Infrastructure Performance&lt;/em&gt;: Infrastructure knowledge and expertise are built over time. It's only when you have spent decades deploying applications on Ruby that you have the knowledge to provision the best-performing infrastructure stack for Ruby. That's where PaaS providers come in: they have specialized knowledge and expertise about the best database, load balancer, web server, cache, etc., to deliver the best infrastructure stack for your application.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;You Want to Standardize Infrastructure&lt;/em&gt;: With the number of choices available today, it's very difficult to decide what infrastructure choices to make. This has resulted in the creation of bespoke permutations and combinations that work great until vendors’ upgrades require you to keep up with tens or hundreds of configuration changes. Why not let the vendor’s experts make the choices and create standard best-in-class operations that can be managed easily?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;You Want to Reduce Infrastructure Costs&lt;/em&gt;: The majority of organizations have high levels of idle or underutilized infrastructure. This is because they try to overprovision infrastructure to deliver performance at peak periods. A robust PaaS has inbuilt autoscaling that enables your infrastructure to scale up or down based on demand. This can reduce your infrastructure costs by up to 50 percent over time while providing excellent application performance.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;em&gt;Your Applications Need to Be Managed Round the Clock&lt;/em&gt;: A critical consideration for most companies is the type of support an application needs. Is one day of downtime acceptable in your business environment? What about two days? One of the key benefits a good PaaS solution provides is 24x7 monitoring and support so you incur no downtime when there are issues with your application. It's critical here to choose a PaaS partner that provides these benefits.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Reasons to use a PaaS instead of doing it yourself&lt;/strong&gt;&lt;br&gt;
There are three core benefits to investing in a PaaS versus doing it yourself.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;#1 Increase Agility&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Using a PaaS to deploy and run your application enhances your agility and time to market. This is because a PaaS greatly simplifies and accelerates the deployment process. Instead of spending hours or even weeks setting up and configuring a production-strength cluster, you set it up in a matter of minutes. This improves time to market and helps developers be more productive and focus on what they do best: building apps. You can get your apps to market faster, and you can iterate and adapt faster with the same development resources as before.&lt;/p&gt;

&lt;p&gt;Eliminating much of the overhead to deploy and manage applications doesn’t just mean you can do certain things faster. It means you don’t have to do certain things at all, which allows you to be even better at the things that differentiate your business, like building applications with innovative features and exceptional user experiences.&lt;/p&gt;

&lt;p&gt;Another challenge of deploying your application on a self-built stack is the sheer number of components that need to be maintained and updated over time. When you need to swap in an update to the app server or the load balancer you may find yourself in a nightmare of reconfiguration. This fear leads many do-it-yourselfers to remain indefinitely on an increasingly outdated stack for fear of rocking the boat. With PaaS, you not only get the best possible stack as of the moment you deploy, you also get a stack that keeps up with you over time, ensuring that your application is always running on the latest and greatest.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;#2 Optimize Costs&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;One of the biggest challenges with provisioning infrastructure to maintain high app performance is knowing how much infrastructure to provision: how many CPUs, and how much RAM and storage, are optimal to guarantee performance at peak periods. Most companies end up overprovisioning infrastructure, with high levels of idle or underutilized resources. This results in excessive spending on AWS usage, which can be controlled simply by rightsizing the infrastructure. A robust PaaS platform uses intelligent monitoring of application performance in production to rightsize infrastructure. It also autoscales based on demand, resulting in higher utilization levels and lower AWS costs.&lt;/p&gt;

&lt;p&gt;The other big area where you save is, of course, the cost of hiring a dedicated person to manage your applications 24x7. The reality here is that such resources are expensive and scarce, and you have to ask yourself where the money is better spent. So do the math and see for yourself what's cheaper, especially when you factor in the AWS usage savings. All this does not even count the savings in developer time spent setting up and configuring the application on the cloud, as well as ongoing monitoring and management, which can be a 24x7 job for a mission-critical app.&lt;/p&gt;

&lt;p&gt;There are also less obvious hidden costs, such as the cost of downtime when one of your administrators makes a mistake configuring your application server, and no one can access your Web application for hours. According to a study by Uptime Institute, 70 percent of data center downtime is caused by human error. Consider both the hard costs of downtime, such as lost business and unexpected support costs, and the soft costs, such as idled employees and a tarnished reputation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;#3 Better App Performance&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The benefit of economies of scale doesn’t simply stop at getting the same thing for less money. What you actually end up with is something better, for less money. The stack and platform-level technology you would build yourself will almost never be as good as what a top PaaS will provide. Few companies have both the ability to pay and the attractiveness to hire the world’s best platform builders. A PaaS employs specialists who constantly tune, optimize, load-balance, reconfigure, and so on. The result is faster application performance.&lt;/p&gt;

&lt;p&gt;The best PaaS vendors embed technologies and techniques in their products to keep availability high enough that they can offer service-level agreements (SLAs) at or above 99.9 percent availability.&lt;/p&gt;

&lt;p&gt;One of the key benefits baked into a best-in-class PaaS platform is autoscaling: scaling up or down based on demand. When building a platform yourself, you basically have three choices: you can optimize for the scale you’re at now, you can optimize for a scale you expect to be at a later date, or you can invest a lot in building your own scaling mechanism. In the first case you risk having to redo your platform and incur downtime when you outgrow your initial setup. In the second case you will likely waste resources due to overprovisioning. And in the third case, you will likely spend a lot of opportunity cost building something that ends up nowhere near as good as what you can get from a PaaS. With a PaaS, on the other hand, you get the benefit of a great scaling mechanism developed by experts over time and in response to the needs of many customers. On top of that, the PaaS scaling mechanism leverages the underlying infrastructure’s elasticity but presents it in an easy-to-use way, abstracting away the complexity of the mechanism’s details.&lt;/p&gt;

&lt;p&gt;Security showcases another distinct advantage of the PaaS model. With the sheer volume and the diversity of security threats on an upward spiral, protecting against attacks is best left to specialists. A PaaS offering provides continual security updates for individual stack components as they are issued.&lt;/p&gt;

&lt;p&gt;Finally, if your application is mission critical and any downtime is unacceptable then outsourcing to a PaaS vendor makes sense. A good PaaS vendor will offer 24x7 support and specialized domain experts who have dealt with hundreds of problems in the same domain as yours. You speak to someone who has access to—may even be sitting next to—some of the leading experts in the community, whether for core Ruby or PHP language stack components or complementary open source projects.&lt;/p&gt;

&lt;p&gt;If your core expertise is in developing software, not deployment and infrastructure, a PaaS may be the answer to your prayers.&lt;/p&gt;

&lt;p&gt;Any PaaS you choose needs to deliver well – not just deliver – in these areas:&lt;/p&gt;

&lt;p&gt;• Infrastructure Expertise: Look for a provider with expertise in the language your application is developed in. If you have a PHP application, ensure that they have expertise in PHP infrastructure and can make good platform choices for you.&lt;br&gt;
• Setup &amp;amp; Configuration Time: A good PaaS platform should reduce your setup and configuration time with a preconfigured stack and enable you to push your code directly from Git.&lt;br&gt;
• Horizontal Autoscaling: Look for the ability to scale automatically in response to traffic and usage demands so you don't have to worry about scaling your infrastructure based on planned usage. This ensures that you don’t need to provision excess capacity and waste valuable dollars on idle infrastructure.&lt;br&gt;
• Support &amp;amp; SLA: If you don't have in-house resources, the PaaS provider is your support arm. It's one of the most important choices you have to make, so choose a provider with a reputation for robust support. Examine their SLA and response times critically.&lt;br&gt;
• Uptime Guarantee: An uptime guarantee of 99 percent is fairly standard in the business. Ensure that there is no downtime during deployments.&lt;br&gt;
• Redundancy, Failover &amp;amp; Backups: Ensure that the provider takes responsibility for replication, backups, and recovery, as well as keeping your platform up to date with patches and new functionality.&lt;br&gt;
• Security: What kind of security does your PaaS provide? Are you on a private cluster, or do you have shared resources? Public clusters are less secure and can be susceptible to noisy-neighbor issues, where one or more users hog all the available resources and degrade performance.&lt;br&gt;
• Pricing: How well does the pricing scale with usage? One common complaint from PaaS users is that costs can climb really fast. Ensure that the pricing is scalable and delivers value vs. DIY.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;When you should not consider a PaaS&lt;/strong&gt;&lt;br&gt;
While 80 percent of scenarios lend themselves well to a PaaS, there are some situations where it is important to make all the infrastructure decisions yourself. In these cases a PaaS may not be the best solution for you.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;em&gt;Legacy Systems&lt;/em&gt;: PaaS may not be a plug-and-play solution for existing legacy apps and services that were not designed for the cloud. Instead, several customizations and configuration changes may be necessary for legacy systems to work with a PaaS service. These customizations can produce a complex IT system that may limit the value of the PaaS investment altogether.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;On-premise Integrations&lt;/em&gt;: Connecting to data stored in an on-site data center or another off-premise cloud adds complexity, which may limit which apps and services can be moved to the PaaS. When not every component of a legacy IT system is built for the cloud, integration with existing services and infrastructure may be a challenge.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Highly Customized Configurations&lt;/em&gt;: A robust PaaS platform tends to promote standardized configurations and limit some flexibility. Although this is intended to reduce the operational burden on developers, it may not be appropriate for you if you want to extensively customize your environment.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Enormous Scale of Operations&lt;/em&gt;: If your operations are at the scale of a Netflix or AOL, you probably have a large in-house team, and you may want to make all the infrastructure decisions yourself rather than leave them to a third party.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In summary, PaaS simplifies application deployment and management, improves agility and time to market, and reduces deployment and management costs. If you are a software developer who wants to deploy applications to the cloud and doesn't have an in-house DevOps team, a PaaS platform may be the answer.&lt;/p&gt;

&lt;p&gt;As seen on &lt;a href="https://www.itproportal.com/features/dont-pass-on-paas/"&gt;ITProPortal.com&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>aws</category>
      <category>cloud</category>
      <category>webdev</category>
    </item>
  </channel>
</rss>
