<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Sergio Díaz</title>
    <description>The latest articles on DEV Community by Sergio Díaz (@sergiodn).</description>
    <link>https://dev.to/sergiodn</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F170519%2F189c715b-3e98-4aaa-b6bb-50e952e3ae5d.jpeg</url>
      <title>DEV Community: Sergio Díaz</title>
      <link>https://dev.to/sergiodn</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/sergiodn"/>
    <language>en</language>
    <item>
      <title>A Little Bit of Action with AWS KMS</title>
      <dc:creator>Sergio Díaz</dc:creator>
      <pubDate>Tue, 29 Sep 2020 01:03:26 +0000</pubDate>
      <link>https://dev.to/sergiodn/a-little-bit-of-action-with-aws-kms-oki</link>
      <guid>https://dev.to/sergiodn/a-little-bit-of-action-with-aws-kms-oki</guid>
      <description>&lt;h1&gt;
  
  
  TL;DR
&lt;/h1&gt;

&lt;p&gt;After reading this blog post, you will have an overview of how to implement Encryption as a Service using AWS KMS.&lt;/p&gt;

&lt;h1&gt;
  
  
  Overview
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://dev.to/sergiodn/encryption-as-a-service-in-action-213n"&gt;In the previous post&lt;/a&gt;, we used Hashicorp Vault to perform encryption and decryption operations. Now we are going to take a look at a different approach. Why? Although Vault is an open-source solution &lt;a href="https://github.com/hashicorp/vault"&gt;backed by a great community&lt;/a&gt;, it might not be for you. For example, you need to manage it yourself and ensure that it has 100% uptime; failing to do so could jeopardize your application's availability. This also requires engineering resources, so depending on your company's priorities, you might consider different options. With AWS KMS, you can worry less about managing a business-critical service while still obtaining the advantages of Encryption as a Service.&lt;/p&gt;

&lt;h2&gt;
  
  
  Quick Start
&lt;/h2&gt;

&lt;p&gt;The following diagram gives a general overview of how this implementation works:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TSAEAqWY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/nly75rjcqqlj168xxxna.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TSAEAqWY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/nly75rjcqqlj168xxxna.png" alt="https://dev-to-uploads.s3.amazonaws.com/i/nly75rjcqqlj168xxxna.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Encryptor
&lt;/h3&gt;

&lt;p&gt;This service &lt;em&gt;identifies a user&lt;/em&gt; and then encrypts a message received via console prompt. When we encrypt a message, the encryptor receives a ciphertext blob (bytes) from KMS; the encryptor base64-encodes this blob before returning the message to the client, which makes the data easier to transport.&lt;/p&gt;
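The encryptor's core can be sketched roughly like this. This is a minimal sketch, not the repo's exact code: `encrypt_message`, the `"user"` context key, and the injected `kms` client parameter are my own naming; in the real script the client would come from `boto3.client("kms")`.

```python
import base64


def encrypt_message(kms, key_id, user, plaintext):
    """Encrypt `plaintext` under `key_id` and base64-encode the result.

    `kms` is any object exposing KMS's `encrypt` operation, e.g. a
    boto3 KMS client. The EncryptionContext binds the caller's
    identity to the ciphertext.
    """
    response = kms.encrypt(
        KeyId=key_id,
        Plaintext=plaintext.encode("utf-8"),
        EncryptionContext={"user": user},
    )
    # KMS returns raw bytes; base64 makes the blob easy to copy around.
    return base64.b64encode(response["CiphertextBlob"]).decode("ascii")
```

Taking the client as a parameter also makes the function easy to unit-test with a stub.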

&lt;h3&gt;
  
  
  Decryptor
&lt;/h3&gt;

&lt;p&gt;This service &lt;em&gt;identifies a user&lt;/em&gt; and then decrypts an encrypted (base64-encoded) message received via console prompt.&lt;/p&gt;
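The decryptor mirrors the encryptor. Again a sketch with hypothetical names rather than the repo's exact code; note that for a symmetric CMK, KMS infers the key from the ciphertext blob itself, so no key id needs to be passed.

```python
import base64


def decrypt_message(kms, user, b64_ciphertext):
    """Base64-decode the transported message and ask KMS to decrypt it.

    Only the EncryptionContext has to match the one supplied at
    encryption time; otherwise KMS raises InvalidCiphertextException.
    `kms` is any object exposing KMS's `decrypt` operation.
    """
    blob = base64.b64decode(b64_ciphertext)
    response = kms.decrypt(
        CiphertextBlob=blob,
        EncryptionContext={"user": user},
    )
    return response["Plaintext"].decode("utf-8")
```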

&lt;h3&gt;
  
  
  AWS KMS
&lt;/h3&gt;

&lt;p&gt;This service will manage encryption and decryption for us. But first we need to create it, so let’s use CloudFormation (CF) to request the resources we need.&lt;/p&gt;

&lt;h2&gt;
  
  
  Requesting the resources in AWS using CF
&lt;/h2&gt;

&lt;p&gt;You can find the source code &lt;a href="https://github.com/shekodn/kms-poc"&gt;here&lt;/a&gt;. By using the repo’s CF template, we will create the following resources:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--x1S21DQs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/et16y00sl2ea7v3gms8k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--x1S21DQs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/et16y00sl2ea7v3gms8k.png" alt="https://dev-to-uploads.s3.amazonaws.com/i/et16y00sl2ea7v3gms8k.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  theKey (CMK)
&lt;/h3&gt;

&lt;p&gt;A logical key that represents the top of a key hierarchy. We use it to perform encryption and decryption operations. By default, it is a symmetric key, so it can both encrypt and decrypt.&lt;/p&gt;

&lt;h3&gt;
  
  
  theAlias
&lt;/h3&gt;

&lt;p&gt;This is the alias for our key. It is helpful for referencing a key in a human-friendly format. For example, you can use &lt;code&gt;alias/the-alias&lt;/code&gt; instead of a UUID. Once an alias has been associated with &lt;code&gt;theKey&lt;/code&gt;, the alias can be used in place of the &lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Tagging.ARN.html"&gt;ARN&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  iCanEncryptStuffGroup and iCanDecryptStuffGroup
&lt;/h3&gt;

&lt;p&gt;These IAM groups have a policy that allows their members to use KMS to encrypt or decrypt, respectively.&lt;/p&gt;

&lt;h3&gt;
  
  
  EncryptorUser and DecryptorUser
&lt;/h3&gt;

&lt;p&gt;These users are able to encrypt or decrypt according to the group they belong to.&lt;/p&gt;

&lt;h3&gt;
  
  
  EncryptorUserAccessKey and DecryptorUserAccessKey
&lt;/h3&gt;

&lt;p&gt;These credentials allow the associated user to use the AWS API. Normally, we would create an IAM role and attach it to the EC2 instance that runs our application, but here we are using local scripts [0].&lt;/p&gt;

&lt;h2&gt;
  
  
  Deployment
&lt;/h2&gt;

&lt;p&gt;Now it’s time to deploy our infrastructure. Run &lt;code&gt;./deploy.sh&lt;/code&gt; to create the resources. Just make sure you update the variables accordingly and use an AWS account with enough privileges to run the CF template.&lt;/p&gt;

&lt;p&gt;If you see an output like the following, we are ready to go.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
"ExportingStackId": "arn:aws:cloudformation:us-east-2:XXX:stack/kms-poc/a59778da-fe1b-11ea-adc1-0242ac120002",
"Name": "DecryptorUserAccessKey",
"Value": "AKIAXXXXXXXXXXXXXXXX"
},
{
"ExportingStackId": "arn:aws:cloudformation:us-east-2:XXX:stack/kms-poc/a59778da-fe1b-11ea-adc1-0242ac120002",
"Name": "DecryptorUserSecretAccessKey",
"Value": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
},
{
"ExportingStackId": "arn:aws:cloudformation:us-east-2:XXX:stack/kms-poc/a59778da-fe1b-11ea-adc1-0242ac120002",
"Name": "EncryptorUserAccessKey",
"Value": "AKIAXXXXXXXXXXXXXXXX"
},
{
"ExportingStackId": "arn:aws:cloudformation:us-east-2:XXX:stack/kms-poc/a59778da-fe1b-11ea-adc1-0242ac120002",
"Name": "EncryptorUserSecretAccessKey",
"Value": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
},
{
"ExportingStackId": "arn:aws:cloudformation:us-east-2:XXX:stack/kms-poc/a59778da-fe1b-11ea-adc1-0242ac120002",
"Name": "theKeyAlias",
"Value": "alias/kms-poc"
},
{
"ExportingStackId": "arn:aws:cloudformation:us-east-2:XXX:stack/kms-poc/a59778da-fe1b-11ea-adc1-0242ac120002",
"Name": "theKeyId",
"Value": "a59778da-fe1b-11ea-adc1-0242ac120002"
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Create a &lt;code&gt;.env&lt;/code&gt; file and use &lt;code&gt;env.example&lt;/code&gt; for some inspiration. You should be able to get all the info you need from the previous output.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: For the &lt;code&gt;KEY_ID&lt;/code&gt; env variable’s value, we can use either &lt;code&gt;theKeyId&lt;/code&gt;’s or the &lt;code&gt;theKeyAlias&lt;/code&gt;’s value.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to Encrypt
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Run &lt;code&gt;python3 encryptor.py&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Use either &lt;code&gt;alice&lt;/code&gt; or &lt;code&gt;bob&lt;/code&gt; to “authenticate”&lt;/li&gt;
&lt;li&gt;Write a message to encrypt&lt;/li&gt;
&lt;li&gt;Hit Enter&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You should see a base64 output like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;AQICAHhQUp2GIUg1Tf/r2I9nmNWsIJGDrRYieOviWRC0SLz/bwEXoiPmO1CgXGPhO5su0nhJAAAAbjBsBgkqhkiG9w0BBwagXzBdAgEAMFgGCSqGSIb3DQEHATAeBglghkgBZQMEAS4wEQQM9eb1WUEj8kK/WBjvAgEQgCs9K6dPBPgWZ+ZqKqXdIt1S/CUE1Xj9fcUq9vo95Mw/6XKv8yQkciWBnH55
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h2&gt;
  
  
  How to Decrypt
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Copy the output from the encryptor and run the decryptor: &lt;code&gt;python3 decryptor.py&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Use the same user you used to encrypt (alice or bob)&lt;/li&gt;
&lt;li&gt;Paste the base64 output&lt;/li&gt;
&lt;li&gt;If everything went as planned, you should see something like:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Username:

alice

Hello alice, please enter your message to decrypt:

AQICAHhQUp2GIUg1Tf/r2I9nmNWsIJGDrRYieOviWRC0SLz/bwEXoiPmO1CgXGPhO5su0nhJAAAAbjBsBgkqhkiG9w0BBwagXzBdAgEAMFgGCSqGSIb3DQEHATAeBglghkgBZQMEAS4wEQQM9eb1WUEj8kK/WBjvAgEQgCs9K6dPBPgWZ+ZqKqXdIt1S/CUE1Xj9fcUq9vo95Mw/6XKv8yQkciWBnH55

Your decrypted message: this is a secret
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h2&gt;
  
  
  Encryption Context
&lt;/h2&gt;

&lt;p&gt;Now try encrypting a message as &lt;code&gt;alice&lt;/code&gt;, but decrypting it as &lt;code&gt;bob&lt;/code&gt;. Even though we are using the same key and have the appropriate permissions, we get an &lt;code&gt;InvalidCiphertextException&lt;/code&gt;. This happens because we used a different &lt;code&gt;EncryptionContext&lt;/code&gt; for encrypting and decrypting. Even though this parameter is optional, you should use it. Its value helps us understand why a given master key was used and which operation was performed, which comes in handy when we are looking at the logs.&lt;/p&gt;

&lt;p&gt;While the &lt;code&gt;EncryptionContext&lt;/code&gt; doesn’t need to be kept secret, it helps ensure that you are encrypting and decrypting between services in a “conscious” way. For example, if your application is handling messages for &lt;code&gt;alice&lt;/code&gt; but by mistake receives a message from &lt;code&gt;bob&lt;/code&gt;, the decryption operation will fail. With that being said, encryption plus an &lt;code&gt;EncryptionContext&lt;/code&gt; is like wearing both &lt;a href="https://www.investopedia.com/terms/b/belt-and-suspenders.asp#mntl-sc-block_1-0-3:~:text=It%20is%20based%20on%20the%20idea,methods%20for%20holding%20up%20their%20pants."&gt;belt and suspenders&lt;/a&gt;. &lt;/p&gt;
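The contract can be illustrated with a toy model. To be clear, this is not real KMS cryptography (KMS binds the context as additional authenticated data inside AES-GCM); the class below just simulates the observable behavior, with all names my own:

```python
class ToyKMS:
    """Toy model of KMS's EncryptionContext contract.

    Real KMS binds the context cryptographically (as AAD); here we
    simply store it next to the "ciphertext" to mimic the behavior.
    """

    def __init__(self):
        self._store = {}

    def encrypt(self, context, plaintext):
        handle = f"blob-{len(self._store)}"
        self._store[handle] = (dict(context), plaintext)
        return handle

    def decrypt(self, context, handle):
        stored_context, plaintext = self._store[handle]
        if dict(context) != stored_context:
            # Mirrors KMS raising InvalidCiphertextException.
            raise ValueError("InvalidCiphertextException")
        return plaintext
```

Encrypting as `{"user": "alice"}` and decrypting as `{"user": "bob"}` fails, exactly like the experiment above.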

&lt;h2&gt;
  
  
  Key Rotation
&lt;/h2&gt;

&lt;p&gt;When using KMS, this functionality can be seen as an advantage or a disadvantage. For example, if you set &lt;a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-kms-key.html#cfn-kms-key-enablekeyrotation"&gt;EnableKeyRotation&lt;/a&gt; while creating an &lt;code&gt;AWS::KMS::Key&lt;/code&gt;, KMS automatically creates new key material for it and rotates it every year. If for some reason you need to rotate more often, you would need to do a &lt;a href="https://docs.aws.amazon.com/kms/latest/developerguide/rotate-keys.html#rotate-keys-manually"&gt;manual rotation&lt;/a&gt;. Also note that KMS retains all key versions until you delete the key; in other words, you cannot delete an old version of the CMK. &lt;/p&gt;

&lt;h2&gt;
  
  
  Other things to Consider
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Key Strategy
&lt;/h3&gt;

&lt;p&gt;For this example we only used one key to encrypt and decrypt the users' messages. What would happen if we had millions of users encrypting millions of messages? In that case, you will need to adopt a key hierarchy model that meets the needs of your organization. If you want to know more about this, I suggest starting &lt;a href="https://aws.amazon.com/blogs/security/benefits-of-a-key-hierarchy-with-a-master-key-part-two-of-the-aws-cloudhsm-series/"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Vendor Lock-in
&lt;/h3&gt;

&lt;p&gt;While the AWS API is straightforward, it would be interesting to know how you could migrate to another key management system without having to re-encrypt every single blob that you have saved so far.&lt;/p&gt;

&lt;h3&gt;
  
  
  Disaster Recovery (DR)
&lt;/h3&gt;

&lt;p&gt;If you can't access your crypto keys, your application cannot do what it does. So it might be worth looking into how to implement a multi-region strategy in case of an outage. &lt;/p&gt;

&lt;h2&gt;
  
  
  Wrapping it Up
&lt;/h2&gt;

&lt;p&gt;As you can see, setting up and performing basic operations with AWS KMS is straightforward. Another advantage is that you don't have to worry about the service's uptime, which is great, especially if your business relies on those encryption and decryption operations. Nonetheless, you still need to put thought into things such as defining an adequate encryption context, coming up with a key strategy, analyzing how you could avoid vendor lock-in, and putting a disaster recovery strategy in place. &lt;/p&gt;

&lt;h2&gt;
  
  
  Reference
&lt;/h2&gt;

&lt;p&gt;[0] &lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/id.html"&gt;https://docs.aws.amazon.com/IAM/latest/UserGuide/id.html&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>kms</category>
      <category>python</category>
      <category>cloudformation</category>
    </item>
    <item>
      <title>Encryption as a Service in Action </title>
      <dc:creator>Sergio Díaz</dc:creator>
      <pubDate>Tue, 15 Sep 2020 13:22:31 +0000</pubDate>
      <link>https://dev.to/sergiodn/encryption-as-a-service-in-action-213n</link>
      <guid>https://dev.to/sergiodn/encryption-as-a-service-in-action-213n</guid>
      <description>&lt;h1&gt;
  
  
  TL;DR
&lt;/h1&gt;

&lt;p&gt;After reading this blog post you will have an overview of why we need encryption as a service. Also, we will implement a PoC using Python and Hashicorp Vault to apply what we just learned. &lt;/p&gt;

&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;Nowadays there are companies that deal with sensitive data such as banking or health information. This means that in the case of a security breach, the company could face legal, financial, and PR repercussions. As a result, there are industry-standard guidelines for storing this type of data (e.g. the Payment Card Industry Data Security Standard). For example, when it comes to storing credit card data, organizations are required to encrypt the account numbers stored in databases. &lt;/p&gt;

&lt;h2&gt;
  
  
  Encryption in transit
&lt;/h2&gt;

&lt;p&gt;While encryption at rest can be done seamlessly by &lt;a href="https://aws.amazon.com/about-aws/whats-new/2019/02/amazon-dynamodb-adds-support-for-switching-encryption-keys-to-encrypt-your-data-at-rest/#:~:text=DynamoDB%20encrypts%20data%20using%20256,provided%20at%20no%20additional%20charge.&amp;amp;text=To%20learn%20more%2C%20see%20Amazon%20DynamoDB%20Encryption%20at%20Rest" rel="noopener noreferrer"&gt;using a cloud provider&lt;/a&gt;, handling the data as it moves between services can be tricky. Yes, you can use a secure communication channel (e.g. TLS) between services, but this is not enough. One thing that can go wrong, even when using TLS, is logging sensitive information by mistake. This can happen even to &lt;a href="https://www.zdnet.com/article/monzo-admits-to-storing-payment-card-pins-in-internal-logs/" rel="noopener noreferrer"&gt;companies that are worth billions of dollars and have an awesome engineering team&lt;/a&gt;. &lt;/p&gt;

&lt;h2&gt;
  
  
  Handling encryption at the application level
&lt;/h2&gt;

&lt;p&gt;This is possible since there are good cryptographic libraries out there. However, developers must implement them correctly (e.g. not using deprecated ciphers). &lt;/p&gt;

&lt;p&gt;Furthermore, while your company might be running a &lt;a href="https://m.signalvnoise.com/the-majestic-monolith/" rel="noopener noreferrer"&gt;Majestic Monolith&lt;/a&gt;, chances are that you have other services that interact with your main service. This leads to questions such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What happens if more than one service needs access to the sensitive data?&lt;/li&gt;
&lt;li&gt;Which service will handle encryption and/or decryption?&lt;/li&gt;
&lt;li&gt;How will encryption keys be handled, rotated, and distributed? &lt;/li&gt;
&lt;li&gt;What happens if a key is leaked?&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Using a Key Management System
&lt;/h2&gt;

&lt;p&gt;The solution is to protect sensitive data with a centralized key management system. This can be done using Hashicorp Vault, AWS KMS, Google CMK, etc. The idea is to delegate the responsibility for encryption and decryption to this service. A solution like Hashicorp Vault allows you to encrypt and decrypt application data with an HTTPS API call. This means that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Data can be encrypted at rest&lt;/li&gt;
&lt;li&gt;Data is secured in Transit (TLS)&lt;/li&gt;
&lt;li&gt;Key handling and cryptographic implementation are taken care of by Vault, not by developers&lt;/li&gt;
&lt;li&gt;More services could be added to interact with the sensitive data&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How Does It Work?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F501zvzo8fmtl6grmxq62.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F501zvzo8fmtl6grmxq62.png" alt="diagram"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can find the source code &lt;a href="https://github.com/shekodn/vault-encryption-as-a-service/releases/tag/v0.0.1" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;As you can see we have 3 services: &lt;/p&gt;

&lt;h3&gt;
  
  
  API
&lt;/h3&gt;

&lt;p&gt;This application allows you to create a "credit card" with a name and a PAN (primary account number). All data is stored encrypted, and the API doesn't know anything about the type of encryption used or how to decrypt it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Card Processor
&lt;/h3&gt;

&lt;p&gt;This service processes the credit cards we have created so far according to the business logic. Just as with the API, it doesn't know anything about encryption or decryption. &lt;/p&gt;

&lt;h3&gt;
  
  
  Vault
&lt;/h3&gt;

&lt;p&gt;This service handles the encryption done by the API and the decryption done by the Card Processor service. If a new service is added in the future, everything regarding encryption or decryption would still depend 100% on Vault. The best part is that no changes would be needed in the other services. &lt;/p&gt;
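Under the hood, both services just call Vault's transit HTTP API. Here is a stripped-down sketch of those two calls; the endpoint paths and response shapes follow Vault's transit API, while the function names and the injected `session` parameter (a `requests.Session` carrying the `X-Vault-Token` header in a real setup) are my own:

```python
import base64


def transit_encrypt(session, vault_addr, key_name, plaintext):
    """POST to transit/encrypt/<key>. Vault expects the plaintext
    base64-encoded and returns a "vault:v<N>:..." ciphertext string."""
    resp = session.post(
        f"{vault_addr}/v1/transit/encrypt/{key_name}",
        json={"plaintext": base64.b64encode(plaintext.encode()).decode()},
    )
    return resp.json()["data"]["ciphertext"]


def transit_decrypt(session, vault_addr, key_name, ciphertext):
    """POST to transit/decrypt/<key>. Vault returns the plaintext
    base64-encoded, so we decode it before handing it back."""
    resp = session.post(
        f"{vault_addr}/v1/transit/decrypt/{key_name}",
        json={"ciphertext": ciphertext},
    )
    return base64.b64decode(resp.json()["data"]["plaintext"]).decode()
```

Note how neither function ever sees a key: Vault keeps the key material entirely on its side.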

&lt;h2&gt;
  
  
  How to Run
&lt;/h2&gt;

&lt;p&gt;Just do &lt;code&gt;docker-compose up --build&lt;/code&gt; and you should be ready to go.&lt;/p&gt;

&lt;h2&gt;
  
  
  Configuration
&lt;/h2&gt;

&lt;p&gt;Run the following script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./bin/vault.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;3 things just happened:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The transit Secrets Engine was enabled. This tells Vault to &lt;strong&gt;not&lt;/strong&gt; save data and to turn on the &lt;strong&gt;encryption as a service&lt;/strong&gt; functionality.&lt;/li&gt;
&lt;li&gt;The root token was created. This one is used by our apps to encrypt and decrypt.&lt;/li&gt;
&lt;li&gt;A symmetric key was generated. This one is used to encrypt and decrypt the respective data.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Grab the &lt;code&gt;token&lt;/code&gt; from the output value and put it as &lt;code&gt;VAULT_TOKEN&lt;/code&gt; env var in:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;app/settings.py&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;card_processor.py&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Using the credit card API
&lt;/h2&gt;

&lt;p&gt;Send a POST request to &lt;code&gt;http://localhost:8000/credit-cards/&lt;/code&gt; with the following body:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "name":"Carlos Gardel",
    "pan":"4539296620131157"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If everything went well you should receive the encrypted Primary Account Number (pan).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "name": "Carlos Gardel",
    "pan": "vault:v1:JnC8pS/zmHhHPGd7dk5eCGilnUi8odvRIBP9Z+rBmMLAWXJ/dgYqGAU4MTk="
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you do a GET request to &lt;code&gt;http://localhost:8000/credit-cards/&lt;/code&gt; you will still see the encrypted PAN field.&lt;/p&gt;

&lt;h2&gt;
  
  
  Using the card processor service
&lt;/h2&gt;

&lt;p&gt;Now let's process the card using the &lt;code&gt;card_processor.py&lt;/code&gt; service.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;python3&lt;/span&gt; &lt;span class="n"&gt;card_processor&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;py&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We should be able to see that the card was successfully decrypted and processed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Processing CARLOS GARDEL card with number 4539296620131157
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Rotating Keys
&lt;/h2&gt;

&lt;p&gt;To minimize the risk if a key is leaked, let's tell Vault to rotate it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;./bin/rotate.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If we do another POST request to the credit card application, we will see that we used &lt;code&gt;v2&lt;/code&gt; to perform the encryption.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "name": "Aníbal Troilo",
    "pan": "vault:v2:vdIlXggLzrM4n5Xlzxh6a/xpmd7yz/F9MsoifuR/kmOodGKV5wPaWvMMiEw="
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's do a GET and see our different encryption key versions (v1 and v2):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[
    {
        "name": "Carlos Gardel",
        "pan": "vault:v1:JnC8pS/zmHhHPGd7dk5eCGilnUi8odvRIBP9Z+rBmMLAWXJ/dgYqGAU4MTk="
    },
    {
        "name": "Aníbal Troilo",
        "pan": "vault:v2:vdIlXggLzrM4n5Xlzxh6a/xpmd7yz/F9MsoifuR/kmOodGKV5wPaWvMMiEw="
    }
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's go back to the &lt;code&gt;card_processor.py&lt;/code&gt; service. Now we are able to decrypt and process both entries: one was encrypted with &lt;code&gt;v1&lt;/code&gt; and the other with &lt;code&gt;v2&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Processing CARLOS GARDEL card with number 4539296620131157
Processing ANÍBAL TROILO card with number 2720997130887021
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Voilà. Encryption keys and rotation were handled by Vault.&lt;/p&gt;
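Vault can also upgrade old ciphertexts to the newest key version without ever exposing the plaintext to the caller, via the transit rewrap endpoint. A sketch, with the same `session` assumptions as the earlier transit calls and my own function naming:

```python
def transit_rewrap(session, vault_addr, key_name, ciphertext):
    """POST to transit/rewrap/<key>. Vault decrypts internally and
    re-encrypts under the latest key version, so a stored
    "vault:v1:..." value can be upgraded to "vault:v2:..." in place,
    without the plaintext ever leaving Vault."""
    resp = session.post(
        f"{vault_addr}/v1/transit/rewrap/{key_name}",
        json={"ciphertext": ciphertext},
    )
    return resp.json()["data"]["ciphertext"]
```

Running this over all stored PANs after a rotation would bring every record up to the latest key version.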

&lt;h2&gt;
  
  
  Further considerations
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Are we production ready yet?
&lt;/h3&gt;

&lt;p&gt;I don't think so. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We are using a root token&lt;/li&gt;
&lt;li&gt;TLS for communication between Vault and the services is yet to be implemented&lt;/li&gt;
&lt;li&gt;Vault needs to be sealed&lt;/li&gt;
&lt;li&gt;We need to make sure that the Vault service is reliable (did someone say K8s?)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Open Source vs Cloud Provider Solution
&lt;/h3&gt;

&lt;p&gt;Do you want to deal with a self-hosted open source solution or would you rather deal with your respective cloud provider’s solution? &lt;/p&gt;

&lt;h1&gt;
  
  
  Wrapping It Up
&lt;/h1&gt;

&lt;p&gt;If you want to discuss or challenge the implementation I’m happy to do so. Just leave a comment or find me on &lt;a href="https://twitter.com/sergiodn_" rel="noopener noreferrer"&gt;twitter&lt;/a&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  References:
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://m.signalvnoise.com/the-majestic-monolith/" rel="noopener noreferrer"&gt;https://m.signalvnoise.com/the-majestic-monolith/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://blog.deurainfosec.com/credit-card-primary-account-number-and-encryption/" rel="noopener noreferrer"&gt;https://blog.deurainfosec.com/credit-card-primary-account-number-and-encryption/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.zdnet.com/article/monzo-admits-to-storing-payment-card-pins-in-internal-logs/" rel="noopener noreferrer"&gt;https://www.zdnet.com/article/monzo-admits-to-storing-payment-card-pins-in-internal-logs/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/about-aws/whats-new/2019/02/amazon-dynamodb-adds-support-for-switching-encryption-keys-to-encrypt-your-data-at-rest/" rel="noopener noreferrer"&gt;https://aws.amazon.com/about-aws/whats-new/2019/02/amazon-dynamodb-adds-support-for-switching-encryption-keys-to-encrypt-your-data-at-rest/&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>security</category>
      <category>vault</category>
      <category>encryption</category>
    </item>
    <item>
      <title>Deploying a Bastion Host in AWS using CloudFormation</title>
      <dc:creator>Sergio Díaz</dc:creator>
      <pubDate>Tue, 21 Apr 2020 13:58:02 +0000</pubDate>
      <link>https://dev.to/sergiodn/deploying-a-bastion-host-in-aws-using-cloudformation-k9c</link>
      <guid>https://dev.to/sergiodn/deploying-a-bastion-host-in-aws-using-cloudformation-k9c</guid>
      <description>&lt;h1&gt;
  
  
  Overview
&lt;/h1&gt;

&lt;p&gt;In this blog post, we are going to talk about what a Bastion Host is and why we need one. Afterward, we are going to deploy a proof of concept using AWS CloudFormation.&lt;/p&gt;

&lt;h1&gt;
  
  
  Bastion Who?
&lt;/h1&gt;

&lt;p&gt;Although &lt;a href="https://cloud.google.com/blog/products/management-tools/identifying-and-tracking-toil-using-sre-principles"&gt;toil&lt;/a&gt; is highly discouraged, sometimes we need to &lt;em&gt;ssh&lt;/em&gt; into an instance in order to do some kind of debugging. Doing that directly would mean exposing the instance to the whole internet, and that is &lt;em&gt;no bueno&lt;/em&gt;. One way to prevent this is to implement a Bastion Host.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A bastion host is a server whose purpose is to provide access to a private network from an external network, such as the Internet.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h1&gt;
  
  
  Why do we need one?
&lt;/h1&gt;

&lt;p&gt;The idea of implementing this is to reduce the attack surface of our infrastructure by doing 2 things:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Removing public exposure from the application instances (this could also be a database instance) and other servers that are not meant to be open to the world.&lt;/li&gt;
&lt;li&gt; Being able to harden one machine (the bastion) and not each and every other server in our infrastructure. So, in this case, the m̶o̶r̶e̶ less the merrier.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Another benefit that the Bastion Host can have is logging in order to prevent &lt;a href="https://searchsecurity.techtarget.com/definition/nonrepudiation"&gt;repudiation&lt;/a&gt;. This works because engineers have their own key pair. As a result, you can keep track of what Alice and Bob did during their last session.&lt;/p&gt;
&lt;h1&gt;
  
  
  What are we going to deploy?
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Ie3N5rEM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/osnsbxxhdevw5mpy8lqq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Ie3N5rEM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/osnsbxxhdevw5mpy8lqq.png" alt="diagram"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The idea is that our on-call engineer will &lt;em&gt;ssh&lt;/em&gt; her way into the App Instance via Bastion Host. In order to replicate this setup, we need to deploy 15+ AWS resources, but let’s focus on the ones that are in the diagram:&lt;/p&gt;
&lt;h2&gt;
  
  
  VPC
&lt;/h2&gt;

&lt;p&gt;We need one so we can create the virtual network where our instances will run.&lt;/p&gt;
&lt;h2&gt;
  
  
  Private Subnet
&lt;/h2&gt;

&lt;p&gt;We need a network that can only receive internal traffic (we only need a private IP address).&lt;/p&gt;
&lt;h2&gt;
  
  
  Public Subnet
&lt;/h2&gt;

&lt;p&gt;We need a network that can receive traffic from the Internet (we need a public IP address).&lt;/p&gt;
&lt;h2&gt;
  
  
  Bastion Security Group (SG)
&lt;/h2&gt;

&lt;p&gt;We need it to make sure the Bastion Host instance can receive traffic on port 22 (SSH).&lt;/p&gt;
&lt;h2&gt;
  
  
  Application SG
&lt;/h2&gt;

&lt;p&gt;We need to make sure our App Instance can receive traffic from our Bastion Host SG.&lt;/p&gt;
&lt;h2&gt;
  
  
  Bastion Host (EC2)
&lt;/h2&gt;

&lt;p&gt;We need a server that we can use as a Bastion Host.&lt;/p&gt;
&lt;h2&gt;
  
  
  App Instance (EC2)
&lt;/h2&gt;

&lt;p&gt;We need a server that is not exposed to the internet.&lt;/p&gt;
&lt;h1&gt;
  
  
  Getting Started
&lt;/h1&gt;

&lt;p&gt;You can find the relevant files in &lt;a href="https://github.com/shekodn/bastion-poc"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;  Make sure you have an AWS account&lt;/li&gt;
&lt;li&gt;  Make sure you have a user with the appropriate roles&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html#having-ec2-create-your-key-pair"&gt;Create a key pair&lt;/a&gt; in the &lt;em&gt;us-east-1&lt;/em&gt; region. We will use the keys to connect to our instance.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  main.yml
&lt;/h2&gt;

&lt;p&gt;We can divide this file into 3 sections:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Parameters&lt;/strong&gt;: Where we import variables from &lt;em&gt;deploy.sh&lt;/em&gt; (more about it coming next) so we can use them with our resources’ attributes.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Resources&lt;/strong&gt;: Where we define all the AWS resources that we need for this setup.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Output&lt;/strong&gt;: If everything goes according to plan, we output the IP addresses of our created instances.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Remember: Although this is a simple setup, we need at least 15 AWS resources to make the desired implementation work. For example, we need an Internet Gateway so our Bastion Instance can talk to the internet and we need a Route Table to direct network traffic.&lt;/p&gt;
&lt;h2&gt;
  
  
  Set up the vars
&lt;/h2&gt;


&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# deploy.sh
STACK_NAME=bastion-poc  
REGION=us-east-1  
CLI_PROFILE=&amp;lt;your-aws-profile-with-an-appropriate-role&amp;gt;  
EC2_INSTANCE_TYPE=t2.micro  
KEY_NAME=&amp;lt;your-key-pair-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Run the deployment script
&lt;/h2&gt;

&lt;p&gt;In this script, we set up our credentials and run a command that deploys the &lt;em&gt;main.yml&lt;/em&gt; template to AWS. If everything goes well, you should get 2 IP addresses: one for the Bastion Instance (public) and one for the App Instance (private).&lt;/p&gt;
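The post doesn't reproduce the body of &lt;em&gt;deploy.sh&lt;/em&gt;, but its core is presumably an &lt;em&gt;aws cloudformation deploy&lt;/em&gt; invocation along these lines (the flags are standard AWS CLI options, while the parameter names and placeholder values are assumptions; the real script may differ). The snippet builds and prints the command instead of executing it.

```shell
#!/bin/sh
# Sketch of the command deploy.sh presumably runs -- profile and key
# names below are placeholders, and the real script may differ.
STACK_NAME=bastion-poc
REGION=us-east-1
CLI_PROFILE=default
KEY_NAME=bastion-poc-key
EC2_INSTANCE_TYPE=t2.micro
CMD="aws cloudformation deploy --region $REGION --profile $CLI_PROFILE \
--stack-name $STACK_NAME --template-file main.yml \
--capabilities CAPABILITY_NAMED_IAM \
--parameter-overrides KeyName=$KEY_NAME EC2InstanceType=$EC2_INSTANCE_TYPE"
echo "$CMD"   # review first, then execute with: eval "$CMD"
```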

&lt;p&gt;Go to your terminal and run the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./deploy.sh
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Note: If you want to debug or see what happened, go to the respective CloudFormation stack in the AWS console.&lt;/em&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Configure your SSH config file
&lt;/h2&gt;

&lt;p&gt;Now that everything is deployed, we are ready to pray to the demo gods and test our implementation. But before &lt;em&gt;ssh’ing&lt;/em&gt; anywhere, we need to do one more thing.&lt;/p&gt;

&lt;p&gt;Go to &lt;em&gt;~/.ssh/config&lt;/em&gt; and add the following hosts:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;...

### The Bastion Host  
Host bastion-host-poc  
 HostName &amp;lt;public-ip-from-output&amp;gt;  
 User ec2-user  
 Port 22  
 IdentityFile ~/.ssh/&amp;lt;your-key-pair-private-key&amp;gt;

### The App Host  
Host app-host-poc  
 HostName &amp;lt;private-ip-from-output&amp;gt;  
 User ec2-user  
 IdentityFile ~/.ssh/&amp;lt;your-key-pair-private-key&amp;gt;  
 ProxyJump bastion-host-poc
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h2&gt;
  
  
  SSH’ing your way in
&lt;/h2&gt;

&lt;p&gt;If everything went well (and if we prayed to the demo gods), we should be able to &lt;em&gt;ssh&lt;/em&gt; into the App Instance.&lt;/p&gt;

&lt;p&gt;Go to your terminal and &lt;em&gt;ssh&lt;/em&gt; into it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh app-host-poc
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Voilà. You are inside a machine that is running in a private subnet. Isn’t it cool?&lt;/p&gt;
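As a side note, &lt;em&gt;ProxyJump&lt;/em&gt; in &lt;em&gt;~/.ssh/config&lt;/em&gt; corresponds to OpenSSH's &lt;em&gt;-J&lt;/em&gt; flag (available since OpenSSH 7.3), so the same hop can be expressed as a one-liner without a config file. The IPs below are placeholders for the stack outputs; the snippet just prints the equivalent command.

```shell
#!/bin/sh
# ProxyJump and ssh -J are equivalent; placeholder IPs stand in for the
# public/private addresses printed by the deployment.
BASTION_IP=203.0.113.10   # public IP of the Bastion Instance (placeholder)
APP_IP=10.0.1.25          # private IP of the App Instance (placeholder)
CMD="ssh -J ec2-user@$BASTION_IP ec2-user@$APP_IP"
echo "$CMD"   # run it directly once the real IPs are in place
```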

&lt;h1&gt;
  
  
  Wrapping it up
&lt;/h1&gt;

&lt;p&gt;Remember, this is just a Proof of Concept. For example, the Application Instance&lt;br&gt;
can still send traffic to the whole world (do you really want that?). Similarly,&lt;br&gt;
the Bastion Instance has yet to be hardened.&lt;/p&gt;

&lt;p&gt;Implementing a Bastion can be useful for your current processes, especially if you have some instances exposed to the world and/or you want to control who can &lt;em&gt;ssh&lt;/em&gt; into your infrastructure.&lt;/p&gt;

&lt;p&gt;Although you probably have a more sophisticated setup, a Bastion Host might be&lt;br&gt;
the right solution for you and this could be the kickstart of your&lt;br&gt;
implementation.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Also published on &lt;a href="https://medium.com/@sergiodn/deploying-a-bastion-host-in-aws-using-cloudformation-47d436826ae7"&gt;Medium&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html#having-ec2-create-your-key-pair"&gt;Amazon EC2 key pairs and Linux instances&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/blogs/security/how-to-record-ssh-sessions-established-through-a-bastion-host/"&gt;How to Record SSH Sessions Established Through a Bastion Host&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Diagrams were made using: &lt;a href="https://www.planttext.com/"&gt;https://www.planttext.com/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>tech</category>
      <category>security</category>
      <category>devops</category>
    </item>
    <item>
      <title>How I Went From Localhost To Production Using The Devops Way Part 1 Of N</title>
      <dc:creator>Sergio Díaz</dc:creator>
      <pubDate>Sat, 14 Mar 2020 16:54:03 +0000</pubDate>
      <link>https://dev.to/sergiodn/how-i-went-from-localhost-to-production-using-the-devops-way-part-1-of-n-4494</link>
      <guid>https://dev.to/sergiodn/how-i-went-from-localhost-to-production-using-the-devops-way-part-1-of-n-4494</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3nfSugsm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/4928/0%2Ap18gvcXWGRGGlGJv" class="article-body-image-wrapper"&gt;&lt;img class="s t u fs ai" src="https://res.cloudinary.com/practicaldev/image/fetch/s--3nfSugsm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/4928/0%2Ap18gvcXWGRGGlGJv" width="2464" height="1632"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;small class="img-caption"&gt;&lt;br&gt;
Photo by &lt;a href="https://unsplash.com/@vegasphotog?utm_source=medium&amp;amp;utm_medium=referral"&gt;Robert Baker&lt;/a&gt; on &lt;a href="https://unsplash.com?utm_source=medium&amp;amp;utm_medium=referral"&gt;Unsplash&lt;/a&gt;&lt;br&gt;
&lt;/small&gt;&lt;/p&gt;
&lt;h1&gt;
  
  
  This post covers:
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;  Making a case for automation&lt;/li&gt;
&lt;li&gt;  Defining what is a Pipeline&lt;/li&gt;
&lt;li&gt;  Defining a game plan to automatically ship an application to a container repository&lt;/li&gt;
&lt;li&gt;  Setting up a Continuous Integration Platform&lt;/li&gt;
&lt;li&gt;  Breaking down the implementation&lt;/li&gt;
&lt;/ul&gt;
&lt;h1&gt;
  
  
  Making a Case for Automation
&lt;/h1&gt;

&lt;p&gt;Building a compiler is not an easy task. It involves the development of many modules such as a lexer, a parser, and a virtual machine. It also involves processing variable declarations (e.g. int a = 0) and evaluating simple arithmetic expressions (e.g. a = 4 * 10 + 2), all the way to calling functions with parameters and return values, such as a recursive factorial function.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
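The embedded gist does not render in this feed. As a stand-in for the kind of program the compiler handles, here is a recursive factorial, sketched in plain shell rather than the post's &lt;em&gt;.tl&lt;/em&gt; syntax:

```shell
#!/bin/sh
# Recursive factorial -- a shell stand-in for the .tl example the
# original gist showed.
factorial() {
  if [ "$1" -le 1 ]; then
    echo 1
  else
    echo $(( $1 * $(factorial $(( $1 - 1 ))) ))
  fi
}
factorial 5   # prints 120
```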


&lt;p&gt;Along the way, we discovered many things during the development phase that had not been considered during the design phase (I swear I’ve never heard of this problem before). As a result, we were constantly adjusting the grammar to make sure edge cases worked. That is why we decided to take a step back and take advantage of the &lt;em&gt;unittest&lt;/em&gt; module that python3 provides. For each &lt;em&gt;.tl&lt;/em&gt; file, our programming language file extension, we created a test. This allowed us to make changes faster while eliminating the fear of breaking previously working code.&lt;/p&gt;

&lt;p&gt;Tests were focused on three different categories:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Correctness of the intermediate code generation (quadruples)&lt;/li&gt;
&lt;li&gt; Expected failures of .tl files with the expected number of errors&lt;/li&gt;
&lt;li&gt; Correctness in expected executed output&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;As a result, &lt;em&gt;make test&lt;/em&gt; (Viva la Makefile) runs 50+ tests. But what happens if a developer forgets to run the tests before opening a pull request or merging to master? Thankfully we use GitHub, so going back to a specific commit is possible, but is it practical? Do we really want to go down that painful road?&lt;/p&gt;

&lt;p&gt;Furthermore, &lt;em&gt;trendlit&lt;/em&gt;, our compiler, is meant to be cloud-based. So that means that we also need an easy way to deploy it. On top of that, since &lt;em&gt;trendlit&lt;/em&gt; runs inside a Docker container, we also need a way to build the Docker image, check if the current version already exists in the container repository (e.g. Docker Hub), and eventually push that image to the registry. Therefore, we decided to do it the DevOps way.&lt;/p&gt;

&lt;h1&gt;
  
  
  But First, What is a Pipeline?
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bA9XMXxk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/8120/0%2AdfONqGnRvL62amzL" class="article-body-image-wrapper"&gt;&lt;img class="s t u fs ai" src="https://res.cloudinary.com/practicaldev/image/fetch/s--bA9XMXxk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/8120/0%2AdfONqGnRvL62amzL" width="4060" height="3226"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;small class="img-caption"&gt;&lt;br&gt;
Photo by &lt;a href="https://unsplash.com/@realaxer?utm_source=medium&amp;amp;utm_medium=referral"&gt;tian kuan&lt;/a&gt; on &lt;a href="https://unsplash.com?utm_source=medium&amp;amp;utm_medium=referral"&gt;Unsplash&lt;/a&gt;&lt;br&gt;
&lt;/small&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A pipeline is a set of automated processes that allow Developers and DevOps professionals to reliably and efficiently compile, build and deploy their code to their production compute platforms. There is no hard and fast rule stating what a pipeline should look like and the tools it must utilize, however the most common components of a pipeline are: build automation/continuous integration, test automation, and deployment automation. [0]&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h1&gt;
  
  
  Implementation Roadmap
&lt;/h1&gt;

&lt;p&gt;As we previously mentioned, we want to manage &lt;em&gt;trendlit&lt;/em&gt; the DevOps way. To do this, we will implement steps of Continuous Integration (CI), Continuous Delivery (CD) and Infrastructure as a Service (IaaS). For this post, we are going to focus on the Continuous Integration part.&lt;/p&gt;
&lt;h2&gt;
  
  
  Why would we even want CI?
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Continuous Integration is a development practice that requires developers to integrate code into a shared repository several times a day. Each check-in is then verified by an automated build, allowing teams to detect problems early. [5]&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;As a result, errors can be detected faster and can be located more easily. This allows an organization to deliver software more rapidly, while reducing the risk for each release.&lt;/p&gt;

&lt;p&gt;For this example, the idea is that a developer, while adding or fixing a feature, can go from localhost to the application being pushed to a registry within minutes, while relying on an automated process. How are we going to achieve this? Well, you guessed it: we are going to use a pipeline. So the workflow will look something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jfIGnKD1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/2264/1%2AhW5FMXo0ol5gSE8gdyzflQ.png" class="article-body-image-wrapper"&gt;&lt;img class="s t u fs ai" src="https://res.cloudinary.com/practicaldev/image/fetch/s--jfIGnKD1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/2264/1%2AhW5FMXo0ol5gSE8gdyzflQ.png" width="1132" height="397"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;small class="img-caption"&gt;&lt;br&gt;
Diagram generated by PlantUML: &lt;a href="https://bit.ly/2W1D6OR"&gt;https://bit.ly/2W1D6OR&lt;/a&gt;&lt;br&gt;
&lt;/small&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; A developer adds new feature to the application.&lt;/li&gt;
&lt;li&gt; The application goes through an automated testing process.&lt;/li&gt;
&lt;li&gt; When the developer makes a pull request, the application goes through the four-eyes principle [4]: if it is approved by another peer and the automated tests pass, the new feature is merged to the master branch of the code repository.&lt;/li&gt;
&lt;li&gt; A new version of the application is automatically built and packaged into a container.&lt;/li&gt;
&lt;li&gt; The container is pushed to a container repository.&lt;/li&gt;
&lt;/ol&gt;
&lt;h1&gt;
  
  
  Getting Started
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NkDWy0nF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://miro.medium.com/max/960/1%2Axh2ITtRfpa5hJgdVcM0d-Q.gif" class="article-body-image-wrapper"&gt;&lt;img class="s t u fs ai" src="https://res.cloudinary.com/practicaldev/image/fetch/s--NkDWy0nF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_880/https://miro.medium.com/max/960/1%2Axh2ITtRfpa5hJgdVcM0d-Q.gif" width="480" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In order to run the implementation roadmap described above, we chose to use the following tech stack:&lt;/p&gt;
&lt;h2&gt;
  
  
  Code Repository
&lt;/h2&gt;

&lt;p&gt;We will use GitHub in order to host the application’s source code.&lt;/p&gt;
&lt;h2&gt;
  
  
  CI Platform
&lt;/h2&gt;

&lt;p&gt;When it comes to pipelines, we can find them in all colors and flavors. For this project, we chose to go with CircleCI because it integrates easily with GitHub and allows SSH access to build instances, which is handy for debugging the build steps. It is worth mentioning that we could have replaced CircleCI with another automation server such as Jenkins, but Jenkins takes longer to set up because it is self-hosted.&lt;/p&gt;
&lt;h2&gt;
  
  
  Container Repository
&lt;/h2&gt;

&lt;p&gt;We’ll use the repository that is provided by &lt;a href="https://hub.docker.com/"&gt;Docker Hub&lt;/a&gt;.&lt;/p&gt;
&lt;h1&gt;
  
  
  Setting up CircleCI
&lt;/h1&gt;

&lt;p&gt;This is the application’s code repository: &lt;a href="https://github.com/shekodn/trendlit-tutorial-1"&gt;https://github.com/shekodn/trendlit-tutorial-1&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here you can also find all the scripts that the pipeline is going to run in order to test, build, check, and push the application to Docker Hub.&lt;/p&gt;
&lt;h2&gt;
  
  
  Steps
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;  Fork the &lt;a href="https://github.com/shekodn/trendlit-tutorial-1"&gt;project&lt;/a&gt;, so you can create your own pipeline.&lt;/li&gt;
&lt;li&gt;  Go to &lt;a href="http://hub.docker.com"&gt;hub.docker.com&lt;/a&gt; and create an account and a repository named &lt;em&gt;trendlit-tutorial-1&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;  &lt;a href="https://circleci.com/signup/"&gt;Sign up to CircleCI&lt;/a&gt; and link your GitHub account.&lt;/li&gt;
&lt;li&gt;  Select the forked version of &lt;em&gt;trendlit-tutorial-1&lt;/em&gt; and build the project. The build will fail because you still need to provide your Docker Hub credentials. You also need to change the &lt;em&gt;REPOSITORY&lt;/em&gt; variable in each bash script (or at least in &lt;em&gt;docker_build.sh&lt;/em&gt;, &lt;em&gt;docker_check.sh&lt;/em&gt; and &lt;em&gt;docker_push.sh&lt;/em&gt;) inside the &lt;em&gt;scripts&lt;/em&gt; directory to the name of your Docker Hub repository.&lt;/li&gt;
&lt;li&gt;  Don’t forget to push the changes to the &lt;em&gt;master&lt;/em&gt; branch.&lt;/li&gt;
&lt;/ul&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;  In your CircleCI’s dashboard in the project list go to &lt;em&gt;trendlit-tutorial-1&lt;/em&gt; settings located in &lt;a href="https://circleci.com/gh/shekodn/trendlit-tutorial-1/edit"&gt;https://circleci.com/gh/YOURUSERNAME/trendlit-tutorial-1/edit&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;  Under &lt;em&gt;Build Settings&lt;/em&gt; you will see &lt;em&gt;Environment Variables.&lt;/em&gt; Click there and add &lt;em&gt;DOCKER_USER&lt;/em&gt; and &lt;em&gt;DOCKER_PASS&lt;/em&gt; (a.k.a. your Docker Hub’s credentials) as variables with their respective values. These variables will allow CircleCI to upload the application’s container into your Docker Hub’s repository.&lt;/li&gt;
&lt;li&gt;  Rerun the workflow and this time it should succeed.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MJ9gtt5o--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1960/1%2AXDajcWxs803DCwjC5MfmFw.png" class="article-body-image-wrapper"&gt;&lt;img class="s t u fs ai" src="https://res.cloudinary.com/practicaldev/image/fetch/s--MJ9gtt5o--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/1960/1%2AXDajcWxs803DCwjC5MfmFw.png" width="980" height="384"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;small class="img-caption"&gt;&lt;br&gt;
This is the desired workflow after a Pull Request is merged to the &lt;em&gt;master&lt;/em&gt; branch.&lt;br&gt;
&lt;/small&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Afterwards, go to your Docker Hub and you should see three new tags: one with &lt;em&gt;trendlit’s&lt;/em&gt; RELEASE (a.k.a. version) that was specified at the top of the &lt;em&gt;Makefile,&lt;/em&gt; another with a shortened version of the commit hash, and a third named &lt;em&gt;latest.&lt;/em&gt; If you want to learn more about tagging docker images, you can have a look &lt;a href="https://container-solutions.com/tagging-docker-images-the-right-way/"&gt;here&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4tQjU0Vs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/864/1%2AunEapnbUW-rVlJ0txL0DxA.png" class="article-body-image-wrapper"&gt;&lt;img class="s t u fs ai" src="https://res.cloudinary.com/practicaldev/image/fetch/s--4tQjU0Vs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/864/1%2AunEapnbUW-rVlJ0txL0DxA.png" width="432" height="313"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;small class="img-caption"&gt;&lt;br&gt;
Tags were automatically generated by the Pipeline&lt;br&gt;
&lt;/small&gt;&lt;/p&gt;
&lt;h1&gt;
  
  
  How does it work?
&lt;/h1&gt;
&lt;h2&gt;
  
  
  Webhooks
&lt;/h2&gt;

&lt;p&gt;The pipeline is able to &lt;em&gt;listen&lt;/em&gt; for changes in the repository thanks to a &lt;em&gt;webhook.&lt;/em&gt; According to GitHub, &lt;em&gt;webhooks&lt;/em&gt; allow external services to be notified when certain events happen. When the specified events happen, GitHub sends a POST request to each of the URLs provided [3].&lt;/p&gt;

&lt;p&gt;In other words, when you link your GitHub account with CircleCI, you allow the latter to perform some actions in your repositories. One of them is automatically configuring a &lt;em&gt;webhook.&lt;/em&gt; Do you remember when we built the application for the first time? Well, it was created back then. So every time there is an event such as a pushed &lt;em&gt;commit&lt;/em&gt; or a &lt;em&gt;pull request,&lt;/em&gt; this &lt;em&gt;webhook&lt;/em&gt; sends a notification to CircleCI in order to trigger the build.&lt;/p&gt;

&lt;p&gt;If you want to see what the &lt;em&gt;webhook&lt;/em&gt; looks like, go to your project’s settings on GitHub and have a look. There you will be able to see which events trigger the &lt;em&gt;webhook.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--sXs1Z9lT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/4412/1%2AoHLAPflVhjycO0GHlBNzsg.png" class="article-body-image-wrapper"&gt;&lt;img class="s t u fs ai" src="https://res.cloudinary.com/practicaldev/image/fetch/s--sXs1Z9lT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/max/4412/1%2AoHLAPflVhjycO0GHlBNzsg.png" width="2206" height="882"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;small class="img-caption"&gt;&lt;br&gt;
Repository/Settings/Webhooks&lt;br&gt;
&lt;/small&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Decomposing the Config File
&lt;/h2&gt;

&lt;p&gt;Once the build is triggered by the &lt;em&gt;webhook&lt;/em&gt;, CircleCI goes to the &lt;em&gt;config.yml&lt;/em&gt; file located inside the &lt;em&gt;.circleci&lt;/em&gt; directory.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;
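The embedded gist does not render in this feed. Reconstructed from the sections described below, the &lt;em&gt;config.yml&lt;/em&gt; plausibly looks something like this; treat it as a hedged sketch, since the exact keys and step order in the repository may differ.

```yaml
# Sketch of .circleci/config.yml reconstructed from the post's description;
# the real file in the repository may differ in details.
version: 2
jobs:
  test:
    docker:
      - image: python:3.7-alpine    # executor for the unit tests
    steps:
      - checkout
      - run: make test
  deploy:
    docker:
      - image: docker:stable-git    # ships with Docker and git
    steps:
      - checkout
      - setup_remote_docker         # avoid Docker-in-Docker
      - run: apk add make
      - run: docker login -u "$DOCKER_USER" -p "$DOCKER_PASS"
      - run: ./scripts/docker_build.sh
      - run: ./scripts/docker_check.sh
      - run: ./scripts/docker_push.sh
workflows:
  version: 2
  test_and_deploy:
    jobs:
      - test
      - deploy:
          requires:
            - test
          filters:
            branches:
              only: master
```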


&lt;h2&gt;
  
  
  1. Jobs
&lt;/h2&gt;

&lt;p&gt;The pipeline consists of 2 main jobs: &lt;em&gt;Test&lt;/em&gt; and &lt;em&gt;Deploy.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Tests
&lt;/h2&gt;

&lt;p&gt;Here we select Docker as the executor type. Then, we use the &lt;em&gt;python:3.7-alpine&lt;/em&gt; image in order to run the tests.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Deploy
&lt;/h2&gt;

&lt;p&gt;Here we use an official Docker image called &lt;em&gt;docker:stable-git&lt;/em&gt;, because we need an image that ships with both Docker and git [1].&lt;/p&gt;

&lt;p&gt;Then we use &lt;em&gt;setup_remote_docker,&lt;/em&gt; because according to &lt;a href="https://twitter.com/jpetazzo"&gt;jpetazzo&lt;/a&gt;, we should think twice before using Docker inside Docker [2]. As a result, all the Docker commands such as &lt;em&gt;docker build&lt;/em&gt; and &lt;em&gt;docker push&lt;/em&gt; will be safely executed in this new environment.&lt;/p&gt;

&lt;p&gt;As you may have noticed, &lt;em&gt;Deploy&lt;/em&gt; has several &lt;em&gt;run&lt;/em&gt; steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We run &lt;em&gt;apk add make&lt;/em&gt; in order to get trendlit’s release version. You can get it in your terminal by running &lt;em&gt;make version.&lt;/em&gt; The latter prints the &lt;em&gt;RELEASE&lt;/em&gt; variable located at the top of the project’s Makefile.&lt;/li&gt;
&lt;li&gt; Remember when we set some environment variables? Well, there you go: We run &lt;em&gt;docker login&lt;/em&gt; so the pipeline has permission to push the docker image to the respective container repository.&lt;/li&gt;
&lt;li&gt; We use &lt;em&gt;docker_build.sh&lt;/em&gt; in order to build the Docker image with the following format: &lt;em&gt;REPOSITORY/IMAGE:TAG&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt; Afterwards, we use the docker_check script in order to check if the current release already exists in the respective container repository. Do you remember the &lt;em&gt;RELEASE&lt;/em&gt; variable located at the top of the &lt;em&gt;Makefile&lt;/em&gt;? Well, if the version already exists IT WILL BREAK THE PIPELINE, because you don’t want to overwrite an existing version, right? You can manually fix this by editing the &lt;em&gt;Makefile&lt;/em&gt; and assigning a different version, or you can run &lt;em&gt;make bump&lt;/em&gt; to increase the version in a smoother way ;)&lt;/li&gt;
&lt;li&gt; Last but not least, we push the image that &lt;em&gt;docker_build.sh&lt;/em&gt; previously built to Docker Hub.&lt;/li&gt;
&lt;/ul&gt;
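As an illustration of the build step, a &lt;em&gt;docker_build.sh&lt;/em&gt; along these lines would produce the three tags mentioned earlier (RELEASE, short commit hash, and &lt;em&gt;latest&lt;/em&gt;). The repository name, version, and hash below are placeholders, not values from the actual repository, and the snippet prints the command instead of running it.

```shell
#!/bin/sh
# Hypothetical sketch of scripts/docker_build.sh -- values are placeholders;
# the real script derives them from `make version` and `git rev-parse`.
REPOSITORY=youruser/trendlit-tutorial-1
RELEASE=1.0.0       # real script: RELEASE=$(make version)
SHORT_SHA=abc1234   # real script: SHORT_SHA=$(git rev-parse --short HEAD)
CMD="docker build -t $REPOSITORY:$RELEASE -t $REPOSITORY:$SHORT_SHA -t $REPOSITORY:latest ."
echo "$CMD"   # review, then run with: eval "$CMD"
```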

&lt;h2&gt;
  
  
  4. Workflows
&lt;/h2&gt;

&lt;p&gt;Here we defined a workflow with 2 jobs: &lt;em&gt;Test&lt;/em&gt; and &lt;em&gt;Deploy&lt;/em&gt;. &lt;em&gt;Test&lt;/em&gt; runs each time a change is made to the application in any branch, while &lt;em&gt;Deploy&lt;/em&gt; always waits for the &lt;em&gt;Test&lt;/em&gt; job to finish and only runs on the &lt;em&gt;master&lt;/em&gt; branch. This is because we don’t want to build and ship a container to the registry each time we make a change in a feature branch.&lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;strong&gt;Wrapping it up&lt;/strong&gt;
&lt;/h1&gt;

&lt;p&gt;Although we finished the &lt;em&gt;Implementation Roadmap&lt;/em&gt;, the implementation as a whole is far from done. It is easy to look at all the missing things and considerations, but remember where we started. Now we have a fully automated process that tests our application and pushes it to a container repository without any human interaction. It’s worth pointing out that some important aspects such as security were neglected, but this is only one part of the implementation’s MVP (Continuous Delivery is coming soon).&lt;/p&gt;

&lt;p&gt;If you have any suggestions, I would love to hear them. I’m pretty sure we can include them in future roadmaps. BTW, &lt;em&gt;Pull Requests&lt;/em&gt; are open ;) so I would like to collaborate with you if you think we can improve the process. In the meantime, I will be working on part 2.&lt;/p&gt;

&lt;h1&gt;
  
  
  References
&lt;/h1&gt;

&lt;p&gt;[0] &lt;a href="https://www.bmc.com/blogs/deployment-pipeline/"&gt;https://www.bmc.com/blogs/deployment-pipeline/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;[1] &lt;a href="https://circleci.com/docs/2.0/executor-types/"&gt;https://circleci.com/docs/2.0/executor-types/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;[2] &lt;a href="https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/"&gt;https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;[3] &lt;a href="https://developer.github.com/webhooks/"&gt;https://developer.github.com/webhooks/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;[4] &lt;a href="https://whatis.techtarget.com/definition/four-eyes-principle"&gt;https://whatis.techtarget.com/definition/four-eyes-principle&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;[5] &lt;a href="https://www.thoughtworks.com/continuous-integration"&gt;https://www.thoughtworks.com/continuous-integration&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Also published on &lt;a href="https://medium.com/@sergiodn/how-i-went-from-localhost-to-production-using-the-devops-way-part-1-of-n-7a7b4c35515f"&gt;Medium&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>devops</category>
    </item>
  </channel>
</rss>
