<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: David WOGLO</title>
    <description>The latest articles on DEV Community by David WOGLO (@davwk).</description>
    <link>https://dev.to/davwk</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F905471%2Fd18f48df-e7a5-46dc-9e73-d12c68914b3b.jpeg</url>
      <title>DEV Community: David WOGLO</title>
      <link>https://dev.to/davwk</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/davwk"/>
    <language>en</language>
    <item>
      <title>From legacy to cloud serverless - Part 4</title>
      <dc:creator>David WOGLO</dc:creator>
      <pubDate>Wed, 04 Sep 2024 11:46:09 +0000</pubDate>
      <link>https://dev.to/davwk/from-legacy-to-cloud-serverless-part-4-3omp</link>
      <guid>https://dev.to/davwk/from-legacy-to-cloud-serverless-part-4-3omp</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; This article was originally published on Feb 5, 2024  &lt;a href="https://davidwoglo.hashnode.dev/from-legacy-to-cloud-serverless-1-1-1" rel="noopener noreferrer"&gt;here&lt;/a&gt;. It has been republished here to reach a broader audience.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe546q6yeest3hn6bzath.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe546q6yeest3hn6bzath.png" alt="diagram" width="800" height="326"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Hello and welcome to this installment of our journey migrating a legacy-built app to the cloud. In this part, we will focus on three aspects: interfacing the application with Cloud Firestore, automating deployment, and exploring how Binary Authorization can reinforce supply chain security while aligning with security policies.&lt;/p&gt;

&lt;p&gt;If you're joining us midway, I encourage you to take a look at the previous articles to get up to speed. Otherwise, let's dive in! 😊&lt;/p&gt;

&lt;h2&gt;
  
  
  Integrating the app with Cloud Firestore
&lt;/h2&gt;

&lt;p&gt;Our previous code interacted with MongoDB. With the migration to Google Cloud, we are transitioning away from MongoDB in favor of Firestore, which is Google Cloud's managed NoSQL document database built for automatic scaling, high performance, and ease of application development. To achieve this, we'll need to make modifications to our code, ensuring that our application seamlessly integrates and functions with Firestore.&lt;/p&gt;

&lt;p&gt;We will replace the old MongoDB code with the following Firestore integration:&lt;/p&gt;

&lt;p&gt;Old:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;pymongo&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;MongoClient&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;bson.objectid&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;ObjectId&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;mongomock&lt;/span&gt;
&lt;span class="bp"&gt;...&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;TESTING&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;mongomock&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;MongoClient&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="k"&gt;else&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;MongoClient&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;MONGO_URI&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;span class="n"&gt;db&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;flask_db&lt;/span&gt;
&lt;span class="n"&gt;todos&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;todos&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;New:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;google.auth&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;compute_engine&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;google.cloud&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;firestore&lt;/span&gt;
&lt;span class="bp"&gt;...&lt;/span&gt;
&lt;span class="n"&gt;credentials&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;compute_engine&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Credentials&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;db&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;firestore&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Client&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;credentials&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;credentials&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;todos&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;collection&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;todos&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;from google.auth import compute_engine&lt;/code&gt;: This line imports the &lt;code&gt;compute_engine&lt;/code&gt; module from the &lt;code&gt;google.auth&lt;/code&gt; library, which is used for authentication in Google Cloud environments.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;from google.cloud import firestore&lt;/code&gt;: This line imports the &lt;code&gt;firestore&lt;/code&gt; module from the &lt;code&gt;google.cloud&lt;/code&gt; library, enabling interaction with Google Cloud Firestore.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The &lt;code&gt;compute_engine.Credentials()&lt;/code&gt; call retrieves the default credentials that Google Cloud provides in its environments. These credentials are required to authenticate with Firestore. In a local or other non-Google Cloud environment, you would need to generate a service account key before you could authenticate. In our case, since the code will be deployed on Cloud Run, authentication is handled by Cloud Run's default service account.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;todos = db.collection('todos')&lt;/code&gt;: Here we obtain a reference to the &lt;code&gt;todos&lt;/code&gt; collection. Collections are how Firestore organizes documents, and a reference can be created even before any documents exist.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
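&lt;p&gt;To make point 3 concrete, the decision between metadata-server credentials (Cloud Run) and a service account key (local development) can be sketched as a small helper. This is an illustration, not code from the app; &lt;code&gt;K_SERVICE&lt;/code&gt; is an environment variable that Cloud Run sets automatically on deployed services.&lt;/p&gt;

```python
def credential_source(env: dict) -> str:
    """Decide where Firestore credentials should come from.

    Hypothetical helper for illustration only:
    - 'metadata' means the metadata-server credentials available on
      Cloud Run / Compute Engine (compute_engine.Credentials()).
    - 'service_account_key' means an explicit key file for local
      development, typically pointed at by GOOGLE_APPLICATION_CREDENTIALS.
    """
    if env.get("K_SERVICE"):  # present on Cloud Run
        return "metadata"
    return "service_account_key"
```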

&lt;p&gt;&lt;strong&gt;Data Insertion&lt;/strong&gt;: When a POST request is made, the new todo item is added to the Firestore collection 'todos' using the &lt;code&gt;add&lt;/code&gt; method. The data is stored as a dictionary.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="nd"&gt;@app.route&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;/&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;methods&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;GET&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;POST&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;index&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;method&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;POST&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;content&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;form&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;degree&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;form&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;degree&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;todos&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;degree&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;degree&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Old:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;todos&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;insert_one&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;degree&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;degree&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;New:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;todos&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;degree&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;degree&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This modification reflects the adjustment needed in the code for Firestore, moving from the &lt;code&gt;insert_one&lt;/code&gt; method in MongoDB to the &lt;code&gt;add&lt;/code&gt; method in Firestore for adding documents.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Data Retrieval&lt;/strong&gt;: In the new code, we use &lt;code&gt;todos.stream()&lt;/code&gt; to obtain a stream of documents from the Firestore collection. In the old code, we used &lt;code&gt;todos.find()&lt;/code&gt; to get a cursor to the documents in the MongoDB collection.&lt;/p&gt;

&lt;p&gt;Old:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;all_todos&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;todos&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;find&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;New:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;all_todos&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;_id&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;doc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="n"&gt;doc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;to_dict&lt;/span&gt;&lt;span class="p"&gt;()}&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;doc&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;todos&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stream&lt;/span&gt;&lt;span class="p"&gt;()]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We now use &lt;code&gt;todos.stream()&lt;/code&gt; to iterate over the documents and convert each one to a dictionary. The &lt;code&gt;_id&lt;/code&gt; key holds the Firestore document ID.&lt;/p&gt;
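&lt;p&gt;The dictionary-merge pattern in that comprehension can be demonstrated without Firestore at all. Here &lt;code&gt;FakeDoc&lt;/code&gt; is a hypothetical stand-in for a Firestore &lt;code&gt;DocumentSnapshot&lt;/code&gt;, which exposes the document ID as &lt;code&gt;.id&lt;/code&gt; and the field values via &lt;code&gt;.to_dict()&lt;/code&gt;:&lt;/p&gt;

```python
class FakeDoc:
    """Hypothetical stand-in for a Firestore DocumentSnapshot."""

    def __init__(self, doc_id, data):
        self.id = doc_id        # Firestore exposes the document ID as .id
        self._data = data

    def to_dict(self):
        return dict(self._data)  # field values as a plain dict

docs = [FakeDoc("abc123", {"content": "write tests", "degree": "Important"})]

# Same comprehension as in app.py: prepend the ID, then spread the fields.
all_todos = [{"_id": doc.id, **doc.to_dict()} for doc in docs]
# all_todos[0] -> {'_id': 'abc123', 'content': 'write tests', 'degree': 'Important'}
```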

&lt;p&gt;&lt;strong&gt;Data Deletion&lt;/strong&gt;: In the new code, we employ &lt;code&gt;todos.document(id).delete()&lt;/code&gt; to remove a document from the Firestore collection. In the old code, we used &lt;code&gt;todos.delete_one({"_id": ObjectId(id)})&lt;/code&gt; to delete a document from the MongoDB collection.&lt;/p&gt;

&lt;p&gt;Old:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;todos&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;delete_one&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;ObjectId&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="p"&gt;)})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;New:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;todos&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;document&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;delete&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;todos.document(id).delete()&lt;/code&gt; method is used to delete a specific document by its ID in Firestore.&lt;/p&gt;

&lt;p&gt;After all these updates, the new &lt;code&gt;app.py&lt;/code&gt; should look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;flask&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Flask&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;render_template&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;url_for&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;redirect&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;google.auth&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;compute_engine&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;google.cloud&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;firestore&lt;/span&gt;

&lt;span class="n"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Flask&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;__name__&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;template_folder&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;templates&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Use the default credentials provided by the Cloud Run environment
&lt;/span&gt;&lt;span class="n"&gt;credentials&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;compute_engine&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Credentials&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="c1"&gt;# Use these credentials to authenticate with Firestore
&lt;/span&gt;&lt;span class="n"&gt;db&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;firestore&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Client&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;credentials&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;credentials&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;todos&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;collection&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;todos&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nd"&gt;@app.route&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;/&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;methods&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;GET&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;POST&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;index&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;method&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;POST&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;content&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;form&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;degree&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;form&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;degree&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;todos&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;degree&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;degree&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;redirect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;url_for&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;index&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;

    &lt;span class="n"&gt;all_todos&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;_id&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;doc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="n"&gt;doc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;to_dict&lt;/span&gt;&lt;span class="p"&gt;()}&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;doc&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;todos&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stream&lt;/span&gt;&lt;span class="p"&gt;()]&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;render_template&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;index.html&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;todos&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;all_todos&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nd"&gt;@app.route&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;/&amp;lt;id&amp;gt;/delete/&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;methods&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;POST&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;delete&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;todos&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;document&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;delete&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;redirect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;url_for&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;index&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The adjustments include using Firestore methods for data insertion (&lt;code&gt;todos.add()&lt;/code&gt;), retrieval (&lt;code&gt;todos.stream()&lt;/code&gt;), and deletion (&lt;code&gt;todos.document(id).delete()&lt;/code&gt;), along with integrating the appropriate syntax for Firestore operations.&lt;/p&gt;

&lt;h3&gt;
  
  
  Testing the new code
&lt;/h3&gt;

&lt;p&gt;To ensure the correctness of the new &lt;code&gt;app.py&lt;/code&gt;, we also have to update the testing approach. The tests verify the functionality of critical components, such as data insertion and deletion, in the context of the Firestore integration.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;unittest&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;unittest.mock&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;patch&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;MagicMock&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;app&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;app&lt;/span&gt;

&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;TestApp&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;unittest&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;TestCase&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;setUp&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;test_client&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="nd"&gt;@patch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;app.todos.add&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;test_index_post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;mock_add&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;/&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Test Todo&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;degree&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Test Degree&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;

        &lt;span class="n"&gt;mock_add&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;assert_called_once_with&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Test Todo&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;degree&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Test Degree&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;assertEqual&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;status_code&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;302&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="nd"&gt;@patch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;app.todos.document&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;test_delete&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;mock_document&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;mock_delete&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;MagicMock&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="n"&gt;mock_document&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;return_value&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;delete&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;mock_delete&lt;/span&gt;

        &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;/123/delete/&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="n"&gt;mock_delete&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;assert_called_once&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;assertEqual&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;status_code&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;302&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;__name__&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;__main__&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;unittest&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's delve into the primary components of this testing suite:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Data Insertion Test (&lt;/strong&gt;&lt;code&gt;test_index_post&lt;/code&gt;):&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;* This test simulates a POST request to the root endpoint ('/') of the application when a new todo item is added.

* The `@patch` decorator is utilized to mock the `todos.add` method, ensuring that actual Firestore interactions are bypassed during testing.

* The test asserts that the 'add' method is called with the expected data, and the response status code is as expected (302 for a successful redirect).
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Data Deletion Test (&lt;/strong&gt;&lt;code&gt;test_delete&lt;/code&gt;):&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;* This test mimics a POST request to the endpoint for deleting a specific todo item ('//delete/').

* The `@patch` decorator is applied to mock the `todos.document` method, and a MagicMock is used to mock the 'delete' method of the Firestore document.

* The test verifies that the 'delete' method is called once and asserts the response status code after the deletion operation (302 for a successful redirect).
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;These tests verify that the insertion and deletion routes drive Firestore as intended. Mocking keeps the tests isolated: each component is exercised on its own, with no live Firestore connection needed during the testing phase.&lt;/p&gt;
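As a standalone sketch of the mocking pattern these tests rely on (the names `Store`, `store.add`, and `create` below are illustrative stand-ins, not part of the app), `patch.object` swaps the real backend call for a `MagicMock` so the handler logic can be verified in isolation:

```python
import unittest
from unittest.mock import patch

class Store:
    """Stand-in for a real backend client (e.g. a Firestore collection)."""
    def add(self, item):
        raise RuntimeError("real backend should never be reached in tests")

store = Store()

def create(item):
    # Handler-like function under test: writes to the store, then "redirects".
    store.add(item)
    return 302

class TestCreate(unittest.TestCase):
    @patch.object(store, "add")
    def test_create(self, mock_add):
        status = create({"content": "Test Todo", "degree": "Test Degree"})
        # The mock records the call so we can assert on its arguments.
        mock_add.assert_called_once_with({"content": "Test Todo", "degree": "Test Degree"})
        self.assertEqual(status, 302)

# Run the test case programmatically instead of via unittest.main()
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestCreate)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Because the patched `add` raises if ever called for real, a passing run also proves the mock, not the backend, received the write.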

&lt;p&gt;Now that the tests are in place, re-run your Cloud Build pipeline to catch any remaining issues before proceeding with the deployment on Cloud Run.&lt;/p&gt;

&lt;h2&gt;
  
  
  Automating deployment (CD)
&lt;/h2&gt;

&lt;p&gt;To automate the deployment on Cloud Run after building and pushing the image, add the following step to your Cloud Build configuration (&lt;code&gt;cloudbuild.yaml&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Step 8: Deploy the image to Cloud Run&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;deploy-image'&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;gcr.io/google.com/cloudsdktool/cloud-sdk'&lt;/span&gt;
  &lt;span class="na"&gt;entrypoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;gcloud'&lt;/span&gt;
  &lt;span class="na"&gt;args&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;run'&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;deploy'&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;$_SERVICE_NAME'&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;--image'&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;$_REGION-docker.pkg.dev/$PROJECT_ID/$_REPOSITORY/$_IMAGE:$COMMIT_SHA'&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;--region'&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;$_REGION'&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;--platform'&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;managed'&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;--allow-unauthenticated'&lt;/span&gt;
  &lt;span class="na"&gt;waitFor&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;push-image'&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This segment of the Cloud Build configuration file handles the deployment of the Docker image to Google Cloud Run. Here's a breakdown of each line's purpose:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;id: 'deploy-image'&lt;/code&gt;: Provides a unique identifier for this step within the Cloud Build configuration file.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'&lt;/code&gt;: Specifies the Docker image used to run this step; in this case, the Google Cloud SDK image.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;entrypoint: 'gcloud'&lt;/code&gt;: Sets the container entrypoint to &lt;code&gt;gcloud&lt;/code&gt;, the command-line interface for Google Cloud.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;args&lt;/code&gt;: A list of arguments passed to the 'gcloud' command.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;'run' 'deploy' '$_SERVICE_NAME'&lt;/code&gt;: Deploys a new revision of the Cloud Run service identified by &lt;code&gt;$_SERVICE_NAME&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;'--image' '$_REGION-docker.pkg.dev/$PROJECT_ID/$_REPOSITORY/$_IMAGE:$COMMIT_SHA'&lt;/code&gt;: Specifies the Docker image to deploy, located in the designated Artifact Registry repository.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;'--region' '$_REGION'&lt;/code&gt;: Specifies the region where the Cloud Run service is deployed.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;'--platform' 'managed'&lt;/code&gt;: Indicates that the Cloud Run service uses the fully managed version of Cloud Run.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;'--allow-unauthenticated'&lt;/code&gt;: Permits unauthenticated requests to the Cloud Run service.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;&lt;p&gt;&lt;code&gt;waitFor: ['push-image']&lt;/code&gt;: Directs Cloud Build to wait for the completion of the 'push-image' step before initiating this step.&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Afterwards, don't forget to update the &lt;code&gt;substitutions&lt;/code&gt; section of your Cloud Build configuration to define the variables used in this new step.&lt;/p&gt;&lt;/li&gt;

&lt;/ul&gt;
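For illustration, the substitutions block mentioned above might look like the following sketch (all values are placeholders to adapt to your own project):

```yaml
substitutions:
  _SERVICE_NAME: 'my-service'   # Cloud Run service name (placeholder)
  _REGION: 'europe-west1'       # deployment region (placeholder)
  _REPOSITORY: 'my-repo'        # Artifact Registry repository (placeholder)
  _IMAGE: 'my-image'            # image name (placeholder)
```

Built-in variables such as `$PROJECT_ID` and `$COMMIT_SHA` are supplied by Cloud Build automatically and do not need to be declared here.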

&lt;p&gt;Now, you can push your code to trigger the pipeline. If the pipeline runs successfully, you will obtain the access link for your application. Navigate to the Google Cloud Console, go to Cloud Run, and click on the name of your newly deployed service to retrieve the access URL for your application.&lt;/p&gt;

&lt;h2&gt;
  
  
  Binary Authorization
&lt;/h2&gt;

&lt;p&gt;Binary Authorization is a security control applied when deploying container images on Google Cloud platforms such as Cloud Run, GKE, Anthos Service Mesh, and Anthos clusters. Its primary function is to authorize or block the deployment of images based on whether they have been attested as secure and trusted. Attestation subjects an image to processes such as testing, vulnerability scanning, or even manual signing; only once the image meets the predefined conditions is it considered validated and allowed to deploy on these platforms.&lt;/p&gt;

&lt;p&gt;Binary Authorization is responsible for defining and enforcing this policy. The checks are carried out by attestors, which can be custom-created, fixed, or generated by tools like Cloud Build (currently in preview).&lt;/p&gt;

&lt;p&gt;For this project, I explored configuring Binary Authorization with the built-by-cloud-build attestor, so that only images built by Cloud Build can be deployed. Paired with a well-crafted, robust Cloud Build configuration (incorporating tests, vulnerability analysis, and so on), this approach can save significant time compared to creating and maintaining a custom attestor. Note, however, that as of this writing, Binary Authorization with the Cloud Build attestor is in preview.&lt;/p&gt;

&lt;p&gt;The main challenge with the built-by-cloud-build attestor is that it is generated only once, during the build. This fits poorly with continuous delivery (CD), where the pipeline runs repeatedly: ideally, each new run would produce a fresh attestor and update the Binary Authorization policy accordingly. That becomes especially problematic when the policy is configured at the organization level, since it then affects all other deployments. From a personal perspective, it would help if Cloud Build generated the attestor once and reused it for subsequent pipeline executions. For now, a custom attestor provides a workaround for this limitation, but it would be simpler if Cloud Build handled the process seamlessly.&lt;/p&gt;

&lt;p&gt;Follow &lt;a href="https://cloud.google.com/binary-authorization/docs/run/overview" rel="noopener noreferrer"&gt;this link&lt;/a&gt; for the setup of Binary Authorization.&lt;/p&gt;
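Once a policy exists, enforcement can be attached to a Cloud Run service. As a command-line sketch (the service name and region below are placeholders, and the Binary Authorization API must already be enabled on the project):

```shell
# Enforce the project's default Binary Authorization policy on a service
gcloud run services update my-service \
  --region=europe-west1 \
  --binary-authorization=default
```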

&lt;p&gt;This concludes the article. Thank you for reading. You can find the configurations and code for this project in the following &lt;a href="https://github.com/davWK/legacy-to-cloud-serverless" rel="noopener noreferrer"&gt;Git repository&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>googlecloud</category>
      <category>security</category>
      <category>containers</category>
      <category>devops</category>
    </item>
    <item>
      <title>From legacy to cloud serverless - Part 3</title>
      <dc:creator>David WOGLO</dc:creator>
      <pubDate>Wed, 04 Sep 2024 11:38:11 +0000</pubDate>
      <link>https://dev.to/davwk/from-legacy-to-cloud-serverless-part-3-4abm</link>
      <guid>https://dev.to/davwk/from-legacy-to-cloud-serverless-part-3-4abm</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; This article was originally published on &lt;br&gt;
Dec 25, 2023  &lt;a href="https://davidwoglo.hashnode.dev/from-legacy-to-cloud-serverless-1-1" rel="noopener noreferrer"&gt;here&lt;/a&gt;. It has been republished here to reach a broader audience.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fatiyu6c5ow0d0suez5v4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fatiyu6c5ow0d0suez5v4.png" alt="diagram" width="453" height="206"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Welcome to the third part of this series! In this segment, we dive into testing and pipeline configuration on Google Cloud, specifically focusing on continuous integration using Cloud Build, the on-demand vulnerability scanner, and Artifact Registry. You can find the project repository &lt;a href="https://github.com/davWK/legacy-to-cloud-serverless" rel="noopener noreferrer"&gt;here&lt;/a&gt;, or, if you prefer, you can bring your own project.&lt;/p&gt;

&lt;p&gt;Let me walk you through the pipeline. Each push to the main branch triggers Cloud Build. First, it runs the unit tests. If they pass, it builds the image. After the image is built, Cloud Build invokes the image scanner to check it for vulnerabilities. If all is well, the image is pushed to Artifact Registry, ready for deployment. In this article, though, we'll focus solely on the CI part. Let's start with the tests.&lt;/p&gt;
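The pipeline just described can be sketched as a `cloudbuild.yaml` skeleton. Step ids, builder images, and the scan invocation below are illustrative assumptions, not the exact configuration from the repository:

```yaml
# Illustrative CI skeleton: test -> build -> scan -> push
steps:
- id: 'run-tests'
  name: 'python'
  entrypoint: 'python'
  args: ['-m', 'unittest', 'discover']
- id: 'build-image'
  name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', '$_REGION-docker.pkg.dev/$PROJECT_ID/$_REPOSITORY/$_IMAGE:$COMMIT_SHA', '.']
  waitFor: ['run-tests']
- id: 'scan-image'
  name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
  entrypoint: 'gcloud'
  args: ['artifacts', 'docker', 'images', 'scan', '$_REGION-docker.pkg.dev/$PROJECT_ID/$_REPOSITORY/$_IMAGE:$COMMIT_SHA']
  waitFor: ['build-image']
- id: 'push-image'
  name: 'gcr.io/cloud-builders/docker'
  args: ['push', '$_REGION-docker.pkg.dev/$PROJECT_ID/$_REPOSITORY/$_IMAGE:$COMMIT_SHA']
  waitFor: ['scan-image']
```

The `waitFor` entries make the ordering explicit, so a failing test or scan stops the image from ever reaching the registry.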

&lt;h2&gt;
  
  
  Unittest
&lt;/h2&gt;

&lt;p&gt;Here's the code we plan to test&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;flask&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Flask&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;pymongo&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;MongoClient&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;flask&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Flask&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;render_template&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;url_for&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;redirect&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;bson.objectid&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;ObjectId&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;mongomock&lt;/span&gt;



&lt;span class="n"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Flask&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;__name__&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;template_folder&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;templates&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;TESTING&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;mongomock&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;MongoClient&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="k"&gt;else&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;MongoClient&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;MONGO_URI&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;


&lt;span class="n"&gt;db&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;flask_db&lt;/span&gt;
&lt;span class="n"&gt;todos&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;todos&lt;/span&gt;


&lt;span class="nd"&gt;@app.route&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;/&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;methods&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;GET&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;POST&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;index&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;method&lt;/span&gt;&lt;span class="o"&gt;==&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;POST&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;content&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;form&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
        &lt;span class="n"&gt;degree&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;form&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;degree&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
        &lt;span class="n"&gt;todos&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;insert_one&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;degree&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;degree&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;redirect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;url_for&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;index&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;

    &lt;span class="n"&gt;all_todos&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;todos&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;find&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;render_template&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;index.html&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;todos&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;all_todos&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;


&lt;span class="nd"&gt;@app.post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;/&amp;lt;id&amp;gt;/delete/&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;delete&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;todos&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;delete_one&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;ObjectId&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="p"&gt;)})&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;redirect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;url_for&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;index&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For an explanation of the code, refer to the first article in this series.&lt;/p&gt;

&lt;p&gt;Now, let's move on to the testing phase&lt;/p&gt;

&lt;p&gt;The test is written using Python's built-in &lt;code&gt;unittest&lt;/code&gt; module, which provides a framework for writing and running tests.&lt;/p&gt;
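Before walking through the app-specific test, here is a minimal self-contained example of the framework's shape (the class and method names are arbitrary): a test case subclasses `unittest.TestCase`, and every method whose name starts with `test_` runs as a separate test.

```python
import unittest

class TestArithmetic(unittest.TestCase):
    def test_addition(self):
        # assertEqual fails the test if the two values differ
        self.assertEqual(2 + 2, 4)

    def test_membership(self):
        # assertIn fails the test if the first value is not in the second
        self.assertIn('a', 'cat')

# Load and run the tests programmatically instead of via unittest.main()
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestArithmetic)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

In a CI pipeline the same tests are usually discovered and run with `python -m unittest`, which is how the app's test file is executed.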

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Import necessary modules and create a mock MongoDB instance&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The test begins by importing the necessary modules. &lt;code&gt;unittest&lt;/code&gt; is the testing framework; &lt;code&gt;patch&lt;/code&gt; and &lt;code&gt;MagicMock&lt;/code&gt; from &lt;code&gt;unittest.mock&lt;/code&gt; replace parts of the system under test with mock objects; and &lt;code&gt;ObjectId&lt;/code&gt; from &lt;code&gt;bson.objectid&lt;/code&gt; creates unique identifiers. &lt;code&gt;app&lt;/code&gt; and &lt;code&gt;todos&lt;/code&gt; are imported from the &lt;code&gt;app.py&lt;/code&gt; file. &lt;code&gt;mongomock&lt;/code&gt; creates a mock MongoDB instance for testing, and &lt;code&gt;flask&lt;/code&gt; is used to manipulate the request context during testing.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;unittest&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;unittest.mock&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;patch&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;MagicMock&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;bson.objectid&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;ObjectId&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;app&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;todos&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;mongomock&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;flask&lt;/span&gt;

&lt;span class="n"&gt;mock_db&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;mongomock&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;MongoClient&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="n"&gt;db&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Define the test case&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A test case is defined by creating a new class that inherits from &lt;code&gt;unittest.TestCase&lt;/code&gt;. This class will contain methods that represent individual tests.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;class&lt;/span&gt;&lt;span class="err"&gt; &lt;/span&gt;&lt;span class="nc"&gt;TestApp&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;unittest&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;TestCase&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Set up the test environment&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;setUp&lt;/code&gt; method is a special method that is run before each test. Here, it's used to create a test client instance of the Flask app and enable testing mode.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;setUp&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;test_client&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;testing&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Write the test&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;test_index_post&lt;/code&gt; method is the actual test. It tests the behavior of the app when a POST request is sent to the index route (&lt;code&gt;/&lt;/code&gt;).&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;test_index_post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Mock the database operation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;patch&lt;/code&gt; function is used to replace the &lt;code&gt;insert_one&lt;/code&gt; method of &lt;code&gt;todos&lt;/code&gt; with a &lt;code&gt;MagicMock&lt;/code&gt;. This allows the test to simulate the behavior of the database operation without actually interacting with a real database.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nf"&gt;patch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;app.todos.insert_one&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;new_callable&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;MagicMock&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;mock_insert_one&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Create a test request context&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A test request context is created for the app using &lt;code&gt;app.test_request_context&lt;/code&gt;. This allows the test to simulate a request to the app.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;test_request_context&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;/&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Set the request method and form data&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The request method is set to 'POST' and the request form data is set to a dictionary with 'content' and 'degree' keys.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;flask&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;method&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;POST&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;
&lt;span class="n"&gt;flask&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;form&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Test Content&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;degree&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Test Degree&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Send a POST request to the app&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A POST request is sent to the app using &lt;code&gt;self.app.post&lt;/code&gt;. The form data is passed as the &lt;code&gt;data&lt;/code&gt; argument.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;/&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;flask&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;form&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Assert the expected results&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;assertEqual&lt;/code&gt; method is used to check that the status code of the response is 302. The &lt;code&gt;assert_called&lt;/code&gt; method is used to check that the &lt;code&gt;insert_one&lt;/code&gt; method was called.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;assertEqual&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;status_code&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;302&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;mock_insert_one&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;assert_called&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This test ensures that when a POST request is sent to the index route with the correct form data, the app responds with a 302 status code and inserts the data into the database.&lt;/p&gt;
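&lt;p&gt;The patching pattern used here can be sketched in isolation. The &lt;code&gt;Repo&lt;/code&gt; class below is a hypothetical stand-in for the &lt;code&gt;todos&lt;/code&gt; collection, not the app's real code:&lt;/p&gt;

```python
from unittest.mock import MagicMock, patch

class Repo:
    """Hypothetical stand-in for the todos collection."""
    def insert_one(self, doc):
        raise RuntimeError("would hit the real database")

repo = Repo()
# patch.object swaps insert_one for a MagicMock inside the with-block,
# so the "database write" is recorded instead of executed.
with patch.object(repo, "insert_one", new_callable=MagicMock) as mock_insert_one:
    repo.insert_one({"content": "Test Content", "degree": "Test Degree"})
    mock_insert_one.assert_called()
print(mock_insert_one.called)  # → True
```

&lt;p&gt;This is the same mechanism &lt;code&gt;patch('app.todos.insert_one', ...)&lt;/code&gt; uses in the test, just addressed by object rather than by import path.&lt;/p&gt;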

&lt;p&gt;Your test code should look something like the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;unittest&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;unittest.mock&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;patch&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;MagicMock&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;bson.objectid&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;ObjectId&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;app&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;todos&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;mongomock&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;flask&lt;/span&gt;

&lt;span class="c1"&gt;## Create a mock MongoDB instance
&lt;/span&gt;&lt;span class="n"&gt;mock_db&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;mongomock&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;MongoClient&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="n"&gt;db&lt;/span&gt;

&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;TestApp&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;unittest&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;TestCase&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;setUp&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="c1"&gt;# Create a test client instance
&lt;/span&gt;        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;test_client&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="c1"&gt;# Enable testing mode. Exceptions are propagated rather than handled by the the app's error handlers
&lt;/span&gt;        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;testing&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt; 

    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;test_index_post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="c1"&gt;# Patch the insert_one method of todos with a MagicMock
&lt;/span&gt;        &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nf"&gt;patch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;app.todos.insert_one&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;new_callable&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;MagicMock&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;mock_insert_one&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="c1"&gt;# Create a test request context for the app
&lt;/span&gt;            &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;test_request_context&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;/&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
                &lt;span class="c1"&gt;# Set the request method to 'POST'
&lt;/span&gt;                &lt;span class="n"&gt;flask&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;method&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;POST&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;
                &lt;span class="c1"&gt;# Set the request form data
&lt;/span&gt;                &lt;span class="n"&gt;flask&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;form&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Test Content&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;degree&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Test Degree&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
                &lt;span class="c1"&gt;# Send a POST request to the app
&lt;/span&gt;                &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;/&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;flask&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;form&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
                &lt;span class="c1"&gt;# Assert that the status code of the response is 302
&lt;/span&gt;                &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;assertEqual&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;status_code&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;302&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
                &lt;span class="c1"&gt;# Assert that the insert_one method was called
&lt;/span&gt;                &lt;span class="n"&gt;mock_insert_one&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;assert_called&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, to execute the test, set the environment variable &lt;code&gt;TESTING=True&lt;/code&gt;. This switches the application to use a mock MongoDB client for testing instead of the real MongoDB database.&lt;/p&gt;
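&lt;p&gt;The exact wiring depends on your &lt;code&gt;app.py&lt;/code&gt;, but the toggle can be as small as an environment check. &lt;code&gt;use_mock_db&lt;/code&gt; here is a hypothetical helper name, not part of the app shown earlier:&lt;/p&gt;

```python
import os

def use_mock_db() -> bool:
    # Hypothetical helper: TESTING=true/True/TRUE selects the mongomock client.
    return os.environ.get("TESTING", "").lower() == "true"

os.environ["TESTING"] = "True"
print(use_mock_db())  # → True
```

&lt;p&gt;In the app itself, this boolean would pick between &lt;code&gt;mongomock.MongoClient()&lt;/code&gt; and the real &lt;code&gt;pymongo&lt;/code&gt; client.&lt;/p&gt;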

&lt;p&gt;Once your test passes, let's move on to configuring Cloud Build.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cloud Build setup
&lt;/h2&gt;

&lt;p&gt;Follow the &lt;a href="https://cloud.google.com/build/docs/automate-builds#connect_to_your_repository" rel="noopener noreferrer"&gt;guide&lt;/a&gt; to connect Cloud Build to your repository and &lt;a href="https://cloud.google.com/build/docs/automate-builds#create_a_trigger" rel="noopener noreferrer"&gt;this one&lt;/a&gt; for initial configurations.&lt;/p&gt;

&lt;p&gt;Once that's done, let's move on to writing the Cloud Build configuration file, where we'll instruct it on how to execute the pipeline, the steps involved, dependencies, and so on.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cloud Build config file
&lt;/h2&gt;

&lt;p&gt;The Cloud Build Config file is written in YAML, a human-readable data serialization language.&lt;/p&gt;

&lt;p&gt;Here are the main sections of our config file:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Substitutions&lt;/strong&gt;: These are user-defined variables that can be replaced in the Cloud Build configuration file. They are defined under the &lt;code&gt;substitutions&lt;/code&gt; key. In this case, &lt;code&gt;_REGION&lt;/code&gt;, &lt;code&gt;_REPOSITORY&lt;/code&gt;, &lt;code&gt;_IMAGE&lt;/code&gt;, and &lt;code&gt;_SEVERITY&lt;/code&gt; are defined.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;substitutions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;_REGION&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;us-central1&lt;/span&gt;
  &lt;span class="na"&gt;_REPOSITORY&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;from-legacy-to-cloud&lt;/span&gt;
  &lt;span class="na"&gt;_IMAGE&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;from-legacy-to-cloud&lt;/span&gt;
  &lt;span class="na"&gt;_SEVERITY&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;"CRITICAL|HIGH"'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Steps&lt;/strong&gt;: These are the operations that Cloud Build will perform. Each step is a separate action and they are executed in the order they are defined.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;* **Step 0: Install test dependencies**: This step uses a Python 3.10 Docker image to install the test dependencies listed in `docker/requirements-test.txt`. The `entrypoint` is set to `/bin/bash`, which means that the command that follows will be executed in a bash shell. The `args` key specifies the command to be executed, which in this case is a pip install command. The `-c` flag tells bash to read commands from the following string. The `|` character allows us to write multiple commands, which will be executed in order.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    ```yaml
    - name: 'python:3.10-slim'
      entrypoint: '/bin/bash'
      args:
        - '-c'
        - |
          pip install --user -r docker/requirements-test.txt
      id: 'install-test-dependencies'
    ```
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;* **Step 1: Run unit tests**: This step also uses a Python 3.10 Docker image to run the unit tests defined in [`test.py`](http://test.py). The `export TESTING=True` command sets an environment variable `TESTING` to `True`, which can be used to change the behavior of the application during testing. The `cd docker` command changes the current directory to `docker`, where the test file is located. The `python -m unittest` [`test.py`](http://test.py) command runs the unit tests in [`test.py`](http://test.py).
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    ```yaml
    - name: 'python:3.10-slim'
      entrypoint: '/bin/bash'
      args:
        - '-c'
        - |
          export TESTING=True
          cd docker 
          python -m unittest test.py
      id: 'run-tests'
    ```
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;* **Step 2: Build the Docker image**: This step uses the `docker` Cloud Builder to build a Docker image from the Dockerfile located in the `docker/` directory. The image is tagged with the commit SHA. The `waitFor` key is used to specify that this step should wait for the `run-tests` step to complete before it starts. The `args` key specifies the command to be executed, which in this case is a docker build command. The `-t` flag is used to name and optionally tag the image in the 'name:tag' format.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    ```yaml
    - name: 'gcr.io/cloud-builders/docker'
      args: ['build', '-t', '$_REGION-docker.pkg.dev/$PROJECT_ID/$_REPOSITORY/$_IMAGE:$COMMIT_SHA', 'docker/']
      waitFor: ['run-tests']
      id: 'build-image'
    ```
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Step 3: Inspect the Docker image and write the digest to a file&lt;/strong&gt;: This step uses the &lt;code&gt;docker&lt;/code&gt; Cloud Builder to inspect the Docker image and write the image digest to a file. The image digest is a unique identifier for the image. The &lt;code&gt;docker image inspect&lt;/code&gt; command retrieves detailed information about the Docker image. The &lt;code&gt;--format&lt;/code&gt; option is used to format the output using Go templates. The &lt;code&gt;{{index .RepoTags 0}}@{{.Id}}&lt;/code&gt; template retrieves the first tag of the image and the image ID. The &lt;code&gt;&amp;gt;&lt;/code&gt; operator redirects the output to a file. The &lt;code&gt;&amp;amp;&amp;amp;&lt;/code&gt; operator is used to execute the &lt;code&gt;cat&lt;/code&gt; command only if the previous command succeeded.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;gcr.io/cloud-builders/docker'&lt;/span&gt;
  &lt;span class="na"&gt;entrypoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;/bin/bash'&lt;/span&gt;
  &lt;span class="na"&gt;args&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;-c'&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
      &lt;span class="s"&gt;docker image inspect $_REGION-docker.pkg.dev/$PROJECT_ID/$_REPOSITORY/$_IMAGE:$COMMIT_SHA --format '{{index .RepoTags 0}}@{{.Id}}' &amp;gt; /workspace/image-digest.txt &amp;amp;&amp;amp;&lt;/span&gt;
      &lt;span class="s"&gt;cat /workspace/image-digest.txt&lt;/span&gt;
  &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;inspect-image'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Step 4: Scan the Docker image for vulnerabilities&lt;/strong&gt;: This step uses the &lt;code&gt;cloud-sdk&lt;/code&gt; Cloud Builder to scan the Docker image for vulnerabilities. The scan ID is written to a file. The &lt;code&gt;gcloud artifacts docker images scan&lt;/code&gt; command scans the Docker image for vulnerabilities. The &lt;code&gt;--format='value(response.scan)'&lt;/code&gt; option is used to retrieve the scan ID from the response. The &lt;code&gt;&amp;gt;&lt;/code&gt; operator redirects the output to a file.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;scan&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gcr.io/google.com/cloudsdktool/cloud-sdk&lt;/span&gt;
  &lt;span class="na"&gt;entrypoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/bin/bash&lt;/span&gt;
  &lt;span class="na"&gt;args&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;-c&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;gcloud artifacts docker images scan $_REGION-docker.pkg.dev/$PROJECT_ID/$_REPOSITORY/$_IMAGE:$COMMIT_SHA \&lt;/span&gt;
    &lt;span class="s"&gt;--format='value(response.scan)' &amp;gt; /workspace/scan_id.txt&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Step 5: Check the severity of any vulnerabilities found&lt;/strong&gt;: This step uses the &lt;code&gt;cloud-sdk&lt;/code&gt; Cloud Builder to list the vulnerabilities found in the Docker image and check their severity. If any vulnerabilities with a severity matching &lt;code&gt;_SEVERITY&lt;/code&gt; are found, the build fails. The &lt;code&gt;gcloud artifacts docker images list-vulnerabilities&lt;/code&gt; command lists the vulnerabilities found in the Docker image. The &lt;code&gt;--format='value(vulnerability.effectiveSeverity)'&lt;/code&gt; option is used to retrieve the severity of each vulnerability. The &lt;code&gt;grep -Exq $_SEVERITY&lt;/code&gt; command checks if any of the severities match &lt;code&gt;_SEVERITY&lt;/code&gt;. The &lt;code&gt;echo&lt;/code&gt; command prints a message and the &lt;code&gt;exit 1&lt;/code&gt; command terminates the build if a match is found.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;severity check&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gcr.io/google.com/cloudsdktool/cloud-sdk&lt;/span&gt;
  &lt;span class="na"&gt;entrypoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/bin/bash&lt;/span&gt;
  &lt;span class="na"&gt;args&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;-c&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;gcloud artifacts docker images list-vulnerabilities $(cat /workspace/scan_id.txt) \&lt;/span&gt;
    &lt;span class="s"&gt;--format='value(vulnerability.effectiveSeverity)' | if grep -Exq $_SEVERITY; \&lt;/span&gt;
    &lt;span class="s"&gt;then echo 'Failed vulnerability check' &amp;amp;&amp;amp; exit 1; else exit 0; fi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Step 6: Push the Docker image to Google Cloud Artifact Registry&lt;/strong&gt;: This step uses the &lt;code&gt;docker&lt;/code&gt; Cloud Builder to push the Docker image to the Google Cloud Artifact Registry. The &lt;code&gt;waitFor&lt;/code&gt; key is used to specify that this step should wait for the &lt;code&gt;severity check&lt;/code&gt; step to complete before it starts. The &lt;code&gt;docker push&lt;/code&gt; command pushes the Docker image to a repository.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;gcr.io/cloud-builders/docker'&lt;/span&gt;
  &lt;span class="na"&gt;args&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;push'&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;$_REGION-docker.pkg.dev/$PROJECT_ID/$_REPOSITORY/$_IMAGE:$COMMIT_SHA'&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
  &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;push-image'&lt;/span&gt;
  &lt;span class="na"&gt;waitFor&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;severity&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;check'&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Images&lt;/strong&gt;: This key specifies the Docker images that Cloud Build should build and push to the Google Cloud Artifact Registry. In this case, it's the Docker image built in Step 2.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;images&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;$_REGION-docker.pkg.dev/$PROJECT_ID/$_REPOSITORY/$_IMAGE:$COMMIT_SHA'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ul&gt;
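&lt;p&gt;For reference, the string Step 3 writes to &lt;code&gt;/workspace/image-digest.txt&lt;/code&gt; has the shape &lt;code&gt;tag@imageID&lt;/code&gt;. A quick sketch of splitting it (the value below is hypothetical, not real pipeline output):&lt;/p&gt;

```python
# Hypothetical output of: docker image inspect ... --format '{{index .RepoTags 0}}@{{.Id}}'
ref = "us-central1-docker.pkg.dev/my-project/from-legacy-to-cloud/from-legacy-to-cloud:abc123@sha256:0123abcd"

tag_part, image_id = ref.split("@", 1)  # split once: the image ID itself contains ':'
print(image_id)  # → sha256:0123abcd
```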

&lt;p&gt;This &lt;code&gt;cloudbuild.yaml&lt;/code&gt; file defines a complete CI/CD pipeline for our application. It installs test dependencies, runs unit tests, builds a Docker image, inspects the image, scans the image for vulnerabilities, checks the severity of any vulnerabilities found, and pushes the image to the Google Cloud Artifact Registry. This pipeline ensures that the application is tested, secure, and ready for deployment.&lt;/p&gt;
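&lt;p&gt;As a sanity check on the severity gate: &lt;code&gt;grep -Exq&lt;/code&gt; matches each whole line against an extended regex, which Python's &lt;code&gt;re.fullmatch&lt;/code&gt; mirrors. The severities list here is a hypothetical scanner output, one entry per line:&lt;/p&gt;

```python
import re

severities = ["LOW", "MEDIUM", "HIGH"]   # hypothetical one-per-line scanner output
pattern = re.compile(r"CRITICAL|HIGH")   # what $_SEVERITY expands to, minus the quotes

# grep -x requires the whole line to match; fullmatch does the same.
failed = any(pattern.fullmatch(s) for s in severities)
print(failed)  # → True
```

&lt;p&gt;Because &lt;code&gt;HIGH&lt;/code&gt; fully matches, the check trips, exactly as the &lt;code&gt;exit 1&lt;/code&gt; branch of Step 5 intends.&lt;/p&gt;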

&lt;p&gt;The complete config file should look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;substitutions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;_REGION&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;us-central1&lt;/span&gt;
  &lt;span class="na"&gt;_REPOSITORY&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;from-legacy-to-cloud&lt;/span&gt;
  &lt;span class="na"&gt;_IMAGE&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;from-legacy-to-cloud&lt;/span&gt;
  &lt;span class="na"&gt;_SEVERITY&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;"CRITICAL|HIGH"'&lt;/span&gt;

&lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="c1"&gt;# Step 0: Install test dependencies&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;install-test-dependencies'&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;python:3.10-slim'&lt;/span&gt;
  &lt;span class="na"&gt;entrypoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;/bin/bash'&lt;/span&gt;
  &lt;span class="na"&gt;args&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;-c'&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
      &lt;span class="s"&gt;pip install --user -r docker/requirements-test.txt&lt;/span&gt;

&lt;span class="c1"&gt;# Step 1: Run unit tests&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;run-tests'&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;python:3.10-slim'&lt;/span&gt;
  &lt;span class="na"&gt;entrypoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;/bin/bash'&lt;/span&gt;
  &lt;span class="na"&gt;args&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;-c'&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
      &lt;span class="s"&gt;export TESTING=True&lt;/span&gt;
      &lt;span class="s"&gt;cd docker &lt;/span&gt;
      &lt;span class="s"&gt;python -m unittest test.py&lt;/span&gt;

&lt;span class="c1"&gt;# Step 2: Build the Docker image&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;build-image'&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;gcr.io/cloud-builders/docker'&lt;/span&gt;
  &lt;span class="na"&gt;args&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;build'&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;-t'&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;$_REGION-docker.pkg.dev/$PROJECT_ID/$_REPOSITORY/$_IMAGE:$COMMIT_SHA'&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;docker/'&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
  &lt;span class="na"&gt;waitFor&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;run-tests'&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;

&lt;span class="c1"&gt;# Step 3: Inspect the Docker image and write the digest to a file.&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;inspect-image'&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;gcr.io/cloud-builders/docker'&lt;/span&gt;
  &lt;span class="na"&gt;entrypoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;/bin/bash'&lt;/span&gt;
  &lt;span class="na"&gt;args&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;-c'&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
      &lt;span class="s"&gt;docker image inspect $_REGION-docker.pkg.dev/$PROJECT_ID/$_REPOSITORY/$_IMAGE:$COMMIT_SHA --format '{{index .RepoTags 0}}@{{.Id}}' &amp;gt; /workspace/image-digest.txt &amp;amp;&amp;amp;&lt;/span&gt;
      &lt;span class="s"&gt;cat /workspace/image-digest.txt&lt;/span&gt;

&lt;span class="c1"&gt;# Step 4: Scan the Docker image for vulnerabilities&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;scan&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gcr.io/google.com/cloudsdktool/cloud-sdk&lt;/span&gt;
  &lt;span class="na"&gt;entrypoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/bin/bash&lt;/span&gt;
  &lt;span class="na"&gt;args&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;-c&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;gcloud artifacts docker images scan $_REGION-docker.pkg.dev/$PROJECT_ID/$_REPOSITORY/$_IMAGE:$COMMIT_SHA \&lt;/span&gt;
    &lt;span class="s"&gt;--format='value(response.scan)' &amp;gt; /workspace/scan_id.txt&lt;/span&gt;

&lt;span class="c1"&gt;# Step 5: Check the severity of any vulnerabilities found&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;severity check&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gcr.io/google.com/cloudsdktool/cloud-sdk&lt;/span&gt;
  &lt;span class="na"&gt;entrypoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/bin/bash&lt;/span&gt;
  &lt;span class="na"&gt;args&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;-c&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
    &lt;span class="s"&gt;gcloud artifacts docker images list-vulnerabilities $(cat /workspace/scan_id.txt) \&lt;/span&gt;
    &lt;span class="s"&gt;--format='value(vulnerability.effectiveSeverity)' | if grep -Exq $_SEVERITY; \&lt;/span&gt;
    &lt;span class="s"&gt;then echo 'Failed vulnerability check' &amp;amp;&amp;amp; exit 1; else exit 0; fi&lt;/span&gt;

&lt;span class="c1"&gt;# Step 6: Push the Docker image to Google Cloud Artifact Registry&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;push-image'&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;gcr.io/cloud-builders/docker'&lt;/span&gt;
  &lt;span class="na"&gt;args&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;push'&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;$_REGION-docker.pkg.dev/$PROJECT_ID/$_REPOSITORY/$_IMAGE:$COMMIT_SHA'&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
  &lt;span class="na"&gt;waitFor&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;severity&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;check'&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;

&lt;span class="na"&gt;images&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;$_REGION-docker.pkg.dev/$PROJECT_ID/$_REPOSITORY/$_IMAGE:$COMMIT_SHA'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  View build results
&lt;/h2&gt;

&lt;p&gt;Now, commit and push your changes. If the Cloud Build triggers are configured correctly, the build should be triggered. Connect to the Google Cloud Console, go to Cloud Build &amp;gt; History to view your builds.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F83qyrfx5veukvw7txr3k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F83qyrfx5veukvw7txr3k.png" alt="build result" width="800" height="482"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If a build fails, click it to view the error messages and troubleshoot the issues. Once the build succeeds, open Artifact Registry to see the stored image, ready for use.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3lieu8f0ho1nsvd8z576.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3lieu8f0ho1nsvd8z576.png" alt="build result" width="800" height="435"&gt;&lt;/a&gt;&lt;/p&gt;
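If you prefer the terminal, the same checks can be done with the gcloud CLI. A quick sketch; the uppercase values are placeholders for your own build ID, region, project, repository, and image names:

```shell
# List the five most recent builds with their status
gcloud builds list --limit=5

# Stream the logs of a specific build to troubleshoot a failure
gcloud builds log BUILD_ID

# List the images stored in the Artifact Registry repository
gcloud artifacts docker images list \
    REGION-docker.pkg.dev/PROJECT_ID/REPOSITORY/IMAGE
```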

&lt;h2&gt;
  
  
  What next?
&lt;/h2&gt;

&lt;p&gt;Well, that wraps up this article. In the next one, we'll delve into automating deployments, the CD part. We'll put security policies in place through Binary Authorization so that only approved, trusted container images that pass vulnerability scanning can be deployed to Cloud Run. Before that, though, we'll migrate our Mongo database to Google Firestore; then we'll deploy our app on Cloud Run and connect it to Firestore to make it fully operational.&lt;/p&gt;

&lt;p&gt;See you in the next article. Until then, I'm available on social media (I'm more active on LinkedIn) for any information or additional suggestions. Thanks for reading!&lt;/p&gt;

</description>
      <category>googlecloud</category>
      <category>github</category>
      <category>cicd</category>
      <category>container</category>
    </item>
    <item>
      <title>From legacy to cloud serverless - Part 2</title>
      <dc:creator>David WOGLO</dc:creator>
      <pubDate>Wed, 04 Sep 2024 11:26:13 +0000</pubDate>
      <link>https://dev.to/davwk/from-legacy-to-cloud-serverless-part-2-5bbn</link>
      <guid>https://dev.to/davwk/from-legacy-to-cloud-serverless-part-2-5bbn</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; This article was originally published on Nov 12, 2023  &lt;a href="https://davidwoglo.hashnode.dev/from-legacy-to-cloud-serverless-1" rel="noopener noreferrer"&gt;here&lt;/a&gt;. It has been republished here to reach a broader audience.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Hey, how's it been since the last article? If you haven't had a chance to check out the previous installment in the series, I invite you to discover it &lt;a href="https://davidwoglo.hashnode.dev/from-legacy-to-cloud-serverless" rel="noopener noreferrer"&gt;here&lt;/a&gt;. Perhaps you've already tackled something similar to what was described in the previous article, and this one seems to be a good resource to continue your project. Welcome aboard!&lt;/p&gt;

&lt;p&gt;In this article, we'll be transforming Docker Compose services into Kubernetes objects and deploying them in a Kubernetes environment.&lt;/p&gt;

&lt;p&gt;To follow along, you'll need some knowledge of Kubernetes and a Kubernetes environment ready to use, whether from completing the lab described in &lt;a href="https://davidwoglo.hashnode.dev/from-legacy-to-cloud-serverless" rel="noopener noreferrer"&gt;the previous article&lt;/a&gt; or from something similar you've done on your own. As of now, I'm using DigitalOcean's Kubernetes offering. I mention 'as of now' because if you've been here from the beginning, you're probably aware that our project's ultimate goal isn't just deploying on K8s; it's a journey of migrating a traditional app to a serverless cloud setup. The next step in this series will involve migrating to Google Cloud. Oh, did I forget to mention? I'm all about Google Cloud (I recently even snagged my Professional Cloud Architect certification). So, expect Google Cloud to pop up regularly in my discussions, and the rest of this series will be purely GCP-focused.&lt;/p&gt;

&lt;p&gt;Enough chatter, let's dive into the real stuff!&lt;/p&gt;

&lt;h2&gt;
  
  
  Build the application image and push it to the Docker registry
&lt;/h2&gt;

&lt;p&gt;If you haven't done so already, I invite you to clone our project's repo &lt;a href="https://github.com/davWK/legacy-to-cloud-serverless.git" rel="noopener noreferrer"&gt;here&lt;/a&gt;. Navigate to the &lt;code&gt;docker&lt;/code&gt; folder, where all the Docker-related elements of the project are stored. Explore the content a bit, and once you're ready, come back, and let's continue. If you don't have a Docker Hub account yet, I recommend creating one.&lt;/p&gt;

&lt;p&gt;Now, in your terminal, log in with &lt;code&gt;docker login&lt;/code&gt; using your Docker Hub account information. After that, build the image, tagging it with your username and the image name.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker build &lt;span class="nt"&gt;-t&lt;/span&gt; &amp;lt;username&amp;gt;/&amp;lt;image-name&amp;gt; &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, push the image to Docker Hub.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker push &amp;lt;username&amp;gt;/&amp;lt;image-name&amp;gt; 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Export MongoDB data
&lt;/h2&gt;

&lt;p&gt;As part of our migration process, it's crucial to ensure we retain our data. To achieve this, let's export the data stored in the MongoDB container that we'll later use when deploying MongoDB on Kubernetes.&lt;/p&gt;

&lt;p&gt;Export the existing MongoDB database from the Docker Compose setup:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Access the MongoDB database container shell.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; &amp;lt;mongo_db_service&amp;gt; bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Export all data from the MongoDB database into the &lt;code&gt;/dump&lt;/code&gt; directory.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;mongodump &lt;span class="nt"&gt;--out&lt;/span&gt; /dump
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Exit the MongoDB database container shell.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;exit&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Copy the 'dump' folder from the MongoDB container to a specified destination.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker &lt;span class="nb"&gt;cp&lt;/span&gt; &amp;lt;mongo_db_service&amp;gt;:/dump &amp;lt;destination&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Install MongoDB on Kubernetes
&lt;/h2&gt;

&lt;p&gt;Now, while connected to the Kubernetes cluster, let's install MongoDB using Helm:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm &lt;span class="nb"&gt;install &lt;/span&gt;mongo-helm oci://registry-1.docker.io/bitnamicharts/mongodb &lt;span class="nt"&gt;--set&lt;/span&gt; auth.rootUser&lt;span class="o"&gt;=&lt;/span&gt;root,auth.rootPassword&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"defineYourRootPassword"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command leverages Helm, a Kubernetes package manager, to install MongoDB from a chart hosted on Docker's registry.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The part &lt;code&gt;--set auth.rootUser=root,auth.rootPassword="defineYourRootPassword"&lt;/code&gt; specifies the username and password for the MongoDB root user.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Make sure to save the output of this command; we'll be using it to construct the database connection URI.&lt;/p&gt;
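If you lose that output, the generated root password can usually be recovered from the Kubernetes secret the chart creates. A sketch, assuming the Bitnami chart's default naming for a release called mongo-helm (verify the secret name with kubectl get secrets if yours differs):

```shell
# The Bitnami chart stores the root password in a secret named
# after the release; decode the base64 value to get the password
kubectl get secret --namespace default mongo-helm-mongodb \
    -o jsonpath='{.data.mongodb-root-password}' | base64 -d
```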

&lt;p&gt;Verify that everything is installed correctly with the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get pods
kubectl get services
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Restore data
&lt;/h2&gt;

&lt;p&gt;It's time to restore the database:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Open a shell in the MongoDB pod and authenticate as the root user.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; &lt;span class="nt"&gt;--namespace&lt;/span&gt; default &amp;lt;mongodb_pod&amp;gt; &lt;span class="nt"&gt;--&lt;/span&gt; /bin/bash
mongosh
use admin
db.auth&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'root'&lt;/span&gt;, &lt;span class="s1"&gt;'password'&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create a non-root MongoDB user:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;db.createUser&lt;span class="o"&gt;({&lt;/span&gt;
  user: &lt;span class="s1"&gt;'username'&lt;/span&gt;,
  &lt;span class="nb"&gt;pwd&lt;/span&gt;: &lt;span class="s1"&gt;'password'&lt;/span&gt;,
  roles: &lt;span class="o"&gt;[&lt;/span&gt;
    &lt;span class="o"&gt;{&lt;/span&gt; role: &lt;span class="s1"&gt;'readWriteAnyDatabase'&lt;/span&gt;, db: &lt;span class="s1"&gt;'admin'&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt;,
    &lt;span class="o"&gt;{&lt;/span&gt; role: &lt;span class="s1"&gt;'dbAdminAnyDatabase'&lt;/span&gt;, db: &lt;span class="s1"&gt;'admin'&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt;,
    &lt;span class="o"&gt;{&lt;/span&gt; role: &lt;span class="s1"&gt;'clusterAdmin'&lt;/span&gt;, db: &lt;span class="s1"&gt;'admin'&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt;
  &lt;span class="o"&gt;]&lt;/span&gt;
&lt;span class="o"&gt;})&lt;/span&gt;
&lt;span class="nb"&gt;exit&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="3"&gt;
&lt;li&gt;Restore the Docker Compose database dump to the new MongoDB pod:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Copy the database dump folder previously copied into the MongoDB pod:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nb"&gt;cp&lt;/span&gt; &amp;lt;mongodb_dump_location_filename&amp;gt; &amp;lt;mongodb_pod&amp;gt;:/tmp/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Navigate to the MongoDB pod shell:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; &lt;span class="nt"&gt;--namespace&lt;/span&gt; default mongodb_pod &lt;span class="nt"&gt;--&lt;/span&gt; /bin/bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Change the directory to the dump directory and list all MongoDB folders to verify the contents:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; /tmp/dump
&lt;span class="nb"&gt;ls&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Restore the app database:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;mongorestore &lt;span class="nt"&gt;--uri&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"mongodb://username:password@localhost:27017/?authSource=admin"&lt;/span&gt; app_db &lt;span class="nt"&gt;-d&lt;/span&gt; app_db 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here's how the connection URI is formed (the restore command above runs inside the pod, which is why &lt;code&gt;localhost&lt;/code&gt; works there; your app will use the in-cluster service address instead):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;mongodb://&lt;/code&gt;: This is the prefix to identify that we're connecting to a MongoDB instance.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;&amp;lt;username&amp;gt;:&amp;lt;password&amp;gt;@&lt;/code&gt;: This part specifies the username and password to connect to the MongoDB instance. You would replace &lt;code&gt;&amp;lt;username&amp;gt;&lt;/code&gt; and &lt;code&gt;&amp;lt;password&amp;gt;&lt;/code&gt; with the actual username and password. In your case, the username is the one created earlier.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;mongo-helm-mongodb.default.svc.cluster.local:27017&lt;/code&gt;: This is the host and port where the MongoDB server is running. &lt;code&gt;mongo-helm-mongodb.default.svc.cluster.local&lt;/code&gt; is the DNS name for the MongoDB service in your Kubernetes cluster, and &lt;code&gt;27017&lt;/code&gt; is the default port for MongoDB.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Exit the MongoDB pod shell:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;exit&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The new MongoDB is now ready for use.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploy and connect the application to the database
&lt;/h2&gt;

&lt;p&gt;First, let's create the Kubernetes secret that will contain the connection string for the database. We're using the secret object because our connection string contains sensitive information. Kubernetes provides the secret object precisely for scenarios like this. If it were just configuration information or environment variables, a ConfigMap object would be more suitable.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Secret&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app-secret&lt;/span&gt;
&lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Opaque&lt;/span&gt;
&lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;mongo-uri&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;base64-encoded-mongo-uri&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create a YAML file and paste this content into it. Name the file as you see fit. Note that the &lt;code&gt;mongo-uri&lt;/code&gt; field under &lt;code&gt;data&lt;/code&gt; must contain the base64-encoded MongoDB URI; replace the placeholder with the actual base64-encoded connection string.&lt;br&gt;
&lt;/p&gt;
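To produce the encoded value, you can assemble the URI and pipe it through base64. A sketch with example credentials; substitute the user created earlier, your actual password, and your database name:

```shell
# Example values only; the host is the in-cluster DNS name of the
# MongoDB service installed by Helm
MONGO_URI='mongodb://username:password@mongo-helm-mongodb.default.svc.cluster.local:27017/app_db?authSource=admin'

# printf (rather than echo) avoids encoding a trailing newline;
# -w 0 (GNU base64) keeps the output on a single line
printf '%s' "$MONGO_URI" | base64 -w 0
```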

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;legacy-to-cloud-deployment&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;legacy-to-cloud&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;legacy-to-cloud&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;legacy-to-cloud&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;legacy-to-cloud&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker_username/image_name:tag&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5000&lt;/span&gt;
        &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;MONGO_URI&lt;/span&gt;
          &lt;span class="na"&gt;valueFrom&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;secretKeyRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mongodb-uri-secret&lt;/span&gt;
              &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mongodb-uri&lt;/span&gt;

&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;legacy-to-cloud-service&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;LoadBalancer&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;legacy-to-cloud&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
    &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
    &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5000&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create a second YAML file for your application's manifest and paste this content into it. The &lt;code&gt;env&lt;/code&gt; section of the container in the Deployment references the MongoDB URI from the secret we created earlier. Ensure that the secret name and key match the values used in the secret manifest. Also, ensure that the selector in the Service matches the one in the Deployment. This is crucial for linking the pods to the service.&lt;/p&gt;

&lt;p&gt;If everything looks good, let's proceed with deploying our application. You can use the following command to validate the syntax of your YAML file and perform a dry run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; filename.yaml &lt;span class="nt"&gt;--dry-run&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;client &lt;span class="nt"&gt;--validate&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command checks the syntax of your YAML file and prints out the resources that would be created or modified without actually applying the changes. If there are any syntax errors, this command will highlight them.&lt;/p&gt;

&lt;p&gt;If everything is okay, create the resources with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; filename1.yaml &lt;span class="nt"&gt;-f&lt;/span&gt; filename2.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace &lt;code&gt;filename1.yaml&lt;/code&gt; and &lt;code&gt;filename2.yaml&lt;/code&gt; with the actual names of your YAML files.&lt;/p&gt;

&lt;p&gt;Get the access IP address with the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get svc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Identify the service for your application and copy its external IP. Paste it into your browser to access the application.&lt;/p&gt;
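You can also grab the external IP directly from the CLI and smoke-test the app. A sketch using the service name from the manifest above (the IP may take a few minutes to be assigned):

```shell
# Print only the LoadBalancer's external IP
kubectl get svc legacy-to-cloud-service \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}'

# Replace EXTERNAL_IP with the printed value to check the app responds
curl -I http://EXTERNAL_IP/
```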

&lt;p&gt;Well, that wraps up this section on the migration to Kubernetes.&lt;/p&gt;

&lt;h2&gt;
  
  
  A little gift for the road?
&lt;/h2&gt;

&lt;p&gt;Haha, did you know there's a tool to speed things up? In this article, we hand-wrote YAML manifests to deploy our K8s resources. This deployment is a simple one, but imagine a massive one with hundreds of Docker Compose services and far more complexity. Would we sit down and manually write manifests for all of that? Of course not :) Enter Kompose. Kompose is a conversion tool from Docker Compose to container orchestrators like Kubernetes: it takes a Docker Compose file and translates it into Kubernetes resources.&lt;/p&gt;

&lt;p&gt;Kompose is a handy tool for those familiar with Docker Compose but aiming to deploy their application on Kubernetes. It automates the creation of Kubernetes deployments, services, and other resources based on the services defined in the Docker Compose file.&lt;/p&gt;

&lt;p&gt;However, it's worth noting that not all Docker Compose features and options are supported by Kompose, so some manual tweaking of the generated Kubernetes resources might be necessary. Here's &lt;a href="https://www.digitalocean.com/community/tutorials/how-to-migrate-a-docker-compose-workflow-to-kubernetes#step-3-translating-compose-services-to-kubernetes-objects-with-kompose" rel="noopener noreferrer"&gt;an excellent guide&lt;/a&gt; that addresses our use case well.&lt;/p&gt;
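For our project, the conversion would look roughly like this. A sketch, assuming kompose is installed and the Compose file is named docker-compose.yaml; the generated manifests usually need review before applying:

```shell
# Translate the Compose file into Kubernetes manifests under ./k8s
kompose convert -f docker-compose.yaml -o k8s/

# Review the generated files, then create the resources
kubectl apply -f k8s/
```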

&lt;h2&gt;
  
  
  What next?
&lt;/h2&gt;

&lt;p&gt;And that's a wrap for this article! In the next one, we're heading to the GOOGLE CLOUUUUUUUD :) and beginning to introduce DevOps tools and practices to automate and speed up our work. We're talking about stepping up the game. We'll be using Google Cloud DevOps tools—Cloud Build for CI/CD, Artifact Registry for container images, GKE for deployments. Plus, we'll dive into DevSecOps tools and practices, leveraging the security available within the Google Cloud ecosystem.&lt;/p&gt;

&lt;p&gt;Thanks for reading, and see you soon in the next article in the series!&lt;/p&gt;

</description>
      <category>docker</category>
      <category>kubernetes</category>
      <category>mongodb</category>
      <category>microservices</category>
    </item>
    <item>
      <title>From legacy to cloud serverless - Part 1</title>
      <dc:creator>David WOGLO</dc:creator>
      <pubDate>Wed, 04 Sep 2024 11:17:51 +0000</pubDate>
      <link>https://dev.to/davwk/from-legacy-to-cloud-serverless-part-1-1e4</link>
      <guid>https://dev.to/davwk/from-legacy-to-cloud-serverless-part-1-1e4</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; This article was originally published on Nov 4, 2023  &lt;a href="https://davidwoglo.hashnode.dev/from-legacy-to-cloud-serverless" rel="noopener noreferrer"&gt;here&lt;/a&gt;. It has been republished here to reach a broader audience.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Welcome to the first article in a series that will walk you through the process of migrating a legacy app from on-premises to the cloud, with a focus on modernization, serverless platforms, and integrated DevOps practices.&lt;/p&gt;

&lt;p&gt;In this article, we will focus on containerizing your app. However, if you're building an app from scratch, that's perfectly fine (in fact, it's even better). For this example, I'm using &lt;a href="https://www.digitalocean.com/community/tutorials/how-to-use-mongodb-in-a-flask-application" rel="noopener noreferrer"&gt;this DigitalOcean guide&lt;/a&gt; to build a simple TODO app using Python (Flask) and MongoDB as the database. I've made some customizations to make it look better, but the main point is to build something that uses a NoSQL document-based database, as this will be required for the upcoming work.&lt;/p&gt;

&lt;p&gt;You can clone the repository of the app &lt;a href="https://github.com/davWK/legacy-to-cloud-serverless" rel="noopener noreferrer"&gt;here&lt;/a&gt; on GitHub if you haven't built your own.&lt;/p&gt;

&lt;p&gt;Once you have your app built, let's get started!&lt;/p&gt;

&lt;h2&gt;
  
  
  Dockerfile
&lt;/h2&gt;

&lt;p&gt;Here is the structure of the application directory that we will containerize, followed by the Dockerfile.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;.&lt;/span&gt;
├── app.py
├── LICENSE
├── README.md
├── requirements.txt
├── static
│   └── style.css
└── templates
    └── index.html
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;app.py&lt;/code&gt; file is the main application file that contains the Flask app code. The &lt;code&gt;requirements.txt&lt;/code&gt; file contains the list of Python dependencies required by the application. The &lt;code&gt;static/&lt;/code&gt; directory contains static files such as CSS, JavaScript, and images. The &lt;code&gt;templates/&lt;/code&gt; directory contains the HTML templates used by the Flask app.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Use a minimal base image&lt;/span&gt;
&lt;span class="s"&gt;FROM python:3.9.7-slim-buster AS base&lt;/span&gt;

&lt;span class="c1"&gt;# Create a non-root user&lt;/span&gt;
&lt;span class="s"&gt;RUN useradd -m -s /bin/bash flaskuser&lt;/span&gt;
&lt;span class="s"&gt;USER flaskuser&lt;/span&gt;

&lt;span class="c1"&gt;# Set the working directory&lt;/span&gt;
&lt;span class="s"&gt;WORKDIR /app&lt;/span&gt;

&lt;span class="c1"&gt;# Copy the requirements file and install dependencies&lt;/span&gt;
&lt;span class="s"&gt;COPY requirements.txt .&lt;/span&gt;
&lt;span class="s"&gt;RUN pip install --no-cache-dir -r requirements.txt&lt;/span&gt;

&lt;span class="c1"&gt;# Add the directory containing the flask command to the PATH&lt;/span&gt;
&lt;span class="s"&gt;ENV PATH="/home/flaskuser/.local/bin:${PATH}"&lt;/span&gt;

&lt;span class="c1"&gt;# Use a multi-stage build to minimize the size of the image&lt;/span&gt;
&lt;span class="s"&gt;FROM base AS final&lt;/span&gt;

&lt;span class="c1"&gt;# Copy the app code&lt;/span&gt;
&lt;span class="s"&gt;COPY app.py .&lt;/span&gt;
&lt;span class="s"&gt;COPY templates templates/&lt;/span&gt;
&lt;span class="s"&gt;COPY static static/&lt;/span&gt;

&lt;span class="c1"&gt;# Set environment variables&lt;/span&gt;
&lt;span class="s"&gt;ENV FLASK_APP=app.py&lt;/span&gt;
&lt;span class="s"&gt;ENV FLASK_ENV=production&lt;/span&gt;

&lt;span class="c1"&gt;# Expose the port&lt;/span&gt;
&lt;span class="s"&gt;EXPOSE &lt;/span&gt;&lt;span class="m"&gt;5000&lt;/span&gt;

&lt;span class="c1"&gt;# Run the app&lt;/span&gt;
&lt;span class="s"&gt;CMD ["flask", "run", "--host=0.0.0.0"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here's a walkthrough and breakdown of the Dockerfile:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;The Dockerfile starts with a &lt;code&gt;FROM&lt;/code&gt; instruction that specifies the base image to use. In this case, it's &lt;code&gt;python:3.9.7-slim-buster&lt;/code&gt;, which is a minimal base image that includes Python 3.9.7 and some essential libraries.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The next instruction creates a non-root user named &lt;code&gt;flaskuser&lt;/code&gt; using the &lt;code&gt;RUN&lt;/code&gt; and &lt;code&gt;useradd&lt;/code&gt; commands. This is a security best practice to avoid running the container as the root user.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The &lt;code&gt;WORKDIR&lt;/code&gt; instruction sets the working directory to &lt;code&gt;/app&lt;/code&gt;, which is where the application code will be copied.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The &lt;code&gt;COPY&lt;/code&gt; instruction copies the &lt;code&gt;requirements.txt&lt;/code&gt; file to the container's &lt;code&gt;/app&lt;/code&gt; directory.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The &lt;code&gt;RUN&lt;/code&gt; instruction installs the dependencies listed in &lt;code&gt;requirements.txt&lt;/code&gt; using &lt;code&gt;pip&lt;/code&gt;. The &lt;code&gt;--no-cache-dir&lt;/code&gt; option is used to avoid caching the downloaded packages, which helps to keep the image size small.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The &lt;code&gt;ENV&lt;/code&gt; instruction adds the directory containing the &lt;code&gt;flask&lt;/code&gt; command to the &lt;code&gt;PATH&lt;/code&gt; environment variable. This is necessary to run the &lt;code&gt;flask&lt;/code&gt; command later.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The &lt;code&gt;FROM&lt;/code&gt; instruction starts a new build stage using the &lt;code&gt;base&lt;/code&gt; image defined earlier. This is a multi-stage build that helps to minimize the size of the final image.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The &lt;code&gt;COPY&lt;/code&gt; instruction copies the application code (&lt;code&gt;app.py&lt;/code&gt;), templates (&lt;code&gt;templates/&lt;/code&gt;), and static files (&lt;code&gt;static/&lt;/code&gt;) to the container's &lt;code&gt;/app&lt;/code&gt; directory.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The &lt;code&gt;ENV&lt;/code&gt; instruction sets the &lt;code&gt;FLASK_APP&lt;/code&gt; and &lt;code&gt;FLASK_ENV&lt;/code&gt; environment variables. &lt;code&gt;FLASK_APP&lt;/code&gt; specifies the name of the main application file, and &lt;code&gt;FLASK_ENV&lt;/code&gt; sets the environment to &lt;code&gt;production&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The &lt;code&gt;EXPOSE&lt;/code&gt; instruction exposes port &lt;code&gt;5000&lt;/code&gt;, which is the default port used by Flask.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The &lt;code&gt;CMD&lt;/code&gt; instruction specifies the command to run when the container starts. In this case, it runs the &lt;code&gt;flask run&lt;/code&gt; command with the &lt;code&gt;--host=0.0.0.0&lt;/code&gt; option to bind to all network interfaces.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;With this Dockerfile, the application can be containerized and executed. However, it's important to note that our app requires a database to store the data created or generated while it's running. Of course, you could separately pull a MongoDB database image and run it independently. Then, make adjustments on both sides to establish communication between the two containers so that the app can successfully store data in the database. While this approach works, it may consume time and be a bit tedious. To streamline the process, we will instead move forward with Docker Compose. In Docker Compose, everything is declared in a YAML file, and by using the &lt;code&gt;docker-compose up&lt;/code&gt; command, we can start and operate the different services seamlessly, saving time and effort.&lt;/p&gt;

&lt;h2&gt;
  
  
  Streamlining Database Integration with Docker Compose
&lt;/h2&gt;

&lt;p&gt;Here is the basic Docker Compose YAML file that we will use to streamline the process.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;3.9'&lt;/span&gt;

&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;db&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mongo:4.4.14&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;27017:27017"&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;mongo-data:/data/db&lt;/span&gt;

  &lt;span class="na"&gt;web&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;.&lt;/span&gt;
    &lt;span class="na"&gt;container_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;myflaskapp"&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;5000:5000"&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;MONGO_URI=mongodb://db:27017&lt;/span&gt;
    &lt;span class="na"&gt;depends_on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;db&lt;/span&gt;

&lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;mongo-data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This Docker Compose YAML file is configured to set up two services: a MongoDB database (&lt;code&gt;db&lt;/code&gt;) and a web application (&lt;code&gt;web&lt;/code&gt;). Here's a breakdown:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Version:&lt;/strong&gt; Specifies the version of the Docker Compose file format being used (&lt;code&gt;3.9&lt;/code&gt; in this case).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Services:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Database (&lt;/strong&gt;&lt;code&gt;db&lt;/code&gt;):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Uses the MongoDB version &lt;code&gt;4.4.14&lt;/code&gt; image.&lt;/li&gt;
&lt;li&gt;Maps the host port &lt;code&gt;27017&lt;/code&gt; to the container port &lt;code&gt;27017&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Utilizes a volume named &lt;code&gt;mongo-data&lt;/code&gt; to persistently store MongoDB data.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Web Application (&lt;/strong&gt;&lt;code&gt;web&lt;/code&gt;):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Builds the Docker image from the current directory (&lt;code&gt;.&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;Sets the container name as "myflaskapp."&lt;/li&gt;
&lt;li&gt;Maps the host port &lt;code&gt;5000&lt;/code&gt; to the container port &lt;code&gt;5000&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Defines an environment variable &lt;code&gt;MONGO_URI&lt;/code&gt; with the value &lt;code&gt;mongodb://db:27017&lt;/code&gt;, establishing a connection to the MongoDB service.&lt;/li&gt;
&lt;li&gt;Specifies a dependency on the &lt;code&gt;db&lt;/code&gt; service, so Compose starts the database container before the web service (note that &lt;code&gt;depends_on&lt;/code&gt; controls start order only, not database readiness).&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Volumes:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Defines a volume named &lt;code&gt;mongo-data&lt;/code&gt; for persisting MongoDB data.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;In summary, this Docker Compose file orchestrates the deployment of a MongoDB database and a Flask web application, ensuring they can communicate and function together seamlessly.&lt;/p&gt;
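&lt;p&gt;A detail worth highlighting: inside the Compose network, the service name &lt;code&gt;db&lt;/code&gt; doubles as the database hostname, which is why &lt;code&gt;MONGO_URI=mongodb://db:27017&lt;/code&gt; works without any IP configuration. As a minimal sketch of how the app side could consume this (the localhost fallback is an assumption for running outside Compose, and the driver choice is left as a comment since it depends on your app):&lt;/p&gt;

```python
import os
from urllib.parse import urlparse

# Read the connection string injected by Docker Compose; the fallback
# to localhost is an assumption for running the app outside Compose.
mongo_uri = os.environ.get("MONGO_URI", "mongodb://localhost:27017")

# Inside the Compose network this parses to hostname "db", port 27017;
# Compose's embedded DNS resolves "db" to the MongoDB container.
parsed = urlparse(mongo_uri)
print(parsed.hostname, parsed.port)

# Client creation depends on your driver; with pymongo it would be:
#   client = pymongo.MongoClient(mongo_uri)
```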

&lt;p&gt;Now navigate to the directory containing the Docker Compose file and run &lt;code&gt;docker-compose up&lt;/code&gt; to start the MongoDB and Flask services. Access the app at &lt;code&gt;http://localhost:5000&lt;/code&gt; to confirm everything works as expected.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5huyzojm8njr5xhxv8d8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5huyzojm8njr5xhxv8d8.png" alt="Outcome" width="800" height="369"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To stop, use &lt;code&gt;docker-compose down&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;All good? Next up: migrating the workflow to Kubernetes in the next article.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnuwmjxxp2i4lpl5eucaa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnuwmjxxp2i4lpl5eucaa.png" alt="Next step" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>python</category>
      <category>mongodb</category>
      <category>docker</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Continuous Deployment to Kubernetes with ArgoCD</title>
      <dc:creator>David WOGLO</dc:creator>
      <pubDate>Fri, 07 Jun 2024 17:43:01 +0000</pubDate>
      <link>https://dev.to/davwk/continuous-deployment-to-kubernetes-with-argocd-4mi9</link>
      <guid>https://dev.to/davwk/continuous-deployment-to-kubernetes-with-argocd-4mi9</guid>
      <description>&lt;p&gt;Continuous deployment (CD) is the process of automatically deploying changes to production. It is a key part of the DevOps toolchain, and it can help organizations to improve their software delivery speed, reliability, and security.&lt;/p&gt;

&lt;p&gt;ArgoCD is a Kubernetes-native CD tool that can help you to automate the deployment of your applications to Kubernetes. It is a declarative tool, which means that you can define the desired state of your applications in a Git repository. ArgoCD will then automatically synchronize the actual state of your applications with the desired state.&lt;/p&gt;

&lt;p&gt;ArgoCD is a powerful tool that can help you to improve your CD process. It is easy to use, and it can be integrated with a wide range of other tools. If you are looking for a way to automate the deployment of your applications to Kubernetes, then ArgoCD is a great option.&lt;/p&gt;

&lt;p&gt;In this blog post, we will explore the process of setting up continuous integration (CI) using GitHub Actions, and then we will delve into configuring ArgoCD to handle the continuous deployment (CD) aspect.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why ArgoCD?
&lt;/h2&gt;

&lt;p&gt;For a brief overview of the benefits and reasons for using ArgoCD, check out my LinkedIn post on the subject, where I discuss its key advantages and how it can enhance your deployment process. Click below to access the post and get a quick sense of why ArgoCD is a valuable tool for your software development and deployment needs.&lt;/p&gt;


&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
      &lt;div class="c-embed__cover"&gt;
        &lt;a href="https://www.linkedin.com/posts/kodjovi-david-woglo_kubernetes-cicd-argocd-activity-7056054135531397120-sp9p?utm_source=share&amp;amp;amp%3Butm_medium=member_desktop" class="c-link s:max-w-50 align-middle" rel="noopener noreferrer"&gt;
          &lt;img alt="" src="https://res.cloudinary.com/practicaldev/image/fetch/s---4L2NNi3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://media.licdn.com/dms/image/v2/D4E22AQFEN0pK0cEVzQ/feedshare-shrink_2048_1536/feedshare-shrink_2048_1536/0/1682277326678%3Fe%3D2147483647%26v%3Dbeta%26t%3DKyrfHG3MpO4IOm56fDB0pKcX1sdMu9B_j_3wev50RdY" height="531" class="m-0" width="800"&gt;
        &lt;/a&gt;
      &lt;/div&gt;
    &lt;div class="c-embed__body"&gt;
      &lt;h2 class="fs-xl lh-tight"&gt;
        &lt;a href="https://www.linkedin.com/posts/kodjovi-david-woglo_kubernetes-cicd-argocd-activity-7056054135531397120-sp9p?utm_source=share&amp;amp;amp%3Butm_medium=member_desktop" rel="noopener noreferrer" class="c-link"&gt;
          David W. on LinkedIn: #kubernetes #cicd #argocd #devops #continuousdeployment #gitops…
        &lt;/a&gt;
      &lt;/h2&gt;
        &lt;p class="truncate-at-3"&gt;
          Why ArgoCD is My Preferred Tool for Continuous Deployment on Kubernetes ?

🚀 In a typical CI/CD flow using commons tools like Jenkins, Gitlab CI/CD, or GitHub…
        &lt;/p&gt;
      &lt;div class="color-secondary fs-s flex items-center"&gt;
          &lt;img alt="favicon" class="c-embed__favicon m-0 mr-2 radius-0" src="https://res.cloudinary.com/practicaldev/image/fetch/s--aGQ1YUtN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://static.licdn.com/aero-v1/sc/h/al2o9zrvru7aqj8e1x2rzsrca" width="64" height="64"&gt;
        linkedin.com
      &lt;/div&gt;
    &lt;/div&gt;
&lt;/div&gt;


&lt;h2&gt;
  
  
  Requirements
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Installed &lt;code&gt;kubectl&lt;/code&gt; command-line tool.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Have a Kubernetes cluster and a &lt;code&gt;kubeconfig&lt;/code&gt; file. The default location for the &lt;code&gt;kubeconfig&lt;/code&gt; file is &lt;code&gt;~/.kube/config&lt;/code&gt;. If you don't have a Kubernetes cluster set up, you can follow this &lt;a href="https://minikube.sigs.k8s.io/docs/start/" rel="noopener noreferrer"&gt;guide&lt;/a&gt; to quickly bootstrap Minikube.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A GitHub account.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Setting Up Continuous Integration (CI) Using GitHub Actions
&lt;/h2&gt;

&lt;p&gt;For this activity, we will use a simple web application written in Python and utilizing Flask. The application has been specifically designed with cloud demonstrations and containers in mind.&lt;/p&gt;

&lt;p&gt;To obtain the application code, you can fork this &lt;a href="https://github.com/davWK/argoCD-demo.git" rel="noopener noreferrer"&gt;GitHub repository&lt;/a&gt; to your own GitHub account and then clone it to your local machine to start making changes and customizations as needed.&lt;/p&gt;

&lt;p&gt;To create the workflow instructions for GitHub Actions, you'll need to create a YAML file following a specific structure. Start by creating a file named &lt;code&gt;main.yml&lt;/code&gt; inside the &lt;code&gt;.github/workflows&lt;/code&gt; directory of your repository. This file will serve as the configuration file for your workflows. By following this standardized structure, you'll be able to define and customize the actions, triggers, and steps that make up your CI/CD pipeline.&lt;/p&gt;

&lt;p&gt;Let's start the workflow configuration with the following structure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ArgoCD demo Build&lt;/span&gt;

&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;branches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;main"&lt;/span&gt;
  &lt;span class="na"&gt;pull_request&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this configuration, we've named the workflow "ArgoCD demo Build". It will be triggered both on push events to the "main" branch and on pull requests. Its jobs, defined next, will run on "ubuntu-latest" virtual machines. This setup forms the foundation of the workflow.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;test&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Test'&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; 
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Checkout&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v2&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Run tests&lt;/span&gt;
        &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;make test&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Above, we define a job called "Test" that will run on the latest Ubuntu environment (&lt;code&gt;ubuntu-latest&lt;/code&gt;).&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;The "Checkout" step ensures that the repository's code is available by using the &lt;code&gt;actions/checkout@v2&lt;/code&gt; action.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The "Run tests" step executes the command &lt;code&gt;make test&lt;/code&gt; to run the tests.&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;  &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;  
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Build&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;amp;&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Push&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;to&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Docker&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Hub'&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
    &lt;span class="na"&gt;needs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; 
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Checkout&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v2&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Login to Docker Hub&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker/login-action@v2&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;username&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.DOCKERHUB_USERNAME }}&lt;/span&gt;
          &lt;span class="na"&gt;password&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.DOCKERHUB_TOKEN }}&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Set up Docker Buildx&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker/setup-buildx-action@v2&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Build and push&lt;/span&gt;
        &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker/build-push-action@v4&lt;/span&gt;
        &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;context&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;.&lt;/span&gt;
          &lt;span class="na"&gt;file&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./Dockerfile&lt;/span&gt;
          &lt;span class="na"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
          &lt;span class="na"&gt;tags&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.DOCKERHUB_USERNAME }}/image-name:tag&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The next job is "Build &amp;amp; Push to Docker Hub," which also runs on the &lt;code&gt;ubuntu-latest&lt;/code&gt; environment.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;The "Checkout" step ensures that the repository's code is available by using the &lt;code&gt;actions/checkout@v2&lt;/code&gt; action.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The "Login to Docker Hub" step authenticates with Docker Hub using the credentials that should be defined in the repository secrets in the GitHub repository settings.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The "Set up Docker Buildx" step uses the &lt;code&gt;docker/setup-buildx-action@v2&lt;/code&gt; action to set up Docker Buildx for building the Docker image.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Finally, the "Build and push" step uses the &lt;code&gt;docker/build-push-action@v4&lt;/code&gt; action to build the Docker image from the specified &lt;code&gt;Dockerfile&lt;/code&gt; and push it to Docker Hub. Make sure to modify the &lt;code&gt;tags&lt;/code&gt; field to match your desired image name and version, and add your Docker Hub credentials to the repository secrets before moving on.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Once everything is in place, you can initiate the workflow by pushing your changes to the repository. This action will automatically trigger the workflow to start. To monitor and gain insights into the workflow execution, navigate to the "Actions" tab in GitHub. Here, you'll be able to view the workflow status, check the progress of each step, and identify any errors encountered. If any issues arise, carefully review the error messages provided and make the necessary fixes before proceeding to the next part, which involves setting up the continuous deployment (CD) using ArgoCD.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting Up Continuous Deployment (CD) with ArgoCD
&lt;/h2&gt;

&lt;p&gt;In this section, we will explore the process of setting up continuous deployment (CD) using ArgoCD. Building upon the foundation of continuous integration (CI) we established earlier with GitHub Actions, we will now focus on automating the deployment of our application to a Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;You can use any Kubernetes cluster at your disposal, whether it's cloud-based, bare-metal, or a local environment such as Minikube or MicroK8s. ArgoCD is compatible with all of these configurations, so you can integrate it into your existing infrastructure or choose the setup that best suits your needs for continuous deployment (CD).&lt;/p&gt;

&lt;p&gt;To proceed further, we will be utilizing Minikube for our setup. Minikube provides a convenient and lightweight way to run a single-node Kubernetes cluster locally.&lt;/p&gt;

&lt;p&gt;Now, let's proceed with the installation of ArgoCD. We will walk through the steps to set up ArgoCD on your chosen Kubernetes cluster, in this case, Minikube.&lt;/p&gt;

&lt;h3&gt;
  
  
  Installing ArgoCD
&lt;/h3&gt;

&lt;p&gt;To install ArgoCD on your Kubernetes cluster, execute the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create namespace argocd
kubectl apply &lt;span class="nt"&gt;-n&lt;/span&gt; argocd &lt;span class="nt"&gt;-f&lt;/span&gt; https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The first command creates a namespace called "argocd" where ArgoCD will be installed. The second command applies the ArgoCD installation manifest, which can be accessed from the official ArgoCD GitHub repository. By executing these commands, you will initiate the installation process and set up ArgoCD within your cluster.&lt;/p&gt;

&lt;p&gt;Once the installation is completed, you can verify the installation status by running the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get nodes &lt;span class="nt"&gt;-n&lt;/span&gt; argocd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command will display the nodes in the "argocd" namespace, confirming that ArgoCD is successfully installed.&lt;/p&gt;

&lt;p&gt;To access the ArgoCD web interface, you can use kubectl port-forwarding to connect to the API server. Execute the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl port-forward svc/argocd-server &lt;span class="nt"&gt;-n&lt;/span&gt; argocd 8080:443
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command will create a port-forwarding tunnel, allowing you to access the ArgoCD UI locally at &lt;a href="https://localhost:8080" rel="noopener noreferrer"&gt;&lt;code&gt;https://localhost:8080&lt;/code&gt;&lt;/a&gt;. Simply open a web browser and navigate to the provided URL to access the ArgoCD interface.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9zx7bbxt0lslbxo8kfgs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9zx7bbxt0lslbxo8kfgs.png" alt="img" width="800" height="410"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To log in to the ArgoCD UI, you will need to retrieve the password from the &lt;code&gt;argocd-initial-admin-secret&lt;/code&gt; secret. Follow these steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Retrieve the secret by executing the following command:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get secret argocd-initial-admin-secret &lt;span class="nt"&gt;-n&lt;/span&gt; argocd &lt;span class="nt"&gt;-o&lt;/span&gt; yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;&lt;p&gt;The output will include a field called &lt;code&gt;data&lt;/code&gt;, which contains the base64-encoded password. Copy the value associated with the &lt;code&gt;password&lt;/code&gt; key.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Decode the password using the &lt;code&gt;echo&lt;/code&gt; and &lt;code&gt;base64&lt;/code&gt; commands. Replace &lt;code&gt;encodedpassword&lt;/code&gt; in the command below with the copied value:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;echo &lt;/span&gt;encodedpassword | &lt;span class="nb"&gt;base64&lt;/span&gt; &lt;span class="nt"&gt;--decode&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;&lt;p&gt;The decoded password will be displayed in the terminal. Copy the password string.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Return to the ArgoCD UI login page. Enter &lt;code&gt;admin&lt;/code&gt; as the username and paste the decoded password into the password field.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
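&lt;p&gt;The decoding step above can also be reproduced in Python, which is handy if &lt;code&gt;base64&lt;/code&gt; isn't available in your shell. This is only an illustration of what &lt;code&gt;echo encodedpassword | base64 --decode&lt;/code&gt; does; the sample string below is a made-up value, not a real ArgoCD password:&lt;/p&gt;

```python
import base64

# A made-up example value, standing in for the base64 string copied
# from the "password" field of the argocd-initial-admin-secret output.
encoded_password = "c3VwZXJzZWNyZXQ="

# Kubernetes stores secret values base64-encoded (not encrypted);
# decoding reverses that encoding to reveal the plaintext password.
decoded = base64.b64decode(encoded_password).decode("utf-8")
print(decoded)  # -> supersecret
```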

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6gbhvsp3ple3r586lavf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6gbhvsp3ple3r586lavf.png" alt="Img" width="800" height="410"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Currently, ArgoCD is empty as we haven't configured any applications yet. Let's proceed with configuring ArgoCD to connect to a GitHub repository where our deployment files will be hosted.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;It's important to note that in best practices, it is recommended to separate the application repository from the deployment repository. However, for the purpose of this activity, we will keep the deployment files alongside the application files. Please keep in mind that this is not a recommended practice for production-ready environments. In such scenarios, it is crucial to separate the two repositories to ensure a more organized and manageable deployment workflow.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Configuring ArgoCD
&lt;/h3&gt;

&lt;p&gt;To configure ArgoCD to connect to your GitHub repository and deploy your application, follow these steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a YAML file, such as &lt;code&gt;argocd-config.yaml&lt;/code&gt;, and add the following content:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argoproj.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Application&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argo-cd-demo&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argocd&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;project&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;

  &lt;span class="na"&gt;source&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;repoURL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://github.com/davWK/argoCD-demo.git&lt;/span&gt;
    &lt;span class="na"&gt;targetRevision&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;HEAD&lt;/span&gt;
    &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;deploy/kubernetes/&lt;/span&gt;
  &lt;span class="na"&gt;destination&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
    &lt;span class="na"&gt;server&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://kubernetes.default.svc&lt;/span&gt;
    &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;demo-app-for-argo-cd&lt;/span&gt;

  &lt;span class="na"&gt;syncPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;automated&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;selfHeal&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="na"&gt;prune&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, let's break down what each section of the YAML file does:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;metadata&lt;/code&gt;: Specifies the metadata for the ArgoCD application, including its name and namespace.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;spec.project&lt;/code&gt;: Specifies the project within ArgoCD where the application belongs. In this case, it is set to the default project.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;source&lt;/code&gt;: Defines the source repository details:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;repoURL&lt;/code&gt;: Specifies the URL of the GitHub repository where your application's deployment files are hosted.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;targetRevision&lt;/code&gt;: Specifies the target revision of the repository to deploy. Here, it is set to &lt;code&gt;HEAD&lt;/code&gt;, meaning the latest revision.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;path&lt;/code&gt;: Specifies the path within the repository where your application's Kubernetes deployment files are located; this is the path ArgoCD will watch for modifications.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;code&gt;destination&lt;/code&gt;: Specifies the destination details for the deployment:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;server&lt;/code&gt;: Specifies the URL of the Kubernetes API server. Here, it is set to &lt;code&gt;https://kubernetes.default.svc&lt;/code&gt;, the in-cluster address; it can also point to an external cluster.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;namespace&lt;/code&gt;: Specifies the target namespace in which the application will be deployed. In this case, it is set to &lt;code&gt;demo-app-for-argo-cd&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;code&gt;syncPolicy&lt;/code&gt;: Defines the synchronization policy for the application:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;automated&lt;/code&gt;: Specifies that the synchronization should be automated, enabling self-healing and pruning capabilities.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;selfHeal&lt;/code&gt;: Enables self-healing, ensuring the application stays in the desired state.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;prune&lt;/code&gt;: Enables pruning, removing any resources that are no longer defined in the deployment files.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;/li&gt;

&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Save the file and apply the configuration by running the following command:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-n&lt;/span&gt; argocd &lt;span class="nt"&gt;-f&lt;/span&gt; argocd-config.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By applying this configuration, ArgoCD will establish a connection to the specified GitHub repository, fetch the deployment files from the specified path, and deploy the application to the designated namespace within the Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;Once you apply the configuration using the command &lt;code&gt;kubectl apply -n argocd -f argocd-config.yaml&lt;/code&gt;, you will no longer need to manually apply any changes to your Kubernetes files. ArgoCD takes over the responsibility of tracking and applying changes automatically.&lt;/p&gt;

&lt;p&gt;After the initial deployment, ArgoCD continuously monitors the specified GitHub repository and the Kubernetes files within it. Whenever there is a change detected in the repository, ArgoCD will automatically apply those changes to your Kubernetes cluster. This ensures that your application remains up-to-date with the latest version defined in the repository.&lt;/p&gt;

&lt;p&gt;With ArgoCD in place, you can focus on making changes to your application's deployment files in the repository, and ArgoCD will handle the synchronization and deployment to the Kubernetes cluster for you. This simplifies the deployment process and provides a seamless experience for maintaining the desired state of your applications.&lt;/p&gt;

&lt;h3&gt;
  
  
  Writing the Python app deployment file for Kubernetes
&lt;/h3&gt;

&lt;p&gt;At this stage, the configuration will be created in ArgoCD, but no application pods or services will be available. This is because we have not yet defined the Kubernetes deployment manifest that contains the deployment information for our Python demo app. However, once this manifest is in place, ArgoCD will automatically apply it, resulting in the deployment of the application.&lt;/p&gt;

&lt;p&gt;To proceed, you need to create the Kubernetes deployment manifest file that describes the desired state of your application, such as the container image, ports, and any other necessary configurations. Once you have the deployment manifest ready, commit and push it to your GitHub repository.&lt;/p&gt;

&lt;p&gt;ArgoCD will then detect the changes in the repository and automatically apply the deployment manifest, triggering the creation of the corresponding pods and services. This automatic synchronization ensures that the deployed application aligns with the desired state defined in the deployment manifest.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmjdl86fkl636v280k17b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmjdl86fkl636v280k17b.png" alt="Img" width="800" height="410"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To proceed with defining the Kubernetes deployment manifest for the Python demo app:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Inside the &lt;code&gt;deploy/kubernetes&lt;/code&gt; directory, create a new &lt;code&gt;deployment.yaml&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Open the &lt;code&gt;deployment.yaml&lt;/code&gt; file and add the following content:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;python-app-deployment&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;python-app&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;python-app&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;python-app&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;image-name&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;imageurl&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5000&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;python-app-service&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;NodePort&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;python-app&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
      &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
      &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5000&lt;/span&gt;
      &lt;span class="na"&gt;nodePort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;30000&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Save the file.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This deployment manifest defines a Kubernetes Deployment and Service for the Python app. It specifies the container image, ports, replicas, and other necessary configurations.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The Deployment creates three replicas of the Python app pods.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The Service exposes the app using a NodePort type, making it accessible on port 30000 of the cluster nodes.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Commit and push the &lt;code&gt;deployment.yaml&lt;/code&gt; file to your GitHub repository. ArgoCD will automatically detect the changes and apply the deployment manifest, leading to the creation of the Python app deployment and service.&lt;/p&gt;

&lt;p&gt;Once the synchronization is complete, you should see the app pods running and the service available for access.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fna9h8y59j2cu1ddx9whm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fna9h8y59j2cu1ddx9whm.png" alt="Img" width="800" height="410"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To access the deployed Python app:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Run the following command to get the service information:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get svc &lt;span class="nt"&gt;-n&lt;/span&gt; &amp;lt;namespace &lt;span class="k"&gt;for &lt;/span&gt;the Python app&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace &lt;code&gt;&amp;lt;namespace for the Python app&amp;gt;&lt;/code&gt; with the actual namespace where your Python app is deployed. This command will provide you with the details of the service, including its name, type, cluster IP, and port.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Once you have the service information, run the following command to set up port forwarding:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl port-forward &lt;span class="nt"&gt;-n&lt;/span&gt; &amp;lt;namespace &lt;span class="k"&gt;for &lt;/span&gt;the Python app&amp;gt; svc/python-app-service 8083:&amp;lt;service port&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace &lt;code&gt;&amp;lt;namespace for the Python app&amp;gt;&lt;/code&gt; with the actual namespace where your Python app is deployed, and &lt;code&gt;&amp;lt;service port&amp;gt;&lt;/code&gt; with the port number specified in your service configuration (e.g., 80). With the values used in this walkthrough, that would be &lt;code&gt;kubectl port-forward -n demo-app-for-argo-cd svc/python-app-service 8083:80&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;This command establishes a connection between your local machine and the Python app service running in the Kubernetes cluster. It forwards traffic from your local port 8083 to the specified service port.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Now, you can access the deployed Python app by opening a web browser and navigating to &lt;a href="http://localhost:8083" rel="noopener noreferrer"&gt;&lt;code&gt;http://localhost:8083&lt;/code&gt;&lt;/a&gt;. This will direct your requests to the Python app service running in the Kubernetes cluster.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frcrilllrvd42b51vjy8i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frcrilllrvd42b51vjy8i.png" alt="Img" width="800" height="410"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In conclusion, setting up Continuous Integration (CI) and Continuous Deployment (CD) processes is crucial for efficient software development and deployment. In this article, we explored the steps to configure CI using GitHub Actions and CD with ArgoCD. By integrating these tools into your workflow, you can automate the build, test, and deployment processes, leading to faster and more reliable software delivery.&lt;/p&gt;

&lt;p&gt;To learn more about ArgoCD and its capabilities, you can refer to the official ArgoCD documentation available &lt;a href="https://argo-cd.readthedocs.io/en/stable/getting_started/" rel="noopener noreferrer"&gt;here&lt;/a&gt;. The documentation provides comprehensive information, including installation guides, usage examples, and advanced configurations.&lt;/p&gt;

&lt;p&gt;For a practical demonstration and understanding of ArgoCD, you can watch the "ArgoCD tutorial" on YouTube by TechWorld with Nana.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/MeU5_k9ssrs"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;To grasp the concept of GitHub Actions and its integration with CI/CD processes, you can watch the "GitHub Action Tutorial" video by TechWorld with Nana. This video explains the fundamentals and basic concepts of GitHub Actions.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/R8_veQiYBjI"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;Thanks for reading! I hope you found the information helpful and informative. If you have any questions or comments, please feel free to reach out to me or leave a comment below.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>cicd</category>
      <category>argocd</category>
      <category>githubactions</category>
    </item>
    <item>
      <title>AWS Cloud resume challenge</title>
      <dc:creator>David WOGLO</dc:creator>
      <pubDate>Wed, 19 Apr 2023 02:13:39 +0000</pubDate>
      <link>https://dev.to/davwk/aws-cloud-resume-challenge-5453</link>
      <guid>https://dev.to/davwk/aws-cloud-resume-challenge-5453</guid>
<description>&lt;p&gt;In this article, I describe how I worked through the AWS CRC project, with some comparisons to Google Cloud based on my own experience. By the way, I also wrote an article about the Google Cloud version of this project; you can have a look at it &lt;a href="https://blog.davidwoglo.me/google-cloud-resume-challenge" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you are new to all this cloud-related stuff, it is better to take the &lt;a href="https://aws.amazon.com/certification/certified-cloud-practitioner/" rel="noopener noreferrer"&gt;AWS Cloud Practitioner certification&lt;/a&gt; exam as a first step, as recommended in the &lt;a href="https://cloudresumechallenge.dev/docs/the-challenge/aws/" rel="noopener noreferrer"&gt;CRC guide&lt;/a&gt;. Whether you are unfamiliar with cloud environments or come from another cloud provider, taking this exam will help you validate your knowledge of the cloud and your familiarity with the different AWS services.&lt;/p&gt;

&lt;p&gt;Personally, I just had to quickly complete the &lt;a href="https://www.credly.com/badges/aec0902e-11fc-4056-82e3-0fba80d07dc3/linked_in_profile" rel="noopener noreferrer"&gt;AWS Cloud Practitioner Quest&lt;/a&gt; to refresh my knowledge of AWS, since I am already quite familiar with the cloud and have worked with several AWS services. The quest is free; you can try it &lt;a href="http://skillbuilder.aws/cloudquest?acq=sec&amp;amp;sec=syq" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;That's enough for the intro; let's get to the heart of the matter.&lt;/p&gt;

&lt;h1&gt;
  
  
  Big picture of deployments
&lt;/h1&gt;

&lt;p&gt;This project is about creating a website hosted on Amazon S3 that displays a visitor count calculated by an AWS Lambda function and stored in a DynamoDB table, then automating the whole process (website publication and resource deployment) via a CI/CD pipeline.&lt;br&gt;
My resume page is available at &lt;a href="https://aws.davidwoglo.me/" rel="noopener noreferrer"&gt;https://aws.davidwoglo.me/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flkcaxrs3hq9apsv8c043.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flkcaxrs3hq9apsv8c043.png" alt="Image description" width="800" height="455"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Website
&lt;/h1&gt;

&lt;p&gt;The first steps of the project consist of setting up a website using a completely different approach from the traditional one. Traditionally, we would set up a machine or VM, install a web server on it (Apache, Nginx, or whatever you choose), upload the HTML/CSS files, and do some configuration to make the site available.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgduqcev1meyjuwwuzhlj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgduqcev1meyjuwwuzhlj.png" alt="website.png" width="800" height="455"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here, I was spared that server preparation task (and the underlying configs). I just uploaded the website files to an &lt;a href="https://aws.amazon.com/s3/" rel="noopener noreferrer"&gt;Amazon S3&lt;/a&gt; bucket (AWS's object storage service, the equivalent of Cloud Storage on Google Cloud), then did some small configuration in a couple of clicks, just to let S3 know that I wanted to use this bucket to host a static website, and the site was ready to use. No need for a physical server or additional tools to install. &lt;strong&gt;&lt;em&gt;This is the serverless approach&lt;/em&gt;&lt;/strong&gt;, and it is what the rest of the project is based on; I didn't need to use any server or VM.&lt;/p&gt;

&lt;p&gt;For my site to be accessible via a user-friendly and secure HTTPS URL, the DNS and SSL certificate configuration had to be managed. I used &lt;a href="https://aws.amazon.com/certificate-manager/" rel="noopener noreferrer"&gt;AWS Certificate Manager&lt;/a&gt; to obtain a certificate for my domain, whose ownership had to be verified by automated email due to some problems with my custom domain provider (the recommended way is to use a CNAME record). Then, to route DNS traffic, I used &lt;a href="https://aws.amazon.com/route53/" rel="noopener noreferrer"&gt;Amazon Route 53&lt;/a&gt;, and the distribution of website content is sped up by &lt;a href="https://aws.amazon.com/cloudfront/" rel="noopener noreferrer"&gt;Amazon CloudFront&lt;/a&gt; (AWS's CDN service). All these configurations were done manually and separately, then tied together at the end to make things work.&lt;/p&gt;

&lt;p&gt;At this point, let's make a small comparison with how Google Cloud handles this. On Google Cloud, all of it can be included in the creation of a load balancer, where you just activate automatic SSL management for HTTPS and CDN for content caching.&lt;/p&gt;

&lt;h1&gt;
  
  
  Counting website visitors
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffzbfwgw9ayd5gyt7oell.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffzbfwgw9ayd5gyt7oell.png" alt="visitor count" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;My web page includes a visitor counter that displays how many people have accessed the site. To build it, I created an &lt;a href="https://aws.amazon.com/lambda/" rel="noopener noreferrer"&gt;AWS Lambda&lt;/a&gt; function, a &lt;a href="https://aws.amazon.com/dynamodb/" rel="noopener noreferrer"&gt;DynamoDB&lt;/a&gt; table, and a REST API. On one side, I wrote Python code executed by Lambda: its job is to get the current number of visitors stored in DynamoDB and increment it by 1 each time a visitor accesses my page. On the other side, I added JavaScript code to my site's files: its job is to fetch the visitor count from the DynamoDB table and display it on my page. Communication between the JS code and the database goes through a REST API that I set up using &lt;a href="https://aws.amazon.com/api-gateway/" rel="noopener noreferrer"&gt;Amazon API Gateway&lt;/a&gt;.&lt;/p&gt;
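&lt;p&gt;The Lambda side of that description can be sketched in Python. This is a hypothetical minimal version, not the project's actual code: the table name &lt;code&gt;visitor-count&lt;/code&gt; and the key schema are made-up placeholders, and the counting logic is kept separate from boto3 so it can be exercised without AWS access. It relies on DynamoDB's atomic &lt;code&gt;ADD&lt;/code&gt; update, so concurrent visits don't lose increments.&lt;/p&gt;

```python
import json


def increment_visitors(table):
    """Atomically add 1 to the counter item and return the new value.

    `table` is any object exposing DynamoDB's Table.update_item API,
    which lets us test the logic without AWS credentials.
    """
    response = table.update_item(
        Key={"id": "visitors"},                     # placeholder key schema
        UpdateExpression="ADD visit_count :one",    # atomic server-side increment
        ExpressionAttributeValues={":one": 1},
        ReturnValues="UPDATED_NEW",                 # return the post-update value
    )
    return int(response["Attributes"]["visit_count"])


def lambda_handler(event, context):
    # boto3 is available in the Lambda runtime; imported here so the
    # counting logic above stays importable without boto3 installed.
    import boto3

    table = boto3.resource("dynamodb").Table("visitor-count")  # placeholder name
    return {
        "statusCode": 200,
        # CORS header so the site's JavaScript can call the API Gateway endpoint
        "headers": {"Access-Control-Allow-Origin": "*"},
        "body": json.dumps({"count": increment_visitors(table)}),
    }
```

The JavaScript side then just fetches the API Gateway URL and drops the returned count into the page.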

&lt;p&gt;This is the part that gave me headaches when I was doing the project on Google Cloud. I didn't use an API gateway there (because, honestly, I didn't know about it), so I used the open-source &lt;a href="https://github.com/GoogleCloudPlatform/functions-framework-python" rel="noopener noreferrer"&gt;Functions Framework for Python&lt;/a&gt;, in which I used the client library API to communicate with Cloud Firestore, the equivalent of DynamoDB.&lt;/p&gt;

&lt;h1&gt;
  
  
  Automation (CI/CD, IaC, Source Control)
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa31bfk3a0j8o4kxcb0tf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa31bfk3a0j8o4kxcb0tf.png" alt="IAC" width="800" height="627"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To accelerate and simplify updates to my deployments, whether the website (frontend) or the underlying resources (backend), I needed to set up a CI/CD pipeline. A CI/CD pipeline is a series of steps that must be performed in order to deliver a new version of software.&lt;br&gt;
For the website, that's fine: we can manage it as software (it is composed of files, as software is). The question you might ask yourself is: what about the cloud resources in the background? They are not files, right? This is where Infrastructure as Code (IaC) comes in. But before talking about it, let's see how the CI/CD for the frontend was set up.&lt;br&gt;
I created a source control repository on GitHub where I put the website files, then I wrote a workflow file that instructs &lt;a href="https://docs.github.com/en/actions" rel="noopener noreferrer"&gt;GitHub Actions&lt;/a&gt; on how to update my website every time I push.&lt;br&gt;
Now let's talk about the &lt;a href="https://www.hashicorp.com/resources/what-is-infrastructure-as-code/" rel="noopener noreferrer"&gt;Infrastructure as Code&lt;/a&gt; part, i.e. how to manage and provision resources through machine-readable definition files rather than the interactive configuration traditionally used.&lt;br&gt;
I used &lt;a href="https://www.terraform.io/docs" rel="noopener noreferrer"&gt;Terraform&lt;/a&gt; to define the DynamoDB table, API Gateway, and Lambda function configurations in a template and deployed them with the Terraform CLI. Now that we can also manage our infrastructure as software, we can integrate it into a CI/CD pipeline to accelerate the deployment and update of infrastructure resources.&lt;br&gt;
I proceeded the same way as for the website to set up the backend pipeline, except that its GitHub Actions workflow file is a bit more complex.&lt;/p&gt;
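&lt;p&gt;As an illustration of what such a frontend workflow file can look like, here is a minimal sketch (the branch, bucket name, region, and secret names are hypothetical, not the project's actual values):&lt;/p&gt;

```yaml
name: deploy-frontend
on:
  push:
    branches: [main]        # hypothetical default branch
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1               # hypothetical region
      # Sync the site files to the S3 bucket serving the static website
      - run: aws s3 sync . s3://my-resume-bucket --delete
```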

&lt;p&gt;You can access my frontend repository &lt;a href="https://github.com/davWK/cloud-resume-challenge-AWS" rel="noopener noreferrer"&gt;here&lt;/a&gt; and the backend one &lt;a href="https://github.com/davWK/cicd-cloud-resume-challenge-AWS" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Well, here is how things went during this project. Thanks for reading :) &lt;br&gt;
Please check below for some useful resources.&lt;/p&gt;

&lt;h1&gt;
  
  
  Useful resources
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/getting-started-cloudfront-overview.html" rel="noopener noreferrer"&gt;Use an Amazon CloudFront distribution to serve a static website&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.cloudflare.com/learning/dns/what-is-dns/" rel="noopener noreferrer"&gt;What is DNS? | How DNS works&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://serverlessland.com/" rel="noopener noreferrer"&gt;Serverless Land&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.dynamodbguide.com/" rel="noopener noreferrer"&gt;DynamoDB Guide&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://theultimateapichallenge.com/" rel="noopener noreferrer"&gt;API Projects&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://wiki.python.org/moin/BeginnersGuide/NonProgrammers" rel="noopener noreferrer"&gt;Python&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://cors.serverlessland.com/" rel="noopener noreferrer"&gt;Amazon API Gateway CORS Configurator&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Language_Overview" rel="noopener noreferrer"&gt;Java Script&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.freecodecamp.org/news/how-to-make-api-calls-with-fetch" rel="noopener noreferrer"&gt;API Calls&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://developer.hashicorp.com/terraform/tutorials/automation/github-actions" rel="noopener noreferrer"&gt;Automate Terraform with GitHub Actions&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
    </item>
    <item>
      <title>Quickly deploy website on kubernetes locally</title>
      <dc:creator>David WOGLO</dc:creator>
      <pubDate>Wed, 19 Apr 2023 02:09:36 +0000</pubDate>
      <link>https://dev.to/davwk/quickly-deploy-website-on-kubernetes-locally-2age</link>
      <guid>https://dev.to/davwk/quickly-deploy-website-on-kubernetes-locally-2age</guid>
<description>&lt;p&gt;Several reasons can justify the need for local Kubernetes deployments. Often it is for learning with little or limited hardware, or for testing, when we either don't want to pay a cloud bill or aren't allowed to test on the production environment of an on-prem deployment. Bootstrapping a full Kubernetes cluster still requires considerable computing resources and time, which may not be worth it for simple tests.&lt;/p&gt;

&lt;p&gt;So in this article, I show you an example of deploying a website on Kubernetes in a lightweight way, without worrying about resource constraints.&lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;strong&gt;Tools used in this lab :&lt;/strong&gt;
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;em&gt;Kubectl:&lt;/em&gt;&lt;/strong&gt; It will be used to interact with the Kubernetes cluster. It is the Kubernetes command-line tool, which allows running commands against Kubernetes clusters. &lt;a href="https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/" rel="noopener noreferrer"&gt;How to install kubectl&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;em&gt;Docker:&lt;/em&gt;&lt;/strong&gt; Docker Engine is an open-source containerization technology for building and containerizing applications. It will be used to containerize the website. &lt;a href="https://docs.docker.com/engine/install/debian/" rel="noopener noreferrer"&gt;How to install Docker&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;em&gt;Minikube:&lt;/em&gt;&lt;/strong&gt; Minikube is a lightweight local Kubernetes, focused on making it easy to learn and develop for Kubernetes. &lt;a href="https://minikube.sigs.k8s.io/docs/start/" rel="noopener noreferrer"&gt;How to install Minikube&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once all these tools are installed, you are ready to start. Create a directory for the lab, then in this directory create another one where you will put the website files.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir &lt;/span&gt;lab
&lt;span class="nb"&gt;cd &lt;/span&gt;lab
&lt;span class="nb"&gt;mkdir &lt;/span&gt;files
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0wd9Kifd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1669984971067/67d7fb67-8a9b-430b-8c18-e02c5778c7fa.png%2520align%3D" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0wd9Kifd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1669984971067/67d7fb67-8a9b-430b-8c18-e02c5778c7fa.png%2520align%3D" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the website files are in place, go back to the lab directory and let's start the containerization.&lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;strong&gt;Containerization&lt;/strong&gt;
&lt;/h1&gt;

&lt;p&gt;In the lab directory, create the Dockerfile.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Docker builds images automatically by reading the instructions from a Dockerfile -- a text file that contains all commands, in order, needed to build a given image. A Dockerfile adheres to a specific format and set of instructions which you can find at Dockerfile reference&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;For this lab, we will keep the Dockerfile as simple as possible. Use &lt;em&gt;nano Dockerfile&lt;/em&gt; to create the file, paste the content below into it, then save it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM nginx
COPY /files /usr/share/nginx/html
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The image we create here uses an Nginx image as its base and copies the website files from the &lt;em&gt;/files&lt;/em&gt; directory to the &lt;em&gt;/usr/share/nginx/html&lt;/em&gt; directory in the container. You can use another base image if you want, an Apache image for example.&lt;/p&gt;

&lt;p&gt;After that, we can start building the container image of our website.&lt;/p&gt;

&lt;p&gt;Build the image with the &lt;em&gt;docker build&lt;/em&gt; command. The dot (.) in the command below indicates the location of the Dockerfile, which is the current directory.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker build &lt;span class="nt"&gt;-t&lt;/span&gt; website &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ByEqWouF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1669986396916/bf040250-545c-49aa-990c-dde93e1f55ff.png%2520align%3D" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ByEqWouF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1669986396916/bf040250-545c-49aa-990c-dde93e1f55ff.png%2520align%3D" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's check if the image is available by running &lt;em&gt;docker images&lt;/em&gt; command&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--L_HmiMVJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1669986611905/c6226fd8-9548-4e30-b92a-259ecdc02b6c.png%2520align%3D" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--L_HmiMVJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1669986611905/c6226fd8-9548-4e30-b92a-259ecdc02b6c.png%2520align%3D" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can see that our image is available. Now let's run it and access our website.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-it&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; 80:80 website
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;em&gt;-d&lt;/em&gt; option detaches the container from the current shell and runs it in the background; the command outputs the container ID.&lt;/p&gt;

&lt;p&gt;The &lt;em&gt;-p 80:80&lt;/em&gt; option maps the container's port 80 to port 80 on the local machine.&lt;/p&gt;

&lt;p&gt;Once this command is executed we can access our website from the browser via &lt;a href="http://localhost:80" rel="noopener noreferrer"&gt;http://localhost:80&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Congratulations, you just deployed a website using a container. Now let's move on to the next phase of this lab by scaling up a bit and deploying our website with Kubernetes.&lt;/p&gt;

&lt;h1&gt;
  
  
  Deployment on Kubernetes
&lt;/h1&gt;

&lt;p&gt;If not already done, start the cluster with the &lt;em&gt;minikube start&lt;/em&gt; command.&lt;/p&gt;

&lt;p&gt;Before continuing, we need to make a small adjustment to minikube. By default, the minikube node runs its own Docker daemon that is not connected to the Docker daemon on the local machine, so images built locally are invisible to the cluster: without pulling, it doesn't know where to get the image from, and we would observe an &lt;code&gt;ErrImageNeverPull&lt;/code&gt; error. To fix this, let's use the &lt;code&gt;minikube docker-env&lt;/code&gt; command, which outputs the environment variables needed to point the local Docker client to minikube's internal Docker daemon.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NF_tRbi8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1669989268838/723b375d-013b-4877-825d-6225b53a942b.png%2520align%3D" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NF_tRbi8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1669989268838/723b375d-013b-4877-825d-6225b53a942b.png%2520align%3D" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As recommended in the output, point the shell to minikube's docker-daemon by running: &lt;em&gt;eval $(minikube -p minikube docker-env)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;After that, the image has to be rebuilt so that it is stored in minikube's Docker daemon. Once this is done, we can move on to the actual deployment.&lt;/p&gt;
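
&lt;p&gt;Concretely, these two steps can be run like this, from the directory containing the Dockerfile:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Point the shell's Docker client at minikube's internal daemon
eval $(minikube -p minikube docker-env)

# Rebuild the image so it is stored inside minikube
docker build -t website .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;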

&lt;p&gt;We can create a K8s deployment either via an imperative method (typing the commands directly in the terminal) or via a declarative method, where we declare the desired state of our deployment in a YAML or JSON manifest file and apply it. We will proceed with the declarative method here.&lt;/p&gt;

&lt;p&gt;We can either write a manifest from scratch and deal with all the possible errors, or save ourselves the trouble and use a really handy kubectl option to quickly generate a ready-to-use file we can build on.&lt;/p&gt;

&lt;p&gt;So, to spare us all that hassle, let's generate a YAML file with the &lt;em&gt;dry-run&lt;/em&gt; option.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create deployment website &lt;span class="nt"&gt;--image&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;website &lt;span class="nt"&gt;--replicas&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;3 &lt;span class="nt"&gt;--port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;80 &lt;span class="nt"&gt;--dry-run&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;client &lt;span class="nt"&gt;-o&lt;/span&gt; yaml &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; website.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then apply the manifest file&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl create &lt;span class="nt"&gt;-f&lt;/span&gt; website.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now let's see if the deployment has been successful.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get po
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--pyD7_hCu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1669991575069/773d888a-1d9f-479f-a691-6abe1ad58f32.png%2520align%3D" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--pyD7_hCu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1669991575069/773d888a-1d9f-479f-a691-6abe1ad58f32.png%2520align%3D" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Uh oh! It seems we have a problem.&lt;/p&gt;

&lt;p&gt;Kubernetes tries to pull the image specified in the manifest, but this image exists only in minikube's local Docker daemon, not in a public Docker registry. To fix this, prevent the image from being pulled at all by setting the image pull policy to Never: edit the website.yaml file and, in the container section, add &lt;em&gt;imagePullPolicy: Never&lt;/em&gt;&lt;/p&gt;
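
&lt;p&gt;For illustration, the containers section of website.yaml would then look something like this (the names come from the dry-run generation above):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;    spec:
      containers:
      - image: website
        name: website
        imagePullPolicy: Never  # use the local image, never pull from a registry
        ports:
        - containerPort: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;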

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--v5HYLHBC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1669991913622/45982c5e-b58b-4d75-9bf5-0d0c2ea614e6.png%2520align%3D" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--v5HYLHBC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1669991913622/45982c5e-b58b-4d75-9bf5-0d0c2ea614e6.png%2520align%3D" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then save the file and re-apply it by first deleting the erroneous deployment, then check again.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl delete &lt;span class="nt"&gt;-f&lt;/span&gt; website.yaml
kubectl create &lt;span class="nt"&gt;-f&lt;/span&gt; website.yaml
kubectl get po
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It should work now.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---pdizPLk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1669992228986/be9a0004-ca22-4a43-99c0-ad6872427759.png%2520align%3D" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---pdizPLk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1669992228986/be9a0004-ca22-4a43-99c0-ad6872427759.png%2520align%3D" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Voila, it's okay now.&lt;/p&gt;

&lt;p&gt;Next, expose the website deployment as a Kubernetes Service, making it accessible on a node port with &lt;em&gt;--type=NodePort&lt;/em&gt; and targeting port 80 with &lt;em&gt;--port=80&lt;/em&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl expose deployment website &lt;span class="nt"&gt;--type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;NodePort &lt;span class="nt"&gt;--port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's check whether the service is running.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get svc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--SaMHpFaQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1669992544920/ab45d0cb-1ec9-4745-a2f7-66508278edae.png%2520align%3D" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SaMHpFaQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1669992544920/ab45d0cb-1ec9-4745-a2f7-66508278edae.png%2520align%3D" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now let's retrieve a URL that is accessible from outside the cluster.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;minikube service website &lt;span class="nt"&gt;--url&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--k_TbVamM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1669992831657/27274468-708d-454e-9f3e-446ca431a722.png%2520align%3D" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--k_TbVamM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1669992831657/27274468-708d-454e-9f3e-446ca431a722.png%2520align%3D" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the URL has been obtained, the website can be accessed via a browser.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DYgphE8a--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1669992914935/6ecaa0aa-3b6b-4c2d-9c12-bb737b8eac1b.png%2520align%3D" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DYgphE8a--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1669992914935/6ecaa0aa-3b6b-4c2d-9c12-bb737b8eac1b.png%2520align%3D" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Well, that wraps up this article. Thank you, and see you very soon!&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>docker</category>
      <category>containers</category>
    </item>
    <item>
      <title>Deploy a highly available application in a scalable VPC architecture on AWS.</title>
      <dc:creator>David WOGLO</dc:creator>
      <pubDate>Wed, 19 Apr 2023 02:04:09 +0000</pubDate>
      <link>https://dev.to/davwk/deploy-a-highly-available-application-in-a-scalable-vpc-architecture-on-aws-2i0g</link>
      <guid>https://dev.to/davwk/deploy-a-highly-available-application-in-a-scalable-vpc-architecture-on-aws-2i0g</guid>
      <description>&lt;p&gt;In this article, we are going to deploy a VPC architecture in a scalable way. It is mainly about deploying two VPCs, one for a bastion host in a public subnet, and the second one for the main resources of the architecture, namely two private subnets that will be used to host the different instances of an autoscaling group that will be distributed across the two availability zones to ensure the high availability of the application. Since these instances will be in private subnets they will not be able to have public IPs so no internet access. So to allow them to have internet access, a NAT gateway will be set up in a public subnet, and the route tables will be updated to route the outgoing traffic by default to the NAT gateway to allow the resources in the private subnets to have internet access. To serve the requests coming from the internet to the application, a Network Load Balancer will be set up in front of the auto-scaling group. And finally to allow private communication between the two VPCs a transit gateway will be set up.&lt;/p&gt;

&lt;h1&gt;
  
  
  Preparation for the deployment
&lt;/h1&gt;

&lt;p&gt;The application will be based on an EC2 instance holding all the prerequisites and dependencies it needs to function properly. So we will create an EC2 instance on which to install the application and make all the necessary configurations for it to be ready to use. Of course, feel free to set up your own web application with everything it will need (web server, DB, etc.).&lt;/p&gt;

&lt;p&gt;Once everything is ready, we will create a golden AMI based on the EC2 instance.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;An Amazon Machine Image (AMI) is &lt;strong&gt;a supported and maintained image provided by AWS that provides the information required to launch an instance&lt;/strong&gt;. You can create your own AMI, customize the instance (for example, install software on the instance), and then save this updated configuration as a custom AMI. Instances launched from this new custom AMI include the customizations that you made when you created the AMI&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;So to create your custom AMI:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Stop the EC2 instance.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click on &lt;strong&gt;Actions &amp;gt; Images and templates &amp;gt; Create image&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Once the necessary fields are filled in, click on create image.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Wait for the image creation to finish.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Delete (terminate) the instance.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
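
&lt;p&gt;For reference, the same steps can be sketched with the AWS CLI; the instance ID and image name below are placeholders:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Stop the instance, snapshot it as a golden AMI, then terminate it
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 create-image --instance-id i-0123456789abcdef0 --name "golden-ami"
aws ec2 terminate-instances --instance-ids i-0123456789abcdef0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;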

&lt;h1&gt;
  
  
  VPCs Deployment
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Bastion Host VPC
&lt;/h2&gt;

&lt;p&gt;To create the VPC:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Navigate to VPC dashboard&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click on the button &lt;strong&gt;create VPC&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the VPC settings, choose &lt;strong&gt;VPC only&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Define the IPv4 CIDR address block and click on &lt;strong&gt;create VPC&lt;/strong&gt; to validate the creation of the VPC.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;As you may have seen in the architecture diagram at the beginning, this VPC must contain a public subnet in which the bastion host will reside. To set up a public subnet, several elements come into play; the main ones are the route tables and the internet gateway. For a subnet to be public, it must be associated with a route table that directs traffic to an internet gateway.&lt;/p&gt;

&lt;h3&gt;
  
  
  Create Internet Gateway
&lt;/h3&gt;

&lt;p&gt;So let's create the internet gateway first.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;In the VPC menu, click on &lt;strong&gt;internet gateway&lt;/strong&gt; on the left,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Then click on the &lt;strong&gt;create internet gateway&lt;/strong&gt; button, enter the gateway name and click on &lt;strong&gt;create internet gateway&lt;/strong&gt; to validate.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Once the internet gateway is created you will see a banner that invites you to attach it to a VPC, click on it then attach it to the previously created VPC.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Create the public subnet
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;In the VPC menu, click on &lt;strong&gt;Subnets&lt;/strong&gt; on the left&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click the &lt;strong&gt;create subnet&lt;/strong&gt; button&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the VPC section choose the VPC in which the subnet must be located&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the subnet parameters, define a name, then define the IPv4 CIDR block.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click on create subnet to validate.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Create route tables and add routing rules
&lt;/h3&gt;

&lt;p&gt;When you create a VPC, a route table is created by default; you can simply add new routes to it depending on what you want to achieve, or create a new one. Since the route we want to add here will allow public access, I will not create a new one. But in case you want to create one:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Navigate to the VPC menu&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click on route tables on the left then the &lt;strong&gt;create route table&lt;/strong&gt; button&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose the VPC to which it should belong, give it a name then validate.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Once the route table is available, select it then in the routes section click on &lt;strong&gt;edit routes&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click on &lt;strong&gt;add route&lt;/strong&gt;. For the destination, type 0.0.0.0/0 (any destination), then for the target, choose the internet gateway we just created, and save the changes.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;After the routes are updated, the table must be associated with a subnet: in the subnet associations section, click on &lt;strong&gt;edit subnet associations&lt;/strong&gt;, select the subnet to associate with the route table, then save the associations.&lt;/p&gt;
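
&lt;p&gt;As a CLI sketch of the same public-subnet routing setup (all IDs are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Create a route table in the VPC, add a default route to the internet
# gateway, and associate the table with the public subnet
aws ec2 create-route-table --vpc-id vpc-xxxxxxxx
aws ec2 create-route --route-table-id rtb-xxxxxxxx \
    --destination-cidr-block 0.0.0.0/0 --gateway-id igw-xxxxxxxx
aws ec2 associate-route-table --route-table-id rtb-xxxxxxxx --subnet-id subnet-xxxxxxxx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;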

&lt;h2&gt;
  
  
  Production VPC
&lt;/h2&gt;

&lt;p&gt;The production VPC contains a public subnet and two private subnets. For the public subnet, proceed as in the previous steps.&lt;/p&gt;

&lt;h3&gt;
  
  
  Create NAT Gateway
&lt;/h3&gt;

&lt;p&gt;Now for the private subnets: we don't want their resources to be directly reachable from the internet, but they still need outbound internet access. This is achieved by setting up a NAT gateway and then creating a route table that routes default traffic to the NAT gateway for outbound internet connections.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;In the VPC menu, click on NAT gateways on the left&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click on the create NAT gateway button and give it a name&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose the public subnet in which it must be located&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For the connectivity type, choose public to allocate it an Elastic IP, then validate by clicking on create NAT gateway.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Next, create a new route table that directs default traffic to the NAT gateway for outbound internet connections. Proceed as before, but change the target to the NAT gateway instead of the internet gateway, and do not forget to associate the private subnets.&lt;/p&gt;
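
&lt;p&gt;Sketched with the AWS CLI (placeholder IDs), the NAT setup described above looks like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Allocate an Elastic IP and create the NAT gateway in the public subnet
aws ec2 allocate-address --domain vpc
aws ec2 create-nat-gateway --subnet-id subnet-xxxxxxxx --allocation-id eipalloc-xxxxxxxx

# Default-route the private subnets' traffic through the NAT gateway
aws ec2 create-route --route-table-id rtb-xxxxxxxx \
    --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-xxxxxxxx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;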

&lt;h2&gt;
  
  
  Transit gateway
&lt;/h2&gt;

&lt;p&gt;Using a transit gateway here will enable private communication between the Bastion Host VPC and the Production VPC. To create the transit gateway:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Still in the VPC menu, click on &lt;strong&gt;Transit gateways&lt;/strong&gt; on the left&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click on the &lt;strong&gt;create transit gateway&lt;/strong&gt; button, give it a name&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Validate by clicking on create transit gateway.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;After its creation, you must add attachments for the VPCs. On the left, just below transit gateways, you will find &lt;strong&gt;transit gateway attachments&lt;/strong&gt;.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Click on it then click on the &lt;strong&gt;create transit gateway attachment&lt;/strong&gt; button,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Give it a name&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose the required transit gateway ID; for the attachment type, choose VPC; then, for the VPC ID, choose the required VPC and validate the creation of the attachment. Repeat for the second VPC.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
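
&lt;p&gt;The equivalent CLI sketch (placeholder IDs; run the attachment command once per VPC):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws ec2 create-transit-gateway --description "bastion-to-production"
aws ec2 create-transit-gateway-vpc-attachment --transit-gateway-id tgw-xxxxxxxx \
    --vpc-id vpc-xxxxxxxx --subnet-ids subnet-xxxxxxxx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;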

&lt;h1&gt;
  
  
  VPC traffic observability
&lt;/h1&gt;

&lt;p&gt;Here we're going to create a CloudWatch log group with two log streams, one per VPC, then enable Flow Logs on both VPCs and push the flow logs to the log group, storing each VPC's logs in its respective log stream.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;In the CloudWatch menu, click on "Logs" and then click on the "Create log group" button.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the &lt;strong&gt;Create log group&lt;/strong&gt; dialog, enter a name for the log group and click "Create".&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Once the log group has been created, click on the log group and then click on the "Create log stream" button.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the "Create log stream" dialog, enter a name for the log stream and click "Create".&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Repeat this step to create a second log stream.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Next, you will need to enable VPC Flow Logs for the two VPCs. To do this,&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Navigate to the VPC service in the AWS Management Console&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select one of the VPCs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the VPC menu, click on "Flow Logs" and then click on the "Create flow log" button.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the "Create flow log" dialog, select the log group that you created previously as the destination for the flow logs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Repeat these steps for the second VPC to enable VPC Flow Logs.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
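
&lt;p&gt;With the CLI, enabling Flow Logs for a VPC can be sketched as follows (the IDs and the IAM role, which must allow delivery to CloudWatch Logs, are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws ec2 create-flow-logs --resource-type VPC --resource-ids vpc-xxxxxxxx \
    --traffic-type ALL --log-group-name my-vpc-flow-logs \
    --deliver-logs-permission-arn arn:aws:iam::123456789012:role/flow-logs-role
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;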

&lt;h1&gt;
  
  
  The application deployment
&lt;/h1&gt;

&lt;p&gt;We will deploy the application by setting up an Auto Scaling group based on the golden AMI we created above.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create launch template
&lt;/h2&gt;

&lt;p&gt;To create an Auto Scaling group based on the golden AMI, you must use a launch template. To create a launch template:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Go to the Amazon EC2 console and select "Launch Templates" from the navigation pane.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click the "Create launch template" button.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;On the "Create launch template" section, enter a name and choose the required VPC.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Under &lt;strong&gt;Auto Scaling guidance&lt;/strong&gt;, select the check box to have Amazon EC2 provide guidance to help create a template to use with Amazon EC2 Auto Scaling&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Under &lt;strong&gt;Launch template contents&lt;/strong&gt;, fill out each required field and any optional fields as needed: select the golden AMI created above, the instance type, key pair, network settings, security group, and other instance details for the instances in the Auto Scaling group.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click the "Create launch template" button to create your launch template.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Create Auto Scaling group
&lt;/h2&gt;

&lt;p&gt;Once your launch template has been created, you can use it to create an Auto Scaling group.&lt;/p&gt;

&lt;p&gt;But first, let's create the load balancer that will be in front of the auto-scaling group&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Navigate to the Amazon EC2 dashboard.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the left-hand menu, under "Load Balancing," select "Load Balancers."&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click the "Create Load Balancer" button and select "Network Load Balancer."&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Follow the prompts to configure the NLB, including the name, listeners, and target groups.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;To configure the NLB, you will need to specify the following details:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Name: This is the name of the NLB. It should be unique within your AWS account.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Listeners: These are the ports that the NLB will listen on. You can add multiple listeners with different ports and protocols. Here we use port 80&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Target groups: These are groups of Amazon EC2 instances that the NLB will route traffic to. You will need to create at least one target group and add instances to it.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
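
&lt;p&gt;For reference, the NLB, target group, and listener described above can be sketched with the AWS CLI (names, subnets, and ARNs are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws elbv2 create-load-balancer --name app-nlb --type network \
    --subnets subnet-xxxxxxxa subnet-xxxxxxxb
aws elbv2 create-target-group --name app-targets --protocol TCP --port 80 \
    --vpc-id vpc-xxxxxxxx
aws elbv2 create-listener --load-balancer-arn &lt;nlb-arn&gt; --protocol TCP --port 80 \
    --default-actions Type=forward,TargetGroupArn=&lt;tg-arn&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;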

&lt;p&gt;Once you have configured the NLB, navigate to the EC2 dashboard and select "Auto Scaling Groups" in the left-hand menu.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Click the "Create Auto Scaling group" button.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Follow the prompts to configure the Auto Scaling group, including the group name, network, and subnet.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the "Load Balancer" section, select the NLB that you just created&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Configure the other settings for the Auto Scaling group as desired, including the minimum and maximum number of instances and the scaling policies.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click the "Create Auto Scaling group" button to create the Auto Scaling group with the NLB.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Once the Auto Scaling group is created and the NLB becomes active, you can test the NLB by sending traffic to it and verifying that it routes traffic to the target groups as expected.&lt;/p&gt;

&lt;h1&gt;
  
  
  Troubleshooting tips
&lt;/h1&gt;

&lt;p&gt;In case of unreachability, the two main points to review are the security groups and the route tables: make sure the security groups allow the traffic type and source you expect, and that the right route table configurations are applied. An excellent tool for debugging unreachability is the &lt;strong&gt;Reachability Analyzer&lt;/strong&gt;: it helps troubleshoot reachability by letting you configure network paths and analyzing them, producing insightful explanations that can help solve problems.&lt;/p&gt;

&lt;p&gt;The unreachability issue I ran into during my configurations was related to the transit gateway: the EC2 instances in the private subnets could not be reached via SSH from the jump (bastion) host. To fix it, I used the &lt;strong&gt;Reachability Analyzer&lt;/strong&gt; to gather information on which to base my hypotheses, and this helped solve the issue. The problem was a missing route in both the bastion VPC and the production VPC: I had to add a route in the bastion VPC that directs traffic destined for the private subnets to the transit gateway by default, and likewise at the production VPC level, a route directing traffic destined for the bastion to the transit gateway.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>vpc</category>
      <category>cloud</category>
      <category>architecture</category>
    </item>
    <item>
      <title>Deploying and Securing an App on AWS EKS with Gitlab CI/CD and Checkov</title>
      <dc:creator>David WOGLO</dc:creator>
      <pubDate>Wed, 19 Apr 2023 01:58:55 +0000</pubDate>
      <link>https://dev.to/davwk/deploying-and-securing-an-app-on-aws-eks-with-gitlab-cicd-and-checkov-4j8d</link>
      <guid>https://dev.to/davwk/deploying-and-securing-an-app-on-aws-eks-with-gitlab-cicd-and-checkov-4j8d</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;Deploying an application on AWS EKS (Elastic Kubernetes Service) can be a powerful way to ensure scalability and reliability for your application. However, the process can be complex and time-consuming, especially when it comes to ensuring the security and compliance of your deployment. In this article, we'll show you how to simplify the process and ensure your deployment is secure with GitLab CI/CD and Checkov. GitLab CI/CD provides a powerful toolset for automating the deployment process and improving collaboration among team members, while Checkov is a Security as Code tool that can help you automatically scan your configuration files for potential security and compliance issues. By integrating these tools into your deployment pipeline, you can ensure your deployment is secure and compliant with industry best practices, all while saving time and effort.&lt;/p&gt;

&lt;h1&gt;
  
  
  Prerequisites
&lt;/h1&gt;

&lt;p&gt;Before proceeding, you will need the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Set up a GitLab project with runners to execute CI/CD jobs&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A container registry (a docker hub repo is more than enough)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A running AWS EKS cluster&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Some knowledge of Docker and Kubernetes&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  The different steps are as follows
&lt;/h1&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Set up the application code and the Dockerfile&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Define CI/CD GitLab variables&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Set Kubernetes manifest files&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Set up the CI/CD pipeline&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Trigger the pipeline with a git push.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h1&gt;
  
  
  Directory structure
&lt;/h1&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
├── Dockerfile
├── .gitlab-ci.yml
├── .k8s
│   ├── deployment.yaml
│   └── services.yaml
└── src
    ├── app.py
    └── requirements.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The application files and the Kubernetes configurations are respectively in the &lt;strong&gt;src&lt;/strong&gt; and &lt;strong&gt;.k8s&lt;/strong&gt; directories, and the Dockerfile and the GitLab CI script are at the root of the repository.&lt;/p&gt;

&lt;h1&gt;
  
  
  Set up the application code and the Dockerfile
&lt;/h1&gt;

&lt;p&gt;Use whatever language or framework you like to create your application; the main thing is to have an application you can containerize with a Dockerfile. Personally, I used a simple Python application built with the Flask framework that displays a "Hello, World!" message.&lt;/p&gt;
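
&lt;p&gt;As an illustration (a minimal sketch, not necessarily the exact code I used), &lt;em&gt;src/app.py&lt;/em&gt; could look like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;from flask import Flask

app = Flask(__name__)


@app.route("/")
def hello():
    # The page served at the application's root URL
    return "Hello, World!"


if __name__ == "__main__":
    # Listen on all interfaces so the app is reachable from inside a container
    app.run(host="0.0.0.0", port=5000)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;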

&lt;p&gt;For the Dockerfile, here is an example of what it could look like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight apache"&gt;&lt;code&gt;&lt;span class="nc"&gt;FROM&lt;/span&gt; python:3.9-slim-buster

WORKDIR /app

COPY src/requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY src/app.py .

&lt;span class="nc"&gt;CMD&lt;/span&gt; ["python", "./app.py"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I recommend you test your docker image locally before continuing.&lt;/p&gt;

&lt;h1&gt;
  
  
  Define CI/CD GitLab variables
&lt;/h1&gt;

&lt;p&gt;To connect to AWS, Kubernetes, and Docker Hub from GitLab CI, you need to define variables in the GitLab CI/CD pipeline. You can define these variables in the GitLab project settings under &lt;em&gt;CI/CD &amp;gt; Variables&lt;/em&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  To connect to AWS
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;${AWS_ACCESS_KEY_ID}&lt;/code&gt;: This variable contains the access key ID for the AWS account used to deploy the application.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;${AWS_SECRET_ACCESS_KEY}&lt;/code&gt;: This variable contains the secret access key for the AWS account used to deploy the application.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;${AWS_DEFAULT_REGION}&lt;/code&gt;: This variable contains the AWS region where the application will be deployed.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The variables related to the Docker hub or container registry
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;${CI_REGISTRY_USER}&lt;/code&gt;: This variable contains the username used to authenticate with the container registry; it can be Docker Hub, the GitLab registry, or whatever you want&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;${CI_REGISTRY_PASSWORD}&lt;/code&gt;: This variable contains the password used to authenticate with the Container Registry.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;${CI_REGISTRY_IMAGE}&lt;/code&gt;: This variable contains the name of the Docker image in the Container Registry.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;${CI_REGISTRY_IMAGE_VERSION}&lt;/code&gt;: This variable contains the version or tag of the Docker image in the Container Registry.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The configuration file to access Kubernetes
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;${KUBECONFIG}&lt;/code&gt;: This variable contains the Kubernetes configuration file used to authenticate with the Kubernetes cluster. The file is typically located at &lt;code&gt;~/.kube/config&lt;/code&gt;, so when adding it to GitLab, make sure you choose &lt;strong&gt;File&lt;/strong&gt; as the variable type.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Set Kubernetes manifest files
&lt;/h1&gt;

&lt;p&gt;Now it's time to define the manifest files for the Kubernetes deployments. As mentioned above, these files are located in the &lt;strong&gt;.k8s&lt;/strong&gt; folder.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;creationTimestamp&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-app&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-app&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-app&lt;/span&gt;
  &lt;span class="na"&gt;strategy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{}&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;creationTimestamp&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-app&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-app&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${CI_REGISTRY_USER}/${CI_REGISTRY_IMAGE}:${CI_REGISTRY_IMAGE_VERSION}&lt;/span&gt;
        &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{}&lt;/span&gt;
&lt;span class="na"&gt;status&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;deployment.yaml&lt;/strong&gt; defines a Kubernetes Deployment for the application named "my-app". The Deployment creates a single replica of the application and specifies its container image via &lt;code&gt;${CI_REGISTRY_USER}/${CI_REGISTRY_IMAGE}:${CI_REGISTRY_IMAGE_VERSION}&lt;/code&gt;, which references the Docker image built and pushed to the registry by the pipeline.&lt;br&gt;
&lt;/p&gt;
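&lt;p&gt;One caveat: &lt;code&gt;kubectl apply&lt;/code&gt; does not expand &lt;code&gt;${...}&lt;/code&gt; placeholders by itself, so they must be substituted before the manifest is applied, for example with &lt;code&gt;envsubst &amp;lt; deployment.yaml | kubectl apply -f -&lt;/code&gt; in the deploy job. The substitution is plain shell-style templating, as this small Python sketch illustrates (the values below are illustrative, not the real pipeline values):&lt;/p&gt;

```python
# Shell-style ${VAR} substitution, equivalent to what envsubst does to the
# manifest before `kubectl apply`. The values here are illustrative only.
from string import Template

def render_manifest(text, env):
    """Expand ${VAR} placeholders; unknown variables are left untouched."""
    return Template(text).safe_substitute(env)

manifest_line = "image: ${CI_REGISTRY_USER}/${CI_REGISTRY_IMAGE}:${CI_REGISTRY_IMAGE_VERSION}"
env = {
    "CI_REGISTRY_USER": "davwk",
    "CI_REGISTRY_IMAGE": "the-app",
    "CI_REGISTRY_IMAGE_VERSION": "v1",
}
print(render_manifest(manifest_line, env))  # image: davwk/the-app:v1
```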

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;creationTimestamp&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-app&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-app&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
    &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
    &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5000&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-app&lt;/span&gt;
&lt;span class="na"&gt;status&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;loadBalancer&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;services.yaml&lt;/strong&gt; defines a Kubernetes Service for the "my-app" application. The Service exposes the application on port 80 and routes traffic to port 5000, the port our hello-world app listens on inside the container. The file also specifies the labels used to identify the application's Pods in Kubernetes.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Hint: To quickly generate a template to base these configuration files on, and thus reduce the risk of errors and save time, the &lt;code&gt;kubectl&lt;/code&gt; command has a very useful option.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The &lt;code&gt;--dry-run=client -o yaml&lt;/code&gt; option in the &lt;code&gt;kubectl&lt;/code&gt; command generates a YAML representation of the Kubernetes resource that would be created or modified, without actually creating or modifying it. Here is an example of how to generate our YAML files:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;#deployment&lt;/span&gt;
&lt;span class="s"&gt;kubectl create deployment &amp;lt;app_name&amp;gt; \&lt;/span&gt;
     &lt;span class="s"&gt;--image=&amp;lt;the_docker_image&amp;gt; \&lt;/span&gt;
     &lt;span class="s"&gt;--dry-run=client -o yaml &amp;gt; deployment.yaml&lt;/span&gt;

&lt;span class="c1"&gt;#service&lt;/span&gt;
&lt;span class="s"&gt;kubectl expose deployment &amp;lt;app_name&amp;gt; \&lt;/span&gt;
     &lt;span class="s"&gt;--port=80 --target-port=5000 \&lt;/span&gt;
     &lt;span class="s"&gt;--dry-run=client -o yaml &amp;gt; service.yaml&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This should generate the two YAML files, which you can then adjust to fit your use case. You can also use the &lt;code&gt;--dry-run&lt;/code&gt; option with the &lt;code&gt;kubectl&lt;/code&gt; command to validate a Kubernetes YAML file without actually applying it to a cluster.&lt;/p&gt;

&lt;p&gt;To use the &lt;code&gt;--dry-run&lt;/code&gt; option to validate a Kubernetes YAML file, you can run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;kubectl apply --dry-run=client -f &amp;lt;yaml_file_path&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Set up the CI/CD pipeline
&lt;/h1&gt;

&lt;p&gt;Now we can start setting up the GitLab CI script for our pipeline. In our case, it consists of the following five stages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;docker build&lt;/code&gt;: Builds the Docker image and tags it with the registry information.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;docker push&lt;/code&gt;: Pushes the Docker image to the registry.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;test&lt;/code&gt;: Runs the Checkov tool to validate the Kubernetes deployment and service files.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;deploy services&lt;/code&gt;: Deploys the Kubernetes services to EKS using the &lt;code&gt;kubectl&lt;/code&gt; command.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;deploy app&lt;/code&gt;: Deploys the Kubernetes application to EKS using the &lt;code&gt;kubectl&lt;/code&gt; command.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's look a little closer at the test stage; our testing here is security-oriented, as we have integrated Checkov into the pipeline. &lt;a href="https://www.checkov.io/" rel="noopener noreferrer"&gt;Checkov&lt;/a&gt; is an open-source static code analysis tool for infrastructure-as-code (IaC) files. In this case, it performs security and compliance checks on the Kubernetes YAML files in the .k8s directory.&lt;/p&gt;

&lt;p&gt;By running Checkov on the .k8s/deployment.yaml and .k8s/services.yaml files, the GitLab CI/CD pipeline can ensure that the Kubernetes resources being deployed meet the security and compliance requirements defined in the policies and rules Checkov enforces.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;stages&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;docker build&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;docker push&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;test&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;deploy services&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;deploy app&lt;/span&gt;


&lt;span class="na"&gt;Build docker image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker:stable&lt;/span&gt;
  &lt;span class="na"&gt;stage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker build&lt;/span&gt;
  &lt;span class="na"&gt;before_script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;docker login -u ${CI_REGISTRY_USER} -p ${CI_REGISTRY_PASSWORD}&lt;/span&gt;
  &lt;span class="na"&gt;script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;docker build -t the-app .&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;docker tag the-app:latest ${CI_REGISTRY_USER}/${CI_REGISTRY_IMAGE}:${CI_REGISTRY_IMAGE_VERSION}&lt;/span&gt; 

&lt;span class="na"&gt;Push to registry&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker:stable&lt;/span&gt;
  &lt;span class="na"&gt;stage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker push&lt;/span&gt;
  &lt;span class="na"&gt;before_script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;docker login -u ${CI_REGISTRY_USER} -p ${CI_REGISTRY_PASSWORD}&lt;/span&gt;
  &lt;span class="na"&gt;script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;docker push ${CI_REGISTRY_USER}/${CI_REGISTRY_IMAGE}:${CI_REGISTRY_IMAGE_VERSION}&lt;/span&gt;

&lt;span class="na"&gt;Test&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bridgecrew/checkov:latest&lt;/span&gt;
  &lt;span class="na"&gt;stage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test&lt;/span&gt;
  &lt;span class="na"&gt;script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;checkov -d .k8s/deployments.yaml&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;checkov -d .k8s/services.yaml&lt;/span&gt;
  &lt;span class="na"&gt;allow_failure&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;



&lt;span class="na"&gt;Deploy services on EKS&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${CI_REGISTRY_USER}/${CI_REGISTRY_IMAGE}:${CI_REGISTRY_IMAGE_VERSION}&lt;/span&gt;
  &lt;span class="na"&gt;stage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;deploy services&lt;/span&gt;
  &lt;span class="na"&gt;before_script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;export AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;export AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;export AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION}&lt;/span&gt;
  &lt;span class="na"&gt;script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;cd .k8s&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;kubectl --kubeconfig ${KUBECONFIG} apply -f services.yaml&lt;/span&gt;
  &lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;changes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;.k8s/services.yaml&lt;/span&gt;


&lt;span class="na"&gt;Deploy app on EKS&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${CI_REGISTRY_USER}/${CI_REGISTRY_IMAGE}:${CI_REGISTRY_IMAGE_VERSION}&lt;/span&gt;
  &lt;span class="na"&gt;stage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;deploy app&lt;/span&gt;
  &lt;span class="na"&gt;before_script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;export AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;export AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;export AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION}&lt;/span&gt;
  &lt;span class="na"&gt;script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;cd .k8s&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;kubectl --kubeconfig ${KUBECONFIG} apply -f deployment.yaml&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;kubectl --kubeconfig ${KUBECONFIG} rollout status deployments&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Docker build stage&lt;/strong&gt;: Builds a Docker image for the application specified by the Dockerfile using the official &lt;code&gt;docker:stable&lt;/code&gt; image. Before building the image, it logs in to the Docker registry using the username and password provided as GitLab CI environment variables &lt;code&gt;${CI_REGISTRY_USER}&lt;/code&gt; and &lt;code&gt;${CI_REGISTRY_PASSWORD}&lt;/code&gt;. After building the image, it tags it with &lt;code&gt;${CI_REGISTRY_USER}/${CI_REGISTRY_IMAGE}:${CI_REGISTRY_IMAGE_VERSION}&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Docker push stage&lt;/strong&gt;: Pushes the Docker image created in the previous stage to the container registry using the &lt;code&gt;docker push&lt;/code&gt; command.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Test stage&lt;/strong&gt;: Runs the &lt;code&gt;checkov&lt;/code&gt; tool against the Kubernetes manifest files deployment.yaml and services.yaml. &lt;em&gt;In this case, this stage is allowed to fail and does not prevent the pipeline from continuing&lt;/em&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Deploy services on EKS stage&lt;/strong&gt;: Deploys the Kubernetes services specified in the &lt;code&gt;services.yaml&lt;/code&gt; file to the Amazon Elastic Kubernetes Service (EKS) cluster. Before deploying, it sets the AWS credential and region environment variables. The deployment only runs if the &lt;code&gt;services.yaml&lt;/code&gt; file has been modified. Note that the image used by the deploy jobs must provide &lt;code&gt;kubectl&lt;/code&gt; and the AWS CLI.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Deploy app on EKS stage&lt;/strong&gt;: Deploys the Kubernetes deployment specified in the &lt;code&gt;deployment.yaml&lt;/code&gt; file to the EKS cluster. Before deploying, it sets the AWS credentials and region environment variables. After deploying, it checks the rollout status of the deployment.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To make sure this script is valid, you can use a lint tool; it helps catch errors quickly and saves time. For my part, I used the &lt;a href="https://docs.gitlab.com/ee/integration/glab/" rel="noopener noreferrer"&gt;&lt;strong&gt;glab&lt;/strong&gt;&lt;/a&gt; CLI tool, whose &lt;code&gt;glab ci lint&lt;/code&gt; command validates the script and confirms that everything is correct.&lt;/p&gt;

&lt;h1&gt;
  
  
  Trigger the pipeline with a git push
&lt;/h1&gt;

&lt;p&gt;To trigger this GitLab CI pipeline, you need to commit and push the code changes to the GitLab repository that contains this GitLab CI script.&lt;/p&gt;

&lt;p&gt;Once you have pushed the changes to the repository, GitLab CI automatically detects them and starts running the pipeline. You can also trigger the pipeline manually by opening the "CI/CD" tab of the GitLab repository and clicking the "Run pipeline" button.&lt;/p&gt;

&lt;p&gt;Note that to run the pipeline successfully, you need to ensure that you have configured the necessary environment variables on GitLab CI, such as &lt;code&gt;CI_REGISTRY_USER&lt;/code&gt;, &lt;code&gt;CI_REGISTRY_PASSWORD&lt;/code&gt;, &lt;code&gt;AWS_ACCESS_KEY_ID&lt;/code&gt;, &lt;code&gt;AWS_SECRET_ACCESS_KEY&lt;/code&gt;, &lt;code&gt;AWS_DEFAULT_REGION&lt;/code&gt;, and &lt;code&gt;KUBECONFIG&lt;/code&gt;. These variables are used to log in to the GitLab registry, authenticate with AWS, and connect to the Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;You can find the files I used here on my &lt;a href="https://github.com/davWK/AWS-EKS-Deployment-with-Gitlab-CI-CD-and-Checkov" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; or &lt;a href="https://gitlab.com/davwoglo/gitlab-ci_showcase.git" rel="noopener noreferrer"&gt;GitLab&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I remain open to contributions and suggestions to improve my work; feel free to share yours by opening an issue.&lt;/p&gt;

&lt;p&gt;Thanks for reading :)&lt;/p&gt;

</description>
      <category>aws</category>
      <category>kubernetes</category>
      <category>gitlab</category>
      <category>cicd</category>
    </item>
    <item>
      <title>My Cloud Resume Challenge</title>
      <dc:creator>David WOGLO</dc:creator>
      <pubDate>Wed, 17 Aug 2022 21:55:00 +0000</pubDate>
      <link>https://dev.to/davwk/my-cloud-resume-challenge-383k</link>
      <guid>https://dev.to/davwk/my-cloud-resume-challenge-383k</guid>
      <description>&lt;h2&gt;
  
  
  How did it all start?
&lt;/h2&gt;

&lt;p&gt;It all started after I passed my certification. I thought to myself: this is it, you've achieved your first short-term goal of getting into the Cloud world, something you'd been studying tirelessly for in recent months. I must admit I thought that once certified, the offers would pour in from every direction. Haha! That's not how it works, and it wasn't going to happen; nobody makes you an offer just because you are certified. Certified people can be found all over the place lately. At best, certification is just the key that opens the door of the Cloud house 😀; it offers the opportunity to be considered, to gain a little visibility. But to get a job (to settle down in the Cloud house 😀) and receive offers the way I imagined, you have to back up your Cloud know-how and knowledge with practical, relevant, and convincing experience. And I can see from afar the question we juniors often ask ourselves: "How can we get experience if we are not given the opportunity to join a company where we can build it? 🤔" Honestly (at least in the IT world), we can do without that: we can demonstrate our skills, experience, and knowledge through projects and challenges that take us through the pitfalls, the mistakes, the solutions, the obstacles, the dark days, and the light, and that help us build and polish a profile well adapted to the needs and problems of companies.&lt;br&gt;
It is with this in mind that, after watching &lt;a href="https://youtu.be/vviS_fHnJu4" rel="noopener noreferrer"&gt;a video&lt;/a&gt; by Forrest Brazeal, I came across the &lt;a href="https://cloudresumechallenge.dev/" rel="noopener noreferrer"&gt;Cloud Resume Challenge&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is The Cloud Resume Challenge ?
&lt;/h2&gt;

&lt;p&gt;The Cloud Resume Challenge is a hands-on project designed to help bridge the gap from Cloud certification to Cloud job. It incorporates many of the skills that real Cloud and DevOps engineers use in their daily work. It is a multi-step resume project (roughly 16 steps), from the creation of a website to the implementation of a CI/CD pipeline, that helps build and demonstrate skills fundamental to pursuing a career as a Cloud Engineer.&lt;/p&gt;

&lt;h2&gt;
  
  
  Certification
&lt;/h2&gt;

&lt;p&gt;The first step of the challenge is to get a Cloud certification at the beginner or associate level. I don't think it's an obligation, but from my personal point of view I strongly recommend it, especially if you are a beginner or have a non-technical background: it will allow you to acquire and validate the fundamentals you need to pursue a career in the cloud. For my part, I passed the associate-level Google Cloud certification, &lt;a href="https://cloud.google.com/certification/cloud-engineer" rel="noopener noreferrer"&gt;the Google Cloud Associate Cloud Engineer&lt;/a&gt;, thanks to the &lt;a href="https://andela.com/alc/google-africa-developer-scholarship-gads/" rel="noopener noreferrer"&gt;Google Africa Developer Scholarship&lt;/a&gt; (GADS) 2021 program.&lt;/p&gt;

&lt;h2&gt;
  
  
  The architecture
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fujhhd0lmjtpt1q14zs6q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fujhhd0lmjtpt1q14zs6q.png" alt="High level Infrastructure architecture" width="800" height="425"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Website
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Setup
&lt;/h3&gt;

&lt;p&gt;To move fast and get to the heart of things, I didn't write the HTML files from scratch; I took a free template instead and adapted it to my needs with the little (and somewhat rusty) knowledge I have of HTML and CSS. Once the website files are ready, the site must be hosted, right? Since the challenge is built on a serverless spirit, the site is not hosted on a server but on Google Cloud's object storage service. By uploading the website content as files to Cloud Storage, we can host a static website on a bucket. &lt;a href="https://cloud.google.com/storage/docs/hosting-static-website" rel="noopener noreferrer"&gt;See how to do it&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Make the site accessible via a domain name and secure it
&lt;/h3&gt;

&lt;p&gt;Usually, to make a site accessible via a domain name, you map the IP address of the server hosting the site to that domain name. But in our case there is no server with an IP address, so what can we do? Fortunately, Google Cloud's managed HTTP load balancer lets us set up a load balancer with a public IP address and point it at the bucket that hosts the static site. This solves the IP address problem: we now have an IP address for our website, so we can link a domain name to it and access the site as usual.&lt;br&gt;
With only that, though, the site is only accessible over HTTP; for better security it must use HTTPS. Again, thanks to Google Cloud's load balancer we can set this up without too much trouble: the HTTPS load balancer has an option for automatic management of SSL certificates that can be activated while setting it up.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;In a corporate environment, though, it is better for the certificates to be managed and controlled by you.&lt;br&gt;
For the domain name I used Namecheap; however, you can also use Google Cloud's Cloud DNS.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Website visitors count
&lt;/h2&gt;

&lt;p&gt;This part of the challenge consists of counting the number of visitors to the site.&lt;br&gt;
For this I wrote some JavaScript alongside the website files on one side, and on the other a Python function hosted on Google Cloud Functions. The JavaScript triggers the Python function, which runs, computes and stores the result in Firestore (the document NoSQL database), and returns the total number of visitors in JSON format to the JS code, which displays it on the site. This happens on every page visit. The Python function here serves as an API, so the JS code never communicates directly with the database.&lt;/p&gt;
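&lt;p&gt;A minimal sketch of what such a function might look like (the names are illustrative, not the exact code from my repo; in the real function, &lt;code&gt;db&lt;/code&gt; would be a Firestore client created with &lt;code&gt;google.cloud.firestore.Client()&lt;/code&gt;):&lt;/p&gt;

```python
# Hypothetical sketch of the visitor-counter function. `db` stands in for a
# Firestore client; any object with the same collection/document interface
# works, which also makes the function easy to test without a real database.
import json

def visitor_count(db):
    doc = db.collection("counters").document("visitors")
    current = doc.get().to_dict() or {}
    count = current.get("count", 0) + 1
    doc.set({"count": count})  # persist the incremented count
    # Return JSON plus a CORS header so the site's JavaScript can call us
    return json.dumps({"count": count}), 200, {"Access-Control-Allow-Origin": "*"}
```

Passing the client in as a parameter is what lets the unit test below swap in a fake database.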

&lt;h2&gt;
  
  
  Test
&lt;/h2&gt;

&lt;p&gt;To ensure that the Python code works as it should and returns the expected result, I wrote a test using the &lt;code&gt;unittest&lt;/code&gt; module from the Python standard library, which helps write and run tests for Python code. Since the JS code needs the visitor-count data in JSON format, the test verifies that the data returned by the function is indeed valid JSON.&lt;/p&gt;
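&lt;p&gt;Here is a minimal sketch of such a check (the function below is a stand-in for the Cloud Function, not the exact code from my repo):&lt;/p&gt;

```python
# Illustrative unittest: verify the function's response parses as valid JSON.
import json
import unittest

def get_visitor_count():
    """Stand-in for the Cloud Function; returns the visitor count as JSON."""
    return json.dumps({"count": 42})

class TestVisitorCountIsJson(unittest.TestCase):
    def test_response_is_valid_json(self):
        payload = json.loads(get_visitor_count())  # raises if not valid JSON
        self.assertIsInstance(payload["count"], int)

if __name__ == "__main__":
    unittest.main(exit=False)
```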

&lt;p&gt;&lt;em&gt;Well I think we can take a coffee break ☕&lt;/em&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;So far, everything has been done by manually clicking around in the Google Cloud console. The following stages bring in the methods and techniques used by DevOps engineers, in particular &lt;a href="https://www.hashicorp.com/resources/what-is-infrastructure-as-code/" rel="noopener noreferrer"&gt;Infrastructure as Code&lt;/a&gt;, &lt;a href="https://about.gitlab.com/topics/gitops/" rel="noopener noreferrer"&gt;GitOps&lt;/a&gt; and &lt;a href="https://about.gitlab.com/topics/ci-cd/" rel="noopener noreferrer"&gt;CI/CD&lt;/a&gt; (I admit it's not that hard in this challenge, but the operating mode is the same).&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Infrastructure as Code
&lt;/h2&gt;

&lt;p&gt;Here I discovered a rather brilliant thing. Before telling you about it, know that this step calls for defining resources in a Terraform template and deploying them using the Terraform CLI. But instead of getting into the hassle of rewriting everything I had done so far in Terraform, testing it, destroying it, aligning it with what already exists, and all that, I discovered a &lt;a href="https://cloud.google.com/sdk/gcloud/reference/beta/resource-config/bulk-export" rel="noopener noreferrer"&gt;Google Cloud tool&lt;/a&gt; that can generate Terraform code for resources in a project, folder, or organization. &lt;br&gt;
The &lt;code&gt;gcloud beta resource-config bulk-export --resource-format=Terraform&lt;/code&gt; command exports the resources currently configured in the project, folder, or organization and prints them to the screen in HCL format.&lt;br&gt;
(&lt;em&gt;Of course that's what I did&lt;/em&gt;)&lt;br&gt;
So what's left for me to do is manage these Terraform templates with a source control system (GitHub in my case), so I can integrate them into the pipeline later. And when I need to make changes to my Cloud resources, I just adjust a few lines and that's it.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;But note that before using this tool, you should already have the basics of Terraform. &lt;a href="https://learn.hashicorp.com/collections/terraform/gcp-get-started" rel="noopener noreferrer"&gt;Here's a great guide to getting started&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  CI/CD
&lt;/h2&gt;

&lt;p&gt;Time to set up the CI/CD pipeline, or rather the pipelines, since there are two: one for the frontend and another for the backend. I created a GitHub repository for the frontend (i.e. the website files), and thanks to Cloud Build the website update is automated, triggered each time I push. On the backend side (i.e. the infrastructure resources defined in a Terraform template), I also created a GitHub repo for this purpose; on each push to that repo (changes to the .tf files), my Cloud resources are automatically updated, whether it's a deletion, a change, or an addition, all without any intervention on my part.&lt;/p&gt;

&lt;h2&gt;
  
  
  In the end ...
&lt;/h2&gt;

&lt;p&gt;Well, that's an overview of how I managed to complete this challenge, which by the way is very rewarding: it allowed me to learn and discover really useful things, to identify my weaknesses and how to strengthen them, and above all to gain relevant experience in the cloud. And I don't intend to stop there 😉&lt;br&gt;
Of course some of the steps were not fun, but thanks to the help of some of my SWE connections, in particular &lt;a href="https://vincentbakpatina.me/" rel="noopener noreferrer"&gt;Vincent Bakpatina&lt;/a&gt;, &lt;a href="https://www.linkedin.com/in/ayao-corneille-allogbalo/" rel="noopener noreferrer"&gt;Corneille ALLOGBALO&lt;/a&gt;, &lt;a href="https://www.linkedin.com/in/reussiteforever/" rel="noopener noreferrer"&gt;Abdel-Khafid ATOKOU&lt;/a&gt; and &lt;a href="https://www.linkedin.com/in/samtou-assekouda-b2a78b174/" rel="noopener noreferrer"&gt;Samtou Assekouda&lt;/a&gt;, I was able to overcome them.&lt;/p&gt;

&lt;h3&gt;
  
  
  Here are some useful resources 👇🏾
&lt;/h3&gt;

&lt;p&gt;Here is &lt;a href="https://github.com/davWK/cloud-resume-challenge.git" rel="noopener noreferrer"&gt;my GitHub repo for the frontend&lt;/a&gt;. As for the backend, I haven't made it public, since it contains some sensitive information.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cloud.google.com/storage/docs/hosting-static-website" rel="noopener noreferrer"&gt;Host a static website on Google Cloud Storage&lt;/a&gt;&lt;br&gt;
&lt;a href="https://github.com/googleCloudPlatform/functions-framework-python" rel="noopener noreferrer"&gt;Functions Framework for Python&lt;/a&gt;&lt;br&gt;
&lt;a href="https://cloud.google.com/community/tutorials/automated-publishing-cloud-build" rel="noopener noreferrer"&gt;Automated static website publishing with Cloud Build&lt;/a&gt;&lt;br&gt;
&lt;a href="https://cloud.google.com/cdn/docs/invalidating-cached-content#gcloud" rel="noopener noreferrer"&gt;Invalidate cached content&lt;/a&gt; &lt;br&gt;
&lt;a href="https://cloud.google.com/architecture/managing-infrastructure-as-code?utm_source=youtube&amp;amp;utm_medium=unpaidsoc&amp;amp;utm_campaign=CDR_mao_gcp_ce93fpqrkck_ServerlessExpeditions_040821&amp;amp;utm_content=description" rel="noopener noreferrer"&gt;Managing infrastructure as code with Terraform, Cloud Build, and GitOps&lt;/a&gt; &lt;br&gt;
&lt;a href="https://cloud.google.com/docs/terraform/resource-management/export" rel="noopener noreferrer"&gt;Export your Google Cloud resources into Terraform format&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>cloudskills</category>
      <category>googlecloud</category>
      <category>challenge</category>
    </item>
    <item>
      <title>Quickly automate resources deployment on Google Cloud using an IaC and CI/CD Platform</title>
      <dc:creator>David WOGLO</dc:creator>
      <pubDate>Tue, 09 Aug 2022 08:27:00 +0000</pubDate>
      <link>https://dev.to/davwk/quickly-automate-resources-deployment-on-google-cloud-using-an-iac-and-cicd-platform-32bl</link>
      <guid>https://dev.to/davwk/quickly-automate-resources-deployment-on-google-cloud-using-an-iac-and-cicd-platform-32bl</guid>
      <description>&lt;p&gt;In this article I will show you in a simple way, how to set up a CI/CD pipeline that automatically deploys  your google cloud infrastructure resources using &lt;a href="https://www.terraform.io/docs" rel="noopener noreferrer"&gt;Terraform&lt;/a&gt;, &lt;a href="https://cloud.google.com/build/docs/overview" rel="noopener noreferrer"&gt;Cloud Build&lt;/a&gt; and Github.&lt;/p&gt;

&lt;h3&gt;
  
  
  Objectives
&lt;/h3&gt;

&lt;p&gt;Automatically deploy resources to Google Cloud from Terraform code hosted in a source control repository. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foiyiw5phqpberyao7qwn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foiyiw5phqpberyao7qwn.png" alt="Image description" width="780" height="360"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Requirements
&lt;/h3&gt;

&lt;p&gt;To follow all the steps of this article, you will need a working Google Cloud account (the free trial works fine), a GitHub account, and some basic knowledge of Google Cloud and Terraform. &lt;/p&gt;

&lt;h3&gt;
  
  
  Granting necessary permissions to Cloud Build
&lt;/h3&gt;

&lt;p&gt;To perform the necessary deployments on the infrastructure, Cloud Build needs the proper permissions. To keep things simple in this lab, I will grant its service account the project Editor role. Get the Cloud Build service account and give it the permissions it needs to make the required changes to the resources. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Of course, in a production environment it is necessary to comply with the principle of least privilege. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;To do so, run the following command in Cloud Shell, replacing &lt;code&gt;theCloudBuildServiceAccount&lt;/code&gt; with your Cloud Build service account email: &lt;br&gt;
&lt;code&gt;gcloud projects add-iam-policy-binding $PROJECT_ID  --member serviceAccount:theCloudBuildServiceAccount --role roles/editor&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;To get the Cloud Build service account, click on Cloud Build, then Settings &lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo2o7l9wz45kto75q71ky.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo2o7l9wz45kto75q71ky.png" alt="Image description" width="369" height="351"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And there you will find the email address of the service account &lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8r1bos92lo44jwpgyuu3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8r1bos92lo44jwpgyuu3.png" alt="Image description" width="800" height="294"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Setup the Github repo and connect Cloud Build to it
&lt;/h3&gt;

&lt;p&gt;Log in to GitHub and create a new repo, then upload your Terraform files or create new ones directly on GitHub. &lt;a href="https://Github.com/davWK/ci-cd-terraform-cloudbuild_basics" rel="noopener noreferrer"&gt;Click here&lt;/a&gt; to fork my example infrastructure files repository, or, if you are comfortable with Terraform and want to deploy a custom infrastructure, write your own from scratch. After that, go to Cloud Build to set up automated deployment with a build trigger: you will use Cloud Build and its build triggers to deploy your resources automatically every time you push a new git commit to the source repository. &lt;/p&gt;
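
&lt;p&gt;If you prefer to write your own files, a minimal Terraform configuration is enough to try the pipeline. The project ID and bucket name below are placeholders:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# main.tf - a minimal example resource for the pipeline to manage
provider "google" {
  project = "your-project-id"  # replace with your own project ID
  region  = "us-central1"
}

# A simple storage bucket; bucket names must be globally unique
resource "google_storage_bucket" "demo" {
  name     = "your-demo-bucket-name"
  location = "US"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;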

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Go to &lt;a href="https://console.cloud.google.com/cloud-build/" rel="noopener noreferrer"&gt;Cloud Build&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;On the left, select &lt;strong&gt;Triggers&lt;/strong&gt; &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click on &lt;strong&gt;create trigger&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Give it a name, and for the event choose &lt;strong&gt;push to a branch&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For the source, select &lt;strong&gt;repository&lt;/strong&gt; and click &lt;strong&gt;connect new repository&lt;/strong&gt; &lt;br&gt;
Here it is possible to link a GitHub repo to Cloud Build either by mirroring it to Cloud Source Repositories or by using the Google Cloud Build GitHub app. We will use the app in this case; &lt;br&gt;
&lt;a href="https://cloud.google.com/architecture/managing-infrastructure-as-code?utm_source=youtube&amp;amp;utm_medium=unpaidsoc&amp;amp;utm_campaign=CDR_mao_gcp_ce93fpqrkck_ServerlessExpeditions_040821&amp;amp;utm_content=description#directly_connecting_cloud_build_to_your_Github_repository" rel="noopener noreferrer"&gt;see how to configure the application&lt;/a&gt;. After configuring the app,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Back on the create trigger page, click on repository and choose the newly created repository&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For the branch, set it to &lt;strong&gt;^master$&lt;/strong&gt; or &lt;strong&gt;^main$&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For the configuration type, choose &lt;strong&gt;Cloud Build configuration file (yaml or json)&lt;/strong&gt;,&lt;br&gt;
and in your GitHub repo create a cloudbuild.yaml with the content below.&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;steps:
- id: 'tf init'
  name: 'hashicorp/terraform:1.0.0'
  entrypoint: 'sh'
  args: 
  - '-c'
  - |
      terraform init

- id: 'tf apply'
  name: 'hashicorp/terraform:1.0.0'
  entrypoint: 'sh'
  args: 
  - '-c'
  - |
      terraform apply -auto-approve
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
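
&lt;p&gt;As a side note, a variant I find useful (not part of the setup above) is to add a &lt;code&gt;terraform plan&lt;/code&gt; step between init and apply, so the build log shows what will change before anything is applied:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Extra step to insert between 'tf init' and 'tf apply'
- id: 'tf plan'
  name: 'hashicorp/terraform:1.0.0'
  entrypoint: 'sh'
  args:
  - '-c'
  - |
      terraform plan
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;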



&lt;p&gt;Back on the trigger page, for the location, if you selected repository, enter the path to the YAML file, or choose inline (in that case you don't need to create the YAML file in the repo; instead, paste the YAML content directly into the code editor).&lt;br&gt;
Leave the other values at their defaults and click on create &lt;/p&gt;

&lt;p&gt;Voila :) The deployment of your resources should start automatically when you push the YAML file created previously. If not, you can run the trigger manually the first time; after that, as soon as you update your Terraform configuration, your resources will be updated automatically. &lt;/p&gt;

</description>
      <category>googlecloud</category>
      <category>terraform</category>
      <category>github</category>
      <category>cicd</category>
    </item>
  </channel>
</rss>
