<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: #SirPhemmiey</title>
    <description>The latest articles on DEV Community by #SirPhemmiey (@oluwafemiakind1).</description>
    <link>https://dev.to/oluwafemiakind1</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F541036%2F38d1c5ff-fa11-4182-aa18-6732421bd632.jpg</url>
      <title>DEV Community: #SirPhemmiey</title>
      <link>https://dev.to/oluwafemiakind1</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/oluwafemiakind1"/>
    <language>en</language>
    <item>
      <title>Circuit Breakers in Go: Stop Cascading Failures</title>
      <dc:creator>#SirPhemmiey</dc:creator>
      <pubDate>Fri, 07 Jun 2024 23:32:27 +0000</pubDate>
      <link>https://dev.to/oluwafemiakind1/circuit-breakers-in-go-stop-cascading-failures-3p1l</link>
      <guid>https://dev.to/oluwafemiakind1/circuit-breakers-in-go-stop-cascading-failures-3p1l</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yaXq1Tmi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2ACQH7KC9X_AIOOhs8sdLsUA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yaXq1Tmi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/1024/1%2ACQH7KC9X_AIOOhs8sdLsUA.png" alt="front-cover" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Circuit Breakers&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;A circuit breaker detects failures and encapsulates the logic of handling them in a way that prevents a failure from constantly recurring. For example, circuit breakers are useful when dealing with network calls to external services, databases, or any part of your system that might fail temporarily. By using a circuit breaker, you can prevent cascading failures, manage transient errors, and keep the system stable and responsive even while part of it is down.&lt;/p&gt;

&lt;h4&gt;
  
  
  Cascading Failures
&lt;/h4&gt;

&lt;p&gt;Cascading failures occur when a failure in one part of the system triggers failures in other parts, leading to widespread disruption. An example is when a microservice in a distributed system becomes unresponsive, causing dependent services to time out and eventually fail. Depending on the scale of the application, the impact of these failures can be catastrophic, degrading performance and harming the user experience.&lt;/p&gt;

&lt;h3&gt;
  
  
  Circuit Breaker Patterns
&lt;/h3&gt;

&lt;p&gt;A circuit breaker is itself a technique/pattern, and it operates in three distinct states, which we will walk through:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Closed State:&lt;/strong&gt; In the closed state, the circuit breaker lets all requests pass through to the target service as normal. If the requests succeed, the circuit remains closed; if a certain threshold of failures is reached, the circuit transitions to the open state. Think of it as a fully operational service where users can log in and access data without issues. Everything is running smoothly.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--W1vu8kwC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/531/0%2AgjBBDwp5yVcuUz1F" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--W1vu8kwC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/531/0%2AgjBBDwp5yVcuUz1F" alt="close state" width="531" height="61"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Open State:&lt;/strong&gt; In the open state, the circuit breaker immediately fails all incoming requests without attempting to contact the target service. This state exists to prevent further overload of the failing service and give it time to recover. After a predefined timeout, the circuit breaker moves to the half-open state. A relatable example: imagine an online store where every purchase attempt suddenly fails. To avoid overwhelming the system, the store temporarily stops accepting new purchase requests.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gHvcOzsE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/551/0%2ADq1-m916GiNq0f_t" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gHvcOzsE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/551/0%2ADq1-m916GiNq0f_t" alt="open state" width="551" height="62"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Half-Open State:&lt;/strong&gt; In the half-open state, the circuit breaker allows a (configurable) limited number of test requests through to the target service. If these requests succeed, the circuit transitions back to the closed state; if they fail, it returns to the open state. In the online-store example from the open state above, this is where the store starts allowing a few purchase attempts to see whether the issue has been fixed. If those attempts succeed, the store fully reopens its service to new purchase requests.&lt;/p&gt;

&lt;p&gt;This diagram shows the circuit breaker testing whether requests to &lt;strong&gt;Service B&lt;/strong&gt; succeed; here they fail, so the circuit breaks (opens) again:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--G0xzKPtY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/581/0%2ASdSlA3UaiNnuU-D7" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--G0xzKPtY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/581/0%2ASdSlA3UaiNnuU-D7" alt="half-open-1" width="581" height="61"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The follow-up diagram shows that when the test requests to &lt;strong&gt;Service B&lt;/strong&gt; succeed, the circuit is closed and all further calls are routed to &lt;strong&gt;Service B&lt;/strong&gt; again:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--iVVoWwgO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/581/0%2AKzUKstBSz06cWL-E" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--iVVoWwgO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn-images-1.medium.com/max/581/0%2AKzUKstBSz06cWL-E" alt="half-open-2" width="581" height="61"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt; : Key configurations for a circuit breaker include the failure threshold (number of failures needed to open the circuit), the timeout for the open state, and the number of test requests in the half-open state.&lt;/p&gt;

&lt;h3&gt;
  
  
  Implementing Circuit Breakers in Go
&lt;/h3&gt;

&lt;p&gt;It’s important to mention that prior knowledge of Go is required to follow along in this article.&lt;/p&gt;

&lt;p&gt;As with any software engineering pattern, circuit breakers can be implemented in many languages, but this article focuses on Go. While several libraries exist for this purpose, such as goresilience, go-resiliency, and gobreaker, we will use the gobreaker library.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pro Tip&lt;/strong&gt; : To see the internal implementation of the gobreaker package, check &lt;a href="https://github.com/sony/gobreaker/blob/master/v2/gobreaker.go"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Let’s consider a simple Golang application where a circuit breaker is implemented to handle calls to an external API. This basic example demonstrates how to wrap an external API call with the circuit breaker technique:&lt;/p&gt;

&lt;p&gt;Let’s touch on a few important things:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;gobreaker.NewCircuitBreaker&lt;/code&gt;&lt;/strong&gt; function initializes the circuit breaker with our custom settings&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;cb.Execute&lt;/code&gt;&lt;/strong&gt; method wraps the HTTP request, automatically managing the circuit state.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;MaxRequests&lt;/strong&gt; is the maximum number of requests allowed to pass through when the state is half-open&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Interval&lt;/strong&gt; is the cyclic period of the closed state for the circuit breaker to clear the internal counts&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Timeout&lt;/strong&gt; is the duration before transitioning from open to half-open state.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ReadyToTrip&lt;/strong&gt; is called with a copy of counts whenever a request fails in the closed state. If ReadyToTrip returns true, the circuit breaker will be placed into the open state. In our case, it returns true once requests have failed more than three consecutive times.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;OnStateChange&lt;/strong&gt; is called whenever the state of the circuit breaker changes. You would usually want to collect the metrics of the state change here and report to any metrics collector of your choice.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Let’s write some unit tests to verify our circuit breaker implementation. I will only be explaining the most critical unit tests to understand. You can check &lt;a href="https://github.com/SirPhemmiey/circuit-breaker-with-go/blob/main/main_test.go"&gt;here&lt;/a&gt; for the full code.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;We will write a test that simulates consecutive failed requests and checks that the circuit breaker trips to the open state. Essentially, when the fourth failure occurs, we expect the circuit breaker to trip (open), since our condition is &lt;code&gt;counts.ConsecutiveFailures &amp;gt; 3&lt;/code&gt;. Here's what the test looks like:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; t.Run("FailedRequests", func(t *testing.T) {
         // Override callExternalAPI to simulate failure
         callExternalAPI = func() (int, error) {
             return 0, errors.New("simulated failure")
         }

         for i := 0; i &amp;lt; 4; i++ {
             _, err := cb.Execute(func() (interface{}, error) {
                 return callExternalAPI()
             })
             if err == nil {
                 t.Fatalf("expected error, got none")
             }
         }

         if cb.State() != gobreaker.StateOpen {
             t.Fatalf("expected circuit breaker to be open, got %v", cb.State())
         }
     })
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;We will test the &lt;strong&gt;open&lt;/strong&gt; → &lt;strong&gt;half-open&lt;/strong&gt; → &lt;strong&gt;closed&lt;/strong&gt; transitions. First we simulate an open circuit and wait out the timeout. After the timeout, at least one successful request is needed for the circuit to move to half-open, and from half-open, further successful requests (up to &lt;code&gt;MaxRequests&lt;/code&gt;) are needed before the circuit fully closes again. If any request in this window fails, the circuit goes back to open. Here's what the test looks like:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; //Simulates the circuit breaker being open, 
 //wait for the defined timeout, 
 //then check if it closes again after a successful request.
     t.Run("RetryAfterTimeout", func(t *testing.T) {
         // Simulate circuit breaker opening
         callExternalAPI = func() (int, error) {
             return 0, errors.New("simulated failure")
         }

         for i := 0; i &amp;lt; 4; i++ {
             _, err := cb.Execute(func() (interface{}, error) {
                 return callExternalAPI()
             })
             if err == nil {
                 t.Fatalf("expected error, got none")
             }
         }

         if cb.State() != gobreaker.StateOpen {
             t.Fatalf("expected circuit breaker to be open, got %v", cb.State())
         }

         // Wait for timeout duration
         time.Sleep(settings.Timeout + 1*time.Second)

         //We expect that after the timeout period, 
         //the circuit breaker should transition to the half-open state. 

         // Restore original callExternalAPI to simulate success
         callExternalAPI = func() (int, error) {
             resp, err := http.Get(server.URL)
             if err != nil {
                 return 0, err
             }
             defer resp.Body.Close()
             return resp.StatusCode, nil
         }

         _, err := cb.Execute(func() (interface{}, error) {
             return callExternalAPI()
         })
         if err != nil {
             t.Fatalf("expected no error, got %v", err)
         }

         if cb.State() != gobreaker.StateHalfOpen {
             t.Fatalf("expected circuit breaker to be half-open, got %v", cb.State())
         }

         //After verifying the half-open state, another successful request is simulated to ensure the circuit breaker transitions back to the closed state.
         for i := 0; i &amp;lt; int(settings.MaxRequests); i++ {
             _, err = cb.Execute(func() (interface{}, error) {
                 return callExternalAPI()
             })
             if err != nil {
                 t.Fatalf("expected no error, got %v", err)
             }
         }

         if cb.State() != gobreaker.StateClosed {
             t.Fatalf("expected circuit breaker to be closed, got %v", cb.State())
         }
     })
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Let’s test the &lt;code&gt;ReadyToTrip&lt;/code&gt; condition, which now trips the breaker once there are more than 2 consecutive failures. We'll keep a variable that tracks consecutive failures, and update the &lt;code&gt;ReadyToTrip&lt;/code&gt; callback to trip on &lt;code&gt;counts.ConsecutiveFailures &amp;gt; 2&lt;/code&gt;. The test simulates failures, verifies the count, and checks that the circuit breaker transitions to the open state after the specified number of failures.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; t.Run("ReadyToTrip", func(t *testing.T) {
         failures := 0
         settings.ReadyToTrip = func(counts gobreaker.Counts) bool {
             failures = int(counts.ConsecutiveFailures)
             return counts.ConsecutiveFailures &amp;gt; 2 // Trip after 2 failures
         }

         cb = gobreaker.NewCircuitBreaker(settings)

         // Simulate failures
         callExternalAPI = func() (int, error) {
             return 0, errors.New("simulated failure")
         }
         for i := 0; i &amp;lt; 3; i++ {
             _, err := cb.Execute(func() (interface{}, error) {
                 return callExternalAPI()
             })
             if err == nil {
                 t.Fatalf("expected error, got none")
             }
         }

         if failures != 3 {
             t.Fatalf("expected 3 consecutive failures, got %d", failures)
         }
         if cb.State() != gobreaker.StateOpen {
             t.Fatalf("expected circuit breaker to be open, got %v", cb.State())
         }
     })
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Advanced Strategies
&lt;/h3&gt;

&lt;p&gt;We can take it a step further by adding an exponential backoff strategy to our circuit breaker implementation. To keep this article simple and concise, we will demonstrate only the exponential backoff strategy; however, other advanced strategies are worth mentioning, such as load shedding, bulkheading, fallback mechanisms, and context/cancellation. These strategies further enhance the robustness and functionality of circuit breakers. Here’s an example of using the exponential backoff strategy:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Exponential Backoff&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://gist.githubusercontent.com/SirPhemmiey/a19af4b469d5a67787ba14f8eeccb1d4"&gt;Circuit breaker with exponential backoff&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s make a couple of things clear:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Custom Backoff Function:&lt;/strong&gt; The exponentialBackoff function implements an exponential backoff strategy with jitter. It calculates the backoff time from the number of attempts, ensuring that the delay grows exponentially with each retry.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Handling Retries:&lt;/strong&gt; As you can see in the &lt;code&gt;/api&lt;/code&gt; handler, the logic now includes a loop that attempts to call the external API up to a specified number of attempts (&lt;code&gt;attempts := 5&lt;/code&gt;). After each failed attempt, we wait for a duration determined by the &lt;code&gt;exponentialBackoff&lt;/code&gt; function before retrying.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Circuit Breaker Execution:&lt;/strong&gt; The circuit breaker is used within the loop. If the external API call succeeds (&lt;code&gt;err == nil&lt;/code&gt;), the loop breaks, and the successful result is returned. If all attempts fail, an HTTP 503 (Service Unavailable) error is returned.&lt;/p&gt;

&lt;p&gt;Integrating a custom backoff strategy into a circuit breaker implementation helps handle transient errors more gracefully. The increasing delays between retries reduce the load on failing services, giving them time to recover. As shown in the code above, the exponentialBackoff function adds delays between retries when calling an external API.&lt;/p&gt;

&lt;p&gt;Additionally, we can integrate metrics and logging to monitor circuit breaker state changes using tools like Prometheus for real-time monitoring and alerting. Here’s a simple example:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://gist.githubusercontent.com/SirPhemmiey/e9af8e9d0e0adf13e2058beb1fc3ee42/"&gt;Implementing a circuit breaker pattern with advanced strategies in go&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you’ll see, we have now done the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;In L16–21, we define a prometheus counter vector to keep track of the number of requests and their state (success, failure, circuit breaker state changes).&lt;/li&gt;
&lt;li&gt;In L25–26, the metrics defined are registered with Prometheus in the init function.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Pro Tip&lt;/strong&gt; : The &lt;code&gt;init&lt;/code&gt; function in Go initializes the state of a package before the &lt;code&gt;main&lt;/code&gt; function or any other code in the package is executed. In this case, the init function registers the requestCount metric with Prometheus, ensuring that Prometheus is aware of the metric and can start collecting data as soon as the application starts running.&lt;/p&gt;
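&lt;p&gt;As a tiny illustration of that ordering (using a stand-in map here rather than a real Prometheus registry):&lt;/p&gt;

```go
package main

import "fmt"

// registry stands in for the Prometheus default registry in this sketch.
var registry = map[string]bool{}

// init runs once per package, before main, which is why it is a natural
// place to register metrics: collection can begin the moment the app starts.
func init() {
	registry["request_count"] = true // stand-in for prometheus.MustRegister(requestCount)
}

func main() {
	fmt.Println("request_count registered:", registry["request_count"]) // prints: request_count registered: true
}
```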

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;We create the circuit breaker with custom settings, including the ReadyToTrip function that increases the failure counter and determines when to trip the circuit.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We use &lt;code&gt;OnStateChange&lt;/code&gt; to log state changes and increment the corresponding Prometheus metric.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We expose the Prometheus metrics at the &lt;code&gt;/metrics&lt;/code&gt; endpoint.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Wrapping Up
&lt;/h3&gt;

&lt;p&gt;To wrap up this article, I hope you saw how circuit breakers play a huge role in building resilient and reliable systems. By proactively preventing cascading failures, they fortify the reliability of microservices and distributed systems, ensuring a seamless user experience even in the face of adversity.&lt;/p&gt;

&lt;p&gt;Keep in mind, any system designed for scalability must incorporate strategies to gracefully handle failures and swiftly recover —  &lt;strong&gt;Oluwafemi&lt;/strong&gt; , &lt;strong&gt;2024&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally published at&lt;/em&gt; &lt;a href="https://oluwafemiakinde.dev/circuit-breakers-in-go-preventing-cascading-failures"&gt;&lt;em&gt;https://oluwafemiakinde.dev&lt;/em&gt;&lt;/a&gt; &lt;em&gt;on June 7, 2024.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>microservices</category>
      <category>resilience</category>
      <category>go</category>
      <category>circuitbreaker</category>
    </item>
    <item>
      <title>The Beacon API: Enhancing Web Performance with Background Data Transmission</title>
      <dc:creator>#SirPhemmiey</dc:creator>
      <pubDate>Mon, 22 Apr 2024 22:36:20 +0000</pubDate>
      <link>https://dev.to/oluwafemiakind1/the-beacon-api-enhancing-web-performance-with-background-data-transmission-716</link>
      <guid>https://dev.to/oluwafemiakind1/the-beacon-api-enhancing-web-performance-with-background-data-transmission-716</guid>
      <description>&lt;p&gt;We all know that sending data from a client to a server, especially as a web page is closing, is essential. This article explains how Beacon API makes this easy - a web standard designed to send small bits of data to the server without slowing down the page or disrupting the user experience.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What is the Beacon API?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The Beacon API is a JavaScript-based interface that allows web pages to send data to a server in the background, asynchronously, and without waiting for a response. As you would have guessed, this is useful for sending analytics or diagnostic information that doesn't typically require a response from your server or backend or just before the user leaves a page (for example, during the &lt;code&gt;unload&lt;/code&gt; or &lt;code&gt;beforeunload&lt;/code&gt; events).&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Key Features of the Beacon API&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Asynchronous Data Transfer:&lt;/strong&gt; Unlike AJAX requests, Beacon requests do not require a response from the server, allowing the user to navigate away from the page immediately without delay.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Reliability:&lt;/strong&gt; The data is transmitted to the server more reliably. Even if the page is being &lt;code&gt;unloaded&lt;/code&gt;, the browser will attempt to send the Beacon data in the background.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Efficiency:&lt;/strong&gt; It uses HTTP POST requests and does not impact the performance or the loading time of the web page.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Let's get to it. How Does It Work?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The Beacon API's main function is &lt;code&gt;navigator.sendBeacon(url, data)&lt;/code&gt;, where &lt;code&gt;url&lt;/code&gt; is the server endpoint to which data is sent, and &lt;code&gt;data&lt;/code&gt; is the payload. The data can be any of several types, including &lt;code&gt;ArrayBuffer&lt;/code&gt;, &lt;code&gt;Blob&lt;/code&gt;, &lt;code&gt;DOMString&lt;/code&gt;, &lt;code&gt;FormData&lt;/code&gt;, or &lt;code&gt;URLSearchParams&lt;/code&gt; as long as that's what your server or backend is expecting.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Implementation Steps&lt;/strong&gt;
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Check for Support:&lt;/strong&gt; It's usually good practice to first check whether the user's browser supports the Beacon API. If it does, go ahead and use it.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; if (navigator.sendBeacon) {
     // Beacon API is supported
 }

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Sending Data:&lt;/strong&gt; To send data with the Beacon API, we simply call &lt;code&gt;navigator.sendBeacon()&lt;/code&gt; with the endpoint and data to be sent.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; window.addEventListener('unload', function(event) {
   var data = { userAction: 'pageExit', timestamp: Date.now() };
   var beaconUrl = 'https://example.com/analytics';
   navigator.sendBeacon(beaconUrl, JSON.stringify(data));
 });

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Server-Side Handling:&lt;/strong&gt; On the server, you'll receive the Beacon request just like any other POST request. The data can be processed or stored as needed.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Let's see more examples and use cases&lt;/strong&gt;
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Sending Analytics Data on Page Unload:&lt;/strong&gt; With the Beacon API, you can send user interaction data to an analytics endpoint when the user leaves the page. This is useful for capturing page session times, button clicks, or any actions the user performed on the page.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; window.addEventListener('beforeunload', function(event) {
   const analyticsData = {
     sessionDuration: Date.now() - window.sessionStartTime, // Assuming sessionStartTime was recorded at page load
     actions: window.userActions, // Assuming userActions were recorded during the session
   };

   navigator.sendBeacon('https://youranalyticsendpoint.com/data', JSON.stringify(analyticsData));
 });

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Tracking Form Data Without Submission:&lt;/strong&gt; The Beacon API can also power draft logic that partially saves form details: it sends the data to your backend asynchronously, which is useful for building an autosave or draft feature in a web app.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
 function saveDraft() {
   const formElement = document.getElementById('your-form-id');
   const formData = new FormData(formElement);

   // send draft data to the server using Beacon API
   const draftUrl = 'https://yourserver.com/saveDraft';
   const success = navigator.sendBeacon(draftUrl, formData);
   console.log('Draft save initiated:', success ? 'Success' : 'Failed');
 }

 // trigger the saveDraft function on form input (throttled)
 formElement.addEventListener('input', () =&amp;gt; {
   // it's usually a best practice to use a throttle/debounce function to 
   //prevent too many Beacon requests
   if (window.draftSaveTimeout) {
     clearTimeout(window.draftSaveTimeout);
   }

   window.draftSaveTimeout = setTimeout(saveDraft, 500); // save draft every 500 ms of inactivity
 });

 // Additional save on page unload
 window.addEventListener('unload', saveDraft);

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;Advantage Over Traditional Methods&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Before the Beacon API, sending data to the server during &lt;code&gt;unload&lt;/code&gt; events was less reliable. Traditional AJAX requests might be cancelled if they were initiated during these events, leading to data loss. The Beacon API ensures that the data is transmitted even after the page has started unloading.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Limitations and Considerations&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Since the Beacon API does not expect a response from the server, it's not suitable for tasks that require any response from the server.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Some browsers may impose their own limits on the size of the data payload.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In general, the Beacon API provides a reliable, efficient method to send data to the server without affecting the user experience.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Leveraging the Power of Google Cloud Preemptible VMs for Cost-Effective Computing</title>
      <dc:creator>#SirPhemmiey</dc:creator>
      <pubDate>Fri, 23 Jun 2023 06:25:44 +0000</pubDate>
      <link>https://dev.to/oluwafemiakind1/leveraging-the-power-of-google-cloud-preemptible-vms-for-cost-effective-computing-2ala</link>
      <guid>https://dev.to/oluwafemiakind1/leveraging-the-power-of-google-cloud-preemptible-vms-for-cost-effective-computing-2ala</guid>
      <description>&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;In the world of cloud computing, optimizing costs without sacrificing performance is a constant challenge. One way that Google Cloud offers to address this is through preemptible virtual machines.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;For AWS folks, it's called AWS EC2 Spot Instance. The idea behind a spot instance and preemptible VM is the same.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Essentially, preemptible VMs provide a cost-effective solution for running fault-tolerant and non-critical workloads. In this article, we will explore the benefits of preemptible VMs/Spot Instances, their limitations and use cases, how the preemption process works, creating a preemptible VM from a regular instance, and more.&lt;/p&gt;

&lt;h2&gt;
  
  
  What exactly is a Preemptible VM?
&lt;/h2&gt;

&lt;p&gt;Google Cloud preemptible VMs are similar to regular instances but come with a significant cost advantage. The tradeoff is that these VMs/Spot Instances may be terminated by Google/AWS at any time, albeit with 30 seconds' (GCP) or 2 minutes' (AWS) notice. While this means they are not suitable for long-running, critical tasks, they are ideal for batch processing, distributed computing, and fault-tolerant applications.&lt;/p&gt;

&lt;p&gt;Before we move ahead to spin up some preemptible VM instances (in a follow-up article), I'd like to highlight the benefits, limitations and use cases.&lt;/p&gt;

&lt;h2&gt;
  
  
  Benefits of Preemptible VMs
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cost Efficiency&lt;/strong&gt; : Preemptible VMs are priced significantly lower than regular instances, providing cost savings of up to 80%. This makes them an attractive option for workloads that can tolerate occasional interruptions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Scalability&lt;/strong&gt; : By leveraging preemptible VMs, you can easily scale your infrastructure at a fraction of the cost. This is particularly advantageous for bursty (occurring at intervals in a short timespan) workloads that require additional resources &lt;strong&gt;temporarily&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;High Availability&lt;/strong&gt; : Preemptible VMs can be used in combination with managed instance groups and autoscaling to ensure high availability and fault tolerance. The system automatically replaces preempted VMs with new ones, maintaining the desired level of capacity.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Limitations of Preemptible VMs
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Limited Availability&lt;/strong&gt; : Preemptible VMs are available on a "best-effort" basis and their availability is not guaranteed. They are offered at a significantly reduced price compared to regular VMs because Google Cloud can terminate them at any time. So, this means that they may not be suitable for applications requiring strict uptime or critical workloads.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Maximum Runtime&lt;/strong&gt; : Preemptible VMs have a maximum runtime limit of 24 hours. After this time, they will be automatically terminated by Google Cloud. If your application or job requires longer execution times, you need to account for this limitation and design your solution accordingly :).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Termination at Short Notice&lt;/strong&gt; : Preemptible VMs can be terminated at any time with very little advance warning. Google Cloud typically provides only a 30-second notification before termination, so your applications and processes must be designed to handle sudden interruptions and gracefully recover or resume operations when a VM is preempted.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Limited Quantity&lt;/strong&gt; : There is a finite capacity of preemptible VMs available within a specific region and zone. If the demand for preemptible VMs exceeds the available capacity, you may not be able to launch new instances until capacity becomes available.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Resource Constraints&lt;/strong&gt; : Preemptible VMs have some resource constraints compared to regular VMs. For example, they cannot be live migrated to other hosts, and they have a limited amount of CPU and memory resources. These constraints may impact certain workloads or applications that require specific configurations or resource-intensive operations.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Despite these limitations, preemptible VMs can still be a cost-effective option for the use cases outlined below:&lt;/p&gt;

&lt;h2&gt;
  
  
  Use Cases of Preemptible VMs
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Batch Processing&lt;/strong&gt; : Preemptible VMs are ideal for batch processing workloads that can be divided into smaller tasks or jobs. You can leverage the significant cost savings offered by preemptible VMs to run large-scale data processing, ETL or other batch jobs. If a batch job is preempted, it will be restarted on a new preemptible VM. However, the job may lose some of its state, so it is important to design the job in a way that minimizes the impact of preemptions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Test and Development Environments&lt;/strong&gt; : Preemptible VMs can be used for creating temporary or short-term test and development environments. For instance, if your dev team requires isolated environments for testing, experimenting, or prototyping, preemptible VMs can provide the necessary resources at a much lower cost.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Non-Critical Workloads&lt;/strong&gt; : Applications or workloads that can tolerate occasional interruptions or delays are good candidates for preemptible VMs. Examples include non-production environments, non-critical background tasks, non-time-sensitive data processing, or non-mission-critical applications.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;DevOps:&lt;/strong&gt; Preemptible VMs can be used for DevOps tasks, such as running continuous integration and continuous delivery (CI/CD) pipelines. These tasks can be interrupted and restarted without any loss of data, so they are well-suited for preemptible VMs. By leveraging the cost savings, you can scale your CI/CD infrastructure without incurring high expenses during idle or low-demand periods.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;High-Performance Computing (HPC)&lt;/strong&gt;: For certain HPC workloads, preemptible VMs can be used to increase compute capacity while managing costs. Tasks such as rendering, simulation, scientific calculations, or distributed computing can benefit from the availability of preemptible VMs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Web Crawlers or Scrapers&lt;/strong&gt; : Preemptible VMs can be used for web crawling or scraping tasks where the workload can be divided into smaller chunks or parallelized. The lower costs associated with preemptible VMs make them an attractive option for scraping data from websites or conducting periodic web crawls.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Overall, batch jobs can run well on Google Cloud preemptible VMs, but it is important to design them to minimize the impact of preemptions. With that in place, you can save money on your batch processing jobs without sacrificing reliability. Always assess your application's requirements, resilience, and cost considerations before incorporating preemptible VMs into your infrastructure.&lt;/p&gt;

&lt;p&gt;💡&lt;em&gt;Spot VMs are the latest version of preemptible VMs. New and existing preemptible VMs continue to be supported, and preemptible VMs use the same pricing model as Spot VMs. However, Spot VMs provide new features that preemptible VMs do not support. For example, preemptible VMs can only run for up to 24 hours at a time, but Spot VMs do not have a maximum runtime unless you&lt;/em&gt; &lt;a href="https://cloud.google.com/compute/docs/instances/limit-vm-runtime"&gt;&lt;em&gt;limit the runtime&lt;/em&gt;&lt;/a&gt;&lt;em&gt;. You can read more on them and decide which one to use for your project and/or tasks.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Preemption Process
&lt;/h2&gt;

&lt;p&gt;According to Google Cloud documentation, the preemption process is as follows:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;When Compute Engine needs the capacity back, Google sends a preemption notification as an Advanced Configuration and Power Interface (ACPI) G2 Soft Off signal -- a standard motherboard soft-shutdown command that every OS can handle -- telling the system to shut down.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Ideally, the Soft Off signal then triggers a shutdown script that users have previously configured to save any system state and application data, terminate processes and stop the VM.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If the instance is still running after 30 seconds, GCE sends an ACPI G3 Mechanical Off signal to the OS, which is the equivalent of pulling the power on a server.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The Compute Engine instance then enters a &lt;a href="https://cloud.google.com/compute/docs/instances/stopping-or-deleting-an-instance"&gt;terminated state&lt;/a&gt;, which preserves its configuration settings, metadata and attachments to other resources -- such as storage volumes -- but destroys in-memory data and VM state. Users can choose to restart or delete an instance in a terminated state, or leave it terminated indefinitely.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
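&lt;p&gt;Step 2 above depends on a user-supplied shutdown script. The sketch below shows the kind of work such a script might squeeze into the roughly 30-second window; the file paths and checkpoint format are made up for illustration, and a real script would also upload the checkpoint to durable storage such as Cloud Storage:&lt;/p&gt;

```shell
#!/bin/bash
# Hypothetical preemption shutdown script: persist what we can before power-off.
CHECKPOINT=/tmp/job-checkpoint.json

# Record when the preemption notice arrived (handy for post-mortems).
date +%s > /tmp/preemption-notice-time

# Serialize the job's progress so a replacement VM can resume from it.
echo '{"last_processed_id": 1024}' > "$CHECKPOINT"

# A real script would now copy the checkpoint off the VM and stop the
# application service gracefully before the instance is powered off.
echo "state saved to $CHECKPOINT"
```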

&lt;p&gt;Preempted instances still appear in your project, but you are not charged for the instance hours while it remains in a &lt;code&gt;TERMINATED&lt;/code&gt; state.&lt;/p&gt;

&lt;h2&gt;
  
  
  Converting a regular VM into a preemptible VM
&lt;/h2&gt;

&lt;p&gt;There's no direct way to convert an existing regular VM into a preemptible VM, but there's a workaround, and I'll show you the steps.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1:
&lt;/h3&gt;

&lt;p&gt;Go to the Snapshots page &lt;a href="https://console.cloud.google.com/compute/snapshots"&gt;here&lt;/a&gt; and click on "Create Snapshot".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--YGicSDnO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1687303078121/ed65c6b4-deb6-4f29-a9b2-0b4b058c365f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--YGicSDnO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1687303078121/ed65c6b4-deb6-4f29-a9b2-0b4b058c365f.png" alt="" width="800" height="35"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2:
&lt;/h3&gt;

&lt;p&gt;Input the name of your snapshot, click on "Source disk" to choose which VM instance you want to snapshot, and then click on "Create".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yqmIpx0H--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1687303118572/5ffb4755-e981-468a-9ff9-3f823ec36668.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yqmIpx0H--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1687303118572/5ffb4755-e981-468a-9ff9-3f823ec36668.png" alt="" width="800" height="669"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3:
&lt;/h3&gt;

&lt;p&gt;Once a snapshot is created, click on it to view details and then click on "Create Instance".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jvm0CIeF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1687303176334/cbc4c9d0-aa71-478b-9ea4-74e2a7a5e8f6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jvm0CIeF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1687303176334/cbc4c9d0-aa71-478b-9ea4-74e2a7a5e8f6.png" alt="" width="800" height="255"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 4:
&lt;/h3&gt;

&lt;p&gt;Scroll down to the "Availability policies" section. "Standard" is selected by default, but select "Spot" because that's what we want to create. You will also notice that the Spot price is considerably lower than the Standard price, precisely because Spot capacity is preemptible. Once that's done, just click on "Create".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--E6YlFvfC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1687303202174/98c972ce-93ea-4c47-9a26-15f9313e6e8f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--E6YlFvfC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1687303202174/98c972ce-93ea-4c47-9a26-15f9313e6e8f.png" alt="" width="800" height="375"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--slP9jYPZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1687303219109/03bd9567-22c9-4fa3-9d34-44f350a66444.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--slP9jYPZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1687303219109/03bd9567-22c9-4fa3-9d34-44f350a66444.png" alt="" width="800" height="375"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 5:
&lt;/h3&gt;

&lt;p&gt;That's it. You've successfully created a preemptible VM instance from a regular VM!&lt;/p&gt;
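&lt;p&gt;If you prefer the CLI, the same snapshot-based workaround can be sketched with &lt;code&gt;gcloud&lt;/code&gt; (the disk, snapshot, and instance names below are placeholders):&lt;/p&gt;

```shell
# 1. Snapshot the boot disk of the regular VM (placeholder names and zone).
gcloud compute disks snapshot my-regular-vm \
  --snapshot-names=my-vm-snapshot --zone=us-central1-a

# 2. Create a new disk from that snapshot.
gcloud compute disks create my-spot-disk \
  --source-snapshot=my-vm-snapshot --zone=us-central1-a

# 3. Boot a preemptible instance from the new disk.
gcloud compute instances create my-spot-vm \
  --zone=us-central1-a \
  --disk=name=my-spot-disk,boot=yes \
  --preemptible
```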

&lt;h3&gt;
  
  
  ICYMI: What to keep in mind when using Preemptible VMs
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Your application must be fault-tolerant:&lt;/strong&gt; Your application must be able to handle being interrupted and restarted. If your application cannot handle being interrupted, then you should &lt;strong&gt;not&lt;/strong&gt; use preemptible VMs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Your application must be stateless:&lt;/strong&gt; Your application must not store any state on the VM. If your application stores state on the VM, then it will be lost when the VM is preempted.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Your application must be able to run quickly:&lt;/strong&gt; Your application should complete its work within 24 hours and be able to wrap up and save its current state within the 30-second notice period. If it takes longer than that to run, it may be preempted before it completes.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Google Cloud preemptible VMs offer an excellent opportunity to optimize costs while leveraging the power of cloud computing. By understanding their benefits and limitations and designing your applications around them, you can unlock significant savings and scalability. However, it's important to carefully assess the suitability of preemptible VMs for your specific use case and ensure appropriate fault-tolerance measures are in place.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Google Cloud Tasks: Next-Level Task Execution for Modern Applications</title>
      <dc:creator>#SirPhemmiey</dc:creator>
      <pubDate>Sat, 20 May 2023 10:12:18 +0000</pubDate>
      <link>https://dev.to/oluwafemiakind1/google-cloud-tasks-next-level-task-execution-for-modern-applications-2i3d</link>
      <guid>https://dev.to/oluwafemiakind1/google-cloud-tasks-next-level-task-execution-for-modern-applications-2i3d</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Efficient task management is vital in modern distributed and scalable cloud environments. Google Cloud Tasks offers a managed solution that simplifies the distribution and execution of tasks across various components of your application. In this article, we will explore the key features of Google Cloud Tasks and demonstrate how to leverage them using Node.js code snippets.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Google Cloud Tasks?
&lt;/h2&gt;

&lt;p&gt;Google Cloud Tasks is a fully managed task distribution service that allows you to reliably enqueue and execute tasks. It provides features such as task queuing, scheduling, retries, and prioritization, making it an ideal choice for building scalable and responsive applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Started with Google Cloud Tasks
&lt;/h2&gt;

&lt;p&gt;To start using Google Cloud Tasks, follow these steps:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Enable the Cloud Tasks API&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Ensure that you have enabled the Cloud Tasks API in your Google Cloud project. You can do this through the Google Cloud Console or by using the &lt;code&gt;gcloud&lt;/code&gt; command-line tool.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud services enable cloudtasks.googleapis.com

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 2: Create a Task Queue&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A task queue is a container for your tasks. Create a task queue by specifying a name and other optional parameters such as maximum task attempts, rate limits, and worker constraints.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lIwAWeDW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1684574792662/5c508210-2fe4-4a9a-92b8-044c1334be98.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lIwAWeDW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1684574792662/5c508210-2fe4-4a9a-92b8-044c1334be98.png" alt="image" width="800" height="465"&gt;&lt;/a&gt;&lt;/p&gt;
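&lt;p&gt;If you'd rather not click through the console, a queue can also be created from the CLI; a minimal sketch, assuming a queue name and location of your choosing:&lt;/p&gt;

```shell
# Create a Cloud Tasks queue (placeholder queue name and region).
gcloud tasks queues create my-first-queue --location=us-central1

# Inspect its configuration, including retry and rate limits.
gcloud tasks queues describe my-first-queue --location=us-central1
```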

&lt;p&gt;&lt;strong&gt;Step 3: Enqueue Tasks:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Enqueue tasks to the task queue by specifying the request method, URL, body, and any other optional parameters that fit your needs. The payload can contain any data necessary for task execution. The body must be base64-encoded because the Cloud Tasks API treats the HTTP request body as raw bytes, and base64 lets arbitrary data be transmitted safely over the network.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ITZdCkHV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1684567835796/a689e12d-73ed-4092-af01-3f6f57c661a4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ITZdCkHV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1684567835796/a689e12d-73ed-4092-af01-3f6f57c661a4.png" alt="image" width="800" height="584"&gt;&lt;/a&gt;&lt;/p&gt;
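&lt;p&gt;The base64 round trip is easy to see from the command line. In this sketch the JSON payload is just an example value:&lt;/p&gt;

```shell
# Encode a JSON task body to base64 before enqueueing it.
PAYLOAD='{"operationType":"batch","value":20}'
ENCODED=$(printf '%s' "$PAYLOAD" | base64)
echo "encoded: $ENCODED"

# The task handler decodes the body back before processing it.
DECODED=$(printf '%s' "$ENCODED" | base64 --decode)
echo "decoded: $DECODED"
```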

&lt;p&gt;&lt;strong&gt;Step 4: Task Handler:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Implement a task handler that processes the tasks. This could be a separate route or function that receives the tasks, extracts the payload, and performs the necessary actions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Qf9bPAFa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1684568149753/4721903f-02fd-4e64-9dff-15ed99d40694.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Qf9bPAFa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1684568149753/4721903f-02fd-4e64-9dff-15ed99d40694.png" alt="image" width="800" height="523"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TdVdlxvT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1684568157247/aabf24a4-7c8f-4d45-bbaa-42c30f2831b5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TdVdlxvT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1684568157247/aabf24a4-7c8f-4d45-bbaa-42c30f2831b5.png" alt="image" width="800" height="361"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Note that it is very important to return a 200 (or another success status code). Any other status code indicates that the execution failed, and Cloud Tasks will keep retrying (depending on your queue configuration).&lt;/p&gt;

&lt;p&gt;That's it, basically.&lt;/p&gt;

&lt;p&gt;If you want to get information about a task and/or delete a task, you can use the methods below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rJHlzike--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1684568799374/5641e8fe-778f-4ef0-8b21-cd004fa3ba04.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rJHlzike--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1684568799374/5641e8fe-778f-4ef0-8b21-cd004fa3ba04.png" alt="image" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's go through a simple example of how to use the Cloud Tasks functions we created. There are different use cases for Cloud Tasks, but for the sake of simplicity, let's imagine that we have to run different batch jobs, sort of like a sequence of jobs.&lt;/p&gt;

&lt;p&gt;We can have a &lt;code&gt;BatchService&lt;/code&gt; with functions that call the Cloud Tasks functions we created.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { ITaskService } from "./TaskService";

const baseUrl = "http://whatever-your-base-url-is";

export interface IObject {
    [key: string]: any;
}

export class BatchService {

    constructor (private taskService: ITaskService) {}

    async createFirstBatchTask() {
        //create queue
        const queueName = await this.taskService.createTaskQueue('first-batch-queue');

        //add a task to the queue you created above
        await this.taskService.createTask(queueName, {
            taskName: "first-batch-task",
            url: `${baseUrl}/create-first-batch`,
            data: { //whatever data you want to send or pass
                operationType: "batch",
                value: 20
            }
        });
    }

    async processFirstBatchTask(data: IObject) {
        console.log(data); //{operationType: "batch", value:20}
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The example above is fairly self-explanatory. The first function, &lt;code&gt;createFirstBatchTask&lt;/code&gt;, creates a task queue (like a container) and then enqueues a task into that queue, passing along the data to be sent and the URL that will process it.&lt;/p&gt;

&lt;p&gt;The second function, &lt;code&gt;processFirstBatchTask&lt;/code&gt;, is the handler, which receives the payload and processes it however it needs to.&lt;/p&gt;

&lt;p&gt;The full code can be seen here: &lt;a href="https://github.com/SirPhemmiey/cloud-task-tutorial"&gt;https://github.com/SirPhemmiey/cloud-task-tutorial&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you have used Google Cloud Pub/Sub before, you're probably wondering about the difference between Cloud Pub/Sub and Cloud Tasks, just like I did before I started using Cloud Tasks. The truth is, they are both powerful services provided by GCP, but they serve different purposes and have distinct characteristics.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Amongst other differences between the two, the core one lies in how messages are handled and handlers are invoked: implicitly versus explicitly.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;What do implicit and explicit invocation even mean?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Implicit&lt;/strong&gt; : In this case, the publisher has no control over the delivery of the message. Pub/Sub aims to decouple publishers of events and subscribers to those events. Publishers do not need to know anything about their subscribers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Explicit&lt;/strong&gt; : By contrast, Cloud Tasks is aimed at &lt;strong&gt;explicit&lt;/strong&gt; invocation where the publisher retains full control of execution. The publisher can tell how the message should be delivered, when the message should be delivered and what to pass in the message. Full control.&lt;/p&gt;

&lt;p&gt;Another benefit of Cloud Tasks is that you can pause and resume a queue from the Cloud Console or the CLI to stop and start the processing of tasks, much like Google Cloud Scheduler.&lt;/p&gt;
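&lt;p&gt;For example, pausing and resuming a queue from the CLI looks roughly like this (the queue name and location are placeholders):&lt;/p&gt;

```shell
# Stop dispatching tasks from the queue (tasks can still be enqueued).
gcloud tasks queues pause my-queue --location=us-central1

# Start dispatching again.
gcloud tasks queues resume my-queue --location=us-central1
```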

&lt;p&gt;Detailed Comparison of Cloud Tasks and Pub/Sub&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffqlma534cctbew6354q0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffqlma534cctbew6354q0.png" alt="Cloud Task and Pub/Sub Comparison" width="800" height="685"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Limitations of Google Cloud Tasks
&lt;/h2&gt;

&lt;p&gt;As much as Google Cloud Tasks helps with efficient task management in the cloud, it does have limitations, some of which I dislike and hope are removed in the near future. There are quite a few, but I'll highlight the most important ones to know and keep in mind:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Limited task payload size&lt;/strong&gt;: Google Cloud Tasks imposes a limit on the size of the task payload, which is currently set at 1MB. So, if your tasks require larger payloads, you may need to consider alternative solutions or split the payload across multiple tasks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Task retention period&lt;/strong&gt;: Tasks in Google Cloud Tasks have a limited retention period, which is currently set at 31 days. This means that any task added to a queue must be executed within 31 days. If a task is not processed within this period, it will be automatically deleted. So, you need to ensure your tasks are processed in a timely manner to avoid losing any important data.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Task execution time limits&lt;/strong&gt;: Google Cloud Tasks imposes a maximum execution time limit for tasks, which is currently set at 10 minutes. If your tasks require longer execution times, you'll need to consider other mechanisms or split the work into multiple tasks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Queue Recreation&lt;/strong&gt;: If you delete a queue, you must wait 7 days before creating a queue with the same name. This is one of the limitations I dislike most, because it forces me to name my queues very carefully.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Queue dispatch rate&lt;/strong&gt;: This refers to the maximum rate at which tasks can be dispatched from a queue. The limitation is that you can only dispatch 500 tasks in a queue per second. So, if you want to dispatch more than that, it's best to use multiple queues.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Task de-duplication window&lt;/strong&gt;: Although you can create multiple tasks with different names in a queue, once a task is deleted, you'll have to wait about 1 hour before you can reuse the same name.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Maximum schedule time for a task&lt;/strong&gt;: This is the maximum amount of time in the future that a task can be scheduled. If you try to schedule a task to run more than 30 days from the current date, it will throw an error. This is arguably the limitation I dislike the most.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;It's important to consider these limitations when evaluating Google Cloud Tasks for your specific use case. While it is a powerful task queuing service, understanding its constraints will help you make informed decisions and plan accordingly for your application requirements.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Google Cloud Tasks simplifies the management of distributed tasks in your applications. Its powerful features, such as task queuing, scheduling, retries, and prioritization, make it an excellent choice for building scalable and reliable systems. In this article, we covered the basics of using Google Cloud Tasks and demonstrated how to create task queues, enqueue tasks, and handle them using a task handler in Node.js. We also discussed its limitations and how it differs from Cloud Pub/Sub, to help you make an informed decision. By leveraging Google Cloud Tasks, you can focus on your application's business logic while relying on a fully managed service to handle task distribution and execution efficiently.&lt;/p&gt;

&lt;h2&gt;
  
  
  Reference
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://medium.com/google-cloud/cloud-tasks-or-pub-sub-8dcca67e2f7a"&gt;https://medium.com/google-cloud/cloud-tasks-or-pub-sub-8dcca67e2f7a&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Simplify Your Redis Deployment on GCP with Ansible</title>
      <dc:creator>#SirPhemmiey</dc:creator>
      <pubDate>Mon, 01 May 2023 12:37:12 +0000</pubDate>
      <link>https://dev.to/oluwafemiakind1/simplify-your-redis-deployment-on-gcp-with-ansible-1po3</link>
      <guid>https://dev.to/oluwafemiakind1/simplify-your-redis-deployment-on-gcp-with-ansible-1po3</guid>
      <description>&lt;p&gt;In this article, we will learn how to install Redis on a GCP VM instance using Ansible.&lt;/p&gt;

&lt;h3&gt;
  
  
  Prerequisites:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;A GCP account with a project and a VM instance.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Ansible is installed on your local machine.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;An SSH key pair to access the VM instance.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You must have followed my previous article &lt;a href="https://dev.to/oluwafemiakind1/streamlining-infrastructure-management-provisioning-google-cloud-vms-with-ansible-3ed7"&gt;here&lt;/a&gt; because you will be needing to modify the playbook that provisions a VM.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Step 1&lt;/strong&gt; :
&lt;/h3&gt;

&lt;p&gt;Create an inventory file named &lt;code&gt;inventory&lt;/code&gt; and add the IP address or hostname of the VM instance you want to install Redis on. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[redis]
&amp;lt;ip address&amp;gt; or &amp;lt;hostname&amp;gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Step 2&lt;/strong&gt; :
&lt;/h3&gt;

&lt;p&gt;Create a playbook file named &lt;code&gt;redis-playbook.yml&lt;/code&gt; and add the following tasks:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
- name: Redis Installation
  hosts: redis
  become: true

  tasks:
    - name: Update package repositories
      yum:
        update_cache: yes

    - name: Install Redis
      yum:
        name: redis
        state: present

    - name: Start Redis service
      systemd:
        name: redis
        state: started
        enabled: yes

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This playbook uses the &lt;code&gt;yum&lt;/code&gt; module to update the package repositories on the target system and it also installs the Redis package. The &lt;code&gt;state&lt;/code&gt; parameter is set to &lt;code&gt;present&lt;/code&gt; to ensure that the package is installed if it is not already present.&lt;/p&gt;

&lt;p&gt;After that, the Redis service is started with the &lt;code&gt;systemd&lt;/code&gt; module.&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Step 3&lt;/strong&gt; :
&lt;/h3&gt;

&lt;p&gt;Run the playbook using the following command:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ansible-playbook redis-playbook.yml -i inventory --private-key=/path/to/ssh/key

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This command runs the &lt;code&gt;redis-playbook.yml&lt;/code&gt; playbook on the hosts specified in the &lt;code&gt;inventory&lt;/code&gt; file and uses the SSH key specified in &lt;code&gt;--private-key&lt;/code&gt; to access the VM instance.&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Step 4:&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Verify that Redis is installed and running by connecting to the VM instance using SSH and running the following command:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;redis-cli ping

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;TIP&lt;/strong&gt; : You can use the following command to SSH into your instance:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh &amp;lt;external-ip-address&amp;gt; -i path/to/private/key

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;If Redis is running, the command should return &lt;code&gt;PONG&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NNe86aAt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1682859591306/68fa0c18-083f-4310-843b-7516247d65cf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NNe86aAt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1682859591306/68fa0c18-083f-4310-843b-7516247d65cf.png" alt="redis pong response" width="800" height="63"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now that we've confirmed that Redis is running in the VM instance, we'll need to connect to it from outside the instance. Right now, if we try to do that, we're going to get an error (or the connection will most likely time out). This means we need to enable remote access in Redis and also update our firewall to accept TCP connections on port &lt;code&gt;6379&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;You first need to install the community versions of Ansible's firewall and Google modules with the following command:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ansible-galaxy collection install community.general community.google

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Step 5:&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;You will need to add &lt;code&gt;apache-libcloud&lt;/code&gt; to the list of requirements in &lt;code&gt;requirements.yml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; pip_package_requirements:
       ...
      - "apache-libcloud"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Once that is successful, copy the following tasks into your playbook:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Configure Redis
  become: yes
  lineinfile:
    path: /etc/redis/redis.conf #or /etc/redis.conf if you get an error that /etc/redis/redis.conf does not exist
    regexp: "{{ item.regexp }}"
    line: "{{ item.line }}"
  with_items:
    - { regexp: "^bind .*", line: "bind 0.0.0.0" }
    - { regexp: "^port .*", line: "port 6379" }
    - { regexp: "^# requirepass .*", line: "requirepass your_password_here" }
  notify: Restart Redis service

 - name: Allow incoming connections on port 6379
   community.general.ufw:
     rule: allow
     port: 6379
     proto: tcp

 - name: Reload firewall rules
   community.general.ufw:
     state: enabled

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The tasks above update the Redis configuration file, and you'll notice that we also set a password. Remember to replace &lt;code&gt;your_password_here&lt;/code&gt; with your preferred password.&lt;/p&gt;

&lt;p&gt;This is the full Ansible playbook for Redis configuration:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;Step 6:&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;We will also need to create a GCP compute firewall by adding the following tasks to the playbook we created in my previous article &lt;a href="https://dev.to/oluwafemiakind1/streamlining-infrastructure-management-provisioning-google-cloud-vms-with-ansible-3ed7"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Copy the following tasks into your playbook. Be careful of indentation though :)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  - name: Create firewall policy for Redis
      gcp_compute_firewall:
        name: "{{ firewall_policy_name }}"
        priority: 1000
        direction: "INGRESS"
        project: "{{ gcp_project }}"
        service_account_file: "{{ gcp_cred_file }}"
        auth_kind: "{{ gcp_cred_kind }}"
        allowed:
        - ip_protocol: "tcp"
          ports:
            - 6379
        target_tags:
          - "redis"
        state: present
      register: firewall_policy_result
      #when: firewall_policy_result is not defined

    - name: Print firewall_policy_result
      debug:
        var: firewall_policy_result

    - name: Add firewall policy to Redis instance
      community.google.gce_tag:
        instance_name: "{{ instance_name }}"
        tags: redis
        zone: "{{ zone }}"
        project_id: "{{ gcp_project }}"
        state: present

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;In the tasks above, we create a GCP compute firewall policy and assign it to our Redis instance by tagging the instance with the &lt;code&gt;redis&lt;/code&gt; network tag.&lt;/p&gt;

&lt;p&gt;This is the full Ansible playbook for VM provisioning and firewall policy creation:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;



&lt;p&gt;The content of requirements.yml is this:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;&lt;em&gt;It's important to note that you'll have to grant your service account permission to create/manage a firewall by assigning the&lt;/em&gt; &lt;strong&gt;&lt;em&gt;Compute Network Admin&lt;/em&gt;&lt;/strong&gt; &lt;em&gt;role in GCP's IAM page&lt;/em&gt; &lt;a href="https://console.cloud.google.com/iam-admin/iam?project=ajar-dev"&gt;&lt;em&gt;here&lt;/em&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Step 7:&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Verify the remote connection by connecting remotely to the Redis instance with the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;redis-cli -h &amp;lt;ip&amp;gt; -p &amp;lt;port&amp;gt; -a &amp;lt;password&amp;gt; ping

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And you should be able to connect and get a &lt;code&gt;PONG&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--AmTAyKq4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1682976288362/0be4cace-690d-4776-bd88-fcab02330ce6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--AmTAyKq4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1682976288362/0be4cace-690d-4776-bd88-fcab02330ce6.png" alt="redis pong response 2" width="800" height="44"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Congratulations! You have successfully installed Redis on a GCP VM instance using Ansible. You can now use Redis as a database, cache, or message broker in your application.&lt;/p&gt;
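<p>&lt;p&gt;To make that concrete, here is a minimal, hedged sketch of the cache-aside pattern you might build on top of this Redis instance. It uses a tiny in-memory stand-in so the snippet is self-contained; with the &lt;code&gt;redis-py&lt;/code&gt; package you would swap the stand-in for a real &lt;code&gt;redis.Redis&lt;/code&gt; client pointed at your VM's external IP and the password you configured (those connection details are assumptions, not part of this setup):&lt;/p&gt;</p>

```python
# Cache-aside sketch. InMemoryStore stands in for a Redis client so the
# example runs anywhere; with redis-py you would replace it with a real
# redis.Redis client (hypothetical connection details, not from this article).

class InMemoryStore:
    """Tiny stand-in exposing the get/set subset of a Redis client."""
    def __init__(self):
        self._data = {}

    def get(self, key):
        return self._data.get(key)

    def set(self, key, value):
        self._data[key] = value

def fetch_user(store, user_id, load_from_db):
    """Return the cached record if present, otherwise load and cache it."""
    key = "user:%d" % user_id
    cached = store.get(key)
    if cached is not None:
        return cached
    record = load_from_db(user_id)
    store.set(key, record)
    return record

store = InMemoryStore()
db_calls = []

def load_from_db(user_id):
    db_calls.append(user_id)  # track how often we actually hit the "database"
    return {"id": user_id, "name": "Ada"}

first = fetch_user(store, 42, load_from_db)
second = fetch_user(store, 42, load_from_db)  # served from the cache
print(len(db_calls))  # 1
```

<p>&lt;p&gt;The point of the pattern: repeated reads for the same key never reach the database, which is exactly the kind of load Redis is good at absorbing.&lt;/p&gt;</p>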

&lt;p&gt;I know this was a lot to take in, especially the first time through. Trust me, it can save you (and anyone else) hours: with just this one file, you can repeat these tasks and deploy the same configuration to multiple VM instances at once.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In this article, we learned how to install Redis on a GCP VM instance using Ansible. We also learned how to configure Redis by updating its configuration file, allowing incoming connections, and creating firewall rules and policies. Ansible provides a simple and efficient way to automate the deployment and configuration of Redis on GCP VM instances.&lt;/p&gt;

&lt;p&gt;Thank you for reading my article on &lt;strong&gt;Simplifying Your Redis Deployment with Ansible!&lt;/strong&gt; Stay tuned for my upcoming articles on adding monitoring to your Redis server and the many advantages of creating a Redis cluster using Ansible. Don't miss out on the benefits of high availability, scalability, and fault tolerance that a Redis cluster can provide for your applications.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Streamlining Infrastructure Management: Provisioning Google Cloud VMs with Ansible</title>
      <dc:creator>#SirPhemmiey</dc:creator>
      <pubDate>Tue, 18 Apr 2023 10:46:03 +0000</pubDate>
      <link>https://dev.to/oluwafemiakind1/streamlining-infrastructure-management-provisioning-google-cloud-vms-with-ansible-3ed7</link>
      <guid>https://dev.to/oluwafemiakind1/streamlining-infrastructure-management-provisioning-google-cloud-vms-with-ansible-3ed7</guid>
      <description>&lt;p&gt;As more and more organizations move their workloads to the cloud, managing infrastructure becomes an increasingly important task. Infrastructure management involves the provisioning, configuration, and maintenance of computing resources like virtual machines (VMs) in the cloud. However, managing infrastructure can be a complex and time-consuming process, particularly when it comes to managing large-scale deployments. Thats where Ansible comes in. In this article, well explore how Ansible can be used to streamline infrastructure management by provisioning Google Cloud VMs.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is Ansible?
&lt;/h3&gt;

&lt;p&gt;Ansible is an open-source automation tool that helps with configuration management, application deployment, and task automation. It uses a simple, human-readable language to describe automation tasks and is easy to use even for those without a programming background. Ansible is agentless, which means that it doesn't require software to be installed on the target host to manage it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why use Ansible for infrastructure management?
&lt;/h3&gt;

&lt;p&gt;Ansible can help streamline infrastructure management in several ways:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Consistency&lt;/strong&gt; : Ansible ensures that infrastructure is provisioned and configured in a consistent manner across all hosts. This can help reduce errors and make troubleshooting easier.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Scalability&lt;/strong&gt; : Ansible can manage large-scale deployments with ease, making it an ideal choice for organizations with a significant number of hosts.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Reusability&lt;/strong&gt; : Ansible's modules and playbooks can be reused across different projects and environments, making it a valuable asset for organizations that require flexibility and agility.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Time-saving&lt;/strong&gt; : Ansible's automation capabilities can significantly reduce the time and effort required to manage infrastructure, freeing up IT teams to focus on more strategic initiatives.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Another benefit of using Ansible for infrastructure management is the ability to use it across different cloud providers and even on-premises infrastructure. This means that you can use the same automation tool to manage infrastructure across different environments, reducing the need for specialized skills and tools.&lt;/p&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;p&gt;To follow this article and use Ansible to provision Google Cloud VMs, you should have some basic knowledge of the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Linux&lt;/strong&gt; : Ansible is primarily a Linux automation tool, so you should have some familiarity with Linux commands, file systems, and permissions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cloud Computing&lt;/strong&gt; : You should have a basic understanding of cloud computing concepts, such as virtual machines, cloud providers, and cloud infrastructure.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Google Cloud Platform (GCP)&lt;/strong&gt;: You should have a GCP account and some familiarity with the GCP console, including creating and managing VMs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Ansible&lt;/strong&gt; : You should have a basic understanding of Ansible concepts, such as playbooks, modules, variables, and tasks.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If you are not familiar with any of these concepts, you may want to spend some time learning about them before attempting to follow this article. Many online resources are available for learning about Linux, cloud computing, GCP, and Ansible.&lt;/p&gt;

&lt;p&gt;Fret not though, we won't be going very deep into them, and I'll be guiding you through the most important concepts.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Ansible can provision and automate anything on GCP and other cloud providers. It's not limited to provisioning VMs on GCP only.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Long story short, let's dive straight into what you're here for!&lt;/p&gt;

&lt;h3&gt;
  
  
  Provisioning GCP VMs with Ansible
&lt;/h3&gt;

&lt;p&gt;To provision GCP VMs with Ansible, you'll first need to install Ansible on your machine by following the instructions &lt;a href="https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html"&gt;here&lt;/a&gt;. We will also need to install the Ansible GCP module. Check &lt;a href="https://docs.ansible.com/ansible/latest/collections/google/cloud/"&gt;here&lt;/a&gt; to see a list of GCP collections. This module allows you to interact with the GCP API and perform tasks such as creating, starting, stopping, and deleting VMs.&lt;/p&gt;

&lt;p&gt;Instead of just installing a single module, it is better to install the whole Google Cloud collection so nothing turns out to be missing later. Here's the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;//install ansible if you don't have it
pip install ansible

//install google cloud ansible collection
ansible-galaxy collection install google.cloud

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Before you can use the Ansible GCP module though, you'll need to set up a service account and download the service account key in JSON format (you'll need the path to it later).&lt;/p&gt;

&lt;p&gt;Now you're ready to create a playbook. A playbook is a file that describes a set of tasks to be executed on a group of hosts. In this case, we'll create a playbook that provisions a Google Cloud VM. Copy and paste the content of this file to your machine. The filename is &lt;code&gt;playbook.yml&lt;/code&gt;.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;



&lt;p&gt;This playbook creates a VM instance with the specified image, machine type, disk size and type, network, and tags.&lt;/p&gt;

&lt;p&gt;You can execute the playbook with the &lt;code&gt;ansible-playbook&lt;/code&gt; command like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ansible-playbook initial.yml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It's that simple! You should get a response in your terminal like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_e2itcXw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1681814289294/cf7b40d7-a14c-4aae-9dd8-13853a7f87fa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_e2itcXw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://cdn.hashnode.com/res/hashnode/image/upload/v1681814289294/cf7b40d7-a14c-4aae-9dd8-13853a7f87fa.png" alt="cover image" width="800" height="222"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And when I go to the VM instance in the Google Cloud console &lt;a href="https://console.cloud.google.com/compute/instances"&gt;here&lt;/a&gt;, I can see it created and running!&lt;/p&gt;

&lt;p&gt;That's simple, right? I bet it is! You can see that it took us less than a minute to do this. It may interest you to know that you can run this playbook many times, which means the process is &lt;strong&gt;idempotent&lt;/strong&gt;. Instead of hard-coding the values, you can also pass them as arguments to the playbook. You may want to look into the documentation on how to do that.&lt;/p&gt;

&lt;p&gt;Let's talk briefly about some lines in the file:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Line 6&lt;/strong&gt; : This is how you set local variables in an Ansible yml file and they can be accessed throughout the file.&lt;/p&gt;
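<p>&lt;p&gt;As a short, hedged illustration (the variable names here are hypothetical, not the ones from the gist), a &lt;code&gt;vars&lt;/code&gt; block looks like this and its entries can be referenced anywhere in the play with &lt;code&gt;{{ }}&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Provision a GCP VM
  hosts: localhost
  vars:
    gcp_project: my-project-id
    zone: us-central1-a
  tasks:
    - name: Show a variable
      debug:
        msg: "Provisioning in {{ zone }}"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;</p>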

&lt;p&gt;&lt;strong&gt;Lines 7-8&lt;/strong&gt; : You'll need to input the right credentials.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Line 11&lt;/strong&gt; : Instance name can be anything, but ideally it's recommended to make it meaningful.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lines 12-13&lt;/strong&gt; : Zone and region can be any acceptable zone and region respectively. To see a list of all available zones and regions, run the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;//list available regions
gcloud compute regions list --project=&amp;lt;project-id&amp;gt;

//list available zones
gcloud compute zones list --project=&amp;lt;project-id&amp;gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Lines 14-15&lt;/strong&gt; : That's my preferred machine type and machine image. To see the available list of images, run the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
gcloud compute images list --uri --project=&amp;lt;project-id&amp;gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;My Stackoverflow answer &lt;a href="https://stackoverflow.com/questions/54261944/gcp-api-format-of-disk-image-is-incorrect/76025103#76025103"&gt;here&lt;/a&gt; might help you. &lt;em&gt;Please upvote if it helped you&lt;/em&gt; 😢.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Line 25&lt;/strong&gt; : Using the &lt;code&gt;register&lt;/code&gt; key, we're saving the result of the task into that variable. This is useful if we want to perform another task based on the result of a previous task.&lt;/p&gt;
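<p>&lt;p&gt;A hedged sketch of that pattern (task and variable names here are hypothetical):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Create the VM
  gcp_compute_instance:
    ...
  register: gcp_vm

- name: Show the result of the previous task
  debug:
    var: gcp_vm
  when: gcp_vm is changed

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;</p>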

&lt;p&gt;&lt;strong&gt;Line 48&lt;/strong&gt; : Finally, we use the &lt;code&gt;debug&lt;/code&gt; module to print the VM's IP address. You can see how we used &lt;code&gt;gcp_ip.address&lt;/code&gt; to show the address.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NOTE&lt;/strong&gt; : If by any chance you get an error that a package or library doesn't exist, you can just run the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip3 install &amp;lt;package name&amp;gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I know this is just provisioning a VM and doing nothing with it. In my upcoming articles, I'll take you through installing and configuring additional things on the VM to make it useful, all with Ansible. Stay tuned.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Automating infrastructure management tasks with Ansible can greatly improve efficiency and reduce errors. Provisioning and managing GCP VMs with Ansible is a powerful way to streamline infrastructure management and ensure that your systems are always configured to your specifications. Whether you're deploying a new application, scaling up an existing system, or just need to make updates to your infrastructure, Ansible provides a simple and powerful way to automate these tasks. By following the steps outlined in this article, you can start using Ansible to provision Google Cloud VMs in no time.&lt;/p&gt;

&lt;p&gt;Of course, this is just the beginning of what you can do with Ansible and Google Cloud. Ansible has a wide range of modules for managing different aspects of cloud infrastructure, from networking to security to storage. You can use Ansible to automate the deployment of applications, configure load balancers, and much more.&lt;/p&gt;

&lt;p&gt;If you liked this article, please leave a clap or even a comment and don't forget to follow me to get updated when I publish another one. Thanks!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Deploying to Google Cloud Run with GitLab CI/CD: A Step-by-Step Guide</title>
      <dc:creator>#SirPhemmiey</dc:creator>
      <pubDate>Sun, 16 Apr 2023 16:47:43 +0000</pubDate>
      <link>https://dev.to/oluwafemiakind1/deploying-to-google-cloud-run-with-gitlab-cicd-a-step-by-step-guide-6j6</link>
      <guid>https://dev.to/oluwafemiakind1/deploying-to-google-cloud-run-with-gitlab-cicd-a-step-by-step-guide-6j6</guid>
      <description>&lt;p&gt;Google Cloud Run is a powerful platform that allows developers to run stateless HTTP containers without worrying about the underlying infrastructure. With GitLab CI/CD, you can automate your build, test, and deployment process to Cloud Run, making it a perfect match for modern application development.&lt;/p&gt;

&lt;p&gt;In this article, I will walk you through the process of setting up a GitLab CI/CD pipeline to deploy your code to Google Cloud Run.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NOTE: If you want to use Github Actions instead of Gitlab CI/CD, see my other article&lt;/strong&gt; &lt;a href="https://medium.com/@oluwafemiakinde/deploying-containerized-web-apps-to-google-cloud-run-using-github-actions-777590c8bda5"&gt;&lt;strong&gt;here&lt;/strong&gt;&lt;/a&gt;&lt;strong&gt;.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Let’s continue….&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before we get started, make sure that you have the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  A Google Cloud account&lt;/li&gt;
&lt;li&gt;  A GitLab account with a repository containing your code&lt;/li&gt;
&lt;li&gt;  The Google Cloud SDK installed on your local machine&lt;/li&gt;
&lt;li&gt;  Docker installed on your local machine&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Step 1: Create a Google Cloud Run Service
&lt;/h2&gt;

&lt;p&gt;First, we need to create a Google Cloud Run service that will host our application. To do this, follow these steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Open the Google Cloud Console and navigate to the Cloud Run page.&lt;/li&gt;
&lt;li&gt; Click the “+ Create Service” button.&lt;/li&gt;
&lt;li&gt; Choose your preferred region and select the “Deploy one revision from an existing container image” option.&lt;/li&gt;
&lt;li&gt; Enter a name for your service and select the container image you want to deploy.&lt;/li&gt;
&lt;li&gt; Click “Create” to create your Cloud Run service.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Step 2: Authenticate the Google Cloud SDK
&lt;/h2&gt;

&lt;p&gt;To deploy your code to Cloud Run, you need to authenticate the Google Cloud SDK on your local machine. To do this, follow these steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Open your terminal and run the following command:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud auth login
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;2. Follow the prompts to log in to your Google Cloud account.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 3: Create a GitLab CI/CD Pipeline
&lt;/h2&gt;

&lt;p&gt;Now that we have our Cloud Run service set up and authenticated the Google Cloud SDK, we can create a GitLab CI/CD pipeline to automate our deployment process.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; In your GitLab repository, create a new file called &lt;code&gt;.gitlab-ci.yml&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt; Add the following code to the file:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;image: docker:latest  

services:  
  - docker:dind  

before\_script:  
  - docker login -u $CI\_REGISTRY\_USER -p $CI\_REGISTRY\_PASSWORD $CI\_REGISTRY  

deploy:  
  image: google/cloud-sdk:latest  
  script:  
    - gcloud auth activate-service-account --key-file=google-creds.json  
    - gcloud config set project $PROJECT\_ID  
    - gcloud builds submit --tag gcr.io/$PROJECT\_ID/$CI\_PROJECT\_NAME:$CI\_COMMIT\_SHA  
    - gcloud run deploy --image=gcr.io/$PROJECT\_ID/$CI\_PROJECT\_NAME:$CI\_COMMIT\_SHA --platform=managed --region=$CLOUD\_RUN\_REGION --allow-unauthenticated --update-env-vars=VAR1=value1,VAR2=value2 --quiet  
  only:  
    - master
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;3. Replace &lt;code&gt;$PROJECT_ID&lt;/code&gt; with your Google Cloud project ID and &lt;code&gt;$CLOUD_RUN_REGION&lt;/code&gt; with your preferred region (or define both as CI/CD variables in the next step).&lt;/p&gt;

&lt;p&gt;4. Add any environment variables you need to the &lt;code&gt;--update-env-vars&lt;/code&gt; flag.&lt;/p&gt;

&lt;p&gt;5. Commit and push your changes to your GitLab repository.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 4: Configure GitLab CI/CD Variables
&lt;/h2&gt;

&lt;p&gt;Finally, we need to configure some variables in GitLab CI/CD to authenticate our Google Cloud account and registry. To do this, follow these steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; In your GitLab repository, navigate to “Settings” &amp;gt; “CI/CD” &amp;gt; “Variables”.&lt;/li&gt;
&lt;li&gt; Add the following variables:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;code&gt;GOOGLE_APPLICATION_CREDENTIALS&lt;/code&gt; - the contents of your Google Cloud service account key file.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;PROJECT_ID&lt;/code&gt; - your Google Cloud project ID.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;CI_REGISTRY_USER&lt;/code&gt; - your GitLab username.&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;CI_REGISTRY_PASSWORD&lt;/code&gt; - your GitLab personal access token.&lt;/li&gt;
&lt;/ul&gt;
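<p>&lt;p&gt;One detail worth double-checking: the &lt;code&gt;deploy&lt;/code&gt; job passes &lt;code&gt;--key-file=google-creds.json&lt;/code&gt;, so that file must exist in the job's workspace. A minimal sketch (assuming you stored the raw JSON key in the &lt;code&gt;GOOGLE_APPLICATION_CREDENTIALS&lt;/code&gt; variable as above) is to write it out at the start of the script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;deploy:
  script:
    - echo "$GOOGLE_APPLICATION_CREDENTIALS" &amp;gt; google-creds.json
    - gcloud auth activate-service-account --key-file=google-creds.json
    # ...the rest of the deploy steps

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;</p>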

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Congratulations! You now have a fully automated GitLab CI/CD pipeline that deploys your code to Google Cloud Run. With this setup, you can focus on writing code and let GitLab and Google Cloud handle the rest.&lt;/p&gt;

&lt;p&gt;If you liked this article, please leave a clap or even a comment and don’t forget to follow me to get updated when I publish another one. Thanks!&lt;/p&gt;

</description>
      <category>gitlabc</category>
      <category>googlecloud</category>
      <category>continousintegration</category>
    </item>
    <item>
      <title>Deploying to Google Cloud Run with Github Actions: A Step-by-Step Guide</title>
      <dc:creator>#SirPhemmiey</dc:creator>
      <pubDate>Sun, 16 Apr 2023 16:39:40 +0000</pubDate>
      <link>https://dev.to/oluwafemiakind1/deploying-to-google-cloud-run-with-github-actions-a-step-by-step-guide-53nf</link>
      <guid>https://dev.to/oluwafemiakind1/deploying-to-google-cloud-run-with-github-actions-a-step-by-step-guide-53nf</guid>
      <description>&lt;h2&gt;
  
  
  What is Google Cloud Run?
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Google Cloud Run is a serverless container platform that enables developers to run applications in a fully managed environment. It allows you to deploy stateless containers on a pay-as-you-go basis and auto-scales your application based on incoming traffic.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  What is Github Actions?
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;GitHub Actions is a powerful workflow automation tool that allows developers to automate their development workflows. It integrates well with Google Cloud Run, making it easy to deploy applications from GitHub to Cloud Run.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In this article, I will be deploying a containerized web application to Google Cloud Run using GitHub Actions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NOTE: If you want to use Gitlab CI/CD instead of GitHub Actions, see my other article&lt;/strong&gt; &lt;a href="https://medium.com/@oluwafemiakinde/deploying-to-google-cloud-run-with-gitlab-ci-cd-a-step-by-step-guide-2c617e4ea2d4"&gt;&lt;strong&gt;here&lt;/strong&gt;&lt;/a&gt;&lt;strong&gt;.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Let’s continue….&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before we dive into the tutorial, make sure you have the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  A Google Cloud Platform account&lt;/li&gt;
&lt;li&gt;  A GitHub account&lt;/li&gt;
&lt;li&gt;  Docker installed on your local machine&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Step 1: Set up your project on Google Cloud
&lt;/h2&gt;

&lt;p&gt;Before we can deploy our application to Google Cloud Run, we need to create a new project on Google Cloud Platform and enable the Cloud Run API. Here’s how to do it:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Go to the &lt;a href="https://console.cloud.google.com/"&gt;Google Cloud Console&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt; Click on the project dropdown menu and select “New Project”.&lt;/li&gt;
&lt;li&gt; Give your project a name and click “Create”.&lt;/li&gt;
&lt;li&gt; Once your project is created, click on the “Activate Cloud Shell” button on the top right corner of the page.&lt;/li&gt;
&lt;li&gt; Run the following command to enable the Cloud Run API:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud services enable run.googleapis.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;An alternative way to enable Cloud Run API&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Go to the Google Cloud Console and select your project.&lt;/li&gt;
&lt;li&gt; In the left navigation menu, click on “APIs &amp;amp; Services” and then “Dashboard.”&lt;/li&gt;
&lt;li&gt; Click on the “+ ENABLE APIS AND SERVICES” button.&lt;/li&gt;
&lt;li&gt; Search for “Cloud Run API” and click on it.&lt;/li&gt;
&lt;li&gt; Click the “Enable” button.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Step 2: Create a Dockerfile
&lt;/h2&gt;

&lt;p&gt;Next, we need to create a Dockerfile for our application. This file will contain instructions on how to build a container image for our application.&lt;/p&gt;

&lt;p&gt;Here’s an example Dockerfile for a Node.js application:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;\# Use the official Node.js image  
FROM node:14-alpine  

\# Set the working directory  
WORKDIR /app  

\# Copy the package.json and package-lock.json files  
COPY package\*.json ./  

\# Install the dependencies  
RUN npm install --production  

\# Copy the rest of the application code  
COPY . .  

\# Expose port 8080  
EXPOSE 8080  

\# Start the application  
CMD \["npm", "start"\]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save this file in the root directory of your project.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 3: Build and test the container locally
&lt;/h2&gt;

&lt;p&gt;Before deploying our container to Google Cloud Run, let’s build and test it locally. Run the following command to build the container image:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build -t &amp;lt;your-image-name&amp;gt; .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace &lt;code&gt;&amp;lt;your-image-name&amp;gt;&lt;/code&gt; with a name for your container image. Once the build is complete, run the container with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -p 8080:8080 &amp;lt;your-image-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will start the container and map port 8080 on your local machine to port 8080 inside the container. Open your web browser and go to &lt;code&gt;http://localhost:8080&lt;/code&gt; to test your application.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 4: Set up GitHub Actions
&lt;/h2&gt;

&lt;p&gt;GitHub Actions is a powerful tool that allows you to automate your software development workflows. In this step, we will be creating a GitHub Actions workflow to build and deploy our container to Google Cloud Run.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; In your GitHub repository, click on the “Actions” tab.&lt;/li&gt;
&lt;li&gt; Click on the “Set up a workflow yourself” button.&lt;/li&gt;
&lt;li&gt; Replace the contents of the file with the following code:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: "Deploy to Google Cloud Run"  

on:  
  push:  
    branches:  
      - main  

jobs:  
  deploy:  
    runs-on: ubuntu-latest  
    steps:  
      - name: Checkout code  
        uses: actions/checkout@v2  

      - name: Set up Google Cloud SDK  
        uses: google-github-actions/setup-gcloud@master  
        with:  
          project\_id: &amp;lt;your-project-id&amp;gt;  
          service\_account\_key: ${{ secrets.GCP\_SA\_KEY }}  
          export\_default\_credentials: true  

      - name: Configure docker for GCP  
        run: gcloud auth configure-docker  

      - name: Build and push Docker image  
        uses: docker/build-push-action@v2  
        with:  
          context: .  
          push: true  
          tags: gcr.io/&amp;lt;your-project-id&amp;gt;/&amp;lt;your-image-name&amp;gt;:latest  
          build-args: |  
            HTTP\_PORT=8080  

      - name: Deploy to Cloud Run  
        uses: google-github-actions/deploy-cloudrun@main  
        with:  
          image: gcr.io/&amp;lt;your-project-id&amp;gt;/&amp;lt;your-image-name&amp;gt;:latest  
          service: &amp;lt;your-service-name&amp;gt;  
          region: &amp;lt;your-region&amp;gt;  
          platform: managed  
          allow-unauthenticated: true  
          env_vars: |  
              FOO=bar  
              ZIP=zap
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace &lt;code&gt;&amp;lt;your-project-id&amp;gt;&lt;/code&gt;, &lt;code&gt;&amp;lt;your-image-name&amp;gt;&lt;/code&gt;, &lt;code&gt;&amp;lt;your-service-name&amp;gt;&lt;/code&gt;, and &lt;code&gt;&amp;lt;your-region&amp;gt;&lt;/code&gt; with your own values.&lt;/p&gt;

&lt;p&gt;See the &lt;a href="https://github.com/google-github-actions/deploy-cloudrun"&gt;deploy-cloudrun action documentation&lt;/a&gt; for more details on how to use the Google Cloud Run GitHub Action.&lt;/p&gt;
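&lt;p&gt;Note that the workflow above expects a &lt;code&gt;GCP_SA_KEY&lt;/code&gt; repository secret containing a service account key with permission to push images and deploy to Cloud Run. As a rough sketch (the service account name &lt;code&gt;github-deployer&lt;/code&gt; here is just an example), you could create the key and store it using the &lt;code&gt;gcloud&lt;/code&gt; and &lt;code&gt;gh&lt;/code&gt; CLIs:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create a JSON key for an existing service account (example name: github-deployer)
gcloud iam service-accounts keys create key.json \
  --iam-account=github-deployer@&amp;lt;your-project-id&amp;gt;.iam.gserviceaccount.com

# Store the key as a GitHub Actions secret on the current repository
gh secret set GCP_SA_KEY &amp;lt; key.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You can also add the secret manually under your repository’s “Settings → Secrets and variables → Actions” page.&lt;/p&gt;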

&lt;p&gt;4. Click on the “Start commit” button and commit the workflow file to the repository.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 5: Deploy to Google Cloud Run
&lt;/h2&gt;

&lt;p&gt;Once the GitHub Actions workflow completes successfully, your container should be deployed to Google Cloud Run. To verify that your application is running, go to the Google Cloud Console, select your project, and click on “Cloud Run” in the sidebar. You should see your service listed there.&lt;/p&gt;

&lt;p&gt;Click on the service to view its details, including the URL for your application. Open this URL in your web browser to test your deployed application.&lt;/p&gt;
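&lt;p&gt;You can also fetch the service URL from the terminal instead of the console. This is a sketch using the &lt;code&gt;gcloud&lt;/code&gt; CLI with the same placeholder values as the workflow:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Print the public URL of the deployed Cloud Run service
gcloud run services describe &amp;lt;your-service-name&amp;gt; \
  --region &amp;lt;your-region&amp;gt; \
  --platform managed \
  --format 'value(status.url)'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;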

&lt;p&gt;Congratulations! You have successfully deployed a containerized web application to Google Cloud Run using GitHub Actions.&lt;/p&gt;

&lt;p&gt;If you liked this article, please leave a clap or even a comment and don’t forget to follow me to get updated when I publish another one. Thanks!&lt;/p&gt;

</description>
      <category>githubactions</category>
      <category>devops</category>
      <category>serverless</category>
      <category>googlecloudrun</category>
    </item>
  </channel>
</rss>
