<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Thierry Njike</title>
    <description>The latest articles on DEV Community by Thierry Njike (@thierrynjike).</description>
    <link>https://dev.to/thierrynjike</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1023009%2F0acc24de-2ec1-4df4-9c4e-894249b2bb8c.png</url>
      <title>DEV Community: Thierry Njike</title>
      <link>https://dev.to/thierrynjike</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/thierrynjike"/>
    <language>en</language>
    <item>
      <title>Cloud Run jobs, your parallel tasks solution</title>
      <dc:creator>Thierry Njike</dc:creator>
      <pubDate>Mon, 19 Jun 2023 10:26:32 +0000</pubDate>
      <link>https://dev.to/zenika/cloud-run-jobs-your-parallel-tasks-solution-j05</link>
      <guid>https://dev.to/zenika/cloud-run-jobs-your-parallel-tasks-solution-j05</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F69ikbpf1qkb66rssf94n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F69ikbpf1qkb66rssf94n.png" alt="Multitask Cloud run job" width="800" height="426"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Introduction&lt;/h2&gt;

&lt;p&gt;We often need to speed up our tasks without using a lot of resources, and that is now possible on Cloud Run. Jobs is a brand-new serverless feature of Cloud Run, generally available since March 23rd, 2023. In this article, I will first compare Cloud Run and Cloud Functions (1st gen and 2nd gen), then explain how Cloud Run jobs work. Next, I will show some use cases where Cloud Run jobs fit better than the other serverless options. Finally, a basic demo will apply what is explained in the previous parts.&lt;/p&gt;

&lt;h2&gt;Comparison between serverless options&lt;/h2&gt;

&lt;p&gt;The diagram below shows which product is the most suitable depending on the job to perform.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmsyqcsh1mx5olhdpvknc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmsyqcsh1mx5olhdpvknc.png" alt="Serverless use cases" width="800" height="575"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;How does it work?&lt;/h2&gt;

&lt;p&gt;Cloud Run jobs can execute a single task or a group of tasks. When you create a job, you set the number of tasks it contains. This number is saved as an environment variable that you can use directly in your code without defining it yourself. Each task is identified by its index, starting from 0, which is also saved as an environment variable directly usable in the code. So, after the creation of your job, Cloud Run provides 2 environment variables:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CLOUD_RUN_TASK_COUNT: the total number of tasks in the job&lt;/li&gt;
&lt;li&gt;CLOUD_RUN_TASK_INDEX: the index of the current task&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These environment variables are not visible on the job's configuration page; their names are set by convention.&lt;/p&gt;
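
&lt;p&gt;Here is a minimal sketch of how a task can read these variables in Python (the default values are assumptions so that the script also runs locally as a single task):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import os

# injected automatically by Cloud Run jobs; the defaults are only for local runs
task_index = int(os.environ.get("CLOUD_RUN_TASK_INDEX", 0))
task_count = int(os.environ.get("CLOUD_RUN_TASK_COUNT", 1))

print(f"running task {task_index + 1} of {task_count}")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;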

&lt;p&gt;When creating a job, you must select an image to use. This image can be stored on Artifact Registry or Docker Hub. For other container registries, follow the steps described on &lt;a href="https://cloud.google.com/run/docs/deploying#other-registries" rel="noopener noreferrer"&gt;this page&lt;/a&gt;. Google recommends using Artifact Registry. If you face an issue about a violated constraint (low carbon), follow the steps described in my &lt;a href="https://dev.to/zenika/fix-cloud-run-resource-locations-constraint-error-httperror-412-5ne"&gt;previous article&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;A job can be split into up to 10,000 tasks. Each task creates a new instance of the image and runs independently of the others. If a task fails, the job fails too, even if all the others ended successfully.&lt;/p&gt;

&lt;h2&gt;Use case examples of Cloud Run jobs&lt;/h2&gt;

&lt;p&gt;1- Large dataset&lt;/p&gt;

&lt;p&gt;Let's suppose we have to process a large dataset of 1 million rows. Cloud Run jobs can help us split the dataset into several smaller datasets and process them separately: we can split the job into 100 tasks and process 10,000 rows per task.&lt;/p&gt;
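
&lt;p&gt;As a minimal sketch, assuming the job is configured with 100 tasks, each task could compute its own slice of the dataset like this (the dataset size is illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import os

TOTAL_ROWS = 1_000_000  # illustrative dataset size

task_index = int(os.environ.get("CLOUD_RUN_TASK_INDEX", 0))
task_count = int(os.environ.get("CLOUD_RUN_TASK_COUNT", 1))  # 100 in this scenario

rows_per_task = TOTAL_ROWS // task_count  # 10,000 rows per task
begin = task_index * rows_per_task
# the last task takes any remainder so that no row is lost
end = TOTAL_ROWS if task_index == task_count - 1 else begin + rows_per_task

# process rows[begin:end] here
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;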

&lt;p&gt;2- Replications&lt;/p&gt;

&lt;p&gt;Imagine that we want to replicate data from 3 external databases to Cloud Storage. You can do it with a single Cloud Run job by assigning one task per database. Depending on the index of the task, the corresponding database credentials are used, without duplicating code.&lt;/p&gt;
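
&lt;p&gt;A sketch of the idea, assuming one task per database and hypothetical Secret Manager secret names:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import os

# hypothetical secret names, one per external database
DB_SECRETS = ["db-a-credentials", "db-b-credentials", "db-c-credentials"]

task_index = int(os.environ.get("CLOUD_RUN_TASK_INDEX", 0))
secret_name = DB_SECRETS[task_index]  # task 0 uses db-a, task 1 uses db-b, etc.

# fetch the credentials stored in secret_name from Secret Manager,
# then connect to that database and replicate its data to Cloud Storage
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;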

&lt;p&gt;3- Unsupported language&lt;/p&gt;

&lt;p&gt;Cloud Functions supports only 7 languages (Node.js, Python, Go, Java, C#, Ruby and PHP). So, you won't be able to use Cloud Functions with bash code, for example. One advantage of Cloud Run jobs is that the language does not matter, because jobs run container images. You just have to build your image and set an entry point.&lt;/p&gt;

&lt;p&gt;We can imagine many use cases for Cloud Run jobs. Now, let's jump into an example to show you how to use them from the console.&lt;/p&gt;

&lt;h2&gt;Example&lt;/h2&gt;

&lt;p&gt;In this example, we will create a Cloud Run job with 5 tasks. The Python code writes the result of a BigQuery query to Cloud Storage. The resulting BigQuery dataframe will be split into 5 parts, and each part will be written to a separate file in CSV format.&lt;/p&gt;

&lt;p&gt;1- Let's write the code&lt;/p&gt;

&lt;p&gt;If you use the same code to test it, do not forget to set your environment variables when creating the job. The Python code is below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# librairies imports
&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;pandas&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;pd&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;numpy&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;google.cloud&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;bigquery&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;storage&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;dotenv&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;load_dotenv&lt;/span&gt;


&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;run_query&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;project_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;dataset&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;table&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;

    &lt;span class="c1"&gt;# create a bigquery client
&lt;/span&gt;    &lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;bigquery&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Client&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="n"&gt;query&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;SELECT *
    FROM `&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;project_id&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;.&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;dataset&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;.&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;table&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;`
    LIMIT 1000
    &lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="c1"&gt;#Notes : 
&lt;/span&gt;    &lt;span class="c1"&gt;#avoid SELECT * in real problems. we use it here just to illustrate
&lt;/span&gt;    &lt;span class="c1"&gt;#LIMIT 1000 does not have impact on the cost, the same amount of data are retrieved but filtered in the result.
&lt;/span&gt;
    &lt;span class="c1"&gt;# run the sql query
&lt;/span&gt;    &lt;span class="n"&gt;query_job&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;query&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# we convert the iterator object into pandas dataframe
&lt;/span&gt;    &lt;span class="n"&gt;rows&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;row&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;query_job&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;result&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
        &lt;span class="n"&gt;rows&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;row&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;items&lt;/span&gt;&lt;span class="p"&gt;()))&lt;/span&gt;

    &lt;span class="n"&gt;df&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;pd&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;DataFrame&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;rows&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;


&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;__name__&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;__main__&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;

    &lt;span class="c1"&gt;# we load all the environment variables
&lt;/span&gt;    &lt;span class="nf"&gt;load_dotenv&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="c1"&gt;# we get all the environment variables
&lt;/span&gt;    &lt;span class="n"&gt;project_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;PROJECT_ID&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;bucket_name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;BUCKET_NAME&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;dataset&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;DATASET&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;table&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;TABLE&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;index&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;int&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;CLOUD_RUN_TASK_INDEX&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; 
    &lt;span class="n"&gt;nb_task&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;int&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;CLOUD_RUN_TASK_COUNT&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;


    &lt;span class="c1"&gt;# the filename root
&lt;/span&gt;    &lt;span class="n"&gt;filename&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;test-parallel-task&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

    &lt;span class="c1"&gt;# we run the query and get the result as a dataframe and the length of the dataframe
&lt;/span&gt;    &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;n&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;run_query&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;project_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;dataset&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;table&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# the length of each task dataframe
&lt;/span&gt;    &lt;span class="n"&gt;len_task_df&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;n&lt;/span&gt;&lt;span class="o"&gt;//&lt;/span&gt;&lt;span class="n"&gt;nb_task&lt;/span&gt;
    &lt;span class="n"&gt;begin&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;index&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;len_task_df&lt;/span&gt;
    &lt;span class="n"&gt;end&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;begin&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;len_task_df&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;index&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="n"&gt;nb_task&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="n"&gt;n&lt;/span&gt;   &lt;span class="c1"&gt;#we write like this to avoid data loss in case of imperfect division
&lt;/span&gt;
    &lt;span class="c1"&gt;# we write the corresponding file on cloud storage
&lt;/span&gt;    &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;begin&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;end&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;to_csv&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;gs://&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;bucket_name&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;/&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;filename&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;_&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;index&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;.csv&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see in the last line of the code, I'm writing directly to Cloud Storage using pandas. This is only possible if you add the &lt;code&gt;gcsfs&lt;/code&gt; library to your &lt;code&gt;requirements.txt&lt;/code&gt;. Your &lt;code&gt;requirements.txt&lt;/code&gt; should look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcsfs==2023.6.0
google-cloud-bigquery==3.11.1
google-cloud-storage==2.9.0
numpy==1.24.3
pandas==2.0.2
python-dotenv==1.0.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: you can use any other language to perform this, but for this use case it must be one of the languages supported by the GCP client libraries.&lt;/p&gt;

&lt;p&gt;2- Image creation&lt;/p&gt;

&lt;p&gt;To create the image to use, let's write the Dockerfile first:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# we use the version 3.10 of the python image &lt;/span&gt;
FROM python:3.10

&lt;span class="c"&gt;# we define a work directory&lt;/span&gt;
WORKDIR /app

&lt;span class="c"&gt;# we copy the code dir into the work directory&lt;/span&gt;
COPY requirements.txt /app

&lt;span class="c"&gt;# we install the dependencies&lt;/span&gt;
RUN pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--no-cache-dir&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; requirements.txt

&lt;span class="c"&gt;# we copy the code dir into the work directory&lt;/span&gt;
COPY &lt;span class="nb"&gt;.&lt;/span&gt; /app

&lt;span class="c"&gt;# we execute the code with the following command&lt;/span&gt;
CMD &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="s2"&gt;"python"&lt;/span&gt;, &lt;span class="s2"&gt;"main.py"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, build your image. If you use GCP Artifact Registry, follow parts 1 and 2 of my &lt;a href="https://dev.to/zenika/fix-cloud-run-resource-locations-constraint-error-httperror-412-5ne"&gt;previous article&lt;/a&gt; to build your image.&lt;/p&gt;
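
&lt;p&gt;For reference, the build command from that article looks like this (the region, staging bucket and repository path are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud builds submit --region=$region --gcs-source-staging-dir=$path_to_the_cloud_storage_bucket --tag $path_to_the_repo/image_name:version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;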

&lt;p&gt;3- Job creation&lt;br&gt;
From the GCP console, search for &lt;strong&gt;Cloud Run&lt;/strong&gt;, select the JOBS tab and click on CREATE JOB.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fma6l7hzh0y69ihydag6w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fma6l7hzh0y69ihydag6w.png" alt="job creation" width="800" height="82"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then, fill in the first part of the form. If you use GCP Artifact Registry, use the SELECT button to browse and find your image. In the number of tasks field, enter 5.&lt;/p&gt;
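
&lt;p&gt;If you prefer the gcloud CLI to the console, a job with 5 tasks can be created with a command along these lines (the job name and variables are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud run jobs create bq-export-job \
    --image=$image_url \
    --tasks=5 \
    --region=$region \
    --set-env-vars=PROJECT_ID=$project_id,BUCKET_NAME=$bucket_name,DATASET=$dataset,TABLE=$table
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;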

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffadf4r3ae9incq7tydzx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffadf4r3ae9incq7tydzx.png" alt="job info" width="800" height="717"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, click on the arrow to expand the configuration section. Switch between the tabs to configure your job as you want, then click on CREATE.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuipaheffc9t219u1i7gm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuipaheffc9t219u1i7gm.png" alt="job config" width="800" height="863"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once created, your job should appear in the job list when you select the JOBS tab on the Cloud Run homepage.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjyczn5yq4jlzw0m6nme8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjyczn5yq4jlzw0m6nme8.png" alt="jobs tab" width="800" height="201"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on the job and switch between the tabs to see the job info. The HISTORY tab is empty because there is no execution yet. To set a trigger, click on the TRIGGERS tab and schedule your job.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwj7fsd6k56l487obsfee.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwj7fsd6k56l487obsfee.png" alt="job tabs" width="800" height="239"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on &lt;strong&gt;EXECUTE&lt;/strong&gt; to start the job, then return to the HISTORY tab to see the changes. You should see an execution in progress. If you click on the execution, you will see the progress of each task. To check the parallelism, you can click on each task to see its start time. You can also check the logs of each task separately for debugging purposes.&lt;/p&gt;
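
&lt;p&gt;The same execution can also be started from the CLI with something like this (same hypothetical job name as above):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud run jobs execute bq-export-job --region=$region
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;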

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F254noim9exbeu6ycxxzc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F254noim9exbeu6ycxxzc.png" alt="Tasks info" width="800" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the job is completed, we can check the result on Cloud Storage to verify that the files have been created as expected.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fccj4rlasirit7d4ugnaj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fccj4rlasirit7d4ugnaj.png" alt="Cloud storage results" width="800" height="203"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can see that the suffixes of the file names are the task indexes. We can also see the creation dates of the files: three of them were created at the same time thanks to parallelism. Now, open the files and verify that the contents are what is expected, depending on the index of each task.&lt;/p&gt;

&lt;p&gt;This is just a basic example to help you understand how Cloud Run jobs work. You can perform more complex tasks with them, as described in the use cases section.&lt;/p&gt;

&lt;p&gt;Hope this article will help 🚀&lt;/p&gt;

</description>
      <category>googlecloud</category>
      <category>serverless</category>
      <category>gcp</category>
      <category>docker</category>
    </item>
    <item>
      <title>Fix Cloud Run resource locations constraint error (Error 412)</title>
      <dc:creator>Thierry Njike</dc:creator>
      <pubDate>Sun, 26 Feb 2023 17:58:50 +0000</pubDate>
      <link>https://dev.to/zenika/fix-cloud-run-resource-locations-constraint-error-httperror-412-5ne</link>
      <guid>https://dev.to/zenika/fix-cloud-run-resource-locations-constraint-error-httperror-412-5ne</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq28aavux8e5ju7n9cq9k.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq28aavux8e5ju7n9cq9k.jpeg" alt=" " width="800" height="343"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You have a project that you would like to deploy on Google Cloud using Cloud Run, but due to your organization's restrictions, you get an error like the one below:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;ERROR: (gcloud.run.deploy) HTTPError 412: '$region' violates constraint 'constraints/gcpresourceLocations'&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This error can occur when your organization requires you to use a low carbon region, and not all Google Cloud regions satisfy this condition.&lt;br&gt;
Even if you pass the region as an argument, some steps of the deployment process are automated and out of your control. So, you have to do it another way.&lt;br&gt;
In this article, I will explain step by step how to solve this problem.&lt;/p&gt;

&lt;h2&gt;1 - Create the repository yourself&lt;/h2&gt;

&lt;p&gt;If you use the default gcloud run deploy command, it creates a repository in Artifact Registry and uses it to store the image to deploy.&lt;br&gt;
That repository is multi-region by default, which includes prohibited regions. So, you have to build the image yourself manually and specify it as an argument in the command. To do so, create a repository in Artifact Registry as shown below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmai3441b1gzbhtz4qanj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmai3441b1gzbhtz4qanj.png" alt=" " width="800" height="235"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When creating the repository, make sure to select a low carbon region. Once the repository is created, open it and copy its path.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2kfatloccon7epa6xnq6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2kfatloccon7epa6xnq6.png" alt=" " width="800" height="244"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;2 - Let's build the image&lt;/h2&gt;

&lt;p&gt;At this step, you should have created a repository. Now, let's build the image. Paste the command below in your Cloud Shell, at the same level as your Dockerfile:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud builds submit --region=$region --gcs-source-staging-dir=$path_to_the_cloud_storage_bucket --tag $path_to_the_repo/image_name:version

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
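
&lt;p&gt;For example, with hypothetical values and europe-west1 (a low carbon region), the command might look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud builds submit --region=europe-west1 --gcs-source-staging-dir=gs://my-build-bucket/source --tag europe-west1-docker.pkg.dev/my-project/my-repo/my-app:v1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;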



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvh8smuhw9fkdpyaw7njh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvh8smuhw9fkdpyaw7njh.png" alt=" " width="800" height="128"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The image was successfully built! Copy the image URL, you will need it in the next step.&lt;/p&gt;

&lt;h2&gt;3 - Deploy your image&lt;/h2&gt;

&lt;p&gt;Now that your image is created, let's deploy it on &lt;strong&gt;Cloud Run&lt;/strong&gt;. Paste the command below in your Cloud Shell:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud run deploy $service_name --image $image_url:tag
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
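
&lt;p&gt;With the same hypothetical values, the deployment might look like this (passing &lt;code&gt;--region&lt;/code&gt; explicitly keeps the service in an allowed region):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud run deploy my-app --image europe-west1-docker.pkg.dev/my-project/my-repo/my-app:v1 --region europe-west1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;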



&lt;p&gt;Now you should see your app's link in your Cloud Shell!&lt;/p&gt;

&lt;p&gt;Hope this article will help 🚀&lt;/p&gt;

</description>
      <category>gcp</category>
      <category>googlecloud</category>
      <category>docker</category>
    </item>
  </channel>
</rss>
