<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Thierry Njike</title>
    <description>The latest articles on DEV Community by Thierry Njike (@thierrynjike).</description>
    <link>https://dev.to/thierrynjike</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1023009%2F0acc24de-2ec1-4df4-9c4e-894249b2bb8c.png</url>
      <title>DEV Community: Thierry Njike</title>
      <link>https://dev.to/thierrynjike</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/thierrynjike"/>
    <language>en</language>
    <item>
      <title>When AI Becomes Expensive, Human Judgment Becomes Priceless</title>
      <dc:creator>Thierry Njike</dc:creator>
      <pubDate>Fri, 01 May 2026 14:17:16 +0000</pubDate>
      <link>https://dev.to/thierrynjike/when-ai-becomes-expensive-human-judgment-becomes-priceless-1oed</link>
      <guid>https://dev.to/thierrynjike/when-ai-becomes-expensive-human-judgment-becomes-priceless-1oed</guid>
      <description>&lt;p&gt;The headlines are loud and a little unsettling: AI is consuming staggering amounts of energy, companies are burning through cash to keep these systems running, and as a result, AI tool pricing is creeping steadily upward. For developers already anxious about being replaced by the very tools they're being asked to use, this feels like a particularly cruel plot twist.&lt;br&gt;
But here's the uncomfortable truth most think-pieces miss: the situation is far more nuanced than "AI is coming for your job." And the rising cost of AI might, paradoxically, be one of the best things to happen to the developer profession in years.&lt;br&gt;
Let's unpack it.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Fear Is Real, But Only Part of the Story
&lt;/h2&gt;

&lt;p&gt;It would be dishonest to pretend developers have nothing to worry about. The data tells a mixed story.&lt;br&gt;
Entry-level roles are genuinely under pressure. Employment for software developers aged 22–25 has declined nearly 20% from its 2022 peak, and entry-level tech hiring dropped 25% year-over-year in 2024 (Stack Overflow, 2025). Tech internship postings have fallen 30% since 2023 according to Handshake. These are real numbers affecting real people, especially those just starting their careers.&lt;br&gt;
Some executives have leaned in aggressively. Salesforce CEO Marc Benioff publicly stated the company stopped hiring engineers in 2025, pointing to AI productivity gains. Anthropic's own CEO Dario Amodei has speculated that AI could eventually eliminate up to 50% of entry-level jobs.&lt;br&gt;
So yes, there is a real disruption happening at the junior end of the market.&lt;/p&gt;

&lt;h2&gt;
  
  
  But Look at the Bigger Picture
&lt;/h2&gt;

&lt;p&gt;Zoom out, and the picture changes dramatically.&lt;br&gt;
Job openings for software developers on Indeed are up 11% annually, a faster rate than job postings overall. A Bank of America survey found that companies are not just maintaining but expanding their software budgets and increasing engineer headcounts. The U.S. Bureau of Labor Statistics projects software developer employment to grow 17.9% between 2023 and 2033 (CNN Business, 2026), nearly five times faster than the average for all occupations.&lt;br&gt;
Companies don't want less software. They want more of it, and AI is enabling that expansion. The question isn't "will there be developer jobs?" It's "what will those jobs look like?"&lt;br&gt;
IBM is a revealing case study here. Rather than cutting engineering staff, the company is tripling entry-level hiring in the United States. The role has simply evolved: instead of writing boilerplate code, developers now work directly with customers, specify features, and oversee AI-generated output. As IBM's General Manager of Automation and AI put it, the job shifted from "routine coding" to "being the person who directs the AI and understands the business well enough to catch its mistakes."&lt;/p&gt;

&lt;h2&gt;
  
  
  The Productivity Paradox: AI Isn't as Magic as We Thought
&lt;/h2&gt;

&lt;p&gt;Here's where it gets genuinely fascinating, and where the cost conversation becomes critical.&lt;br&gt;
In early 2025, a landmark study by METR (a non-profit AI safety organization) measured the real-world productivity impact of AI tools on experienced open-source developers. The result? Developers using AI tools actually took 19% longer to complete tasks than those working without AI. This directly contradicted what the developers believed: they estimated that AI was speeding them up by 20%.&lt;br&gt;
This isn't an argument against AI tools. The study itself acknowledged that models have improved rapidly since, and that developers in follow-up studies were so dependent on AI that many refused to work without it. But it does expose a truth the industry has been reluctant to admit: AI assistance is not free productivity. It comes with context-switching costs, hallucination-checking overhead, and prompt engineering time that often goes unaccounted for.&lt;br&gt;
Add to this the very real cost pressures companies are now facing. A 2026 survey of software engineers found that companies routinely spend $100–$200 per engineer per month on AI coding tools. Around 30% of developers regularly hit usage limits. Budget managers are "increasingly nervous" that AI-related costs are "headed only one way: up."&lt;br&gt;
The era of unlimited cheap AI is over. The tab is coming due.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Proof Is Already Here: GitHub Copilot Just Changed the Rules
&lt;/h2&gt;

&lt;p&gt;If you needed a concrete example of this cost shift in action, look no further than what happened on April 27, 2026.&lt;br&gt;
GitHub announced that all Copilot plans will transition to usage-based billing on June 1, 2026. Instead of flat subscriptions with a fixed number of "premium requests," users will now consume monthly allotments of GitHub AI Credits, calculated from actual token usage (input, output, and cached tokens) at published per-model API rates.&lt;br&gt;
The reasoning GitHub itself gave is telling: "Copilot is not the same product it was a year ago." The tool has evolved to power far more complex, agentic workflows that consume dramatically more compute. Flat-rate pricing is simply no longer sustainable.&lt;br&gt;
What does this mean in practice? Heavy users of agentic features (those running Copilot across pull request reviews, multi-step coding agents, and cloud-based workflows) will almost certainly see their costs increase. Code completions and basic suggestions remain unlimited, but every advanced AI interaction now has a price tag attached. Fallback experiences (where exhausting your quota would drop you to a cheaper model) are being retired entirely.&lt;br&gt;
This isn't a GitHub-specific quirk. It is a signal from the most widely-used AI coding tool in the world that the era of unlimited cheap AI assistance is definitively over.&lt;/p&gt;
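
&lt;p&gt;To make that arithmetic concrete, here is a minimal sketch of how usage-based cost scales with token counts. The per-million-token rates below are invented for illustration; GitHub's actual AI Credit rates are published per model and will differ.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# A rough sketch of usage-based billing arithmetic.
# All rates here are hypothetical, NOT GitHub's published rates.

def estimate_cost(input_tokens, output_tokens, cached_tokens,
                  in_rate=3.0, out_rate=15.0, cached_rate=0.3):
    """Estimate cost in dollars, given per-million-token rates."""
    million = 1_000_000
    return (input_tokens * in_rate
            + output_tokens * out_rate
            + cached_tokens * cached_rate) / million

# A single agentic session can consume millions of tokens:
print(f"${estimate_cost(2_000_000, 400_000, 5_000_000):.2f}")  # $13.50
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;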

&lt;h2&gt;
  
  
  Will Companies Abandon AI and Return to Manual Development?
&lt;/h2&gt;

&lt;p&gt;Bluntly: &lt;strong&gt;No&lt;/strong&gt;. That ship has sailed.&lt;/p&gt;

&lt;p&gt;A return to pre-AI workflows is essentially unthinkable for any company that has integrated these tools into their pipelines. The 2025 Stack Overflow Developer Survey found 80% of developers now use AI in their workflows. In the follow-up study, some developers described working without AI as feeling like "trying to get across the city walking when you're used to taking an Uber." The dependency is structural now.&lt;br&gt;
What will change is how companies approach AI costs. We're already seeing it: teams hitting limits switch tools, consolidate licenses, or move to API-based pricing for more control. The era of every developer having an unlimited premium AI subscription will give way to tiered access based on role and actual need.&lt;br&gt;
This creates a natural stratification, and here is where experienced developers have a profound advantage.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Developer Who Survives (and Thrives)
&lt;/h2&gt;

&lt;p&gt;The pattern emerging across companies, from IBM to Intuit to mid-sized firms, is consistent: &lt;strong&gt;the value of a senior developer is not decreasing. It is increasing.&lt;/strong&gt;&lt;br&gt;
Why? Because AI needs supervision. Code generated by AI tools accumulates what engineers are calling "AI slop": plausible-looking code that introduces subtle bugs, technical debt, or security vulnerabilities that only an experienced developer can catch. Junior developers who don't yet have the pattern recognition to audit AI output can actually deliver worse results than they would working manually, a problem that compounds as more of their daily work becomes AI-assisted.&lt;br&gt;
Companies covet what AI cannot replicate: deep domain understanding, architectural judgment, the ability to ask the right question before generating any code at all, and the wisdom to know when AI output shouldn't be trusted.&lt;br&gt;
The Stack Overflow survey puts this in sharp relief: 64% of developers do not see AI as a threat to their jobs, though this is down from previous years. The developers feeling most secure aren't the ones ignoring AI. They're the ones who've made themselves indispensable because of how well they use it.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Note for Junior Developers
&lt;/h2&gt;

&lt;p&gt;The concern is legitimate, and it deserves to be acknowledged honestly. The pathway that once existed (learn to code, get a junior role, grow into a senior engineer over years of mentored practice) has been disrupted. Entry-level roles are disappearing faster than they're being replaced by new AI-era equivalents.&lt;/p&gt;

&lt;p&gt;The answer is not panic. It's radical adaptation.&lt;/p&gt;

&lt;p&gt;The developers positioned best for this market are the ones who understand AI tools not just as coding assistants but as systems to be architected, evaluated, and directed. Prompt engineering, understanding model limitations, building AI-native workflows, and contributing to AI integration projects: these are the differentiators that matter now. Senior engineers want junior collaborators who understand these tools well enough to save them time, not create more review overhead.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;AI is expensive. It's getting more expensive. Companies are feeling the weight of that. But the answer to "should we keep paying for AI?" will almost always be yes because the alternative is falling behind competitors who do.&lt;br&gt;
The developer profession isn't dying. It's under enormous pressure to evolve, and that evolution is happening faster than most people expected. Rising AI costs actually create a healthier market: they force companies to be deliberate about how AI is used, which elevates the value of developers who understand how to use it well.&lt;br&gt;
The developer who should be afraid is the one who treats AI as either an existential threat to avoid or an infinite magic box to blindly trust. The developer who will thrive is the one who understands it as a powerful, expensive, imperfect tool, and has the skills to use it better than anyone else.&lt;br&gt;
That has always been the job description. The tool has just changed.&lt;/p&gt;

&lt;p&gt;What do you think? Are AI prices changing how your team approaches development? Share your experience in the comments.&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Stack Overflow — 2025 Developer Survey (December 2025) &lt;em&gt;Developers remain willing but reluctant to use AI: The 2025 Developer Survey results are here.&lt;/em&gt;&lt;br&gt;&lt;br&gt;
&lt;a href="https://stackoverflow.blog/2025/12/29/developers-remain-willing-but-reluctant-to-use-ai-the-2025-developer-survey-results-are-here/" rel="noopener noreferrer"&gt;https://stackoverflow.blog/2025/12/29/developers-remain-willing-but-reluctant-to-use-ai-the-2025-developer-survey-results-are-here/&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;CNN Business (April 2026) &lt;em&gt;The demise of software engineering jobs has been greatly exaggerated.&lt;/em&gt;&lt;br&gt;&lt;br&gt;
&lt;a href="https://www.cnn.com/2026/04/08/tech/ai-software-developer-jobs" rel="noopener noreferrer"&gt;https://www.cnn.com/2026/04/08/tech/ai-software-developer-jobs&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The Pragmatic Engineer (April 2026) &lt;em&gt;The impact of AI on software engineers in 2026: key trends.&lt;/em&gt;&lt;br&gt;&lt;br&gt;
&lt;a href="https://newsletter.pragmaticengineer.com/p/the-impact-of-ai-on-software-engineers-2026" rel="noopener noreferrer"&gt;https://newsletter.pragmaticengineer.com/p/the-impact-of-ai-on-software-engineers-2026&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;METR (July 2025) &lt;em&gt;Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity.&lt;/em&gt;&lt;br&gt;&lt;br&gt;
&lt;a href="https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/" rel="noopener noreferrer"&gt;https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;METR (February 2026) &lt;em&gt;We are Changing our Developer Productivity Experiment Design.&lt;/em&gt;&lt;br&gt;&lt;br&gt;
&lt;a href="https://metr.org/blog/2026-02-24-uplift-update/" rel="noopener noreferrer"&gt;https://metr.org/blog/2026-02-24-uplift-update/&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Stack Overflow (December 2025) &lt;em&gt;AI vs Gen Z: How AI has changed the career pathway for junior developers.&lt;/em&gt;&lt;br&gt;&lt;br&gt;
&lt;a href="https://stackoverflow.blog/2025/12/26/ai-vs-gen-z/" rel="noopener noreferrer"&gt;https://stackoverflow.blog/2025/12/26/ai-vs-gen-z/&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;U.S. Bureau of Labor Statistics (March 2025) &lt;em&gt;AI impacts in BLS employment projections — The Economics Daily.&lt;/em&gt;&lt;br&gt;&lt;br&gt;
&lt;a href="https://www.bls.gov/opub/ted/2025/ai-impacts-in-bls-employment-projections.htm" rel="noopener noreferrer"&gt;https://www.bls.gov/opub/ted/2025/ai-impacts-in-bls-employment-projections.htm&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Virtue Market Research (2025) &lt;em&gt;AI Developer Tools Market — Size, Share, Growth | 2025–2030.&lt;/em&gt;&lt;br&gt;&lt;br&gt;
&lt;a href="https://virtuemarketresearch.com/report/ai-developer-tools-market" rel="noopener noreferrer"&gt;https://virtuemarketresearch.com/report/ai-developer-tools-market&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;GitHub Blog (April 27, 2026) &lt;em&gt;GitHub Copilot is moving to usage-based billing.&lt;/em&gt;&lt;br&gt;&lt;br&gt;
&lt;a href="https://github.blog/news-insights/company-news/github-copilot-is-moving-to-usage-based-billing/" rel="noopener noreferrer"&gt;https://github.blog/news-insights/company-news/github-copilot-is-moving-to-usage-based-billing/&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;GitHub Docs (April 2026) &lt;em&gt;Preparing for your move to usage-based billing.&lt;/em&gt;&lt;br&gt;&lt;br&gt;
&lt;a href="https://docs.github.com/en/copilot/how-tos/manage-and-track-spending/prepare-for-your-move-to-usage-based-billing" rel="noopener noreferrer"&gt;https://docs.github.com/en/copilot/how-tos/manage-and-track-spending/prepare-for-your-move-to-usage-based-billing&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;GitHub Changelog (April 27, 2026) &lt;em&gt;GitHub Copilot code review will start consuming GitHub Actions minutes on June 1, 2026.&lt;/em&gt;&lt;br&gt;&lt;br&gt;
&lt;a href="https://github.blog/changelog/2026-04-27-github-copilot-code-review-will-start-consuming-github-actions-minutes-on-june-1-2026/" rel="noopener noreferrer"&gt;https://github.blog/changelog/2026-04-27-github-copilot-code-review-will-start-consuming-github-actions-minutes-on-june-1-2026/&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>vibecoding</category>
      <category>agents</category>
    </item>
    <item>
      <title>Cloud run jobs, your parallel tasks solution</title>
      <dc:creator>Thierry Njike</dc:creator>
      <pubDate>Mon, 19 Jun 2023 10:26:32 +0000</pubDate>
      <link>https://dev.to/zenika/cloud-run-jobs-your-parallel-tasks-solution-j05</link>
      <guid>https://dev.to/zenika/cloud-run-jobs-your-parallel-tasks-solution-j05</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F69ikbpf1qkb66rssf94n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F69ikbpf1qkb66rssf94n.png" alt="Multitask Cloud run job" width="800" height="426"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;We often need to accelerate our tasks without using a lot of resources. That's now possible on Cloud Run: Jobs is a brand-new serverless feature of Cloud Run, generally available since March 23rd, 2023. In this article, I will first compare Cloud Run and Cloud Functions (1st gen and 2nd gen), then explain how Cloud Run jobs work, then show some use cases where you could use Cloud Run jobs instead of other serverless options. Finally, there will be a basic demo to apply what is explained in the previous parts.&lt;/p&gt;

&lt;h2&gt;
  
  
  Comparison between serverless options
&lt;/h2&gt;

&lt;p&gt;This is a diagram that shows which product is more suitable depending on the job to perform.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmsyqcsh1mx5olhdpvknc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmsyqcsh1mx5olhdpvknc.png" alt="Serverless use cases" width="800" height="575"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How does it work?
&lt;/h2&gt;

&lt;p&gt;Cloud Run jobs can execute a single task or a group of tasks. When you create a job, you set the number of tasks your job contains. This number is saved as an environment variable that you can use directly in your code without defining it yourself. Each task is identified by its index, starting from 0, which is also saved as an environment variable directly usable in the code. So, after the creation of your job, Cloud Run provides 2 environment variables:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CLOUD_RUN_TASK_COUNT: the total number of tasks in the job&lt;/li&gt;
&lt;li&gt;CLOUD_RUN_TASK_INDEX: the index of the current task&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These environment variables are not visible on the job's configuration page; their names are set by convention.&lt;/p&gt;
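
&lt;p&gt;As a minimal sketch, any containerized script can read them directly; the fallback values below are only a convenience for local testing:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import os

# Cloud Run injects these two variables into every task; the fallback
# values are only a convenience for running the script locally.
task_index = int(os.environ.get("CLOUD_RUN_TASK_INDEX", 0))
task_count = int(os.environ.get("CLOUD_RUN_TASK_COUNT", 1))

print(f"Task {task_index + 1} of {task_count} starting")
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;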

&lt;p&gt;When creating a job, you must select an image to use. This image can be stored on Artifact Registry or Docker Hub. For other container registries, follow the steps described on &lt;a href="https://cloud.google.com/run/docs/deploying#other-registries" rel="noopener noreferrer"&gt;this page&lt;/a&gt;, but Google recommends using Artifact Registry. If you face an issue about a violated constraint (low carbon), follow the steps described in my &lt;a href="https://dev.to/zenika/fix-cloud-run-resource-locations-constraint-error-httperror-412-5ne"&gt;previous article&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;A job can be split into up to 10,000 tasks. Each task creates a new instance of the image and runs independently of the others. If a task fails, the job fails too, even if all the others ended successfully.&lt;/p&gt;

&lt;h2&gt;
  
  
  Use case examples of Cloud Run jobs
&lt;/h2&gt;

&lt;p&gt;1- Large dataset&lt;/p&gt;

&lt;p&gt;Let's suppose we have to process a large dataset of 1 million lines. Cloud Run jobs can help us split the dataset into several smaller datasets and process them separately: for example, we can split the job into 100 tasks and process 10,000 lines per task.&lt;/p&gt;

&lt;p&gt;2- Replications&lt;/p&gt;

&lt;p&gt;Imagine that we want to replicate data from 3 external databases to Cloud Storage. We can do it with a single Cloud Run job by assigning one task per database: depending on the index of the task, the corresponding database credentials will be used, without duplicating code, as sketched below.&lt;/p&gt;
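
&lt;p&gt;Here is a short sketch of that pattern. The hosts and secret names are hypothetical, and a real job would fetch the passwords from Secret Manager:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import os

# Hypothetical connection settings, one entry per external database.
# A real job would fetch the passwords from Secret Manager instead.
DATABASES = [
    {"host": "db-sales.example.com", "secret": "sales-db-password"},
    {"host": "db-hr.example.com", "secret": "hr-db-password"},
    {"host": "db-billing.example.com", "secret": "billing-db-password"},
]

# Each task picks its database by its own index (one task per database).
index = int(os.environ.get("CLOUD_RUN_TASK_INDEX", 0))
config = DATABASES[index]
print(f"Task {index}: replicating {config['host']} to Cloud Storage")
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;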

&lt;p&gt;3- Unsupported language&lt;/p&gt;

&lt;p&gt;Cloud Functions supports only 7 languages (Node.js, Python, Go, Java, C#, Ruby and PHP). So, you won't be able to use Cloud Functions with Bash code, for example. One of the advantages of Cloud Run jobs is that the language does not matter, because the job runs a container image. You just have to create your image and set an entry point.&lt;/p&gt;

&lt;p&gt;We can imagine many use cases for Cloud Run jobs. Now let's jump into an example to show you how to use them from the console.&lt;/p&gt;

&lt;h2&gt;
  
  
  Example
&lt;/h2&gt;

&lt;p&gt;In this example, we will create a Cloud Run job with 5 tasks. The Python code writes the result of a BigQuery query to Cloud Storage. The BigQuery dataframe result will be split into 5 parts, and each part will be written to a separate file in CSV format.&lt;/p&gt;

&lt;p&gt;1- Let's write the code&lt;/p&gt;

&lt;p&gt;If you use the same code to test, do not forget to set your environment variables when creating the job. The Python code is below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# librairies imports
&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;pandas&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;pd&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;numpy&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;google.cloud&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;bigquery&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;storage&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;dotenv&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;load_dotenv&lt;/span&gt;


&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;run_query&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;project_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;dataset&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;table&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;

    &lt;span class="c1"&gt;# create a bigquery client
&lt;/span&gt;    &lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;bigquery&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Client&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="n"&gt;query&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;SELECT *
    FROM `&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;project_id&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;.&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;dataset&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;.&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;table&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;`
    LIMIT 1000
    &lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="c1"&gt;#Notes : 
&lt;/span&gt;    &lt;span class="c1"&gt;#avoid SELECT * in real problems. we use it here just to illustrate
&lt;/span&gt;    &lt;span class="c1"&gt;#LIMIT 1000 does not have impact on the cost, the same amount of data are retrieved but filtered in the result.
&lt;/span&gt;
    &lt;span class="c1"&gt;# run the sql query
&lt;/span&gt;    &lt;span class="n"&gt;query_job&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;query&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# we convert the iterator object into pandas dataframe
&lt;/span&gt;    &lt;span class="n"&gt;rows&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;row&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;query_job&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;result&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
        &lt;span class="n"&gt;rows&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;row&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;items&lt;/span&gt;&lt;span class="p"&gt;()))&lt;/span&gt;

    &lt;span class="n"&gt;df&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;pd&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;DataFrame&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;rows&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;


&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;__name__&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;__main__&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;

    &lt;span class="c1"&gt;# we load all the environment variables
&lt;/span&gt;    &lt;span class="nf"&gt;load_dotenv&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="c1"&gt;# we get all the environment variables
&lt;/span&gt;    &lt;span class="n"&gt;project_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;PROJECT_ID&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;bucket_name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;BUCKET_NAME&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;dataset&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;DATASET&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;table&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;TABLE&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;index&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;int&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;CLOUD_RUN_TASK_INDEX&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; 
    &lt;span class="n"&gt;nb_task&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;int&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;CLOUD_RUN_TASK_COUNT&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;


    &lt;span class="c1"&gt;# the filename root
&lt;/span&gt;    &lt;span class="n"&gt;filename&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;test-parallel-task&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

    &lt;span class="c1"&gt;# we run the query and get the result as a dataframe and the length of the dataframe
&lt;/span&gt;    &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;n&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;run_query&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;project_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;dataset&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;table&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# the length of each task dataframe
&lt;/span&gt;    &lt;span class="n"&gt;len_task_df&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;n&lt;/span&gt;&lt;span class="o"&gt;//&lt;/span&gt;&lt;span class="n"&gt;nb_task&lt;/span&gt;
    &lt;span class="n"&gt;begin&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;index&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;len_task_df&lt;/span&gt;
    &lt;span class="n"&gt;end&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;begin&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;len_task_df&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;index&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="n"&gt;nb_task&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="n"&gt;n&lt;/span&gt;   &lt;span class="c1"&gt;#we write like this to avoid data loss in case of imperfect division
&lt;/span&gt;
    &lt;span class="c1"&gt;# we write the corresponding file on cloud storage
&lt;/span&gt;    &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;begin&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="n"&gt;end&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;to_csv&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;gs://&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;bucket_name&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;/&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;filename&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;_&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;index&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;.csv&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see in the last line of the code, I'm writing directly to Cloud Storage using pandas. This is only possible if you add the &lt;code&gt;gcsfs&lt;/code&gt; library to your &lt;code&gt;requirements.txt&lt;/code&gt;. Your &lt;code&gt;requirements.txt&lt;/code&gt; should look like below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcsfs==2023.6.0
google-cloud-bigquery==3.11.1
google-cloud-storage==2.9.0
numpy==1.24.3
pandas==2.0.2
python-dotenv==1.0.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: you can use any other language you want to perform this, but for this case, you can only use a language supported by the GCP client libraries.&lt;/p&gt;

&lt;p&gt;2- Image creation&lt;/p&gt;

&lt;p&gt;To create the image to use, let's write the Dockerfile first:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# we use the version 3.10 of the python image &lt;/span&gt;
FROM python:3.10

&lt;span class="c"&gt;# we define a work directory&lt;/span&gt;
WORKDIR /app

&lt;span class="c"&gt;# we copy the code dir into the work directory&lt;/span&gt;
COPY requirements.txt /app

&lt;span class="c"&gt;# we install the dependencies&lt;/span&gt;
RUN pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--no-cache-dir&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; requirements.txt

&lt;span class="c"&gt;# we copy the code dir into the work directory&lt;/span&gt;
COPY &lt;span class="nb"&gt;.&lt;/span&gt; /app

&lt;span class="c"&gt;# we execute the code with the following command&lt;/span&gt;
CMD &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="s2"&gt;"python"&lt;/span&gt;, &lt;span class="s2"&gt;"main.py"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, build your image. If you use GCP Artifact Registry, follow parts 1 and 2 of my &lt;a href="https://dev.to/zenika/fix-cloud-run-resource-locations-constraint-error-httperror-412-5ne"&gt;previous article&lt;/a&gt; to build it.&lt;/p&gt;

&lt;p&gt;3- Job creation&lt;br&gt;
From the GCP console, search for &lt;strong&gt;Cloud Run&lt;/strong&gt;, select the JOBS tab and click on CREATE JOB.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fma6l7hzh0y69ihydag6w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fma6l7hzh0y69ihydag6w.png" alt="job creation" width="800" height="83"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then, fill in the first part of the form. If you use GCP Artifact Registry, use the SELECT button to browse and find your image. In the number of tasks field, enter 5.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffadf4r3ae9incq7tydzx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffadf4r3ae9incq7tydzx.png" alt="job info" width="800" height="717"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, click on the arrow to reveal the configuration section. Switch between tabs to configure your job as you want, then click on CREATE.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuipaheffc9t219u1i7gm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuipaheffc9t219u1i7gm.png" alt="job config" width="800" height="864"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once created, your job should appear in the job list when you select the JOBS tab on the Cloud Run homepage.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjyczn5yq4jlzw0m6nme8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjyczn5yq4jlzw0m6nme8.png" alt="jobs tab" width="800" height="201"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on the job and switch between tabs to see the job info. The History tab is empty because there is no execution yet. To schedule your job, click on the Triggers tab and set a trigger.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwj7fsd6k56l487obsfee.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwj7fsd6k56l487obsfee.png" alt="job tabs" width="800" height="240"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on &lt;strong&gt;EXECUTE&lt;/strong&gt; to start the job, then return to the History tab to see the changes. You should see an execution in progress. If you click on the execution, you will see the progress of each task. To check the parallelism, you can click on each task to see its start time. You can also check the logs of each task separately for debugging purposes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F254noim9exbeu6ycxxzc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F254noim9exbeu6ycxxzc.png" alt="Tasks info" width="800" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the job is completed, we can check Cloud Storage to verify that the files have been created as expected.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fccj4rlasirit7d4ugnaj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fccj4rlasirit7d4ugnaj.png" alt="Cloud storage results" width="800" height="203"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can see that the suffixes of the file names are the task indexes. We can also see the creation dates of the files: 3 of them were created at the same time thanks to parallelism. Now, open the files and verify that their contents are what is expected, depending on the index of each task.&lt;/p&gt;

&lt;p&gt;This example is a basic one to help you understand how Cloud Run jobs work. We can perform more complex tasks with them, as described in the use cases section.&lt;/p&gt;

&lt;p&gt;Hope this article will help 🚀&lt;/p&gt;

</description>
      <category>googlecloud</category>
      <category>serverless</category>
      <category>gcp</category>
      <category>docker</category>
    </item>
    <item>
      <title>Fix Cloud run resource locations constraint error (Error 412)</title>
      <dc:creator>Thierry Njike</dc:creator>
      <pubDate>Sun, 26 Feb 2023 17:58:50 +0000</pubDate>
      <link>https://dev.to/zenika/fix-cloud-run-resource-locations-constraint-error-httperror-412-5ne</link>
      <guid>https://dev.to/zenika/fix-cloud-run-resource-locations-constraint-error-httperror-412-5ne</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq28aavux8e5ju7n9cq9k.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq28aavux8e5ju7n9cq9k.jpeg" alt=" " width="800" height="343"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You have a project that you would like to deploy on Google Cloud using Cloud Run, but due to your organisation's restrictions, you get an error like the one below:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;ERROR: (gcloud.run.deploy) HTTPError 412: '$region' violates constraint 'constraints/gcp.resourceLocations'&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This error usually occurs because your organization's policy requires low-carbon regions, and not all Google Cloud regions satisfy this condition.&lt;br&gt;
Even if you pass the region as an argument, some steps of the deployment process are automated and out of your control, so you have to proceed another way.&lt;br&gt;
In this article, I will explain step by step how to solve this problem.&lt;/p&gt;

&lt;h2&gt;
  
  
  1 - Create the repository yourself
&lt;/h2&gt;


&lt;p&gt;If we use the default gcloud run deploy command, it will create a repository in Artifact Registry and use it to store the image to deploy.&lt;br&gt;
That repository is multi-region by default, which includes prohibited regions. So, you have to build the image yourself and specify it as an argument in the command. To do so, create a repository in Artifact Registry as shown below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmai3441b1gzbhtz4qanj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmai3441b1gzbhtz4qanj.png" alt=" " width="800" height="236"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When creating the repository, make sure to select a low carbon region. Once the repository is created, open it and copy its path.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2kfatloccon7epa6xnq6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2kfatloccon7epa6xnq6.png" alt=" " width="800" height="245"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  2 - Let's build the image
&lt;/h2&gt;


&lt;p&gt;At this step, you should have created a repository. Now, let's build the image. Paste the command below in your cloud shell, at the same level as your Dockerfile:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud builds submit --region=$region --gcs-source-staging-dir=$path_to_the_cloud_storage_bucket --tag $path_to_the_repo/image_name:version

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvh8smuhw9fkdpyaw7njh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvh8smuhw9fkdpyaw7njh.png" alt=" " width="800" height="128"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The image was successfully built! Copy the image URL; you will need it in the next step.&lt;/p&gt;
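
&lt;p&gt;For reference, with illustrative values (a low-carbon region like europe-west1, and placeholder project, bucket and repository names), the command could look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud builds submit \
    --region=europe-west1 \
    --gcs-source-staging-dir=gs://my-build-bucket/source \
    --tag europe-west1-docker.pkg.dev/my-project/my-repo/my-app:v1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;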


&lt;h2&gt;
  
  
  3 - Deploy your image
&lt;/h2&gt;

&lt;p&gt;Now that your image is built, let's deploy it on &lt;strong&gt;Cloud Run&lt;/strong&gt;. Paste the command below in your cloud shell:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud run deploy $service_name --image $image_url:tag
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
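
&lt;p&gt;For example, with the image built at the previous step (illustrative values again), passing --region keeps the deployment in an allowed low-carbon region:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud run deploy my-app \
    --image europe-west1-docker.pkg.dev/my-project/my-repo/my-app:v1 \
    --region=europe-west1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;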



&lt;p&gt;Now you should have your app link in your cloud shell!&lt;/p&gt;

&lt;p&gt;Hope this article will help 🚀&lt;/p&gt;

</description>
      <category>gcp</category>
      <category>googlecloud</category>
      <category>docker</category>
    </item>
  </channel>
</rss>
