<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Guillaume Renaudin</title>
    <description>The latest articles on DEV Community by Guillaume Renaudin (@rguillome).</description>
    <link>https://dev.to/rguillome</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F266187%2F725c8af9-6578-4cd0-97e5-697070a42244.png</url>
      <title>DEV Community: Guillaume Renaudin</title>
      <link>https://dev.to/rguillome</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/rguillome"/>
    <language>en</language>
    <item>
      <title>Multi-tenant in Airflow is almost there</title>
      <dc:creator>Guillaume Renaudin</dc:creator>
      <pubDate>Mon, 17 Mar 2025 08:44:39 +0000</pubDate>
      <link>https://dev.to/zenika/multi-tenant-in-airflow-is-almost-there-1fdp</link>
      <guid>https://dev.to/zenika/multi-tenant-in-airflow-is-almost-there-1fdp</guid>
      <description>&lt;p&gt;&lt;em&gt;Photo of Oksana Lyniv by Oliver Wolf pic (&lt;a href="https://creativecommons.org/licenses/by/4.0/" rel="noopener noreferrer"&gt;License CC BY 4.0&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;When considering &lt;em&gt;multi-tenancy&lt;/em&gt;, it is often associated with SaaS applications. However, most of my clients do not feel concerned by it because they typically address internal needs.&lt;/p&gt;

&lt;p&gt;But these days, a new paradigm is rising: &lt;strong&gt;Platform Engineering&lt;/strong&gt;. It does not target only operational software but also data-related products. &lt;br&gt;
We need to develop accessible data tools for corporate end users, such as dashboards and ETL services.&lt;/p&gt;

&lt;p&gt;Therefore, providing Airflow with multi-tenant capabilities is essential.&lt;/p&gt;

&lt;p&gt;Nearly four years ago, someone asked the community on Stack Overflow &lt;a href="https://stackoverflow.com/a/68621317" rel="noopener noreferrer"&gt;how to provide a multi-team feature with Airflow&lt;/a&gt;. Jarek Potiuk, a leading Airflow contributor, provided a very comprehensive &lt;a href="https://stackoverflow.com/a/68621317" rel="noopener noreferrer"&gt;answer&lt;/a&gt;. In summary, he explained that Airflow did not yet provide a multi-tenancy feature. He added that even if this could be designed and implemented in &lt;strong&gt;Airflow 3&lt;/strong&gt; in the following months, multi-tenancy should still be addressed with &lt;strong&gt;multiple Airflow instances&lt;/strong&gt; in contexts where isolation is a must-have.&lt;/p&gt;

&lt;p&gt;Finally, &lt;a href="https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=89066609" rel="noopener noreferrer"&gt;AIP-1&lt;/a&gt;, an &lt;strong&gt;A&lt;/strong&gt;irflow &lt;strong&gt;I&lt;/strong&gt;mprovement &lt;strong&gt;P&lt;/strong&gt;roposal aimed at enhancing Airflow security across the whole DAG lifecycle (from submission to execution), started feeding the discussion about the multi-tenancy design.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cwiki.apache.org/confluence/display/AIRFLOW/AIP-43+DAG+Processor+separation" rel="noopener noreferrer"&gt;AIP-43&lt;/a&gt; and &lt;a href="https://cwiki.apache.org/confluence/display/AIRFLOW/AIP-44+Airflow+Internal+API" rel="noopener noreferrer"&gt;AIP-44&lt;/a&gt; provided an initial solution to implement this feature as early as Airflow 2. 🤩&lt;/p&gt;

&lt;p&gt;⚠️ As described in &lt;a href="https://cwiki.apache.org/confluence/display/AIRFLOW/AIP-44+Airflow+Internal+API" rel="noopener noreferrer"&gt;AIP-44&lt;/a&gt;, the Airflow 2 changes are &lt;em&gt;experimental&lt;/em&gt;, and a PR already exists to remove them in Airflow 3. You need to run at least Airflow 2.10.4.&lt;/p&gt;

&lt;p&gt;Of course, the previously mentioned AIPs do not provide a complete multi-tenant feature, because resource isolation (such as variables and connections) between tasks is still missing. This is one of the goals of &lt;a href="https://cwiki.apache.org/confluence/display/AIRFLOW/AIP-72+Task+Execution+Interface+aka+Task+SDK" rel="noopener noreferrer"&gt;AIP-72&lt;/a&gt;, which will only be implemented in Airflow 3.&lt;/p&gt;

&lt;p&gt;With Airflow 2, the only isolation that can be offered relies on team-specific worker instances, and therefore on worker-specific configuration (environment variables or local files).&lt;/p&gt;

&lt;p&gt;The next part demonstrates how to configure Airflow 2 to isolate DAG code between teams in a company, and how to make sure that team A cannot access resources (machines, databases, file shares, etc.) owned by another team.&lt;/p&gt;


&lt;h2&gt;
  
  
  How-to Implement Sort of Multi-Tenancy with Airflow 2
&lt;/h2&gt;

&lt;p&gt;In the next steps, we will: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Configure Airflow to load DAGs from team-dedicated directories&lt;/li&gt;
&lt;li&gt;Give each user permission to see only the DAGs related to their team&lt;/li&gt;
&lt;li&gt;Attach one worker to each team, so that share rights and network policies are team-specific&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;All the steps start from the &lt;a href="https://airflow.apache.org/docs/apache-airflow/stable/howto/docker-compose/index.html#running-airflow-in-docker" rel="noopener noreferrer"&gt;Airflow Docker installation&lt;/a&gt;. You must first follow all of its installation steps before going through the next instructions.&lt;br&gt;
‼️ Stop just before running &lt;code&gt;docker compose up&lt;/code&gt; at the &lt;a href="https://airflow.apache.org/docs/apache-airflow/stable/howto/docker-compose/index.html#running-airflow" rel="noopener noreferrer"&gt;Running Airflow paragraph&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;
  
  
  Step 1: Split the DAG Processor from the Scheduler
&lt;/h3&gt;

&lt;p&gt;According to the &lt;a href="https://airflow.apache.org/docs/apache-airflow/stable/authoring-and-scheduling/dagfile-processing.html" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;, set the environment variable &lt;code&gt;AIRFLOW__SCHEDULER__STANDALONE_DAG_PROCESSOR&lt;/code&gt; to &lt;code&gt;True&lt;/code&gt;.&lt;br&gt;
So, in the &lt;code&gt;docker-compose.yaml&lt;/code&gt; file, add a new line:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight diff"&gt;&lt;code&gt;    _PIP_ADDITIONAL_REQUIREMENTS: ${_PIP_ADDITIONAL_REQUIREMENTS:-}
    # The following line can be used to set a custom config file, stored in the local config folder
    # If you want to use it, outcomment it and replace airflow.cfg with the name of your config file
    # AIRFLOW_CONFIG: '/opt/airflow/config/airflow.cfg'
    + AIRFLOW__SCHEDULER__STANDALONE_DAG_PROCESSOR: "true"
    AIRFLOW__WEBSERVER__EXPOSE_CONFIG: "true"
    AIRFLOW__LOGGING__DAG_PROCESSOR_LOG_LEVEL: "DEBUG"
    AIRFLOW__LOGGING__LOGGING_LEVEL: "INFO"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 2: Define a Dag Processor for Each Team
&lt;/h3&gt;

&lt;p&gt;We will assume that we have two teams, &lt;strong&gt;team1&lt;/strong&gt; and &lt;strong&gt;team2&lt;/strong&gt;, and that each of them has a dedicated directory under &lt;code&gt;/opt/airflow/dags&lt;/code&gt;, which is the default dags folder.&lt;/p&gt;
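
&lt;p&gt;If you start from the vanilla Docker Compose setup, these two sub-directories do not exist yet. Here is a minimal sketch to create them on the host, assuming the default &lt;code&gt;./dags&lt;/code&gt; bind mount of the official &lt;code&gt;docker-compose.yaml&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Create one DAG directory per team on the host.
# The official docker-compose.yaml mounts ./dags to /opt/airflow/dags.
mkdir -p dags/team1 dags/team2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;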

&lt;p&gt;Here are two new services, one for each processor in the &lt;code&gt;docker-compose.yaml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;  &lt;span class="na"&gt;airflow-dag-processor-team1&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="nv"&gt;*airflow-common&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dag-processor --subdir /opt/airflow/dags/team1/&lt;/span&gt;
    &lt;span class="na"&gt;healthcheck&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;test&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;CMD"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;curl"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;--fail"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;http://localhost:8974/health"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;interval&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;30s&lt;/span&gt;
      &lt;span class="na"&gt;timeout&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;10s&lt;/span&gt;
      &lt;span class="na"&gt;retries&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;
      &lt;span class="na"&gt;start_period&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;30s&lt;/span&gt;
    &lt;span class="na"&gt;restart&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;always&lt;/span&gt;
    &lt;span class="na"&gt;depends_on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="nv"&gt;*airflow-common-depends-on&lt;/span&gt;
      &lt;span class="na"&gt;airflow-init&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;condition&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;service_completed_successfully&lt;/span&gt;

  &lt;span class="na"&gt;airflow-dag-processor-team2&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="nv"&gt;*airflow-common&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dag-processor --subdir /opt/airflow/dags/team2/&lt;/span&gt;
    &lt;span class="na"&gt;healthcheck&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;test&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;CMD"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;curl"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;--fail"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;http://localhost:8974/health"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;interval&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;30s&lt;/span&gt;
      &lt;span class="na"&gt;timeout&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;10s&lt;/span&gt;
      &lt;span class="na"&gt;retries&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;
      &lt;span class="na"&gt;start_period&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;30s&lt;/span&gt;
    &lt;span class="na"&gt;restart&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;always&lt;/span&gt;
    &lt;span class="na"&gt;depends_on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="nv"&gt;*airflow-common-depends-on&lt;/span&gt;
      &lt;span class="na"&gt;airflow-init&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;condition&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;service_completed_successfully&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, the command is &lt;code&gt;dag-processor&lt;/code&gt; followed by the &lt;code&gt;--subdir&lt;/code&gt; argument and its value, the path to the team's DAGs directory.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: Configure a Worker for Each Team
&lt;/h3&gt;

&lt;p&gt;Replace the default worker:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;  &lt;span class="na"&gt;airflow-worker&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="nv"&gt;*airflow-common&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;celery worker&lt;/span&gt;
&lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;...&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;  &lt;span class="na"&gt;airflow-worker-team1&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="nv"&gt;*airflow-common&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;celery worker -q team1&lt;/span&gt;
    &lt;span class="na"&gt;healthcheck&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="c1"&gt;# yamllint disable rule:line-length&lt;/span&gt;
      &lt;span class="na"&gt;test&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;CMD-SHELL"&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;celery&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;--app&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;airflow.providers.celery.executors.celery_executor.app&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;inspect&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;ping&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;-d&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;"celery@$${HOSTNAME}"&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;||&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;celery&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;--app&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;airflow.executors.celery_executor.app&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;inspect&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;ping&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;-d&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;"celery@$${HOSTNAME}"'&lt;/span&gt;
      &lt;span class="na"&gt;interval&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;30s&lt;/span&gt;
      &lt;span class="na"&gt;timeout&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;10s&lt;/span&gt;
      &lt;span class="na"&gt;retries&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;
      &lt;span class="na"&gt;start_period&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;30s&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="nv"&gt;*airflow-common-env&lt;/span&gt;
      &lt;span class="c1"&gt;# Required to handle warm shutdown of the celery workers properly&lt;/span&gt;
      &lt;span class="c1"&gt;# See https://airflow.apache.org/docs/docker-stack/entrypoint.html#signal-propagation&lt;/span&gt;
      &lt;span class="na"&gt;DUMB_INIT_SETSID&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;0"&lt;/span&gt;
    &lt;span class="na"&gt;restart&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;always&lt;/span&gt;
    &lt;span class="na"&gt;depends_on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="nv"&gt;*airflow-common-depends-on&lt;/span&gt;
      &lt;span class="na"&gt;airflow-init&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;condition&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;service_completed_successfully&lt;/span&gt;

  &lt;span class="na"&gt;airflow-worker-team2&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="nv"&gt;*airflow-common&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;celery worker -q team2&lt;/span&gt;
    &lt;span class="na"&gt;healthcheck&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="c1"&gt;# yamllint disable rule:line-length&lt;/span&gt;
      &lt;span class="na"&gt;test&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;CMD-SHELL"&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;celery&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;--app&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;airflow.providers.celery.executors.celery_executor.app&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;inspect&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;ping&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;-d&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;"celery@$${HOSTNAME}"&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;||&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;celery&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;--app&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;airflow.executors.celery_executor.app&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;inspect&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;ping&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;-d&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;"celery@$${HOSTNAME}"'&lt;/span&gt;
      &lt;span class="na"&gt;interval&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;30s&lt;/span&gt;
      &lt;span class="na"&gt;timeout&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;10s&lt;/span&gt;
      &lt;span class="na"&gt;retries&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;
      &lt;span class="na"&gt;start_period&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;30s&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="nv"&gt;*airflow-common-env&lt;/span&gt;
      &lt;span class="c1"&gt;# Required to handle warm shutdown of the celery workers properly&lt;/span&gt;
      &lt;span class="c1"&gt;# See https://airflow.apache.org/docs/docker-stack/entrypoint.html#signal-propagation&lt;/span&gt;
      &lt;span class="na"&gt;DUMB_INIT_SETSID&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;0"&lt;/span&gt;
    &lt;span class="na"&gt;restart&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;always&lt;/span&gt;
    &lt;span class="na"&gt;depends_on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="nv"&gt;*airflow-common-depends-on&lt;/span&gt;
      &lt;span class="na"&gt;airflow-init&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;condition&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;service_completed_successfully&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, each worker runs with the argument &lt;code&gt;-q &amp;lt;team_name&amp;gt;&lt;/code&gt;, so it will execute only tasks from its configured queue.&lt;/p&gt;

&lt;p&gt;ℹ️ The dedicated team worker is where you can enforce the security of connections, resources (network policies), or variables and make them specific to a team.&lt;/p&gt;
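
&lt;p&gt;For instance, connections can be made worker-local through the standard &lt;code&gt;AIRFLOW_CONN_&lt;/code&gt; environment variable mechanism. The sketch below is only an illustration with hypothetical names (a &lt;code&gt;team1_db&lt;/code&gt; connection and a &lt;code&gt;team1-db&lt;/code&gt; host); in a real setup you would declare the variable in the &lt;code&gt;environment&lt;/code&gt; block of the team1 worker only:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Hypothetical team-specific connection, injected only into team1's worker.
docker compose run -e AIRFLOW_CONN_TEAM1_DB='postgres://team1:change-me@team1-db:5432/team1' \
  airflow-worker-team1 airflow connections get team1_db

# The same lookup from team2's worker should fail, since the variable is not defined there.
docker compose run airflow-worker-team2 airflow connections get team1_db
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;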

&lt;h3&gt;
  
  
  Step 4: Add a Cluster Policy to Redirect Tasks
&lt;/h3&gt;

&lt;p&gt;To route each DAG's task to a specific worker, we need to build a &lt;a href="https://airflow.apache.org/docs/apache-airflow/stable/administration-and-deployment/cluster-policies.html#airflow.policies.task_policy" rel="noopener noreferrer"&gt;cluster task policy&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In the &lt;code&gt;config&lt;/code&gt; directory under &lt;code&gt;AIRFLOW_HOME&lt;/code&gt;, add a file &lt;code&gt;airflow_local_settings.py&lt;/code&gt; with the following content:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;airflow.models&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;DAG&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;BaseOperator&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;airflow.policies&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;hookimpl&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;airflow.exceptions&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;AirflowClusterPolicyViolation&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;re&lt;/span&gt;

&lt;span class="n"&gt;pattern&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sa"&gt;r&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;^([^/]+)/.+\.py$&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;


&lt;span class="nd"&gt;@hookimpl&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;task_policy&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;task&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;BaseOperator&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;

    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Task policy activated : task.dag.filepath : &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;task&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;dag&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;filepath&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;match&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;re&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;search&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;pattern&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;task&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;dag&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;filepath&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;match&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;task&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;queue&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;match&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;group&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;else&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;raise&lt;/span&gt; &lt;span class="nc"&gt;AirflowClusterPolicyViolation&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;DAG &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;task&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;dag&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;dag_id&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; is not in the correct path location. File path: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;task&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;dag&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;filepath&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="p"&gt;)&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;From the task's DAG filepath (or fileloc, since filepath is deprecated), the name of the team is extracted, and for each task the corresponding queue is targeted with &lt;code&gt;task.queue = match.group(1)&lt;/code&gt;.&lt;/p&gt;
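
&lt;p&gt;As a quick sanity check, here is what the regular expression extracts from a relative DAG file path (the file name below is just a hypothetical example):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# The first path component (the team directory) becomes the Celery queue.
python3 -c 'import re; print(re.search(r"^([^/]+)/.+\.py$", "team1/tutorial_team1.py").group(1))'
# prints: team1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;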

&lt;p&gt;⚠️ It's important to understand that security relies on the access rights of each team's DAG directory. So you must control who can add (write) a DAG to this directory, whether it is a human user or a service account assigned to a tool like a CI/CD job. &lt;/p&gt;

&lt;h3&gt;
  
  
  Step 5: Add DAG Samples to Test
&lt;/h3&gt;

&lt;p&gt;We need to add two DAGs: one in the directory &lt;code&gt;dags/team1&lt;/code&gt; and the other in the directory &lt;code&gt;dags/team2&lt;/code&gt;.&lt;br&gt;
For testing purposes, we choose this &lt;a href="https://airflow.apache.org/docs/apache-airflow/stable/_modules/airflow/example_dags/tutorial.html" rel="noopener noreferrer"&gt;sample&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;All you need to do is modify the DAG name, add &lt;code&gt;access_control&lt;/code&gt;, optionally update its description, and maybe add a tag with the team name.&lt;/p&gt;

&lt;p&gt;Example for the team1 DAG:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nc"&gt;DAG&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;tutorial_team1&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="c1"&gt;# These args will get passed on to each operator
&lt;/span&gt;    &lt;span class="c1"&gt;# You can override them on a per-task basis during operator initialization
&lt;/span&gt;    &lt;span class="n"&gt;default_args&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;depends_on_past&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;email&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;airflow@example.com&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;email_on_failure&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;email_on_retry&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;retries&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;retry_delay&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;timedelta&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;minutes&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="c1"&gt;# 'queue': 'bash_queue',
&lt;/span&gt;        &lt;span class="c1"&gt;# 'pool': 'backfill',
&lt;/span&gt;        &lt;span class="c1"&gt;# 'priority_weight': 10,
&lt;/span&gt;        &lt;span class="c1"&gt;# 'end_date': datetime(2016, 1, 1),
&lt;/span&gt;        &lt;span class="c1"&gt;# 'wait_for_downstream': False,
&lt;/span&gt;        &lt;span class="c1"&gt;# 'sla': timedelta(hours=2),
&lt;/span&gt;        &lt;span class="c1"&gt;# 'execution_timeout': timedelta(seconds=300),
&lt;/span&gt;        &lt;span class="c1"&gt;# 'on_failure_callback': some_function, # or list of functions
&lt;/span&gt;        &lt;span class="c1"&gt;# 'on_success_callback': some_other_function, # or list of functions
&lt;/span&gt;        &lt;span class="c1"&gt;# 'on_retry_callback': another_function, # or list of functions
&lt;/span&gt;        &lt;span class="c1"&gt;# 'sla_miss_callback': yet_another_function, # or list of functions
&lt;/span&gt;        &lt;span class="c1"&gt;# 'on_skipped_callback': another_function, #or list of functions
&lt;/span&gt;        &lt;span class="c1"&gt;# 'trigger_rule': 'all_success'
&lt;/span&gt;    &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="n"&gt;description&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;A simple tutorial DAG team1&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;schedule&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nf"&gt;timedelta&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;days&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="n"&gt;start_date&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nf"&gt;datetime&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;2021&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="n"&gt;catchup&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;access_control&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Team1&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;can_read&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;can_edit&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;can_delete&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;   
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="n"&gt;tags&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;team1&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;dag&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
&lt;span class="p"&gt;[...]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;🔒 Note the &lt;code&gt;access_control&lt;/code&gt; attribute which, associated with permissions and roles, controls which users can interact with those DAGs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 6: Provide Team Users and Give Them Rights
&lt;/h3&gt;

&lt;p&gt;First, we need to create two roles: one for each team. Of course, you could create more than one role per team, for example to separate viewers and operators.&lt;br&gt;
We also need to add permissions that will associate DAGs with roles.&lt;/p&gt;

&lt;p&gt;With the Airflow CLI provided by the Docker Compose setup, run: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A role creation command:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose run airflow-worker-team1 airflow roles create Team1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Add standard permissions to the role. First, launch a shell:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose run airflow-worker-team1 bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Then launch all of these:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Associative array to hold resources and their corresponding actions&lt;/span&gt;
&lt;span class="nb"&gt;declare&lt;/span&gt; &lt;span class="nt"&gt;-A&lt;/span&gt; &lt;span class="nv"&gt;permissions&lt;/span&gt;&lt;span class="o"&gt;=(&lt;/span&gt;
  &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"My Password"&lt;/span&gt;&lt;span class="o"&gt;]=&lt;/span&gt;&lt;span class="s2"&gt;"can_edit can_read"&lt;/span&gt;
  &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"My Profile"&lt;/span&gt;&lt;span class="o"&gt;]=&lt;/span&gt;&lt;span class="s2"&gt;"can_edit can_read"&lt;/span&gt;
  &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"DAG Runs"&lt;/span&gt;&lt;span class="o"&gt;]=&lt;/span&gt;&lt;span class="s2"&gt;"can create can_read can_edit menu_access"&lt;/span&gt;
  &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"Browse"&lt;/span&gt;&lt;span class="o"&gt;]=&lt;/span&gt;&lt;span class="s2"&gt;"menu_access"&lt;/span&gt;
  &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"Jobs"&lt;/span&gt;&lt;span class="o"&gt;]=&lt;/span&gt;&lt;span class="s2"&gt;"can_read menu_access"&lt;/span&gt;
  &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"Task Instances"&lt;/span&gt;&lt;span class="o"&gt;]=&lt;/span&gt;&lt;span class="s2"&gt;"can_read"&lt;/span&gt;
  &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"DAG Dependencies"&lt;/span&gt;&lt;span class="o"&gt;]=&lt;/span&gt;&lt;span class="s2"&gt;"can_read menu_access"&lt;/span&gt;
  &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"DAG Code"&lt;/span&gt;&lt;span class="o"&gt;]=&lt;/span&gt;&lt;span class="s2"&gt;"can_read"&lt;/span&gt;
  &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"Import Error"&lt;/span&gt;&lt;span class="o"&gt;]=&lt;/span&gt;&lt;span class="s2"&gt;"can_read"&lt;/span&gt;
  &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"Task logs"&lt;/span&gt;&lt;span class="o"&gt;]=&lt;/span&gt;&lt;span class="s2"&gt;"can_read"&lt;/span&gt;
  &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"Website"&lt;/span&gt;&lt;span class="o"&gt;]=&lt;/span&gt;&lt;span class="s2"&gt;"can_read"&lt;/span&gt;
&lt;span class="o"&gt;)&lt;/span&gt;

&lt;span class="c"&gt;# Iterate over the associative array and add permissions using the Airflow CLI&lt;/span&gt;
&lt;span class="k"&gt;for &lt;/span&gt;resource &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="p"&gt;!permissions[@]&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do
  &lt;/span&gt;&lt;span class="nv"&gt;actions&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;permissions&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;$resource&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;

  &lt;span class="k"&gt;for &lt;/span&gt;action &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="nv"&gt;$actions&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
    &lt;span class="c"&gt;# Construct and execute the Docker Compose command&lt;/span&gt;
    &lt;span class="nb"&gt;command&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"airflow roles add-perms Team1 -a &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="nv"&gt;$action&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt; -r &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="nv"&gt;$resource&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
    &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="nv"&gt;$command&lt;/span&gt;
    &lt;span class="nb"&gt;eval&lt;/span&gt; &lt;span class="nv"&gt;$command&lt;/span&gt;
  &lt;span class="k"&gt;done
done&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;😪 It can take a while...&lt;br&gt;
Then, you need to create users. For example, &lt;code&gt;user1&lt;/code&gt; belongs to &lt;em&gt;team1&lt;/em&gt; and &lt;code&gt;user2&lt;/code&gt; to &lt;em&gt;team2&lt;/em&gt;.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a user who will have this role:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose run airflow-worker-team1 airflow &lt;span class="nb"&gt;users &lt;/span&gt;create &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;-e&lt;/span&gt; user1@team1.com &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;-f&lt;/span&gt; user1 &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;-l&lt;/span&gt; user1 &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="k"&gt;*******&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;-u&lt;/span&gt; user1 &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--role&lt;/span&gt; Team1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;🛎️ Don't forget to edit the password &lt;code&gt;*******&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;And that's it! 🎉&lt;/p&gt;

&lt;p&gt;Try to log in with &lt;code&gt;user1&lt;/code&gt;; you should be able to see and act on only one DAG!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqikiarr9a57tridr40sr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqikiarr9a57tridr40sr.png" alt="Image description" width="800" height="502"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Run the DAG, and you should see worker1 executing it:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ia7sdh5a1rmybvexing.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ia7sdh5a1rmybvexing.png" alt="Image description" width="800" height="49"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>data</category>
      <category>multitenant</category>
      <category>airflow</category>
    </item>
    <item>
      <title>Fix a resource locations organisation constraint while deploying Google App Engine with Docker based Application</title>
      <dc:creator>Guillaume Renaudin</dc:creator>
      <pubDate>Mon, 08 Aug 2022 15:32:00 +0000</pubDate>
      <link>https://dev.to/zenika/fix-a-resource-locations-organisation-constraint-while-deploying-google-app-engine-with-docker-based-application-935</link>
      <guid>https://dev.to/zenika/fix-a-resource-locations-organisation-constraint-while-deploying-google-app-engine-with-docker-based-application-935</guid>
      <description>&lt;h2&gt;
  
  
  The case
&lt;/h2&gt;

&lt;p&gt;You want to deploy a Google App Engine Docker-based application (&lt;code&gt;gcloud app deploy&lt;/code&gt;), but your organization has enabled the resource locations constraint and you quickly encounter the following message: &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;'us' violates constraint ‘constraints/gcp.resourceLocations’&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;or maybe it was &lt;em&gt;'eu'&lt;/em&gt; instead of 'us'.&lt;/p&gt;

&lt;p&gt;For example, my organization wants to target only &lt;em&gt;low CO2&lt;/em&gt; 🌱 identified regions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Reason
&lt;/h2&gt;

&lt;p&gt;💡 This is because neither the &lt;em&gt;'us'&lt;/em&gt; nor the &lt;em&gt;'eu'&lt;/em&gt; multi-region group is present in your &lt;code&gt;constraints/gcp.resourceLocations&lt;/code&gt; organization policy.&lt;br&gt;
To check the complete list, go to the Google Cloud console, open the &lt;em&gt;IAM and Admin&lt;/em&gt; menu, then &lt;em&gt;Organisation Policies&lt;/em&gt;, look for the &lt;code&gt;constraints/gcp.resourceLocations&lt;/code&gt; policy and find which locations are allowed ✅.&lt;/p&gt;
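
&lt;p&gt;If you prefer the CLI to the console, something like the following should also show the allowed locations (a sketch; it assumes you have the required permissions on the organization):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Describe the effective policy (ORGANIZATION_ID is a placeholder).
gcloud resource-manager org-policies describe gcp.resourceLocations \
  --organization=ORGANIZATION_ID --effective
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;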

&lt;p&gt;Because not all the individual regions of a multi-region group are &lt;em&gt;low CO2&lt;/em&gt; ones, my organization cannot allow these groups.&lt;/p&gt;

&lt;h2&gt;
  
  
  Solution
&lt;/h2&gt;

&lt;p&gt;The solution is to specify in which region our container registry should be. But the Google Container Registry service doesn't provide this option. &lt;br&gt;
Luckily 🍀, Google provides and recommends another service, &lt;a href="https://cloud.google.com/artifact-registry/" rel="noopener noreferrer"&gt;Artifact Registry&lt;/a&gt;. At the same time, you should also drop the Google Cloud Build step from the App Engine deployment process by passing the &lt;code&gt;--image-url&lt;/code&gt; argument, as described &lt;a href="https://cloud.google.com/artifact-registry/docs/integrate-app-engine" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Your new deployment process is: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create an Artifact Registry repository: &lt;a href="https://cloud.google.com/artifact-registry/docs/repositories/create-repos#create" rel="noopener noreferrer"&gt;https://cloud.google.com/artifact-registry/docs/repositories/create-repos#create&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Build and tag the Docker image with the following tag name  &lt;code&gt;[LOCATION]-docker.pkg.dev/[PROJECT-ID]/[REPOSITORY]/[IMAGE]:[TAG]&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Deploy your application with &lt;code&gt;gcloud app deploy --image-url=[LOCATION]-docker.pkg.dev/[PROJECT-ID]/[REPOSITORY]/[IMAGE]:[TAG]&lt;/code&gt; (see the sketch after this list)&lt;/li&gt;
&lt;/ol&gt;
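
&lt;p&gt;Put together, the three steps look roughly like the sketch below. The repository name, project ID, image name and the &lt;code&gt;europe-west1&lt;/code&gt; region are placeholder values to adapt to your own allowed locations:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# 1. Create an Artifact Registry repository in an allowed region.
gcloud artifacts repositories create my-repo --repository-format=docker --location=europe-west1

# 2. Build, tag and push the Docker image to that repository.
gcloud auth configure-docker europe-west1-docker.pkg.dev
docker build -t europe-west1-docker.pkg.dev/my-project/my-repo/my-app:v1 .
docker push europe-west1-docker.pkg.dev/my-project/my-repo/my-app:v1

# 3. Deploy to App Engine with the pre-built image (no Cloud Build step).
gcloud app deploy --image-url=europe-west1-docker.pkg.dev/my-project/my-repo/my-app:v1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;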

&lt;p&gt;You can still configure the App Engine runtime environment with an &lt;code&gt;app.yaml&lt;/code&gt; file, but the Dockerfile will be ignored in the deployment process.&lt;/p&gt;

&lt;p&gt;I hope this could help some of you 😆&lt;/p&gt;

</description>
      <category>gcp</category>
      <category>appengine</category>
      <category>docker</category>
      <category>resourcelocation</category>
    </item>
  </channel>
</rss>
