<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Naseem Mohammed</title>
    <description>The latest articles on DEV Community by Naseem Mohammed (@mnaseem).</description>
    <link>https://dev.to/mnaseem</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F389496%2Fa9b8e8bb-9f87-449e-8ad6-48a8fd373205.JPG</url>
      <title>DEV Community: Naseem Mohammed</title>
      <link>https://dev.to/mnaseem</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/mnaseem"/>
    <language>en</language>
    <item>
      <title>Distributing API Authorization Policies using OPA Bundles</title>
      <dc:creator>Naseem Mohammed</dc:creator>
      <pubDate>Tue, 05 Jul 2022 08:00:12 +0000</pubDate>
      <link>https://dev.to/mnaseem/distributing-api-authorization-policies-using-opa-bundles-1i4e</link>
      <guid>https://dev.to/mnaseem/distributing-api-authorization-policies-using-opa-bundles-1i4e</guid>
      <description>&lt;p&gt;A typical organization will have several Applications &amp;amp; Deployments in multiple environments. Most of them would have some central Identity Provider like Keycloak or Azure AD/Okta... &lt;br&gt;
While the identity provider issued OAuth2.0 token and solve the Identity problem, we didn't have a clean central way to handle API Authorization. This usually might be bundled along with the Application code.&lt;/p&gt;

&lt;p&gt;For authorization, Open Policy Agent (OPA) is an open-source, CNCF-graduated project that lets us write authorization logic as declarative statements.&lt;/p&gt;

&lt;p&gt;The goal today is to explore the OPA bundle feature. We can &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;build policies using Rego, &lt;/li&gt;
&lt;li&gt;package them as bundles (tar files), &lt;/li&gt;
&lt;li&gt;and distribute them from a central location to various applications or deployment clusters.&lt;/li&gt;
&lt;/ul&gt;
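&lt;p&gt;As a sketch of what such a bundle might contain (the package name, input fields, and paths here are hypothetical, not from this article):&lt;/p&gt;

```rego
# example.rego -- illustrative policy; package and input fields are assumptions
package httpapi.authz

# deny by default
default allow = false

# allow GET requests to /benefits for callers holding the "reader" role
allow {
    input.method == "GET"
    input.path == "/benefits"
    input.roles[_] == "reader"
}
```

&lt;p&gt;Running opa build policies/ -o bundle.tar.gz packages a policies/ directory into the tar bundle, which can then be uploaded to the central location (for example, an S3 bucket that the deployments subscribe to).&lt;/p&gt;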

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ferm21xw9os7p0ycf90p3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ferm21xw9os7p0ycf90p3.png" alt="Image description"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;The central policy portal can organize your policies as the diagram below shows. A bundle is created per environment and contains all the policies of the projects that the environment hosts.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw9cosb773x28juna1d7p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw9cosb773x28juna1d7p.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In my organization, we had a similar distribution of environments. &lt;br&gt;
All the APIs were fronted by various API gateways. We want every call to be intercepted by the API gateway and forwarded to the authorization engine; if the authorization engine returns false, the request should be rejected with HTTP status 403. The diagram below represents this.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5lmxlorr59ruv58ztsc8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5lmxlorr59ruv58ztsc8.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The table below details how this can be achieved with different API gateways.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;API Gateway&lt;/th&gt;
&lt;th&gt;Details&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;AWS API Gateway&lt;/td&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Traefik&lt;/td&gt;
&lt;td&gt;&lt;a href="https://doc.traefik.io/traefik/middlewares/http/forwardauth/" rel="noopener noreferrer"&gt;Using Forward Auth Middleware&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;NGINX&lt;/td&gt;
&lt;td&gt;&lt;a href="http://nginx.org/en/docs/http/ngx_http_auth_request_module.html" rel="noopener noreferrer"&gt;Module ngx_http_auth_request_module&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
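&lt;p&gt;For NGINX, the approach in the table above can be sketched roughly as follows (the upstream name and authorization endpoint path are hypothetical):&lt;/p&gt;

```nginx
# Every request is first sent to an internal subrequest; a non-2xx answer
# from the authorization endpoint makes NGINX reject the original request.
location / {
    auth_request /authz;
    proxy_pass http://backend_api;
}

location = /authz {
    internal;
    proxy_pass http://127.0.0.1:8080/check;   # authorization engine
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    proxy_set_header X-Original-URI $request_uri;
}
```

&lt;p&gt;When the subrequest fails, auth_request returns 401 or 403 to the client, which matches the 403 behavior described above.&lt;/p&gt;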

&lt;p&gt;In our case, we deployed OPA as a sidecar to a Go API project. The API receives a request from the API gateway and invokes OPA; it parses the response, and if allow is false, it returns 403 to the API gateway.&lt;/p&gt;
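&lt;p&gt;A minimal sketch of that sidecar call, written here in Python for illustration (the article's API is in Go); the OPA URL, policy path, and input shape are assumptions:&lt;/p&gt;

```python
import json
import urllib.request

# Illustrative decision endpoint on the OPA sidecar (policy path is hypothetical).
OPA_URL = "http://localhost:8181/v1/data/httpapi/authz/allow"

def decision_to_status(opa_response):
    """Map OPA's JSON response to an HTTP status: deny unless result is true."""
    # OPA omits "result" entirely when the decision is undefined, so
    # anything other than an explicit true is treated as a deny.
    if opa_response.get("result") is True:
        return 200
    return 403

def check_request(method, path, roles):
    """POST the request context to the OPA sidecar and return 200 or 403."""
    payload = json.dumps({"input": {"method": method, "path": path, "roles": roles}})
    req = urllib.request.Request(
        OPA_URL,
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return decision_to_status(json.load(resp))
```

&lt;p&gt;Treating an undefined decision as a deny is the safe default: a missing policy or a typo in the policy path then fails closed rather than open.&lt;/p&gt;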

&lt;p&gt;Below is the YAML for deploying the API project and OPA. This configuration is from a WSL2 desktop setup. &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

apiVersion: apps/v1
kind: Deployment
metadata:
  name: authorization
  namespace: default
  labels:
    app: authorization
spec:
  replicas: 1
  selector:
    matchLabels:
      app: authorization
  template:
    metadata:
      labels:
        app: authorization
    spec:
      containers:
      - name: auth-policy-manager
        image: naseemmohammed/policyengine:0.1.7
        imagePullPolicy: IfNotPresent
        env:
        - name: ENV_AUTH_SERVER
          value: ":8080"
        - name: ENV_PPSA
          value: "localhost"
        - name: ENV_OPA_PORT
          value: "8181"
        ports:
        - containerPort: 8080
      - name: opa
        image: openpolicyagent/opa:0.41.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8181
        env:
        # location to subscribe for new policy bundles
        - name: AWS_REGION 
          value: us-east-2
        - name: AWS_ACCESS_KEY_ID
          value: AKIA52XXXUIISSCC-FAKE
        - name: AWS_SECRET_ACCESS_KEY
          value: E/VLCUZJ2G-NOTEXISTING-aUhmsMcI33yS8O     
        args:
          - "run" 
          - "--ignore=.*"  # exclude hidden dirs created by Kubernetes
          - "--server"
          - "--set=decision_logs.console=true"          
          - "--config-file"
          - "/config/config.yaml"
        volumeMounts:        
        - readOnly: true  
          mountPath: /config
          name: config-volume
        livenessProbe:
          httpGet:
            scheme: HTTP              # assumes OPA listens on localhost:8181
            port: 8181
          initialDelaySeconds: 5      # tune these periods for your environment
          periodSeconds: 5000  # in prod, reduce to 5
        readinessProbe:
          httpGet:
            path: /health?bundle=true  # Include bundle activation in readiness
            scheme: HTTP
            port: 8181
          initialDelaySeconds: 5
          periodSeconds: 5
      volumes:
      - name: config-volume
        hostPath:
          # directory location on host
          path: /run/desktop/mnt/host/c/Users/nmohammed/Downloads/cluster-files
          # this field is optional
          type: Directory
      imagePullSecrets:
        - name: topsecret



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now, the OPA configuration file:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

services:
  s3:
    url: https://zohoohio.s3.us-east-2.amazonaws.com
    credentials:
      s3_signing:
        environment_credentials: {}

bundles:
  authz:
    service: s3
    resource: Zoho-onprem/bundle.tar.gz
    persist: false
    polling:
      min_delay_seconds: 100
      max_delay_seconds: 200


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

</description>
      <category>opa</category>
      <category>security</category>
      <category>microservices</category>
      <category>authorization</category>
    </item>
    <item>
      <title>IBM App Connect Professional CI/CD</title>
      <dc:creator>Naseem Mohammed</dc:creator>
      <pubDate>Sun, 28 Jun 2020 14:52:45 +0000</pubDate>
      <link>https://dev.to/mnaseem/ibm-app-connect-professional-ci-cd-2h58</link>
      <guid>https://dev.to/mnaseem/ibm-app-connect-professional-ci-cd-2h58</guid>
      <description>&lt;h2&gt;
  
  
  IBM App Connect Professional is an iPaas. What is an iPaas?
&lt;/h2&gt;

&lt;p&gt;Quoting from Mulesoft (a competitor of IBM App Connect Professional): &lt;em&gt;"In simplest terms, iPaaS is a platform for building and deploying integrations within the cloud and between the cloud and the enterprise. With iPaaS, users can develop integration flows that connect applications residing in the cloud or on-premises and then deploy them without installing or managing any hardware or middleware."&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;IBM App Connect Professional can be installed in multiple ways; IBM also provides a Docker image, which lets us deploy App Connect to a Kubernetes platform.&lt;/p&gt;

&lt;p&gt;The goal of this article is to document the steps to build a CI/CD pipeline for IBM App Connect Professional. I am using the tools below, and the IBM App Connect Professional version is 7.5.3. &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;GitHub for source code&lt;/li&gt;
&lt;li&gt;Codeship for CI/CD&lt;/li&gt;
&lt;li&gt;Datadog for monitoring&lt;/li&gt;
&lt;li&gt;Locust for load testing&lt;/li&gt;
&lt;li&gt;Oracle Kubernetes Engine for hosting the Docker containers&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  Here is the Deployment Architecture
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FfZuMMc4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/0zqfnt8m32brcb7fxogx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FfZuMMc4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/0zqfnt8m32brcb7fxogx.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;IBM provides the App Connect Professional software (7.5.3.0-WS-ACP-_64-docker.tar.gz), which has the requisite software and Dockerfile for customizing. &lt;/p&gt;

&lt;h4&gt;
  
  
  System requirements
&lt;/h4&gt;

&lt;p&gt;You can run the App Connect Professional Docker container in the following configurations:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Two CPUs with 4 GB RAM and a 100 GB disk&lt;/li&gt;
&lt;li&gt;Four CPUs with 8 or 16 GB RAM and a 100 GB disk&lt;/li&gt;
&lt;li&gt;Eight CPUs with 16, 24, or 32 GB RAM and a 100 GB disk&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For production purposes, it’s recommended that you use four or more CPUs.&lt;/p&gt;

&lt;p&gt;Below is the customized Dockerfile&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;FROM ubuntu:16.04&lt;/span&gt;

&lt;span class="s"&gt;ARG ibm_file&lt;/span&gt;

&lt;span class="s"&gt;ENV IRONHIDE_SOURCE /var/tmp/ironhide-setup&lt;/span&gt;

&lt;span class="s"&gt;RUN echo "bing" &amp;amp;&amp;amp; echo $ibm_file &amp;amp;&amp;amp;  apt-get update &amp;amp;&amp;amp; apt-get install -y  openssh-server supervisor cron syslog-ng-core logrotate libapr1 libaprutil1 liblog4cxx10v5 libxml2 psmisc xsltproc ntp vim net-tools iputils-ping curl&lt;/span&gt; 

&lt;span class="s"&gt;RUN curl -LO  $ibm_file --output ironhide-setup.tar.gz &amp;amp;&amp;amp; tar -xzvf ironhide-setup.tar.gz&lt;/span&gt;

&lt;span class="s"&gt;RUN ls -l &amp;amp;&amp;amp;  cd ironhide-setup &amp;amp;&amp;amp;  ls -l &amp;amp;&amp;amp; cat supervisord.conf &amp;amp;&amp;amp; cp supervisord.conf /etc/supervisor/conf.d/supervisord.conf  &amp;amp;&amp;amp;  sed -i -E 's/^(\s*)system\(\);/\1unix-stream("\/dev\/log");/' /etc/syslog-ng/syslog-ng.conf&lt;/span&gt; 

&lt;span class="s"&gt;RUN sed -i 's/^su root syslog/su root adm/' /etc/logrotate.conf&lt;/span&gt;

&lt;span class="s"&gt;RUN mkdir -p /var/log/supervisor &amp;amp; mkdir -p /opt/ibm/&lt;/span&gt;

&lt;span class="c1"&gt;#Directory to hold the artifacts which need to be loaded during docker launch/start&lt;/span&gt;
&lt;span class="s"&gt;RUN mkdir -p /var/tmp/LoadArtifacts/projects &amp;amp; mkdir -p /var/tmp/LoadArtifacts/ThirdPartylibs &amp;amp; mkdir -p /var/tmp/LoadArtifacts/SecureConnectorConfig&lt;/span&gt;
&lt;span class="s"&gt;RUN mkdir -p /var/tmp/LoadArtifacts/UsersAndGroups &amp;amp; mkdir -p /var/tmp/LoadArtifacts/CertificatesAndKeys&lt;/span&gt;

&lt;span class="s"&gt;RUN cp /ironhide-setup/etc/cron.d/* /etc/cron.d/&lt;/span&gt;

&lt;span class="c1"&gt;#copy the configuration files inside docker container&lt;/span&gt;
&lt;span class="s"&gt;COPY /projects /var/tmp/LoadArtifacts/projects&lt;/span&gt;
&lt;span class="c1"&gt;#nm &lt;/span&gt;
&lt;span class="c1"&gt;#RUN cd \ &amp;amp;&amp;amp; ls -l &amp;amp;&amp;amp; pwd&lt;/span&gt;
&lt;span class="c1"&gt;#RUN cp -R /UsersAndGroups /var/tmp/LoadArtifacts/UsersAndGroups&lt;/span&gt;
&lt;span class="c1"&gt;#COPY /CertificatesAndKeys /var/tmp/LoadArtifacts/CertificatesAndKeys&lt;/span&gt;
&lt;span class="c1"&gt;#COPY /ThirdPartylibs /var/tmp/LoadArtifacts/ThirdPartylibs&lt;/span&gt;
&lt;span class="c1"&gt;#COPY /SecureConnectorConfig /var/tmp/LoadArtifacts/SecureConnectorConfig&lt;/span&gt;

&lt;span class="s"&gt;RUN cp ironhide-setup/etc/logrotate.d/* /etc/logrotate.d/&lt;/span&gt;

&lt;span class="s"&gt;RUN chmod 644 /etc/cron.d/*&lt;/span&gt;

&lt;span class="s"&gt;RUN chmod -R 777 /var/tmp/ironhide-setup&lt;/span&gt;

&lt;span class="s"&gt;ENV JAVA_HOME /usr/java/default&lt;/span&gt;

&lt;span class="s"&gt;ENV PATH $JAVA_HOME/bin:$PATH&lt;/span&gt;

&lt;span class="s"&gt;ENV IRONHIDE_ROOT /usr/ironhide&lt;/span&gt;

&lt;span class="s"&gt;ENV LD_LIBRARY_PATH /usr/ironhide/lib&lt;/span&gt;

&lt;span class="s"&gt;ENV IH_ROOT /usr/ironhide&lt;/span&gt;

&lt;span class="s"&gt;ENV IRONHIDE_BACKUP_PATH /var/tmp/ironhide-backup&lt;/span&gt;

&lt;span class="s"&gt;ENV PATH $IH_ROOT/bin:$PATH&lt;/span&gt;

&lt;span class="s"&gt;ENV interface1=""&lt;/span&gt;

&lt;span class="s"&gt;ENV interface2=""&lt;/span&gt;

&lt;span class="s"&gt;RUN cp  ironhide-setup/scripts/liblog4cxx.so.10 /usr/lib/x86_64-linux-gnu/liblog4cxx.so.10.0.0&lt;/span&gt;

&lt;span class="s"&gt;RUN echo 'PS1="[AppConnect-Container@\h \w]&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;'&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;&amp;gt;&amp;gt;&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;~/.bashrc&lt;/span&gt;

&lt;span class="s"&gt;CMD&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;["/usr/bin/supervisord"]&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Below is my Codeship services file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;
&lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;phx.ocir.io/axyp8vsk2dul/ibmappconnect&lt;/span&gt;
    &lt;span class="na"&gt;dockerfile_path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Dockerfile&lt;/span&gt;
    &lt;span class="na"&gt;encrypted_args_file&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;config_encrypted&lt;/span&gt;
&lt;span class="na"&gt;oracle_dockercfg&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;codeship/myservice-dockercfg-generator&lt;/span&gt;
&lt;span class="na"&gt;appkubectl&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;phx.ocir.io/oci_kubectl:0.0.4&lt;/span&gt;
    &lt;span class="na"&gt;dockerfile_path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DockerfileOciKube&lt;/span&gt;
    &lt;span class="na"&gt;encrypted_args_file&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;config_encrypted&lt;/span&gt;
    &lt;span class="na"&gt;args&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;CommitID&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{.CommitID&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;And below is my Codeship steps file&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;
&lt;span class="c1"&gt;# codeship-steps.yml&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Build and push to Oracle Docker Registry&lt;/span&gt;
  &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app&lt;/span&gt;
  &lt;span class="na"&gt;tag&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;master&lt;/span&gt;
  &lt;span class="na"&gt;image_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;phx.ocir.io/axyp8vsk2dul/ibmappconnect&lt;/span&gt;
  &lt;span class="na"&gt;image_tag&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;.CommitID&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
  &lt;span class="na"&gt;registry&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;phx.ocir.io&lt;/span&gt;
  &lt;span class="na"&gt;encrypted_dockercfg_path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dockercfg.encrypted&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;push&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Check response to kubectl get nodes&lt;/span&gt;
  &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubectl get nodes&lt;/span&gt;
  &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;appkubectl&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Check OCI Version&lt;/span&gt;
  &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;oci -v&lt;/span&gt;
  &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;appkubectl&lt;/span&gt;  
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deploy the IBM App Connect Image with flows and configs&lt;/span&gt;
  &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubectl apply -f /config/.kube/ibmappconnect.yaml&lt;/span&gt;
  &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;appkubectl&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Print out the environment varibales&lt;/span&gt;
  &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;appkubectl&lt;/span&gt;
  &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;printenv&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h5&gt;
  
  
  I check the par file in to Git.
&lt;/h5&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--A46fN3C0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/167vdnflqdtgdfui7dxk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--A46fN3C0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/167vdnflqdtgdfui7dxk.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h5&gt;
  
  
  This will trigger a build in Codeship.
&lt;/h5&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xvzKNuwB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/e1ts01tw9dkmoytxgyud.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xvzKNuwB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/e1ts01tw9dkmoytxgyud.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h5&gt;
  
  
  Codeship packages the par files on top of the IBM App Connect Professional base image and deploys to Oracle Kubernetes Engine.
&lt;/h5&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xXgyQ7Oz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/49s5zb2403ueg237c9f9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xXgyQ7Oz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/49s5zb2403ueg237c9f9.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h5&gt;
  
  
  We can use Datadog to monitor the cluster and running pods (IBM App Connect Professional container).
&lt;/h5&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--sKKiaZbq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/5fnfq0eqn5zw06205esc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--sKKiaZbq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/5fnfq0eqn5zw06205esc.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h5&gt;
  
  
  We can also use the IBM App Connect Professional WMC to confirm everything is as expected.
&lt;/h5&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ISzBm6aI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/72a5kt9k65rgf925vm0f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ISzBm6aI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/72a5kt9k65rgf925vm0f.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h5&gt;
  
  
  We can run some load tests to ensure system performance will be acceptable to our end customers.
&lt;/h5&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zarQq8KV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/x5ta93xz10d1xu91vw1g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zarQq8KV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/x5ta93xz10d1xu91vw1g.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>appconnect</category>
      <category>cicd</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Using Istio for Ingress &amp; Feature Flag Deployment in AKS</title>
      <dc:creator>Naseem Mohammed</dc:creator>
      <pubDate>Sun, 28 Jun 2020 08:27:36 +0000</pubDate>
      <link>https://dev.to/mnaseem/using-istio-for-ingress-feature-flag-deployment-in-aks-4ceb</link>
      <guid>https://dev.to/mnaseem/using-istio-for-ingress-feature-flag-deployment-in-aks-4ceb</guid>
      <description>&lt;h3&gt;
  
  
  What is a Service Mesh?
&lt;/h3&gt;

&lt;p&gt;According to Wikipedia: &lt;br&gt;
&lt;em&gt;"In software architecture, a service mesh is a dedicated infrastructure layer for facilitating service-to-service communications between microservices, often using a sidecar proxy."&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  What is Ingress (in Kubernetes)?
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;"Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource."&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  What is Istio?
&lt;/h3&gt;

&lt;p&gt;Istio is all about traffic, whether that traffic flows between microservices within a Kubernetes* cluster (east-west) or enters and leaves the cluster (ingress traffic, or north-south).&lt;br&gt;
*&lt;em&gt;Istio can also be used on orchestration platforms other than Kubernetes.&lt;/em&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  So Istio is
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;Service Mesh (E-W) &amp;amp; Ingress Gateway (N-S)&lt;/li&gt;
&lt;li&gt;Open Sourced by Google, IBM &amp;amp; Lyft in May 2017&lt;/li&gt;
&lt;li&gt;Service Mesh designed to connect, secure and monitor microservices&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Istio architecture from Istio Website.
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fv6y7z7zooskuz80e2f4k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fv6y7z7zooskuz80e2f4k.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Istio features
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Traffic Management

&lt;ol&gt;
&lt;li&gt;Discovery.&lt;/li&gt;
&lt;li&gt;Load balancing&lt;/li&gt;
&lt;li&gt;Rate limiting&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;li&gt;Resilience

&lt;ol&gt;
&lt;li&gt;Failure recovery&lt;/li&gt;
&lt;li&gt;Fault Injection&lt;/li&gt;
&lt;li&gt;Circuit Breaker&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;li&gt;Observability

&lt;ol&gt;
&lt;li&gt;Metrics&lt;/li&gt;
&lt;li&gt;Monitoring &amp;amp; Alerts&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;li&gt;Deployment

&lt;ol&gt;
&lt;li&gt;Canary rollouts&lt;/li&gt;
&lt;li&gt;Feature Flag Deployment &lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;li&gt;Security

&lt;ol&gt;
&lt;li&gt;Access control&lt;/li&gt;
&lt;li&gt;End-to-end authentication&lt;/li&gt;
&lt;li&gt;Security in transit- mTLS&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;li&gt;Ingress Gateway

&lt;ol&gt;
&lt;li&gt;Prefix based traffic routing&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;/ol&gt;

&lt;p&gt;For this article, I came up with a made-up use case. I created four microservices, each written in .NET Core and packaged as a Docker image. &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Healthcare

&lt;ol&gt;
&lt;li&gt;Benefits&lt;/li&gt;
&lt;li&gt;Insurance.&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;li&gt;Hospital &lt;/li&gt;

&lt;/ol&gt;

&lt;p&gt;The Healthcare microservice internally depends on Benefits, which depends on Insurance. &lt;br&gt;
The Hospital microservice does not talk to any other microservice. &lt;br&gt;
The Healthcare and Hospital microservices need to be invoked from the outside world.&lt;br&gt;
Below is a diagram of the microservices and their interactions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F17nh1i6jwvnyyqeme0f1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F17nh1i6jwvnyyqeme0f1.png" alt="Alt Text"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;So I installed Istio in my cluster and enabled the namespace for Istio sidecar injection. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fj0ydz6u29s57ey54x1i4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fj0ydz6u29s57ey54x1i4.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The installation also included Kiali.&lt;/p&gt;

&lt;h3&gt;
  
  
  What does Kiali provide? Well, it answers the below questions.
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Which microservices are part of my service mesh?&lt;/li&gt;
&lt;li&gt;How are they connected?&lt;/li&gt;
&lt;li&gt;How are they performing?&lt;/li&gt;
&lt;li&gt;How can I operate on them?&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Now using Kiali I had a look at my cluster.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F7a9gibpgn1uiklc9goov.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F7a9gibpgn1uiklc9goov.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A couple of things you can do with Kiali involve routing: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Weighted routing between different versions of your microservice&lt;/li&gt;
&lt;li&gt;A more interesting and useful capability: feature-flag releases based on HTTP header routing. &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Below are the configurations.&lt;/p&gt;

&lt;h4&gt;
  
  
  Weighted Routing
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F95ihoga9okf0wq1gaw96.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F95ihoga9okf0wq1gaw96.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Flag release based on HTTP Header Routing
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fbsufvnyqu6cxo50ck45b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fbsufvnyqu6cxo50ck45b.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;
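&lt;p&gt;The two routing schemes shown above can be expressed in an Istio VirtualService roughly like this (service, subset, and header names are illustrative, not taken from the screenshots):&lt;/p&gt;

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: healthcare
spec:
  hosts:
  - healthcare
  http:
  # Feature flag: callers sending this header are pinned to v2.
  - match:
    - headers:
        x-beta-user:
          exact: "true"
    route:
    - destination:
        host: healthcare
        subset: v2
  # Everyone else gets a 90/10 weighted split between v1 and v2.
  - route:
    - destination:
        host: healthcare
        subset: v1
      weight: 90
    - destination:
        host: healthcare
        subset: v2
      weight: 10
```

&lt;p&gt;The subsets themselves would be defined in a matching DestinationRule keyed on the pods' version labels.&lt;/p&gt;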

&lt;h4&gt;
  
  
  Setting up Ingress Gateway
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fgx4uq5wahbchs92cs9zx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fgx4uq5wahbchs92cs9zx.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;
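&lt;p&gt;For reference, the ingress setup in the screenshot above boils down to a Gateway resource along these lines (the name, port, and host values are illustrative):&lt;/p&gt;

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: public-gateway
spec:
  selector:
    istio: ingressgateway   # binds to Istio's default ingress gateway pods
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
```

&lt;p&gt;A VirtualService is then attached to this gateway through its gateways field, which provides the prefix-based routing for external traffic mentioned earlier.&lt;/p&gt;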

</description>
      <category>azure</category>
      <category>istio</category>
      <category>kubernetes</category>
      <category>devops</category>
    </item>
    <item>
      <title>Using Codeship to Deploy a Microservice onto Oracle Kubernetes Engine</title>
      <dc:creator>Naseem Mohammed</dc:creator>
      <pubDate>Fri, 29 May 2020 05:30:13 +0000</pubDate>
      <link>https://dev.to/mnaseem/using-codeship-to-deploy-a-microservice-on-oracle-kubernetes-engine-3cei</link>
      <guid>https://dev.to/mnaseem/using-codeship-to-deploy-a-microservice-on-oracle-kubernetes-engine-3cei</guid>
      <description>&lt;p&gt;So, I was looking at an alternative to Azure DevOps and Jenkins to build a CI CD pipeline for a new project. A friend had asked me for a recommendation. He wanted to host microservices in Oracle Kubernetes Service. &lt;/p&gt;

&lt;p&gt;I had heard about Codeship and had wanted to give it a try for a while. This was the nudge I needed, so I spent the weekend on it. It was totally worth it.&lt;/p&gt;

&lt;p&gt;There are two versions of Codeship: Basic and Pro. Pro is a bit more expensive. According to their FAQ, this is the reason: &lt;/p&gt;

&lt;h4&gt;
  
  
  Why is CodeShip Pro more expensive than Codeship Basic?
&lt;/h4&gt;

&lt;p&gt;"CodeShip Pro spawns single-tenant AWS instances for you whenever you push a build. You are not sharing your instance’s CPU, Memory, etc. with anyone else."&lt;/p&gt;

&lt;h3&gt;
  
  
  The flow
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--u3ziorPd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/pkdz9usayl8sjm3vxgfg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--u3ziorPd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/pkdz9usayl8sjm3vxgfg.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A developer checks in code to a GitHub repo.&lt;/li&gt;
&lt;li&gt;This triggers a build in Codeship. Codeship uses the checked-in Dockerfile to build a Docker image.&lt;/li&gt;
&lt;li&gt;The resulting image is tagged by Codeship with the GitHub commit ID and pushed to the Azure Container Registry (ACR). (Yeah, just mixing it up with Azure for fun.)&lt;/li&gt;
&lt;li&gt;Deployment 

&lt;ol&gt;
&lt;li&gt;Codeship now issues a kubectl command to the Oracle Kubernetes Engine (OKE) master API service. This deploys the microservice and the load balancer in front of it. &lt;/li&gt;
&lt;li&gt;The Docker image that we built and pushed earlier to ACR will be the image in this deployment. &lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Prerequisites for achieving this
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;GitHub account&lt;/li&gt;
&lt;li&gt;Codeship Pro account (there is a free tier, which is what I am using)&lt;/li&gt;
&lt;li&gt;Azure account and Azure Container Registry (you can replace this with Oracle Container Registry)&lt;/li&gt;
&lt;li&gt;Oracle Cloud account and a running instance of Oracle Kubernetes Engine (I used a free 30-day Oracle Cloud trial account)&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  Codeship Structure and my Setup
&lt;/h4&gt;

&lt;p&gt;In Codeship, everything revolves around two configuration files: the codeship-services and codeship-steps files. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--CwIAQZiI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/e7l7ivm5sx58p0goapuj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--CwIAQZiI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/e7l7ivm5sx58p0goapuj.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The correlation I built in my head between Codeship Services &amp;amp; Steps.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Services
&lt;/h4&gt;

&lt;p&gt;Codeship Services provide the functionality to accomplish the CI/CD pipeline’s steps (or tasks). Services provide this functionality using Docker containers; a Service, in the end, is just a Docker image.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Used for building and pushing the microservice&lt;br&gt;
Two Services are used to accomplish the build &amp;amp; push-to-registry functionality; that is, two corresponding Docker images are required. These two Services are mapped to a Codeship step (or task) with an attribute called type (=push). Below is the Codeship documentation for the step that provides the build and push functionality. &lt;br&gt;
&lt;a href="https://documentation.codeship.com/pro/builds-and-configuration/steps/"&gt;https://documentation.codeship.com/pro/builds-and-configuration/steps/&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Utility Service&lt;br&gt;
These Services (Docker images) may be prebuilt by Codeship (or by us) and pulled from a registry like Docker Hub at CI/CD execution time.&lt;br&gt;
codeship/azure-dockercfg-generator is an example of a Codeship pre-built image.&lt;br&gt;
You can see more of Codeship's prebuilt images here: &lt;a href="https://hub.docker.com/u/codeship"&gt;https://hub.docker.com/u/codeship&lt;/a&gt;. Interesting to see that the AWS image has been downloaded 500k+ times while the Azure one has only been downloaded 10k+ times. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Custom Service &lt;br&gt;
A Service can also custom-build a Docker image at runtime; we provide the Codeship Service a Dockerfile. This is what I did with the appkubectl Service: a custom build with the Oracle CLI (OCI) and kubectl packaged within it.&lt;br&gt;
This is how my codeship-services file looks:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dockerstore.azurecr.io/aksistioinsurance&lt;/span&gt;
    &lt;span class="na"&gt;dockerfile_path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Dockerfileweb&lt;/span&gt;
&lt;span class="na"&gt;azure_dockercfg&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;codeship/azure-dockercfg-generator&lt;/span&gt;
  &lt;span class="na"&gt;add_docker&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;encrypted_env_file&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;az_config_encrypted&lt;/span&gt;
&lt;span class="na"&gt;appkubectl&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dockerstore.azurecr.io/oci_kubectl:0.0.4&lt;/span&gt;
    &lt;span class="na"&gt;dockerfile_path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Dockerfile&lt;/span&gt;
    &lt;span class="na"&gt;encrypted_args_file&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;config_encrypted&lt;/span&gt;
    &lt;span class="c1"&gt;#encrypted_env_file: config_encrypted&lt;/span&gt;
    &lt;span class="na"&gt;args&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;CommitID&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{.CommitID&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h4&gt;
  
  
  Steps (Or Tasks)
&lt;/h4&gt;

&lt;p&gt;The codeship-steps.yml file is where you specify the steps; think of these as tasks. Each step uses one of the Services. Usually it is a one-to-many relationship between a Service and steps, but some steps, like building and pushing Docker images to the registry, have two Services mapped to them.&lt;/p&gt;

&lt;p&gt;Example steps (or tasks) can look like the list below; we define and build the functionality as per our requirements. &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Build Microservice &amp;amp; Push to Container Registry&lt;/li&gt;
&lt;li&gt;Integration Test&lt;/li&gt;
&lt;li&gt;Deploy to Oracle Kubernetes Engine&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;To accomplish these tasks, we use our custom Services (or Dockers) or Codeship provided Services. The below table shows the relationship I used in my Build pipeline. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gSIQPE2s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/853t5hte6pmzv0mayagp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gSIQPE2s--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/853t5hte6pmzv0mayagp.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# codeship-steps.yml&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Build and push to Azure Docker Registry&lt;/span&gt;
  &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;push&lt;/span&gt;
  &lt;span class="na"&gt;tag&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;master&lt;/span&gt;
  &lt;span class="na"&gt;image_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dockerstore.azurecr.io/aksistioinsurance&lt;/span&gt;
  &lt;span class="na"&gt;image_tag&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;.CommitID&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
  &lt;span class="na"&gt;registry&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dockerstore.azurecr.io&lt;/span&gt;
  &lt;span class="na"&gt;dockercfg_service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;azure_dockercfg&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Check response to kubectl config&lt;/span&gt;
  &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubectl get nodes&lt;/span&gt;
  &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;appkubectl&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Check OCI Version&lt;/span&gt;
  &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;oci -v&lt;/span&gt;
  &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;appkubectl&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deploy to Oracle Kubernetes Engine&lt;/span&gt;
  &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubectl apply -f /config/.kube/insurance.yaml&lt;/span&gt;
  &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;appkubectl&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Print out the environment varibales&lt;/span&gt;
  &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;appkubectl&lt;/span&gt;
  &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;printenv&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;The above is my codeship-steps.yml file.&lt;/p&gt;

&lt;h4&gt;
  
  
  Desktop utility
&lt;/h4&gt;

&lt;p&gt;Codeship ships a nice command-line utility called Jet. (Ship &amp;amp; Jet, hmm...) You can use it for:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Encrypting/decrypting your configuration files/variables. These may be DB passwords or Container Registry credentials.&lt;/li&gt;
&lt;li&gt;Local testing of your Codeship steps (tasks) before pushing to GitHub.&lt;/li&gt;
&lt;li&gt;Validation. (I didn’t use this much.)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Ok, let's look at the pipeline in action. &lt;/p&gt;

&lt;h3&gt;
  
  
  Flow
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;When I check the code in to GitHub, I expect a commit ID to be provided by GitHub.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3tVHVoPJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/h0syemu5dmx1r7cqe5g9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3tVHVoPJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/h0syemu5dmx1r7cqe5g9.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see from the above, the commit ID starts with the characters aa02a67.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Now we expect this to have triggered a build in the Codeship engine.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Codeship will build and push a new Docker image to Azure Container Registry. This image will be tagged with the new GitHub Commit Id. The below screenshot confirms that the tagging has happened.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JOFlq5NL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/k4ku7zz4ekx3nxcf91xf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JOFlq5NL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/k4ku7zz4ekx3nxcf91xf.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Next, we expect Codeship to issue a kubectl command against the below YAML file, which contains a Kubernetes Deployment and a related Service object.&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;insurance-api&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nm&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;  &lt;span class="s"&gt;insurance-api&lt;/span&gt;
      &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;old&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;  &lt;span class="s"&gt;insurance-api&lt;/span&gt;
        &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;old&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;insurance-api&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dockerstore.azurecr.io/aksistioinsurance:##tag##&lt;/span&gt;
        &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;32Mi"&lt;/span&gt;
            &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;25m"&lt;/span&gt;
          &lt;span class="na"&gt;limits&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;64Mi"&lt;/span&gt;
            &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;100m"&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
        &lt;span class="na"&gt;livenessProbe&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;httpGet&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;health&lt;/span&gt;
            &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
            &lt;span class="na"&gt;httpHeaders&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;X-Custom-Header&lt;/span&gt;
              &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Awesome&lt;/span&gt;
          &lt;span class="na"&gt;initialDelaySeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;90&lt;/span&gt;
          &lt;span class="na"&gt;periodSeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;10&lt;/span&gt;
        &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;              
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;DeviceName"&lt;/span&gt; 
          &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;aStrangeDevice"&lt;/span&gt;        
      &lt;span class="na"&gt;imagePullSecrets&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;topsecretregistryconnection&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;insurance-api-service&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;nm&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterIP&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http&lt;/span&gt;
    &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
    &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;80&lt;/span&gt;      
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;  &lt;span class="s"&gt;insurance-api&lt;/span&gt; 
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;There are two things I want to bring to your attention about the above yaml file.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The imagePullSecret. This holds the Docker username and password of the Azure Container Registry. The secret was set up initially, at the time the Oracle Kubernetes Engine cluster was created, using the command below.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create secret docker
-registry topsecretregistryconnection 
connection --docker-server dockerstore.azurecr.io 
--docker-email "###" --docker-username="###" 
--docker-password "##$#####"
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;For details check Kubernetes documentation &lt;a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/?ref=hackernoon.com#registry-secret-existing-credentials"&gt;https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/?ref=hackernoon.com#registry-secret-existing-credentials&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The image tag has a placeholder, ##tag##. This placeholder is replaced with the Git commit ID at deploy time, as shown in the snippet below.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cp insurance.yaml $HOME/.kube/insurance.yaml &amp;amp;&amp;amp;  
sed -i 's/##tag##/'$CommitID'/1' $HOME/.kube/insurance.yaml &amp;amp;&amp;amp; 
cat $HOME/.kube/insurance.yaml &amp;amp;&amp;amp; \
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
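&lt;p&gt;You can dry-run this substitution pattern locally. A small self-contained example (the file content and commit ID below are made up for illustration; assumes GNU sed for in-place editing):&lt;/p&gt;

```shell
# Simulate the pipeline's placeholder substitution with a throwaway file.
CommitID=aa02a67
echo 'image: dockerstore.azurecr.io/aksistioinsurance:##tag##' > /tmp/insurance.yaml
# Replace the first occurrence of ##tag## with the commit ID, in place (GNU sed).
sed -i 's/##tag##/'$CommitID'/1' /tmp/insurance.yaml
cat /tmp/insurance.yaml
# prints: image: dockerstore.azurecr.io/aksistioinsurance:aa02a67
```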



&lt;p&gt;Once Codeship runs the kubectl apply command against the above yaml file we would expect this image to be deployed in the Kubernetes cluster within OKE. Let's check and find out.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1l0YICJZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/8tctzws4head5prgmyrg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1l0YICJZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/8tctzws4head5prgmyrg.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see from the above screenshot the image with the right tag has been picked up and deployed to OKE.&lt;/p&gt;

&lt;h4&gt;
  
  
  Codeship Dashboard
&lt;/h4&gt;

&lt;p&gt;The codeship dashboard is minimal but has enough to help you in debugging. &lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jazeD5kD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/29fxcexg67tcu33ax4pl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jazeD5kD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/29fxcexg67tcu33ax4pl.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And one last thing: be careful with Codeship build arguments and environment variables. I wish Codeship were a little more consistent in their naming. For instance, in places they use CI_Commit_ID, while for build arguments they use CommitID (no underscore). It would have been nice if it were all consistent.  &lt;/p&gt;

</description>
      <category>cicd</category>
      <category>codeship</category>
      <category>jenkins</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Securing APIs Using Okta and Azure API Gateway</title>
      <dc:creator>Naseem Mohammed</dc:creator>
      <pubDate>Thu, 28 May 2020 16:25:33 +0000</pubDate>
      <link>https://dev.to/mnaseem/securing-apis-using-okta-and-azure-api-gateway-3ojo</link>
      <guid>https://dev.to/mnaseem/securing-apis-using-okta-and-azure-api-gateway-3ojo</guid>
      <description>&lt;p&gt;Traditionally in a .NET or Java Server application, the APIs have been secured using SessionId. After a user authenticates with the server; the server generates a unique sessionid and this sessionId is sent to the client in the Http Response. All further communication between the client and server will carry this sessionid in the payload. Though the HTTP protocol is stateless; using this sessionid helps the server to track the client and group all the client's requests as being part of one conversation. Now, this approach has worked for a long time; but it has some significant weaknesses. &lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table class="tg"&gt;
&lt;thead&gt;
  &lt;tr&gt;
    &lt;th class="tg-0lax"&gt;Issues&lt;/th&gt;
    &lt;th class="tg-0lax"&gt;Details&lt;/th&gt;
  &lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
  &lt;tr&gt;
    &lt;td class="tg-0lax"&gt;Session affinities&lt;/td&gt;
    &lt;td class="tg-0lax"&gt;When there is just one server it works fine. But when 2 or more server behind a load balancer, the SessionId will need to be store outside the app server. Like a external State server or Database. This adds complications like serialization/deserialization and latency issues.&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;td class="tg-0lax"&gt;Session ID hacking or spoofing&lt;br&gt;
&lt;/td&gt;
    &lt;td class="tg-0lax"&gt;Spoofing is hard if the session id is a Guid. But it does not prevent a stolen session id (or cookie) from being reused. &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;td class="tg-0lax"&gt;Multiple authentication &lt;/td&gt;
    &lt;td class="tg-0lax"&gt;if the client needs to connect to a different service/, the client needs to authenticate again and use a different sessionid. This multiple authentications is cumbersome and also a security risk. &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;td class="tg-0lax"&gt;No access granularity&lt;/td&gt;
    &lt;td class="tg-0lax"&gt;There is only id exchanged. No extra information, attributes are available. The server ends up making look ups into the database for enriching information about the user logged in,  &lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;td class="tg-0lax"&gt;Access propagation&lt;br&gt;
&lt;/td&gt;
    &lt;td class="tg-0lax"&gt;if the server needs to talk another service on the client's behalf the sessionId will not suffice. A different mechanism needs to involved making this SessionId route more cumbersome and adding security risk. &lt;/td&gt;
  &lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;So the industry guidance today is to avoid baking API security into code (read: session IDs). In the world of cloud development, it is recommended to rely on the infrastructure for security instead of baking it into code. &lt;/p&gt;

&lt;h4&gt;
  
  
  Using infrastructure for API security has the following advantages.
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Respects the separation of duties.&lt;/li&gt;
&lt;li&gt;Makes the code simple and easy to maintain (now that all that ugly &amp;amp; scary security stuff is removed).&lt;/li&gt;
&lt;li&gt;Less maintenance burden.&lt;/li&gt;
&lt;li&gt;Reusable.&lt;/li&gt;
&lt;li&gt;Transparent to the security team.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So what is the modern way of securing APIs in the cloud? The answer is OAuth 2.0.&lt;br&gt;
We won't go into the OAuth 2.0 protocol here; to learn more, you can use this as a reference: &lt;a href="https://www.oauth.com/"&gt;https://www.oauth.com/&lt;/a&gt;.&lt;br&gt;&lt;br&gt;
The goal here is to look at how we can secure APIs hosted behind an API gateway. We will look at Okta (an identity-as-a-service provider) as the OAuth 2.0 token generator, with the API gateway ensuring each API request is authorized.  &lt;/p&gt;
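&lt;p&gt;On the Azure API Management side, this enforcement is typically done with the built-in validate-jwt policy. A minimal sketch, where the Okta domain and audience values are placeholders you would replace with your own:&lt;/p&gt;

```yaml
# Hedged sketch of an Azure APIM inbound policy fragment (XML shown escaped,
# as in the rest of this post). {your-okta-domain} and the audience value
# are illustrative placeholders.
#
# &lt;validate-jwt header-name="Authorization" failed-validation-httpcode="401"
#               failed-validation-error-message="Unauthorized"&gt;
#     &lt;openid-config url="https://{your-okta-domain}/oauth2/default/.well-known/openid-configuration" /&gt;
#     &lt;audiences&gt;
#         &lt;audience&gt;api://default&lt;/audience&gt;
#     &lt;/audiences&gt;
# &lt;/validate-jwt&gt;
```

&lt;p&gt;The openid-config URL lets the gateway discover Okta's signing keys, so each incoming token's signature and audience can be validated without any custom code.&lt;/p&gt;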

&lt;p&gt;Just for analogy purposes, let's see what roles Okta and Azure API Gateway play in securing our APIs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--uCywSJaa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/sl8bx8mizlk7qfa0qcnd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--uCywSJaa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/sl8bx8mizlk7qfa0qcnd.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As the above pic shows, Okta issues the access card (token, to be exact), and the Gateway sits in front of each door (API, to be exact) and validates whether the user has access to that API.&lt;br&gt;
Okta actually generates two tokens: an ID Token and an Access Token. We will focus on the Access Token. Wikipedia defines an access token as follows: &lt;br&gt;
"In computer systems, an access token contains the security credentials for a login session and identifies the user, the user's groups, the user's privileges, and, in some cases, a particular application."&lt;/p&gt;
&lt;h3&gt;
  
  
  Okta
&lt;/h3&gt;

&lt;p&gt;So how does Okta know what token to issue a user? Before we answer that question, let's look at Okta in a little more detail.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--pEfG0Mri--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/6bhe2hm82712md3e2ewr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--pEfG0Mri--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/6bhe2hm82712md3e2ewr.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The above screenshot shows that Okta stores &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Users (and passwords), &lt;/li&gt;
&lt;li&gt;Client Application information (SPA, Mobile Apps, etc..)&lt;/li&gt;
&lt;li&gt;User Attributes (Prebuilt and Custom attributes based on client Application)&lt;/li&gt;
&lt;li&gt;Authorization Server&lt;/li&gt;
&lt;/ul&gt;
&lt;h4&gt;
  
  
  Users
&lt;/h4&gt;

&lt;p&gt;Okta stores users and their passwords. (In most enterprises this will actually be federated from the Enterprise Active Directory.)&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tiIeji51--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/6xbo8mxmzjky8gx2469p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tiIeji51--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/6xbo8mxmzjky8gx2469p.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Client Application
&lt;/h4&gt;

&lt;p&gt;The client applications (that will consume the APIs) are configured in Okta. The screenshot below shows an example Okta App page with users assigned to it. Usually, you do not assign users directly to apps; instead, you assign user groups to them.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--E5FgfPln--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/szmcj38prnzan36pl0s2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--E5FgfPln--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/szmcj38prnzan36pl0s2.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
Only the users assigned to the applications can authenticate into the applications.&lt;/p&gt;
&lt;h4&gt;
  
  
  User Profiles
&lt;/h4&gt;

&lt;p&gt;Okta allows us to store user attributes. Some of these are built-in; admins can add custom attributes either at the base Okta level or at the individual application level. The screenshot below shows the IMS application's User Attribute configuration page, where admins can add custom attributes about users that are specific to the application. For instance, one application may add an attribute called "User Birthplace", while another may add "User SchoolName". The point is that each application is unique, and its admins can customize attributes specific to it. &lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--B3AH_j1x--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/nvnuu19ao34nud4vkrtk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--B3AH_j1x--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/nvnuu19ao34nud4vkrtk.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Authorization Server
&lt;/h4&gt;

&lt;p&gt;We talked earlier about the Access Token; the Authorization Server is what generates it. The screenshot below shows the Authorization Server's configuration page, where we can tell it which claims (user attributes) need to be part of the Access Token.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--D1pLsp4g--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/tb5bcrxvj1339lmzki61.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--D1pLsp4g--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/tb5bcrxvj1339lmzki61.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Okta provides an inbuilt utility where we can test the contents of the token.&lt;br&gt;
Below is the screenshot for that. &lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--JQe0uZGb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/z8ly1bm88gvxoqv58rwx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--JQe0uZGb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/z8ly1bm88gvxoqv58rwx.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The SPA application receives this token after the user authenticates against Okta.&lt;br&gt;
This is the token received by my application =&amp;gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;eyJraWQiOiJhQW85TktxdjMxZmQ0Q1RaRWtuQnpoZVZBMVcwLThhc0xsWC0wUG5uN0JNIiwiYWxnIjoiUlMyNTYifQ.eyJ2ZXIiOjEsImp0aSI6IkFULmhrckNOS092R3V4aE1IUGxSck9BbUhxYzRTZFpRdkN1c3hOelBhMng2MkkiLCJpc3MiOiJodHRwczovL2Rldi0yMTI5MDMub2t0YXByZXZpZXcuY29tL29hdXRoMi9kZWZhdWx0IiwiYXVkIjoiYXBpOi8vZGVmYXVsdCIsImlhdCI6MTU5MDY3NjQyNCwiZXhwIjoxNTkwNzEyNDI0LCJjaWQiOiIwb2FxNGY0N3hmU21JV2RxTjBoNyIsInVpZCI6IjAwdWhzYXN0NnlNcGdjVmZXMGg3Iiwic2NwIjpbIm9wZW5pZCJdLCJhcHB1c2VyIjoibS5uYXNlZW1Ab3V0bG9vay5jb20iLCJzdWIiOiJtLm5hc2VlbUBvdXRsb29rLmNvbSIsIkRlYWxlcnNoaXBMb2NhdGlvbiI6WyJTYW4gRnJhbmNpc2NvIiwiTG9zIEFuZ2VsZXMiXX0.OJmj-o0LHGtHtCtxLKtshwPK6IRQjDc6umZ_PBtGcZfXvE9afluOvfaqmLyYvyaA1uis3PK0jrZg8zSJFS-ryjYycbyJ8f7Mxhd5TyMxYpMRdPIEr3rG6KoByvhqLMJTnwKRV7sSp6af7R8DO7noFc1Wj7jolDRsb3iJmV7Z_g-IySqXKtU7BSpAI1nBoPG6SUsDfU18PoDT7z8_PPkvP7JpNWAzphcp1S3H0O_Z7RkaE04iqvboCjf3OHeBKFScx918bR4XtVh-dMdd84mL8xT7ez-IOpnoQvYCjXSTNmkJjEDUPYjgnA8VNgv3i9dBl4vxj7sDgQ5IXjslL5IYeQ
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;If you drop the above token at jwt.ms, you will see the screenshot below.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Nn2Uui6o--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/8kb86ays62ztjp5o9l2u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Nn2Uui6o--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/8kb86ays62ztjp5o9l2u.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now that we've got the token, we have to pass it in every API call we make to the Azure API Gateway, because the gateway checks each incoming request's headers. It specifically looks for a header named "Authorization" and expects it to contain a valid Access Token.&lt;/p&gt;
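&lt;p&gt;For example, a request through the gateway would carry the token like this (the host, path, and token are abbreviated placeholders for illustration):&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;GET https://{your-apim-host}/pricing HTTP/1.1
Authorization: Bearer eyJraWQiOiJhQW85...
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;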

&lt;h3&gt;
  
  
  Azure API Gateway
&lt;/h3&gt;

&lt;p&gt;Azure API Gateway sits in front of all our APIs. We have to configure our API Applications such that it allows traffic only from Azure API Gateway, whether the application is hosted in Azure Kubernetes, Azure WebApps, or Function Apps. Assuming we have done that; let us look at how Azure API Gateway is configured to allow only requests with the right Access Token. &lt;/p&gt;

&lt;p&gt;Below is a screenshot of the Azure API Management Portal.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--svzGu9hc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/ewo3tutzv13ag0mi3lvu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--svzGu9hc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/ewo3tutzv13ag0mi3lvu.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Every API has one or more Operations within it. In the above screenshot, we can see that the "Vehicle Pricing" API has a single "GET" Pricing Operation in it, "GET" being the HTTP verb. Azure API Management allows us to place policies within it. Policies are simply sets of rules. These rules can be set at &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Product Level (A logical grouping of multiple APIs)&lt;/li&gt;
&lt;li&gt;API Level&lt;/li&gt;
&lt;li&gt;API Operation Level&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A policy defined at the Product level applies to all APIs and API Operations under it; similarly, a policy defined at the API level applies to the Operations under it.&lt;/p&gt;

&lt;p&gt;Policies can be for &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Access Restriction&lt;/li&gt;
&lt;li&gt;Authentication Policies&lt;/li&gt;
&lt;li&gt;Caching Policies&lt;/li&gt;
&lt;li&gt;Cross-Domain Policies&lt;/li&gt;
&lt;li&gt;Transformation Policies&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;More details can be found here =&amp;gt; &lt;a href="https://docs.microsoft.com/en-us/azure/api-management/api-management-policies"&gt;https://docs.microsoft.com/en-us/azure/api-management/api-management-policies&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The policy we are interested in is the Validate JWT policy &lt;br&gt;
&lt;a href="https://docs.microsoft.com/en-us/azure/api-management/api-management-access-restriction-policies#ValidateJWT"&gt;https://docs.microsoft.com/en-us/azure/api-management/api-management-access-restriction-policies#ValidateJWT&lt;/a&gt; &lt;/p&gt;

&lt;h3&gt;
  
  
  JWT
&lt;/h3&gt;

&lt;p&gt;JWT stands for JSON Web Token. This text is from Wikipedia "JSON Web Token (JWT, sometimes pronounced /dʒɒt/[1]) is an internet standard for creating data with optional signature and/or optional encryption whose payload holds JSON that asserts some number of claims. The tokens are signed either using a private secret or a public/private key. For example, a server could generate a token that has the claim "logged in as admin" and provide that to a client."&lt;br&gt;
&lt;a href="https://en.wikipedia.org/wiki/JSON_Web_Token"&gt;https://en.wikipedia.org/wiki/JSON_Web_Token&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Validate JWT Policy
&lt;/h3&gt;

&lt;p&gt;Now let's look at how we can configure the Validate JWT policy.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---dkdaI1X--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/hyvis4ea59dn945cru7k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---dkdaI1X--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/hyvis4ea59dn945cru7k.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the above screenshot, you can see a policy called Validate-jwt.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The policy first looks for the header named "Authorization". It specifies that if the JWT validation fails, an HTTP response code 401 should be returned to the caller with the message "JWT Validation failed."&lt;/li&gt;
&lt;li&gt;The OpenID URL for the Okta Authorization Server's keys is provided. These keys are used by Azure APIM to validate that the JWT Access Token is not tampered with or fake, i.e., that it really was issued by the Okta Authorization Server.&lt;/li&gt;
&lt;li&gt;The Access Token issuer is the default Authorization Server. (We can also have custom Authorization Servers within Okta.)&lt;/li&gt;
&lt;li&gt;If all the above passes, APIM knows the user has a valid token for the client app from Okta. So Authentication has passed.&lt;/li&gt;
&lt;li&gt;Next is the Authorization part. The policy now checks whether the incoming token has the custom claim DealershipLocation and that its values are Los Angeles and San Francisco. 
If the custom claims are present in the incoming request's Access Token, the API request is forwarded to &lt;a href="http://dummy.restapiexample.com/api/v1/employees"&gt;http://dummy.restapiexample.com/api/v1/employees&lt;/a&gt;. 
&lt;strong&gt;Note:&lt;/strong&gt; You will see some values hardcoded within the APIM policies. As a best practice, these should be placed in API Management's "Named Values" facility. &lt;/li&gt;
&lt;/ul&gt;
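
&lt;p&gt;A minimal sketch of such a validate-jwt policy is below. This is not the exact policy from the screenshot; the Okta domain is a placeholder and the claim values mirror the ones discussed above:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;validate-jwt header-name="Authorization" failed-validation-httpcode="401"
              failed-validation-error-message="JWT Validation failed."&amp;gt;
    &amp;lt;openid-config url="https://{your-okta-domain}/oauth2/default/.well-known/openid-configuration" /&amp;gt;
    &amp;lt;issuers&amp;gt;
        &amp;lt;issuer&amp;gt;https://{your-okta-domain}/oauth2/default&amp;lt;/issuer&amp;gt;
    &amp;lt;/issuers&amp;gt;
    &amp;lt;required-claims&amp;gt;
        &amp;lt;claim name="DealershipLocation" match="all"&amp;gt;
            &amp;lt;value&amp;gt;Los Angeles&amp;lt;/value&amp;gt;
            &amp;lt;value&amp;gt;San Francisco&amp;lt;/value&amp;gt;
        &amp;lt;/claim&amp;gt;
    &amp;lt;/required-claims&amp;gt;
&amp;lt;/validate-jwt&amp;gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;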

</description>
      <category>api</category>
      <category>oauth20</category>
      <category>okta</category>
      <category>azure</category>
    </item>
    <item>
      <title>Azure DevOps: How to Build, Test And Deploy to Azure Kubernetes Service</title>
      <dc:creator>Naseem Mohammed</dc:creator>
      <pubDate>Wed, 27 May 2020 06:39:05 +0000</pubDate>
      <link>https://dev.to/mnaseem/azure-devops-how-to-build-test-and-deploy-to-azure-kubernetes-service-h8o</link>
      <guid>https://dev.to/mnaseem/azure-devops-how-to-build-test-and-deploy-to-azure-kubernetes-service-h8o</guid>
      <description>&lt;p&gt;I have been using Azure DevOps for a while. Like most cloud products out there, it gets a constant refresh. My plan is to document the steps for building, testing, and deploying an app to Azure Kubernetes Service using Azure DevOps. So let's start.&lt;/p&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Github or Bitbucket&lt;/li&gt;
&lt;li&gt;Azure Kubernetes Service&lt;/li&gt;
&lt;li&gt;Azure Container Registry&lt;/li&gt;
&lt;li&gt;Soap UI Pro (you need Pro edition for CICD).&lt;/li&gt;
&lt;li&gt;Azure DevOps&lt;/li&gt;
&lt;li&gt;Azure DevOps Agent hosted on your Windows VM. (needed for SoapUI Pro)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Flow Diagram
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ZWqcDL5m--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/b94v5tkbxwn9oyem9bfo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZWqcDL5m--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/b94v5tkbxwn9oyem9bfo.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  1) My code &amp;amp; Dockerfile
&lt;/h4&gt;

&lt;p&gt;What I got is a simple .NET Web API project. And below is my Dockerfile.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XzFoFrJy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/bvkvr3my4dn4dha21urt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XzFoFrJy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/bvkvr3my4dn4dha21urt.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  2) My Github link to project
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://github.com/mohammednaseem/aksistio-hospital"&gt;https://github.com/mohammednaseem/aksistio-hospital&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  3) Azure Container Registry
&lt;/h4&gt;

&lt;p&gt;This is the repository where the Docker image is hosted. Below is a screenshot of the Azure Container Registry from Azure Portal.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Tw5unW66--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/vdqvb7dhtij47pq7pb2t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Tw5unW66--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/vdqvb7dhtij47pq7pb2t.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  4) Azure Kubernetes Service
&lt;/h4&gt;

&lt;p&gt;Now that we have talked about the prerequisites, let's get right to it. We will walk through how to configure Azure DevOps to build and push a Docker image to the registry, deploy that image to AKS, and run integration tests against it using SoapUI Pro.&lt;/p&gt;

&lt;p&gt;So we will create 2 pipelines in Azure DevOps. (Details below).&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The first one is the build pipeline&lt;/li&gt;
&lt;li&gt;and the second pipeline is the release pipeline.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Build pipeline
&lt;/h3&gt;

&lt;p&gt;At the end of the build pipeline, the expected output is a new Docker image in the Azure Container Registry. This is the artifact we will be deploying to AKS as part of the release pipeline.&lt;/p&gt;

&lt;p&gt;Below is the build pipeline's YAML file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Dotnet&lt;/span&gt;
&lt;span class="c1"&gt;# Unit the dotnet project. xUnit and NSubtitute&lt;/span&gt;
&lt;span class="c1"&gt;# Docker&lt;/span&gt;
&lt;span class="c1"&gt;# Build and push an image to Azure Container Registry&lt;/span&gt;
&lt;span class="c1"&gt;# https://docs.microsoft.com/azure/devops/pipelines/languages/docker&lt;/span&gt;
&lt;span class="c1"&gt;# Publsih&lt;/span&gt;
&lt;span class="c1"&gt;# Now we get the tag of the published id and update the k8 Deployment yaml image&lt;/span&gt;
&lt;span class="c1"&gt;# https://docs.microsoft.com/azure/devops/pipelines/languages/docker&lt;/span&gt;

&lt;span class="na"&gt;trigger&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;master&lt;/span&gt;

&lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;repo&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;self&lt;/span&gt;


&lt;span class="na"&gt;variables&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="c1"&gt;# Container registry service connection established during pipeline creation&lt;/span&gt;
  &lt;span class="na"&gt;dockerRegistryServiceConnection&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;7c46ccde-aa97-4bd0-be94-abcd31bbe20b'&lt;/span&gt;
  &lt;span class="na"&gt;containerRegistry&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;dockerstore'&lt;/span&gt;
  &lt;span class="na"&gt;imageRepository&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;hospital'&lt;/span&gt;
  &lt;span class="na"&gt;dockerfilePath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;**/Dockerfile'&lt;/span&gt;
  &lt;span class="na"&gt;tag&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;$(Build.BuildId)-$(Build.SourceVersion)'&lt;/span&gt;

  &lt;span class="c1"&gt;# Agent VM image name&lt;/span&gt;
  &lt;span class="na"&gt;vmImageName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;ubuntu-latest'&lt;/span&gt;

&lt;span class="na"&gt;stages&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;stage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;UnitTestBuildAndPublish&lt;/span&gt;
  &lt;span class="na"&gt;displayName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Unit Test then Build and Push Docket to Register then Publish of release pipeline&lt;/span&gt;
  &lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;    
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;job&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;UnitTest&lt;/span&gt;
    &lt;span class="na"&gt;displayName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Running Unit tests for the Hospital Microservice&lt;/span&gt;
    &lt;span class="na"&gt;pool&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;vmImage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;$(vmImageName)&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;    
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;task&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DotNetCoreCLI@2&lt;/span&gt;
      &lt;span class="na"&gt;inputs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;test'&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;job&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Build&lt;/span&gt;
    &lt;span class="na"&gt;dependsOn&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;UnitTest&lt;/span&gt;
    &lt;span class="na"&gt;displayName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Build and push to container registry&lt;/span&gt;
    &lt;span class="na"&gt;pool&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;vmImage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;$(vmImageName)&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;    
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;task&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Docker@2&lt;/span&gt;
      &lt;span class="na"&gt;displayName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Build and push an image to container registry&lt;/span&gt;
      &lt;span class="na"&gt;inputs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;buildAndPush&lt;/span&gt;
        &lt;span class="na"&gt;repository&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;hospital'&lt;/span&gt;
        &lt;span class="na"&gt;dockerfile&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;$(dockerfilePath)&lt;/span&gt;
        &lt;span class="na"&gt;containerRegistry&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;$(containerRegistry)&lt;/span&gt;
        &lt;span class="na"&gt;tags&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;$(tag)&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;job&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PreReleasePrepForhospitalMicroservice&lt;/span&gt;
    &lt;span class="na"&gt;dependsOn&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Build&lt;/span&gt;
    &lt;span class="na"&gt;displayName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pre Release Preparation (Bash build id and Publish for Release pipeline)&lt;/span&gt;
    &lt;span class="na"&gt;pool&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;vmImage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;$(vmImageName)&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;task&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Bash@3&lt;/span&gt;
      &lt;span class="na"&gt;inputs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;targetType&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;inline'&lt;/span&gt;
        &lt;span class="na"&gt;script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;# Write your commands here            &lt;/span&gt;

            &lt;span class="s"&gt;cat '$(Build.SourcesDirectory)/hospital.yaml'              &lt;/span&gt;
            &lt;span class="s"&gt;value=`cat '$(Build.SourcesDirectory)/hospital.yaml'`              &lt;/span&gt;
            &lt;span class="s"&gt;value=${value//##BUILD_ID##/$(tag)}            &lt;/span&gt;
            &lt;span class="s"&gt;echo "$value" &amp;gt; '$(Build.SourcesDirectory)/hospital_build.yaml'             &lt;/span&gt;
            &lt;span class="s"&gt;value1=`cat '$(Build.SourcesDirectory)/hospital_build.yaml'`             &lt;/span&gt;
            &lt;span class="s"&gt;echo "$value1"&lt;/span&gt;
            &lt;span class="s"&gt;mkdir '$(Pipeline.Workspace)/hospital'  &lt;/span&gt;
            &lt;span class="s"&gt;echo 'after creation of hospital'  &lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;task&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PublishPipelineArtifact@1&lt;/span&gt;
      &lt;span class="na"&gt;inputs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;targetPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;$(Pipeline.Workspace)'&lt;/span&gt;
        &lt;span class="na"&gt;artifact&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;hospital'&lt;/span&gt;
        &lt;span class="na"&gt;publishLocation&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;pipeline'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;Before talking about what is happening in the above pipeline, it may be better to look at the Azure Build Pipeline hierarchy first.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;stages&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;stage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;A&lt;/span&gt;
  &lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;job&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;A1&lt;/span&gt;
    &lt;span class="na"&gt;timeoutInMinutes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;10&lt;/span&gt;
    &lt;span class="na"&gt;pool&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;vmImage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;ubuntu-16.04'&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;bash&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;echo "Hello world"&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;job&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;A2&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;bash&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;echo "A"&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;stage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;B&lt;/span&gt;
  &lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;job&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;B1&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;bash&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;echo "B"&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;job&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;B2&lt;/span&gt;
    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;bash&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;echo "A"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;So the hierarchy is as above. You can have a list of stages, which is the top level. Underneath, you can have a list of jobs, which are further broken down into steps, and steps into tasks. You can also assign the kind of build agent you want at the job level, so one job can use a Windows 10 agent while another uses Ubuntu.&lt;/p&gt;

&lt;p&gt;OK, now that the pipeline hierarchy is clear, let's go back to the original build pipeline above. It has only one stage, called UnitTestBuildAndPublish, but there are three jobs within it.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Job1, called UnitTest&lt;/li&gt;
&lt;li&gt;Job2, called Build. There is a task within this job called Docker@2, which builds the image and pushes it to the Azure Container Registry.&lt;/li&gt;
&lt;li&gt;Job3, called PreReleasePrepForhospitalMicroservice. There are two tasks within this. In the first task, I use a Bash command to get hold of the K8s YAML file and replace the placeholder with the image tag. In the second task, I use PublishPipelineArtifact. This pushes the artifact so I can get hold of it in the release pipeline, where I need the image tag that was pushed to the Azure Container Registry. 
&lt;a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/tasks/utility/publish-pipeline-artifact?view=azure-devops"&gt;Publish-Pipeline-Artifact&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As for whether the jobs run in sequence or in parallel: by default, they run in parallel. But in the above YAML file you may notice the "dependsOn" tag under each job. The Build job "dependsOn" the UnitTest job completing, and the Prepublish job "dependsOn" the Build job. So in essence they are executed sequentially.&lt;br&gt;
So the sequence is UnitTest &amp;gt; Build (&amp;amp; Push Docker Image to Registry) &amp;gt; Prepublish&lt;/p&gt;
&lt;h4&gt;
  
  
  Variables in the above build pipeline
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;dockerRegistryServiceConnection: This is the service connection to the Azure Container Registry. This is preconfigured.&lt;/li&gt;
&lt;li&gt;containerRegistry: 'dockerstore' is the ACR service name.&lt;/li&gt;
&lt;li&gt;imageRepository: This is an image repository (or microservice name).&lt;/li&gt;
&lt;li&gt;dockerfilePath: Relative path to the Dockerfile in Github&lt;/li&gt;
&lt;li&gt;tag: Docker Image tag. We are using the BuildId and the Github Commitid for traceability from Docker Image to Github code.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Release Pipeline
&lt;/h3&gt;

&lt;p&gt;Below is the screenshot of my release pipeline tasks page. As you can see I am using two Agents.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GDlD8hXb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/zorln77znoz9gepdv7ps.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GDlD8hXb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/zorln77znoz9gepdv7ps.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Hosted Agent running Ubuntu 18.04

&lt;ul&gt;
&lt;li&gt;Needed to build Linux Docker images for deployment to Linux Nodepool in AKS.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Custom Agent on Windows 10 VM

&lt;ul&gt;
&lt;li&gt;SoapUI Pro requires a Windows 10 OS to run its tests.&lt;/li&gt;
&lt;li&gt;API Management tasks also require Windows 10 OS to run.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h4&gt;
  
  
  Download Pipeline Artifact (running on Ubuntu agent)
&lt;/h4&gt;

&lt;p&gt;So the first step is to download the pipeline artifact. We need this for the K8s YAML file, in which the build pipeline has replaced the ##BUILD_ID## placeholder with the real tag of the image that was pushed to ACR.&lt;/p&gt;
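&lt;p&gt;My release pipeline is built in the classic designer, but in YAML form this step would look roughly like the sketch below (the artifact name matches the one published by the build pipeline above):&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- task: DownloadPipelineArtifact@2
  inputs:
    artifact: 'hospital'
    path: '$(Pipeline.Workspace)/hospital'
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;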
&lt;h4&gt;
  
  
  Kubectl Apply (running on Ubuntu agent)
&lt;/h4&gt;

&lt;p&gt;We will use this task to apply the above YAML file against the "NM" namespace of AKS. Namespaces are a way to logically host multiple environments in Kubernetes. The above YAML shows that the pods and services will be deployed to the "NM" namespace.&lt;/p&gt;
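&lt;p&gt;What the task runs is roughly equivalent to the commands below (file and namespace names as used in this project):&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f hospital_build.yaml -n nm
kubectl get pods,services -n nm
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;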

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rUWW3LhL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/320hqepdrdu7n98kzqn5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rUWW3LhL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/320hqepdrdu7n98kzqn5.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The command output above confirms that the services and pods are deployed in that namespace.&lt;/p&gt;
&lt;h4&gt;
  
  
  Bash Script Task (running on Ubuntu agent)
&lt;/h4&gt;

&lt;p&gt;This one is admittedly a hack. The task simply sleeps, holding the pipeline for a few seconds so that the pods have enough time to come up. Without it, SoapUI Pro would fail all the tests against pods that are not yet running.&lt;/p&gt;
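&lt;p&gt;A less fragile alternative to a fixed sleep is to ask kubectl to block until the rollout completes. The deployment name below is an assumption for illustration.&lt;/p&gt;

```yaml
# Hypothetical sketch: wait for the rollout instead of sleeping.
# 'hospital-api' is an assumed deployment name.
steps:
- script: |
    kubectl rollout status deployment/hospital-api -n nm --timeout=120s
  displayName: 'Wait for pods to be ready'
```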
&lt;h4&gt;
  
  
  Azure SQL Dacpac Task (running on Ubuntu agent)
&lt;/h4&gt;

&lt;p&gt;This task is for database deployment. It is currently disabled because this project has no database changes to apply, but it is the task you would use to run DDL and DML statements against your Azure SQL database.&lt;/p&gt;
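&lt;p&gt;For reference, a minimal sketch of this task in YAML pipeline syntax might look like the following; the service connection, server, database, and file names are all assumptions.&lt;/p&gt;

```yaml
# Hypothetical sketch of an Azure SQL dacpac deployment step.
# All names below are assumed for illustration.
steps:
- task: SqlAzureDacpacDeployment@1
  inputs:
    azureSubscription: 'azure-connection'            # assumed service connection
    serverName: 'myserver.database.windows.net'      # assumed server
    databaseName: 'hospitaldb'                       # assumed database
    deployType: 'DacpacTask'
    dacpacFile: '$(Pipeline.Workspace)/db/hospital.dacpac'
```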
&lt;h4&gt;
  
  
  API Management - Create/Update API (running on Windows 10 agent)
&lt;/h4&gt;

&lt;p&gt;This is a task I got from the marketplace; it helps me deploy API definitions to Azure API Management. It accepts an OpenAPI (OAS) 3.0 definition, which I build using the Swagger editor. Below is my API definition.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;openapi&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;3.0.0&lt;/span&gt;
&lt;span class="c1"&gt;# Added by API Auto Mocking Plugin&lt;/span&gt;
&lt;span class="na"&gt;servers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;SwaggerHub API Auto Mocking&lt;/span&gt;
    &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://virtserver.swaggerhub.com/BRB/Hospital/1.0.0&lt;/span&gt;
&lt;span class="na"&gt;info&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Demo API on Hospital.&lt;/span&gt; 
  &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1.0.0"&lt;/span&gt;
  &lt;span class="na"&gt;title&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Hospital&lt;/span&gt;
  &lt;span class="na"&gt;contact&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;email&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;m.naseem@outlook.com&lt;/span&gt;
  &lt;span class="na"&gt;license&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Apache &lt;/span&gt;&lt;span class="m"&gt;2.0&lt;/span&gt;
    &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;http://www.apache.org/licenses/LICENSE-2.0.html'&lt;/span&gt;
&lt;span class="na"&gt;tags&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Hospital&lt;/span&gt;
    &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Hospital related matters&lt;/span&gt;
&lt;span class="na"&gt;paths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="s"&gt;/hospital/{hospitalId}&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;get&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;tags&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Hospital&lt;/span&gt;
      &lt;span class="na"&gt;summary&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Finds hospital by id&lt;/span&gt;
      &lt;span class="na"&gt;operationId&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;GetHospitalById&lt;/span&gt;
      &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
        &lt;span class="s"&gt;By passing in the valid id, you can search for&lt;/span&gt;
        &lt;span class="s"&gt;the hospial details in the database&lt;/span&gt;
      &lt;span class="na"&gt;parameters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;in&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;path&lt;/span&gt;
          &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hospitalId&lt;/span&gt;
          &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pass the hospitalId for looking up the database&lt;/span&gt;
          &lt;span class="na"&gt;required&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;
          &lt;span class="na"&gt;schema&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;integer&lt;/span&gt;
          &lt;span class="na"&gt;example&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;415&lt;/span&gt;
      &lt;span class="na"&gt;responses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;200'&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;search result matching criteria&lt;/span&gt;
          &lt;span class="na"&gt;content&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="s"&gt;application/json&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;schema&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                &lt;span class="s"&gt;$ref&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;#/components/schemas/Hospital'&lt;/span&gt;
        &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;400'&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bad input parameter&lt;/span&gt;
    &lt;span class="na"&gt;delete&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;tags&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Hospital&lt;/span&gt;
      &lt;span class="na"&gt;summary&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;deletes a hospital from list&lt;/span&gt;
      &lt;span class="na"&gt;operationId&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DeleteHospial&lt;/span&gt;
      &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deletes a hospital from list&lt;/span&gt;
      &lt;span class="na"&gt;parameters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hospitalId&lt;/span&gt;
        &lt;span class="na"&gt;in&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;path&lt;/span&gt;
        &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Hospital id to delete&lt;/span&gt;
        &lt;span class="na"&gt;required&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;
        &lt;span class="na"&gt;schema&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;string&lt;/span&gt;
      &lt;span class="na"&gt;responses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;400'&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Invalid&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;ID&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;supplied"&lt;/span&gt;
        &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;404'&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Hospital&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;not&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;found"&lt;/span&gt;
  &lt;span class="s"&gt;/hospital&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;post&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;tags&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Hospital&lt;/span&gt;
      &lt;span class="na"&gt;summary&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;adds a hospital to the list&lt;/span&gt;
      &lt;span class="na"&gt;operationId&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AddHospital&lt;/span&gt;
      &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Adds a hospital to the list&lt;/span&gt;
      &lt;span class="na"&gt;parameters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Authorization&lt;/span&gt;
        &lt;span class="na"&gt;in&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;header&lt;/span&gt;
        &lt;span class="na"&gt;required&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;
        &lt;span class="na"&gt;schema&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;string&lt;/span&gt;
      &lt;span class="na"&gt;responses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;201'&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hospital added&lt;/span&gt;
        &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;400'&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;invalid input, object invalid&lt;/span&gt;
        &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;409'&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hospital already exists&lt;/span&gt;
      &lt;span class="na"&gt;requestBody&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;content&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="s"&gt;application/json&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;schema&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="s"&gt;$ref&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;#/components/schemas/Hospital'&lt;/span&gt;
        &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Add Hospital to list&lt;/span&gt;
    &lt;span class="na"&gt;patch&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;tags&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Hospital&lt;/span&gt;
      &lt;span class="na"&gt;summary&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Updates a hospital in the list&lt;/span&gt;
      &lt;span class="na"&gt;operationId&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;UpdateHospital&lt;/span&gt;
      &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Updates a hospital to the list&lt;/span&gt;
      &lt;span class="na"&gt;responses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;200'&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hospital updated&lt;/span&gt;
        &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;400'&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;invalid input, object invalid&lt;/span&gt;
        &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;404'&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hospital does not exists&lt;/span&gt;
      &lt;span class="na"&gt;requestBody&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;content&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="s"&gt;application/json&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;schema&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="s"&gt;$ref&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;#/components/schemas/Hospital'&lt;/span&gt;
        &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Updates Hospital in the list&lt;/span&gt;
&lt;span class="na"&gt;components&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;schemas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;Hospital&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;object&lt;/span&gt;
      &lt;span class="na"&gt;required&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Id&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Name&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Address&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;City&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Pincode&lt;/span&gt;
      &lt;span class="na"&gt;properties&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;integer&lt;/span&gt;
          &lt;span class="na"&gt;example&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;56&lt;/span&gt;
        &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;string&lt;/span&gt;
          &lt;span class="na"&gt;example&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Epidemic Diseases Hospial&lt;/span&gt;
        &lt;span class="na"&gt;Address&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;string&lt;/span&gt;
          &lt;span class="na"&gt;example&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;MAJESTIC&lt;/span&gt;
        &lt;span class="na"&gt;City&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;string&lt;/span&gt;
          &lt;span class="na"&gt;example&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Bangalore&lt;/span&gt;
        &lt;span class="na"&gt;Pincode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;integer&lt;/span&gt;
          &lt;span class="na"&gt;example&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;562110&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h4&gt;
  
  
  SoapUI Pro for Azure DevOps (running on Win10 agent)
&lt;/h4&gt;

&lt;p&gt;After all of the above steps are completed, it is time to run the integration tests and make sure everything still complies: the SLA will be met, and we didn't break anything.&lt;br&gt;
Once the tests are completed, SoapUI exports some of its reports to Azure DevOps. Below is one such report.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--piZpS4uh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/0p5ejp7dffypxoxiagdi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--piZpS4uh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/0p5ejp7dffypxoxiagdi.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once all the tests pass, we can promote the deployment to higher environments. This too can be automated nicely in Azure DevOps: each environment is modeled as a stage, and we can add a manual approver as part of environment promotion. Below is a screenshot of the stages graphic that Azure DevOps provides.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wVN0ae_y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/za5wozdocs1qr4f05u4w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wVN0ae_y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/za5wozdocs1qr4f05u4w.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;
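&lt;p&gt;In YAML pipeline syntax, such a promotion flow can be sketched as dependent stages, with approvals attached to the environments in Azure DevOps; the stage and environment names below are assumptions.&lt;/p&gt;

```yaml
# Hypothetical sketch of multi-stage promotion.
# Manual approvals are configured on the environments, not in this file.
stages:
- stage: DeployDev
  jobs:
  - deployment: deploy
    environment: 'dev'            # assumed environment name
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo "deploy to dev"
- stage: DeployTest
  dependsOn: DeployDev            # runs only after Dev succeeds
  jobs:
  - deployment: deploy
    environment: 'test'           # a manual approver can be set on this environment
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo "deploy to test"
```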

</description>
      <category>devops</category>
      <category>aks</category>
      <category>cicd</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>SQL Server container in Azure Kubernetes Services (AKS)</title>
      <dc:creator>Naseem Mohammed</dc:creator>
      <pubDate>Mon, 25 May 2020 03:36:57 +0000</pubDate>
      <link>https://dev.to/mnaseem/sql-server-container-in-azure-kubernetes-services-aks-2f8</link>
      <guid>https://dev.to/mnaseem/sql-server-container-in-azure-kubernetes-services-aks-2f8</guid>
      <description>&lt;p&gt;So recently I got involved with an ASP.NET project which was built over 10 years ago and over the years Developers and Change Requests came and went. And over the period the Application became quite cumbersome and quite hard to understand and manage, the Application became quite large in terms of functionality, codebase, and data. It was cumbersome and quite hard to understand for a new developer and manage for the Ops team. A lot of technical debts started accumulating because there was no real-time spend on optimizing or refactoring the systems. The database was slow and deadlocks were becoming normal. It was hosted on huge on-premise hardware making it a heavy and costly solution. &lt;/p&gt;

&lt;p&gt;The management realized this and decided it was time for a refresh. The plan was made to look at utilizing cloud technologies. And I joined the project as the software architect for the upgrade/migration. &lt;/p&gt;

&lt;p&gt;In this article the focus is on Data and I am putting down my thoughts on what would be the right Cloud Data architecture for this organization. &lt;/p&gt;

&lt;p&gt;Let's start by categorizing the data, the use cases, and the corresponding systems. &lt;br&gt;
Datastores can be categorized into two types of systems:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Transactional&lt;/li&gt;
&lt;li&gt;Analytical&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tckXYEw9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/alhbmhgdlwnpcmnsfxsr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tckXYEw9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/alhbmhgdlwnpcmnsfxsr.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And here is the &lt;em&gt;Data Architecture diagram&lt;/em&gt;:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_rnLQUVQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/17bgvuusl2voe80zlgd5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_rnLQUVQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/17bgvuusl2voe80zlgd5.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;On the left we have the client Apps connecting to Microservices through an API Management platform.&lt;/li&gt;
&lt;li&gt;The microservices are hosted in Azure Kubernetes Service, on VMs appropriate for running regular applications.&lt;/li&gt;
&lt;li&gt;For the database we are going with a SQL Server container, which we will host in a separate nodepool with VMs optimized for database workloads.&lt;/li&gt;
&lt;li&gt;Then for data warehousing we will use Snowflake, a SaaS-based cloud data warehouse. It is optimized for the cloud and has no on-premise offering.&lt;/li&gt;
&lt;li&gt;ETL: To move data from SQL Server to Snowflake we will be using Apache Spark Jobs hosted on Azure Databricks.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This article focuses on item #3, the SQL Server container; the others will be visited in future articles. We will now walk through how to&lt;/p&gt;
&lt;h3&gt;
  
  
  Deploy SQL Server DB Instance into an AKS Cluster.
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;We will be using a static Persistent Volume for the database, as opposed to dynamic provisioning. This gives us control when we have to restore an existing database into the SQL Server container: we can take regular snapshots of our Azure Disk and restore from a snapshot when disaster strikes. With dynamic provisioning, we would not have the control to specify an existing Azure Disk for storage or restoration.&lt;/li&gt;
&lt;li&gt;We will look at how we gain performance and high availability, and how we plan for disaster recovery (restoring from a snapshot or restoring the database from a backup file).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Databases are stateful, which means they need storage that persists. So we will first look at storage.&lt;/p&gt;
&lt;h3&gt;
  
  
  AKS Storage
&lt;/h3&gt;

&lt;p&gt;Applications hosted on Azure Kubernetes Service (AKS) may need to store and retrieve data, and their storage requirements come in many forms:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fast local storage that need not be persisted after the pod is deleted. &lt;/li&gt;
&lt;li&gt;Data storage that needs to be persisted even after the pod is deleted or relocated to some other node in the cluster.&lt;/li&gt;
&lt;li&gt;Storage may need to be shared between multiple pods. &lt;/li&gt;
&lt;li&gt;The access modes required by the applications (such as read/write) also differ. &lt;/li&gt;
&lt;li&gt;For some application workloads, fast local storage on the node is sufficient, and the data is no longer needed when the pods are deleted. &lt;/li&gt;
&lt;li&gt;Some storage may be used to inject configuration or sensitive data into pods.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Below we will address four concepts that provide storage to applications in AKS:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Volumes       &lt;/li&gt;
&lt;li&gt;Persistent volumes&lt;/li&gt;
&lt;li&gt;Storage classes&lt;/li&gt;
&lt;li&gt;Persistent volume claims &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gIsuigno--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/pw06d4ton4gpje8up34f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gIsuigno--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/pw06d4ton4gpje8up34f.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Volumes
&lt;/h3&gt;

&lt;p&gt;This is the storage itself, and in Azure it comes in two forms:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Azure Disks (these come in several flavors: Standard HDD, Standard SSD, Premium SSD, and Ultra SSD)&lt;/li&gt;
&lt;li&gt;Azure Files&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For our SQL Server container we will create an Azure Disk to meet the data storage requirements.&lt;br&gt;
&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bHtvIy7x--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/gu7vjau24rnpadwdxb6q.png" alt="Alt Text"&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Persistent Volume
&lt;/h3&gt;

&lt;p&gt;A persistent volume (PV) is a storage resource, managed by the Kubernetes API, that can exist beyond the lifetime of a pod. It can be provisioned statically or dynamically; we will be looking at static provisioning.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PersistentVolume&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;azure-disk-pv&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;db&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;capacity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;storage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;80Gi&lt;/span&gt;
  &lt;span class="na"&gt;storageClassName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;
  &lt;span class="na"&gt;volumeMode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Filesystem&lt;/span&gt;
  &lt;span class="na"&gt;accessModes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ReadWriteOnce&lt;/span&gt;
  &lt;span class="na"&gt;azureDisk&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Managed&lt;/span&gt;
    &lt;span class="na"&gt;diskName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Kube_static_disk&lt;/span&gt;
    &lt;span class="na"&gt;diskURI&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;  &lt;span class="s"&gt;/subscriptions/15cxx96af-xxxxx-xxx-a760-1f58cxxxxxfe/resourceGroups/MC_maltax_southeastasia/providers/Microsoft.Compute/disks/Kube_static_disk&lt;/span&gt;              
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Storage Classes (SC)
&lt;/h3&gt;

&lt;p&gt;A storage class defines the tier (Premium/Standard), the access modes, and the reclaim policy.&lt;/p&gt;
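&lt;p&gt;As an illustration, a storage class for Premium Azure managed disks might look like the sketch below; the name and parameters are assumptions, not taken from this cluster.&lt;/p&gt;

```yaml
# Hypothetical StorageClass sketch for Premium Azure managed disks.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-premium-retain       # assumed name
provisioner: kubernetes.io/azure-disk
parameters:
  storageaccounttype: Premium_LRS    # Premium tier
  kind: Managed
reclaimPolicy: Retain                # keep the disk when the PV is released
```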
&lt;h3&gt;
  
  
  Persistent Volume Claim (PVC)
&lt;/h3&gt;

&lt;p&gt;When an application requires persistent storage from AKS, it has to issue a claim, or Persistent Volume Claim, which defines the storage class, access mode, and size. &lt;br&gt;
&lt;em&gt;If an annotation like the one below is set in the PVC, Kubernetes will try to dynamically provision the resource, assuming a matching storage class is found.&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;volume.beta.kubernetes.io/storage-class&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;m-azure-disk&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;But in our case, we are going with static provisioning. We already created an 80 GB Azure Disk earlier, and we also created the Persistent Volume (PV).&lt;/p&gt;

&lt;p&gt;Now let’s create a Persistent Volume Claim (PVC).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PersistentVolumeClaim&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mssql-data-pvc&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;db&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;storageClassName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;
  &lt;span class="na"&gt;accessModes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ReadWriteOnce&lt;/span&gt;
  &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;storage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;80Gi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;p&gt;As you can see, we don't have the annotation for dynamic provisioning. Instead, Kubernetes will bind this PVC to the PV created earlier. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0xT82Gw1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/rrcv6pf95p2nd9dfesdm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0xT82Gw1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/rrcv6pf95p2nd9dfesdm.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The screenshot above confirms the PVC is bound to the PV; in other words, the claim is mapped to real storage (the disk).&lt;/p&gt;

&lt;p&gt;Now that we have storage sorted, let's deploy the SQL Server container onto Kubernetes. Below is the file with the Kubernetes SQL Server Deployment and Service details; check the volumes section.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1beta1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mssql-deployment&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;db&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mssql&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;terminationGracePeriodSeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;10&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mssql&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mcr.microsoft.com/mssql/server:2017-latest&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1433&lt;/span&gt;
        &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;MSSQL_PID&lt;/span&gt;
          &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Developer"&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ACCEPT_EULA&lt;/span&gt;
          &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Y"&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;SA_PASSWORD&lt;/span&gt;
          &lt;span class="na"&gt;valueFrom&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;secretKeyRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mssql&lt;/span&gt;
              &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;SA_PASSWORD&lt;/span&gt; 
        &lt;span class="na"&gt;volumeMounts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mssqldb&lt;/span&gt;
          &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/var/opt/mssql&lt;/span&gt;
      &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mssqldb&lt;/span&gt;
        &lt;span class="na"&gt;persistentVolumeClaim&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;claimName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mssql-data-pvc&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mssql-deployment&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;db&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mssql&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
      &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1433&lt;/span&gt;
      &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1433&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;LoadBalancer&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
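&lt;p&gt;Applying the manifest is a single command; a sketch, assuming the YAML above is saved as &lt;code&gt;mssql-deployment.yaml&lt;/code&gt; (the filename is illustrative):&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;# Create the Deployment and Service in the db namespace
kubectl apply -f mssql-deployment.yaml

# Watch the pod start up
kubectl get pods -n db -w
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;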



&lt;p&gt;We will deploy this. See the screenshot below.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--F8Oma-jq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/lsjfwq2kp5l9k92xujm1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--F8Oma-jq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/lsjfwq2kp5l9k92xujm1.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
After around 40 seconds the SQL Server instance is up and running, and we are able to connect using the sqlcmd command. &lt;/p&gt;
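&lt;p&gt;A connection test might look like the following; the external IP comes from the Service, and the password placeholder stands in for the value stored in the &lt;code&gt;mssql&lt;/code&gt; secret:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;# Find the LoadBalancer external IP of the Service
kubectl get svc mssql-deployment -n db

# Connect with sqlcmd (substitute the real IP and SA password)
sqlcmd -S EXTERNAL_IP,1433 -U sa -P "SA_PASSWORD" -Q "SELECT @@VERSION"
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;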

&lt;p&gt;We can also see that the pod is running on a VM, and the screenshot below shows that the VM has mounted the storage disk we provisioned earlier. &lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lCz92Ezp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/1kd4kpewwpx5tlhghq7f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lCz92Ezp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/1kd4kpewwpx5tlhghq7f.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;Key Parameters&lt;/h3&gt;

&lt;p&gt;Now SQL Server is set up and running; let's look at some key parameters.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Performance&lt;/li&gt;
&lt;li&gt;High Availability&lt;/li&gt;
&lt;li&gt;Disaster Recovery &lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;Performance&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--K_LDY9wv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/y3mccne03b22a3nnfj3y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--K_LDY9wv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/y3mccne03b22a3nnfj3y.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;br&gt;
In AKS, a feature called Azure Accelerated Networking is turned on by default. Since our applications and the SQL Server container instance are deployed in the same AKS cluster, we automatically gain the benefits of this feature:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Significantly improved network performance.&lt;/li&gt;
&lt;li&gt;Network throughput of up to 30Gbps.&lt;/li&gt;
&lt;li&gt;Reduced latency / higher packets per second (pps). &lt;/li&gt;
&lt;li&gt;Reduced jitter.&lt;/li&gt;
&lt;li&gt;Decreased CPU utilization for processing network traffic.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Another way to gain performance is to use Azure Ultra Disks (Ultra SSD), which scale up to 160,000 IOPS and 2 GB/s throughput per disk with zero downtime.&lt;/p&gt;

&lt;h4&gt;High Availability&lt;/h4&gt;

&lt;p&gt;Container level:&lt;br&gt;
Kubernetes regularly checks whether the SQL Server container instances are healthy. If an instance crashes or stops responding, Kubernetes restarts it. When I deleted a SQL Server pod, Kubernetes detected it instantly and spun up a new pod within 4 seconds. See the screenshot below.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--c16tJENI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/f09qxjh0jzwc3kvmni2h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--c16tJENI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/f09qxjh0jzwc3kvmni2h.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;Disaster Recovery&lt;/h4&gt;

&lt;p&gt;Our storage for the container is Azure Disks, which provides an SLA of 99.999%, and Microsoft has had no reported outage so far. Still, we should be prepared for a disaster. One option is to take regular incremental snapshots of the Azure Disk. If disaster strikes, we can restore the most recent snapshot to a new disk and mount that disk onto a new SQL Server instance. The restored data may be a few minutes or hours old, depending on the Recovery Point Objective (RPO) of our disaster recovery strategy.&lt;/p&gt;
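&lt;p&gt;The snapshot approach can be sketched with the Azure CLI; the resource group and disk names below are illustrative, not taken from this setup:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;# Take an incremental snapshot of the data disk
az snapshot create \
  --resource-group myResourceGroup \
  --name mssql-data-snapshot \
  --source mssql-data-disk \
  --incremental true

# After a disaster: create a new managed disk from the snapshot
az disk create \
  --resource-group myResourceGroup \
  --name mssql-data-restored \
  --source mssql-data-snapshot
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;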

&lt;p&gt;Another way to get back up and running after a disaster is the SqlPackage command-line tool. We can take regular automated backups (bacpac exports). When disaster strikes, we spin up a new SQL Server instance with fresh Azure Disk storage, then restore the bacpac file onto it for use by SQL Server. &lt;/p&gt;
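&lt;p&gt;A SqlPackage-based backup and restore might be sketched as follows; the server addresses, database name, and credentials are placeholders:&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight shell"&gt;&lt;code&gt;# Regular automated backup: export the database to a bacpac file
sqlpackage /Action:Export \
  /SourceServerName:EXTERNAL_IP,1433 \
  /SourceDatabaseName:MyDatabase \
  /SourceUser:sa /SourcePassword:SA_PASSWORD \
  /TargetFile:backup.bacpac

# After a disaster: import the bacpac into the new instance
sqlpackage /Action:Import \
  /TargetServerName:NEW_IP,1433 \
  /TargetDatabaseName:MyDatabase \
  /TargetUser:sa /TargetPassword:SA_PASSWORD \
  /SourceFile:backup.bacpac
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;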

</description>
      <category>aks</category>
      <category>kubernetes</category>
      <category>database</category>
      <category>containers</category>
    </item>
  </channel>
</rss>
