<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: wmchurchill3</title>
    <description>The latest articles on DEV Community by wmchurchill3 (@wmchurchill3).</description>
    <link>https://dev.to/wmchurchill3</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F371670%2Fa96756b4-9783-4128-806b-84177e099764.png</url>
      <title>DEV Community: wmchurchill3</title>
      <link>https://dev.to/wmchurchill3</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/wmchurchill3"/>
    <language>en</language>
    <item>
      <title>Developing Images Post Docker</title>
      <dc:creator>wmchurchill3</dc:creator>
      <pubDate>Tue, 07 Jun 2022 13:57:20 +0000</pubDate>
      <link>https://dev.to/leading-edje/developing-images-post-docker-1ofi</link>
      <guid>https://dev.to/leading-edje/developing-images-post-docker-1ofi</guid>
<description>&lt;p&gt;The &lt;a href="https://www.docker.com/blog/the-grace-period-for-the-docker-subscription-service-agreement-ends-soon-heres-what-you-need-to-know/" rel="noopener noreferrer"&gt;Grace Period&lt;/a&gt; for Docker has ended.  Organizations and professionals will need to pay to play in Docker.  &lt;a href="https://dev.to/leading-edje/a-window-into-docker-minikube-and-containerd-16bi"&gt;Another post&lt;/a&gt; covers an unsuccessful attempt to move to another platform.  There was some hope with &lt;a href="https://minikube.sigs.k8s.io/docs/" rel="noopener noreferrer"&gt;minikube&lt;/a&gt; for running Linux images.  That still did not address developing new images to run in other container runtimes.&lt;/p&gt;

&lt;p&gt;How does one develop container images without using Docker?  &lt;a href="https://buildah.io" rel="noopener noreferrer"&gt;Buildah&lt;/a&gt; is a project that builds Docker/Kubernetes compatible images.  &lt;a href="https://podman.io" rel="noopener noreferrer"&gt;Podman&lt;/a&gt; can run those images.  Neither of these applications requires root permission.  This post covers some use cases and syntax for both buildah and podman.  The installation of these applications will be left as an exercise for the reader.  Usage on Windows requires WSL to be installed and configured.&lt;/p&gt;

&lt;h2&gt;
  
  
  Build an Image (Buildah)
&lt;/h2&gt;

&lt;p&gt;The nice thing about buildah is that we do not need to learn any new syntax for the build file.  Any Dockerfile used to create an image should work.  This is even true for multi-stage build files.  The only caveat is how images are referenced in the Dockerfile.  If one uses a short-name notation like&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM node:14-alpine AS node
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;one will either have to update &lt;code&gt;/etc/containers/registries.conf&lt;/code&gt; to find the repository hosting the image or use a fully qualified path.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM docker.io/library/node:14-alpine AS node
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
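
&lt;p&gt;For the registries.conf route, a minimal entry might look like this; it is only a sketch, and the search list should be adjusted for your environment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# /etc/containers/registries.conf
unqualified-search-registries = ["docker.io"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;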



&lt;p&gt;The sample multi-stage Dockerfile builds a web front-end with a .NET backend and looks like this (names have been changed to protect the not-so-innocent):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM docker.io/library/node:14-alpine AS node
WORKDIR /app
COPY ./AcmeWidgets.OMS.Web ./
RUN cd ./WEB &amp;amp;&amp;amp; npm install &amp;amp;&amp;amp; npm run test &amp;amp;&amp;amp; npm run build

FROM mcr.microsoft.com/dotnet/sdk:6.0 AS dnet
WORKDIR /app
COPY . ./
RUN cd ./AcmeWidgets.OMS.Web &amp;amp;&amp;amp; dotnet publish AcmeWidgets.OMS.Web.csproj -c Release -o /app/Output 
COPY --from=node /app/wwwroot /app/Output/wwwroot/
RUN cd /app/AcmeWidgets.Tests &amp;amp;&amp;amp; dotnet test -c Release --blame 

FROM mcr.microsoft.com/dotnet/aspnet:6.0 AS base
RUN apt-get update \
    &amp;amp;&amp;amp; apt-get install -y curl \
    &amp;amp;&amp;amp; rm -rf /var/lib/apt/lists/*

COPY --from=dnet /app/Output ./
COPY ./Build/entry.sh .
ENTRYPOINT ["./entry.sh"]
CMD ["dotnet", "AcmeWidgets.OMS.Web.dll"]
EXPOSE 4213
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The command to build an image is&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; buildah bud -t AcmeWidgets/OMS:latest -f &amp;lt;Docker file name&amp;gt; &amp;lt;project directory&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To verify you have created a new image, issue the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; buildah images
REPOSITORY                        TAG                IMAGE ID       CREATED          SIZE
localhost/AcmeWidgets/OMS         latest             0be7a36b48d9   10 seconds ago   255 MB
mcr.microsoft.com/dotnet/sdk      6.0                83ae347bcb57   3 days ago       740 MB
mcr.microsoft.com/dotnet/aspnet   6.0                69cb014b394b   3 days ago       212 MB
docker.io/library/node            14-alpine          04883debec4a   12 days ago      123 MB
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One can also confirm the image creation using podman:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; podman image ls
REPOSITORY                        TAG                IMAGE ID       CREATED          SIZE
localhost/AcmeWidgets/OMS         latest             0be7a36b48d9   10 minutes ago   255 MB
mcr.microsoft.com/dotnet/sdk      6.0                83ae347bcb57   3 days ago       740 MB
mcr.microsoft.com/dotnet/aspnet   6.0                69cb014b394b   3 days ago       212 MB
docker.io/library/node            14-alpine          04883debec4a   12 days ago      123 MB
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Running an image (Podman)
&lt;/h2&gt;

&lt;p&gt;In addition to listing images, podman can run them.  A quick example would be running an nginx image on port 8081.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; podman run -p 8081:80 nginx:latest
Error: error getting default registries to try: short-name "nginx:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Oops, we got ahead of ourselves.  Podman cannot resolve the short name because the image is not on our system and no unqualified-search registries are configured.  One can either pull the image and run it&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; podman pull docker.io/library/nginx:latest
&amp;gt; podman run -p 8081:80 nginx:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;or use a fully qualified name in the run command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; podman run -p 8081:80 docker.io/library/nginx:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
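
&lt;p&gt;Podman can also run the image we built earlier.  A sketch (the host port is arbitrary; 4213 matches the Dockerfile's EXPOSE):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; podman run -p 8080:4213 localhost/AcmeWidgets/OMS:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;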



&lt;h2&gt;
  
  
  Bonus: Building Legacy Code
&lt;/h2&gt;

&lt;p&gt;I maintain multiple applications using a variety of tech stacks.  Some of them are niche and a bit dated.  Because of this, I do not want to install the tools on my workstation.  Some of them are not available for download.  Some interfere with my daily work.  Virtualization is great because I can grab an image with the tool(s) I need and run it without installing it. &lt;/p&gt;

&lt;p&gt;Let's say I have a Java 8 application built with Maven.  I do not want that on my machine for a variety of reasons.  I can get an image from &lt;a href="https://hub.docker.com" rel="noopener noreferrer"&gt;dockerhub&lt;/a&gt; and build my application that way.  Getting and running the image is easy enough, but I am interacting with the filesystem. Podman has ways of mounting folders just like Docker.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; podman run --mount type=bind,src=/home/cool_username/src/directory,target=/src --mount type=bind,src=/home/cool_username/.m2,target=/root/.m2 docker.io/library/maven:3.6.0-jdk-8-slim mvn clean 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's break down the command:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The mount flag tells podman a directory should be mounted into the running container.  The bind type is a directory bind.&lt;/li&gt;
&lt;li&gt;The first bind is where the source code to compile is.&lt;/li&gt;
&lt;li&gt;The second bind is the local Maven cache, so it does not have to download everything every time.&lt;/li&gt;
&lt;li&gt;The image to run&lt;/li&gt;
&lt;li&gt;The command to run when the image is started&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can get crafty with this and create an interactive shell out of it.  Mine looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export APP_SRC=/home/cool_username/src/directory
export M2_DIR=/home/cool_username/.m2
export POD_IMAGE=maven:3.6.0-jdk-8-slim

podman run --mount type=bind,src=$APP_SRC,target=/src --mount type=bind,src=$M2_DIR,target=/root/.m2 -i $POD_IMAGE /bin/sh $@
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
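
&lt;p&gt;As a usage sketch, assuming the snippet above is saved as &lt;code&gt;mvn-shell.sh&lt;/code&gt; (a hypothetical name):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; chmod +x mvn-shell.sh
&amp;gt; ./mvn-shell.sh
$ cd /src &amp;amp;&amp;amp; mvn clean package
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;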



&lt;p&gt;You might note the &lt;code&gt;-i&lt;/code&gt; flag; it keeps STDIN open so the shell is interactive.  For more details, the podman commands are broken down &lt;a href="https://docs.podman.io/en/latest/Commands.html" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This post has only scratched the surface of what you can do with Podman and Buildah.  I hope the use cases presented here inspire you to make the switch.  If not, there are still some pretty cool things you can do with Podman, like generating Kubernetes YAML files; the &lt;code&gt;play kube&lt;/code&gt; feature deserves a &lt;a href="https://www.redhat.com/sysadmin/podman-play-kube-updates" rel="noopener noreferrer"&gt;post of its own&lt;/a&gt;.  Enjoy!&lt;/p&gt;
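
&lt;p&gt;As a parting sketch (the container name here is hypothetical), Podman can round-trip a running container through Kubernetes YAML:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; podman run -d --name oms -p 8080:4213 localhost/AcmeWidgets/OMS:latest
&amp;gt; podman generate kube oms &amp;gt; oms-pod.yaml
&amp;gt; podman rm -f oms
&amp;gt; podman play kube oms-pod.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;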

&lt;p&gt;&lt;a href="https://dev.to/leading-edje"&gt;&lt;br&gt;
  &lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F5uo60qforg9yqdpgzncq.png" alt="Smart EDJE Image"&gt;&lt;br&gt;
&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>cloud</category>
      <category>docker</category>
    </item>
    <item>
      <title>My Long, Strange Trip into DevOps</title>
      <dc:creator>wmchurchill3</dc:creator>
      <pubDate>Mon, 15 Nov 2021 12:08:43 +0000</pubDate>
      <link>https://dev.to/leading-edje/my-long-strange-trip-into-devops-14ca</link>
      <guid>https://dev.to/leading-edje/my-long-strange-trip-into-devops-14ca</guid>
<description>&lt;p&gt;My introduction to DevOps best practices was different from most.  Career experience led me to much of my DevOps knowledge.  My journey did not start because management was looking for a silver-bullet solution.  DevOps wasn't even a word to put on a resume.  I learned best practices out of a need to deliver a high quality product and support my teams.&lt;/p&gt;

&lt;p&gt;Early in my career, I helped write manufacturing processes for ISO9001 compliance.  The overriding rule was that the documentation should reflect the actual process.  If the process had to change, document it.  The resulting documentation was available to all members of the company.  We encouraged everyone to contribute to the document.  This year's State of DevOps has an entire section on documentation.  This experience and reading &lt;a href="https://www.amazon.com/Goal-Process-Ongoing-Improvement-Anniversary/dp/B00IFGGDA2"&gt;The Goal&lt;/a&gt; taught me that consistency in process is essential for business.  Consistency is repeatability.  Once you can repeat a result, you can change variables to get a desired outcome.  By closing the feedback loop, your iterations gain purpose.  That purpose can range from increasing quality to cutting costs.  This ties in with the DevOps principle of &lt;a href="https://netflixtechblog.com/deploying-the-netflix-api-79b6176cc3f0"&gt;"Providing Automation and Insight"&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I took this experience into my software career.  One of the first things I do in any project is automate the build.  Right after that, I try to have a single release point.  I never subscribed to the idea of cutting a release from someone's desktop.  This was too prone to error and tribal knowledge.  The first time I was a lead on a project, I lobbied for a dedicated source control server.  Not satisfied and knowing I was not getting another server, I installed build tools on it.  Continuing, I added provisions to stand up the application to run manual validations.  This was about 20 years ago, before there was such a thing as pipelines.&lt;/p&gt;

&lt;p&gt;Shortly after that, I had the opportunity to work on a series of mission critical batch processes.  The Operations team asked for monitoring and alerting around those processes.  All data center applications logged states into a database log table.  The dashboard was a display from this log table.  I wrote quite a few processes that alerted from the log table, and I learned the importance of monitoring.  Monitoring provides insight into a system.  This, coupled with metrics, keeps systems healthy.  The collaboration with Operations forms a primary pillar of DevOps.  Expecting another group to deploy and manage your application is unrealistic.  There has to be some level of teamwork.  During that time, I learned much from the Operations team, including how to prepare an application for deployment and how to monitor it properly.  I am confident the Operations team learned more about what to ask of developers.  This makes for smoother deployments and upgrades.  An unexpected benefit is a reduction in unplanned work due to failed deployments or upgrades.&lt;/p&gt;

&lt;p&gt;Over the years, collaboration with other teams has continued.  Collaboration and communication are hallmarks of DevOps.  To quote &lt;a href="https://services.google.com/fh/files/misc/state-of-devops-2021.pdf"&gt;Accelerate State of DevOps 2021&lt;/a&gt;, 'The successful execution of DevOps requires your organization to have teams that work collaboratively and cross-functionally.'  Working with database administrators improved the application and increased my database skills.  Teaming up with Quality Engineering improves test coverage and quality.  The reduced testing burden on QE results in quicker turnaround time during testing.  Today's threats require working with Security.  Earlier involvement from Security makes creating a secure application easier.  This is the essence of DevSecOps.&lt;/p&gt;

&lt;p&gt;One's journey into DevOps does not have to be at a conference, blog post, or in a book.  It can come from asking, "How can I help my team?"  DevOps can come from improving the product quality or security.  Better DevOps can even come from wanting to improve yourself.  There is still a lot to learn and I invite you to &lt;br&gt;
continue or start your journey.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/leading-edje"&gt;&lt;br&gt;
  &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SfUhPiEd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/5uo60qforg9yqdpgzncq.png" alt="Smart EDJE Image" width="800" height="280"&gt;&lt;br&gt;
&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
    </item>
    <item>
      <title>An Introduction to Cloud Functions</title>
      <dc:creator>wmchurchill3</dc:creator>
      <pubDate>Thu, 30 Sep 2021 16:19:18 +0000</pubDate>
      <link>https://dev.to/leading-edje/an-introduction-to-cloud-functions-2605</link>
      <guid>https://dev.to/leading-edje/an-introduction-to-cloud-functions-2605</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--62IMlW1Y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x22l6kmpvptw7hvqhe1j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--62IMlW1Y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x22l6kmpvptw7hvqhe1j.png" alt="Cloud Function Icon"&gt;&lt;/a&gt;&lt;br&gt;
During our yearly hackathon, I had an opportunity to work with Google Cloud Functions.  I was already familiar with AWS Lambdas.  Since many of my clients are either in GCP or looking at it as an alternative, this was a good time to experiment.  In this instance, the Functions acted as REST endpoints for a cloud database.  The Functions were implemented in Java and built with Gradle.&lt;/p&gt;

&lt;p&gt;After sorting out the accounts and permissions, I deployed the first Function.  I tried to use IAM authentication to get around having a DB password in the application.  Because IAM authentication would not work in the few days we had, the connection used username/password credentials.  Best practices require the credentials to be in a secrets manager.&lt;/p&gt;

&lt;p&gt;Another stumbling block was library dependencies.  Each Function has a backing container.  By default, the containers would not have any additional dependencies.  The Gradle shadow plugin fixed that problem by bundling the dependencies into a single fat jar.  The resulting jar was much larger than the default.  The &lt;code&gt;--source=build/lib&lt;/code&gt; argument in the &lt;code&gt;gcloud functions deploy&lt;/code&gt; command deploys the jar in that folder.&lt;/p&gt;
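
&lt;p&gt;A deploy command might look like the following sketch; the function name, entry point, and region are placeholders, not values from the project:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud functions deploy my-function \
    --entry-point=com.example.MyHttpFunction \
    --runtime=java11 \
    --trigger-http \
    --region=us-east1 \
    --source=build/lib
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;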

&lt;p&gt;Google’s library for Functions provides a nice way to test the Function before deploying it.  The command &lt;code&gt;gradle runFunction -Prun.functionTarget=&amp;lt;package&amp;gt;.&amp;lt;class name&amp;gt;&lt;/code&gt; starts a local server on port 8081.  With this local server, you can run Postman commands or view the results in a browser.  This is great for verifying logic and connection information in your application.  You will need to set the GOOGLE_APPLICATION_CREDENTIALS environment variable before running this.  Its value is the local file location of the downloaded service account JSON key.&lt;/p&gt;
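
&lt;p&gt;Putting that together, a local test run might look like this (the package and class name are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account-key.json
gradle runFunction -Prun.functionTarget=com.example.MyHttpFunction
# in another terminal, exercise the local endpoint
curl http://localhost:8081
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;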

&lt;p&gt;A couple of Functions and a common library later, things seemed to be going well.  Then the application started having issues.  The Functions failed with a SQLTransientConnectionException caused by a &lt;code&gt;connection not available&lt;/code&gt; error.  Looking at the database, the connection count was high.  Connection management seemed to be the problem.  In hindsight, if I had run the Function request on my local machine multiple times, I would have found the issue.  The Function needed to release the connection before exiting.  After releasing the connection, the Functions worked better.&lt;/p&gt;

&lt;p&gt;The only other glitch was 'warming up' the Function.  If the endpoint is idle too long, the Function has to initialize, taking longer to respond.  The console has a "Minimum instances" field. The &lt;code&gt;gcloud functions deploy&lt;/code&gt; command does not have a way to configure the minimum instances. &lt;/p&gt;

&lt;p&gt;In my opinion, this was a successful experiment.  The Functions were able to act as simple REST services.  Setting up the Functions was simple enough. As they say in academia, further research is needed.&lt;br&gt;
You should do your own &lt;a href="https://cloud.google.com/functions/docs/quickstarts"&gt;research&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/leading-edje"&gt;&lt;br&gt;
  &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SfUhPiEd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/5uo60qforg9yqdpgzncq.png" alt="Smart EDJE Image"&gt;&lt;br&gt;
&lt;/a&gt;&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>cloudnative</category>
    </item>
    <item>
      <title>A Window into Docker, minikube, and containerd</title>
      <dc:creator>wmchurchill3</dc:creator>
      <pubDate>Tue, 28 Sep 2021 15:23:03 +0000</pubDate>
      <link>https://dev.to/leading-edje/a-window-into-docker-minikube-and-containerd-16bi</link>
      <guid>https://dev.to/leading-edje/a-window-into-docker-minikube-and-containerd-16bi</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0xiomsbe1pg2lis88o9n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0xiomsbe1pg2lis88o9n.png" alt="Container Runtime Logos"&gt;&lt;/a&gt;&lt;br&gt;
Like many of you, I received an email from Docker notifying me of their changes to service.  Having used Docker Desktop for many years as part of my work, I was a little concerned.  My concern was not great enough to do anything... until a co-worker shared an article about switching from Docker for Windows to containerd.  &lt;a href="https://docs.microsoft.com/en-us/virtualization/windowscontainers/deploy-containers/containerd" rel="noopener noreferrer"&gt;This link&lt;/a&gt; from 2018 seemed to suggest containerd could run on Windows.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Spoiler Alert/TL;DR:&lt;/strong&gt; This is not a post about getting containerd running on Windows.  I was able to get a Windows nanoserver image running in containerd, but I could not get that image to connect to any network.  This post is a survey of the source code, GitHub issues, and dead links chased.  All documented to show how close, and yet how far, we are from something useful.&lt;/p&gt;

&lt;h3&gt;
  
  
  Where does minikube fit in here?
&lt;/h3&gt;

&lt;p&gt;In my research and frustration, I wanted to try running something else.  I enabled Hyper-V on my machine and followed the instructions at the &lt;a href="https://minikube.sigs.k8s.io/docs/start/" rel="noopener noreferrer"&gt;minikube quickstart&lt;/a&gt;.  Things worked!  Thank you to the maintainers of minikube!  Great job!  I definitely will be using this more in the future.&lt;/p&gt;

&lt;p&gt;The only place I deviated was in starting the minikube cluster.  I used the command &lt;code&gt;minikube start --driver=hyperv --container-runtime=containerd&lt;/code&gt;.  For fun, I checked the Hyper-V Manager and saw a new virtual machine named 'minikube'.  Then it hit me.  A Linux VM hosts the minikube cluster complete with its own version of containerd.  This means I could not run a Windows image! &lt;/p&gt;

&lt;h3&gt;
  
  
  The Journey Begins
&lt;/h3&gt;

&lt;p&gt;The first stop was the &lt;a href="https://docs.microsoft.com/en-us/virtualization/windowscontainers/deploy-containers/containerd" rel="noopener noreferrer"&gt;Container Platform Tools on Windows&lt;/a&gt;.  This is where the dead links begin (see the links to the CRI spec).  My second stop was the &lt;a href="https://containerd.io" rel="noopener noreferrer"&gt;containerd site&lt;/a&gt;.  I downloaded and installed the requirements and the release tarball.  When the compiling started, I ran into an issue with make looking for gcc.  This seemed odd since 1) it is a Go application, and 2) having gcc on Windows seems like a high bar for running containers.&lt;/p&gt;

&lt;p&gt;Some more Googling brought me to &lt;a href="https://www.jamessturtevant.com/posts/Windows-Containers-on-Windows-10-without-Docker-using-Containerd/" rel="noopener noreferrer"&gt;James Sturtevant's&lt;/a&gt; site.  This made me aware pre-built Windows containerd binaries exist.  Now I was making some progress.  &lt;/p&gt;

&lt;p&gt;The following code snippet will download and configure containerd as a service.  Each line does the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Download the latest (as of 20210924) release of containerd&lt;/li&gt;
&lt;li&gt;Make a directory for the containerd binaries and configs&lt;/li&gt;
&lt;li&gt;Expand the containerd tarball&lt;/li&gt;
&lt;li&gt;Move the binaries to the directory created above&lt;/li&gt;
&lt;li&gt;Add containerd to the Path environment variable&lt;/li&gt;
&lt;li&gt;Create a default containerd configuration in the containerd directory&lt;/li&gt;
&lt;li&gt;Tell Windows Defender not to worry about the containerd executable&lt;/li&gt;
&lt;li&gt;Register containerd as a service&lt;/li&gt;
&lt;li&gt;Start containerd&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In an Admin PowerShell window,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

curl.exe -LO https://github.com/containerd/containerd/releases/download/v1.5.5/containerd-1.5.5-windows-amd64.tar.gz
mkdir "C:\Program Files\containerd"
tar -xzf containerd-1.5.5-windows-amd64.tar.gz
mv .\bin\* "C:\Program Files\containerd"
$env:Path = $env:Path + ';C:\Program Files\containerd'
containerd.exe config default | Set-Content "C:\Program Files\containerd\config.toml" -Force
Add-MpPreference -ExclusionProcess "$Env:ProgramFiles\containerd\containerd.exe"
containerd.exe --register-service
Start-Service containerd


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;To verify containerd is running:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open the Task Manager&lt;/li&gt;
&lt;li&gt;Go into the &lt;code&gt;More Details&lt;/code&gt; view&lt;/li&gt;
&lt;li&gt;Scroll to &lt;code&gt;Background Processes&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;You should see a &lt;code&gt;containerd.exe&lt;/code&gt; process
&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuf5howvhopubabei7rm9.png" alt="Task Manager Process Listing"&gt;
&lt;/li&gt;
&lt;/ol&gt;
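
&lt;p&gt;Alternatively, a quick check from the same PowerShell window; the output should look roughly like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Get-Service containerd

Status   Name               DisplayName
------   ----               -----------
Running  containerd         containerd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;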

&lt;h3&gt;
  
  
  Running a Container
&lt;/h3&gt;

&lt;p&gt;Under ideal circumstances, we would pull an image using the &lt;code&gt;ctr&lt;/code&gt; command. &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

.\ctr.exe pull mcr.microsoft.com/windows/nanoserver:10.0.19042.1165-amd64
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Unfortunately, there is some authentication around the Microsoft images.  Assuming you have the image downloaded using Docker, we can:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Save the image&lt;/li&gt;
&lt;li&gt;Import the image using ctr&lt;/li&gt;
&lt;li&gt;Run the image&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;From the Admin PowerShell window,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker save mcr.microsoft.com/windows/nanoserver:10.0.19042.1165-amd64 -o nanoserver.tar
.\ctr.exe image import --all-platforms c:\wherever\you\put\this\nanoserver.tar
.\ctr.exe run --rm mcr.microsoft.com/windows/nanoserver:10.0.19042.1165-amd64 test cmd /c echo hello
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If you see &lt;code&gt;hello&lt;/code&gt; on the next line immediately after the command, success!&lt;/p&gt;

&lt;p&gt;That's it, right?&lt;br&gt;
&lt;img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xy3weei3rrs7spw6j4v6.gif" alt="Lee Corso, Not So Fast Gif"&gt;&lt;/p&gt;

&lt;p&gt;We have a container running a Windows image, but no network.&lt;/p&gt;

&lt;h3&gt;
  
  
  Creating A Network for the containers
&lt;/h3&gt;

&lt;p&gt;We need extra setup for networking our pods.  CNI (Container Networking Interface) will provide NAT'ing for our dev environment.  We also need a helper script to set up the network.  The steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Get the CNI tools executables&lt;/li&gt;
&lt;li&gt;Get the helper script hns.psm1&lt;/li&gt;
&lt;li&gt;Create some directories&lt;/li&gt;
&lt;li&gt;Expand the CNI tools into the created directories&lt;/li&gt;
&lt;li&gt;Allow your machine to execute scripts&lt;/li&gt;
&lt;li&gt;Unblock the helper script, hns.psm1&lt;/li&gt;
&lt;li&gt;Import hns.psm1 for use.  Disregard the warning about verbs; it is only a naming convention.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;From the PowerShell window,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl.exe -LO https://github.com/microsoft/windows-container-networking/releases/download/v.0.2.0/windows-container-networking-cni-amd64-v0.2.0.zip
curl.exe -LO https://raw.githubusercontent.com/microsoft/SDN/master/Kubernetes/windows/hns.psm1
mkdir -force "C:\Program Files\containerd\cni\bin"
mkdir -force "C:\Program Files\containerd\cni\conf"
Expand-Archive windows-container-networking-cni-amd64-v0.2.0.zip -DestinationPath "C:\Program Files\containerd\cni\bin" -Force
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope LocalMachine
Unblock-File -Path .\hns.psm1
ipmo .\hns.psm1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now to configure the network.  From the Admin PowerShell window,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$subnet="10.0.0.0/16"
$gateway="10.0.0.1"
New-HNSNetwork -Type Nat -AddressPrefix $subnet -Gateway $gateway -Name "nat"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In this case, the name must be &lt;code&gt;nat&lt;/code&gt;.&lt;br&gt;
Let's check our work.  From the PowerShell window:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;netsh lan show profiles
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You should see the new 'nat' network.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Profile on interface vEthernet (nat)
=======================================================================
Applied: User Profile

    Profile Version        : 1
    Type                   : Wired LAN
    AutoConfig Version     : 1
    802.1x                 : Enabled
    802.1x                 : Not Enforced
    EAP type               : Microsoft: Protected EAP (PEAP)
    802.1X auth credential : [Profile credential not valid]
    Cache user information : [Yes]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If you get an error about dot3svc not running, run &lt;code&gt;net start dot3svc&lt;/code&gt; and run the &lt;code&gt;netsh&lt;/code&gt; command again.&lt;/p&gt;

&lt;p&gt;Configure containerd to use that network.  From the Admin PowerShell window,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@"
{
    "cniVersion": "0.2.0",
    "name": "nat",
    "type": "nat",
    "master": "Ethernet",
    "ipam": {
        "subnet": "$subnet",
        "routes": [
            {
                "gateway": "$gateway"
            }
        ]
    },
    "capabilities": {
        "portMappings": true,
        "dns": true
    }
}
"@ | Set-Content "C:\Program Files\containerd\cni\conf\0-containerd-nat.conf" -Force
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  Container Runtime Interface (CRI)
&lt;/h3&gt;

&lt;p&gt;We are in the endgame now.  I promise.  From the &lt;a href="https://github.com/kubernetes-sigs/cri-tools/blob/master/docs/crictl.md" rel="noopener noreferrer"&gt;README&lt;/a&gt;, crictl provides a CLI for CRI-compatible container runtimes.&lt;br&gt;
The following snippet performs the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Downloads the crictl executable&lt;/li&gt;
&lt;li&gt;Creates the default location for crictl to look for a configuration&lt;/li&gt;
&lt;li&gt;Creates the configuration&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;From a PowerShell window,&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl.exe -LO https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.20.0/crictl-v1.20.0-windows-amd64.tar.gz
tar -xvf crictl-v1.20.0-windows-amd64.tar.gz
mkdir $HOME\.crictl
@"
runtime-endpoint: npipe://./pipe/containerd-containerd
image-endpoint: npipe://./pipe/containerd-containerd
timeout: 10
#debug: true
"@ | Set-Content "$HOME\.crictl\crictl.yaml" -Force
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  The Payoff
&lt;/h3&gt;

&lt;p&gt;Using a pod.json of&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "metadata": {
        "name": "nanoserver-sandbox",
        "namespace": "default",
        "uid": "hdishd83djaidwnduwk28bcsb"
    },
    "logDirectory": "/tmp",
    "linux": {}
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;the magic happens with these commands:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$POD_ID=(./crictl runp .\pod.json)
$CONTAINER_ID=(./crictl create $POD_ID .\container.json .\pod.json)
./crictl start $CONTAINER_ID
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  The Problem
&lt;/h3&gt;

&lt;p&gt;Running &lt;code&gt;.\crictl runp .\pod.json&lt;/code&gt; creates a sandbox pod for use in creating a container with the next command.  The runp command fails while setting up the network adapter for the pod.  The output is as follows:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;time="2021-09-22T09:25:29-04:00" level=debug msg="get runtime connection"
time="2021-09-22T09:25:29-04:00" level=debug msg="connect using endpoint 'npipe://./pipe/containerd-containerd' with '10s' timeout"
time="2021-09-22T09:25:29-04:00" level=debug msg="connected successfully using endpoint: npipe://./pipe/containerd-containerd"
time="2021-09-22T09:25:29-04:00" level=debug msg="RunPodSandboxRequest: &amp;amp;RunPodSandboxRequest{Config:&amp;amp;PodSandboxConfig{Metadata:&amp;amp;PodSandboxMetadata{Name:nanoserver-sandbox,Uid:hdishd83djaidwnduwk28bcsb,Namespace:default,Attempt:0,},Hostname:,LogDirectory:,DnsConfig:nil,PortMappings:[]*PortMapping{},Labels:map[string]string{},Annotations:map[string]string{},Linux:&amp;amp;LinuxPodSandboxConfig{CgroupParent:,SecurityContext:nil,Sysctls:map[string]string{},},},RuntimeHandler:,}"
time="2021-09-22T09:25:29-04:00" level=debug msg="RunPodSandboxResponse: nil"
time="2021-09-22T09:25:29-04:00" level=fatal msg="run pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox \"e4cc6fc22dbdf8ccde0035239873cb9f31b074fca4650acc545a8af5a51d814c\": error creating endpoint hcnCreateEndpoint failed in Win32: IP address is either invalid or not part of any configured subnet(s). (0x803b001e) {\"Success\":false,\"Error\":\"IP address is either invalid or not part of any configured subnet(s). \",\"ErrorCode\":2151350302} : endpoint config &amp;amp;{ e4cc6fc22dbdf8ccde0035239873cb9f31b074fca4650acc545a8af5a51d814c_nat 11d59574-13be-4a14-b3e8-11cc0d5a7805  [] [{ 0}] { [] [] []} [{10.0.0.1 0.0.0.0/0 0}]  0 {2 0}}"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;There is a &lt;a href="https://github.com/containerd/containerd/issues/4851" rel="noopener noreferrer"&gt;GitHub Issue&lt;/a&gt; that hints at a problem with the pod network workflow on Windows.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;There is a good possibility this issue will remain for a while; it has been around for the better part of a year.  If one is running Linux containers, there is a great substitute in &lt;a href="https://minikube.sigs.k8s.io/docs/" rel="noopener noreferrer"&gt;minikube&lt;/a&gt;.  It is easy to set up, well documented, maintained, and simulates a production environment.  It appears Windows images will still need to run on Docker.  Please leave a comment below if you are able to find a workaround.&lt;/p&gt;

&lt;h3&gt;
  
  
  Relevant Links
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://github.com/containerd/containerd/issues/4851" rel="noopener noreferrer"&gt;GitHub Issue: Windows CNI plugin has no chance to create and configure container VNIC&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.jamessturtevant.com/posts/Windows-Containers-on-Windows-10-without-Docker-using-Containerd/" rel="noopener noreferrer"&gt;James Sturtevant's Windows Containers on Windows 10 without Docker using Containerd&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_execution_policies?view=powershell-7.1" rel="noopener noreferrer"&gt;PowerShell Execution Policies&lt;/a&gt;&lt;br&gt;
&lt;a href="https://minikube.sigs.k8s.io/docs/" rel="noopener noreferrer"&gt;minikube&lt;/a&gt;&lt;br&gt;
&lt;a href="https://github.com/containerd/containerd/blob/main/docs/cri/crictl.md" rel="noopener noreferrer"&gt;crictl README has pod.json samples&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/leading-edje"&gt;&lt;br&gt;
  &lt;img src="https://dev-to-uploads.s3.amazonaws.com/i/5uo60qforg9yqdpgzncq.png" alt="Smart EDJE Image"&gt;&lt;br&gt;
&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
    </item>
    <item>
      <title>DevOps is not just for Developers and Operations</title>
      <dc:creator>wmchurchill3</dc:creator>
      <pubDate>Fri, 03 Sep 2021 17:19:52 +0000</pubDate>
      <link>https://dev.to/leading-edje/devops-is-not-just-for-developers-and-operations-58k3</link>
      <guid>https://dev.to/leading-edje/devops-is-not-just-for-developers-and-operations-58k3</guid>
      <description>&lt;p&gt;Most people look at the word DevOps and think it applies to Developers and Operations. The reality is a little more complicated. DevOps is an integral part of Agile software development.  Defined software artifacts and fast pipelines are tools in an iterative, agile process.  Can other Agile roles contribute in a DevOps environment?  Do the other roles get any benefit out of DevOps?&lt;/p&gt;

&lt;p&gt;QA or Quality Engineering is the easiest role to see fitting into DevOps.  We even have a special flavor of DevOps with QA, DevQAOps.  It does not roll off the tongue as well.  QA designs the testing to confirm the deliverables conform to requirements.  It is vital to have this closed feedback loop to ensure the project stays on track.  Smoke tests can be subsets of the UI or integration tests.  Using these tests can help ensure the application runs across different environments.  This prevents costly rollbacks in production.  A small number of load tests can verify the application will run as expected under higher loads.&lt;/p&gt;

&lt;p&gt;Business Analysts are not the first role to come to mind when one thinks of DevOps.  In spite of this, they still play an important part.  The business stories provide information for developing the feature and testing strategies.  These stories can also determine resources allocated in the production environment.  Business Analysts also help by chasing down answers to questions.  When one considers developer context switching, this becomes more important to project velocity.  The developer continues to code while the analyst finds the answer.&lt;/p&gt;

&lt;p&gt;Project Managers have a role to play in DevOps too.  As managers, they can help create a more DevOps-friendly environment.  Automation represents a large upfront cost with undefined savings throughout the project.  Prioritizing automation work moves the DevOps needle in the right direction.  Encouraging smaller units of work assists with quicker turnaround of feature delivery.  Monitoring provides valuable insight into the health and operation of the application.  Holding the team accountable for monitoring pays off with reduced downtime.  Additionally, new insights can drive new business features.&lt;/p&gt;

&lt;p&gt;So what do these roles get out of DevOps? Having a working pipeline provides focus for the team.  Artifact versioning enables testing anytime.  Having artifacts available removes the need for developers to create them.  This allows more working time during the sprint to keep the new features flowing.  Automated deployment ensures one knows exactly what is running in any environment.   Good visualization of the pipeline provides immediate feedback.  The health of the codebase is no longer a mystery.  The quick feedback keeps the problems smaller.  Automated testing provides higher confidence in the code. &lt;/p&gt;

&lt;p&gt;This is by no means an exhaustive list.  These examples should illustrate how a good DevOps plan helps everyone.  All roles have an important part to play in the Software Development Lifecycle.  After all, software development is a team sport.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/leading-edje"&gt;&lt;br&gt;
  &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SfUhPiEd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/5uo60qforg9yqdpgzncq.png" alt="Smart EDJE Image"&gt;&lt;br&gt;
&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>agile</category>
    </item>
    <item>
      <title>How do you know you are doing DevOps right?</title>
      <dc:creator>wmchurchill3</dc:creator>
      <pubDate>Mon, 12 Apr 2021 12:55:48 +0000</pubDate>
      <link>https://dev.to/leading-edje/how-do-you-know-you-are-doing-devops-right-37pk</link>
      <guid>https://dev.to/leading-edje/how-do-you-know-you-are-doing-devops-right-37pk</guid>
<description>&lt;p&gt;In refining a DevOps process, it can be difficult to determine if a change had a positive effect.  Continuously measuring and analyzing metrics can show whether a change is moving the system in the right direction.  Before you go off and start counting lines-of-code or hours spent, let's talk about meaningful metrics.&lt;/p&gt;

&lt;p&gt;Back in 2019, Google and a few other heavy hitters put together a study detailing the measurements high-performing organizations share.  The results are available in the &lt;a href="https://services.google.com/fh/files/misc/state-of-devops-2019.pdf"&gt;State of DevOps&lt;/a&gt;.  It is worth the read, but if you need a good summary of the metrics, my co-worker has put one together &lt;a href="https://dev.to/leading-edje/measuring-your-team-s-software-delivery-performance-4795"&gt;here at Dev.to&lt;/a&gt;.  While the metrics themselves are interesting, the reasons for them are just as intriguing.  The four metrics are as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;&lt;strong&gt;Lead Time for Change&lt;/strong&gt;&lt;/em&gt; : throughput of the software delivery process from check-in to release&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;&lt;strong&gt;Deployment Frequency&lt;/strong&gt;&lt;/em&gt; : how often code is released to production&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;&lt;strong&gt;Time to Restore Service&lt;/strong&gt;&lt;/em&gt; : time to restore service when a service incident or a user impacting defect is detected&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;&lt;strong&gt;Change Failure Rate&lt;/strong&gt;&lt;/em&gt; : percentage of releases that result in degraded user experience that requires remediation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These measurements work together to provide four legs holding up a stable platform that is your application.  Trying to optimize one metric at the expense of the others will have negative results.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Lead Time for Change&lt;/em&gt; seems to be the one most organizations want to improve the most.  It sounds good on paper: if we get more changes in, we are more responsive to change, and the application gets better.  The problem arises when you push for faster changes and shortcuts are taken.  Tests are not written, code is not extensible, and more technical debt is accrued.  On some occasions, a new change can cause an outage due to a missed edge condition.  &lt;/p&gt;

&lt;p&gt;&lt;em&gt;Change Failure Rate&lt;/em&gt; and &lt;em&gt;Time to Restore Service&lt;/em&gt; temper &lt;em&gt;Lead Time for Change&lt;/em&gt;.  By keeping the &lt;em&gt;Change Failure Rate&lt;/em&gt; low or constant as you reduce your &lt;em&gt;Lead Time For Change&lt;/em&gt;, you can maintain a good quality of code and enjoy a dynamic application.  A good &lt;em&gt;Time to Restore Service&lt;/em&gt; measurement provides a safety net for making those changes.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Deployment Frequency&lt;/em&gt; is another metric that seems easy to improve.  There are always changes in the pipeline; just push them out as they are done and the frequency increases.  Again, in our push for a better number, substandard code can be deployed as checks and balances are compromised for a single, better score.  &lt;/p&gt;

&lt;p&gt;Having a good &lt;em&gt;Time to Restore Service&lt;/em&gt; number can mitigate having the site down for a while, but the user experience is still degraded through unplanned downtime and a lack of new features.  Additionally, these rollbacks will increase the &lt;em&gt;Change Failure Rate&lt;/em&gt;.  A judicious increase in &lt;em&gt;Deployment Frequency&lt;/em&gt; can complement a reduction in &lt;em&gt;Lead Time for Change&lt;/em&gt; as now there is a shorter window to introduce changes.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Time to Restore Service&lt;/em&gt; seems like it could stand on its own.  After all, a robust infrastructure and a good rollback strategy can mitigate almost anything that can disrupt service.  Highly available, redundant clusters can allow an application to survive the destruction of the primary host datacenter.  Operations could create that dream infrastructure, put in strong change controls and procedures, and the &lt;em&gt;Time to Restore Service&lt;/em&gt; number is now sub-second.  &lt;/p&gt;

&lt;p&gt;This is a DevOps article, and Development and Operations work together.  Overly burdensome change controls can be hostile to development efforts and negatively affect &lt;em&gt;Lead Time For Change&lt;/em&gt; and &lt;em&gt;Deployment Frequency&lt;/em&gt;.  We have already discussed how this measurement can rein in overzealous increases in &lt;em&gt;Deployment Frequency&lt;/em&gt; and decreases in &lt;em&gt;Lead Time to Change&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Finally, a good &lt;em&gt;Change Failure Rate&lt;/em&gt; is a strong indicator of high quality code.  While not as attractive to the bottom line as the other indicators, it is something Development could drive on their own.  Introduce some automated testing, static code analysis, code linting, and documentation, and you have some great code, right?  These are all good, but a single-minded approach to test coverage can lead to regression test runs that last hours.  Couple this with a requirement to complete a regression for every merge, and the &lt;em&gt;Lead Time For Change&lt;/em&gt; number is hurt, which also affects the &lt;em&gt;Deployment Frequency&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;When measuring the performance of your DevOps process, it is easy to be overwhelmed by all the possible measurements.  The four measurements reviewed here are a great place to start or even finish, but for best results, it is important to use all these measurements together.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/leading-edje"&gt;&lt;br&gt;
  &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SfUhPiEd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/5uo60qforg9yqdpgzncq.png" alt="Smart EDJE Image"&gt;&lt;br&gt;
&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>agile</category>
    </item>
    <item>
      <title>You Are Already Doing DevOps</title>
      <dc:creator>wmchurchill3</dc:creator>
      <pubDate>Thu, 25 Feb 2021 13:47:27 +0000</pubDate>
      <link>https://dev.to/leading-edje/you-are-already-doing-devops-1f4o</link>
      <guid>https://dev.to/leading-edje/you-are-already-doing-devops-1f4o</guid>
<description>&lt;p&gt;If you have an application deployed to a server, you are already doing DevOps.  Stay with me on this.  First, we need a level set: what is DevOps?  There are multiple definitions (&lt;a href="https://aws.amazon.com/devops/what-is-devops/" rel="noopener noreferrer"&gt;AWS-What is DevOps?&lt;/a&gt;, &lt;a href="https://www.atlassian.com/devops" rel="noopener noreferrer"&gt;Atlassian-DevOps&lt;/a&gt;, &lt;a href="https://theagileadmin.com/what-is-devops/" rel="noopener noreferrer"&gt;Good, but Long read&lt;/a&gt;) and multiple flavors (DevSecOps, DevQAOps, DevSecQAOps,…), but if we boil the definition down, DevOps is about the coordination and communication between teams to develop and deliver a product to consumers.&lt;/p&gt;

&lt;p&gt;Back to my original statement: you are already doing DevOps.  You must have a process in place to get a developed application onto a customer-facing, production server.  This is true in a one-person code shop or a Fortune 20 IT department.  The differences could be the number of people involved, regulations followed for compliance, number of buttons pushed,... you get the idea.  There is still a process that is followed and communicated to the departments involved, and it always includes Development and Operations.  &lt;/p&gt;

&lt;p&gt;Most organizations have a development group responsible for maintaining and developing an application, and an operations group maintaining the infrastructure the application runs on.  The development and deployment process usually goes like this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Development has a new version of the application they want to deploy that adds new features and corrects bugs.&lt;/li&gt;
&lt;li&gt;Development creates a new artifact containing the release candidate of the application&lt;/li&gt;
&lt;li&gt;Validation of the release candidate&lt;/li&gt;
&lt;li&gt;Sign-off of the release candidate&lt;/li&gt;
&lt;li&gt;Hand off of the release candidate to Operations&lt;/li&gt;
&lt;li&gt;Operations deploys release candidate to production&lt;/li&gt;
&lt;li&gt;Production validation&lt;/li&gt;
&lt;li&gt;Development makes the release candidate the current production version and cuts a new dev version.&lt;/li&gt;
&lt;li&gt;Rinse and repeat.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I will say that even if all these steps are manual, you are practicing a form of DevOps.  Is there room for improving this form of DevOps?  Definitely, but we all have to start our journey somewhere.  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Journey&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;DevOps is not a tool you can purchase.  It is not a one-time company initiative that fixes all the organizational ills.  It is an ongoing process and organizational mindset.  It sounds intimidating, but it should not be if we break it up into small steps.  We can start by automating something in each step.  It could look like:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Development automatically compiles a list of features and bugs corrected from the work items tracked during the development cycle.  This becomes the release notes.&lt;/li&gt;
&lt;li&gt;Development automates the build process to create the artifact.  This would include version information so deploying the correct version becomes easier.&lt;/li&gt;
&lt;li&gt;QA and Security can automate some or all of their application validations.  Reports are generated from these validations and are made available to all to learn how to make the application better.&lt;/li&gt;
&lt;li&gt;The responsible parties could be automatically emailed to notify them a release candidate is ready for sign off.&lt;/li&gt;
&lt;li&gt;Operations can stand up an artifact repository like Nexus to provide a single, safe place to store release candidates. &lt;/li&gt;
&lt;li&gt;Deployments can be scripted to ensure consistency and prevent the introduction of errors.&lt;/li&gt;
&lt;li&gt;Validations from step 3 can be borrowed to run smoke tests on the production deployment to further ensure success.&lt;/li&gt;
&lt;li&gt;Version increment scripts and plugins can be used to ensure the application version is managed correctly.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;With the exception of step 7, all of these tasks can happen independently of each other.  All of this helps move the needle to a more automated DevOps experience.  If only steps 2 and 5 are implemented, a CI/CD pipeline in the form of GitLab, GitHub Actions, Jenkins, etc. can be introduced to orchestrate the release process; a minimal sketch follows.  Each step can benefit from further automation to decrease turnaround times and increase feature deployments.  &lt;/p&gt;
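
&lt;p&gt;As an illustration only, here is a minimal sketch of steps 2 and 5 as a GitHub Actions workflow; the build command and artifact path are placeholders that will vary by project:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: build
on: [push]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # Step 2: automate the build that creates the artifact
      - run: ./gradlew build
      # Step 5: store the release candidate in a single, safe place
      - uses: actions/upload-artifact@v2
        with:
          name: release-candidate
          path: build/libs/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;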

&lt;p&gt;I have only outlined a generic example of a DevOps journey.  I hope I have shown you that not only are you practicing DevOps, but you are not that far from having a more automated DevOps process.  What kind of DevOps are you practicing?  What can you do to improve it?  Please share and continue the conversation in the comments.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/leading-edje"&gt;&lt;br&gt;
  &lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F5uo60qforg9yqdpgzncq.png" alt="Smart EDJE Image"&gt;&lt;br&gt;
&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
    </item>
    <item>
      <title>The Perils of Feature Driven IT</title>
      <dc:creator>wmchurchill3</dc:creator>
      <pubDate>Mon, 14 Sep 2020 15:11:40 +0000</pubDate>
      <link>https://dev.to/wmchurchill3/the-perils-of-feature-driven-it-5h69</link>
      <guid>https://dev.to/wmchurchill3/the-perils-of-feature-driven-it-5h69</guid>
<description>&lt;p&gt;Many IT organizations emphasize the need for new features in their product portfolio, often at the expense of stability, security, and in many cases sanity. The old adage states there are three qualities of software projects: cheap, fast, and good. Pick two. These feature driven shops, in an effort to squeeze out as much functionality as they can, pick cheap and fast. While this looks good in the short term, it is not sustainable. This post will outline strategies and provide reasons for adopting a less aggressive feature schedule.&lt;/p&gt;

&lt;p&gt;In their haste to get it out the door, many corners are cut. There is no automated testing. Automated tests require code that does not provide new features, so they are deemed a waste of time. In reality, these automated tests are a cost savings. Not only do these tests help verify the desired functionality, they also build up a battery of tests to fall back upon to prevent regressions from being introduced when new features are added. They also facilitate safe refactors, which are always necessary as a project ages (more on that below...). These automated tests, coupled with an ever-growing catalog of good regression tests, help focus any manual QA and UAT. These groups no longer have to run laborious, error-prone, manual regression scripts to validate the application. This potentially reduces the amount of QA time per release cycle, thus reducing costs and freeing up resources for other projects and features. Additionally, because the application is self-testing, there is less risk of the changes producing a production defect or coming back from QA. Both of these significantly reduce costs in the development cycle and improve adherence to the project timeline. The overall result from just introducing automated testing is higher quality code at a reduced cost.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--UEHpZSRZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/5i2isuc1y9hah5qjtkk4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--UEHpZSRZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/i/5i2isuc1y9hah5qjtkk4.png" alt="Automated Test Returns"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://nist.gov/system/files/documents/director/planning/report02-3.pdf"&gt;Report Referenced&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Another tactic in cutting corners is hardcoding behaviors or values into the application. There are many intentions of doing it right later, but later seldom arrives. This results in an extremely brittle application that often cannot survive environment promotion (ie. QA to production) and/or is difficult to extend. This causes a lot of lost time troubleshooting something that should be as simple as a configuration change. This lost time is time that could be used for developing more features. Again, this is lost cost savings.&lt;/p&gt;

&lt;p&gt;Adding features to a brittle application becomes increasingly problematic as time passes. One of the quickest ways to get something working is to copy-and-paste an existing solution (because it works) and shoehorn it to resemble the new feature. The problem is that while this does allow for similar changes to be made, it makes it almost impossible to implement a large, new feature that deviates from normal business processes. Innovation is stifled, resulting in lost revenue. In today's world, an actively used application is constantly changing to fit the user base. If the application cannot change, it can become harder to use, and its usage declines. This is where refactoring can help. A safe refactor can keep code clean and efficient by removing unused (and potentially dangerous) chunks of code. These refactors can also allow for larger, faster changes as there is less legacy code to navigate. We see similar things in nature. A controlled fire can clean out a section of forest to allow it to come back healthier. In many instances, when a code base cannot be made to accommodate a new feature request, it has to be replaced either with an off-the-shelf offering requiring customization or a purely custom solution replacing the entire application. Either solution is expensive and time consuming.&lt;/p&gt;

&lt;p&gt;Infrastructure to support the creation of software is often skipped in the name of new features. If software is used to automate business processes to improve performance and efficiencies, shouldn't we automate the creation and deployment of software as much as we can? The scenario described above where an application is promoted from a QA environment to production should be as simple as a push of a button. Many of us remember staying up late to deploy sites and having to follow lengthy scripts to ensure the site would come back up with the new changes. In spite of our best efforts, there were mistakes made and things overlooked. Automation significantly reduces the opportunities for these oversights providing a much smoother deployment. Many organizations with a significant on-line presence deploy changes to their sites multiple times a day. Some even test in production. How are they able to do this? By having a robust infrastructure to build, test, and deploy their software with built-in monitoring and redundancies to improve resiliency. While this is not free, it does provide more up-time and more rapid feature implementation.&lt;/p&gt;

&lt;p&gt;New features can definitely enhance a product, but not at the expense of stability, maintainability, and extensibility. A small investment up front in things like automated testing, CI/CD, and good coding practices can yield a big pay off down the line. This pay off can be fewer unplanned outages, more time for planned work, and faster development cycles.&lt;/p&gt;

</description>
      <category>productivity</category>
      <category>devops</category>
      <category>testing</category>
    </item>
    <item>
      <title>Some Musings about Embedded Application Development</title>
      <dc:creator>wmchurchill3</dc:creator>
      <pubDate>Wed, 22 Apr 2020 14:13:51 +0000</pubDate>
      <link>https://dev.to/leading-edje/some-musings-about-embedded-application-development-15l7</link>
      <guid>https://dev.to/leading-edje/some-musings-about-embedded-application-development-15l7</guid>
      <description>&lt;p&gt;&lt;strong&gt;Originally Published: 2015 July 12&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;With the ubiquity of ARM processors and *nix distributions running on them, embedded application development more closely resembles desktop or server application development. No longer is it mandated that an application be its own operating system. Anyone who has done application development in a *nix environment can develop for an embedded Linux appliance. However, there are some significant differences that change the approach to development in these environments. Right after application design, hardware constraints dominate a developer's thoughts.&lt;/p&gt;

&lt;p&gt;An embedded device has a limited number of processes running at any one time, and each must be a good team player. No single application can monopolize any resource on the device. Using a smart phone as an example, if an application takes up all the memory or processing cycles, the other applications will cease to perform in a responsive manner. You may not even be able to make calls or text until a reboot clears the problem.&lt;/p&gt;

&lt;p&gt;Memory is a major limitation. In a PC environment, one typically has ample memory. Even when this is not the case, you still have the option of upgrading the memory. While it may be possible to run a memory-managed VM like Java on your device, your performance can vary greatly. Typically, the choice of languages for a new application will be closer to the metal (C or C++). This allows finer-grained control over memory allocation. Embedded applications usually grab any memory they will need at startup. This prevents out-of-memory exceptions or application halts for garbage collection (if a managed environment is used). These types of errors can hide during development and QA but rear their ugly heads in the field.&lt;/p&gt;

&lt;p&gt;While SD cards are growing in capacity, the root disk space is still very limited. Often the SD card is not used, as EPROM may be preferred. Small, tight libraries are extremely important in this type of development. The ubiquity of BusyBox on embedded devices illustrates this. Even within the application code, keeping things small and simple is important. Many systems load the entire root filesystem into memory. Another side benefit of using small libraries is the inherently small memory footprint when the application is loaded.&lt;/p&gt;

&lt;p&gt;This is by no means an all inclusive list. Hopefully it will assist any developers looking to make the leap to embedded development. As phones become more sophisticated and wearables become more common, now is a good time to look at embedded application development for fun and/or profit.&lt;/p&gt;

</description>
      <category>embedded</category>
      <category>development</category>
    </item>
  </channel>
</rss>
