<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: A B Vijay Kumar</title>
    <description>The latest articles on DEV Community by A B Vijay Kumar (@abvijaykumar).</description>
    <link>https://dev.to/abvijaykumar</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F316212%2F8e96a9e9-36bf-4979-a9e3-905f27f52ed3.jpeg</url>
      <title>DEV Community: A B Vijay Kumar</title>
      <link>https://dev.to/abvijaykumar</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/abvijaykumar"/>
    <language>en</language>
    <item>
      <title>Supercharge Your Applications with GraalVM — Book</title>
      <dc:creator>A B Vijay Kumar</dc:creator>
      <pubDate>Sun, 06 Feb 2022 14:30:03 +0000</pubDate>
      <link>https://dev.to/abvijaykumar/supercharge-your-applications-with-graalvm-book-5g90</link>
      <guid>https://dev.to/abvijaykumar/supercharge-your-applications-with-graalvm-book-5g90</guid>
      <description>&lt;h2&gt;
  
  
  Supercharge Your Applications with GraalVM — Book
&lt;/h2&gt;

&lt;p&gt;It’s been a long gap since I last blogged. &lt;a href="https://www.packtpub.com/"&gt;Packt&lt;/a&gt; reached out to me after reading my blog, and offered to turn the content into a more detailed, hands-on book.&lt;/p&gt;

&lt;p&gt;Writing the book has been a super exciting learning experience.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lE-w3qW1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AjXR1pdicf62X3x0ro8axdg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lE-w3qW1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AjXR1pdicf62X3x0ro8axdg.png" alt="" width="250" height="309"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Before I explain what has gone into this book, let me introduce GraalVM. Here are some blogs on GraalVM that I published earlier.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Episode 1: “The Evolution”&lt;/strong&gt; — Java JIT, HotSpot &amp;amp; C2 compilers (the current episode…scroll down)&lt;br&gt;
 &lt;a href="https://medium.com/@abvijaykumar/episode-2-the-holy-grail-graalvm-building-super-optimum-microservices-architecture-series-c068b72735a1"&gt;&lt;strong&gt;Episode 2: “The Holy Grail”&lt;/strong&gt; — GraalVM&lt;/a&gt;&lt;br&gt;
 In this blog, I talk about how GraalVM embraces polyglot programming, providing interoperability between various programming languages. I then cover how it extends HotSpot to provide faster execution and smaller footprints with “ahead-of-time” compilation &amp;amp; other optimisations.&lt;br&gt;
 &lt;a href="https://abvijaykumar.medium.com/java-serverless-on-steroids-with-fn-graalvm-hands-on-3f95e8f0de16"&gt;&lt;strong&gt;Java Serverless on Steroids with fn+GraalVM Hands-On&lt;/strong&gt;&lt;/a&gt;&lt;br&gt;
 This blog provides a hands-on example of how to build a serverless application using the fn project and run it on GraalVM.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;GraalVM is a universal virtual machine that allows programmers to embed, compile, interoperate, and run applications written in JVM languages such as Java, Kotlin, and Groovy; non-JVM languages such as JavaScript, Python, WebAssembly, Ruby, and R; and LLVM-based languages such as C and C++.&lt;/p&gt;

&lt;p&gt;GraalVM provides the Graal just-in-time (JIT) compiler, an implementation of the Java Virtual Machine Compiler Interface (JVMCI). It is built entirely in Java, uses the optimization techniques of the Java C2 JIT compiler as a baseline, and builds further on top of them; the Graal JIT is considerably more sophisticated than the C2 compiler. GraalVM is a drop-in replacement for the JDK, which means that applications currently running on the JDK should run on GraalVM without any application code changes.&lt;/p&gt;
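&lt;p&gt;To see what the JIT does, here is a minimal sketch (the class, method, and numbers are made up for illustration): a method invoked often enough becomes “hot” and is compiled to native code, and because GraalVM is a drop-in replacement, the same command line runs it on either VM.&lt;/p&gt;

```java
public class HotLoop {
    // square() is invoked a million times, so the JVM's JIT compiler
    // (C2 on stock HotSpot, the Graal JIT on GraalVM) compiles it to
    // native code once it is "hot". Run with -XX:+PrintCompilation to
    // watch this happen; the command line is identical on both VMs.
    static long square(long x) {
        return x * x;
    }

    public static void main(String[] args) {
        long sum = 0;
        for (long i = 1_000_000; i > 0; i--) {
            sum += square(i);
        }
        System.out.println(sum); // prints 333333833333500000
    }
}
```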

&lt;p&gt;GraalVM also provides ahead-of-time (AOT) compilation to build statically linked native images. Native images have a very small footprint and faster startup and execution, which is ideal for modern microservices architectures.&lt;/p&gt;
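&lt;p&gt;As a rough sketch of the typical workflow (assuming GraalVM is installed and the native-image component has been added with &lt;code&gt;gu install native-image&lt;/code&gt;; the class name is illustrative):&lt;/p&gt;

```shell
# Compile a class to bytecode, then AOT-compile it into a standalone,
# statically linked native executable (no JVM needed at run time).
javac Hello.java
native-image Hello
./hello   # starts in milliseconds, with no JVM warm-up
```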

&lt;p&gt;While GraalVM is built on Java, it not only supports Java but also enables polyglot development with JavaScript, Python, R, Ruby, C, and C++. It provides an extensible framework called Truffle that allows any language to be implemented and run on the platform.&lt;/p&gt;
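&lt;p&gt;A minimal taste of this polyglot capability, using GraalVM’s polyglot API to evaluate JavaScript from Java (this sketch assumes you are running on GraalVM with the JavaScript language installed):&lt;/p&gt;

```java
import org.graalvm.polyglot.Context;
import org.graalvm.polyglot.Value;

public class PolyglotHello {
    public static void main(String[] args) {
        // Context is the entry point of the polyglot API; "js" selects the
        // JavaScript engine shipped with GraalVM. This only runs on GraalVM
        // with the JavaScript language installed.
        try (Context context = Context.create()) {
            Value result = context.eval("js", "6 * 7");
            System.out.println(result.asInt()); // prints 42
        }
    }
}
```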

&lt;p&gt;GraalVM is becoming a default runtime for running cloud-native Java microservices, and a number of microservices frameworks built on it are already emerging in the market, such as Quarkus, Micronaut, and Spring Native.&lt;/p&gt;

&lt;p&gt;Developers working with Java will be able to put their knowledge to work with this practical guide to GraalVM and cloud-native Java microservices frameworks. The book takes a hands-on approach to implementation and the associated methodologies to have you up and running, and productive, in no time, with step-by-step explanations of essential concepts and simple, easy-to-understand examples.&lt;/p&gt;

&lt;p&gt;This book is a hands-on guide for developers who wish to optimize their applications’ performance and are looking for solutions. We start with a quick introduction to the GraalVM architecture and how things work under the hood. Developers then move on to explore the performance benefits they can gain by running their Java applications on GraalVM. We learn how to create native images and understand how AOT compilation can improve application performance significantly. We then explore examples of building polyglot applications and the interoperability between languages running on the same VM. We explore the Truffle framework to implement our own languages to run optimally on GraalVM. Finally, we learn how GraalVM is specifically beneficial in cloud-native and microservices development.&lt;/p&gt;

&lt;h2&gt;
  
  
  Who this book is for
&lt;/h2&gt;

&lt;p&gt;The primary audience is Java developers looking to optimize their applications’ performance. This book will also be useful to Java developers who are exploring options to develop polyglot applications using tooling from the Python/R/Ruby/Node.js ecosystems. Since this book is for experienced developers/programmers, readers must be well-versed in basic software development concepts and should be comfortable writing Java code.&lt;/p&gt;

&lt;h2&gt;
  
  
  What this book covers
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Chapter 1, History of JVM
&lt;/h3&gt;

&lt;p&gt;This chapter walks through the evolution of the JVM and how it optimized its interpreter and compilers. It covers the C1 and C2 compilers and the kinds of code optimizations that the JVM performs to run Java programs faster.&lt;/p&gt;

&lt;h3&gt;
  
  
  Chapter 2, JIT, HotSpot and Graal
&lt;/h3&gt;

&lt;p&gt;This chapter takes a deep dive into how JIT compilers and Java HotSpot work, and how the JVM optimizes code at runtime.&lt;/p&gt;

&lt;h3&gt;
  
  
  Chapter 3, GraalVM Architecture
&lt;/h3&gt;

&lt;p&gt;This chapter provides an architecture overview of Graal and its various components. It goes into detail on how GraalVM works and how it provides a single VM for multiple language implementations. It also covers the optimizations GraalVM brings on top of a standard JVM.&lt;/p&gt;

&lt;h3&gt;
  
  
  Chapter 4, Graal Compiler — Just in Time
&lt;/h3&gt;

&lt;p&gt;This chapter talks about the just-in-time compilation option of GraalVM, going through the various optimizations the Graal JIT compiler performs in detail. This is followed by a hands-on tutorial on using various compiler options to optimize execution.&lt;/p&gt;

&lt;h3&gt;
  
  
  Chapter 5, Graal Compiler — Ahead of Time
&lt;/h3&gt;

&lt;p&gt;This chapter is a hands-on tutorial that walks through how to build, optimize, and run native images, using profile-guided optimization techniques.&lt;/p&gt;

&lt;h3&gt;
  
  
  Chapter 6, Truffle
&lt;/h3&gt;

&lt;p&gt;This chapter introduces the Truffle polyglot interoperability capabilities and the high-level framework components. It also covers how data can be transferred between applications written in different languages, running on GraalVM.&lt;/p&gt;

&lt;h3&gt;
  
  
  Chapter 7, Graal Polyglot (JavaScript and Node)
&lt;/h3&gt;

&lt;p&gt;This chapter introduces JavaScript and Node.js. This is followed by a tutorial on using the Polyglot API for interoperability with a JavaScript and Node.js application.&lt;/p&gt;

&lt;h3&gt;
  
  
  Chapter 8, Graal Polyglot (Python, R and Java on Truffle)
&lt;/h3&gt;

&lt;p&gt;This chapter introduces Python, R, and Java on Truffle (Espresso). This is followed by a tutorial on using the Polyglot API for interoperability between these languages.&lt;/p&gt;

&lt;h3&gt;
  
  
  Chapter 9, Graal Polyglot (LLVM, Ruby, WASM)
&lt;/h3&gt;

&lt;p&gt;This chapter introduces LLVM, Ruby, and WebAssembly (WASM). This is followed by a tutorial on using the Polyglot API to interoperate between these languages.&lt;/p&gt;

&lt;h3&gt;
  
  
  Chapter 10, Microservices and Serverless Architecture, Frameworks (Micronaut, Quarkus, fn Project) with Case Study
&lt;/h3&gt;

&lt;p&gt;This chapter covers modern microservices architecture and how new frameworks such as Quarkus and Micronaut build on Graal for an optimal microservices architecture.&lt;/p&gt;

&lt;p&gt;The book is scheduled for release in June 2021 and is open for pre-order. You can find it at&lt;br&gt;
&lt;a href="https://www.packtpub.com/product/supercharge-your-applications-with-graalvm/9781800564909"&gt;&lt;strong&gt;Supercharge Your Applications with GraalVM | Packt&lt;/strong&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.amazon.com/Supercharge-Your-Applications-GraalVM-hands/dp/1800564902"&gt;&lt;strong&gt;Supercharge Your Applications with GraalVM: A hands-on guide to building high-performance polyglot…&lt;/strong&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.barnesandnoble.com/w/supercharge-your-applications-with-graalvm-a-b-vijay-kumar/1139252019"&gt;&lt;strong&gt;Supercharge Your Applications with GraalVM: A hands-on guide to building high-performance polyglot…&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Kubernetes Operators to realize the dream of Zero-Touch Ops</title>
      <dc:creator>A B Vijay Kumar</dc:creator>
      <pubDate>Thu, 20 Jan 2022 06:00:03 +0000</pubDate>
      <link>https://dev.to/abvijaykumar/kubernetes-operators-to-realize-the-dream-of-zero-touch-ops-4b9</link>
      <guid>https://dev.to/abvijaykumar/kubernetes-operators-to-realize-the-dream-of-zero-touch-ops-4b9</guid>
      <description>&lt;h2&gt;
  
  
  Kubernetes Operators to realize the dream of Zero-Touch Ops
&lt;/h2&gt;

&lt;p&gt;Kubernetes Operators have the power to realize the dream of zero-touch ops, bringing AIOps to life…and this is how I believe they will.&lt;/p&gt;

&lt;h2&gt;
  
  
  Operators
&lt;/h2&gt;

&lt;p&gt;As we step into microservices architectures, deploy them on the cloud with containers, and embrace all the goodness of DevOps…the application functionality grows…and the clusters and the number of resources in them grow too. If an application is not “built for manage”, it is going to be a nightmare to operate, and we might end up spending more effort managing these applications than building them…ironically, while automation technology holds huge promise and we talk about zero-touch ops as the nirvana of managing cloud applications!&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;In my view, Operators are the most important architectural component in the k8s world, with huge promise to carry us towards our zero-touch (or low-touch) ops journey.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Before I jump in…let me quickly walk you through my understanding of Operators (and I am sure there are a lot of blogs, vlogs, and YouTube videos that might do a better job.. :-).)&lt;/p&gt;

&lt;p&gt;k8s is all about Controllers &amp;amp; Resources.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#:~:text=A%20resource%20is%20an%20endpoint,a%20collection%20of%20Pod%20objects.&amp;amp;text=However%2C%20many%20core%20Kubernetes%20functions,resources%2C%20making%20Kubernetes%20more%20modular."&gt;Resource&lt;/a&gt;: A &lt;em&gt;resource *is an endpoint in the &lt;a href="https://kubernetes.io/docs/reference/using-api/api-overview/"&gt;Kubernetes API&lt;/a&gt; that stores a collection of &lt;a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/kubernetes-objects/"&gt;API objects&lt;/a&gt; of a certain kind; for example, the built-in *pods&lt;/em&gt; resource contains a collection of Pod objects.&lt;br&gt;
 &lt;a href="https://kubernetes.io/docs/concepts/architecture/controller/"&gt;Controllers&lt;/a&gt;: In Kubernetes, controllers are control loops that watch the state of your &lt;a href="https://kubernetes.io/docs/reference/glossary/?all=true#term-cluster"&gt;cluster&lt;/a&gt;, then make or request changes where needed.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IvNHsVAc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2ArVlbUlxIPAzfOMbudajeNA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IvNHsVAc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2ArVlbUlxIPAzfOMbudajeNA.png" alt="Resources &amp;amp; Controllers" width="602" height="116"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Controllers contain the logic for managing resources, and that is how the k8s cluster runs.&lt;/p&gt;
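&lt;p&gt;The control-loop idea can be sketched in a few lines. This is purely illustrative plain Java (real controllers are typically written in Go against the Kubernetes API server; the names and numbers here are made up): the loop compares desired state with observed state and keeps acting until they converge.&lt;/p&gt;

```java
public class ReconcileLoop {
    // A controller in miniature: a control loop that compares desired state
    // with observed state and issues actions until the two converge.
    static int desiredReplicas = 3;
    static int actualReplicas = 1;

    static void reconcile() {
        while (actualReplicas != desiredReplicas) {
            if (actualReplicas > desiredReplicas) {
                actualReplicas--;                  // scale down: delete a pod
                System.out.println("delete pod");
            } else {
                actualReplicas++;                  // scale up: create a pod
                System.out.println("create pod");
            }
        }
    }

    public static void main(String[] args) {
        reconcile();
        System.out.println("replicas=" + actualReplicas); // prints replicas=3
    }
}
```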

&lt;p&gt;Initial versions of k8s came with a predefined set of resources, and we were restricted to using only the resources that shipped with k8s.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Controllers are very good at managing stateless applications, since a controller is a constant control loop that tracks and fixes state. Because the applications are stateless, there is no backup/recovery/restore of state; for example, if an instance of a web server crashes, the controller can easily replace it with another instance and bring things back to the desired state.&lt;br&gt;
 But for stateful applications like databases, it is not that straightforward, and restoring state would require manual intervention! So we need something more than standard controllers.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Since the introduction of Custom Resources, we have had the flexibility to declare and create our own k8s resources.&lt;/p&gt;

&lt;p&gt;Now imagine if we could start defining our own resources and let k8s manage them too! Even better, imagine if we could build our own controllers, with our own custom management logic, and let k8s run our resources…and that is exactly what Operators are!&lt;/p&gt;

&lt;p&gt;With Operators, we can write the logic for the complete management of custom resources, and let k8s manage them for us…and that is how we can move to low-touch ops!&lt;/p&gt;
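&lt;p&gt;A custom resource starts with a CustomResourceDefinition (CRD). As a sketch, here is a minimal, hypothetical “Backup” resource (the group, names, and fields are made up for illustration); once applied, k8s serves &lt;code&gt;Backup&lt;/code&gt; objects like any built-in resource, and an Operator’s controller reconciles them:&lt;/p&gt;

```yaml
# Minimal CRD declaring a hypothetical "Backup" custom resource.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: backups.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: backups
    singular: backup
    kind: Backup
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                schedule:
                  type: string   # e.g. a cron expression the Operator acts on
```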

&lt;p&gt;So what all can we automate with Operators? The answer is “everything that can be automated”…right from installation, patching, updates, upgrades, backup, and recovery, to capturing telemetry and acting based on AI (artificial intelligence), all the way to the nirvana stage of zero-touch ops.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4wZUt9S8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AZLRvdqerOAloSVbFWWyEfw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4wZUt9S8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AZLRvdqerOAloSVbFWWyEfw.png" alt="Operators Maturity Model" width="842" height="343"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There is a well-defined Operator maturity model that clearly defines the &lt;a href="https://docs.openshift.com/container-platform/4.1/applications/operators/olm-what-operators-are.html"&gt;5 phases of maturity&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5i9d-Xgn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2A75UDw8T8l54FsAsezazQ8A.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5i9d-Xgn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2A75UDw8T8l54FsAsezazQ8A.png" alt="How Operators work" width="680" height="490"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There are 3 main components of the Operator Framework:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Operator SDK&lt;/strong&gt;: provides the tools to build, test, and package Operators. It offers 3 SDKs out of the box:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Helm SDK&lt;/strong&gt;: provides a declarative way of building Operators; it is mainly suited to install-and-configure styles of Operators&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Ansible SDK&lt;/strong&gt;, &lt;strong&gt;Go SDK&lt;/strong&gt;: the Ansible and Go SDKs provide more advanced ways of building Operators, with which you can build Operators all the way up to “Auto-Pilot” maturity.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Apart from the Operator SDK, there are other tools in the market such as &lt;a href="https://kudo.dev/"&gt;KUDO&lt;/a&gt;, &lt;a href="https://book.kubebuilder.io/"&gt;kubebuilder&lt;/a&gt;, and &lt;a href="https://metacontroller.app/"&gt;Metacontroller&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Operator Lifecycle Manager (OLM)&lt;/strong&gt;: manages the complete lifecycle of an Operator, from installation to ongoing management. OLM monitors the deployed CRDs, and when something changes, it ensures that the changes are applied across the cluster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Operator Metering&lt;/strong&gt;: reports on the usage of an Operator to support metering.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Abffahgl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AQ0_PgdZLpRFFImPQCjxINQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Abffahgl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AQ0_PgdZLpRFFImPQCjxINQ.png" alt="Operator Architecture" width="731" height="381"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating &amp;amp; Deploying an Operator
&lt;/h2&gt;

&lt;p&gt;Just for completeness, here is a very quick walk-through of building and deploying an Operator.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://sdk.operatorframework.io/docs/installation/install-operator-sdk/"&gt;**Install Operator SDK&lt;/a&gt;**&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://sdk.operatorframework.io/docs/olm-integration/quickstart-bundle/"&gt;**Build, Test and Deploy&lt;/a&gt;**&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://sdk.operatorframework.io/docs/advanced-topics/operator-capabilities/operator-capabilities/"&gt;**Evolve &amp;amp; Mature&lt;/a&gt;**&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  AIOps for Zero-Touch Ops
&lt;/h2&gt;

&lt;p&gt;Artificial intelligence &amp;amp; machine learning applied to IT Ops have become a reality, and are already a very common practice for bringing down operational costs. So what capabilities are required for AIOps?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--iLHZbt0N--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AVJrj3HE4H4_QqK_mvliS-w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--iLHZbt0N--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AVJrj3HE4H4_QqK_mvliS-w.png" alt="AIOps Capability Architecture" width="838" height="604"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The picture above illustrates my understanding of the AIOps capability architecture (thanks to Naveen E P for the brainstorming and contribution in building this picture).&lt;/p&gt;

&lt;p&gt;AIOps goes beyond standard event detection to advanced prediction with actionable insights. The term “actionable” is important: it refers to recommending or executing the best action to fix current issues, or issues predicted to occur. This is what we really need for “Auto-Pilot” maturity, where it will replace or augment Site Reliability Engineers (SREs).&lt;/p&gt;

&lt;p&gt;Now, if you connect this generic picture of AIOps with what k8s Operators bring to the table, it is very clear that Operators have all that we need to be our AIOps engine.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;All of these capabilities can be built as CRs, with a set of Operators that bring the pieces of AIOps to life. These Operators co-locate inside the k8s cluster and run as pods/sidecars. They can also integrate with a service mesh for additional metrics and telemetry, and act proactively to operate the cluster.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zNSNmTMi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AQHEpWXqvYNRMEJ0JUREeRQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zNSNmTMi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AQHEpWXqvYNRMEJ0JUREeRQ.png" alt="AIOps with Operators — Illustrative Architecture" width="646" height="626"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The picture above provides a high-level view of the idea; let’s see how it maps to the 3 layers of the AIOps capability architecture we discussed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Visibility&lt;/strong&gt;: the visibility layer can be built on &lt;a href="https://grafana.com/"&gt;Grafana&lt;/a&gt;, providing single-pane visibility of cluster health&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Prediction&lt;/strong&gt;: the prediction layer contains the modules (from simple Python modules to advanced Spark clusters, each as a specific Operator) that build machine-learning models from the data streaming in from &lt;a href="https://prometheus.io/"&gt;Prometheus&lt;/a&gt; and the service mesh (Istio)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Resolution&lt;/strong&gt;: resolution can range from simple k8s commands to &lt;a href="https://www.ansible.com/"&gt;Ansible&lt;/a&gt; playbooks, or even invoking RPA digital workers, depending on standard operating procedures, to recover from failures or take proactive measures&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The best part is that all of this AIOps happens natively in Kubernetes (except maybe the RPAs).&lt;/p&gt;

&lt;p&gt;There you go: Operators are the key to unlocking the “Zero-Touch Ops” journey.&lt;/p&gt;

&lt;p&gt;In the meantime, I have been playing around with operators and will soon come back with a hands-on session…&lt;/p&gt;

&lt;p&gt;Have fun, take care..ttyl&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://access.redhat.com/documentation/en-us/openshift_container_platform/4.1/html/applications/operators"&gt;https://access.redhat.com/documentation/en-us/openshift_container_platform/4.1/html/applications/operators&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://developers.redhat.com/blog/2020/04/15/crafting-kubernetes-operators/?utm_content=bufferec5c6&amp;amp;utm_medium=social&amp;amp;utm_source=facebook.com&amp;amp;utm_campaign=buffer"&gt;https://developers.redhat.com/blog/2020/04/15/crafting-kubernetes-operators/?utm_content=bufferec5c6&amp;amp;utm_medium=social&amp;amp;utm_source=facebook.com&amp;amp;utm_campaign=buffer&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://sdk.operatorframework.io/"&gt;https://sdk.operatorframework.io/&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>aiops</category>
      <category>operators</category>
    </item>
    <item>
      <title>Evolution of k8s worker nodes-CRI-O</title>
      <dc:creator>A B Vijay Kumar</dc:creator>
      <pubDate>Thu, 20 Jan 2022 05:58:35 +0000</pubDate>
      <link>https://dev.to/abvijaykumar/evolution-of-k8s-worker-nodes-cri-o-4m6</link>
      <guid>https://dev.to/abvijaykumar/evolution-of-k8s-worker-nodes-cri-o-4m6</guid>
      <description>&lt;h2&gt;
  
  
  Evolution of k8s worker nodes-CRI-O
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---yP8ExiI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AlXnPDcB82byH1nXPInHtTQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---yP8ExiI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AlXnPDcB82byH1nXPInHtTQ.png" alt="" width="578" height="108"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Just a few months back, I never used to call containers simply containers…I used to call them Docker containers. When I heard that OpenShift was moving to CRI-O, I wondered what the big deal was…and to understand the “big deal”, I had to understand the evolution of the k8s worker node.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Evolution&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;If you look at the evolution of the k8s architecture, there has been significant change and optimization in the way worker nodes run containers…here are the significant stages of that evolution, as I have attempted to capture them…&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stage 0: Docker is the captain&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--45zyoxBe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AN95B-t03pNU6f9OyOiWwiA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--45zyoxBe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AN95B-t03pNU6f9OyOiWwiA.png" alt="" width="221" height="483"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It started with a simple architecture: kubelets acted as the worker-node agents, receiving commands from admins through the api-server on the master node. The kubelet used the Docker runtime to launch containers (pulling the images from the registry). This was all good…until alternative container runtimes, with better performance &amp;amp; unique strengths, started appearing in the market, and we realised it would be good if we could plug and play these runtimes. The obvious design pattern to fix this issue? The “adapter/proxy” pattern…right? That led to the next stage.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Evolution is all about adapting to the changes in the ecosystem&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Stage 1: CRI (Container Runtime Interface)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bxP_h4Dz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2ATsyB1oBvlVK2_tw_1zAuTg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bxP_h4Dz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2ATsyB1oBvlVK2_tw_1zAuTg.png" alt="" width="221" height="261"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Container Runtime Interface (CRI) spec was introduced in k8s 1.5. CRI consists of protocol buffers, a gRPC API, and libraries. It brought in an abstraction layer that acts as an adapter, with a gRPC client running in the kubelet and a gRPC server running in the CRI shim. This provided a simpler way to run various container runtimes.&lt;/p&gt;

&lt;p&gt;Before we go any further…we need to understand what functionality is expected from a container runtime. The container runtime used to manage downloading images, unpacking them, and running them, and also handled networking and storage. That was fine…until we started realizing that this is like a monolith!&lt;/p&gt;

&lt;p&gt;Let me layer these functionalities into 2 levels.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;High level&lt;/strong&gt; — Image management, transport, unpacking the images &amp;amp; API to send commands to run the container, network, storage (eg: rkt, docker, LXC, etc).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Low level&lt;/strong&gt; — run the containers.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It made more sense to split these functionalities into components that can be mixed and matched with various open-source options, providing more optimizations and efficiencies. The obvious design/architecture pattern to fix this issue? The “layering” pattern…right? That led to the next stage.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stage 2: CRI-O &amp;amp; OCI&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cl_1q4FQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AytRYGXTAY7osog0ExRCdRQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cl_1q4FQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AytRYGXTAY7osog0ExRCdRQ.png" alt="" width="221" height="331"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So the OCI (Open Container Initiative) came up with clear container runtime &amp;amp; image specifications, which helped multi-platform support (Linux, Windows, VMs, etc.). runc is the default implementation of the OCI runtime spec, and it sits at the low level of the container runtime.&lt;/p&gt;

&lt;p&gt;Modern container runtimes are built on this layered architecture, where kubelets talk to container runtimes through CRI over gRPC, and the container runtimes run containers through OCI.&lt;/p&gt;

&lt;p&gt;There are various implementations of CRI, such as dockershim, CRI-O, and containerd.&lt;/p&gt;

&lt;p&gt;Towards the end of Stage 1, I mentioned the flexibility to create a toolkit for end-to-end container management…and that needed Captain America to assemble the Avengers, to provide an end-to-end container platform…&lt;/p&gt;

&lt;h2&gt;
  
  
  Avengers of k8s world - led by Captain “OpenShift”
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mtR_LJIi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AK1VxDjmaj0hJTsqB_9lz9w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mtR_LJIi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AK1VxDjmaj0hJTsqB_9lz9w.png" alt="" width="630" height="234"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;podman&lt;/strong&gt;: a daemonless container engine for developing, managing, and running OCI containers. It speaks the exact Docker CLI language, to the extent that you can simply alias it&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;skopeo&lt;/strong&gt;: a complete container image management CLI tool. One of the features I love most about skopeo is the ability to inspect images on a remote registry without downloading or unpacking them!!!…It has matured into a full-fledged image management tool for remote registries, including signing images, copying between registries &amp;amp; keeping remote registries in sync. This significantly increases the pace of container build, manage, and deploy pipelines…&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;buildah&lt;/strong&gt;: a tool that helps build OCI images, incrementally!!!…yes, incrementally…I was playing around with this the other day. I don’t have to imagine the whole image composition and write a complex Dockerfile…instead, I just build the image one layer at a time, test it, roll back (if required), and once I am satisfied, commit it to the registry…how cool is that!!!&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;cri-o&lt;/strong&gt;: a lightweight container runtime for k8s…I will write more about this in the next section.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;OpenShift&lt;/strong&gt;: End to end container platform…&lt;strong&gt;the real Captain&lt;/strong&gt;!!&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
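
&lt;p&gt;To make those roles concrete, here is a minimal CLI sketch (the image names are just examples, and the buildah image name is a placeholder I made up; assumes podman, skopeo, and buildah are installed):&lt;/p&gt;

```shell
# podman speaks the Docker CLI language — you can simply alias it
alias docker=podman
podman run --rm docker.io/library/alpine:latest echo "hello from a daemonless engine"

# skopeo inspects an image on a remote registry without pulling or unpacking it
skopeo inspect docker://docker.io/library/alpine:latest

# buildah builds an OCI image incrementally, one layer at a time
ctr=$(buildah from docker.io/library/alpine:latest)
buildah run "$ctr" -- apk add --no-cache curl
buildah commit "$ctr" my-curl-image
```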

&lt;h2&gt;
  
  
  &lt;strong&gt;Red Hat OpenShift goes for CRI-O&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Red Hat OpenShift 4.x defaults to CRI-O as the container runtime. A lot of this decision (in my opinion) goes back to the choice of building an immutable infrastructure based on CoreOS, on which OpenShift 4.x runs. CRI-O was the obvious choice with CoreOS as the base; all the more, CRI-O is governed by the k8s community, is completely open source, is very lean, and directly implements the k8s Container Runtime Interface…&lt;a href="https://www.projectatomic.io/blog/2017/06/6-reasons-why-cri-o-is-the-best-runtime-for-kubernetes/"&gt;refer to these 6 reasons in detail&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here is a great picture, taken from &lt;a href="https://www.redhat.com/en/blog/red-hat-openshift-container-platform-4-now-defaults-cri-o-underlying-container-engine"&gt;this blog&lt;/a&gt;, that shows how CRI-O works under the hood in Red Hat OpenShift 4.x&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3xIFHqb---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2448/0%2AbarzTkbPnPpGBc85.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3xIFHqb---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2448/0%2AbarzTkbPnPpGBc85.png" alt="" width="880" height="894"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.projectatomic.io/blog/2017/06/6-reasons-why-cri-o-is-the-best-runtime-for-kubernetes/"&gt;https://www.projectatomic.io/blog/2017/06/6-reasons-why-cri-o-is-the-best-runtime-for-kubernetes/&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.redhat.com/en/blog/red-hat-openshift-container-platform-4-now-defaults-cri-o-underlying-container-engine#"&gt;https://www.redhat.com/en/blog/red-hat-openshift-container-platform-4-now-defaults-cri-o-underlying-container-engine#&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://kubernetes.io/blog/2016/12/container-runtime-interface-cri-in-kubernetes/"&gt;https://kubernetes.io/blog/2016/12/container-runtime-interface-cri-in-kubernetes/&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.redhat.com/en/blog/red-hat-openshift-container-platform-4-now-defaults-cri-o-underlying-container-engine"&gt;https://www.redhat.com/en/blog/red-hat-openshift-container-platform-4-now-defaults-cri-o-underlying-container-engine&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
    </item>
    <item>
      <title>OpenShift 4 “under-the-hood”</title>
      <dc:creator>A B Vijay Kumar</dc:creator>
      <pubDate>Wed, 19 Jan 2022 04:39:13 +0000</pubDate>
      <link>https://dev.to/abvijaykumar/openshift-4-under-the-hood-44k6</link>
      <guid>https://dev.to/abvijaykumar/openshift-4-under-the-hood-44k6</guid>
      <description>&lt;h2&gt;
  
  
  OpenShift 4 “under-the-hood”
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Ow4EXnfo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2866/0%2AIYoU4NsxCdZ-PfyI.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Ow4EXnfo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2866/0%2AIYoU4NsxCdZ-PfyI.png" alt="" width="880" height="299"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I have been playing around with Red Hat OpenShift 4.x for the past few months now… It has been a super exciting learning journey…In this blog, I will attempt to capture the key architectural components of OpenShift 4.x and how they come together, to provide the most comprehensive container platform.&lt;/p&gt;

&lt;p&gt;… let's open the hood…&lt;/p&gt;

&lt;h2&gt;
  
  
  Nuts and Bolts
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Built on CoreOS:&lt;/strong&gt; In my opinion, this is one of the major architectural changes in OpenShift 4.x, and I think it really changed the way the platform works…here is how!!!&lt;/p&gt;

&lt;p&gt;CoreOS provides “immutability”!!!…what does that even mean…let me explain…while Red Hat CoreOS is built on RHEL components (bringing in all the security and control measures), CoreOS allows you to modify only a few system settings, making it much easier to manage upgrades &amp;amp; patches. This immutability allows OpenShift to do better state management and perform updates based on the latest configurations.&lt;/p&gt;

&lt;p&gt;So what's the big deal??…here are the top reasons why I think it's a big deal:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Versioning and rollbacks&lt;/strong&gt; of deployments are much easier &amp;amp; more straightforward — so the DevOps process is much more manageable.&lt;br&gt;
 “&lt;strong&gt;Configuration drift&lt;/strong&gt;” is a big issue if you have managed a large number of containers/microservices in HA/DR environments. Typically, container infrastructure is built by one team and, over a period of time, is managed by various engineers. There are always situations where we are forced to change the configuration of the VMs/containers/OS in ways we may never trace back.&lt;br&gt;
 This causes a gap between the Production and DR/HA environments. I read somewhere that up to &lt;strong&gt;99%&lt;/strong&gt; of HA/DR issues are caused by this…and in my experience, a core banking system went down for days before we could even figure out that the root cause was a configuration gap between Prod &amp;amp; HA/DR.&lt;br&gt;
 Immutability helps us do better version control of the infra — and it gives us more &lt;strong&gt;confidence in testing&lt;/strong&gt;, because the underlying infrastructure on which our application containers run is immutable, so we can trust the test results.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--UHKrcQUn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AL3BUm28y9QREFu9xQlnyag.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--UHKrcQUn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AL3BUm28y9QREFu9xQlnyag.png" alt="" width="282" height="182"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Vertically integrated stack with CoreOS:&lt;/strong&gt; In OpenShift 4.x, CoreOS is vertically integrated with the container platform…what this means is that the cluster can now manage the pool of Red Hat CoreOS machines (nodes), and their full lifecycle, in k8s style!!! Imagine how much that reduces the operational effort!!! Just to compare with OpenShift 3.x — we used to manually provision the OS and rely on administrators to configure it properly and, more importantly, manage the updates &amp;amp; upgrades.&lt;/p&gt;

&lt;p&gt;To my earlier point, this also caused a lot of issues due to “Configuration Drift” over a period of time…you will see how this vertical integration helps set up &amp;amp; manage nodes as “Machines” later in the blog.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CRI-O as the container runtime&lt;/strong&gt;: I had published a blog on “why CRI-O”…&lt;a href="https://medium.com/@abvijaykumar/evolution-of-k8sworker-nodes-cri-o-ea58762e7629"&gt;please read this blog&lt;/a&gt;. In my personal opinion, this is another very critical architectural decision that makes OpenShift 4.x a more agile, lightweight, scalable, and high-performing container platform.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Operators&lt;/strong&gt;: This is another important component of the architecture…which allows us to extend k8s, customize the resources and controllers, &amp;amp; build a more manageable system…&lt;a href="https://medium.com/@abvijaykumar/kubernetes-operators-to-realize-the-dream-of-zero-touch-ops-5bc8c3e5e11b"&gt;please read my blog on Operators&lt;/a&gt;, where I go deeper.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Machine Management&lt;/strong&gt;: Machine management is one of the most important ecosystems of resources &amp;amp; Operators in OpenShift 4.x. These resources and Operators provide a comprehensive set of APIs for all node host provisioning &amp;amp; work with the Cluster API to provide elasticity &amp;amp; autoscaling.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Machine&lt;/strong&gt;: A Machine is the fundamental unit that represents a k8s node, abstracting the cloud-platform-specific implementations. The machine’s “providerSpec” describes the actual compute node that gets realized. MachineConfig defines the machine configuration.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;MachineSets&lt;/strong&gt;: Just as ReplicaSets manage replicas and maintain the “desired state”, a MachineSet ensures the desired number of machines (nodes) are running&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;MachineAutoScaler&lt;/strong&gt;: The MachineAutoScaler works with MachineSets to manage the load and automatically scale the cluster. The minimum and maximum number of machines are set in the MachineAutoScaler, which manages scalability within those bounds&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;ClusterAutoScaler&lt;/strong&gt;: The ClusterAutoScaler manages the cluster-wide scaling policy based on various cluster-wide parameters such as cores, memory, GPUs, etc.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
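
&lt;p&gt;To make this concrete, here is a minimal sketch of a MachineSet manifest (the name, labels, and providerSpec fields shown are illustrative placeholders — real manifests carry many more cloud-specific fields):&lt;/p&gt;

```yaml
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: mycluster-worker-us-east-1a     # illustrative name
  namespace: openshift-machine-api
spec:
  replicas: 3                           # desired number of machines (nodes)
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machineset: mycluster-worker-us-east-1a
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-machineset: mycluster-worker-us-east-1a
    spec:
      providerSpec:
        value:                          # cloud-specific machine description (mostly elided)
          instanceType: m5.xlarge
```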

&lt;p&gt;As we walk through the blog, you will see more and more nuts and bolts coming together to build the most advanced container platform.&lt;/p&gt;

&lt;h2&gt;
  
  
  Provisioning
&lt;/h2&gt;

&lt;p&gt;OpenShift 4.x introduced a more sophisticated and automated installation procedure, called Installer-Provisioned Infrastructure, which does a full-stack install — leveraging Ignition &amp;amp; Operators. There are two ways to install:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Installer-Provisioned Infrastructure (IPI)&lt;/strong&gt;: This is only available for OpenShift 4.x. It provides a full-stack installation and setup of the cluster, including the cloud resources and the underlying operating system, which in this case is RHEL CoreOS.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;User-Provisioned Infrastructure (UPI)&lt;/strong&gt;: UPI is the traditional installation approach that we have had since OpenShift 3.x, where we need to set up the underlying infrastructure (cloud resources &amp;amp; OS) ourselves, and openshift-install helps automatically set up the cluster &amp;amp; cluster services.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KYE00Pu8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AgU9xhKDwxOgk3FPr8Z_cfg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KYE00Pu8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AgU9xhKDwxOgk3FPr8Z_cfg.png" alt="IPI vs UPI" width="506" height="346"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Apart from this, OpenShift is also available as a managed service offered by most of the hyper-scalers &lt;a href="https://www.ibm.com/in-en/cloud/openshift?p1=Search&amp;amp;p4=43700056080455873&amp;amp;p5=e&amp;amp;cm_mmc=Search_Google-_-1S_1S-_-AS_IN-_-ibm%20openshift_e&amp;amp;cm_mmca7=71700000065340837&amp;amp;cm_mmca8=aud-382859943522:kwd-848945002550&amp;amp;cm_mmca9=CjwKCAjw1ej5BRBhEiwAfHyh1O4SANISLfA9x_rD3XcfAFad2C27MZe4PFaZemODx-rH_Xgjf_P7aBoCqy8QAvD_BwE&amp;amp;cm_mmca10=452787885290&amp;amp;cm_mmca11=e&amp;amp;gclid=CjwKCAjw1ej5BRBhEiwAfHyh1O4SANISLfA9x_rD3XcfAFad2C27MZe4PFaZemODx-rH_Xgjf_P7aBoCqy8QAvD_BwE&amp;amp;gclsrc=aw.ds"&gt;IBM&lt;/a&gt;, &lt;a href="https://aws.amazon.com/quickstart/architecture/openshift/"&gt;AWS&lt;/a&gt;, &lt;a href="https://azure.microsoft.com/en-in/services/openshift/"&gt;Azure&lt;/a&gt;, &lt;a href="https://cloud.google.com/solutions/partners/openshift-on-gcp"&gt;GCP&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ignition&lt;/strong&gt; is the most important utility here; it has powerful capabilities to manipulate disks during the initial setup. It reads from the configuration files (.ign) and creates the machines, making the provisioning process super easy…&lt;/p&gt;

&lt;p&gt;Let's now see how Ignition works in setting up the full cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Nt9NuDS---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AkMO-ZJaM43VMGy_Uvm_yoQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Nt9NuDS---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AkMO-ZJaM43VMGy_Uvm_yoQ.png" alt="How Ignition works in IPI to setup the clusters" width="537" height="290"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the above picture, you can see how the Ignition configuration files (.ign) are used by the bootstrap machine (read machine = node in OpenShift 4.x), which spins up the master nodes and replicates etcd, merging the base Ignition configuration with any user-customized configurations. The masters in turn spin up the worker nodes, using the worker and master configurations.&lt;/p&gt;
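
&lt;p&gt;For a feel of what Ignition consumes, here is a tiny illustrative .ign fragment (the file contents are made up for illustration — the real bootstrap/master/worker configs are generated by openshift-install and are far larger):&lt;/p&gt;

```json
{
  "ignition": { "version": "3.1.0" },
  "storage": {
    "files": [
      {
        "path": "/etc/hostname",
        "mode": 420,
        "contents": { "source": "data:,worker-0" }
      }
    ]
  }
}
```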

&lt;h2&gt;
  
  
  Updates &amp;amp; Upgrades
&lt;/h2&gt;

&lt;p&gt;OpenShift 4.x provides seamless updates, over-the-air. This is possible because of the integrated CoreOS and the magical MachineConfig Operator. Here is how it works.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;MachineConfig Operator&lt;/strong&gt; manages configuration changes across the cluster. Before we try to understand how the updates and upgrades work, let's understand the key components of this Operator that come together:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;MachineConfig Controller&lt;/strong&gt;: This runs on master and orchestrates updates across clusters.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;MachineConfigServer&lt;/strong&gt;: This hosts the ignition config files, that provision any new nodes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;MachineConfig Daemon&lt;/strong&gt;: This runs on every worker machine (it's a DaemonSet) and is responsible for managing updates on worker machines.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;MachineConfig&lt;/strong&gt; is a k8s object, that is created by bootstrap ignition, and it represents the state of the machine.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;MachineConfigPool&lt;/strong&gt; is a group of machines of a particular type, like master machines, worker machines, infrastructure machines, etc.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
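
&lt;p&gt;Here is a minimal sketch of what a custom MachineConfig looks like (the name, file path, and setting are illustrative placeholders; real file contents are URL-encoded in the data URL):&lt;/p&gt;

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-custom                              # illustrative name
  labels:
    machineconfiguration.openshift.io/role: worker    # targets the worker MachineConfigPool
spec:
  config:
    ignition:
      version: 3.1.0
    storage:
      files:
        - path: /etc/sysctl.d/99-custom.conf
          mode: 420
          contents:
            source: data:,vm.max_map_count=262144
```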

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--36eLInBy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2ATUAcDU2l2AEgRXP05Fwwvg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--36eLInBy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2ATUAcDU2l2AEgRXP05Fwwvg.png" alt="Rolling out the updates and changes in the Machine (Upgrading to newer versions of platform)" width="451" height="381"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Changes to the system happen through changes to a MachineConfig. Any change in a MachineConfig is rendered and applied to all the machines in a MachineConfigPool.&lt;/p&gt;

&lt;p&gt;So if I have to change the configuration of a type of machine (node), I apply the change to the MachineConfig, which is picked up by the MachineConfig Controller, which coordinates with the MachineConfig Daemons.&lt;/p&gt;

&lt;p&gt;The MachineConfig Daemons pull the MachineConfig changes from the API server and apply them to their respective machines (nodes). If the change is an upgrade, they connect to the quay.io registry to pull the latest image and apply it.&lt;/p&gt;

&lt;p&gt;The MachineConfig Daemons sequence and drain the nodes/machines, and reboot them after applying the changes...&lt;/p&gt;

&lt;p&gt;The updates can be applied OTA (over-the-air) using either the admin console or the cloud.openshift.com web interface. The release artifacts are packaged as container images, as a single package. The ClusterVersion Operator checks with the OCP Update Server (hosted by Red Hat), then connects to the Red Hat-hosted Quay.io to pull the image, and works with the Cluster Operators to roll out the upgrades. Typical application-level upgrades are managed by &lt;a href="https://docs.openshift.com/container-platform/4.1/applications/operators/olm-understanding-olm.html"&gt;OLM Operators&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;…and that is how OpenShift provides a sophisticated &amp;amp; controlled way to do updates and upgrades across the cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  Runtime
&lt;/h2&gt;

&lt;p&gt;Now, let's explore how the OpenShift 4 architecture looks at runtime…The below diagram shows how the Master and Worker nodes are stacked…&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ej6Qf2vx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AbvhazCv2D2P0stapaPj71g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ej6Qf2vx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AbvhazCv2D2P0stapaPj71g.png" alt="Master Node and Worker Node Architecture" width="691" height="521"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;…and the below diagram illustrates how the deployment architecture looks at a high level…if you look closer, the infra workloads are provisioned separately from the app workloads, via a dedicated Infrastructure &lt;strong&gt;MachineSet&lt;/strong&gt;. This helps reduce the load on the application worker machines.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MLTKOyis--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2A4YOUyOKdgWWJm184G_g7jg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MLTKOyis--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2A4YOUyOKdgWWJm184G_g7jg.png" alt="High-level deployment architecture" width="468" height="571"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Just to make more practical sense of the above diagram: if you run an IPI install on AWS, here is how a typical cluster would look, from the topology perspective…&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;6 EC2 instances&lt;/strong&gt;: 3x master nodes, 3x worker nodes&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;2 Route53 Configurations&lt;/strong&gt;: 1xAPI Server, 1xApp domain&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;3 ELBs&lt;/strong&gt;: 1x internal API load balancer and 1x external API load balancer (port 6443 traffic targets the group of 3 masters — API traffic), plus 1x application load balancer configured to the 3 worker nodes&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;2 security groups&lt;/strong&gt;: One for master, One for worker&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;All of them in a &lt;strong&gt;VPC&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;you can see a detailed AWS &lt;a href="https://aws.amazon.com/quickstart/architecture/openshift/"&gt;deployment architecture here&lt;/a&gt;, similarly, you can refer to the respective hyper scalers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Scaling
&lt;/h2&gt;

&lt;p&gt;OpenShift 4.x supports both auto and manual scaling out-of-the-box. Each node is configured as a Machine resource, and a MachineSet manages multiple Machines.&lt;/p&gt;

&lt;p&gt;One MachineSet is configured per availability zone. MachineSet ensures the “desired state” of the number of machines (nodes).&lt;/p&gt;

&lt;p&gt;In the case of manual scaling, the MachineSet configuration can be edited to increase the number of machines.&lt;/p&gt;
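
&lt;p&gt;For example, manual scaling is a one-liner with oc (the MachineSet name below is a placeholder — list yours with oc get machinesets -n openshift-machine-api):&lt;/p&gt;

```shell
# Set the desired number of machines (nodes) on a MachineSet
oc scale machineset mycluster-worker-us-east-1a --replicas=4 -n openshift-machine-api
```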

&lt;p&gt;In the case of auto-scaling, the MachineAutoScaler automatically scales the MachineSet’s desired state up and down, keeping within the configured minimum and maximum number of machines, and the ClusterAutoScaler decides on scaling up and down based on various parameters such as CPU, memory, etc. All of this works independently of the underlying cloud infrastructure!!! 😎. The diagram below illustrates how it works.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kodz1d0u--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AS3zObiWNP6NfqFd1ksDyFg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kodz1d0u--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AS3zObiWNP6NfqFd1ksDyFg.png" alt="" width="621" height="496"&gt;&lt;/a&gt;&lt;/p&gt;
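
&lt;p&gt;A minimal sketch of the two autoscaler resources (the names and limits here are illustrative; the ClusterAutoscaler is a singleton that must be named “default”):&lt;/p&gt;

```yaml
apiVersion: autoscaling.openshift.io/v1beta1
kind: MachineAutoscaler
metadata:
  name: worker-us-east-1a               # illustrative name
  namespace: openshift-machine-api
spec:
  minReplicas: 1                        # lower bound for this MachineSet
  maxReplicas: 6                        # upper bound for this MachineSet
  scaleTargetRef:
    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
    name: mycluster-worker-us-east-1a   # placeholder MachineSet name
---
apiVersion: autoscaling.openshift.io/v1
kind: ClusterAutoscaler
metadata:
  name: default
spec:
  resourceLimits:                       # cluster-wide caps: nodes, cores, memory, etc.
    maxNodesTotal: 12
    cores:
      min: 8
      max: 96
```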

&lt;h2&gt;
  
  
  DevOps
&lt;/h2&gt;

&lt;p&gt;Now moving to the most important part of the SDLC — DevOps. There are two key components that help improve the developer experience for rapid development and deployment in OpenShift 4.x.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CodeReady Workspace&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;CodeReady Workspaces is based on Eclipse Che, which brings a completely integrated web-based development environment and seamless integration with the OpenShift platform. It also comes pre-packaged with various development environment templates for polyglot development &amp;amp; deployment. How cool is that!!!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Native CI/CD&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Moving beyond Jenkins, OpenShift 4.x brings the cloud-native CI/CD with Tekton, which runs within K8s. Tekton runs completely serverless, with no extra load on the system.&lt;/p&gt;

&lt;p&gt;I will be covering Tekton and GitOps in more detail soon in a separate blog, and will leave a link here once done!!!&lt;/p&gt;

&lt;p&gt;Both CodeReady Workspaces and Tekton Pipelines are available as Operators in the OperatorHub…so just click and install…&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ops&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Managing cloud applications is the most critical part; as the number of microservices and deployments grows, it becomes very important to have an integrated management platform that supports&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Observability&lt;/strong&gt; (with Grafana and Prometheus)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Traceability&lt;/strong&gt; (with Kiali and Jaeger)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Canary Deployments &amp;amp; Rolling Updates&lt;/strong&gt; (with Istio)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;API routing &amp;amp; Management&lt;/strong&gt; (with Istio)&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--SsBDnC5t--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2Ajvi37mJh62_ApSEe33o6cQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SsBDnC5t--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2Ajvi37mJh62_ApSEe33o6cQ.png" alt="ServiceMesh" width="600" height="291"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;OpenShift &lt;strong&gt;ServiceMesh&lt;/strong&gt; provides a complete management solution that is highly extensible to integrate with larger enterprise Ops…also &lt;a href="https://medium.com/faun/kubernetes-operators-to-realize-the-dream-of-zero-touch-ops-5bc8c3e5e11b"&gt;check out my other blog on Operators, on how we can achieve zero-touch ops here&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There you go…this is really the whole enchilada of the container world 🌐&lt;/p&gt;

&lt;p&gt;This is not all, there is a lot more, such as: how OpenShift abstracts the storage with &lt;a href="https://www.redhat.com/en/technologies/cloud-computing/openshift-container-storage"&gt;Red Hat OpenShift Container Storage&lt;/a&gt;, and how it abstracts the underlying cloud platform network with &lt;a href="https://docs.openshift.com/container-platform/4.5/networking/cluster-network-operator.html"&gt;Networking Operator- CNO and CNI&lt;/a&gt;, and the most important feature of &lt;strong&gt;Multi-cluster management&lt;/strong&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;I will soon be publishing a blog on Multi-cluster management, I will be covering in detail, and will be leaving a link on the blog, when published…&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;phew!!! that's a lot for one blog…&lt;/p&gt;

&lt;p&gt;I hope it now makes sense why we need a container platform like OpenShift…imagine, it would be a nightmare if I had to “DIY” it with k8s and a bunch of open-source libraries. 😳&lt;/p&gt;

&lt;p&gt;The only downside of OpenShift is that it is an “opinionated full-stack” platform…but if you think about it!!! we need a container platform we can trust to run business-critical workloads…(personally, I always love to build my own stack and play around with it…but can’t risk experimenting with serious enterprise applications)&lt;/p&gt;

&lt;p&gt;in the meantime…you can play around with OpenShift with a free trial, or install it on your laptop with &lt;a href="https://developers.redhat.com/products/codeready-containers/overview"&gt;CodeReady Containers from here&lt;/a&gt; (your laptop might need a good cooler :-) )…&lt;/p&gt;

&lt;p&gt;Enjoy!!! stay safe..ttyl…&lt;/p&gt;


&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.openshift.com/container-platform/4.5/welcome/index.html"&gt;https://docs.openshift.com/container-platform/4.5/welcome/index.html&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://access.redhat.com/documentation/en-us/openshift_container_platform/4.5/"&gt;https://access.redhat.com/documentation/en-us/openshift_container_platform/4.5/&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.youtube.com/channel/UCZKMj3YI0wP-kq4QYpaKdEA"&gt;https://www.youtube.com/channel/UCZKMj3YI0wP-kq4QYpaKdEA&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>kubernetes</category>
      <category>openshift</category>
      <category>cloud</category>
      <category>architecture</category>
    </item>
    <item>
      <title>Java Serverless on Steroids with fn+GraalVM Hands-On</title>
      <dc:creator>A B Vijay Kumar</dc:creator>
      <pubDate>Tue, 18 Jan 2022 12:26:34 +0000</pubDate>
      <link>https://dev.to/abvijaykumar/java-serverless-on-steroids-with-fngraalvm-hands-on-1f93</link>
      <guid>https://dev.to/abvijaykumar/java-serverless-on-steroids-with-fngraalvm-hands-on-1f93</guid>
      <description>&lt;h2&gt;
  
  
  Java Serverless on Steroids with fn+GraalVM Hands-On
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qENfBm0v--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2A9Z1Bv650NqE8VOvmRgb_Qw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qENfBm0v--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2A9Z1Bv650NqE8VOvmRgb_Qw.png" alt="" width="563" height="172"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Function-as-a-Service, or Serverless, is the most economical way to run code, using cloud resources only when needed. The serverless approach runs the code when a request is received: the code boots up, executes, handles the request, and shuts down, thus utilizing cloud resources optimally. This provides a highly available, scalable architecture at the most optimal cost. However, serverless architecture demands a faster boot, quicker execution, and a quick shutdown.&lt;/p&gt;

&lt;p&gt;GraalVM native images (ahead-of-time compiled) are the best runtime for this. GraalVM native images have a very small footprint, they are fast to boot, &amp;amp; they come with an embedded VM (Substrate VM).&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;I had blogged about GraalVM here. Please refer to the following blogs for a better understanding of the architecture of GraalVM and how it builds on top of the Java Virtual Machine:&lt;br&gt;
 &lt;a href="https://medium.com/faun/episode-1-the-evolution-java-jit-hotspot-c2-compilers-building-super-optimum-containers-f0db19e6f19a"&gt;&lt;em&gt;Episode 1: “The Evolution”&lt;/em&gt; — Java JIT Hotspot &amp;amp; C2 compilers&lt;/a&gt;&lt;br&gt;
 &lt;a href="https://medium.com/@abvijaykumar/episode-2-the-holy-grail-graalvm-building-super-optimum-microservices-architecture-series-c068b72735a1"&gt;&lt;em&gt;Episode 2: “The Holy Grail”&lt;/em&gt; — GraalVM&lt;/a&gt;&lt;br&gt;
 In these blogs, I talk about how GraalVM embraces polyglot, providing interoperability between various programming languages. I also cover how it extends from Hotspot and provides faster execution and smaller footprints with “ahead-of-time” compilation &amp;amp; other optimisations&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  fn Project
&lt;/h3&gt;

&lt;p&gt;The fn project is a great environment for building serverless applications. fn supports Go, Java, JavaScript, Python, Ruby, and C#. It is a very simple and rapid application development environment that comes with the fn daemon &amp;amp; a CLI, which provide most of the scaffolding to build serverless applications.&lt;/p&gt;

&lt;p&gt;In this blog, let's focus on building a simple kg-to-pounds converter function in Java. First, we will build the serverless function with plain Java, and then build it as a GraalVM native image. We will then compare how much faster and smaller the GraalVM implementation is.&lt;/p&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Install docker (refer to &lt;a href="https://www.docker.com/"&gt;https://www.docker.com/&lt;/a&gt; for latest instructions)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Install fn (refer to &lt;a href="https://fnproject.io/"&gt;https://fnproject.io/&lt;/a&gt; for latest instructions)&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  1. Starting fn daemon
&lt;/h3&gt;

&lt;p&gt;Start the fn daemon server using fn start.&lt;/p&gt;

&lt;p&gt;The fn server runs in Docker; you can check that by running docker ps. The screenshot below shows what I see on my computer.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Zj-tPMaG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/3700/1%2AY80KsvAfq2sl9KVRfFgfbA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Zj-tPMaG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/3700/1%2AY80KsvAfq2sl9KVRfFgfbA.png" alt="" width="880" height="56"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Generating the fn boilerplate code
&lt;/h3&gt;

&lt;p&gt;Now we can generate the boilerplate code with:&lt;/p&gt;

&lt;p&gt;fn init --runtime java converterFunc&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7dPohjj6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2012/1%2AgxW064ryrszhRFWf5ycOPg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7dPohjj6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2012/1%2AgxW064ryrszhRFWf5ycOPg.png" alt="" width="880" height="124"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This creates a folder converterFunc with all the boilerplate code.&lt;/p&gt;

&lt;p&gt;cd converterFunc&lt;/p&gt;

&lt;p&gt;Let’s inspect what is inside that folder. You will see a func.yaml, a pom.xml &amp;amp; a src folder.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1EiKH1mI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2256/1%2A65BpLRdBtHPsGo4_tmSrLw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1EiKH1mI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2256/1%2A65BpLRdBtHPsGo4_tmSrLw.png" alt="" width="880" height="247"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;func.yaml is the main manifest file; it has the key information about the class that implements the function and its entry point. Let's inspect it:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;schema_version: 20180708
name: converterfunc
version: 0.0.1
runtime: java
build_image: fnproject/fn-java-fdk-build:jdk11-1.0.118
run_image: fnproject/fn-java-fdk:jre11-1.0.118
cmd: com.example.fn.HelloFunction::handleRequest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;name&lt;/strong&gt;: The name of the function; this is the name we specified on the fn init command line&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;version&lt;/strong&gt;: The version of this function&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;runtime&lt;/strong&gt;: The runtime; in this case, the Java Virtual Machine&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;build_image&lt;/strong&gt;: The Docker image that should be used to build the Java code; in this case, JDK 11&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;run_image&lt;/strong&gt;: The Docker image that should be used as the runtime; in this case, JRE 11&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;cmd&lt;/strong&gt;: The entry point, in the form ClassName::MethodName&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;fn has all the information that it needs in this yaml to build and run the function when it is invoked.&lt;/p&gt;

&lt;p&gt;Now let's look at the maven file (pom.xml).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ddfL4glB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AwmxsQ-XW4giNa_2M0DVS6Q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ddfL4glB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AwmxsQ-XW4giNa_2M0DVS6Q.png" alt="" width="880" height="413"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We see the repository from which the fn dependencies are to be pulled&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--op5oL36A--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AqjL9JCzoBKyhmH2xSajM2Q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--op5oL36A--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AqjL9JCzoBKyhmH2xSajM2Q.png" alt="" width="880" height="813"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;and the dependencies com.fnproject.fn.api, com.fnproject.fn.testing-core, com.fnproject.fn.testing-junit4.&lt;/p&gt;

&lt;p&gt;In the src folder, we will find HelloFunction.java, which is the default boilerplate code generated by fn.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LOsoynQ6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2624/1%2A0wLVrmVYp2BYW6tMLSVENg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LOsoynQ6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2624/1%2A0wLVrmVYp2BYW6tMLSVENg.png" alt="" width="880" height="295"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The code is very straightforward. It has a handleRequest() method, which takes a String as input and returns a String as output. We can write our function logic in this method; this is the method that fn calls when we invoke the function.&lt;/p&gt;
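For reference, the default boilerplate looks roughly like this (the generated class sits in package com.example.fn, matching the cmd entry in func.yaml; the package declaration is omitted here so the sketch is self-contained):

```java
public class HelloFunction {

    // fn calls this method when the function is invoked
    public String handleRequest(String input) {
        // Fall back to "world" when no input is supplied
        String name = (input == null || input.isEmpty()) ? "world" : input;
        return "Hello, " + name + "!";
    }
}
```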

&lt;h3&gt;
  
  
  3. Writing our logic
&lt;/h3&gt;

&lt;p&gt;Let's build our converter application. I am going to place it in the path src/main/java/com/abvijay/converter, and name the class file ConverterFunction.java&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Esq2JeKE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2448/1%2AtACy0aArSLd0Sx7_oTQY7w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Esq2JeKE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2448/1%2AtACy0aArSLd0Sx7_oTQY7w.png" alt="" width="880" height="454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The code is very straightforward: it expects a kgs value as a String, converts it to a Double, calculates the pound value, and returns it back as a String. (I did not write a lot of exception handling for edge conditions, to keep it simple.)&lt;/p&gt;
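The screenshot above carries the actual code; as a self-contained sketch, the conversion logic described here might look like this (the class and method names match the func.yaml entry below, but the exact body is an assumption):

```java
public class ConverterFunction {

    // 1 kg is approximately 2.20462 pounds
    private static final double POUNDS_PER_KG = 2.20462;

    // fn calls this method when the function is invoked
    public String handleRequest(String input) {
        // Expect a kgs value as a String; no exception handling for
        // edge conditions, to keep it simple
        double kgs = Double.parseDouble(input.trim());
        double pounds = kgs * POUNDS_PER_KG;
        return String.valueOf(pounds);
    }
}
```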

&lt;p&gt;Now we need to update the func.yaml to point to our new class.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VxDxaZqz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2340/1%2AWW6db8cM18FaGfnqe-Qs0g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VxDxaZqz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2340/1%2AWW6db8cM18FaGfnqe-Qs0g.png" alt="" width="880" height="239"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Check line 7, which is changed to point to the new class and method.&lt;/p&gt;
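After the change, the cmd entry in func.yaml points to our class and method (the rest of the file stays as generated):

```yaml
cmd: com.abvijay.converter.ConverterFunction::handleRequest
```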

&lt;h3&gt;
  
  
  4. Build &amp;amp; Deploy the serverless container to the local docker
&lt;/h3&gt;

&lt;p&gt;Functions are grouped into applications; an application can have multiple functions, which helps in grouping and managing them. So we need to create a converter-app:&lt;/p&gt;

&lt;p&gt;fn create app converter-app&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ts4v5jyq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2A5zVwhEGPXnXoxcJdyysWBA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ts4v5jyq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2A5zVwhEGPXnXoxcJdyysWBA.png" alt="" width="826" height="72"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the app is created, we can now deploy the app.&lt;/p&gt;

&lt;p&gt;fn deploy --app converter-app --local&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_mFvO0oB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2336/1%2AmwUPGVBZjSvtgXqIbmkdNg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_mFvO0oB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2336/1%2AmwUPGVBZjSvtgXqIbmkdNg.png" alt="" width="880" height="163"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The fn deploy command builds the code using Maven, packages it as a Docker image, and deploys it to the local Docker runtime. fn can also be used to deploy to the cloud or a Kubernetes cluster directly.&lt;/p&gt;

&lt;p&gt;Let's now use the docker images command to check that our image is built.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ElwhbDqT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/3872/1%2AzjptFdg-Jmwlp2cOTsr6Nw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ElwhbDqT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/3872/1%2AzjptFdg-Jmwlp2cOTsr6Nw.png" alt="" width="880" height="50"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can also use fn inspect to get all the details about the function; this helps with service discovery.&lt;/p&gt;

&lt;p&gt;fn inspect function converter-app converterfunc&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gIgJ3Fw---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/3220/1%2A9-OFI9ci4BfN19gNjFCTKw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gIgJ3Fw---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/3220/1%2A9-OFI9ci4BfN19gNjFCTKw.png" alt="" width="880" height="285"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Running and Testing
&lt;/h3&gt;

&lt;p&gt;Now let's invoke the function. Since it expects a numeric input argument, we can pass the value using an echo command and pipe the output to fn invoke:&lt;/p&gt;

&lt;p&gt;echo -n '10' | fn invoke converter-app converterfunc&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TkGm5Q9M--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2112/1%2AtTPypp3Hk8dxO3R4if8xpA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TkGm5Q9M--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2112/1%2AtTPypp3Hk8dxO3R4if8xpA.png" alt="" width="880" height="67"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can see the result coming from the function. Now let's run the same logic on GraalVM.&lt;/p&gt;

&lt;h3&gt;
  
  
  6. Run on GraalVM, as a native-image
&lt;/h3&gt;

&lt;p&gt;The base image for GraalVM is different: we use fnproject/fn-java-native-init as the init-image, and initialize our fn project with it:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;fn init --init-image fnproject/fn-java-native-init converterfuncongraal

cd converterfuncongraal
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--WZSYqA_X--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2284/1%2AfQLIB7FFjPhQb6yb_tWVbA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--WZSYqA_X--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2284/1%2AfQLIB7FFjPhQb6yb_tWVbA.png" alt="" width="880" height="220"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This fn configuration works differently: it also generates a Dockerfile with all the necessary Docker build commands. This is a multi-stage Docker build file. Let's inspect this Dockerfile.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OMgrGj3F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/4052/1%2AaH_j-77UOt8GTCDDRCz41w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OMgrGj3F--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/4052/1%2AaH_j-77UOt8GTCDDRCz41w.png" alt="" width="880" height="433"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;line 17&lt;/strong&gt;: The image will be built using fnproject/fn-java-fdk-build.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;line 18&lt;/strong&gt;: Sets the working directory to /function.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;lines 19–23&lt;/strong&gt;: The Maven environment is configured.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;lines 25–40&lt;/strong&gt;: Using fnproject/fn-java-native as the base image, GraalVM is configured and the fn runtime is compiled to a native image. This is a very important step; it is what makes our serverless runtime faster, with a smaller footprint.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;lines 43–47&lt;/strong&gt;: Using the busybox:glibc base image (a minimal Linux + glibc), the native images are copied in.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;line 48&lt;/strong&gt;: The function entry point. With this way of building the serverless image, func.yaml carries little information; fn uses the Dockerfile (along with Maven) to perform the build and deploy the image to the repository.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
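As a condensed sketch, the generated multi-stage Dockerfile follows the shape below (image tags, stage names, and the native-image flags here are assumptions; the generated file is the source of truth):

```dockerfile
# Stage 1: build the function jar with Maven
FROM fnproject/fn-java-fdk-build:jdk11-1.0.118 as build-stage
WORKDIR /function
COPY pom.xml .
COPY src ./src
RUN mvn package

# Stage 2: compile the fn runtime and function into a GraalVM native image
FROM fnproject/fn-java-native as graalvm-stage
WORKDIR /function
COPY --from=build-stage /function /function
RUN native-image -jar target/function.jar func

# Stage 3: minimal runtime image containing only the native binary
FROM busybox:glibc
COPY --from=graalvm-stage /function/func /function/func
ENTRYPOINT [ "/function/func" ]
CMD [ "com.example.fn.HelloFunction::handleRequest" ]
```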

&lt;p&gt;Now we need to change line 48 to point to our class. Let's replace it with:&lt;/p&gt;

&lt;p&gt;CMD [ "com.abvijay.converter.ConverterFunction::handleRequest" ]&lt;/p&gt;

&lt;p&gt;Another important configuration file that we need to change is reflection.json, under src/main/conf. This JSON file has the manifest information about the class name and the methods.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cREvSmo1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2ALcUBUZJNNfKiQK1DQN84eg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cREvSmo1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2ALcUBUZJNNfKiQK1DQN84eg.png" alt="" width="880" height="355"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's change that to&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--uNDGpSkK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2Ax9eusMxk4fnLIEBUoPAWzg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--uNDGpSkK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2Ax9eusMxk4fnLIEBUoPAWzg.png" alt="" width="848" height="340"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now let’s create a new app, deploy it, and invoke it:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;fn create app graal-converter-app
fn deploy --app graal-converter-app --local
echo -n '20' | fn invoke graal-converter-app converterfuncongraal
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--70PqIjiG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2640/1%2APSg61C9gPO7-Px5g7nv6Lw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--70PqIjiG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2640/1%2APSg61C9gPO7-Px5g7nv6Lw.png" alt="" width="880" height="47"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There you go, our code is now running on GraalVM. So what's the big deal? When I ran docker images, I saw that the Java image is 223 MB while the GraalVM image is just 20 MB. That is roughly a 10-times-smaller footprint.&lt;/p&gt;

&lt;p&gt;When I timed the function calls, the Java function took around 700 ms while GraalVM took around 460 ms. That is about 35% faster. For functions with more complex logic, the difference can be even more significant.&lt;/p&gt;

&lt;p&gt;Java HotSpot might catch up with these numbers, but only if the function runs long enough for the just-in-time compiler to kick in and optimize the code. Since most functions are expected to be quick and short-running, it does not make sense to compare against those JIT benchmarks.&lt;/p&gt;

&lt;p&gt;There you go…I hope this was fun…ttyl :-)&lt;/p&gt;

&lt;h3&gt;
  
  
  References:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/fnproject/cli"&gt;https://github.com/fnproject/cli&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://fnproject.io/"&gt;https://fnproject.io/&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.docker.com/develop/develop-images/multistage-build/"&gt;https://docs.docker.com/develop/develop-images/multistage-build/&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>graalvm</category>
      <category>serverless</category>
      <category>microservices</category>
      <category>java</category>
    </item>
    <item>
      <title>Platform Engineering with Pulumi — Episode 3: Platform &amp; Application Deployment with GitOps Automation</title>
      <dc:creator>A B Vijay Kumar</dc:creator>
      <pubDate>Tue, 18 Jan 2022 12:21:50 +0000</pubDate>
      <link>https://dev.to/abvijaykumar/platform-engineering-with-pulumi-episode-3-platform-application-deployment-with-gitops-automation-13md</link>
      <guid>https://dev.to/abvijaykumar/platform-engineering-with-pulumi-episode-3-platform-application-deployment-with-gitops-automation-13md</guid>
      <description>&lt;h2&gt;
  
  
  Platform Engineering with Pulumi — Episode 3: Platform &amp;amp; Application Deployment with GitOps Automation
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Automate the deployment of the app with GitHub Actions and CodeDeploy.
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vASAiO1H--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/NaN/1%2APqvYUGoCgCAskeybM-o_GA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vASAiO1H--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/NaN/1%2APqvYUGoCgCAskeybM-o_GA.png" alt="" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In &lt;a href="https://aws.plainenglish.io/platform-engineering-with-pulumi-episode-1-building-the-aws-landing-zone-with-pulumi-67b559523c78"&gt;Episode 1&lt;/a&gt; of this blog series, we built an AWS landing zone for our React/Node.js application using Pulumi. In &lt;a href="https://aws.plainenglish.io/platform-engineering-with-pulumi-episode-2-building-and-deploying-the-nodejs-application-6217ed039e6"&gt;Episode 2&lt;/a&gt;, we built a simple React app and an Express API server, and manually deployed the app on the landing zone. In this episode, we will automate the deployment with GitHub Actions and CodeDeploy.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;For the sample application used in this blog, I have used a multi-repo setup. There is a huge debate on mono- vs. multi-repos; I might blog about that separately, but for now let's assume multi-repo is the best fit for this application and go ahead :-D.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Before we configure GitHub Actions, we have to modify the infrastructure code to create the CodeDeploy configurations.&lt;/p&gt;

&lt;h2&gt;
  
  
  Modifying the Infra code to create CodeDeploy
&lt;/h2&gt;

&lt;p&gt;In &lt;a href="https://aws.plainenglish.io/platform-engineering-with-pulumi-episode-1-building-the-aws-landing-zone-with-pulumi-67b559523c78"&gt;Episode 1&lt;/a&gt;, we did not create the CodeDeploy configurations. Find below the modifications made to the Python code to create the appropriate roles and CodeDeploy configurations. Let's walk through the code (please refer to my &lt;a href="https://github.com/abvijaykumar/contactlist-blog-infra"&gt;GitHub&lt;/a&gt; for the latest code).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--edBBZp7C--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AJ1oNCEB9AtfcGZrDdJvqsQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--edBBZp7C--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AJ1oNCEB9AtfcGZrDdJvqsQ.png" alt="" width="880" height="712"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the above code, we are creating a Role to give the CodeDeploy service access, and attaching the appropriate access policies (AmazonEC2FullAccess, AWSCodeDeployFullAccess, AdministratorAccess, AWSCodeDeployRole).&lt;/p&gt;

&lt;p&gt;Let's now create the CodeDeploy application&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mHc_dhwQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2692/1%2AD-pg-ALLcmlmQihXkexSuw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mHc_dhwQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2692/1%2AD-pg-ALLcmlmQihXkexSuw.png" alt="" width="880" height="150"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the above code, we are creating a CodeDeploy application. Let's now create deployment groups.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--fMzUqMbr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AaKFxSHfJtdOHDdJ5kBlKDg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fMzUqMbr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AaKFxSHfJtdOHDdJ5kBlKDg.png" alt="" width="880" height="272"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the above code, we are creating a CodeDeploy Deployment Group with the Role that we created. This deployment group will be used to deploy the API code (Node.js/Express). The code below creates a CodeDeploy Deployment Group to deploy the React application. Both deployment groups are created under the same CodeDeploy application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tBOhCcHL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2A-EUXjs2Vfzfqk_8sC7CyrQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tBOhCcHL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2A-EUXjs2Vfzfqk_8sC7CyrQ.png" alt="" width="880" height="246"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's now run the code with pulumi up.&lt;/p&gt;

&lt;h2&gt;
  
  
  GitOps Deploying Node.js API application
&lt;/h2&gt;

&lt;p&gt;Now we have all the infrastructure ready.&lt;/p&gt;

&lt;p&gt;Let's first create a CodeDeploy configuration file (appspec.yml) in our Node.js application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--pQXGRq96--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2Apz2EolFkbxVV4yZwPsvFjQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--pQXGRq96--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2Apz2EolFkbxVV4yZwPsvFjQ.png" alt="" width="529" height="346"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the appspec.yml, we are configuring the source and destination (on EC2) folders. This configuration is used by CodeDeploy to deploy the application files in the appropriate folder (/home/ec2-user/contacts-api).&lt;/p&gt;
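A hedged sketch of such an appspec.yml (the hook script names, timeouts, and runas user are assumptions based on the description in this post):

```yaml
version: 0.0
os: linux
files:
  - source: /
    destination: /home/ec2-user/contacts-api
hooks:
  BeforeInstall:
    - location: setup.sh
      timeout: 300
      runas: ec2-user
  ApplicationStart:
    - location: appStart.sh
      timeout: 300
      runas: ec2-user
```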

&lt;p&gt;We are also configuring two hooks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;BeforeInstall hook&lt;/strong&gt;: A shell script that we are asking CodeDeploy to run before deploying the application code, in which we set up the required environment. The following is the code for setup.sh, where we install NVM and create the directory for CodeDeploy to deploy the source files into:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xTQUJq0i--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2588/1%2A8Zud5pL5CF9OWrvANMX9yw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xTQUJq0i--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2588/1%2A8Zud5pL5CF9OWrvANMX9yw.png" alt="" width="880" height="411"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;ApplicationStart hook&lt;/strong&gt;: Another shell script that we are asking CodeDeploy to run, to start the application after deploying the application code. The following is the code for appStart.sh, where we set up the nvm environment variables, run npm install to install all the dependencies, and then run our application:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VdZZhovR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2384/1%2AJgmNIKaH5iAztHqaQJyjvA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VdZZhovR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2384/1%2AJgmNIKaH5iAztHqaQJyjvA.png" alt="" width="880" height="536"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's now build a GitOps pipeline for the Node.js application. The following is the GitHub Actions workflow, which gets triggered on pull request and push.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XMhKToV_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2ApW6DYLH27BjKDdAZ3Z68hQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XMhKToV_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2ApW6DYLH27BjKDdAZ3Z68hQ.png" alt="" width="768" height="610"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The above code is very simple. We are using a ubuntu-latest VM instance and setting the AWS credentials (the AWS_IAM_ACCESS_KEY, AWS_SECRET_ACCESS_KEY, and AWS_REGION values are already configured in GitHub under Settings-&amp;gt;Secrets. Here is a screenshot).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OzZYZhgS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/3476/1%2AXNlpV2JCrK7HrOMYwV6DwA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OzZYZhgS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/3476/1%2AXNlpV2JCrK7HrOMYwV6DwA.png" alt="" width="880" height="319"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We are then triggering the appropriate CodeDeploy deployment group that we configured using Pulumi in the section above.&lt;/p&gt;

&lt;p&gt;Now that we have all the pipeline code, let's test it by running the GitHub Actions workflow and watching the execution of the workflow and the deployment.&lt;/p&gt;

&lt;p&gt;Let’s now automate the deployment of the React app.&lt;/p&gt;

&lt;h2&gt;
  
  
  GitOps Deploying React application
&lt;/h2&gt;

&lt;p&gt;To deploy the React app, the steps are similar. We first configure the AWS key, region, and secret in GitHub secrets.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--V-EweNha--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/NaN/1%2AiQegcWYOvAUnw8GXRh3J-Q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--V-EweNha--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/NaN/1%2AiQegcWYOvAUnw8GXRh3J-Q.png" alt="" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's create the GitHub Actions workflow. The following is the code:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Nx4gawVe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2976/1%2AmEDblZPStXxNGLitv1KuoA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Nx4gawVe--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2976/1%2AmEDblZPStXxNGLitv1KuoA.png" alt="" width="880" height="664"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This code is very similar to the Node.js deployment, except that React uses a different deployment group: pulumi-blog-app-codedeploy-deploymentgroup.&lt;/p&gt;

&lt;p&gt;Let's add the appspec.yml to the React root folder. The following is the source code. Similar to what we did with the Node.js application, we have two hooks: one to set up the environment and another to start the application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--MKk-XlwW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AnWGpGahHvCuW8C4Bj7hFPg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--MKk-XlwW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AnWGpGahHvCuW8C4Bj7hFPg.png" alt="" width="400" height="298"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's create the startup.sh, where we set up the environment before we deploy the application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--sy4iDZRX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2828/1%2ARY-dAjhXmaHfeQunTS7ddA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--sy4iDZRX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2828/1%2ARY-dAjhXmaHfeQunTS7ddA.png" alt="" width="880" height="412"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the above code, we are installing nvm and making sure the contacts-app folder is created.&lt;/p&gt;

&lt;p&gt;The following is the code for running the application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--WcGixCfQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AdYdVBFPv2CpPrFCu5XsHGA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--WcGixCfQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AdYdVBFPv2CpPrFCu5XsHGA.png" alt="" width="536" height="532"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the above script, we are installing pm2 &amp;amp; serve, building the application, and running it. These are the exact steps we followed in &lt;a href="https://aws.plainenglish.io/platform-engineering-with-pulumi-episode-2-building-and-deploying-the-nodejs-application-6217ed039e6"&gt;Episode 2&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Let's now execute the workflow. The following is the screen capture video of the workflow execution.&lt;/p&gt;

&lt;p&gt;Now we have all the pipelines configured. Whenever the application code changes, the respective GitHub Actions workflow will deploy the application to the EC2 instance.&lt;/p&gt;

&lt;p&gt;We need one more GitOps pipeline for the infrastructure code. I had planned to cover that in this episode, but it has already become a long one, so I will publish it in the next episode.&lt;/p&gt;

&lt;p&gt;Until then, take care and have fun. I hope this was helpful :-D&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;React app Repository: &lt;a href="https://github.com/abvijaykumar/contactlist-blog-react-app"&gt;https://github.com/abvijaykumar/contactlist-blog-react-app&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Express API repo: &lt;a href="https://github.com/abvijaykumar/contactlist-blog-app"&gt;https://github.com/abvijaykumar/contactlist-blog-app&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Pulumi Infrastructure code repo: &lt;a href="https://github.com/abvijaykumar/contactlist-blog-infra"&gt;https://github.com/abvijaykumar/contactlist-blog-infra&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://discord.gg/GtDtUAvyhW"&gt;Community Discord&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>gitops</category>
      <category>javascript</category>
      <category>python</category>
      <category>aws</category>
    </item>
    <item>
      <title>Building GraalVM Native Image of a Polyglot Java+numpy application</title>
      <dc:creator>A B Vijay Kumar</dc:creator>
      <pubDate>Sun, 16 Jan 2022 13:50:36 +0000</pubDate>
      <link>https://dev.to/abvijaykumar/building-graalvm-native-image-of-a-polyglot-javanumpy-application-26l</link>
      <guid>https://dev.to/abvijaykumar/building-graalvm-native-image-of-a-polyglot-javanumpy-application-26l</guid>
      <description>&lt;h2&gt;
  
  
  Building GraalVM Native Image of a Polyglot Java+numpy application
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ceNkBMgs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/3088/1%2AOPwFcd33dbbrJIH6MW_ZvA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ceNkBMgs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/3088/1%2AOPwFcd33dbbrJIH6MW_ZvA.png" alt="" width="880" height="296"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One of the greatest features of GraalVM is that it provides a universal runtime for running code written in different languages. This opens up a huge opportunity to reuse existing tested and hardened code without rewriting it in the target language. This is very handy when code is tough to migrate, or when the guest language has features that make it the obvious implementation choice. For example, Python and R are known for their rich libraries and the simplicity they provide in building data science and machine learning applications.&lt;/p&gt;

&lt;p&gt;Before I get into the actual topic, let me introduce GraalVM. Here are some blogs on GraalVM that I published earlier.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Episode 1: “The Evolution” — Java JIT Hotspot &amp;amp; C2 compilers&lt;/em&gt;&lt;br&gt;
 &lt;a href="https://medium.com/@abvijaykumar/episode-2-the-holy-grail-graalvm-building-super-optimum-microservices-architecture-series-c068b72735a1"&gt;&lt;em&gt;Episode 2: “The Holy Grail” — GraalVM&lt;/em&gt;&lt;/a&gt;&lt;br&gt;
 &lt;a href="https://abvijaykumar.medium.com/java-serverless-on-steroids-with-fn-graalvm-hands-on-3f95e8f0de16"&gt;&lt;em&gt;Java Serverless on Steroids with fn+GraalVM Hands-On&lt;/em&gt;&lt;/a&gt;&lt;br&gt;
 &lt;em&gt;This blog provides a hands-on example of how to build a serverless application using the fn project and run it on GraalVM&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I also blogged about what is inside the book here. &lt;a href="https://abvijaykumar.medium.com/supercharge-your-applications-with-graalvm-book-4e1693babb91"&gt;Supercharge Your Applications with GraalVM — Book&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can check out the book at these links:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.packtpub.com/product/supercharge-your-applications-with-graalvm/9781800564909"&gt;https://www.packtpub.com/product/supercharge-your-applications-with-graalvm/9781800564909&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.amazon.com/Supercharge-Your-Applications-GraalVM-hands/dp/1800564902"&gt;https://www.amazon.com/Supercharge-Your-Applications-GraalVM-hands/dp/1800564902&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this blog, we will explore how we can use the GraalVM polyglot library to call a Python program that uses numpy from a Java application.&lt;/p&gt;

&lt;p&gt;This python program performs a simple data analysis of a dataset that I picked from Kaggle (&lt;a href="https://www.kaggle.com/rashikrahmanpritom/heart-attack-analysis-prediction-dataset"&gt;https://www.kaggle.com/rashikrahmanpritom/heart-attack-analysis-prediction-dataset&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;This dataset has records of people who had a heart attack and various important information about these patients, which will help us do some data analysis to identify patterns.&lt;/p&gt;

&lt;p&gt;Before we get hands-on, we need to understand the Polyglot architecture of GraalVM. GraalVM comes with a framework called “Truffle”, which allows polyglot interoperability on GraalVM.&lt;/p&gt;

&lt;h2&gt;
  
  
  GraalVM Polyglot Architecture — Truffle
&lt;/h2&gt;

&lt;p&gt;Truffle is an open-source library that provides a framework to implement language interpreters. Truffle helps run guest programming languages that implement the framework to utilize the Graal compiler features to generate high-performance code. Truffle also provides a tools framework that helps integrate and utilize some of the modern diagnostic, debugging, and analysis tools.&lt;/p&gt;

&lt;p&gt;Let’s understand how Truffle fits into the overall GraalVM ecosystem. Along with interoperability between the languages, Truffle also provides embeddability. Interoperability allows the calling of code between different languages, while embeddability allows the embedding of code written in different languages in the same program.&lt;/p&gt;

&lt;p&gt;Language interoperability is critical for the following reasons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Different programming languages are built to solve different problems, and they come with their own strengths. For example, we use Python and R extensively for machine learning and data analytics, and we use C/C++ for high-performance mathematical operations. Imagine being able to reuse the code as is, either by calling it from a host language (such as Java) or embedding it within the host language. This increases the reusability of the code and allows us to use an appropriate language for the task at hand, rather than rewriting the logic in different languages.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Large migration projects where we are moving from one language to another can be phased out if we have the feature of multiple programming language interoperability. This brings down the risk of migration considerably.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The following figure illustrates how to run applications written in other languages on GraalVM:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cY-CBuEf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AWnQPbLq_waLDbHIHS3fuRw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cY-CBuEf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AWnQPbLq_waLDbHIHS3fuRw.png" alt="" width="601" height="181"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the figure, we can see GraalVM, which is the JVM and Graal JIT compiler that we covered earlier. On top of that, we have the Truffle framework. Truffle has two major components. They are as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Truffle API&lt;/strong&gt;: The Truffle API is the language implementation framework that any guest language programmers can use to implement the Truffle interpreter for their respective languages. Truffle provides a sophisticated API for &lt;strong&gt;Abstract Syntax Tree&lt;/strong&gt; (&lt;strong&gt;AST&lt;/strong&gt;) rewriting. The guest language is converted to an AST for optimizing and running on GraalVM. The Truffle API also helps in providing an interoperability framework between languages that implement the Truffle API.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Truffle optimizer&lt;/strong&gt;: The Truffle optimizer provides an additional layer of optimization for speculative optimization with partial evaluation. We will be going through this in more detail in the subsequent sections.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Above the Truffle layer, we have the guest language. This is JavaScript, R, Ruby, and others that implement the Truffle Language Implementation framework. Finally, we have the application that runs on top of the guest language runtime. In most cases, application developers don’t have to worry about changing the code to run on GraalVM. Truffle makes it seamless by providing a layer in between.&lt;/p&gt;

&lt;p&gt;Truffle provides the API that the individual interpreters implement to rewrite the code into ASTs. The AST representation is later converted to a Graal intermediate representation for Graal to execute and optimize just in time. The guest languages run on top of the Truffle interpreter implementations of the respective guest languages. To read and understand more about how Truffle works, please refer to my &lt;a href="https://www.packtpub.com/product/supercharge-your-applications-with-graalvm/9781800564909"&gt;book&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Hands-On — Java &amp;amp; Python Interoperability
&lt;/h2&gt;

&lt;p&gt;The dataset has various columns; the key columns are age, sex, chest pain, cholesterol levels, etc. The following is a screenshot of the dataset.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gBZGVZnf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2A_f4ctdyyVItA5PdCCsgaBQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gBZGVZnf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2A_f4ctdyyVItA5PdCCsgaBQ.png" alt="" width="664" height="139"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's now build a simple numpy module (in Python) that returns the data of people who have level 3 chest pain, along with their average age. Then we will call this Python method from Java.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Setup Environment
&lt;/h3&gt;

&lt;p&gt;Let's first start by installing GraalVM. You can refer to the GraalVM documentation on installing it on your target OS. I always prefer using Visual Studio Code, as it provides a great integrated environment and helps manage the environment and different versions of GraalVM with ease.&lt;/p&gt;

&lt;p&gt;You can install GraalVM on Visual Studio Code as an extension. Find below a screenshot of where to locate it in Visual Studio Code.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--p2MCcmab--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2772/1%2AApNtHDSgKvNBYHzqC4G_Fw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--p2MCcmab--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2772/1%2AApNtHDSgKvNBYHzqC4G_Fw.png" alt="" width="880" height="186"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can install either Community or Enterprise (or both, as VSCode provides a way to have multiple environments and a very easy way to switch between them). In my case, I am installing the Community edition.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BKmFOo7Z--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2114/1%2ALVx81XWFxLg8TotHf5_HFg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BKmFOo7Z--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2114/1%2ALVx81XWFxLg8TotHf5_HFg.png" alt="" width="880" height="420"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once I install it, VSCode also helps set the respective environment variables. This ensures that the integrated terminal points to the right version of GraalVM. (This can easily be switched with other versions.)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lCvLeBKz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AZ9G3oehWqhN1TYULUN0wJg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lCvLeBKz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AZ9G3oehWqhN1TYULUN0wJg.png" alt="" width="749" height="183"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the environment is set, we also need to install the other optional runtimes. We will need the Python, LLVM, and Native Image runtimes. You should be able to install them by clicking the “+” button.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6oh9PvqL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2A-k0R2Fw3OXUPEPwun80IKw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6oh9PvqL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2A-k0R2Fw3OXUPEPwun80IKw.png" alt="" width="502" height="216"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once all the optional runtimes are installed, you can check the versions in the VSCode integrated terminal.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--LsULMhLK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2A_CIw48QyGY5F44TPyiZ7zw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--LsULMhLK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2A_CIw48QyGY5F44TPyiZ7zw.png" alt="" width="715" height="139"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's now create a virtual environment. Instead of the standard Python interpreter, we will use graalpython. The following is the command to create a virtual environment with graalpython:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;graalpython -m venv ab_venv&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;To activate the virtual environment, we execute &lt;code&gt;source ab_venv/bin/activate&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NkAFB8xa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AoQF_1YuXiUqnkGxy8IAQcQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NkAFB8xa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AoQF_1YuXiUqnkGxy8IAQcQ.png" alt="" width="620" height="277"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's set the Python home environment variable:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;export GRAAL_PYTHONHOME=$GRAALVM_HOME/languages/python&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Let's now install numpy. Once again, we will use the graalpython command line to install packages. The following is the command for installing numpy:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;graalpython -m ginstall install numpy&lt;/code&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Build and test Python application
&lt;/h3&gt;

&lt;p&gt;The following is the Python code. It's a very simple numpy API call that calculates the averages and returns the values &lt;code&gt;dataOfPeopleWith3ChestPain&lt;/code&gt; and &lt;code&gt;averageAgeofPeopleWith3ChestPain&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;You can find the latest code in my GitHub repository &lt;a href="https://github.com/abvijaykumar/graalvm-numpy-polyglot"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--aJnGytXb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2078/1%2AreCDiEDvgBIXQKrqZlljPA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--aJnGytXb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2078/1%2AreCDiEDvgBIXQKrqZlljPA.png" alt="" width="880" height="308"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, it is a very simple Python application. We load the CSV file (the dataset that we downloaded from Kaggle), then perform a simple statistical calculation and return the average age of the people who had level 3 chest pain before a heart attack.&lt;/p&gt;
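
&lt;p&gt;Since the Python source is shown as a screenshot, the following is a hedged, standard-library-only sketch of the same analysis. The article's actual heartAnalysis() uses numpy and reads the Kaggle CSV from disk; the column names age and cp and the inline sample rows below are assumptions for illustration:&lt;/p&gt;

```python
import csv
import io

def heart_analysis(csv_text):
    """Return the rows with chest-pain level 3 and their average age."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    # 'cp' is assumed to be the chest-pain-level column, as in the Kaggle dataset
    level3 = [r for r in rows if int(r["cp"]) == 3]
    average_age = sum(int(r["age"]) for r in level3) / len(level3)
    return level3, average_age

# A tiny inline sample standing in for the downloaded dataset
sample = "age,cp\n63,3\n37,2\n41,3\n56,1\n"
people, avg = heart_analysis(sample)
print(len(people), avg)  # prints: 2 52.0
```

&lt;p&gt;The numpy version of this logic would replace the list comprehension and sum with a boolean mask over the array and a call to numpy's mean on the age column.&lt;/p&gt;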

&lt;p&gt;To check that our application runs, we will use graalpython:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;graalpython heartAnalysis.py&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You should be able to see the output from the application. Here is what I see.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kYw9A4P0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AWCft69E9glL99KYXbMUFKw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kYw9A4P0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AWCft69E9glL99KYXbMUFKw.png" alt="" width="371" height="48"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now that we know our Python application is running and we have exposed the heartAnalysis() method, let's build a Java application to call this method.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: Build and Test Java Application
&lt;/h3&gt;

&lt;p&gt;Find below the Java application that calls the Python method we developed in Step 2.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tDZ3vG6Y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/3232/1%2AQhbFm4pWVwQTwx4xrPViMA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tDZ3vG6Y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/3232/1%2AQhbFm4pWVwQTwx4xrPViMA.png" alt="" width="880" height="628"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's understand this Java code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lines 1–4&lt;/strong&gt;: We are importing the following Java libraries:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;java.io.File&lt;/code&gt;: as we will be loading the Python source code file into the application&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;org.graalvm.polyglot.Context&lt;/code&gt;: GraalVM provides a polyglot context that helps with interoperability between code written in different languages. Please refer to the API doc &lt;a href="https://www.graalvm.org/sdk/javadoc/org/graalvm/polyglot/Context.html"&gt;here&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;org.graalvm.polyglot.Source&lt;/code&gt;: This class represents the source code and its contents. We will be using this object to access the Python methods. Please refer to the API doc &lt;a href="https://www.graalvm.org/sdk/javadoc/org/graalvm/polyglot/Source.html"&gt;here&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;org.graalvm.polyglot.Value&lt;/code&gt;: This class represents a value that can be passed between the host and guest languages. In this case, the host is Java and the guest is Python. Please refer to the API doc &lt;a href="https://www.graalvm.org/sdk/javadoc/org/graalvm/polyglot/Value.html"&gt;here&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Line 8&lt;/strong&gt;: We are building and initializing the polyglot context object and setting the permissions to allow complete access. This object will help us load and run the Python code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lines 10–11&lt;/strong&gt;: We are loading the Python source code into the context and building the code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Line 13&lt;/strong&gt;: We are accessing the method definition using the bindings object. In our case, we are getting a reference to the heartAnalysis() Python method.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lines 15–18&lt;/strong&gt;: We are invoking the method and printing the results.&lt;/p&gt;

&lt;p&gt;Let us now compile the Java code and run it:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;javac HeartAnalysisJava.java&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;java HeartAnalysisJava&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Here is the screenshot of my terminal, after compiling and running the Java program.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_5WlO09f--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2636/1%2AUUspjM7jOv_RNGD0wSnTeQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_5WlO09f--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2636/1%2AUUspjM7jOv_RNGD0wSnTeQ.png" alt="" width="880" height="147"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You will see three outputs. The first one comes from the print(averageAgeofPeopleWith3ChestPain) statement inside the Python method heartAnalysis(). The second output is from the Java code invoking the heartAnalysis() Python method, and the last output is the data that the Java code received from the Python code, which we print with System.out.println on line 16.&lt;/p&gt;

&lt;p&gt;Now we have a working Java application that invokes a Python method. Let's now build a native image of this Java code.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 4: Building Native Image
&lt;/h3&gt;

&lt;p&gt;Ensure that the Native Image runtime is installed. If not, you can install it using the VSCode GraalVM plugin, as shown in the screenshot below, or by using the GraalVM Updater utility. Please refer to the documentation &lt;a href="https://www.graalvm.org/reference-manual/native-image/"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4hmujSjm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AneDg3PT-cbH9fnY1ERMnkw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4hmujSjm--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AneDg3PT-cbH9fnY1ERMnkw.png" alt="" width="491" height="198"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To build the native image, let's execute the following command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;native-image --language:python -Dorg.graalvm.launcher.relative.python.home=$GRAALVM_HOME/languages/python -Dorg.graalvm.launcher.relative.llvm.home=$GRAALVM_HOME/languages/llvm HeartAnalysisJava&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The GraalVM native-image command line is used to build the native image.&lt;/p&gt;

&lt;p&gt;The --language:python argument lets native-image know that we will be calling Python code, and ensures that Python is available as a language in the image. The other two arguments tell the native-image builder where to find the Python and LLVM runtimes.&lt;/p&gt;

&lt;p&gt;This generates a binary file, heartanalysisjava, and we can run the application directly by executing the following command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;./heartanalysisjava&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The following is the screenshot of the build.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jb0tjiKz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2640/1%2AXIWopE-5c9kJUsFCEhMNNg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jb0tjiKz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2640/1%2AXIWopE-5c9kJUsFCEhMNNg.png" alt="" width="880" height="527"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That's it for now. I hope you had fun playing around with GraalVM polyglot and building native images. I have gone into a great level of detail on how GraalVM works in my book &lt;a href="https://abvijaykumar.medium.com/supercharge-your-applications-with-graalvm-book-4e1693babb91"&gt;Supercharge Your Applications with GraalVM&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Hope this was helpful. Keep safe, have fun, until next time :-D&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;My GitHub Repository — &lt;a href="https://github.com/abvijaykumar/graalvm-numpy-polyglot"&gt;https://github.com/abvijaykumar/graalvm-numpy-polyglot&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Installing GraalVM — &lt;a href="https://www.graalvm.org/docs/getting-started/linux/"&gt;https://www.graalvm.org/docs/getting-started/linux/&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;GraalVM — &lt;a href="https://www.graalvm.org/"&gt;https://www.graalvm.org/&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
    </item>
    <item>
      <title>Platform Engineering with Pulumi Episode 2: Build and Deploy a React.js Application</title>
      <dc:creator>A B Vijay Kumar</dc:creator>
      <pubDate>Tue, 04 Jan 2022 13:19:59 +0000</pubDate>
      <link>https://dev.to/abvijaykumar/platform-engineering-with-pulumi-episode-2-build-and-deploy-a-reactjs-application-31ip</link>
      <guid>https://dev.to/abvijaykumar/platform-engineering-with-pulumi-episode-2-build-and-deploy-a-reactjs-application-31ip</guid>
      <description>&lt;h2&gt;
  
  
  Platform Engineering with Pulumi Episode 2: Build and Deploy a React.js Application
&lt;/h2&gt;

&lt;h3&gt;
  
  
  A guide on how to build the application and deploy it manually.
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vASAiO1H--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/NaN/1%2APqvYUGoCgCAskeybM-o_GA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vASAiO1H--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/NaN/1%2APqvYUGoCgCAskeybM-o_GA.png" alt="" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In &lt;a href="https://aws.plainenglish.io/platform-engineering-with-pulumi-episode-1-building-the-aws-landing-zone-with-pulumi-67b559523c78"&gt;Chapter 1&lt;/a&gt; of this blog (please refer &lt;a href="https://aws.plainenglish.io/platform-engineering-with-pulumi-episode-1-building-the-aws-landing-zone-with-pulumi-67b559523c78"&gt;here&lt;/a&gt;) we built an AWS landing zone for our React.js/Node.js application. In this episode, we will build the application and deploy it manually. In the next chapter, we will use GitOps based automated deployment of both the Infrastructure and application code.&lt;/p&gt;

&lt;p&gt;The app that we will be building is a very simple web application that creates and fetches contact details in/from DynamoDB.&lt;/p&gt;

&lt;p&gt;In &lt;a href="https://aws.plainenglish.io/platform-engineering-with-pulumi-episode-1-building-the-aws-landing-zone-with-pulumi-67b559523c78"&gt;Chapter 1&lt;/a&gt;, we already created a DynamoDB with Pulumi. Here is the snippet of the code, that creates the DynamoDB:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FtKDtvDk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2Alrp6Bz0TsCJppAwLmvz87w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FtKDtvDk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2Alrp6Bz0TsCJppAwLmvz87w.png" alt="" width="709" height="617"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The above Pulumi code, written in Python, creates a table called contacts-table with two attributes, ContactName and ContactNumber, along with other important configurations such as the hash_key, secondary index, tags, etc.&lt;/p&gt;
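
&lt;p&gt;For readers who prefer text to screenshots, a table like the one described typically looks as follows in Pulumi's Python SDK. This is a hedged sketch, not the article's actual code: the partition key choice, index name, billing mode, and tag values are assumptions:&lt;/p&gt;

```python
import pulumi_aws as aws

# Hypothetical reconstruction of the contacts table described above
contacts_table = aws.dynamodb.Table(
    "contacts-table",
    attributes=[
        aws.dynamodb.TableAttributeArgs(name="ContactName", type="S"),
        aws.dynamodb.TableAttributeArgs(name="ContactNumber", type="S"),
    ],
    hash_key="ContactName",              # assumed partition key
    billing_mode="PAY_PER_REQUEST",
    global_secondary_indexes=[
        aws.dynamodb.TableGlobalSecondaryIndexArgs(
            name="ContactNumberIndex",   # assumed index name
            hash_key="ContactNumber",
            projection_type="ALL",
        ),
    ],
    tags={"project": "contactlist-blog"},
)
```

&lt;p&gt;Like all Pulumi programs, this only takes effect when executed through pulumi up against a configured stack.&lt;/p&gt;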

&lt;p&gt;Now let's build and deploy our application, on the landing zone we created with Pulumi.&lt;/p&gt;

&lt;h3&gt;
  
  
  Node API
&lt;/h3&gt;

&lt;p&gt;To access and perform the add and fetch-all operations on the contacts database, let's build a simple Express Node.js application. You can find the source code &lt;a href="https://github.com/abvijaykumar/contactlist-blog-app"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;We could have used Next.js to do both API and App, but I wanted to demonstrate deploying multiple tiers (in a typical tiered web architecture). So please play along 🙏&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In this Node.js application, we will expose two endpoints:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;/fetchAllContacts: This endpoint connects to DynamoDB, fetches all the contacts, and returns them as a JSON response&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;/addContact: This endpoint accepts the ContactName and ContactNumber as parameters and adds the record to DynamoDB&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The use case and code are deliberately simple. Since our focus is on IaC and GitOps, the application has no security/login and no serious exception handling or logging.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/li&gt;
&lt;/ul&gt;
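
&lt;p&gt;As a dependency-free sketch of what these two endpoints do (the real app uses Express and DynamoDB; here an in-memory array stands in for the table, and the function names are mine):&lt;/p&gt;

```javascript
// Dependency-free sketch of the API contract. An in-memory array
// stands in for the DynamoDB table; the real app wires equivalent
// handlers into Express routes.
function createApi() {
  const contacts = [];
  return {
    // Mirrors GET /fetchAllContacts: return every stored contact.
    fetchAllContacts() {
      return { status: 200, body: { contacts: contacts.slice() } };
    },
    // Mirrors /addContact: store one contact, with minimal validation.
    addContact(contactName, contactNumber) {
      if (!contactName || !contactNumber) {
        return { status: 400, body: { error: "contactName and contactNumber are required" } };
      }
      contacts.push({ ContactName: contactName, ContactNumber: contactNumber });
      return { status: 200, body: { added: contactName } };
    },
  };
}
```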

&lt;p&gt;Let’s walk through the code quickly:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--11g3ZzAM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2ATD8mRcTHA14GPbiPtJIpoQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--11g3ZzAM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2ATD8mRcTHA14GPbiPtJIpoQ.png" alt="" width="668" height="204"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Line 1–5:&lt;/strong&gt; We import express, to create the endpoints, and cors. Because the React.js application will call these endpoints from a different port, the URL origins will differ, so we need to configure Cross-Origin Resource Sharing (CORS).&lt;/p&gt;
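
&lt;p&gt;In effect, the cors middleware attaches response headers like the following hand-rolled sketch shows (allowing every origin here is purely illustrative; a real deployment should list the allowed origins explicitly):&lt;/p&gt;

```javascript
// What the cors middleware effectively does: attach CORS response
// headers so the browser lets the React origin call the API.
// Allowing "*" is illustrative only; restrict origins in production.
function applyCors(res) {
  res.setHeader("Access-Control-Allow-Origin", "*");
  res.setHeader("Access-Control-Allow-Methods", "GET,POST,OPTIONS");
  res.setHeader("Access-Control-Allow-Headers", "Content-Type");
  return res;
}
```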

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---fIs_YSQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2A2a003c8cpkBQpwNmPo3cCQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---fIs_YSQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2A2a003c8cpkBQpwNmPo3cCQ.png" alt="" width="880" height="192"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Line 7–11:&lt;/strong&gt; In the above code, we initialize the AWS SDK, set the default region, and create a DynamoDB client object.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lASP84SJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AD2BBzi8y9cFaozctGeCKLg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lASP84SJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AD2BBzi8y9cFaozctGeCKLg.png" alt="" width="880" height="432"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Line 17–28:&lt;/strong&gt; We create the /fetchAllContacts endpoint using express and fetch all the records from DynamoDB with a Scan operation.&lt;/p&gt;
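
&lt;p&gt;One detail worth calling out: Scan returns items in DynamoDB's attribute-value wire format, which the handler has to flatten before sending JSON back. A sketch of that flattening step (the {S: "..."} shape is DynamoDB's documented format; the helper names are mine):&lt;/p&gt;

```javascript
// DynamoDB Scan returns items in attribute-value form, e.g.
// { ContactName: { S: "Ada" }, ContactNumber: { S: "555-0100" } }.
// This helper flattens string attributes into plain objects the
// JSON response can carry.
function unmarshalItem(item) {
  const plain = {};
  for (const key of Object.keys(item)) {
    plain[key] = item[key].S;
  }
  return plain;
}

function unmarshalScan(scanResult) {
  return scanResult.Items.map(unmarshalItem);
}
```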

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bwWZIcVH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2832/1%2A9zbCBdFVGN17b1awQCm2cg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bwWZIcVH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2832/1%2A9zbCBdFVGN17b1awQCm2cg.png" alt="" width="880" height="464"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Line 30–49:&lt;/strong&gt; We extract the contactName and contactNumber from the request object and add them to the DynamoDB table.&lt;/p&gt;
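
&lt;p&gt;The write goes through DynamoDB's PutItem operation, whose input wraps each value in the same attribute-value form. A sketch of the parameter object the handler builds (table and attribute names are from the post; the helper itself is mine):&lt;/p&gt;

```javascript
// Builds the PutItem input the add handler sends to DynamoDB.
// Table and attribute names come from the post; the helper is a sketch.
function buildPutItemParams(contactName, contactNumber) {
  return {
    TableName: "contacts-table",
    Item: {
      ContactName: { S: contactName },
      ContactNumber: { S: contactNumber },
    },
  };
}
```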

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TzCT0KlM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2424/1%2AQEeRo7t42PSATeudVJ4biA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TzCT0KlM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2424/1%2AQEeRo7t42PSATeudVJ4biA.png" alt="" width="880" height="202"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Line 56–61&lt;/strong&gt;: Finally, we start the Node.js application listening on a port.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Please note that in &lt;a href="https://aws.plainenglish.io/platform-engineering-with-pulumi-episode-1-building-the-aws-landing-zone-with-pulumi-67b559523c78"&gt;Episode 1&lt;/a&gt;, we opened port 80 in the SecurityGroupIngress. In the latest code, this is changed to open port 8081 for the Node.js application and port 8082 for the React.js one. Please refer to the latest Pulumi code on &lt;a href="https://github.com/abvijaykumar/contactlist-blog-infra"&gt;my GitHub&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;To run the application, we first have to install the dependencies (which also updates package.json). Here are the commands:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install aws-sdk --save

npm install cors --save

npm install express --save
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--WlyXEjgS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2ANGuUOVe4_njN5YkMmP4MoQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--WlyXEjgS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2ANGuUOVe4_njN5YkMmP4MoQ.png" alt="" width="652" height="564"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The application can be tested from the local machine by configuring AWS with aws configure, pointing it to your account, and running the Node.js application. To keep this blog short, that is not covered here.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Once the application is tested, the code can be copied to the EC2 instance using scp. Here are the commands I executed to copy the relevant files:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;scp -i rsa /Users/vijaykumarab/AB-Home/Developer/contactlist-blog/contactlist-blog-app/package.json ec2-user@3.235.60.38:/home/ec2-user/api/package.json

scp -i rsa /Users/vijaykumarab/AB-Home/Developer/contactlist-blog/contactlist-blog-app/index.js ec2-user@3.235.60.38:/home/ec2-user/api/index.js
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;To run the application on EC2, log in to the instance, run npm install to install the dependencies, and run node index.js to check that the APIs are working.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Ensure that the right ports are open. I ran export PORT=8081 before node index.js and updated my Pulumi code to open that port. Please refer to the latest code on &lt;a href="https://github.com/abvijaykumar/contactlist-blog-infra"&gt;my GitHub&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Now that we have the API running, let's build a simple React.js application to access it and display the results.&lt;/p&gt;

&lt;h3&gt;
  
  
  ReactJS App
&lt;/h3&gt;

&lt;p&gt;In the React.js app, we are building a simple SPA (Single-Page Application). You can refer to the complete code on &lt;a href="https://github.com/abvijaykumar/contactlist-blog-infra"&gt;my GitHub&lt;/a&gt;. Let me quickly walk you through the application code.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;To keep it simple, I am using Material UI (my personal preference is Tailwind CSS).&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--ajl9FCjM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2948/1%2AINxftrFIHoCG_EH0W6ToYg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ajl9FCjM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2948/1%2AINxftrFIHoCG_EH0W6ToYg.png" alt="" width="880" height="402"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Line 11–17&lt;/strong&gt;: In the above code, we use the useEffect() hook to fetch all the records from DynamoDB; once the records arrive, we call setContacts (from the useState() hook), which forces a re-render.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KUAqNLPK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2244/1%2AVjHow5rUNT330bLW2y1atA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KUAqNLPK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2244/1%2AVjHow5rUNT330bLW2y1atA.png" alt="" width="880" height="444"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Line 59–73&lt;/strong&gt;: In the above code, we define a JavaScript method to add a contact. It picks up the values provided in the form (the code is below) and calls the addContact endpoint to add the contact. It then refreshes the page. (There are better ways to refresh; to keep the code quick and simple, I am using location.reload(), which is not good practice. Ideally, we should update the state instead.)&lt;/p&gt;
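
&lt;p&gt;A hypothetical helper shows the shape of that call: validate the two form fields, then build the request URL for the addContact endpoint. The helper name and the query-parameter casing are my assumptions, not taken from the repo:&lt;/p&gt;

```javascript
// Hypothetical helper for the form handler: validate the two fields
// and build the fetch request for the addContact endpoint.
function buildAddContactRequest(apiBase, contactName, contactNumber) {
  if (!contactName || !contactNumber) {
    return { error: "both contactName and contactNumber are required" };
  }
  // URLSearchParams handles the query-string encoding for us.
  const params = new URLSearchParams({ contactName, contactNumber });
  return { url: apiBase + "/addContact?" + params.toString(), options: { method: "GET" } };
}
```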

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IjLFemyD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2628/1%2AVyVozR7vEcuS_fILXm04Ew.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IjLFemyD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2628/1%2AVyVozR7vEcuS_fILXm04Ew.png" alt="" width="880" height="427"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Line 78–95&lt;/strong&gt;: In the above code, we render the fetched contacts as a table.&lt;/p&gt;
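
&lt;p&gt;Conceptually the table body is just a mapping from the contacts state to rows. A framework-free sketch of that mapping (in the actual code this is a contacts.map(...) producing Material UI row components):&lt;/p&gt;

```javascript
// Framework-free sketch of the table-body mapping: each contact
// object becomes one [name, number] row for rendering.
function contactRows(contacts) {
  return contacts.map(c => [c.ContactName, c.ContactNumber]);
}
```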

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9NoFhTte--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/3468/1%2AQHkaDY2SYQdnBswTj5cACQ.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9NoFhTte--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/3468/1%2AQHkaDY2SYQdnBswTj5cACQ.png" alt="" width="880" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Line 96–115&lt;/strong&gt;: In the above code, we render a form that accepts a Contact Name and Contact Number, and calls the addContact() method.&lt;/p&gt;

&lt;p&gt;To deploy the code, let's copy the React.js sources using scp. These are the commands I executed:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;scp -i rsa -r /Users/vijaykumarab/AB-Home/Developer/contactlist-blog/contactlist-blog-react-app/src ec2-user@54.85.92.194:/home/ec2-user/contacts-app/src

scp -i rsa -r /Users/vijaykumarab/AB-Home/Developer/contactlist-blog/contactlist-blog-react-app/public ec2-user@54.85.92.194:/home/ec2-user/contacts-app

scp -i rsa -r /Users/vijaykumarab/AB-Home/Developer/contactlist-blog/contactlist-blog-react-app/package.json ec2-user@54.85.92.194:/home/ec2-user/contacts-app
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Before we build the application, let's install nvm, pm2, and serve.&lt;/p&gt;

&lt;p&gt;Log in to the EC2 instance and execute the following commands to install nvm (please refer to the latest nvm documentation):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -o- [https://raw.githubusercontent.com/nvm-sh/nvm/v0.34.0/install.sh](https://raw.githubusercontent.com/nvm-sh/nvm/v0.34.0/install.sh) | bash

. ~/.nvm/nvm.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Execute nvm install node to install Node.js. To check that it is installed, run node --version.&lt;/p&gt;

&lt;p&gt;To install pm2 and serve, execute the following commands:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo npm install -g pm2

sudo npm install -g serve
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Once all the packages are installed successfully, let's install the application dependencies and build our React.js app by executing the following commands:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd contacts-app

npm install

npm run build
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;I faced a strange problem when I tried to run npm run build: Error: error:0308010C:digital envelope routines::unsupported&lt;br&gt;
 I found the workaround on &lt;a href="https://stackoverflow.com/questions/69692842/error-message-error0308010cdigital-envelope-routinesunsupported"&gt;Stack Overflow&lt;/a&gt;. Thanks to Peter Mortensen.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--CFTky416--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AeqRbeiOYgwfHCuzKvrz-4w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--CFTky416--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AeqRbeiOYgwfHCuzKvrz-4w.png" alt="" width="706" height="713"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The above screenshot shows the code compiling. Let's now run the application with pm2, using the following command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pm2 serve build/ 8082 — name “contactlist-app” — spa
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;To check that our application is running, let's run pm2 list.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VmakMHGA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2132/1%2AHOZv-5CZ_ioQF28K-u0P_g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VmakMHGA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2132/1%2AHOZv-5CZ_ioQF28K-u0P_g.png" alt="" width="880" height="89"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's now go to the browser and check if the application is running. Following is the screenshot I got on my browser:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--UpJn6LIZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2804/1%2AFQIONcPmZxY-hIC98qaDNw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--UpJn6LIZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2804/1%2AFQIONcPmZxY-hIC98qaDNw.png" alt="" width="880" height="692"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, manual deployment is painful and error-prone. In the next episode, we will automate all of this using GitHub Actions and AWS CodeDeploy, so that when we push changes to Git (or merge pull requests), the code is deployed automatically.&lt;/p&gt;

&lt;p&gt;You can access the complete source code on my GitHub.&lt;/p&gt;

&lt;p&gt;You can find the API code (the Express app) &lt;a href="https://github.com/abvijaykumar/contactlist-blog-app"&gt;here&lt;/a&gt; and the React.js app code &lt;a href="https://github.com/abvijaykumar/contactlist-blog-react-app"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Hope this was helpful, let’s meet in the next blog…until then stay safe, and have fun, take care.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Platform Engineering with Pulumi- Episode 1: Building the AWS Landing Zone with Pulumi</title>
      <dc:creator>A B Vijay Kumar</dc:creator>
      <pubDate>Mon, 03 Jan 2022 05:12:25 +0000</pubDate>
      <link>https://dev.to/abvijaykumar/platform-engineering-with-pulumi-episode-1-building-the-aws-landing-zone-with-pulumi-2ii6</link>
      <guid>https://dev.to/abvijaykumar/platform-engineering-with-pulumi-episode-1-building-the-aws-landing-zone-with-pulumi-2ii6</guid>
      <description>&lt;h3&gt;
  
  
  In this blog, I will cover some key concepts and architecture of Pulumi. We will be building and provisioning the AWS landing zone.
&lt;/h3&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;&lt;em&gt;Platform Engineering with Pulumi- Episode 1: Building the AWS Landing Zone with Pulumi&lt;/em&gt;&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Pd2tZmnQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2APqvYUGoCgCAskeybM-o_GA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Pd2tZmnQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2APqvYUGoCgCAskeybM-o_GA.png" alt="" width="681" height="331"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I have been learning Terraform, Ansible, Vagrant, etc., to step into IaC. Writing infrastructure code declaratively, and creating a landing zone with a single click, is magic.&lt;/p&gt;

&lt;p&gt;However, I wonder: is Terraform the ultimate tech for IaC? Coming from an application-developer background into full-stack cloud development, learning yet another language and syntax is painful.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GitOps &amp;amp; Configuration Management&lt;/strong&gt;: More and more, we treat infrastructure code like application code, with a CI/CD pipeline and continuous testing requirements. We need strict configuration management and change-management governance on IaC (sometimes stricter than for application code) to avoid configuration drift, a problem I explained in my other blog &lt;a href="https://faun.pub/openshift-4-under-the-hood-ab854c3439dd"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Logic&lt;/strong&gt;: Sometimes I want conditional statements and control flow (such as loops) while writing IaC, which I can do easily in an application programming language. Instead, I had to learn YAML and JSON and remember their complex semantics to make sure I indent and define objects properly, without the sophisticated IDEs (like those for application programming languages) to validate the syntax or semantics.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Learning a new language&lt;/strong&gt;: Terraform is a DSL (domain-specific language) and requires significant effort to learn and master.&lt;/p&gt;

&lt;p&gt;How would it be if I didn't have to learn any new language, could use my existing IDEs, and still write IaC? That is a dream come true for a full-stack developer like me, and that is where I was super impressed with what &lt;a href="https://www.pulumi.com/"&gt;Pulumi&lt;/a&gt; had to offer: a framework to build infrastructure code in any of the popular languages that I already know!&lt;/p&gt;

&lt;p&gt;To get a deeper understanding of this amazing tool, I am writing this series of blogs. In this &lt;em&gt;“Platform Engineering with Pulumi”&lt;/em&gt; series, I will build a landing zone on AWS to deploy my Node.js application and automate the deployments with GitOps. The series has these 3 blogs:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Episode 1: Building the AWS Landing Zone with Pulumi (current blog)&lt;/strong&gt;: In this blog, I will cover some key concepts and the architecture of Pulumi. We will build and provision the landing zone.&lt;br&gt;
 &lt;strong&gt;Episode 2: Building and deploying the Node.js application&lt;/strong&gt;: In this episode, we will build a simple ContactList application in Node.js/HTML, deploy it on the EC2 instance, and store all the contacts in DynamoDB.&lt;br&gt;
 &lt;strong&gt;Episode 3: Platform &amp;amp; application deployment with GitOps automation&lt;/strong&gt;: In this episode, we will automate infrastructure and application code deployment using GitOps principles, with GitHub Actions to continuously deploy the infrastructure code and AWS CodeDeploy to deploy the application code.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In this blog series, we will build the following infrastructure on AWS and run our applications in an EC2 instance. You might be wondering: why EC2? If I were doing this for real, I would have done it on ECS/OpenShift/K8s, which is more modern. But then there would be no challenge. I selected this (legacy) architecture to explore more concepts, so bear with me :-D and play along.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--AL7-kQZO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2ApcsPvLN_RJ_uEEUEH1yA8g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--AL7-kQZO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2ApcsPvLN_RJ_uEEUEH1yA8g.png" alt="" width="461" height="311"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Install Pulumi
&lt;/h3&gt;

&lt;p&gt;I am using macOS, so it's very convenient to install with the following command. Please refer to the Pulumi &lt;a href="https://www.pulumi.com/docs/get-started/aws/begin/"&gt;documentation&lt;/a&gt; for instructions for your target OS.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;brew install pulumi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  Step 2: Setup AWS CLI
&lt;/h3&gt;

&lt;p&gt;Please refer to the AWS &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html"&gt;documentation&lt;/a&gt; to install and set it up. In my case, I had already installed the CLI, created a credential in AWS IAM, and used aws configure to configure the CLI with the access key ID and secret access key that I created in IAM.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--M3ACTGH2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2612/1%2AHrxwOcEs0UZHSDCjZgcQew.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--M3ACTGH2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2612/1%2AHrxwOcEs0UZHSDCjZgcQew.png" alt="" width="880" height="177"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We also need to export two environment variables so that Pulumi can use them to log in to our AWS account:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export AWS_ACCESS_KEY_ID=&amp;lt;your-access-key-id&amp;gt; &amp;amp;&amp;amp; export AWS_SECRET_ACCESS_KEY=&amp;lt;your-secret-access-key&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now we have the AWS CLI and Pulumi CLI ready to connect to our AWS account. Let's start writing some infrastructure code.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: Generate Pulumi project — boilerplate code for AWS in Python
&lt;/h3&gt;

&lt;p&gt;Python?? Yes, that's the best part about Pulumi: I can write my infrastructure code in my desired language. At the time of writing, Pulumi supported the following languages for AWS.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kj-GNHXI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2376/1%2AvW64NBF_vDG69nFGS9DIRg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kj-GNHXI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2376/1%2AvW64NBF_vDG69nFGS9DIRg.png" alt="" width="880" height="93"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To generate the boilerplate code for AWS in Python language run the following command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pulumi new aws-python
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This will prompt us to log in to Pulumi; I used my GitHub account to sign in. Once the login is successful, the Pulumi CLI generates the boilerplate code.&lt;/p&gt;

&lt;p&gt;The following screenshot shows the generated files. Let's understand what is generated:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_Rcu4JFT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2ABPTbvjY9_6iG9kd1IvSEuw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_Rcu4JFT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2ABPTbvjY9_6iG9kd1IvSEuw.png" alt="" width="402" height="522"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;venv&lt;/strong&gt;: the Python virtual environment, which gives us a sandbox in which to install all the dependencies.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;__main__.py&lt;/strong&gt;: the main Python code file, where we will write the infrastructure code.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Pulumi.yaml&lt;/strong&gt;: the project metadata that we provided while generating the boilerplate.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Pulumi.dev.yaml&lt;/strong&gt;: the configuration values for the dev stack; we can define different environments as stacks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;requirements.txt&lt;/strong&gt;: the Python dependencies. We will install them in the next step.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's understand Pulumi architecture. The following picture shows a high-level architecture of Pulumi:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4gzcBab_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AXqvA-rw0KllxXUX9uA5xFw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4gzcBab_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2AXqvA-rw0KllxXUX9uA5xFw.png" alt="" width="351" height="644"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Project&lt;/strong&gt;: A Pulumi project is the collection of files for a module. Each project has a Pulumi.yaml file with the project configuration. A new project can be created using the pulumi new command. Refer to the &lt;a href="https://www.pulumi.com/docs/intro/concepts/project/"&gt;Pulumi documentation&lt;/a&gt; for more details.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Program&lt;/strong&gt;: The program is the actual infrastructure code, written in a high-level language such as Python or JavaScript.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Stack&lt;/strong&gt;: A stack is a very important concept: it lets us define a configuration for each environment and deploy and manage environments individually. Typically we would have dev, test, and prod environments. Each stack has a Pulumi.&amp;lt;stack-name&amp;gt;.yaml file. Refer to the &lt;a href="https://www.pulumi.com/docs/intro/concepts/stack/"&gt;Pulumi documentation&lt;/a&gt; for more details.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;CLI&lt;/strong&gt;: The Pulumi CLI provides a command-line interface to build, run, and manage the Pulumi runtime/deployment engine.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;State&lt;/strong&gt;: State is a very important component of the architecture. It stores the current state of the infrastructure and makes sure the configuration is synced with the actual infrastructure on the cloud. By default, the state is stored in the Pulumi backend, but we can configure other object storage (such as S3) to store it. Please refer to the &lt;a href="https://www.pulumi.com/docs/intro/concepts/state/"&gt;Pulumi documentation&lt;/a&gt; for more details.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Hyperscaler integration&lt;/strong&gt;: Pulumi internally uses hyperscaler APIs or Terraform modules to provision and manage the infrastructure.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Step 4: Activate virtual environment and install dependencies
&lt;/h3&gt;

&lt;p&gt;Let's activate the venv using:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;source ./venv/bin/activate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;And install the dependencies with the following command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip3 install -r requirements.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You will see a bunch of dependencies installed in the Python environment. Now we are ready to write some infra code.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 5: Set up public/private key infra
&lt;/h3&gt;

&lt;p&gt;When we provision the EC2 instance, we need a public/private key pair to be created programmatically (had we created the instance from the AWS console, we would have downloaded it manually). Since we are automating the provisioning, we create our own key pair and configure EC2 to use it:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh-keygen -t rsa -f rsa -b 4096 -m PEM
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This command creates two files: rsa (the private key) and rsa.pub (the public key).&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 6: Write Infra code
&lt;/h3&gt;

&lt;p&gt;Here is the complete code, in snippets and explanation:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rss8z3wG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2A_GzCrMpRmrHUGIoVgYaPSg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rss8z3wG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2A_GzCrMpRmrHUGIoVgYaPSg.png" alt="" width="668" height="652"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Line 1–7&lt;/strong&gt;: import the dependent modules&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;pulumi is the core module.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;pulumi_aws module has the aws objects, that we will be using.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;provisioners module is an implementation of Terraform provisioners in Pulumi, which allows us to copy files and run commands remotely on the EC2 instance. Refer to the &lt;a href="https://github.com/pulumi/examples/tree/master/aws-py-ec2-provisioners"&gt;documentation&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;base64 has the library to encode and decode base64.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;json module is required to serialize json configurations.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Line #9–13&lt;/strong&gt;: We initialize the config object and retrieve the configuration from the Pulumi.dev.yaml file, where we have set the key name and public key. To create the secret configuration entries, we can use:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pulumi config set --secret privateKey &amp;lt;private-key&amp;gt;

pulumi config set --secret privateKeyPassphrase &amp;lt;passphrase&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The publicKey is not a secret; it is a direct copy-paste of the public key.&lt;/p&gt;

&lt;p&gt;Here is a screenshot:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IXdqebwd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2100/1%2Al5PbK-T8BsgAOQAVlT_J1A.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IXdqebwd--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2100/1%2Al5PbK-T8BsgAOQAVlT_J1A.png" alt="" width="880" height="350"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Line #15–25&lt;/strong&gt;: A helper function to encode data in Base64.&lt;/p&gt;
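&lt;p&gt;A minimal stand-alone sketch of such a helper (independent of Pulumi; the function names here are illustrative, not necessarily those in the code above):&lt;/p&gt;

```python
import base64

def encode_base64(text: str) -> str:
    # Encode a UTF-8 string into its Base64 representation.
    return base64.b64encode(text.encode("utf-8")).decode("ascii")

def decode_base64(encoded: str) -> str:
    # Inverse operation, handy for checking what was encoded.
    return base64.b64decode(encoded).decode("utf-8")
```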

&lt;p&gt;&lt;strong&gt;Line #27–28&lt;/strong&gt;: We read the secret configs from the respective Pulumi stack config file (since we are running the dev stack, this reads from Pulumi.dev.yaml).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Line #31–33&lt;/strong&gt;: We create a key pair on AWS from our public key. Here is a screenshot after we run this code.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rW8DoJQ8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2112/1%2A4z8fhsv9ye_biMIAy7Q_mw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rW8DoJQ8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2112/1%2A4z8fhsv9ye_biMIAy7Q_mw.png" alt="" width="880" height="340"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now let's set up the VPC:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qIkOkjr7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2224/1%2AuU4tGKk9iW0oiIFEbVtIVg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qIkOkjr7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2224/1%2AuU4tGKk9iW0oiIFEbVtIVg.png" alt="" width="880" height="864"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Line #37–66&lt;/strong&gt;: In the above code, we create the VPC and an internet gateway. We then create the public subnet in which the EC2 instance will run, and add a route through the internet gateway so that the subnet can reach the public internet.&lt;/p&gt;
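&lt;p&gt;The relationship between the VPC and its public subnet can be sanity-checked with the standard library's ipaddress module; the CIDR blocks below are illustrative assumptions, not necessarily the ones used in the code above:&lt;/p&gt;

```python
import ipaddress

# Illustrative CIDR blocks; the actual values come from the Pulumi code.
vpc_cidr = ipaddress.ip_network("10.0.0.0/16")
public_subnet_cidr = ipaddress.ip_network("10.0.1.0/24")

# The subnet must be carved out of the VPC's address space.
assert public_subnet_cidr.subnet_of(vpc_cidr)
print(f"{public_subnet_cidr} fits inside {vpc_cidr}")
```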

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6v1NM5e3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2Arz6el_HO7xc50bZWND1aTw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6v1NM5e3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2000/1%2Arz6el_HO7xc50bZWND1aTw.png" alt="" width="880" height="928"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Line #69–95&lt;/strong&gt;: We create a security group with two ingress rules (one on port 80, to access the website that we are going to build, and one on port 22, to SSH into the EC2 instance) and one egress rule that allows the EC2 instance to access the internet.&lt;/p&gt;
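&lt;p&gt;As a sketch, the rules can be expressed as plain dicts in the argument shape that pulumi_aws security groups accept (the CIDR ranges here are assumptions):&lt;/p&gt;

```python
# Security group rules as plain dicts (sketch; CIDR ranges are assumptions).
ingress_rules = [
    {"protocol": "tcp", "from_port": 80, "to_port": 80,
     "cidr_blocks": ["0.0.0.0/0"]},   # HTTP, for the website
    {"protocol": "tcp", "from_port": 22, "to_port": 22,
     "cidr_blocks": ["0.0.0.0/0"]},   # SSH access to the instance
]
egress_rules = [
    {"protocol": "-1", "from_port": 0, "to_port": 0,
     "cidr_blocks": ["0.0.0.0/0"]},   # allow all outbound traffic
]
```

&lt;p&gt;These lists would be passed as the ingress and egress arguments when constructing the security group resource.&lt;/p&gt;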

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---a-IL6Zx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2428/1%2A_kt-OcJ_Gc6pIUznSVMxsg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---a-IL6Zx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2428/1%2A_kt-OcJ_Gc6pIUznSVMxsg.png" alt="" width="880" height="745"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Line #98–124&lt;/strong&gt;: We create a DynamoDB table with two attributes, ContactName and ContactNumber.&lt;/p&gt;
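&lt;p&gt;A sketch of the table definition in the dict shape a pulumi_aws DynamoDB table accepts; the attribute types, key choices, and billing mode here are assumptions:&lt;/p&gt;

```python
# DynamoDB table arguments (sketch; key and billing choices are assumptions).
table_args = {
    "attributes": [
        {"name": "ContactName", "type": "S"},    # S = string attribute
        {"name": "ContactNumber", "type": "S"},
    ],
    "hash_key": "ContactName",
    "range_key": "ContactNumber",
    "billing_mode": "PAY_PER_REQUEST",
}
```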

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TaeSAVq_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2156/1%2AFK9NZzwYxOKN3GBVR-_RbA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TaeSAVq_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2156/1%2AFK9NZzwYxOKN3GBVR-_RbA.png" alt="" width="880" height="225"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Line #133–136&lt;/strong&gt;: We create a VPC endpoint so that DynamoDB can be accessed directly from the VPC, instead of going through the public internet.&lt;/p&gt;
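&lt;p&gt;A VPC endpoint for DynamoDB is addressed by a region-qualified service name; a small helper to build it (the region used below is an assumption):&lt;/p&gt;

```python
def dynamodb_endpoint_service(region: str) -> str:
    # AWS endpoint services follow the com.amazonaws.REGION.SERVICE scheme.
    return f"com.amazonaws.{region}.dynamodb"

print(dynamodb_endpoint_service("us-east-1"))  # com.amazonaws.us-east-1.dynamodb
```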

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hUrSr4f3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2144/1%2A_-iQ_2qBpQvISRSqzEXZdA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hUrSr4f3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/2144/1%2A_-iQ_2qBpQvISRSqzEXZdA.png" alt="" width="880" height="762"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Line #143–158&lt;/strong&gt;: We create an EC2 role that we will need when we set up GitOps to deploy the applications using CodeDeploy (which I will cover in my next blog).&lt;/p&gt;
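&lt;p&gt;The role's trust policy is a small JSON document that the json module from the imports serializes. This is the standard EC2 assume-role policy; whether the code above builds it exactly this way is an assumption:&lt;/p&gt;

```python
import json

# Standard trust policy allowing EC2 instances to assume the role.
assume_role_policy = json.dumps({
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "sts:AssumeRole",
        "Principal": {"Service": "ec2.amazonaws.com"},
    }],
})
```

&lt;p&gt;This string is what an IAM role resource takes as its assume-role policy argument.&lt;/p&gt;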

&lt;p&gt;&lt;strong&gt;Line #160–165&lt;/strong&gt;: We look up the right AMI using a filter.&lt;/p&gt;
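&lt;p&gt;An AMI lookup typically filters on the image name pattern and owner, taking the most recent match. The values below (an Amazon Linux 2 image owned by Amazon) are assumptions about what the code selects:&lt;/p&gt;

```python
# AMI lookup arguments (sketch; the name pattern and owner are assumptions).
ami_lookup = {
    "most_recent": True,
    "owners": ["amazon"],
    "filters": [
        {"name": "name", "values": ["amzn2-ami-hvm-*-x86_64-gp2"]},
    ],
}
```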

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qisXdHm2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/3084/1%2Aqe2yZm_nwlOMeCeN5qlN9A.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qisXdHm2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/3084/1%2Aqe2yZm_nwlOMeCeN5qlN9A.png" alt="" width="880" height="611"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Line #167–182&lt;/strong&gt;: We provide the user data that sets up the EC2 instance when it starts for the first time: it runs the typical yum update and installs cURL, Node.js, Yarn, Ruby, and the CodeDeploy agent (which we will need to deploy the applications using GitOps, &lt;strong&gt;&lt;em&gt;I will be covering this in the next blog&lt;/em&gt;&lt;/strong&gt;).&lt;/p&gt;
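&lt;p&gt;User data is just a shell script handed to cloud-init on first boot. A sketch of what such a script might look like; the exact package sources and the CodeDeploy bucket region are assumptions:&lt;/p&gt;

```python
# First-boot user data script (sketch; package sources and the
# CodeDeploy bucket region are assumptions).
user_data = """#!/bin/bash
yum update -y
yum install -y curl ruby
# Node.js and Yarn for the application runtime
curl -sL https://rpm.nodesource.com/setup_16.x | bash -
yum install -y nodejs
npm install -g yarn
# CodeDeploy agent
cd /home/ec2-user
curl -O https://aws-codedeploy-us-east-1.s3.us-east-1.amazonaws.com/latest/install
chmod +x ./install
./install auto
"""
```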

&lt;p&gt;&lt;strong&gt;Line #183&lt;/strong&gt;: We create an IAM instance profile and attach it to the EC2 role that we created; we will need this when we set up CodeDeploy.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Line #184–195&lt;/strong&gt;: We create a t2.micro instance from the AMI that we looked up, passing in the user data, the IAM instance profile, the public subnet, and the security group we created. We also pass the key pair we registered earlier so that we can SSH into the EC2 instance.&lt;/p&gt;
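&lt;p&gt;Putting the pieces together, the instance arguments look roughly like this. The identifier values are placeholders standing in for outputs of the resources created above, not the real IDs:&lt;/p&gt;

```python
# EC2 instance arguments (sketch; the id values are placeholders that
# stand in for outputs of the resources created above).
instance_args = {
    "instance_type": "t2.micro",
    "ami": "ami-0123456789abcdef0",          # from the AMI lookup (placeholder)
    "key_name": "my-keypair",                # the registered key pair (assumption)
    "subnet_id": "subnet-0123456789",        # the public subnet (placeholder)
    "vpc_security_group_ids": ["sg-0123456789"],
    "iam_instance_profile": "ec2-instance-profile",
    "user_data": "#!/bin/bash\n# first-boot setup script goes here\n",
}
```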

&lt;p&gt;Let's now run this code and see the output by running pulumi up.&lt;/p&gt;

&lt;p&gt;The following screenshot shows Pulumi comparing the state and providing the status of what resources will be created. I will discuss state management in a separate blog.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tllzkMja--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/3440/1%2ANHyefJU5QAN_SIRVq9dquw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tllzkMja--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/3440/1%2ANHyefJU5QAN_SIRVq9dquw.png" alt="" width="880" height="351"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once we verify the plan, we can confirm it to go ahead. The following screenshot shows the output listing all the resources created.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--M4kBTEbY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/3140/1%2Az5nV6-_Fxy3G1Jk_9LwilA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--M4kBTEbY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/3140/1%2Az5nV6-_Fxy3G1Jk_9LwilA.png" alt="" width="880" height="488"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's now connect to the EC2 instance and check it by running:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh -i rsa ec2-user@3.239.26.182
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--iqieBavM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/3128/1%2AqgBnepMzWLgItwSjB4tIsA.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--iqieBavM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://cdn-images-1.medium.com/max/3128/1%2AqgBnepMzWLgItwSjB4tIsA.png" alt="" width="880" height="497"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now that we have created the landing zone successfully using Pulumi, let's build and deploy a Node.js application and run it. Since this is already a long blog, we will continue in Episode 2.&lt;/p&gt;

&lt;p&gt;Hope this was helpful. Let's meet in the next blog. Until then, stay safe, have fun, and take care.&lt;/p&gt;

&lt;p&gt;You can access the complete source code in my GitHub &lt;a href="https://github.com/abvijaykumar/contactlist-blog-infra"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;References:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.pulumi.com/"&gt;https://www.pulumi.com&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/abvijaykumar/contactlist-blog-infra"&gt;https://github.com/abvijaykumar/contactlist-blog-infra&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>pulumi</category>
      <category>infrastructureascode</category>
      <category>cloud</category>
    </item>
  </channel>
</rss>
