Sean Overton

A practical introduction to GitOps

Here is the code base used in this tutorial:
https://github.com/SeanOverton/example-gitops-serverless-infra

What is GitOps?

GitOps is a methodology using Git as the source of truth for both infrastructure and application code. It's declarative, ensuring that desired system states are specified in version-controlled files. Changes trigger automatic deployments through CI/CD, providing easy rollbacks and robust version history. Popular in cloud-native environments, GitOps simplifies managing and deploying complex systems.

Why GitOps?

GitOps is embraced for its efficiency and reliability in managing and deploying software systems. By making Git the single source of truth, it ensures a declarative approach to configuration, allowing teams to define and track the desired state of both infrastructure and applications. This methodology automates deployments through continuous integration and deployment (CI/CD), streamlining processes and reducing manual errors. With version-controlled changes, GitOps provides a robust audit trail and simplifies rollback procedures. Particularly advantageous in dynamic, cloud-native environments, GitOps enhances collaboration, observability, and overall system resilience, making it a preferred methodology for modern software development and operations.

Q. Who is the GitOps engineer's customer?

A. Other developers!

It is fundamental to GitOps engineering to understand that your 'customer' is other engineers and your 'product' is the repo being provided. This means the customers and the product should be treated like those of any other outward-facing product: for example, active communication and feedback channels should be established with the customer.

What we will cover in this tutorial

In this tutorial, we will explore the concept of GitOps with an example demonstrating how to build a self-service code repository (using Terraform and AWS) that enables developers to quickly deploy REST APIs with minimal understanding of the underlying architecture, so they can focus on the application code itself. The patterns introduced in this example are common in GitOps engineering and help communicate the fundamentals of what GitOps actually is.

Who is this tutorial for?

Any aspiring engineer looking to automate more, anyone interested in getting hands-on with IaC, and anyone interested in understanding the power of GitOps.
Readers are expected to have some prerequisite knowledge: the basics of GitHub, Git and AWS (including an AWS account).

Building a Self-Service Code Repository:

A self-service code repository allows development teams to manage, share, and collaborate on code efficiently. In the following steps we will build an example self-service repository that allows a developer to quickly deploy an API utilising AWS serverless compute: Lambda functions.

Enough talk, let's code!

The code is available here:
https://github.com/SeanOverton/example-gitops-serverless-infra

What does the repo do?

  • build AWS Lambda functions (by default running Python)
  • build an AWS REST API Gateway, with an associated endpoint for each Lambda created
  • optionally add a Cognito authorizer to endpoints that require auth

Setting up:

  1. Fork repo: https://github.com/SeanOverton/example-gitops-eng-serverless-infra/tree/main

  2. Sign in to the AWS console and create a user with an access key for programmatic access: https://docs.aws.amazon.com/keyspaces/latest/devguide/access.credentials.html

  Note: for a production environment it is not recommended to use access credentials with admin permissions (though for this tutorial, feel free to attach an admin policy). Roles following the principle of least privilege should be applied.

  3. Add these credentials to GitHub secrets (a CLI sketch for this follows the list) with the keys:

AWS_ACCESS_KEY
AWS_SECRET_ACCESS_KEY
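If you prefer the command line, the same setup can be sketched with the AWS CLI and GitHub CLI (gh), assuming both are installed and authenticated. The IAM user name gitops-tutorial is just an example, and the placeholders stand in for the values the first command returns:

# Create an access key for an existing IAM user (user name is an example)
aws iam create-access-key --user-name gitops-tutorial

# Store the returned values as Actions secrets in your fork
gh secret set AWS_ACCESS_KEY --body "<access-key-id>"
gh secret set AWS_SECRET_ACCESS_KEY --body "<secret-access-key>"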

GitOps: pushing code to prod

  1. Create an S3 bucket via the AWS console that is only accessible by the user created previously (whose credentials you stored). This bucket will host the tfstate file, which Terraform depends on in this architecture. Your bucket name needs to be globally unique and must be added to the Terraform config at ./main.tf:
terraform {
  backend "s3" {
    bucket = "bucket-name"
    key    = "terraform.tfstate"
    region = "ap-southeast-2"
  }
}
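If you'd rather script this step than click through the console, a minimal sketch with the AWS CLI looks like this (the bucket name is a placeholder, and outside us-east-1 the LocationConstraint must match your region):

aws s3api create-bucket \
  --bucket bucket-name \
  --region ap-southeast-2 \
  --create-bucket-configuration LocationConstraint=ap-southeast-2

# Versioning is a common safeguard for state files
aws s3api put-bucket-versioning \
  --bucket bucket-name \
  --versioning-configuration Status=Enabled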


  2. Configure AWS CLI credentials locally and run terraform init to initialise the Terraform backend state file in the S3 bucket.
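Concretely, that step is just two commands, using the access key created earlier:

# Enter the access key ID, secret key and region when prompted
aws configure

# Initialise the S3 backend declared in ./main.tf
terraform init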

How a developer interacts with the repo

Infrastructure config

Once the project is set up from the steps above, a developer only has to update the ./config.tfvars file to build the infrastructure for a new endpoint. The changes require understanding only a few parameters, as shown below.

But first, create and switch to a branch locally: git checkout -b feature/gitops-is-fun

Then make the below changes.

lambda_functions = {
+  helloworld = {
+    function_name   = "helloworld"
+    auth_required   = false
+    endpoint_method = "GET"
+  },
}

cognito_user_arns = []

stage_name = "default"

The example config above will create the endpoint and the AWS cloud infrastructure needed to support HTTP GET requests. The application code will be added next.

The endpoint application code

A new folder and file can now be created to hold the endpoint's application code: ./lambdas/helloworld/lambda_function.py
Note: the folder name 'helloworld' must match the function_name from the previous config!
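From the repo root, that is:

mkdir -p lambdas/helloworld
touch lambdas/helloworld/lambda_function.py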

This file is what runs behind the GET endpoint configured above. Add the following helloworld response to it:

import json

def respond(err, res=None):
    # Build an API Gateway-compatible response: 400 on error, 200 otherwise
    return {
        'statusCode': '400' if err else '200',
        'body': err.message if err else json.dumps(res),
        'headers': {
            'Content-Type': 'application/json',
            'Access-Control-Allow-Origin': '*',
        },
    }

def lambda_handler(event, context):
    # Entry point AWS Lambda invokes for each request
    return respond(None, "Helloworld")
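Before pushing anything, you can smoke-test the handler locally:

cd lambdas/helloworld
python -c "from lambda_function import lambda_handler; print(lambda_handler({}, None))"
# {'statusCode': '200', 'body': '"Helloworld"', ...}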

Now the hard work is done! Watch the platform's magic work!

Merging in

  1. Push. Commit your changes and push the branch, as shown below.
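git add config.tfvars lambdas/helloworld/lambda_function.py
git commit -m "Add helloworld endpoint"
git push -u origin feature/gitops-is-fun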

  2. Pull request. On GitHub, create a pull request from your recent changes.
    This will trigger a plan workflow that ends with something similar to:

...
Plan: x to add, 0 to change, 0 to destroy.

─────────────────────────────────────────────────────────────────────────────

Saved the plan to: tf-plan

To perform exactly these actions, run the following command to apply:
    terraform apply "tf-plan"

This is Terraform highlighting what changes will be made to your current infrastructure. It is a dry run where you can check that everything will work as expected before actually deploying the infrastructure.

  3. Merge. After reviewing the expected changes, merge the PR into the main branch.

  4. Now check the Actions tab to find your deployment URL in the latest workflow run. (This was defined as a Terraform output.)

...
Apply complete! Resources: x added, 0 changed, 0 destroyed.

Outputs:
deployment_url = "https://<deployment_url>/<stage>"

Test!

Visit https://<deployment_url>/helloworld (the deployment_url from the workflow output, with the endpoint path appended).
You should receive a helloworld response. If not, review the steps and check the workflow outputs.
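Or from the command line, substituting the URL from the workflow output:

curl https://<deployment_url>/helloworld
# "Helloworld"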

Understanding the repo itself

The repo has a few parts that need to be understood.

1. Terraform Modules (IaC) and Config
The first stage is the Terraform config: the ./config.tfvars file that was edited earlier. A .tfvars file passes user-defined values for input variables into Terraform code, much like parameters being passed into a function. Note that the function_name parameter is important here: it becomes the API endpoint path, i.e. https://unique-aws-path/<function_name>, and it determines the folder structure that holds the application code: ./lambdas/<function_name>/lambda_function.py

2. The GitHub action (automated workflows for continuous deployment)
The GitHub Action (defined at .github/workflows/terraform-workflow.yml) is the workflow that automates the Terraform dry run and deployment.


2.1 The terraform plan is triggered when a pull request is raised. It performs a dry run, allowing a team to review the expected changes before they are actually applied.

2.2 The terraform apply (which only runs when changes land on the 'main' branch) takes the plan created above and deploys it.
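In shell terms, and assuming the workflow passes the config with -var-file, the two jobs boil down to something like this sketch (the exact steps live in .github/workflows/terraform-workflow.yml):

# On pull requests: dry run only, saving the plan for review
terraform init
terraform plan -var-file=config.tfvars -out=tf-plan

# On merge to main: apply the saved plan
terraform apply tf-plan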

When used with GitHub branch protection and a minimum number of required reviews, this allows a safe but efficient workflow for teams to review the expected changes, approve them, and deploy API endpoints quickly.

3. The application code
This is what the Terraform modules reference and deploy as the AWS Lambda function; in the example above it was defined in ./lambdas/helloworld/lambda_function.py. It should be noted that any name could replace helloworld here, and as long as the tfvars config is updated with the corresponding information, the new API endpoint code will be deployed.

Conclusion

This has been a hands-on introduction demonstrating an example of what GitOps engineering may look like!

Do you want to instantly improve your API design?
Check out this article: https://dev.to/s3an_0vert0n/10-quick-tips-to-instantly-improve-api-design-59mc

Follow me for more book reviews and quick tips I pick up along the way as I continue my book-per-week reading journey.

For my full reading list: https://github.com/SeanOverton/books-i-have-read/blob/main/books.yml
