Satiated and Ready for a Nap
Nita returned to her cube after lunch, already regretting having eaten. Two tacos were probably one and a half too many, but they were delicious.
She logged into her laptop again and saw her editor still open:
To be continued...
Pressing dd to delete the line, she reviewed her notes:
- [x] set up a build environment with the correct JDK
- [x] build and run the application
- [x] perform a simple smoke test
- [x] determine the runtime environment (networking, monitoring)
- [x] determine the runtime orchestrator
- [x] create a service definition
- [x] smoke test the service
Earlier that day, she had followed the standard steps that she always followed when she'd been given an embryonic bit of software to deploy.
First, build the thing to make sure it's even possible.
Then work through the basic steps of packaging and runtime configuration. Make sure everything works as best as possible locally before setting out to build any automation. She knew that the feedback loops were much tighter locally and she'd save a bunch of time if there was anything wrong with the code itself.
Finally, with the understanding gained from having done the thing, automate it so it's repeatable and reliable.
Enter Stage Right
Confident that her exploration that morning had shown the application was stable enough to be operationalized, she turned her thoughts towards automation. Her manual process was fine for testing things out, but anything that required manual input was a surface area that invited errors, and she was eager to make a good first impression, even though she still wasn't sure where any of her co-workers actually were.
At her former company, the developer platform was a massive thing that had grown over a decade of Jira tickets and deskside drive-bys. She knew she wasn't going to achieve that in a single go, but she also knew she didn't want to have to manually build every Docker image her team was tasked with deploying. The Peregrine platform she was used to had been based around the concept of Actors, so she started to sketch out a minimally viable platform using the same concept.
The actor pattern was something she'd learned at her old job. There, the build system, called "Peregrine", required developers to configure an actor for each stage of the pipeline - things like a gradle actor in the build stage or the ecsService version found down in deploy. In Peregrine's world, each stage fulfilled a contract - the set of properties that defined a stage was expressed as "what went in" and "what came out". Her boss was always making jokes about the "Yoneda Lemon", as he called it. It was a term from Category Theory, a branch of mathematics that had been developed because a couple of guys kept saying "Yo, you know that thing is just one of these, right?" and no one believed them.
An object X is completely determined by the collection of all morphisms pointing into it from every other object.
--- The Yoneda Lemma [category theory]
She thought back to their old wiki, on some system called nightingale, where her old boss dumped all of his thoughts - and to the note he'd left on why they should care about his 🍋.
On Middleware as Morphisms
Data at rest has the potential to be interesting, but it's just a storage mechanism.
Data in motion is a system that's doing work, and watching it work is how you understand it.
| Pattern | Storage | Understanding |
|---|---|---|
| Database row | ✓ | ✗ |
| API response | ✓ | ✗ |
| Pipeline stage | ✗ | ✓ |
| Middleware transform | ✗ | ✓ |
| Edge activity | ✗ | ✓ |
A typical delivery pipeline has a number of inputs: a PR triggers the pipeline with a repository URL and a hash to be built; configuration management and IaC configurations pitch in to help shape the final output; a test case or two, if we're lucky, help verify intent. But none of them achieves anything on its own.
The git commit is just a bit of metadata until something comes along and builds it. Gradle and JUnit are constantly conspiring to teach us why we shouldn't use the compiler as a spell checker.
A Selenium test failure by itself is generally noise. A Selenium test failure for a test that passed yesterday is worth notifying someone about.
In category theory, the Yoneda Lemma states that an object is completely described by its relationships to other objects - the collection of its effects on the world. In platform engineering, we define objects as artifacts produced, and the relationships between them become the contracts that we are going to watch Nita establish.
What they do is their own business, each a black box to be known by its effects on the system - the complete set of capabilities and constraints it brings.
In category theory, the relationships between objects are called morphisms.
In platform engineering, we just call it middleware and it's everywhere.
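One way to make that concrete in Ruby (a hypothetical sketch - the `compile` and `package` names and the context keys are invented here, not taken from Peregrine): each middleware is just a transform from context to context, and a pipeline is the composition of those transforms.

```ruby
# Hypothetical sketch: middleware as morphisms over a context hash.
# Each step declares only what it reads and what it adds.
compile = ->(ctx) { ctx.merge(jar: "#{ctx[:repo]}-#{ctx[:commit]}.jar") }
package = ->(ctx) { ctx.merge(image: "#{ctx[:repo]}:#{ctx[:commit]}") }

# The pipeline is just the composition of its morphisms: thread the
# context through each step in order.
result = [compile, package].reduce(repo: "infinity-service", commit: "924d48a") do |ctx, step|
  step.call(ctx)
end

result[:image] # => "infinity-service:924d48a"
```

Nothing in either lambda knows about the other; the only coupling is the shape of the context each one reads and writes.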
Act 1 :: Know Your Lines
Nita drew on that memory as she started to sketch out her requirements for her pipeline.
"I'm going to need something to represent an actor", she thought. She remembered what she'd learned before: complexity should come from composition. Each individual component of her system should be designed for a single operation. "Combinatorics are a harsh mistress" was a mantra she'd had drilled into her each time she'd presented a design that tried to cram too many features into a small surface area.
The objects in the universe of her new pipeline were simple:
- source enters the universe in the form of a repository with a specific hash to be built
- an application artifact in the form of a JAR
- an image artifact in the form of an OCI image
- a web service responding to HTTP requests
- a set of test results that verify that the deployment was completed successfully
A simple relationship that described the transitions between the objects could be written as:
1. source
[compiles to]
2. application artifact
[is packaged in]
3. image artifact
[is deployed as]
4. web service
[is verified by]
5. test results
This meant that her pipeline was going to require four relationships to be defined:
- compiles to could be implemented with a wrapper around mvn
- is packaged in could be implemented with docker build
- is deployed as could be implemented with docker compose up
- is verified by could be implemented by a wrapper around Net::HTTP that captured the results in the context
She decided to sketch a simple actor class with a single #execute method to start.
```ruby
module Kremis
  class Actor
    def execute(context)
      raise NotImplementedError, "#{self.class}#execute must be implemented"
    end
  end
end
```
Act 2 :: Tension Builds
For this pipeline, she decided that she would be safe to just hardcode some of the build metadata as long as she designed her actor appropriately. Data-driven design had been drilled into her head; her old boss really hated having to crack code open to make what he felt should have been a configuration change. She knew she'd eventually need to build an attribute loader, but she also knew to engineer only for the problem you have, not the one you want to solve.
Normally, she thought to herself, she'd want to separate the build and image stages in a larger application, since having access to the application artifacts outside of their final home can be useful for debugging. At this point, though, she didn't feel that the infinite-money service necessitated the extra work that would entail, so she decided to compress two of the relationships in her sketched pipeline into a single actor. Her multi-stage Dockerfile was already designed to support this model.
She cracked open another terminal and converted the manual build steps she had executed before into a form that her Kremis::Actor could run:
```ruby
require_relative "../../lib/kremis"

module Actors
  class Build < Kremis::Actor
    def execute(context)
      app_dir = File.expand_path("../../../app", __dir__)
      tag = "infinity-service:#{context[:commit]}"
      puts " docker build -t #{tag} #{app_dir}"
      result = system("docker build -t #{tag} #{app_dir}")
      raise "Docker build failed" unless result
      context.merge(image_tag: tag)
    end
  end
end
```
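The actor's contract can be smoke-tested without a Docker daemon by injecting the command runner - a hypothetical variant of the Build actor above (the `runner:` keyword and the stub are inventions of this sketch, not Nita's code):

```ruby
module Kremis
  class Actor
    def execute(context)
      raise NotImplementedError, "#{self.class}#execute must be implemented"
    end
  end
end

module Actors
  # Hypothetical testable variant: the command runner is injected, so a
  # test can substitute a stub instead of shelling out to Docker.
  class Build < Kremis::Actor
    def initialize(runner: method(:system))
      @runner = runner
    end

    def execute(context)
      tag = "infinity-service:#{context[:commit]}"
      ok = @runner.call("docker", "build", "-t", tag, context[:app_dir])
      raise "Docker build failed" unless ok
      context.merge(image_tag: tag)
    end
  end
end

# A stub runner records the command it was asked to run and reports success.
calls = []
stub = ->(*cmd) { calls << cmd; true }
ctx = Actors::Build.new(runner: stub).execute(commit: "924d48a", app_dir: "./app")
ctx[:image_tag] # => "infinity-service:924d48a"
```

The default runner is `Kernel#system`, so production behaviour is unchanged; only tests pass a stub.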
"There", she thought, expertly typing :wq to save her work and close her editor, "that will do for now".
She reviewed her work:
- her final build script would need to be co-located with the application being deployed, and Alice had left her code in an app directory, so she'd be ok to use relative paths. She hoped her boss was back soon so she could inquire about Infinity Co's artifact repository; she'd used Nexus in the past and was eager to get her new gem deployed somewhere the build servers could access it. She didn't want to leave the deployment code packaged with the application any longer than she needed to. The only requirement her boss had given her in her brief orientation was that she capture the specific hash of any commit contributing to the build for tracking purposes - Infinity Co were big on numbers, he'd implied, and really liked to track things. She'd decided it would be ok to use the commit hash as an input to her pipeline for now, as it was generally available in most CI systems' environments. It would enter the build actor as part of the global context
- as this was an internal system in use for one application with a fully deterministic configuration, she felt safe combining string concatenation with a #system call
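Even so, Ruby's Kernel#system accepts a command and its arguments as separate strings, which bypasses the shell entirely - a hedged sketch of the same call without string interpolation (the paths here are made up for illustration):

```ruby
require "shellwords"

tag = "infinity-service:924d48a"
app_dir = "/builds/with spaces/app"

# Multi-argument form: each element reaches exec() verbatim, no shell
# involved, so spaces or metacharacters in tag/app_dir can't alter the
# command being run.
argv = ["docker", "build", "-t", tag, app_dir]
# system(*argv)  # left commented: no Docker daemon needed for the sketch

# If a single command string were unavoidable, Shellwords escapes it safely.
cmd = argv.shelljoin
```

For a deterministic internal tool Nita's interpolated string is fine, as she notes; the multi-argument form just removes the question entirely.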
With that bit of work complete, she prepared to shutdown her workstation:
```
nita@infinity:~/work/infinite-money > git add .; git commit -m "save your files 💾 - tom dibblee"; git push
[ticket-001 924d48a] save your files 💾 - tom dibblee
 1 file changed, 1 insertion(+)
Enumerating objects: 11, done.
Counting objects: 100% (11/11), done.
Delta compression using up to 12 threads
Compressing objects: 100% (6/6), done.
Writing objects: 100% (6/6), 733 bytes | 733.00 KiB/s, done.
Total 6 (delta 1), reused 0 (delta 0), pack-reused 0 (from 0)
To git.sr.ht:~graemefawcett/sprout
   bb441f3..924d48a  ticket-001 -> ticket-001
```
Before she left for the day, she reviewed her build list. Happy that she'd satisfied the criteria she'd set for herself, she checked off the first item and closed the lid of her laptop.
- [x] A build actor that can wrap Docker and execute multi-stage Dockerfiles to combine the build and image stages that her old build system separated
- [ ] A deploy actor that could automatically update and deploy a Docker Compose service definition with an updated image
- [ ] A simple verification actor that can perform a post-deployment validation
- [ ] A simple engine to chain them together and manage shared context
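The last unchecked item, the engine, could be sketched along these lines (a hypothetical reduction over an actor list, not Nita's final design - the `Engine` class and the toy actors are inventions of this sketch):

```ruby
module Kremis
  class Actor
    def execute(context)
      raise NotImplementedError, "#{self.class}#execute must be implemented"
    end
  end

  # Hypothetical engine: threads the shared context through each actor in
  # order; every actor returns a new context with its outputs merged in.
  class Engine
    def initialize(actors)
      @actors = actors
    end

    def run(context = {})
      @actors.reduce(context) { |ctx, actor| actor.execute(ctx) }
    end
  end
end

# Toy actors standing in for build and verify, to show the context flow.
class FakeBuild < Kremis::Actor
  def execute(context)
    context.merge(image_tag: "infinity-service:#{context[:commit]}")
  end
end

class FakeVerify < Kremis::Actor
  def execute(context)
    context.merge(verified: context.key?(:image_tag))
  end
end

result = Kremis::Engine.new([FakeBuild.new, FakeVerify.new]).run(commit: "924d48a")
```

Because each actor only sees the context, swapping FakeBuild for the real Docker-backed Build actor changes nothing about the engine.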