In my previous post, I talked about why I built Lynq and the problems it solves. But I didn't really get into how it actually works. So let's fix that.
If you want to follow along hands-on, there's a Killercoda scenario that walks through everything in about 10 minutes: https://killercoda.com/lynq-operator/course/killercoda/lynq-quickstart
The basic idea
Here's the mental model. You have three things:
- LynqHub connects to your database and watches for changes
- LynqForm defines what Kubernetes resources to create (templates)
- LynqNode is the actual instance, one per database row per template
The flow is simple. A row appears in your database. The hub sees it, creates a LynqNode. The node controller renders your templates and applies resources. Done.
When the row disappears or gets deactivated, cleanup happens automatically.
Connecting to your database
First you need a LynqHub. This tells Lynq where your data lives.
apiVersion: operator.lynq.sh/v1
kind: LynqHub
metadata:
  name: my-saas-hub
spec:
  source:
    type: mysql
    mysql:
      host: mysql.default.svc.cluster.local
      port: 3306
      database: nodes
      table: node_data
      username: node_reader
      passwordRef:
        name: mysql-secret
        key: password
  syncInterval: 30s
  valueMappings:
    uid: node_id
    activate: is_active
  extraValueMappings:
    planId: subscription_plan
    region: deployment_region
The valueMappings section is important. You're telling Lynq which columns matter:
- uid is the unique identifier for each node
- activate is a boolean that controls whether resources should exist
Then extraValueMappings lets you pull in whatever custom fields you need. These become variables you can use in your templates.
The hub polls your database at syncInterval and syncs changes. If a row has activate=true, Lynq creates resources. If it changes to false, cleanup starts.
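To make that concrete, here's what feeding the hub looks like from the database side. This is a sketch assuming the schema implied by the mappings above and that you can reach MySQL with the mysql client; adjust host and credentials to your setup:

# hypothetical row matching the valueMappings above
mysql -h mysql.default.svc.cluster.local -u node_reader -p nodes \
  -e "INSERT INTO node_data (node_id, is_active, subscription_plan, deployment_region)
      VALUES ('acme', 1, 'pro', 'us-east-1');"

Within one syncInterval, the hub should create a LynqNode for acme, with planId and region available as template variables.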
Defining your resource templates
Next is the LynqForm. This is basically your blueprint for what gets created per database row.
apiVersion: operator.lynq.sh/v1
kind: LynqForm
metadata:
  name: web-app
spec:
  hubId: my-saas-hub
  deployments:
    - id: app
      nameTemplate: "{{ .uid }}-app"
      labelsTemplate:
        app: "{{ .uid }}"
        plan: "{{ .planId | default \"basic\" }}"
      spec:
        apiVersion: apps/v1
        kind: Deployment
        spec:
          replicas: 2
          template:
            spec:
              containers:
                - name: app
                  image: "{{ .deployImage | default \"nginx:latest\" }}"
  services:
    - id: svc
      nameTemplate: "{{ .uid }}-svc"
      dependIds: ["app"]
      spec:
        apiVersion: v1
        kind: Service
        # ...
Templates use Go's text/template syntax with Sprig functions. So you get 200+ functions out of the box. The variables come from your database columns via the hub mappings.
A few things I find myself using constantly (rendered examples below):
- {{ .uid }} for unique names
- {{ .planId | default "basic" }} when columns might be null
- {{ .uid | trunc 63 }} to respect Kubernetes naming limits
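For a sense of how these render, suppose a row has node_id = 'acme-corp' and a NULL subscription_plan (hypothetical values; shortName is just an illustrative key, and the comments show the output):

nameTemplate: "{{ .uid }}-app"              # renders as: acme-corp-app
plan: "{{ .planId | default \"basic\" }}"   # renders as: basic (column was NULL)
shortName: "{{ .uid | trunc 63 }}"          # renders truncated to at most 63 characters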
Where policies come in
Here's where it gets interesting. Not every resource should behave the same way.
Each resource in your template can have its own policies:
deployments:
  - id: app
    creationPolicy: WhenNeeded
    deletionPolicy: Delete
    conflictPolicy: Stuck
    patchStrategy: apply
creationPolicy controls when resources get created or updated.
WhenNeeded (the default) means Lynq continuously syncs. If someone deletes the resource manually, it comes back. If you update the template, changes apply. This is what you want for most things.
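You can see the self-healing directly. Assuming a node with uid acme (so the Deployment from the form above is named acme-app), delete it and watch it come back:

kubectl delete deployment acme-app
kubectl get deployment acme-app -w   # recreated on the next reconcile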
Once means create it once and never touch it again. Perfect for init jobs or migration scripts that should only run on first setup.
jobs:
  - id: init-job
    creationPolicy: Once
    nameTemplate: "{{ .uid }}-init"
    spec:
      apiVersion: batch/v1
      kind: Job
      spec:
        template:
          spec:
            containers:
              - name: init
                image: busybox:1.36  # not in the original snippet; any shell-capable image works
                command: ["sh", "-c", "echo 'one-time setup'"]
            restartPolicy: Never
deletionPolicy controls what happens when a LynqNode is deleted or the row disappears.
Delete (default) cleans up the resource. Lynq sets an ownerReference so Kubernetes garbage collection handles it automatically.
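If you inspect a managed resource, you should find something like this in its metadata (a sketch; I'm assuming the LynqNode itself is the owner):

metadata:
  ownerReferences:
    - apiVersion: operator.lynq.sh/v1
      kind: LynqNode
      name: acme      # illustrative node name
      uid: 4f8c...    # filled in by the API server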
Retain keeps the resource around. Lynq tracks it via labels instead of ownerReference. When deleted, it just gets marked as orphaned so you can find it later.
If you're dealing with PersistentVolumeClaims or anything with data you don't want to lose, use Retain:
persistentVolumeClaims:
  - id: data-pvc
    deletionPolicy: Retain
    nameTemplate: "{{ .uid }}-data"
conflictPolicy handles what happens when a resource already exists with a different owner.
Stuck (default) is conservative. If there's a conflict, reconciliation stops and you get an event. Safe but requires manual intervention.
Force takes ownership using Server-Side Apply with force=true. Useful when you're migrating from another system or when Lynq should be the single source of truth.
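Switching a resource over is the same per-resource knob as before (a sketch; the other fields are as in the earlier examples):

deployments:
  - id: app
    conflictPolicy: Force   # take ownership via force Server-Side Apply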
Ordering with dependencies
Sometimes you need resources to come up in order. A deployment needs its configmap first. A service should wait for its deployment.
That's what dependIds does:
secrets:
  - id: db-creds
    nameTemplate: "{{ .uid }}-creds"
deployments:
  - id: db
    dependIds: ["db-creds"]
    waitForReady: true
  - id: app
    dependIds: ["db"]
    waitForReady: true
Lynq builds a DAG (directed acyclic graph) from your dependencies and applies resources in topological order. If you accidentally create a cycle, it fails fast with an error.
The waitForReady: true flag is important. Without it, dependIds only guarantees creation order. With it, Lynq actually waits for the dependency to become ready before creating the dependent resource.
There's also skipOnDependencyFailure (defaults to true). If a dependency fails, dependent resources get skipped instead of failing too. Sometimes you want the opposite though:
jobs:
  - id: cleanup-job
    dependIds: ["main-app"]
    skipOnDependencyFailure: false # run even if main-app fails
How it all ties together
So the full flow looks like this:
- Hub controller polls database every syncInterval
- For each active row, it creates or updates a LynqNode CR
- Node controller picks up the LynqNode
- It renders all templates with the row's data
- It builds a dependency graph and sorts resources
- It applies each resource in order using Server-Side Apply
- It waits for readiness if configured
- It updates LynqNode status with what got created
When a row is deactivated or deleted (you can try this below):
- Hub controller detects the change
- LynqNode CR is deleted
- Finalizer runs cleanup based on each resource's deletionPolicy
- Resources with Delete policy get removed
- Resources with Retain policy get orphan labels added
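To trigger that teardown yourself, flip the row off and watch. Again a sketch assuming the schema from the hub example and mysql client access:

mysql -h mysql.default.svc.cluster.local -u node_reader -p nodes \
  -e "UPDATE node_data SET is_active = 0 WHERE node_id = 'acme';"
kubectl get lynqnodes -w   # the acme node goes away within one syncInterval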
You can watch this in action:
kubectl get lynqnodes -w
kubectl describe lynqnode <name>
The status shows exactly what's happening:
status:
  desiredResources: 5
  readyResources: 5
  failedResources: 0
  appliedResources:
    - "Deployment/default/acme-app@app"
    - "Service/default/acme-svc@svc"
Try it yourself
The best way to understand this is to actually run through it. I set up a Killercoda scenario that walks through the whole thing:
https://killercoda.com/lynq-operator/course/killercoda/lynq-quickstart
It takes about 10 minutes. You'll set up MySQL, deploy the operator, create a hub and template, and then insert/update/delete rows to see how resources respond.
When would you actually use this?
This pattern works well when:
- You already have business data in a database (users, orgs, tenants)
- You need fast provisioning, not commit-sync-reconcile loops
- You want to replicate the same resources many times with different values
- Template versioning matters more than instance versioning
It's not for everything. If you have a small number of snowflake environments, traditional IaC is probably fine. But if you're replicating the same pattern hundreds of times based on database records, this approach is worth considering.
Docs: https://lynq.sh
GitHub: https://github.com/k8s-lynq/lynq
Happy to answer questions in the comments.