Imagine each Kube object is a unique dinosaur species, and you're a novice Kube user/operator or a seasoned Kube application developer.
The term "KIND" could then mean several things:
- it could mean kind, your shortcut framework for running a Kube cluster inside Docker... or
- it could mean the "kind:" field you declare when creating your own Kube YAML objects.
- it could mean a specific kind of dinosaur. Just joking.
But you, as my reader here on Dev.to, will not be satisfied by run-of-the-mill tips and tricks. You want more action and value... you want something to showcase to your senior colleagues that Kubernetes can indeed be extended by writing your own Kube API objects.
So let's go with the second bullet above. All of it becomes possible by leveraging the Kubernetes API's CRD: the "Custom Resource Definition". Let's dig into that second bullet and create your own KIND of dinosaur... I mean, Kube object!
Use Case! Create your new kind of dinosaur!
To demonstrate the WHY and the HOW of CRDs, imagine these requirements in your Kube cluster:
- you need multiple ingress controllers to avoid a single point of failure.
- you need a unique IngressClass corresponding to each of those ingress controllers, to keep traffic routes from mixing.
For our solution, you want to use a CRD as a "wrapper" of sorts, so you can "bootstrap" multiple Kube objects in one go, while giving the user the freedom to enforce UNIQUENESS by making some Kube metadata configurable.
Diagram flow
So here's the flow of how the CRD will work:
The image above illustrates how a CRD acts as a bootstrap with configurable metadata.
To start off, imagine these steps:
STEP 1
Prepare a consolidated YAML file that contains all the components needed for your ingress-controller & IngressClass pattern:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: primary-ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.1.3
  name: primary-ingress-nginx-controller
spec:
  externalTrafficPolicy: Local
  ports:
  - appProtocol: http
    name: http
    port: 80
    protocol: TCP
    targetPort: http
  - appProtocol: https
    name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: primary-ingress-nginx
  type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: primary-ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.1.3
  name: primary-ingress-nginx-controller-admission
spec:
  ports:
  - appProtocol: https
    name: https-webhook
    port: 443
    targetPort: webhook
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: primary-ingress-nginx
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: primary-ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.1.3
  name: primary-ingress-nginx-controller
spec:
  replicas: 2
  minReadySeconds: 0
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/component: controller
      app.kubernetes.io/instance: ingress-nginx
      app.kubernetes.io/name: primary-ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/component: controller
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: primary-ingress-nginx
    spec:
      containers:
      - args:
        - /nginx-ingress-controller
        - --publish-service=kube-system/primary-ingress-nginx-controller
        - --election-id=ingress-controller-leader
        - --controller-class=k8s.io/ingress-nginx
        - --ingress-class=primary-ingress-nginx-class
        - --configmap=kube-system/primary-ingress-nginx-controller
        - --validating-webhook=:8443
        - --validating-webhook-certificate=/usr/local/certificates/cert
        - --validating-webhook-key=/usr/local/certificates/key
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: LD_PRELOAD
          value: /usr/local/lib/libmimalloc.so
        image: k8s.gcr.io/ingress-nginx/controller:v1.1.1
        imagePullPolicy: IfNotPresent
        lifecycle:
          preStop:
            exec:
              command:
              - /wait-shutdown
        livenessProbe:
          failureThreshold: 5
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: controller
        ports:
        - containerPort: 80
          name: http
          protocol: TCP
        - containerPort: 443
          name: https
          protocol: TCP
        - containerPort: 8443
          name: webhook
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources:
          limits:
            memory: 1Gi
          requests:
            cpu: 100m
            memory: 90Mi
        securityContext:
          allowPrivilegeEscalation: true
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - ALL
          runAsUser: 101
        volumeMounts:
        - mountPath: /usr/local/certificates/
          name: webhook-cert
          readOnly: true
      dnsPolicy: ClusterFirst
      nodeSelector:
        kubernetes.io/os: linux
      serviceAccountName: primary-ingress-nginx
      terminationGracePeriodSeconds: 300
      volumes:
      - name: webhook-cert
        secret:
          secretName: primary-ingress-nginx-admission
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: dedicated
                operator: In
                values:
                - essentials
      tolerations:
      - key: dedicated
        operator: Equal
        value: essentials
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: primary-ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.1.3
  name: primary-ingress-nginx-class
spec:
  controller: k8s.io/ingress-nginx
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: primary-ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.1.3
  name: primary-ingress-nginx-admission
webhooks:
- admissionReviewVersions:
  - v1
  clientConfig:
    service:
      name: primary-ingress-nginx-controller-admission
      namespace: kube-system
      path: /networking/v1/ingresses
  failurePolicy: Fail
  matchPolicy: Equivalent
  name: validate.nginx.ingress.kubernetes.io
  rules:
  - apiGroups:
    - networking.k8s.io
    apiVersions:
    - v1
    operations:
    - CREATE
    - UPDATE
    resources:
    - ingresses
  sideEffects: None
The value of using CRDs
In the YAML example above, we consolidated two Service objects alongside a Deployment, an IngressClass, and a ValidatingWebhookConfiguration.
At this point you may be thinking: I can simply and easily instantiate these objects using the consolidated YAML above, so why would I need a CRD?
That is a valid question! The value of a CRD comes when you want the metadata to be configurable prior to object instantiation.
Imagine the challenge of externalizing the metadata value "ingress-nginx". How would you turn it into a single configurable item? Would you have to hand-edit your consolidated YAML every time you implement this pattern? That is time-consuming and prone to human error.
For our use case, we want to externalize these three metadata values, to make them user-defined:
- app.kubernetes.io/instance: ingress-nginx
- app.kubernetes.io/part-of: ingress-nginx
- name: primary-ingress-nginx-*
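To see concretely what "externalizing" these values amounts to, here is a minimal sketch in Python (the template and `render()` helper are illustrative, not part of any real tool): the user-defined values get substituted into the manifest, which is essentially what a CRD-driven workflow automates for you.

```python
# Minimal sketch of externalizing the three metadata values.
# SERVICE_TEMPLATE and render() are hypothetical, for illustration only.
SERVICE_TEMPLATE = """\
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/instance: {instance}
    app.kubernetes.io/part-of: {part_of}
  name: {name}-controller
"""

def render(instance: str, part_of: str, name: str) -> str:
    """Substitute the user-defined metadata into the manifest template."""
    return SERVICE_TEMPLATE.format(instance=instance, part_of=part_of, name=name)

if __name__ == "__main__":
    print(render("melvin-nginx", "melvin-nginx", "melvin-ingress-nginx"))
```

Doing this by hand across every object in the consolidated YAML is exactly the error-prone refactoring we want to avoid.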
STEP 2
Define the Custom Resource Definition itself:
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # must follow the pattern <plural>.<group>
  name: ingresscontrollerclasses.example.com
spec:
  group: example.com
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              instanceName:
                type: string
              partOf:
                type: string
              name:
                type: string
              replicas:
                type: integer
              ingressClassName:
                type: string
              serviceAnnotations:
                type: object
                # allow arbitrary string annotations; structural schemas
                # prune fields that are not declared
                additionalProperties:
                  type: string
  scope: Namespaced
  names:
    plural: ingresscontrollerclasses
    singular: ingresscontrollerclass
    kind: IngressControllerClass
    shortNames:
    - icc
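One gotcha worth calling out: the API server rejects a CRD whose metadata.name is not exactly the plural name joined with the group. A quick sanity check, sketched in plain Python against a dict that mirrors our CRD manifest:

```python
# Sanity-check the CRD naming rule: metadata.name == "<plural>.<group>".
# The dict below mirrors the relevant fields of our CRD manifest.
crd = {
    "metadata": {"name": "ingresscontrollerclasses.example.com"},
    "spec": {
        "group": "example.com",
        "names": {
            "plural": "ingresscontrollerclasses",
            "singular": "ingresscontrollerclass",
            "kind": "IngressControllerClass",
        },
    },
}

def crd_name_is_valid(crd: dict) -> bool:
    """The API server requires a CRD name of the form <plural>.<group>."""
    expected = f'{crd["spec"]["names"]["plural"]}.{crd["spec"]["group"]}'
    return crd["metadata"]["name"] == expected

print(crd_name_is_valid(crd))  # True for our CRD
```

Catching this locally saves you a confusing round-trip with kubectl apply.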
There are a lot of conceptual attributes introduced in the example above, but I have faith in your capacity to associate the new stuff with your stock knowledge. Here, you are instantiating a built-in Kubernetes object of "kind: CustomResourceDefinition". Note that its metadata.name must follow the pattern <plural>.<group>, which is why the CRD is named ingresscontrollerclasses.example.com.
But the more critical attributes to introduce here form the mechanism for externalizing the needed metadata, defined as object properties:
properties:
  instanceName:
    type: string
  partOf:
    type: string
  name:
    type: string
  ingressClassName:
    type: string
  # extra properties:
  replicas:
    type: integer
  serviceAnnotations:
    type: object
    additionalProperties:
      type: string
The crucial properties to externalize here are instanceName, partOf, name, and ingressClassName, because they enforce the uniqueness of the ingress controllers and their corresponding ingress classes. The extra properties (replicas and serviceAnnotations) are nice to have.
STEP 3
Create your very own Kube KIND (Custom Resource)!
So here's the reason why you need a CRD: it gives you a strict, validated pattern that users can follow to create a "Custom Resource".
After instantiating your CRD, feel free to create your very own customized Kube kind --> kind: IngressControllerClass
apiVersion: example.com/v1
kind: IngressControllerClass
metadata:
  name: my-nginx-ingress-setup
spec:
  instanceName: "melvin-nginx"
  partOf: "melvin-nginx"
  name: "melvin-ingress-nginx"
  replicas: 2
  ingressClassName: "melvin-ingress-nginx-class"
  serviceAnnotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
As you can see in the "spec" section of your new YAML kind, the operator now has the freedom to define the name of the new ingress controller, together with the corresponding ingress class name and associated metadata.
That way, the CRD enforces tight coupling, ensuring that the resulting ingress controller and IngressClass are tightly associated with each other and remain unique relative to other parallel implementations.
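One thing to keep in mind: a CRD plus a Custom Resource only stores the user's intent; something still has to watch those objects and create the Services, Deployment, and IngressClass. That is the job of a controller/operator (for example, one built with kubebuilder, operator-sdk, or kopf). As a hedged sketch (the function name and structure below are hypothetical), the controller's core rendering step might look like this:

```python
# Hypothetical sketch of the controller-side rendering step: given the
# spec of an IngressControllerClass custom resource, produce the metadata
# shared by the generated Service/Deployment/IngressClass objects.
# A real controller would watch the API server and create the objects;
# here we only show the spec-to-metadata mapping.
def render_common_metadata(spec: dict) -> dict:
    return {
        "labels": {
            "app.kubernetes.io/component": "controller",
            "app.kubernetes.io/instance": spec["instanceName"],
            "app.kubernetes.io/name": spec["name"],
            "app.kubernetes.io/part-of": spec["partOf"],
        },
        "name": f'{spec["name"]}-controller',
    }

# Mirrors the spec of the my-nginx-ingress-setup Custom Resource above.
cr_spec = {
    "instanceName": "melvin-nginx",
    "partOf": "melvin-nginx",
    "name": "melvin-ingress-nginx",
    "replicas": 2,
    "ingressClassName": "melvin-ingress-nginx-class",
}

meta = render_common_metadata(cr_spec)
print(meta["name"])  # melvin-ingress-nginx-controller
```

Because every generated object derives its labels and name from the same spec, the uniqueness and coupling we wanted fall out of the design for free.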
A proper Kube coupling like the image below:
If you like this article, tell my boss to buy me a coffee! =)