In the previous blog post I mentioned some benefits of IaD and how widely this concept is used in the cloud-native world. In this post, let me write about KCL for creating any declarative manifest.
KCL
KCL is an open source configuration and policy language, currently a Cloud Native Computing Foundation (CNCF) Sandbox project.
It was created to overcome the limitations of static formats like YAML or JSON, especially in complex cloud-native environments. To achieve this, KCL combines the simplicity of a data language with the power of a modern programming language, allowing you to create abstractions, enforce validations through a constraint system, and automate large-scale data management.
In short, you write KCL to generate configurations for Kubernetes and any other YAML-driven tool in a much safer, more scalable, and programmable way.
Think of KCL as a high-level language with a DevEx similar to Python.
There are some fundamental concepts that I will skip here, but if you have experience with other programming languages, you won't have any problems with the KCL foundations.
You can review these concepts at this official link.
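As a quick taste, here is a minimal sketch (the file name main.k and the values are mine, purely for illustration): a few plain typed values in KCL and the YAML that kcl run main.k prints for them.

appName = "demo"
replicas = 2
labels = {app = appName} # dicts, lists and schemas all compile straight to YAML

Resulting YAML:

appName: demo
replicas: 2
labels:
  app: demo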
KCL Schemas
Basic Schema
A schema is a blueprint that defines the structure of your configuration. It tells KCL exactly what attributes an object should have, what type they are, and what their default values are.
Key points (illustrated in the sketch after this list):
Attribute Definition: Within a schema, each attribute must have a type (name: str, replica: int).
Optional attributes (?): If an attribute may not be present, you add a ? to its name (labels?: {str:str}). If you don't, the attribute is mandatory.
Default values (=): You can assign a default value to any attribute (replica: int = 1). If you do not specify a value when creating an instance, this value will be used.
Create an Instance: You use the schema name followed by braces {} to create a concrete configuration from the blueprint.
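A minimal sketch tying these points together (the appConfig schema and its attributes are mine, purely for illustration):

schema appConfig:
    name: str              # mandatory: no default and no ?
    replica: int = 1       # has a default, so it can be omitted
    labels?: {str:str}     # optional: may be absent entirely

appConfig {
    name = "demo"
}

Resulting YAML:

name: demo
replica: 1

The unset optional attribute is simply left out of the output.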
1. Schemas and Data Types
The fundamental unit in KCL is the schema, which acts as a blueprint or template for your configuration. Within a schema, you define attributes with specific data types (str, bool, int) and can assign default values.
Example: Basic EKS cluster configuration.
schema eksCluster:
    clusterName: str
    encryptCluster: bool = True

# It's not necessary to specify 'encryptCluster' if its default value is desired.
eksCluster {
    clusterName = "segoja7-cluster"
}
Resulting YAML:
clusterName: segoja7-cluster
encryptCluster: true
2. Nested Schemas: Creating Structure
To handle more complex configurations, a schema can use another schema as the type for one of its attributes. This creates a nested and organized structure, mirroring the hierarchy of the final YAML.
Example: Adding a Node section.
schema nodeCluster:
    diskSize: int
    ami: str

schema eksCluster:
    clusterName: str
    encryptCluster: bool = True
    nodeConfig: nodeCluster # <-- Attribute whose type is another schema

eksCluster {
    clusterName = "segoja7-cluster"
    nodeConfig.diskSize = 20
    nodeConfig.ami = "BOTTLEROCKET_x86_64"
}
Resulting YAML:
clusterName: segoja7-cluster
encryptCluster: true
nodeConfig:
  diskSize: 20
  ami: BOTTLEROCKET_x86_64
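Instead of the dot-path syntax above, the nested block can also be written in place. This is a sketch of the same configuration as a stylistic alternative; it should produce the same YAML:

eksCluster {
    clusterName = "segoja7-cluster"
    nodeConfig = nodeCluster {
        diskSize = 20
        ami = "BOTTLEROCKET_x86_64"
    }
}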
3. Lists of Schemas: Handling Collections
To define a list of complex objects, you use bracket syntax [] with the schema name (e.g., [MySchema]). This ensures that every element in the list conforms to the structure defined in the schema.
Example: Adding a list of Tags.
schema nodeCluster:
    diskSize: int
    ami: str

schema tags:
    key: str
    value: str

schema eksCluster:
    clusterName: str
    encryptCluster: bool = True
    nodeConfig: nodeCluster
    tags: [tags] # <-- Attribute whose type is a LIST of another schema

eksCluster {
    clusterName = "segoja7-cluster"
    nodeConfig.diskSize = 20
    nodeConfig.ami = "BOTTLEROCKET_x86_64"
    tags = [
        { key = "Name", value = "NodeGroup" },
    ]
}
Resulting YAML:
clusterName: segoja7-cluster
encryptCluster: true
nodeConfig:
  diskSize: 20
  ami: BOTTLEROCKET_x86_64
tags:
- key: Name
  value: NodeGroup
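Because KCL is programmable, that list does not have to be written by hand. Here is a hedged sketch (the _commonTags helper variable is my own, not part of the example above) that builds the same kind of tag list with a comprehension; the leading underscore keeps the helper out of the YAML output:

_commonTags = {Name = "NodeGroup", Env = "dev"}

eksCluster {
    clusterName = "segoja7-cluster"
    nodeConfig.diskSize = 20
    nodeConfig.ami = "BOTTLEROCKET_x86_64"
    # one tags entry per key/value pair in _commonTags
    tags = [{key = k, value = v} for k, v in _commonTags]
}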
4. Schemas (Logic and Validation)
This is where schemas become really powerful. They are not just data containers; they can also contain logic and validation rules.
Schema arguments (schema MySchema[argument]:): You can pass parameters to a schema that are not final attributes but are used for internal logic.
The check block: This block is the "quality inspector" of your schema. Inside it, you write a series of rules (asserts) that each schema instance must comply with. If any rule fails, KCL will stop the compilation with an error. This is much cleaner than having loose asserts in the file.
schema nodeCluster:
    diskSize: int
    ami: str

schema tags:
    key: str
    value: str

schema eksCluster[env: str]:
    clusterName: str
    encryptCluster: bool = False
    nodeConfig: nodeCluster
    tags: [tags]

    check:
        encryptCluster != False if env == "prod", "Production cluster needs to be encrypted"

eksCluster(env="prod") {
    clusterName = "segoja7-cluster"
    nodeConfig.diskSize = 20
    nodeConfig.ami = "BOTTLEROCKET_x86_64"
    tags = [
        { key = "Name", value = "NodeGroup" },
    ]
}
Output with encryptCluster = false (the default) and the check using the production env argument:
kcl run main.k
EvaluationError
--> /main.k:18:1
|
18 | eksCluster(env="prod") {
| ^ Instance check failed
|
--> /main.k:15:1
|
15 | encryptCluster != False if env == "prod", "Production cluster needs to be encrypted"
| Check failed on the condition: Production cluster needs to be encrypted
|
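To satisfy the check, the prod instance has to set the attribute explicitly; this is the only change behind the second output below:

eksCluster(env="prod") {
    clusterName = "segoja7-cluster"
    encryptCluster = True
    nodeConfig.diskSize = 20
    nodeConfig.ami = "BOTTLEROCKET_x86_64"
    tags = [
        { key = "Name", value = "NodeGroup" },
    ]
}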
Output with encryptCluster = true and the check using the production env argument:
clusterName: segoja7-cluster
encryptCluster: true
nodeConfig:
  diskSize: 20
  ami: BOTTLEROCKET_x86_64
tags:
- key: Name
  value: NodeGroup
5. Inheritance, Protocols, and Mixins: When to Use Each
The key is to identify the relationship between your schemas.
Inheritance: Use inheritance when one schema is a more specialized version of another. It creates a clear parent-child hierarchy.
Protocol/Mixin: Use a protocol/mixin when you want to add a reusable bundle of features to different, unrelated schemas.
Protocol: A contract that defines the set of attributes a schema must have in order to use a specific piece of functionality.
Mixin: A block of code containing attributes and logic that can be "mixed in" or injected into a schema.
The data flow is always: Schema ➡️ Mixin ➡️ Protocol.
Example: Inheritance, protocols, and mixins.
schema nodeCluster:
    diskSize: int
    ami: str

schema tags:
    key: str
    value: str

schema eksCluster[env: str]:
    clusterName: str
    encryptCluster: bool = True
    nodeConfig: nodeCluster
    tags: [tags]
    monitoring_logs?: [str] = []

    check:
        encryptCluster != False if env == "prod", "Production cluster needs to be encrypted"

protocol monitoringProtocol: # <-- Protocol: contract for the mixin, with optional field enabledLogs
    enabledLogs?: bool

mixin monitoringMixin for monitoringProtocol: # <-- Mixin: adds monitoring_logs if enabledLogs is true
    if enabledLogs:
        monitoring_logs = [
            "api",
            "audit",
            "authenticator",
            "controllerManager",
            "scheduler",
        ]

schema eksClusterWithMonitoring[env: str](eksCluster): # <-- Inheritance
    mixin [monitoringMixin]
    enabledLogs: bool

prod = eksClusterWithMonitoring(env="prod") { # Prod with logs
    clusterName = "segoja7-cluster"
    nodeConfig.diskSize = 20
    nodeConfig.ami = "BOTTLEROCKET_x86_64"
    tags = [
        { key = "Name", value = "NodeGroup" },
    ]
    enabledLogs = True
}

dev = eksCluster(env="dev") { # Dev without logs
    clusterName = "segoja7-cluster"
    encryptCluster = False
    nodeConfig.diskSize = 20
    nodeConfig.ami = "BOTTLEROCKET_x86_64"
    tags = [
        { key = "Name", value = "NodeGroup" },
    ]
}
Final Resulting YAML:
prod:
  clusterName: segoja7-cluster
  encryptCluster: true
  nodeConfig:
    diskSize: 20
    ami: BOTTLEROCKET_x86_64
  tags:
  - key: Name
    value: NodeGroup
  monitoring_logs:
  - api
  - audit
  - authenticator
  - controllerManager
  - scheduler
  enabledLogs: true
dev:
  clusterName: segoja7-cluster
  encryptCluster: false
  nodeConfig:
    diskSize: 20
    ami: BOTTLEROCKET_x86_64
  tags:
  - key: Name
    value: NodeGroup
  monitoring_logs: []
CONCLUSION:
KCL is a language with the features of any other high-level programming language and a DevEx similar to Python. KCL produces declarative manifests that every cloud-native tool understands. With KCL we can introduce abstraction, early validation, and modularity, enabling platform teams to scale their operations safely and efficiently.