What is mTLS?
mTLS (Mutual Transport Layer Security) is an enhanced version of TLS (Transport Layer Security) in which the client and the server authenticate each other during connection establishment.
A very simple flow for mTLS (note the arrow directions):
+--------+ ---1--> +--------+
| Client | <--2--- | Server |
+--------+ <--3--> +--------+
1a) Client sends a request to the server along with its certificate.
1b) Server verifies the client certificate using the public key of the CA that signed it. (The server keeps this CA certificate in its trust store/DB.)
2a) Server sends a response along with its certificate.
2b) Client verifies the server certificate the same way, using the CA certificate it trusts.
3) After mutual authentication succeeds, the client and server exchange encrypted traffic.
Typically, the server and client certificates are signed by a CA. Since the server can manage multiple clients, it usually maintains only the public key (certificate) of the CA used to sign all the client certificates, rather than a key per client.
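This trust model is easy to exercise locally with OpenSSL. The sketch below uses throwaway names (demo-ca, demo-client) and small 2048-bit keys purely for illustration; the point is that the "server" side only ever needs demo-ca.pem to validate the client:

```shell
# Create a throwaway CA (the trust anchor both sides hold).
openssl req -x509 -sha256 -nodes -days 1 -newkey rsa:2048 \
  -subj '/CN=demo-ca' -keyout demo-ca.key -out demo-ca.pem
# Create a client key and CSR, then have the CA sign it.
openssl req -newkey rsa:2048 -nodes -keyout demo-client.key \
  -out demo-client.csr -subj '/CN=demo-client'
openssl x509 -req -sha256 -days 1 -CA demo-ca.pem -CAkey demo-ca.key \
  -CAcreateserial -in demo-client.csr -out demo-client.pem
# The verifier only needs the CA certificate, not the client's own key:
openssl verify -CAfile demo-ca.pem demo-client.pem   # demo-client.pem: OK
```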
How is this different from regular TLS?
Regular TLS is unidirectional, i.e., only the client verifies the certificate presented by the server; the server does not verify the client's certificate.
Using mTLS in OpenTelemetry (OTEL)
To use mTLS between two OTEL components, we need client and server certificates.
Scenario
Two OTEL Collectors: one as the client (exporter) and the other as the server (receiver). Metrics are sent/forwarded from the client to the server. Since this is a demo (or internal) setup, we will use self-signed certificates.
+-------------+ +--------------+
| Otel Client |-------GRPC----------->| OTEL Server |
| (Exporter) | | (Receiver) |
+-------------+ +--------------+
We'll be running this inside a Kubernetes cluster in 2 different namespaces.
Prerequisites
- A Kubernetes cluster
- kubectl & helm
- System with OpenSSL client installed
Demo
1) Generate certificates
a) Generate the rootCA certificate
Since we are using self-signed certificates, we will create and use our own rootCA.
openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:4096 -subj '/O=A Team/CN=root.com' -keyout rootCA.key -out rootCA.pem
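A quick sanity check on the result: for a self-signed root, the subject and issuer are identical. (The snippet repeats the command above so it runs standalone.)

```shell
# Step a) again, so this snippet is self-contained:
openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:4096 \
  -subj '/O=A Team/CN=root.com' -keyout rootCA.key -out rootCA.pem
# Self-signed => the subject and issuer lines match:
openssl x509 -in rootCA.pem -noout -subject -issuer -dates
```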
b) Generate the server certificate
We first create a certificate signing request (CSR) for the server certificate, server.csr, along with its key, server.key:
openssl req -out server.csr -newkey rsa:4096 -nodes -keyout server.key -subj "/CN=test.com/O=PM" -config server.cnf -extensions req_ext
You will notice a server.cnf config file. It contains extensions for adding Subject Alternative Names (SANs), which are alternative host names that can identify the server.
[req]
default_bits = 4096
prompt = no
default_md = sha256
req_extensions = req_ext
distinguished_name = dn
[ dn ]
CN = 'test.com'
[ req_ext ]
subjectAltName = @alt_names
[alt_names]
DNS.1 = 'test.com'
DNS.2 = 'test.server.svc.cluster.local'
DNS.3 = 'localhost'
DNS.4 = 'test'
# add more for other names
We then use server.csr to create a signed server certificate, server.pem. The signing is done with the rootCA certificate generated in step a.
openssl x509 -req -sha256 -days 365 -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -in server.csr -out server.pem -extensions req_ext -extfile server.cnf
Notice: we use the rootCA certificate and private key to sign the server certificate.
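Before moving on, it is worth confirming that server.pem chains back to the rootCA and carries the expected SANs. The snippet below is a self-contained re-run of steps a) and b) (throwaway 2048-bit keys and a 1-day lifetime to keep it fast; run it in a scratch directory, as it overwrites rootCA.* and server.*):

```shell
# Recreate server.cnf with the SAN list from the article.
cat > server.cnf <<'EOF'
[req]
prompt = no
default_md = sha256
req_extensions = req_ext
distinguished_name = dn
[dn]
CN = test.com
[req_ext]
subjectAltName = @alt_names
[alt_names]
DNS.1 = test.com
DNS.2 = test.server.svc.cluster.local
DNS.3 = localhost
DNS.4 = test
EOF
# Root CA, server CSR, and CA-signed server certificate.
openssl req -x509 -sha256 -nodes -days 1 -newkey rsa:2048 \
  -subj '/O=A Team/CN=root.com' -keyout rootCA.key -out rootCA.pem
openssl req -out server.csr -newkey rsa:2048 -nodes -keyout server.key \
  -subj '/CN=test.com/O=PM' -config server.cnf -extensions req_ext
openssl x509 -req -sha256 -days 1 -CA rootCA.pem -CAkey rootCA.key \
  -CAcreateserial -in server.csr -out server.pem \
  -extensions req_ext -extfile server.cnf
# Chain check: does server.pem verify against our root?
openssl verify -CAfile rootCA.pem server.pem        # server.pem: OK
# SAN check: are all expected host names present?
openssl x509 -in server.pem -noout -ext subjectAltName
```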
c) Generate the client certificate
Similarly, we create the client certificate and key: client.pem and client.key. For ease of this demo, we use the same rootCA to sign the client certificate. As before, a client.cnf file is maintained for the SAN:
[req]
default_bits = 4096
prompt = no
default_md = sha256
req_extensions = req_ext
distinguished_name = dn
[ dn ]
CN = 'localhost'
[ req_ext ]
subjectAltName = @alt_names
[alt_names]
DNS.1 = 'localhost'
Creating the CSR and signing it with the rootCA:
openssl req -out client.csr -newkey rsa:4096 -nodes -keyout client.key -subj "/CN=localhost/O=client" -extensions req_ext -config client.cnf
openssl x509 -req -sha256 -days 365 -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -in client.csr -out client.pem -extensions req_ext -extfile client.cnf
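With both certificates signed, you can rehearse the full mTLS handshake locally with openssl s_server/s_client before touching Kubernetes. This sketch generates its own throwaway certificates so it runs standalone (CN/SAN localhost and port 18443 are arbitrary choices here; run it in a scratch directory, since it overwrites rootCA.*, server.*, and client.*):

```shell
# Throwaway root CA, server cert (SAN localhost), and client cert.
openssl req -x509 -sha256 -nodes -days 1 -newkey rsa:2048 \
  -subj '/CN=demo-root' -keyout rootCA.key -out rootCA.pem
openssl req -newkey rsa:2048 -nodes -keyout server.key -out server.csr \
  -subj '/CN=localhost'
printf 'subjectAltName=DNS:localhost\n' > san.cnf
openssl x509 -req -sha256 -days 1 -CA rootCA.pem -CAkey rootCA.key \
  -CAcreateserial -in server.csr -out server.pem -extfile san.cnf
openssl req -newkey rsa:2048 -nodes -keyout client.key -out client.csr \
  -subj '/CN=localhost/O=client'
openssl x509 -req -sha256 -days 1 -CA rootCA.pem -CAkey rootCA.key \
  -CAcreateserial -in client.csr -out client.pem
# -Verify makes the server REQUIRE a client certificate (the "m" in mTLS).
openssl s_server -accept 18443 -cert server.pem -key server.key \
  -CAfile rootCA.pem -Verify 1 -quiet &
server_pid=$!
sleep 1
# The client presents its certificate and checks the server's against the CA.
echo Q | openssl s_client -connect localhost:18443 \
  -cert client.pem -key client.key -CAfile rootCA.pem \
  -verify_return_error 2>/dev/null | grep 'Verify return code'
kill $server_pid
```

A `Verify return code: 0 (ok)` line means both sides authenticated; re-running the s_client command without -cert/-key should make the handshake fail, which is exactly what the receiver we deploy next will enforce.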
2) Deploy on Kubernetes
Prerequisite: Please install the opentelemetry-operator in the cluster.
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
helm install otel-operator open-telemetry/opentelemetry-operator --namespace otel-collector-operator --create-namespace --set "manager.collectorImage.repository=otel/opentelemetry-collector-k8s" --set admissionWebhooks.certManager.enabled=false --set admissionWebhooks.autoGenerateCert.enabled=true
a) Create namespaces
Create the two namespaces:
kubectl create ns server
kubectl create ns client
b) Create Kubernetes secrets
The certificates that were generated in step 1 will be used to create the secrets. These secrets will be mounted into the OTEL pods in the next step.
kubectl create secret generic server -n server --from-file=tls.pem=server.pem --from-file=tls.key=server.key --from-file=ca.pem=rootCA.pem # server certificates
kubectl create secret generic client -n client --from-file=tls.pem=client.pem --from-file=tls.key=client.key --from-file=ca.pem=rootCA.pem # client certificates
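A common pitfall at this step is packaging a certificate with the wrong private key in a secret. The certificate and key must share the same public key, which you can check by comparing digests (shown here with a hypothetical throwaway pair, demo.pem/demo.key, so the snippet runs standalone; substitute server.* or client.* in practice):

```shell
# Throwaway self-signed pair for illustration; use server.pem/server.key
# (or client.pem/client.key) in practice.
openssl req -x509 -sha256 -nodes -days 1 -newkey rsa:2048 \
  -subj '/CN=pair-check' -keyout demo.key -out demo.pem
# The two digests must be identical, or the TLS handshake will fail.
openssl x509 -in demo.pem -pubkey -noout | openssl sha256
openssl pkey -in demo.key -pubout | openssl sha256
```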
c) Deploy the OTEL Receiver (server)
The OTEL receiver is the server that accepts the metrics. For our demo, the OTEL server receives the metrics on port 4317 and prints them in its logs. The server is deployed in the server namespace, and the secret is mounted as a volume for the otel-collector receiver to use.
We also create a Service named test with port 4317. While this is optional for the demo, it is good practice. Additionally, certain service meshes, e.g. Istio, rely on the port name to detect the protocol (grpc is the name of the port in the Service here).
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: test
  namespace: server
  labels:
    app: otel-collector
spec:
  mode: deployment
  volumeMounts:
    - name: server
      mountPath: /etc/pki/ca-trust/source/server-ca
      readOnly: true
  volumes:
    - name: server
      secret:
        secretName: server
  config: |
    receivers:
      otlp:
        protocols:
          grpc:
            #endpoint: test.server.svc.cluster.local:4317
            tls:
              cert_file: /etc/pki/ca-trust/source/server-ca/tls.pem
              key_file: /etc/pki/ca-trust/source/server-ca/tls.key
              client_ca_file: /etc/pki/ca-trust/source/server-ca/ca.pem
    exporters:
      # NOTE: Prior to v0.86.0 use `logging` instead of `debug`.
      debug:
        verbosity: detailed
    service:
      pipelines:
        metrics:
          receivers: [otlp]
          processors: []
          exporters: [debug]
---
apiVersion: v1
kind: Service
metadata:
  name: test
  namespace: server
spec:
  selector:
    app.kubernetes.io/component: opentelemetry-collector
    app.kubernetes.io/instance: server.test
    app.kubernetes.io/name: test-collector
  type: ClusterIP
  ports:
    - name: grpc # important for Istio!
      protocol: TCP
      port: 4317
Verify that the otel-collector instance and service are created:
kubectl get pods,svc -n server
d) Deploy the OTEL Exporter (client)
The OTEL exporter is the client that generates (i.e., sources) the metrics. We use the hostmetrics receiver plugin; this will vary according to your needs.
This collector is deployed in the client namespace. The client certificates are mounted via volumeMounts just as before.
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: client
  namespace: client
  labels:
    app: client
spec:
  mode: deployment
  volumeMounts:
    - name: client
      mountPath: /etc/pki/ca-trust/source/client-ca
      readOnly: true
  volumes:
    - name: client
      secret:
        secretName: client
  config: |
    receivers:
      # Data source: host metrics (CPU and disk scrapers)
      hostmetrics:
        scrapers:
          cpu:
          disk:
    processors:
      # Tag every metric with an environment attribute
      attributes:
        actions:
          - key: environment
            value: external-test-3
            action: insert
    exporters:
      otlp:
        endpoint: test.server.svc.cluster.local:4317
        tls:
          ca_file: /etc/pki/ca-trust/source/client-ca/ca.pem
          cert_file: /etc/pki/ca-trust/source/client-ca/tls.pem
          key_file: /etc/pki/ca-trust/source/client-ca/tls.key
      debug: # just to print the metrics in the client logs as well
    service:
      pipelines:
        metrics:
          receivers: [hostmetrics]
          processors: [attributes]
          exporters: [otlp, debug]
Verify that the otel-collector instance is created:
kubectl get pods -n client
e) Verify the setup
Check the logs of the server OTEL collector (receiver). You should see the metrics generated by the client OTEL collector (exporter).
kubectl logs -f <otel-receiver-pod> -n server