[Hands-on] Testing Separate Ports in Athenz ZTS

Jeongwoo Kim

The goal of this test is to verify the new port-URI filtering feature introduced in PR #3190, "Adding support to filter requests based on port-uri combination".

Result

I successfully reproduced the multiple ports configuration on the ZTS server. Initially, the server was only listening on the default port 4443:

# Defaulted container "athenz-zts-server" out of: athenz-zts-server, zms-cli (init), athenz-conf (init), athenz-plugins (init)
# sh: 1: ss: not found
# Active Internet connections (only servers)
# Proto Recv-Q Send-Q Local Address           Foreign Address         State
# tcp6       0      0 :::4443                 :::*                    LISTEN

After applying the new configuration, it now listens on both the original port 4443 AND the newly added port 8443:

# Active Internet connections (only servers)
# Proto Recv-Q Send-Q Local Address           Foreign Address         State
# tcp6       0      0 :::8443                 :::*                    LISTEN
# tcp6       0      0 :::4443                 :::*                    LISTEN


Steps to Reproduce

Walk through the following steps to achieve the same result.

Setup: Working Directory & Athenz Server

Let's quickly create a test working directory:

test_name=separate_port_in_athenz
tmp_dir=$(date +%y%m%d_%H%M%S_$test_name)
mkdir -p ~/test_dive/$tmp_dir
cd ~/test_dive/$tmp_dir

This tutorial has a one-command setup script that creates a local Kubernetes cluster with kind and deploys the Athenz server into it:

git clone https://github.com/mlajkim/dive-manifest.git manifest
make -C manifest setup

# ...
# ✅ Athenz Server deployment finished

Check that every pod is running:

kubectl get pods -n athenz

# NAME                                 READY   STATUS    RESTARTS   AGE
# athenz-cli-574d747dff-qk8lr          1/1     Running   0          5m5s
# athenz-db-0                          1/1     Running   0          5m5s
# athenz-ui-59f7f77667-wgr5l           2/2     Running   0          5m4s
# athenz-zms-server-568d4cfd89-whcjn   1/1     Running   0          5m4s
# athenz-zts-server-6966ff7f66-897lg   1/1     Running   0          5m4s

Setup: port-uri configuration

Please take a look at what is inside https://raw.githubusercontent.com/AthenZ/athenz/refs/heads/master/containers/jetty/conf/port-uri.json.example.

This sample configuration file is basically saying:

  • This file is a security rulebook that decides which API paths can be accessed and whether strict security (mTLS) is required for four different server ports.
  • Port 4443 is the main port that allows all API requests, but it absolutely requires mTLS to keep things safe.
  • The other three ports don't need mTLS, but they are strictly locked down to handle only their own specific jobs, like registration (9443), health checks (8443), or OpenID (443).
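To make that structure concrete, here is a minimal sketch of such a rulebook (not the upstream example file itself, which has four port entries) plus a jq one-liner to list each port with its mTLS requirement, assuming jq is installed:

```shell
# Hypothetical minimal rulebook with the same shape as port-uri.json.example
cat <<'EOF' > /tmp/port-uri-sketch.json
{
  "ports": [
    { "port": 4443, "mtls_required": true,  "allowed_endpoints": [] },
    { "port": 8443, "mtls_required": false,
      "allowed_endpoints": [ { "path": "/zts/v1/status", "methods": ["GET"] } ] }
  ]
}
EOF

# List each port and whether it enforces mTLS
jq -r '.ports[] | "\(.port) mtls=\(.mtls_required)"' /tmp/port-uri-sketch.json

# 4443 mtls=true
# 8443 mtls=false
```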

Use the following commands to create a K8s cm from the sample file:

_ns=athenz
_cm_name=port-uri-config
_cm_file_name=port-uri.json

cat <<EOF > $_cm_file_name
{
  "ports": [
    {
      "port": 4443,
      "mtls_required": false,
      "description": "Main API port - all endpoints with default 4443, but no mTLS required for test sake",
      "allowed_endpoints": []
    },
    {
      "port": 8443,
      "mtls_required": false,
      "description": "Health/status port - /zts/v1/status (ZTS API) and /status (file-based health check returning OK)",
      "allowed_endpoints": [
        {
          "path": "/zts/v1/status",
          "methods": ["GET"],
          "description": "ZTS API status - returns JSON { \"code\": 200, \"message\": \"OK\" }"
        },
        {
          "path": "/status",
          "methods": ["GET"],
          "description": "Legacy file-based health check - returns OK when athenz.health_check_uri_list includes /status"
        }
      ]
    }
  ]
}
EOF

kubectl create ns $_ns 2>/dev/null || true && \
kubectl create configmap $_cm_name -n $_ns --from-file=$_cm_file_name --dry-run=client -o yaml | kubectl apply -f - && \
rm $_cm_file_name

# configmap/port-uri-config created

Check:

kubectl get cm $_cm_name -n $_ns

# NAME               DATA   AGE
# port-uri-config    1      18s

Setup: Volume Injection

Let's mount the cm created above as a file inside the ZTS server pod.

_ns=athenz
_cm_name=port-uri-config
_cm_file_name=port-uri.json
_deploy_name=athenz-zts-server
_mnt_path=/opt/athenz/zts/conf/$_cm_file_name
_patch_json=$(cat <<EOF
[
  {
    "op": "add",
    "path": "/spec/template/spec/volumes/-",
    "value": {
      "name": "port-uri-vol",
      "configMap": {
        "name": "${_cm_name}"
      }
    }
  },
  {
    "op": "add",
    "path": "/spec/template/spec/containers/0/volumeMounts/-",
    "value": {
      "name": "port-uri-vol",
      "mountPath": "${_mnt_path}",
      "subPath": "${_cm_file_name}"
    }
  }
]
EOF
)

kubectl patch deployment $_deploy_name -n $_ns --type='json' -p="$_patch_json"

# deployment.apps/athenz-zts-server patched

Check:

_deploy_name=athenz-zts-server
_ns=athenz

kubectl exec -it deployment/$_deploy_name -n $_ns -- sh -c 'ls -l /opt/athenz/zts/conf/port-uri.json && cat /opt/athenz/zts/conf/port-uri.json'

# ...
#         {
#           "path_starts_with": "/zts/v1/.well-known",
#           "path_ends_with": "openid-configuration",
#           "methods": ["GET"],
#           "description": "OpenID discovery (alternative using path_starts_with and path_ends_with)"
#         }
#       ]
#     }
#   ]
# }

Setup: port open in ZTS

[!TIP]
You can learn which ports are required from the port-uri.json file.

[!NOTE]
We only need the status port, but for reference's sake, let's open multiple ports (not a security best practice, but this is a tutorial!).

The default port 4443 (HTTPS) is used for all API requests by manifest default. Let's open the other ports as well in Kubernetes:

_deploy_name=athenz-zts-server
_ns=athenz

kubectl patch deployment $_deploy_name -n $_ns --type='json' -p='[
  { "op": "add", "path": "/spec/template/spec/containers/0/ports/-", "value": { "containerPort": 8443, "name": "status", "protocol": "TCP" } },
  { "op": "add", "path": "/spec/template/spec/containers/0/ports/-", "value": { "containerPort": 9443, "name": "instance", "protocol": "TCP" } },
  { "op": "add", "path": "/spec/template/spec/containers/0/ports/-", "value": { "containerPort": 443, "name": "openid", "protocol": "TCP" } }
]'

# deployment.apps/athenz-zts-server patched

Check:

_deploy_name=athenz-zts-server
_ns=athenz

kubectl get deployment $_deploy_name -n $_ns -o custom-columns="PORTS:.spec.template.spec.containers[0].ports[*].containerPort"

# PORTS
# 4443,8443,9443,443

Setup: Readiness/Liveness watching the port 8443

[!NOTE]
This phase deliberately fails readiness state and will be fixed in the next phase.

We have done the following so far:

  • k8s cm created and mounted as a file in ZTS pod
  • ZTS deployment patched to open the ports required for communication between ZTS and requesters

Now we want to change the readiness probe to watch the port 8443 instead of 4443:

_ns=athenz
_deploy=athenz-zts-server
_new_status_port=8443
_patch_json=$(cat <<EOF
[
  {
    "op": "replace",
    "path": "/spec/template/spec/containers/0/readinessProbe/exec/command",
    "value": [
      "curl",
      "-s",
      "--fail",
      "--resolve",
      "athenz-zts-server.athenz:${_new_status_port}:127.0.0.1",
      "https://athenz-zts-server.athenz:${_new_status_port}/zts/v1/status"
    ]
  },
  {
    "op": "replace",
    "path": "/spec/template/spec/containers/0/livenessProbe/exec/command",
    "value": [
      "curl",
      "-s",
      "--fail",
      "--resolve",
      "athenz-zts-server.athenz:${_new_status_port}:127.0.0.1",
      "https://athenz-zts-server.athenz:${_new_status_port}/zts/v1/status"
    ]
  }
]
EOF
)

kubectl patch deployment $_deploy -n $_ns --type='json' -p="$_patch_json"

# deployment.apps/athenz-zts-server patched

Let's check the status of the deployment:

_ns=athenz

kubectl get pods -n $_ns


# NAME                                 READY   STATUS    RESTARTS      AGE
# athenz-cli-574d747dff-7466j          1/1     Running   0             14m
# athenz-db-0                          1/1     Running   0             14m
# athenz-ui-59f7f77667-99ws6           2/2     Running   0             14m
# athenz-zms-server-568d4cfd89-tk8td   1/1     Running   0             14m
# athenz-zts-server-68686cbb54-rhxzs   0/1     Running   1 (48s ago)   109s

As expected, the pod fails the readiness check and goes into a restart loop. This happens because the ZTS server doesn't know about our mounted port-uri.json file yet, so it's still only listening on the default port 4443. We can verify this by checking the active ports inside the container:

_deploy_name=athenz-zts-server
_ns=athenz

kubectl exec -it deployment/$_deploy_name -n $_ns -- sh -c 'ss -ltn || netstat -tln'

# Defaulted container "athenz-zts-server" out of: athenz-zts-server, zms-cli (init), athenz-conf (init), athenz-plugins (init)
# sh: 1: ss: not found
# Active Internet connections (only servers)
# Proto Recv-Q Send-Q Local Address           Foreign Address         State
# tcp6       0      0 :::4443                 :::*                    LISTEN

Setup: athenz.properties

To fix the readiness problem, let's finally add a property athenz.port_uri_config to athenz.properties:

_cm_name=athenz-zts-conf
_ns=athenz
_prop_file="athenz.properties"
_new_line="athenz.port_uri_config=/opt/athenz/zts/conf/port-uri.json"

kubectl get cm $_cm_name -n $_ns -o json | \
jq --arg file "$_prop_file" --arg line "$_new_line" '.data[$file] += "\n" + $line + "\n"' | \
kubectl apply -f -

# configmap/athenz-zts-conf replaced
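The jq expression above simply appends one line to the athenz.properties entry inside the ConfigMap's data field. In isolation, the append behaves like this (a local sketch using a made-up one-property file, not the real athenz.properties):

```shell
# Simulate appending a property line to a ConfigMap data field with jq
echo '{ "data": { "athenz.properties": "example.existing.property=value" } }' |
jq --arg file "athenz.properties" \
   --arg line "athenz.port_uri_config=/opt/athenz/zts/conf/port-uri.json" \
   '.data[$file] += "\n" + $line + "\n"' |
jq -r '.data["athenz.properties"]'

# example.existing.property=value
# athenz.port_uri_config=/opt/athenz/zts/conf/port-uri.json
```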

In k8s, you need to restart the deployment so it picks up the new cm:

_deploy_name=athenz-zts-server
_ns=athenz

kubectl rollout restart deployment/$_deploy_name -n $_ns

# deployment.apps/athenz-zts-server restarted

Let's see if the listening ports have changed:

_deploy_name=athenz-zts-server
_ns=athenz

kubectl exec -it deployment/$_deploy_name -n $_ns -- sh -c 'ss -ltn || netstat -tln'

# Active Internet connections (only servers)
# Proto Recv-Q Send-Q Local Address           Foreign Address         State
# tcp6       0      0 :::8443                 :::*                    LISTEN
# tcp6       0      0 :::4443                 :::*                    LISTEN

Let's see the pod status as well:

_ns=athenz

kubectl get pods -n $_ns

# NAME                                 READY   STATUS    RESTARTS   AGE
# athenz-cli-574d747dff-7466j          1/1     Running   0          19m
# athenz-db-0                          1/1     Running   0          19m
# athenz-ui-59f7f77667-99ws6           2/2     Running   0          19m
# athenz-zms-server-568d4cfd89-tk8td   1/1     Running   0          19m
# athenz-zts-server-797757d5cb-dqzs8   1/1     Running   0          26s

Setup: Rollback to 4443

What happens when we roll the readiness and liveness probes back from 8443 to 4443?

_ns=athenz
_deploy=athenz-zts-server
_new_status_port=4443
_patch_json=$(cat <<EOF
[
  {
    "op": "replace",
    "path": "/spec/template/spec/containers/0/readinessProbe/exec/command",
    "value": [
      "curl",
      "-s",
      "--fail",
      "--resolve",
      "athenz-zts-server.athenz:${_new_status_port}:127.0.0.1",
      "https://athenz-zts-server.athenz:${_new_status_port}/zts/v1/status"
    ]
  },
  {
    "op": "replace",
    "path": "/spec/template/spec/containers/0/livenessProbe/exec/command",
    "value": [
      "curl",
      "-s",
      "--fail",
      "--resolve",
      "athenz-zts-server.athenz:${_new_status_port}:127.0.0.1",
      "https://athenz-zts-server.athenz:${_new_status_port}/zts/v1/status"
    ]
  }
]
EOF
)

kubectl patch deployment $_deploy -n $_ns --type='json' -p="$_patch_json"

# deployment.apps/athenz-zts-server patched

Let's see whether the pod status changes from Running:

sleep 5
_ns=athenz

kubectl get pods -n $_ns

# NAME                                 READY   STATUS    RESTARTS   AGE
# athenz-cli-574d747dff-7466j          1/1     Running   0          21m
# athenz-db-0                          1/1     Running   0          21m
# athenz-ui-59f7f77667-99ws6           2/2     Running   0          21m
# athenz-zms-server-568d4cfd89-tk8td   1/1     Running   0          21m
# athenz-zts-server-78b94c8948-884qq   1/1     Running   0          35s

Why is it still Running after rolling back to 4443? If you recall our port-uri.json configuration, port 4443 has an empty allowed_endpoints array ([]), meaning it accepts all paths, including /zts/v1/status. This overlapping configuration enables a seamless, zero-downtime migration to the new port in production environments.
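You can confirm this from the config itself: an empty allowed_endpoints list means no path restriction on that port. A quick local check against the same two-port config we created in the ConfigMap step (assuming jq is installed):

```shell
# Recreate the two-port test config from the ConfigMap step (endpoint rules only)
cat <<'EOF' > /tmp/port-uri.json
{
  "ports": [
    { "port": 4443, "allowed_endpoints": [] },
    { "port": 8443, "allowed_endpoints": [ { "path": "/zts/v1/status", "methods": ["GET"] } ] }
  ]
}
EOF

# Port 4443 has zero endpoint rules, i.e. every path (including the status path) is allowed
jq -r '.ports[] | "\(.port) rules=\(.allowed_endpoints | length)"' /tmp/port-uri.json

# 4443 rules=0
# 8443 rules=1
```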

What I learned

I really enjoyed the following one-command setup, since doing the cluster setup and Athenz setup manually is a bit of a hassle:

git clone https://github.com/mlajkim/dive-manifest.git manifest
make -C manifest setup

Also, I sometimes crash the ZMS/ZTS servers, and deleting and redeploying them each time is a bit of a hassle.

What's next?

I will keep writing hands-on tutorials for other PRs as well. Stay tuned!

Closing

If you enjoyed this deep dive, please leave a like & subscribe for more!

Also, leave comments if you have any questions or suggestions. Thank you in advance!

