Below is the expanded, detailed guide. I keep it organized by resource and include keys, typical value types, allowed options, and short notes. Use it as a working YAML schema reference and cookbook.
1 — General YAML & Kubernetes basics (structure, syntax, common metadata)
A. File structure (top-level order)
A valid Kubernetes manifest document typically follows:
apiVersion: <string> # required: e.g. v1, apps/v1, networking.k8s.io/v1
kind: <string> # required: e.g. Pod, Service, Deployment
metadata: # required (object)
name: <string> # required in most cases
namespace: <string> # optional (default: "default")
labels: { key: value } # optional
annotations: { key: value } # optional
spec: <object> # required for most kinds; shape depends on kind
You can place multiple resources in one YAML file separated by ---.
B. YAML syntax reminder
- Indentation: spaces only, not tabs.
- Lists: items start with -.
- Scalars: string, number, boolean (unquoted allowed), null.
- Multiline: | preserves newlines, > folds newlines into spaces.
- Comments start with #.
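The two multiline styles can be seen side by side in a quick sketch (key names are illustrative):

```yaml
config: |          # literal block: newlines preserved
  line one
  line two
note: >            # folded block: newlines become spaces
  this all ends up
  on one line
flag: true         # unquoted boolean scalar
empty: null
```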
C. Metadata keys (common)
metadata object fields and types:
- name (string) — object name; DNS-1123 rules apply (lowercase alphanumerics and -, max length varies by resource).
- namespace (string) — namespace name.
- labels (map[string]string) — used by selectors and service matching; recommended keys: app, component, tier, release, chart, heritage.
- annotations (map[string]string) — arbitrary key/value pairs (longer strings OK).
- finalizers (array[string]) — list of finalizer names that block deletion until cleared.
- ownerReferences (array[object]) — declares controllers/owners (fields: apiVersion, kind, name, uid, controller, blockOwnerDeletion).
- creationTimestamp, uid, resourceVersion — system-populated, read-only fields.
- labels and annotations keys may be any string, but annotation keys usually follow the domain/name form.
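A metadata block exercising the common fields together (all names here are illustrative):

```yaml
metadata:
  name: my-app                        # DNS-1123: lowercase alphanumerics and "-"
  namespace: production
  labels:
    app: my-app
    tier: backend
  annotations:
    example.com/owner: "team-payments"  # annotation keys usually follow domain/name form
  finalizers:
  - example.com/cleanup-hook            # deletion blocks until this is removed
```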
2 — Pod (full structure and important keys)
A Pod is the atomic unit. Pod YAML keys:
apiVersion: v1
kind: Pod
metadata:
name: mypod
namespace: default
labels: { app: myapp }
annotations: { key: value }
spec:
# lifecycle / runtime
restartPolicy: Always | OnFailure | Never # default: Always
terminationGracePeriodSeconds: <int> # default: 30
activeDeadlineSeconds: <int> # optional, total runtime limit
dnsPolicy: Default | ClusterFirst | ClusterFirstWithHostNet | None
hostNetwork: true | false
hostPID: true | false
hostIPC: true | false
serviceAccountName: <string>
automountServiceAccountToken: true | false
nodeSelector: { key: value } # simple scheduling hint
affinity: # advanced scheduling
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: <string>
operator: In | NotIn | Exists | DoesNotExist | Gt | Lt
values: [ ... ]
podAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector: { matchExpressions: [...] }
topologyKey: <string>
podAntiAffinity: ...
tolerations:
- key: <string>
operator: Equal | Exists
value: <string>
effect: NoSchedule | PreferNoSchedule | NoExecute
tolerationSeconds: <int> # only valid with effect NoExecute
topologySpreadConstraints: [] # controls distribution across zones/nodes
schedulerName: <string> # default "default-scheduler"
imagePullSecrets:
- name: secret-name
initContainers:
- name: init
image: busybox
command: ["/bin/sh", "-c", "do something"]
resources: {}
volumeMounts: []
containers:
- name: app
image: repo/image:tag
imagePullPolicy: Always | IfNotPresent | Never
command: [ <entrypoint> ] # overrides image ENTRYPOINT
args: [ ... ] # overrides CMD
workingDir: <string>
ports:
- name: http
containerPort: <int> # documentation only (container port)
protocol: TCP | UDP | SCTP
env:
- name: KEY
value: "string"
- name: FROM_CONFIG
valueFrom:
configMapKeyRef:
name: my-config
key: someKey
optional: true
- name: SECRET_VAL
valueFrom:
secretKeyRef:
name: my-secret
key: password
optional: false
envFrom:
- configMapRef:
name: my-config
- secretRef:
name: my-secret
resources:
limits:
cpu: "500m" # 0.5 core
memory: "256Mi"
requests:
cpu: "250m"
memory: "128Mi"
volumeMounts:
- name: data
mountPath: /data
readOnly: false
subPath: optional-subpath
livenessProbe:
httpGet:
path: /healthz
port: 8080
scheme: HTTP
httpHeaders:
- name: X-Header
value: val
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 1
successThreshold: 1
failureThreshold: 3
readinessProbe: { ... } # same schema as livenessProbe
startupProbe: { ... } # for slow-starting processes
lifecycle:
postStart:
exec: { command: [ "sh", "-c", "echo started" ] }
preStop:
exec: { command: [ "sh", "-c", "sleep 5" ] }
securityContext:
runAsUser: <int>
runAsGroup: <int>
runAsNonRoot: true | false
allowPrivilegeEscalation: true | false
capabilities:
add: [ NET_ADMIN ]
drop: [ ALL ]
stdin: true | false
tty: true | false
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File | FallbackToLogsOnError
volumes:
- name: data
emptyDir: {}
# or hostPath:
hostPath:
path: /var/data
type: Directory | FileOrCreate | Socket | CharDevice | BlockDevice
configMap:
name: my-config
items:
- key: config.yaml
path: config.yaml
mode: 0644
defaultMode: 0644
optional: true
secret:
secretName: my-secret
items: [...]
persistentVolumeClaim:
claimName: pvc-name
projected:
sources:
- secret:
name: my-secret
- configMap:
name: my-config
hostAliases:
- ip: "127.0.0.1"
hostnames: ["local"]
priorityClassName: <string>
Notes & allowed types
- containerPort is documentation-only; Kubernetes doesn't automatically expose it externally. Use a Service's targetPort for that.
- imagePullPolicy default depends on the image tag: :latest implies Always, any other tag implies IfNotPresent.
- resources CPU format: integer cores (e.g. "2") or millicores ("500m"). Memory: "Mi", "Gi".
- livenessProbe, readinessProbe and startupProbe accept httpGet, tcpSocket, or exec forms.
- Volume types: many specific drivers exist (csi, awsElasticBlockStore, gcePersistentDisk, nfs, cinder, azureDisk, azureFile, cephfs, iscsi, rbd, flexVolume, vsphereVolume, portworxVolume, glusterfs, etc.); most in-tree cloud drivers are deprecated in favor of CSI.
3 — Deployment (apps/v1)
Deployment manages ReplicaSets.
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-deployment
spec:
replicas: 3 # desired pods
revisionHistoryLimit: <int> # retention of old ReplicaSets
minReadySeconds: <int> # pod must be ready for this many seconds
progressDeadlineSeconds: <int> # default 600
paused: true | false
strategy:
type: Recreate | RollingUpdate
rollingUpdate:
maxUnavailable: int | "25%" # allowable unavailable pods during update
maxSurge: int | "25%" # extra pods beyond desired
selector:
matchLabels: { app: myapp } # required: must match template labels
matchExpressions: [...]
template:
metadata:
labels: { app: myapp }
annotations: { ... }
spec: # pod spec (see Pod section above)
containers: [...]
Important keys
- strategy.rollingUpdate.maxSurge and maxUnavailable accept either an integer or a percentage string.
- selector is immutable after creation for apps/v1 Deployments.
- spec.template is a Pod template; any change to it triggers a new ReplicaSet and a rolling update.
4 — Service (v1) — full keys, examples, and allowed values
A Service exposes Pods.
apiVersion: v1
kind: Service
metadata:
name: my-service
namespace: default
labels: {}
annotations: {}
spec:
type: ClusterIP | NodePort | LoadBalancer | ExternalName
selector: { key: value } # optional for ExternalName headless
clusterIP: <ip> | "None" # "None" -> headless service
clusterIPs: [<ip>, ...] # IPv4/IPv6 support in multi-stack clusters
ipFamily: IPv4 | IPv6 # deprecated in newer releases in favor of ipFamilies/ipFamilyPolicy
ipFamilies: [ "IPv4", "IPv6" ]
ipFamilyPolicy: SingleStack | PreferDualStack | RequireDualStack
ports:
- name: <string> # optional, used by DNS SRV and to identify port
protocol: TCP | UDP | SCTP
port: <int> # required: the port exposed by the service (cluster)
targetPort: <int | string> # name or number, forwards to pod's port
nodePort: <int> # (only for type NodePort or LoadBalancer) range 30000-32767 unless configured otherwise
appProtocol: <string> # optional: application protocol label (http, h2, grpc)
sessionAffinity: None | ClientIP
sessionAffinityConfig:
clientIP:
timeoutSeconds: <int> # session affinity timeout
externalIPs: [ "1.2.3.4" ] # route traffic to Node ips; used in some envs
loadBalancerIP: <ip> # request specific LB IP (cloud-specific)
loadBalancerSourceRanges: [ "192.0.2.0/24" ] # accepted client CIDRs (default allows all)
externalTrafficPolicy: Cluster | Local # Cluster balances node->pod; Local preserves client source IP
healthCheckNodePort: <int> # healthcheck port for externalTrafficPolicy=Local
allocateLoadBalancerNodePorts: true|false
topologyKeys: [ ... ] # deprecated and removed in v1.22; see topology-aware routing
publishNotReadyAddresses: true|false
internalTrafficPolicy: Cluster | Local
Common examples
ClusterIP (default):
kubectl expose deployment myapp --port=80 --target-port=3000
# creates service type=ClusterIP
NodePort:
spec:
type: NodePort
ports:
- port: 80
targetPort: 3000
nodePort: 31080
LoadBalancer (cloud or MetalLB):
spec:
type: LoadBalancer
ports:
- port: 80
targetPort: 3000
ExternalName:
spec:
type: ExternalName
externalName: example.com
Notes
- clusterIP: None makes the service headless; no ClusterIP is allocated and DNS returns the pods' A records (or SRV records).
- targetPort may be a number or a named port matching a containerPort name on the pod.
- Multiple ports are supported; each port object is independent.
5 — Ingress (networking.k8s.io/v1) keys and values
Ingress routes HTTP(S) traffic to Services. Behavior depends on Ingress Controller.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: web-ingress
annotations: { nginx.ingress.kubernetes.io/rewrite-target: / }
spec:
ingressClassName: <string> # ties to controller, e.g. "nginx"
defaultBackend: # optional default backend
service:
name: default-backend
port:
number: 80
rules:
- host: example.com
http:
paths:
- path: /foo
pathType: Prefix | Exact | ImplementationSpecific
backend:
service:
name: myservice
port:
number: 80
tls:
- hosts: [ "example.com" ]
secretName: tls-secret # TLS cert secret
Notes:
- pathType: Exact requires an exact path match; Prefix matches on path prefix; ImplementationSpecific lets the controller decide.
- Annotations are controller-specific (the NGINX ingress controller supports many).
- Ingress controllers implement the specifics of rewrites, auth, rate limiting, etc.
6 — ConfigMap & Secret
ConfigMap (v1)
apiVersion: v1
kind: ConfigMap
metadata:
name: my-config
data:
key1: "value1"
key2: "value2"
binaryData: # base64-encoded binary blobs
bin1: <base64>
immutable: true | false # prevent updates (opt)
Use in Pod:
- envFrom with configMapRef
- env with valueFrom.configMapKeyRef
- a volume with configMap.name and items mapping keys to paths
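A single Pod can demonstrate all three consumption patterns (key names here are illustrative and assume the my-config ConfigMap above exists):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: config-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "env && cat /etc/app/config.yaml && sleep 3600"]
    envFrom:
    - configMapRef:
        name: my-config           # every key becomes an env var
    env:
    - name: ONE_KEY
      valueFrom:
        configMapKeyRef:
          name: my-config
          key: key1               # a single key as an env var
    volumeMounts:
    - name: cfg
      mountPath: /etc/app
  volumes:
  - name: cfg
    configMap:
      name: my-config
      items:
      - key: key2
        path: config.yaml         # key2 appears as /etc/app/config.yaml
```

Note that env vars are captured at container start, while mounted ConfigMap files are updated in place when the ConfigMap changes (unless it is immutable).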
Secret (v1)
apiVersion: v1
kind: Secret
metadata:
name: my-secret
type: Opaque | kubernetes.io/service-account-token | kubernetes.io/dockercfg | kubernetes.io/dockerconfigjson | kubernetes.io/basic-auth | kubernetes.io/ssh-auth | etc.
data:
username: <base64>
password: <base64>
stringData: # convenience field; server base64-encodes
username: admin
password: pass
immutable: true | false
Use:
- envFrom with secretRef
- env with valueFrom.secretKeyRef
- a volume of type secret
Note: Secret data is base64-encoded, not encrypted, unless encryption at rest is configured.
7 — PersistentVolumes & PersistentVolumeClaims
PersistentVolume (PV)
PV spec fields:
apiVersion: v1
kind: PersistentVolume
metadata:
name: my-pv
spec:
capacity:
storage: "10Gi"
volumeMode: Filesystem | Block
accessModes:
- ReadWriteOnce # single node RW
- ReadOnlyMany # many nodes read-only
- ReadWriteMany # many nodes RW (depends on storage provider)
persistentVolumeReclaimPolicy: Retain | Recycle | Delete # Recycle is deprecated
storageClassName: my-storage-class
mountOptions: [ "noatime" ]
local:
path: /mnt/disks/ssd1 # local volume
hostPath:
path: /data/pv1
type: Directory | File | Socket | CharDevice | BlockDevice | DirectoryOrCreate
nfs:
server: nfs.example.com
path: /exported/path
readOnly: false
csi:
driver: my.csi.driver
volumeHandle: <id>
readOnly: false
fsType: ext4
volumeAttributes: { "attrib": "value" }
# many other cloud-specific volume types exist (awsElasticBlockStore, gcePersistentDisk, azureDisk, azureFile, rbd, iscsi, cephfs, glusterfs, etc.)
PersistentVolumeClaim (PVC)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: my-pvc
spec:
accessModes: [ ReadWriteOnce ]
resources:
requests:
storage: "10Gi"
storageClassName: my-storage-class
volumeName: my-pv # bind to specific PV
volumeMode: Filesystem | Block
PVC binds to a PV that matches storageClassName, accessModes, capacity and optional selector.
8 — StatefulSet (apps/v1)
Used for stateful workloads with stable network IDs and persistent storage.
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: web
spec:
serviceName: "web-headless" # headless service providing network identity
replicas: 3
selector:
matchLabels: { app: web }
template:
metadata:
labels: { app: web }
spec: { containers: [...] }
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 10Gi
storageClassName: sc
podManagementPolicy: OrderedReady | Parallel
updateStrategy:
type: RollingUpdate | OnDelete
rollingUpdate:
partition: <int> # pods with ordinal >= partition are updated; lower ordinals keep the old revision
revisionHistoryLimit: <int>
Notes:
- Pods are created with stable names: web-0, web-1, ...
- serviceName must refer to a headless service (clusterIP: None) that governs the pods' DNS.
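The governing headless service for the StatefulSet above could look like this sketch; clusterIP: None is the essential part:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-headless
spec:
  clusterIP: None        # headless: no virtual IP; DNS returns the pod IPs
  selector:
    app: web
  ports:
  - name: http
    port: 80
```

With this in place, each pod gets a stable DNS name of the form web-0.web-headless.<namespace>.svc.cluster.local.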
9 — DaemonSet (apps/v1)
DaemonSet runs a copy of a pod on each selected node.
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: my-daemon
spec:
selector:
matchLabels: { name: daemon }
template:
metadata:
labels: { name: daemon }
spec:
containers: [...]
updateStrategy:
type: RollingUpdate | OnDelete
rollingUpdate:
maxUnavailable: int | "25%"
Common use: logging, monitoring agents, system daemons.
10 — Job & CronJob (batch)
Job
apiVersion: batch/v1
kind: Job
metadata:
name: myjob
spec:
parallelism: <int>
completions: <int>
backoffLimit: <int>
template:
spec:
containers: [...]
restartPolicy: Never | OnFailure
CronJob
apiVersion: batch/v1
kind: CronJob
metadata:
name: mycron
spec:
schedule: "*/5 * * * *" # cron expression
startingDeadlineSeconds: <int>
concurrencyPolicy: Allow | Forbid | Replace
successfulJobsHistoryLimit: <int>
failedJobsHistoryLimit: <int>
jobTemplate:
spec: { template: ... }
11 — NetworkPolicy (networking.k8s.io/v1)
Controls Pod-level L3/L4 access. NetworkPolicies have no effect unless the cluster's network plugin supports them.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-from-frontend
spec:
podSelector:
matchLabels:
role: db
policyTypes:
- Ingress
- Egress
ingress:
- from:
- podSelector:
matchLabels: { role: frontend }
- namespaceSelector:
matchLabels: { team: qa }
- ipBlock:
cidr: 172.17.0.0/16
except: [172.17.1.0/24]
ports:
- protocol: TCP
port: 5432
egress:
- to:
- ipBlock:
cidr: 0.0.0.0/0
ports:
- protocol: TCP
port: 53
Key fields:
- podSelector selects the pods the policy applies to (an empty selector selects all pods in the namespace).
- policyTypes defaults to Ingress when ingress rules are specified and Egress when egress rules are specified.
- from / to entries accept podSelector, namespaceSelector, and ipBlock.
- ports accept a port number or name plus a protocol.
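A common baseline is a default-deny policy for the namespace: an empty podSelector selects every pod, and listing both policyTypes with no rules allows nothing. A minimal sketch:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  podSelector: {}     # empty selector: applies to all pods in the namespace
  policyTypes:
  - Ingress
  - Egress            # no ingress/egress rules listed, so all traffic is denied
```

Specific allow policies (like the frontend example above) are then layered on top; policies are additive.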
12 — RBAC (Authorization) — Role, ClusterRole, RoleBinding, ClusterRoleBinding
Role (namespace-scoped)
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: pod-reader
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "watch", "list"]
ClusterRole (cluster-scoped)
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: node-reader # metadata.name is required; name is illustrative
rules:
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "list", "watch"]
RoleBinding or ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: read-pods
subjects:
- kind: User | Group | ServiceAccount
name: <name>
namespace: <namespace> # for ServiceAccount
roleRef:
kind: Role | ClusterRole
name: pod-reader
apiGroup: rbac.authorization.k8s.io
13 — ServiceAccount
apiVersion: v1
kind: ServiceAccount
metadata:
name: my-service-account
automountServiceAccountToken: true | false
imagePullSecrets:
- name: regsecret
ServiceAccounts let pods authenticate to the API server. A token is automounted into pods unless automountServiceAccountToken: false; since v1.24, tokens are projected at runtime rather than stored as long-lived Secrets.
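Referencing the ServiceAccount from a Pod is a one-line addition (image name is illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: api-client
spec:
  serviceAccountName: my-service-account
  automountServiceAccountToken: true   # token appears under /var/run/secrets/kubernetes.io/serviceaccount
  containers:
  - name: app
    image: myapp:1.0
```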
14 — Pod SecurityContext vs Container securityContext
Pod-level securityContext (applies to all containers unless overridden):
spec:
securityContext:
runAsUser: 1000
runAsGroup: 3000
runAsNonRoot: true
fsGroup: 2000
seLinuxOptions:
level: "s0:c123,c456"
sysctls:
- name: net.core.somaxconn
value: "1024"
Container-level securityContext overrides pod-level:
containers:
- name: app
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop: ["ALL"]
15 — Probes (readiness/liveness/startup) detailed
Probe object:
livenessProbe:
httpGet:
path: /healthz
port: 8080 # number or name
host: <string> # optional, rarely used in k8s
scheme: HTTP | HTTPS
httpHeaders:
- name: X-Header
value: val
tcpSocket:
port: 8080
exec:
command: [ "cat", "/tmp/healthy" ]
initialDelaySeconds: 0
periodSeconds: 10
timeoutSeconds: 1
successThreshold: 1
failureThreshold: 3
- startupProbe is for slow-starting containers; all other probes are disabled until the startup probe succeeds.
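A common pattern combines the two: the startup probe gives a slow process up to failureThreshold × periodSeconds to come up, and only then does the liveness probe take over (numbers here are illustrative):

```yaml
startupProbe:
  httpGet:
    path: /healthz
    port: 8080
  failureThreshold: 30   # 30 * 10s = up to 5 minutes to start
  periodSeconds: 10
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  periodSeconds: 10      # runs only after the startup probe has succeeded
  failureThreshold: 3
```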
16 — Volumes: common types (fields and values)
- emptyDir: { medium: "" | "Memory", sizeLimit: "1Gi" }
- hostPath: { path: "/data", type: DirectoryOrCreate }
- configMap: { name: my-config, items: [{key: k, path: p}], defaultMode: 0444 }
- secret: { secretName: my-secret, items: [...] }
- persistentVolumeClaim: { claimName: my-pvc }
- projected: { sources: [{configMap: {name: ...}}, {secret: {name: ...}}], defaultMode: 0644 }
- downwardAPI: { items: [{ path: "labels", fieldRef: { fieldPath: metadata.labels['app'] } }], defaultMode: 0644 }
- csi: { driver: driver.name, volumeAttributes, nodePublishSecretRef }
- nfs: { server: <ip|hostname>, path: <path>, readOnly: false }
17 — Topology, scheduling and affinity in detail
NodeSelector (simple)
spec:
nodeSelector:
disktype: ssd
Tolerations & Taints
- Taints are applied to nodes: kubectl taint nodes node1 key=value:NoSchedule
- Pod tolerations permit pods to be scheduled onto tainted nodes.
Toleration example:
tolerations:
- key: "key"
operator: "Equal" # or "Exists"
value: "value"
effect: "NoSchedule"
tolerationSeconds: 3600
Affinity (advanced)
- nodeAffinity with requiredDuringSchedulingIgnoredDuringExecution or preferredDuringSchedulingIgnoredDuringExecution (the latter with weights).
- podAffinity and podAntiAffinity use labelSelector and topologyKey (e.g. kubernetes.io/hostname or topology.kubernetes.io/zone).
TopologySpreadConstraints (v1)
Controls even distribution across topology domains.
topologySpreadConstraints:
- maxSkew: 1
topologyKey: "topology.kubernetes.io/zone"
whenUnsatisfiable: DoNotSchedule | ScheduleAnyway
labelSelector:
matchLabels: { app: myapp }
18 — Service Discovery / DNS details
Service names resolve in the cluster via CoreDNS:
- <service> resolves to the ClusterIP (from within the same namespace).
- <service>.<namespace> and <service>.<namespace>.svc.cluster.local are the fully qualified forms.
When a Service has multiple named ports, SRV records are created for them.
19 — Advanced Service fields and behaviors
- publishNotReadyAddresses → if true, DNS returns endpoints even when pods are not ready.
- externalTrafficPolicy: Local preserves the client source IP but requires health checks and careful node-level routing.
- sessionAffinity and sessionAffinityConfig control client stickiness.
- loadBalancerSourceRanges restricts which CIDRs can reach the LoadBalancer.
20 — CLI shortcuts: kubectl create / expose / apply examples
Create deployment
kubectl create deployment nginx --image=nginx
kubectl apply -f deployment.yaml
Expose as ClusterIP
kubectl expose deployment nginx --port=80 --target-port=80 --name nginx-svc
Create NodePort
kubectl expose deployment nginx --type=NodePort --name=nginx-node --port=80 --target-port=80
# kubectl expose cannot pin a specific nodePort; set spec.ports[].nodePort in a manifest instead
Get resource
kubectl get pods
kubectl get svc -o wide
kubectl describe svc nginx-svc
kubectl get ingress
kubectl logs pod/mypod
Apply & rollout
kubectl apply -f myfile.yaml
kubectl rollout status deployment/my-deployment
kubectl rollout undo deployment/my-deployment
21 — Useful conventions and best practices (expanded)
- Labels: use a consistent schema: app=NAME, app.kubernetes.io/name, app.kubernetes.io/instance, app.kubernetes.io/version, app.kubernetes.io/component, app.kubernetes.io/part-of, app.kubernetes.io/managed-by.
- Annotations: for non-selection metadata, e.g. kubectl.kubernetes.io/last-applied-configuration, prometheus.io/scrape: "true", prometheus.io/port: "9100".
- Resource requests/limits: always set resources.requests and resources.limits. requests are used by the scheduler to place pods; limits are enforced as cgroup limits.
- Probes: prefer HTTP or TCP probes over exec-only liveness commands.
- Security: set runAsNonRoot: true, drop capabilities, avoid privileged containers.
- Image pull secrets: use imagePullSecrets in the Pod spec.
- Immutable ConfigMaps/Secrets: set immutable: true when the data won't change.
- NetworkPolicy: a default-deny policy is recommended for multi-tenant or security-sensitive clusters.
- RBAC least privilege: grant only the required verbs and API groups.
- Pod Disruption Budgets (PDB): set minAvailable or maxUnavailable to control voluntary disruptions.
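A PDB is its own resource; a minimal sketch (names illustrative):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: myapp-pdb
spec:
  minAvailable: 2        # alternatively maxUnavailable, but not both
  selector:
    matchLabels:
      app: myapp
```

With this in place, voluntary evictions (e.g. kubectl drain) will refuse to take the app below 2 ready pods.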
22 — Quick reference: container fields (compact)
- name (string) - required
- image (string) - required
- imagePullPolicy - Always | IfNotPresent | Never
- command - array - entrypoint override
- args - array - CMD args override
- env - array of { name, value | valueFrom }
- envFrom - array of configMapRef / secretRef
- ports - array of { name?, containerPort: int, protocol? }
- resources - { limits: { cpu, memory }, requests: { cpu, memory } }
- volumeMounts - array of { name, mountPath, readOnly?, subPath? }
- livenessProbe, readinessProbe, startupProbe - probes
- lifecycle - postStart / preStop hooks
- securityContext - container-level security settings
- stdin, tty - booleans
- terminationMessagePath, terminationMessagePolicy
23 — Headless services and DNS SRV usage
- clusterIP: None makes the service headless; DNS A records return the Pod IPs directly.
- Useful for StatefulSets (stable network IDs) and for service discovery (SRV records enable per-port discovery).
24 — Examples: multi-port Service mapping to pods (complete)
Pod
apiVersion: v1
kind: Pod
metadata:
name: multi-pod
labels: { app: multi }
spec:
containers:
- name: web
image: myweb:1.0
ports:
- name: http
containerPort: 3000
- name: admin
containerPort: 7000
- name: metrics
image: metrics:1.0
ports:
- name: metrics
containerPort: 9090
Service
apiVersion: v1
kind: Service
metadata:
name: multi-svc
spec:
selector:
app: multi
ports:
- name: http
port: 80
targetPort: 3000
protocol: TCP
- name: admin
port: 7000
targetPort: 7000
protocol: TCP
- name: metrics
port: 9090
targetPort: 9090
protocol: TCP
25 — Troubleshooting checklist (networking / ports)
- kubectl get endpoints <svc> → should show the Pod IP:port pairs the Service forwards to.
- kubectl describe svc <svc> → shows ClusterIP, ports, selectors, events.
- kubectl exec -it <somepod> -- curl <ClusterIP>:<port> → tests in-cluster access.
- kubectl logs <pod> → check that the app is listening on the expected port.
- kubectl get pods -o wide → shows pod IP and node info.
- Check whether a NetworkPolicy denies or permits the traffic.
- For NodePort/LoadBalancer: check firewalls, the host firewall, and cloud provider security groups.