Introduction
Using gRPC in a GKE service behind an Ingress requires the backend to accept HTTPS connections. This is because gRPC runs over HTTP/2, and some implementations require TLS; the GCP load balancer speaks HTTP/2 to the backend only when the backend protocol is set to HTTPS/HTTP2. The Google load balancer also requires that the backend implements a health check, which in practice means that, for example, the / endpoint must return a 200 status code.
gRPC is implemented on top of the HTTP/2 protocol and uses request paths like any other HTTP REST server. The path has the form /Package.ServiceName/rpcMethod, which means we can route all methods of a service by adding the rule /Package.ServiceName/* to the Ingress.
Requirements
- Kubernetes engine cluster
- Authenticated kubectl client
- Domain name
- SSL certificate (optional)
Implementation
In this example we use TypeScript and the grpc npm package.
Program
First we create a simple program that uses gRPC to establish a bidirectional streaming connection between server and client.
server.ts
import { chattingServiceProto } from "./proto";
import { Server, ServerCredentials } from "grpc";

const server = new Server();
server.addService(chattingServiceProto.ChattingService.service, {
  createChatConnection: call => {
    console.log("new chat connection");
    call.on("end", () => {
      console.log("chat connection ended");
    });
  }
});

const boundPort = server.bind(
  "0.0.0.0:4200",
  ServerCredentials.createInsecure()
);
console.log("server bound to port", boundPort);
server.start();
client.ts
import { ChattingServiceImplementation } from "./proto";
import { credentials } from "grpc";

const client = new ChattingServiceImplementation(
  "example.com:443",
  credentials.createSsl()
);

const chat = client.createChatConnection();
chat.write({
  channelId: "testChannel",
  text: "Hello"
});
proto.ts
import { join } from "path";
import { loadPackageDefinition } from "grpc";
import { loadSync } from "@grpc/proto-loader";

const protosPath = join(__dirname, "../chatting-service.proto");
const packageDefinition = loadSync(protosPath);

export const chattingServiceProto: any = loadPackageDefinition(
  packageDefinition
).Chatting;
export const ChattingServiceImplementation =
  chattingServiceProto.ChattingService;
chatting-service.proto
syntax = "proto3";

package Chatting;

message SentMessage {
  string channelId = 1;
  string text = 2;
}

message ReceivedMessage {
  string text = 1;
}

service ChattingService {
  rpc createChatConnection(stream SentMessage) returns (stream ReceivedMessage) {}
}
Dockerfile
FROM node:12
WORKDIR /source
COPY package.json package-lock.json ./
RUN npm install
COPY tsconfig.json tsconfig.json
COPY ./src ./src
RUN npm run build
COPY ./chatting-service.proto ./chatting-service.proto
CMD node /source/dist/server.js
Now that we have created the program, the next step is to implement its deployment, starting with the deployment file.
Deployment
Below is a simple deployment file that creates one replica of a pod containing two containers. chatting is the container for the program we just created, which binds a gRPC server to port 4200.
grpc-proxy is a container built on top of nginx whose purpose is to terminate TLS, proxy traffic to our program, and implement the health check for the Google load balancer. We will look at the grpc-proxy implementation in more detail later.
chatting-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: chatting
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: chatting
    spec:
      containers:
        - name: chatting
          image: jaska/chatting:4
          ports:
            - containerPort: 4200
        - name: grpc-proxy
          image: jaska/grpc-proxy:1
          env:
            - name: GRPC_PATH
              value: "/Chatting.ChattingService/"
            - name: TARGET_HOST
              value: "localhost:4200"
          ports:
            - containerPort: 9900
Next we create a service file for our deployment.
apiVersion: v1
kind: Service
metadata:
  annotations:
    cloud.google.com/app-protocols: '{"my-port":"HTTP2"}'
  name: chatting-service
spec:
  type: NodePort
  selector:
    app: chatting
  ports:
    - name: my-port
      protocol: TCP
      port: 443
      targetPort: 9900
The cloud.google.com/app-protocols: '{"my-port":"HTTP2"}' annotation above tells the Google load balancer to route traffic to my-port using the HTTP/2 protocol.
Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: chatting-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: public-ip
spec:
  tls:
    - secretName: tls-secret
  rules:
    - host: "example.com"
      http:
        paths:
          - backend:
              serviceName: chatting-service
              servicePort: 443
            path: /Chatting.ChattingService/*
This requires that you have reserved a static IP named public-ip.
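If you have not reserved the address yet, it can be created with gcloud. This is a CLI sketch; the name public-ip matches the annotation above, and you may need to select your project first:

```shell
# Reserve a global static IP for the HTTP(S) load balancer.
# The name must match the ingress.global-static-ip-name annotation.
gcloud compute addresses create public-ip --global

# Print the reserved address so you can point your domain's DNS at it.
gcloud compute addresses describe public-ip --global --format="value(address)"
```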
Grpc proxy
This proxy forwards gRPC traffic to the server. It is configured using two environment variables: GRPC_PATH, which in this case is /Chatting.ChattingService/, and TARGET_HOST, which tells the proxy which address to route traffic to. The proxy also has a / path handler that always returns a 200 status code, which satisfies the load balancer's health check.
Dockerfile
FROM nginx:1.17
RUN apt-get update
RUN apt-get install -y openssl1.1
ADD nginx.conf /etc/nginx/nginx.conf
ADD run.sh /usr/local/bin/run.sh
RUN chmod +x /usr/local/bin/run.sh
CMD /usr/local/bin/run.sh
nginx.conf
events {
  worker_connections 1024;
}

http {
  server {
    listen 9900 http2 ssl;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ssl_certificate /certs/certificate.crt;
    ssl_certificate_key /certs/privateKey.key;

    location ${GRPC_PATH} {
      grpc_socket_keepalive on;
      grpc_send_timeout 300s;
      grpc_read_timeout 300s;
      grpc_connect_timeout 300s;
      grpc_pass ${TARGET_HOST};
    }

    location / {
      return 200 'gangnam style!';
    }
  }
}
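You can sanity-check the health-check endpoint by running the proxy container locally and curling it. This is a sketch: the image name comes from the deployment file above, and -k is needed because the certificate is self-signed:

```shell
# Start the proxy with the same environment variables as in the deployment.
docker run -d -p 9900:9900 \
  -e GRPC_PATH=/Chatting.ChattingService/ \
  -e TARGET_HOST=localhost:4200 \
  jaska/grpc-proxy:1

# The / handler should answer with a 200 and the body "gangnam style!".
curl -k https://localhost:9900/
```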
run.sh
#!/bin/sh
# Substitute the environment variables into the nginx config.
# Write to a temporary file first: redirecting output to the same
# file that is being read would truncate it before envsubst runs.
envsubst '${GRPC_PATH},${TARGET_HOST}' < /etc/nginx/nginx.conf > /tmp/nginx.conf
mv /tmp/nginx.conf /etc/nginx/nginx.conf
echo "Generating self-signed cert"
mkdir -p /certs
openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 \
  -keyout /certs/privateKey.key \
  -out /certs/certificate.crt \
  -subj "/C=UK/ST=Warwickshire/L=Leamington/O=OrgName/OU=IT Department/CN=example.com"
echo "Starting nginx"
ln -sf /dev/stdout /var/log/nginx/access.log
ln -sf /dev/stderr /var/log/nginx/error.log
echo $GRPC_PATH
echo $TARGET_HOST
nginx -g "daemon off;"
Deployment
After you have created the files described above:
- Build the Docker images, push them to a registry, and replace the image names in the deployment file.
- Point a domain at the static IP you reserved in the GCP console and replace example.com in the ingress file.
- Move the deployment files to a deployments folder and run kubectl apply -f deployments.
You can then open the GCP console and see the newly created deployments and services under GKE workloads. Because Google's HTTP/2 backend protocol requires TLS, you need a valid SSL certificate for the service. You can use your own certificate or a Google-managed certificate: https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs With managed certificates, Google automatically provisions a free certificate for the domain name in the ingress file.
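On the command line, the steps above might look like this. The image names are the ones used in this article, and the grpc-proxy build directory is an assumption; substitute your own registry and layout:

```shell
# Build and push the application image (name taken from the deployment file).
docker build -t jaska/chatting:4 .
docker push jaska/chatting:4

# Build and push the proxy image (assuming its Dockerfile lives in ./grpc-proxy).
docker build -t jaska/grpc-proxy:1 ./grpc-proxy
docker push jaska/grpc-proxy:1

# Apply every manifest in the deployments folder.
kubectl apply -f deployments
```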
Running
Now, when you run the client, you should see the message "new chat connection" appear in Stackdriver Logging.
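You can also verify this without the GCP console by tailing the pod logs with kubectl. This is a sketch; the deployment and container names come from the manifest above:

```shell
# Run the client against the public endpoint...
node dist/client.js

# ...then check the server container's logs for "new chat connection".
kubectl logs deployment/chatting -c chatting
```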
Links
- Managed certificates https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs
- Example project https://github.com/J45k4/gke-grpc