In a previous blog post I explained how to deploy YugabyteDB on Amazon managed Kubernetes (EKS). Let me show a powerful but lesser-known feature of the Linux kernel: Pressure Stall Information (PSI).
I described in the past how to enable it on CentOS 7, but the good news is that it is enabled by default in the EKS Kubernetes worker AMI based on the Amazon Linux 2 image. This means that you can simply run:
tail /proc/pressure/*
and you will get the percentage of time, over the past 10 seconds, 1 minute, and 5 minutes, during which at least one task (some) was stalled on CPU, I/O, or memory, and during which all tasks (full) were stalled on I/O or memory.
Without this information, it can be difficult to know whether you have to add CPU, RAM, or disk resources.
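The output looks like the following (the numbers here are only an illustration, and on this kernel /proc/pressure/cpu exposes only a some line):
==> /proc/pressure/cpu <==
some avg10=31.21 avg60=12.45 avg300=4.32 total=123456789
==> /proc/pressure/io <==
some avg10=0.31 avg60=0.12 avg300=0.04 total=2345678
full avg10=0.00 avg60=0.00 avg300=0.00 total=1234567
==> /proc/pressure/memory <==
some avg10=0.00 avg60=0.00 avg300=0.00 total=98765
full avg10=0.00 avg60=0.00 avg300=0.00 total=12345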
Script to get PSI every 10 seconds from all YugabyteDB tablet server pods
Here is an example where I check the Pressure Stall Information from all the tablet servers of a YugabyteDB deployment across three availability zones (I have one namespace per zone):
while sleep 10
do
for namespace in yb-demo-eu-west-1{a,b,c}
do
for pod in yb-tserver-{0,1}
do
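# run awk inside the yb-tserver container to print the avg10 value of each /proc/pressure file on one line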
kubectl -n $namespace exec -i $pod -c yb-tserver -- bash -c '
awk '"'"'
{ x[$1]=x[$1] sprintf (" %5.2f%% %-3s ",gensub(re,"\\2",1),gensub("^.*/(...?)[^/]*$","\\1",1,FILENAME)) }
END{printf "%5s pressure on some tasks: %s, on all tasks: %s %30s\n",ts,x["some"],x["full"],host}
'"'"' re="(some|full) avg10=([0-9.]+) avg60=([0-9.]+) avg300=([0-9.]+) total=([0-9.]+)" ts=$(date "+%H:%M:%S") host="$NAMESPACE $(hostname)" /proc/pressure/?*
'
done ; done ; done
Here is an example of the output:
15:12:01 pressure on some tasks: 31.21% cpu 0.31% io 0.00% mem , on all tasks: 0.00% io 0.00% mem yb-demo-eu-west-1a yb-tserver-0
15:12:02 pressure on some tasks: 31.21% cpu 0.31% io 0.00% mem , on all tasks: 0.00% io 0.00% mem yb-demo-eu-west-1a yb-tserver-1
15:12:03 pressure on some tasks: 4.79% cpu 0.35% io 0.00% mem , on all tasks: 0.02% io 0.00% mem yb-demo-eu-west-1b yb-tserver-0
15:12:04 pressure on some tasks: 3.92% cpu 0.28% io 0.00% mem , on all tasks: 0.02% io 0.00% mem yb-demo-eu-west-1b yb-tserver-1
15:12:05 pressure on some tasks: 1.22% cpu 0.00% io 0.00% mem , on all tasks: 0.00% io 0.00% mem yb-demo-eu-west-1c yb-tserver-0
15:12:06 pressure on some tasks: 2.32% cpu 0.00% io 0.00% mem , on all tasks: 0.00% io 0.00% mem yb-demo-eu-west-1c yb-tserver-1
15:12:17 pressure on some tasks: 40.00% cpu 0.56% io 0.00% mem , on all tasks: 0.00% io 0.00% mem yb-demo-eu-west-1a yb-tserver-0
15:12:19 pressure on some tasks: 40.00% cpu 0.56% io 0.00% mem , on all tasks: 0.00% io 0.00% mem yb-demo-eu-west-1a yb-tserver-1
15:12:20 pressure on some tasks: 9.78% cpu 0.49% io 0.00% mem , on all tasks: 0.12% io 0.00% mem yb-demo-eu-west-1b yb-tserver-0
15:12:21 pressure on some tasks: 9.78% cpu 0.49% io 0.00% mem , on all tasks: 0.12% io 0.00% mem yb-demo-eu-west-1b yb-tserver-1
15:12:22 pressure on some tasks: 1.95% cpu 0.00% io 0.00% mem , on all tasks: 0.00% io 0.00% mem yb-demo-eu-west-1c yb-tserver-0
15:12:23 pressure on some tasks: 0.73% cpu 2.93% io 0.00% mem , on all tasks: 2.76% io 0.00% mem yb-demo-eu-west-1c yb-tserver-1
15:12:34 pressure on some tasks: 49.90% cpu 0.25% io 0.00% mem , on all tasks: 0.00% io 0.00% mem yb-demo-eu-west-1a yb-tserver-0
15:12:35 pressure on some tasks: 55.35% cpu 0.20% io 0.00% mem , on all tasks: 0.00% io 0.00% mem yb-demo-eu-west-1a yb-tserver-1
15:12:36 pressure on some tasks: 16.05% cpu 0.09% io 0.00% mem , on all tasks: 0.02% io 0.00% mem yb-demo-eu-west-1b yb-tserver-0
15:12:37 pressure on some tasks: 16.05% cpu 0.09% io 0.00% mem , on all tasks: 0.02% io 0.00% mem yb-demo-eu-west-1b yb-tserver-1
15:12:38 pressure on some tasks: 2.76% cpu 0.00% io 0.00% mem , on all tasks: 0.00% io 0.00% mem yb-demo-eu-west-1c yb-tserver-0
15:12:39 pressure on some tasks: 2.25% cpu 12.06% io 0.00% mem , on all tasks: 11.75% io 0.00% mem yb-demo-eu-west-1c yb-tserver-1
15:12:50 pressure on some tasks: 33.73% cpu 0.28% io 0.00% mem , on all tasks: 0.00% io 0.00% mem yb-demo-eu-west-1a yb-tserver-0
15:12:51 pressure on some tasks: 32.51% cpu 0.22% io 0.00% mem , on all tasks: 0.00% io 0.00% mem yb-demo-eu-west-1a yb-tserver-1
15:12:52 pressure on some tasks: 3.24% cpu 0.14% io 0.00% mem , on all tasks: 0.00% io 0.00% mem yb-demo-eu-west-1b yb-tserver-0
15:12:53 pressure on some tasks: 2.65% cpu 0.11% io 0.00% mem , on all tasks: 0.00% io 0.00% mem yb-demo-eu-west-1b yb-tserver-1
15:12:54 pressure on some tasks: 0.55% cpu 0.00% io 0.00% mem , on all tasks: 0.00% io 0.00% mem yb-demo-eu-west-1c yb-tserver-0
15:12:55 pressure on some tasks: 0.45% cpu 2.43% io 0.00% mem , on all tasks: 2.37% io 0.00% mem yb-demo-eu-west-1c yb-tserver-1
15:13:06 pressure on some tasks: 39.73% cpu 0.50% io 0.00% mem , on all tasks: 0.00% io 0.00% mem yb-demo-eu-west-1a yb-tserver-0
15:13:07 pressure on some tasks: 36.52% cpu 0.41% io 0.00% mem , on all tasks: 0.00% io 0.00% mem yb-demo-eu-west-1a yb-tserver-1
15:13:08 pressure on some tasks: 8.96% cpu 0.37% io 0.00% mem , on all tasks: 0.00% io 0.00% mem yb-demo-eu-west-1b yb-tserver-0
15:13:10 pressure on some tasks: 7.51% cpu 0.49% io 0.00% mem , on all tasks: 0.00% io 0.00% mem yb-demo-eu-west-1b yb-tserver-1
15:13:11 pressure on some tasks: 0.89% cpu 0.00% io 0.00% mem , on all tasks: 0.00% io 0.00% mem yb-demo-eu-west-1c yb-tserver-0
15:13:12 pressure on some tasks: 0.93% cpu 0.57% io 0.00% mem , on all tasks: 0.56% io 0.00% mem yb-demo-eu-west-1c yb-tserver-1
From this I know that I have enough RAM and no pressure on I/O (except 11.75% of the 10 seconds before 15:12:39 on yb-demo-eu-west-1c yb-tserver-1, so about one second in one pod only). The CPU is the resource under pressure, mostly in yb-demo-eu-west-1a, where some tasks wait for the CPU up to 55% of the time.
Control Groups Version 2
Note that running this in pods may show the pressure for the whole Kubernetes worker node rather than for the container. To get per-container figures you need the cgroup v2 interface, which you can check with:
[root@yb-master-0 cores]# stat -fc %T /sys/fs/cgroup
cgroup2fs
If the result is tmpfs rather than cgroup2fs, you are using Control Groups version 1, which is still the default on the EKS AMI (see https://github.com/awslabs/amazon-eks-ami/issues/824).
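With cgroup v2, each cgroup exposes its own cpu.pressure, io.pressure, and memory.pressure files, so you can read the pressure for a single container. A minimal sketch, assuming the common containerd setup where /sys/fs/cgroup inside the container maps to the container's own cgroup:
kubectl -n yb-demo-eu-west-1a exec -it yb-tserver-0 -c yb-tserver -- \
 bash -c 'tail /sys/fs/cgroup/{cpu,io,memory}.pressure'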
Get PSI from PostgreSQL or YugabyteDB backend
I'll publish a cleaner script for this later, but here is how to quickly get Pressure Stall Information from YugabyteDB or PostgreSQL (reading /proc/pressure with COPY requires a superuser):
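-- table storing the PSI samples; the raw_* columns receive the text read from /proc/pressure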
create table if not exists ybwr_psi (
primary key(ts asc, server, resource, scope)
, ts timestamptz default now()
, server text default inet_server_addr()
, scope text
, avg10 float , avg60 float , avg300 float , total float
, raw_avg10 text , raw_avg60 text , raw_avg300 text , raw_total text
, resource text default current_setting('ybwr.psi_resource')
);
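-- trigger parses the raw avg10/avg60/avg300/total text into the numeric columns at insert time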
create or replace function ybwr_psi_insert() returns trigger as $$ begin
new.avg10 := replace(new.raw_avg10,'avg10=','')::float;
new.avg60 := replace(new.raw_avg60,'avg60=','')::float;
new.avg300 := replace(new.raw_avg300,'avg300=','')::float;
new.total := replace(new.raw_total,'total=','')::float;
return new; end; $$ language plpgsql;
create trigger ybwr_psi_insert before insert on ybwr_psi for each row
execute function ybwr_psi_insert();
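-- snapshot function: COPY each /proc/pressure file into ybwr_psi (tagging the resource through the ybwr.psi_resource setting) and return the rows of the current snapshot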
create or replace function ybwr_psi_snap() returns table(psi text, avg10 float) as $$
set local ybwr.psi_resource='cpu';
copy ybwr_psi(scope,raw_avg10,raw_avg60,raw_avg300,raw_total) from '/proc/pressure/cpu' with ( delimiter ' ');
set local ybwr.psi_resource='io';
copy ybwr_psi(scope,raw_avg10,raw_avg60,raw_avg300,raw_total) from '/proc/pressure/io' with ( delimiter ' ' , rows_per_transaction 0);
set local ybwr.psi_resource='memory';
copy ybwr_psi(scope,raw_avg10,raw_avg60,raw_avg300,raw_total) from '/proc/pressure/memory' with ( delimiter ' ');
--select ts,scope,resource,avg10 from ybwr_psi;
select format('%s /proc/pressure/%-6s %s %s %s %s %s',server,resource,scope,raw_avg10,raw_avg60,raw_avg300,raw_total), avg10
from ybwr_psi where ts=now();
$$ language sql;
-- to avoid warning in YugabyteDB
set yb_default_copy_from_rows_per_transaction=0;
select * from ybwr_psi_snap() where avg10>0;
\watch 10
This stores the pressure of the node you are connected to into the ybwr_psi table and displays the latest values.
The following function takes a snapshot on all YugabyteDB nodes:
create temporary table if not exists ybwr_psi_tmp (out text) on commit delete rows;
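-- take a snapshot on every node: COPY FROM PROGRAM runs ysqlsh against each host from yb_servers() to call ybwr_psi_snap() there; the ysqlsh output goes into the temporary table and is discarded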
create or replace function ybwr_psi_snap_all()
returns table(server text, scope text, resource text, avg10 float , avg60 float , avg300 float) as $DO$
declare r record;
begin
set yb_default_copy_from_rows_per_transaction=0;
for r in (select host from yb_servers()) loop
execute format( $SQL$ copy ybwr_psi_tmp from program $SH$
bash -c '$(dirname $(readlink -f /proc/$(pgrep --newest yb-tserver)/exe))/ysqlsh -h %L -c "select ybwr_psi_snap()"'
$SH$ $SQL$ , r.host ) ; end loop ;
return query select t.server,t.scope,t.resource,t.avg10,t.avg60,t.avg300 from ybwr_psi t
where t.ts>now() and t.avg300>0 order by avg300 desc;
end; $DO$ language plpgsql;
select * from ybwr_psi_snap_all() order by avg300 desc limit 5;
To save costs in the cloud you need to be elastic, and that is one reason for Distributed SQL databases. But you also need to know which resource to scale, and the Linux kernel's Pressure Stall Information can help with that.