Nixon Islam
Clean Sentry On-Premise Database

We track our error logs with a self-hosted, on-premise Sentry instance. Recently it started throwing errors: the disk space was running out!

WHY??

Apparently we needed to clear the database after a certain time; otherwise it keeps stacking up all of the error logs.
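If you want to confirm that it really is the Sentry Postgres volume eating the disk, a quick check from the host looks roughly like this (assuming the default self-hosted volume name sentry-postgres; adjust to whatever docker volume ls shows on your machine):

# Per-volume disk usage as Docker reports it
docker system df -v

# Size of the Sentry Postgres data directory on disk
du -sh /var/lib/docker/volumes/sentry-postgres/_data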

Solution??

We logged into our server. Sentry was running in Docker, so we went to the docker-compose folder and ran:
docker-compose exec worker bash
Then, from inside the worker shell, we ran:
sentry cleanup --days 30
This cleans up all event data older than 30 days.
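Since the events keep accumulating, it is worth scheduling this instead of running it by hand. A minimal sketch, assuming the compose project lives at /opt/sentry (a hypothetical path; use wherever your docker-compose.yml actually is):

# /etc/cron.d/sentry-cleanup: prune events older than 30 days every night at 03:00.
# -T disables TTY allocation so docker-compose exec works from cron;
# use the full path to docker-compose if cron's PATH does not include it.
0 3 * * * root cd /opt/sentry && docker-compose exec -T worker sentry cleanup --days 30 >> /var/log/sentry-cleanup.log 2>&1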

After this we went into the database by running:

  • docker-compose exec postgres bash
  • psql -U postgres
  • VACUUM FULL; (note that VACUUM FULL locks each table until the vacuum is done, so plan for a maintenance window; a one-line version of these steps is sketched below the list)
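If you would rather not open two shells, the same thing can be run from the host in one command (a sketch, using the postgres service name from the compose file above; the locking caveat still applies):

# One-shot version of the three steps above, run from the compose folder.
docker-compose exec postgres psql -U postgres -c "VACUUM FULL VERBOSE;"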

Voila! Database and hard drive storage cleaned up! :)

Top comments (4)

scout • Edited

I used this solution before I found this pull request.

To check out the volume:

$ cd /var/lib/docker/volumes/sentry-postgres/_data/base/12407
$ ls -AlhtS

    total 13G
    -rw------- 1 999 docker 1.0G Nov 24 11:51 20250
    -rw------- 1 999 docker 1.0G Nov 23 11:58 20250.1
    -rw------- 1 999 docker 1.0G Nov 22 12:53 20250.10
    -rw------- 1 999 docker 1.0G Nov 21 12:05 20250.2
    -rw------- 1 999 docker 1.0G Nov 20 12:12 20250.3
    -rw------- 1 999 docker 1.0G Nov 19 12:18 20250.4
    -rw------- 1 999 docker 1.0G Nov 18 12:25 20250.5
    -rw------- 1 999 docker 1.0G Nov 17 12:32 20250.6
    -rw------- 1 999 docker 1.0G Nov 16 12:51 20250.7
    -rw------- 1 999 docker 1.0G Nov 15 12:45 20250.8
    -rw------- 1 999 docker 1.0G Nov 14 12:49 20250.9
    -rw------- 1 999 docker 507M Nov 14 12:54 20250.11
    -rw------- 1 999 docker 237M Nov 23 12:53 20252
    <...>

To connect to the postgres container:

docker exec -it sentry_onpremise_postgres_1 bash

To enter console:

psql -U postgres

To find all tables that have TOAST tables, ordered by TOAST size:

SELECT oid::regclass, reltoastrelid::regclass, pg_relation_size(reltoastrelid) AS toast_size FROM pg_class WHERE relkind = 'r' AND reltoastrelid <> 0 ORDER BY 3 DESC;

                    oid                     |      reltoastrelid      | toast_size
--------------------------------------------+-------------------------+------------
 nodestore_node                             | pg_toast.pg_toast_20247 | 12020846080
 pg_rewrite                                 | pg_toast.pg_toast_2618  |      385024
 pg_statistic                               | pg_toast.pg_toast_2619  |      212992
 sentry_groupedmessage                      | pg_toast.pg_toast_16900 |       81920
 sentry_apikey                              | pg_toast.pg_toast_16542 |           0
 sentry_authidentity                        | pg_toast.pg_toast_16605 |           0
 sentry_authprovider                        | pg_toast.pg_toast_16616 |           0
 <...>
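If you also want the main table and its indexes counted, not just the TOAST data, pg_total_relation_size gives the full per-table footprint; roughly, at the same psql prompt:

SELECT oid::regclass AS table_name,
       pg_size_pretty(pg_total_relation_size(oid)) AS total_size
FROM pg_class
WHERE relkind = 'r'
ORDER BY pg_total_relation_size(oid) DESC
LIMIT 10;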

To clean up:

DELETE FROM nodestore_node WHERE timestamp < '2021-11-23 00:00:00';
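One caveat: DELETE only marks the rows dead, so the files on disk stay the same size until a vacuum runs. To actually shrink them (still at the psql prompt, with the same table-lock warning as above):

-- Rewrites nodestore_node and its TOAST table, locking it until it finishes.
VACUUM FULL nodestore_node;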

PROFIT.

Michael Bladowski

Why a "truncate" after the delete?

Simon Tretter

I was wondering why the disk size wasn't shrinking; it turned out that while running VACUUM nodestore_node; it returned pq: could not resize shared memory segment "..." to ... bytes: No space left on device.
The solution to this was found here: stackoverflow.com/questions/567515...
I just added shm_size: '1gb' to the postgres docker definition and reran docker compose up, which recreated the postgres container. After this, the vacuum started working correctly :)
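For reference, that setting sits on the postgres service in docker-compose.yml; an excerpt would look roughly like this (keep whatever image your setup already uses; only shm_size is the change):

# docker-compose.yml excerpt: raise the container's /dev/shm so the vacuum has room to work
services:
  postgres:
    shm_size: '1gb'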

Makrand

Hello Ashraful,

I am stuck at this point in the worker container. I have close to 1.1 TB of data here.

root@6868e8f50bf8:/# sentry cleanup --days 8
/usr/local/lib/python3.8/site-packages/sentry/runner/initializer.py:555: DeprecatedSettingWarning: The SENTRY_URL_PREFIX setting is deprecated. Please use SENTRY_OPTIONS['system.url-prefix'] instead.
warnings.warn(DeprecatedSettingWarning(old, "SENTRY_OPTIONS['%s']" % new))
/usr/local/lib/python3.8/site-packages/memcache.py:1303: SyntaxWarning: "is" with a literal. Did you mean "=="?
if key is '':
/usr/local/lib/python3.8/site-packages/memcache.py:1304: SyntaxWarning: "is" with a literal. Did you mean "=="?
if key_extra_len is 0:
06:03:37 [INFO] sentry.plugins.github: apps-not-configured
Removing expired values for LostPasswordHash
Removing expired values for OrganizationMember
Removing expired values for ApiGrant
Removing expired values for ApiToken
Removing expired files associated with ExportedData
Removing old NodeStore values

It has been stuck here for about 15 minutes now. Any idea what can be done?

Also, here is how psql looks

postgres=# SELECT oid::regclass, reltoastrelid::regclass, pg_relation_size(reltoastrelid) AS toast_size FROM pg_class WHERE relkind = 'r' AND reltoastrelid <> 0 ORDER BY 3 DESC;
                    oid                     |      reltoastrelid      |  toast_size
--------------------------------------------+-------------------------+--------------
 nodestore_node                             | pg_toast.pg_toast_20250 | 561381097472
 pg_rewrite                                 | pg_toast.pg_toast_2618  |       385024