🧨 How We Fixed Log Bloat in Self-Hosted Supabase on a Coolify Server
We host two Supabase instances on a single self-hosted VPS using Coolify. One day, the entire server crashed: no frontend, no terminal, no Coolify dashboard. Rescue mode was the only way in.
TL;DR:
- PostgreSQL tables in the _analytics schema were bloated by Logflare events
- A single table reached 17GB
- We freed over 30GB by truncating the largest table
- Added a cron job via Coolify to prune the logs regularly
---
🚨 Symptoms
- Server unreachable via SSH
- Coolify dashboard inaccessible
- Supabase services non-functional
- Logs indicated out-of-space issues
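If you can still reach a shell (we couldn't until rescue mode), a quick disk check confirms the diagnosis. A minimal sketch; the paths are the usual defaults, not anything specific to our setup:

```bash
# A root filesystem at 100% use matches these symptoms
df -h

# On a container host, Docker's data directory is the usual first suspect
du -sh /var/lib/docker
```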
---
🔍 Investigation in Rescue Mode
After booting into Rescue Mode via OVH’s dashboard:
1. Inspect mounted volumes

```bash
lsblk
fdisk -l
mount /dev/sdb1 /mnt/root
```

2. Check disk usage

```bash
du -sh /mnt/root/*
```

Result:

```
89G  /mnt/root/var
65G  /mnt/root/swapfile
```

3. Drill into /var

```bash
du -sh /mnt/root/var/*
du -sh /mnt/root/var/lib/*
```

4. Isolate Docker and its volumes

```bash
du -sh /mnt/root/var/lib/docker/*
du -sh /mnt/root/var/lib/docker/volumes/*
```

Found two massive Supabase database volumes, each ~18 GB.
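To surface the biggest volumes at a glance, the same du output can be sorted; an optional refinement on the commands above:

```bash
# List volume sizes smallest to largest so the offenders end up at the bottom
du -sh /mnt/root/var/lib/docker/volumes/* | sort -h
```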
---
🐳 Identify Bloated Supabase Container
Back in normal mode, we connected to Postgres inside the running database container:

```bash
docker exec -it <supabase-db-container> psql -U postgres
```

Listing database sizes showed _supabase at 17 GB:

```sql
SELECT datname, pg_size_pretty(pg_database_size(datname)) FROM pg_database;
```
---
🧠 Find the Culprit Table
```sql
\c _supabase
SELECT n.nspname || '.' || c.relname AS table,
       pg_size_pretty(pg_total_relation_size(c.oid)) AS total_size
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
ORDER BY pg_total_relation_size(c.oid) DESC
LIMIT 5;
```

Result:

```
_analytics.log_events_754f7ab7... | 17 GB
```
This was created by Logflare, Supabase’s built-in logging system.
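Before deleting anything, it's worth confirming the table really is packed with old events. A quick sanity check from the same psql session; it assumes the Logflare table has a timestamp column (the pruning query below relies on the same column), and the truncated table name should be replaced with yours:

```sql
-- How many events are stored, and how far back do they go?
SELECT count(*) AS events,
       min("timestamp") AS oldest,
       max("timestamp") AS newest
FROM _analytics."log_events_754f7ab7...";
```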
---
🧹 Fix: Truncate the Log Table
Truncating as the postgres user failed due to insufficient permissions, so we switched to supabase_admin:

```bash
docker exec -it <supabase-db-container> psql -U supabase_admin -d _supabase
```

```sql
TRUNCATE TABLE _analytics."log_events_754f7ab7...";
```

Result:

```
TRUNCATE TABLE
```
Space instantly freed.
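Rerunning the size query from earlier confirms the reclaim; the _supabase database should shrink dramatically:

```sql
-- Verify the database size after the TRUNCATE
SELECT pg_size_pretty(pg_database_size('_supabase'));
```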
---
🔁 Prevent Future Bloat
We wrote a shell script inside the Supabase Database container.
This is the script (VACUUM goes in its own -c flag because it cannot run inside the implicit transaction PostgreSQL uses for multi-statement command strings):

```bash
#!/bin/bash
# Delete Logflare events older than 30 days, then reclaim dead space
psql -U supabase_admin -d _supabase \
  -c "DELETE FROM _analytics.\"log_events_754f7ab7...\" WHERE timestamp < NOW() - INTERVAL '30 days';" \
  -c "VACUUM;"
```
Here are the steps:

1. Log into the database container from your host console:

```bash
docker exec -it <supabase-db-container> /bin/bash
```

2. Navigate to the PostgreSQL home directory:

```bash
cd /var/lib/postgresql/
```

3. Create the file. Note: log_events_754f7ab7... should be replaced with the event table discovered previously.

```bash
cat <<'EOF' > prune_supabase_logs.sh
#!/bin/bash
# Delete Logflare events older than 30 days, then reclaim dead space
psql -U supabase_admin -d _supabase \
  -c "DELETE FROM _analytics.\"log_events_754f7ab7...\" WHERE timestamp < NOW() - INTERVAL '30 days';" \
  -c "VACUUM;"
EOF
```

4. Make the script executable:

```bash
chmod +x prune_supabase_logs.sh
```
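Before scheduling it, run the script once by hand to confirm the table name and permissions are correct (this assumes you are still inside the container, in /var/lib/postgresql/):

```bash
# A successful run prints DELETE <n> followed by VACUUM, with no errors
./prune_supabase_logs.sh
```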
Then we scheduled it in Coolify:
- Name: Prune Supabase Logs
- Container: supabase-db-<your-container-id>
- Command: /var/lib/postgresql/prune_supabase_logs.sh
- Frequency: Daily
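If you are not on Coolify, a plain cron job on the host can do the same thing. A sketch, assuming the container name placeholder and the script path used above:

```bash
# Append a nightly 03:00 job to the current user's crontab that runs the prune script inside the container
( crontab -l 2>/dev/null; echo '0 3 * * * docker exec <supabase-db-container> /var/lib/postgresql/prune_supabase_logs.sh' ) | crontab -
```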
---
✅ Result
- Disk usage dropped from 100% to under 65%
- Coolify back online
- Supabase services operational
- Log table now automatically cleaned
---
🧪 Key Learnings
- Supabase + Logflare can silently bloat PostgreSQL
- Use pg_size_pretty and pg_total_relation_size to debug bloat
- You may need to use supabase_admin to manage internal schemas
- Automate cleanup using shell scripts and Coolify’s built-in cron support
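To catch regrowth early, the same catalog functions can report the total footprint of the _analytics schema; a sketch you could check by hand or wire into monitoring:

```sql
-- Total size of all ordinary tables in the _analytics schema
SELECT pg_size_pretty(sum(pg_total_relation_size(c.oid))) AS analytics_total
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname = '_analytics'
  AND c.relkind = 'r';
```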
