Hardware: VPS Hetzner
YunoHost version: 11.0.9.14 (stable)
I have access to my server: Through SSH | through the webadmin
Are you in a special context or did you perform some particular tweaking on your YunoHost instance?: no
Description of my issue
Hello everyone,
Today, without my doing anything, the mysql service stopped. I only realized it tonight because many apps were no longer working. I restarted the service; it ran for 2 minutes and then stopped again…
I don't know what to do. Do you have any idea where the problem could be?
Sep 14 23:17:37 systemd[1]: mariadb.service: A process of this unit has been killed by the OOM killer.
Sep 14 23:17:37 systemd[1]: mariadb.service: Main process exited, code=killed, status=9/KILL
I think I have the same issue. I posted it elsewhere, but I think it fits better here.
Description
Updated to 11.0.9.14 (stable) a few days ago without major issues. Every once in a while something crashes and some apps relying on MySQL become unusable:
Mattermost logs "http_code": 500 and "connect: connection refused"
Nextcloud shows an "Internal Server Error" page, and its logs contain SQLSTATE[HY000]: General error: 2006 MySQL server has gone away and SQLSTATE[HY000] [2002] Connection refused
Logs
/var/log/mysql/error.log is empty.
sudo yunohost service log mysql gives back:
Sep 18 15:30:34 systemd[1]: mariadb.service: A process of this unit has been killed by the OOM killer.
Sep 18 15:30:34 systemd[1]: mariadb.service: Main process exited, code=killed, status=9/KILL
Sep 18 15:30:34 systemd[1]: mariadb.service: Failed with result 'oom-kill'.
Sep 18 15:30:34 systemd[1]: mariadb.service: Consumed 6min 26.714s CPU time.
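For more context I can also pull the full unit journal; a standard systemd command like this should work (assuming the unit is called mariadb, as in the log above):
sudo journalctl -u mariadb -n 200 --no-pager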
Finally, in /var/log/syslog.log:
The OOM killer was invoked at Sep 18 15:30:34: python3 invoked oom-killer: gfp_mask=0x100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=0
which finally leads to: Out of memory: Killed process 1296 (mariadbd) total-vm:2104396kB, anon-rss:341680kB, file-rss:0kB, shmem-rss:0kB, UID:106 pgtables:892kB oom_score_adj:0
and systemd[1]: mariadb.service: A process of this unit has been killed by the OOM killer.
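The full list of OOM events in that file can be pulled out with something like this (adjust the path if yours differs):
grep -iE 'oom-killer|Out of memory' /var/log/syslog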
After that multiple services (among them Mattermost & Nextcloud) complain about lost connections.
Monitoring Data
There seem to be many requests before the crash (the crash is on the right-hand side, at 15:30)?
My current workaround for this situation is a reboot via the GUI or sudo yunohost service restart mysql.
So do I get this right: for some reason the system kills mariadb because it has too little RAM, and then the MySQL databases crash? Is that expected behaviour? Would it be solved if I just bought more RAM? (I did so and haven't had a crash since then.) Can I provide more logs for you?
Yes, that would be one way. Another is to check how much memory MySQL is allowed to use.
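Something along these lines should show how much memory MariaDB is configured to use (these are standard MariaDB variables; depending on your setup you may need the MariaDB root password instead of plain sudo):
sudo mysql -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size'"
sudo mysql -e "SHOW VARIABLES LIKE 'key_buffer_size'"
sudo mysql -e "SHOW VARIABLES LIKE 'max_connections'"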
The first thing you could check is to see whether you have swap space allocated. The quickest way is to run the free command. On my home computer it shows 16 GB of RAM, 10 GB of swap:
free
total used free shared buff/cache available
Mem: 16186984 4342812 6643316 590836 5200856 10905008
Swap: 10485752 456192 10029560
The easiest way to check is via diagnosis on the Yunohost admin page:
If you have no swap space, the easiest is to create a swap file. You can do it like this for a swap file of half a GB:
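A sketch of the usual steps for a 512 MB swap file (the path and size are only examples, adjust them to your needs):
sudo fallocate -l 512M /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab   # keep the swap file after a reboot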
Thank you very much for the answer.
So, do I understand you right that this is expected behaviour, and that (if I do not want to reduce the server's performance by using swap) I have to stay on the higher-RAM option (easy, but costly on a VPS)?
It seems the system uses about 3 GB when it starts, which then rises to 4 GB over a few days. So I think I'll go for 8 GB and reboot on a weekly basis…
Well… Expected behaviour would be to "gracefully" lower the performance instead of just going bust.
Depending on how flexible your VPS provider is, you could first see whether swap helps you out. Hardly any VPS still provides HDD-only servers; the performance degradation with, say, 2 GB of swap on SSD might be acceptable (even if only for experimenting). It seems (in the images) that most of the time the load is relatively low, with a peak now and again.
My YunoHosts run for small audiences, on much lighter hardware, mostly with 1 GB of RAM. Without swap, they crash. With swap available, they don't crash and hardly ever use all of the swap.
Thanks very much for your answer.
So, I will create the missing swap partition, but my question is: is there a correlation with the upgrade to YunoHost 11 and Debian 11? My VPS had been running for a year without any problem, with the same applications installed. (I also receive emails from Netdata much more often, telling me about CPU saturation that then returns to normal.)
This is a screenshot from Netdata; we can see how the "root" user uses a huge amount of RAM:
Yes, swap partition or swap file. On a running system it is often easier to add a swap file if you have no free partitions.
Linux is supposed to use available RAM as cache. I'd speculate that usage shows up as "root", but I have no proof for that.
Does top/htop sorted by memory usage show the same pattern?
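For example (standard top/ps options, nothing YunoHost-specific):
top -o %MEM                        # sort top by the %MEM column
ps aux --sort=-%mem | head -n 15   # 15 biggest memory consumers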
It would be quite a coincidence if that were not the case: just about all the software is replaced, of course. I was thinking of possibly changed defaults for MariaDB, but I didn't compare the configuration files. Maybe someone else has an idea?
I never know which memory column to be concerned about. I thought 'RES' (resident) memory was the "real" amount of RAM taken by the application, while 'VIRT' (virtual) includes swap, and 'SHR' (shared) is the amount of memory (either RAM or swap) used by shared objects such as libraries in this process.
Trying to match the numbers in the screenshots, it doesn't add up (virtual is way more, while resident is way below 8 GB). So: I have no clue.
It'd be interesting to see whether you now get a peak and then falling usage, or peaks that keep growing until the server is out of memory again.
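A rough way to capture that over time is a plain shell loop that logs memory usage every five minutes (the log file name is arbitrary; stop the loop with Ctrl+C):
while true; do (date; free -m; echo) >> /root/mem-trend.log; sleep 300; done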
I don't know if it helps, but I have to restart my VPS to make the amount of RAM return to normal. Maybe other users can share their experiences with us?