Hardware: VPS from Hetzner
YunoHost version: 220.127.116.11 (stable)
I have access to my server: through SSH | through the webadmin
Are you in a special context or did you perform some particular tweaking on your YunoHost instance?: no
Description of my issue
Today, without my having changed anything, the mysql service stopped. I only noticed tonight, because many apps had stopped working. I restarted the service; it held for 2 minutes and then stopped again…
I don't know what to do. Do you have any idea where the problem could come from?
Thanks @Aleks for your reply!
I haven't installed anything new for months. However, since the upgrade to Debian 11 and YunoHost 11 on August 10, I have been receiving near-daily alert emails from Netdata telling me that my CPU or my memory is saturated for some period (an hour, for example). When I go look, it is always the root user using that CPU. For the memory, I haven't yet had time to see which user is monopolizing the RAM.
I think I have the same issue. I posted it elsewhere, but I think it fits better here.
I updated to 18.104.22.168 (stable) a few days ago without major issues. Every once in a while something crashes and some apps relying on MySQL become unusable:
Mattermost logs `http_code":500` and `connect: connection refused`.
Nextcloud shows an "Internal Server Error" page, and its logs contain `SQLSTATE[HY000]: General error: 2006 MySQL server has gone away` and `SQLSTATE[HY000] Connection refused`.
/var/log/mysql/error.log is empty.
`sudo yunohost service log mysql` returns:
Sep 18 15:30:34 systemd: mariadb.service: A process of this unit has been killed by the OOM killer.
Sep 18 15:30:34 systemd: mariadb.service: Main process exited, code=killed, status=9/KILL
Sep 18 15:30:34 systemd: mariadb.service: Failed with result 'oom-kill'.
Sep 18 15:30:34 systemd: mariadb.service: Consumed 6min 26.714s CPU time.
Finally, in /var/log/syslog.log:
The OOM killer was invoked at Sep 18 15:30:34: python3 invoked oom-killer: gfp_mask=0x100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=0
which finally leads to: Out of memory: Killed process 1296 (mariadbd) total-vm:2104396kB, anon-rss:341680kB, file-rss:0kB, shmem-rss:0kB, UID:106 pgtables:892kB oom_score_adj:0
and systemd: mariadb.service: A process of this unit has been killed by the OOM killer.
After that multiple services (among them Mattermost & Nextcloud) complain about lost connections.
There seem to be many requests before the crash (the crash is on the right side, at 15:30)?
My current workaround for this situation is reboot via GUI or sudo yunohost service restart mysql.
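For anyone else debugging this: assuming a systemd/journald setup like YunoHost's Debian base, you can confirm it really was the OOM killer (rather than a MariaDB bug) straight from the kernel log:

```shell
# List recent OOM-killer events from the kernel log (needs sudo or the adm group)
sudo journalctl -k --no-pager | grep -iE 'oom-killer|Out of memory'

# Check whether systemd recorded an oom-kill result for MariaDB
systemctl status mariadb.service | grep -iE 'oom|Result'
```

If the first command prints `Out of memory: Killed process … (mariadbd)` lines like the ones above, the database itself is healthy and the problem is purely memory pressure.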
So do I get this right: for some reason the system kills mariadb because it has too little RAM, and then the mysql DBs crash? Is that expected behaviour? Would it be solved if I just bought more RAM? (I did so and haven't had a crash since.) Can I provide more logs for you?
Thank you very much for the answer.
So, do I understand you correctly that this is expected behaviour, and that (if I don't want to reduce the server's performance by using swap) I have to stay on the higher-RAM option (easy but costly on a VPS)?
It seems the system uses about 3 GB at startup, which then rises to 4 GB over a few days. So I think I'll go for 8 GB and reboot on a weekly basis…
Well… Expected behaviour would be to ‘gracefully’ lower the performance, instead of just going bust.
Depending on how flexible your VPS provider is, you could first see if swap helps you out. Hardly any VPS provides HDD-only servers anymore; performance degradation with, say, 2 GB of swap on SSD might be acceptable (even if only for experimenting). In the images it seems the load is relatively low most of the time, with an occasional peak.
My YunoHosts run for small audiences, on much lighter hardware, mostly 1 GB of RAM. Without swap, they crash. With swap available, they don't crash, and they hardly ever use all of the swap.
Thanks very much for your answer.
So I'll create the missing swap, but my question is: is there a correlation with the upgrade to YunoHost 11 and Debian 11? My VPS had been running for a year without any problem with the same applications installed. (I also receive emails from Netdata much more often telling me my CPU is saturated, after which it returns to normal.)
Here is a screenshot from Netdata, where we can see the "root" user using a huge amount of RAM:
Yes, swap partition or swap file. On a running system it is often easier to add a swap file if you have no free partitions.
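To make that concrete, here is a minimal sketch of adding a swap file on a running Debian system. The 2G size and the `/swapfile` path are just examples, not anything YunoHost-specific; adjust both for your VPS:

```shell
# Allocate a 2 GB file (fallocate is fast on ext4; use dd if your filesystem lacks it)
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile        # swap files must not be world-readable
sudo mkswap /swapfile           # format the file as swap space
sudo swapon /swapfile           # enable it immediately

# Make it persist across reboots:
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab

# Verify it is active:
swapon --show
```

After this, the OOM killer should fire far less often, at the cost of slower performance whenever the system actually dips into swap.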
Linux is supposed to use available RAM as cache. I'd speculate that this usage shows up as 'root', but I have no proof of that.
Does top/htop sorted by memory usage show the same pattern?
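If htop isn't installed, plain `ps` gives the same per-process view sorted by resident memory, which should make it obvious which processes (and which user) actually hold the RAM:

```shell
# Top 10 processes by resident set size (RSS, shown in KiB), plus header row
ps -eo pid,user,rss,vsz,comm --sort=-rss | head -n 11
```

On a box like the one described above, `mariadbd` would be expected near the top of this list shortly before an OOM kill.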
It would be quite a coincidence if that were not the case: about all software is replaced of course. I was thinking of maybe changed defaults for MariaDB, but I didn’t compare the configuration files. Maybe someone else has an idea?
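If someone does want to compare MariaDB's memory behaviour before and after the upgrade, the relevant knobs live in the server config (on Debian, under `/etc/mysql/mariadb.conf.d/`). A hedged sketch of a low-memory drop-in; the file name and the 256M value here are illustrative examples, not YunoHost defaults:

```ini
# /etc/mysql/mariadb.conf.d/99-lowmem.cnf  (hypothetical drop-in file)
[mysqld]
# The InnoDB buffer pool is usually the main memory consumer;
# MariaDB's own default is 128M, so only raise it as far as RAM allows.
innodb_buffer_pool_size = 256M
# Each connection thread costs memory; a lower cap bounds the peak.
max_connections = 50
```

Restart with `sudo systemctl restart mariadb` afterwards for the settings to take effect.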
I never know which memory column to be concerned about. I thought 'RES'ident memory was the 'real' amount of RAM taken by the application, while 'VIRT'ual includes swap and 'SHR' (shared) is the amount of memory (either RAM or swap) used by shared objects, such as libraries, in this process.
Trying to match the numbers in the screenshots, it doesn't add up (virtual is way more, while resident is way below 8 GB). So: I have no clue.
It would be interesting to see whether you now get a peak followed by decreasing usage, or peaks that keep growing until the server is out of memory again.
Hello everyone. I still have the same issue. For quite a few days now, my memory and my swap have been completely saturated, and last night mysql crashed again. I had to restart the server.
I don't really know what to do!
When I open the mysql log file in /var/log/mysql/ it is empty. I also see traces of mysql in syslog.