Nextcloud: unable to upgrade from version 25.0.6~ynh1 to 26.0.2~ynh2

Hello
I'm running YunoHost 11.1.21.4, and I've tried several times to upgrade Nextcloud, without success.
Here is the installation log: https://paste.yunohost.org/raw/uronexalen

Once the upgrade has failed, Nextcloud is uninstalled and the mysql service is disabled (I am able to restart it).
When I then try to restore my Nextcloud backup archive, I get an error (along the lines of “mysql database already exists”). I managed to work around that with a trick found on another forum (namely, installing “archivemount” with the command “sudo apt-get install archivemount”).

However, even though I can get back to my starting point, I am still unable to perform the upgrade itself.

Thanks in advance if you can help (I should point out that I don't know much about how mysql works).

EDIT: note that this is not the same error as in this topic: Nextcloud : upgrade failed 25.0.6~ynh1 à 26.0.2~ynh2

EDIT 2: I suspect the problem occurs at this point in the log:
Doctrine\DBAL\Exception: Failed to connect to the database: An exception occurred in the driver: SQLSTATE[HY000] [2002] Connection refused in /var/www/nextcloud/lib/private/DB/Connection.php:142

Erf, OK, it looks like MySQL is working before the upgrade is launched, but something breaks mysql during the upgrade…

What would be interesting would be to reproduce the problem (i.e. run the upgrade and wait for it to break everything), then look closely at what exactly is wrong with mysql, using:

journalctl -u mariadb -n 300 --no-pager --no-hostname

(Yes, we say MySQL, but that's a slight misnomer: the actual service is MariaDB. Long story.)

Thanks for your reply.
Here is what the command returns:

# journalctl -u mariadb -n 300 --no-pager --no-hostname
-- Journal begins at Fri 2023-06-23 19:25:23 CEST, ends at Fri 2023-06-23 20:09:34 CEST. --
Jun 23 19:54:59 systemd[1]: mariadb.service: A process of this unit has been killed by the OOM killer.
Jun 23 19:54:59 systemd[1]: mariadb.service: Main process exited, code=killed, status=9/KILL
Jun 23 19:54:59 systemd[1]: mariadb.service: Failed with result 'oom-kill'.
Jun 23 19:54:59 systemd[1]: mariadb.service: Consumed 5h 51min 37.579s CPU time.

I'm also including the log of the upgrade attempt, with the errors it shows me:

Info: Now upgrading nextcloud...
Info: [....................] > Loading installation settings...
Warning: Nextcloud will soon deprecate 32-bit support. It is recommended to upgrade to a 64-bit architecture.
Info: [+...................] > Ensuring downward compatibility...
Info: [#+++++++++..........] > Backing up the app before upgrading (may take a while)...
Info: [##########++........] > Upgrading dependencies...
Info: [############........] > Making sure dedicated system user exists...
Info: [############........] > Upgrading PHP-FPM configuration...
Info: [############+.......] > Upgrading NGINX web server configuration...
Info: The service nginx has correctly executed the action reload-or-restart.
Info: [#############+......] > Upgrading Nextcloud...
Info: Upgrade to nextcloud 26.0.0
Info: '/tmp/tmp.yC9NJfAiAn' wasn't deleted because it doesn't exist.
Warning: [Error] Upgrade failed.
Warning: mysqlshow: Can't connect to local MySQL server through socket '/run/mysqld/mysqld.sock' (111)
Warning: Database nextcloud not found
Warning: ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/run/mysqld/mysqld.sock' (111)
Warning: 341769 ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/run/mysqld/mysqld.sock' (111)
Warning: 343266 Could not restore nextcloud: An error occured inside the app restore script
Warning: 343407 Here's an extract of the logs before the crash. It might help debugging the error:
Warning: 354073 mysqlshow: Can't connect to local MySQL server through socket '/run/mysqld/mysqld.sock' (111)
Warning: 354182 Database nextcloud not found
Warning: 354471 ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/run/mysqld/mysqld.sock' (111)
Warning: 376297 The user nextcloud was not found
Warning: 394321 The operation 'Restore 'nextcloud' from a backup archive' could not be completed. Please share the full log of this operation using the command 'yunohost log share 20230623-180608-backup_restore_app-nextcloud' to get help
Warning: 396220 Nothing was restored
Warning: Uhoh ... Yunohost failed to restore the app to the way it was before the failed upgrade :|
Error: Could not upgrade nextcloud: An error occurred inside the app upgrade script
Info: The operation 'Upgrade the 'nextcloud' app' could not be completed. Please share the full log of this operation using the command 'yunohost log share 20230623-173701-app_upgrade-nextcloud' to get help
Warning: Here's an extract of the logs before the crash. It might help debugging the error:
Info: DEBUG - 343409 DEBUG - ++ filter=A-Za-z0-9
Info: DEBUG - 343411 DEBUG - ++ sed --quiet 's/\(.\{24\}\).*/\1/p'
Info: DEBUG - 343412 DEBUG - ++ tr --complement --delete A-Za-z0-9
Info: DEBUG - 343414 DEBUG - ++ dd if=/dev/urandom bs=1 count=1000
Info: DEBUG - 343416 DEBUG - + local new_db_pwd=**********
Info: DEBUG - 343417 DEBUG - + db_pwd=**********
Info: DEBUG - 343419 DEBUG - + ynh_mysql_create_db nextcloud nextcloud **********
Info: DEBUG - 343421 DEBUG - + local db=nextcloud
Info: DEBUG - 343422 DEBUG - + local 'sql=CREATE DATABASE nextcloud;'
Info: DEBUG - 343424 DEBUG - + [[ 3 -gt 1 ]]
Info: DEBUG - 343425 DEBUG - + sql+=' GRANT ALL PRIVILEGES ON nextcloud.* TO '\''nextcloud'\''@'\''localhost'\'''
Info: DEBUG - 343426 DEBUG - + [[ -n ********** ]]
Info: DEBUG - 343428 DEBUG - + sql+=' IDENTIFIED BY '\''**********'\'''
Info: DEBUG - 343429 DEBUG - + sql+=' WITH GRANT OPTION;'
Info: DEBUG - 343431 DEBUG - + ynh_mysql_execute_as_root '--sql=CREATE DATABASE nextcloud; GRANT ALL PRIVILEGES ON nextcloud.* TO '\''nextcloud'\''@'\''localhost'\'' IDENTIFIED BY '\''**********'\'' WITH GRANT OPTION;'
Info: DEBUG - 343432 DEBUG - + database=
Info: DEBUG - 343434 DEBUG - + '[' -n '' ']'
Info: DEBUG - 343435 DEBUG - + mysql -B ''
Info: DEBUG - 343436 WARNING - ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/run/mysqld/mysqld.sock' (111)
Info: DEBUG - 343438 DEBUG - + ynh_exit_properly
Error: The app 'nextcloud' failed to upgrade, and as a consequence the following apps' upgrades have been cancelled: nextcloud, vpnclient, zerobin
Error: The operation 'Upgrade the 'nextcloud' app' could not be completed. Please share the full log of this operation using the command 'yunohost log share 20230623-173701-app_upgrade-nextcloud' to get help

So that is typical of a resource shortage on the server, e.g. RAM, which gets the process killed… There is no magic solution, short of uninstalling resource-hungry apps (or at least temporarily disabling the corresponding services).
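
For what it's worth, here is a quick sketch for checking how tight memory actually is before retrying (standard tools, nothing YunoHost-specific):

free -h                                        # RAM and swap usage at a glance
ps aux --sort=-%mem | head -n 10               # the ten most memory-hungry processes
sudo dmesg | grep -i -E 'oom|killed process'   # past OOM-killer victims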

Try a bit of cleanup with these commands:

sudo apt-get autoclean
sync; echo 3 | sudo tee /proc/sys/vm/drop_caches && sudo swapoff -a && sudo swapon -a

I'm not saying this will solve the problem…

That said, if physical memory (RAM) is reaching saturation… isn't there a solution involving swap to relieve the RAM?
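
For reference, a minimal sketch of adding a 1 GB swap file on Debian (the size is just an example; this assumes /swapfile does not already exist, and note that heavy swapping can wear out SD/eMMC storage):

sudo fallocate -l 1G /swapfile   # if fallocate fails on your filesystem: sudo dd if=/dev/zero of=/swapfile bs=1M count=1024
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# to make it permanent, add this line to /etc/fstab:
#   /swapfile none swap sw 0 0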

I'd be glad to! But I don't know nearly enough to set that up…

EDIT:

I did all of that, but the upgrade failed with the same error…

Hi,

Were you able to solve your problem?


No, it still doesn't work, but I have the impression I'm getting closer to identifying the issue.
I think I have a “disk full” problem, in particular on /var/log (post here: Erreur "no space left on device" - #11 by Smidge)
And CRON also emails me every hour to say it has cleared nearly 15 MB of logs (Envoi automatique de mail CRON toutes les 8 heures - #6)
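
As a quick check of where the space is actually going (a sketch; adjust paths as needed):

df -h /var/log                                  # how full the log partition is
sudo du -sh /var/log/* | sort -h | tail -n 15   # biggest log consumers
sudo journalctl --vacuum-size=100M              # trim the systemd journal to 100 MB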

After looking through the several tens of thousands of log lines… I get the impression there is a problem with the fail2ban service, which generates messages in a loop and fills my log journal endlessly.

That said, I don't know enough to know what to do about it…

Here in particular is what I receive very, very often in the log: hastebin

I'm also including some other excerpts of what journalctl gives me repeatedly:

Started YunoHost VPN Client Checker..
Jul 02 14:11:56 smidge.noho.st systemd[1]: ynh-vpnclient-checker.service: Succeeded.
Jul 02 14:11:59 smidge.noho.st systemd[1]: avahi-alias@briqueinternet.local.service: Scheduled restart job, restart counter is at 55262.
Jul 02 14:11:59 smidge.noho.st systemd[1]: avahi-alias@internetcube.local.service: Scheduled restart job, restart counter is at 55262.
Jul 02 14:11:59 smidge.noho.st systemd[1]: Stopped Advertise briqueinternet.local as a local domain for this machine.
Jul 02 14:11:59 smidge.noho.st dbus-daemon[598]: [system] Activating via systemd: service name='org.freedesktop.Avahi' unit='dbus-org.freedesktop.Avahi.service>
Jul 02 14:11:59 smidge.noho.st systemd[1]: Started Advertise briqueinternet.local as a local domain for this machine.
Jul 02 14:11:59 smidge.noho.st systemd[1]: Stopped Advertise internetcube.local as a local domain for this machine.
Jul 02 14:11:59 smidge.noho.st systemd[1]: Started Advertise internetcube.local as a local domain for this machine.
Jul 02 14:11:59 smidge.noho.st dbus-daemon[598]: [system] Activation via systemd failed for unit 'dbus-org.freedesktop.Avahi.service': Unit dbus-org.freedeskto>
Jul 02 14:11:59 smidge.noho.st bash[8178]: Failed to create client object: Daemon not running
Jul 02 14:11:59 smidge.noho.st systemd[1]: avahi-alias@briqueinternet.local.service: Main process exited, code=exited, status=1/FAILURE
Jul 02 14:11:59 smidge.noho.st systemd[1]: avahi-alias@briqueinternet.local.service: Failed with result 'exit-code'.
Jul 02 14:11:59 smidge.noho.st dbus-daemon[598]: [system] Activating via systemd: service name='org.freedesktop.Avahi' unit='dbus-org.freedesktop.Avahi.service>
Jul 02 14:11:59 smidge.noho.st dbus-daemon[598]: [system] Activation via systemd failed for unit 'dbus-org.freedesktop.Avahi.service': Unit dbus-org.freedeskto>
Jul 02 14:11:59 smidge.noho.st bash[8184]: Failed to create client object: Daemon not running
Jul 02 14:11:59 smidge.noho.st systemd[1]: avahi-alias@internetcube.local.service: Main process exited, code=exited, status=1/FAILURE
Jul 02 14:11:59 smidge.noho.st systemd[1]: avahi-alias@internetcube.local.service: Failed with result 'exit-code'.
Jul 02 14:12:09 smidge.noho.st systemd[1]: avahi-alias@briqueinternet.local.service: Scheduled restart job, restart counter is at 55263.
Jul 02 14:12:09 smidge.noho.st systemd[1]: avahi-alias@internetcube.local.service: Scheduled restart job, restart counter is at 55263.
Jul 02 14:12:09 smidge.noho.st systemd[1]: Stopped Advertise briqueinternet.local as a local domain for this machine.
Jul 02 14:12:09 smidge.noho.st systemd[1]: Started Advertise briqueinternet.local as a local domain for this machine.
Jul 02 14:12:09 smidge.noho.st systemd[1]: Stopped Advertise internetcube.local as a local domain for this machine.
Jul 02 14:12:09 smidge.noho.st systemd[1]: Started Advertise internetcube.local as a local domain for this machine.
Jul 02 14:12:09 smidge.noho.st dbus-daemon[598]: [system] Activating via systemd: service name='org.freedesktop.Avahi' unit='dbus-org.freedesktop.Avahi.service>
Jul 02 14:12:09 smidge.noho.st dbus-daemon[598]: [system] Activation via systemd failed for unit 'dbus-org.freedesktop.Avahi.service': Unit dbus-org.freedeskto>
Jul 02 14:12:09 smidge.noho.st bash[8190]: Failed to create client object: Daemon not running
Jul 02 14:12:09 smidge.noho.st systemd[1]: avahi-alias@briqueinternet.local.service: Main process exited, code=exited, status=1/FAILURE
Jul 02 14:12:09 smidge.noho.st systemd[1]: avahi-alias@briqueinternet.local.service: Failed with result 'exit-code'.
Jul 02 14:12:09 smidge.noho.st dbus-daemon[598]: [system] Activating via systemd: service name='org.freedesktop.Avahi' unit='dbus-org.freedesktop.Avahi.service>
Jul 02 14:12:09 smidge.noho.st dbus-daemon[598]: [system] Activation via systemd failed for unit 'dbus-org.freedesktop.Avahi.service': Unit dbus-org.freedeskto>
Jul 02 14:12:09 smidge.noho.st bash[8194]: Failed to create client object: Daemon not running
Jul 02 14:12:09 smidge.noho.st systemd[1]: avahi-alias@internetcube.local.service: Main process exited, code=exited, status=1/FAILURE
Jul 02 14:12:09 smidge.noho.st systemd[1]: avahi-alias@internetcube.local.service: Failed with result 'exit-code'.
Jul 02 14:12:19 smidge.noho.st systemd[1]: avahi-alias@briqueinternet.local.service: Scheduled restart job, restart counter is at 55264.
Jul 02 14:12:19 smidge.noho.st systemd[1]: avahi-alias@internetcube.local.service: Scheduled restart job, restart counter is at 55264.
Jul 02 14:12:19 smidge.noho.st systemd[1]: Stopped Advertise briqueinternet.local as a local domain for this machine.
Jul 02 14:12:19 smidge.noho.st systemd[1]: Started Advertise briqueinternet.local as a local domain for this machine.
Jul 02 14:12:19 smidge.noho.st systemd[1]: Stopped Advertise internetcube.local as a local domain for this machine.
Jul 02 14:12:19 smidge.noho.st systemd[1]: Started Advertise internetcube.local as a local domain for this machine.
Jul 02 14:12:19 smidge.noho.st dbus-daemon[598]: [system] Activating via systemd: service name='org.freedesktop.Avahi' unit='dbus-org.freedesktop.Avahi.service>
Jul 02 14:12:19 smidge.noho.st dbus-daemon[598]: [system] Activation via systemd failed for unit 'dbus-org.freedesktop.Avahi.service': Unit dbus-org.freedeskto>
Jul 02 14:12:19 smidge.noho.st bash[8202]: Failed to create client object: Daemon not running

Or again:

Jul 02 14:04:38 smidge.noho.st fail2ban-server[716]: --- Logging error ---
Jul 02 14:04:38 smidge.noho.st fail2ban-server[716]: Traceback (most recent call last):
Jul 02 14:04:38 smidge.noho.st fail2ban-server[716]:   File "/usr/lib/python3.9/logging/__init__.py", line 1082, in emit
Jul 02 14:04:38 smidge.noho.st fail2ban-server[716]:     stream.write(msg + self.terminator)
Jul 02 14:04:38 smidge.noho.st fail2ban-server[716]: ValueError: I/O operation on closed file.
Jul 02 14:04:38 smidge.noho.st fail2ban-server[716]: Call stack:
Jul 02 14:04:38 smidge.noho.st fail2ban-server[716]:   File "/usr/lib/python3/dist-packages/fail2ban/server/jailthread.py", line 82, in _bootstrap
Jul 02 14:04:38 smidge.noho.st fail2ban-server[716]:     return super(JailThread, self)._bootstrap();
Jul 02 14:04:38 smidge.noho.st fail2ban-server[716]:   File "/usr/lib/python3.9/threading.py", line 912, in _bootstrap
Jul 02 14:04:38 smidge.noho.st fail2ban-server[716]:     self._bootstrap_inner()
Jul 02 14:04:38 smidge.noho.st fail2ban-server[716]:   File "/usr/lib/python3.9/threading.py", line 954, in _bootstrap_inner
Jul 02 14:04:38 smidge.noho.st fail2ban-server[716]:     self.run()
Jul 02 14:04:38 smidge.noho.st fail2ban-server[716]:   File "/usr/lib/python3/dist-packages/fail2ban/server/jailthread.py", line 69, in run_with_except_hook
Jul 02 14:04:38 smidge.noho.st fail2ban-server[716]:     run(*args, **kwargs)
Jul 02 14:04:38 smidge.noho.st fail2ban-server[716]:   File "/usr/lib/python3/dist-packages/fail2ban/server/filterpyinotify.py", line 345, in run
Jul 02 14:04:38 smidge.noho.st fail2ban-server[716]:     self.__notifier.process_events()
Jul 02 14:04:38 smidge.noho.st fail2ban-server[716]:   File "/usr/lib/python3/dist-packages/pyinotify.py", line 1275, in process_events
Jul 02 14:04:38 smidge.noho.st fail2ban-server[716]:     self._default_proc_fun(revent)
Jul 02 14:04:38 smidge.noho.st fail2ban-server[716]:   File "/usr/lib/python3/dist-packages/pyinotify.py", line 910, in __call__
Jul 02 14:04:38 smidge.noho.st fail2ban-server[716]:     return _ProcessEvent.__call__(self, event)
Jul 02 14:04:38 smidge.noho.st fail2ban-server[716]:   File "/usr/lib/python3/dist-packages/pyinotify.py", line 636, in __call__
Jul 02 14:04:38 smidge.noho.st fail2ban-server[716]:     return self.process_default(event)
Jul 02 14:04:38 smidge.noho.st fail2ban-server[716]:   File "/usr/lib/python3/dist-packages/fail2ban/server/filterpyinotify.py", line 307, in __process_default
Jul 02 14:04:38 smidge.noho.st fail2ban-server[716]:     self.callback(event, origin='Default ')
Jul 02 14:04:38 smidge.noho.st fail2ban-server[716]:   File "/usr/lib/python3/dist-packages/fail2ban/server/filterpyinotify.py", line 133, in callback
Jul 02 14:04:38 smidge.noho.st fail2ban-server[716]:     self._process_file(path)
Jul 02 14:04:38 smidge.noho.st fail2ban-server[716]:   File "/usr/lib/python3/dist-packages/fail2ban/server/filterpyinotify.py", line 142, in _process_file
Jul 02 14:04:38 smidge.noho.st fail2ban-server[716]:     self.getFailures(path)
Jul 02 14:04:38 smidge.noho.st fail2ban-server[716]:   File "/usr/lib/python3/dist-packages/fail2ban/server/filter.py", line 1123, in getFailures
Jul 02 14:04:38 smidge.noho.st fail2ban-server[716]:     self.processLineAndAdd(line.rstrip('\r\n'))
Jul 02 14:04:38 smidge.noho.st fail2ban-server[716]:   File "/usr/lib/python3/dist-packages/fail2ban/server/filter.py", line 692, in processLineAndAdd
Jul 02 14:04:38 smidge.noho.st fail2ban-server[716]:     logSys.info(
Jul 02 14:04:38 smidge.noho.st fail2ban-server[716]:   File "/usr/lib/python3.9/logging/__init__.py", line 1442, in info
Jul 02 14:04:38 smidge.noho.st fail2ban-server[716]:     self._log(INFO, msg, args, **kwargs)
Jul 02 14:04:38 smidge.noho.st fail2ban-server[716]:   File "/usr/lib/python3/dist-packages/fail2ban/helpers.py", line 246, in __safeLog
Jul 02 14:04:38 smidge.noho.st fail2ban-server[716]:     __origLog(self, level, msg, args, **kwargs)
Jul 02 14:04:38 smidge.noho.st fail2ban-server[716]: Message: '[%s] Found %s - %s'
Jul 02 14:04:38 smidge.noho.st fail2ban-server[716]: Arguments: ('pam-generic', '157.230.49.63', '2023-07-02 14:04:38')
Jul 02 14:04:40 smidge.noho.st sshd[7465]: Failed password for invalid user ubuntu from 157.230.49.63 port 43086 ssh2
Jul 02 14:04:40 smidge.noho.st fail2ban-server[716]: --- Logging error ---
Jul 02 14:04:40 smidge.noho.st fail2ban-server[716]: Traceback (most recent call last):
Jul 02 14:04:40 smidge.noho.st fail2ban-server[716]:   File "/usr/lib/python3.9/logging/__init__.py", line 1082, in emit
Jul 02 14:04:40 smidge.noho.st fail2ban-server[716]:     stream.write(msg + self.terminator)
Jul 02 14:04:40 smidge.noho.st fail2ban-server[716]: ValueError: I/O operation on closed file.
Jul 02 14:04:40 smidge.noho.st fail2ban-server[716]: Call stack:
Jul 02 14:04:40 smidge.noho.st fail2ban-server[716]:   File "/usr/lib/python3/dist-packages/fail2ban/server/jailthread.py", line 82, in _bootstrap
Jul 02 14:04:40 smidge.noho.st fail2ban-server[716]:     return super(JailThread, self)._bootstrap();
Jul 02 14:04:40 smidge.noho.st fail2ban-server[716]:   File "/usr/lib/python3.9/threading.py", line 912, in _bootstrap
Jul 02 14:04:40 smidge.noho.st fail2ban-server[716]:     self._bootstrap_inner()
Jul 02 14:04:40 smidge.noho.st fail2ban-server[716]:   File "/usr/lib/python3.9/threading.py", line 954, in _bootstrap_inner
Jul 02 14:04:40 smidge.noho.st fail2ban-server[716]:     self.run()
Jul 02 14:04:40 smidge.noho.st fail2ban-server[716]:   File "/usr/lib/python3/dist-packages/fail2ban/server/jailthread.py", line 69, in run_with_except_hook
Jul 02 14:04:40 smidge.noho.st fail2ban-server[716]:     run(*args, **kwargs)
Jul 02 14:04:40 smidge.noho.st fail2ban-server[716]:   File "/usr/lib/python3/dist-packages/fail2ban/server/filterpyinotify.py", line 345, in run
Jul 02 14:04:40 smidge.noho.st fail2ban-server[716]:     self.__notifier.process_events()
Jul 02 14:04:40 smidge.noho.st fail2ban-server[716]:   File "/usr/lib/python3/dist-packages/pyinotify.py", line 1275, in process_events
Jul 02 14:04:40 smidge.noho.st fail2ban-server[716]:     self._default_proc_fun(revent)
Jul 02 14:04:40 smidge.noho.st fail2ban-server[716]:   File "/usr/lib/python3/dist-packages/pyinotify.py", line 910, in __call__
Jul 02 14:04:40 smidge.noho.st fail2ban-server[716]:     return _ProcessEvent.__call__(self, event)
Jul 02 14:04:40 smidge.noho.st fail2ban-server[716]:   File "/usr/lib/python3/dist-packages/pyinotify.py", line 636, in __call__
Jul 02 14:04:40 smidge.noho.st fail2ban-server[716]:     return self.process_default(event)
Jul 02 14:04:40 smidge.noho.st fail2ban-server[716]:   File "/usr/lib/python3/dist-packages/fail2ban/server/filterpyinotify.py", line 307, in __process_default
Jul 02 14:04:40 smidge.noho.st fail2ban-server[716]:     self.callback(event, origin='Default ')
Jul 02 14:04:40 smidge.noho.st fail2ban-server[716]:   File "/usr/lib/python3/dist-packages/fail2ban/server/filterpyinotify.py", line 133, in callback
Jul 02 14:04:40 smidge.noho.st fail2ban-server[716]:     self._process_file(path)
Jul 02 14:04:40 smidge.noho.st fail2ban-server[716]:   File "/usr/lib/python3/dist-packages/fail2ban/server/filterpyinotify.py", line 142, in _process_file
Jul 02 14:04:40 smidge.noho.st fail2ban-server[716]:     self.getFailures(path)
Jul 02 14:04:40 smidge.noho.st fail2ban-server[716]:   File "/usr/lib/python3/dist-packages/fail2ban/server/filter.py", line 1123, in getFailures
Jul 02 14:04:40 smidge.noho.st fail2ban-server[716]:     self.processLineAndAdd(line.rstrip('\r\n'))
Jul 02 14:04:40 smidge.noho.st fail2ban-server[716]:   File "/usr/lib/python3/dist-packages/fail2ban/server/filter.py", line 692, in processLineAndAdd
Jul 02 14:04:40 smidge.noho.st fail2ban-server[716]:     logSys.info(
Jul 02 14:04:40 smidge.noho.st fail2ban-server[716]:   File "/usr/lib/python3.9/logging/__init__.py", line 1442, in info
Jul 02 14:04:40 smidge.noho.st fail2ban-server[716]:     self._log(INFO, msg, args, **kwargs)
Jul 02 14:04:40 smidge.noho.st fail2ban-server[716]:   File "/usr/lib/python3/dist-packages/fail2ban/helpers.py", line 246, in __safeLog
Jul 02 14:04:40 smidge.noho.st fail2ban-server[716]:     __origLog(self, level, msg, args, **kwargs)
Jul 02 14:04:40 smidge.noho.st fail2ban-server[716]: Message: '[%s] Found %s - %s'
Jul 02 14:04:40 smidge.noho.st fail2ban-server[716]: Arguments: ('sshd', '157.230.49.63', '2023-07-02 14:04:40')
Jul 02 14:04:43 smidge.noho.st sshd[7465]: pam_unix(sshd:auth): check pass; user unknown
Jul 02 14:04:45 smidge.noho.st sshd[7465]: Failed password for invalid user ubuntu from 157.230.49.63 port 43086 ssh2

I'll take any attempted solution!
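
One lead for the fail2ban “Logging error” loop above (a guess, not a confirmed fix): that ValueError typically appears when fail2ban keeps writing to a log file handle that has gone away, e.g. after rotation or a full disk. Asking it to reopen its log files, or restarting the service, can stop the loop:

sudo fail2ban-client flushlogs   # ask fail2ban to reopen its log files
sudo systemctl restart fail2ban  # or restart the whole service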

This may help you:

A friend of mine managed to solve the same problem, and here is the procedure he gave me.
Apparently he found this solution on the internet.
I hope it can help you.

Edit the file /usr/lib/systemd/system/mysqld.service
Go down to this line and change it:

Number of files limit. previously [mysqld_safe] open-files-limit

It should look like this; replace the value with 65535:

LimitNOFILE=65535

Maximum core size. previously [mysqld_safe] core-file-size

Here, add the same thing: 65535

LimitCORE=65535

Save the file and run:
systemctl daemon-reload to apply the change

Restart MySQL:
service mysqld restart
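
One caveat with that procedure: on a Debian/YunoHost system the unit is usually mariadb.service, not mysqld.service, and files under /usr/lib/systemd/system get overwritten on package upgrades. The same limits can be set more durably as a drop-in override (a sketch):

sudo systemctl edit mariadb      # opens an override file; add the two lines below
# [Service]
# LimitNOFILE=65535
# LimitCORE=65535
sudo systemctl daemon-reload
sudo systemctl restart mariadb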

Thanks for your reply.

What I'd like is not just to “mask” the problem by increasing the amount of RAM dedicated to the logs (a problem that was pointed out to me here: Erreur "no space left on device" - #8 by Benance), but to actually fix whatever is making the log journals fill up endlessly.

I'm not knowledgeable enough to know what the procedure you suggested actually does. I hope it fixes the problem “properly” (in the sense of “the right way”)?

Is zram enabled on your server? It can really help in some cases where memory runs short. What kind of machine is it?

How can I find that out?

I have an Internet Cube; I believe it's an Olimex LIME2.

If this page is to be believed, zram should exist by default on Armbian (the system you're using, I think).
To get information, in a terminal window, the commands zramctl and swapon --show should give a few clues…
Or query the kernel directly with grep -R . /sys/module/zswap/parameters/

The principle of zram is to compress data in memory, which can be very effective in some cases, at the cost of some CPU load (for the compression/decompression). A few settings can be tuned, such as the percentage of physical RAM that the compressed pool may use, which can be increased. That setting is max_pool_percent (strictly speaking a zswap parameter).
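
If you want to inspect or raise that limit, a small sketch (assumes zswap is compiled into the kernel; 25 is just an example value):

cat /sys/module/zswap/parameters/enabled            # Y if zswap is active
cat /sys/module/zswap/parameters/max_pool_percent   # current pool limit
echo 25 | sudo tee /sys/module/zswap/parameters/max_pool_percent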

Thanks for your reply.
However, if possible, I'd rather not just increase the amount of virtually available RAM, but instead reduce the number of errors written to the logs (thousands of lines every minute), which eat up all the space there.
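
For the record, the noisiest loop in the excerpts above is the avahi-alias one (“Failed to create client object: Daemon not running”), restarting every ten seconds. A sketch of two things worth trying, assuming the .local aliases are not essential to you (not a guaranteed fix):

systemctl status avahi-daemon --no-pager   # is the daemon itself down?
sudo systemctl restart avahi-daemon        # if so, bring it back
# if the alias units keep flapping and you can live without them:
sudo systemctl disable --now avahi-alias@briqueinternet.local.service
sudo systemctl disable --now avahi-alias@internetcube.local.service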
