Nextcloud oc_filecache growing huge: too many rows (hence backup fails with: lost connection to MySQL)

My YunoHost server

Hardware: Raspberry Pi at home
YunoHost version: latest
I have access to my server: through SSH
Are you in a special context or did you perform some particular tweaking on your YunoHost instance?: not really

Description of my issue

I can’t backup Nextcloud, as it gives me this error:

mysqldump: Error 2013: Lost connection to MySQL server during query when dumping table `oc_filecache` at row: 2881244

I tried increasing both net_read_timeout and net_write_timeout to 120, and max_allowed_packet to 160M (it was 16M) in /etc/mysql/my.cnf, then reloaded the mysql service.
(As advised here Solving Error 2013: Lost connection to MySQL server during query when dumping table | Ottomatik Help Center)
It still fails, and not much further than before (still around row ~3,000,000).
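For reference, this is roughly what I changed (a sketch; on Debian the settings may actually live in a file included from /etc/mysql/ rather than in my.cnf itself, and the values are just the ones I tried):

```ini
[mysqld]
# Values I tried; adjust to taste
net_read_timeout   = 120
net_write_timeout  = 120
max_allowed_packet = 160M

[mysqldump]
# mysqldump reads its own section, so raise its packet limit too
max_allowed_packet = 160M
```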

Since it is running on an RPi, is the DB on an SD card? Could it be that the backup asks too much of it?

Does the timeout match the actual runtime of the backup, or could it be that it is running out of space? How large is the DB, and how much free space is there?
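Something like this should show how big the tables are (a sketch, assuming the database is called nextcloud; free disk space itself is just df -h at the shell):

```sql
-- Approximate on-disk size and row count of the Nextcloud tables
SELECT table_name,
       table_rows,
       ROUND((data_length + index_length) / 1024 / 1024) AS size_mb
  FROM information_schema.tables
 WHERE table_schema = 'nextcloud'
 ORDER BY (data_length + index_length) DESC;
```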

(I haven’t been able to upgrade NC since version 18, because the DB got too large for my system to back up :sob: )

Thanks for your help :slight_smile:

I’m not sure now, but I think so.
I don’t know, but a previous backup a few days ago worked.

I don’t understand this.

I will check that, but I should have enough free space… several GB, while the DB shouldn’t be that big.

You mean using the YunoHost automatic backup?
I might have a couple of possibilities to check in order to help you make that upgrade, if you wish :slight_smile:
Do you have a topic on this, so I can avoid going off-topic here?

I mean, when you change the timeout from, say, 120 seconds to 300 seconds and compare the logs of the two attempts, does the second run for about 180 seconds longer?

Another guess: you changed net_read_timeout/net_write_timeout, but could it be that that timeout is not used by mysqldump? The flash layer of the SD card is heavily hammered and maybe has to clear blocks before writing data; MySQL is patient enough to wait, but mysqldump thinks MySQL has gone away?

Yes, Nextcloud upgrade: how to shrink oc_filecache.ibd (441GB at the moment), but the topic is closed. If you have a suggestion, I will open a new topic for it.

With 8 GB of free space, it failed (and even sooner than before, row 2264100).
Not a disk space issue I think.

Ok !
I did not measure it at all; it’s a long automatic backup (using archivist), and I won’t monitor it, it’s too painful ^^
But it stops at around the same number of rows, which makes me think it’s not a timeout issue (I tripled the value, with no visible change).

I have no idea… :frowning:


One quick suggestion, just to be sure: if you have enough space for the temporary backup (before it makes the final .tar file, I mean), but not for it plus the final tar archive, and you are using a recent version of YunoHost where backups are no longer compressed (.tar instead of .tar.gz), one solution could be to compress existing backups to save space, at least temporarily.
In addition, you could check whether some system log files are using a lot of storage (like >100 MB), and run apt-get clean to free the apt cache. That might save you a few gigs.
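A sketch of what I mean (the archive name is just an example; on a default install YunoHost keeps its archives under /home/yunohost.backup/archives):

```sh
# Example only: compress an existing uncompressed backup to free some space
gzip /home/yunohost.backup/archives/20210101-000000.tar

# Spot oversized log files and free the apt package cache
du -sh /var/log/* | sort -h | tail
apt-get clean
```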

Thank you for the suggestions for my off topic problem :slight_smile:

For your problem I am out of ideas, sorry :frowning:


I might have a broader issue: several times a day, my server starts lagging a lot, and mysql is under very high load, for no apparent reason. Restarting it solves the issue; so does waiting a long time (such as 1 h).
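One way to see what MariaDB is actually busy with while the load is high (a sketch, from a mariadb prompt):

```sql
-- Which queries are running, and for how long
SHOW FULL PROCESSLIST;
-- Current InnoDB activity can also give a hint
SHOW ENGINE INNODB STATUS\G
```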

Did you install the ‘prepare thumbnails’ plugin? That would be my first suspect (mostly because it is the only one I can think of :stuck_out_tongue: )

I didn’t.

Turns out I have another issue that is causing this: my oc_filecache table has… 12,495,189 rows. 38 GB.
I don’t have that many files.
In fact, around 10,000,000 “cached file” rows are from… the /home/myaccount/ folder… which is almost empty, apart from a symbolic link to /media, where I mount an external HDD (via USB, not smb or such). Only something like 10,000 rows are dedicated directly to /media (in another group of the oc_filecache table).
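For reference, a query along these lines shows which storage the rows belong to (a sketch, assuming the default oc_ table prefix):

```sql
-- Count oc_filecache rows per storage to see where they come from
SELECT s.id AS storage, COUNT(*) AS nb_rows
  FROM oc_filecache f
  JOIN oc_storages s ON s.numeric_id = f.storage
 GROUP BY s.id
 ORDER BY nb_rows DESC;
```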

This also explains why mysql has been regularly overloading my server… turns out there is an issue with file caching?

Related to : Nextcloud upgrade: how to shrink oc_filecache.ibd (441GB at the moment), same issue as you @wbk
(and maybe Le cron.php de nextcloud pompe toutes les ressources - #2 by tierce Something hammering MySQL (5M/s disk writes)?)

Sorry for keeping silent, yes, I also had the filecaching problem.
I removed all external storage from the Nextcloud configuration, but the problem persists because my server is not capable enough to remove records from the table. No help from me, at this point :frowning:
I may have some notes with a simple cursor for removing cache rows, let me check…

I “solved” my issue by very slowly (it took days) deleting most of these rows, from the category belonging to an “external storage” (actually /home, with a few symbolic links pointing to an external storage mount that was counted a few hundred times…), then doing a “truncate” to empty the database table. The filecache rebuilding went well.
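For the record, the slow cleanup looked roughly like this (a sketch; 42 is a placeholder for the numeric_id of the offending storage in oc_storages):

```sql
-- Delete one storage's rows in small batches to avoid locking the server for hours
DELETE FROM oc_filecache WHERE storage = 42 LIMIT 10000;
-- ...repeat until 0 rows are affected, then empty the table entirely:
TRUNCATE oc_filecache;
```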

Problem not solved: the Nextcloud cron re-indexed some files several times (from one external storage, /home)… Disk full again.

:frowning:

I wish there were a more lightweight option for Nextcloud to present files through the web interface. As far as I understand, any file to be shown in Nextcloud has to come from the cache. It would be nice if it could just link to the file on disk, as long as no changes/versioning are made.

Sorry for not replying sooner. I did look for my cleanup scripts, but have not found them.


@wbk @Lapineige same issue as yours, with a 2.7 GB oc_filecache!
I have 6 SMB/CIFS external storages from a NAS…
I understand why my NC has become so slow!
I was migrating it from LXC to Docker… and the SQL backup showed me the issue.
I will report back if I find something that can help!


That’s a “tiny” one, mine went up to ~42GB :sweat_smile:

For your migration: just empty that table (in mariadb: use nextcloud; truncate oc_filecache;), Nextcloud will recreate it.
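Something like this (a sketch; ideally put Nextcloud in maintenance mode first):

```sql
-- From a mariadb prompt on the server
USE nextcloud;
TRUNCATE oc_filecache;
```

Afterwards, a file scan (occ files:scan --all, run as the Nextcloud system user of your install) or simply browsing the files app will repopulate the cache.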


Thanks !
I’ll give it a try (it’s actually already in progress…)

Me too. Thanks @Lapineige!

Back then I spent hours (days, if not weeks) trying to delete just the records matching the external storage. That ran fast enough, but there were millions of records to be removed, and they got added again in the blink of an eye.

Now I can hardly believe that I just freed 330 GB of tablespace in less than 20 seconds :rofl:

MariaDB [nextcloud]> truncate oc_filecache;
Query OK, 0 rows affected (17.120 sec)

I might finally be able to upgrade away from NC 18 :stuck_out_tongue: