Hardware: computer
YunoHost version: 4.0.8
I have access to my server: Through SSH | through the webadmin
Are you in a special context or did you perform some particular tweaking on your YunoHost instance?: yes
If yes, please explain: having a reverse proxy in front
Description of my issue
I have successfully migrated from 3 to 4 and my apps seem to be working fine.
I'm running Synapse, Seafile and Baikal.
The problem now is that after I start a backup from the web GUI, yunohost-api goes crazy and loads the CPU to near 100%.
If I restart yunohost-api I can use the web GUI, except for the backup part: if I press "Local archives" the problem is back.
The backup itself seems to be OK:
$ sudo yunohost backup list
archives:
- 20201114-154909
Any help on how to fault-find this would be much appreciated!
So the problem is that creating a backup eats all the CPU? … Imho it's not entirely unexpected … except maybe if it does eat all the CPU for a loooong time …
Or is it that listing the existing archives eats the CPU ?
I don't think it's during the backup; I think it is still eating CPU after the backup is done(?). So it looks like I can't check my local backups via the web GUI after the backup is ready. The "yunohost" service was not loading the CPU, only "yunohost-api". Normally you get a timeout in the web GUI if something is going on (I did not get this), like "There is already a YunoHost operation running. Please wait for it to finish before running another one.", right?
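For fault-finding, here is a quick sketch using plain Linux tools (nothing YunoHost-specific, assuming shell access via SSH) to confirm which process is actually pegging the CPU after the backup finishes:

```shell
# List the five biggest CPU consumers; a stuck yunohost-api
# process should show up at or near the top.
ps aux --sort=-%cpu | head -n 5

# Show the full command lines of any YunoHost-related processes,
# so you can tell yunohost-api apart from a backup subprocess.
# (|| true: don't fail if nothing matches on this machine.)
pgrep -af yunohost || true
```

If yunohost-api stays at the top long after `yunohost backup list` would have returned on the command line, that points at the API process itself rather than the backup job.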
I will do a new backup via SSH and check again to be sure.
There are about 10 archives and 3 of them are big (Seafile), like 300 GB.
I'm doing a full backup now via SSH and it looks OK so far; the GUI is giving me the message that it's occupied…
I will let it finish and then check again (it will take hours).
Ugh yeah alright … I'm not that surprised that backup operations (even listing the archives) may take a while then … Though it's still a "bug", which imho is related to the fact that so far YunoHost created compressed archives (.gz), which therefore require a shitload of CPU for large archives - even though compression is probably not even that useful (large archives are usually large because of media files, which are … already compressed…).
With YunoHost 4.1, archives will be uncompressed by default (regular .tar instead of .tar.gz), which should mitigate that kind of issue.
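To illustrate the point (a self-contained sketch with synthetic data, not YunoHost's actual backup code): packing the same data into a plain .tar is mostly I/O, while .tar.gz forces gzip to chew through every byte, and already-compressed media-like data is the worst case because gzip burns CPU for almost no size gain:

```shell
# Create ~20 MB of incompressible "media-like" data (random bytes).
dd if=/dev/urandom of=/tmp/sample.bin bs=1M count=20 2>/dev/null

# Plain tar: essentially a copy, negligible CPU.
time tar -cf /tmp/sample.tar -C /tmp sample.bin

# Gzipped tar: gzip works through every byte, and barely
# shrinks random data at all.
time tar -czf /tmp/sample.tar.gz -C /tmp sample.bin

# Compare sizes: the .tar.gz ends up roughly as big as the .tar.
ls -lh /tmp/sample.tar /tmp/sample.tar.gz
```

Scale that up to a 300 GB Seafile archive and the CPU cost of the .gz path becomes very noticeable, which is exactly what the switch to plain .tar in 4.1 avoids.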