Take the time to say Hi and stay friendly, this is a forum and project 100% run by volunteer human beings
Hi!
My YunoHost server
Hardware: Fujitsu Siemens Esprimo Q920, Intel Core i5-4590T 64-bit CPU @ 4 × 2.8 GHz, 16 GB RAM, 8 TB SSD
YunoHost version: 11.2.10.3
I have access to my server: Through SSH | through the webadmin | direct access via keyboard / screen
Are you in a special context or did you perform some particular tweaking on your YunoHost instance?: yes
If yes, please explain: The YunoHost installation itself is unaltered, but I added task-xfce-desktop, which is not started by default. I only start it when needed (which is rarely the case) and stop it again immediately afterwards.
Description of my issue
I'd very much like to back up the whole of my YunoHost server regularly as well as automatically (in the background, without me needing to take action once it is set up), and first and foremost without having to be a hacker of sorts. I've seen some individual solutions here, but my programming knowledge is not sufficient for any of them.
Since YunoHost (cl)aims to make self-hosting easier, I would have expected something as essential as (continuous, background) backups to be available with a few mouse clicks.
Why is this so exotic? Am I really the only one who wants to know that his most precious and irreplaceable data is safe because there is a backup?
Please don't get me wrong, I don't mean this as a complaint. I'm really just genuinely surprised that this is not included as standard in a new YunoHost installation.
I have a cloud provider that I can connect to via WebDAV and, if need be, an external hard drive (which would be only half a backup, since potential burglars would steal it together with the YunoHost server). Is there any chance that non-geek-serviceable backups will become a thing anytime soon?
The only alternative for me right now seems to be trying not to forget to make backups to an external hard drive, which I then connect to another PC to upload the backup to my cloud provider.
Anyway, thanks for being patient with me and be safe!
Many spots in the documentation say you can/must install borg_server, but that's old documentation; you only have to install it if you want to host a backup server.
On my server I have 4 Borg instances: 2 for local backups and 2 for remote backups (one weekly, one daily), and I receive emails when an error occurs.
My suggestion: since you'll be doing this anyway, can you take notes and improve the documentation (the wiki AND the git README)?
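A daily + weekly split like the one described above could be sketched in a crontab. This is only an illustration: the two script names and the address are placeholders, and the YunoHost Borg apps normally schedule their own runs. Cron mails whatever output a job prints, so a wrapper script that only prints on failure gives the "email when an error occurs" behaviour:

```
# Hypothetical crontab fragment; script names and address are placeholders.
MAILTO=admin@example.org
30 3 * * * /usr/local/bin/borg-backup-daily.sh
0  4 * * 0 /usr/local/bin/borg-backup-weekly.sh
```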
Please explain what you mean by that
What kind of content would you like to back up, and in what format / with what ease of restoring?
For instance, a full YunoHost backup could be seen as a "whole backup", but it doesn't contain system files, random files you may have added outside of apps, and so on. On the other hand, it's super easy to restore compared to other tools.
Where do you want the backup to go?
Local storage, local external storage (an external disk, another server on the local network…), "in the cloud"?
From your message I understand you would need a solution that supports WebDAV?
It's not yet a first-class citizen in YunoHost, but there are some apps for that.
To my knowledge the main ones are Borg (on the local server / with another (YNH) server running Borg), Restic (similar, with more cloud options, which I'm not familiar with), and Archivist (which mainly uses the YunoHost backup system).
It's not half a backup. It's a real backup, an extra replica; it just doesn't cover you against an accident at your server's location, just as the cloud backup doesn't cover you against an accident on their side.
There is no perfect backup; it's just a matter of making enough replicas in different situations to mitigate the risks you want to protect yourself against.
Hello,
I faced the same problem as @Walsonde and ended up writing my own bash script, launched with cron, to automate the backup process the way I wanted.
For me, YunoHost backup/restore is great and does the job correctly. It just needs to be launched automatically.
Also, I wanted to keep only a limited number of past backups and delete the older ones.
Finally, I wanted to copy the backup out of YunoHost to an external volume mounted under YunoHost, and possibly also via curl to an external site. Only the curl part has not been tested; everything above is working well.
You can find the script here: https://github.com/Nidal-Tech/backupynh.
I'm not a bash expert, so I think the script can be improved. If you find issues or think you can make the bash smoother, don't hesitate to send me your enhancements.
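The core of the approach described above (run a backup, then prune the oldest archives) can be sketched in a few lines of bash. This is a hedged, minimal sketch, not @Nido's actual script: the archive path is YunoHost's default, while the `auto_` name prefix and the retention count of 3 are assumptions you would adjust.

```shell
#!/bin/bash
# Sketch of a cron-driven YunoHost backup with simple rotation.
# Assumptions: default archive path, "auto_" naming, keep 3 backups.
set -u

ARCHIVE_DIR="/home/yunohost.backup/archives"
KEEP=3   # how many past backups to keep (assumption)

# Delete all but the N newest files matching PATTERN in DIR.
# (Relies on ls -t ordering; assumes filenames without spaces.)
keep_latest() {
    local dir="$1" pattern="$2" keep="$3"
    ls -1t "$dir"/$pattern 2>/dev/null | tail -n +$((keep + 1)) | xargs -r rm -f
}

# Only run the actual backup where the yunohost CLI exists,
# so the sketch is safe to execute anywhere.
if command -v yunohost >/dev/null 2>&1; then
    yunohost backup create --name "auto_$(date +%Y%m%d_%H%M)"
    keep_latest "$ARCHIVE_DIR" 'auto_*.tar*' "$KEEP"
    keep_latest "$ARCHIVE_DIR" 'auto_*.info.json' "$KEEP"
fi
```

Dropping such a script into /etc/cron.daily (or a crontab entry) would run it automatically, which is exactly the missing piece described above.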
Sorry for not having explained better what I'm looking for. English is not my native language, but I guessed I'd reach the most people in English.
Let me explain by giving examples: I have a web hoster (Hetzner) that does automatic daily backups. Whenever I have a problem with the website or a database, I open the admin backend and choose which backup I want to restore. Let's say today I tried to update my Nextcloud on that webspace and something went wrong. Then I simply choose yesterday's backup, click restore, and a few minutes later my Nextcloud is working again, like it did yesterday.
I also have a cloud storage provider (pCloud) where I used to back up my Synology and QNAP NAS drives automatically via WebDAV. If an error occurred on the QNAP, for example, that required reverting to a previous point in the backup timeline, I could choose the backup from that date and restore it on the NAS drive with a few simple actions.
I'd like that for my YunoHost. I want to know that when the SSD in my server dies, I simply have to replace it and restore the latest backup (with as few mouse clicks or command-line commands as possible), and everything will be as if it never happened.
I've got a solution now that I don't like for several reasons: I shut down the server, boot a live system from a USB stick, and dd the SSD onto an external HDD. But that takes around three days for the 8 TB with my hardware. That means that every time I want a fresh backup, my YunoHost will be down for 3 days. It will also take 3 days to restore a backup that will not be very up-to-date, since I cannot use this solution very often, for obvious reasons.
If only there were at least a way to clone the internal SSD without having to shut down the server first. When I still used a Mac, there was Carbon Copy Cloner, which made an exact clone of the internal drive while you could continue to use the computer. That would be better than my current solution, but still not optimal.
An incremental background backup to a remote WebDAV host would be best for me. I guess I'll first have to find out how to make YunoHost mount my pCloud by default at boot. Maybe then I could use Borg or Archivist, or even the standard YunoHost backup.
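For the "mount pCloud at boot" part, one common approach (a sketch, not tested here) is the davfs2 package, which lets a WebDAV share be mounted through /etc/fstab. The mount point is an example, and you should verify the WebDAV endpoint for your pCloud account and region:

```
# /etc/fstab entry (requires the davfs2 package; endpoint and mount point are examples)
https://webdav.pcloud.com/ /mnt/pcloud davfs _netdev,rw 0 0
```

With davfs2, the login and password for the share go in /etc/davfs2/secrets (one line: the URL or mount point, then the username, then the password), so the mount can happen without interaction at boot.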
The gigantic difference between your server at home and one rented online is the gigantic organisation around the rented one.
If you want something similar, you can take your server, run Proxmox or any other tool that runs VMs, use that tool to take snapshots of your server, and run your YunoHost server inside a VM.
Just a note: YunoHost's backups are « real » backups (exports of the databases).
A snapshot of a VM is a picture of the disk+RAM (or just the disk), and depending on what is running, you have to pray to be able to restore it (databases do not like snapshots, and clusters of servers hate them).
And this is just a minimum: online, your snapshots are stored on another machine, so you need at least one more tool to maintain in order to make those backups.
But all of this is totally beside the point of YunoHost; YunoHost will just run inside your big architecture.
Hi Nido,
that is a perfect extension to the YunoHost backup!
I tried it on two YunoHosts and it worked perfectly, and was even fast.
(The only problem for me was finding the large amount of backup space you need if you want to keep several backups for everyday use.)
At the moment I keep the script in /usr/local/bin - if you have a better tip, I would be very happy to hear it.
The same goes for an example URL if I were to use a separate Raspi as a local file server - with SMB, NFS or whatever. I think 50 or 100 GB over the web would be too much for fair use.
Many thanks for your work
Bruno
Hello @brunogiscoat ,
Glad it helps, I guess you are my first user
For me /usr/local/bin is a good place, especially if you want to access the script in cron as root. I did the same.
Concerning the URL, I think you can permanently mount your file share on your Yunohost Linux and use it as a destination for the backup in the script.
I would also be interested in the outcome of any experiments you might do with copying the backups to an external location using curl.
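As a sketch of the permanent mount suggested above, for the Raspberry Pi file-server idea, an SMB share can also go in /etc/fstab (host name, share name and credentials file are placeholders; an NFS export would work the same way with filesystem type nfs):

```
# /etc/fstab entry for an SMB share on a LAN machine (requires cifs-utils)
//raspi.local/backups /mnt/raspi-backups cifs credentials=/etc/cifs-creds,_netdev 0 0
```

The credentials file holds `username=` and `password=` lines and should be readable by root only.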
Decide what is the most important data, e.g. /nextcloud/photos etc.
Pick a way to back that up somewhere, e.g. restic using rest-server to another computer, or rsync to another computer, or even rsync to a connected USB drive.
I don't think backing up "yunohost" itself is useful unless you have lots of users and want to preserve user accounts. I think just having your data is good enough, and if you have to reinstall, you just do that. Debian 12 is coming soon, so imagine your backups of YunoHost are Debian 11 and then Debian 12 rolls around - do you really want to restore your Debian 11 and then upgrade? It would probably be better to install Debian 12 and then a fresh YunoHost.
There isn't one "blessed" way in YunoHost because of all the different apps you can run.
My configuration is more or less similar to the one used by @Nido, except that my "external" backup is just a folder I configured in Nextcloud, and it gets backed up to my other devices (a desktop computer and my laptop). I just placed this script in /etc/cron.weekly to run weekly:
#!/bin/bash
defaultfolder="/home/yunohost.backup/archives"
#I use the multimedia folder to store my archives. Substitute the target folder for your own user.
targetfolder="/home/yunohost.multimedia/csolisr/archives/ynh_backup"
#Compression format. I generally use .gz as that's the format that Yunohost uses natively to compress.
cmp="gz"
#You can set this parameter to have multiple copies of each backup.
maxbackupnumber=1
#This iterates through all your installed YNH apps
for app in $(/usr/bin/yunohost app list | /usr/bin/grep "id:" | /usr/bin/sed "s/.*id\: //g")
do
appbackup="$app"_backup
#If old backup exists, find a free spot and rename it
if [[ ! -z $(/usr/bin/yunohost backup list | /usr/bin/grep $appbackup) ]]
then
spotfound=0
backupnumber=1
while [[ spotfound -eq 0 ]]
do
#Braces are needed here, otherwise bash reads "$appbackup_" as one (undefined) variable name
if [[ -f "$defaultfolder/${appbackup}_$backupnumber.tar" || -f "$defaultfolder/${appbackup}_$backupnumber.tar.$cmp" ]]
then
backupnumber=$((backupnumber+1))
else
spotfound=1
fi
done
if [[ $backupnumber -lt $maxbackupnumber ]]
then
appbackupnumber="$appbackup"_"$backupnumber"
mv "$defaultfolder/$appbackup.info.json" "$defaultfolder/$appbackupnumber.info.json" -v
mv "$defaultfolder/$appbackup.tar" "$defaultfolder/$appbackupnumber.tar" -v #|| \
mv "$defaultfolder/$appbackup.tar.$cmp" "$defaultfolder/$appbackupnumber.tar.$cmp" -v
#Do the same in the backup target
mv "$targetfolder/$appbackup.info.json" "$targetfolder/$appbackupnumber.info.json" -v
mv "$targetfolder/$appbackup.tar.$cmp" "$targetfolder/$appbackupnumber.tar.$cmp" -v
else
if [[ $backupnumber -eq 1 ]]
then
appbackupnumber="$appbackup"
else
appbackupnumber="$appbackup"_"$backupnumber"
fi
rm "$defaultfolder/$appbackupnumber.info.json"
rm "$defaultfolder/$appbackupnumber.tar" #|| \
rm "$defaultfolder/$appbackupnumber.tar.$cmp"
#Do the same in the backup target
rm "$targetfolder/$appbackupnumber.info.json"
rm "$targetfolder/$appbackupnumber.tar.$cmp"
/usr/bin/yunohost backup delete "$appbackupnumber"
fi
fi
#Create the backup
/usr/bin/yunohost backup create --apps $app --name "$appbackup"
#Compress and move the backup
cp "$defaultfolder/$appbackup.info.json" "$targetfolder/$appbackup.info.json" -v
cp "$defaultfolder/$appbackup.tar.$cmp" "$targetfolder/" -v || \
cp "$targetfolder/$appbackup.tar.$cmp" "$defaultfolder/" -v
done
#System = ynh_core_backup
if [[ ! -z $(/usr/bin/yunohost backup list | /usr/bin/grep "ynh_core_backup") ]]
then
spotfound=0
backupnumber=1
while [[ spotfound -eq 0 ]]
do
if [[ -f "$defaultfolder/ynh_core_backup_${backupnumber}.tar" || -f "$defaultfolder/ynh_core_backup_${backupnumber}.tar.$cmp" ]]
then
backupnumber=$((backupnumber+1))
else
spotfound=1
fi
done
if [[ $backupnumber -lt $maxbackupnumber ]]
then
appbackupnumber=ynh_core_backup_"$backupnumber"
mv "$defaultfolder/ynh_core_backup.info.json" "$defaultfolder/$appbackupnumber.info.json" -v
mv "$defaultfolder/ynh_core_backup.tar" "$defaultfolder/$appbackupnumber.tar" -v #|| \
mv "$defaultfolder/ynh_core_backup.tar.$cmp" "$defaultfolder/$appbackupnumber.tar.$cmp" -v
#Do the same in the backup target
mv "$targetfolder/ynh_core_backup.info.json" "$targetfolder/$appbackupnumber.info.json" -v
mv "$targetfolder/ynh_core_backup.tar.$cmp" "$targetfolder/$appbackupnumber.tar.$cmp" -v
else
if [[ $backupnumber -eq 1 ]]
then
appbackupnumber="ynh_core_backup"
else
appbackupnumber=ynh_core_backup_"$backupnumber"
fi
rm "$defaultfolder/$appbackupnumber.info.json"
rm "$defaultfolder/$appbackupnumber.tar" #|| \
rm "$defaultfolder/$appbackupnumber.tar.$cmp"
#Do the same in the backup target
rm "$targetfolder/$appbackupnumber.info.json"
rm "$targetfolder/$appbackupnumber.tar.$cmp"
/usr/bin/yunohost backup delete "$appbackupnumber"
fi
fi
/usr/bin/yunohost backup create --system conf_ldap conf_ynh_settings conf_ynh_certs data_mail data_xmpp conf_manually_modified_files --name "ynh_core_backup"
#Compress and move the backup
#Use the core backup's name here; $appbackup still holds the last app from the loop above
cp "$defaultfolder/ynh_core_backup.info.json" "$targetfolder/ynh_core_backup.info.json" -v
cp "$defaultfolder/ynh_core_backup.tar.$cmp" "$targetfolder/" -v || \
cp "$targetfolder/ynh_core_backup.tar.$cmp" "$defaultfolder/" -v
#Fix permissions
rsync -rtvz $defaultfolder/ $targetfolder
find $targetfolder/* -type f -name '*' -not -user nextcloud -exec chown -R nextcloud:nextcloud {} \;
find $targetfolder/* -type f -name '*' -not -perm 777 -exec chmod 777 {} \;
#Substitute your folder here as well
sudo -u nextcloud php8.2 --define apc.enable_cli=1 /var/www/nextcloud/occ files\:scan --all --path="/csolisr/files/Multimedia/archives"
Hello, @csolisr,
You seem to be a pro at bash scripting :-). However, I have two questions:
1- Why did you write a script to back up each app one by one? Isn't the YunoHost backup supposed to do the same in one operation (except for rotating the old backups)?
2- How easy is it to restore your backup in case of a full crash?
As for why I back things up app by app: by default, YunoHost would roll every single app into one massive compressed file, and to restore a single app it would need to decompress everything first, which is not efficient for my use case.
My backup is designed to restore individual apps first and full systems second. If I ever needed to restore a full system, I'd just restore the core, then each individual app I needed.
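That restore order can be sketched with the standard `yunohost backup restore` command. The archive names follow the ones the script above creates; the `nextcloud` app is only an example, and the `run` helper just prints the command when its binary is not installed, so the sketch is safe to try anywhere:

```shell
#!/bin/sh
# Hedged sketch of a "core first, then apps" restore sequence.
# Archive names match the script above; the app name is an example.

# Execute the command if its binary exists, otherwise print what
# would be run (keeps the sketch harmless on non-YunoHost machines).
run() {
    if command -v "$1" >/dev/null 2>&1; then
        "$@"
    else
        echo "would run: $*"
    fi
}

# 1. Restore the system configuration first...
run yunohost backup restore ynh_core_backup
# 2. ...then each app you need, one archive at a time.
run yunohost backup restore nextcloud_backup --apps nextcloud
```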
Those are some shenanigans I was dealing with when syncing with my particular file system. If I left it at 660, some things would break on my dual-boot partition (which I use to store files from both Windows and Linux), but feel free to change it to 660 yourself for extra safety.