I just wanted to document my steps so that others can follow. I’m writing with the intent of guiding people who are right at my skill level - dangerous with a little bit of knowledge.
Digital Ocean
For me, I went with Digital Ocean. Amazon and other providers have their own services for working with S3-compatible object storage. You’ll have to review their HOW-TOs, but I imagine they offer similar features.
In DO’s case, consider installing s3fs and s3cmd. s3fs lets you mount a bucket (or Space, in DO lingo) onto a directory, so you can move files into the Space as needed. You’ll also configure the directory’s user and group permissions in favour of GoToSocial so GTS can access the files properly.
My scenario is I set up GTS and ran it for a few days before realizing that I would need additional space. So I’ll
[1] mount my space to a directory of my choice,
[2] backup my original Storage Base Path contents into a folder in my space with rsync,
[3] rename the original Storage Base Path to something else like GoToSocial-backup,
[4] mount my space again, but this time with the same Storage Base Path
NOTE: This is what I did, but know that once GTS is configured to use S3, it writes the contents of the Storage Base Path directly into the bucket. You could just copy the contents of the Storage Base Path into the S3 bucket, configure GTS to use S3, and call it a day. As I had never used an S3 before, I back things up and keep stuff around in case I break something.
As you’ll see, I ended up making a directory within my bucket and copied over my original files into it. In hindsight, you probably don’t have to do that.
When you go into your DO web admin panel, you’ll be given the option to set up Spaces Object Storage. You’ll be asked to associate the Space bucket with a datacenter; pick the same one your GTS droplet is in.
I did not enable a CDN at this time.
You will also create a space bucket name, preferably short and nondescript.
You will also select the project.
Once you create your bucket, there will be an option to generate an Access Key. As a learner, I chose a full-access key, but select the type that makes sense for your use.
Copy and set aside your Origin Endpoint, Access Key ID, Secret Key, and S3 Endpoint.
SPECIAL NOTE: You will only be shown your Secret Key once. If you lose it, you’ll have to generate a new one.
Now refer to this link to install and configure s3fs:
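Part of that configuration is giving s3fs a credentials file. A minimal sketch, using placeholder keys (substitute the Access Key ID and Secret Key you set aside earlier):

```shell
# s3fs reads credentials from ~/.passwd-s3fs in "ACCESS_KEY:SECRET_KEY" form.
# These are placeholders, not real keys.
echo "YOUR_ACCESS_KEY_ID:YOUR_SECRET_KEY" > ~/.passwd-s3fs

# s3fs refuses a credentials file that other users can read:
chmod 600 ~/.passwd-s3fs
```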
If you need to figure out the user ID and group ID for GoToSocial, simply
cat /etc/passwd
Go ahead and set aside the IDs for later.
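If you’d rather not scan the whole file by eye, a small helper can pull the numbers out of /etc/passwd. This assumes the service user is named “gotosocial” — substitute whatever name actually appears in your /etc/passwd:

```shell
# Print "UID GID" for a given user by reading /etc/passwd directly.
uid_gid() { awk -F: -v u="$1" '$1 == u { print $3, $4 }' /etc/passwd; }

# Assumption: the GoToSocial service user is called "gotosocial";
# adjust to match the entry you found above.
uid_gid gotosocial
```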
NOTE: I don’t think I needed s3cmd to actually complete my task, but it’s a good learning opportunity and you can explore this command later with the link below:
[1] mount my space to a directory of my choice:
You can make a mount point anywhere; the purpose here is to back up the existing GoToSocial Storage Base Path. If you’re starting from scratch and have already decided to set up an S3 upfront, you don’t need to worry about backing up an existing directory.
For example, we’ll call this folder “gts”, and you can change the [region] accordingly. You’ll also change the uid and gid values to match your GoToSocial user (don’t just punch in 666):
s3fs bucketname:/path/to/folder/ gts/ -o url=https://[region].digitaloceanspaces.com -o use_cache=/tmp -o allow_other -o use_path_request_style -o uid=666 -o gid=666
You can test this out by making a test folder in your new mounted directory. Then you’ll load up the space on your web panel in Digital Ocean to see if everything matches up.
Congrats, you’ve just used s3fs.
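If you want more than an eyeball check, the mountpoint utility (from util-linux) can confirm the directory really is an active mount. “gts” here is the example mount point from the command above:

```shell
# Report whether a directory is an active mount point.
check_mounted() { mountpoint -q "$1" && echo "mounted" || echo "not mounted"; }

# Prints "mounted" once the s3fs mount above has succeeded.
check_mounted gts
```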
[2] backup my original Storage Base Path contents into a folder in my space with rsync:
For copy and paste jobs that I care about, I use rsync.
apt-get update
apt-get install rsync
rsync -avh /home/yunohost.app/gotosocial /path/to/mounted/space
Given a small VPS, rsync can take hours depending on how large GTS has grown. For reference, I copied about 15 GB worth of stuff, which took 4-5 hours.
NOTE: for rsync, a source path without a trailing slash (“/home/yunohost.app/gotosocial”) copies the folder itself, along with its contents, into the target destination.
Alternatively, ending the source path with a trailing slash (“/home/yunohost.app/gotosocial/”) tells rsync to copy just the contents of the folder into the target destination.
So figure out what you want to do. In my case I did put the contents into a new folder within the bucket.
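The trailing-slash behaviour is easy to confirm with throwaway directories before pointing rsync at anything important (these paths are just scratch examples):

```shell
# Set up a tiny source folder and two empty destinations.
mkdir -p /tmp/src-demo /tmp/dest-a /tmp/dest-b
touch /tmp/src-demo/file.txt

# No trailing slash: the folder itself is copied into the destination,
# ending up at /tmp/dest-a/src-demo/file.txt
rsync -avh /tmp/src-demo /tmp/dest-a

# Trailing slash: only the folder's contents are copied,
# ending up at /tmp/dest-b/file.txt
rsync -avh /tmp/src-demo/ /tmp/dest-b
```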
[3] rename the original Storage Base Path to something else like GoToSocial-backup:
Change directory to the parent of the Storage Base Path, then rename the folder (don’t do this from inside the directory you’re renaming):
cd /home/yunohost.app
mv gotosocial gotosocial-back
You’ll end up with your original Storage Base Path renamed as “gotosocial-back”.
[4] mount my space again, but this time with the same Storage Base Path:
I created the folder “gotosocial” again, and this time mounted the Space onto it, pointing the path at the files I had copied into my bucket earlier with rsync.
s3fs bucketname:/path/to/folder/ gotosocial/ -o url=https://[region].digitaloceanspaces.com -o use_cache=/tmp -o allow_other -o use_path_request_style -o uid=666 -o gid=666
Now, you can head over to the Yunohost Config Panel for GTS, and simply enter your DO space details using the information you set aside earlier when you made your space. Once you save and let Yunohost start up the service, GTS should directly access the bucket you made to start loading files in.
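One caveat I’d flag: an s3fs mount made by hand doesn’t survive a reboot. s3fs supports mounting via /etc/fstab; a sketch of an entry, where bucketname, the folder path, [region], and the uid/gid are all placeholders from the earlier examples, and which assumes the credentials file from the s3fs setup is in place:

```shell
# Add to /etc/fstab as a single line to remount the Space at boot:
# bucketname:/path/to/folder /home/yunohost.app/gotosocial fuse.s3fs _netdev,allow_other,use_path_request_style,url=https://[region].digitaloceanspaces.com,uid=666,gid=666 0 0
```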