Added extra disk space is not used

Hello!
I need some help.

Hardware: VPS bought online
YunoHost version: 3.8.5.7
I have access to my server: through SSH
Are you in a special context or did you perform some particular tweaking on your YunoHost instance?: no

Description of my issue

I use YunoHost version 3.8.5.7, which is installed on a VPS.
I installed YunoHost starting with a disk space of 10 GB, knowing that I could easily upgrade when I needed more. Well, a little while ago I upgraded it to 40 GB at the service portal of my VPS host, and thought it went okay; everything was working. Until I wanted to make a backup but could not, because there was not enough space. I got the message:

```
Not enough free space on '/home/yunohost.backup/archives'
```

So I looked into how that could be, and saw that on the Diagnosis page, under System resources, there is the following warning:

Storage / (on device /dev/xvda1) has only 1.8 GiB (18%) space remaining (out of 10.0 GiB). Be careful.

Nextcloud also says that there is only 10 GB available, instead of the upgraded 40 GB.

I looked through the topics on the forum and found one that looked somewhat similar (How to add more disk space to yunohost server?), but that one went in a different direction. However, I have already looked up the information asked for in its first questions:

When I run the command `lsblk` I get:

```
xvda1 202:1    0  40G  0 disk /
```

So the added 40 GB of disk space is there, but when I run the next command given in the same post, `df -h`, I get this:
```
Filesystem      Size  Used Avail Use% Mounted on
udev            976M     0  976M   0% /dev
tmpfs           200M   22M  179M  11% /run
/dev/xvda1       10G  8.7G  1.4G  87% /
modules         100M   54M   47M  54% /lib/modules
tmpfs           999M   80K  999M   1% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           999M     0  999M   0% /sys/fs/cgroup
```
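In other words, the block device itself and the filesystem on it report different sizes. A quick way to see both numbers side by side (just an illustration, using the device name from the output above):

```
# Size of the block device, in bytes
lsblk -b -o NAME,SIZE,FSTYPE,MOUNTPOINT /dev/xvda1
# Size of the filesystem mounted on /, in bytes
df -B1 /
```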

What do I have to do so that YunoHost uses the extra disk space I have upgraded to?

Many thanks!

Naively, I think it's about resizing the root partition, which is doable but not trivial.

First, let’s make sure that you indeed have 40 GB, with `fdisk -l`

then I get:

```
-bash: fdisk: command not found
```

Ugh … well let’s install it then, with `apt install fdisk`

Oh dear…

```
E: Unable to lock the administration directory (/var/lib/dpkg/), are you root?
```

With sudo: `sudo apt install fdisk`

thanks

```
Building dependency tree       
Reading state information... Done
E: Unable to locate package fdisk
```

My bad, it’s util-linux …

```
sudo apt install util-linux
Building dependency tree       
Reading state information... Done
util-linux is already the newest version (2.29.2-1+deb9u1).
The following packages were automatically installed and are no longer required:
  netfilter-persistent sgml-base xml-core
Use 'sudo apt autoremove' to remove them.
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
```

Woookay, so in fact fdisk is probably installed, but we need to run it as root (because it’s stored in /sbin and … ugh, whatever)
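A quick way to confirm that, just as an aside (the binary is there, it simply isn’t on a regular user’s PATH):

```
# fdisk is installed, but /sbin is usually not on a regular user's PATH
command -v fdisk || echo "fdisk not found in PATH"
ls -l /sbin/fdisk
```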

So:

```
sudo fdisk -l
```

OK, I got a long list:

```
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/ram1: 64 MiB, 67108864 bytes, 131072 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/ram2: 64 MiB, 67108864 bytes, 131072 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/ram3: 64 MiB, 67108864 bytes, 131072 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/ram4: 64 MiB, 67108864 bytes, 131072 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/ram5: 64 MiB, 67108864 bytes, 131072 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/ram6: 64 MiB, 67108864 bytes, 131072 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/ram7: 64 MiB, 67108864 bytes, 131072 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/ram8: 64 MiB, 67108864 bytes, 131072 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/ram9: 64 MiB, 67108864 bytes, 131072 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/ram10: 64 MiB, 67108864 bytes, 131072 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/ram11: 64 MiB, 67108864 bytes, 131072 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/ram12: 64 MiB, 67108864 bytes, 131072 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/ram13: 64 MiB, 67108864 bytes, 131072 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/ram14: 64 MiB, 67108864 bytes, 131072 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/ram15: 64 MiB, 67108864 bytes, 131072 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/xvda1: 40 GiB, 42949672960 bytes, 83886080 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
```
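(Side note, for future reference: `fdisk -l` can also be pointed at a single device, which keeps all the /dev/ram entries out of the output:)

```
# List only the disk we actually care about
sudo fdisk -l /dev/xvda1
```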

Annnnnd I just realized that you already ran lsblk in your initial post, which showed the 40 GB were there… Ugh, sorry for all the unnecessary mess ~_~

Anyway, I’m checking how to resize the rootfs, and it looks surprisingly simple, simpler than I expected.

Though I recommend running the following directly as root, so let’s first become root by running `sudo su`

Then run:

```
ROOT_DEV=$(findmnt / -o source -n)
resize2fs $ROOT_DEV
```

And hopefully that should do the trick (`df -h` should then show the root partition as 40 GB), or maybe a reboot is needed.
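For reference, here is the same thing as a tiny copy-pasteable sketch that also prints what it found first (note that resize2fs only applies to ext2/3/4 filesystems, so the type is worth seeing before running it):

```
# Grow the root filesystem in place (sketch; resize2fs only works on ext2/3/4)
ROOT_DEV=$(findmnt / -o source -n)   # block device backing /
FS_TYPE=$(findmnt / -o fstype -n)    # filesystem type on /
echo "Root is on $ROOT_DEV ($FS_TYPE)"
resize2fs "$ROOT_DEV"
```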

no worries…

I did that and got:

```
resize2fs: Bad magic number in super-block while trying to open /dev/xvda1
Couldn't find valid filesystem superblock.
```

Hm, okay, let’s check the type of that partition with:

```
lsblk -f | grep "/$"
```
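(An equivalent way to get just the type, using findmnt as above, just as an aside:)

```
# Print only the filesystem type of /
findmnt / -o fstype -n
```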

This is the answer:

```
xvda1 xfs 4b26769b-4e56-4d69-b42c-f44335854b27 /
```

Some people mention a different command for XFS partitions … so let’s try:

```
xfs_growfs /
```

Something went wrong…

```
bash: xfs_growfs: command not found
```

Let’s install the package that supposedly contains the command, then: `sudo apt install xfsprogs`
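For completeness, the two steps together (xfs_growfs ships with the xfsprogs package and takes the mount point of the mounted XFS filesystem as its argument):

```
# Install the XFS userland tools, then grow the mounted filesystem
# to fill the underlying device
sudo apt install xfsprogs
sudo xfs_growfs /
```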


hihi

```
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1 spinodes=0 rmapbt=0
         =                       reflink=0
data     =                       bsize=4096   blocks=2621440, imaxpct=25
         =                       sunit=1024   swidth=1024 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 2621440 to 10485760
```

So then maybe `df -h` shows the full amount of disk?
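If it worked, both of these should now report the full 40 GB (and the Diagnosis warning should clear on its next run):

```
# Verify that the root filesystem now spans the whole device
df -h /              # size of the filesystem mounted on /
lsblk /dev/xvda1     # size of the underlying block device, for comparison
```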