[solved!] Apt reinstall yunohost --> savage post-install --> now what?

Hi all,

I broke my Yunohost during the migration.

Apt got stuck, and every solution I found involved removing yunohost and yunohost-admin. I surrendered.

Debian got updated, and apt install yunohost followed. I had run apt reinstall yunohost before on a working Yunohost and found no adverse effects. Wrong, I guess: this time the earlier uninstall had removed some of the configuration.

Now I'm stuck just before the post-install. Yunohost knows I messed up:

# yunohost backup restore 20210926-194650
Info: Preparing archive for restoration...
Warning: unable to retrieve string to translate with key 'The following critical error happened during restoration: It looks like you're trying to re-postinstall a system that was already working previously ... If you recently had some bug or issues with your installation, please first discuss with the team on how to fix the situation instead of savagely re-running the postinstall ...' for default locale 'locales/en.json' file (don't panic this is just a warning)
Error: The following critical error happened during restoration: It looks like you're trying to re-postinstall a system that was already working previously ... If you recently had some bug or issues with your installation, please first discuss with the team on how to fix the situation instead of savagely re-running the postinstall ...

I'm not quite sure (read: no idea and out of inspiration) how to continue from here!

Hmpf, then let's try: touch /etc/yunohost/installed …


I'm all ears!

# touch /etc/yunohost/installed
# yunohost tools update 
# yunohost tools upgrade system
Info: Upgrading packages...
Info: Upgrading system packages
... 
Info: + Processing triggers for man-db (2.9.4-2) ...
Info: + Processing triggers for libc-bin (2.31-13+deb11u4) ...
Success! System upgraded

Weeell… That seems quite promising!

Things look better now. Not all is running yet; some apps give a gateway error, some can't upgrade yet. That gave me the idea to re-run the migration:

yunohost tools migrations run --accept-disclaimer
Info: Running migration 0021_migrate_to_bullseye...
Error: Migration 0021_migrate_to_bullseye did not complete, aborting. Error: The current Debian distribution is not Buster! If you already ran the Buster->Bullseye migration, then this error is symptomatic of the fact that the migration procedure was not 100% succesful (otherwise YunoHost would have flagged it as completed). It is recommended to investigate what happened with the support team, who will need the **full** log of the `migration, which can be found in Tools > Logs in the webadmin.
Info: The operation 'Run migrations' could not be completed. Please share the full log of this operation using the command 'yunohost log share 20220925-121827-tools_migrations_migrate_forward' to get help
Error: Run these migrations: '0021_migrate_to_bullseye', before migration 0022_php73_to_php74_pools.
Error: Run these migrations: '0021_migrate_to_bullseye', before migration 0023_postgresql_11_to_13.
Error: Run these migrations: '0021_migrate_to_bullseye', before migration 0024_rebuild_python_venv.
# yunohost log share 20220925-121827-tools_migrations_migrate_forward
Info: This log is now available via https://paste.yunohost.org/raw/vehubesode

So, I'll have a look at the log

OK, the log says that the migration didn't complete 100%, and that I should have a look at the logs.
The web GUI gives an error about the Yunohost API not responding: bad gateway.

Looking in /var/log/yunohost/categories/, there are a number of logs relating to the migration, a couple of days old (from when I ran it). I had run the migration after updating the individual apps as far as they would go without needing Yunohost 11 to continue:


 72K -rw-r--r-- 1 root root  69K Sep 19 06:34 operation/20220919-053409-backup_create.log
4.0K -rw-r--r-- 1 root root  393 Sep 19 06:34 operation/20220919-053409-backup_create.yml
 64K -rw-r--r-- 1 root root  58K Sep 19 06:42 operation/20220919-053418-app_remove-synapse.log
4.0K -rw-r--r-- 1 root root  674 Sep 19 06:42 operation/20220919-053418-app_remove-synapse.yml
4.0K -rw-r--r-- 1 root root  222 Sep 19 06:42 operation/20220919-054252-permission_delete-synapse.log
4.0K -rw-r--r-- 1 root root  329 Sep 19 06:42 operation/20220919-054252-permission_delete-synapse.yml
180K -rw-rw-rw- 1 root root 179K Sep 19 06:50 operation/20220919-054255-backup_restore_app-synapse.log
4.0K -rw-rw-rw- 1 root root  881 Sep 19 06:50 operation/20220919-054255-backup_restore_app-synapse.yml
4.0K -rw-rw-rw- 1 root root 1.1K Sep 19 06:42 operation/20220919-054255-permission_create-synapse.log
4.0K -rw-rw-rw- 1 root root  497 Sep 19 06:42 operation/20220919-054255-permission_create-synapse.yml
4.0K -rw-rw-rw- 1 root root   67 Sep 19 06:42 operation/20220919-054255-permission_url-synapse.log
4.0K -rw-rw-rw- 1 root root  390 Sep 19 06:42 operation/20220919-054255-permission_url-synapse.yml
4.0K -rw-rw-rw- 1 root root 1.1K Sep 19 06:42 operation/20220919-054256-permission_create-synapse.log
4.0K -rw-rw-rw- 1 root root  489 Sep 19 06:42 operation/20220919-054256-permission_create-synapse.yml
4.0K -rw-rw-rw- 1 root root   73 Sep 19 06:42 operation/20220919-054256-permission_url-synapse.log
4.0K -rw-rw-rw- 1 root root  376 Sep 19 06:42 operation/20220919-054256-permission_url-synapse.yml
4.0K -rw-rw-rw- 1 root root 1.2K Sep 19 06:42 operation/20220919-054258-permission_create-synapse.log
4.0K -rw-rw-rw- 1 root root  509 Sep 19 06:42 operation/20220919-054258-permission_create-synapse.yml
4.0K -rw-rw-rw- 1 root root   82 Sep 19 06:42 operation/20220919-054258-permission_url-synapse.log
4.0K -rw-rw-rw- 1 root root  389 Sep 19 06:42 operation/20220919-054258-permission_url-synapse.yml
8.0K -rw-r--r-- 1 root root 6.5K Sep 19 22:57 operation/20220919-215703-tools_migrations_migrate_forward.log
4.0K -rw-r--r-- 1 root root  308 Sep 19 22:57 operation/20220919-215703-tools_migrations_migrate_forward.yml
4.0K -rw-r--r-- 1 root root 3.9K Sep 19 23:10 operation/20220919-220959-tools_migrations_migrate_forward.log
4.0K -rw-r--r-- 1 root root  308 Sep 19 23:10 operation/20220919-220959-tools_migrations_migrate_forward.yml
4.0K -rw-r--r-- 1 root root 3.9K Sep 19 23:17 operation/20220919-221715-tools_migrations_migrate_forward.log
4.0K -rw-r--r-- 1 root root  308 Sep 19 23:17 operation/20220919-221715-tools_migrations_migrate_forward.yml
4.0K -rw-r--r-- 1 root root 3.9K Sep 19 23:18 operation/20220919-221841-tools_migrations_migrate_forward.log
4.0K -rw-r--r-- 1 root root  308 Sep 19 23:18 operation/20220919-221841-tools_migrations_migrate_forward.yml
4.0K -rw-r--r-- 1 root root 3.9K Sep 19 23:25 operation/20220919-222539-tools_migrations_migrate_forward.log
4.0K -rw-r--r-- 1 root root  308 Sep 19 23:25 operation/20220919-222539-tools_migrations_migrate_forward.yml
 36K -rw-r--r-- 1 root root  34K Sep 20 00:00 operation/20220919-230008-backup_create.log

Trying to restart yunohost-api gives me an 'is masked' reply:

# service yunohost-api status
ā— yunohost-api.service
     Loaded: masked (Reason: Unit yunohost-api.service is masked.)
     Active: inactive (dead)
root@sanyi:/var/log/yunohost/categories# service yunohost-api start
Failed to start yunohost-api.service: Unit yunohost-api.service is masked.

I'm not familiar enough with systemd to decide whether I should unmask the service. I can imagine it is masked as part of Yunohost's migration procedure. Without changing things now, I'll try restarting the server before trying to look into each of the log files on the CLI.
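
(A side note on what "masked" means, for anyone hitting the same wall: systemd masks a unit by symlinking it to /dev/null under /etc/systemd/system, so two read-only checks tell the whole story. A sketch, unit name as in this case:)

systemctl is-enabled yunohost-api                 # prints "masked" for a masked unit
ls -l /etc/systemd/system/yunohost-api.service    # symlink points to /dev/null while masked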


Edit:

Reboot done, Yunohost API is still down.

Before looking into the logs, I'll list the pending migrations. apt update && apt upgrade tells me all packages are up to date, as does apt-get dist-upgrade.

# yunohost tools migrations list
migrations: 
  0: 
    description: Upgrade the system to Debian Bullseye and YunoHost 11.x
    disclaimer: None
    id: 0021_migrate_to_bullseye
    mode: manual
    name: migrate_to_bullseye
    number: 21
    state: pending
  1: 
    description: Migrate php7.3-fpm 'pool' conf files to php7.4
    disclaimer: None
    id: 0022_php73_to_php74_pools
    mode: auto
    name: php73_to_php74_pools
    number: 22
    state: pending
  2: 
    description: Migrate databases from PostgreSQL 11 to 13
    disclaimer: None
    id: 0023_postgresql_11_to_13
    mode: auto
    name: postgresql_11_to_13
    number: 23
    state: pending
  3: 
    description: Repair Python app after bullseye migration
    disclaimer: Following the upgrade to Debian Bullseye, some Python applications needs to be partially rebuilt to get converted to the new Python version shipped in Debian (in technical terms: what's called the 'virtualenv' needs to be recreated). In the meantime, those Python applications may not work. YunoHost can attempt to rebuild the virtualenv for some of those, as detailed below. For other apps, or if the rebuild attempt fails, you will need to manually force an upgrade for those apps.

Rebuilding the virtualenv will be attempted for the following apps (NB: the operation may take some time!): 
    - borg-env
    id: 0024_rebuild_python_venv
    mode: manual
    name: rebuild_python_venv
    number: 24
    state: pending

Because of the manual upgrade of the system, the migrations are a bit out of sync. PHP and Python are already upgraded, but the migrations for configuration and the virtualenv have not run. PostgreSQL is still at 11.

# php --version
PHP 7.4.30 (cli) (built: Jul  7 2022 15:51:43) ( NTS )
# php-fpm7.4 --version
PHP 7.4.30 (fpm-fcgi) (built: Jul  7 2022 15:51:43)
# python --version
Python 3.9.2
# psql --version
psql (PostgreSQL) 11.17 (Debian 11.17-0+deb10u1)
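
(Side note: Debian's postgresql-common package also ships pg_lsclusters, which lists the clusters that actually exist on disk; a quick way to confirm there is only an 11/main cluster and no 13/main yet.)

pg_lsclusters    # columns: Ver, Cluster, Port, Status, Owner, Data directory, Log file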

All previous migrations_migrate_forward logs ran into the same problem while upgrading MariaDB. This is the most recent entry from before I nuked the system: log

Helpful hint from @tituspijean :stuck_out_tongue:


I thought of checking 0021_migrate_to_bullseye to see whether following its steps manually would get me to the next step. I can't find the script though.

In /etc/yunohost/migrations.yaml I can find migrations 0015-0020 marked as skipped. I think I might be able to continue with 0022_php73_to_php74_pools by adding 0021 to the YAML, but some configuration might then not get migrated.

I did find 0021_migrate_to_bullseye.py.

Reading through it, I realized:

  • the first time I ran the migration, it might have done more than just exit on MariaDB; I looked up the log and uploaded it. MariaDB really is one of the first packages to be upgraded, right after patching sources.list and handling some conflicting packages.
  • During the manual dist-upgrade, I didn't accept the package maintainer's new config for MariaDB but kept the old config (it seemed prudent at the time). I could reinstall MariaDB and accept the packager's configuration (see the sketch after this list).
  • I could skip migration 0021 by adding it to migrations.yaml, as I wrote above, or
  • I could run the migration with --force-rerun ("only if you know what you are doing" suggests I shouldn't :stuck_out_tongue: )
  • around line 230 I think I recognize that something is done to the dependencies of the ynh packages, to prevent them from getting removed while upgrading build-essential → this is where my manual upgrade broke the system
  • During the first few attempts at the migration, I manually upgraded groups of packages, hoping to hit the package that would let MariaDB upgrade. All fine, but now many packages are marked as 'manually installed' in dpkg (see the sketch after this list).
  • The workaround for dnsmasq: there is an old /etc/init.d/dnsmasq but no /etc/init.d/dnsmasq.dpkg-dist, and currently resolvectl status returns an error (Failed to get global data: Unit dbus-org.freedesktop.resolve1.service not found.). Reinstall?
  • at some point (around line 400) a heuristic tries to guess the previous migration log, which should be > 10 kB. All of mine are 8 kB or less, another fact supporting 'only the start of the migration ran'.
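
Two sketches related to the points above (the options and package names are my guesses at the usual recipes, not something taken from the migration script; review before running):

# Recover apt's bookkeeping after the package-by-package manual upgrade:
apt-mark showmanual              # review which packages are currently marked as manual
apt-mark auto <package> ...      # re-mark the ones that should be auto-installed

# Reinstall a package and take the maintainer's configuration files instead of
# the locally kept ones (the MariaDB idea above):
apt install --reinstall -o Dpkg::Options::="--force-confask,confnew,confmiss" mariadb-server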

All in all… it would seem that running the migration with --force-rerun would not break the already upgraded system. What do you think?


The remainder of this thread shall be named

how to break your Yunohost


Ok ok… Things didn't get more broken than they were… yet.

Before continuing from here, I made a full disk image of the server.

I didn't know how to do that on the VPS infrastructure (SolusVM) I'm renting. There is a helpful howto over at lowendspirit.com.
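
I won't copy the howto here, but the general idea of imaging a VPS disk from a rescue environment over SSH looks roughly like this (a sketch only; the device name /dev/vda, the backup host and the rescue-mode assumption are mine, not from the howto):

# From the provider's rescue system: stream a compressed image of the whole disk
dd if=/dev/vda bs=4M status=progress | gzip -c | ssh user@backup-host 'cat > vps-image.gz'

# Restoring is the reverse (again from the rescue system):
ssh user@backup-host 'cat vps-image.gz' | gunzip -c | dd of=/dev/vda bs=4M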

Migration of Debian from Buster to Bullseye has completed (apt-get dist-upgrade), but Yunohost got removed. Upon reinstall I skipped the post-install per Aleks' instructions.
Yunohost partly works: yunohost-api does not run, and there is no entry in /etc/init.d for either yunohost or yunohost-api.

Yunohost recognizes that the migration to Bullseye did not complete satisfactorily, and --force-rerun does not have the effect I intended:

# yunohost tools migrations run  0021_migrate_to_bullseye
Info: Running migration 0021_migrate_to_bullseye...
Error: Migration 0021_migrate_to_bullseye did not complete, aborting. Error: The current Debian distribution is not Buster! If you already ran the Buster->Bullseye migration, then this error is symptomatic of the fact that the migration procedure was not 100% succesful (otherwise YunoHost would have flagged it as completed). It is recommended to investigate what happened with the support team, who will need the **full** log of the `migration, which can be found in Tools > Logs in the webadmin.
Info: The operation 'Run migrations' could not be completed. Please share the full log of this operation using the command 'yunohost log share 20220927-214414-tools_migrations_migrate_forward' to get help
# yunohost tools migrations run 0021_migrate_to_bullseye --force-rerun
Error: Those migrations are still pending, so cannot be run again: 0021_migrate_to_bullseye

I'll try skipping this one, and continuing from there:

# yunohost tools migrations run 0021_migrate_to_bullseye --skip
Warning: Skipping migration 0021_migrate_to_bullseye...
# yunohost tools migrations run --accept-disclaimer
Info: Running migration 0024_rebuild_python_venv...
Info: Now attempting to rebuild the Python virtualenv for `borg-env`
Success! Migration 0024_rebuild_python_venv completed

That was fast!

It doesn't repair yunohost-api. I hope a reinstall (and perhaps a subsequent reboot) will:

# apt reinstall yunohost yunohost-admin 
# service yuno (tab tab tab...)
yunomdns    yunoprompt

These are new for me. Does yunohost-api still exist?

# yuno (tab tab ...)
yunohost    yunohost-api
# yunohost-api 

Yeah, that does run; the web GUI is also running now.

Not all is happy yet.

Up next: have a look at how the start script works on another Yunohost, and copy it over.
Of course, Yunohost to the rescue: the next section in the diagnosis hands me the exact commands to regenerate the scripts :slight_smile:

On reboot, no direct luck though. More tomorrow!


It is tomorrow! On another (almost similar) Yunohost the migration ran without any problem. All services are active; a single warning (relating to DNS) in diagnosis.

That particular Yunohost does not have any yunohost* entry in /etc/init.d either, but it does have some files in /etc/systemd/system/

Over on this Yunohost, yunohost-api is masked: /etc/systemd/system/yunohost-api.service is a symlink to /dev/null. systemctl unmask yunohost-api removes the symlink, but does not create anything to enable yunohost-api. I can now enter the web GUI again after starting yunohost-api manually via SSH.
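
For anyone retracing this, the unmask step itself is plain systemd, nothing Yunohost-specific (a sketch):

systemctl unmask yunohost-api               # removes the /dev/null symlink
systemctl daemon-reload
systemctl list-unit-files | grep yunohost   # shows whether a real unit file exists at all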

For yunohost-firewall I created the .service file and copied the contents of /etc/systemd/system/yunohost-firewall.service from the other Yunohost into the file just created on this host.

# vi /etc/systemd/system/yunohost-firewall.service
"yunohost-firewall.service" [New File] 15 lines, 295 bytes written
# systemctl enable yunohost-firewall
Removed /etc/systemd/system/multi-user.target.wants/yunohost-firewall.service.
Created symlink /etc/systemd/system/multi-user.target.wants/yunohost-firewall.service → /etc/systemd/system/yunohost-firewall.service.
# systemctl start yunohost-firewall

… with some success:

After doing the same for yunohost-api.service, killing the manual yunohost-api process and starting the service, that one is in green as well.

Maybe there is a line of difference after copy/paste; the system configuration diagnosis tells me those two service files diverge from the default. I forced a rewrite of the config files, and it seems OK now.
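
In hindsight, copying the unit files over SSH instead of pasting them into vi would avoid that kind of copy/paste drift; a sketch (hostname is a placeholder, assumes root SSH between the two machines):

# On the broken host: pull the unit files straight from the healthy one
scp root@healthy-host:/etc/systemd/system/yunohost-api.service /etc/systemd/system/
scp root@healthy-host:/etc/systemd/system/yunohost-firewall.service /etc/systemd/system/
systemctl daemon-reload

# Verify nothing differs (bash process substitution):
diff <(ssh root@healthy-host cat /etc/systemd/system/yunohost-api.service) /etc/systemd/system/yunohost-api.service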

The only service I can't find is yunomdns, which I think I can live without for the moment.

As far as apps are concerned, Matrix/Synapse is having difficulties (needs troubleshooting; it seems totally gone…), but the others are all OK.


Wohoo! That was just so easy!

# yunohost backup restore synapse-pre-upgrade1
Warning: YunoHost is already installed
Do you really want to restore an already installed system? [y/N]: y
Info: Preparing archive for restoration...
Info: Restoring synapse...
Info: [....................] > Loading settings...
Info: [....................] > Validating restoration parameters...
Info: [+++++++.............] > Reinstalling dependencies...
Warning: Creating new PostgreSQL cluster 13/main ...
Warning: /usr/lib/postgresql/13/bin/initdb -D /var/lib/postgresql/13/main --auth-local peer --auth-host md5
Warning: The files belonging to this database system will be owned by user "postgres".
Warning: This user must also own the server process.
Warning: The database cluster will be initialized with locale "en_US.UTF-8".
Warning: The default database encoding has accordingly been set to "UTF8".
Warning: The default text search configuration will be set to "english".
Warning: Data page checksums are disabled.
Warning: fixing permissions on existing directory /var/lib/postgresql/13/main ... ok
Warning: creating subdirectories ... ok
Warning: selecting dynamic shared memory implementation ... posix
Warning: selecting default max_connections ... 100
Warning: selecting default shared_buffers ... 128MB
Warning: selecting default time zone ... Europe/London
Warning: creating configuration files ... ok
Warning: running bootstrap script ... ok
Warning: performing post-bootstrap initialization ... ok
Warning: syncing data to disk ... ok
Warning: Success. You can now start the database server using:
Warning:     pg_ctlcluster 13 main start
Warning: Ver Cluster Port Status Owner    Data directory              Log file
Warning: 13  main    5433 down   postgres /var/lib/postgresql/13/main /var/log/postgresql/postgresql-13-main.log
Warning: update-alternatives: using /usr/share/postgresql/13/man/man1/postmaster.1.gz to provide /usr/share/man/man1/postmaster.1.gz (postmaster.1.gz) in auto mode
Warning: I: Creating /var/lib/turn/turndb from /usr/share/coturn/schema.sql
Warning: Building PostgreSQL dictionaries from installed myspell/hunspell packages...
Warning: Removing obsolete dictionary files:
Info: [#######+............] > Recreating the dedicated system user...
Info: [########+...........] > Restoring directory and configuration...
Info: [#########...........] > Check for source up to date...
Info: '/opt/yunohost/matrix-synapse/lib64' wasn't deleted because it doesn't exist.
Info: '/opt/yunohost/matrix-synapse/.rustup' wasn't deleted because it doesn't exist.
Info: '/opt/yunohost/matrix-synapse/.cargo' wasn't deleted because it doesn't exist.
Info: [#########+..........] > Reload fail2ban...
Info: [##########+.........] > Restoring the PostgreSQL database...
Info: [###########+........] > Enable systemd services
Info: [############++++....] > Creating a dh file...
Info: [################++..] > Reconfiguring coturn...
Info: [##################..] > Configuring log rotation...
Info: [##################+.] > Configuring file permission...
Info: [###################.] > Restarting synapse services...
Info: The service matrix-synapse has correctly executed the action restart.
Info: [###################.] > Reloading nginx web server...
Info: [####################] > Restoration completed for synapse
Success! Restoration completed
apps: 
  synapse: Success
system: 
