Apt reinstall yunohost --> savage post-install --> now what?

Hi all,

I broke my Yunohost during the migration.

Apt got stuck, and every solution I found involved removing yunohost and yunohost-admin. I surrendered.

Debian got updated, and apt install yunohost followed. I had run apt reinstall yunohost before on a working YunoHost and found no adverse effects. Wrong, apparently; I guess the earlier uninstall removed some of the configuration.

Now I'm stuck just before the post-install. YunoHost knows I messed up:

# yunohost backup restore 20210926-194650
Info: Preparing archive for restoration...
Warning: unable to retrieve string to translate with key 'The following critical error happened during restoration: It looks like you're trying to re-postinstall a system that was already working previously ... If you recently had some bug or issues with your installation, please first discuss with the team on how to fix the situation instead of savagely re-running the postinstall ...' for default locale 'locales/en.json' file (don't panic this is just a warning)
Error: The following critical error happened during restoration: It looks like you're trying to re-postinstall a system that was already working previously ... If you recently had some bug or issues with your installation, please first discuss with the team on how to fix the situation instead of savagely re-running the postinstall ...

I’m not quite sure (read: no idea, and out of ideas) how to continue from here!

Hmpf, then let’s try: touch /etc/yunohost/installed

I’m all ears!

# touch /etc/yunohost/installed
# yunohost tools update 
# yunohost tools upgrade system
Info: Upgrading packages...
Info: Upgrading system packages
... 
Info: + Processing triggers for man-db (2.9.4-2) ...
Info: + Processing triggers for libc-bin (2.31-13+deb11u4) ...
Success! System upgraded

Weeell… That seems quite promising!

Things look better now. Not everything is running yet; some apps give a gateway error, and some can’t upgrade yet. That gave me the idea to re-run the migration:

yunohost tools migrations run --accept-disclaimer
Info: Running migration 0021_migrate_to_bullseye...
Error: Migration 0021_migrate_to_bullseye did not complete, aborting. Error: The current Debian distribution is not Buster! If you already ran the Buster->Bullseye migration, then this error is symptomatic of the fact that the migration procedure was not 100% succesful (otherwise YunoHost would have flagged it as completed). It is recommended to investigate what happened with the support team, who will need the **full** log of the `migration, which can be found in Tools > Logs in the webadmin.
Info: The operation 'Run migrations' could not be completed. Please share the full log of this operation using the command 'yunohost log share 20220925-121827-tools_migrations_migrate_forward' to get help
Error: Run these migrations: '0021_migrate_to_bullseye', before migration 0022_php73_to_php74_pools.
Error: Run these migrations: '0021_migrate_to_bullseye', before migration 0023_postgresql_11_to_13.
Error: Run these migrations: '0021_migrate_to_bullseye', before migration 0024_rebuild_python_venv.

# yunohost log share 20220925-121827-tools_migrations_migrate_forward
Info: This log is now available via https://paste.yunohost.org/raw/vehubesode

So, I’ll have a look at the log

OK, the log says the migration didn’t complete 100%, and that I should have a look at the logs.
The web GUI gives an error: YunoHost API not responding, bad gateway.

Looking in /var/log/yunohost/categories/, there are a number of logs related to the migration, a couple of days old (from when I ran it). I had run the migration after updating the individual apps as far as they would go without needing YunoHost 11 to continue:


 72K -rw-r--r-- 1 root root  69K Sep 19 06:34 operation/20220919-053409-backup_create.log
4.0K -rw-r--r-- 1 root root  393 Sep 19 06:34 operation/20220919-053409-backup_create.yml
 64K -rw-r--r-- 1 root root  58K Sep 19 06:42 operation/20220919-053418-app_remove-synapse.log
4.0K -rw-r--r-- 1 root root  674 Sep 19 06:42 operation/20220919-053418-app_remove-synapse.yml
4.0K -rw-r--r-- 1 root root  222 Sep 19 06:42 operation/20220919-054252-permission_delete-synapse.log
4.0K -rw-r--r-- 1 root root  329 Sep 19 06:42 operation/20220919-054252-permission_delete-synapse.yml
180K -rw-rw-rw- 1 root root 179K Sep 19 06:50 operation/20220919-054255-backup_restore_app-synapse.log
4.0K -rw-rw-rw- 1 root root  881 Sep 19 06:50 operation/20220919-054255-backup_restore_app-synapse.yml
4.0K -rw-rw-rw- 1 root root 1.1K Sep 19 06:42 operation/20220919-054255-permission_create-synapse.log
4.0K -rw-rw-rw- 1 root root  497 Sep 19 06:42 operation/20220919-054255-permission_create-synapse.yml
4.0K -rw-rw-rw- 1 root root   67 Sep 19 06:42 operation/20220919-054255-permission_url-synapse.log
4.0K -rw-rw-rw- 1 root root  390 Sep 19 06:42 operation/20220919-054255-permission_url-synapse.yml
4.0K -rw-rw-rw- 1 root root 1.1K Sep 19 06:42 operation/20220919-054256-permission_create-synapse.log
4.0K -rw-rw-rw- 1 root root  489 Sep 19 06:42 operation/20220919-054256-permission_create-synapse.yml
4.0K -rw-rw-rw- 1 root root   73 Sep 19 06:42 operation/20220919-054256-permission_url-synapse.log
4.0K -rw-rw-rw- 1 root root  376 Sep 19 06:42 operation/20220919-054256-permission_url-synapse.yml
4.0K -rw-rw-rw- 1 root root 1.2K Sep 19 06:42 operation/20220919-054258-permission_create-synapse.log
4.0K -rw-rw-rw- 1 root root  509 Sep 19 06:42 operation/20220919-054258-permission_create-synapse.yml
4.0K -rw-rw-rw- 1 root root   82 Sep 19 06:42 operation/20220919-054258-permission_url-synapse.log
4.0K -rw-rw-rw- 1 root root  389 Sep 19 06:42 operation/20220919-054258-permission_url-synapse.yml
8.0K -rw-r--r-- 1 root root 6.5K Sep 19 22:57 operation/20220919-215703-tools_migrations_migrate_forward.log
4.0K -rw-r--r-- 1 root root  308 Sep 19 22:57 operation/20220919-215703-tools_migrations_migrate_forward.yml
4.0K -rw-r--r-- 1 root root 3.9K Sep 19 23:10 operation/20220919-220959-tools_migrations_migrate_forward.log
4.0K -rw-r--r-- 1 root root  308 Sep 19 23:10 operation/20220919-220959-tools_migrations_migrate_forward.yml
4.0K -rw-r--r-- 1 root root 3.9K Sep 19 23:17 operation/20220919-221715-tools_migrations_migrate_forward.log
4.0K -rw-r--r-- 1 root root  308 Sep 19 23:17 operation/20220919-221715-tools_migrations_migrate_forward.yml
4.0K -rw-r--r-- 1 root root 3.9K Sep 19 23:18 operation/20220919-221841-tools_migrations_migrate_forward.log
4.0K -rw-r--r-- 1 root root  308 Sep 19 23:18 operation/20220919-221841-tools_migrations_migrate_forward.yml
4.0K -rw-r--r-- 1 root root 3.9K Sep 19 23:25 operation/20220919-222539-tools_migrations_migrate_forward.log
4.0K -rw-r--r-- 1 root root  308 Sep 19 23:25 operation/20220919-222539-tools_migrations_migrate_forward.yml
 36K -rw-r--r-- 1 root root  34K Sep 20 00:00 operation/20220919-230008-backup_create.log
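To pick the migration runs out of a long listing like this, a glob on the operation name is enough. A minimal sketch, simulated in a temp dir with made-up files so it is self-contained (on the real box the glob would be /var/log/yunohost/categories/operation/*tools_migrations*.log):

```shell
# Filter an operation/ listing down to just the migration runs.
# The filenames here are fabricated for the demo.
d=$(mktemp -d)
touch "$d/20220919-053409-backup_create.log" \
      "$d/20220919-215703-tools_migrations_migrate_forward.log" \
      "$d/20220919-220959-tools_migrations_migrate_forward.log"
count=$(ls "$d"/*tools_migrations*.log | wc -l)   # number of migration logs
echo "$count migration logs"
rm -r "$d"
```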

Trying to restart yunohost-api gives me an ‘is masked’-reply:

# service yunohost-api status
● yunohost-api.service
     Loaded: masked (Reason: Unit yunohost-api.service is masked.)
     Active: inactive (dead)
root@sanyi:/var/log/yunohost/categories# service yunohost-api start
Failed to start yunohost-api.service: Unit yunohost-api.service is masked.

I’m not familiar enough with systemd to decide whether I should unmask the service. I can imagine it is masked as part of YunoHost’s migration procedure. Without changing anything for now, I’ll try restarting the server before digging into each of the log files on the CLI.
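For context on what "masked" means mechanically: systemd masks a unit by symlinking its name to /dev/null under /etc/systemd/system, which makes it impossible to start; unmasking just removes that symlink. A sketch of the layout, simulated in a temp dir instead of the real /etc/systemd/system (whether unmasking is *safe* here is exactly the open question):

```shell
# `systemctl mask yunohost-api` effectively does this symlink; `unmask` removes it.
unitdir=$(mktemp -d)                                   # stand-in for /etc/systemd/system
ln -s /dev/null "$unitdir/yunohost-api.service"        # what `mask` does
target=$(readlink "$unitdir/yunohost-api.service")
[ "$target" = "/dev/null" ] && echo "yunohost-api.service is masked"
rm "$unitdir/yunohost-api.service"                     # what `unmask` does
rmdir "$unitdir"
```

On the real system, `ls -l /etc/systemd/system/yunohost-api.service` shows whether the mask is in place, and `systemctl unmask yunohost-api && systemctl start yunohost-api` would undo it, if the mask turns out to be only migration leftover.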


edit

Reboot done, Yunohost API is still down.

Before looking into the logs, I’ll list the pending migrations. apt update && apt upgrade tells me all packages are up to date, as does apt-get dist-upgrade.

# yunohost tools migrations list
migrations: 
  0: 
    description: Upgrade the system to Debian Bullseye and YunoHost 11.x
    disclaimer: None
    id: 0021_migrate_to_bullseye
    mode: manual
    name: migrate_to_bullseye
    number: 21
    state: pending
  1: 
    description: Migrate php7.3-fpm 'pool' conf files to php7.4
    disclaimer: None
    id: 0022_php73_to_php74_pools
    mode: auto
    name: php73_to_php74_pools
    number: 22
    state: pending
  2: 
    description: Migrate databases from PostgreSQL 11 to 13
    disclaimer: None
    id: 0023_postgresql_11_to_13
    mode: auto
    name: postgresql_11_to_13
    number: 23
    state: pending
  3: 
    description: Repair Python app after bullseye migration
    disclaimer: Following the upgrade to Debian Bullseye, some Python applications needs to be partially rebuilt to get converted to the new Python version shipped in Debian (in technical terms: what's called the 'virtualenv' needs to be recreated). In the meantime, those Python applications may not work. YunoHost can attempt to rebuild the virtualenv for some of those, as detailed below. For other apps, or if the rebuild attempt fails, you will need to manually force an upgrade for those apps.

Rebuilding the virtualenv will be attempted for the following apps (NB: the operation may take some time!): 
    - borg-env
    id: 0024_rebuild_python_venv
    mode: manual
    name: rebuild_python_venv
    number: 24
    state: pending

Because of the manual upgrade of the system, the migrations are a bit out of sync: PHP and Python are already upgraded, but the migrations for configuration and virtualenvs have not run, and PostgreSQL is still at 11.

# php --version
PHP 7.4.30 (cli) (built: Jul  7 2022 15:51:43) ( NTS )
# php-fpm7.4 --version
PHP 7.4.30 (fpm-fcgi) (built: Jul  7 2022 15:51:43)
# python --version
Python 3.9.2
# psql --version
psql (PostgreSQL) 11.17 (Debian 11.17-0+deb10u1)

All previous tools_migrations_migrate_forward logs ran into the same problem while upgrading MariaDB. This is the most recent entry from before I nuked the system: log

Helpful hint from @tituspijean :stuck_out_tongue:


I thought of checking 0021_migrate_to_bullseye to see whether following its steps manually would get me to the next step. I couldn’t find the script at first, though.

In /etc/yunohost/migrations.yaml I can see migrations 0015-0020 marked as skipped. I think I could continue with 0022_php73_to_php74_pools by adding 0021 to the yaml, but then some configuration might not get migrated.
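Assuming the file is the flat id-to-state mapping that the skipped 0015-0020 entries suggest (I have not verified the exact schema), marking 0021 as skipped would be a one-line addition. A sketch against a throwaway sample file, with a hypothetical placeholder entry standing in for the existing ones:

```shell
# ASSUMPTION: migrations.yaml is a flat "id: state" mapping; verify against
# the real file before touching it.
f=$(mktemp)
echo '0020_placeholder_migration: skipped' > "$f"   # hypothetical existing entry
echo '0021_migrate_to_bullseye: skipped' >> "$f"    # the addition being considered
n=$(grep -c ': skipped' "$f")                       # count of skipped entries
echo "$n entries skipped"
rm "$f"
```

As noted above, skipping 0021 this way trades the remaining Buster→Bullseye configuration migrations for being able to run 0022-0024, so it may not be the least lossy route.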

I did find 0021_migrate_to_bullseye.py.

Reading through it, I realized:

  • the first time I ran the migration, it might have done more than just exit on MariaDB, so I looked up the log and uploaded it; MariaDB really is one of the first packages to be upgraded after patching sources.list and managing some conflicting packages.
  • during the manual dist-upgrade, I didn’t accept the packager’s new config for MariaDB but kept the old one (it seemed prudent at the time). I could reinstall MariaDB and accept the packager’s configuration.
  • I could skip migration 0021 by adding it to migrations.yaml as I wrote above, or
  • I could run the migration with --force-rerun (“only if you know what you are doing” suggests I shouldn’t :stuck_out_tongue: )
  • around line 230, I think I recognize that something is done with the dependencies of the ynh packages, to prevent them from getting removed while upgrading build-essential → this is where my manual upgrade broke the system
  • during the first few attempts at the migration, I manually upgraded groups of packages, hoping to hit the package that would let MariaDB upgrade. All fine, but now many packages are marked ‘manually installed’ in dpkg.
  • the workaround for dnsmasq: there is an old /etc/init.d/dnsmasq but no /etc/init.d/dnsmasq.dpkg-dist, yet currently resolvectl status returns an error (Failed to get global data: Unit dbus-org.freedesktop.resolve1.service not found.). Reinstall?
  • at some point (line ~400), heuristics try to guess the previous migration log, which should be > 10k. All of mine are 8k or less, which is another fact supporting ‘only the start of the migration ran’.
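That > 10k heuristic is easy to apply to the listing yourself with find’s -size test. A sketch over fabricated files (sizes invented for the demo; on the real box the command would be find /var/log/yunohost/categories/operation -name '*tools_migrations*' -size +10k):

```shell
# Which migration logs exceed 10k, i.e. plausibly got past the opening steps?
# Simulated in a temp dir with fabricated sizes.
d=$(mktemp -d)
truncate -s 4K  "$d/20220919-220959-tools_migrations_migrate_forward.log"
truncate -s 64K "$d/20220919-215703-tools_migrations_migrate_forward.log"
big=$(find "$d" -name '*tools_migrations*' -size +10k | wc -l)
echo "$big log(s) over 10k"
rm -r "$d"
```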

All in all… it would seem that running the migration with --force-rerun would not break the already-upgraded system. What do you think?