YunoHost 11.0 (Bullseye) release

Closer investigation of my PostgreSQL migration problem reveals that I have the same error messages as @chmeyer, in addition to the locale problem experienced by several others, e.g. @echapon. The PostgreSQL upgrade logs state:

ERROR:  could not access file "$libdir/postgis-2.5": No such file or directory

And the loadable_libraries.txt file in the log directory says

could not load library "$libdir/postgis-2.5": ERROR:  could not access file "$libdir/postgis-2.5": No such file or directory
In database: mobilizon
could not load library "$libdir/rtpostgis-2.5": ERROR:  could not access file "$libdir/rtpostgis-2.5": No such file or directory
In database: mobilizon

This is exactly the same as chmeyer’s problem. The postgis package postgresql-11-postgis-2.5 is already automatically installed. Installing postgresql-13-postgis-3 did not help, so I removed it.
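A quick way to see the mismatch is to compare what is on disk with what the database expects. This is a hedged sketch: the library paths assume Debian's standard PostgreSQL packaging, and "mobilizon" is the database named in the logs above.

```shell
# List the PostGIS shared libraries each PostgreSQL major version has on disk
ls /usr/lib/postgresql/11/lib/ | grep -i postgis
ls /usr/lib/postgresql/13/lib/ | grep -i postgis

# Show which extension versions the affected database has recorded
sudo -u postgres psql -d mobilizon -c "SELECT extname, extversion FROM pg_extension;"
```

If the database records postgis 2.5 but only the postgis 3 libraries exist under the new version's lib directory, the upgrade will fail with exactly the "could not access file" error above.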

The locale problems don’t show up in the postgresql log directory, but in the yunohost migrations log.

After the latest YunoHost update yesterday (from 4.3 to version 11.0.9.7), the following error appeared for Dovecot (mail):

After the obligatory reboot of the server, the dovecot service could not restart because it was no longer able to find a file (see logs below).

My solution was to locate the missing file ffdhe2048.pem and copy it to the expected location:

  • Expected location: /usr/share/yunohost/other/ffdhe2048.pem
  • Real location: /usr/share/yunohost/ffdhe2048.pem

Afterwards, the dovecot service was able to start again without problems.
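The copy step above can be sketched as follows (locations taken from the bullet list; this is a workaround only):

```shell
# Workaround sketch: put ffdhe2048.pem where dovecot's config expects it,
# then restart dovecot. Paths are the ones reported in the post above.
sudo mkdir -p /usr/share/yunohost/other
sudo cp /usr/share/yunohost/ffdhe2048.pem /usr/share/yunohost/other/ffdhe2048.pem
sudo systemctl restart dovecot
```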


  - Aug 18 09:07:26 systemd[1]: Starting Dovecot IMAP/POP3 email server...
  - Aug 18 09:07:26 dovecot[927]: doveconf: Fatal: Error in configuration file /etc/dovecot/dovecot.conf line 26: ssl_dh: Can't open file /usr/share/yunohost/other/ffdhe2048.pem: No such file or directory
  - Aug 18 09:07:26 systemd[1]: dovecot.service: Main process exited, code=exited, status=89/n/a
  - Aug 18 09:07:26 systemd[1]: dovecot.service: Failed with result 'exit-code'.
  - Aug 18 09:07:26 systemd[1]: Failed to start Dovecot IMAP/POP3 email server.
  - Aug 18 09:13:47 systemd[1]: Starting Dovecot IMAP/POP3 email server...
  - Aug 18 09:13:47 dovecot[3897]: doveconf: Fatal: Error in configuration file /etc/dovecot/dovecot.conf line 26: ssl_dh: Can't open file /usr/share/yunohost/other/ffdhe2048.pem: No such file or directory
  - Aug 18 09:13:47 systemd[1]: dovecot.service: Main process exited, code=exited, status=89/n/a
  - Aug 18 09:13:47 systemd[1]: dovecot.service: Failed with result 'exit-code'.
  - Aug 18 09:13:47 systemd[1]: Failed to start Dovecot IMAP/POP3 email server.

The proper solution is to force-regenerate the conf for dovecot with yunohost tools regen-conf dovecot --dry-run --with-diff and then yunohost tools regen-conf dovecot --force
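That is, review first, then apply:

```shell
# Preview the changes regen-conf would make to dovecot's config...
yunohost tools regen-conf dovecot --dry-run --with-diff
# ...and apply them once the diff looks right.
yunohost tools regen-conf dovecot --force
```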

Thank you. I’ll do this later today

@YouKnowHorst I ran into the same issue!

When I read line 26 of /etc/dovecot/dovecot.conf, I noticed that line 25 contains a comment with the curl command to fetch the missing file, as below:

curl https://ssl-config.mozilla.org/ffdhe2048.txt > /path/to/dhparam

I ran that command with the correct path to the dhparam file, and voilĂ  :upside_down_face:
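For reference, that workaround spelled out with the path from the error logs above (a sketch; the next reply explains why regen-conf is the cleaner fix):

```shell
# Fetch Mozilla's ffdhe2048 DH parameters into the location dovecot's config
# expects (path taken from the dovecot error logs above), then restart dovecot.
curl https://ssl-config.mozilla.org/ffdhe2048.txt | sudo tee /usr/share/yunohost/other/ffdhe2048.pem
sudo systemctl restart dovecot
```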

Yes, technically that works, yet that’s not the proper fix. If everybody had this issue, we would have taken care of it, so take two minutes to ask yourself why you have this issue and others don’t. The reason is probably that you manually edited dovecot’s conf file, therefore regen-conf is not upgrading it, and in particular it doesn’t update the line pointing to the DH parameters file.

So what you should do is run yunohost tools regen-conf dovecot --dry-run --with-diff, review the diff, and yunohost tools regen-conf dovecot --force if you’re happy with the diff.

I did indeed modify the dovecot conf already :face_with_open_eyes_and_hand_over_mouth:

Everything went well except synapse does not start, here’s the log: https://pastebin.com/dGtvh6hL

The app-upgrade

yunohost app upgrade -f synapse

gets me

Error: All apps are already up-to-date

yunohost service restart matrix-synapse

gets me

Aug 18 16:39:49 python[5422]: /opt/yunohost/matrix-synapse/bin/python: Error while finding module specification for 'synapse.app.homeserver' (ModuleNotFoundError: No module named 'synapse')

Port 8448 is not reachable, which was fine before the Bullseye upgrade. I’m on a Hetzner VPS with no firewall on that VPS.

Anyone any idea? Thanks!

Unfortunately, -f is not the appropriate option; you may retry with either -F or --force
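So the retry would look like:

```shell
# -f is not a shorthand for --force in this command; use the long flag (or -F):
yunohost app upgrade synapse --force
```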

Aha! That’s it! Yes I’ve modified the dovecot config as well to allow more connections :slight_smile: I did totally forget this!

Just migrated my YNH VPS and it went flawless!

I am so amazed by your work and the quality it provides time and time again. This is hands down the best and most practical piece of software I have been using the last 7 years. Thank you so much!

:clap:

Hello all, and many thanks YunoHost, you stay awesome.
After struggling with the PostgreSQL migration, I found the solution (for me) on good old Stack Overflow.

    Verify the current cluster is still the old version:
   
    $ pg_lsclusters
   
    Ver Cluster Port Status Owner    Data directory               Log file
    11  main    5432 online postgres /var/lib/postgresql/11/main  /var/log/postgresql/postgresql-11-main.log
    13  main    5434 down   postgres /var/lib/postgresql/13/main  /var/log/postgresql/postgresql-13-main.log
    
    Run pg_dropcluster 13 main as user postgres:
    
    $ sudo -u postgres pg_dropcluster 13 main
    Warning: systemd was not informed about the removed cluster yet. 
    Operations like "service postgresql start" might fail. To fix, run:
    sudo systemctl daemon-reload
    
    Run the pg_upgradecluster command as user postgres:
    
    $ sudo -u postgres pg_upgradecluster 11 main
    
    Verify that everything works, and that the only online cluster is now 13:
    
    $ pg_lsclusters
    Ver Cluster Port Status Owner    Data directory               Log file
    11  main    5434 down   postgres /var/lib/postgresql/11/main  /var/log/postgresql/postgresql-11-main.log
    13  main    5432 online postgres /var/lib/postgresql/13/main  /var/log/postgresql/postgresql-13-main.log
    
    **Drop the old cluster: //ONLY IN CASE NO ERRORS APPEAR**
    
    $ sudo -u postgres pg_dropcluster 11 main
    Uninstall the previous version of PostgreSQL:
    $ sudo apt remove 'postgresql*11'

After that I restarted nginx (not sure whether it was necessary), and I had to restart the postgres cluster twice to get Mobilizon working again (I nearly had a heart attack when Mobilizon came up empty after the first restart):

sudo pg_ctlcluster 13 main restart

That’s it :smiley:

Good evening,

A quick report on the migration I just completed:

  ‱ First blocker: OVH-specific repositories (I run on a Kimsufi server), which I simply removed
Warning: E: The repository 'https://last-public-ovh-rtm.snap.mirrors.ovh.net/debian bullseye Release' does not have a Release file.
Warning: E: The repository 'https://last-public-ovh-metrics.snap.mirrors.ovh.net/debian bullseye Release' does not have a Release file.
  ‱ Second blocker: the Radicale and MyGPO apps, which I uninstalled (not reinstalled yet, by the way; I barely use them)
Info: +  mygpo-ynh-deps : Depends: python3-pip but it is not going to be installed
Info: +  radicale-ynh-deps : Depends: python-pip but it is not going to be installed
Info: +                      Depends: python-virtualenv but it is not going to be installed
Info: +                      Depends: python-dev
Info: +                      Depends: uwsgi-plugin-python but it is not going to be installed

After that, the migration went through without any notable problem. Then on reboot:

  ‱ The SSH connection asked me for the admin password even though I had configured key-based login, and the password was not accepted either. I disconnected and reconnected, and it worked (with the key)
  ‱ A few applications returned a 502 error (not all of them: the problem occurred with Nextcloud and n8n for example, but not with Shaarli). I restarted the nginx service and everything went back to normal.

Once again, a big thank-you to the whole team! Well done on the job and on the “after-sales service” you provide here. :clap:

Uuuuh wokĂ© c’est un peu brutal, j’imagine que c’est OK si il y a bien d’autres fichiers sources.list qui pointent vers les dĂ©pĂŽts debian, mais c’est pas une opĂ©ration Ă  faire la lĂ©gĂšre 


My output of pg_lsclusters is

Ver Cluster Port Status                Owner    Data directory              Log file
11  main    5432 down,binaries_missing postgres /var/lib/postgresql/11/main /var/log/postgresql/postgresql-11-main.log
13  main    5433 online                postgres /var/lib/postgresql/13/main /var/log/postgresql/postgresql-13-main.log

Apparently, YunoHost is looking for psql on port 5432.

psql: error: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
Could not run script: /usr/share/yunohost/hooks/conf_regen/35-postgresql

I’m currently not sure what to do to fix my psql issue.

Looks like you didn’t do this part:

Drop the old cluster: //ONLY IN CASE NO ERRORS APPEAR
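Following the Stack Overflow recipe quoted earlier, that remaining step could look like this. A sketch only: it assumes the data in 13/main has been confirmed good, that 13/main currently sits on port 5433 as in the output above, and that postgresql.conf uses Debian's default layout.

```shell
# Drop the dead 11/main cluster, then move 13/main onto the default port 5432.
sudo -u postgres pg_dropcluster 11 main
sudo sed -i 's/^port = 5433/port = 5432/' /etc/postgresql/13/main/postgresql.conf
sudo systemctl restart postgresql@13-main
pg_lsclusters   # 13/main should now be the only cluster, online on 5432
```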

Actually, I simply renamed the files to keep them around, just in case.
That said, I’d welcome feedback from Kimsufi/OVH users about how these repositories should be handled :slight_smile:

I’m running YunoHost on Armbian 22.05.3 Buster on a PineRockPro64.
Can I start the YunoHost 11.0 (Bullseye) migration, and should it work as expected?

Migration worked flawlessly for me. Thanks so much :blush:

It worked perfectly so far - thanks a lot and congratulations :grinning:
