The "hidden" migration should now be available in 12.1.34
That's weird. Can you try yunohost tools regen-conf apt?
First blocker I found for updating: a file that is expected to exist, but doesn't.
Migration 0036_migrate_to_trixie did not complete, aborting. Error: [Errno 2] No such file or directory: PosixPath('/opt/pipx/venvs/uv/bin/pip')
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/yunohost/tools.py", line 906, in tools_migrations_run
migration.run()
File "/usr/lib/python3/dist-packages/yunohost/migrations/0036_migrate_to_trixie.py", line 188, in run
_backup_pip_freeze_for_python_app_venvs()
File "/usr/lib/python3/dist-packages/yunohost/migrations/0036_migrate_to_trixie.py", line 114, in _backup_pip_freeze_for_python_app_venvs
pip_freeze = subprocess.run([pip, "freeze"], check=True).stdout.decode("utf-8")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/gevent/subprocess.py", line 2029, in run
with Popen(*popenargs, **kwargs) as process:
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/gevent/subprocess.py", line 842, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "/usr/lib/python3/dist-packages/gevent/subprocess.py", line 1866, in _execute_child
raise child_exception
FileNotFoundError: [Errno 2] No such file or directory: PosixPath('/opt/pipx/venvs/uv/bin/pip')
So I get into the venv for that folder, and I find out that the environment for that installation is externally managed:
(uv) root@azkware:/opt/pipx/venvs/uv# pip install pip
error: externally-managed-environment
× This environment is externally managed
╰─> To install Python packages system-wide, try apt install
python3-xyz, where xyz is the package you are trying to
install.
If you wish to install a non-Debian-packaged Python package,
create a virtual environment using python3 -m venv path/to/venv.
Then use path/to/venv/bin/python and path/to/venv/bin/pip. Make
sure you have python3-full installed.
If you wish to install a non-Debian packaged Python application,
it may be easiest to use pipx install xyz, which will manage a
virtual environment for you. Make sure you have pipx installed.
See /usr/share/doc/python3.11/README.venv for more information.
Do I just copy pip from an existing venv elsewhere?
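Before copying pip from elsewhere: venvs created by pipx/uv deliberately ship without pip, and the stdlib ensurepip module can re-seed it in place. A hedged sketch (the /tmp path is a stand-in for /opt/pipx/venvs/uv from the traceback; --default-pip is what creates the unversioned bin/pip that the migration looks for):

```shell
# Create a throwaway venv without pip to mimic the pipx/uv situation,
# then re-seed pip with the stdlib ensurepip module.
python3 -m venv --without-pip /tmp/demo-venv
test ! -e /tmp/demo-venv/bin/pip && echo "no pip yet"
# --default-pip also installs the unversioned bin/pip (not just pip3/pip3.x)
/tmp/demo-venv/bin/python -m ensurepip --default-pip >/dev/null
test -x /tmp/demo-venv/bin/pip && echo "pip restored"
```

On Debian this needs the python3-venv package for ensurepip's bundled wheels.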
Merf, do you know why this /opt/pipx exists in the first place? Did you set up something manually using pip, or?
Ah yes, I manually installed yt-dlp in a venv, in order to have it updated more promptly than via Debian Stable. Should I temporarily remove it, then?
The odd thing is that yt-dlp is installed in /opt/venv, while the complaining folder is /opt/pipx, which is a totally separate environment. I don't even remember installing uv to begin with; maybe it was a dependency of some package.
Sounds like you set up some venv via pipx. I don't know the full story or how to work around this; I suppose you can move it elsewhere (though not somewhere in /opt/, I'm guessing). Maybe we should have a smarter way to find the venvs to freeze, to avoid this kind of case.
To be frank, I don't quite remember what I did in that folder. I think I was following some tutorial to install yt-dlp via pipx or uv instead of pip, and it ended up causing a little mess. I'm currently investigating how to fix it.
Several workarounds later, I managed to upgrade, but I still had an error lingering with the new SSSD manager:
Nov 03 20:38:27 azkware systemd[1]: Starting sssd-nss.socket - SSSD NSS Service responder socket...
Nov 03 20:38:27 azkware sssd_check_socket_activated_responders[125311]: Misconfiguration found for the 'nss' responder.
It has been configured to be socket-activated but it's still mentioned in the services' line of the config file.
Please consider either adjusting services' line or disabling the socket by calling:
"systemctl disable sssd-nss.socket"
Nov 03 20:38:27 azkware systemd[1]: sssd-nss.socket: Control process exited, code=exited, status=17/n/a
Nov 03 20:38:27 azkware systemd[1]: sssd-nss.socket: Failed with result 'exit-code'.
Nov 03 20:38:27 azkware systemd[1]: Failed to listen on sssd-nss.socket - SSSD NSS Service responder socket.
Nov 03 20:38:27 azkware systemd[1]: syncserver-rs.service: Main process exited, code=exited, status=127/n/a
Nov 03 20:38:27 azkware systemd[1]: syncserver-rs.service: Failed with result 'exit-code'.
Nov 03 20:38:27 azkware systemd[1]: Starting sssd-pam.socket - SSSD PAM Service responder socket...
Nov 03 20:38:27 azkware sssd_check_socket_activated_responders[125318]: Misconfiguration found for the 'pam' responder.
It has been configured to be socket-activated but it's still mentioned in the services' line of the config file.
Please consider either adjusting services' line or disabling the socket by calling:
"systemctl disable sssd-pam.socket"
Nov 03 20:38:27 azkware systemd[1]: sssd-pam.socket: Control process exited, code=exited, status=17/n/a
Nov 03 20:38:27 azkware systemd[1]: sssd-pam.socket: Failed with result 'exit-code'.
Nov 03 20:38:27 azkware systemd[1]: Failed to listen on sssd-pam.socket - SSSD PAM Service responder socket.
Nov 03 20:38:28 azkware systemd[1]: Starting sssd-ssh.socket - SSSD SSH Service responder socket...
Nov 03 20:38:28 azkware sssd_check_socket_activated_responders[125337]: Misconfiguration found for the 'ssh' responder.
It has been configured to be socket-activated but it's still mentioned in the services' line of the config file.
Please consider either adjusting services' line or disabling the socket by calling:
"systemctl disable sssd-ssh.socket"
Nov 03 20:38:28 azkware systemd[1]: sssd-ssh.socket: Control process exited, code=exited, status=17/n/a
Nov 03 20:38:28 azkware systemd[1]: sssd-ssh.socket: Failed with result 'exit-code'.
Nov 03 20:38:28 azkware systemd[1]: Failed to listen on sssd-ssh.socket - SSSD SSH Service responder socket.
Nov 03 20:38:28 azkware systemd[1]: Starting sssd-sudo.socket - SSSD Sudo Service responder socket...
Nov 03 20:38:28 azkware sssd_check_socket_activated_responders[125384]: Misconfiguration found for the 'sudo' responder.
It has been configured to be socket-activated but it's still mentioned in the services' line of the config file.
Please consider either adjusting services' line or disabling the socket by calling:
"systemctl disable sssd-sudo.socket"
Nov 03 20:38:28 azkware systemd[1]: sssd-sudo.socket: Control process exited, code=exited, status=17/n/a
Nov 03 20:38:28 azkware systemd[1]: sssd-sudo.socket: Failed with result 'exit-code'.
Nov 03 20:38:28 azkware systemd[1]: Failed to listen on sssd-sudo.socket - SSSD Sudo Service responder socket.
Is there some way of regenerating the SSSD settings? I already tried yunohost tools regen-conf, but it doesn't seem to cover SSSD yet.
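The status=17 failures mean the responders are configured twice: in the services = line of /etc/sssd/sssd.conf and as systemd socket units. A hedged sketch of the config-side fix, demonstrated on a throwaway copy of the file (on the real system you would apply the same edit to /etc/sssd/sssd.conf and restart sssd, or take the log's own suggestion and systemctl disable the four sockets):

```shell
# Reproduce the conflicting config on a scratch copy, then strip the
# responders from the "services =" line so only socket activation remains.
cat > /tmp/sssd.conf <<'EOF'
[sssd]
services = nss, pam, ssh, sudo
domains = default
EOF
sed -i 's/^services *=.*/services =/' /tmp/sssd.conf
grep '^services' /tmp/sssd.conf   # -> services =
```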
Hi! That worked, i.e. it created /etc/apt/trusted.gpg.d/extra_php_version.gpg
For some reason that didn't occur during the initial setup with curl | bash.
Reviewing the postinstall logs, I see the initial calls to regen-conf failed with an exception:
2025-11-02 19:26:50,712: WARNING - sed: can't read /etc/apt/sources.list: No such file or directory
2025-11-02 19:26:50,814: ERROR - Échec de l'exécution du script : /usr/share/yunohost/hooks/conf_regen/10-apt
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/yunohost/hook.py", line 302, in hook_callback
hook_return = hook_exec(
~~~~~~~~~^
path, args=hook_args, chdir=chdir, env=env, raise_on_error=True
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)[1]
^
File "/usr/lib/python3/dist-packages/yunohost/hook.py", line 430, in hook_exec
raise YunohostError("hook_exec_failed", path=path)
yunohost.utils.error.YunohostError: Échec de l'exécution du script : /usr/share/yunohost/hooks/conf_regen/10-apt
2025-11-02 19:26:50,816: DEBUG - Executing command '['sh', '-c', '/bin/bash -x "./15-nginx" post \'\' \'\' /etc/nginx/conf.d/yunohost_admin.conf,/etc/nginx/conf.d/ssowat.conf,/etc/nginx/conf.d/yunolxc.raoull.org.conf,/etc/nginx/conf.d/yunohost_api.conf.inc,/etc/nginx/conf.d/acme-challenge.conf.inc,/etc/nginx/conf.d/yunohost_http_errors.conf.inc,/etc/nginx/conf.d/yunohost_admin.conf.inc,/etc/nginx/conf.d/security.conf.inc,/etc/nginx/conf.d/global.conf,/etc/nginx/conf.d/yunohost_sso.conf.inc,/etc/nginx/conf.d/yunohost_panel.conf.inc,/etc/nginx/conf.d/default.d/redirect_to_admin.conf,/var/www/.well-known/yunolxc.raoull.org/autoconfig/mail/config-v1.1.xml 7>&1']'
On that system, all sources are located in /etc/apt/sources.list.d/, including the official Debian sources. There was NO /etc/apt/sources.list file when it was installed.
That file was apparently created at a later time by some script, maybe by Yunohost itself, not sure. And the later call to regen-conf worked.
Another issue found: an apparent "manual modification" to /etc/nsswitch.conf that cannot be fixed with the usual sudo yunohost tools regen-conf nsswitch --force. Doing that outputs the following:
description: Regenerate system configurations 'nsswitch'
log_path: /var/log/yunohost/operations/20251104-141622-regen_conf-nsswitch.log
logs:
- 2025-11-04 14:16:28,009: DEBUG - Formating result in 'export' mode
[... the same DEBUG line repeated ~50 times, timestamps 14:16:28 through 14:16:35 ...]
- 2025-11-04 14:16:35,317: DEBUG - Formating result in 'export' mode
metadata:
args:
dry_run: False
force: False
list_pending: False
names: nsswitch
with_diff: False
ended_at: 2025-11-04 14:16:35
error: Could not regenerate the configuration for category(s):
interface: cli
operation: regen_conf
parent: None
related_to:
- configuration
- nsswitch
started_at: 2025-11-04 14:16:22
started_by: csolisr
success: False
yunohost_version: 13.0.0.alpha1+202511031700
metadata_path: /var/log/yunohost/operations/20251104-141622-regen_conf-nsswitch.yml
name: 20251104-141622-regen_conf-nsswitch
(See also: paste.yunohost.org/raw/amadunugib)
(Supposedly this command fails because there's no 'nsswitch' regen-conf category anymore.)
And another one: it turns out that something in the Let's Encrypt certificate-renewal logic changed upon upgrading to Trixie. Now every automated certificate renewal returns this error:
2025-11-05 14:10:38,241 ERROR yunohost.certmanager.certificate_renew - Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/yunohost/certificate.py", line 422, in certificate_renew
_fetch_and_enable_new_certificate(domain, no_checks=no_checks)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/yunohost/certificate.py", line 526, in _fetch_and_enable_new_certificate
_prepare_certificate_signing_request(domain, domain_key_file, TMP_FOLDER)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/yunohost/certificate.py", line 635, in _prepare_certificate_signing_request
private_key = serialization.load_pem_private_key(pem_file.read(), password=None)
TypeError: argument 'data': from_buffer() cannot return the address of a unicode object
2025-11-05 14:10:38,242 ERROR yunohost.certmanager.certificate_renew - argument 'data': from_buffer() cannot return the address of a unicode object
2025-11-05 14:10:38,248 ERROR yunohost.certmanager.certificate_renew - Sending email with details to root ...
Turns out it might not be that unusual a problem: searching for the error message suggests it should be easily fixable. Quote:
Under Python 3, it can be converted accordingly with bytes.fromhex(args.secret), whereas args.secret.decode("hex") no longer works
I'd try editing the affected line on my side, but since it's in a system package, I'm not sure if that'd break something…
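That from_buffer() TypeError usually just means load_pem_private_key() (from the cryptography package) received a str instead of bytes, here because certificate.py apparently reads the key file in text mode. A sketch of the suspected one-line fix, shown with plain file I/O so it runs without the cryptography package (the path is hypothetical):

```python
# Write a dummy PEM-ish payload, then compare text-mode vs binary-mode reads.
pem_path = "/tmp/example_key.pem"  # hypothetical path, not the real cert dir
with open(pem_path, "w") as f:
    f.write("-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n")

with open(pem_path) as f:        # text mode: read() returns str
    text_data = f.read()         # -> triggers "from_buffer() cannot return..."

with open(pem_path, "rb") as f:  # binary mode: read() returns bytes
    byte_data = f.read()         # -> what load_pem_private_key() expects

print(type(text_data).__name__, type(byte_data).__name__)  # str bytes
# Equivalent one-liner on the existing code: pem_file.read().encode()
```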
Is a solution being considered for applications that are not yet ready for Python 3.13?
The solution is "fixing the apps", like we did for all the others…
If you have an app in mind and have time to help debug, you can add an issue in the app repo and discuss the solution.
It was just a simple question, not an attack. So reinstalling 3.12 just isn't an option, even for a little while. Okay. Sorry for the inconvenience.
Thank you @jarod5001
During installation, I get this error:
5/5 • Installing YunoHost
===================
Running: apt-get install --assume-yes -o Dpkg::Options::=--force-confold -o APT::install-recommends=true yunohost yunohost-admin postfix
===================
Reading package lists...
Building dependency tree...
Reading state information...
postfix is already the newest version (3.10.4-1~deb13u1).
postfix set to manually installed.
Solving dependencies...
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
ca-certificates : Depends: openssl (>= 1.1.1)
dovecot-core : Depends: openssl
ssl-cert : Depends: openssl (>= 0.9.8g-9)
yunohost : Depends: openssl
Recommends: ntp but it is not installable
E: Unable to correct problems, you have held broken packages.
E: The following information from --solver 3.0 may provide additional context:
Unable to satisfy dependencies. Reached two conflicting decisions:
1. ca-certificates:armhf is selected for install because:
1. yunohost:armhf=13.0.0 is selected for install
2. yunohost:armhf Depends ca-certificates
2. ca-certificates:armhf Depends openssl (>= 1.1.1)
but none of the choices are installable:
- openssl:armhf=3.5.4-1~deb13u1+rpt1 is not selected for install because:
1. yunohost:armhf=13.0.0 is selected for install as above
2. yunohost:armhf Conflicts openssl (>= 3.5.4)
[selected yunohost:armhf]
- openssl:armhf=3.5.1-1 is not selected for install
- openssl:arm64=3.5.4-1~deb13u1+rpt1 is not selected for install because:
1. yunohost:armhf=13.0.0 is selected for install as above
2. yunohost:armhf Conflicts openssl (>= 3.5.4)
[selected yunohost:armhf]
[FAIL] Installation of YunoHost packages failed
Raspberry Pi 2 Model B
Raspberry Pi OS
Received a "dpkg-query: no packages found matching postgresql-17" warning during post-install on YunoHost 13.0 in a clean Debian 13 (genericcloud) install.
Log: https://paste.yunohost.org/raw/opurobiwal
Open issue: Trixie: postgresql-17 package missing on clean Yunohost 13.0 install · Issue #2698 · YunoHost/issues · GitHub
Can you provide details of your changes? I'm facing the same issue.
Open issue: Trixie: Let's Encrypt certificate renewal fails · Issue #2700 · YunoHost/issues · GitHub