Alpha-stage testing for YunoHost 11.0 on Debian Bullseye (and the migration that will be shipped in YunoHost 4.4.x)

It has merely been reworded:

# yunohost domain cert install --help
usage: yunohost domain cert install [domain_list ...] [-h] [--force] [--no-checks] [--self-signed] [--staging]

Install Let's Encrypt certificates for given domains (all by default).

positional arguments:
  domain_list    Domains for which to install the certificates

optional arguments:
  -h, --help     show this help message and exit
  --force        Install even if current certificate is not self-signed
  --no-checks    Does not perform any check that your domain seems correctly configured (DNS, reachability) before attempting to install. (Not
                 recommended)
  --self-signed  Install self-signed certificate instead of Let's Encrypt
  --staging      Use the fake/staging Let's Encrypt certification authority. The new certificate won't actually be enabled - it is only intended to test
                 the main steps of the procedure.

OK, no deprecation warning, but…

sudo yunohost domain cert install MYDOMAIN.TLD

just gives:

Error: 'NoneType' object has no attribute 'get'

The result is the same whether I use my primary domain or a secondary domain that was successfully used to install Nextcloud. If I don’t specify a domain, I get two copies of this error message (I have two domains). Both domains still show as self-signed according to cert status.
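For reference, the cert status check mentioned above is just the other reworded subcommand; assuming it mirrors the domain cert install form shown earlier, it’s something like:

sudo yunohost domain cert status
sudo yunohost domain cert status MYDOMAIN.TLD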

EDIT: I was eventually able to create an SSL cert by accepting the warning about the self-signed cert for my admin domain; from there the web GUI worked fine for creating the Let’s Encrypt cert. There’s definitely an issue with the CLI cert install though. It looks like the domain name just isn’t being passed to it correctly.


So… would it help to post an issue for this on GitHub, or is posting it here enough?

By the way, I also managed to trigger a crash by installing Synapse (which installed perfectly) on a subdomain and then selecting “Make default”, which fails with a backtrace. 100% reproducible.

**Error**: `"500" Internal Server Error`

**Action**: `"PUT" /yunohost/api/apps/synapse/default`

**Error message:**

Unexpected server error

**Traceback**


Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/moulinette/interfaces/api.py", line 494, in process
    ret = self.actionsmap.process(arguments, timeout=30, route=_route)
  File "/usr/lib/python3/dist-packages/moulinette/actionsmap.py", line 579, in process
    return func(**arguments)
  File "/usr/lib/python3/dist-packages/yunohost/log.py", line 419, in func_wrapper
    result = func(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/yunohost/app.py", line 1081, in app_makedefault
    if "/" in app_map(raw=True)[domain]:
KeyError: 'SUBDOMAIN.MYAWESOMEDOMAIN.TLD'
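If it helps narrow this down: the traceback dies on app_map(raw=True)[domain], so my guess is that the subdomain simply has no entry in the app map. Assuming the CLI "app map" action accepts a --raw flag (I haven’t double-checked), that would be easy to confirm:

sudo yunohost app map --raw
# if SUBDOMAIN.MYAWESOMEDOMAIN.TLD is missing from the output, that matches the KeyError above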

Yes, posting on GitHub could help (ideally with a full stack trace, if there’s one available)

Hmyeah, that one may not be specific to Bullseye; I think it’s more related to the fact that Synapse is not a webapp, and therefore it doesn’t make sense to “make it default”, hence it crashes. The fix would be that we just don’t display this UI bit for non-webapps, as it’s not relevant.


It doesn’t seem to successfully create a log when this error occurs, unless I’m missing it.

Does anyone here know, or can anyone make an educated guess at, a release date for YunoHost v11 as a complete install for production use? As you probably know by now from my discussion thread, I am a new YNH user preparing for a new installation in the New Year. I am planning to start with v4.3.4.2, which, I believe, is the current stable version available.

I could wait for this new v4.4.x before I begin my install, and then upgrade to v11 from there. Alternatively, I could wait until v11 ships, and do a complete clean install of it, without having to do the upgrade from v4.4.x at all. I am in a good position right now to make such a decision. I just need to get some idea of how far back this would push my installfest start date.

  • YunoHost v4.3.4.2 installfest — can begin immediately
  • YunoHost v4.4.x installfest — t.b.a
  • YunoHost v11 installfest — t.b.a

Somebody asked a similar question and imho you should just stop procrastinating and go for it ¯\_(ツ)_/¯ :stuck_out_tongue_winking_eye:

On one hand, migrating across major Debian versions is always a bit delicate; at the same time, we try to make it as smooth as possible. Usually there are a couple of rough edges in the migration in the early days right after we release it as stable, but then it gets smoother. Apart from maybe a couple of cases, I’ve never heard of any miserably epic failure of the migration that resulted in an unsolvable, unfixable setup… In the worst-case scenario, just come to the support chatroom or forum, provide logs, be patient and methodical, and we’ll find a way to fix whatever went wrong.

Considering that we still need to release a beta and then the stable, I wouldn’t expect a stable Bullseye before at least mid-February. Don’t waste six weeks just waiting for Bullseye.


Wise words, thanks for that. From my perspective, this project has already been delayed by about 10-20 years! :stuck_out_tongue_closed_eyes: I’m in no hurry. I want to do this right and make it as bulletproof as possible. Over the years I have slowly improved my “Frittro’s Managed Data Store” project, and in 2022 I expect it to be the best ever, since I’ll be self-hosting it using YunoHost. Another 6 weeks or so is no big deal to me if it will improve the quality of the finished product at my end.

This is particularly encouraging, thanks for this reassurance. I’m good to go then, installing v4.3.4.2 starting on 1-Jan-2022. I’m looking forward to it!


I spent today trying to get Debian 10 Buster installed on my test raspi so that I could install YunoHost v4.3.4.2 directly. I encountered several issues (see my discussion thread for details) and had to abandon it.

At present it looks like I may have to lower my sights and begin by installing YunoHost v4.1.7.2, then upgrade to v4.3.4.2, since it seems I cannot install Debian 10 Buster as the base, as had been recommended. Unless someone can provide a solution that helps me get v4.3.4.2 installed, I will start again tomorrow with v4.1.7.2 instead. Thanks.

Sorry, I really think v4.3.4.2, as the current stable, should be the priority here. Can we please get that working as a complete installable package, with its base OS in place, just as v4.1.7.2 already is?

Just install the 4.1.x image, run this workaround: Updates may fail because "Repository [...] changed its 'Suite' value from 'stable' to 'oldstable'", then upgrade, and you’ll be fine on 4.3.x.
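If memory serves, that workaround boils down to telling apt to accept the Suite change, i.e. something along the lines of (check the linked post for the exact steps):

sudo apt update --allow-releaseinfo-change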

So there really is no way to install v4.3.4.2 on a Raspberry Pi without installing v4.1.7.2 first? I thought v4.3.4.2 was the “current stable release” of YunoHost. Why is there no release package for it?

Edit: sorry for the tone… edited to make it more factual.

We are understaffed volunteers, so image builds and testing can lag behind the actual current version. The upgrade from 4.1 to the current version takes minutes; stop overthinking it and embrace the simplicity of YunoHost.

Please pursue this discussion on your own thread if you are not actually discussing the installation and testing of YunoHost 11.0. :wink:


Hello,

Sorry for my English …
I’m currently trying to build a 4.3.5 testing image for Raspberry Pi by following GitHub - YunoHost/rpi-image: Tool used to create the raspberrypi.org Raspbian images.
I’m a complete noob at building images, so if it runs to the end I’ll share a link to it (zip) and the sha256sum, with absolutely no guarantee whatsoever about these files.

Edit: I’m relaunching the build because of a network problem with a Debian repository: some .deb files couldn’t be retrieved.

Edit 2: @frittro, the link is here: hubiC - OVH, and it will stay active for 30 days. It’s my first build using this script, and this topic is not the right place to talk about it.
By the way, I’m not able to fix anything about it. It comes with no guarantee whatsoever, and it’s not an official release, just an attempt to help. Moreover, I can’t test it, especially since the new version is under development/preparation.
It is important to note that:

  • SSH should normally be enabled, according to the script’s default configuration
  • and the locale is normally en_US.UTF-8, so you may want to reconfigure it (see the sketch below).

After that, all you can do is cross your fingers …
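If you need another locale after flashing, the standard Debian way should work (not tested on this image):

sudo dpkg-reconfigure locales   # e.g. pick fr_FR.UTF-8 and set it as the default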

Edit 3:
@frittro, the correct link: hubiC - OVH

ppr

Hi everyone!

Before diving into the details, let me say how much I like YunoHost and how grateful I am for your work.

I tried to upgrade my setup and encountered some issues. I am using a virtual machine with a configuration similar to my production configuration, and YunoHost is installed in an unprivileged LXC container inside the VM.

  1. fetchmail prevented the package upgrade: https://paste.yunohost.org/raw/ijuximipal
    I am using fetchmail, and for it to deliver email locally using dovecot lda it must run under the vmail user (so dovecot lda runs as vmail too). After the upgrade, fetchmail refuses to start because the homedir of its user (vmail) is missing, and because of that apt fails.
    This homedir is set to /home/vmail on Buster, but it changed to /var/vmail on Bullseye, and that directory does not exist.
    When I manually create the /var/vmail directory and chown it to vmail:mail (as on Buster), fetchmail can start and the upgrade can resume (commands recapped after this list).

  2. php7.3-fpm.service timeout on restart: https://paste.yunohost.org/raw/apuvaxiqex
    Not sure what happened here. When I tried to restart the service manually, it succeeded immediately, and the upgrade could resume (also recapped after this list).

  3. during the yunohost upgrade, I noticed this failure:

Setting up yunohost (11.0.1~alpha+202112311417) ...
postmap: fatal: open /var/cache/yunohost/regenconf/pending/postfix/etc/postfix/sasl_passwd.db: Permission denied
Could not run script: /usr/share/yunohost/hooks/conf_regen/19-postfix

This did not prevent the upgrade from succeeding, but when I tried to run it manually afterwards, it failed with the same error:

root@yunohost:~# yunohost tools regen-conf postfix --force
Warning: postmap: fatal: open /var/cache/yunohost/regenconf/pending/postfix/etc/postfix/sasl_passwd.db: Permission denied
Error: Could not run script: /usr/share/yunohost/hooks/conf_regen/19-postfix
Info: The operation 'Regenerate system configurations 'postfix'' could not be completed. Please share the full log of this operation using the command 'yunohost log share 20220102-213108-regen_conf-postfix' to get help
Error: Could not regenerate the configuration for category(s): postfix

The logfile is here: https://paste.yunohost.org/raw/zajobehufa
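For completeness, the manual interventions for items 1 and 2 were just the following (the /var/vmail path and vmail:mail ownership are what I described above; adjust if your setup differs):

# item 1: recreate the vmail homedir expected on Bullseye, owned as on Buster
mkdir -p /var/vmail
chown vmail:mail /var/vmail
systemctl start fetchmail

# item 2: the php7.3-fpm restart that timed out during the upgrade worked when run by hand
systemctl restart php7.3-fpm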

Let me know how I can help.


A quick note to say that since installing YunoHost 11.0 (3 weeks ago), I haven’t encountered any bugs :heart_eyes:


Is there anything in particular that the team would like tested? I’ve installed Nextcloud, Synapse and Element Web and so far, everything I’ve tried has worked, except what I posted above.

To be honest, I was expecting to find more bugs, so now I’m trying to figure out where I’m most likely to find some for you. :slight_smile:


Yeah, but we’re still going kinda slowly because the bugs have a tendency to hide themselves until we release stuff as stable :stuck_out_tongue:

I’m pretty confident about fresh installs on Bullseye, but the most important thing to test is the migration from Buster to Bullseye, which is also complex because you’ve got to set up a machine “as close as possible to real life” with 5~10 apps, etc. …

I pushed a bunch of fixes to the migration procedure to address a couple of issues, I’m running another round of tests, and next we’re planning a beta release during the second half of January (“beta” as in “we encourage power users to actually migrate their production server; expect a couple of minor issues, but it should be okay to debug them manually”).


Now I have a VirtualBox VM for testing my PyInventory project… I restored my prod app data into this VM to test the PyInventory upgrade…

Because I can easily make VirtualBox snapshots, I also tested switching to unstable, to see whether PyInventory has any trouble with Bullseye (I don’t know why that would be a problem, though).

I can confirm this. Workaround: tail -f /var/log/yunohost/categories/operation/* :stuck_out_tongue_closed_eyes:

After all migrations are done, I still have packages that should be upgraded:

Is that normal?

The system is not restarted after the upgrade, but a restart is needed, isn’t it?
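For what it’s worth, I guess finishing this from the CLI would just be standard apt (nothing YunoHost-specific):

apt list --upgradable   # see what is still pending
sudo apt full-upgrade   # bring the remaining packages to their Bullseye versions
sudo reboot             # a reboot is generally needed after a major Debian upgrade (new kernel, new libc)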

Before I rebooted, PyInventory seemed to work fine (I think because the old services were still running)… But after the reboot I only get a 502 Bad Gateway, because the gunicorn app server didn’t start:

I think that after the system upgrade (which upgrades Python from 3.7 to 3.9) the PyInventory virtualenv must be recreated too. (This is normal “behaviour” for Python virtualenvs.)
The YunoHost migrations will not reinstall the app packages, will they?
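A quick way to see why the venv breaks (the path below is a guess; adjust it to wherever PyInventory’s virtualenv actually lives):

# the venv was created against python3.7, so its lib/python3.7/ tree and any compiled
# packages no longer match the python3.9 shipped with Bullseye, hence it must be recreated
ls /opt/yunohost/pyinventory/lib/   # hypothetical path; typically only shows python3.7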

In the case of PyInventory, a forced upgrade (sudo yunohost app upgrade pyinventory --force) fixes the problem, because the venv gets recreated:

Maybe it would be a good idea to force-upgrade all installed apps?!?
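If someone wants to try that: yunohost app upgrade without an app name already iterates over all installed apps, so in principle (untested, and keep in mind each upgrade also creates a pre-upgrade safety backup, see the reply below) it would just be:

sudo yunohost app upgrade --force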


Yeah, I’m trying to address that :sweat_smile: I’m having trouble with the whole PHP dependency mess, a classic.

Yeah, we saw similar issues in the Stretch->Buster transition, where the fact that Python got upgraded to 3.7 was messing up virtualenvs … I’m not knowledgeable enough about these to understand why that causes issues … What’s also weird is that we expected quite a lot of people to run into these issues, but in the end not that many people complained … I’m not sure why.


You mean all installed apps? Uuugh, I wouldn’t be so comfortable with that, because some people have maaaaany apps installed and each upgrade creates a pre-upgrade safety backup and just … ugh …

If the point is to address the virtualenv issues: back in the Stretch->Buster transition we thought we would do some sort of magic, ad-hoc detection of installed venvs and maybe do something about them (that was a random idea though; in the end we didn’t do it).
