There’s also Fedora IoT, which is the same idea but for IoT devices such as Raspberry Pis. I’m not asking anyone to switch from Debian to Fedora or anything like that, just sharing in case anybody is interested in the subject.
It seems like the perfect fit for YunoHost, allowing packagers to be independent of Debian package versions. Container-based apps have many benefits and, as far as I can see, would ease maintenance a lot. This approach would let us manage apps as containers through systemd service files, handling inter-app communication and declaring dependencies at the host level.
Theoretically, any systemd-based distro where Podman can be installed would then become a good installation target for YunoHost.
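To illustrate what that could look like, here is a minimal sketch of a systemd unit wrapping an app in a Podman container. All unit names, image tags, and ports here are hypothetical, not existing YunoHost conventions:

```ini
# /etc/systemd/system/nextcloud-app.service  (hypothetical unit name)
[Unit]
Description=Nextcloud running in a Podman container
# Dependencies are declared at the host level, in plain systemd terms:
Wants=network-online.target
After=network-online.target mariadb-app.service
Requires=mariadb-app.service

[Service]
# Remove any stale container, then run a fresh one (image tag is illustrative)
ExecStartPre=-/usr/bin/podman rm -f nextcloud
ExecStart=/usr/bin/podman run --name nextcloud --rm \
    -p 127.0.0.1:8080:80 \
    docker.io/library/nextcloud:stable
ExecStop=/usr/bin/podman stop nextcloud
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Start ordering and dependencies are then plain systemd (`Requires=`/`After=`), so the host’s service manager handles inter-app relationships the same way it does for natively installed daemons.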
I’m not at all a sysadmin but I personally dislike those “all-in-one” containers.
If a critical bug is found in a lib commonly used by all apps, every app will need an upgrade to get the fixed lib.
If the apps are “really” installed on the host, upgrading the lib once in the system solves the problem for all of them.
But again, I’m not a sysadmin and may be totally wrong.
Hmf, containerization is such a recurring topic I don’t even know where to start anymore.
Yes, containerization solves a lot of issues, but it’s never clear what the actual overhead and cost are. People regularly say “there’s no overhead” because they compare a single app outside a container vs. the same app in a container. I wish I could see a comparison of 10 apps with no containers vs. the same 10 apps each in its own container, measuring CPU + RAM + disk.
You also have many other issues showing up. For example, let’s say Nextcloud now runs in a container … Great … But then Nextcloud cannot access files from the host, and people are going to tell you “I want to access /home from Nextcloud”, and also /media, and also /var/www/my_webapp, etc … I guess you can mount stuff, and a whitelist system is probably more sensible from a security point of view, but meh, that also means quite a lot of work to adapt all the existing stuff.
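For what it’s worth, such a whitelist could be expressed as explicit bind mounts, assuming the app runs as a Podman container under a systemd unit. The drop-in path, unit name, and mounted directories below are purely illustrative:

```ini
# /etc/systemd/system/nextcloud-app.service.d/mounts.conf  (hypothetical drop-in)
# Grants the container access to exactly these host paths and nothing else.
[Service]
ExecStart=
ExecStart=/usr/bin/podman run --name nextcloud --rm \
    -v /home:/host/home \
    -v /media:/host/media:ro \
    -v /var/www/my_webapp:/host/my_webapp:ro \
    docker.io/library/nextcloud:stable
```

Anything not explicitly listed stays invisible to the app, which gives the whitelist behavior described above, at the cost of adapting every app that currently expects free access to the host filesystem.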
You may think “all apps can be installed in a container”, but probably not: we have some apps like vpnclient or hotspot or borg that are more like “system modules” than “web apps” and need to live on the host because they gotta tweak the network layer, etc. So you lose the “every app is a container” aspect, which means more complexity to manage apps.
In the end, I don’t know what’s the right choice about all this. The thing is, nobody has ever done a real study of the actual cost vs. benefit of moving to containers. I believe nobody in the team has any real expertise with containers, and therefore we just tend to avoid them. And every now and then, somebody shows up and says “wow, you should use containers” but has absolutely zero knowledge of the real-world issues packagers are currently facing.
And even if we had a real study with a good argumentation in favor of containers, it’s going to end up with “… but it would take 6 months of full-time work by somebody to actually implement it + that’s going to impact everyone in the team”. So it’s pretty close to rewriting half of the project.
Maybe containers are a necessity to solve the issue of automating system administration and democratizing servers. Honestly, I just wish some people would start their own project based on whichever containerization technology, with the same spirit/goal as YunoHost, being:
make it really easy for non-technical people.
no need to know or even use the CLI at all (we still have some progress to make here, but most stuff is done with the webadmin). So no, saying “just run docker foobar” is not easy.
no PhD in network / DNS configuration required
keep it low-tech
both because 72 layers of abstraction just to run a WordPress is not acceptable, and because people should not / can’t afford to spend 50€/month on a 32-core + 32GB memory VPS just to run a WordPress (I’m exaggerating, but not that much)
(Also: be ready to work on it full time for the next 5+ years)
Like, please, I beg you, anybody, do that shit and beat the crap out of YunoHost by making something way more robust and user-friendly, so we can finally say fuck you to the GAFAMs and Five Eyes and go back to peacefully baking bread instead of solving stupid computer issues
I don’t have any benchmarks at hand, but I’ve been running containers in production for years, and in every case the performance has been exactly the same.
Well I’d consider this a feature.
Depending on the type of service, it may or may not make sense to share the disk with others.
For example, for services like Syncthing and Nextcloud it would make a lot of sense to be able to share a mount. However for services like Synapse and Gitlab it would be absurd.
YunoHost could then easily decide that its default data goes into /var/lib/yunohost. A pro user could just mount a device there and know all data is in a known place. Or YunoHost could offer a nice UI setting asking the vanilla user where to store data, so they can just plug in an external disk and tell YunoHost to move everything there.
It would make the disk-encryption problem much easier to fix, because all data would be in a single dir. It would also make backups much easier. And make YunoHost more secure, too.
Those examples you mentioned could indeed be containers too.
However, I’m just saying that this could be an additional packaging format; I’m not talking about dumping all the existing work and starting from scratch. That’d be crazy! At least for YunoHost.
The benefit of Podman is that it mostly behaves like a normal Linux package. With Docker you have more infrastructure and security issues to think about, but with Podman you can just drop in a systemd unit file and know it works as expected. Your app is packaged in a container instead of a .deb, but it respects all of the host’s inherent security mechanisms.
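For context, Podman can even generate such unit files for you via `podman generate systemd --new --name myapp`. The sketch below is a trimmed illustration of that style of unit; the exact output varies by Podman version, and the image name is hypothetical:

```ini
# Trimmed sketch of a unit in the style emitted by
# `podman generate systemd --new --name myapp` (exact output varies by version)
[Unit]
Description=Podman container-myapp.service
Wants=network-online.target
After=network-online.target

[Service]
Restart=on-failure
# --cidfile lets systemd track the container across restarts
ExecStart=/usr/bin/podman run --cidfile=%t/%n.ctr-id --rm -d --name myapp docker.io/library/myapp:latest
ExecStop=/usr/bin/podman stop --cidfile=%t/%n.ctr-id myapp
Type=forking

[Install]
WantedBy=default.target
```

Once installed, the app is started, stopped, and enabled with plain `systemctl`, like any native service.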
The cost depends on how you face it.
If you just add it as an additional packaging option, then the process of deprecating the old system can take as long as you need. There should be no problem with both coexisting (I still don’t know the inner details of YunoHost packaging; I’m just theorizing here).
The exact bytes you test are the ones deployed to your users. No more lib conflicts, no more broken upgrades…
Updates are atomic, per app.
Can roll back.
This packaging format is very widely known.
Not tied to one distro. Distro choice can then be a guideline or project requirement, but not a technical one.
The previous point would help with upgrades. I see you’re still on Debian 9 (which doesn’t run on my Raspberry Pi 4, BTW). I’m just guessing here, but maybe the difference between Debian 9 and 10 packages is delaying this update? The more apps you containerize, the more independent you become of the underlying host system, and the faster you can support a new Debian version.
More secure: a vulnerable app could be isolated so it doesn’t affect the others.
First of all, sorry if my thread reads to you like a “you’re doing it wrong”. That’s not my intention at all. I think you did a great job! I’m just asking to see how this could be improved.
This is probably the one big problem. I’m facing it at work too, where some folks refuse to start using containers no matter how simple you make them. They just don’t feel at home.
It is probably more a psychological aspect of this kind of change than a technical one.
Of course, you guys probably don’t have a lot of time to invest in changing the roots of a project. However, it’s also true that using a more modern technology stack would probably attract more young contributors.
I’ve been looking around a lot, and YunoHost seems to be the best anti-GAFAM project available today.
I agree that sometimes it’s easier to start something new than migrate something old. Hard to balance indeed.
If we abstract away the packaging and deployment system, there are at least two pieces of YunoHost that could still be usable in that hypothetical future replacement project: its UI and its SSO. Are those pieces reusable outside of YunoHost? Is there any other piece I’m missing?