Collaborative text editor problems

LDAP authentication is handled by the MyPads plugin, which manages accounts.

I thought the MyPads plugin developed by Framasoft had its own database, since there is indeed an SQL database for etherpad_mypads in my version of the application, and I never find the users under their YunoHost logins. So something seems off. Either I have an old version of MyPads, or it's buggy.

LDAP support was added recently: https://github.com/YunoHost-Apps/etherpad_mypads_ynh/commit/51ddcd64d68398018e3af6637339e2c20c4dc9a7

When upgrading, however, LDAP support is not added. MyPads manages its users either through LDAP or on its own, but not both at once.

OK, indeed, that's the feature addition I was waiting for, and it makes perfect sense for the YunoHost project and for Framasoft's Dégooglisons Internet and CHATONS projects.
I'm delighted, and my friends at chantierlibre will be too. I'll look into retrieving the pads, then remove the app and reinstall it. Thanks for the info. Live long and prosper, Maniack :vulcan_salute:

You could also try migrating, after making a backup of the app first.
It's just an option added to the Etherpad config.

I'll give it a try; it's true that it's simpler. My instance runs in an LXC container: one snapshot, and if it doesn't work I roll back and try another way. :slight_smile:

If it's an LXC, then go for it!
To enable LDAP, simply remove the //noldap comment marker at the beginning of each relevant line in the file /var/www/etherpad_mypads/settings.json

Then restart the etherpad service:

sudo systemctl restart etherpad_mypads
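The two steps above can be sketched as a short shell sequence (a sketch only, assuming the //noldap marker sits at the very start of each commented-out LDAP line, as described):

```shell
# Back up the current config before touching it
sudo cp /var/www/etherpad_mypads/settings.json /var/www/etherpad_mypads/settings.json.bak

# Strip the leading "//noldap" marker from every line that carries it,
# preserving the line's indentation
sudo sed -i 's|^\([[:space:]]*\)//noldap|\1|' /var/www/etherpad_mypads/settings.json

# Restart Etherpad so the LDAP settings take effect
sudo systemctl restart etherpad_mypads
```

If anything goes wrong, restoring the .bak file and restarting the service again gets you back to the previous state.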

Hello,

Sorry, I was quite busy these last few days; I'm only getting back to this today.

Here is a screenshot of the journalctl -u nginx command:

As for the line whose path is cut off, here is the full path:
" /etc/nginx/conf.d/collabora.ingenieurtest.nohost.me/collabora.conf:1 "

I still can't figure out what is blocking the installation of Collabora Online for YunoHost :frowning:

Does this ring a bell?

It seems there is a 'server' directive on the first line of the config, and that doesn't pass.

I tried to follow the path to reach the file, but it seems the file "collabora.ingenieurtest.nohost.me" doesn't exist…
Is there something I need to do?

It's in collabora.ingenieurtest.nohost.me.d :wink:

Apparently there is no file in "collabora.ingenieurtest.nohost.me.d"…
But in the install logs, it deletes that file just before:

It's quite strange…
I went back to the git repo of "collabora for yunohost" ( GitHub - YunoHost-Apps/collabora_ynh: Collabora package for YunoHost ).
Apparently the project fails to build. Can we still use it anyway?

Thanks in advance for your help.

Hello

Yes, this app is broken and unfixable, because the Debian packages provided by Collabora don't work.

There is this working Collabora app: https://github.com/aymhce/collaboradocker_ynh. To install it, scroll down to the bottom of the applications page in the admin interface and paste this URL into "Install a custom application".
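If you prefer the command line, the same install can also be done over SSH with the yunohost CLI, which accepts a git repository URL (a sketch; the web UI route above works just as well):

```shell
# Install a custom YunoHost app directly from its git repository
sudo yunohost app install https://github.com/aymhce/collaboradocker_ynh
```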

Hello rafi59!
Thanks for the clarification ^^ about Collabora for YunoHost :slight_smile:
So I tested Collabora Docker, but once again the installation never finishes; it just spins forever.
Here are the install logs:

The SSOwat configuration has been generated

The SSOwat configuration has been generated

+ yunohost app ssowatconf

+ dockerapp_ynh_reloadservices

+ sudo systemctl reload nginx

+ echo '/etc/nginx/conf.d/collabora.ingenieurtest.nohost.me.d/collaboradocker.conf wasn'\''t deleted because it doesn'\''t exist.'

+ '[' -e /etc/nginx/conf.d/collabora.ingenieurtest.nohost.me.d/collaboradocker.conf ']'

+ [[ f = \/ ]]

+ [[ /etc/nginx/conf.d/collabora.ingenieurtest.nohost.me.d/collaboradocker.conf =~ ^/[[:alnum:]]+$ ]]

+ [[ /var/www /home/yunohost.app =~ /etc/nginx/conf\.d/collabora\.ingenieurtest\.nohost\.me\.d/collaboradocker\.conf ]]

+ local 'forbidden_path= /var/www /home/yunohost.app'

/etc/nginx/conf.d/collabora.ingenieurtest.nohost.me.d/collaboradocker.conf wasn't deleted because it doesn't exist.

+ local path_to_remove=/etc/nginx/conf.d/collabora.ingenieurtest.nohost.me.d/collaboradocker.conf

+ ynh_secure_remove /etc/nginx/conf.d/collabora.ingenieurtest.nohost.me.d/collaboradocker.conf

+ ynh_remove_nginx_config

+ sudo rm -R /home/yunohost.docker/collaboradocker

+ '[' -e /home/yunohost.docker/collaboradocker ']'

+ [[ r = \/ ]]

+ [[ /home/yunohost.docker/collaboradocker =~ ^/[[:alnum:]]+$ ]]

+ [[ /var/www /home/yunohost.app =~ /home/yunohost\.docker/collaboradocker ]]

+ local 'forbidden_path= /var/www /home/yunohost.app'

+ local path_to_remove=/home/yunohost.docker/collaboradocker

+ ynh_secure_remove /home/yunohost.docker/collaboradocker

1

+ bash docker/rm.sh

+ dockerapp_ynh_rm

+ incontainer=1

+ export incontainer=1

++ echo 1

++ '[' -f /.dockerenv ']'

++ dockerapp_ynh_incontainer

+ architecture=amd64

+ export architecture=amd64

++ dpkg --print-architecture

+ path_url=/

+ '[' 31000 == '' ']'

+ port=31000

+ export port=31000

++ sudo yunohost app setting collaboradocker port --output-as plain --quiet

++ ynh_app_setting_get collaboradocker port

+ data_path=/home/yunohost.docker/collaboradocker

+ export data_path=/home/yunohost.docker/collaboradocker

+ domain=collabora.ingenieurtest.nohost.me

+ export domain=collabora.ingenieurtest.nohost.me

+ app=collaboradocker

+ export app=collaboradocker

+ dockerapp_ynh_loadvariables

+ domain=collabora.ingenieurtest.nohost.me

++ sudo yunohost app setting collaboradocker domain --output-as plain --quiet

++ ynh_app_setting_get collaboradocker domain

+ app=collaboradocker

++ . /usr/share/yunohost/helpers.d/utils

++ '[' -r /usr/share/yunohost/helpers.d/utils ']'

++ for helper in '$(run-parts --list /usr/share/yunohost/helpers.d 2>/dev/null)'

++ . /usr/share/yunohost/helpers.d/user

++ '[' -r /usr/share/yunohost/helpers.d/user ']'

++ for helper in '$(run-parts --list /usr/share/yunohost/helpers.d 2>/dev/null)'

++ . /usr/share/yunohost/helpers.d/system

++ '[' -r /usr/share/yunohost/helpers.d/system ']'

++ for helper in '$(run-parts --list /usr/share/yunohost/helpers.d 2>/dev/null)'

++ . /usr/share/yunohost/helpers.d/string

++ '[' -r /usr/share/yunohost/helpers.d/string ']'

++ for helper in '$(run-parts --list /usr/share/yunohost/helpers.d 2>/dev/null)'

++ . /usr/share/yunohost/helpers.d/setting

++ '[' -r /usr/share/yunohost/helpers.d/setting ']'

++ for helper in '$(run-parts --list /usr/share/yunohost/helpers.d 2>/dev/null)'

++ . /usr/share/yunohost/helpers.d/print

++ '[' -r /usr/share/yunohost/helpers.d/print ']'

++ for helper in '$(run-parts --list /usr/share/yunohost/helpers.d 2>/dev/null)'

++ . /usr/share/yunohost/helpers.d/package

++ '[' -r /usr/share/yunohost/helpers.d/package ']'

++ for helper in '$(run-parts --list /usr/share/yunohost/helpers.d 2>/dev/null)'

++ . /usr/share/yunohost/helpers.d/network

++ '[' -r /usr/share/yunohost/helpers.d/network ']'

++ for helper in '$(run-parts --list /usr/share/yunohost/helpers.d 2>/dev/null)'

+++ MYSQL_ROOT_PWD_FILE=/etc/yunohost/mysql

++ . /usr/share/yunohost/helpers.d/mysql

++ '[' -r /usr/share/yunohost/helpers.d/mysql ']'

++ for helper in '$(run-parts --list /usr/share/yunohost/helpers.d 2>/dev/null)'

++ . /usr/share/yunohost/helpers.d/ip

++ '[' -r /usr/share/yunohost/helpers.d/ip ']'

++ for helper in '$(run-parts --list /usr/share/yunohost/helpers.d 2>/dev/null)'

+++ CAN_BIND=1

++ . /usr/share/yunohost/helpers.d/filesystem

++ '[' -r /usr/share/yunohost/helpers.d/filesystem ']'

++ for helper in '$(run-parts --list /usr/share/yunohost/helpers.d 2>/dev/null)'

++ . /usr/share/yunohost/helpers.d/backend

++ '[' -r /usr/share/yunohost/helpers.d/backend ']'

++ for helper in '$(run-parts --list /usr/share/yunohost/helpers.d 2>/dev/null)'

+++ run-parts --list /usr/share/yunohost/helpers.d

+ source /usr/share/yunohost/helpers

+ source _common.sh

Running script « /var/cache/yunohost/from_file/collaboradocker_ynh-master/scripts/remove »...

+ exit 1

+ echo ''

!!

collaboradocker's script has encountered an error. Its execution was cancelled.

!!

+ ynh_die

+ type -t ynh_clean_setup

+ echo -e '!!\n collaboradocker'\''s script has encountered an error. Its execution was cancelled.\n!!'

+ set +eu

+ trap '' EXIT

+ '[' 1 -eq 0 ']'

+ local exit_code=1

+ ynh_exit_properly

+ ret=

Sorry ! Collabora don't work with 'aufs' Docker driver, please change it (ex : edit /etc/docker/daemon.json with { "storage-driver": "overlay" }. Next, restart docker engine service) (see more on : https://docs.docker.com/storage/storagedriver/select-storage-driver/)

++ bash docker/run.sh

+ dockerapp_ynh_run

+ cp -rf ../conf/app /home/yunohost.docker/collaboradocker

+ mkdir -p /home/yunohost.docker/collaboradocker

+ dockerapp_ynh_copyconf

+ bash docker/_specificvariablesapp.sh

++ grep -rl YNH_PATH ../conf/.

+ dockerapp_ynh_findreplace ../conf/. YNH_PATH /

+ dockerapp_ynh_findreplacepath YNH_PATH /

++ grep -rl YNH_PORT ../conf/.

+ dockerapp_ynh_findreplace ../conf/. YNH_PORT 31000

+ dockerapp_ynh_findreplacepath YNH_PORT 31000

++ grep -rl YNH_DATA ../conf/.

+ dockerapp_ynh_findreplace ../conf/. YNH_DATA /home/yunohost.docker/collaboradocker

+ dockerapp_ynh_findreplacepath YNH_DATA /home/yunohost.docker/collaboradocker

++ grep -rl YNH_APP ../conf/.

+ dockerapp_ynh_findreplace ../conf/. YNH_APP collaboradocker

+ dockerapp_ynh_findreplacepath YNH_APP collaboradocker

+ dockerapp_ynh_findreplaceallvaribles

+ sudo yunohost app register-url collaboradocker collabora.ingenieurtest.nohost.me /

+ local path=/

+ local domain=collabora.ingenieurtest.nohost.me

+ local app=collaboradocker

+ ynh_webpath_register collaboradocker collabora.ingenieurtest.nohost.me /

True

+ sudo yunohost domain url-available collabora.ingenieurtest.nohost.me /

+ local path=/

+ local domain=collabora.ingenieurtest.nohost.me

+ ynh_webpath_available collabora.ingenieurtest.nohost.me /

+ incontainer=1

+ export incontainer=1

++ echo 1

++ '[' -f /.dockerenv ']'

++ dockerapp_ynh_incontainer

+ architecture=amd64

+ export architecture=amd64

++ dpkg --print-architecture

+ path_url=/

+ sudo yunohost app setting collaboradocker port --value=31000 --quiet

+ ynh_app_setting_set collaboradocker port 31000

+ port=31000

++ echo 31000

++ netcat -z 127.0.0.1 31000

++ test -n 31000

++ local port=31000

++ ynh_find_port 31000

+ '[' '' == '' ']'

+ port=

+ export port=

++ sudo yunohost app setting collaboradocker port --output-as plain --quiet

++ ynh_app_setting_get collaboradocker port

+ data_path=/home/yunohost.docker/collaboradocker

+ export data_path=/home/yunohost.docker/collaboradocker

+ domain=collabora.ingenieurtest.nohost.me

+ export domain=collabora.ingenieurtest.nohost.me

+ app=collaboradocker

+ export app=collaboradocker

+ dockerapp_ynh_loadvariables

+ '[' 0 '!=' 0 ']'

+ '[' 0 == 127 ']'

+ incontainer=1

++ echo 1

++ '[' -f /.dockerenv ']'

++ dockerapp_ynh_incontainer

+ ret=0

++ sh _dockertest.sh

+ dockerapp_ynh_checkinstalldocker

+ trap ynh_exit_properly EXIT

+ set -eu

+ ynh_abort_if_errors

++ . /usr/share/yunohost/helpers.d/utils

++ '[' -r /usr/share/yunohost/helpers.d/utils ']'

++ for helper in '$(run-parts --list /usr/share/yunohost/helpers.d 2>/dev/null)'

++ . /usr/share/yunohost/helpers.d/user

++ '[' -r /usr/share/yunohost/helpers.d/user ']'

++ for helper in '$(run-parts --list /usr/share/yunohost/helpers.d 2>/dev/null)'

++ . /usr/share/yunohost/helpers.d/system

++ '[' -r /usr/share/yunohost/helpers.d/system ']'

++ for helper in '$(run-parts --list /usr/share/yunohost/helpers.d 2>/dev/null)'

++ . /usr/share/yunohost/helpers.d/string

++ '[' -r /usr/share/yunohost/helpers.d/string ']'

++ for helper in '$(run-parts --list /usr/share/yunohost/helpers.d 2>/dev/null)'

++ . /usr/share/yunohost/helpers.d/setting

++ '[' -r /usr/share/yunohost/helpers.d/setting ']'

++ for helper in '$(run-parts --list /usr/share/yunohost/helpers.d 2>/dev/null)'

++ . /usr/share/yunohost/helpers.d/print

++ '[' -r /usr/share/yunohost/helpers.d/print ']'

++ for helper in '$(run-parts --list /usr/share/yunohost/helpers.d 2>/dev/null)'

++ . /usr/share/yunohost/helpers.d/package

++ '[' -r /usr/share/yunohost/helpers.d/package ']'

++ for helper in '$(run-parts --list /usr/share/yunohost/helpers.d 2>/dev/null)'

++ . /usr/share/yunohost/helpers.d/network

++ '[' -r /usr/share/yunohost/helpers.d/network ']'

++ for helper in '$(run-parts --list /usr/share/yunohost/helpers.d 2>/dev/null)'

+++ MYSQL_ROOT_PWD_FILE=/etc/yunohost/mysql

++ . /usr/share/yunohost/helpers.d/mysql

++ '[' -r /usr/share/yunohost/helpers.d/mysql ']'

++ for helper in '$(run-parts --list /usr/share/yunohost/helpers.d 2>/dev/null)'

++ . /usr/share/yunohost/helpers.d/ip

++ '[' -r /usr/share/yunohost/helpers.d/ip ']'

++ for helper in '$(run-parts --list /usr/share/yunohost/helpers.d 2>/dev/null)'

+++ CAN_BIND=1

++ . /usr/share/yunohost/helpers.d/filesystem

++ '[' -r /usr/share/yunohost/helpers.d/filesystem ']'

++ for helper in '$(run-parts --list /usr/share/yunohost/helpers.d 2>/dev/null)'

++ . /usr/share/yunohost/helpers.d/backend

++ '[' -r /usr/share/yunohost/helpers.d/backend ']'

++ for helper in '$(run-parts --list /usr/share/yunohost/helpers.d 2>/dev/null)'

+++ run-parts --list /usr/share/yunohost/helpers.d

+ source /usr/share/yunohost/helpers

+ source _common.sh

+ domain=collabora.ingenieurtest.nohost.me

+ app=collaboradocker

Running script « /var/cache/yunohost/from_file/collaboradocker_ynh-master/scripts/install »...

Checking required packages for collaboradocker...

Done

Extracting...

Downloading...

I did notice the line:

Sorry ! Collabora don't work with 'aufs' Docker driver, please change it (ex : edit /etc/docker/daemon.json with { "storage-driver": "overlay" }. Next, restart docker engine service) (see more on : https://docs.docker.com/storage/storagedriver/select-storage-driver/)

But I don't really understand what to do…
I looked in "/etc/docker/" and there is a "key.json" file, but nothing resembling "storage-driver" in that file…
Does this ring a bell for anyone?

Yes, to fix that you just need to edit /etc/docker/daemon.json with sudo nano /etc/docker/daemon.json and replace
"storage-driver": "aufs" with "storage-driver": "overlay"
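If the file has no storage-driver entry yet (or does not exist at all), the whole file can be as small as this, assuming no other daemon options are needed:

```json
{
  "storage-driver": "overlay"
}
```

After saving, restart the engine with sudo systemctl restart docker.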

Sorry ! Collabora don't work with 'aufs' Docker driver, please change it (ex : edit /etc/docker/daemon.json with { "storage-driver": "overlay" }. Next, restart docker engine service) (see more on : https://docs.docker.com/storage/storagedriver/select-storage-driver)

Hmm… I wish I could…
In the /etc/docker/ folder I have only one file, "key.json".
So no trace of "daemon.json" :frowning:
Do I have to create it myself?

Hi, I have the same problem with Collabora and daemon.json not existing, just today with everything freshly updated.

I SSH’ed into my yunohost server and then tried (taken from here):

echo 'DOCKER_OPTS="--config-file=/etc/docker/daemon.json"' > /etc/default/docker

and created daemon.json by doing:

sudo nano /etc/docker/daemon.json

and pasting into it (see this docker post):

{
"storage-driver": "overlay"
}

Then I stopped it (sudo systemctl stop docker), but when restarting Docker (trying both sudo systemctl start docker and systemctl restart docker.service) it gives the following error:

Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xe" for details.

If I try docker info I get:

Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

If I try journalctl -xe then I get some repetitions of:

-- Unit docker.service has failed.
-- 
-- The result is failed.
Apr 18 21:21:41 ihost.mydomain.nl systemd[1]: docker.service: Unit entered failed state.
Apr 18 21:21:41 ihost.mydomain.nl systemd[1]: docker.service: Failed with result 'exit-code'.
Apr 18 21:21:43 ihost.mydomain.nl systemd[1]: docker.service: Service hold-off time over, scheduling restart.
Apr 18 21:21:43 ihost.mydomain.nl systemd[1]: Stopped Docker Application Container Engine.
-- Subject: Unit docker.service has finished shutting down
-- Defined-By: systemd
-- Support: https://www.debian.org/support
-- 
-- Unit docker.service has finished shutting down.
Apr 18 21:21:43 ihost.mydomain.nl systemd[1]: docker.service: Start request repeated too quickly.
Apr 18 21:21:43 ihost.mydomain.nl systemd[1]: Failed to start Docker Application Container Engine.
-- Subject: Unit docker.service has failed
-- Defined-By: systemd
-- Support: https://www.debian.org/support

I would really appreciate it if somebody could help get this issue understood and fixed, because Collabora is essential for creating the kind of self-hosted Google Drive-like solution that I think many people are interested in :smiley:

Some more information about docker not working comes from the system log:

Apr 28 21:12:58 ihost slapd[896]: <= mdb_equality_candidates: (cn) not indexed
Apr 28 21:12:58 ihost slapd[896]: <= mdb_equality_candidates: (sudoUser) not indexed
Apr 28 21:12:58 ihost slapd[896]: <= mdb_equality_candidates: (sudoUser) not indexed
Apr 28 21:12:58 ihost slapd[896]: <= mdb_equality_candidates: (sudoUser) not indexed
Apr 28 21:12:58 ihost slapd[896]: <= mdb_equality_candidates: (sudoUser) not indexed
Apr 28 21:12:58 ihost slapd[896]: <= mdb_substring_candidates: (sudoUser) not indexed
Apr 28 21:12:58 ihost systemd[1]: Starting Docker Application Container Engine...
Apr 28 21:12:58 ihost dockerd[20663]: time="2019-04-28T21:12:58.687074550Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Apr 28 21:12:58 ihost dockerd[20663]: time="2019-04-28T21:12:58.687282882Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Apr 28 21:12:58 ihost dockerd[20663]: time="2019-04-28T21:12:58.687499494Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Apr 28 21:12:58 ihost dockerd[20663]: time="2019-04-28T21:12:58.687550275Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Apr 28 21:12:58 ihost dockerd[20663]: time="2019-04-28T21:12:58.688008657Z" level=info msg="ccResolverWrapper: sending new addresses to cc: [{unix:///run/containerd/containerd.sock 0  <nil>}]" module=grpc
Apr 28 21:12:58 ihost dockerd[20663]: time="2019-04-28T21:12:58.688137405Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Apr 28 21:12:58 ihost dockerd[20663]: time="2019-04-28T21:12:58.688325529Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0x13f5c030, CONNECTING" module=grpc
Apr 28 21:12:58 ihost dockerd[20663]: time="2019-04-28T21:12:58.689254010Z" level=info msg="ccResolverWrapper: sending new addresses to cc: [{unix:///run/containerd/containerd.sock 0  <nil>}]" module=grpc
Apr 28 21:12:58 ihost dockerd[20663]: time="2019-04-28T21:12:58.691601438Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Apr 28 21:12:58 ihost dockerd[20663]: time="2019-04-28T21:12:58.695202448Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0x13f5c030, READY" module=grpc
Apr 28 21:12:58 ihost dockerd[20663]: time="2019-04-28T21:12:58.696024628Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0x13f5c220, CONNECTING" module=grpc
Apr 28 21:12:58 ihost dockerd[20663]: time="2019-04-28T21:12:58.695990098Z" level=info msg="blockingPicker: the picked transport is not ready, loop back to repick" module=grpc
Apr 28 21:12:58 ihost dockerd[20663]: time="2019-04-28T21:12:58.697126650Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0x13f5c220, READY" module=grpc
Apr 28 21:12:58 ihost dockerd[20663]: time="2019-04-28T21:12:58.699432151Z" level=error msg="'overlay' not found as a supported filesystem on this host. Please ensure kernel is new enough and has overlay support loaded." storage-driver=overlay2
Apr 28 21:12:58 ihost dockerd[20663]: time="2019-04-28T21:12:58.709376075Z" level=error msg="AUFS was not found in /proc/filesystems" storage-driver=aufs
Apr 28 21:12:58 ihost dockerd[20663]: time="2019-04-28T21:12:58.718460579Z" level=error msg="'overlay' not found as a supported filesystem on this host. Please ensure kernel is new enough and has overlay support loaded." storage-driver=overlay
Apr 28 21:12:58 ihost dockerd[20663]: time="2019-04-28T21:12:58.718627765Z" level=error msg="Failed to built-in GetDriver graph devicemapper /var/lib/docker"
Apr 28 21:12:58 ihost dockerd[20663]: time="2019-04-28T21:12:58.727922632Z" level=info msg="Graph migration to content-addressability took 0.00 seconds"
Apr 28 21:12:58 ihost dockerd[20663]: time="2019-04-28T21:12:58.728541637Z" level=warning msg="Your kernel does not support cgroup memory limit"
Apr 28 21:12:58 ihost dockerd[20663]: time="2019-04-28T21:12:58.728661792Z" level=warning msg="Your kernel does not support cgroup cfs period"
Apr 28 21:12:58 ihost dockerd[20663]: time="2019-04-28T21:12:58.728722417Z" level=warning msg="Your kernel does not support cgroup cfs quotas"
Apr 28 21:12:58 ihost dockerd[20663]: time="2019-04-28T21:12:58.728784187Z" level=warning msg="Your kernel does not support cgroup rt period"
Apr 28 21:12:58 ihost dockerd[20663]: time="2019-04-28T21:12:58.728843718Z" level=warning msg="Your kernel does not support cgroup rt runtime"
Apr 28 21:12:58 ihost dockerd[20663]: time="2019-04-28T21:12:58.729320433Z" level=warning msg="mountpoint for pids not found"
Apr 28 21:12:58 ihost dockerd[20663]: time="2019-04-28T21:12:58.730192977Z" level=info msg="Loading containers: start."
Apr 28 21:12:58 ihost dockerd[20663]: time="2019-04-28T21:12:58.740978612Z" level=warning msg="Running modprobe bridge br_netfilter failed with message: modprobe: ERROR: ../libkmod/libkmod.c:586 kmod_search_moddep() could not open moddep file '/lib/modules/4.14.79-v7+/modules.dep.bin'\nmodprobe: WARNING: Module bridge not found in directory /lib/modules/4.14.79-v7+\nmodprobe: ERROR: ../libkmod/libkmod.c:586 kmod_search_moddep() could not open moddep file '/lib/modules/4.14.79-v7+/modules.dep.bin'\nmodprobe: WARNING: Module br_netfilter not found in directory /lib/modules/4.14.79-v7+\n, error: exit status 1"
Apr 28 21:12:58 ihost dockerd[20663]: time="2019-04-28T21:12:58.750277438Z" level=warning msg="Running modprobe nf_nat failed with message: `modprobe: ERROR: ../libkmod/libkmod.c:586 kmod_search_moddep() could not open moddep file '/lib/modules/4.14.79-v7+/modules.dep.bin'\nmodprobe: WARNING: Module nf_nat not found in directory /lib/modules/4.14.79-v7+`, error: exit status 1"
Apr 28 21:12:58 ihost dockerd[20663]: time="2019-04-28T21:12:58.759451525Z" level=warning msg="Running modprobe xt_conntrack failed with message: `modprobe: ERROR: ../libkmod/libkmod.c:586 kmod_search_moddep() could not open moddep file '/lib/modules/4.14.79-v7+/modules.dep.bin'\nmodprobe: WARNING: Module xt_conntrack not found in directory /lib/modules/4.14.79-v7+`, error: exit status 1"
Apr 28 21:12:59 ihost dockerd[20663]: time="2019-04-28T21:12:59.018141565Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
Apr 28 21:12:59 ihost dockerd[20663]: Error starting daemon: Error initializing network controller: error obtaining controller instance: failed to create NAT chain DOCKER: iptables failed: iptables --wait -t nat -N DOCKER: iptables v1.6.0: can't initialize iptables table `nat': Table does not exist (do you need to insmod?)
Apr 28 21:12:59 ihost dockerd[20663]: Perhaps iptables or your kernel needs to be upgraded.
Apr 28 21:12:59 ihost dockerd[20663]:  (exit status 3)
Apr 28 21:12:59 ihost systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Apr 28 21:12:59 ihost systemd[1]: Failed to start Docker Application Container Engine.
Apr 28 21:12:59 ihost systemd[1]: docker.service: Unit entered failed state.
Apr 28 21:12:59 ihost systemd[1]: docker.service: Failed with result 'exit-code'.
Apr 28 21:13:01 ihost systemd[1]: docker.service: Service hold-off time over, scheduling restart.
Apr 28 21:13:01 ihost systemd[1]: Stopped Docker Application Container Engine.
Apr 28 21:13:01 ihost systemd[1]: Starting Docker Application Container Engine...

and so on…
However, I am totally in the dark as to what is causing the problem. It could be something in the kernel used by YunoHost on the Raspberry Pi 3, something with Docker, or both. Could someone please help?
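For what it's worth, the "'overlay' not found as a supported filesystem" errors in the log suggest the running kernel simply has no overlay support loaded, so changing daemon.json cannot help on its own. A quick diagnostic sketch, assuming shell access on the Pi:

```shell
# The filesystems dockerd can use must be listed here;
# if overlay is absent, neither the overlay nor overlay2 driver can work
grep overlay /proc/filesystems || echo "overlay not supported by this kernel"

# Try loading the module; per the modprobe errors in the log, this will fail
# when /lib/modules has no entry matching the running kernel (4.14.79-v7+)
sudo modprobe overlay
```

If modprobe fails the same way as in the log, the module tree does not match the booted kernel, and a kernel/firmware update followed by a reboot would be the next thing to try.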