root@box:~# yunohost tools upgrade --system
Info: Nothing to do! Everything is already up to date!
Info: Upgrading packages…
Success! The system has been upgraded
And:
root@box:~# apt-get install --no-install-recommends docker-ce
Reading package lists... Done
Building dependency tree
Reading state information... Done
docker-ce is already the newest version (5:19.03.6~3-0~raspbian-stretch).
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
1 not fully installed or removed.
After this operation, 0 B of additional disk space will be used.
Do you want to continue? [Y/n] Y
Setting up docker-ce (5:19.03.6~3-0~raspbian-stretch) ...
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xe" for details.
invoke-rc.d: initscript docker, action "start" failed.
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: activating (auto-restart) (Result: exit-code) since Mon 2020-02-17 13:25:48 GMT; 42ms ago
Docs: https://docs.docker.com
Process: 27757 ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock (code=exited, status=1/FAILURE)
Main PID: 27757 (code=exited, status=1/FAILURE)
CPU: 1.047s
Feb 17 13:25:48 box.sucha.fr systemd[1]: docker.service: Unit entered failed state.
Feb 17 13:25:48 box.sucha.fr systemd[1]: docker.service: Failed with result 'exit-code'.
dpkg: error processing package docker-ce (--configure):
subprocess installed post-installation script returned error exit status 1
Errors were encountered while processing:
docker-ce
E: Sub-process /usr/bin/dpkg returned an error code (1)
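The dpkg error above leaves docker-ce half-configured ("1 not fully installed or removed"). Once the underlying docker.service failure is fixed, a standard recovery sequence (not specific to this thread, just the usual dpkg/apt repair steps) looks like this:

```shell
# Stop any half-started Docker units so the post-install script
# doesn't fight a stuck daemon.
sudo systemctl stop docker.service docker.socket containerd.service

# Re-run the failed post-installation scripts for all pending packages.
sudo dpkg --configure -a

# Let apt repair any remaining broken dependency state.
sudo apt-get install -f

# Check whether the daemon came up this time.
sudo systemctl status docker --no-pager
```

Note that `dpkg --configure -a` will fail again with the same "error code (1)" for as long as dockerd itself refuses to start, so the daemon problem below has to be solved first.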
Feb 17 14:01:24 box.sucha.fr systemd[1]: Starting Docker Application Container Engine...
Feb 17 14:01:24 box.sucha.fr dockerd[4416]: time="2020-02-17T14:01:24.390894615Z" level=info msg="Starting up"
Feb 17 14:01:24 box.sucha.fr dockerd[4416]: time="2020-02-17T14:01:24.406836317Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb 17 14:01:24 box.sucha.fr dockerd[4416]: time="2020-02-17T14:01:24.409158859Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 17 14:01:24 box.sucha.fr dockerd[4416]: time="2020-02-17T14:01:24.411697962Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 <nil>}] <nil>}" module=grpc
Feb 17 14:01:24 box.sucha.fr dockerd[4416]: time="2020-02-17T14:01:24.411990617Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 17 14:01:24 box.sucha.fr dockerd[4416]: time="2020-02-17T14:01:24.425730401Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb 17 14:01:24 box.sucha.fr dockerd[4416]: time="2020-02-17T14:01:24.425961181Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 17 14:01:24 box.sucha.fr dockerd[4416]: time="2020-02-17T14:01:24.426156232Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 <nil>}] <nil>}" module=grpc
Feb 17 14:01:24 box.sucha.fr dockerd[4416]: time="2020-02-17T14:01:24.426566908Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 17 14:01:24 box.sucha.fr dockerd[4416]: time="2020-02-17T14:01:24.460210876Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
Feb 17 14:01:24 box.sucha.fr dockerd[4416]: time="2020-02-17T14:01:24.534452427Z" level=warning msg="Your kernel does not support swap memory limit"
Feb 17 14:01:24 box.sucha.fr dockerd[4416]: time="2020-02-17T14:01:24.534559094Z" level=warning msg="Your kernel does not support cgroup cfs period"
Feb 17 14:01:24 box.sucha.fr dockerd[4416]: time="2020-02-17T14:01:24.534605916Z" level=warning msg="Your kernel does not support cgroup cfs quotas"
Feb 17 14:01:24 box.sucha.fr dockerd[4416]: time="2020-02-17T14:01:24.534652843Z" level=warning msg="Your kernel does not support cgroup rt period"
Feb 17 14:01:24 box.sucha.fr dockerd[4416]: time="2020-02-17T14:01:24.534698156Z" level=warning msg="Your kernel does not support cgroup rt runtime"
Feb 17 14:01:24 box.sucha.fr dockerd[4416]: time="2020-02-17T14:01:24.535535287Z" level=info msg="Loading containers: start."
Feb 17 14:01:25 box.sucha.fr dockerd[4416]: time="2020-02-17T14:01:25.084940238Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
Feb 17 14:01:25 box.sucha.fr dockerd[4416]: failed to start daemon: Error initializing network controller: list bridge addresses failed: PredefinedLocalScopeDefaultNetworks List: [172.17.0.0/16 172.18.0.0/16 172.19.0.0/16 172.20.0.0/16 17
Feb 17 14:01:25 box.sucha.fr systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Feb 17 14:01:25 box.sucha.fr systemd[1]: Failed to start Docker Application Container Engine.
Feb 17 14:01:25 box.sucha.fr systemd[1]: docker.service: Unit entered failed state.
Feb 17 14:01:25 box.sucha.fr systemd[1]: docker.service: Failed with result 'exit-code'.
Feb 17 14:01:27 box.sucha.fr systemd[1]: docker.service: Service hold-off time over, scheduling restart.
Feb 17 14:01:27 box.sucha.fr systemd[1]: Stopped Docker Application Container Engine.
Feb 17 14:01:27 box.sucha.fr systemd[1]: docker.service: Start request repeated too quickly.
Feb 17 14:01:27 box.sucha.fr systemd[1]: Failed to start Docker Application Container Engine.
Feb 17 14:01:27 box.sucha.fr systemd[1]: docker.service: Unit entered failed state.
Feb 17 14:01:27 box.sucha.fr systemd[1]: docker.service: Failed with result 'exit-code'.
Could it come from this line?
Feb 17 14:01:25 box.sucha.fr dockerd[4416]: failed to start daemon: Error initializing network controller: list bridge addresses failed: PredefinedLocalScopeDefaultNetworks List: [172.17.0.0/16 172.18.0.0/16 172.19.0.0/16 172.20.0.0/16 17
The YunoHost is connected to a VPN server (like La Brique Internet).
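If the VPN tunnel already claims routes inside 172.16.0.0/12, dockerd can fail exactly like this while scanning its predefined local pools for a free bridge subnet. One possible workaround (a sketch, assuming 192.168.100.0/24 is unused on this host — pick any range that doesn't collide with the VPN or the LAN) is to pin the default bridge in /etc/docker/daemon.json:

```shell
# First, see which subnets are already taken by the VPN and other interfaces:
ip -4 route show

# Then pin the Docker bridge to an unused range via the "bip" option.
# 192.168.100.1/24 is an example value, not a recommendation.
sudo mkdir -p /etc/docker
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "bip": "192.168.100.1/24"
}
EOF
sudo systemctl restart docker
```

`bip` only covers the default bridge; user-defined networks would still need non-overlapping subnets chosen by hand.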
Hi Mosi,
Well, I just thought of another solution: purchasing a new Raspberry-Pi-like computer in order to get the Mopidy app working.
And it’s quite alright.
Let’s wait and see how things go.
Hi @charly - I have also been trying to find an integrated Docker (or similar) solution that would easily work with my existing self-hosted Yuno instance without destroying what I have already created.
Unfortunately, I haven’t had any success with the things I’ve been trying, and it doesn’t help that I just don’t really understand Docker despite reading everything I can find. Even though we aren’t running on the same hardware or server OS, I have been seeing issues and error messages similar to the ones you posted.
The good news is I think I might know why you were getting the E: Sub-process /usr/bin/dpkg returned an error code (1) warning, and I think I’ve figured out at least a hacky workaround on my server for the failed-daemon issue, so maybe it will work for you.
This is mostly a guess, but I think the /usr/bin/dpkg sub-process error is because the Portainer install scripts need to be updated. I read something last night that I now can’t find which basically said there is an issue between Debian Buster and the stable release from the official Docker repository, and that the workaround is to load the edge repository and use that build. There was also some packaging change, going from docker-ce to docker-ce-cli. If I can find it again, I will update the post. I was getting loads of install errors using the build from the stable channel, and when I switched to the build from the edge channel they basically vanished, at least for the install portion.
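Switching channels amounts to editing the Docker apt source. A sketch, assuming the source lives in /etc/apt/sources.list.d/docker.list (the path and file name may differ on your system):

```shell
# Replace the "stable" channel with "edge" in the Docker apt source line.
# Inspect the file first; this sed assumes the channel is the last word.
sudo sed -i 's/ stable$/ edge/' /etc/apt/sources.list.d/docker.list

sudo apt-get update
sudo apt-get install --no-install-recommends docker-ce docker-ce-cli
```

On newer releases the "edge" channel has been folded into "test"/"nightly", so check which channels the repository actually publishes for your distribution before pointing apt at one.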
The Docker daemon not starting has been equally frustrating and filled with non-stop errors. Another guess, but I think in your case @ljf already nailed it: since you confirmed you are running a VPN service on the YunoHost, it is known that VPN services can cause this issue. Have you tried stopping all VPN services while you install Docker and create containers? I would give that a shot and see if it works. My issue isn’t VPN-related, because I am pretty confident my server is not running any VPN service. My issue is that Docker cannot create a new bridge network because it claims it cannot find any non-overlapping IPv4 address pools. I have managed to get around that by manually specifying the network I want to use: the subnet (CIDR) and gateway.
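For anyone hitting the same "could not find an available, non-overlapping IPv4 address pool" error, the manual workaround described above looks like this (a sketch — 10.99.0.0/24 and the network name "mynet" are placeholders; pick a range that doesn't overlap the VPN or any existing interface):

```shell
# Create a bridge network with an explicit subnet and gateway instead of
# letting Docker pick one from its (exhausted/overlapping) default pools.
docker network create \
  --driver bridge \
  --subnet 10.99.0.0/24 \
  --gateway 10.99.0.1 \
  mynet

# Attach containers to it explicitly:
docker run --rm --network mynet alpine ip addr
```

The same `--subnet`/`--gateway` pair can also be set per-network in a docker-compose file under the `ipam` key.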
Let me know if you have any luck, and if anyone has feedback, please feel free to speak up.