[SOLVED] [diagnosis] ipv6 only: Can't run diagnosis for XXX

My YunoHost server

Hardware: LXC container, host is a NUC
YunoHost version: 4.2.8.3
I have access to my server: As root, through lxc-attach
Are you in a special context or did you perform some particular tweaking on your YunoHost instance?: yes
If yes, please explain:

See below.

Description of my issue

Hello everyone,

I am trying to migrate a YunoHost instance from an Internet Cube to a server hosting several LXC containers, using IPv6 only.

The server has one IPv4 address and a block of IPv6 addresses.
I have another YunoHost already installed in a container.
Its IPv4 is NATed and one of the IPv6 addresses is used.
This first YunoHost works fine.

Now, the other YunoHost, the one I am trying to migrate, has IPv6 only.
Installation and backup restoration worked fine.
I face an issue when I try to run the diagnosis.

First, I have an expected error regarding a missing IPv4 address:

root@hostname:~# yunohost diagnosis show ip --issues
reports:
  description: Internet connectivity
  id: ip
  items:
    status: ERROR
    summary: The server does not have working IPv4

So I just ignored this error:

root@hostname:~# yunohost diagnosis ignore --filter ip test=ipv4
Success! Filter added
root@hostname:~# yunohost diagnosis run ip
Success! Everything looks good for Internet connectivity! (+ 1 ignored issue(s))
Warning: To see the issues found, you can go to the Diagnosis section of the webadmin, or run 'yunohost diagnosis show --issues --human-readable' from the command-line.

Now, here is the issue I am facing: I cannot run the diagnoses for dnsrecords, ports, web and mail, since I do not have an IPv4 address.

root@hostname:~# yunohost diagnosis run dnsrecords
Error: Can't run diagnosis for DNS records while there are important issues related to Internet connectivity.
root@hostname:~# yunohost diagnosis run ports
Error: Can't run diagnosis for Ports exposure while there are important issues related to Internet connectivity.
root@hostname:~# yunohost diagnosis run web
Error: Can't run diagnosis for Web while there are important issues related to Internet connectivity.
root@hostname:~# yunohost diagnosis run mail
Error: Can't run diagnosis for Email while there are important issues related to Internet connectivity.

I believe that I should be able to run these diagnoses with IPv6 only.
So I wanted to come up with a quick fix: remove the dependency on ip for these categories, then ignore the errors related to not having an IPv4.

I tried to look at the code to modify the dependencies, without success.
Here is where I looked, but I did not find any “dependencies” attribute anywhere else:

root@hostname:~# grep '\<dependencies\>' /usr/lib/moulinette/yunohost/diagnosis.py
        for dependency in self.dependencies:

Can anyone briefly describe to me how I can modify the dependencies?

Thank you,

@alb1

YunoHost is not IPv6-only compatible.

For example, GitHub and some dependencies are not reachable over IPv6, so you can’t install 95% of YunoHost apps.

Note that in your case you could improve things by giving a private IPv4 address to your container, in order to download content over IPv4. But YunoHost is not able to correctly manage the situation where you have a private IPv4 and no public IPv4 properly redirected…

Using reverse proxy apps could help too.

About changing the code of the YunoHost diagnosis (or other parts): it’s really difficult to do that just for your instance. I mean, the diagnosis would probably be completely unstable if we deactivated the IPv4 checks…

Note: for the moment, we don’t even have a list of what is broken with IPv6 only…

However, if you have dev skills and want to work on that topic, it’s possible; just keep in mind that it could take some time.

If you want to give it a try, you can open /usr/share/yunohost/hooks/diagnosis/10-ip.py and find this line:
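(Quoting from memory, so the exact code may differ a bit; around 4.2 the ipv4 test yields its result roughly like this:)

# Approximate excerpt from 10-ip.py: the ipv4 test reports ERROR
# when no global IPv4 address is detected.
yield dict(
    meta={"test": "ipv4"},
    data={"global": ipv4, "local": get_local_ip("ipv4")},
    status="SUCCESS" if ipv4 else "ERROR",  # <-- this "ERROR"
    summary="diagnosis_ip_connected_ipv4" if ipv4 else "diagnosis_ip_no_ipv4",
)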

and replace “ERROR” with “WARNING”.

That should allow you to run the other diagnosis items.

I think during the dnsrecords diagnosis you will end up in another issue though, related to this piece of code in yunohost/network.py (at debian/4.2.8.3, YunoHost/yunohost on GitHub), which I’m surprised doesn’t cause more trouble on your server already?

Basically, during some DNS resolutions (in particular in the diagnosis), we force the resolution to use external resolvers (as opposed to the local dnsmasq), and in particular to use only IPv4 resolvers, completely ignoring IPv6 resolvers (which were making the code super slow on non-IPv6 instances). The code could be a little bit smarter though.
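From memory, the filtering amounts to something like this (a simplified sketch, not the exact code from network.py, and the function name here is made up):

# Simplified sketch: read the resolvers that dnsmasq uses and keep only
# the IPv4 ones, since IPv6 resolvers just time out on IPv4-only hosts.
def get_external_ipv4_resolvers(resolv_file="/etc/resolv.dnsmasq.conf"):
    with open(resolv_file) as f:
        entries = [line.split() for line in f if line.startswith("nameserver")]
    resolvers = [words[1] for words in entries if len(words) > 1]
    return [r for r in resolvers if ":" not in r]  # IPv6 addresses contain ':'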

Hello @ljf,

Thank you for your answer.

I understand, it makes perfect sense.

I might follow your advice (giving a private IPv4 address to the container and relying on a reverse proxy) in the coming days and, in case of success, I will document the procedure.

Hello @Aleks,

Thank you too.

Out of curiosity, I experimented with what you suggested.
The diagnoses for ports, web and mail indeed ran correctly.

As for the dnsrecords diagnosis, it suffered from the instabilities @ljf mentioned. Here is the traceback:

Traceback (most recent call last):
  File "/usr/lib/moulinette/yunohost/diagnosis.py", line 198, in diagnosis_run
    code, report = hook_exec(path, args={"force": force}, env=None)
  File "/usr/lib/moulinette/yunohost/hook.py", line 379, in hook_exec
    returncode, returndata = _hook_exec_python(path, args, env, loggers)
  File "/usr/lib/moulinette/yunohost/hook.py", line 490, in _hook_exec_python
    ret = module.main(args, env, loggers)
  File "/usr/share/yunohost/hooks/diagnosis/12-dnsrecords.py", line 265, in main
    return DNSRecordsDiagnoser(args, env, loggers).diagnose()
  File "/usr/lib/moulinette/yunohost/diagnosis.py", line 459, in diagnose
    items = list(self.run())
  File "/usr/share/yunohost/hooks/diagnosis/12-dnsrecords.py", line 33, in run
    domain, domain == main_domain, is_subdomain=is_subdomain
  File "/usr/share/yunohost/hooks/diagnosis/12-dnsrecords.py", line 99, in check_domain
    status = "ERROR" if its_important() else "WARNING"
  File "/usr/share/yunohost/hooks/diagnosis/12-dnsrecords.py", line 93, in its_important
    if results["A:@"] != "OK" or results["AAAA:@"] == "WRONG":
KeyError: 'A:@'
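If I read 12-dnsrecords.py correctly, its_important() accesses results["A:@"] unconditionally, while no A record was checked here since there is no IPv4. I suppose a guard like this would avoid the crash (my own untested sketch, not an official fix):

# Hypothetical guard: treat a missing "A:@" result (the IPv4 check
# never ran) as acceptable instead of raising a KeyError.
def its_important():
    return results.get("A:@", "OK") != "OK" or results.get("AAAA:@") == "WRONG"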

In case the external_resolvers() function was called at some earlier point: it was not slow here. Here is the content of my /etc/resolv.dnsmasq.conf, hoping it may help explain why.

root@hostname:~# cat /etc/resolv.dnsmasq.conf
nameserver 89.233.43.71
nameserver 185.233.100.101
nameserver 80.67.169.12
nameserver 2001:910:800::12
nameserver 194.150.168.168
nameserver 2a0c:e300::101
nameserver 2001:67c:28a4::
nameserver 89.234.141.66
nameserver 91.239.100.100
nameserver 195.160.173.53
nameserver 185.233.100.100
nameserver 2a01:3a0:53:53::
nameserver 2a00:5881:8100:1000::3
nameserver 2001:1608:10:25::9249:d69b
nameserver 2001:910:800::40
nameserver 2001:1608:10:25::1c04:b12f
nameserver 2a0c:e300::100
nameserver 84.200.69.80
nameserver 84.200.70.40
nameserver 80.67.169.40

Again, thanks to both of you.

@alb1

In case anyone is interested in this topic, here is how I solved the issue:

I followed @ljf’s advice and:

  • added a local IPv4 to the container
  • configured a reverse proxy for http and https in a third container

Now the diagnosis can run properly.

This solution makes HTTP/HTTPS access to the container possible over IPv4: it gives the impression that the second YunoHost instance is reachable over IPv4, but this is not true for the other services (mail, XMPP, etc.).

So I’m not sure I will stick with this solution for long… Anyway, here is how I proceeded.

Description of the network for the LXC containers:

The host server has one IPv4 address and a block of IPv6 addresses.

A first YunoHost is installed in a container: its local IPv4 is 192.168.10.3.
Its domain is “first.nohost.me”.
Its IPv4 is NATed for all needed ports except 80 and 443.
It has one IPv6 address.

A second YunoHost is installed in a container: its local IPv4 is 192.168.10.4.
Its domain is “second.nohost.me”.
No port is NATed to its local IPv4.
It has one IPv6 address.

A third container is a Debian with nginx installed: its local IPv4 is 192.168.10.5.
Its IPv4 is NATed for ports 80 and 443.

Configuration of nginx:

The following configuration is achieved in the third container.

Here is the content of the file /etc/nginx/sites-enabled/default:

server {
  listen 80;
  server_name first.nohost.me;
  location / {
    proxy_pass http://192.168.10.3:80/;
    proxy_redirect    off;
    proxy_buffering   off;
    proxy_set_header  Host            $host;
    proxy_set_header  X-Real-IP       $remote_addr;
    proxy_set_header  X-Forwarded-For $proxy_add_x_forwarded_for;
  }
  access_log /var/log/nginx/first.nohost.me_access.log;
  error_log  /var/log/nginx/first.nohost.me_error.log;
}

server {
  listen 80;
  server_name second.nohost.me;
  location / {
    proxy_pass http://192.168.10.4:80/;
    proxy_redirect    off;
    proxy_buffering   off;
    proxy_set_header  Host            $host;
    proxy_set_header  X-Real-IP       $remote_addr;
    proxy_set_header  X-Forwarded-For $proxy_add_x_forwarded_for;
  }
  access_log /var/log/nginx/second.nohost.me_access.log;
  error_log  /var/log/nginx/second.nohost.me_error.log;
}

For HTTPS, I did not want the third container to handle the certificates: each YunoHost instance already has its own certificate, and YunoHost does the hard work.
So I just wanted to proxy the TLS stream to the right container depending on the domain name: nginx can read the server name from the SNI of the TLS ClientHello with ssl_preread, without decrypting anything.

I created a file /etc/nginx/modules-enabled/ssl.conf with this content (on Debian, files matching modules-enabled/*.conf are included at the top level of nginx.conf, so a stream block can live there):

stream {

  # Route the TLS stream based on the server name read from the SNI.
  # The "hostnames" parameter is needed so that *.second.nohost.me is
  # treated as a wildcard rather than a literal string.
  map $ssl_preread_server_name $ssl_name {
    hostnames;
    second.nohost.me    second_backend;
    *.second.nohost.me  second_backend;
    default             default_backend;
  }

  upstream https_second_backend  { server 192.168.10.4:443; }
  upstream https_default_backend { server 192.168.10.3:443; } # first yunohost

  server {
    listen 443;
    proxy_pass https_${ssl_name};
    ssl_preread on;  # read the SNI without terminating TLS
  }

}

In the end, I restarted nginx:

systemctl restart nginx.service
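(Running nginx -t beforehand checks that the configuration parses before the restart.)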

Conclusion:

Once again, this solves the diagnosis issue but gives the impression that the second YunoHost instance is accessible over IPv4, which is only true for HTTP and HTTPS.

It is probably possible to use nginx to configure a reverse proxy for mail and XMPP as well, but that is a story for another time.

Thank you @ljf and @Aleks,

@alb1
