Node Exporter service fails with "code=killed, status=31/SYS"

What type of hardware are you using: VPS bought online
What YunoHost version are you running: 12.0.12
What app is this about: Node Exporter

Describe your issue

Hi, I’ve installed Node Exporter to use with Prometheus to monitor system metrics. I’ve also set up the prometheus.yml file as per the Prometheus docs. However, the Node Exporter service fails with the following error:

Apr 04 12:55:21 systemd[1]: node_exporter.service: Main process exited, code=killed, status=31/SYS
Apr 04 12:55:21 systemd[1]: node_exporter.service: Failed with result 'signal'.

I can’t figure out why this happens. Does it have to do with the installation or something similar? What should I do?

Regards

Share relevant logs or error messages

The log from the relevant service page:

I’ve encountered the same issue. From what I understand, this is due to permission errors, so the systemd service may need some changes. However, I have not tried modifying it yet.
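For what it’s worth, status=31/SYS means the process was killed with SIGSYS, which normally comes from a seccomp SystemCallFilter in the systemd unit rather than from file permissions. A quick way to check whether the shipped unit sets such a filter (assuming the unit is named node_exporter.service, as in the log above):

systemctl cat node_exporter.service | grep -i SystemCallFilter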

I have the same issue

Could you try this?

systemctl stop node_exporter.service
nano /etc/systemd/system/multi-user.target.wants/node_exporter.service

and comment out this line using nano or vim…

#SystemCallFilter=~@clock @debug @module @mount @obsolete @reboot @setuid @swap @cpu-emulation @privileged

Then reload systemd and restart the service like this:

systemctl daemon-reload
systemctl start node_exporter.service && journalctl -fu node_exporter.service
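Note that the file under multi-user.target.wants is usually just a symlink to the packaged unit, so a direct edit may be overwritten on the next app upgrade. A less intrusive alternative is a drop-in override (only a sketch, assuming the unit is still named node_exporter.service): run

systemctl edit node_exporter.service

put the following in the override file that opens (an empty SystemCallFilter= assignment resets any previously configured filter), save, then restart the service as above:

[Service]
SystemCallFilter=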

Another thing I would like to understand: I use the node_exporter app but with an external Prometheus, and in that case the path to scrape the exporter is /metrics. I am not sure about the choice to use the path /api for a local Prometheus…

Perhaps you can tell me…

Hello, could some of you try my version? I have also included a config panel for the case where Prometheus is external.

To try it, use:

yunohost app install https://github.com/YunoHost-Apps/node_exporter_ynh/tree/panel_conf

yunohost app upgrade node_exporter -u https://github.com/YunoHost-Apps/node_exporter_ynh/tree/panel_conf --debug
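Once installed, a quick way to confirm the exporter is actually serving metrics (assuming the default port 9100):

curl -s http://localhost:9100/metrics | head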

Hello @rodinux. I don’t know what to say to that; I don’t use /api. Maybe /api provides some control functions over the Node Exporter, while /metrics is only for gathering the collected metrics.

I’ve installed that branch on a local YunoHost VM instance. The service is up and okay. Below are the logs of the service if you are interested:

Thanks, I also tried it on a server to see how it works. If Prometheus is a local one, I need to edit the Prometheus config like this:

scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: "node"

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
      - targets: ["localhost:9100"]
        # The label name is added as a label `label_name=<label_value>` to any timeseries scraped from this config.
        labels:
          app: "prometheus"
    metrics_path: /metrics
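After editing, the file can be validated and Prometheus restarted; a small sketch, assuming the config lives at /etc/prometheus/prometheus.yml and the service is called prometheus (both the path and the name may differ on a YunoHost install):

promtool check config /etc/prometheus/prometheus.yml
systemctl restart prometheus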

If Prometheus is external, the node’s IPv4 port can listen only to the external IPv4 address; adding a /32 CIDR is more secure, and the node’s IPv6 port is blocked…

It looks like your Prometheus is misconfigured for the nodes. You don’t need to specify the metrics_path for the node exporter. Prometheus is installed with preconfigured settings to target itself:

scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: "prometheus"

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
      - targets: ["localhost:9091"]
    metrics_path: /prometheus/metrics

You shouldn’t change that configuration to add the node exporter. Instead, as per the Prometheus docs, you define a new job_name called node and add the node exporter config below it.

Here is my whole configuration as a reference:

scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: "prometheus"

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
      - targets: ["localhost:9091"]
    metrics_path: /prometheus/metrics
    
  - job_name: node
    static_configs:
      - targets: ["localhost:9100", "<external_node_exporter_domain>:9100"]

As you can see, I defined a separate job_name called node, and in its targets I listed both the internal (localhost:9100) and external (<external_node_exporter_domain>:9100) node exporter endpoints. Then I restarted the Prometheus service.

After restarting Prometheus, navigate to the Prometheus UI and click Status > Target Health in the top menu. You should then see the health status for both your Prometheus and node exporter targets. For example, mine looks like the following:

NOTE: Port 9100 should be unblocked in the YunoHost firewall settings for the connections to succeed.

There are the 2 nodes I configured for the node exporter, plus the pre-configured Prometheus node. They are all up and healthy. Node Exporter + Prometheus + Grafana make a really nice combo for monitoring your systems:
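The same target health can also be checked from the command line through the Prometheus HTTP API; a sketch assuming the instance above on port 9091, and assuming the /prometheus path prefix from the config also applies to the API:

curl -s 'http://localhost:9091/prometheus/api/v1/targets'

Each scrape target is reported with a "health" field set to "up" or "down".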

I have put an iptables rule in place to keep the node port open… If Prometheus is local it is 127.0.0.1:__PORT__, and if it is external, __IP_PROMETHEUS_SERVER__:__PORT__.

But I have blocked the port over IPv6, so the metrics can only be read from the IPv4 address of the Prometheus server…

You can see these rules in /etc/yunohost/hooks.d/post_iptable_rules/50-nodexport
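Roughly, the idea is a pair of rules like these (an illustrative sketch with a placeholder address and port, not the exact content of the hook):

iptables -A INPUT -p tcp -s 203.0.113.10/32 --dport 9100 -j ACCEPT
iptables -A INPUT -p tcp --dport 9100 -j DROP
ip6tables -A INPUT -p tcp --dport 9100 -j DROP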

So if it’s OK, I’ll push my pull request, first on the testing branch. Do you agree?

Oh, sorry @rodinux, it is beyond my knowledge since I am not familiar enough with the YunoHost scripts. Hence I am not able to agree or disagree.

However, I do have a bit of bash programming experience. If you put the port in a variable it should be OK, since port numbers may vary depending on what is already in use on the system. For example, on the system I mentioned in my previous answer, the Prometheus port is assigned as 9091 while its default is 9090.
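For example, something like this (a hypothetical sketch, not taken from the actual packaging scripts; the setting key name is an assumption):

# hypothetical: read the port the app was actually assigned instead of hard-coding it
port=$(yunohost app setting node_exporter port)
echo "node_exporter listens on ${port}"  # 9100 here, but it can differ per system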

Indeed, it’s the same for node_exporter…
