[Streams] 504 timeout (nginx): running out of children

My YunoHost server

Hardware: Lenovo ThinkCentre m720q
YunoHost version: 11.2.9.1 (stable)
I have access to my server : Through SSH | through the webadmin
Are you in a special context or did you perform some particular tweaking on your YunoHost instance ? : no
If yes, please explain:
If your request is related to an app, specify its name and version: Streams 23.12.17~ynh1

Description of my issue

There’s an odd title…!

I installed a new instance of Streams earlier today, and everything was working fine until a couple of hours ago, when I started getting 504 timeout errors from nginx.

Looking in /var/log/php8.2-fpm.log, everything looks clear until it says:

[23-Jan-2024 21:29:01] NOTICE: fpm is running, pid 681
[23-Jan-2024 21:29:01] NOTICE: ready to handle connections
[23-Jan-2024 21:29:01] NOTICE: systemd monitor interval set to 10000ms
[23-Jan-2024 21:34:00] WARNING: [pool streams] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 8 children, there are 2 idle, and 12 total children
[23-Jan-2024 21:34:01] WARNING: [pool streams] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 16 children, there are 1 idle, and 14 total children
[23-Jan-2024 21:34:02] WARNING: [pool streams] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 32 children, there are 3 idle, and 17 total children
[23-Jan-2024 21:34:03] WARNING: [pool streams] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 32 children, there are 3 idle, and 18 total children
[23-Jan-2024 21:34:04] WARNING: [pool streams] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 32 children, there are 0 idle, and 19 total children
[23-Jan-2024 21:34:05] WARNING: [pool streams] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 32 children, there are 0 idle, and 23 total children
[23-Jan-2024 21:34:06] WARNING: [pool streams] server reached pm.max_children setting (24), consider raising it

…and this seems to be without my doing anything.

Could anyone shed any light, please? Could it be because my server was completely “alone” up until this evening but then started discovering other servers to federate with?

I have checked my disk space and RAM using df -h and free -h, and both remain constant throughout.

Checked the log again, and it says:

... # a lot of entries removed
[23-Jan-2024 22:01:00] WARNING: [pool streams] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 32 children, there are 3 idle, and 16 total children
[23-Jan-2024 22:01:16] WARNING: [pool streams] server reached pm.max_children setting (24), consider raising it
[23-Jan-2024 22:14:00] NOTICE: configuration file /etc/php/8.2/fpm/php-fpm.conf test is successful

My site is now viewable again… but is this going to be a recurring issue, and can I do anything about it? I know absolutely nothing about nginx >.<

To answer myself, yes, it will be a recurring issue.
The logs from last night look clear up until about half an hour ago when I got up – it’s as if it knew!

Can you try going into the config panel for the app (in the YunoHost webadmin) and specifying a higher memory footprint? (Start with “high”; if that doesn’t help, select “specific” and set it to e.g. 200 MB.)

Perhaps there’s some queue that needs to be processed but keeps crashing due to OOM.

The config panel is expected to bork the configuration: you’ll have to go to /etc/php/8.2/fpm/conf.d/streams.conf (or something to that effect) and change 7.4 to 8.2 inside, then run sudo systemctl restart php8.2-fpm for it to take effect.
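Something along these lines should do it (just a sketch; the exact path is from memory and may differ on your install):

sudo nano /etc/php/8.2/fpm/conf.d/streams.conf   # change any 7.4 references to 8.2
sudo php-fpm8.2 -t                               # check that the config still parses
sudo systemctl restart php8.2-fpm                # apply it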

Also this seems like an issue worth reporting on Github, although I feel like it was reported already.

Okay. I went to the config panel and tried those options in turn (high, then specific at 200 MB). I didn’t restart anything between each try (I don’t know if I should have?) and there was no change in performance.

I don’t have a /etc/php/8.2/fpm/conf.d/streams.conf file.
The nearest I have is /etc/php/8.2/fpm/pool.d/streams.conf, which, when run through cat, gives:

[streams]

user = streams
group = streams

chdir = /var/www/streams

listen = /var/run/php/php8.2-fpm-streams.sock
listen.owner = www-data
listen.group = www-data

pm = dynamic
pm.max_children = 19
pm.max_requests = 500
request_terminate_timeout = 1d


pm.start_servers = 7
pm.min_spare_servers = 6
pm.max_spare_servers = 9

; Additional php.ini defines, specific to this pool of workers.

php_admin_value[upload_max_filesize] = 50M
php_admin_value[post_max_size] = 50M

I’ll try increasing the “servers” and “children” values and see what happens…
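For reference, this is roughly what those directives do when pm = dynamic, going by the php-fpm documentation rather than anything Streams-specific:

; pm.max_children      - hard cap on simultaneous worker processes for this pool
; pm.start_servers     - number of workers created when the pool starts
; pm.min_spare_servers - if idle workers drop below this, spawn more (hence the "seems busy" warnings)
; pm.max_spare_servers - if idle workers exceed this, kill some off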

Edit: Interestingly, when I try to open it in nano, the terminal says it’s read-only, and the values I see have changed from those above:
pm.max_children is now set to 24.
The three ‘servers’ values are set to 8, 4 and 12 respectively.
When I run it through cat again, those are the values that appear… weird.
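(My guess, not something anyone has confirmed: the read-only message just means nano was opened without write permission on the file, and the changed values came from applying the memory-footprint options in the config panel earlier. Opening it as root lets me edit it:)

sudo nano /etc/php/8.2/fpm/pool.d/streams.conf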

Edit: Doubling the children & servers values, then restarting the php8.2 service, had no effect. So I threw caution to the wind and changed the values to

pm.max_children = 200
pm.start_servers = 100
pm.min_spare_servers = 50
pm.max_spare_servers = 100

…and it has got my site back up and running, but I’m still getting errors in the logs that say that even 200 children isn’t enough!

I still think you should try increasing memory limits via config panel.

Setting it to “high”, restarting the php8.2 service, then checking the logs seems to have no effect (after putting the values in the config file back to what they were).
The same with setting it to “specific”, trying 100 MB, then restarting and checking the logs, then 200 MB and restarting and checking the logs.

For the time being, I have set the memory footprint in the Streams configuration panel to “medium”, and then followed the instructions at the end of this page to work out how many children I should set. Doing the same with the setting on “high”, or something larger via “specific”, seems to come out as broad as it is long.

Before I did this, $ free -h suggested I had 4.7G of RAM free and 5.7G available (which I guess includes swap). Choosing 4.7G as the lower, safer value: 4700 MB divided by ~30 MB per pool/child process (the figure given in the Streams configuration panel for “medium”) ≈ 156 for pm.max_children.
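In shell terms, the back-of-the-envelope sum above looks something like this (the ~30 MB per child is just the figure the Streams config panel quotes for “medium”, not a universal constant):

free -m | awk '/^Mem:/ { printf "suggested pm.max_children: %d\n", $4 / 30 }'
# $4 is the free column of free -m, in MB; $7 would be the (less conservative) available column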
I have set the values in the config file to:

pm.max_children = 150     ; default was 24
pm.start_servers = 15     ; default was 8
pm.min_spare_servers = 10 ; default was 4
pm.max_spare_servers = 20 ; default was 12
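Nothing takes effect until php-fpm is reloaded, so after saving the file I’m running (php-fpm8.2 -t just checks that the file still parses):

sudo php-fpm8.2 -t
sudo systemctl restart php8.2-fpm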

I’m checking $ free -h every so often and for the last half-hour or so things have remained constant. Streams performance has also been good and constant.
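To keep an eye on it without tailing the log, I’m using something like this (just ad-hoc monitoring on my side, nothing from the Streams docs):

# refresh every 10 seconds: memory usage plus the current number of workers in the streams pool
watch -n 10 "free -h; pgrep -cf 'php-fpm: pool streams'"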

Edit: Checked the logs the next morning and the “max children” error only occurred once. All the other entries were the “seems busy” warnings, with the pool averaging around 50-60 children.

I reported it on the Streams-YNH issues page and found someone had already asked about it in the Streams support group a few minutes before I got there.

It seems it’s a common issue for PHP-based Fediverse software and tweaking the config files as above is the solution. At the time of writing, the root cause isn’t clear, but it’s known to be linked to your server making new connections as federation happens.
