Hashtopolis Forum
Agent read timeout due to high ram usage on server - Printable Version

+- Hashtopolis Forum (https://hashtopolis.org)
+-- Forum: Support (https://hashtopolis.org/forum-1.html)
+--- Forum: Problems (https://hashtopolis.org/forum-2.html)
+--- Thread: Agent read timeout due to high ram usage on server (/thread-1813.html)



Agent read timeout due to high ram usage on server - sorin-mihai - 10-20-2022

Hi. If the same or a similar problem has already been discussed in another thread, I apologize; please point me to the correct thread if possible.

I tried multiple setups:
- several server sizes
- exposed directly to the internet and behind Cloudflare (TLS on both legs: at Cloudflare, and between Cloudflare and the server)
- about 100 agents
- Hashtopolis 0.12.0+275
- agents mostly s3-python-0.6.1.15, but some still on s3-python-0.6.0.10

Initially I used Apache with mod_php and PHP 7.4, configs almost default, standard packages from Ubuntu 20.04; then NGINX with php-fpm on the same packages; then NGINX with php-fpm again, configs almost default, but this time with the standard packages from Ubuntu 22.04. While trying Ubuntu 22.04 the database got "upgraded", and I didn't attempt a downgrade. I also moved to a split setup: Apache on one server, the database on another.

The initial setup seemed fine: agents were added one by one, and a few small tasks were tested and completed in the first few days. Then a large brute-force task was started and more agents were added. All seemed fine, but after a while the UI became very slow, so I moved to NGINX with php-fpm, thinking NGINX might be faster.

After reaching about 100 agents and running for over a week, the UI became slow again, taking 15-20 seconds to load the page of the active task where all agents are assigned. The exact same page, with the exact same number of agents, loaded in 2-3 seconds a day before this started. I tried tweaking php-fpm parameters, but no luck. During those tweaking attempts the UI stopped working completely and the agents could no longer connect, while NGINX kept logging 499 errors (client closed the connection before a response was sent, i.e. the agents giving up at their 30-second read timeout).
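For context, the php-fpm pool parameters I was experimenting with were along these lines (the values here are illustrative, not my actual config):

```ini
; /etc/php/7.4/fpm/pool.d/www.conf -- illustrative values only
pm = dynamic
pm.max_children = 64          ; cap on concurrent PHP workers
pm.start_servers = 8
pm.min_spare_servers = 4
pm.max_spare_servers = 16
pm.max_requests = 500         ; recycle workers to bound per-process memory growth
request_terminate_timeout = 30s
```

Raising pm.max_children alone didn't seem to help; if each request stays slow, the agents still hit their 30-second timeout.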

I switched back to Apache with mod_php to get closer to the defaults and make debugging easier. After trying different things in both the PHP and NGINX configs, and even at the sysctl level, and after increasing the server capacity to 32 CPUs / 128 GB RAM, I still can't get a working UI, and the agents can't connect.
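With mod_php back in place, the equivalent knobs live in the prefork MPM. The directives I was looking at are along these lines (numbers illustrative, not a recommendation):

```apache
# /etc/apache2/mods-available/mpm_prefork.conf -- illustrative only
<IfModule mpm_prefork_module>
    StartServers             10
    MinSpareServers          10
    MaxSpareServers          20
    ServerLimit             256
    MaxRequestWorkers       256    # each worker is a full PHP interpreter under mod_php
    MaxConnectionsPerChild 1000    # recycle children to limit memory growth
</IfModule>
```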

On the server:
==> /var/log/apache2/error.log <==
[edited time] [php7:notice] [pid 46478] [client editedIP:PORT] PHP Notice:  Trying to access array offset on value of type null in /var/www/html/api/server.php on line 17

==> /var/log/apache2/access.log <==
edited IP - - [edited time] "GET /getHashlist.php?hashlists=5&token=BpJkADTe9S HTTP/1.1" 200 472 "-" "s3-python-0.6.1.15"
edited IP - - [edited time] "POST /api/server.php HTTP/1.1" 200 396 "-" "s3-python-0.6.1.15"

On the client:

[edited time] [ERROR] Error occurred: HTTPSConnectionPool(host='edited', port=443): Read timed out. (read timeout=30)
[edited time] [ERROR] Failed to get file status!
[edited time] [INFO ] Got task with id: 29
[edited time] [INFO ] Client is up-to-date!
[edited time] [INFO ] Got cracker binary type hashcat
[edited time] [ERROR] Error occurred: HTTPSConnectionPool(host='edited', port=443): Read timed out. (read timeout=30)
[edited time] [ERROR] Failed to get chunk!

On the client side the log file just keeps growing, nothing concerning for the moment; but on the server side Apache keeps spawning workers and using RAM until it hits the maximum and then starts swapping.
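The back-of-envelope math I'm using to reason about the RAM exhaustion (all numbers are assumptions for illustration, not measurements from my server):

```python
# Rough capacity estimate for mod_php under the prefork MPM.
# All inputs are assumptions for illustration, not measured values.
per_proc_mb = 60                 # assumed average RSS of one Apache/PHP worker
ram_gb = 128                     # server RAM
reserve_gb = 8                   # assumed headroom for MySQL, OS caches, etc.

# Upper bound on MaxRequestWorkers before the box must swap:
max_workers = (ram_gb - reserve_gb) * 1024 // per_proc_mb
print(max_workers)               # → 2048
```

So on paper 128 GB should comfortably hold a few hundred workers, which makes me think workers are piling up behind slow requests rather than each one being unusually large.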

Any recommendations for a large number of agents, e.g. specific changes in the Apache/NGINX configs, php.ini, or maybe at the sysctl level?
Are any specific PHP/Apache versions known to work properly by default with a large number of agents?