WordPress brute force attacks are increasingly common at the time I write this – September 2014. Recently a server I look after came under attack, and after a few minutes the site would display ‘Error establishing a Database Connection’. When I logged into the server the MySQL service wasn’t running.
My first thought was that something in the attack was causing MySQL to crash. There was no information in the MySQL error.log to indicate why, though, so I spent time looking at plugins to help mitigate brute force attacks, which tend to focus on the WordPress wp-login.php and xmlrpc.php files:
Disable XML-RPC – disables WordPress XML-RPC functionality, meaning nothing can be done to your content through that route. However, the requests still reach WordPress and trigger the initial database queries, so MySQL can still be affected, and it does nothing about wp-login.php either.
BruteProtect – I’ve seen comments that it will protect xmlrpc.php, but I’ve watched it in action during an attack and, as of version 2.2.2, it did nothing to stop it.
NinjaFirewall – there are a lot of configuration options, but this one does the job. It sits in front of WordPress, so it can intercept an attack before all the database activity starts up. It worked great when I used it during an attack.
However, you may host multiple WordPress sites on a single server, and it could be tedious to install and configure this for each site. There is also going to be duplication of the work the plugin’s code does, and Apache still has to handle all the incoming requests.
It may be better to stop attacks at the firewall, because that is the single point of entry to your server. This approach still requires a plugin – WP fail2ban – but this one logs attacks to your system’s auth.log, where they can be picked up by fail2ban and the offending IP automatically added to your firewall rules for a temporary period. It is more complicated to install than the other plugins, but once a malicious IP has been blocked it can’t affect any of the sites on your server, which keeps the system load much lower.
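As a rough sketch of the server-side half of this, a jail definition along these lines goes in fail2ban’s configuration. The filter name and paths here are assumptions for illustration – the WP fail2ban plugin supplies its own filter definition files, and where you put them will depend on your setup:

    # /etc/fail2ban/jail.local – illustrative values only
    # Assumes the plugin's filter file has been copied to
    # /etc/fail2ban/filter.d/wordpress.conf
    [wordpress]
    enabled  = true
    port     = http,https
    filter   = wordpress
    logpath  = /var/log/auth.log
    maxretry = 5
    bantime  = 3600

With something like this in place, repeated failed logins recorded in auth.log result in the source IP being dropped at the firewall for an hour rather than hammering Apache and MySQL.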
After getting the fail2ban solution in place, the database was brought down yet again. I ended up looking in kern.log, and at the time of the attack there was the line ‘apache2 invoked oom-killer’ – that revealed the problem. Apache was spawning so many processes that the server was running out of memory, and as this particular server runs off an SSD the swap file is disabled. With nowhere else to get memory from, the kernel started killing off other processes, the biggest of which was MySQL.
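If you want to check whether the same thing is happening on your own server, the kernel log is the place to look (the path below is Debian/Ubuntu style):

    # look for OOM killer activity and which processes it killed
    grep -i 'oom-killer' /var/log/kern.log
    grep -i 'killed process' /var/log/kern.log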
What a relief to find the cause – MySQL wasn’t crashing at all. The solution was simply to reduce the MaxClients value in apache2.conf – the default value of 150 is far too big for a modestly sized server: if each Apache process takes 40 MB, then 150 processes need around 6 GB of RAM. Getting a realistic value requires a little load testing, plus some tools to see how memory usage grows as Apache handles more requests.
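As a sketch of the kind of change involved (the numbers are illustrative, not recommendations – measure your own per-process memory usage first), the prefork settings in apache2.conf end up looking something like this on Apache 2.2:

    # apache2.conf (prefork MPM) – illustrative values only
    <IfModule mpm_prefork_module>
        StartServers          5
        MinSpareServers       5
        MaxSpareServers      10
        # e.g. ~1 GB of RAM free for Apache / ~40 MB per process
        MaxClients           25
        MaxRequestsPerChild 500
    </IfModule>

Something like ps -o rss,cmd -C apache2, run while the server is under load, gives a rough idea of the resident memory each Apache process actually uses, which is the figure to base the calculation on.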