Any Linux system that’s exposed to the world attracts a lot of hack attempts. I’ve typically run
fail2ban on mine to mitigate this, but on Ubuntu 20.04 I was unable to get it to actually detect various attempts.
There are a lot of tutorials out there for
fail2ban in general, and even several for older versions of Ubuntu, but there’s one slight change on 20.04 (or maybe even an earlier version) which makes them not work. After a lot of hair-pulling I found one particular tutorial which had, buried almost in the marginalia, the magic thing I needed to get it working: basically, you need to use the
systemd log-scanning backend, as none of the others seem to actually have access to the logs, at least not without a lot of hassle.
So, the short version: add
backend = systemd to the
[DEFAULT] section of
/etc/fail2ban/jail.local. But read on for some
sshd configuration notes as well!
Here’s my full local jail configuration:
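To give a concrete picture, a minimal jail.local along these lines would look something like the following (the bantime, findtime, and maxretry values here are just illustrative defaults, not necessarily the ones you’d want; the important line is the backend setting):

```ini
# /etc/fail2ban/jail.local
[DEFAULT]
# Read from the systemd journal instead of scanning log files;
# on Ubuntu 20.04 the other backends can't see the auth logs.
backend = systemd
bantime = 3600
findtime = 600
maxretry = 5

[sshd]
enabled = true
```

After editing, restart the service (e.g. sudo systemctl restart fail2ban) so it picks up the new configuration.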
I didn’t have to add any other configuration for
fail2ban, and this immediately stopped a botnet attack that had been making it more difficult for me to log in to my server.
Incidentally, one of the reasons I was even looking into this was because
ssh kept on rejecting my connections randomly (with an error of
kex_exchange_identification: Connection closed by remote host), which made it hard for me to log in or push content reliably or whatever, and it turns out that
sshd has some built-in security stuff which actually gets in the way of
fail2ban: the
MaxStartups setting defaults to
10:30:100, which means that once there are more than 10 simultaneous login attempts, sshd starts refusing 30% of new attempts, ramping up to refusing 100% of them at 100 simultaneous attempts. This ends up having a couple of deleterious effects, including on
fail2ban; if hack attempts from a botnet are being refused, then
fail2ban doesn’t even see the attempts themselves!
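To make the start:rate:full semantics concrete, here’s a quick sketch of the refusal probability — this is not sshd’s actual code, just my reading of the sshd_config man page, which says the drop rate rises linearly from rate% at the start threshold to 100% at full:

```python
def refusal_probability(conns: int, start: int = 10,
                        rate: int = 30, full: int = 100) -> float:
    """Approximate chance that sshd refuses a new unauthenticated
    connection under MaxStartups start:rate:full."""
    if conns < start:
        return 0.0   # under the threshold: never refuse
    if conns >= full:
        return 1.0   # at or above full: refuse everything
    base = rate / 100
    # linear ramp from rate% at `start` to 100% at `full`
    return base + (1 - base) * (conns - start) / (full - start)

# With the default 10:30:100: below 10 connections nothing is dropped,
# at 10 roughly 30% of new attempts are refused, and at 100 all are.
```

So with a botnet holding dozens of unauthenticated connections open, a large fraction of all connections — the botnet’s and mine alike — were being refused before any log entry was written.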
So, I changed the
MaxStartups setting to
50:10:100. This allows up to 50 simultaneous unauthenticated connections, starts rejecting 10% of new attempts when it hits that threshold, and ramps up to rejecting 100% at 100 simultaneous connections.
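The corresponding sshd change is a one-liner; a sketch of the relevant fragment:

```
# /etc/ssh/sshd_config
# start:rate:full — begin refusing 10% of new unauthenticated
# connections at 50, scaling linearly to 100% refused at 100.
MaxStartups 50:10:100
```

Then restart sshd (on Ubuntu, sudo systemctl restart ssh) for the setting to take effect.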
Anyway, here’s a graph of what fail2ban did once I finally got it working:
The interesting thing here is that once it started blocking things, it had blocked a couple thousand addresses, all seemingly part of a botnet. Then as those bans expired, they mostly didn’t come back, until there was a second surge, but then once that second surge (eventually) tapered off, they didn’t come back at all. This tells me that the botnet finally realized that my server wasn’t going to let them in and gave up.
fail2ban in this case didn’t just prevent the botnet from doing any damage, it also got the botnet to ease up, which made the life of my server even better. So even if your system’s security is perfect (which it isn’t),
fail2ban is still a darned good idea.
The
fail2ban dropoff also correlates quite strongly with a dropoff in my traffic monitors, disk access rates, disk usage (my
/var/log/auth.log was growing at a rate of half a gigabyte per day before this!), and a bunch of other performance indicators that were affecting my server’s health and availability.
So, it’s really good to have
fail2ban working correctly, and to configure
sshd to work alongside it.
I don’t know how long this attack had been going on (my previous logs got lost in the server reset), but I do know I was having some trouble with
ssh before then, so I’m pretty sure I didn’t have
fail2ban configured correctly before, either.