There's a lot of value in that. For example, failed logins against generic user names like "dbadmin" or "root" are likely just random scanning, but what if lots of valid user names suddenly start appearing?
That's a great point, but I keep coming back to the root question: who's actually looking at this? If people are examining logs, it's usually because of a particular trigger or problem, and filtering that signal from the noise is hard.
It's more typical of the servers-as-pets scenario than the servers-as-cattle one, but sometimes one is simply curious [or extra cautious]. SSH honeypots exist at least in part for this reason.
Well, that would depend heavily on what I'm seeing. If it's a single user, there might be an attack underway against that account. If it's multiple users, some set of credentials might have been compromised.
It's definitely something you need to investigate.
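For a rough first pass at that triage, something like the sketch below could separate scan noise from failures against real accounts. It's a minimal sketch assuming the stock OpenSSH syslog format; the /var/log/auth.log path and the regex are illustrative and vary by distro:

    #!/usr/bin/env python3
    # Rough triage sketch: count SSH password failures and flag the
    # ones that target accounts that actually exist on this box.
    # Assumes the stock OpenSSH log line format in /var/log/auth.log.
    import pwd
    import re
    from collections import Counter

    FAILED = re.compile(r"Failed password for (?:invalid user )?(\S+)")

    local_users = {p.pw_name for p in pwd.getpwall()}
    failures = Counter()

    with open("/var/log/auth.log", errors="replace") as f:
        for line in f:
            m = FAILED.search(line)
            if m:
                failures[m.group(1)] += 1

    valid_hits = {u: n for u, n in failures.items() if u in local_users}
    print(f"{sum(failures.values())} failures total, "
          f"{sum(valid_hits.values())} against valid accounts: {valid_hits}")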
Did you ever have the "pleasure" of a server grinding to a halt because the logs filled up all the disk space? To where you had to mount the disk on another system and clean it up before it would boot again? That can be a bitch if the machine is in a remote location. Not everything is cloud (yet) these days.
Granted, there usually is a lot more at fault when you run into such problems, but I find "people don't look at logs" a rather weak argument for letting them get spammed full of garbage. It's certainly terrible hygiene, at least.
>Did you ever have the "pleasure" of a server grinding to a halt because the logs filled up all the disk space?
I've never seen this issue on any systems I manage, mostly because they all have log rotation.
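For what it's worth, a sketch of the kind of stanza I mean, with a size trigger so a burst between cron runs doesn't get unbounded time to grow; the path and numbers are illustrative and vary by distro:

    # /etc/logrotate.d/auth -- illustrative sketch, adjust per distro
    /var/log/auth.log {
        daily
        # also rotate as soon as the file exceeds this size
        maxsize 100M
        rotate 7
        compress
        # defer compressing the newest rotated file by one cycle
        delaycompress
        missingok
        notifempty
    }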
>but I find "people don't look at logs" a rather weak argument for letting them get spammed full of garbage. It's certainly terrible hygiene, at least.
Why is it a weak argument? If something doesn't materially impact you, why expend effort remediating it? Hygiene only matters for things we interact with regularly. We as a society don't care about the hygiene of the sewer system, for instance.
>I've never seen this issue on any systems I manage, mostly because they all have log rotation.
Ah, yes - the age-old claim that log rotation will magically stop a belligerent from dumping hundreds of gigs of log data before `logrotate` even has a chance to run, filling up your disk.
And even if logrotate did get to run, there would be no space left for the compressed file to live while it's being written.
> Everyone should be using logrotate, and if they actually read the things, shipping logs to ELK or Splunk or Graylog or whatever.
Certainly they should. That is, if they have that much control over the server and if it's not some legacy system built by some defunct organization or John Doe. I don't disagree with your theory; on the contrary. But then there is reality, where this theory isn't always feasible.
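When you do have that control, the shipping half can be as small as a single rsyslog rule, for instance. A minimal sketch; the hostname and port are placeholders, and @@ means TCP (a single @ would be UDP):

    # /etc/rsyslog.d/90-forward.conf -- minimal sketch, placeholder target
    *.* @@logs.example.internal:514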
> Keeping the log file on the boot partition was the first mistake
Wrong assumption. With the logs on a full system disk (not the boot partition), your system can still grind to a halt during boot. Sure, if you have access to the bootloader, you can do an emergency/recovery boot. But you don't always have that on systems built by others (especially product vendors).
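On systems you do control, giving /var/log its own filesystem at least contains the blast radius: a log flood fills that partition without taking the root filesystem down with it. A sketch of the fstab entry; the device name is a placeholder:

    # /etc/fstab -- illustrative; device name is a placeholder
    /dev/vg0/log  /var/log  ext4  defaults,nodev,nosuid,noexec  0 2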
I would not be making this point if I had not run into situations where this was an actual problem. I can assure you it was never the result of my own bad architecture or maintenance, and it happened almost exclusively while dealing with third-party products.
It would be valid to argue they should get their shit together, but the reality is that, at the end of the day, companies buy systems like these and you still have to deal with them.