
They fill the logs with noise. That's "something." I finally set up fail2ban a while back... it works wonders.
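If anyone wants to try it, here's a minimal sketch of a `jail.local` enabling the stock sshd jail — the thresholds are illustrative, not recommendations:

```ini
# /etc/fail2ban/jail.local -- minimal sshd jail; values are illustrative
[sshd]
enabled = true
# ban a source IP for 1 hour after 5 failures within 10 minutes
maxretry = 5
findtime = 10m
bantime = 1h
```

Drop it in, restart fail2ban, and `fail2ban-client status sshd` will show the bans piling up.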



Do people even read the logs? What value does it add compared to just monitoring successful logins?


There's a lot of value. For example, if you see failed logins against random user names like "dbadmin" or "root" it's likely just random scanning, but what if suddenly lots and lots of valid user names appear?


That's a great point, but I come back to the root question: who's actually looking at this? If people are examining logs, it's usually for a particular trigger or a problem, and filtering that signal from the noise is hard.


Likely, nobody is directly looking at the logs.

But they might be using software that automatically raises an alert when it sees repeated login attempts for a valid username.
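The key distinction in sshd's logs is that failures against nonexistent accounts say "invalid user", while failures against real accounts don't. A rough sketch of the triage (the log lines here are fabricated stand-ins for `/var/log/auth.log` entries):

```shell
# Separate scanner noise (guessed names) from failures against real accounts.
log='Jan 10 12:00:01 host sshd[101]: Failed password for invalid user dbadmin from 192.0.2.1 port 4242 ssh2
Jan 10 12:00:02 host sshd[102]: Failed password for alice from 198.51.100.7 port 4243 ssh2
Jan 10 12:00:03 host sshd[103]: Failed password for alice from 198.51.100.7 port 4244 ssh2'

# Count failures per *valid* username: drop "invalid user" lines, keep the
# username (field 9 in this log format), and tally.
valid_hits=$(printf '%s\n' "$log" \
  | grep 'Failed password for ' \
  | grep -v 'invalid user' \
  | awk '{print $9}' \
  | sort | uniq -c | sort -rn)
echo "$valid_hits"
```

A spike in that second category is exactly the "lots of valid user names" signal worth alerting on.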

Isn't that one of the purposes of Splunk?


It's more typical of the servers-as-pets than servers-as-cattle scenario, but sometimes one is simply curious [or extra cautious]. SSH honeypots exist at least in part for this reason.


> servers-as-pets

This is a great way to put it.


> who's actually looking at this?

Well, your security team, post incident. But also automated systems like fail2ban.


And log-collectors like Splunk (with configured alerts, etc)


grep and zgrep will work wonders for checking for actual usernames in these logs, even if they have a significant amount of spam in them.
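A self-contained sketch of that: fake a current auth log plus one rotated, gzipped log, then use grep for the live file and zgrep for the compressed history ("alice" stands in for a real username on the box; real paths would be under /var/log):

```shell
# Create a throwaway directory with a live log and a rotated .gz log.
tmp=$(mktemp -d)
echo 'sshd[1]: Failed password for alice from 192.0.2.1' > "$tmp/auth.log"
echo 'sshd[2]: Failed password for alice from 198.51.100.7' | gzip > "$tmp/auth.log.1.gz"

# grep covers the current log; zgrep reads the compressed rotations directly.
grep  'Failed password for alice' "$tmp/auth.log"
zgrep 'Failed password for alice' "$tmp"/auth.log.*.gz

# Total hits across both.
hits=$( { grep 'for alice' "$tmp/auth.log"; zgrep 'for alice' "$tmp"/auth.log.*.gz; } | wc -l )
echo "found $hits lines"
rm -rf "$tmp"
```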


>but what if suddenly lots and lots of valid user names appear?

Then what are you going to do?


Well that would highly depend on what I'm seeing. If it's a single user there might be an attack on the way against that user. If it's multiple users, there might have been a compromise of some credentials.

It's definitely something you need to investigate.


At a minimum, spend some of my limited time and attention on this issue rather than the 100s of other things that might be clamoring for my time.


What are you going to do to solve that issue?


Have you ever had the "pleasure" of a server grinding to a halt because the logs filled up all the space? To the point where you had to mount the disk on another system and clean it up before it would boot again. Can be a bitch if it's a machine in a remote location. Not everything is cloud (yet) these days.

Granted, there is usually a lot more at fault when you run into such problems, but I find "people aren't looking at the logs" a rather weak argument for letting them get spammed full of garbage. Certainly terrible hygiene, at least.


>Have you ever had the "pleasure" of a server grinding to a halt because the logs filled up all the space?

I've never seen this issue on any systems I manage, mostly because they all have log rotation.
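For context, a typical weekly rotation entry (illustrative — most distros ship something similar for auth logs out of the box):

```
# /etc/logrotate.d/auth -- illustrative; keep 4 compressed weekly rotations
/var/log/auth.log {
    weekly
    rotate 4
    compress
    delaycompress
    missingok
    notifempty
}
```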

>but I find "people aren't looking at the logs" a rather weak argument for letting them get spammed full of garbage. Certainly terrible hygiene, at least.

Why is it a weak argument? If it's something that doesn't materially impact you, why should you expend effort remediating it? Hygiene is only important for things we interact with on a regular basis. We as a society don't care about the hygiene of the sewer system, for instance.


>I've never seen this issue on any systems I manage, mostly because they all have log rotation.

Ah, yes - the age-old claim that log rotation will magically stop a belligerent from dumping hundreds of gigs of log files before `logrotate` has time to run, filling up your disk.

And even if logrotate did try to run, you'd have no space for the compressed file to live while it's being made.

What fantasy world do you live in?


A system that grinds to a halt because of overflowing logfiles is already faulty.

What fantasy world do you live in?


Yep, if the sewer system occasionally spills into a river, you can just put up a sign and ignore it.


>Have you ever had the "pleasure" of a server grinding to a halt because the logs filled up all the space?

Everyone should be using logrotate, and if they actually read the things, shipping logs to ELK or Splunk or Graylog or whatever.


> Everyone should be using logrotate, and if they actually read the things, shipping logs to ELK or Splunk or Graylog or whatever.

Certainly they should. That is, if they have that much control over the server, and if it's not some legacy system built by a now-defunct organization or some John Doe. I don't disagree with your theory; on the contrary. But then there is reality, where this theory isn't always feasible.


>Everyone should be using logrotate

Doesn't run often enough to counter belligerents dumping 100s of gigs into logs too fast for logrotate to keep up
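One partial mitigation — it doesn't save you from a truly determined flooder, for the space reasons above — is size-triggered rotation on an hourly schedule, e.g. (values illustrative):

```
# rotate hourly, and also whenever the file passes 100M
/var/log/auth.log {
    hourly
    maxsize 100M
    rotate 8
    compress
    missingok
}
```

Note that `hourly` only helps if logrotate itself is actually invoked hourly (via cron or a systemd timer), which isn't the default on most distros.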


Keeping the log file on the boot partition was the first mistake


> Keeping the log file on the boot partition was the first mistake

Wrong assumption. With logs on a full system (but not boot) disk, your system can still grind to a halt during boot. Sure, if you do have access to the bootloader, you can do an emergency/recovery boot. But you don't always have that on systems built by others (especially product vendors).

I would not be making this point if I had not run into situations where this was an actual problem. I can assure you it was never the result of my personal bad architecture or maintenance and almost exclusively while dealing with third party products.

It would be valid to argue they should get their shit together, but the reality is that at the end of the day, companies buy systems like these and you still will have to deal with them.


Wish I could give you extra upvotes - I keep logs on a separate partition/drive

Doesn't mean the system cannot crash (because logs can no longer be stored)



