Well, PCI compliance is different from regular server administration (a lot of it being smoke and mirrors, yes).
I do not believe ICMP (ping) is an automatic-fail condition for PCI (at least for the SAQ levels I'm familiar with); it does show up as a warning, though, particularly if the scanner can get a timestamp response (which can be used in clock-skew and timing-based attacks).
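The timestamp warning exists because an ICMP timestamp reply leaks the host's clock, letting an attacker estimate clock skew. A minimal sketch of the arithmetic a scanner does with the reply, assuming you've already captured the four timestamps (the millisecond values below are made up for illustration):

```python
def icmp_clock_offset_ms(originate, receive, transmit, arrival):
    """Estimate a remote host's clock offset from an ICMP timestamp
    exchange (RFC 792 timestamps: milliseconds since midnight UTC).

    originate: local clock when the request was sent
    receive:   remote clock when the request arrived
    transmit:  remote clock when the reply was sent
    arrival:   local clock when the reply arrived
    """
    # Average the apparent offsets in each direction (the same estimator
    # NTP uses), so symmetric network delay cancels out.
    return ((receive - originate) + (transmit - arrival)) / 2

# Hypothetical capture: remote clock runs about 500 ms ahead of ours.
offset = icmp_clock_offset_ms(originate=36_000_000,
                              receive=36_000_520,
                              transmit=36_000_521,
                              arrival=36_000_041)
print(offset)  # → 500.0
```

The usual remediation is simply to drop ICMP type 13 (timestamp request) at the edge, which makes the warning go away.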
PCI prefers that systems handling CHD (cardholder data) be "invisible" to the outside world, in an attempt to hide the systems an attacker might take interest in. That's not always feasible (eCommerce, for example), but you gotta jump through the PCI hoops if you don't want to be stuck holding the bag if there's a breach.
To reduce the liability of the credit card company, maybe. Making the process so complex and onerous makes it virtually impossible to complete a survey without some error or omission, which would almost assuredly be used as a reason to invalidate any liability for the credit card company should something bad happen. Source: have had to complete PCI surveys from multiple vendors.
I used to get so tired of having to write up explanations of why my FreeBSD server couldn't possibly have failed a security check for a Linux vulnerability, or that the web server they were complaining about didn't actually exist, or a million other ridiculous false positives every quarter. Thank goodness I don't deal with PCI any longer.
For the last security report I had to deal with for a client, the main vulnerabilities were reported against a Google site that was merely linked to from the client's site. It wasn't PCI compliance, so there was more flexibility in dealing with their incompetence, thankfully.
They reported a number of purported (non-existent) "vulnerabilities" against said Google site, including that it stopped responding to their probing soon after they started hammering it with sketchy requests... To be fair, they did point out that this could be a defence mechanism, but they dinged it for preventing them from checking for other vulnerabilities.
At least I didn't have to explain why that one was nonsense - it was rather obvious to my client that the agency they'd hired were being idiots. It's not like it was difficult to see either - the domain name of the site they'd hit had "google" in it.
Sounds like a scan misconfiguration on your client's part. All the PCI vuln scanners I've used require you to specify the IP addresses and domain names you want scanned, and do not follow external links found on pages.
Not the client. Third party agency hired to assess the security, and who clearly did not apply any critical thinking before sending it off. And as I pointed out, not PCI.
Tenable keeps telling me that something "interfered" with its scans for PCI compliance. Isn't the point of a firewall/IPS to do exactly that? But I don't even have it enabled for the IP in question, so I basically have to edit my network range to remove it. It really is bullshit.
My parent org is starting to take their vuln scan results and report them to the C-levels.
When they told me, I informed them I'd stopped using their vulnerability scanner years ago because they would not allow me to change anything in it, including exclusions for ICMP timestamps or other vulns I'd mitigated while proper fixes were in the works.
So I rolled my own and use that to audit my systems. They don't care, because "policy". My C-levels will just ask once and then promptly disregard all future reports, adding to the noise.
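The key feature a homegrown audit pipeline adds over a locked-down scanner is exactly that exclusion step: suppress findings that are formally accepted or already mitigated, while keeping a reason on record. A rough sketch of that step (plugin IDs, hosts, and reasons here are all invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class Finding:
    plugin_id: str
    host: str
    title: str

# Accepted risks / mitigated items, with a reason kept for the audit trail.
EXCLUSIONS = {
    "icmp-timestamp": "blocked at the edge firewall; reachable internally only",
    "sweet32-tls": "cipher disabled; proper fix ships next release",
}

def filter_findings(findings):
    """Split raw scan findings into (report, suppressed)."""
    report, suppressed = [], []
    for f in findings:
        (suppressed if f.plugin_id in EXCLUSIONS else report).append(f)
    return report, suppressed

raw = [
    Finding("icmp-timestamp", "10.0.0.5", "ICMP timestamp response"),
    Finding("openssh-outdated", "10.0.0.5", "OpenSSH before current release"),
]
report, suppressed = filter_findings(raw)
print([f.plugin_id for f in report])  # → ['openssh-outdated']
```

Keeping the exclusion reasons in the data (rather than in someone's head) is what makes this defensible when an auditor asks why a finding didn't appear in the report.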
I did a job where I was given access to a server in the form of a set of credentials for an HPE iLO, which was accessible over the Internet. From there, we could use the remote console to log on as root.
HPE iLO doesn't support MFA or any form of public-key authentication, and its security history is much worse than SSH's. It requires several open ports, and the old version they had required Java plugins on desktops and all sorts of nonsense. Using it for anything beyond emergency repairs is a terrible experience, thanks to console refresh lag and the fact that you can't copy and paste.
The reason I had to do this insecure and annoying process is that a PCI assessor had told them it would be a hard fail to have port 22 open on the Internet, but this would apparently be fine.
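For contrast, locking down the "port 22 open" case that the assessor hard-failed takes only a handful of standard OpenSSH directives (the account name below is hypothetical; adjust to taste):

```
# /etc/ssh/sshd_config (excerpt)
PasswordAuthentication no          # public-key auth only
KbdInteractiveAuthentication no    # no keyboard-interactive fallback
PermitRootLogin prohibit-password  # root only with a key, if at all
AllowUsers deploy                  # hypothetical admin account
```

That gives you key-only authentication on a protocol with a far better track record than iLO's web stack, which is what made the assessor's ruling so backwards.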
Amusing, but today there are two kinds of hackers: people who manually run a campaign like in your story, and the endless hordes of bots that automatically exploit systems to turn them into botnet slaves or cryptolocker hostages.
I vaguely recall some kind of malware that, upon infecting a system, scanned it for other malware and removed/disabled it. The motives were far from pure, obviously. (Although there was also a case, I think, of a piece of malware specifically created to ensure "infected" systems had up-to-date AV software and were fully patched. We sure live in strange times.)
Such strategies are remarkably effective, and arguably describe all of security in a nutshell.
Every time I notice an obscure feature in a Google product or service and go "hm, I wonder if that could be exploited", I then always go "...meh, it'll take too long and require too much concentration to figure it out."
I can't help but picture him as a sysadmin walking away from a bunch of servers that are mysteriously 40% faster than ever before, but then he gets stopped at the door of the datacenter by some unimpressed looking lawyers who glare at him until he puts everything back
Thanks for the reference, and fair point, yeah that's not how it works at scale.
PCI QSAs are notorious for being complete jackasses. You have to be very careful about vetting them. And that isn't even the dumbest thing of that sort I've heard!
[0] https://www.rsync.net/resources/regulatory/PCI_usw-s005_repo...
EDIT: It’s marked as "PASS" though, so it’s all fine, just funny.