I really hate the accusation that changing services to uncommon ports is security through obscurity. It's not a security practice, it's a high pass filter.
There are billions of hosts on the Internet and many millions of them running an SSH service. Even attackers with huge botnets at their disposal have finite resources and time to probe for hosts and then try exploiting vulnerable ones. Every exploit attempt has some percentage chance of being against a honeypot used to help catch the attacker.
There's little point in even sniffing non-standard/uncommon ports for different services. Anyone changing a service port has a higher likelihood of having a more secure system than someone using standard ports. So even if you scan high-numbered ports for an SSH service, you're unlikely to find one that isn't using a more-secure-than-default configuration.
So when you have SSH listening on some uncommon port and see hits in your logs you have a reasonable assurance the attacker isn't some script kiddy spamming the "attack" button on some stupid app. You also have reasonable assurance blocking that host/network with fail2ban or equivalent will actually be effective. You can also set way more strict fail2ban rules since you're far less likely to ever lock out authorized traffic. The more specific your allow criteria the better and more effective your block rules can be.
A "Beware of Dog" sign doesn't physically secure your house on its own. You might not even have a dog. What it does in change the calculus for a burglar. Why risk it when there's more obviously less risky targets nearby? The sign is a high pass filter.
Ports under 1024 are privileged in Linux - they require root or root-delegated privileges to bind. When SSH is on port 22, you have the assurance that unless the server is root-compromised, whatever is listening on that port is what you think it is.
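To see it concretely (flags and the exact error text vary a bit between netcat variants):

    nc -l -p 22      # as an ordinary user: fails with a permission error
    nc -l -p 2222    # works: any local process, including a compromised web app, can bind here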
When you make that port 2222 or whatever, like so many people do, you have cut out a lot of noise... but now that compromised PHP application you had running can race you to bind that port every time SSH is restarted for an update or crashes or whatever. Let's say it wins - now you've got something listening on the SSH port. If someone using SSH ignores the fact that the host key has changed, now they're trying to log in to a fake SSH server. Maybe you use password auth and you just gave them the password. Maybe they're using an OpenSSH client that is vulnerable to leaking private keys in certain situations. Maybe the fake SSH server pretends to be a real shell and they then log whatever actions you try to take when you SSH in. Because they're going to be able to figure out what port SSH is listening on - a fingerprinting port scan can be done in seconds.
You are sacrificing security when it comes to a more focused attacker for the sake of filtering out the low effort mass scans and basic brute forces. The thing is - I'm worried about the former, not the latter.
If you care enough about securing or filtering your SSH to go through any of this trouble, just set up a VPN on a separate machine and restrict SSH access via firewall to that machine. Spin up the smallest VM your cloud provider has, throw up wireguard on it, and you're good to go. It'll be plenty for a VPN that's basically just there for SSH access. Now someone has to have both an exploit for wireguard and an exploit for SSH to get into your machine that has things you care about, you've filtered out all the noise, and you haven't introduced new security risks for a more determined attacker.
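Roughly, and only as a sketch - interface names, addresses and keys here are placeholders, and whether you match on the VPN subnet or just the VPN box's address depends on how you route/NAT:

    # /etc/wireguard/wg0.conf on the tiny VPN VM
    [Interface]
    PrivateKey = <vpn-box-private-key>
    Address = 10.8.0.1/24
    ListenPort = 51820

    [Peer]
    PublicKey = <your-laptop-public-key>
    AllowedIPs = 10.8.0.2/32

    # on the machines you actually care about: SSH only from the VPN
    iptables -A INPUT -p tcp --dport 22 -s 10.8.0.0/24 -j ACCEPT
    iptables -A INPUT -p tcp --dport 22 -j DROP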
What you mention makes sense if all of the wild potential problems exist on the system. You've already fucked up leaving password authentication enabled on your theoretical machine.
It's pretty trivial to leave SSH on port 22 and just forward a high number port to it while blocking 22 externally. All the root-not-compromised assurances and still maintaining the high pass filter.
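One single-host way to do it, as a sketch (interface, port number and mark value are just examples): tag packets that arrived on the public high port, redirect them to sshd still bound to 22, and drop anything that hit 22 directly from the outside.

    iptables -t mangle -A PREROUTING -i eth0 -p tcp --dport 49222 -j MARK --set-mark 1
    iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 49222 -j REDIRECT --to-ports 22
    iptables -A INPUT -i eth0 -p tcp --dport 22 -m mark ! --mark 1 -j DROP

(Or just do the forward on the NAT router in front of the box and never expose 22 externally at all.)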
While a VPN certainly can be a good solution to securing a machine, you've now got the problem of the VPN server needing its access protected.
Public key only for authentication and strict fail2ban rules combined with port forwarding makes for a very tight system. Not invulnerable but secure enough to not be worth the effort.
As I said originally and still maintain, using non-standard ports is a high pass filter. It's not a security measure. It might be part of your security setup but it's just a filter.
It doesn't necessarily need to be a high port either.
All bots will check port 22 first and then move on to others. You can have a port sniffer watching this and block any attempts to your actual <1024 port if they've also hit port 22.
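A rough sketch of that trap with iptables' "recent" module (the ports and list name here are made up, not my real ones):

    # anyone who pokes port 22 gets remembered...
    iptables -A INPUT -p tcp --dport 22 -m recent --name probed --set -j DROP
    # ...and is then refused on the real (sub-1024, non-22) SSH port for an hour
    iptables -A INPUT -p tcp --dport 222 -m recent --name probed --rcheck --seconds 3600 -j DROP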
It works a treat. Shodan, begone. I agree with your high pass sentiment
> It's not a security measure. It might be part of your security setup but it's just a filter.
I think most people saying security by obscurity 1) know that’s a fallacy 2) know that by using that phrase the reader/audience knows what they mean without having to explain a phrase like high pass filter. It’s a bit pedantic to rant on the use of terms.
> You've already fucked up leaving password authentication enabled on your theoretical machine.
You could also be exposed to this with key based auth - older openssh clients have a security flaw where they can leak keys, and you could also have a 'fake' shell that still allows them to gain a lot of details about the inner working of your systems.
>It's pretty trivial to leave SSH on port 22 and just forward a high number port to it while blocking 22 externally. All the root-not-compromised assurances and still maintaining the high pass filter.
Yep! Certainly can.
>While a VPN certainly can be a good solution to securing a machine, you've now got the problem of the VPN server needing its access protected.
There's a few things here:
So, Wireguard uses key based auth, so it's pretty secure by default. Additionally, it doesn't show up in port scans - if you don't send the correct key, you get no response from the wireguard server. You'll get the same 'open|filtered' response from nmap and similar that you would for a UDP port with nothing listening. (I don't think this particularly matters - I don't really care if someone knows I've got WireGuard running - I would run it even if you could tell I was. See following paragraph)
Second, if the VPN is compromised (from a key leak) the worst they get is access to whatever the VPN network is. I don't trust clients just because they're on the VPN, and neither should you - they have all the same auth and security requirements I would put in place for something on the public internet. They still need to get past key auth for SSH (or have a zero day for it). If there's a zero day for wireguard itself, well, now they have access to that box, and can potentially use it for nefarious purposes, but I don't keep any private data for my company or our customers on it, so I replace it and upgrade to a version of wireguard without the exploit. They still need to somehow get past SSH auth, or use another zero day. I think the chances of there being simultaneous public zero days for both at the same time is pretty much nil, and someone being willing to burn two private zero days for the two on you means you're up against someone with enough resources you're probably screwed to begin with.
>Public key only for authentication and strict fail2ban rules combined with port forwarding makes for a very tight system. Not invulnerable but secure enough to not be worth the effort.
Frankly, I think public key only auth is most likely enough for 99.999% of everyone to begin with. I don't think fail2ban or port forwarding from a nonstandard port to 22 matters overmuch, even for filtering logs. If you're going for the "realistically good enough" setup, key only, no root, only have personal nonstandard usernames allowed, that log noise doesn't matter because you can just grep for failed attempts for the usernames you care about and ignore everything else, and the only thing you really have to fear is a OpenSSH 0day, which running on a nonstandard port isn't gonna save you from either. It might buy you a little time to get it patched and not get hit in the first wave, but that's about it.
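For concreteness, the whole "realistically good enough" policy is only a few sshd_config lines (usernames are examples):

    PermitRootLogin no
    PasswordAuthentication no
    ChallengeResponseAuthentication no
    AllowUsers alice bob
    # and the log review shrinks to something like (path varies; journalctl works too):
    #   grep 'sshd.*Failed' /var/log/auth.log | grep -E 'alice|bob'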
I guess that if you manage to take over the high number port by somehow crashing the high port forwarder service and then listening on that high port, you have taken over ssh on the high port.
Someone setting this up well on Linux would be using iptables forwarding, not a user-space service listening and forwarding. Now, this hypothetical attack has to be able to manipulate the kernel's network stack and we might as well stop pretending the low 1024 ports have special significance in that case, either.
If you can crash or modify iptables then you pretty much have free access into the system anyway. No need to expose a fake ssh host. Just swap the ssh binary itself for an infected one.
It's still better than putting it in the same, expected spot. Plus, port scanning is often a red flag behavior (i.e. it's against terms of service in EC2 to perform it from one of their machines), and it can be detected.
It's like putting your key under your door mat vs. some loose brick near your back door. Sure, someone can still find it (I don't recommend the real-life equivalent), but assuming you have a camera and/or nosey neighbors, there's a good chance the presumed invader is going to look suspicious enough to garner unwanted attention.
Oops I didn't know about the scanning thing. I often do this from my hosted VPS to my other systems, to make sure all ports that I plan to have closed are actually closed.
Especially when it involves IPv6 networks and each internal device has its own IP, this can involve a lot of scanning. I haven't had any complaints but good point, I could get banned for that.
It is a major issue. Things like autoscaling have got a lot of bad habits ingrained in people though.
You should definitely use SELinux or AppArmor. I do for any production workload.
But for any production workload I don't allow SSH over the public internet. I require a VPN connection to a subnet that only has jumphost(s), the jumphost access logs are shipped to an ELK stack, SSH requires both a key + MFA to login to the jumphost, and SSH between production servers is blocked and alarmed on.
SSH onto the prod servers should also be denied. You say autoscaling so do you have an image? If yes then why do you need prod SSH access anyway? If a box is acting up kill it and let the ASG create a new one.
Sometimes you have trouble reproducing an issue outside of prod, even with things like tcpreplay or blkreplay. You could just kill off a problematic instance, but then you have trouble knowing why there was a problem to begin with. Grey failures might not be obvious in logs or metrics.
The idea that you never ever have to SSH into a production server is a nice ideal, but I've never seen it survive reality unless you just shrug about issues occurring and don't mind not being able to root cause them.
But it's also a world where people have grown accustomed to accepting changed host keys - automatic scaling in the cloud with re-use of IPs means that systems come up with the same IP frequently, so people have gotten used to it.
For the stuff I manage in the cloud I use their APIs to keep my known_hosts file up to date - if an IP got reused by an instance, I clear the entry in my known_hosts file. If it didn't, I would see the key change alert (verified by modifying the host key on the destination server)
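The clearing step itself is just ssh-keygen's -R flag; how you detect that an IP was handed to a new instance is whatever your provider's API exposes:

    ssh-keygen -R 203.0.113.7    # drops that host's entry from ~/.ssh/known_hosts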
> Wireguard doesn't answer anything during port scanning, or when it receive unauthenticated packets
While I agree that UDP port scanning is harder than TCP, since you cannot just batch send SYNs on every port, it really depends on how you set up iptables and the sophistication of the port scan.
Most scanners will consider that not receiving an ICMP/port unreachable is a sign that some UDP service is listening. This could be prevented by a default DROP instead of REJECT to confuse scanners, but it has other annoying implications. I bet most people out there do use REJECT so a UDP service such as wireguard would be immediately spotted because of its lack of response.
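For a concrete picture of what the scanner sees (host and port are examples):

    nmap -sU -p 51820 203.0.113.7
    # REJECT rule -> ICMP port-unreachable comes back, nmap reports the port "closed"
    # DROP rule, or WireGuard silently ignoring the packet -> no reply, nmap reports "open|filtered"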
Also note that if the attacker is willing to invest a bit more time in the scan, they will most likely have a Wireguard probe.
Wireguard doesn't respond at all if you do not send the correct key from the start, so I'm not sure how you would write a probe for it. You receive the exact same response as if you send a UDP packet to a port with nothing listening.
Most real world firewall configurations I have seen use DROP instead of REJECT, so anecdotally I'm not sure about your claim.
I don't think it's particularly worrisome if someone knows I'm running Wireguard, though.
If there's a wireguard zero day, they gain no more access to the rest of my servers than if I had port 22 open to the internet - and I don't think that's a significant risk with key based auth to begin with. If there's an openssh zero day, then yeah that could be trouble, but now someone needs to have zero days for both wireguard and openssh to get in to my prod servers?
If they've got two private zero days for that and they're willing to burn them on me, I'm an incredibly high value target and pretty much everything we're discussing here isn't going to be enough to save me, and I've got much bigger problems.
> Wireguard doesn't respond at all if you do not send the correct key from the start, so I'm not sure how you would write a probe for it.
Is that really so? I'm not all that familiar with Wireguard but that seems like a debugging nightmare if the client has no way to get any error pointer from the server. At least my experience from setting up some IPsec infrastructures is that client logs are essential for troubleshooting.
> Most real world firewall configurations I have seen use DROP instead of REJECT, so anecdotally I'm not sure about your claim.
I do advocate for DROP, but most configurations I see are default REJECT. DROP has a bunch of disadvantages that most people don't want to deal with. It messes with TCP because of the lack of ICMP responses, and overall it makes troubleshooting harder because you end up with programs hanging and timing out instead of failing instantly.
Actually the use of REJECT is so widespread that it makes my life easier. When multiple levels of firewalls are involved, I can be quite sure that if the program hangs, it's a rule on my side, while a RST tells me that it's somewhere else.
> I don't think it's particularly worrisome if someone knows I'm running Wireguard, though.
Agree, though the subject is just port scanning here, not what happens beyond that.
>Is that really so? I'm not all that familiar with Wireguard but that seems like a debugging nightmare if the client has no way to get any error pointer from the server. At least my experience from setting up some IPsec infrastructures is that client logs are essential for troubleshooting.
Yep! Which, you're right, can make wireguard troubleshooting a pain... but it also mostly 'just works' with significantly less configuration overhead and chance of messing things up than your general IPSec IKE setup. IKEv2 definitely makes things nicer, but it's still not as generally painless as wireguard.
> When you make that port 2222 or whatever, like so many people do, you have cut out a lot of noise... but now that compromised PHP application you had running can race you to bind that port every time SSH is restarted for an update or crashes or whatever.
This attack is mitigated quite strongly if you have selinux enforcing and aren't running networked services in unconfined domains.
Even if you have networked services that aren't confined in the default targeted policy, you can probably learn to write policy for them in a day or two, although poor documentation can make for a steep learning curve.
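As a sketch under the stock targeted policy (tooling and package names vary by distro):

    semanage port -l | grep ssh_port_t           # which ports sshd's domain may bind
    semanage port -a -t ssh_port_t -p tcp 2222   # label the non-standard port for sshd
    # a web app confined as httpd_t has no name_bind permission on ssh_port_t,
    # so it can't win the bind race even while sshd is down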
If you're going about that level of security (and you should be!) - why are you bothering with SSH being open to the public internet to begin with?
This is what's weird to me about this whole argument - people can come up with lots of ways to secure this, yet aren't willing to do the one thing that provides the most security while also acting as a high pass filter: blocking SSH access from the public internet. No log noise, no chance of a zero day hitting it, and one compromised key is no longer enough to result in an access breach.
In my case it's because (not counting the machines I get paid to admin, which are indeed behind a VPN) I only admin one isolated VPS, so without a dedicated bastion I feel that the benefit of a VPN is reduced. I just secured it according to the DISA STIG plus some more intrusion detection and stronger selinux confinement.
Adding a dedicated bastion would double my monthly costs, but SELinux costs me nothing if the targeted policy covers my applications, or like half an hour of my time per service if I have to write my own policy modules.
Although, I should point out I'm playing devil's advocate here because my ssh is still on port 22.
I agree, however with any team of size > 1 you then have to manage access. I've seen small teams share a common credential, and getting SSO to work OOTB with that VPN infrastructure in SCM is a pain. Additionally, if you end up having more behind that VPN (e.g., db's, monitoring, etc) it's arguably net worse than the port-shift IMO.
Fully agree. If you want security by obscurity, modify openssh-server and jumble up the protocol a bit. XOR all the bits with some key. Send ssh packets encapsulated in Skype video frames or encoded in the fur of cat pictures over HTTP port 80.
I'm not actually joking -- the above tricks work really well for stable connections through the China firewall.
I think a useful related question is: Does running SSH on IPv6 count as security, or security through obscurity?
Because the public facing SSH services I have on IPv6 have never been probed or had bad login attempts even once in the last four years. I don't consider that a security control at all.
Good question. There is a lot of stuff you can safely move to IPv6 (monitoring, SNMP) and it will completely clean your logs of script kiddies trying to get in that way. (Although it does not preclude you from actually securing those things, of course.)
SSH is bit tricky though, because you may find yourself on a laptop in a hotel and then you would first need to create an IPv6 tunnel to be able to get in. If the only time you need to get in is during an emergency that's a hassle you don't need.
If you're going to the trouble of having a tunnel solution then you don't need public SSH or whatever else to begin with :)
It's a pity IPSEC is still such a total dumpster fire on Windows as it works great for ad-hoc with everything else, but thankfully Wireguard has been about as light-touch as VPNs get and I've had good luck getting it out of nearly any network.
>I really hate the accusation that changing services to uncommon ports is security through obscurity.
If you're doing this for security reasons, then it is by definition 'security through obscurity'. The problem is that you don't like the term because it generally carries a negative connotation. What you and the author should be arguing is that 'security through obscurity' shouldn't be seen as a negative necessarily.
>So when you have SSH listening on some uncommon port and see hits in your logs you have a reasonable assurance the attacker isn't some script kiddy spamming the "attack" button on some stupid app.
Thank you for this. The reason for moving ssh to a different port is not to stop a determined hacker, it is to lower the risk from script kiddies. It lowers the risk from stupid stuff, and is worth doing.
> I really hate the accusation that changing services to uncommon ports is security through obscurity. It's not a security practice, it's a high pass filter.
Well the high pass filter's purpose is to increase security. Therefore it is literally security through obscurity... Even if it's effective, that's what it is...
No, seems to me that giantrobot's point is that adding randomness to port numbers is a light form of security through computational complexity (almost a shared secret), which can in turn be used to prop up other security practices, such as fail2ban. Security through obscurity is reliance in security engineering on design or implementation secrecy.
Preventing people from DDoSing your auth logs - which leads to false-negative fatigue and lowers the effectiveness of your login monitoring - is actual security, not security by obscurity.
Anything that reduces the likelihood of success of any attack is security. Non-standard ports essentially remove you from the radar for a lot of dumb IP block scanners. That alone is worth the minimal effort involved.
A religious approach is not appropriate for technical issues. What you call "security through obscurity" is not the Devil, to be shunned at every step. That's just a meme that's been uncritically propagated via social media.
Hiding things has its value. Learn how to use all tools at your disposal, wisely.
One could argue all security is "through obscurity". The encryption key exists somewhere in the parameter space, which is open to everyone. You just don't know where it is.
It is more productive to think of the onion model of security. Any technique is just another layer in the onion. As long as you have a sufficient number of layers, and they're all good in different ways, you're probably fine.
> A religious approach is not appropriate for technical issues. What you call "security through obscurity" is not the Devil, to be shunned at every step.
Of course not, and frankly that's a gross mischaracterization of the parent's point. The problem is not that security through obscurity is the devil; the problem is that security through obscurity is not security, and is not useful when you need security.
> The encryption key exists somewhere in the parameter space
This misses the point of cryptography, which is that you (roughly) have to try each possible key separately. So, for example, the combination of a 64-bit security layer (or set of layers) with another 64-bit security layer is a 64-bit security layer, not a 128-bit security layer, which you should know if you know what "parameter space" means.
Log filtering at the expense of having one more thing to manage, forget, or otherwise screw-up.
Ways I've seen this go wrong:
- someone fails to copy the port while communicating it
- the new whiz-bang AI security solution detects non-standard SSH ports and "quarantines" you while you try to figure out what happened and who to talk to
- someone manually "reviews" the firewall rules and locks you out of your own boxes
- someone builds a tool that uses SSH but doesn't allow non-standard ports
This is a feature. Allowing Unfriendly AIs or incompetent morons to dictate security policy will go horribly wrong eventually; nonstandard SSH ports help make it painful for you to allow them to gain a foothold in the first place.
Turning incompetence into actual problems is not a value-add. I already know the incompetence exists, so there's no informational value, and if I were in a position to do something about it, I would have, so there's no organizational value. Just wasted time.
I have certain hosts behind a single IP and forward SSH to them on arbitrarily chosen ports.
Do I still get random logins on those ports?
Why yes I do.
Does putting SSH on a different port make any difference?
No it doesn't.
Putting SSH on a different port is either done for a specific reason, or you're just deluding yourself that you've somehow reduced your attack surface.
It clearly makes a difference for me. My servers which have SSH listening on port 22 have countless failed login attempts. Those running it on another random port have almost none.
Oh, I'm aware it's "boxes". But, I got into the "boxen" habit in the middle 90s and now that I'm "old" I've chosen to become stubborn and stuck in my ways. Also, I think it's more fun. I've got other things to be pedantic about.
OpenSSH on port 22 gets at least one attempt a minute for me (and that's with fail2ban blocking IPs on a single failure.) Dropbear on port 2222 gets at most one an hour.
Haven't seen one in a while, but a steering wheel club lock had the same effect. A determined attacker could just cut the steering wheel and remove it, but there were a bunch of other cars without them.
EDIT: What happened to HN? It's my second post that gets downvoted for no reason in this thread. This whole post is mostly questions asking for clarification; how can they deserve to be downvoted?
> So when you have SSH listening on some uncommon port and see hits in your logs you have a reasonable assurance the attacker isn't some script kiddy spamming the "attack" button on some stupid app.
Do you remove a vulnerability by doing this? Whatever you are afraid the script kiddy may do can still happen, thus it is security by obscurity.
> You also have reasonable assurance blocking that host/network with fail2ban or equivalent will actually be effective.
Can you explain that to me? How is blocking a script kiddy any more or less effective? Block him... that's it...
> You can also set way more strict fail2ban rules since you're far less likely to ever lock out authorized traffic.
Wait, what? How will it make it less likely to block authorized traffic? As long as that traffic acts like authorized traffic, it should still be authorized, shouldn't it? Would you blanket-ban traffic that doesn't even try too many username/password combinations?
The argument is that moving ports act on the probability of the attack, not the possibility of the attack.
Reducing the probability of the attack attempt is good in itself and reduces the noise and thus allows easier detection of worse attackers in server logs.
Hell I'm not arguing security by obscurity doesn't have an effect. I do believe it does hide security vulnerabilities that can be attacked, but it's still security by obscurity.
>thus allows easier detection of worse attackers in server logs
I would be amazed if you could show me a single instance where people would see an attacker in server logs and act on it... quickly enough too that it's effective.
I'm pretty sure that it's extremely rare that anyone will look at logs, even small ones, to catch these kinds of situations. If that's really what you're suggesting as an effective way to protect yourself, then you should mention that doing so is required for this to be effective.
Fail2ban will act, noise or not... which was the point of my comment.
So first, moving the SSH port to something non-standard is not the only step in securing a machine. It's a filter.
On my little jump server at home I used to get tens to hundreds of login attempts per hour. The server only allows for pubkey authentication and my fail2ban rules were strict (block after a single failed login) so the probing never went anywhere. Unfortunately the block lists got huge if I set the block time to a meaningfully long period. It also kept my server at a constant low utilization.
I then started forwarding a non-standard port (that didn't end in 22) and the number of attempted logins dropped to a dozen in a week. I could set my f2b rules to expire after days and not see any performance impact from a big iptables blocklist. My server's baseline load also dropped since it wasn't dealing with constant noise. I could also set up alerts on failed login attempts since I wasn't getting inundated with noise.
I started with a secure setup and increased security by changing the SSH port. Less noise means I can actually see when things happen. It's an outlier event when I get a failed login on my server.
Why would you let the rule be shorter when the attacks are more targeted? Doesn't make sense to me to lower it.
> not see any performance impact from a big iptables blocklist.
Wouldn't it be a pretty marginal performance impact? Let's say you do get a significant performance impact - then the goal isn't security, it's performance. That's a new thing, performance by obscurity!
> I could also set up alerts on failed login attempts since I wasn't getting inundated with noise.
So you are looking for failed login attempts more now? I have a hard time believing this, but as I said in my previous message, if that was what made it more secure, it's a pretty important step to add, as I would ignore them whether it's 1000 an hour or 10 a week.
I'm curious, what would you even do if you were to see a failed attempt? At what point will you act and how would you act? Anything I can think of is meaningless.
Most distros have f2b's bantime and findtime set to ten minutes by default and a maxretry of 5. Those settings cover the script kiddies/bulk scans. Because of the short timeout your iptables block list isn't going to get too full for too long.
I set my bantime and findtime to much longer durations and maxretry to 2. This means my iptables block list is long for long periods and f2b has to do more work looking through the SSH logs when dealing with the noise of listening on port 22. Both have a non-zero effect on my little Atom powered jump server.
Cutting down the noise let me set even stricter f2b rules with far fewer resources being used on my server.
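For reference, that's only a few lines in jail.local (values here are examples; the day/week suffixes need a reasonably recent fail2ban, older versions want plain seconds):

    [sshd]
    enabled  = true
    maxretry = 2
    findtime = 1d
    bantime  = 1w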
As for "doing something", there's not much to do for a failed SSH login attempt. I do send myself notification emails but those are just a small section of a larger status email. If I were to see a lot of login attempts on my random high port I'd likely take down external access to my server or set up VPN access to my home network.
My goal is to avoid background noise and get on a "not worth it don't bother" list.
They mean filter out the low value noise (like you would filter out the constant hum of the PSU/grid), the scans on the usual ports (which all happen to be low port numbers).
An FBI agent from the Cyber Crimes division gave a talk while I was in college (>10 years ago). He was interested in brute force attacks against SSH daemons and created a couple of hypotheses around the number of logins and common passwords. To test this he set up two honeypots to record all of the username/password attempts. The first one listened on standard SSH port 22, the other listened on a random high-numbered port. He left both of these running for ~6 months.
Results:
The honey pot listening on standard port 22 received 1,000s of login attempts (sorry, don't remember the exact number). The honey pot listening on the random high-numbered port received exactly 0.
I know this is just an anecdote and it might not necessarily be true today, but this experiment always sticks in my head. At least the guy used the scientific method: created a hypothesis, conducted the experiment, analyzed his results.
What I've found these days monitoring my own network is that there is now 2 waves -- a port scan and then the attack.
If I change a port for anything to another random port I won't get any login attempts for a few days but eventually I start getting hit again. I can repeat this over and over. I imagine what is happening is that the bad actors are scanning for open ports and they feed that periodically to another process that attempts logins.
The second wave is likely when public port scanning services such as shodan re-scan your host. (I wonder how hard it would be to fingerprint and subsequently blackhole shodan et al's scanning traffic)
Anecdotally, I have a machine with an exposed SSH, on a high number port. I get brute force attempts on a regular basis against it, just way less than when I run it on the standard port number. Security by obscurity is just one part of the steps I take with that machine. Using a high port number is dead simple and easily handled client side too, so I just do it.
> It’s where you keep the mechanism secret, not the key.
I think this can be, as you write, defense in depth if the secret of the mechanism is not the only defense.
As an example, the block cipher for the Common Scrambling Algorithm https://en.wikipedia.org/wiki/Common_Scrambling_Algorithm was kept secret.
That seems to have delayed analysis of the system by about 8 years, but not harmed the scheme itself.
Technically defense in depth refers to multiple effective security measures (like cryptographic login), so security by obscurity isn't actually part of it.
(Moving SSH port plus something like fail2ban could be considered defense-in-depth against the incidental DDOS-like issues, though.)
Also anecdotally, I've been running SSH on the same 30xxx port since ~2004, including a cluster, which ran a public-facing service that was a popular target for various forms of abuse.
I recently tried changing the ssh port to remove log noise. Well, it certainly helped a little bit, but bots quickly found the new port and started to brute force it, so in the end it did not help, just reduced the noise. And as I don't see much difference between 100,000 attempts and 1,000 attempts, I decided to move it back. I don't care about brute force anyway, my passwords are not "root:root".
Let the server and client share a secret. Use that secret to encrypt the UTC date (2020-09-21), and sample some decimals from the first few bits (adding 100 or so, to avoid low-ports).
You could use that mechanism to rotate ports every 24 hours. This way, the bots wouldn't be able to learn the ssh port for more than 24 hours, without the shared secret.
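A toy sketch of that scheme, just to show the shape of it (assumes bash; the secret is obviously an example and the mapping to a port range is arbitrary):

    secret='correct-horse-battery-staple'
    today=$(date -u +%F)                      # e.g. 2020-09-21
    mac=$(printf '%s' "$today" | openssl dgst -sha256 -hmac "$secret" | awk '{print $NF}')
    port=$(( 0x${mac:0:4} % 64000 + 1100 ))   # stay above the privileged range
    echo "today's port: $port"

Both ends compute the same number, and the bots would have to start over every midnight UTC.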
Sounds like fun, or an easy way to lock yourself out of a box by mistake, depending on your perspective. :)
Or use a TOTP with a long period (10 minutes?) and use that value mod, say, 10k with a base of something like 9000. Easy to calculate the port in your head, impossible to guess without knowing the TOTP secret (I think) and can be extended with other fun* rules like "But subtract 10* the first digit" or "Add the first and second digits multiplied".
That's a fair point - e.g. Authy is currently broken on iOS 14 which means I'd be locked out if I was doing this and using Authy (although I have a VPN to one server which can then get to my others.)
When you have a large base of installations in a big organization, this can make a difference in practice because your incident responders have to sift through less data. This makes much less of a difference when you have great log management and SIEM systems in place. Many places don't, and some hygiene can make a difference at times.
When I see this in practice, the first thing I check is how auth is being done and the overall security of the host. Then, I look for how they are doing SIEM because cleaner logs is a common reason and they'd be better off with a more proactive management approach.
This matches my observations as well. For 20+ years I have run ssh on a high port, with the exception of my sftp server. The sftp server is hit every day, all day. I have received 0 hits to my ssh port on all my other servers. Even if they hit that port, they would not see anything, as I use a poor-man's port knocking using iptables string matching, but I would still see the attempts in the iptables counters and they are always 0.
FWIW, when I chose my port, I looked at port scanning statistics back in the day, looking for the least scanned ports. It appears those stats have held true for a couple decades at least.
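In case anyone wants the flavour of it, a poor-man's knock like that can be done purely in iptables (the magic string and ports below are made up, not my real ones):

    # a UDP packet carrying the magic string opens the real SSH port for that source for 60s
    iptables -A INPUT -p udp --dport 45678 -m string --algo bm --string 'open-sesame' \
             -m recent --name knock --set -j DROP
    iptables -A INPUT -p tcp --dport 49222 -m recent --name knock --rcheck --seconds 60 -j ACCEPT
    iptables -A INPUT -p tcp --dport 49222 -j DROP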
If the attackers are using botnets to distribute the load across IPs, then perhaps we need to distribute the detection across IPs: https://www.abuseipdb.com/fail2ban.html
I used to manage VoIP systems and VoIPBL[0] was amazing.
"VoIPBL is a distributed VoIP blacklist that is aimed to protects against VoIP Fraud and minimizing abuse for network that have publicly accessible PBX's"
It's very similar to what you linked but is targeted to catching VoIP abuse.
I think the concern is a botnet with n IPs is that fail2ban tracks individual IPs, so if you have any kind of grace period before bannination, they get a linear speedup of n, and if there's an expiration period, get to try n times harder than a single bored script kiddie.
Worse, from an economic perspective, there are enough hosts listening on port 22 that a bot can try those instead while they wait for timeouts, so you're not really imposing a cost on them. If you view running a botnet as a form of multi-armed bandit problem, the best you can really do is limit the economic value by slowing them down a tad versus their many, many other options.
But as soon as you discover that "PermitRootLogin" can be set to "no", all those brute force attempts become useless, since they can't match a valid user/password combination.
fail2ban has other uses: it prevents non-root user error (oops, one of your contractors reused a password…), it significantly reduces log noise, and it protects against any future exploit which doesn’t always work on the first 3 tries.
But for my usage it increases memory usage. I'm using it on an OrangePi Zero with 256MB RAM.
Port 22 is open to the world so anyone can connect. The device has 2 users - root and jacob. I made a change and disabled root login from the WAN. Now root can only log in from the LAN.
Since no one knows that "jacob" exists, I'm saved.
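One way to express "no root from the WAN, root from the LAN is fine" in sshd_config itself is a Match block (the LAN subnet is an example):

    PermitRootLogin no
    Match Address 192.168.1.0/24
        PermitRootLogin yes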
Not necessarily, plenty of people have common / guessable user accounts. For example every one of my servers in the cloud has an account called "user". (All my servers are also key-only authentication, obviously.)
Ditto, a combination of forbidden root login, an obscure username, fail2ban and disabled password authentication has worked well for me for the last 10 years. It's also quite simple to set up. The important part is to double and triple-check each step so that you don't lock yourself out (which has happened to me multiple times in the past, of course).
The entire point of everyone who rants against non standard ssh ports is that it adds more noise to the signal of "You should disable password login entirely and only use public / private key authentication."
And reading many of the responses in this thread it makes sense to me. So many people in this thread are talking about how many fewer failed password login attempts they get when they change the port, indicating they allow password-based login in the first place.
If you expect to be hit by that kind of attack (simple combination of username/password), then you should protect yourself from that kind of attack. It's never been easier nowadays to do this.
You may answer that you could still miss a few of these simple passwords, that your solution would be more effective - sure, but then you are using security by obscurity to protect yourself.
By the way, security by obscurity does work, it's not bad per se; as that FBI agent just proved, it does have an effect. If it didn't, there wouldn't be so many cases where it was used. The issue with security by obscurity is when you rely on it to protect you from vulnerabilities and then ignore them. It only lowers the likelihood of getting attacked; it doesn't make attacks less effective, and it doesn't protect you from any vulnerabilities.
Sadly, too many times, we just ignore that, hide everything and hope to avoid the targeted attacks which would foil that obscurity pretty quickly. This is when it gets bad.
I have the same experience with non-standard port usage, and I think it's a very reasonable thing to do, while also caring for the security of the service behind that socket. SecOps will thank you for not having to wade through log spam in the endeavor of preventing attacks.
> I know this is just an anecdote and it might not necessarily be true today, but this experiment always sticks in my head. At least the guy used the scientific method: created a hypothesis, conducted the experiment, analyzed his results.
I don't need research to see the difference in how much logging journalctl generates immediately after I disable port 22.
This is an interesting point. Imagine if you put a fake SSH agent on 22, it responds just like SSH but never allows a login. Would it make it even less likely that someone would bother trying another port?
Depends what we mean by sandbox. I wouldn't make a chroot the honeypot, but I don't see an issue with a program that just simulates a shell but doesn't allow exec or real fs access, for instance.
I don't think you would ever let them touch OS-level resources. There are plenty of third-party ssh server libraries where you just get a Reader and a Writer to the remote end. When they connect, you write "root@cool-computer# ". When they send bytes, you discard them, then print "root@cool-computer# " again.
While obviously accepting a TCP connection and allocating resources on your computer is more risky than just ignoring the connection, presumably it would be fun to do this, which is a good reason for doing something. You can set a memory limit, file descriptor limit, etc. and just crash if they're exceeded. You can run your little fake ssh daemon in gvisor and protect against attacks nobody even knows about yet. All in all, it would be pretty low risk, and also pretty interesting.
Moving SSH to a non-standard port is helpful only to reduce log noise coming from untargeted attacks. If an attacker is looking at your system, your attack surface is the same no matter where your SSH daemon is bound. I don't think it's worthy any extra effort to "distract" attackers like this.
> It’s an useless anecdote because SSH bruteforce attempts are not a threat and cost you nothing.
I can say from personal experience that this anecdote is both accurate (20 years ago and up till today) and meaningful.
No idea where you get this notion that brute force attacks are not a threat or without cost.
They certainly do pose a security risk (takes only one insufficiently trained employee/intern for a potential breach) and they certainly come at a cost (way beyond just dirty logs).
Botnets and fast networking stacks like DPDK have made port scanning the entire Internet a much more viable proposition than 20 years ago. Depending on your sshd settings you can be effectively locked out of your machine by a brute-force attack. Running on IPv6 and/or having a secondary sshd instance that only accepts connections from whitelisted IPs is cheap insurance.
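The second-instance trick is cheap to sketch (paths, port, username and source range here are examples):

    # a minimal extra config, e.g. /etc/ssh/sshd_rescue_config containing:
    #   Port 62222
    #   PasswordAuthentication no
    #   AllowUsers alice
    /usr/sbin/sshd -f /etc/ssh/sshd_rescue_config
    # and firewall it to your own ranges only
    iptables -A INPUT -p tcp --dport 62222 -s 198.51.100.0/24 -j ACCEPT
    iptables -A INPUT -p tcp --dport 62222 -j DROP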
That doesn't invalidate the observation (which I share) that these attempts are almost 0 when using a different port. It reduces logspam and if I start getting lots of brute force attempts on my non-standard port, this is useful and meaningful information (someone cares enough to do this).
> Botnets and fast networking stacks like DPDK have made port scanning the entire Internet a much more viable proposition than 20 years ago
True indeed, yet even today I have seen little evidence of scanning beyond standard ports (pretty much the same as in the past). Criminals are opportunistic by default and tend to go for low hanging fruit (standard ports, with standard server config). I certainly did see an increase on standard ports. Even while full range scanning has become more feasible, I have not seen much evidence of its use.
> takes only one insufficiently trained employee/intern for a potential breach
How? If they've leaked their key, why do you assume the port hasn't leaked too? On the other hand if they haven't leaked their key how would they get in?
Or are you allowing password authentication like it's 1999?
> Or are you allowing password authentication like it's 1999?
That is assuming you have such authority or technical means. If you're maintaining systems for a company, there's a good chance that the product vendor simply won't allow fucking around with their system like that (ergo: yes, in practice you are indeed stuck with your 1999 authentication).
I'm not saying that it is good security (that's why layered security is often paramount), but it is a situation I've encountered more than a few times.
Great for you, if you are GOD on all the systems you work with. Even then, your client/employer might simply tell you to stuff your objections and accept the bad authentication policy, because to them the risks are simply not worth the business disruption. I totally agree that is a flawed argument. But decisions aren't always (if ever) made on valid arguments.
Good for you, if you are in a position where you never had to deal with such real life situations.
Heck, some cloud providers have password logins by default. Since the instances are easy to setup I'd imagine many companies operating with a no-ops situation are vulnerable and don't even know it.
And then there are side projects. I remember being educated enough to know better, but doing it anyway as the server was a $5 digital ocean droplet, used to run a tiny minecraft server for some friends. Got brute forced and spent the next two weeks red-faced, trying to get DO to allow network access again so I could at least grab a backup before nerfing the droplet.
Now I use a basic ansible setup to automate changes to sshd so I don't have any excuse to be stupid again.
Not sure what you're arguing here. You either have control over sshd or you don't. Or are you really suggesting you can change the port of sshd but aren't allowed to disable password auth?
I'm a software engineer, so if my company gets hacked via ssh that's really not my problem. Worrying about such things would make me a busybody. But if you're a system admin and can't properly do your job, then I would seriously start looking for a new place to work. They will get hacked and you will be the guy that gets blamed.
> Not sure what you're arguing here. You either have control over sshd or you don't. Or are you really suggesting you can change the port of sshd but aren't allowed to disable password auth?
First, you'll have to separate two things here. One is the technical ability to control sshd, the second whether a company will allow you to tinker with the auth policy (whether that is password login, password login with only strong passwords, or rsa/ecdsa key access only).
The latter has nothing to do with control and only with what decision makers allow you to do (that sometimes is a large product vendor, not allowing anything beyond what they ship). If you work in a place where you have full control over the systems you work on, great for you. I can assure you that it is not the norm (unless we're talking about hobby projects or projects with exclusive personal ownership).
As for the technical aspect, keep in mind that changing the public facing ssh port might not even be done on the host itself, but e.g. in the port forwarding table of a router/firewall. And that isn't necessarily because it would be technically impossible to do on the box itself.
I'm pretty certain that tinkering with a box is regularly discouraged (especially if it is managed by some orchestration or vendor specific control/update tool), while effectively the same can be done by changing a router/firewall. There are a lot more things to be said about that, but please take it from me that hacking around in systems you have not built yourself isn't always a bright idea (and it happens to be a very common situation).
> But if you're a system admin and can't properly do your job, then I would seriously start looking for a new place to work.
That's an interesting theory, but frankly not how I think the real world (usually) works. As a system admin you are there to solve problems for a client or employer. You can (and should) of course always warn about potential dangers, but refusing work or quitting a job/assignment because you're not getting full control over a system... good luck with that. It is simply not an acceptable position in many situations. You must be in really high demand if you want to pull stunts like those and still have any work after a while.
Maybe it works different in software engineering land, but I highly doubt it. When was the last time you quit a job, because you preferred a different library or framework over the one your superiors/client dictated?
Please don't get me wrong. On a personal level I'm very principled about what I choose to work on or with (and what I refuse to take part in). But at the end of the day we are professionals, here to solve problems. If we can, and a client/employer is willing to accept the risks of an imperfect solution that fits their requirements, it ultimately is their call and responsibility. All within reason, of course.
> tinker with the auth policy (whether that is password login, password login with only strong passwords, or rsa/ecdsa key access only).
Or the port ssh works on
> that sometimes is a large product vendor, not allowing anything beyond what they ship
Surely you'd limit access to that on an IP level and bounce via a bastion (which you do control)
> tinkering with a box is regularly discouraged (especially if it is managed by some orchestration or vendor specific control/update tool), while effectively the same can be done by changing a router/firewall
"Tinkering" with a router/firewall sounds far more dangerous than a box -- you can knock out 2000 machines in one go.
> That's an interesting theory, but frankly not how I think the real world (usually) works
If the shit hits the fan, are you confident your management (which apparently refuses to allow you to implement basic security policies) will have your back, or will they pile the entire blame on you to save their skin.
Better to look for a company that respects your skills before you get pink slipped.
Calling that part of the auth policy (in the context I was responding to) is a bit of a stretch, but okay.
> Surely you'd limit access to that on an IP level and bounce via a bastion (which you do control).
What percentage of organizations have you seen do it that way? In my experience it's more often directly behind an internet facing NAT router, through a port forward. I'm not saying that's a good thing, I'm saying it's reality.
> "Tinkering" with a router/firewall sounds far more dangerous than a box -- you can knock out 2000 machines in one go.
You again appear to be missing the point I tried to make. It's not so much about danger, but more about control. A box is regularly far more of a black hole (especially if it's a vendor appliance or legacy system) than a company's router/firewall is. Sure, not without dangers, but that's why you're a professional that (hopefully) knows what he/she is doing. How often did you work on a router/firewall that controlled 2000 machines? In my case, I can count those on one or maybe two hands.
> If the shit hits the fan, are you confident your management (which apparently refuses to allow you to implement basic security policies) will have your back, or will they pile the entire blame on you to save their skin.
It works a bit different if you're contracted or working for clients, but either way: that's why you document things and make clear to those who make decisions that the risks are theirs and not yours.
--
But seriously though .. I'm not sure if you genuinely missed the point(s) I tried to make, if you might be pedantic on purpose (just for the sake of it), if you might be just another armchair general, or maybe have only worked in very privileged positions where you had full control and authority over the systems you had to deal with. The latter is certainly not the reality I've experienced for over two decades.
Maybe you are experienced, just in a very different reality/industry than mine. Still, I find these kinds of arguments about companies "not allowing you to do basic security" or "not respecting your skills" rather childish and out of touch with reality. I have not seen many gigs/companies where sysadmins (or even -architects) have this kind of god-like status. When I did see such situations, it often meant a company would have serious (potential) issues if/when their "guru" pissed off (leaving a collection of equipment in "status unknown", i.e. the next guy would not be allowed to touch anything - ergo my point about tinkering with boxes being discouraged).
How long have you been doing this (professionally)? That's not a rhetorical question. I'm genuinely curious.
Network administration (and system administration of about 150 linux machines) for about 10 years. I did a full port sweep of my network a few days ago, 1,555 IPs with port 80 open (although to be fair several of those are multi-IP connected). Before that 7 years of system administration and development.
My (my team's) network policy is that those web ports are not exposed on the internet - we provide proxies with 2 factor authentication up front. We find we get far happier users when we use carrots.
We operate a high wall policy, and while we do push towards a secure-everywhere system, we are more flexible than other corporate networks, and tend not to have exacting requirements. Your black box wants to use SNMP v2? Of course it does, that's fine. No, you're not probing it from the internet though; we'll work with you to increase security.
If a team want a device that claims to run a proprietary protocol and needs TCP ports from the internet, that's fine, we do it. We discovered recently one of these devices was actually running a standard webserver on one of these ports after a firmware update. The device user didn't even know.
Ultimately we provide a network, you can take it or leave it, there's competition (go for one of the two non-shadow IT networks, or build your own).
I know from personal experience what happens when things go bad against my team's advice; all those emails saying "this will go bad" are worth jack squat. Fortunately we had very good air support for that specific event (front page national news), nobody cares about "I told you so".
From the context of your description, your earlier arguments certainly make more sense, more than they would in the situations I'm personally more familiar with.
My main conclusion here is that we are both referring to very different (work) environments.
For what it's worth, given the context you've described, I can well understand and support your arguments.
That said, I'm not so sure your context is representative of the industry as a whole. To be fair, I really don't know what would be. Only that we apparently have worked in very different environments with very different conditions and requirements.
Just to be clear: when I argued about warning for the risks of particular choices, I was never referring to internal company communications. Those are indeed worth little to nothing (if shit happens). I was referring to communication between separate legal entities. Within B2B contract work, such communications do quickly become crucial (if shit happens), even from a legal perspective.
I hope you can see that while your arguments do hold well within the world you know, they might not be so applicable or useful in other (certainly existing) situations. The world certainly is more diverse than the context you've given.
Kind regards, and thank you for answering my question.
> Within B2B contract work, such communications do quickly become crucial (if shit happens), even from a legal perspective.
Hah. Depends how large the companies are, but again in my experience all that B2B stuff is meaningless. Maybe it's only my company that's terrible at writing, measuring and enforcing service levels, and certainly awful at extracting any penalties (I guess because any downtime doesn't have a direct monetary loss, just a reputation loss which eventually leads to monetary loss).
That said my entire industry (broadcast) relies massively on IT - far more than in the past - and has absolutely no clue about security. In 20 minutes I found 130 of 200 devices on the internet with default credentials open on port 80. Case in point, using Shodan, I can find a server and see within seconds that a Polish broadcaster is currently streaming some people playing violins from a studio - not sure if this is a live TV broadcast or being taped for later, it might be going out on "Program 2" on Polskie Radio, but I'm not an expert in the Polish broadcast landscape.
I'm just amazed how anyone could be in a position to have the knowledge and authority to change the port SSH is listening on (thus breaking people's workflows), but not to change away from using passwords, even if a bastion and/or IP whitelisting isn't allowed.
They cause some real work because of the log noise they create. It's easier to see targeted SSH attacks if all the undirected attacks are filtered away.
This is absolutely true. I use fail2ban and I often find that it's using rather more CPU than I'd like. Sounds like moving my SSH port might solve that!
If you get 10,000 attempts on port 22, you're probably connected to the internet. If you get 10,000 attempts on port 63290, someone has taken a specific interest in you.
Personally? I'd decide the utility of having it public-facing is no longer worth the risk, and firewall it down to a much narrower set of source networks. I'd probably take a moment to brush up on my key hygiene too.
The fact that someone bothered to scan the entire range (or find your port at random) might indicate that they're specifically targeting you, and just being aware of that is an upside.
It shouldn't, but it does. Many smaller companies driven by business people, where tech is maybe just seen as a necessity on the side, need a narrative like "people are trying to get in and if they do it's going to be a disaster" to take security seriously. That, or they only start at the point where the disaster strikes.
I'm not really sure why this point was voted down below either; just because you work for someone who takes security seriously (at least to the point where it's insurance-satisfyingly safe) does not mean everyone does.
Years ago I worked at a small agency and every bit of time I spent had to be justified and produce tangible/visible results. "But is anyone really going to try to hack this local business" was a question I actually had to answer, since most other employees were creatives.
There's a lot of value. For example, if you see failed logins against random user names like "dbadmin" or "root" it's likely just random scanning, but what if suddenly lots and lots of valid user names appear?
That's a great point, but I get back to the root question: who's actually looking at this? If people are examining logs it's usually for a particular trigger or a problem and filtering that signal from the noise is hard.
It's more typical of the servers-as-pets than servers-as-cattle scenario, but sometimes one is simply curious [or extra cautious]. SSH honeypots exist at least in part for this reason.
Well that would highly depend on what I'm seeing. If it's a single user there might be an attack on the way against that user. If it's multiple users, there might have been a compromise of some credentials.
It's definitely something you need to investigate.
Did you ever have the "pleasure" of a server grinding to a halt because the logs filled up all the space? To where you had to mount the disk on another system and clean it up before it would boot again. Can be a bitch if it's a machine in a remote location. Not everything is cloud (yet) these days.
Granted, there usually is a lot more at fault when you run into such problems, but I find people not looking at logs a rather weak argument for letting them get spammed full with garbage. Certainly terrible hygiene, at least.
>Did you ever have the "pleasure" of a server grinding to a halt because the logs filled up all the space?
I've never seen this issue on any systems I manage, mostly because they all have log rotation.
>but I find people not looking at logs a rather weak argument for letting them get spammed full with garbage. Certainly terrible hygiene, at least.
Why is it a weak argument? If it's something that doesn't materially impact you, why should you expend effort on remediating it? Hygiene is only important for things we interact with on a regular basis. We as a society don't care about the hygiene of the sewer system, for instance.
>I've never seen this issue on any systems I manage, mostly because they all have log rotation.
Ah, yes - the age-old claim that log rotation will magically stop a belligerent from dumping 100s of gigs of log files before `logrotate` has time to run ... filling up your disk
And even if logrotate did try to run, you have no space for the compressed file to live while it's being made.
> Everyone should be using logrotate, and if they actually read the things, shipping logs to ELK or Splunk or Greylog or whatever.
Certainly they should. That is, if they have that much control over the server and if it's not some legacy system built by some defunct organization or John Doe. I do not disagree with your theory, on the contrary. But then there is reality, where this theory isn't always feasible.
> Keeping the log file on the boot partition was the first mistake
Wrong assumption. With logs on a full system (but not boot) disk, your system can still grind to a halt during boot. Sure, if you do have access to the bootloader, you can do an emergency/recovery boot. But you do not always have that on systems built by others (especially product vendors).
I would not be making this point if I had not run into situations where this was an actual problem. I can assure you it was never the result of my personal bad architecture or maintenance and almost exclusively while dealing with third party products.
It would be valid to argue they should get their shit together, but the reality is that at the end of the day, companies buy systems like these and you still will have to deal with them.
This article engages in quite a bit of mental gymnastics to avoid calling this tactic exactly what it is: security by obscurity.
What are you doing here? SSH generally runs on port 22. You move it to an alternate port to thwart a naive intruder's attempt to establish a connection to a service running on that port.
You are still running SSH with the same old setup. If it was insecure before it remains insecure. The only thing you've done is "obscure" how to connect to it by using an uncommonly used port.
Why all the hate on "Security by Obscurity"? It's a perfectly valid thing to do. It will definitely defend against a host of intrusion attempts. It's only cast in a negative light when it's the ONLY thing you do, or if you consider it sufficient in terms of security.
FWIW the key analogy is broken. Hiding a key to your house in a plant is a form of "Security by Obscurity". Anyone that finds the key can open the door. That's not very secure. So we hide the key so that only people who know where it is can find it. If an intruder found the key they'd immediately have access to the house. We rely on the obscurity of its location to provide whatever security it has to offer.
You could achieve the same effect by having a hidden door and leaving the key out in the open, or by hiding both. What use is the key if the location of the door it opens is "obscure"? This works because access is defined by knowing the location of the key and the door simultaneously.
So... Are you saying that moving your ssh service to a different port doesn't actually increase your security at all? In my experience it gets rid of ~95% of probes from people trying to break into my system. Whether it just signals that I know what I'm doing, or demands resources that the attackers aren't willing to give, I don't know, but it definitely has a positive effect on the security of my system.
>So... Are you saying that moving your ssh service to a different port doesn't actually increase your security at all
No, I think if you read particularly the "Why all the hate..." paragraph you'll see I'm saying the opposite of that. There is nothing wrong with obscurity, but in terms of security it's the icing, not the cake.
By that I mean it will make a secure setup more secure, but it's not enough on its own.
Building a house and putting the front door on the side or back of the house doesn't change the security of the door. Whether it has a weak lock or not doesn't depend on its location.
You could do something like that, but you'd have to write it yourself. If you want to ban IP addresses, fail2ban is probably an easier starting point. Plus users who authenticate with passwords might occasionally make typos and get banned, incentivizing better logon techniques.
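For illustration, a minimal fail2ban jail for sshd might look roughly like the sketch below; the port, retry count and ban times are made-up examples, and older fail2ban releases expect the times in plain seconds:
# /etc/fail2ban/jail.local (sketch)
[sshd]
enabled  = true
port     = 2222
maxretry = 3
findtime = 10m
bantime  = 1h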
For my random high-numbered ssh port, I see 5-15 attempts per day. In the last 3 months, the most failed attempts from a single IP is 3. Most attackers (at least amongst those that try higher ports) are distributed these days.
Albeit a now invisible knob that must be searched for via a door scan. While I don’t personally switch my ssh port from default, I would imagine doing so would cut down on a ton of the noise from lowest common denominator ssh attempts.
I have changed some default ports and seeing the number of connection attempts drop from <many> down to close to zero was just nice. There are just fewer connections opened, fewer logs to analyze and so on. Fewer connection attempts means less data to analyze, which in my view is both a security and a cost benefit.
The problem is when you only use security by obscurity. Just changing the port does not do enough to make your insecure application secure. For example, if you have a Windows 2000 server on the internet, it doesn't matter what port you change IIS to, it's going to get hacked.
To use your analogy, if your lock is broken, or can be broken, it doesn't matter where you hide the key.
You're missing the point; the point is that it's only bad when it's used instead of (say) disabling SSH if it's not needed, or using the latest release, or using secure ciphers.
Doing all of the above as well as changing your sshd port will only improve your posture.
What makes this even more ridiculous is by the most literal definition, most of modern cryptography is "security by obscurity." My TLS or SSH connection remains secure because my selected private key remains obscure and hard to find in a space of 2^128 possible keys. Only a handful of techniques (quantum key exchange, one-time pads) are not relying on obscurity for their security.
Security through obscurity works perfectly well so long as you correctly quantify the size of the search space, the bits of information provided as hints, and the cost of searching--which basically describes what much of modern cryptography is about, at the primitive construction layer. By the same measure, a multi-million dollar cache in the American Rocky Mountains remained unfound for a decade despite there being a poem and short story filled with clues as to its location[1]. If I go totally off-grid for 6 months, and then come out of the wilderness saying that at some point during that wandering I cached my valuable possessions in a secret location, you're not going to find it.
The admonition not to use security by obscurity is well-meaning but requires explanation. We shouldn't hide things in ad-hoc search spaces that haven't been adequately formalized. We should prefer mathematical puzzles which have well characterized properties. And generally that is good advice. But that doesn't mean we shouldn't also throw in whatever extra protections are convenient, even if they're ad-hoc or provide insufficient bits of security by themselves.
Yes, the attack surface before and after is the same, ergo this is not security at all. The difference between 22 and 2222 is a nominal delay on a port scan.
It would be far easier and more convenient to use an IP whitelist and a firewall to protect access to 22.
The parent changed his message so this is no longer in direct response but it's probably still valuable:
By that logic so are passwords and the key itself: you simply have an "obscured" notch configuration on the grooves of the key. My ssh private key is also just a very uncommon sequence of bytes!
It's all encompassing.
So either your interpretation needs work, keys and passwords are insecure, or obscurity is actually totally fine.
So if I have your private key and your hosts definition in your ssh config, you're still secure?
Cool, send them to me and I'll show you otherwise. My email address is in my profile.
The point was that nuance is important. Keyspace size, the cost and penalties of a guess, whether any information leaks in the process - these are what matter.
These cutesy memorable rhyming platitudes in any discipline are usually either meaningless or counterproductive.
I could for instance, use a very long base64 url endpoint as effectively a shared password. If you look at the cost, leakage, and keyspace analysis you can tell why it works or it doesn't far more than the old obscurity aphorism.
I think some of computer security theory could be taught via physical lockpicks pretty well so long as the students actually do the physical lockpicking act. I think the concepts translate well to the material reality especially if you have a large diversity of locks.
It actually sounds like a pretty brilliant educational approach.
Computer security through lockpicking. Requirements: introductory number theory, discrete mathematics, 2 or more programming classes
Please don't post like this, i.e. just putting someone else down. It's not in the spirit of the site and doesn't add any information. If you know more, that's great, but in that case please share some of what you know, so we all can learn.
Moving your SSH port isn't really about security. It's about reducing noise in your logs from annoying port scanners constantly hitting common ports. Although that does then have the benefit of making your logs more useful for detecting actual attacks. And it probably reduces some CPU cycles too as an added bonus.
I've had the SSH server for my host on a non-standard port for well over a decade now. My observations:
Back in the mid-2000's, botnets were small, the Internet was slower, and there was a lot of low-hanging fruit. Putting your SSH server on a non-standard port was a good way of reducing log clutter.
Today, though, it doesn't matter. I get as many login attempts on my non-standard ports as I see on hosts running SSH on port 22. This is because for at least the past few years, it has been feasible and indeed entirely commonplace to scan the whole IPv4 internet, 65k ports and all. Companies like Shodan and Censys sell access to detailed maps of basically everything on the routable Internet. There is no such thing as "hiding" a service on some little-used port anymore.
One can make the argument that attack surface is inversely proportional to security, so putting services on non-standard ports is in fact an increase in security. It's just that even back in the day, that increase was too small to seriously consider and today it's microscopic.
People really lose their mind over changing the SSH port. It's not an either/or choice between a non-standard port and all of the other SSH security settings. Nearly anybody that bothers to change the port is also going to follow most of the other industry-standard practices.
It's pretty much one of the first things I do when setting up a system... ssh port, no password logins, ufw. Of course, when I mess one of those up, I'm doing it again... but in general, that's my getting started step.
If it's a really important system I'll go a step farther and enable a port knocking sequence... but that's about the extent of it. Generally speaking, I rarely setup port knocking.
> Nearly anybody that bothers to change the port is also going to follow most of the other industry-standard practices
If they're following industry standard practice (I'm hoping you mean "key-based auth only") then there is absolutely zero reason to move the port, other than to avoid filling up your logs, and two IPTables rules take care of that anyway.
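For reference, the kind of two-rule iptables recipe usually meant here is a per-source rate limit on new SSH connections, roughly like the sketch below (module choice and thresholds are illustrative):
iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW -m recent --name ssh --set
iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW -m recent --name ssh --update --seconds 60 --hitcount 4 -j DROP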
Moving the port is like being at a bar and needing to go to the bathroom, but first leaving your cellphone on the bar, covered with a little napkin. It is literally obscuring a thing, hoping that it is secure.
Honestly, if people insist on security by obscurity, they should just do it already and stop apologizing for it.
One thing to keep in mind here is that there is no absolute security. There is only security against some specific threat.
If the threat is "someone is specifically targeting you", then changing the port is indeed useless. However, if the threat is "a botnet trying to abuse a new openSSH vulnerability", then changing the port does help. Because no bot takes the time to scan tens of thousands of ports in that case when there are millions of other hosts around who listen on port 22.
By the way, when I lived in a dormitory, it was actually best practice to hide your valuables from view if you lived in a ground floor apartment. Literally obscuring the thing, but thieves only broke into the apartment where they knew there's something to steal quickly. A napkin over your wallet might not help in a bar but it did help in this case.
> However, if the threat is "a botnet trying to abuse a new openSSH vulnerability", then changing the port does help. Because no bot takes the time to scan tens of thousands of ports in that case when there are millions of other hosts around who listen on port 22.
It takes about 8 minutes to scan the whole internet for port 22. If there's an SSH 0day, and I have a botnet, I am going to make the entire botnet scan the entire internet on every port, because I know people move their SSH port. With 5,000 hosts, I can scan all 65,535 TCP ports across the entire internet in 104 minutes.
How fast can you patch your server after the 0day's been announced? What if you're asleep?
The time it takes to change the SSH port could also be used to harden the sshd config, which would eliminate several classes of previous exploits. A little more time and the entire system could be hardened, so that even if you are attacked, the attacker can't do much of anything.
I know some researchers have set up public services like NTP servers to try and build a catalog of active addresses but with the privacy address extensions even that seems functionally infeasible to me. But most of the things I care about are on private networks these days so I haven't had much cause to seriously investigate.
I haven't; there are definitely some protocol scanning differences. But off the top of my head, you could try 1) do a reverse lookup on a v4 address and then query the name for a v6 address, 2) do DNS probing for a domain you're interested in, 3) try the v6-in-v4 mapping to find v6 addresses mapped to v4 addresses. If you could sit on a backbone and sniff DNS traffic or something, that'd be a nice sized cache of addresses.
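To make heuristic (1) concrete, a rough sketch with dig; the address is from the documentation range and the hostname is hypothetical:
dig -x 192.0.2.10 +short            # suppose this returns host.example.net.
dig AAAA host.example.net +short    # then ask that name for its IPv6 address, if one is published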
> Literally obscuring the thing, but thieves only broke into the apartment where they knew there's something to steal quickly. A napkin over your wallet might not help in a bar but it did help in this case.
The same approach works well for leaving stuff in cars. Visible bags or electronics tend to get stolen. The same thing under a sweatshirt doesn't.
A major reason to move the port is exactly so that you can log when someone does hit 22, especially internally.
If you standardize on port 5555, for example, and all of your tooling understands this, now you can audit 22. Anyone hitting 22 is either not using your standard tooling, or is at least potentially an attacker. Being alerted to an attacker is not a small thing.
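One crude way to implement that audit with iptables (the log prefix and rate limit are illustrative, and it assumes nothing legitimate should ever hit 22):
iptables -A INPUT -p tcp --dport 22 -m limit --limit 5/min -j LOG --log-prefix "unexpected-ssh-22: "
iptables -A INPUT -p tcp --dport 22 -j DROP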
There is a flip side of that as well though. You may have a tool that tries to look at all SSH traffic across a fleet of servers (maybe trying to identify compromised internal hosts). If every server has sshd running on a different random port then you can't do this (or at least it becomes a lot harder).
More generally, I think there is in fact a trade-off involved in deciding to explicitly ignore standards. Whether it hobbles automated network analysis or just causes friction in the development process, it can add up over time. That is not to say it is always a bad idea but I would just caution that, just as you shouldn't blindly reject "security by obscurity" you shouldn't blindly accept "security through obscurity" even if you have layered defenses. There is no substitute for thinking about individual circumstances and weighing explicit trade-offs.
I'd be curious to see what happens if you set up a sophisticated fake SSH daemon on port 22. Give them access to a virtual machine connected to that port, and log every command they try to run.
If you interpret the term literally, then yes. However that's not the correct approach when dealing with terms of art like 'security through obscurity.'
Wikipedia describes the term as it's actually used (emphasis mine):
> Security through obscurity (or security by obscurity) is the reliance in security engineering on design or implementation secrecy as the main method of providing security to a system or component.
A properly configured sshd on a non-standard port is not in any remote way using obscurity as its main method of security.
I don't apologize for it, but I don't think yours is the only argument that can be made.
What if a security hole is discovered in OpenSSL and overnight all those bots smashing port 22 are tweaked to take advantage of it? Let's assume a vulnerability that doesn't care about password auth being disabled. Sure, they might discover my sshd on port 17263 or whatever eventually but I am surely going to get a significant head start on all of the people having their boxes hacked left right and center.
Sure, it's still security by obscurity, but it's nothing like the napkin example. It's more like hiding your phone under a table that has 20 other phones on top of it. When I hear about phones getting stolen, I've got a while to get mine back.
That works both ways though. Imagine in your scenario you have thousands of servers across a large organization, many of which may be running on auto-pilot and don't have any active team maintaining them. When the hypothesized critical OpenSSL bug pops up, the first thing you will want to do is find all the servers in your network running sshd so you can patch them. If they are all listening on some random port then that task becomes much, much harder.
> If they're following industry standard practice (I'm hoping you mean "key-based auth only") then there is absolutely zero reason to move the port, other than to avoid filling up your logs, and two IPTables rules take care of that anyway.
This is what's known as 'putting all your eggs in one basket'. Relying on a strong auth mechanism is extremely sensible, but relying only on that isn't. Exploits are found even in the oldest and best reviewed code. Nothing is 100% safe. If an exploit is discovered in the auth mechanism you're relying on, then the obscurity of having SSH on a different port, with a mechanism to block port scanning, could at least delay an attacker long enough for you to shut down auth until there's a fix. Considering the cost of switching ports is so low, it seems foolhardy not to.
> Adding obscurity to your existing security stack / layers improves your security.
In theory yes. In practice you are increasing the chance of someone else thinking "Oh this is secure I don't need to do that". I saw this very recently in several broadcasters all over the world - they have a broadcast server which comes with a well known default admin password for the webserver on port 80.
Several of these blocked port 80 externally, but allowed another port (7071), as it said to do so in the manual. They then didn't bother changing the password because "port 80 is blocked".
Had port 80 not been blocked they would have been more likely to think "Ooh I must change the password".
Now it's not good they didn't change it anyway, but your flat statement that "Adding obscurity to your existing security stack / layers improves your security" only applies if nothing else in your security stack changes.
I check port 22 on my public IP addresses to ensure that any servers are up to date. Occasionally I'll find an out-of-date one and yell at the department to fix it.
That's far better than blocking port 22, as they then listen on port 2222 or similar, and never update their ssh servers, and then when someone does stumble across it, nobody has a clue it's happened.
Never underestimate the ability of people to work around your "improved security" by introducing worse security problems.
I mentioned this in another response, but I'll refine my argument here.
Security by obscurity's negative connotation comes from the ratio of effort on the part of the system implementer to the effort on the part of the attacker. While it may take time to break into an "obscured" system, it also takes a lot of effort to build the system in the first place. Worse, if the security is broken, you have to re-implement from scratch.
Better approaches decouple the implementation/methods for enforcing security from the random components of the system that need to be kept secret. Whether or not this increases the effort for the attacker, it drastically reduces the effort for the defender. If an RSA key is cracked, a new one can be generated in less than a second, and once deployed the attacker is back to square one. Having SSH on a different port adds the cost of a full port-scan to any attempt to break in, vs the cost of changing one line in a config file.
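For concreteness, that one line is the Port directive in sshd_config (the number below is an arbitrary example), followed by a daemon reload:
# /etc/ssh/sshd_config
Port 54321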
That all being said, there is a major difference between changing your port number and changing your RSA key: The attacker/defender effort ratio scales linearly (vs exponentially). Every time you change your port number (a fixed cost), the attacker has to run another port scan (another fixed cost) to get past this layer. By contrast, the amount of effort needed to crack an RSA key is exponentially higher than the cost required to create a new one. However, adding a fixed port-scanning cost to each attack does actually correspond to a very high overhead for a non-targeted attack (their costs go up by ~2^16).
I don't care if it's security by obscurity or not; it's not stopping a targeted attack and it's confusing the people who need to use SSH much more than the attackers.
If you're woken up at 3 AM because something is wrong with some random server, you want to be able to access it fairly easily. Maybe you and your team monitor servers for customers who decide that SSH should be moved to some random port. Even with the best documentation in the world, setting the port on their SSH connection is something people will miss.
If you're able to communicate that you moved the SSH port to your team, then you're most likely also able to simply whitelist the right IPs in your firewall. Just do that instead and leave the port at 22.
We manage a ton of servers for random clients and they just need to whitelist two or three IPs depending on the team(s) that needs access. Don't expose SSH to the entire internet unless you need random people to access it, and in that case moving the port just seems to be a support case waiting to happen when someone forgets to set the port.
I call this a non-issue. We use non-default SSH ports and every single person in engineering, support and among customer integrators is just fine with it. When 99% of all equipment is on non-default ports, everyone just expects to look up the port before attempting a connection. And we have a lot of servers in use.
What you are talking about is when a company used port 22 and then a minority of servers were changed, some weren't, it wasn't communicated, and it caused a mess. But when it's done company-wide on a majority of equipment, it becomes just a thing we do, not a problem at all.
I like that. If you do move the SSH port, do it everywhere and randomly, then it's not unexpected. I still feel that other security measures should be taken first, most of which would make moving the port pointless.
You can do both, and in general most management teams should be at the very least using ssh configs, if not full inventory management a la Ansible et al.
Whitelisting IPs may work with your corporate vpn, it most certainly does not work for hobbyists on consumer ISPs.
Moving the port deflects a lot of low effort attacks, and if the ssh server is otherwise still well secured then it does no harm and provides real benefit.
Depending on what your tasks are, distributing ssh configs to 100+ people isn't that feasible. Yes, ssh configs help, but you're back to ensuring that the information is correctly distributed. Chances are that the servers and users are managed somehow, while laptops and workstations may or may not be. You could use a jump host, but then why not just whitelist the IP for the jump host?
Just disabling password logins is just as good or better than moving the port. Don't want your logs to explode? logrotate!
If it's just for a hobbyist on a consumer ISP then sure move your port, implement port knocking, use VNC, I don't care. For business, moving the SSH port is a weird thing to do, it's not that effective and you should already have other security measures in place that makes moving a port pointless.
You could probably do something like the following in your ssh_config. Of course that's assuming that all the servers are adhering to some internal standard instead of the standard standard.
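A sketch of what that might look like; the domain suffix and port are hypothetical placeholders for whatever the internal standard is:
# ~/.ssh/config (sketch)
Host *.internal.example.com
    Port 5522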
That's also assuming that all your servers are under one domain. We manage other people's servers, so they are normally under that customer's domain, rarely ours.
Exactly. Standards exist for a reason. The only thing you are obscuring is real authorized access. Bots don't care whether the service is on port 22 or 38692, they will just probe them all.
Yes, and that's what tools like tcp wrapper (or iptables) are there for. I use it to disallow whole networks from Brazil, China, etc so they don't even reach sshd to talk to it. Less log clutter, less to do for fail2ban if installed.
I don't think changing the SSH port is bad; at least it avoids some spam in the logs from all the automated login attempts. But for SSH there is a single, obvious security measure you should always take if at all possible, and that is to disallow passwords and use public key authentication exclusively. Weak passwords are the biggest danger for SSH, and this solves that specific problem entirely.
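In sshd_config terms that boils down to something like the following (on older OpenSSH releases the second directive is spelled ChallengeResponseAuthentication):
# /etc/ssh/sshd_config
PasswordAuthentication no
KbdInteractiveAuthentication no
PubkeyAuthentication yes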
This discussion tends towards tediousness because people never define the security goals they're trying to achieve with their pet definitions.
Yes: moving your ssh daemon to run on a different port is "futile" if your goal is to prevent a competent attacker that is single-mindedly focused on attacking a specific host and willing to continue investing time and effort until they succeed.
Yes: having a port-knocking sequence is "futile" if your goal is to prevent an attacker that has the ability to perform full packet capture on all your traffic and is prepared to wait and observe an incoming ssh connection.
But, on the other hand:
Yes: having ssh at all is "futile" if your goal is to prevent an attacker that has the ability to identify your authorized users and then compromise them by kidnapping their families and threatening to murder them.
But even if you accept the author's distinction, the conclusion seems wrong -- moving your SSH port is security through obscurity, albeit relatively weak.
If I attempt to connect to port 22 and there's nothing there, then I don't know whether the machine has SSH or not. So the mechanism is being hidden. Therefore... security through obscurity.
(If, on the other hand, a connection to port 22 somehow failed with an error message "wrong SSH port!" then, in that case, it wouldn't be obscurity.)
The main reasons it's weak are that you can port scan to discover whether it exists, and that many machines are likely to have SSH in the first place because alternatives are less common.
But it is still a first layer of security through obscurity, by definition. (Regardless of whether you think it's useful or good or not.)
This response is missing the author's point. There's a distinction between obscuring the methods and obscuring a parameter to those methods.
Security by obscurity, and in particular its negative connotation refers to obscuring methods. The reason it has a negative connotation is that the amount of work it takes to implement a secure system is high relative to the amount it takes to break it. By contrast, the amount of work needed to generate an RSA key, new port number, etc... is incredibly low relative to the amount of work it takes someone to break it.
Security by layers is important, but the article's point is that the ratio of effort to benefit for changing your port number is far far higher than the term "security by obscurity" generally implies.
> This response is missing the author's point. There's a distinction between obscuring the methods and obscuring a parameter to those methods.
But that's exactly the distinction I'm saying the author is getting wrong.
Moving away from the default SSH port is obscuring the method, because it's no longer a single step to check whether the server even has SSH or not. It might not at all, so the port number doesn't serve primarily as a parameter -- it serves primarily to obscure its existence in the first place.
To use the article's presidential motorcade analogy, it's not like knowing there's a motorcade but not knowing which car the president is in. Instead, it's like not knowing if there's a motorcade at all, and having to check every car in the city to see if the president is even out in public in the first place.
You don't have to be faster than the bear, just faster than the other camper. Assuming the bear is just hungry, that is; if you messed with her cubs, she's coming for you specifically.
These are, of course, opportunistic targets and specific targets. Moving the ssh port helps with the former, but not the latter.
There is some economic cost with portscanning everything, but the bigger impact is that if you moved your ssh port you're more likely to also have turned off password auth and more likely to be running fail2ban and more likely to be running an IDS. Spammers don't want to deal with you making their livelihood more difficult. Poor spammers.
People who see brute force attacks chasing their port moves are probably specific targets. Ears up.
> It’s fine that people know you changed your SSH port. But now they have to scan all the ports and find the new one. Sure, that’s not too difficult, but how many people are actually going to do that? Not many, because it’s costly in terms of time and effort.
I don't buy it. Scanning the whole IPv4 internet including all ports is not all that costly. Perhaps the effort would deter your proverbial script kiddie running a scan from home, but are those really a threat to your SSH setup?
It could be, but you can always reuse the mapping result for many subsequent attacks without losing that much accuracy. IP addresses will drift due to reassignments over time, but I can easily see a map remain usable for a week or a month. You could also just sell/buy it from someone else, which costs less.
That's from a single machine with 100kPPS. A single cloud machine will get you more, and you can rent a bunch of those.
And it's not like you need to scan all of the internet, limiting yourself to the address ranges of the big hosters will cut down the time again.
But really, the question is what are you defending against? What's your threat model? There's a published SSH 0-day you didn't patch yet? Ok, maybe that'll buy you a few hours of safety, but that's assuming nobody built a database of reachable ports in advance (think shodan) that they can then rely on to execute these kinds of exploits.
Do you see how the complexity of the attack increased though? Now the attacker has to buy multiple hosts to stage their attack from. Like anything in life, when things get hard, a percentage of people give up.
That's why I mentioned building a database, which essentially evaporates the cost for everyone except the one building the database. Unsalted password hashes are considered insecure due to rainbow tables.
And even if nobody built such a database, the cost still seems trivial compared to the effort it would take to compromise SSH in the first place. That is why I asked for a threat model. What are you defending against where everything is cheap except finding the host?
If your goal is to cut down on log spam, that's fine, but then just say so.
You’re defending against people who wrote scripts that only check the default port. Based on numbers that some others posted, that is actually quite a sizeable number, as they reported numbers of attempted connections on the default port to be orders of magnitude higher than other ports.
Scripts are not magic, they must be doing something. So what are you defending against? The last openssh preauth remote exploit from 2003? Weak passwords? Those are much better addressed by other measures.
> Scripts are not magic, they must be doing something
Not necessarily. Sometimes they just record potential targets for later manual probing. If the script doesn’t find what it’s looking for (in this example the default ssh port), your server is not recorded. That in itself is a win, even if it’s small.
> So what are you defending against?
It limits the number of people/processes trying to gain access to your server. Would you rather 10 people trying to get in, or 1?
> Those are much better addressed by other measures
Well, ya. Nobody is saying obscurity is the only security layer. You would need to secure it assuming the port is known. As an additional layer, only to (even slightly) reduce the number of potential threat actors, you change the port.
If you're monitoring your logs that does make sense. My issue is with corporate policies that say "port 22 = bad" but don't monitor the logs either. I don't see what those are defending against.
I also disagree with the general statement of the blog post that changing the SSH port is like hiding your tanks. The discrepancy between armor and camouflage vs. cryptography and port numbers is many orders of magnitude. A better explanation than that is needed in my opinion.
A few useful resources:
If you don't want your SSH server being found by trivial port-scanning, apply port-knocking:
https://github.com/moxie0/knockknock
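For flavour, a classic knock-sequence configuration in the style of knockd (a different tool from the project linked above; the sequence and command are illustrative):
[openSSH]
    sequence    = 7000,8000,9000
    seq_timeout = 5
    command     = /sbin/iptables -A INPUT -s %IP% -p tcp --dport 22 -j ACCEPT
    tcpflags    = syn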
Port 2222 seems to have become the official obscured SSH port. If you want less log spam, maybe choose another one.
Also it may be a good idea to use another port below 1024 - iirc it's good to stay in the port range where an unprivileged user cannot bind anything and potentially impersonate the SSH service.
Although I'm not sure how an unprivileged user would be able to stop sshd in order to take over that port.
Assuming the remote client has authenticated the host in ~/.ssh/known_hosts and assuming your unprivileged user hasn't got access to the host's private SSH keys in /etc/ssh then I'm not sure how they can really impersonate the service.
On my system the private keys cannot be read by an unprivileged user. However, the trust-on-first-use model of SSH is an obvious Achilles heel.
Here's an interesting thought. There's a UNIX security measure that dates back to the old UNIXes: only root can listen on ports below 1024. That's why all old-school services listen on ports below 1024: some random user running software on your machine can't intercept that port.
People changing ssh port to >1024 port actually reduce security of their systems.
Yes, the client will notice that the server fingerprint changed. So the question is, how many people will ignore that notice and still enter their password? SSH is good software in that regard, as it allows clients to notice that the server changed, but it's still an attack vector, one you should not just dismiss.
I wonder whether synchronized port hopping would help security. The sshd would hop to a different listening port based on a random seed and the current time at fixed intervals. The client runs a small app to compute the currently active port using the same seed and the current time.
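A back-of-the-envelope sketch of how both sides could derive the current port from a shared seed and the clock (the seed, interval and port range are made-up; this is TOTP-like derivation, not an existing tool, and it assumes reasonably synchronized clocks):
# both client and server run this; needs only bash and openssl
SEED="not-a-real-secret"
WINDOW=$(( $(date +%s) / 300 ))                     # hop every 5 minutes
HASH=$(printf '%s' "$WINDOW" | openssl dgst -sha256 -hmac "$SEED" | awk '{print $NF}')
PORT=$(( 20000 + 0x${HASH:0:8} % 40000 ))           # map into an unprivileged high range
echo "$PORT"                                        # e.g. ssh -p "$PORT" example-host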
We maintain a sensor network which consists of multiple nodes which emulate the network fingerprint and characteristics of popular services. We did multiple experiments over a period of 3 years as part of tuning our sensors, one of which was changing the SSH port of one honeypot to 38651. Fewer than 10 hits were observed on it over a period of 6 months, and 7 of them were legitimate (or misguided, depending on how you look at it).
During our experiments, we learnt a lot of lessons -
1) Attackers operate on a budget and are often time-bound (they have bosses as well). They have less incentive to target esoteric ports unless they have specific intel about it.
2) It is always a better strategy to ensure you waste their resources. No one is going to do full port scans of the internet. The last I heard, adding an extra port to ZMap takes around 500 MB of extra memory.
3) The advent of intel-sharing models typically highlights nasty scanners like a beacon.
4) Very few ISPs provide high-bandwidth pipes that let their retail (or even privileged) customers do full port scans. It's always a fight between scanning via low-bandwidth pipes from multiple nodes over a long period of time and having less up-to-date data, or having to pay a lot more for a 10Gbps+ pipe to do it, which is bad for OPSEC.
5) Corollary to the previous point: even if you control a large botnet that may do it for you, you will get reported or will have scanning, correlation or computation issues. Or all of them.
6) Pareto's principle is everywhere, 20% of security/ sanity measures will solve 80% of your problems. Simply changing network ports, running your infra over a closed VPN environment, reducing your public internet exposure, configuring dual factor authentication et al - it seriously reduced our infra monitoring/ management upkeep.
7) The internet is full of spam (and water is wet); expect mass spam all the time on your mail sensors/email IDs. Interacting with these will eventually get elevated to targeted attempts. Even simple browsing will sometimes give you a high-value, previously unknown malware payload served via ad networks.
It’s not bad, but it also shouldn’t be the basis of your security model either.
There are a lot of parallels with physical security. It’s good to do things to make yourself less of an obvious target but at the end of the day, you should still lock your doors.
Without getting into exactly what security by obscurity means, I think what people try to warn against when warning against security by obscurity is advice that distracts from solutions that do not depend on obscurity. For example the advice to move your SSH port distracts from the advice to use a private key and disable passwords. But in the real world, there's no mechanism as secure as public key cryptography, so it makes sense to keep the mechanism secret, as added security. The author's examples of dead drops or frequency hopping offer nothing close to perfect security, and it would be a real improvement to keep them secret.
Interesting analogy. You could extend the analogy to ASLR. ASLR randomizes the address just as you might randomize your SSH port. But no-one thinks of ASLR as "security by obscurity" - it's a mitigation strategy like any other.
Thinking of the SSH port number as a "secret" in same way that libc addresses are "secrets" is an interesting and valuable perspective. It suggests that, like with ASLR, there should be some higher-level "linker" that gives this secret information (address of functions, SSH port number) to those who need it, and no-one else.
The author fails to mention that moving SSH to a non-privileged port also has other security implications that can be exploited.
The other example they gave, that it's "more costly" to scan your host to find the non-standard port, also barely holds water. It would be trivial to script such a scan, and the relative time/compute it occupies is minimal when you're conducting a dragnet of a pool of hosts, especially when parallelized across your own pool of hosts (which are potentially compromised, and thus coming from various host networks).
I think generally when you hear people mock changing your SSH port it's because it's often done as a sort of high pass filter and then nothing else is done on top of that. That or it's done in lieu of real prevention techniques like establishing bastions with temporary credentials or OTPs.
Now, I'd end this with the usual guidance I give people in political or social realms: mocking people rarely teaches anything, it just dissuades action in the short term. If you want people to stop being reckless with SSH then you need powerful visualizations and demonstrations of just how easy it is to find the skeletons in your closet from outside the fortress.
I do this on the (very few) internet-accessible boxes I have for one very simple reason - it _significantly_ cuts down on the amount of junk connections and log entries, somewhere between a 1000/1 or 10000/1 ratio.
All the rest (crypto, key setups, fail2ban or similar, etc.) is exactly the same as it would be on a "regular" setup, but at least I can actually _read_ the logs when I have to troubleshoot something else.
For me it was about the log noise. It was annoying to have to log those drive-by brute force scripts and scanners, so I moved it to a different port. Still got some, but it was 100x less.
However, I imagine once this becomes a popular enough technique, and I see it discussed on HN periodically, scanner scripts will start scanning other ports for SSH. So over time it might be less useful.
I used an S/KEY https://en.wikipedia.org/wiki/S/KEY port sequence and had a small Java app that would run on my humble Nokia and Psion Organiser to provide the port sequence for that minute in time.
Not sure I still have the code about, alas, due to that system getting lost along with others in a move a decade ago, but it was simple enough to craft together and worked well.
Even had an http/s tarpit, which is highly recommended to do for educational purposes, and insightful.
Again it is security with obscurity, just a bit more obfuscation involved in this type of set-up.
Oh, and dynamic DNS so the home IP was known, as the IP allocation via the ISP was dynamic.
Looking at it conversely: mustn't it be considered bad practice to leave your ssh port at the default? Therefore changing it must be, at least in part, good practice in the security task stack, no matter what label is applied to it.
It's just so little effort to avoid script kiddie port scans and whatever follow ups occur when they find port 22 open.
I imagine if you set up a host with hundreds of ssh tarpits and just one slow SSH port, bots trying to crawl your site to figure out the SSH port would be minimal.
Why bind a database to a local socket and not to a network port? Because it reduces the attack surface area. It doesn't mean the database is "secure": access permissions could be configured incorrectly; it could be missing security patches; etc. But closing off the port still increases security because an attacker has to compromise another system, or find a more obscure pathway to get at the database server. An attacker incurs an additional cost when trying to compromise the database.
Why move SSH to a higher port? Same reason - it reduces surface area.
However - does changing the SSH port _really_ add that much of a cost? For naive attackers - yes. A well configured firewall which detects and tracks SSH login failures, port scans, etc and issues IP bans is more robust. But even this only goes so far, especially for attackers who can orchestrate a large number of IP addresses.
Sometimes I wonder why everybody tries to get out of line, when instead it might be more useful to keep some service on port 22 just to increase the cost for the botnet operators.
Does anyone know if there is something like a super slow and super safe sshd implementation which doesn't allow anybody to actually log in?
I'd be wary of doing something that causes your server to be singled out in the attacker's reports, as might be the case if it causes their scanner to hang.
Moving your SSH port does nothing to change your SSH based vulnerability if an attacker knows the new port, but it makes it a lot less likely for you to be swept up in passive crawls.
This is like arguing, "If your hide-a-key is under the backdoor doormat, you are no less secure than if you put it under the front doormat". It's easy for someone to say "I was dropping off a package" for a reason to be at your front door. Much more difficult to explain why you are in someone's backyard.
Most attackers are opportunistic, take minimal risk, and want to keep costs low. Doing a full port scan on one IP is expensive in time and computation, and also has some risk. Unless they know for sure that it is worth the pay off, doing a quick "Tried turning the door knob" test is about all they will do.
Which is exactly what security by obscurity means...
You are not wrong about having bigger problems if that's all it takes, but making access to a thing less obvious, even if it is super simple, is a form of security.
This is like saying, keeping your cash in your front pocket instead of your wallet doesn't make your cash safer from pickpockets. Except, that's exactly what it does. If someone knows that your cash is in your front pocket, then sure, they could target that. But if you are trying to protect yourself from casual pickpockets that aim for wallets, then you have created a small buffer for yourself. Which is a form of security.
It's not great security, but it is a layer of protection.
"Security by Obscurity is when you hide how a security measure works, not when you keep some part of it a secret." This is the net opposite of what Kerckhoff tells us.
Very wrong. Let me quote Bruce Schneier on this: "If I hide $1million in a safe and hide the safe somewhere in the subway in New York, that's obscurity. If I do the same but hand you the safe right away and ask you to open it without the key, that's security". Nobody cares about whether your solution is buggy. If you assume any security based on the inability to learn something about the WORKINGS of it its broken (or more precisely just overhead because it's as good as nothing). You can only model that based on the inability to learn DATA (like a key/password).
I think that "Security by obscurity is bad" is really a misstatement. The right version should be "Open design is better".
The underlying assumption is that criminals are often smarter (or have more of an advantage) than defenders. But they probably do not outsmart the entire collective human intelligence, so there's hope that an open design that has actually survived can better protect us in the long run (because it's been viewed and attacked by everyone).
i.e.
Individuals < Criminals < Collective Intelligence
In this case, I don't think using an unusual SSH port is security by obscurity. Its design is already open; knowing the port number doesn't significantly change the threat model because SSH itself already provides a certain level of security.
I think SSH security is similar to that of a VPN (except for large installations where key management/revocation may be better handled by VPN tools), so for many people there's no need to use a VPN.
Well it avoids cluttering the auth logs of all the machines with failed ssh attempts, which is, from what I understand, the main benefit of moving the ssh port to a non standard one.
As a side benefit, it also adds a real layer of security (ie you would have to find a vuln in both openssh and openvpn for example).
Key management for SSH can actually be pretty good, you can do pretty cool short lived certificate authentication with ssh using Hashicorp Vault for example. I would still not have my VMs have public IPs, feels like asking for troubles, for no real benefits.
A lot of people don’t seem to remember (or weren’t around) when this phrase was coined. Security by/through obscurity very much referred to doing things like starting a bash shell on an obscure port so you could telnet to it and just have a shell (no login at all, and each host didn’t have a firewall either). Or naming a directory something like ‘...’ and people would overlook it in the ls listing. Or leaving a shell running on a serial/modem line assuming that no one would know the phone number to dial in to.
Even a standard ssh install that allows password login on a regular port is already miles ahead of where we were a few decades ago.
People have a lot of misconceptions about security and I think what most people really want just doesn't exist. What we all want is to know that if we set the "correct settings" we have achieved Security. This is just not true.
Security requires monitoring and maintenance. It takes ongoing attention.
I'd venture that if you have a comprehensive security program in place then the port of your ssh server doesn't matter very much.
Conversely, if your security program begins and ends with turning on a firewall and changing the ssh server port, I'd say you are off to a good start but you have a lot more to do if you want to get serious about security.
I stopped changing my ssh port once I learned low ports can only be taken by processes started as root. And then I realized I won't expose the ssh port or any other management port except my vpn port...
Note that the root restriction isn’t strictly true any more: for example, on Linux you could grant the CAP_NET_BIND_SERVICE capability to a process and run it as a non-root user. If you have network services running as root just to be able to bind a low port, you could avoid that.
In the case of sshd, of course, it would also need permission to start sessions as another user.
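For example, on Linux that capability can be granted to a (hypothetical) binary like so:
sudo setcap 'cap_net_bind_service=+ep' /usr/local/bin/my-service   # allow binding ports below 1024 without root
getcap /usr/local/bin/my-service                                    # verify the capability was applied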
I only do this for nginx serving content on http(s) ports, but of course technically if my host got compromised an attacker could do the same for ssh port. But at this point (having root) he/she would not have to do this (I don't use passwords) and I would be already doomed :)
This is not true. Moving the port doesn't increase security by itself. BUT. If you use it with portsentry, it IS security!
- moving the port filters bots trying to connect
- if an attacker tries to scan the open ports, they will be immediately firewalled for that IP.
This protects not only the ssh service, but all the listening services on the host! Separates the clients by behaviour.
This can be bypassed only if the attacker spends a lot of time and IPs on the attack, which is a whole other level, because that is a very targeted attack and all of your security is under test.
This is essentially just me shouting into the void but I really hate that for me (a common netizen) there is no realistic recourse against these constant attacks.
I understand the technical issues involved but it is really disheartening to me that when it comes to malicious actors attempting to access and abuse other people's property the entire net's general response is basically "shrug can't win, don't try"
By that I mean, I would not expect much to come of me submitting logs of constant penetration scans to the police.
I've always advocated it (whether you call it security by obscurity or not) simply because most of the scanners will not choose to pick any other port but 22; then systems like DenyHosts will provide higher fidelity about threats and less volume to pore over. Of course, whether you are on 22, 24, 2222, or whatever, you need to properly harden your sshd with certificates, ciphers, removing the unneeded/unsafe configuration parameters, etc., otherwise it just won't matter what port you are on.
Be aware of potential security issues if you use ports greater than 1024, as those are unprivileged and any app can bind to them.
If this is a single machine that only you have access to it may be fine. If that's not true (and it won't be if it is a machine owned by a company), then you are better sticking with privileged ports.
> I just came across another post on Hacker News talking about why you shouldn’t move your SSH port off of 22 because it’s Security by Obscurity.
As greenduck already said, anyone making such an argument doesn't understand the issue of security by obscurity: it's not a problem when it's used as an additional layer of security, but it's a huge problem when it's used as the only layer of security.
Personally I favour IP whitelisting, but I realise this has issues with scalability.
I think it's important to remember that security by obscurity is not a primary line of defense, but limiting or hiding available information about your system is still a good practice.
I might have the best home security system available, but that doesn't mean I'm going to make the details of that system or the contents of my house easily available information.
Take all of the technical protection measures you can, but then also make it hard for attackers to gain any information as well.
The author is splitting hairs, creating their own definition of the term without reference. When splitting hairs, backing it up with a reference can help make the point.
To my knowledge there is no official definition of the term, so both cases the author describes can be considered "obscurity".
Interestingly, Wikipedia states that with Security by Obscurity the main mechanism of defense is obscurity. With that definition, "obscurity" is certainly bad.
It is precisely security by obscurity, it only helps against attacks one would not need to worry about in a well administered system. All the drive by traffic on port 22 is just trying to log in with absurdly simple passwords and two or three usernames, and if you have to worry about these, then you should set up public key authentication, not try to hide the port. And as a matter of fact, a
nmap -A -p- target.tld
will easily identify whichever port you have selected.
Security through obscurity is a free way to increase your security stochastically. It eliminates low-effort attacks, lowering noise, allowing you, the security professional, to focus on detecting high-effort attacks. Of course we shouldn't implement some security through obscurity and call it a day, but people who dismiss security through obscurity as not worth implementing at all are also foolish.
This only makes sense if it's the only form of security, which would be stupid.
I had my personal server on port 22, and got thousands of attempted attacks a day. I don't allow password authentication, only a key. I moved off the default port not so much to reduce my risk as to reduce the total number of attempts. Just by moving off the default port, unsuccessful attempts dropped by 100x.
Security is a multi-dimensional problem. Moving the port helps reduce noise, that is, you get a higher signal to noise ratio. Other dimensions include using ssh keys instead of passwords, disabling root logins, adding fail2ban, and 2fa. Using some or all of these modifications from the defaults will improve ssh security.
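A minimal sshd_config sketch along those lines (the port number is just an example; 2FA via PAM and fail2ban live outside this file):

  # /etc/ssh/sshd_config fragment
  Port 2222                    # cuts scan noise; not a security boundary by itself
  PermitRootLogin no           # no direct root logins
  PasswordAuthentication no    # keys only
  PubkeyAuthentication yes
  MaxAuthTries 3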
The problem with this is the terrible analogies. Frequency hopping is just that - hopping. Not picking a single frequency. Hiding a tank works for a while, and you'll move it pretty soon so there's no time to find it.
Setting your SSH port to be something unusual, and leaving it forever, is not the same as those.
If a port scanner does any analysis on the welcome banner, it's pretty obvious when you're dealing with an SSH server.
That is, unless someone decided to run with a custom banner, which isn't something I have come across often in over two decades of experience (but I might not have recognized one if I saw one). The reality is that many (fast) scanners/script-kiddies often only scan for default ports. Changing the default port will lower your hit rate.
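For illustration, an SSH server announces itself as soon as you connect, so fingerprinting a nonstandard port takes one TCP connection (host, port, and version string here are made up):

  $ ncat host.example.org 2222
  SSH-2.0-OpenSSH_9.6

Which is exactly why the port change only filters scanners that never bother to look.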
It could be argued that this will only eliminate the less sophisticated attackers, which might hold some merit. However, good security is almost always a combination of factors that each add their own little contribution to the overall security. Even small increases can be worth it in the overall effort.
Time is often an overlooked aspect within security. If you have SSH on an alternative port, you can use the detection of an inbound scan on the standard port to mitigate a subsequent scan/attack on your custom port, for instance by throwing the scanning IP address immediately into a deny ACL. Just a suggestion.
Port knocking is also an interesting concept, but a whole different discussion.
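For the curious, a client-side knock might look something like this (the knock ports, host, and real SSH port are all invented, and the server side needs something like knockd watching for the sequence):

  for p in 7000 8000 9000; do
      nmap -Pn --max-retries 0 -p "$p" host.example.org >/dev/null
  done
  ssh -p 2222 host.example.org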
Do you actually run any public server with ssh on 22 and monitor its logs? From my experience the noise on port 22 is so high that you wouldn't even notice an actual targeted attempt.
No, I don't, but that has nothing to do with security and everything to do with routing. Noise is just as high on other ports so relying on moving a port or even a password is redundant as far as security is concerned. Personally I only have two servers which are publicly exposed to any form of ssh connections, both acting as access servers, nothing more. I've made several traps so if someone does manage to log in, I'll be immediately notified and I've also added a kill switch, so I can shut them down anytime with a click of a button.
It's not even obscurity:
  nmap --script ssh2-enum-algos target.ip -p 22-65535 -v
Sure it takes a while longer, but the interesting attackers -- the ones that target a specific organization, not the ones enumerating whole ip ranges over and over again -- do have this time.
Also: you would still need to do a more elaborate (and therefore more "expensive") full range port scan, compared to scanning just the standard ports. That doesn't scale so well when scanning many IP addresses (e.g. criminals scanning the internet for potential targets).
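A back-of-the-envelope comparison, assuming a made-up sustained rate of one million probes per second against the roughly 3.7 billion routable IPv4 addresses:

  echo $(( 3700000000 / 1000000 )) seconds               # one port, whole IPv4: ~3700 s, about an hour
  echo $(( 3700000000 / 1000000 * 65535 / 86400 )) days  # all 65535 TCP ports: ~2800 days

The numbers are crude, but the ratio is the point: mass scanning one port is cheap, mass scanning every port is not.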
The other side of obscurity is to hide in plain sight; in fact, changing ports might even attract interest.
I keep port 22 because that is where I expect to find ssh and already spend too much time trying to remember things.
- key for ssh on port 22 is only the ssh-key difficulty
- key for ssh on unknown port is ssh-key difficulty + ssh-scan difficulty
it is an asymmetrical solution:
- for somebody with enough resources it is not much of an increase in difficulty
- for somebody with insufficient resources it is a significant increase in difficulty
so essentially, it is an effective way to fight off script kiddies without a targeted interest.
it is not effective against targeted attacks - best to not have open ssh ports at all on public servers but to put them behind vpn/firewalls with ip restrictions.
Moving the port is definitely security by obscurity, it's just that often, security by obscurity works. Tanks have armor but they still paint them camo. You are obscuring the target.
It's been a while, but back when I did full-time sysadmin, I used a port-knocking daemon that picked a random port > 1024 (with carve-outs for some known ports) and sent that port # to my pager.
As you suggest, though, the port-knocking daemon itself ran on port 2222.
My solution is to bind SSH only to the wireguard interface, and open only UDP to the public internet. There is no way to discover which UDP ports are live.
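Roughly, that setup is two pieces (the 10.0.0.1 wg0 address and the 51820 UDP port are just assumed values):

  # /etc/ssh/sshd_config -- sshd answers only on the WireGuard address
  ListenAddress 10.0.0.1

  # public firewall: the only inbound hole is WireGuard's UDP port
  iptables -A INPUT -p udp --dport 51820 -j ACCEPT

WireGuard stays silent to packets that don't authenticate, which is what makes the live UDP port effectively undiscoverable from the outside.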
Well it would be security through obscurity if you changed the port, but didn't require any authentication on your ssh server.
There was a time when people would do stuff like this: have a modem that would dial in to a corporate system without any authentication, the idea being that an attacker would have to guess the phone number. Then later on, in the name of security, they made sure that you had to connect at a certain arbitrary bitrate.
I understand the point of the article, I just think it's too narrow of a definition. The whole thing is just arguing semantics anyway.
From a Hacker/Pentester view: Security by obscurity will only steal my time. It won't stop me from hacking you. Moving to another port is useless, I'll do a port scan anyways.
Non-standard paths and domain names are different stories, but even those will be brute forced eventually.
It just costs time, makes everything more complicated and in the best case (for me) it makes it easy for you to block the wrong port.
Sometimes, as the saying goes, I don't need to outrun the bear, just the other campers.
If you're targeting me specifically, then yes, moving the port is pointless. But while I haven't stopped you, I've avoided a half-dozen script kiddies running a script that takes advantage of a new 0-day.
What's in your threat model that's simultaneously a non-targeted attack, but also sophisticated enough to get through basic security (eg. strong password or public key)?
I commented extensively on the other thread (cliff notes: nonstandard SSH ports induce their own security risks [if on port 1024 or above], add complexity with minimal benefit, and generally indicate you should be focusing your effort elsewhere - like setting up a wireguard VPN), but they're not entirely wrong here. Just misguided. If you are guarding against OpenSSH 0-days as an attack vector: one, there are better security measures to take - see VPNs - that provide you greater protection in general; and two, port scans are trivially cheap, so once attackers infect all the things listening on port 22 they'll use that compute and network capacity to scan the rest of the internet for things listening on nonstandard ports, because botnets = money, so increasing the size is a worthwhile investment. It gains you a few hours, or you could throw up Wireguard on another host, where someone then needs both a Wireguard 0-day and an SSH 0-day at the same time to compromise your system.
It can be part of defense-in-depth: Move SSH to a nonstandard port and then completely hide it from anyone who does a port scan or tries SSH on the wrong port. I'm sure that's occurred to someone, somewhere in the world.
Can't you block ips after x failed attempts? That would make it costly to scan 65536 ports.
In a directed attack that might not matter, but against attackers that scan millions of hosts it might give you some minor peace of mind. Then again, if you bother to configure your sshd you probably aren't a target for those attackers anyway.
On the same theme, I think that people have fixated on "security by obscurity" == bad instead of the original message which was "security by only obscurity" == bad.
There are simple things you can do to keep away from scripted attacks or increase their cost. Stuff like using non-standard ports, or disabling default login IDs like root. Those are all pretty effective at keeping away the bulk of non-skilled attackers.
Defense in depth. I don't care if the first level of defense is a speedbump. I don't absolutely depend on it, but why should I _not_ put the speedbump there when it is so cheap and easy?
The reason is that obscure things tend to be less well understood. SSH being on another port isn't a case of that, so it isn't really security by obscurity.
Security by obscurity might be "I use a non-standard PDF viewer, therefore I am safer". But a non-standard PDF viewer may also be subject to less auditing, may not have pressure to improve its security, etc. It's an unknown both to the attacker and to you. This is where security through obscurity is actually a problem.
It also makes it harder to threat model. Do you consider your obscurity as a mitigation when you consider threats to your system? It leads to the temptation of "well, but the attacker would have to know that I'm using this PDF viewer", and then you start to treat that as a mitigating factor. It can be a dangerous thing.
Secrets are not obscurity. Randomness is not obscurity. People seem to make that mistake the most.
It is trivial if you actually know what the port number is, you are only talking to one server, and the non-22 port is static. Now imagine you have code that is connecting to thousands of different servers on different ports, and they can change at any time if someone who owns one of those servers decides to rebuild their machine and changes the SSH port. Nothing is trivial at scale. That's not to say it is always a valid consideration, but standards and conventions exist for a reason.
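The usual way to keep that manageable on the client side is per-host entries in ~/.ssh/config (hostnames and ports here are invented), but someone still has to keep every entry in sync with every server:

  Host build-01
      HostName build-01.example.org
      Port 2222
  Host build-02
      HostName build-02.example.org
      Port 48222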
It's not cheap, you're paying for it every time someone new has to log in and you have to explain your setup. For that price you could do a real security measure like public-key only login.
> instead of the original message which was "security by only obscurity" == bad
That isn't even the original message. The original message was Kerckhoffs's principle, which states that a cryptosystem should remain secure even when everything about the system except the key is known. This is very literally "security by obscurity is bad", except that it applies very narrowly to the construction of cryptosystems. The error was applying it to systems security more broadly, where it doesn't fully make sense.
DRM is one example in which security by obscurity is almost universally bad, and even game publishers seem to agree, considering how many games have had Denuvo and/or VMProtect deliberately patched out - with massive performance improvements - as soon as initial sales die down. All it does is get you the extreme fans that want something right at release, and you'll capture that audience - most of the time at least - with or without DRM. All you're doing is making their experience worse.
There are very few people who can create proper security first, and then a sane level of obscurity to mitigate attack.
It's the same as saying don't roll your own encryption. There are people out there who could, but for the other 99.9% it's false security.
Generally what happens is people use their non-programmer "common sense" and think "nobody will be able to figure it out, so it's good". Obscurity is something you do because you know how to properly break rules.
>On the same theme, I think that people have fixated on "security by obscurity" == bad instead of the original message which was "security by only obscurity" == bad.
This seems to be the case for a lot of things in technology. Someone shares an idea, it gets popular, and through years of "the telephone game" the original intent is lost, yet everyone is running around treating it as gospel. Everything from agile to security by obscurity.
Using non-standard ports can actually be quite a severe security risk.
Browsers block many ports; if you move your internal service to an unblocked port, you expose it to every browser on the network! (e.g. a mail server or IRC - something that can be talked to over HTTP.)
If your IRCd allows an unauthenticated connection to start spamming like that, changing the port so a browser can't do it is just wallpapering over the real problem. Anybody that cared to could just use any other software to spam your IRCd.
The issue remains that your ircd is still wide open, waiting for somebody more creative to find another way of opening connections inside your LAN on arbitrary ports. Counting on browsers to keep your network secure is foolish.
> The issue remains that your ircd is still wide open
Wide open, from within the LAN... not many consider internal apps wide open.
Besides browsers, you need actual hacking to reach internal services - an SSRF, for example. Or real remote code execution, and at that point you have other problems.
> Counting on browsers to keep your network secure is foolish.
I think what you're saying is: treating the internal network as secure is foolish. That I would agree with.
I never said you should rely on the browsers port banning. It's the other way around, people usually don't even know you can talk to IRC over HTTP and therefore don't even consider it a risk and are protected without even knowing.
My point is not "use port banning as security measure". My point is just: if you move apps to a non-standard port you might expose it to browsers, so better use the standard!
Do you have a reference for "Browsers block many ports" ?
I thought there are even ssh clients that run in javascript in the browser. That wouldn't work if browsers were blocking access to port 22 for outgoing packets/connections.
Any piece of javascript code on any website you visit is going to be running inside of Firefox on your local network.
I don't know how widespread these kinds of attacks are but they are reasonably well known which is why these port blacklists were implemented in the first place:
I'm up-voting your original comment because it does seem like a potentially valid concern, at least worthy of discussion, and shouldn't be down-voted in my opinion.
I usually hear the "but it's security by obscurity" argument from people who are not involved in security in a professional way. If you need to protect a high-value target, confusing the attacker is a must. It's also reflected in various security standards and guidelines (see e.g. NIST SP 800-171B, 3.13.2e: "Disrupt the attack surface of organizational systems and system components through unpredictability, moving target defense, or non-persistence.")