The only "security risk" i see there is number 1, and that is all to do with physical security.

> Disadvantage #1 – Open ports on unmanaged switches are a security risk

Why? Is there something that would prevent an attacker with physical access from unplugging an existing cable? Does the average managed switch config have MAC limits and auto-shutdown if a link is lost for just a few seconds? MAC limits are easily bypassed, even without (permanently) disconnecting the legitimate device, by inlining an active device and maybe some MAC spoofing.

I don't include 802.1x or automatically shutting down a port that loses an uplink as a "simple and effective security precaution"; it would be a right pain in many situations. Is the latter even a feature? I certainly haven't come across it (unlike normal port security such as limiting the number of MAC addresses, which just adds overhead with limited effective security).
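
For what it's worth, forging a source MAC takes nothing exotic -- a rough sketch with scapy (interface name and addresses are made up, and you'd need root):

    # Send a frame with an arbitrary source MAC -- this is roughly all it
    # takes to look like "the legitimate device" to a MAC-count limit.
    from scapy.all import Ether, IP, UDP, sendp

    frame = Ether(src="aa:bb:cc:dd:ee:ff", dst="ff:ff:ff:ff:ff:ff") / \
            IP(dst="192.0.2.10") / UDP(dport=9)
    sendp(frame, iface="eth0")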

> Disadvantage #2 – No resiliency = higher downtime

If my device has one Ethernet cable into one switch, how does that help? If my unmanaged switch goes pop, I have a spare that I can put in and be back running in a minute. My managed Cisco edge switches take 10+ minutes just to reboot.

If my device has two Ethernet cables, one into one unmanaged switch and one into another, losing a switch isn't a problem.

> Disadvantage #3 – Unmanaged switches cannot prioritize traffic

Correct, they can't. Managed switches without QoS set up can't prioritise traffic either. If your switch is dropping packets, you don't have enough bandwidth. I've seen packet loss when sending 500 Mbit down a 1G uplink on managed switches, even on QoS'd traffic. Indeed, I've seen higher-priority traffic drop while lower-priority traffic didn't. QoS isn't trivial. Ultimately, whether your packet gets through comes down to how big your buffers are, so your application should cope with some loss, and if you get too much loss you need more bandwidth.

If you have 48 devices connected at 1 Gbit each, each firing 100 Mbit of traffic every second, all bang on the second, into a 10 Gbit uplink, on paper you only need 4.8 Gbit of uplink. You'll also need something like a 600 MB packet buffer, and you should expect a lot of delay on your packets, whether you have managed or unmanaged, QoS or no QoS.
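
Back-of-the-envelope for that worst case (purely illustrative, using the numbers above and assuming all 48 bursts start at the same instant):

    # Synchronized-burst example from above: 48 hosts, 100 Mbit burst each,
    # 1 Gbit/s per host, one 10 Gbit/s uplink. Numbers are illustrative.
    HOSTS, BURST_BITS = 48, 100e6
    HOST_RATE, UPLINK_RATE = 1e9, 10e9

    burst_time = BURST_BITS / HOST_RATE               # 0.1 s per host
    total_burst = HOSTS * BURST_BITS                  # 4.8 Gbit arrives
    # Queue grows while arrival (48 Gbit/s) exceeds the 10 Gbit/s drain.
    peak_queue = (HOSTS * HOST_RATE - UPLINK_RATE) * burst_time
    drain_time = total_burst / UPLINK_RATE

    print(f"avg uplink need : {total_burst / 1e9:.1f} Gbit/s")
    print(f"peak buffer     : {peak_queue / 8e6:.0f} MB")  # ~475 MB, ~600 MB if nothing drains during the burst
    print(f"worst-case delay: ~{drain_time:.2f} s")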

> Disadvantage #4 – Unmanaged switches cannot segment network traffic

Correct, but then if I have 8 desktops in a cluster, why wouldn't I pop in a desktop switch with 8 1G ports? I want them all on the same VLAN anyway.

> Disadvantage #5 – Unmanaged switches have limited or no tools for monitoring network activity or performance

They don't, but then again, do I want that for a specific use case?

If I want a managed switch (which I usually do), then I'll spec a managed switch. It's unlikely it will be Cisco. If my requirements don't need the features of a managed switch then I won't bother.

I find it interesting that there's no mention of preventing broadcast storms, or of IGMP snooping - both of which are far more useful for a typical edge switch than QoS.

Personally, I tend to use managed switches - indeed, I just bought a couple of 24-port TP-Link PoE switches for an event I'm planning. I'm not 100% sure I'd go for an unmanaged switch in rsync's case, but from your list:

1) Doesn't apply -- servers are in a secure location

2) Doesn't apply -- servers are either single-connected (so a failure needs a physical visit anyway, and replacing an unmanaged switch is far quicker and easier than replacing a managed one), or they're dual-connected to two different switches

3) If they're doing in-band management then you might want to carve out a small part of your uplink to stop yourself being DoSed by a misbehaving server (if a server is saturating your uplink bandwidth and your ssh session can't establish, that could be an issue). If you've got OOB access on a separate link, though, it's not a problem - and clearly they don't have that problem

4) Doesn't matter -- they don't want different vlans

5) They presumably measure the bandwidth use of each of their servers. The question thus becomes "does the ISP give me logs I can rely on for the WAN?" Personally I wouldn't rely on those, but I can see the idea

Spanning tree: Secure network, they aren't going to connect one port to another to cause a storm

IGMP: They presumably aren't using multicast for anything major, so bit rates would be very low even if it were in use

Reasons to use a firewall or a switch with an ACL in this specific case that I can think of:

1) Two points of control -- a zero-day in FreeBSD's firewall could open a port to an unintended source that was listening but meant to be blocked by iptables (or BSD's equivalent). If you had a non-BSD firewall in front, it's unlikely the same zero-day would work

2) Port 22 is only open to a specific IP range; again there's a zero-day, and the TTL of outbound packets is high enough to establish a session

Reasons to use a managed switch even ignoring firewalling:

1) Reliable traffic stats -- you could guess at these by summing the uplinks of all the connected devices, although some packets will be dropped and some may be going to other devices on the network
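
A sketch of that guessing (the counter values here are made up; a real switch would hand them to you via SNMP or its CLI):

    # Estimate aggregate traffic by sampling per-port byte counters twice
    # and summing the deltas. This over-counts traffic that stays local to
    # the switch and misses anything that was dropped.
    INTERVAL = 60  # seconds between samples

    before = {"port1": 1_200_000_000, "port2": 3_400_000_000, "port3": 900_000_000}
    after  = {"port1": 1_950_000_000, "port2": 3_650_000_000, "port3": 2_100_000_000}

    total_bytes = sum(after[p] - before[p] for p in before)
    print(f"~{total_bytes * 8 / INTERVAL / 1e6:.0f} Mbit/s aggregate")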

Reasons to use QOS on a managed switch:

To allow in-band management if something goes wrong. A separate iLO/IPMI/KVM connection would be better for that, though.

I don't think they'd need features like SPAN ports (I personally use them all the time, along with fibre taps, but I have a different use case which is UDP-heavy and loss-intolerant).




> Correct they can't. Managed switches without qos set up can't prioritise traffic either.

> If your switch is dropping packets, you don't have enough bandwidth.

This isn't true -- there are more bottlenecks than just bandwidth. E.g. try sending 10-byte packets instead of 1500-byte packets and watch your switch start dropping due to CPU exhaustion.
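
To put a number on that: with tiny frames the bottleneck is packets per second, not bits per second (a rough sketch, using the 64-byte Ethernet minimum rather than a literal 10-byte packet):

    # Frames per second at 1 Gbit/s for small vs full-size frames.
    # On the wire each frame also carries 8 bytes of preamble and a
    # 12-byte inter-frame gap.
    LINK_BPS = 1e9
    OVERHEAD = 8 + 12

    for frame_bytes in (64, 1500):
        pps = LINK_BPS / ((frame_bytes + OVERHEAD) * 8)
        print(f"{frame_bytes:>4}-byte frames: {pps / 1e6:.2f} Mpps")
    # ~1.49 Mpps at 64 bytes vs ~0.08 Mpps at 1500 bytes -- anything
    # forwarding in software can run out of lookups/CPU long before it
    # runs out of bandwidth.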

> Ultimately it comes down to how big your buffers are whether your packet gets through or not

Not really -- traffic prioritisation is about deciding which packets you drop when you hit your limits (or get close to them), not about making sure you never drop anything.

Obviously, if you're never hitting any bottlenecks, the prioritisation does nothing.


Dunno how you'd make a 10-byte packet - the smallest valid Ethernet frame is 64 bytes, and I'd expect my switch to forward those at line speed just fine, and to drop any runt frames just fine too. Maybe you could hack a network driver to deliver some really nasty frames, but that doesn't seem a likely situation for rsync's use case - not compared with a switch failing for other reasons.

The point about QoS is that it often isn't necessary, because you shouldn't be hitting those limits, and if you do you often don't care (because you've got half a dozen identical desktop computers talking to an unmanaged network and not doing any relevant DSCP marking). In rsync's case the traffic they're sending is all ssh traffic - what's going to be doing the tagging and differentiation?
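
On "what's doing the tagging": DSCP marks have to come from somewhere - either the sending application or a classification policy on a managed device; plain ssh traffic isn't marked by default. A minimal sketch of an application marking its own socket (illustrative only, on Linux for example; nothing suggests rsync.net does this):

    import socket

    # Mark outbound packets with DSCP EF (46). The IP TOS byte carries
    # DSCP in its upper six bits, hence the shift. A QoS-aware switch
    # could act on this; an unmanaged switch just ignores it.
    DSCP_EF = 46
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)
    # ...then connect and send as usual.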


> not really, traffic prioritisation is about deciding which packets you drop when hitting your limits

But everything is the same: ssh traffic for backups. And both ends do congestion control.

I don't care if nightly backups take 1 or 2 hours.


802.1x is trivially proxied anyway, unless you don't reconnect when the link is lost. So an attacker with physical access is going to be able to inspect your packets regardless.


The beauty of SSH-only is that you can assume all of your traffic is being inspected all the time, but you have protection against that: SSH encryption and key fingerprints.

If you wanted to confirm SSH host-key validity, I'm sure rsync.net would perform an out-of-band verification. When they emailed me a request to do some server maintenance, I asked for verification, and they placed a GPG-signed confirmation on their web server for me to verify.
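
The host-key half of that is easy to check yourself: the "SHA256:..." fingerprint ssh shows is just a base64-encoded SHA-256 of the raw public key blob, so you can recompute it from a key obtained out-of-band and compare. A sketch (the key below is a fabricated placeholder, not rsync.net's real key):

    import base64, hashlib

    def openssh_fingerprint(pubkey_line: str) -> str:
        # pubkey_line is e.g. "ssh-ed25519 AAAA... host", as published
        # out-of-band (signed web page, support email, ...).
        blob = base64.b64decode(pubkey_line.split()[1])
        digest = hashlib.sha256(blob).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

    # Fabricated ed25519 key blob purely so the example runs; substitute
    # the real published key line in practice.
    fake_blob = b"\x00\x00\x00\x0bssh-ed25519" + b"\x00\x00\x00\x20" + b"\x01" * 32
    line = "ssh-ed25519 " + base64.b64encode(fake_blob).decode() + " example-host"
    print(openssh_fingerprint(line))  # compare with what `ssh` reports on first connect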



