The article seems to perpetuate one of those age old myths that NAT has something to do with protection.
Yes, in a very superficial sense, you can't literally route a packet over the internet backwards to a host behind NAT without matching a state entry or an explicit port forward. But implementing NAT on its own says nothing about how your router firewall behaves when it receives Martians, whether the router itself accepts connections, or whether the router is running some service that causes exposure.
To actually protect things behind NAT you still need firewall rules and you can keep those rules even when you are not using NAT. Thus those rules, and by extension the protection, are separable from the concept of NAT.
This is the kind of weird argument that has caused a lot of people who hadn't ever used IPv6 to avoid trying it.
If you think about it, NAT offers pretty much the same protection as a default stateful firewall. Only allowing packets from the outside related to a connection initiated from the inside.
> Only allowing packets from the outside related to a connection initiated from the inside.
NAT (a.k.a. IP masquerading) does not do that by itself: it only recognizes that some ingress packets whose destination is the gateway map to earlier packets from a LAN endpoint that were masqueraded, reverses the masquerading, and routes the rewritten packet there.
But plop in a route to the network behind it, and unmatched ingress packets definitely get routed to the internal side. To prevent that, you need to drop those unmatched ingress packets, and that's the firewall's job.
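As a concrete sketch of that drop, in nftables syntax (interface names "wan0" and "lan0" are placeholders, not from any particular router):

```nft
table inet filter {
    chain forward {
        # Default-deny: this policy line, not the NAT, is the protection
        type filter hook forward priority 0; policy drop;

        # LAN-initiated traffic may go out
        iifname "lan0" oifname "wan0" accept

        # Replies to it may come back in
        iifname "wan0" ct state established,related accept

        # Anything else arriving on wan0 falls through to the drop policy
    }
}
```

Remove the `policy drop` and the unmatched ingress packets described above would be happily forwarded, NAT or no NAT.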
Fun fact: a decade or so ago an ISP where I lived screwed that up. A neighbour and I figured out the network looked something like this:
192.168.1.x and 192.168.2.x were two ISP subscribers, and 10.0.0.x some internal local haul. The 192.168.x.1 boxes performed NAT but no firewalling.
You'd usually never see that 10.0.0.x, as traffic towards the WAN got NAT'd (twice). But 10.0.0.x knew about both of the 192 subnets, so you just had to add routes to each other's subnet on each 192.168.x.1 and bam, packets could fly through both ways, NAT be damned.
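Roughly, the trick looked like this (an illustrative reconstruction; the 10.0.0.1/10.0.0.2 gateway addresses are assumptions, and obviously this only runs on those specific boxes):

```
# On subscriber A's gateway (192.168.1.1), route the neighbour's LAN
# via their gateway's address on the shared 10.0.0.x segment:
ip route add 192.168.2.0/24 via 10.0.0.2

# On subscriber B's gateway (192.168.2.1), the mirror image:
ip route add 192.168.1.0/24 via 10.0.0.1

# With no FORWARD filtering on either box, packets now flow both
# ways between the two "private" LANs, straight past the NAT.
```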
Network Address Translation is not a firewall and provides no magically imbued protection.
I have never seen a NAT implementation that forwarded every packet sent to it. As you stated in your first sentence, NAT forwards packets that match previous packets. Assuming it does that job well, that’s filtering right there.
It's pretty common to have the NAT gateway also be a stateful firewall (you're tracking state, after all), but they're not the same, and you can have one without the other.
It's just uncommon in consumer or prosumer devices.
A similar analogy is perhaps industrial washing machines vs. consumer ones, or that printer/scanner combos are common (even in offices) but print shops and people who actually need a lot of paper have dedicated equipment that does either scanning or printing better.
It's also like a Leatherman: the tools all have some commonality (the need to be gripped), so there's a lot of combination; but a tradie would only use one as a last resort, often preferring a proper screwdriver.
> NAT offers pretty much the same protection as a default stateful firewall
Most NAT implementations necessarily include stateful connection tracking; it's the same state as the NAT flow table. This whole trope is mostly about getting it into people's heads not to forget to actually configure that "free" firewall properly, since it'll just be a poor one otherwise.
>Yes, in a very superficial sense, you can't literally route a packet over the internet backwards to a host behind NAT without matching a state entry or explicit port forwarding.
Don’t forget source routing. That said, depending on your threat model, it’s not entirely unreasonable to just rely on your ISP’s configuration to protect you from stuff like this, specifically behind an IANA private range.
Yeah, I keep meaning to write something about this. I've definitely noticed people wary of IPv6 because their machines get "real" IP addresses rather than the "safe" RFC1918 ones. Of course, having a real IP address is precisely the point of IPv6.
It's like we've been collectively trained to think of RFC1918 as "safe" and forgotten what a firewall is. It's one of those "a little knowledge is a dangerous thing" things.
In a world where people think NAT addresses are safe because you don’t need to know anything else about firewalls, IPv6 _is_ fundamentally less secure.
> In a world where people think NAT addresses are safe because […]
The vast, vast majority of people do not know what NAT is: ask your mom, aunt, uncle, grandma, cousin(s), etc. They simply have a 'magic box' (often from the ISP) that "connects to Internet". People connect to it (now mostly via Wifi) and they are "on the Internet".
They do not know about IPv4 or IPv6 (or ARP, or DHCP, or SLAAC).
As long as the magic box is statefully inspecting traffic, which is done for IPv4-NAT, and for IPv6 firewalls, it makes no practical difference which address family you are using from a security perspective.
The rending of garments over having a globally routable IPv6 address (but not globally reachable, because of SPI) on your home network is just silliness.
Thinking NAT addresses are safe because… of any reason whatsoever, really… simply shows a lack of network understanding. You might as well be talking to a Flat Earther about orbital mechanics.
> which is done for IPv4-NAT, and for IPv6 firewalls
Are internet routers that do IPv4 NAT usually also doing an IPv6 firewall (meaning they only let incoming connections in if they are explicitly allowed by some configuration)? Maybe that's where the insecurity comes from. A home NAT cannot work any other way (it fails "safely"); an absent firewall usually means everything just gets through.
> Are internet routers that do ipv4 NAT usually also doing an IPv6 firewall (meaning they only let incoming connections in if they are explicitly allowed by some configuration)?
Consider the counter-factual: can you list any home routers/CPEs that do not do SPI, regardless of protocol? If someone found such a thing, IMHO there would be a CVE issued quite quickly for it.
And not just residential stuff: $WORK upgraded firewalls earlier in 2025, and in the rules table of the device(s) there is an entry at the bottom that says "Implicit deny all" (for all protocols).
So my question to NAT/IPv6 Truthers is: what are the devices that allow IPv6 connections without SPI?
And even if such a thing exists, a single IPv6 /64 subnet is as large as four billion (2^32) IPv4 Internets (2^32 addresses each): good luck trying to find a host to hit in that space (RFC 7721).
All the ones I've had have had a firewall by default for IPv4 and IPv6, yes. If ISPs are shipping stuff without a firewall by default I'd consider that incompetence given people don't understand this stuff and shitty IoT devices exist.
I do wonder how real the problem is, though. How are people going to discover a random IPv6 device on the internet? Even if you knew some /64 was residential, it's still impractical to scan and find anything there (18 quintillion possible addresses). If you scanned an address per millisecond it would take nearly 6 × 10^8 years, or about 1/8 the age of the Earth, to scan a /64.
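The back-of-the-envelope math, in shell arithmetic (splitting 2^64 into quarters to stay inside 63-bit signed integers):

```shell
#!/bin/sh
# Time to probe every address in one IPv6 /64 at one address per millisecond.
ms_per_year=$((1000 * 60 * 60 * 24 * 365))   # milliseconds in a year
quarter=$((1 << 62))                         # 2^62, i.e. a quarter of 2^64
years=$(( quarter / ms_per_year * 4 ))
echo "~${years} years"                       # roughly 585 million years
```

Even at a million probes per second, a thousand times faster, you're still looking at over half a million years per /64.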
Are we just not able to think in such big numbers?
There is one practical difference. IPv6 without a NAT exposes information about different devices inside the private network. A NAT (whether ipv4 or ipv6) will obfuscate how many devices are on the network. Whether that is desirable depends on the circumstances.
> […] In this paper, we design an efficient and scalable system via spatial-temporal traffic fingerprinting from an ISP’s perspective in consideration of practical issues like learning- testing asymmetry. Our system can accurately identify typical IoT devices in a network, with the additional capability of identifying what devices are hidden behind NAT and the number of each type of device that share the same IP address. […]
> IPv6 without a NAT exposes information about different devices inside the private network.
In practice this has not been true for over 20 years.
IPv6 devices on SLAAC networks (which is to say, almost all of them) regularly rotate their IPv6 address. The protocol also explicitly encourages (actually, requires) hosts to have more than one IPv6 address active at any given time.
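On Linux, for example, that RFC 4941 temporary-address behaviour can be forced on for SLAAC interfaces with a couple of sysctls (a sketch; the drop-in file name is made up, and key availability varies by distro/kernel):

```
# /etc/sysctl.d/40-ipv6-privacy.conf  (hypothetical file name)
# use_tempaddr = 2: generate temporary addresses AND prefer them as
# the source for outgoing connections
net.ipv6.conf.all.use_tempaddr = 2
net.ipv6.conf.default.use_tempaddr = 2
```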
You are also wrongly assuming that the externally visible addresses and port ranges chosen by the NAT device don't make the identity of internal devices easily guessable.
In both cases the only consumer security comes from "the home router defaults to being a stateful firewall". The only difference between the two is whether it also defaults to doing NAT with that state, and the NAT part was never what made IPv4 secure for people unaware of it either.
We did this at OpsLevel a few years back. Went from AWS managed NAT gateway to fck-nat (Option 1 in the article).
It’s a (small) moving part we now have to maintain. But it’s very much worth the massive cost savings in NATGateway-Bytes.
A big part of OpsLevel is that we receive all kinds of event and payload data from prod systems, so as we grew, so did our network costs. fck-nat turned that growing variable cost into an adorably small fixed one.
I looked at using fck-nat, but decided it was honestly easier to build my own Debian Trixie packer images. See my comment below[1]. How has your experience been with fck-nat?
That's quite recent. There was a period after AWS started charging for IPv4 addresses when you could not realistically go IPv6-only behind CloudFront, because it would, for example, not connect to a v6-only origin.
This is probably a result of all AWS services being independent teams with their own release schedule. But it would have made sense for AWS to coordinate this better.
I can't believe people are paying these crazy amounts for what is basically a fleet of firewalls. What is the difficulty in running VMs with nftables rules?
running a VM where? on an ec2 instance? who's going to keep that updated for me? who's going to reprovision it when aws retires the underlying hardware? who's going to monitor it for PCI compliance for me? i don't want to deal with all that. i could dump it on fargate, but at that point it's barely cheaper than just using the official version.
i've had to look at my nat gateway zero times since i set it up a couple years ago. i can't say that about any VM host i've got. to me, that's easily worth the few dollars a month that aws charges for it. it's cheaper than hiring somebody, and it's cheaper than me.
It costs a lot more than a few bucks when you’re putting a lot of traffic through it. And running your own NAT instance does not incur per-GB traffic costs.
That said, the paid NAT gateways do also publish metrics. That can be nice when debugging a legitimate issue, such as when your gateway actually runs out of NAT ports to use.
The market will provide. In this case by increasing prices to the point of maximum value extraction from people who don't want to deal with all that. There's a high initial cost to moving to something else here, with a lot of people dragging along paying more than what the market would otherwise equalize to, out of avoiding that initial hurdle. (And long term commitment of a resource, of course, one with low average but indeterminate excursion cost.)
1) You can't `npm install` it, which is a huge barrier to entry to the modern breed of "engineers".
2) Companies will happily pay thousands in recurring fees for the built-in NAT gateway, but if an engineer asks for even half that as a one-off sum to motivate them to learn Linux networking/firewalling, they'd get a hard no, so why should they bother?
For company hosting cloud solutions gets you the various compliance stuff for free which can be worth it if you're not too large, and of course faster turnaround if you need to get a product out.
For personal a cheap vps will end up costing around the same as something you can do on your own, without the risk of messing up your machine/network from a vulnerable endpont
This is really it: compliance. The cost is in having to prove that you did the right things. But I do wonder if we will see an easier path forward with that. After all if there was a way to pay someone a once a year fee for an audit and filling out the paperwork and the cost was lower than the cost of using AWS then surely people would do that and it is an opportunity for an audit business that is willing to work with self-hosted setups. Or just have GPT-5 fill out the compliance docs. I suspect it won’t be long until GPT-5 is reading them.
I'll admit, bit of a poor choice of word. But when you need to do e.g. physical security, costs add up quickly over what you'd spend on cloud in, say, a year, and the compliance companies are usually a huge headache to deal with, so that'll be a nice chunk of your staff's time lost.
I think AI coding is another part of why this is seeing a resurgence. It’s a lot quicker to build quick and dirty scripts or debug the random issues that come up self hosting.
A lot of this is support. If you’re self hosting, when things don’t work the way they should, the team has no one to blame. On AWS, they can always lean on aws not working the way it should as an excuse.
You’re misunderstanding the value of cloud computing, which should be obvious at this point. But I’ll go ahead and re-explain it for you.
Businesses need to maintain focus and allocate resources toward delivering their core product.
Software is highly profitable and even with inflated cloud computing costs, it makes some level of sense to not over-optimize and spread teams thin reinventing the wheel.
If I can deliver my product or feature to the market 20% faster that’s going to make more money than if I optimize my cloud infrastructure costs to save 50%.
As a business owner I don’t want to have to hire high-paid specialists who understand the deep intricacies of data center infrastructure, I want to be able to pay people with highly common skills who can quickly translate my business logic to working software.
I think it might be as simple as IPv4 just being nicer to look at… maybe we should have just done "IPv5" and added another block, e.g. 1.1.1.1.1. I know it's stupid, but IPv6 addresses are just so hard to remember and look at that I think it's just human nature to gravitate towards the simplicity of IPv4.
dead::beef is just as memorable as 1.1.1.1, and my v6 delegated prefix is just as unmemorable as my public v4. The "easier to remember" argument just sucks hard.
I don't think many people are memorising a bunch of wildly different IPv4 addresses either.
At best, I remember the prefix of my private network, and a handful of single-number suffixes of important hosts (i.e. my LAN is 192.168.1.x, and I remember that .100 is my local file server...)
Who remembers IPv4 addresses? If you have more than a small handful of devices in your network you're probably going to want some kind of name service.
The problem with "add another block" is, that you have to change everything everywhere to make it work... and if you're changing everything, why not expand it properly.
Only a tiny minority of people have to look at those addresses, the majority just types "facebook", enter, clicks on first google result and gets facebook (because ".com" is too hard to write).
> I know its stupid, but ipv6 addresses are just so hard to remember and look at that I think its just human nature to gravitate towards the simplicity of ipv4.
If only there was a system that allowed for easy to remember human labels to be translated to a machine-usable sequence of bits that we call "an address"…
I have difficulty remembering ten numbers, why do I have to say 1-212-487-1965 when I can just say Santa Rosita 71965? Maybe we should have just done another exchange name and added another name. Eg Hawthorne Santa Rosita 71965. I know its stupid, but 10 digit phone numbers are just so hard to remember and look at that I think its just human nature to gravitate towards the simplicity of telephone exchange prefixes.
Yet again, another fundamental misunderstanding (either genuine or not, I'm not sure) about the low-level technologies and their origins that underpin all of this. "Can't we just..."? No.
I always find these discussions about AWS NAT gateways interesting, because I recall way back in the day, before AWS had a managed NAT gateway, the recommendation was to roll your own anyway. Or at least that's what I heard. I took an A Cloud Guru course and one of the first EC2 lessons was to create a simple NAT gateway in your VPC so that your other instances could reach the Internet.
Am I the only one who read this and thought, "doesn't everyone self host a NAT gateway?"
Mine's in the living room, it says TP Link.
More seriously, NAT is fun and all but it can introduce unexpected behaviors that wouldn't exist in a firewall that doesn't do translation. Less is more.
As an OG networking person, developer, and Linux user, the state of modern dev culture just makes me sad.
Modern devs are helpless in the face of things I taught myself to do in a day or two when I was fourteen, and they’re paralyzed with terror at the thought of running something.
It’s “hard” goes the cliche. Networking is “hard.” Sys admin is “hard.” Everything is “hard” so you’d better pay an expert to do it.
Where do we get these experts? Ever wonder that?
It’s just depressing. Why even bother.
It really makes me worry about who will keep all this stuff running or build anything new in the future if we are losing not only skills but spine and curiosity. Maybe AI.
Yes, networking and sysadmin are hard, because the Internet is a much more hostile place than it was 20 years ago and the consequences for getting things wrong are much more severe. In the early 2000s, ISPs had ports open by default and getting a static IP address was a question of just asking. With dyndns, we were hosting websites off home computers. I remember a comment on HN saying that some US university provided publicly routable static IPs to each dorm room port. I'm not even sure I could get a static IP address nowadays as a home consumer, never mind hosting something that is not behind a WAF.
And when you got things wrong back in the day, you came home from school, saw a very weirdly behaving computer, grumbled and reinstalled the OS. Nowadays it is a very different story with potentially very severe consequences.
And this is just about getting things wrong at home; in a corporate environment it is 100x more annoying. In corporate you spend 80% of development time figuring out how to do things and 20% on actual work anyway; nobody has the time to teach themselves something outside their domain.
I've hosted stuff at home for almost 30 years, never bothered with a WAF. I have several VMs exposed with public IPs. If you keep your OS updated, it's hardly the end of the world. Sure, if you put up an unpatched OS from 10 years ago, you're going to have problems.
I'm hosting from my home with a static ipv4 right now. It's been running for years without a single problem. I just put in a basic pf config. Everything is fine. It's not that scary.
I have 2gbps at home and open ports and IPv6. It’s a dynamic IP but it changes maybe once a year. I could host a site here, sure. It’s infinitely better than it was 20 years ago.
OSes are more secure. Isolation is better. Languages are better. Hardware is vastly cheaper and faster and more reliable. Everything is easier and faster and better.
In the corp world we have this absurd embarrassment of riches. There are like ten choices in every category. Half of it is free. It’s easier to set up and run than it was back then. Way easier. Hosting is silly cheap if you compare cost / performance.
People are just incurious and brainwashed with this weird sense of helplessness.
This security phobia is so overblown if you take some basic precautions and don’t run crap service software.
If I were hosting something controversial that might draw the ire of one of the insane political cults out there I’d run it through a free CDN maybe. That’s easy.
It doesn't matter how easy something is to set up and run on the technology side if actually getting to set it up and run it takes half a year or more of coordination calendar time, justification to several different departments, and their review and approval. It's completely understandable, regulations and audit requirements are what they are; but then it is strange to read that modern developers are somehow paralyzed with terror. The ones who were willing to try new things got shitcanned a long time ago; these are the people you have.
Isn't it anyway better for admin and security folks to have developers not get any ideas and stick to the bounds of the box?
I feel the issue is that what would once have been done by entire teams (networking team, storage team, database team) is now performed by a single DevOps team.
We unfortunately have way less time to dig into each tech; the business is squeezing us like a lemon on the other side to ship quickly.
I actually kinda think ai will help with this, in a roundabout way.
I think of AI as a kind of floor, a minimum required skill to be able to get a job as a professional anything. If you want to find paid work as a developer, you have to at least be better than AI at the job.
Optimistically AI will filter out all the helpless Devs who can't get anything done from the job market. "Code monkeys" won't be a thing.
Juniors will have to enter unpaid trainee programs I guess, but that might not be such a bad thing
For those who DID think "I wonder what my 'when I was a kid' will be about when I'm old" what kind of things did you guess it'd be and what did it actually end up being?
I'm only in my 30s but I was thinking recently "when I'm retired I feel like I'm going to be telling stories about how back in my day we had this thing called the filesystem and you'd just browse it directly..."
Man, just this week I had a moment like this that killed me. I had just woken my tweenager up for school and realized I’d turned into the kind of asshole who comes into your room in a good mood at 6 am. Stood in the shower and came to terms with that, but it took a while.
Always assumed I would, but I thought it was that the youngsters would be running circles around me (in this domain) the way I ran circles around the olds when I was a kid.
What happened is that as an Xennial (young genX / old millennial) I know way more about computers than either generation to the side of me. This includes younger devs. I knew way more than them when I was their age. As a teen I was hacking C to get my 386 with Slackware Linux that I installed from floppies online by modding SLIRP to run on the sun3 I had dial up access to so I could pipe serial SLIP through it. Learned all about everything happening under the hood on a network.
I don’t feel self congratulatory about this. I feel depressed. If the kids were all smarter than me it would give me more hope for the future.
All of this. I despair with some of the takes on basic technology being hard. And when you try to defend understanding just the most rudimentary things, you're labeled a problem because you should just be paying out the nose for the service and writing even more shit code to cover it up.
Yes and developers these days don’t know assembly like I learned at 12. Does it matter?
I’m sure the list of things that you don’t know that some other developers do know is long.
No one is an “expert” at everything. I know AWS well (trust me on this) and I’ve used more services than you can imagine in a production capacity. I choose not to know the intricacies of Linux and front end development for instance. That’s either “someone else’s problem” or in the former case, I just give a zip file with my code in it and run it in Lambda or a Docker container and run it using a managed Kubernetes/ECS cluster, use Lambda (yes you can deploy a Docker container to Lambda) or Fargate (AWS manages instances in Docker cluster).
Fwiw, the solutions mentioned here don't seem to properly secure the kernel's network stack against common attacks (rp_filter, accept_redirects, accept_source_route, syncookies, netfilter rules, etc). Ask your local security guru to harden the instance before deploying.
I'm not too much into networks, although I've been sysadmining my VPS for years.
Why would I need a NAT gateway? Isn't a good set of ufw (or similar) rules enough?
It's mostly for the AWS context: when you host your servers in a private subnet, they have no direct route to the internet, so you use a NAT Gateway as their outbound path, plus security groups as gatekeepers to filter the traffic.
However, AWS's fees for the NAT Gateway are atrocious.
I build my own NAT instances from Debian Trixie with Packer on AWS. AWS's managed NAT Gateways are ridiculously expensive (especially per-GB traffic), and the official NAT instance AMI is built on an absurdly outdated, end-of-life version of Amazon Linux.
The bash configuration is literally a few lines (plus the iptables-persistent package so the saved rules load at boot):
# Enable IPv4 forwarding persistently
cat <<'EOF' | sudo tee /etc/sysctl.d/99-ip-forwarding.conf > /dev/null
net.ipv4.ip_forward=1
EOF
sudo sysctl --system
# Masquerade everything leaving via the WAN-facing interface
sudo iptables -t nat -A POSTROUTING -o ens5 -j MASQUERADE
# Rebuild the FORWARD chain: default-deny, allow replies in, allow outbound
sudo iptables -F FORWARD
sudo iptables -P FORWARD DROP
sudo iptables -A FORWARD -i ens5 -m state --state RELATED,ESTABLISHED -j ACCEPT
sudo iptables -A FORWARD -o ens5 -j ACCEPT
# Persist the rules across reboots
sudo iptables-save | sudo tee /etc/iptables/rules.v4 > /dev/null
Replace ens5 with your instance's network interface name. Also, VERY IMPORTANT: you must set source_dest_check = false on the EC2 NAT instances.
Also, don't assign an EIP to your EC2 NAT instances (unless you absolutely must persist a given public IP), as that counterintuitively routes through public traffic. Just use an auto-assigned public IP (no EIP).
NAT instance with EIP
- AWS routes it through the public AWS network infrastructure (hairpinning).
- You get charged $0.01/GB regional data transfer, even if in the same AZ.
> Also, don’t assign a EIP to your EC2 NAT instances (unless you absolutely must persist a given public IP) as that counterintuitively routes through public traffic. Just use a auto-assigned public IP (no EIP).
Could you point me to somewhere I can read more about this? I didn't know there was an extra charge for using an EIP (other than for the EIP itself).
That's what you did before AWS had the "NAT Gateway" managed service. It's literally called "NAT Instance" in current AWS documentation, and you can implement it in any way you wish. Of course, you don't have to limit yourself to iptables/nftables etc. OPNsense is a great way to do a NAT instance.
I believe the NAT instances also use super old and end-of-life Amazon Linux. I prefer Debian Trixie with Packer and EC2 instances and no EIP. Most secure, performant, and cost effective setup possible.
> NAT AMI is built on the last version of the Amazon Linux AMI, 2018.03, which reached the end of standard support on December 31, 2020 and end of maintenance support on December 31, 2023.
Sure, that's one case, though you might be able to give out a hostname instead of an IP for others to whitelist. Then you just set a low TTL and update the DNS record.
For anyone else who is super confused as to wtf this is about: 1) it's not "NAT Gateway " but rather "The AWS service called NAT Gateway" and 2) it's not "self-hosting" but "hosting in EC2", in the same sense that "running postgresql on an EC2 instance" wouldn't be "self hosting aurora".
Yeah, I just use a VPS box I pay $20/year for. Only the most basic config goes on this machine. Basically, load is 0.1 and it holds no data.
Then I run my stuff locally.
And then I use SSH tunneling to forward the port to localhost on the remote machine. It's a systemd unit file, and it will reconstruct the tunnel every 30s if broken, so at most 30s of downtime.
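A sketch of such a unit file (the service name, user, host vps.example.com, and port 8096 are all placeholders, not the parent's actual setup):

```ini
# /etc/systemd/system/vps-tunnel.service  (hypothetical name)
[Unit]
Description=Reverse SSH tunnel exposing a local service on the VPS
After=network-online.target
Wants=network-online.target

[Service]
# -N: no remote command; -R: bind VPS-side localhost:8096 to local 8096.
# ExitOnForwardFailure plus Restart gives the "rebuilt within 30s" behaviour.
ExecStart=/usr/bin/ssh -N \
    -o ExitOnForwardFailure=yes \
    -o ServerAliveInterval=15 -o ServerAliveCountMax=2 \
    -R 127.0.0.1:8096:127.0.0.1:8096 tunnel@vps.example.com
Restart=always
RestartSec=30

[Install]
WantedBy=multi-user.target
```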
I use Tailscale myself, but if you want everything totally under your control (and don't want to go to the trouble of setting up headscale or something similar) then that's one of the absolutely simplest, lowest-effort ways of doing it. EDIT: Well, except for the VPS box I suppose, but if that provider went down or you had any reason to suspect they were doing anything suspicious, it would be quite simple to jump to a different provider, so that's pretty darn close to controlling everything yourself.
Particular things: I use a Let's Encrypt wildcard cert, so my subdomains aren't leaked. If you register a cert per subdomain, LE leaks all your subdomains via the Certificate Transparency logs. Learned that and had to burn that domain.
The VPS is from LowEndBox. Like 2 cores, 20GB storage, 2GB RAM. But it runs perfectly fine.
I run jellyfin, audiobookshelf, Navidrome, and Romm. Ssh tunnel per application.
It would also be trivial to switch providers. But again, it's not a seed box, I'm not doing torrents, not doing anything that would attract attention. And best of all, no evidence on the VPS. It's all SSL and SSH.
> Society would have you believe that self hosting a NAT Gateway is “crazy”, “irresponsible” and potentially even “dangerous”.
I just can't take articles seriously when they lead with these kinds of claims and then don't back them up, typically to give the article some sort of justification and/or weight. Didn't bother to read the rest.
Agreed. Assuming an AWS "NAT Gateway" is the same as a regular NAT?
Security is not the purpose of a NAT. It's there to give you more IPs than you have. There's all sorts of NAT hole punching techniques. If you want a firewall, you need a firewall.
The firewall provides the stateful one way door, the router moves packets between the set of subnets it can see, and NAT makes it so things on the public internet think the conversations from one private address+port combo are actually coming from another public address.
The last part isn't adding the security, and you can absolutely NAT without preventing the "outside" subnets from being allowed to route to the "inside" subnet, it's just that NAT is almost always done on the box providing the stateful firewall too so people tend to think of the 3 functions as combined in concept as well.
> you can absolutely NAT without preventing the "outside" subnets from being allowed to route to the "inside" subnet
Under very specific conditions. Technically, if you send a packet with destination 192.168.1.10 directly to the WAN port of the router, then yes, it can route it inside. The problem is how to deliver that packet over the internet: you'd need to be connected to the exact same network segment to pull it off.
And you don't need a stateful firewall to deny that kind of packet.
> I've seen claims of providers putting IPv6 behind NAT, so don't think full IPv6 acceptance will solve this problem.
I get annoyed even when what's offered is a single /64 prefix (rather than something like a /56 or even /60), but putting IPv6 behind NAT is just ridiculous.
This shouldn't be mistaken for an anti-IPv6 post. There's also some steps you have to go through to enable IPv6 on your VPS networks, and there's still stuff like GitHub not handling IPv6. So, much as we need to migrate, we still have to support IPv4 connectivity for the foreseeable future.
> and there's still stuff like GitHub not handling IPv6.
And virtually everything inside of AWS still requires IPv4 so even if you have zero need to reach out to WAN, if you need any number of private AWS endpoints, you're going to be allocating some ipv4 blocks to your VPC :(.
I've worked at four tech companies and never saw a hint of IPv6 (except for some tests that verified that third-party networking code accepted that address family).
Instead I played with IPv6 at home to make sure I understood it well enough should it ever come up at work. We'll see!
In theory... but what happens when you want to change ISPs, or your ISP doesn't assign static IPv6 blocks? It's recommended, but ISPs have no incentive to give a shit about you. Now all your internal infra is no longer routable.
There absolutely are annoyances IPv6 gets rid of that are so embedded in IT culture we only see them if we look.
Port forwarding, the external/internal address split, split-horizon DNS, SNI proxies, NAT, hairpin routing: some of the hacks made mostly because of the shortage of IP space.
That's kind of my point. In 20 years of managing networks and infra, none of those things have ever been painful or cost me more than a few minutes a year. That's just not enough to convince me I have any reason to switch over.
Everyone has to address their spiritual beliefs every time they mention something vaguely related to them? Else they lack epistemic humility? ...Did it occur to you that most people have actually thought of this question?
Wait, is "seems lacking in epistemic humility" just coded language for "I disagree, therefore you couldn't possibly be thoughtful"?