Tailscale continues to delight me every day. I wish I could pay them for my modest little use-case of remotely accessing my ham radio and my rpis. Before tailscale I relied on flaky upnp or "configuring my router" and trying to bust through NATs. Not sure IPv6 would have helped this at all given how hostile ISPs are.
I'm looking for a solution to the problem of exposing a port on my laptop to the internet (let's say for example purposes: ssh server port) from whatever network my laptop happens to be connected to ... I want to be able to initiate connections back to my laptop from random devices which are connected to the internet via network infrastructure that I don't control ...
Can tailscale give me that without me having to setup my own network infra somewhere?
I loved these two articles and feel some amount of trust for the author's company based on the communication style and knowledge sharing -- just wondering before I dive in whether this tech offers a good solution to my problem?
> IP mobility is what we do, in a small way, with Tailscale’s WireGuard connections. We try all your Internet links, IPv4 and IPv6, UDP and TCP
From the wireguard website[0]:
> WireGuard explicitly does not support tunneling over TCP, due to the classically terrible network performance of tunneling TCP-over-TCP. Rather, transforming WireGuard's UDP packets into TCP is the job of an upper layer of obfuscation (see previous point), and can be accomplished by projects like udptunnel and udp2raw.
I assume this means Tailscale has their own client where they’ve bolted this on themselves?
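Whatever Tailscale does internally isn't shown here, but the basic trick tools like udptunnel use is simple: TCP is a byte stream with no message boundaries, so each UDP datagram gets a length prefix before it is written to the stream, and the receiver slices the stream back into datagrams. A minimal sketch (the 2-byte big-endian prefix is an assumption, not any particular tool's wire format):

```python
import struct

def frame(datagram: bytes) -> bytes:
    # Prefix each UDP datagram with its length so message
    # boundaries survive the trip through a TCP byte stream.
    return struct.pack("!H", len(datagram)) + datagram

def deframe(stream: bytes):
    """Split a received TCP byte stream back into datagrams."""
    out, off = [], 0
    while off + 2 <= len(stream):
        (length,) = struct.unpack_from("!H", stream, off)
        off += 2
        out.append(stream[off:off + length])
        off += length
    return out

wire = frame(b"hello") + frame(b"world")
print(deframe(wire))  # [b'hello', b'world']
```

The TCP-over-TCP performance problem the WireGuard site mentions comes later, when the tunneled traffic is itself TCP and two layers of retransmission fight each other; the framing itself is trivial.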
> All this wonderfulness depended on the ability to start over and throw away the legacy cruft we had built up. And that is, unfortunately, pretty much impossible. Even if IPv6 hits 99% penetration, that doesn't mean we'll be rid of IPv4.
This bout of pessimism seems unfounded in the article. Of course most services and networks will get rid of v4 when a sufficiently high percentage of users and devices have moved, just like they do with other protocols. It won't disappear off the face of the earth, just like other old protocols didn't, you'll just find that you can't use it on most networks.
The article does cover it, but a bit later: It talks about e.g. WiFi lightbulbs and the like, still running on IPv4, and covers Postel's Law pretty well, considering it not 'a good idea' so much as 'you may not like the law of gravity, but the law does not care'. Postel's law is not a principle you should apply. Postel's law is a thing that will be done to you, regardless of what you think of Postel's Law.
In other words, some numbnut with an old wifi lightbulb is going to write some cruft to attempt to make their IPv6 only router somehow route their stuff to IPv4.
Whichever clowny moron sells a router that actually enables this numbnut wins the fight, because said numbnut will buy the clown's product.
I'm just rephrasing and repeating that article as I understand it. Point is: Yes, the bout of pessimism is covered. And covered again, and in a better way, in the followup article linked at the very end: It's the application of the notion that Postel's Law is a force of nature and not an applied principle.
At some point the v4 compatibility feature is not going to be attractive to a significant number of people. Just like parallel ports in PCs ceased to be. So yes, there's a force favouring legacy support, but it's only so strong, and prone to weakening.
The transition will take a while because of old devices, I agree with that. But still doesn't change the prospect that v4 will go away eventually.
Exactly. Technically ipv6 is designed to solve a real problem. Except the people who are charged with using/delivering service with IP addresses hate it with a passion.
The majority of network folks I have interacted with joke about it.
IPv6 is a terrible solution because ipv4 is so much easier for people to write down, remember, and actually use.
An ipv6 address is more like a MAC address in that it exists and it’s supposed to be unique but no one is reciting their MAC address every day.
Phone numbers, mailing addresses, and email addresses are all memorable. IPv6 is not and will not succeed as a result.
Yes, I’m acutely aware it’s being forced on the industry because ipv4 is running out and there is no other option. However, the price of ipv4 blocks shows how badly people don’t want to switch to this miserable standard.
Who in their right mind is memorising IPv4 addresses, at least beyond the IP range used on their home network? Nobody remembers MAC addresses because nobody needs to: IP and the upper layers hide that. If you don't want to adopt IPv6 because it would make remembering your IP addresses harder I'd say you have much more pressing issues to resolve first.
I'd also argue that many people, me included, have simply stopped remembering phone numbers (and maybe addresses) now. I have some (my parents', etc) committed to long-term memory but there is no way I need or even try to remember new phone numbers I come across. My contacts list does it for me.
IPv6 adoption is slow because a) it hasn't been essential so far, b) it's weird and different to IPv4 and if using it isn't essential, why bother? It's the technical equivalent to rewriting all your recipes from imperial to metric units, despite owning scales and measuring tools that still do both.
(Note: I am a nerd and like playing with this stuff so I am mostly v6 where possible.)
> Who in their right mind is memorising IPv4 addresses, at least beyond the IP range used on their home network?
Hell, I don't even remember them on my home network. I have a server running a dozen VMs and I'll be damned if I can remember the IPs for all those services. DNS exists for a reason.
Moreover, the only time I might be working with those raw IPs is with services using static IPs. But in that case my IPv6 addresses aren't actually that much more complicated. On the v4 side I might have:
192.168.1.122
And on the IPv6 side (where I have a /48 allocated) I might have something like:
2001:db8:1:1::122
Yes, it's all easy. I have a /64 assigned to my LAN, an /80 assigned to each of my hosts for allocating IPs to containers, etc. I have DNS and a text file listing which blocks I have allocated out of the /48 my ISP gives me. It's not hard or substantially different to IPv4.
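That allocation plan (a /64 per LAN out of the ISP's /48, a /80 per host out of the LAN's /64) is easy to mechanize with Python's stdlib `ipaddress` module. A sketch using the 2001:db8::/32 documentation range as a stand-in for a real delegation:

```python
import ipaddress

# Hypothetical ISP-delegated prefix (documentation range).
site = ipaddress.ip_network("2001:db8:1::/48")

# One /64 per LAN: a /48 holds 2**16 of them.
lans = list(site.subnets(new_prefix=64))
print(len(lans))        # 65536
lan0 = lans[0]          # 2001:db8:1::/64

# One /80 per host for its containers: a /64 holds 2**16 of those.
first_host_block = next(lan0.subnets(new_prefix=80))
print(first_host_block)  # 2001:db8:1::/80
```

The "text file listing which blocks I have allocated" becomes a few lines of this instead.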
The original solution was a static hostname<>IP address table, "hosts.txt", centrally administered, that all hosts copied via FTP until that became too large.
That led to the development of the Domain Name System (DNS) in 1983 [0].
DNS AAAA records do for IPv6 what A records do for IPv4. PTR records (for reverse look-ups) work for both.
For zero configuration (no DNS server) we have multicast DNS (mDNS) [1].
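The nice thing about the resolver APIs is that they already hide the A/AAAA split: `getaddrinfo` returns whatever address families DNS (or the hosts file) holds for a name. A small sketch, using "localhost" since it resolves without network access on most systems:

```python
import socket

def resolve(name):
    # getaddrinfo is address-family agnostic: one call returns
    # both the A (IPv4) and AAAA (IPv6) results for a name.
    infos = socket.getaddrinfo(name, None, proto=socket.IPPROTO_TCP)
    return {(fam.name, sockaddr[0]) for fam, _, _, _, sockaddr in infos}

for fam, addr in sorted(resolve("localhost")):
    print(fam, addr)   # e.g. AF_INET 127.0.0.1 and/or AF_INET6 ::1
```

Code written this way needs no changes at all to pick up IPv6 once the records exist.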
> IPv6 is a terrible solution because ipv4 is so much easier for people to write down
I'm not sure if you're serious, but in case you are, I'd ask you to consider the percentage of people using IP whose use cases involve writing down IP addresses.
Just us network administrators, who administer the networks that your IPs run on. It’s difficult to convince someone to make their own job harder, especially when nobody is actually asking them to, either.
If you are administering networks by writing down ip addresses, it sounds like an old habit that maybe survived this long because of conveniently short addresses, but hardly essential.
(Yes I have done it too back in the day, though we had this nifty tech called copy & paste...)
To me it feels like the IPv6 dream of a fully peer-to-peer connected network was the wrong turn. Instead I think they should have rolled with NAT.
NAT is great because it acts as an abstraction layer for the external network, in that the nodes on the local network can talk to each other and be blissfully ignorant of the details of the external network.
And for the majority of local networks there's a fundamental asymmetry between the local and external network. The majority of nodes on local networks only ever need to connect _out_ to the external network, they don't need incoming connections.
Going to 128bit addresses essentially makes DNS required. And when using DNS nobody really cares about the address of the host. They just want to be able to talk to the host on the other end, whatever it is.
Being fully peer-to-peer is a pipe dream. Nobody wants their entire local network exposed to the entire internet without a firewall. And with a firewall comes the exact same hole-punching problem that NAT has.
So that's the problem they should have tried to solve better, I think. They should have embraced NAT and the inherent network asymmetry, and found ways to make it better rather than trying to abolish it. Embraced that DNS needs to play a more central role than it does today, because nobody really cares about addresses.
> The majority of nodes on local networks only ever need to connect _out_ to the external network, they don't need incoming connections.
And that's why we can't transfer large files from one computer to another, and why audio and video calls over the internet are only done through well-funded companies, so there's nearly no competition.
Nobody has configured their firewalls to allow those things, because those things don't exist. If they existed, your firewall would have come configured for them by default.
But if your computer had connectivity to the rest of the network, you could choose to disable protection on some port, and help bootstrap some software like those. Instead, with NAT, you can't.
I wish I could upvote this 1000 times. Most firewalls are just so much security theater cargo culting. NAT and universal firewalls have done immeasurable, and at this point practically irreversible, damage to network freedom and free software.
A fully peer-to-peer network is how IPv4 worked before NAT became popular. NAT was a hack to solve the address shortage. I worked for a couple of ISPs in the 90's and many corporate networks had fully routed addresses on their LAN. I even had this at home.
At one point I had an entire /24 routed over a 28.8K connection. I eventually upgraded to an ISDN line. Fun times.
It sounds like what you really want is a firewall, not NAT. Having a globally unique prefix (either IPv6 or v4) does not mean your network is completely accessible externally. There are no dependencies. It actually makes things simpler if you ever want to connect two internal networks.
Of course I want a firewall. We all do. That much was obvious even when I was on dial-up, installing Shorewall or whatever it was back in 2000.
My point is that if you have a firewall you're in the exact same position as if you have a firewall+NAT in that you need to be able to punch holes in it.
What is the downside? The external network only needs to know you're responsible by an IP range that your ISP gives you, you can still configure your network internally as you please.
The addresses depend on the external network. If my prefix changes with IPv6, suddenly nothing works on the internal network until all the nodes are reconfigured and the firewall rules rewritten. I know because it happens to me regularly.
With NAT the internal nodes can for the most part work like normal. They might experience a gap of a few seconds where communication with the external network (the Internet) is down, but apart from that nothing.
You might say just use ULA[1]! To which I will say what's the point of IPv6 then?
It's true that this is a problem for intermittently connected consumer networks and customers of flaky ISPs that change the prefix on you needlessly. Regulation for allocating PI space to flaky isp customers might help.
A fully peer-to-peer network is arguably how the IPv4 internet still works, as hosts behind NAT are not really part of it. But of course it hurts the network effect when fewer users are on the network proper.
> And for the majority of local networks there's a fundamental asymmetry between the local and external network. The majority of nodes on local networks only ever need to connect _out_ to the external network, they don't need incoming connections.
This is kind of like the railway administrators who say "there's no reason to have the train stop in TinyTown, there are no passengers waiting for it there".
If I could receive incoming connections, I could host my own Internet services (such as my own web site) AT HOME, and not be at the mercy of corporations to allow me to offer my information and/or services.
Now try hosting two different web services on two different hosts inside your NATted network on port 443, so you can use https:// without specifying a port.
Because that's the big advantage with IPv6 and users getting subnets instead of a single IP.
[Edit] The problem with public addressed end nodes is the Operating Systems we use aren't secure. Linux and Windows are unfit for purpose in a world of persistent connectivity. Suitable alternatives do exist, but aren't well known.
Once I get Genode or GNU Hurd running, I'll gladly hook it straight to the full IPv4/IPv6 public-addressed internet. Until that day, nope.
You’re confusing a firewall and a NAT. Having a publicly routable IP address does not in any way imply allowing external connectivity to that address.
The firewall rule that allows packets from ESTABLISHED and RELATED connections back to a host behind a NAT is the exact same rule that allows packets to get through a default-drop firewall.
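The point about ESTABLISHED/RELATED can be made concrete with a toy connection tracker. The same state table that lets reply packets back through a default-drop firewall is the one a NAT box has to maintain anyway; this sketch is a deliberately simplified model, not any real firewall's logic:

```python
class StatefulFirewall:
    """Toy model of stateful filtering: default-drop inbound,
    with replies to tracked outbound flows allowed through."""

    def __init__(self):
        self.established = set()   # flows initiated from inside

    def outbound(self, src, sport, dst, dport):
        # Record the flow so replies count as ESTABLISHED.
        self.established.add((src, sport, dst, dport))

    def inbound_allowed(self, src, sport, dst, dport):
        # Only packets matching the reverse of a tracked flow pass.
        return (dst, dport, src, sport) in self.established

fw = StatefulFirewall()
fw.outbound("192.168.1.10", 40000, "93.184.216.34", 443)
print(fw.inbound_allowed("93.184.216.34", 443, "192.168.1.10", 40000))  # True
print(fw.inbound_allowed("198.51.100.7", 443, "192.168.1.10", 40000))   # False
```

A NAT adds address rewriting on top of exactly this table; dropping the rewriting (as with public IPv6 addresses) loses none of the filtering.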
And if you think Linux and Windows are secure, you don't need a firewall; if you don't think they are, then you do. Either way, you shouldn't be putting them behind NAT.
You should put them behind NAT if you want to connect your network to the internet but you don't have enough IP space to do it. It has nothing to do with how secure you think your OSs are.
I don't believe any firewall out there is fit for purpose, because the OS on which it is based is likely to be insecure. On top of that OS would then be the firewall software itself.
I would truly love to be proved wrong, but mathematically proven systems are few and far between.
I had to downvote you for conflating NAT with security. IPv6 has the same capability, by the way, due to a reserved address space called link local (fe80).
NAT is just as horrible, and secure, as spanning tree.
> I had to downvote you for conflating NAT with security.
I specifically didn't.
I said that nobody sane connects to the internet without a firewall. That firewall will of course be default-drop because otherwise why bother.
With that firewall in place, nodes on the outside of the firewall cannot talk to nodes on the inside without some configuration. So the fact that the nodes share an address space is moot.
My point was that punching a hole in that firewall requires a similar effort or solution that adding a NAT rule does.
The difference is that you can give an IP to a device semi-permanently and you can route to it directly if needed. This allows you to configure a firewall in your device specifically, rather than having to orchestrate with the router. For instance, you can give your laptop an IP and your partner's laptop another IP and you can just route everything to those two specific devices, which you know is handled properly by two responsible people. I agree with the point that you probably want to be more restrictive in the router level when you connect your fridge, printer, etc.
Moreover, the router is a lot simpler as it doesn't need to keep track of connections, which works well with TCP, but not necessarily with UDP. So, you can actually use other protocols other than TCP easily too. The only downside of IPv6 is that it is not backwards compatible, but being able to identify devices uniquely has no down sides.
You don't just need configuration on the fw, you are also limited to having at most one service of each kind per NATted network, because you use up their default ports.
That's already quite convoluted, and it means that one remote address can only connect to one of the services at a time and you need to reconfigure your router to change which one. It also gives you no selectivity for remote networks that use NAT, so if 1.1.1.1 is actually an ISP using CGNAT for thousands of customers then they're all stuck with going to only one of your services.
(If I read your post as written, it also seems to say that the client needs to pick a custom source port and if they're using NAT then they also need to get that source port unaltered through all of their layers of NAT. And they'll be limited to one connection at a time, perhaps one connection for however many people are behind the same CGNAT, because only one connection can use a specific source port at a time.)
So yes, you can pile hacks on top of hacks on v4, but it's not similar effort to just permitting the connection in a firewall, and it has a bunch of extra limitations too.
Correction about link-local: v6 link-local doesn't really work for this - LL addresses are meant to be used for low level stuff like address autoconfiguration and neighbour discovery. They don't even work with the normal socket APIs without doing some extra gymnastics (specifying interface indices etc).
What does work though is ULA (Unique Local Addresses) - they're also designed to prevent conflicts when you eventually end up making a mashup of your and someone else's network who also wanted to use their own addresses.
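The "extra gymnastics" for link-local addresses are visible right in the socket API: IPv6 sockaddrs are 4-tuples `(addr, port, flowinfo, scope_id)`, and for fe80:: addresses the scope_id (interface index) is mandatory, because the kernel can't know which link "fe80::1" is on. A sketch using a numeric zone index so it parses without naming a real interface:

```python
import socket

# "%1" appends a numeric interface index (often loopback) to a
# link-local address; AI_NUMERICHOST skips any DNS lookup.
infos = socket.getaddrinfo("fe80::1%1", 4242, socket.AF_INET6,
                           socket.SOCK_DGRAM,
                           flags=socket.AI_NUMERICHOST)
family, socktype, proto, _, sockaddr = infos[0]
print(sockaddr)  # 4-tuple; the last element is the scope_id
```

ULAs (and global addresses) need none of this, which is why they work with ordinary, scope-unaware code.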
> There's only one catch: that's not how UDP and TCP work, and it's too late to update them.
At least TCP is being worked on, MPTCP already is in iOS and is getting integrated into linux. As part of its ability to switch paths it will also have an endpoint-independent identifier, i.e. mobility.
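On Linux (5.6+), MPTCP is opt-in per socket via a distinct protocol number. A sketch with a fallback for kernels or OSes without support; the constant is Linux's value, defined by hand here since older Pythons don't export it:

```python
import socket

IPPROTO_MPTCP = 262  # Linux protocol number for Multipath TCP

def mptcp_socket():
    # Ask for an MPTCP socket; fall back to plain TCP if the
    # kernel doesn't support it (pre-5.6 Linux, other OSes).
    try:
        return socket.socket(socket.AF_INET, socket.SOCK_STREAM,
                             IPPROTO_MPTCP)
    except OSError:
        return socket.socket(socket.AF_INET, socket.SOCK_STREAM)

s = mptcp_socket()
print(s.type == socket.SOCK_STREAM)  # True either way
s.close()
```

Everything above the socket call is unchanged, which is what makes incremental MPTCP deployment plausible.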
So what? Just because IPv6 doesn't solve all the problems in networking doesn't mean it's trash and we should abandon it. No thanks. Where I live, ISPs are starting to share IPv4 addresses between several customers, which creates many problems.
And QUIC over IPv6 is still a lot better than QUIC over IPv4.
Here Comcast started giving out "static" IPv4 addresses to dynamic IP customers, which is the opposite of sharing IPs... I can change it by spoofing my router's MAC address though.
Comcast is the reason I disabled IPv6. They'd frequently route traffic over IPv6 to... Nowhere. Websites of major tech companies such as Google and GitHub would occasionally get routed wrong and just... Hang. It's like they black holed outbound IPv6 traffic. The traceroute6 output would quit displaying new routes.
Never had that problem on IPv4. So I ended up disabling it.
(I do have an unrelated issue that sometimes they can't route traffic between two customers in the same general area. Traffic tends to go out to the west coast Comcast DCs and get stuck, never comes back. We're midwest).
GitHub is IPv4-only to this day. So you should not be getting AAAA records for it, nor any v6 routing issues to a v4 address.
(Also, my experience with IPv6 on Comcast was relatively solid; just hated their billing department. Now my ISP doesn't offer IPv6 at all, which isn't great.)
Used to run the network for a large hosting company. Can confirm, I disabled all peering and transit on one set of border routers for maintenance, restored, and then did the same on the other set.
In both cases, I disabled everything, and then only restored IPv4. Took me a week to notice and restore IPv6. Not a single support request was received relating to the week long IPv6 outage.
This was when I gave up and removed IPv6 from my home network. I’m now behind IPv4 CGN, and don’t care. It works.
What do you mean? Just because Comcast leases you the same IP for years doesn't mean they're obligated to give you the same IP on your router's next DHCP request.
When we moved recently Comcast rotated our IP to one that geolocates to two states away. Which is fine until every single big box store geolocates us to the wrong store.
IPv6 is basically on mobile providers shoulders right now, in my opinion. At some point it will go from opt-in to required, and later from required to the only option.
And while it's not like mobile users use old sites much, but if you want them to have the ability to access yours, then you'll make sure it has IPv6 as well.
Several mobile providers have already gone past the "required" to the "only option" stage. There haven't been major outages because the mobile OSes adapted just fine and NAT64 seems to work all right.
We've crossed the point that most of the consumer internet is IPv6 by default. It's enterprise and other similar VPN adoption that has lagged behind, generally intentionally as businesses worry that IPv6 is harder to secure and have cultural momentum behind years of IPv4 security knowledge.
I learn new things about IPv4 every day. Over the weekend there was an article about someone writing an IPv4 address parser and there were a couple of cases that were new to me.
One thing about "second system syndrome" that doesn't get talked about much is how the second system always has to take into account the unwritten rules from the first system, that the original designer didn't take into account. What was previously unspecified now becomes specified, and that increases the amount of text in the second system's specification. But, the complexity might not be as high -- it's just that you can read about it all in one place.
Ehhh I don't know about that. IP takes a while to understand for most of the reasons articulated in the post. DHCP and ARP, for example, are a bit strange. It's just everyone already understands it, so explanations have been distilled pretty well.
That being said, I don't grok IPv6 myself, on account of never actually looking into it.
ARP I'll give you (though even there it's only actually in the binding of IPv4 to ethernet-like lower layers. IP over e.g. PPP doesn't need it). DHCP isn't inherent in IPv4 at all though. You want to throw it in, because IPv6 has address autoconfiguration built-in. But this is actually just more evidence that IPv6 is more complex.
I want to throw DHCP in because in practice, you need it to connect to random IPv4 networks.
Besides, the counterpoint is even tougher. Should we expect beginners to configure static addresses when actually configuring their networks? Getting bogged down in the details of broadcast addresses, subnet masks and gateway addresses can be a bit unsettling for a total beginner. It's second nature to seasoned hackers, so we discount how foreign it was when we first learned it.
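Those "unsettling" pieces (netmask, network address, broadcast address) are all derivable from the one address-with-prefix a beginner is given, which is easy to show with the stdlib `ipaddress` module:

```python
import ipaddress

# One address + prefix length determines everything a static
# IPv4 config form asks for.
iface = ipaddress.ip_interface("192.168.1.122/24")
print(iface.netmask)                     # 255.255.255.0
print(iface.network)                     # 192.168.1.0/24
print(iface.network.broadcast_address)   # 192.168.1.255
```

The gateway is the one piece that genuinely can't be derived, which is part of why DHCP (or v6 router advertisements) exists.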
Understanding how to read/write an IPv6 address is WAY more complicated than for IPv4. That barrier to entry alone likely prevents a lot of casual adoption. You can come back that IPv6 isn’t all that bad, but the real world adoption rate amongst individuals, even skilled hobbyists, speaks for itself.
That is probably the silliest argument against IPv6 you can make. Just look at the header structure and IPv6 becomes pretty clear. And without old cruft that IPv4 is burdened with, I would argue IPv6 is easier to grok.
Maybe you mean that the increase in adoption is too slow, and I agree to that, but change always takes more time than we want, specially when there isn't clear financial incentives.
That’s why I said among individuals, specifically hobbyists. Sure, companies that consume vast quantities of IPv4 addresses have an economic incentive to use IPv6, so you will see some adoption. But the reason there is such a long tail is because the cost of adoption (learning it) is higher than the supposed benefits for most individuals and small corporations.
The ideal situation for both IPv4 and IPv6 is to not have to deal with raw addresses. If you are reading IP addresses over the phone or in a ticket to connect to computers on a regular basis, your company did IT on the real cheap and is likely suboptimal in many ways except for amount of money spent on IT.
Not as much: you could learn the basics of IPv4 in two to three 4-hour classes of study, assuming you know how to convert between binary, octal, and decimal.
If you only use autoconfiguration perhaps, but for manually assigned addresses it's really not too bad since one can structure them in a logical way (which would be extremely expensive to do in the IPv4 case).
If you are a company you can get your own IPv6 space and can structure things hierarchically: <company prefix>:<site>:<vlan>::<hostnumber>.
And in fact I can remember more IPv6 addresses for hosts than IPv4 addresses for the same hosts right now.. so there is that.
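A `<company prefix>:<site>:<vlan>::<hostnumber>` plan like that can be generated mechanically. A sketch with 2001:db8::/32 (the documentation prefix) standing in for a real company allocation, and the field widths as assumptions:

```python
import ipaddress

def host_addr(site: int, vlan: int, host: int) -> ipaddress.IPv6Address:
    # Each 16-bit group of the address carries one level of the
    # hierarchy, so the address itself documents where a host lives.
    return ipaddress.ip_address(f"2001:db8:{site:x}:{vlan:x}::{host:x}")

print(host_addr(site=0x10, vlan=0x20, host=1))  # 2001:db8:10:20::1
```

Addresses built this way are arguably easier to remember than arbitrary IPv4 ranges, because the structure carries meaning.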
And yet I have fe80:: in my head and 2001:db8:: in my head (as well as my own /32). I don’t have the documentation prefix (if one even exists) for IPv4 in my head.
It’s harder, but “can’t... ever” and “not possible” are not true for everyone.
IPv4 as deployed with NAT is more complex than IPv6. The most significant barriers to V6 adoption are, in this order:
1. It’s an entirely new network. IPv4 only hosts are not reachable on the IPv6 net. It replaces rather than extends IP, and I can’t figure out why they did that.
2. The addresses are too long to remember and easily type, and the encoding with its shortening rules is non-canonical and annoying to visually parse.
Extending V4 to 64 bits on top of the V4 space would have been a much better idea for multiple reasons. Every V4 becomes a /32 in a full /64. Using something like base32 addresses could have been kept to just barely type-able length.
People always push back on #2 with “use DNS.” I always wonder if these people have ever done IT or devops. You constantly type IPs in the real world and long V6 addresses are a huge annoyance.
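The "just barely type-able" claim checks out arithmetically: base32 packs 5 bits per character, so 64 bits fit in 13 characters, versus up to 15 for a single dotted-quad IPv4 address. A quick sketch (the 64-bit value is made up for illustration):

```python
import base64
import struct

# Encode a hypothetical 64-bit address and strip base32 padding.
addr64 = struct.pack("!Q", 0xC0A8017B_0000002A)
encoded = base64.b32encode(addr64).rstrip(b"=").decode()
print(encoded, len(encoded))   # 13 characters for 64 bits
```

Whether 13 case-insensitive characters beat a shortened IPv6 string for typing is a judgment call, but it is in the same ballpark as IPv4.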
I do a lot of devops and haven't typed an IP address in a while. I do a lot of comparisons, however, and comparing 128 bit numbers is slightly more difficult than 32 bit numbers. Overall, I stopped remembering IP address a long time ago. I wouldn't recognize "jrock.us" if you showed it to me. I kind of recognize my current external IPv4 address, but it could be off by a lot before I'd notice.
I think, many years ago, I typed a lot of IP addresses, but don't anymore. I am personally fully-ready for IPv6, but understand that the world is not. (For example, my website was IPv6-accessible for many years, but changed providers and now it's not. I was a little sad about this, but it turns out nobody cares, so I didn't spend any time on it. Unfortunate, but someday it will work again, I'm sure.)
> It’s an entirely new network. IPv4 only hosts are not reachable on the IPv6 net. It replaces rather than extends IP, and I can’t figure out why they did that.
IPv6 cannot be an extension of IPv4 since IPv4 is not forward-compatible.
It has nothing in it that would allow stuffing more address bits into it.
And since you cannot encode more than 32bit in 32bit you will never be able to have a IPv4 successor that can just interoperate with IPv4 - you will always end up in exactly the same situation as IPv6 is right now.
And if you just want to route your IPv4-ng over IPv4 in the core or something (which is more or less what your idea is), you can do that with IPv6 as well: use 6in4 tunneling.
But you still have the same problem: an IPv4-only host cannot communicate with an IPv4-ng-only host, since it is unaware of IPv4-ng.
There's plenty of room in the IPv4 header to extend it.
The most significant bit of the IP protocol number field is mostly unused. The IP protocols that use it could be safely deprecated on the open Internet.
Setting the MSB indicates a 64-bit address.
In 64-bit mode the source IP and destination IP fields in the V4 header become the most significant 32 bits of the source and destination IP. The least significant 32 bits of each are present in the options field, which contains enough room for both. In 64-bit mode any other use of the options field is deprecated. The bit index of these 32-bit chunks in the options field could be standardized to permit easy hardware decoding. The part of the options that indicate the option type and length could be present for backward compatibility with IP64-unaware routers but otherwise ignored.
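To be clear about what's being proposed: this "IP64" layout is the commenter's hypothetical, not a real protocol, but it can be sketched with `struct` to show it at least fits the existing header shape (checksum, total length, and option type bytes are all simplified away here):

```python
import struct

IP64_FLAG = 0x80  # hypothetical: MSB of the protocol field marks 64-bit mode

def pack_ip64(proto: int, src64: int, dst64: int) -> bytes:
    ihl = 5 + 2  # 20-byte base header + 8 option bytes, in 32-bit words
    header = struct.pack(
        "!BBHHHBBHII",
        (4 << 4) | ihl,       # still version 4, so old parsers don't choke
        0, 0, 0, 0, 64,       # tos, total length, id, frag, ttl (simplified)
        IP64_FLAG | proto,    # flagged protocol number
        0,                    # checksum omitted in this sketch
        src64 >> 32,          # high 32 bits in the classic src field
        dst64 >> 32,          # high 32 bits in the classic dst field
    )
    # Low 32 bits of each address ride in the options field.
    options = struct.pack("!II", src64 & 0xFFFFFFFF, dst64 & 0xFFFFFFFF)
    return header + options

pkt = pack_ip64(6, 0x01020304_AABBCCDD, 0x05060708_EEFF0011)
print(len(pkt))   # 28 bytes: 20-byte header + 8 option bytes
```

An IP64-unaware router that honors IHL and ignores unknown option bits would forward this on the high 32 bits alone, which is the whole pitch.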
Uglier than a clean slate design? Sure, a bit. But if this had been done, we'd already be using it everywhere and the problem would be solved.
TCP, UDP, and other IP protocols could remain almost entirely unchanged, though I imagine tables in software stacks would have to be extended to support 64 bits of address space. That's not a major change.
I just came up with that in 10 minutes while replying to this post. I assume it would take a committee several years to write the above spec.
The big "win" here is that existing IPv4 routers in the Internet's core infrastructure do not need to be upgraded all at once. They'd just lack the ability to understand anything beyond /32.
Honestly there are probably enough /32's to give one to every major endpoint, so I wonder if the core would even need to worry about it at all. I imagine that ISPs would hand out /32's to business customers (maybe for an additional fee), but once IP64 were widely deployed they'd probably hand out /48's to most subscribers. That would give you 65,536 "real" IP addresses for your house or small business. Mobile phones might get a /56, which would give them 256 IPs for tethering and permit cellular networks some additional IP space.
There are nice things about 128-bit addresses, but I am not sure they are worth adding 30+ years to the transition and having to lug around the hideous boat anchor that is NAT for an extra few decades.
> The big "win" here is that existing IPv4 routers in the Internet's core infrastructure do not need to be upgraded all at once.
As I understand it, that is not the limiting factor in IPv6. Actually, I believe it was the second piece of the Internet's infrastructure to support IPv6 (the first part being OS networking support). Instead, the limiting factor is network management and middleware, in other words the edges and not the core of the Internet, that are holding back IPv6 adoption.
I think you will be surprised at how much stuff will break in mysterious ways, and at what a nightmare it will create to actually operate. But even if we pretend it will just work, there is still the issue that IPv4-only hosts cannot talk to IPv4-NG hosts.
There is no way around that.
And having to upgrade the core is not really an issue (anymore).
You are focusing on the wrong issue.
v6 already implements a similar scheme. You set the protocol field in the v4 packet to 41, and then put the v6 addresses at the beginning of the packet payload, with the original packet payload following afterwards.
It doesn't seem to have helped as much as you're claiming.
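For concreteness on the protocol-41 scheme: in 6in4 the entire IPv6 packet (fixed 40-byte header, addresses included) becomes the payload of an IPv4 packet whose protocol field is 41. A sketch that builds just the inner IPv6 packet:

```python
import struct
import ipaddress

IPPROTO_IPV6 = 41  # protocol number the outer IPv4 header carries

def ipv6_packet(src: str, dst: str, payload: bytes) -> bytes:
    # Fixed 40-byte IPv6 header: version/class/flow, payload length,
    # next header (59 = "no next header" for this sketch), hop limit,
    # then the two 128-bit addresses.
    return (
        struct.pack("!IHBB", 6 << 28, len(payload), 59, 64)
        + ipaddress.IPv6Address(src).packed
        + ipaddress.IPv6Address(dst).packed
        + payload
    )

inner = ipv6_packet("2001:db8::1", "2001:db8::2", b"hi")
print(len(inner))  # 40-byte header + 2-byte payload = 42
```

The tunnel endpoints do the wrapping and unwrapping; everything in between only ever sees ordinary IPv4, which is exactly the "route the new protocol over the old core" property the parent proposal wants.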
I could set up an entirely ipv6 network on each of my devices and connect it with nat into IPv4, but crucially I want someone typing "ping 1.1.1.1" or opening a UDP connection to 92.42.35.61 port 5210 to just work, and I don't want to add much overhead (and wide deployment of dual stack adds overhead)
Imagine I have a mid sized ipv4 network comprising say 200 /24 subnets across multiple locations, all on 192.168.x.0/24, with 192.168.0.0/24 reserved for a vpls connecting them together (say ospf point-to-multipoint, other topologies are preferable of course).
Default router to the internet on 192.168.0.1 (internal), which srcnats to a public IP addresses of 12.12.12.0/24, say 1 hide per subnet, advertised out as BGP to a couple of providers via some /30 links from their allocation.
Now say I want to create a new client network.
1) Create a new subnet on 192.168.201.0/24 and put them all behind the new hide 12.12.12.201
I've done this 100+ times already, no problem, but it's rather legacy.
2) Create a new ipv6 network, whose router gets a 192.168.0.89/24 address linking it to the other networks
I could create a new ipv6 network, and reach servers on 192.168.12.1, or 192.168.51.8, as well as reaching the internet on ipv4. Traffic leaving the network would get srcnatted to 192.168.0.89.
I then have a new http service I need to add, so I create a new ipv6 server network. This is reachable directly from the wifi clients as I have an ipv6 peering between the wifi client router and the ipv6 server router.
For access from ipv4 subnets I can just put a destination nat on 192.168.0.89 port 80 to my new server, so it's reachable from all other ipv4 clients, but also from ipv6 clients directly. The server wouldn't need an ipv4 address, as it would see an ipv6 connection coming from the natting router (which maintains the state). I could also add a port forward from the public IPv4 address too.
3) Do both, and have twice the overhead as I have to manage both an IPv4 and IPv6 network.
I'd rather not do that dual stack. After all, what's the point in moving to ipv6 if you need to keep ipv4 running? It would be like using IP and IPX on the same network.
My next step would be to add a public IPv6 peering, putting ipv6 routing on my default router.
Now my new server is reachable directly on ipv6 from my client ipv6 network, and from the entire ipv6 internet.
It's also reachable from my older network on the internal 192.168 address range, and from the ipv4 internet via the port forward on the public address.
I don't have to have a dual stack on every device (including the server), just on 3 routers.
I notice that some of my customers would like to connect via ipv6, so that's fine, I do an ipv6->ipv4 nat on my firewall for legacy services.
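That ipv6-&gt;ipv4 translation step is what NAT64 (RFC 6146) standardizes, paired with the RFC 6052 scheme of embedding an IPv4 address in the low 32 bits of a /96 prefix (64:ff9b::/96 is the well-known one). The stateful translation lives in the firewall, but the address mapping itself is trivial; a quick sketch (function name is mine):

```python
import ipaddress

def embed_v4(v4: str, prefix: str = "64:ff9b::/96") -> ipaddress.IPv6Address:
    """Map an IPv4 address into a /96 NAT64 prefix (RFC 6052 layout)."""
    net = ipaddress.ip_network(prefix)
    # the IPv4 address occupies the low 32 bits of the resulting IPv6 address
    return net[int(ipaddress.IPv4Address(v4))]
```

A v6-only client looking up a v4-only service through DNS64 gets back exactly this kind of synthesized address, and the NAT64 box strips the prefix on the way out.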
With this setup, which doesn't need me to change 20,000 devices on 200 subnets simultaneously, or end up in a situation where I have 20,000 dual stack devices, I can start migrating to ipv6 without losing any existing ipv4 services. Eventually I will be left with just a handful of ipv4 subnets and a couple of routers running an ipv6/v4 natting gateway.
That's a very simplistic statement. That's like saying IA-64 was over-engineered. It's completely unhelpful unless you can articulate in which ways it is over-engineered and propose a practical alternative.
I take my hat off to the AMD folks that brought us x86_64. I challenge you to design a successor to IPv4 that is better than IPv6. Put it out there and let the world start taking shots at it. We'll see how well it holds up.
There is plenty of opportunity for a better idea to catch on. Just like x86_64 succeeded despite the existence of IA-64. IPv6 has been so poorly adopted that it is not too late to retreat.
But it would take a lot more than lazy criticism to find a workable path forward.
The article is way too long for the message it tried to convey. Whatever that message was, I didn't get it.
Related to this topic: IF IPv6 were a drop-in replacement for IPv4, I think adoption would be much better today. For instance, as an end-user (residential use-case), I should be able to upgrade my network equipment (modem, switches, routers) over time, and each piece should transparently work with IPv6 independently of the others. One might not reap the benefits of ipv6 immediately, but over time (gradually) we should.
Counterpoint: Stephen Fry’s voice suddenly began narrating this entire article for me around the third sentence. This probably biased everything else for me, but in my opinion, it’s exceedingly rare to find ~a complex~ any topic in IT discussed ~in depth~ at all in such well-written language as this post contains. I honestly don’t care if they were completely wrong; their tone invited thought and discussion (which is probably one reason it’s been posted several times, with decent discussion, to boot).
> I should be able to upgrade my network equipment (modem, switches, routers) over time and each one of them should work with IPv6 when possible transparently independently.
Good news: this describes current reality. (Your ISP needs to do their bit of course too)
> Avery’s Laws of Wifi Reliability (mostly a paraphrasing of a trivial subset of CAP theorem):
> Rule #1 ("a OR b", success compounds and failure rates drop): if you have two wifi router brands that work with 90% of client devices, and your device has a problem with one of them, replacing the wifi router brand will fix the problem 90% of the time. Thus, an ISP offering both wifi routers has a [1 - (10% x 10%)] = 99% chance of eventual success*
> Rule #2 ("a AND b", failure compounds and success drops): if you’re running two wifi routers at once (say, a primary router and an extender), and both of them work “correctly” for about 90% of the time each day, the chance that your network has no problems all day is 81%.
> My claim is that the "choose between two internets [IPv4, IPv6]" problem is the same as the "choose between two flakey wifi routers on the same SSID" problem (Rule #2). We should not be surprised that the combined system is less reliable.
> The World in Which IPv6 was a Good Design: mobile IP. That is, the ability for your connections to keep going even if you hop between IP addresses. If you had IP mobility, then you could migrate connections between your two internets in real time, based on live quality feedback. You could send the same packets over both links and see which ones work better. If you picked one link and it suddenly stopped, you could retransmit packets on the other link and pick up where you left off. Your precise heuristic wouldn’t even matter that much, as long as it tries both ways eventually.
> If you had IP mobility, then you could convert the “a AND b” scenario (failure compounds) into the “a OR b” scenario (success compounds).
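The arithmetic in the two rules is just independent-failure probability; a quick sanity check (function names are mine):

```python
def or_reliability(p_a: float, p_b: float) -> float:
    """Rule #1: the system works if either path works, so failures must coincide."""
    return 1 - (1 - p_a) * (1 - p_b)

def and_reliability(p_a: float, p_b: float) -> float:
    """Rule #2: the system works only if both parts work, so any failure breaks it."""
    return p_a * p_b
```

With two 90%-reliable components, `or_reliability(0.9, 0.9)` comes out to 0.99 and `and_reliability(0.9, 0.9)` to 0.81, matching the figures quoted in the rules. The whole IP-mobility argument is about moving dual-stack from the second formula to the first.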
It doesn't really work any differently to v4. Most of your v4 knowledge still applies to v6's behavior. Subnetting and routing works no differently, and neither do the addresses themselves (other than being longer). It's passing packets around in exactly the same way v4 does.
There are a few differences, e.g. default routes, IPs and maybe DNS come from RAs rather than DHCP. You also won't be using NAT, but that removes complexity so it should make things easier to understand.
It was. Kubernetes should have been native v6-only from the start, with inbound v4 connections being handled by reverse proxies. Would've simplified the whole thing a bunch.
This. ISPs treating delegated prefixes the same way as a dynamic IP in IPv4 (as in, changing them often and randomly) is one of my major pains as a user of IPv6. The whole network stack is very bad at handling this - it results in a very annoying, semi-broken network where things kind of work, but not really: hosts on the local network might be reachable or not, depending on when they get the prefix change and adjust their global addresses accordingly. Hosts further uplink might be reachable or not, depending on when and in what sequence the ISP's routers get notified of the change. It's a nightmare if you want to have any kind of reliable connectivity.
fd00::/8 (technically the whole of fc00::/7 is set aside for unique local addresses, but the lower fc00::/8 half is currently reserved) is supposed to be used for all local addresses and is equivalent to the private reserved ranges in IPv4.
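RFC 4193 even specifies how the 40-bit global ID under fd00::/8 should be picked: hash a timestamp together with an interface identifier and keep the lowest 40 bits. A simplified sketch of that (the RFC uses an NTP-format timestamp and a proper modified EUI-64; this version cuts those corners):

```python
import hashlib
import time
import ipaddress

def generate_ula_prefix(mac: bytes) -> ipaddress.IPv6Network:
    """Derive a pseudo-random /48 under fd00::/8, RFC 4193 style."""
    eui64 = mac[:3] + b"\xff\xfe" + mac[3:]            # EUI-64 from a 48-bit MAC
    seed = time.time_ns().to_bytes(8, "big") + eui64   # timestamp + interface id
    global_id = hashlib.sha1(seed).digest()[-5:]       # lowest 40 bits of the hash
    return ipaddress.IPv6Network((b"\xfd" + global_id + bytes(10), 48))
```

The point of the randomness is that two sites generating ULA prefixes independently are overwhelmingly unlikely to collide, so merging two such networks rarely requires the renumbering that merging two 192.168.0.0/24s does.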
As I understood it back in the day--which is how long IPv6 has been floating around--one of the whole ideas was that every smart dust mote or whatever would have its own fixed IP address. I'm not sure many people were thinking much about multi-tier edge architectures. There were central servers, maybe PCs, and then "things." And those things were going to exist in very large numbers and needed IP addresses.
If your ISP is swapping prefixes on you without warning (vs giving you lifetimes and chance to deprecate old addresses) then they are providing bad v6 service and you should complain loudly. That said, it's still no worse than what you get with v4...
The designers didn't imagine the prefix to be eternal, that's why there's a lot of support for automatic renumbering (address lifetimes with concurrent deprecated and active addresses, with support for this built into address autoconfiguration mechanisms like dhcpv6 and slaac from the start, etc). So it's better than "v4 with more bits".
But people like the NAT. I'd guess it's the only part of networking that gives people a warm and nice feeling; it appeals to our senses of ownership and security.
100ms isn't that low, and those NATs are introducing additional latency and complexity that don't need to exist. If both sides are behind NAT then you may well need to relay the connection through a third party server, introducing even more cost, complexity and latency and involving an otherwise-unnecessary third party.
"I managed to get something working" doesn't invalidate any of that.
100ms is very low for doing decent compression and decompression.
Network latency added by natting is microseconds.
Imagine I had a public IP - no natting involved. How would my video client tell my stateful firewall to allow a new incoming connection from the calling device?
The issue isn't the NAT itself, it's the inhibition of P2P connections (and, to a further extent, of spontaneous queries for distributed routing technologies like those seen in IPFS, though those only prevent full decentralization).
Once you have to bounce the data through a hosted proxy, you're forcing no less than 10 ms of extra latency onto the users (of course there are rare exceptions for users really close to your proxy).
Well, nobody knew which new features would work out. Looking back, I think the bets on features that would pay off in saner network architectures are still quite defensible, and some will likely pay off in the future once we move on from the dual-stack norm.
I thought I knew how it all worked, because after all I once used ebtables and ipchains to build a dual-homed linux gateway back before that was a thing (marking packets was key to everything)...but nope.