Awesome article! Interesting related work is in [1], where we used DNS TTLs as a covert channel for passing data, without needing to control the domain(s) being used. Through the development of that covert channel, we found a variety of idiosyncrasies in the client-side DNS infrastructure and discussed them in [2]. Some devices will report an erroneously high TTL, some will unnecessarily shorten the TTL, some represent entire clusters of DNS resolvers with interesting properties, and so on. Based on your work, it appears that over the past five years, the number of open resolvers has dropped dramatically, from ~30M to ~3M.
Your email response really is indicative of some of the folks that get cranky when you send them packets :)
As for DNS, djbdns can store arbitrary bytes in RRs (e.g., TXT) using octal escapes. For example, a modified dnstxt can print formatted text stored in TXT records, with linefeeds, etc.
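For anyone curious, a rough sketch of what that looks like in a tinydns data file (the name, content and TTL are made up); \012 is an octal escape for a linefeed inside the TXT payload:

'notes.example.com:first line\012second line\012:300

Running dnstxt notes.example.com then prints the stored bytes, so the embedded linefeeds should come out as real line breaks (or however a modified client chooses to render them).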
Super fun article. I also like to see a "real" implementation of crazy ideas like this.
Can anyone confirm whether the Microsoft DNS servers default to caching an unlimited amount of data? The article claims "Unlimited??" as the default for these systems. Eyeballing the pie chart, it looks like ~20% of the servers are running Microsoft, which could provide quite a lot of storage.
Please don't. There's a perfectly lovely naturally emerged digital life form living in the spaces in between on the Internet, and this would threaten their habitat. Sure they haven't figured out we exist yet, and are certainly a long way from being able to communicate with us, but they seem kind to one another and I'd hate to see their evolution displaced.
Even "unlimited" is bounded by memory/storage, probably with an LRU eviction scheme. So unless your stored data is hot, or their storage is very large, it might not stay around long.
An enhancement of this technique could be used on one’s own private network of DNS resolvers for the specific purpose of acting like a highly available directory of private cloud nodes, storing the following information:
host:service:port:protocol
encoded in one DNS TXT record per service.
This would kind of be like a mashup of Apple Bonjour and this technique.
The big question is how long to cache the information in such a setup, assuming the cloud itself is highly unreliable, so as to make the entire thing extremely fault tolerant?
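To make the idea concrete, here is a rough sketch of what such records might look like (all names, addresses, and TTLs are made up) and how a node would look one up:

_postgres._tcp.nodes.internal.  30  IN  TXT  "db1.internal:postgres:5432:tcp"
_api._tcp.nodes.internal.       30  IN  TXT  "app1.internal:api:8443:tcp"

$ dig +short TXT _postgres._tcp.nodes.internal @10.0.0.53
"db1.internal:postgres:5432:tcp"

The TTL question above is basically the usual trade-off: a short TTL (like the 30s here) picks up node failures quickly but generates more queries, while a longer TTL rides out resolver outages at the cost of serving stale entries longer.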
While an interesting use, abusing DNS in a similar way has been a long-known (15-year-old) security vulnerability. For example, OzymanDNS. Even then, that was just one of the first published exploits. People had been performing DNS tunneling for some time.
There are detectors of DNS abuse that I imagine the people who actually would store files in DNS would not want pointed at their files.
Yes! Reading the description of DNSFS, I was sure Dan Kaminsky had done something like this years ago, but couldn't track it down - Dan Kaminsky has done a lot of things with DNS.
Oh I'm with you, you've gotta put other controls in place. It's still in my basic ACL for every network, because it's one of the first things users will do to circumvent controls.
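As a rough sketch of that kind of ACL on a gateway (the resolver address 10.0.0.53 is made up, and your firewall may well not be iptables), forcing all DNS through the internal resolver looks something like:

# allow DNS only to the internal resolver, drop everything else
iptables -A FORWARD -p udp --dport 53 -d 10.0.0.53 -j ACCEPT
iptables -A FORWARD -p tcp --dport 53 -d 10.0.0.53 -j ACCEPT
iptables -A FORWARD -p udp --dport 53 -j DROP
iptables -A FORWARD -p tcp --dport 53 -j DROP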
HTTP/1.1 servers need the host name in the request, so that a single IP can host multiple domains that resolve to it. If you just go to the IP address, you get an error or a default host. It should work fine with most other protocols, though.
Adding to what others say here: if you have/know the ip address, you probably also know the host name. There's nothing magical about:
# from memory, syntax might not quite work
telnet 1.2.3.4 80
GET / HTTP/1.1
Host: example.com
# (followed by a blank line to end the request headers)
Which is indeed why you can put the IP and host name(s) in /etc/hosts - and, without other network-level blocks, browsers etc. will just work.
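For instance, a hypothetical /etc/hosts entry like this (address and names made up) makes the browser resolve the name locally and still send the right Host header:

1.2.3.4   example.com www.example.com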
With HTTP/1.0, blocking/filtering IPs was enough; with 1.1 you need a proxy. With TLS/SSL you have the choice between (having the capability to) decrypt everything or filter nothing. (Obviously IP-level filtering works, but it's a little crude in an HTTP/1.1 world. Ditto for HTTP/2 etc.)
Just a tiny correction: RIPE Atlas' reliability tags (e.g., "-stable-Xd") have nothing to do with the probe "changing the public IP address once a day". Those filters simply measure the probe's uptime over different time windows.
In fact, the "-stable-1d" tag you mentioned would be true even for probes that have been down "up to 2h" over the last day.
You can use the dig utility to see if a DNS server is recursive. Just do the scan in two steps. One major port scan using masscan, netscan, etc., then a smaller scan of the IPs with port 53 open to see if they are recursive or not. You'll see this in dig's output if the server is not recursive:
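Something along these lines (the server address is just an example):

$ dig @203.0.113.5 example.com A
;; WARNING: recursion requested but not available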
[1]: http://research.tom.callahan.us/pubs/icsi-tr-12-002.pdf
[2]: http://research.tom.callahan.us/pubs/imc029-schompAemb.pdf