Fun! I wish WebTorrent had caught on more. I've always thought it had a worthy place in the modern P2P conversation.
In 2020, I messed around with a PoC for what hosting and distributing Linux distros could look like using WebTorrent[1]. The protocol project as a whole has a lovely and brilliant design but has stayed mostly stagnant in recent years. There are only a couple of WebRTC-enabled torrent trackers that have remained active and stable.
I think the issue has generally been that WebTorrent doesn't work enough like the real thing to do its job properly. There are huge BitTorrent-based streaming media networks out there (illicit, sure, but it's a proven technology). If browsers had real torrent clients, we would be having a very different conversation imo.
I don't remember the WebTorrent issue numbers off the top of my head, but there are a number of long-standing issues that seem blocked on WebRTC limitations.
I think we still have the same blocker as we had back when WebTorrent first appeared: browsers cannot be real torrent clients and open connections without some initial signalling for peer discovery, and they cannot open bi-directional, unordered connections between two browsers.
If we could, say, do peer discovery via Bluetooth and open sockets directly from a browser page, we could in theory have local-first websites running in the browser that make P2P connections straight between browsers.
Could you run some kind of hybrid DHT where part of it was WebRTC and part was plain HTTP(S)/WebSocket?
There are some nodes (desktop clients with UPnP, dedicated servers) that can accept browser connections. Those nodes could then help you exchange offers/answers to give you connections with the WebRTC-only ones, and those could in turn facilitate offer/answer exchanges with their peers.
It'd be dog-slow compared to the single-UDP-packet-in, single-UDP-packet-out philosophy of the traditional mainline DHT, but I don't see why the idea couldn't work in principle.
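A minimal sketch of that bridging step, assuming a hypothetical WebSocket-reachable node that simply forwards signalling blobs by peer id. The relay URL, message shape, and peer id are made up; only the RTCPeerConnection/WebSocket calls are standard browser APIs:

```typescript
// Hypothetical bridge: a browser uses a WebSocket-reachable node purely as a
// signalling relay to open a WebRTC data channel to a WebRTC-only peer.
// "wss://bridge.example/signal" and the message format are assumptions.
const relay = new WebSocket("wss://bridge.example/signal");
const pc = new RTCPeerConnection({ iceServers: [{ urls: "stun:stun.l.google.com:19302" }] });
const dht = pc.createDataChannel("dht", { ordered: false }); // unordered, closer to UDP-style DHT traffic

const targetPeerId = "peer-abc"; // placeholder id the relay knows how to route to

pc.onicecandidate = (ev) => {
  if (ev.candidate) {
    relay.send(JSON.stringify({ to: targetPeerId, type: "candidate", candidate: ev.candidate }));
  }
};

relay.onmessage = async (ev) => {
  const msg = JSON.parse(ev.data);
  if (msg.type === "answer") await pc.setRemoteDescription(msg.sdp);     // remote peer accepted
  if (msg.type === "candidate") await pc.addIceCandidate(msg.candidate); // trickle ICE, relayed the same way
};

relay.onopen = async () => {
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  // The reachable node just forwards this to the target peer, which answers via the same path.
  relay.send(JSON.stringify({ to: targetPeerId, type: "offer", sdp: pc.localDescription }));
};

dht.onopen = () => console.log("direct browser-to-browser channel open");
```

Once a few browser peers are connected this way, they can relay offers for each other over their data channels, so only the initial bootstrap needs the reachable nodes.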
I think a much bigger problem is content discovery and update distribution. You can't really do decentralized search because it'd very quickly get Sybil-attacked to death. You'd always need some kind of centralized, trusted content index, but not necessarily one hosted on a centralized server. If you had a reliable way to go from a pubkey to the latest hash signed by that pubkey in a decentralized fashion, plus e.g. a SQLite extension to fetch pages on demand via WebTorrent, that would get you a long way towards solving the problem.
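To make the pubkey-to-latest-hash idea concrete, here's a rough sketch of verifying such a pointer, loosely inspired by BitTorrent's mutable-torrent scheme (BEP 46). The record shape and field names are invented for illustration; only the WebCrypto calls are real APIs:

```typescript
// Illustrative mutable pointer: a record maps a publisher's public key to the
// latest content hash, and peers accept whichever valid record has the highest
// sequence number (last-writer-wins).
interface PointerRecord {
  infoHash: string;       // hash of the latest site snapshot
  seq: number;            // monotonically increasing version counter
  signature: ArrayBuffer; // signature over `${infoHash}:${seq}`
}

// Verify a record against the publisher's ECDSA P-256 public key (SPKI bytes).
async function verifyRecord(pubKeySpki: ArrayBuffer, rec: PointerRecord): Promise<boolean> {
  const key = await crypto.subtle.importKey(
    "spki", pubKeySpki,
    { name: "ECDSA", namedCurve: "P-256" },
    false, ["verify"],
  );
  const payload = new TextEncoder().encode(`${rec.infoHash}:${rec.seq}`);
  return crypto.subtle.verify({ name: "ECDSA", hash: "SHA-256" }, key, rec.signature, payload);
}

// Given records gossiped from several peers, keep the newest one that verifies.
async function latestValid(pubKeySpki: ArrayBuffer, records: PointerRecord[]): Promise<PointerRecord | null> {
  let best: PointerRecord | null = null;
  for (const rec of records) {
    if (await verifyRecord(pubKeySpki, rec) && (!best || rec.seq > best.seq)) best = rec;
  }
  return best;
}
```

The signature pins the record to the publisher, and the sequence number gives untrusted peers a simple rule for which version to serve, without any central index.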
Yes, but it's STUN that sucks. If the software ships with a public (on-the-internet) relay/STUN server for connecting the two clients, it won't work if either isn't connected to the internet, even though the clients could still be on the same network and able to reach each other.
That seems like a non-issue for the purposes of this discussion, though, in terms of user uptake. TikTok and Facebook and other websites aren't exactly focused on serving people on the same network.
> The Direct Sockets API addresses this limitation by enabling Isolated Web Apps (IWAs) to establish direct TCP and UDP connections without a relay server. With IWAs, thanks to additional security measures—such as strict Content Security Policy (CSP) and cross-origin isolation— this API can be safely exposed.
Though there's UPnP's XML-based port mapping, it lacks auth for port-forwarding permissions. There's also IPv6.
Similar: "Breaking the QR Limit: The Discovery of a Serverless WebRTC Protocol – Magarcia" https://news.ycombinator.com/item?id=46829296 re: Quick Share, Wi-Fi Direct, Wi-Fi Aware, BLE Beacons, BSSIDs and the Geolocation API
I can't seem to find any mentions of this online from more than a week ago, and not much commentary either; mostly stuff that smells like advertising/astroturfing. Hmm...
What a pity that although WebTorrent support was merged into the libtorrent master branch years ago, it still hasn't been merged into the stable branch, so it doesn't work out of the box in clients like qBittorrent.
If it was, would it mean that qBittorrent would share with web clients by default? My understanding was that it's not the same protocol, so I'm guessing that a client like qBittorrent would have to choose to "bridge" between both protocols, right?
This is cool - I actually worked on something similar way back in the day: https://github.com/tom-james-watson/wtp-ext. It avoided the need to have any kind of intermediary website entirely.
The cool thing was it worked at the browser level using experimental libdweb support, though that has unfortunately since been abandoned. You could literally load URLs like wtp://tomjwatson.com/blog directly in your browser.
I think one of the values of (what appear to be) AI-generated projects like this is that they can make me aware of underlying technology I might not have heard about; for example, WebTorrent: https://webtorrent.io/faq
Pretty cool! Not sure what this offers over WebTorrent itself, but I was happy to learn about its existence.
I'm planning to eventually launch an open-source platform with the same name (peerweb.com) that I hope will be vastly more usable, with a distributed anti-abuse protocol, automatic prioritization of asset distribution for highly requested files, streaming UGC APIs (e.g. start uploading a video and immediately get a working shareable link before the upload completes), proper integration with site URLs (no ugly UUIDs etc. visible or required in your site URLs), and adjustable latency thresholds to fail over to normal CDNs whenever peers take too long to respond.
I put the project on hiatus years ago, but I'm starting it back up soon! My project is not vibe-coded and has so far been manually architected with deep consideration for both user and site-owner expectations in the web ecosystem.
Well this is supposed to load a website in the browser like a "normal" website (doesn't work for me, stuck on "Connecting to peers...").
Just using a torrent client means you have to download the website locally and then open it in your browser. Most people wouldn't do that.
If it actually worked, I could certainly see the value prop of not making users download a separate program. Generally, downloading a separate program is a pretty big ask.
I built something similar a while back, Distribyted Gate: it turns any magnet link into a browsable webpage.
The key difference is the approach: it uses a Service Worker as an embedded HTTP server in the browser. This means files are loaded on-demand rather than requiring full downloads upfront. The SW intercepts fetch requests and streams chunks directly from the torrent swarm.
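Roughly, the interception side looks like a standard Service Worker fetch handler; the torrent-facing helpers below (`lookupTorrentFile`, `streamFromSwarm`) are placeholders for whatever the engine actually exposes, not Distribyted Gate's real API:

```typescript
declare const self: ServiceWorkerGlobalScope;
// Placeholders for the torrent engine: resolve a path to a file entry, and
// stream its pieces from the swarm as they arrive.
declare function lookupTorrentFile(path: string): Promise<{ mimeType: string; length: number } | null>;
declare function streamFromSwarm(file: { length: number }): ReadableStream<Uint8Array>;

self.addEventListener("fetch", (event) => {
  const url = new URL(event.request.url);
  if (!url.pathname.startsWith("/t/")) return; // let non-torrent requests hit the network as usual

  event.respondWith(
    (async () => {
      const file = await lookupTorrentFile(url.pathname);
      if (!file) return new Response("Not found in torrent", { status: 404 });
      // Respond with a stream, so the page can start rendering before the file finishes downloading.
      return new Response(streamFromSwarm(file), {
        headers: { "Content-Type": file.mimeType, "Content-Length": String(file.length) },
      });
    })()
  );
});
```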
> XSS Protection - All HTML sanitized with DOMPurify
> Malicious Code Removal - Dangerous tags and attributes filtered
> Sandboxed Execution - Sites run in isolated iframe environment
I don't think that really makes sense. You probably just want the iframe sandbox rather than removing all JS. Or, ideally, put the torrent hash in the subdomain to use the same-origin policy.
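Something like this, assuming the host can serve each torrent from its own subdomain (the domain below is made up); the browser's same-origin policy then does the isolation instead of trying to strip scripts:

```typescript
// Mount a torrent-hosted site in an iframe under its own origin.
// "peers.example" is a hypothetical wildcard domain that maps
// <infohash>.peers.example to that torrent's content.
function mountSite(infoHash: string, container: HTMLElement): HTMLIFrameElement {
  const frame = document.createElement("iframe");
  frame.src = `https://${infoHash}.peers.example/index.html`;
  // Scripts may run, but only inside that site's own (per-hash) origin,
  // so they can't reach the embedding page or other hosted sites.
  frame.sandbox.add("allow-scripts", "allow-same-origin");
  container.appendChild(frame);
  return frame;
}
```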
I wonder if these colors are a kind of watermark that's hardcoded in the system instructions. Almost all slopware made using Claude has the same color palette. So much for a random token generator being this consistent.
Yep, and I refuse to use sites that look like this. Lovable-built frontends/landing pages have a similar feel. Instant loss of trust and of any desire to try it out.
It's probably more of a "me" problem, but I'm sure there are plenty of others who share my sentiment. It doesn't really have anything to do with it being familiar; familiar can be good. What I'm talking about is a familiar ugliness and lack of intention.
The Stripe or Shopify checkout is familiar, but it only became familiar because it was well designed and people wanted to keep using it.
Also, when it's obvious someone used an LLM, it bleeds into my overall opinion of the product, whether the product is good or not. I assume less effort was put into the project, which is probably a fair assumption.
Ask any modern (post-GPT-2) LLM for a random color/name/city a few dozen times, and you'll see it's not that random. You can influence this with a prompt, obviously, but if the prompt stays the same each time, the output is always very similar despite the existence of thousands of valid alternatives. Which is the case for any vibe-coded thing that doesn't specify a color palette, in particular.
This effect is largely responsible for slop (as in annoying stereotypes). It's fixable in principle, but there's very little research and I don't see the big AI shops caring enough.
Before LLMs became big, I used emojis in my PRs and merge requests for fun and to break up the monotony a bit. Now I avoid them, lest I be accused of being a bot.
I think serving video is a particularly interesting use of WebTorrent. It would be good if you could add this as a front end to make sites basically DDoS-proof: you host a regular site, but a JS front end shifts more of the hosting to P2P as traffic grows.
I think it is very difficult (and dangerous to the host) to serve user-uploaded videos at scale, particularly from a moderation standpoint. The problem is even worse if everyone is anonymous. There is a reason YouTube has such a monopoly on personal video hosting. Maybe developments in AI moderation will make it more palatable in the future.
The "host" is the user in this case. Every user that watches the video, shares the video. Given that discovery doesn't appear to be a part of this platform, any links would undoubtedly be shared "peer-to-peer" as well, so if you aren't looking at illegal things and don't have friends sending you illegal things to watch, it's perfectly safe.
What I'm suggesting is more in the context of self-hosting: a JS wrapper that would make it easy to host a video with plain HTML while preventing bandwidth issues.
I like PeerTube a lot, and I didn't realize until just now that it has a form of P2P distribution that uses WebRTC. But it would be great to be able to do that with a static site, without deploying a whole framework. Just a simple JS wrapper that could sit on top of a <video> element would be amazing.
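For what it's worth, the browser WebTorrent client already gets pretty close to that wrapper; a minimal sketch (the magnet URI and element selector are placeholders):

```typescript
import WebTorrent from "webtorrent";

const client = new WebTorrent();
const magnetURI = "magnet:?xt=urn:btih:..."; // placeholder: magnet link for your video

client.add(magnetURI, (torrent) => {
  // Assume the largest file in the torrent is the video itself.
  const file = torrent.files.reduce((a, b) => (a.length > b.length ? a : b));
  // Stream the file into an existing <video id="player"> element as pieces arrive.
  file.renderTo("video#player", { autoplay: false }, (err) => {
    if (err) console.error("P2P playback failed; keep the normal <video> src as fallback:", err);
  });
});
```

The part such a wrapper would still need to handle is falling back to the original src (or a regular CDN) when no peers show up.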
Cool. Some people complained about broken demos; I uploaded the mdwiki.info [1] website unaltered and it seems to work fine [0]. MDwiki is a single .html file that fetches custom Markdown via AJAX relative to the HTML file and renders it via JavaScript.
The idea is to host it on GitHub, and people send changes to the content via pull requests (vs. editing like on Wikipedia). There is no backend, just plain files.
In high school, an acquaintance of mine made the website "e-imagesite.com" [1]. It was a very easy-to-use image-uploading site (honestly less irritating than ImageShack, and it predated Imgur). It was just hosted on HostGator, I believe, written in PHP and using jQuery.
I believe he had to eventually shut it down because people kept uploading horrifying stuff to it, and it was never even that popular. Child porn and bestiality were constantly being uploaded and I don't think he liked having to constantly report stuff to the FBI.
After building a proper comment section for my blog (including tripcodes!), I've thought about making my own "chan" site, since I think that could be fun, but I am completely terrified of people uploading horrible stuff that I would be forced to sift through and moderate pretty frequently. User submissions open up a huge legal can of worms and I am not sure that's a path that I'm willing to commit myself going down.
When there's strong anonymity, I suspect that this problem could be even worse.
It's a little depressing, because decentralized and distributed computing is one of the most interesting parts of computer science to me, but it feels like whenever I mention anything about it, people immediately assume piracy or illicit material.
Yeah, I’m fully in support of a decentralised web but the internet is old enough now that being naive about this stuff has become equivalent to being maliciously incompetent. Without designing for things like community or self-governance and moderation, you’re designing for trouble. Thinking about ways to healthily cultivate a peer-to-peer web doesn’t make someone a Nazi, it makes them a responsible member of a community.
You can't stop someone from verbally describing certain objectionable material; therefore we should regulate the medium through which sound travels and suck up all the oxygen on the planet. It's the only way to save the children.
Hi, Omodaka here. Thanks for checking out PeerWeb. I forgot to turn on the client that serves the demos; it should be better now ;)
This is meant to work with the PeerWeb App, a more secure, stripped-down desktop torrent client that you can use to share your websites and host them through peerweb.lol. I still haven't released the desktop client, but I might.
The point of this is to let everyone host their content without needing servers, and it's been a great learning experience. Security is a very big caveat here, so this is in no way a final, secure version.
I'm glad to see this was not unexpectedly fast to load. Wouldn't want to upset those distributed expectations! I wonder if there's a business model in selling speed on a robust network that is on average too slow. Is there any way to incentivize more nodes through micropayments distributed from people who pay for their site to be served faster?
Ultimately, I guess the distributed web has so far been felled by economics.
I have been intrigued by WebTorrent for a while. From my experience downloading Linux distros over BitTorrent, I know that it works really well when many people contribute.
But I have never had a successful experience with WebTorrent, presumably because it is less popular and I have never found a use case where enough peers were sharing?
I wish stuff like this were more like double-click, agree, and use. They always make it complicated, to the point where you're spending your time trying to figure out whether you should keep spending more time on it.
I tried this; the "Functionality test page" is stuck on "Loading peer web site... connecting to peers". I can't load any website from this.
Not only did it take > 5 seconds to load a page, images then loaded progressively, at most two at a time, over the next minute or so, and only if there were no errors during transfer!
In the past, ZeroNet was performant enough to realistically share websites, but it's abandoned (ZeroNet Conservancy exists, but there seem to be no active peers). This lets a client use a website without installing anything, which is nice, but how to get things visible initially is, well... a human challenge...
I feel like if it were combined with federated caching servers it would actually work. Then you would have persistence, and the P2P part would help take load off popular content. There are now P2P databases that seem to operate this way, combining the best of both worlds.
P2P storage, as in torrents or IPFS or whatever, is the part we've kinda solved already. Serving/searching/addressing without (centralized) DNS is still missing for an (urgently needed) P2P, censorship-resistant internet. Unfortunately, this guy just uses some buzzwords to offer nothing new: why would I share links to that site instead of sharing torrent magnet links?
One issue I've had with IPFS is that there's nothing baked into the protocol to maintain peer health, which really limits the ability to keep the swarm connected and healthy.
IPFS [1] unfortunately requires a gateway (whether remote or running locally). If you can use content identifiers that are supported by web primitives, you get the distributed nature without the IPFS scaffolding. Content is versioned by hash, although I haven't looked to see whether mutable torrents [2] [3] are used in this implementation. Searching distributed hash tables for torrent metadata, cryptographically signed by the publisher, remains a requirement imho.
BitTorrent, in my experience, "just works," whether you're relying on a torrent server or a magnet link to join a swarm and retrieve data. So this is an interesting experiment in the IPFS/torrent/Filecoin distributed-content space.
It's worse than just illegal content. Copyright doesn't allow you to redistribute anything without the permission of the copyright holder. IPFS, however, has no means to track the author or the license of content.
That means even distributing a piece of perfectly legal open-source software becomes illegal. Unlike a tarball, or even a torrent, where you can bundle the content and its license, IPFS allows addressing individual files or blocks, thus stripping the license from the content, which most licenses forbid. This doesn't even require an intentional action by the user; it happens automatically when partial content lands in your cache.
1. https://github.com/leoherzog/LinuxExchange