I'm building a decentralized AI marketplace, and guess what? This piece of software is the backbone of our off-chain file transfer protocol. It mostly just works; however, its seeder (bittorrent-tracker) depends on node-webrtc, which is not an official Node package for WebRTC, and it has some weird issues. For example, I seeded a torrent through bittorrent-tracker and used chrome://webrtc-internals on the client side (Chrome) to debug the RTC connections. For the first time in my life, I coined a term: connection leakage. A bug in the WebRTC handshake leads to far too many connections (300+) even for 1 KB torrents.
I hope it gets solved very soon.
I also attach my Node profiling output. Looking for some advice if you have any...
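For reference, the seeder is wired up roughly like this (a sketch, assuming the `webtorrent` and `wrtc` packages; the file path and tracker URL are made up):

```js
// Node-side seeder: webtorrent has no built-in WebRTC, so node-webrtc
// (wrtc) is injected so that browser peers can connect.
const WebTorrent = require('webtorrent')
const wrtc = require('wrtc')

const client = new WebTorrent({
  tracker: { wrtc } // hand the WebRTC implementation to the tracker client
})

// Seed a file and announce over WebSockets so web peers can discover it.
client.seed('./payload.bin', {
  announce: ['wss://tracker.example.com'] // illustrative tracker URL
}, torrent => {
  console.log('seeding', torrent.infoHash)
  // The leak shows up here: peer counts climb into the hundreds
  // even for tiny torrents as handshakes pile up.
  setInterval(() => console.log('peers:', torrent.numPeers), 5000)
})
```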
I originally went the same route as you, and found that https://github.com/pion/webrtc is probably the best package out there for webrtc. I learned go just for it, and it paid off tenfold. Less memory, more connections, lower latency.
Same (I tried Node-based WebRTC packages), same (I found pion to be head and shoulders above everything else), same (I learned Go just so I could work with it).
Its ability to be simultaneously low-level and easy to work with is truly something else.
If you run your tracker on Linux and an info hash whitelist approach would work for your use case, it might be worthwhile to have a look at aquatic_ws [0]. It relies on tungstenite [1] for websockets and achieves around 20x the throughput of the reference implementation when running with four threads.
I don’t know of anybody who runs it in production, but its sister implementation (BitTorrent over UDP) handles 80k responses per second with lower load averages than opentracker.
Lots of other great WebRTC implementations exist too. Check out https://github.com/sipsorcery/webrtc-echoes for all the other implementations if you're open to non-Node.js options.
Hey, I'm one of the few remaining active WebTorrent devs, and I've seen this issue myself, to the point where it's simply easier to use NW.js or Electron with Node integration and rely on Chromium's WebRTC implementation instead of the wrtc package. If you ever manage to find a better replacement for wrtc for Node, tell me, and I'll make sure it gets implemented ;p
I have this problem with WebRTC too. I'm operating from the native side with anacrolix/torrent and pion/webrtc, so I wonder whether it's really the JS implementation that's at fault.
Besides being an amazing technical achievement, I find this very interesting legally, as it further blurs the line between passively viewing content hosted somewhere and redistributing/actively sharing content.
Have there already been cases of websites making their visitors unwitting peers, similar to e.g. JavaScript cryptocurrency mining?
Well spotted. In other words, WebRTC lets a site owner have their visitors distribute content directly to others without their knowledge or consent, basically making it impossible to argue that the distributor has intent to do so in the general case.
That said, it was already highly questionable as a basis for legal action, since a torrent client can run on any machine connected to a home Wi-Fi network, including guests' devices and compromised machines. It may have been enough for a "hey, you're breaking our ToS, please check your devices" from the ISP, but it should never have amounted to more than that (like threatening letters from rights-holders).
Do countries other than Germany also have dedicated, famous [1] legal agencies sending out cease-and-desist letters and suing for damages (~900€) as soon as you accidentally seed a copyrighted torrent for even a few seconds?
In the US, the ESA, RIAA, and MPA hire several different third parties (some of which they have very close ties to) to do that kind of work on their behalf, including companies like Rightscorp, Vobile, OpSec, and PML Process Management.
Mostly they just threaten people with lawsuits in order to get settlement money from folks who may even be fully innocent, but who cannot afford to fight against an international media empire in court. Lawsuits against random users are less common than they were, but still happen. The bigger issue is that the media industry has been pressuring ISPs to permanently disconnect users after nothing more than repeated unproven accusations of copyright infringement.
ISPs who fail to close these accounts risk being fined billions of dollars, so anyone in the US who has been getting DMCA notices from their ISP should take steps to prevent that or else they're risking getting disconnected which is a problem for those without many options for high speed internet access.
Permanently cutting people off from the internet over nothing but accusations seems very extreme, but the media industry has already taken several ISPs to court for not doing it and so far they are winning.
Along those lines: Comcast/Xfinity, for example, has a default-ish setting on their routers that leaves an open guest network available. If I connect to my neighbor's guest network and download Game of Thrones, will my neighbor get the nastygram, or is the system smart enough to know it was the guest network? Should I connect to my own guest network (not that I have it enabled) if I want to do illegal things?
The xfinitywifi network requires you log your device into a Comcast account. It's not strictly a "guest network" for arbitrary devices, it's more of a roaming access to your own account.
Pretty sure they can nail you even harder for piracy on this than on your own network where it might be someone else's device.
Not only does it require login, but it is not like they are giving you full bandwidth. Good luck to anyone trying to do a lot of high-bandwidth downloads over an Xfinity guest network.
Every one I've ever connected to gave you free access for a new account for some limited period of time. But that was like 5+ years ago so maybe they've changed it.
Usually, these CPE guest networks have their traffic routed via an IP different from the main customer one, and have their own bandwidth/QoS class as well.
I totally agree; that's how it should be. In reality, right now there are countries in which you get cease-and-desist letters and a ~800 euro fine for even a short amount of seeding. This is cool technology, but it also enables more automation in this shady industry, and the law will take years and years to catch up.
That's why for a decade already the law around here punishes you not so much for seeding, but specifically for failing to secure your Internet connection to prevent seeding (regardless of who actually seeded).
Intent is a necessary element for that charge, at least on the US Federal level. But in any case, completely accidental possession is a defense, meaning, if you have something in your possession without even knowing that it's in your possession, that can be presented as a defense to charges. That would likely be the same when it comes to being an unwitting distributor of copyrighted material via a stealth torrent. Similarly we don't prosecute individuals whose computers are infected with viruses that propagate (and thus illegally access and trespass onto other systems) without the person's knowledge.
I am on a 200GB capped residential LTE internet connection. Upload is included in that data cap. If this was surreptitiously deployed that would cost me money and generally be not cool.
While there are definitely some terrible ISPs out there, not every instance of someone not being able to use as much bandwidth as they want is a great injustice.
Sometimes it’s just sane network management. LTE networks in particular are limited in capacity. There are a bunch of different ways to limit use with different tradeoffs, and bandwidth caps actually strike a good balance between effectiveness and predictability. I have a lot of experience thinking about and implementing bandwidth pricing models, and I don’t have time to list all the options and their tradeoffs here, but you can suggest one you think is better if you want. Bandwidth caps cause users to limit their use, while still allowing a user to use the network heavily without penalty when they need to.
The real problem here is that OP doesn’t have an intuitive UX for limiting upload bandwidth to something he is comfortable with. Webtorrent then breaks his assumption that web browsing will not incur upload bandwidth.
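For what it's worth, the `webtorrent` client does expose throttling options that an app could surface in exactly that kind of UX; I'm recalling the option names from memory, so double-check the docs:

```js
import WebTorrent from 'webtorrent'

// Cap seeding so an embedded torrent client can't silently eat
// a metered connection; values are bytes per second.
const client = new WebTorrent({
  uploadLimit: 50 * 1024, // at most ~50 KB/s of upload
  downloadLimit: -1       // leave downloads unthrottled
})
```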
It is in general dishonest to consume someone else's resources without their consent or prior knowledge. I can see how this is different from using someone else's computer as part of your botnet (because that act is itself illegal), but how is it different from, say, mining bitcoin on someone else's computer because they visited your website?
By "this" I mean the case in which a visitor's bandwidth is used to redistribute a file without their permission.
The solution to this is the same as it is for the crypto miners. Disable javascript and WebRTC by default and you no longer have to worry about every website you visit using your computer for whatever they want.
New tech is really nice and all, but I've never been a big fan of letting everyone and anyone run whatever code they want on my systems. If I actually need a website to do something like this I'll whitelist it or even set up a browser dedicated to that task, but at this point letting every website do whatever it wants is just dangerous.
Terrible internet providers are absolutely the norm across all of America. Except for a handful of small regional ISPs and municipal ISPs, where they haven't been outlawed.
As far as I know, Safari is doing something along these lines already (I believe WebRTC connections only work there after requesting camera or microphone permissions).
An explicit permission prompt would be even nicer ("this site is requesting to establish direct connections" or similar).
Have you ever used a limited data plan yourself or talked to someone who does about their web and app use patterns? People get attuned to which sites are wasting their data very quickly.
> Have there already been cases of websites making their visitors unwitting peers, similar to e.g. JavaScript cryptocurrency mining?
I haven't seen it yet in websites, but I have seen video streaming apps using torrents on the backend without informing users about it. It's caused people who thought they were legally (or at least 'safely') streaming shows and movies for free using something they found in the app store to be surprised when they got hit with DMCA notices from their ISP.
I did a PoC in college (~2008!) where I coded a distributed rainbow table generator in JS. It would assign each client a block of hashes to compute, and the results got sent back to the server. We showed that a single web server distributing the work to a naive JS implementation (long before Web Workers and WASM) could outpace a fairly decent local C implementation (we were undergrads, so definitely not optimization experts).
Was fun until sites actually got on board with salting.
I know there has been unwitting peer-to-peer livestreaming of video, but that is not exactly the same, as it did not involve file storage.
I think the BBC, among others, has experimented with that.
There is https://arc.io/ which is a P2P CDN. Sites get paid a share of the profits based on the amount of data provided by their users. Seems like a cool alternative to ads IMO, but most people that hate ads also hate having their browser "hijacked" so idk if it'll take off.
P2P connections over the web are usually not possible, due to typical consumer router configurations and bad decisions in the design of the WebRTC protocol.
The vast majority of these P2P web projects, including WebTorrent, are actually using proxy servers to create the illusion of P2P connectivity (specifically, they use TURN servers to relay traffic).
Reading your stackoverflow link my guess is that you aren't using STUN. A P2P connection can't be established without a NAT hole punch.
Also, if possible I would avoid the terms `Full-cone NAT` and `Symmetric NAT`; they don't do a good job of describing what is actually happening. NAT mapping/NAT filtering is the best way to describe it. I wrote a little bit about it here [0]. To see what type of NAT you have, try stun-nat-behaviour [1].
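To see whether STUN is actually doing anything, you can watch the candidates the browser gathers (a minimal sketch; the Google STUN hostname is just a commonly used public server):

```js
// Ask a STUN server for our public address so ICE can attempt a hole punch.
const pc = new RTCPeerConnection({
  iceServers: [{ urls: 'stun:stun.l.google.com:19302' }]
})

pc.onicecandidate = ({ candidate }) => {
  if (!candidate) return
  // 'host' = local address, 'srflx' = server-reflexive (STUN-discovered),
  // 'relay' = TURN. If you never see srflx candidates, STUN isn't working
  // and a direct P2P connection is unlikely to be established.
  console.log(candidate.type, candidate.candidate)
}

// Creating a data channel and an offer kicks off candidate gathering.
pc.createDataChannel('probe')
pc.createOffer().then(offer => pc.setLocalDescription(offer))
```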
Thank you for this. I guess P2P connectivity is more often possible than I thought! For the record, I have tried to form P2P connections with multiple different devices and multiple different network conditions, and although I have been able to achieve connectivity with Python, I have not been able to achieve connectivity with WebRTC. So I have reason to be suspicious about claims regarding web P2P connectivity.
> Reading your stackoverflow link my guess is that you aren't using STUN. A P2P connection can't be established without a NAT hole punch.
I described NAT hole punching in my Stack Overflow question, where I asked why WebRTC wasn't doing it. I was using a STUN server. (Edit: I mixed up TURN and STUN servers and edited this post to fix that. My Stack Overflow question has clear references to how I was trying to achieve connectivity; it has been a while, so not everything is in clear memory for me.)
STUN servers are only required for NAT punching; they don’t relay traffic. Only TURN servers are used as relays, in the event that a direct connection can’t be established.
I had a mistake in my post where I confused TURN and STUN. It's been a while since I actually tried to do this stuff. The stackoverflow post has details on how I tried to achieve connectivity, if you check that you will see that yes I used STUN.
But it doesn't matter, since STUN is just used for setting up the connection; it doesn't relay traffic. WebRTC always needs a coordination server to exchange session descriptions anyway, so using STUN doesn't make it any less P2P.
Well, you usually only need a STUN server, which does the NAT hole punching but transfers no data; the data itself can go directly P2P. The problem is that it doesn't work for two people on mobile data, where a full TURN relay is needed. I've never completely understood why exactly it's broken in only that case, but it may have something to do with devices not being directly visible to each other for security reasons.
Or mobile operators being dicks because they can, that's frankly more likely.
> The problem is that it doesn't work for two people on mobile data where a full TURN is needed. I've never completely understood why exactly it's broken in only that case
This is not the only case where it doesn't work. One of the experiments I tried was desktop computer on fixed broadband connecting to another desktop computer on fixed broadband.
Hmm I suppose there may be edge cases where it doesn't work for home networks, but I doubt it's more than a few percent of users. For mobile data it's broken for 100% of devices.
> For mobile data it's broken for 100% of devices.
No, it's not. I've run my own webrtc signaling and STUN servers, and trying between multiple different mobile operators and devices I can establish a connection without TURN in most cases. Not 100% but in most cases.
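For what it's worth, the usual resilient setup is to list your STUN server plus a TURN relay and let ICE fall back on its own (hostnames and credentials below are placeholders):

```js
const pc = new RTCPeerConnection({
  iceServers: [
    // Tried first: cheap, no relaying, works whenever hole punching succeeds.
    { urls: 'stun:stun.example.com:3478' },
    // Fallback: relays traffic when both ends are behind unfriendly NAT
    // (e.g. two peers on carrier-grade mobile NAT).
    {
      urls: 'turn:turn.example.com:3478',
      username: 'user',      // placeholder credentials
      credential: 'secret'
    }
  ]
})
```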
WebTorrent is obviously well suited for p2p file distribution, but using a minimal subset of the protocol also provides a nice hack for easily bootstrapping peer connections between web app users. Piggybacking on public mediums already designed to do peer exchange can let you rapidly prototype a WebRTC project without the hassle of running your own server anywhere.
It also allows encrypting your peers' session descriptions to hide them from the torrent server. All of this is of course experimental and I'm very open to feedback.
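In practice, joining a tracker-bootstrapped room boils down to something like this (a rough sketch; the app ID and room name are arbitrary, and exact import paths may have drifted between versions):

```js
// BitTorrent-tracker strategy; in newer versions it lives at 'trystero/torrent'.
import { joinRoom } from 'trystero'

// Peers that announce the same app ID and room name discover each
// other via public WebTorrent trackers, then talk directly over WebRTC.
const room = joinRoom({ appId: 'my-demo-app' }, 'lobby')

// makeAction returns a send/receive pair for arbitrary named data.
const [sendChat, onChat] = room.makeAction('chat')

room.onPeerJoin(peerId => sendChat('hello!', peerId))
onChat((message, peerId) => console.log(peerId, 'says', message))
```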
Thank you so much for your work on Trystero! It’s one of the most amazing projects I’ve ever found. I’ve used it to great success on multiple projects and I’m a big fan. I don’t understand how Trystero doesn’t have thousands of GitHub stars by now.
I've always been very, very nervous about the privacy implications of this technology. Creating a persistent, public record of every IP that has ever accessed a particular video is a HUGE step backwards compared to current video/file distribution systems, and I don't think end-users correctly appreciate the trade-offs yet of making that sort of information public and joinable across domains / different activity (i.e. "give me a list of all users who view LGBT content and also watch online instructional videos for my Christian school").
I think the good ol’ saying applies here as well: security through obscurity is not security. Here, as in other areas, it is best to protect users in such a way that if a Christian school does a search like that and punishes students based on what they do in their free time (or worse, their gender identity or sexual orientation), there is an easy course of action through the legal system for the student to get justice (and for the school admins to be punished).
> if a christian school does a search like that and punishes students based on what they do in their free time
As an alumnus of a Christian school, I can confirm that this is legal. [0]
You'll get expelled if you're in a same-sex relationship, or displaying behaviors of such a relationship (e.g. holding hands). [1]
> Harding University holds to the biblical principle that God instituted marriage as a relationship between one man and one woman and that gender identity is given by God and revealed in one’s birth sex. Students are prohibited from being married to or dating a person of the same sex. Neither may students engage in behavior suggesting a romantic relationship with a person of the same sex. The University further holds to the biblical principle that sexual relationships outside the context of marriage are unacceptable to God and immoral. Sexual immorality in any form will result in suspension from the University.
I think they get a bit of leeway by both being a private university and a religious institution. I'm not a lawyer though, so I have no idea.
If you want a real fun trip down Christian college lane you should check out PCC [0]. One of my friends went there for a semester or two and it sounded... horrible.
> PCC policies govern many aspects of the students' lives, including dress, hairstyles, cleanliness of residence hall rooms, styles of music, borrowing, off-campus employment, and Internet access.[30] For example, "All students are expected to dress modestly, in conservative fashions and . . . men are not to wear effeminate hairstyles or apparel."[31]
> PCC also prohibits physical contact and interaction between unwed members of the opposite sex. For example, a chaperone and "day-pass" is required for a "mixed group" for students under the age of 23.[32] Students over the age of 23 are not required to have a chaperone on a date, but cannot go to a beach or a park after dark and cannot "visit the home of an unmarried person of the opposite gender."[33]
> Most stairwells, elevators, and parking lots on campus are segregated by gender.
You shouldn’t rely on security by obscurity, but you also shouldn’t forgo it. At the end of the day, if you have a password hidden somewhere, that’s also security by obscurity.
Hardly. A password is usually hidden in a notebook (or on a local hard drive), which would have to be physically stolen to obtain it, or in a password manager which is (hopefully) encrypted.
I think a better analogy is how different countries handle national identification numbers. Some countries keep a publicly accessible list and encourage people to share their number when doing business; these countries usually have other means of preventing identity theft. Other countries issue identity numbers privately, discouraging or prohibiting their distribution; these countries use obscurity as one of their identity-theft prevention measures. Anecdotally, though, identity theft seems rampant in the latter countries, the ones relying partly on obscurity, but not in the former.
Oh, huh, I first knew about this from Peertube, and I assumed they were the ones who had created it. It seemed like a really clever solution to the problem of "how do I host my own YouTube without needing a massive infrastructure to deliver video?"
Really cool to learn that Webtorrent is a generic API that others can use. There seems to be a lot of good ways it can be used to decentralise the web again.
When exactly do you think the web was "decentralized"? The web the majority has ever interacted with has always lived on someone else's server. Self-hosting has always been rare.
It's weird seeing people talk about some mythical history where the web was somehow decentralized.
Decentralized doesn’t mean that every website is distributed to user-local edge servers; it means that there isn't a single point of failure for the web (that is, the web is decentralized, while individual sites are still, separately, centralized).
The emergence of dominant central vendors for key services reduces that somewhat, of course: AWS, MS, Google, and Cloudflare are each capable of disabling a lot of the web with a single failure, and there are backbone providers that can have similar effects.
This idea ignores that 1) there's always been points of failure for the web and 2) there's reasons the dominant players became dominant.
The web in real terms requires infrastructure for content to be available. Without that infrastructure you've just got dead links. But this has always been the case. There's no golden period to go back to.
The dominant players in the infrastructure business can end service for individual customers but they don't run everything. You can spin up a server and point your DNS records at it and be online if CloudFlare drops your site. They exist because it's cheaper to buy infrastructure in bulk and sublet it but that doesn't stop anyone from making their own server on their own infrastructure.
In the earlier days of the internet, there were no two companies you could pick who, if they had large-scale server issues, would take down a significant portion of the web. Nowadays, if Cloudflare & AWS have a bad day on the same day a lot of businesses may as well just send their employees home, such is their outsized impact on the internet. (And the internet on our lives, of course.)
This happening "for a reason" doesn't change the fact that what is now the case was once not. In 2001, if a particularly well known tech vendor screwed up their DNS, their product website would be down and everyone would shrug and go about their day. In 2022, if a particularly well known tech vendor screws up their DNS, more than 10% of the internet falls over.
The fact that I can spin up a server is true, but doesn't help me when I can't pay my bills online because a very large shopping website has gone down.
> This idea ignores that 1) there's always been points of failure for the web and 2) there's reasons the dominant players became dominant.
It ignores neither. The web (and the underlying internet more generally) are decentralized systems by design, but over time a lot of key functions have drifted toward centralization, because centralization is a lot more convenient and efficient when you aren’t, say, experiencing a major widespread disaster. Unlike the people who made the original design, most users aren’t making decisions with a nuclear exchange targeting critical communication nodes as a significant part of the threat model their resiliency plans must address.
These aren't binary transitions (nor are they entirely monotonic), and there are very good reasons people prefer more centralization for the uses the internet has evolved to fill. But it is a real change over time.
Seems like ad blockers have a new field to look at. I was just thinking about how people would feel about this when they’re on limited or metered bandwidth.
Sounds like an amazing browser feature to meter bandwidth.
Not only does it increase accessibility of the internet for low-bandwidth users (they will use more if they know they won’t blow out their budget), but it will help shame the bloated websites by giving users more visibility into the issue.
Seems like it needs a way for users to decide what they share so that we can just share the content and let the ads 404. Hell maybe we can even pay each other--both for the hosting and for the data work necessary to keep the malware (e.g. ads) out.
I was wondering how it works, since I'm pretty sure WebRTC has some overhead. Turns out that:
> In order to support WebRTC's connection model, we made a few changes to the tracker protocol. Therefore, a browser-based WebTorrent client or "web peer" can only connect to other clients that support WebTorrent/WebRTC.
It's so fulfilling to see WebTorrent still popping up on Hacker News after all these years. I started the project in 2013 and devoted most of my 20s to working on it, ultimately becoming a full-time open source maintainer. I started WebTorrent with the goal of extending the BitTorrent protocol to become more web-friendly, allowing any browser to become a peer in the torrent network. Within less than a year of starting the project, I got WebTorrent fully working (see https://news.ycombinator.com/item?id=8317441). And it worked _well_, beating many native torrent apps in terms of raw download speed and the ability to stream videos within seconds of adding a torrent.
WebTorrent never got as much attention as the cryptocurrency projects selling tokens throughout the mid-2010s, even though WebTorrent _actually worked_, and it had more users than almost all of them :) I was never tempted to add a cryptotoken to WebTorrent, despite many well-meaning friends telling me to do it and cash in. Nonetheless, WebTorrent served as an accessible on-ramp to the world of decentralized tech, along with other projects like Dat (https://dat-ecosystem.org/) and Secure Scuttlebutt (https://scuttlebutt.nz/), playing a role in getting people excited about decentralization.
But WebTorrent is more than a protocol extension to BitTorrent. We also built a popular desktop torrent client, WebTorrent Desktop (https://webtorrent.io/desktop/), which supports powerful features like instant video streaming.
We also built a `webtorrent` JavaScript package (https://socket.dev/npm/package/webtorrent) which implements the full BitTorrent/WebTorrent protocol in JavaScript. This implementation uses TCP, UDP, and/or WebRTC for peer-to-peer transport in any environment – whether Node.js (TCP/UDP), Electron (TCP/UDP/WebRTC), or the web browser (WebRTC). In the browser, the `webtorrent` package uses WebRTC which doesn’t require a browser plugin, extension, or any kind of installation to work. If you’re building a website and want to fetch files from a torrent, you can use `webtorrent` to do that directly client-side, in a decentralized manner. The WebTorrent Workshop (https://webtorrent.github.io/workshop/) is helpful for getting started and teaches you how to download and stream a torrent into an HTML page in just 10 lines of code.
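Something in the spirit of those 10 lines (a sketch; the CDN path is from memory, and newer releases may prefer a streaming API over the long-standing `appendTo` helper):

```html
<script src="https://cdn.jsdelivr.net/npm/webtorrent/webtorrent.min.js"></script>
<div id="player"></div>
<script>
  const client = new WebTorrent()
  // Any torrent whose trackers include wss:// endpoints will work here.
  const magnetURI = 'magnet:?xt=urn:btih:...' // placeholder
  client.add(magnetURI, torrent => {
    const file = torrent.files.find(f => f.name.endsWith('.mp4'))
    file.appendTo('#player') // video starts playing as pieces arrive
  })
</script>
```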
Now that WebTorrent is fully supported in nearly all the most popular torrent clients, including uTorrent, dare I say that we succeeded?
It's been a long and winding journey, but I'm glad to have played a role in making WebTorrent happen. Huge shoutouts to all the open source contributors to WebTorrent over the years, but especially Diego R Baquero and Alex Morais who were critical to WebTorrent's success.
If you're curious what I'm up to now... I'm building Socket (https://socket.dev) with an awesome team of open source folks. And there's actually a WebTorrent connection, too! Before Socket, we built an end-to-end encrypted file transfer app, Wormhole (https://wormhole.app), using WebTorrent under-the-hood (Show HN thread: https://news.ycombinator.com/item?id=26666142). Like Firefox Send before it, security was a primary goal of Wormhole (see security details here: https://wormhole.app/security). But one area where we felt we could improve the security of Wormhole was in how we audited our open source dependencies.
Like most teams building apps with JavaScript, we had a large `node_modules` folder filled with lots of constantly-updating third-party code. The risk of a software supply chain attack was huge, especially with 30% of Wormhole visitors coming from China. As most teams do, we enforced code review for our first-party code; but as most teams do, we pulled in third-party dependencies and dependency updates from npm without even glancing at the code. It's too much work to read every line of code of all dependencies. But the status quo would leave our users open to supply chain attack and we wanted to do better for our users. We looked around for a solution to detect signs of attack and to analyze the risk of various open source packages, but none existed.
So we built Socket to help developers ship faster and spend less time on security busywork by helping them safely find, audit, and manage OSS. By analyzing the full picture – from maintainers and how they behave, to open-source codebases and how they evolve – we help developers and security teams to identify risk from malware, hidden code, typo-squatting, misleading packages, and more.
Very nice overview, thanks for this. Q: what's your take on the path to a possible web-browser-based DHT that could be used instead of tracker servers (or anything, really)? I've seen a variety of people trying to figure this out, but I don't think any solutions have popped out.
I was looking into (ab)using WebTorrent trackers as WebRTC signalling servers, but instead decided to write a Cloudflare Worker to do it [1], which is kind of a half-step towards a 'serverless' signalling layer. The next step would be a decentralized network that lets you run Cloudflare Workers off of Cloudflare. But a true DHT would be ideal.
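The heart of the Worker is small. A single-socket sketch using the Workers WebSocket API (a real signalling relay needs shared room state, e.g. a Durable Object, to pair the two peers):

```js
// Cloudflare Worker: accept a WebSocket upgrade and echo messages back.
// A real signalling relay would forward each peer's SDP offer/answer
// to the other member of the room instead of echoing.
export default {
  async fetch(request) {
    if (request.headers.get('Upgrade') !== 'websocket') {
      return new Response('Expected WebSocket', { status: 426 })
    }
    const [client, server] = Object.values(new WebSocketPair())
    server.accept()
    server.addEventListener('message', event => server.send(event.data))
    return new Response(null, { status: 101, webSocket: client })
  }
}
```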
As someone who's built (and is still currently working on) an app that integrates WebTorrent, I want to thank you for resisting the temptation to cash in on the token craze while so many other decentralization-adjacent projects have been ruined by the get-rich-quick crowd. It gives me hope that some of that original ethos is still alive. And good luck with Socket. Super important work.
WebTorrent seems like the most obvious solution for distributing Linux ISOs. Instead of coordinating networks of volunteer HTTP mirrors, the download page of a distribution could simply use WebTorrent and operate one webseed... then every user becomes a seeder. Distributions usually go through the trouble of making torrent files anyway.
I made a proof-of-concept of this idea during the pandemic for fun[1]. It doesn't work very well without WebRTC peers though.
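The shape of it is simple: the distro's existing HTTP mirror doubles as the webseed, so downloads work even before any browser peers exist (a sketch; URLs are placeholders):

```js
const client = new WebTorrent()

// Distros already publish .torrent files; fetch one over HTTP.
client.add('https://releases.example.org/distro-24.04.torrent', torrent => {
  // Point the swarm at the plain HTTP mirror (a BEP 19 webseed) so
  // the first visitor can download with zero peers in the swarm.
  torrent.addWebSeed('https://mirror.example.org/distro-24.04.iso')
  torrent.on('done', () => console.log('done; this tab now seeds other visitors'))
})
```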
This is pretty cool, but wouldn't it make it harder to get seeds in the first place?
I'm just imagining someone getting their file and closing the torrent tab. I guess you could try to address this with some UI callouts, but as a frequent torrenter I tend to seed heavily precisely because my client runs on a dedicated machine/application.
I once thought about using webtorrents on a video on-demand site that I was hosting. The idea would've been that I'd have the video available on the server both through HTTP and BitTorrent, and when a lot of people start watching the same VOD at the same time, they'd help seed it to each other and reduce the burden on the server.
Never actually built it, since the server I had was good enough to serve people at least decently.
Those with a good internet connection will download faster than they play the movie. Say for a 2h movie, it gets downloaded in 15min and shared for the remaining 1h45min...
While I absolutely love BitTorrent and make heavy use of it, I still am not sure if embedding it in a browser is a good thing. I have no doubts about the usefulness, but wouldn't a low priority daemon, or even a client on a minimal external system such as the smallest *Pi-like board, be more functional? It would also speed up peer discovery.
As an example, I have my client as a service on my XigmaNAS NAS, so that it stays on even after all PCs in the house are turned off at night, and frankly I can't imagine going back to a traditional client or a browser, therefore a PC to stay turned on to work.
...Unless I completely misunderstood the purpose of this software.
I think you're right that there are beefier implementations outside the browser, but the point of creating an app that runs in the browser is almost entirely the minimal barrier to entry.
It hasn't received any meaningful update in years now. For those wondering whether you'll be fined for using WebTorrent where piracy is forbidden: the answer is yes, at least in Germany. They have some good trackers, speaking from experience. :)
It's quite obvious that it means "running in the browser engine" rather than "has the technology implemented in a native language". But sure, you're technically correct.
Not sure about the Opera BitTorrent support, but WebTorrent is years old; don't be fooled by the fact that it's been posted here today. I remember first trying it at least 5 years ago.
Thanks to that, I've just learned about open movies and am very excited. I wonder how far open movies can go in the indie scene; can they make an impact similar to open-source software's? I watched Tears of Steel and thought to myself, "My, some parts of this are so cheesy!", and then immediately thought, "But if you don't like it, you may fork the project and make it better!"
WebRTC is encrypted and authenticated. No concerns about that!
Another nice thing about WebRTC is that it uses DTLS. You can’t block WebTorrent without also blocking conferencing software and corporate VPNs (e.g. Cisco's).
For most commonly downloaded torrents, WebRTC support is not widespread enough yet, as far as I know. As long as torrent clients like Transmission, qBittorrent, and rTorrent don't support WebRTC, it will be difficult to guarantee the spread of this otherwise great tech.
The main hold-up on this is actually libtorrent not shipping WebTorrent support in a full release yet. From my talks with the qBittorrent devs, there are a lot of people waiting for them to simply include it in a release, as it seems stable, and there are quite a few people already using it on rasterbar; see https://tracker.btorrent.xyz/stats
I want to build a social network and use a P2P protocol for sharing large files. I know that PeerTube claimed to use BitTorrent, but then it seemed to drop it? What's the catch?
If you implement this, use the webseed feature so you can get serverless seeding. You can add the webseed dynamically on the client when the tracker indicates there are no peers, as in the sketch below.
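Something like this (a sketch; the webseed URL is a placeholder, and a .torrent file rather than a bare magnet link is used so metadata is available even with zero peers):

```js
const client = new WebTorrent()

client.add('/files/big-file.torrent', torrent => {
  // Give the tracker a few seconds to return peers; only fall back
  // to fetching from the HTTP origin if the swarm is empty.
  setTimeout(() => {
    if (torrent.numPeers === 0) {
      torrent.addWebSeed('https://cdn.example.com/big-file.bin')
    }
  }, 5000)
})
```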
This would let you torrent a file download (like Ubuntu, of course) in your phone's browser, using your phone's uncapped data instead of a data-capped tether to a computer.