I love Wormhole, but that animated background chews up an incredible amount of battery on my M1 MacBook Air (even when the tab is in the background).
Please, Wormhole crew, turn it off.
We recently made a one-line change that massively reduced GPU usage, specifically:
- Intel integrated graphics: 60% reduction
- AMD Radeon: 40% reduction
- Apple M1: 10% reduction
What was the change? We removed `opacity: 85%` from the `<canvas>` element. We were using opacity to slightly darken the animation, but now we just darken the texture image directly.
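Roughly, the idea is something like this (an illustrative sketch, not the exact production code; `darkenTexture` is a hypothetical helper):

```typescript
// Before: the compositor had to blend the <canvas> at 85% opacity on
// every single frame.
//   canvas.style.opacity = '85%';

// After: bake the darkening into the texture once, before it's uploaded
// to WebGL. (Illustrative sketch; not Wormhole's actual code.)
function darkenTexture(img: HTMLImageElement, factor = 0.85): HTMLCanvasElement {
  const off = document.createElement('canvas');
  off.width = img.width;
  off.height = img.height;
  const ctx = off.getContext('2d')!;
  ctx.drawImage(img, 0, 0);
  // 'multiply' scales each color channel by fillStyle/255, i.e. by `factor`.
  ctx.globalCompositeOperation = 'multiply';
  const v = Math.round(factor * 255);
  ctx.fillStyle = `rgb(${v}, ${v}, ${v})`;
  ctx.fillRect(0, 0, off.width, off.height);
  return off; // usable directly as a gl.texImage2D source
}
```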
A bit more detail: we're running into the limits of WebGL. It seems just rendering a white screen at 60 fps at 4K resolution is enough to make people's fans turn on.
So we reduce the frame rate when the wormhole is not warping (render every 3rd frame). We also render at a lower resolution and scale it up.
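The throttling looks roughly like this (again an illustrative sketch; the idle resolution scale here is a made-up value):

```typescript
// Illustrative sketch of the throttling, not Wormhole's actual code.
const IDLE_FRAME_DIVISOR = 3;  // render every 3rd frame when not warping
const IDLE_RENDER_SCALE = 0.5; // assumed value; render at reduced resolution

let tick = 0;
function loop(render: () => void, isWarping: () => boolean): void {
  tick++;
  if (isWarping() || tick % IDLE_FRAME_DIVISOR === 0) {
    render();
  }
  requestAnimationFrame(() => loop(render, isWarping));
}

// Shrink only the drawing buffer; CSS keeps the on-screen size fixed, so
// the browser scales the smaller image back up cheaply.
// e.g. setRenderScale(canvas, IDLE_RENDER_SCALE) while idle.
function setRenderScale(canvas: HTMLCanvasElement, scale: number): void {
  canvas.width = Math.round(canvas.clientWidth * scale);
  canvas.height = Math.round(canvas.clientHeight * scale);
}
```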
100% agree - I've used Wormhole a few times when trying to quickly get data into interactive VM sessions, because it works great otherwise, but the graphics consistently grind the whole thing to a halt.
I really don't get the reasoning. It looks kind of cool, but it makes it basically unusable for a bunch of use cases, from very old computers to interactive sessions on Raspberry Pis to constrained VMs. And those are exactly the kind of places where I want friendly, easy tools to copy files across for quick system admin or to get logs back out! Doesn't seem like a good tradeoff.
You could add a "fast" or "lite" or similar subdomain that avoids anything like the complaints that have been shared here. Not perfect (the default should be the best), but at least you could avoid these problems as a user.
I remember trying many of those services, and I decided to use this one because I could send large files without any problem (I was moving SQLite DBs that were several GBs each, and it seemed to stream the file instead of trying to hold it in RAM first). Now I see wormhole.app allows up to 10 GB, though I don't remember hitting any limit.
WebRTC services seem to have trouble getting up to speed, but for streaming files between devices they seem to be the lowest-friction solution.
The symptom seems to be that the SCTP data rate drops with increasing latency (which used to be a problem with very old TCP implementations too, but all modern ones handle high-latency networks much better).
I think SCTP has everything we need; just more knobs need to be exposed. [0]
I have gotten feedback about the performance of Pion's SCTP implementation as well. It is a hard problem. The number of people who care about SCTP performance + WebRTC and are able to work on it is very small.
If anyone is interested in more about this, [1] is a fun issue to read.
libwebrtc is also planning to stop using usrsctp soon. That would mean all browsers (Chrome, Safari, Firefox) will be on something new. Could be good, no idea yet. The ongoing work is here [2].
Thanks, this is great context! The very different focus of the two projects especially explains a lot (a lightweight control channel for A/V [1] vs. being the entire reason for using WebRTC in all of these file transfer projects).
Curious to see how the new implementation will play out for the browsers!
There's also https://snapdrop.net, which seems extremely similar to sharedrop.io but has an additional useful feature: it lets you send messages, which I sometimes use to send links to devices that aren't logged into any service.
As I recall, there's a difference between file.pizza and webwormhole. file.pizza allows the sender to specify files and then generates a share url, whereas webwormhole creates a share link first. The latter can be useful if you're not sure exactly what you'll send before you share the link.
Links are checked at least once daily - I do have a few recently broken ones that I need to address - but File.pizza is okay again. I have switched the main link to Wormhole, because it's now my preferred option, and because File.pizza has been up and down for me in the past as well.
The website is up - the actual transfer service no longer seems to function in modern browsers. Have you (or anyone) had a successful transfer via File Pizza recently?
Just tried with a tiny file. It starts to work, as I can see the filename/size on the receiving end, but it never downloads. It just says "Peers: 0, Up: 0, Down: 0".
I was wondering why I recognised your name - you're the main developer of ffsend. Thanks for all the work! I really hope you get more people interested in maintaining and developing Send.
Hey - love this project! I was able to get an instance deployed behind an Nginx reverse proxy without too much trouble. Password encryption doesn't seem to be working, but that might be some weird header issue with the reverse proxy setup, and I'm not too worried about it.
One thing I was wondering is if/how expired files are cleaned up. I uploaded a large file, set it to expire after 5 minutes, and although I can't download it anymore I see that it's still in the files directory on my server.
I glanced through the code, but I didn't see any mechanism for periodically purging expired files or anything like that. Is there something that I missed, or should I just set up a cron job or something to delete all files in that directory older than a week?
> but I didn't see any mechanism for periodically purging expired files or anything like that. (...) should I just set up a cron job or something to delete all files in that directory older than a week?
You're right: expired files that don't reach their download limit are kept on the server. Due to implementation details there is no 'nice' way to do this from Send itself. If you use S3 as storage you can configure a Lifecycle Policy; if you use raw disk storage you can set up a cron job.
All uploaded files have a number prefix that defines the lifetime in days (e.g. `7-abcdefg` for a 7-day expiry), so you can be a little smarter when cleaning up.
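For example, a cron-run script can honor that prefix instead of using a blanket one-week rule. A sketch, assuming raw disk storage (the upload directory path is hypothetical; adjust it to your deployment):

```typescript
// Sketch of a cleanup script using the `<days>-<id>` filename prefix.
// Run it periodically, e.g. daily from cron. Not part of Send itself.
import { readdirSync, statSync, unlinkSync } from 'fs';
import { join } from 'path';

const FILES_DIR = '/srv/send/uploads'; // hypothetical path

for (const name of readdirSync(FILES_DIR)) {
  const match = /^(\d+)-/.exec(name); // e.g. "7-abcdefg" => 7-day lifetime
  if (!match) continue;
  const lifetimeMs = Number(match[1]) * 24 * 60 * 60 * 1000;
  const file = join(FILES_DIR, name);
  const ageMs = Date.now() - statSync(file).mtimeMs;
  if (ageMs > lifetimeMs) {
    unlinkSync(file); // past its declared expiry; safe to remove
  }
}
```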
What level of logging/privacy can we expect from a self-hosted instance? I had faith in Mozilla's commitment to privacy, but I don't necessarily trust some random dude's AWS instance.
> What level of logging/privacy can we expect from a self-hosted instance?
It really depends on who is hosting it.
Send itself doesn't really log anything except errors. A reverse proxy in front of it might be used for an access log, which is the default with the docker-compose template for it. Files are always encrypted on the client, and neither the file nor its keys are ever seen by the server.
If you're wondering about the instance I've linked: it runs on Digital Ocean. I have an access log (IP per request, for 24h), I can list encrypted blobs and their metadata (creation time, size), and that's pretty much it.
Naïve question here, but is there a config setting that would work without HTTPS?
I run a home server just for internal use, and it might be nice to send files via a link for memes, jokes, and quick one-shot uses rather than storing them on a Samba share, but it doesn't have a public-facing URL for confirming a Let's Encrypt certificate.
If you really don't want to use a certificate, just configure the base URL to be an http: address. That should work fine! Feel free to open an issue otherwise.
Note that if you do this rather than actually setting up HTTPS, a bunch of stuff outside of Send itself becomes impossible, because you lack a Secure Context (you presumably don't want to do any of that stuff, but if you ever try, it just won't work), and over time you can expect more errors and problems.
Already, if you give me a plaintext HTTP link, I have to consciously decide that's fine and click past the interstitial warning me it couldn't be upgraded to HTTPS. And if you use it to embed an image somewhere that's otherwise HTTPS, the image just counts as broken unless I go out of my way to authorise it.
You could self-sign a certificate, or, if your internal URLs use a subdomain of a public domain you control, you could use DNS challenges for Let's Encrypt.
But then I'd have to give the password to anyone I wanted to receive files. I want to be able to send files to people but not have them be able to send to others.
Maybe adding HTTP basic auth is fine, as I mainly want to keep random bots from finding the service. I'll try that, thanks!
I was wondering where I had seen your name before, and then after scrolling through your GitHub, I realized it was your Advent of Code 2020 solutions in Rust. Those were absolutely beautiful.
Encryption is done with JavaScript on the client. The decryption key is attached to the download URL as the hash fragment, also on the client side.
When visiting the URL, the key never reaches the server, because the hash part of a URL is never sent; it's a local-only thing. So there's no need to strip logging. The client downloads the encrypted blob and decrypts it locally.
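A sketch of the pattern (illustrative only; Send's actual file format and key derivation differ):

```typescript
// The key travels in the URL fragment, which the browser never sends to
// the server; the server only ever sees a request for the encrypted blob.
async function downloadAndDecrypt(shareUrl: string): Promise<ArrayBuffer> {
  const keyB64 = new URL(shareUrl).hash.slice(1); // part after '#', local-only
  const rawKey = Uint8Array.from(atob(keyB64), c => c.charCodeAt(0));
  const key = await crypto.subtle.importKey(
    'raw', rawKey, { name: 'AES-GCM' }, false, ['decrypt']
  );

  // fetch() strips the fragment, so only the path reaches the server.
  const blob = new Uint8Array(await (await fetch(shareUrl)).arrayBuffer());

  // Assumed layout: 12-byte IV followed by ciphertext (not Send's real format).
  const iv = blob.slice(0, 12);
  return crypto.subtle.decrypt({ name: 'AES-GCM', iv }, key, blob.slice(12));
}
```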
You are welcome (poster here :-)). And thank you for maintaining a great and useful piece of software. I recently needed something like Firefox Send that could keep files uploaded for longer than 1 day but no more than 7 days, and Send (and your public instance) was perfect for that task.
An observation: this is the second time today we've had a submission link to GitHub even though the main repo is on GitLab (the first was https://news.ycombinator.com/item?id=27047243).
This is excellent! I've been missing Firefox Send ever since they took it down.
However, it needs to be hosted somewhere.
...and if I'm going to be using a hosted service, I'd like the ability to easily pay for it (so that it doesn't eventually collapse or resort to shady things like ads), either through donations or microtransactions for bandwidth/storage.
Unfortunately, there's no good microtransaction service.
Wasn't Mozilla working on one? Where did that go?
...and thus, we've gone full circle.
And I'm typing this comment in a Chrome browser, because my company is migrating away from Firefox due to "security issues".
This is a comment for my instance specifically, but you might find it nice to know:
https://send.vis.ee/ is mostly funded by donations right now. I do not plan to take it down unless the cost becomes a problem. I'll never resort to ads.
If that ever happens, I'll likely show a warning beforehand. Some time later I'll disable the upload page, and I'll take the rest down the week after. Files have a maximum lifetime of a week anyway, so if you discover this when uploading, you can simply switch to some other service. Existing links should not break.
There's a donation link on the bottom of the page (https://vis.ee/donate). But feel free to use it without a contribution.
You can host your own Send, and the host need not exist when there's nothing you're sharing, which is right in line with utility computing as provided by "cloud" hosting companies like Amazon, Microsoft, Oracle, &c. A month of Send operation should be possible for under $5, provided low disk space suffices, say under 5 GB. Perhaps not micro enough, though.
Yes, the problem is that it's "not micro enough". The value of this service is low enough that it's not worth it for me to self-host, and the financial overhead of cloud providers is enough that it would cost far more for me to spin up a dedicated instance than pay someone for the fractional cost of usage of their instance.
More generally, I want the ability to make microtransactions (substitute "extremely low-friction donations" if you will) for everything that could be "free" but still costs money (bandwidth, compute, storage), because no matter how much free time I have, there will always be services that I could benefit from but that are low-value enough that it's not worth it for me to self-host or get a cloud host myself.
I think the shifting "product focus" is probably the main factor here, simply because such a service being used for malware hosting was completely predictable from day 1. When they started they probably thought that it was worth it, then later on they changed their mind. That or they were incredibly naive.
In the context of a large corporation, "incredibly naive" is just a euphemism for bad management. They launched it. An internal security audit found that it was being used for phishing. They planned to fix it, but layoffs came along and they had to sunset Send.
That means no security at all. Without a way to link hosted files to an identity, or to inspect the contents of the files, there is no barrier preventing spam and illegal files from being hosted.
I don’t know. The internet had hundreds of file-sharing sites at one point. They all suffered fates similar to that of the epic MegaUpload, although with founders not as colorful as Kim Dotcom.
I don’t see how having them again would be different from last time.
Mediafire is still alive, and I think it's the last holdout from the "big" file-sharing websites of the mid/late 2000s. Though honestly, I don't miss the download limits, timers, and adf.ly spam that came with them. Common cloud storage (Google Drive, Dropbox) is much easier to use and share files from, although it requires you to be logged in. Send seems to be the best of both worlds, though.
After trying all these WebRTC options, with the NAT traversal service (STUN, iirc) always being down, I ended up using IPFS instead. With public gateways from Cloudflare it is very easy to effectively drag and drop files and have them accessible via the IPFS-to-HTTPS gateway.
For the rare times I have to do this, I run a local server and the free version of Cloudflare's Argo Tunnel. It provides an HTTPS URL, so the upload/download is safe from ISP snooping, and there are no size limits; you can send a 30 GB file if you need to.
How does this work? Does the file need to pass through a server before reaching the other end, or does it stream directly between sender and receiver?
Also, does it need to put the whole file in RAM first?
There is a server involved to mediate peer discovery, but the file transfer itself is p2p.
Regarding your second point: I’m actually not sure if the file is copied into memory or if the browser just keeps a reference. I haven’t tried it with large files yet.
There are two persistence types: pinned and unpinned files. Pinned files persist, but someone needs to seed them at least occasionally; unpinned files eventually get garbage-collected. If you want to share a file, you don't need to pin it if, for example, the recipient tells you when the download is complete. In this sense, all these WebRTC examples are more equivalent to unpinned files.
As long as you keep your node up and running, your content will never disappear. So if you just want to share files with friends, I can see this working well - just keep your node available.
The other option would be just to clone the repo locally and use grep, find, etc. That seems simpler if you just want to perform a search.
Naive question: the GitHub page says 62% of the code in this repo is FreeMarker. I checked the repo, and every file I looked at is JS. What and where is FreeMarker?
I can't find any FreeMarker templates in the project, but apparently it's using i18n files with an `.ftl` extension, which is the default extension for FreeMarker templates.
It isn't being excluded, because that pattern doesn't match any of the files.
If you run `git check-attr --all public/locales/foo/send.ftl` with the current `.gitattributes` file, you'll get no attributes.
If you update the pattern to `public/locales/**` or `public/locales/**/*.ftl`, then the `check-attr` command above will match it and show `linguist-documentation`.
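That is, the fix would be a line like this in `.gitattributes` (untested sketch):

```
public/locales/** linguist-documentation
```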
Crypto is hard, and it doesn't wrongly claim it's secure. It's a one-man show. Isn't that where the beauty of open source lies? Some students were able to get a (purposeful) bug into Linux to show how easy it was.
Or take the example of OpenSSL after Heartbleed: a fresh set of eyes looks at the code, things get fixed, we have a log of it, developers learn something, and the project moves ahead.
I know everybody's posting tons of alternatives already, but I'm curious why https://transfer.sh isn't included. It has very simple instructions for encrypting against a recipient's Keybase GPG key, works from the site or command line, and has 14 days of retention.
Just curious, since I keep seeing Wormhole mentioned but never seem to see anyone mention Transfer (unless it's just a lesser-known option and I happened to hear of it early).
The big advantage of Firefox Send was that it was hosted by Mozilla, and I could trust that Mozilla wouldn't have any backdoor in the service.
When the same project is hosted by someone that I don't know, I can't be sure that they won't modify it to peek at the files (I'm not going to perform a full code audit on every page load).
Oh, I forgot about that one. Yet another Mozilla project that worked well that was abandoned. (Remember Firefox OS? https://killedbymozilla.com/)
I know what you're thinking: they did not abandon Rust!
Well, I just learned from a post that recently made the HN front page that management was considering dropping Rust. The only reason they did not was that someone fought hard for it.
It depends. Some services are much more suitable than others.
Some services are 1:1 ratio. That is, uploading a file results in a download that only works once. That makes them rubbish for malware; you'd have to be spear-phishing somebody, and even then it buys you less than using Tor would.
Some services are only encrypted in transit. So bad guys can't intercept or alter the data, but at rest on your server it can be scanned for malware, copyright infringement, whatever the provider wants to scan for.
Some services cost money to use, which is an obstacle to bad guys who most likely want to make money, not pay it up front.
Firefox Send was encrypted end to end (the keys live only in clients, so the server doesn't know your keys), it was free to use, and it allowed either unlimited or very large ratios.
So that makes it potentially very attractive. On top of which, it has this nice trustworthy Firefox name. Grandma Jenny's kids have told her not to go around installing stuff from just anywhere, but they did tell her _Firefox_ is trustworthy after she got flustered when it auto-updated. How is Jenny supposed to understand that this link to Firefox Send isn't Firefox?
* https://wormhole.app/ (my recent fave, by creator of WebTorrent, holds for 24h, https://instant.io by same)
* https://file.pizza/ (p2p, nothing stored)
* https://webwormhole.io/ (same, but has a cli)
* https://www.sharedrop.io/ (same, does qr codes)
* https://justbeamit.com/ (same, expires in 10 minutes)
* https://send.vis.ee (hosted version of this code)
* https://send.tresorit.com/ (not p2p, 5 GB limit, encrypted)
I track these tools here: https://href.cool/Web/Participate/