Not just. There is a niche for mixed web/native apps whereby the page is loaded from the web, but interacts (at least in part) with a locally hosted web-server.
Like, started from the local filesystem? I see they have a "Notebook Server"; is that what you mean by "locally," rather than starting it by clicking, say, a "jupyter.html" file?
To their credit, it doesn't seem to use Node, and configuration via .py files tells me maybe they have some lightweight server thing that makes it a bit more manageable as a single-user app than via Node or Electron and friends.
Anyway, I have a project I have been wanting to be completely self-contained in an HTML file, but that gives me CORS problems, so now I'm thinking maybe the Jupyter posse has implemented a nice compromise I can use.
Yes, Jupyter Notebook (and its successor Jupyter Lab) is a Python program that serves a browser (HTML/CSS/JS) application. It is meant to be used equally well when the Jupyter server is on your local machine or somewhere else on the network.
Other tools that use similar architectures: Tabula, Kiwi IRC, The Lounge IRC. The user experience is maybe more "technical", but you don't have Electron involved, just whatever web browser you would use for anything else.
The part that appeals to me is that they're doing it with a local server that's much smaller than Node.js (or Electron, for that matter)! CORS means something has to proxy, so seeing what I assume is a minimal implementation is really helpful. Obviously I'm not hanging out in the right Slacks or subreddits. ;)
I hate this shit, I've only seen the sketchiest sort of spyware, like discord, do it. Browsers should forbid it, since it's a blatant end-run around the limitations a 'web app' would normally have.
Furthermore, browsers even having this ability facilitates inter-protocol exploitation. It's not only local web servers that are put at risk by this; other sorts of local servers can be attacked as well. See CVE-2016-8606 for an example of a browser being used to attack a Guile REPL.
I think it's a problem from a security point of view to allow websites to do this. Often the server that runs on localhost is poorly secured and may even expose "Access-Control-Allow-Origin: *" headers. And even if it doesn't, the browser still has to send a request to find out whether such a header is present, so some attacker-controlled data does end up in these services. This in turn can be used for attacks.
Maybe these services were coded assuming that a URL can only be 1 kB long? What if you overrun that buffer?
Yes, it's possible localhost servers can have security issues, but:
* This only affects servers that do indeed serve the "anybody can talk to me" CORS headers. For anything else, standard cross-origin restrictions apply. A simple localhost server that has ignored this issue and isn't aware of CORS at all is generally not vulnerable.
* This change doesn't actually increase this risk, it decreases it! Localhost requests from pages were already allowed, CORS notwithstanding, it was just not possible from HTTPS sites as it was mixed content.
* This is a pretty common pattern for lots of popular software that has a desktop component - e.g. Spotify, Zoom - so there's a clear use case for it.
* There's ongoing work to restrict this further, see https://web.dev/cors-rfc1918-feedback/, proposed by the Chrome team. In short, HTTPS will be required, and a new `Access-Control-Request-Private-Network: true` header will be sent (and an `Access-Control-Allow-Private-Network: true` response required) to force servers to opt in to any requests that cross from public origins to private ones.
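For the curious, here's roughly what that opt-in could look like on the server side. This is a minimal stdlib-only Python sketch, not any particular product's code; the private-network header names are taken from the Chrome proposal linked above, and the allowed origin is made up for illustration:

```python
import threading
from http.client import HTTPConnection
from http.server import BaseHTTPRequestHandler, HTTPServer

class OptInHandler(BaseHTTPRequestHandler):
    """A localhost server that explicitly opts in to cross-origin
    requests, including the proposed private-network preflight."""

    def do_OPTIONS(self):
        self.send_response(204)
        # Classic CORS opt-in: only this (hypothetical) origin may read responses.
        self.send_header("Access-Control-Allow-Origin", "https://app.example.com")
        self.send_header("Access-Control-Allow-Methods", "GET, POST")
        # Proposed opt-in for requests crossing from a public origin
        # into a private network (per the Chrome proposal).
        if self.headers.get("Access-Control-Request-Private-Network") == "true":
            self.send_header("Access-Control-Allow-Private-Network", "true")
        self.end_headers()

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), OptInHandler)  # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Simulate the browser's preflight.
conn = HTTPConnection("127.0.0.1", server.server_port)
conn.request("OPTIONS", "/", headers={
    "Origin": "https://app.example.com",
    "Access-Control-Request-Private-Network": "true",
})
resp = conn.getresponse()
print(resp.getheader("Access-Control-Allow-Private-Network"))  # true
server.shutdown()
```

A server that never sends these headers simply never grants the permission, which is the point of the opt-in design.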
> This only affects servers that do indeed serve the "anybody can talk to me" CORS headers.
Again, even if these CORS headers are not sent, the browser still makes a request to the server in order to find out whether the headers are being sent. Try running python3 -m http.server in your terminal, then do var v = fetch("http://localhost:8000/hellllllo") in your browser's console. You will get a big red CORS error in the console, because the built-in Python HTTP server does not send these headers. But the web server will still receive and respond to the "hellllllo" GET request! It will show up in your terminal's log. For some insecure servers, receiving a specifically crafted request might be enough to exploit security bugs. Like I said, take a server with a limited-size buffer for the URL: overrun it and you can write data to whatever lies beyond it on the stack.
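You can reproduce this point without a browser at all. In this self-contained Python sketch, a stdlib request stands in for the browser's fetch; the server never sends any CORS headers, yet it still receives and processes the request:

```python
import threading
import urllib.error
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

seen_paths = []  # every path the server actually received

class PlainHandler(BaseHTTPRequestHandler):
    """Like python3 -m http.server: sends no CORS headers at all."""

    def do_GET(self):
        seen_paths.append(self.path)  # the request reached us regardless
        self.send_response(404)
        self.end_headers()

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), PlainHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Stand-in for fetch("http://localhost:PORT/hellllllo"). A browser would
# refuse to hand the response to the page, but the request still goes out.
try:
    urllib.request.urlopen(f"http://127.0.0.1:{server.server_port}/hellllllo")
except urllib.error.HTTPError:
    pass  # 404, but the server already logged the request

print(seen_paths)  # ['/hellllllo']
server.shutdown()
```

CORS governs whether the page may *read* the response; it doesn't stop the request from being delivered.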
> This change doesn't actually increase this risk
Alright you have a point here, but it's still bad this feature exists in the first place.
> This is a pretty common pattern for lots of popular software that has a desktop component
Just because it's used doesn't mean it's a good idea.
> There's ongoing work to restrict this further
Huh that's very nice. Indeed this would resolve my concerns:
> In the future, whenever a public website is trying to fetch resources from a private or a local network, Chrome will send a preflight request before the actual request.
Preflight requests are hardcoded and carry barely any attacker-controlled data (except maybe the IP address, since 127.0.0.2 is as valid as 127.0.0.1).
For example, some software, such as VPN clients, will now open the authentication page in a Chrome/Firefox web browser, rather than in an embedded browser - this is a security win! It affords the ability to use WebAuthn/U2F, password managers, as well as an updated browser.
However, for this to work, you need to pass an authentication token back to the client - this is done by binding to a port locally, and exposing a webserver which receives an authentication token.
Duo Network Gateway[1] authenticates this way, and I'm sure others do as well. I know Palo Alto GlobalProtect, AWS CLI, and others offer web-based authentication now, but don't know specifics of their implementation. (I work at Duo.)
The cloud service could instead communicate with the local device's client through a separate connection that the client opens with that cloud service; no need to do this via local JS.
The only benefit I can think of is that it makes it easier to correlate the local client with the authenticating user, but you could just have a token that you make part of the URL the client visits. Also it doesn't fully solve the correlation problem either: on the same computer, two clients could run under two different OS-level accounts. Will it just send its authentication token to one of those clients? There is no user separation on Windows or Linux; I only know of Chrome OS having user separation at the port level.
> Also it doesn't fully solve the correlation problem either: on the same computer, two clients could run under two different OS-level accounts. Will it just send its authentication token to one of those clients?
When the request is made to the server, the port that is temporarily bound is also sent with the request. This tells the server what URL they should POST back to. The port is random for each authentication.
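The general pattern looks something like the sketch below. To be clear, this is not Duo's actual implementation, just an illustration of the loopback idea with the Python stdlib; the URL shape and token values are invented:

```python
import threading
import urllib.parse
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

received = {}  # the token the auth server eventually sends back

class TokenReceiver(BaseHTTPRequestHandler):
    """One-shot loopback endpoint that receives the auth token."""

    def do_GET(self):
        query = urllib.parse.urlparse(self.path).query
        received.update(urllib.parse.parse_qs(query))
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"You may close this window.")

    def log_message(self, *args):  # keep the demo quiet
        pass

# Bind to port 0 so the OS picks a fresh ephemeral port per authentication.
server = HTTPServer(("127.0.0.1", 0), TokenReceiver)
port = server.server_port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client would now open the system browser to something like
#   https://auth.example.com/login?redirect_port=<port>
# and, after the user authenticates, the auth server redirects the
# browser back to the loopback endpoint with the token. Simulated here:
urllib.request.urlopen(f"http://127.0.0.1:{port}/callback?token=demo-token")
print(received["token"])  # ['demo-token']
server.shutdown()
```

Because the port is chosen fresh each time and included in the initial request, the auth server knows exactly which loopback URL to send the token back to.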
> The cloud service could communicate with the local device's client through a separate connection though that the client opens with that cloud service, no need to do this via the local JS.
FWIW, no Javascript necessary. Just a regular POST form & Location headers. In any case, you're right - you could open up a separate connection to the server with a correlation ID, and include the same correlation ID in the initial request to the authentication server, and then upgrade the connection's permissions after the fact. You ought to be careful of session fixation attacks here. (Attacker can send Victim a link, and when Victim authenticates, Attacker is logged in. With a local web server receiving the token, when Victim authenticates, they see an error instead.)
As always, there's engineering tradeoffs in building different solutions. Some factors (besides concerns around session fixation) that may come into play:
- Typically, the authentication and authorization step occurs prior to opening a tunnel or long-lived connection. Shifting when this occurs can have unexpected or unwanted side effects. In the case of the DNG, which protects web applications in addition to SSH servers, authentication is typically provided via cookies, and so the tunnel opened for SSH is actually a sort of Websocket connection. With the approach I have described, authentication occurs prior to upgrading the HTTP request into a Websocket, in the same manner it does for web applications. In the approach you suggest, this would have to flip a bit, meaning re-implementing authentication inside the Websocket connection itself.
- What if the authentication server is not the service provider? For example, in the SAML ECP profile, the client negotiates an authentication token from the authentication server on behalf of the service provider. It then takes this token back to the service provider for cryptographic verification. You could instead teach the authentication server how to talk to the service provider directly (e.g. the HTTP-Artifact protocol) - but this works better if you own both the IdP and the SP. In many cases, customers may bring their own identity provider (e.g. Okta) with them to the service provider (e.g. GlobalProtect.)
A typical Linux installation can run many more web servers on localhost. I run transmission-daemon, syncthing, cups, and sometimes pagico (a desktop app that runs a PHP web server backend on loopback).
I guess the Firefox folks have thought of this; otherwise it'd be patched pretty quickly.
CORS solves much of this - servers have to opt in to allowing these requests, just like any other cross-origin requests. Badwebsite.com in general cannot send a POST to bank.com/send-money from inside your browser, and similarly it cannot POST to localhost:631.
There are caveats, and of course servers can be configured insecurely, but this isn't a general risk by default.
Both of those requests can be sent. CORS just stops the response from being read. It’s up to web servers on localhost to assume they’re at just as much risk as any other non-local service, and they often fail to do so. (See also DNS rebinding that nets an attacker the opposite set of permissions, in a sense.)
That's not actually true, at least via Ajax. Certainly you can just throw a POST at the website, and we hope that any web server is secured with CSRF protection. Additionally, the SameSite changes recently introduced by Chrome should further mitigate this problem.[1]
> The Cross-Origin Resource Sharing standard works by adding new HTTP headers that let servers describe which origins are permitted to read that information from a web browser. Additionally, for HTTP request methods that can cause side-effects on server data (in particular, HTTP methods other than GET, or POST with certain MIME types), the specification mandates that browsers "preflight" the request, soliciting supported methods from the server with the HTTP OPTIONS request method, and then, upon "approval" from the server, sending the actual request. Servers can also inform clients whether "credentials" (such as Cookies and HTTP Authentication) should be sent with requests.[2]
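To illustrate the SameSite mitigation mentioned above: on the server side it's just a cookie attribute. A quick stdlib sketch (cookie name and value are hypothetical):

```python
from http.cookies import SimpleCookie

# Build a Set-Cookie header for a session cookie that browsers will
# not send along with cross-site POSTs (SameSite=Lax), and that page
# scripts cannot read (HttpOnly).
cookie = SimpleCookie()
cookie["session"] = "abc123"
cookie["session"]["samesite"] = "Lax"
cookie["session"]["httponly"] = True
header = cookie.output()
print(header)
```

With SameSite=Lax (now the default in Chrome), a cross-site form POST from badwebsite.com simply arrives without the session cookie, which defangs the classic CSRF scenario.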
s/CORS/Lack of CORS/, but you can probably tell from my comment as a whole that I was covering exactly what you've just mentioned. Forgetting CSRF protection and Host header checks is exactly the kind of thing that developers who write software that starts local servers get wrong all the time.
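A Host header check is only a few lines, which is what makes forgetting it so galling. A minimal sketch (the allowed-hosts set is illustrative; it deliberately ignores IPv6 bracket syntax to stay short):

```python
# Hosts a local server should expect in the Host header. Anything else
# suggests DNS rebinding: evil.example can resolve to 127.0.0.1, but the
# browser still sends "Host: evil.example" with the request.
ALLOWED_HOSTS = {"localhost", "127.0.0.1"}

def host_is_allowed(host_header):
    """Return True only if the Host header names a known local host."""
    if host_header is None:
        return False
    host = host_header.split(":", 1)[0].lower()  # strip any :port suffix
    return host in ALLOWED_HOSTS

print(host_is_allowed("localhost:8000"))  # True
print(host_is_allowed("evil.example"))    # False
```

Rejecting unexpected Host values up front means a rebound hostname never gets to talk to the local service at all, regardless of what CORS headers are or aren't sent.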
Yes, I have a Python script running on localhost to communicate with serial devices and printers, so this makes it a bit easier. (I currently slide in a self-signed cert.)