The biggest change from my perspective is "Firefox now ensures that localhost URLs — such as http://localhost/ and http://dev.localhost/ — refer to the local host's loopback interface (e.g. http://127.0.0.1). As a result, resources loaded from localhost are now assumed to have been delivered securely (see Secure contexts), and also will not be treated as mixed content (bug 1220810, bug 1488740)."
Am I incompetent if I thought all browsers all did this already by default? Or was this typically enforced at the OS/networking-stack level, and now Firefox is adding an additional guarantee that it will happen directly in the browser, even if the networking stack would say otherwise?
The latter. You can always set your /etc/hosts or equivalent to map "localhost" to some random IP that is not loopback. Or a proxy could proxy that hostname, etc.
In the new setup proxies can't proxy localhost unless you set a pref to explicitly let them do that, and for "localhost" and hostnames ending in ".localhost" Firefox won't even ask the DNS stack for the IP, assuming loopback instead.
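In pseudocode, the new behavior boils down to something like this (my sketch, not Firefox's actual implementation, which lives in its C++ networking stack):

  import socket

  def is_localhost_name(host: str) -> bool:
      # Per RFC 6761, "localhost" and anything ending in ".localhost"
      # must be treated as loopback without consulting DNS.
      host = host.rstrip(".").lower()
      return host == "localhost" or host.endswith(".localhost")

  def resolve(host: str) -> str:
      if is_localhost_name(host):
          return "127.0.0.1"             # assume loopback; skip DNS and proxies
      return socket.gethostbyname(host)  # normal resolution path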
I'm pretty sure all of this is clearly specified at this point, so browsers should end up all aligning on a common behavior here at some point, if they haven't yet.
Not just. There is a niche for mixed web/native apps whereby the page is loaded from the web, but interacts (at least in part) with a locally hosted web-server.
Like, started from the local filesystem? I see they have a "Notebook Server"; is that what you mean by "locally," rather than starting it by clicking e.g. a "jupyter.html" file?
To their credit, it doesn't seem to use Node, and configuration via .py files tells me maybe they have some lightweight server thing that makes it a bit more manageable as a single-user app than via Node or Electron and friends.
Anyway, I have a project I have been wanting to be completely self-contained in an HTML file, but that gives me CORS problems, so now I'm thinking maybe the Jupyter posse has implemented a nice compromise I can use.
Yes, Jupyter Notebook (and its successor Jupyter Lab) is a Python program that serves a browser (HTML/CSS/JS) application. It is meant to be used equally well when the Jupyter server is on your local machine or somewhere else on the network.
Other tools that use similar architectures: Tabula, Kiwi IRC, The Lounge IRC. The user experience is maybe more "technical", but you don't have Electron involved, just whatever web browser you would use for anything else.
The part that appeals to me is that they're using a local server that's smaller than Node.js (and Electron, for that matter) to do so! CORS means something has to proxy, so seeing the existence of what I'm assuming is a minimal implementation is really helpful. Obviously I'm not hanging out in the right slacks or subreddits. ;)
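For what it's worth, the "minimum implementation" can be genuinely tiny. Here's a sketch using just the Python standard library (hypothetical, not what Jupyter actually ships; Jupyter is built on Tornado):

  from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer

  class CORSHandler(SimpleHTTPRequestHandler):
      """Serve the current directory, opting in to cross-origin reads."""
      def end_headers(self):
          self.send_header("Access-Control-Allow-Origin", "*")
          super().end_headers()

  if __name__ == "__main__":
      # Files under http://localhost:8000/ are now readable from any origin.
      ThreadingHTTPServer(("127.0.0.1", 8000), CORSHandler).serve_forever()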
I hate this shit, I've only seen the sketchiest sort of spyware, like discord, do it. Browsers should forbid it, since it's a blatant end-run around the limitations a 'web app' would normally have.
Furthermore, browsers even having this ability facilitates inter-protocol exploitation. It's not only local webservers that are put at risk by this; other sorts of local servers can be attacked as well. See CVE-2016-8606 for an example of a browser being used to attack a Guile REPL.
I think it's a problem from a security point of view to allow websites to do this. Often the server that runs on localhost is poorly secured and may even expose "Access-Control-Allow-Origin: *" headers. And even if it doesn't, the browser still has to send a request to find out whether such a header is present, so some attacker-controlled data does end up in these services. This in turn can be used for attacks.
Maybe these services were coded assuming that a URL can only be 1 KB long? What if you overrun that buffer?
Yes, it's possible localhost servers can have security issues, but:
* This only affects servers that do indeed serve the "anybody can talk to me" CORS headers. For anything else, standard cross-origin restrictions apply. A simple localhost server that has ignored this issue and isn't aware of CORS at all is generally not vulnerable.
* This change doesn't actually increase this risk, it decreases it! Localhost requests from pages were already allowed, CORS notwithstanding, it was just not possible from HTTPS sites as it was mixed content.
* This is a pretty common pattern for lots of popular software that has a desktop component - e.g. Spotify, Zoom - so there's a clear use case for it.
* There's ongoing work to restrict this further, see https://web.dev/cors-rfc1918-feedback/, proposed by the Chrome team. In short, HTTPS will be required, and a new `Access-Control-Request-Private-Network: true` header will be sent (and an `Access-Control-Allow-Private-Network: true` response required) to force servers to opt in to any requests that cross from public origins to private ones.
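To make that last point concrete: under the proposal, a local server has to answer a new preflight before the browser lets a public page's request through. A hypothetical handler, going only off the header names in that proposal:

  from http.server import BaseHTTPRequestHandler, HTTPServer

  class PNAHandler(BaseHTTPRequestHandler):
      def do_OPTIONS(self):
          # Public->private requests would carry this preflight header.
          if self.headers.get("Access-Control-Request-Private-Network") == "true":
              self.send_response(204)
              # "https://app.example.com" is a placeholder for the one
              # public origin this server chooses to trust.
              self.send_header("Access-Control-Allow-Origin", "https://app.example.com")
              self.send_header("Access-Control-Allow-Private-Network", "true")
              self.end_headers()
          else:
              self.send_error(403)

  HTTPServer(("127.0.0.1", 8080), PNAHandler).serve_forever()

Servers that never send the opt-in simply stop being reachable from public origins, which is the point.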
> This only affects servers that do indeed serve the "anybody can talk to me" CORS headers.
Again, even if these CORS headers are not sent, the browser still makes a request to the server in order to find out whether the headers are being sent. Try running python3 -m http.server in your terminal, and then do var v = fetch("http://localhost:8000/hellllllo") in your browser's console. You will get a big red CORS error in the console, because the built-in Python HTTP server does not send these headers. But the web server will still receive and respond to the "hellllllo" GET request! It will show up in your terminal's log. For some insecure servers, receiving a specifically crafted request might be enough to exploit security bugs. Like I said, take a server with a limited-size buffer for the URL that overflows, letting you write data to whatever lies beyond it on the stack.
> This change doesn't actually increase this risk
Alright you have a point here, but it's still bad this feature exists in the first place.
> This is a pretty common pattern for lots of popular software that has a desktop component
Just because it's used doesn't mean it's a good idea.
> There's ongoing work to restrict this further
Huh that's very nice. Indeed this would resolve my concerns:
> In the future, whenever a public website is trying to fetch resources from a private or a local network, Chrome will send a preflight request before the actual request.
Preflight requests are hardcoded and carry barely any attacker-controlled data (except for the IP address maybe, as 127.0.0.2 is as valid as 127.0.0.1).
For example, some software, such as VPN clients, will now open the authentication page in a Chrome/Firefox web browser, rather than in an embedded browser - this is a security win! It affords the ability to use WebAuthn/U2F, password managers, as well as an updated browser.
However, for this to work, you need to pass an authentication token back to the client - this is done by binding to a port locally, and exposing a webserver which receives an authentication token.
Duo Network Gateway[1] authenticates this way, and I'm sure others do as well. I know Palo Alto GlobalProtect, AWS CLI, and others offer web-based authentication now, but don't know specifics of their implementation. (I work at Duo.)
The cloud service could communicate with the local device's client through a separate connection that the client itself opens with that cloud service; no need to do this via the local JS.
The only benefit I could think of would be that it makes it easier to correlate the local client with the authenticating user, but you could just have a token that you make part of the URL the client visits. Also it doesn't fully solve the correlation problem either: on the same computer, two clients could run under two different OS-level accounts. Will it just send its authentication token to one of those clients? There is no user separation there on Windows or Linux; I only know of Chrome OS having user separation at the port level.
> Also it doesn't fully solve the correlation problem either: on the same computer, two clients could run under two different OS-level accounts. Will it just send its authentication token to one of those clients?
When the request is made to the server, the port that is temporarily bound is also sent with the request. This tells the server what URL they should POST back to. The port is random for each authentication.
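A bare-bones sketch of that loopback-callback pattern (hypothetical URLs and parameter names; real clients add state/PKCE and error handling):

  import webbrowser
  from http.server import BaseHTTPRequestHandler, HTTPServer
  from urllib.parse import parse_qs, urlparse

  class CallbackHandler(BaseHTTPRequestHandler):
      def do_GET(self):
          # The auth server redirects the browser here with the token.
          query = parse_qs(urlparse(self.path).query)
          self.server.token = query.get("token", [None])[0]
          self.send_response(200)
          self.end_headers()
          self.wfile.write(b"Authenticated. You can close this tab.")

  # Binding port 0 lets the OS pick a random free port, fresh per attempt.
  server = HTTPServer(("127.0.0.1", 0), CallbackHandler)
  port = server.server_address[1]
  webbrowser.open(f"https://auth.example.com/login?redirect_port={port}")
  server.handle_request()  # serve exactly one request: the callback
  print("received token:", server.token)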
> The cloud service could communicate with the local device's client through a separate connection though that the client opens with that cloud service, no need to do this via the local JS.
FWIW, no Javascript necessary. Just a regular POST form & Location headers. In any case, you're right - you could open up a separate connection to the server with a correlation ID, and include the same correlation ID in the initial request to the authentication server, and then upgrade the connection's permissions after the fact. You ought to be careful of session fixation attacks here. (Attacker can send Victim a link, and when Victim authenticates, Attacker is logged in. With a local web server receiving the token, when Victim authenticates, they see an error instead.)
As always, there's engineering tradeoffs in building different solutions. Some factors (besides concerns around session fixation) that may come into play:
- Typically, the authentication and authorization step occurs prior to opening a tunnel or long-lived connection. Shifting when this occurs can have unexpected or unwanted side effects. In the case of the DNG, which protects web applications in addition to SSH servers, authentication is typically provided via cookies, and so the tunnel opened for SSH is actually a sort of Websocket connection. With the approach I have described, authentication occurs prior to upgrading the HTTP request into a Websocket, in the same manner it does for web applications. In the approach you suggest, this would have to flip a bit, meaning re-implementing authentication inside the Websocket connection itself.
- What if the authentication server is not the service provider? For example, in the SAML ECP profile, the client negotiates an authentication token from the authentication server on behalf of the service provider. It then takes this token back to the service provider for cryptographic verification. You could instead teach the authentication server how to talk to the service provider directly (e.g. the HTTP-Artifact protocol) - but this works better if you own both the IdP and the SP. In many cases, customers may bring their own identity provider (e.g. Okta) with them to the service provider (e.g. GlobalProtect.)
A typical Linux installation can run many more web servers on localhost. I run transmission-daemon, syncthing, cups, and sometimes Pagico (a desktop app which runs a PHP web-server backend on loopback).
I guess the Firefox folks have thought of this; otherwise it'd be patched pretty quickly.
CORS solves much of this - servers have to opt in to allowing these requests, just like any other cross-origin requests. Badwebsite.com in general cannot send a POST to bank.com/send-money from inside your browser, and similarly it cannot POST to localhost:631.
There are caveats, and of course servers can be configured insecurely, but this isn't a general risk by default.
Both of those requests can be sent. CORS just stops the response from being read. It’s up to web servers on localhost to assume they’re at just as much risk as any other non-local service, and they often fail to do so. (See also DNS rebinding that nets an attacker the opposite set of permissions, in a sense.)
That's not actually true, at least via Ajax. Certainly you can just throw a POST at the website, and we hope that any web server is secured with CSRF protection. Additionally, the SameSite changes recently introduced by Chrome should further mitigate this problem.[1]
> The Cross-Origin Resource Sharing standard works by adding new HTTP headers that let servers describe which origins are permitted to read that information from a web browser. Additionally, for HTTP request methods that can cause side-effects on server data (in particular, HTTP methods other than GET, or POST with certain MIME types), the specification mandates that browsers "preflight" the request, soliciting supported methods from the server with the HTTP OPTIONS request method, and then, upon "approval" from the server, sending the actual request. Servers can also inform clients whether "credentials" (such as Cookies and HTTP Authentication) should be sent with requests.[2]
s/CORS/Lack of CORS/, but you can probably tell from my comment as a whole that I was covering exactly what you’ve just mentioned. Forgetting CSRF protection and Host header checks are exactly the kinds of things developers who write software that starts local servers forget all the time.
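For a local server, those two checks are cheap. A sketch of the kind of guard that's so often forgotten (assumed policy: only same-machine, same-origin traffic is acceptable):

  from http.server import BaseHTTPRequestHandler, HTTPServer

  SELF_ORIGINS = {"http://localhost:8000", "http://127.0.0.1:8000"}

  class GuardedHandler(BaseHTTPRequestHandler):
      def do_POST(self):
          # Host check defeats DNS rebinding: the rebound name shows up here.
          if self.headers.get("Host") not in {"localhost:8000", "127.0.0.1:8000"}:
              return self.send_error(403, "unexpected Host header")
          # Browsers attach Origin to POSTs; only our own pages (or
          # non-browser clients, which send none) should pass.
          origin = self.headers.get("Origin")
          if origin is not None and origin not in SELF_ORIGINS:
              return self.send_error(403, "cross-origin POST rejected")
          self.send_response(204)
          self.end_headers()

  HTTPServer(("127.0.0.1", 8000), GuardedHandler).serve_forever()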
Yes, I have a Python script running on localhost to communicate with serial devices and printers, so this makes it a bit easier. (I currently slide in a self-signed cert.)
That would simplify my life when developing stuff on localhost and importing things between different services over http(s).
edit: Kids, you see those articles that regularly hit the frontpage about work/life balance? The ones about how you need healthy sleep patterns and how crunching all night long is actually bad for your brain? This comment is what happens when you read those but don't actually follow through with the recommendations.
It means Firefox could visit https://whatever.localhost without SSL warnings for self-signed certs. It has no bearing on cURL. cURL is maintained by Daniel Stenberg (an open source legend), whereas Firefox is maintained by Mozilla. Completely separate. This is like asking if Tesla's new car policy will affect your Toyota.
I was curious about this because the release notes don't mention changing the behavior of HTTPS for localhost. I set up a local test and http://mytest.localhost works without having to add an entry to /etc/hosts, but https://mytest.localhost with a self-signed TLS certificate will still raise a SEC_ERROR_UNKNOWN_ISSUER error. You still need to ignore those or install your own root CA with something like https://github.com/FiloSottile/mkcert for that to work seamlessly.
That sounds great for local development. It sounds like a vector to keep an eye on for bug bounty hunters. Any time something is "assumed to be [...] secure" there's a good chance that assumption can be broken in some strange way in future.
What Mozilla is implementing here, I suspect, is RFC 6761 section 6.3 item 3, which says:
  Name resolution APIs and libraries SHOULD recognize
  localhost names as special and SHOULD always return
  the IP loopback address for address queries and
  negative responses for all other query types. Name
  resolution APIs SHOULD NOT send queries for localhost
  names to their configured caching DNS server(s).
A few notes:
1) The use of "the IP loopback address" is a bit weird in IPv4, of course.
2) It's hard to know which loopback address to return from an API without violating the "SHOULD NOT" part of this item.
There's really no way to win here. In theory Firefox could do the resolution as normal and then, if it doesn't come back as loopback, replace it with 127.0.0.1. That would be perfectly within the letter of the RFC, since none of the stuff above is a MUST-level requirement, but probably not really within its spirit, since it would violate the "SHOULD NOT" bit.
Note that Firefox is not alone here; Chrome has the same behavior last I checked. https://bugzilla.mozilla.org/show_bug.cgi?id=1220810#c28 is a comment from one of the networking/security folks at Chrome about this precise issue, in case it's of interest.
> There's really no way to win here. In theory Firefox could do the resolution as normal and then, if it doesn't come back as loopback, replace it with 127.0.0.1. That would be perfectly within the letter of the RFC, since none of the stuff above is a MUST-level requirement, but probably not really within its spirit, since it would violate the "SHOULD NOT" bit.
Could a check to see if the name is "localhost" or "*.localhost" suffice to satisfy the "SHOULD NOT" requirement?
From the bugzilla comment:
> It seems reasonable to extend that to support any /etc/hosts assertions that fall into 127.0.0.0/8, but that's not something we've heard a whole lot of interest in, though there have certainly been one-off requests.
That does seem reasonable to me, and it covers more than just "localhost" names.
I do understand that most uses probably do not require resolution of localhost names to other than 127.0.0.1. I may also be a little annoyed that I'll have to make some changes to something I've been doing for a long time.
> Could a check to see if the name is "localhost" or "*.localhost" suffice to satisfy the "SHOULD NOT" requirement?
To satisfy that requirement you have to check for those names and, if that's what the name is, _not_ ask the "configured caching DNS server". Which means you need to do what? Calling OS APIs will do that DNS-server thing unless the name is in /etc/hosts. You could manually parse /etc/hosts and see whether it's there, I guess... That's basically Mike West's suggestion.
The parsing of /etc/hosts would presumably have to exactly match the parsing that the system DNS code does, including error recovery behavior, etc. Otherwise you might treat as localhost something that is not mapped to localhost in /etc/hosts or vice versa. This is all doable, but it's not something I'd want to do if I were tasked with this decision, absent strong indicators that this work and the extra maintenance burden and attack surface (every parser ever is attack surface) is really worth it.
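The naive version of that parse is short, which is part of the trap; the hard part is matching the system resolver's behavior exactly:

  import ipaddress

  def hosts_maps_to_loopback(name, path="/etc/hosts"):
      # Naive scan: does `name` map into 127.0.0.0/8 in the hosts file?
      # Deliberately ignores the corner cases (IPv6 ::1, nsswitch.conf
      # ordering, odd whitespace and comment quirks) that the real
      # system resolver handles, which is exactly the divergence risk.
      loopback = ipaddress.ip_network("127.0.0.0/8")
      with open(path) as f:
          for line in f:
              fields = line.split("#", 1)[0].split()
              if len(fields) >= 2 and name in fields[1:]:
                  try:
                      return ipaddress.ip_address(fields[0]) in loopback
                  except ValueError:
                      continue
      return False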
If the OS API calls a DNS server, that's not really the higher-level API’s problem. If you've decided to override OS APIs and do your own ground up implementation, then you basically need to do everything an OS-level API would do if it was behaving correctly. So you either need to parse all the appropriate system files (hosts, nsswitch.conf, etc.) with the appropriate special handling of localhost domains, or have your own equivalent, app specific configuration.
The “what are you supposed to do?” complaint seems to be complaining that replacing the core OS-level name resolution facility, to avoid the possibility that it doesn't comply with one spec recommendation while avoiding loss of other desirable features, involves quite a bit of complexity. Which, yeah, it does. The whole reason to have OS-level services like that is to prevent every app that does networking from having to reimplement basic services.
Keep in mind also that in many cases (e.g. when using DNS over HTTPS), Firefox is in fact not using the OS API, and is directly hitting a DNS server instead. Of course in that case whatever is happening in /etc/hosts is either getting ignored anyway or Firefox is parsing it anyway...
Sure there is. Consulting the local hosts file isn't contacting the configured caching DNS server, so there is no SHOULD or SHOULD NOT violated by consulting hosts and then defaulting to 127.0.0.1 if a more specific address in the loopback range is not provided.
(Relying on a system-level name resolution API also isn't contacting the caching DNS server, and if that API behaves according to spec, including potentially relying on local resources like hosts but not remote DNS servers for localhost domains, that would also work.)
> and if that API behaves according to spec, including potentially relying on local resources like hosts but not remote DNS servers for localhost domains
Yes, but that's only relevant if Firefox were mapping from IP addresses to something. Here it's mapping to an IP address, so 127.0.0.1 is fine (even though, e.g., 127.0.0.42 would also have been fine).
So, does that mean things like WebCrypto (which requires HTTPS to be available) will work in a development environment now? Or did they already work out of the box? I never tried because I assumed they wouldn't work.
Yes, that is exactly what it means: things that require a "secure context" in spec terms, like WebCrypto, will work on http://localhost even though it's not https.
> As a result, resources loaded from localhost are now assumed to have been delivered securely (see Secure contexts), and also will not be treated as mixed content (bug 1220810, bug 1488740)."
That's dangerous. I don't want stuff loaded from localhost to have any different security context than stuff not loaded from localhost.
Firefox is way too valuable as a hedge against Google hegemony to let die. If they ever lose the Google search money hose and become threatened I do expect a Google competitor to realize that and step in with some funding.
Microsoft basing Edge on Chromium was a really stupid move on their part for this reason, and I suspect they are starting to realize it.
That being said, I only expect it to be maintained at a hedge level, not enough to shoot for dominance again. Think of it like Valve's Steam on Linux efforts -- it's not a major effort on their part, it's just enough to keep Microsoft honest.
Also, things like Teams and Skype, at least on Linux, appear to just be Electron webapps. Microsoft's safest long-term cash cow, I think, is still those corporate contracts that combine all their tools into a confusing giant stew for every employee at 1,000+ headcount companies.
Having more instrumentation over the platform they're doing it on is probably reasonable, and realistically Blink is already on every platform with over 1% usage share, so that's a much easier road than supporting Trident, EdgeHTML or Chakra. The engine is not their core product; it's the stuff built on top of it.
I've used Edge on Linux. Other than the weird thrill of seeing software that says "Copyright Microsoft" running on Linux, it's just a typical functional web browser, and that's exactly what they need.
I've been gradually switching to Edge and I'm sure Microsoft is happy with people like me who are too scatterbrained to change the default search engine (of course now I've spent more time writing this comment than I would have making the switch)
this. Firefox isn't a competitive threat for Microsoft; Chrome was. So Microsoft did what they're extremely good at: commoditising a competitive threat where they were losing on market share. They've done it with Linux too. Whether you love them or hate them, you have to recognise the effectiveness of both strategy and execution.
You’re both right, I think: short term they needed users, so adopt Chrome... long term Firefox is essential, so MS would step in as a supporter (and potentially adopt it too, should Google become a problem).
Isn't the reason many applications and libraries use Chromium that Gecko's APIs are too Firefox-centric?
I read a post here on HN a few months (years?) ago where a developer was really trying to use Gecko for their own application and got so frustrated that they gave up and switched to Chromium.
Gecko is architected to be used as a framework, not a library, which means it tries to dictate a lot of aspects of how your code is supposed to be structured. Whereas Webkit/Blink is the other way around.
It predates KHTML/Webkit by several years so some additional legacy baggage is to be expected.
FF's market share was a bit higher back then, though.
I see the main assets of Firefox as being
a) not Google tech
b) a mobile browser with uBlock
c) tech nerds with tons of custom FF addons, never wanting to switch
So I don't see them going away anytime soon, but I also see nothing to stop the decrease of their market share. That is, unless Mozilla focuses on its core values again. They have lost a lot of trust through quite a few shady actions over the years.
None of these is a guarantee of survival, but diversity is important, and it's hard to see the second option disappearing entirely, much like how the BSDs keep surviving.
Imagine what kind of HN thread pops up if Firefox is abandoned by Mozilla.
Also, Firefox is a single product licensed under the MPL, and anyone who wants it to stay that way may support it commercially.
I don't see a big comeback for Firefox unless there's a big backlash against the reduced effectiveness of adblocking due to manifest v3 in chrome. It's not like the days where "tabs and ad blocking" were only available in one browser (well two, but Opera was still (a) charging money or (b) including ads depending on the timeframe) and provided a huge incentive to switch from IE. Sync is the closest any browser feature has come to that level in a decade, but all browsers had that pretty quickly.
Mozilla also doesn't have the brand power to push their browser to people who don't actually care but just want the notification on google.com or youtube to go away, or who had it bundled with Flash Player or their Windows install or their AV program, the way a lot of Chrome's rise happened.
My hope is it bottoms out and finds a new stable equilibrium at brave/vivaldi levels. The question is if that's a large enough position to actually maintain a browser engine.
I'm in the same boat and have the same worries. I give them a monthly donation to support the company, in hopes of showing that there are paying customers they may be able to fall back on in the future. I can't think of anything else I can do to turn back the tide of the advertiser-developed browser. (Yes, I already know the money doesn't go directly to Firefox development today, thanks.)
Thank you for this inspiration, I've created my monthly payment now. I cannot imagine being forced to use anything else than Firefox... (Even though the Android version is barely usable after the rework...)
Oh, I use it as my daily driver mobile browser. And I agree it has improved quite a bit (so that it is now usable, though with a lot of bugs/missing features/inconsistencies).
It's being outpaced by other browsers in a rapidly expanding field, not falling in usage.
Safari and Edge are defaults on iOS, MacOS, and Windows, and Chrome is the default on Android. I imagine we're primarily seeing the effects platform capture everywhere else.
Firefox says they have 200m monthly active users. In 2018, techcrunch wrote about that same report and said "this data shows a downward trend for monthly active Firefox users, which now measure about 250 million, down from well over 300 million last April."
So I would be concerned... they're actively bleeding users, not just being outpaced in a growing market.
It won't die. Google will keep funding it just to say they don't have a monopoly in the browser market. But it will continue its path towards irrelevance in mainstream usage. They will continue to shrink the engineering as budgets are cut, until there are no more silly 20% projects and management goals that drain resources (they won't have that luxury anymore).
And someday Firefox will require a reboot, not of its code base but from a management perspective.
From a user's point of view, Firefox will stay pretty much the same for another 5-6 years at least, both the browser and its corporation. There are still quite a few improvements coming from Rust/Servo in the pipeline, so technically it should be competitive enough.
- From someone who has been watching / following Firefox since before its birth.
It would be bad for Linux users. To my knowledge, it is the only web browser that offers video hardware acceleration without patching.
It works on both Xorg and Wayland.
But TBH, the last few releases from 79 on showed some bugs and regressions that affected my user experience. From 79 to 81-82, I had to disable VAAPI support, otherwise it made video playback stutter.
And in 83, WebRender caused some extension popup windows to be blank.
I hope Linux Wayland user experience will stabilize. But I switched to Firefox full time on my home laptop thanks to hw acceleration support and also tab containers.
Any suggestion for a good open-source browser? Firefox is my main browser (and the only one recommended by privacytools.io), but having a Chromium-based one is good too.
- Brave: it has been pushing opt-in-by-default ads and crypto widgets
Since Tabkit died with XUL and Tree Style Tab is a poor replacement (it crashes often), I use Vivaldi, although it's still not as good as the original Opera or FF Tabkit were in regards to the vertical-tab UX.
Browser UX really seems like 'one step forward, two steps back' over the last two decades.
I don't have the numbers, but I've read somewhere recently that while market share in % drops each year, the overall absolute numbers (millions of users) are somewhat stable.
It's the pie that is expanding (especially on mobile, where Firefox has a hard time competing).
It would be complicated due to Mozilla's structure. Mozilla Foundation is a non-profit that owns the for-profit Mozilla Corporation. You can donate to the foundation but they can't use the money for Firefox.
But the fundamental reason is that donations are only ever going to be a drop in the ocean. Google gives them $400m/year. Mozilla Foundation donations are like $3m/year. Even Wikipedia donations are only about $100m/year and they have way more users and push really hard.
Probably nothing, but again I suspect they don't bother because it would be a drop in the ocean and certainly not worth the "Mozilla is selling out!" backlash you'd get from the sort of people that use Firefox.
I was of the impression that Mozilla had shed their side projects in the face of dwindling money from Google. Is that still not the case?
I am willing to donate to Mozilla in spite of their mismanagement given that we are creeping ever closer to another browser monopoly. Ignoring bad financial decisions, Firefox is a good product and I've been using it exclusively on PC since Quantum came out. I feel that people here hold Mozilla to some impossibly high standards and turn it into outrage porn. I mean, I don't see top-ranked comments on HN about threatening to stop using Chrome because of whatever sleazy thing Google did last.
Firefox should sign a deal with the EU and get funding from them. Their selling point could be "Unlike other browsers, Firefox won't send data to the US and will enforce GDPR even on sites that aren't compliant". Otherwise Firefox will be an expense account of some Google exec and be treated as such.
Compete or die. I have 2 computers that can't run Firefox at a usable speed. No way to figure out the problem.
My fresh computer works fine.
And I'm currently using the Android app. It's buggy, but it could be Facebook or the Google keyboard. Chrome has different problems. DuckDuckGo has been my newest install; I still need to transfer over.
My basic fear is that it becomes yet another Chromium spawn some ~5 years down the line. Recent dramas surrounding FF on Android and unasked-for UI changes on desktop surely didn't win FF any new users... But... I'm still clinging to it.
They completely redesigned it about a year ago, making it much faster but losing add-on compatibility. So pretty much the same thing that happened on desktop with v57, except a bit worse: only 17 add-ons are currently available in nightly.
Still more than what's available on Chrome and Safari on mobile, so it always remained my browser of choice on a phone.
Thanks for the info! What a relief, I'm currently still on v68 to not lose uMatrix, bypass paywalls, amp redirect to html, old reddit redirect and others.
Right, but choosing between power user features and stability is clearly worse as a user than not having to choose between power user features and stability.
Also isn't there more telemetry on by default in nightly?
Installing unsupported add-ons also risks instability. Mozilla didn't restrict the initial list of supported add-ons because they don't want to allow add-ons. They continue to expand the list of add-ons supported in Firefox for Android as more add-ons and APIs are tested.
Mozilla uses Firefox telemetry to fix bugs, identify areas to optimize, and guide product decisions. How else can Mozilla know which devices or problems to prioritize?
It's not a good idea. That's also why it's very important to provide updates that do not disable functionality important to the user. That way you don't force users to choose between security and functionality.
Mozilla still does not allow non-store extensions in their new Firefox. Until they're back, I have to keep old Firefox around... which sucks.
I keep a Firefox Developer Edition which I use for reading documentation, news, etc. It is fully stocked with all my favorite extensions except Bitwarden.
My main browser is almost bare-bones: just Bitwarden, uBlock Origin, etc.
As for why: I'm conservative in more than one way. The more separation I can add between customer data and auto-updating extensions, the better.
Re: market share, one problem is developers. Web-oriented devs who deal with node.js invariably must use Chrome simply for the server-side devtools, and then stick with it, leading to 'only in Chrome' sites and 'all the cool kids' / new developers using Chrome.
Would love to see a well-supported SpiderMonkey-based server-side JS engine, an Electron replacement, and corresponding Firefox debug tooling.
> Native support for macOS devices built with Apple Silicon CPUs brings dramatic performance improvements over the non-native build that was shipped in Firefox 83: Firefox launches over 2.5 times faster and web apps are now twice as responsive
You will need to do a browser reboot after updating to get the ARM64 version instead of the Rosetta version[1]
Ever since I switched to my Apple Silicon Mac I've been using Safari, with the unpleasant side effects that Apple seems to spy on passwords (the browser freezes unless it remembers your password) and that I get YouTube ads.
Is anyone else worried Firefox on Android is becoming increasingly unusable? How well supported is it by Mozilla, I wonder.
I often find pages render blank until I refresh, and scrolling will often hang for multiple seconds. I was playing "Jackbox Games" (which is played via the web and a PC), and those of us on Firefox found the text boxes were broken. Just downloading PDFs was broken for several weeks, and the official advice was to switch to the beta version of Firefox for Android.
Firefox on Android has been solid for me, perhaps it's an extension issue? I've definitely run into extension issues on mobile that don't exist on desktop.
I also only run two: Dark Reader, and uBlock Origin. It's uBlock that I find causes rendering issues on some releases of the Android app, but I haven't seen that happen in a while.
The only thing I do find painful is that the app redesign removed the ability to clear all tabs by close-swiping the app. You can clear all tabs by selecting 'Quit' from the menu, but it's a hassle. I wish we could drop the forced tab persistence when closing the app, as a setting, like it used to be.
uBO just does what the filters in the filter lists dictate, so when you see specific websites broken as a result of uBO enforcing filters, the best thing is to report these cases to the list maintainers so that they can be fixed for all.
(Mozilla employee, though not working directly on mobile)
Mozilla is invested in Firefox on Android for the long haul. We landed a complete re-architecture this year (Fenix). It hasn't been completely painless -- for example, as you seem to have experienced, we've had to split our development time between working on the new engine and maintaining the old -- but Fenix is already better than Fennec in most ways and is significantly easier to improve moving forward.
I'm still stuck with the old Firefox for Android until extensions are generally supported.
I'm really looking forward to that though, because other than extensions the new one looks great.
Same here. Is this slated for a future release? If so, do you know which one?
I've been trying to find an authoritative source to signal when Android FF will support extensions (that previously worked but no longer do) -- any tips much appreciated.
You might be able to answer this - are containers coming to Firefox Mobile at any point? That's the main thing that would make me switch from my phone's default browser.
I'm very unhappy with the new FF mobile. The user interface is worse than before in every(!) aspect. There's not a single improvement. And everything feels slower. And there is no offline browsing. On a mobile browser! WTF?! And there is no way to save websites. On a mobile browser, which has very limited connectivity due to its nature.
This is just sad. There was a time when I was happy to read about a new FF version for the desktop. Now I'm just waiting for them to ruin it like they did with the mobile version.
I'm completely the opposite. I always had Firefox installed on my phone, but it was just too slow and buggy before. When the new FF mobile came out as a preview it pretty much became my only browser; I've used Chrome maybe three times max in the last month. It's way faster in my experience. Ad blocking works great (one of the issues I had with the previous FF on Android: extensions would easily bring it to a crawl), and I like how they reworked the UI. I still can't believe nobody thought about putting the location bar on the bottom before.
I do also see the blank screen rendering from time to time. However, I have encountered similar things on chrome as well, typically a refresh fixes it. There are some webpages that really don't seem to want to render on FF mobile though (not sure if because of UBO or something else), that's the only time I switch to chrome.
> I still can't believe nobody thought about putting the location bar on the bottom before.
This turned out to be a surprisingly big deal to me. Every time I have to use Chrome for some reason I am bothered by how out of the way the location bar is. I think some other browsers have tried to put it at the bottom, but none that are as popular as Firefox or Chrome.
I agree. The tab interface is unusable. The new FF opens lots more new tabs than the old one, but the list is less readable and less usable, and closing tabs is a chore (!) because there is a list-slide animation that makes you miss the next X to close, so you have to wait for the f-ing animation to finish. Bookmarks are inaccessible: you only see a few on the new-tab page, or in an unordered mess as search results when typing something in the address bar. A loading indicator is missing, and loading often hangs for a long time or clicks don't register. I'm seriously researching alternatives; the status quo is hell.
Changing it so that you can't see your bookmarks when typing into the address bar is infuriating. It means if I want to go to a bookmarked page I have to close the tab I'm on (because I really just want to use that tab and I'm done with it), then open a new tab, and then the bookmarks show up.
But they're not actually the bookmarks are they? They're pinned sites which are different in the browser versus desktop. And they're not collections. There are too many "bookmark things"
I used Firefox on Android from the beginning, just like on desktop, but the recent changes pushed me over the edge. It's hard to find a suitable alternative; right now I'm switching between Vivaldi and Kiwi. The current state of Firefox on Android makes me feel really sad.
What's intriguing to me is that there's no primary password support in FF on Android, and yet it allows sync with a FF install protected by a primary password.
DDG using WebView with a built-in ad blocker seems to be the fastest Android web browser I've ever used. I wonder whether it uses Chrome's data compression.
But FF on Android supports extensions, at least the most-used ones, and I hope that with support for more extensions its adoption could grow.
I've had issues with blank pages and scroll hanging occasionally in the years I've used Firefox on Android, but it seems like those issues have become less common for me lately with the last couple updates. Though I also did a big purge of my tabs lately (getting down to 10 from over 200) and I wonder if that decreased the amount of issues.
> Will switching to a different browser allow me to use Flash?
> No. Adobe and other browsers will also end support for Flash at the end of 2020. Even if you install an older version of Firefox, the Flash plugin itself will stop loading Flash content after January 12, 2021. See Adobe’s Flash Player End of Life information page for more details.
Adobe:
> Since Adobe will no longer be supporting Flash Player after December 31, 2020 and Adobe will block Flash content from running in Flash Player beginning January 12, 2021, Adobe strongly recommends all users immediately uninstall Flash Player to help protect their systems.
I don't have a horse in the Flash race -- it had a ton of issues but also enabled many content creators to deliver over the web in a way straight HTML/JS was incapable of -- and yes, Flash had its time and that time is now over, but yeah. Agree. Disturbing.
Flash games and animations were a treat of the early/mid-2000s web, and many folks got their start as creators there. Adobe hasn't just stopped distributing a player/plugin for Flash content, they're blocking the existing players from playing existing content. Even if you were an archivist there's nothing to do.
So yeah, this is disturbing. I can go to my city's library and browse microfiche if I want. I can install a Gopher or a Usenet client and connect to those widely-unused protocols that still serve content. But Adobe's reaching back and making all Flash content completely unplayable, even with players that should support it.
The reason it's disturbing is exactly why it's good that it went away. It was a proprietary plugin supported by a single vendor, so when that vendor chooses to stop supporting it, it's gone. We've been lucky that it's been supported for so long, in that regard.
Although in practice it is still hard for independent parties to implement support for all relevant open web standards, at least they are open and already have existing independent implementations.
Allow me to rephrase what you wrote: It's good that Flash will abruptly stop working because that will force people to stop relying on software supported by a single vendor, the problem with which is that it might abruptly stop working, which would be bad.
That's not precisely what I was trying to say. Another attempt: Flash was bound to go away when it was supplanted by something else at some point, and it is a good thing that that "something else" was open standards, and that most content has been built on those rather than Flash for a while now, because the same is unlikely to happen for that.
>Adobe will block Flash content from running in Flash Player beginning January 12, 2021
What exactly does this mean?
Will the Flash Player projector still work after that date?
I still have some old files in my Flash archive that I would like to play occasionally, so I really hope that the projector does not have some kind of time bomb.
Doesn't concern me. Flash has been on an EOL track for a long time now, and I'm actually pretty happy they're being this aggressive about killing it, as otherwise it could linger on indefinitely, causing security bugs across the web.
Not GP, but I think what they find disturbing is that Adobe could block a plugin running on their computer, either by the plugin being timebombed, or by it checking in with a mothership.
Welcome to the world of proprietary software. Flash was always a closed source product controlled by Adobe. I don't find that disturbing because it's incredibly common in the closed source world.
Hell, this is precisely what Gatekeeper does on macOS, in that it reaches out to the cloud to decide if you should be allowed to run software on your machine.
It's even justified on the same basis: enhancing security.
Which is exactly why I'm glad Adobe is being so aggressive.
The death of Flash isn't news. If enterprises haven't reacted to that, frankly, they never will, short of killing it as is being done here. Letting it linger on just ensures it's a source of ongoing security breaches, and I think we've all seen how dangerous those can be (thanks Experian!)
I'm seeing high CPU usage on YouTube after recent updates on Xubuntu. I think it started when they enabled GPU acceleration by default.
Previously I could watch even FHD videos without much fan noise, but after recent updates the fans spin at max speed when playing even 480p-quality vids. Anyone else?
It's very very very simple and quick: there is a video demo and text explaining what to do, and then you can open a bug at https://bugzilla.mozilla.org/enter_bug.cgi?product=Core&comp..., with the profile link (you can log in with a GitHub account or email). This will be triaged and hopefully we'll understand what's up. Please use a Nightly build or a Mozilla-provided build when doing so (so there are symbols). It's just a matter of downloading and untarring the tarball.
Please also attach a copy of `about:support` raw data (button at the top), so we can understand what your configuration is: desktop Linux systems can be pretty diverse and sometimes it isn't clear what works, what doesn't and why.
If you're playing 480p, then youtube might be serving you AV1, and this is (for now) using the CPU (trading a lot less bandwidth for increased CPU usage), you can check the codec it's using by right clicking on the video and clicking on `Stats for nerds`, there is the codec string. You can adjust your codec preferences in your youtube profile I believe.
Just did some tests on Firefox 83, and DOM reflow time grows exponentially with the depth of the DOM tree. With grid elements at depth 10, a single reflow takes seconds on my Ryzen laptop, while on Chromium it is not noticeable at all.
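If anyone wants to reproduce, here's a throwaway generator for that kind of test page (my own sketch of the setup described above, not the exact benchmark):

  # Writes reflow-test.html: nested CSS grids plus a script that forces
  # repeated reflows and reports the average time per forced layout.
  DEPTH = 10

  html = "<div style='display:grid'>" * DEPTH
  html += "<div id='leaf'>hi</div>" + "</div>" * DEPTH
  html += (
      "<script>"
      "const leaf = document.getElementById('leaf');"
      "const t0 = performance.now();"
      "for (let i = 0; i < 100; i++) {"
      "  leaf.style.width = (100 + i) + 'px';"
      "  void leaf.offsetHeight;"  # reading layout forces a synchronous reflow
      "}"
      "document.title = ((performance.now() - t0) / 100).toFixed(2) + ' ms/reflow';"
      "</script>"
  )

  with open("reflow-test.html", "w") as f:
      f.write(html)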
GPU acceleration is not a magic bullet it seems. In our case drawing large SVGs suddenly becomes really slow when GPU rasterization is enabled. Chrome has the same problem. I really hope we will see more polishing in this area and maybe a cooperation between Chrome and Firefox developers?
Thing is, when I play the same video with VLC, which uses HW decoding, it doesn't even break a sweat. So I think there might be something wrong on the browser's part...
There could be something related to fan thresholds that reside directly in the BIOS, so starting to use the GPU even slightly above some minimal limit might raise fan speeds significantly without an actual temperature rise being the cause, as there is some inertia between temperature readings and the real temperature inside the GPU core.
I'm also experiencing increased fan noise on my Ryzen APU-based laptop.
I mostly stopped bothering with in-browser video when mpv with youtube-dl integration came around. Either drag the URL to the player window or use an open-with addon.
This particular example is only slow on Chrome. Even if you move the edge of the shape into the viewport Firefox is much faster than Chrome. But I know from real life examples that Firefox becomes slow if handling large SVGs. I will have to investigate this and file a bug report.
I checked again with our app on Firefox and performance is not that bad. Rendering gets a little slower with large SVGs, but it is still good, and there is not much difference between hardware rendering enabled and disabled. So either I was wrong about slow performance on Firefox or things have improved recently.
I'm on Linux. I stumbled on WebRender from a blog yesterday and enabled it. My observation was the same, high CPU usage, so I then disabled it. Got 84.0 today and gave it another go. It appears to be behaving well so far, so I'm guessing there was some fix in 84.0 that made it work for me.
Yep. We've seen bugs with different window managers, so WebRender is only enabled on GNOME in 84 to minimize the risk of breakage. This restriction will be removed in subsequent releases.
Also, we have it enabled in nightly and beta for many more configurations already :). There are many bugs to fix, but we're working through them. This is the exact same treatment we gave Windows and OSX in previous releases this year.
I know the usual rationale about users blaming Firefox and yadda yadda, but I am rather not convinced this is how things should work on open desktops. If there are clear bugs in other environments, file them, and break them if necessary. The only potential exception I can see is when it's not clear whether it's a bug or not.
Yeah, that seems odd. I wonder if the update doesn't help/involve Wayland people, and if GNOME is an X11 holdout? I haven't been paying enough attention to know.
Recently switched back to Firefox (Nightly) from being an early Chrome adopter. Really haven't found any issues compared to Chrome. Performance is supposedly worse, but nothing I have been able to notice in my workflow.
Not sure if it's this update or the last, but they added the ability to show the bookmark bar ONLY on the new-tab screen, a la Chrome's behavior. I can't stand bookmarks as a side panel like the old FF behavior, and I'm not a fan of tree-style tabs. Having my bookmark folders on the new-tab screen has been great.
I have a Cisco R200 server. It has a built-in management interface called CIMC. (There is a proper name for that type of management software but I can't remember it.)
From these panels, you can pretty much do anything you want to the physical server - Open a console session, change boot order, bios, etc.
On Chrome the animation starts lagging behind after a few loops so audio and video desync badly. On Firefox it’s much worse, since Firefox is naturally slower than Chrome so even the first loop is desynced already.
IIRC (it's been many years) after many loops that animation started desyncing in the real Flash player too. It wasn't terribly noticeable after only a few loops, but the fact that it happened made me think that the audio loop was completely async from the animation.
I just tried the above animation on W10 + uBo + Firefox 83. I went through 3 loops and it seemed to stay in sync. Then I hopped over to check this comment and came back to that tab, and it was out of sync at that time. So I think Tab Switching might be a challenge for Ruffle.
Clearly Ruffle is not perfect, but leagues, furlongs ahead of having to watch old flash content on YouTube via Unregistered Hypercam 2.
I had no idea they even still supported Flash?! I thought they changed their plugin API so many times that it would have been unsupported many years ago.
The extension API changed (web extensions). The plugin API is still NPAPI. There was a project to replace the API with the one that chrome uses, which has better sandboxing, but eventually they abandoned that project.
My memory may be faulty, but my recollection is there was never a serious attempt to replace NPAPI with Pepper. It was looked into, but it was concluded that the Pepper APIs were too tightly bound to Webkit's implementation to make it possible to implement, and Mozilla instead built their own out-of-process sandbox (which was sorely needed because Flash crashes were the single largest cause of Firefox crashes).
Major browsers can't ditch Flash until Adobe EOLs it. Too many legacy enterprise apps still rely on it, even though they are likely being frantically reimplemented this month. I'm in that boat.
A website my kids' school uses for maths advises that it will stop working except with Google Chrome, so I assume Google is keeping their Flash implementation, and so Flash will keep working.
That web site is wrong. The Flash player in Chrome will stop working on January 1st, and it will be removed from the browser entirely in an update early next year.
Deprecation in Firefox isn't going to cause that much corporate pressure to finally rewrite those ancient Flash apps. When Internet Explorer drops it, then they have a problem.
"Firefox now uses more modern techniques for allocating shared memory on Linux, improving performance and increasing compatibility with Docker."
Finally, now I can use Firefox again. I have an older box, about 10 years old :-) and I see a noticeable speed improvement on Linux. I only use a few extensions with a dark theme, like uBlock/uMatrix/Dark Reader. I've been using "Ungoogled-Chromium" for the past few years. It's nice to see Firefox improving things on Linux. Will definitely use it more now.
I run a relatively new machine and can still feel the performance improvements made here. Feels really snappy; maybe I will give it a go again as a daily driver.
My advice: stop reinventing the wheel, and if you must reinvent the wheel, make sure it's at least as round as the last one.
I've been using FF since Netscape, and it used to be a few settings and we're off. Now, you have to edit policies.json to get updates stopped, autoconfig.js and firefox.cfg to disable telemetry, legacyUserProfileCustomizations to use userChrome.css to get anything like a proper menu system.
'Lockwise' has dumbed down the login system so much you can't even change field names, making many sites incompatible. Nor can you import logins from a file, or even point to other installations to import from. The add-ons system is like traveling back in time ten years: bugs, and fewer alternatives than ever. I still can't figure out how to stop Tree Style Tabs from stealing mouse focus from the page on mouseover like a widget. And these are problems that were fixed before they were reinvented.
For the first time in 16 years I'm seriously trying alternatives.
These were my views on 83; I hope to hell 84 fixes some of them.
> autoconfig.js and firefox.cfg to disable telemetry
You can change this in the configuration dialog, and once disabled, that setting will be preserved through Firefox upgrades. You shouldn't need to change this repeatedly.
> legacyUserProfileCustomizations to use userChrome.css to get anything like a proper menu system.
By "proper menu system", do you mean having the menu bar always visible? You can enable that with View->Toolbars->Menu Bar.
> you have to edit policies.json to get updates stopped
Doing so would result in running an outdated and potentially insecure browser.
I'm running Firefox 83, and those options are still available. It's critically important that Firefox support disabling telemetry.
If you've edited those options via one of the lower-level preference systems, such as autoconfig.js, you may have locked them, which would cause them to show up as greyed out.
Go to about:config and search for toolkit.telemetry.enabled. I checked on a fresh 83 and it's greyed out.
You need to create the file firefox.cfg in the install dir, containing:

  lockPref("toolkit.telemetry.enabled", false);

to override it. And to get that to work, you need to create defaults/pref/autoconfig.js (under the install dir) with:

  pref("general.config.obscure_value", 0);
  pref("general.config.filename", "firefox.cfg");
It's getting harder and harder to make a good browser out of FF. I personally don't mind the work, but with things like Lockwise replacing a working password manager without the needed options, and no addons or workarounds to fix it, I can see it messing up fields in PasswordFox but can no longer do anything about it.
You seem to be diving directly to the low-level settings. I don't know why those would be locked, but the telemetry checkboxes in the UI work just fine, and aren't disabled. And once disabled, they'll stay disabled. Firefox is never going to prevent people from disabling telemetry.
>Doing so would result in running an outdated and potentially insecure browser. //
No thanks.
MS pulled that shit.
It's _my_ computer. That's a principal reason I installed Phoenix in the first place.
I don't auto-update because people running projects do what for me are stupid things like break all the add-ons, knock holes through my adblocking, remove key settings, add non-removable addons, etc., etc., and they do it without warning or option.
Not to mention the UI changes which annoy and confuse elders that I support.
I don't have time for supporting forced beta testers (family & friends, I'm a home user) so auto-update stays off for projects that think it's their device one is using.
I truly love Firefox for what it (should) mean: an open-source browser that fights for an open and diverse web. Especially now that everything is Chromium-based.
However, I'm very disappointed with the corporate way it is managed: lots of money for the CEO while development funds get cut.
Things I would absolutely buy from Mozilla: private internet services
- Something like Zoho: Calendar/Mail/Notes replacement for Google services!
- VPN server if available in my country
I want to ditch google/apple/microsoft for something "encrypted/private" first. And self-hosting is just a hassle. Provide me easy, "self-hosted"-like services please.
“macOS users running Cylance antivirus software may see their Firefox installation get corrupted due to being erroneously flagged as malware.”
I run Firefox Nightly on my work mac (with Cylance) and have been seeing this. Luckily IT have happily whitelisted it and reported back to Cylance, but still slightly weird that a major application would be flagged like this.
Cylance is insanely sensitive. In my experience running internal penetration tests, it's been one of the few pieces of security software that's genuinely slowed me down and been a nuisance.
That said, once you figure out its weak spots, it's as useless as all the others, but it doesn't surprise me it'd flag Firefox.
On my new work laptop, I installed Firefox first and haven't touched Google Chrome. I've been happy with this and it was finally what made me move off Chrome.
I see there are many here who work for or follow Mozilla / Firefox closely.
Could someone share:
- Is there any progress on fixing the extensions API? It doesn't have to be exactly equal to the old one, but for example some way to disable tabs on top would be highly appreciated, as I always use Sidebery or TST in one or more browsers (the usual userChrome.css stopgap is sketched after this list). (I generally don't complain much about UX/UI, but this is a real mess.)
- Are management aware that many of us badly wanted to support the Firefox / Servo teams while we just as badly don't want to support the people who fired lots of them, agreed to shady extension deals etc? (Personally I would accept a heartfelt apology and a change in direction.)
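(Re the tabs-on-top point above: the stopgap most Sidebery/TST users pass around is a userChrome.css rule hiding the native tab strip. It requires toolkit.legacyUserProfileCustomizations.stylesheets set to true, and the selector is just the commonly shared one, so it may break in a future release.)

    /* <profile>/chrome/userChrome.css: hide the horizontal tab strip */
    #TabsToolbar {
      visibility: collapse;
    }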
I don't think their revenue per user has really changed; their problem has been the regular drop in users. Your money can't offset millions of lost users, which is why they aren't screaming for donations.
Donations will increase Mitchell Baker's compensation even further and won't help the browser. I think at this moment donations would corrupt Mozilla even further. Better to keep your money until they fix the management.
That supports the Mozilla Foundation, which uses the funds for advocacy, but it doesn't support the development of Firefox, which is done by the Mozilla Corporation.
> .. it should avoid the problems we'd been seeing with overly small /dev/shm in container environments (which were causing serious problems for using Firefox for automated testing of frontend projects).
Docker has a 64 MB default for /dev/shm that can be overridden, but in hosted CIs the override is not always accessible. Chrome uses shared memory for IPC too and trips over the low default in pipelines. They track memfd_create at https://bugs.chromium.org/p/chromium/issues/detail?id=798221
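Where the override is available, it's a single flag (the image name below is a placeholder):

    # raise the /dev/shm default (64 MB) for a one-off container
    docker run --shm-size=1g my-ci-image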
Looks like grid view of open tabs[1] will also be available on Android. It's good to see the Android version improving together with the desktop version, since it's pretty much the only mainstream Android browser to support extensions.
> The Network panel is now able to handle unexpected crashes and render useful debugging details such as a related stack-trace. Users can also easily file a bug report by clicking on the available link to help improve the stability of the tool.
Does this mean it will actually work on initial page load? Pretty annoying that a new tab requires loading the page twice with devtools open...
I have, again, for the nth time, stopped using Firefox a few weeks ago, because it again got much slower at page load than Chrome/Safari. It's sad to always have to switch back and forth because its development is so unstable.
>Firefox 84 is the final release to support Adobe Flash.
My newgrounds nostalgia hit me hard after reading this.
Despite any criticism of Flash itself, there was some pretty cool stuff made with it all the same.
I hope they fixed the one thing that has been preventing me from switching to FF as my daily driver. The one thing that I can find a few "fixes" for, but none seem to work for me. Starting up to a frozen window, and then having to open new frozen windows, until eventually a non-frozen window opens. Then having to go clear a bunch of zombie processes out of Task Manager. Happens almost daily, and drives me crazy. /rant
I've found a few "fixes". From recent memory: disabling extensions, and clearing out my profile from AppData. I'll give this version a shot and hope for the best.
I had this too, but it eventually stopped happening on both my workstations at some point during 2020. It could be one of your extensions; is it happening even without any extensions (assuming trying a recent version of FF doesn't fix it already)?
And still no proper support for pinch-to-zoom or proper scrolling with touchscreen laptops or Windows touchpads. Every release I hope they will finally get this extremely basic feature right. Every release I'm disappointed. Every other browser just works, whether that's native functionality or because they put in the work, but Firefox is just absolutely rubbish here.
Check in about:config that apz.allow_zooming is set to true, or do a fresh install and wipe all configs if that doesn't do it; native pinch zoom was added in version 83.
It's a fresh install and that's enabled. Let's be clear here: there is pinch to zoom. But it's an extremely half-assed implementation that is most certainly NOT native and feels terrible. I like Firefox but this has been a deal breaker for me for literally years. I don't understand why their zooming and scrolling can be so good on Android but so awful in Windows.
When you pinch, does it jump between zoom levels in large increments, or is it smooth? The large jumping was the old behavior; check that about:config's browser.gesture.pinch settings are all empty and not mapped to ctrl+ or anything. I haven't tried on Windows, but on macOS the native zoom is smooth and pretty good.
But if that doesn't work, it may hint that pinch gestures aren't being passed to Firefox properly; try pinch-zooming on maps.google.com, and if that doesn't work then maybe there's a deeper issue going on.
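(If you'd rather pin those prefs down in one place, they can go in a user.js in the profile directory. Pref names are as they appear in about:config; empty strings mean the legacy gesture mapping is off and APZ handles the pinch itself.)

    // user.js: make sure pinch isn't mapped to the legacy ctrl+/- zoom commands
    user_pref("apz.allow_zooming", true);
    user_pref("browser.gesture.pinch.in", "");
    user_pref("browser.gesture.pinch.out", "");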
I'm really struggling to understand what you mean by "non-native" other than "different to edge". But even that's not useful: different how? Too fast, too slow, too sensitive, not sensitive enough?
Cannot confirm any of these points (Win 10 Pro, Thinkpad T490p). Everything is buttery smooth and does exactly what I would expect it to do, both zooming and scrolling. (I only tried the touch screen, though.)
Maybe check your touchpad and touchscreen drivers.
It will be tough to keep up with Edge/Safari on their respective home platforms going forward. I've been trying to move from Firefox to Edge on my Windows desktop just because it's clearly the future. Safari works best on my iPhone 12 mini. When I get an M1 MBA, I intend to just use Safari.
I haven't been using Firefox for nearly 20 years because of a rendering engine; I've used it because of its features. In the last 20 years, most people have moved from desktops to laptops and phones, where native-app battery use and performance mean a lot more than features.
Did they fix the bug where you couldn’t search for empty strings using quick search hot keys?
Previously a search for an empty string would go to the quick-search link with an empty query. The previous release wouldn't let you submit the search without a non-whitespace query, which breaks using the quick searches as bookmarks.
The big one here for web-devs has to be "Show Tabbing Order" feature in the Accessibility menu in the dev tools. Being able to see this should save a lot of trial and error.
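For anyone unfamiliar with why this needs a dedicated view: tab order diverges from visual/DOM order as soon as tabindex is involved, which is exactly what the overlay makes visible. A contrived sketch:

    <!-- Visual order: A, B, C. Tab order: C first (positive tabindex wins), then A, B in DOM order. -->
    <button>A</button>
    <button>B</button>
    <button tabindex="1">C</button>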
Since this is a firefox thread, I have a very annoying bug that I haven't had much luck searching on... any luck here?
I really like the picture-in-picture feature of firefox. However, if I have a video in PiP mode and I click pause, randomly about 5-10 seconds later it unpauses and keeps playing. This happens on both Hulu and Netflix.. on multiple computers and has been going on for quite some time.
Seems like such a common thing I'm surprised it hasn't been fixed.
If it were a common thing it would presumably have been fixed, so I'm guessing not many other people are experiencing it. I just tried with Netflix, and it doesn't seem to be happening for me. Do you have any unusual peripherals (that might somehow shift focus to the PiP window and send a space or something)?
I actually just tried again, and Netflix does not seem to be doing it.. only Hulu. It does this on multiple machines so perhaps just Hulu <> Firefox specific.
More useless features. Firefox should add an ffmpeg-based decoder that plays every codec and subtitle format natively in HTML5 video. Chrome is in the same camp; both are adding useless features that nobody but large companies uses.
We have five Windows 10 computers - 3 Intel laptops, 2 with nVidia graphics. 1 Ryzen laptop with nVidia graphics. 1 Ryzen desktop with Radeon graphics. None of them have the tearing issue when watching videos on YouTube using Firefox.
The Ryzen laptop and desktop have 144hz screens. The rest are 60hz.
The localhost-is-always-loopback change is great for local development.