Firefox on Unix is moving away from X11-based remote control (utcc.utoronto.ca)
109 points by signa11 on Oct 29, 2021 | 180 comments



I suspect trying to maintain a remote-X11-based workflow is going to become increasingly annoying as more applications prioritize the Wayland / Xwayland use-case like Firefox has done here.


I'm very ignorant about Xwayland – but I frequently use "ssh -Y" or "ssh -X". Is there a simple alternative?

Will this move break lots of things needlessly? The wayland / X debate is one of those things that I fear will become a bit like vim vs emacs...



It's lovely.

Some elaboration: Waypipe takes a completely different approach from X forwarding. X forwarding entails shipping a bunch of X drawing commands across the network. This is incredibly inefficient for modern use-cases, where local rendering would be done through direct rendering instead (more details here: https://superuser.com/a/1217295/58251). Waypipe takes a step back and instead just renders a low-latency H264 video stream that it ships across the network. It won't be pixel-perfect, but a lot snappier! (In principle there's no obstacle to doing it pixel-perfect either, at the expense of bandwidth)
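Some back-of-the-envelope numbers (all figures here are rough assumptions, not measurements) show why shipping pixels only became practical with compression:

```python
# Streaming a raw 1080p framebuffer vs. a typical low-latency H.264 stream.
# The resolution, frame rate, and bitrate below are illustrative assumptions.

width, height, bytes_per_pixel, fps = 1920, 1080, 4, 60

raw_bits_per_sec = width * height * bytes_per_pixel * 8 * fps
h264_bits_per_sec = 8_000_000  # ~8 Mbit/s, a plausible desktop-streaming bitrate

print(f"raw framebuffer: {raw_bits_per_sec / 1e9:.1f} Gbit/s")
print(f"H.264 stream:    {h264_bits_per_sec / 1e6:.0f} Mbit/s")
print(f"compression factor: ~{raw_bits_per_sec // h264_bits_per_sec}x")
```

That two-to-three-orders-of-magnitude gap is why "just send the pixels" was a non-starter before cheap hardware video encoding.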


I imagine that historically, sending draw commands would have been highly efficient and quite responsive— show these form controls, this text field with these contents, etc. But as common use-cases have gotten more and more graphical, it's made more sense to just render a buffer and send that instead.


X11 forwarding essentially does that anyway nowadays. Modern toolkits don't actually use most of X11 because it's inefficient and unnecessary with the hardware we have.


And because the graphics primitives built into X11 are... well, primitive. The protocol was designed in the mid-1980s, at a point in time where display resolutions were low and color displays were usually limited to 256 colors or less. As a result, the graphics and text primitives don't support antialiased rendering or any kind of color blending -- so anything using those functions looks rather ugly and dated.


I was under the impression that pretty much anything recent used client-side rendering anyway? For example, anything Qt based.


Nope, Qt can be (and is, for instance by default in Debian) configured with -xcb-native-painting, which does what you'd expect it to. I use it that way and it's pretty cool: last week I was debugging an app running on a Raspberry Pi behind a university proxy a few hundred kilometers away from home, and it worked fine. It also has the very nice benefit of using my .Xresources (e.g. for my local screen's hi-DPI).


IIRC that only applies to Qt Widgets apps which is a decreasing number of apps. The XCB native painting flag doesn't work with Qt Quick and actually seems to break those apps which use it, because Qt Quick is rendered with GL. Someone correct me if I've got the details wrong here but for me that flag is pretty broken, it falls into the category of "legacy compatibility only" just like everything else in X11...


Actually the flag was added in some Qt 5.x version due to outcry.


Yeah, because people were using that for some older Qt Widgets apps. AFAICT it was never supported for Qt Quick because that doesn't even use the Qt platform style.


Going to ask: do you know _any_ desktop software using Qt Quick?


Yes. KDE has been slowly adopting it for 11 years (!) now: https://aseigo.blogspot.com/2010/10/plasma-in-18-24-months.h...


As a data point the immense majority of apps I use are still widgets.


Yeah they will be if you use a lot of those old style toolbars-forms-and-dialogs apps. For other apps I don't see much interest in Qt Widgets.


Wtf. We really live in different worlds. Most programs you use on your desktop are not "toolbars-forms-dialogs-and-buttons"? What exactly do you use? A Braille interface?


Web browsers? Programs based around images/video? LibreOffice? Krita? Blender? Most "productivity" apps I see are based around a canvas or around images in some way. If they can hardware accelerate that, they will.


All of these are "toolbars-forms-dialogs-and-buttons".

Something like this https://github.com/KDE/kongress/blob/master/screenshots/comb... would perhaps not be "toolbars-forms-dialogs-and-buttons", but that is hardly recognizable as an application you find on your desktop PC.


Except they aren't really, the main widget in all of those is already an OpenGL accelerated canvas... If your application is primarily a form or a dialog then yeah, but everything else that has images or video or custom drawing is going to want hardware acceleration. Maybe you spend a lot of time filling out forms or in text editors or writing emails? But I think even those text-based apps will want to be hardware accelerated eventually too, just look at the number of GPU rendered terminal emulators that are popping up.


> Yeah they will be if you use a lot of those old style toolbars-forms-and-dialogs apps.

that's, like, most apps by a gigantic margin.


I can't say that's been my experience!


I've just checked my desktop and (besides a ton of terminals), I have :

- Zim (GTK widgets note taking app): https://zim-wiki.org/

- Strawberry (Qt widgets music player): https://www.strawberrymusicplayer.org/

- QtCreator (Qt widgets IDE)

- KTimeTracker (Qt widgets time tracker)

- Telegram (Qt widgets, even if it does not look like it ;p)

- Firefox (its own thing, hardware-accelerated)

- VSCode (electron, hardware-accelerated)

My document-writing software is TeXStudio, also in Qt widgets. The main software I work on, https://ossia.io, is too (the canvas can be rendered with Qt's GL painter, but this leads to better performance only at 4K+ resolutions; at 2K, Qt's software renderer is faster, has less latency on all the systems I could try, and has a much lower "idle" energy consumption as it does not particularly wake the GPU). Other than that, the software I use occasionally is all widgets-based and doesn't do GPU rendering (AFAIK): LMMS, QLC+...

Anecdotally, I tried every GPU accelerated terminal I could find and none felt as good as my trusty old urxvt


> Qt can (and is, for instance by default in Debian) be configured with -xcb-native-painting which does what you'd expect it to

It seems to me that -xcb-native-painting does the exact opposite of what I "expect Qt to do"? In any case, if it can use server-side X11 painting, that doesn't mean that it doesn't support the opposite, which is what I meant -- unless the Qt devs completely removed the code for client-side rendering, it's basically already ready for that scenario (passing pixmaps), isn't it?

> and has the very nice benefit of using my .Xresources (e.g. for my local screen's hi DPI)

Not quite sure how that is relevant. Isn't it possible to query that from the client, then produce a pixmap in an appropriate resolution? That should be transparent.
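The scaling itself is trivial arithmetic; here is a sketch (the function name is made up for illustration, it is not any toolkit's actual API):

```python
# Given the logical widget size and the output's scale factor (which the client
# can query from the display server), compute the physical pixmap dimensions.

def pixmap_size(logical_w, logical_h, device_pixel_ratio):
    """Physical pixmap dimensions for a logical widget size at a given scale."""
    return round(logical_w * device_pixel_ratio), round(logical_h * device_pixel_ratio)

print(pixmap_size(200, 100, 2.0))   # (400, 200) on a 2x hi-DPI screen
print(pixmap_size(200, 100, 1.5))   # (300, 150) with fractional scaling
```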


> It seems to me that -xcb-native-painting does the exact opposite of what I "expect Qt to do"?

I was talking about the flag. Qt by itself can render however you want it to, if someone wanted to make a platform abstraction that would render Qt apps with ncurses that would be possible too.

> it's basically already ready for that scenario (passing pixmaps), isn't it?

sure, if you use it as a local X11 client (or on other platforms than X11) it's what happens too


I wonder, is this purely a compile-time thing, or is the Qt binary library capable of switching between the two depending on whether you have a remote display or not? It seems like something that should be possible without forcing you to choose whether the performance of local or remote users is prioritized with a compile-time switch.


> is the Qt binary library capable of switching

yes, that's how a single Qt binary can run on wayland, x11, raw EGLFS, or even expose itself over VNC (https://doc.qt.io/qt-5/qpa.html)

you can force a platform with QT_QPA_PLATFORM=<...> with <...> being one of the plug-in names in your /usr/lib/qt/plugins/platforms

The only limit is that it can only be chosen on startup but not changed "while" the application runs if e.g. for some reason you wanted to switch a Qt app from X11 to wayland without restarting the app.
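Concretely it looks something like this (the plugin directory below is a common location but varies by distro, and "myapp" is a stand-in for any Qt program):

```shell
# See which QPA platform plugins this Qt installation ships
# (the path is distro-dependent; this is just a common default):
ls /usr/lib/qt/plugins/platforms/ 2>/dev/null || echo "plugin dir not found"

# Force a backend for a single run ("myapp" is hypothetical):
# QT_QPA_PLATFORM=wayland ./myapp   # native Wayland
# QT_QPA_PLATFORM=xcb ./myapp       # X11 via xcb
# QT_QPA_PLATFORM=vnc ./myapp       # serve the UI over VNC
```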


This clearly makes sense architecturally, but my goodness these envvars are a nuisance in Nix where you have wrapQtAppsHook and a bunch of other hackery to ensure that the proper plugins are findable at runtime.


Heh, I completely forgot about the ability to switch between Wayland/X11 session which makes this necessary in the first place anyway.

Cool, thanks, I learned something new.

> or even expose itself over VNC (https://doc.qt.io/qt-5/qpa.html)

Hmmm, I wonder how difficult it would be to support Ultimate++'s TURTLE protocol (https://www.ultimatepp.org/reference$WebWord$en-us.html)...


Wtf? Sending drawing commands over the wire is WAY more efficient for the vast majority of cases than sending compressed H264 video, and DEFINITELY much lower latency no matter what the codec latency is! It was even mentioned as a con of Wayland in the recent story "What is wrong with desktop Linux" right here on HN.


> Sending drawing commands over the wire is WAY more efficient for the vast majority of cases than sending compressed H264 video, and DEFINITELY much lower latency no matter what the codec latency is!

How closely have you profiled this? Back in the early 2000s I found there was a fairly strong divide between the applications which used X in the traditional manner, where sending the command across the wire could potentially be faster modulo loss of parallelism, and the growing number of applications which were doing more of the graphics internally so they could control font rendering, antialiasing, blending, etc. directly and were just slinging bitmaps back and forth. I'd expect that would have gotten worse over time rather than better.


> > Sending drawing commands over the wire is WAY more efficient for the vast majority of cases than sending compressed H264 video, and DEFINITELY much lower latency no matter what the codec latency is!

> How closely have you profiled this?

Yes. Anyone who actually experimented with tools like X2Go or Xpra/winswitch vs. plain old `ssh -Y -C`, even back in the mid aughts already, would know that GP is wrong.


I am not sure which GP you are referring to, since I am a heavy user of the original NoMachine, and it has incredibly better latency even when used over long-latency links (or should I say: especially when used over long-latency links). The new commercial NoMachine also uses H264 and I outright refuse to upgrade to it since it's just worse -- though it makes commercial sense for them since they want to support Windows.

But also just ask anyone who has used RDP versus something like VNC.


I was indeed referring to you!

I'm surprised by your account, and glad that you've given it. This is mine:

In high school, I was a Linux hobbyist, and I also had a habit of forgetting and losing documents I needed for school. Additionally, the school's computers were not completely locked down, but the selection of software on them was very limited.

So I played with a lot of remote access solutions for accessing my home desktop, both to do things on it that I couldn't do at school (like browsing an uncensored web or using an IDE I couldn't install on the school computers) and to access my files or even running applications. I tested a lot of stuff, both with Windows clients and with Linux clients (but I was limited to the former at school).

I tried X2Go, TigerVNC, plain Xpra, SSH (with normal and insecure forwarding, with and without compression), NX (FreeNX on the server, but with the proprietary client alongside several open-source clients), and WinSwitch, which was some nice tooling built around Xpra. (On the LAN, I even messed around with VirtualGL for 3D acceleration with remote applications.)

Xpra was the preferred choice, and WinSwitch using Xpra and H.264 encoding worked best for me when I was away from home. Using plain X forwarding, I noticed that some applications were painfully laggy, especially Eclipse and Firefox. But even with my modest home internet connection (with especially low bandwidth on the uplink), the H.264-based solutions were very usable for any application I threw at them.

> But also just ask anyone who has used RDP versus something like VNC.

For connecting to full desktop sessions, I remember tigervnc being way faster than x11rdp.


Your use case is actually quite close to mine, except that NoMachine was the fastest of them all. I didn't try to run Firefox (why? you can proxy and browse locally!), but I had to use it to run software that could only be run on premises. And, basically, anything that was not NX incurred a huge latency that made using it a maddening experience.

For me, nxproxy/nxagent is basically the equivalent of mosh compared to SSH. If you have never felt the need to use mosh, then your link is not a high latency one.

Note that to my knowledge there is no actual implementation of an RDP server for Linux; they all effectively just do VNC over RDP (i.e. send bitmaps).


I remember NX being pretty good. I don't really remember why I ended up using Xpra more, but I used it for individual applications. I usually used NX only for full desktop sessions. And I only had whatever version of FreeNX I could get set up server side, which I remember being difficult, so maybe proper NoMachine NX would have been better.

I also never really dug into how Xpra used image encoders in its whole process. There's this diagram in the old (probably outdated) docs:

https://www.xpra.org/trac/attachment/wiki/DataFlow/Xpra-Data...

It seems more sophisticated to me than just a VNC approach, since it lets you send over the individual windows. There's this note on the current docs:

> Choosing which encoding to use for a given window is best left to the xpra engine. It will make this decision using the window's characteristics (size, state, metadata, etc), network performance (latency, congestion, etc), user preference, client and server capabilities and performance, etc

I'm not sure if that's more selective or cleverer than NX's approach, or basically the same. I wonder now whether it was latency, bandwidth, or processing power on the client (all we had was IGPUs) that was scarcest for me back then.

> For me, nxproxy/nxagent is basically the equivalent of mosh compared to SSH. If you have never felt the need to use mosh, then your link is not a high latency one.

I think plain SSH worked fine for me back then, though, so maybe latency was less of a problem for me than it has been for your use cases. Mosh is great though, especially for cellular internet connections.


> But also just ask anyone who has used RDP versus something like VNC.

I'm not sure I follow. While VNC and RDP are very different protocols, they ultimately both work by sending bitmaps of the server screen to the client.

Recent versions of RDP use H264 encoding, under the name "RemoteFX". I believe commercial VNC implementations do the same, although the free ones seem to be sticking with their own encoding schemes, as they believe they're better optimized for desktop content, but they're both still streaming bitmaps.

In any event, like others have mentioned, X11 forwarding has degraded over time to the point I only use it when working with locally hosted VMs. While old X software works well with it (xterm, motif toolkit stuff, things directly using xlib/xcb) modern applications like web browsers and the newer Qt/GTK libraries just don't handle latency well at all. They may be sending vector operations, but they're sending tons of small vector operations, which TCP overhead and network latency really hurts. And applications using libraries like Skia (which includes LibreOffice) are doing local rendering and just pushing bitmaps out.

I think X forwarding is conceptually better, but only if applications are really using X idiomatically, which they aren't.


RDP does not stream H264 -- RemoteFX needs a (v)GPU, so it is not the default. In fact, you can go and peruse the local/persistent RDP cache and see it full of standard Windows GUI bitmaps.


My point was that both RDP and VNC display the remote screen by sending bitmaps over the wire. You can view the contents of the RDP bitmap cache and see that, while some UI elements are being "intelligently" selected, it's largely rectangular chunks of the screen, not unlike VNC.


Both can send bitmaps, but RDP can and does send graphics (and even text!) rendering commands. You can even influence how text is antialiased on the client. And even "dumb" VNC benefits from having more specific damage (as in, "areas of the screen that changed") than just a generic "all the screen changed, here's the new framebuffer, go figure it out".
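A toy sketch of what damage tracking buys you (the tile size and 1-byte-per-pixel layout are arbitrary choices for illustration, not any protocol's actual wire format):

```python
# VNC-style damage tracking: instead of resending the whole framebuffer,
# diff old vs. new and ship only the tiles that actually changed.

TILE = 4  # 4x4 pixel tiles

def damaged_tiles(old, new, width, height):
    """Return (x, y) origins of tiles containing at least one changed pixel."""
    tiles = []
    for ty in range(0, height, TILE):
        for tx in range(0, width, TILE):
            for y in range(ty, min(ty + TILE, height)):
                row_off = y * width
                if old[row_off + tx : row_off + min(tx + TILE, width)] != \
                   new[row_off + tx : row_off + min(tx + TILE, width)]:
                    tiles.append((tx, ty))
                    break
    return tiles

w, h = 8, 8
old = bytearray(w * h)
new = bytearray(old)
new[0] = 0xFF            # change one pixel in the top-left tile
new[7 * w + 7] = 0xFF    # and one in the bottom-right tile

print(damaged_tiles(old, new, w, h))  # [(0, 0), (4, 4)]
```

Only two small tiles cross the wire instead of the full frame; RDP's bitmap cache and element hints are a smarter version of the same basic idea.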


Only since Qt5 have I started seeing some programs not sending raw X acceleration commands; that's way later than "the early 2000s", and they can still be configured to render using X. With a few glaring exceptions (browsers, games, etc.) most programs still issue X rendering commands, even if they don't use font stuff.

But the point is that it's still much faster to send vector drawing commands over the wire. Programs/toolkits no longer caring to do so is a problem which we paper over by finding efficient ways to compress bitmaps (like h264), but almost by definition you cannot ever surpass the benefits of vector graphics.


The problem with X11 drawing commands is that there are a lot of round trips over the network. The client asks for something, has to wait for the server's response before proceeding, then asks for something else, etc. XCB tries to mitigate problems that Xlib created and that are not inherent to the X11 protocol itself, but it's still highly inefficient.
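A rough model of why the round trips dominate (the numbers are illustrative assumptions, not measurements):

```python
# Opening a UI element that triggers, say, 40 synchronous X requests over a
# 60 ms link: every request the client blocks on costs one full round trip.

rtt_ms = 60          # a realistic long-distance round-trip time
sync_requests = 40   # requests where the client waits for the server's reply

serialized_delay = rtt_ms * sync_requests   # one awaited reply per request
pipelined_delay = rtt_ms + 5                # batch everything, pay ~one RTT

print(f"serialized: {serialized_delay} ms")
print(f"pipelined:  {pipelined_delay} ms")
```

The bandwidth of the commands barely matters; it's the synchronous waits that turn a snappy UI into seconds of lag.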

In practice, running everything locally and then just streaming the framebuffer ends up being faster.

Not that we couldn't come up with a fast-over-the-network protocol. It's just that it's not really here.


> Not that we couldn't come up with a fast-over-the-network protocol. It's just that it's not really here.

In practice the "fast-over-the-network" protocol ends up being HTTPS + HTML + CSS + JavaScript, with a web browser as the client.


i don't know why parent was being downvoted.

this is exactly why i am building web applications instead of desktop apps if i expect to run the application on a remote machine.

a very good example is the deluge torrent client.

by concept it is a desktop gui application.

but it is designed in a client-server mode that allows the actual gui to run anywhere and access a backend through a thin protocol that provides a much better experience than remote X would. it has both a traditional gui interface and a well designed web interface.

there is no reason we could not be designing more applications like this.
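a minimal sketch of that split, using python's stdlib http server (the endpoint and payload are invented for illustration, this is not deluge's actual protocol):

```python
# The backend exposes application state over a thin protocol; any frontend
# (desktop GUI, web UI) fetches the state and renders it locally.

import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

STATE = {"torrents": [{"name": "debian.iso", "progress": 0.42}]}

class Backend(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps(STATE).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Backend)
threading.Thread(target=server.serve_forever, daemon=True).start()

# a "gui" anywhere on the network only needs the state, not drawing commands:
url = f"http://127.0.0.1:{server.server_address[1]}/state"
state = json.load(urllib.request.urlopen(url))
print(state["torrents"][0]["name"])
server.shutdown()
```

the point being: the wire carries a few bytes of state, and all rendering happens at the client's native speed.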


> The problem with X11 drawing commands is that there are a lot of round-trips over the network.

This is not a problem with drawing commands, or with the X11 wire protocol supposedly being inefficient (it is actually very efficient). It is a problem of GTK and Qt, which introduce unnecessary round trips. Both toolkits (which are a major reason Linux never took off, because they are pure garbage) never cared in the least about remote applications.

As comparison libXt based toolkits like motif run perfectly fine over the network and are very responsive.


I'd advise against using hyperbole and dismissing projects as "pure garbage". I tried to ask you this before but I would be really interested to know what your use case for motif is in 2021. That seems like a recipe for pain and frustration, significantly moreso than any pain and frustration you would have had with GTK and Qt.


Instead of playing the strengths of the UNIX desktop they are purposefully omitting them and trying to be a subpar copy of Windows/Mac all while throwing backwards compatibility out of the window for no apparent reason on a regular basis. They are pure garbage to such an extent that they appear to be deliberate sabotage.


> while throwing backwards compatibility out of the window for no apparent reason on a regular basis. They are pure garbage to such an extent that they appear to be deliberate sabotage.

This is exactly how I feel about Wayland. It is gratuitously incompatible and its proponents regularly lie about X to push it. Makes me think they want to give desktop Linux a finishing blow.


I'm not sure who you are referring to when you say "proponents", Wayland and X are mostly being developed by the same people. I don't see why they would lie about their own software. Also, if you look into it, you'll find everything that was redesigned in Wayland and broke backwards compatibility was done for a reason.


It is always your own software that you want to rewrite, because "the second time you'll get it right".


Well, I think Wayland is going to avoid the "second-system effect" as it was intentionally designed to be smaller and less grandiose than X. http://catb.org/jargon/html/S/second-system-effect.html


But the reality is those features were added to the first thing for reasons. Which means there's user pressure to add them again... which means the second system effect is virtually impossible to avoid.

See for yourself:

https://wayland.app/protocols/

And unlike the first system, it doesn't have the experience of use to refine it.


I'm curious as to why you think Motif "plays to the strengths of the Unix desktop" or what those strengths would be. I don't believe Motif has ever been particularly popular among GUI developers, it seems to me the only reason it was used was because it was the only real option on Unix for a while. Also please avoid assuming bad faith and suggesting without evidence that something is "sabotage".


That article wasn't really accurate, most clients aren't using the X drawing commands anymore. The "vast majority of cases" has moved to using GL or Vulkan, which also doesn't serialize over the network.


The number of Vulkan or GL programs running on a desktop Linux is close to zero (or at most 1: the compositor). Browsers vary, but on most setups they use practically no GL/Vulkan (blame drivers). Gtk+ (even 3) still sends X drawing commands. Qt5 does not by default (it renders by itself, without any GL/Vulkan) but can still be configured to use X.


Qt has Qt Quick and QGraphicsScene which will use a GL backend.

GTK4 has a GL backend by default.

Web browsers and Electron use Skia which has GL and Vulkan backends.

Pretty much every video game is using GL and Vulkan already, or Proton which translates D3D to Vulkan.

Video players are using VA-API directly instead of XvMC.

The only thing in your list that doesn't have a GPU accelerated backend is GTK3, which GNOME is currently migrating away from to use GTK4. To my knowledge GTK3 also tries really hard to use Cairo on the client side for as many things as possible, and generally avoids using X drawing commands. I think it should be fairly obvious by now that graphics developers will prefer to use accelerated APIs whenever possible and don't care at all for "network transparency" if it means they have to use an outdated and mostly inadequate API.


I have absolutely no single program using Qt Quick on my desktop, nor can I actually remember the name of a single one. Which is funny, since I actually was a Qt Quick developer in the past and can tell you a shitton of embedded platforms (i.e. cars) that use Qt Quick. Just no desktop programs.

I do not have a single GTK4 program on my desktop, and I use a quite up-to-date rolling distribution.

Skia having a GL backend does not mean it is used. Firefox does not use it on almost any Linux platform. It still blacklists even the FOSS drivers. Tested today with upstream v93 and using radeon opensource driver.

No idea why you bring VA-API into this. It is practically the same thing as XvMC with a much more generic API. It is also not necessarily Vulkan or GL. Ironically, this is also one area where intelligent remoting tools win (they can send the original compressed video stream down the wire; RDP does it), whereas a plain H264 stream will be forced to recompress, increasing latency at best.

Cairo is also backed by X rendering and this is the default.

Basically, if I ignore the compositor and games, I do not have _any single program_ on my system which uses GL or Vulkan for 2D graphics. Not surprising: in my experience, using GL for 2D graphics (i.e. arcs, lines, etc.) usually ends up in a big slowdown -- and a big increase in memory usage and crashes. It is mostly worthwhile only when you do pure texture manipulation like scaling, rotating, etc., i.e. final compositing or layering.

And, if, in addition to the above, I ignore the browser, and set the corresponding Qt flag, then _all programs_ in my system render using X rendering.

Easily tested because the performance difference is abysmal when using NX.


I would say that's probably something specific to your setup, and you may want to try some more apps, or ask the developers of the apps you use what their plans are. If you use Plasma, then you are using some Qt Quick software. Currently only the GNOME extensions panel is ported to GTK4, but more things are aimed for GNOME 42. In Firefox you need to enable WebRender, which is still beta but should be stabilized soon. I used VA-API as an example because that is another thing that only works locally and doesn't touch X. If you are using VA-API to output video to an X window, it explicitly won't send the original compressed video stream, because the point of VA-API is to use hardware decompression. If you want to stream video, then X and VA-API are both the wrong tools; you have to use something like GStreamer or Phonon.

"Cairo is also backed by X rendering and this is the default."

This is incorrect, Cairo X rendering only happens if you use the Cairo Xlib surface, which GTK3 only uses in some circumstances.

Sure, maybe you're using a lot of applications don't use GL or Vulkan now. But if they are being actively developed, they are probably actively moving towards it. We can revise my original comment if you think it was wrong and want to make it more relevant to your situation: The "vast majority of cases" has moved to using GL or Vulkan or are taking steps to move towards it.


Well, it is a big difference. Someone was saying below that the vast majority of rendering was done "client-side" since the 2000s, and it actually couldn't be farther from the truth. Even in 2021, the vast majority of programs are still rendering server-side.

And, I have also been reading claims of programs going to switch to OpenGL "any time soon" since the 2000s, with people always backing out of the idea due to "drivers" or the like. My experience trying to accelerate 2D programs with OpenGL has always been a disaster anyway. Maybe Vulkan is more suited to 2D rendering, but I would be surprised.

That is why I don't believe in that argument. Server-side rendering _is_ working as of today. Most programs do not use OpenGL at all. Wayland is breaking all of this; it's not that it was already broken.

> If you use Plasma, then you are using some Qt Quick software.

I was perusing the KDE source and the only program I could find is plasma-widgets. Not surprising: the only program in the entire KDE desktop that uses Qt Quick is a widgets program. It's basically a layering program.

> Currently only the GNOME extensions panel is ported to GTK4

I use latest released version of Gnome and it is not.

> Cairo X rendering only happens if you use the Cairo Xlib surface, which GTK3 only uses in some circumstances.

i.e. ALWAYS when using X as backend. It is the default. What else are they going to use, the PostScript backend?


"And, I have also been reading the claims of programs going to switch to OpenGL 'any time soon' since the 2000s, with people backing out of the idea always due to 'drivers' or the like."

I really don't know what else to tell you. Like I said, GTK4 and Qt are already using it. Chrome/Electron is using it, and Firefox will have it very soon. Wayland didn't break anything here, that and improvements in the drivers were just the missing pieces to finally complete the transition. Really, developers have been trying to ditch X for an extremely long time, you just said you've been hearing them talk about it for 20 years. Well as you know it takes a long time to rebuild everything.

"I was perusing the KDE source and the only program I could find is plasma-widgets"

I can think of several: KDE Connect, KSysGuard, System Settings, Kamoso, Kongress, there are more but I can't remember all of them! And yes, the entire shell also uses QML and Qt Quick.

"I use latest released version of Gnome and it is not."

This happened in GNOME 40 so your distro may be behind. See some docs from like 6 months ago: https://gjs.guide/extensions/upgrading/gnome-shell-40.html#p...

"i.e. ALWAYS when using X as backend. It is the default. What else are they going to use, the PostScript backend? "

No, the image backend is probably what you would consider the default because it works everywhere and can be used for off-screen rendering. In my experience Cairo Xlib surfaces are actually pretty uncommon because client side operations are done so frequently.


> I really don't know what else to tell you. Like I said, GTK4 and Qt are already using it. Chrome/Electron is using it, and Firefox will have it very soon.

Well, then don't repeat exactly the same argument. The point is that most desktop software _as of this day_ does not use client-side rendering, and even less desktop software uses OpenGL for rendering. Most of it uses plain classic widgets. We find some exceptions, but this is hardly enough to claim that most desktop software uses OpenGL -- not in 2020, much less in the 2000s.

> I can think of several: KDE Connect, KSysGuard, System Settings, Kamoso, Kongress,

While KDE Connect and Kongress do have some qml for the interface ( https://invent.kde.org/deepakarjariya/kdeconnect-kde/-/tree/... ) , I have not been able to find any QML whatsoever for the rest (e.g. https://github.com/KDE/ksysguard ).

Kongress looks like a widget anyway. You can easily recognize these programs by how poorly they integrate with the rest of KDE, and I definitely do not see them with any frequency at all.

> This happened in GNOME 40 so your distro may be behind.

So, apparently, gnome-shell uses it through GJS, which is why I can't find any binary linking directly to gtk4. It's still literally one user.

> No, the image backend is probably what you would consider the default because it works everywhere and can be used for off-screen rendering.

The xlib backend is obviously also capable of off-screen rendering, otherwise all hell would break loose. And the xlib backend is still the default.

Seriously: https://github.com/GNOME/gtk/blob/master/gdk/x11/gdksurface-... .

Why don't you just try? It's not hard to get into a situation where you don't have a working OpenGL environment and _all_ software still works. It's not hard to measure the bandwidth used for indirect X. Etc. Etc.


There isn't anything else for me to say, and I am not arguing with you or presenting an argument, this is just a casual conversation. Most of the software I know about uses OpenGL or Vulkan for rendering, and the server side ones are the exception. I'm repeating it because it didn't seem like you were understanding what I was saying, if you did understand it then you can disregard the previous comment. You can tell me that you don't use that software, which is fine for you, and I'm happy for you to use whatever suits you, but it's not really a meaningful discussion for us to have either. Please avoid making such comments or suggesting that I don't know what I'm talking about. Of course I can't know exactly what is going on on your computer, so if you want to explain it, then just tell me.

"I have not been able to find any QML whatsoever for the rest"

You won't find some of the QML by looking at the apps; bits of it are scattered in the KDE Frameworks too. I don't think they integrate poorly, work has been done to make them match the Breeze skin.

"It's still literally one user."

Yeah that was kind of a dry run for GTK4 porting. As I said before, everything else is being ported, the next step was to get the support libraries ported over and then everything else can follow. https://gitlab.gnome.org/GNOME/Initiatives/-/issues/26

"The xlib backend is obviously also capable of off-screen rendering"

You usually don't want to do that; it introduces unnecessary round trips when you could just render on the client side and avoid them. That link to the GDK surface code is misleading: even on X11, GTK4 uses the GL renderer by default and is not going to call that or bother creating a cairo surface. Believe me, I've tried this in a situation without a working OpenGL environment and the performance is degraded.


Then why not use a wire protocol for Vulkan? SPIR-V already exists. It only needs some wrapper code tacked on to work as an X extension, and you have perfectly efficient server-side rendering that potentially works even over the network without breaking anybody's backwards compatibility.


I'm very confused by this comment, SPIR-V is not a wire protocol for Vulkan. And you can't really serialize Vulkan over the network, that breaks things like vkMapMemory.

If you want something like a remote Vulkan, the thing to watch there would be WebGPU.


WebGPU translates 1:1 to SPIR-V.


That sentence doesn't really make sense to me, did you mean WGSL translates 1:1 to SPIR-V? "WebGPU" refers to the API, not the shading language.


Yes, I was referring to the "WebGPU Shading Language", which I abbreviated as WebGPU; I thought that was obvious. The WebGPU API is the boilerplate you need to make WGSL over the wire work. My original proposal was an X extension in a similar fashion that makes SPIR-V over the wire work.


I don't see why you would need an X extension, WebGPU works fine within a browser. You don't need to touch X or Wayland or Mesa or anything.


> The "vast majority of cases" has moved to using GL or Vulkan, which also doesn't serialize over the network.

IIRC, GLX did serialize over the network (though AFAIK limited to an older version of OpenGL; the vast majority of cases has moved to a newer version of OpenGL).


Yeah, indirect GLX is limited to OpenGL 1.4 and earlier. It also isn't even enabled by default on recent Xorg releases, you have to opt-into it.


I agree. And that's why Microsoft's RDP is still going.


What about drawing this small bitmap here? Oh wait, it's now animating? Just count how many of those are in your average interface and say which is faster: compressing/decompressing on both sides, or uncompressed draw commands that have to transmit the whole thing serially?


Waypipe isn't relevant in this case, this post is about remotely invoking an RPC call over D-Bus. It's actually pretty trivial to forward a D-Bus socket using ssh -L.
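Roughly like this, as a sketch (`remotehost` and the remote socket path are placeholders; it assumes the common `unix:path=` form of the session bus address, and uses -R since here the browser lives on the local end — -L would be for the opposite direction):

```shell
# Figure out the path of the local session bus socket, then print the ssh
# command that would expose it on the remote host (dry run via echo).
local_bus="${DBUS_SESSION_BUS_ADDRESS:-unix:path=/run/user/$(id -u)/bus}"
local_bus="${local_bus#unix:path=}"     # strip the address prefix, keep the path
remote_bus="/tmp/remote-session-bus"    # placeholder path on the remote host
echo "ssh -R $remote_bus:$local_bus remotehost"
# on the remote side, before running "firefox URL":
#   export DBUS_SESSION_BUS_ADDRESS="unix:path=/tmp/remote-session-bus"
```

With that in place, a `firefox https://...` invocation on the remote host talks to the forwarded bus and the page opens in the local browser.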


I'm still a big fan, personally, of ssh -YC - it works pretty efficiently with Eclipse and other simple GTK text based apps.

With Firefox, it doesn't work as well, and I can understand why. Browsers demand a lot of graphics acceleration. That said, you can still get decent performance by force-enabling xrender (gfx.xrender.enabled;true) as long as they still support it.

Personally, gonna be a bit sad when Wayland breaks all my things (my xdotool scripts, my ssh -YC, my XSDL on Android)


Xdotool is actually (yet another) one of those things in X11 that is somewhat broken and due for a redesign. The APIs that it uses are not really safe for applications to use, they're meant for window managers, so in some situations your scripts can get overridden by the window manager and they just won't work.

For remoting to a smartphone you can just use any old VNC client.


I've run Wayland Firefox over waypipe. It's a bit finicky to set up, but fast enough that you can play videos with no trouble.
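For anyone curious, the basic invocation is just a wrapper around ssh; both machines need waypipe installed, and `remotehost` here is a placeholder (shown as a dry-run echo):

```shell
# waypipe spawns a proxy on each end and tunnels the Wayland protocol
# between them over the ssh connection.
remote=remotehost             # placeholder hostname
echo "waypipe ssh $remote firefox"
```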


> vim vs emacs

Is that actually something to fear? They're both available and active, and systems don't depend on either of them being installed.


I worry about how this trend could impact the BSDs, which typically rely more on Firefox (and derivatives) as a modern browser. Neither Wayland nor D-Bus is native to those systems.


What exactly does "native" mean on BSD here? In my experience, that usually just means something that is written in C, uses BSD make, uses the BSD license, and follows some patterns used in the BSD libc. Which seems to apply to very little popular desktop software including Firefox and all other web browsers.

BSD users seem to be fine with running ports though, which include Firefox, D-Bus and Wayland.


Firefox cares little for user experience, constantly alienating its user base over, and over again.

They are notorious for being the least user friendly of browsers, and that is surely saying a lot.

So taking this as any sort of trend, isn't prudent.


Idk if you can say that. Using Wayland may very well give better UX after all.

My prob with Wayland is more that, as shitty and '80s as X may be, it's the one stable API that almost all F/OSS desktop apps and quite a few highly specialized apps are developed against. The risk is losing it all, especially as new desktop apps aren't coming in this millennium.


That's what XWayland is for, those clients probably will be able to keep working. As long as those clients exist then we'll be able to keep having things like XWayland and XQuartz, it's just translating X11 to the underlying window system after all.


Yes, probably. But still, what's the point of pulling support out from all apps if there are exactly zero new apps forthcoming? With getting hardware support/drivers for X already problematic, Wayland-native apps never able to run on non-Linuxen, and extant desktop app developers not even having the capacity to QA apps on XWayland, the whole thing doesn't look like so good an idea on balance.


Maybe you are looking at it in a skewed way. I don't think there is a real use case for "Wayland-native" apps. There is no reason to do that unless you're building an embedded device or something, in which case you already probably picked Linux as the only kernel you're going to support. Most apps are just using a toolkit or some other kind of abstraction layer. If you don't have an abstraction layer then you probably have a lot of other portability problems to worry about if you want to get it working outside Linux.


Most apps just use a framework, and those frameworks more than likely already have a wayland backend, so you just automatically have a wayland-native app with proper hdpi, multi-monitor, etc support!


X11 and Wayland is like when the previous API is deprecated, but the new API is in beta


X is like trying to fly modern rockets with Apollo-era computers. The computers will work, until they don’t. In which case the only people able to fix them are retired and you have to salvage parts from aerospace museums. Over the years the cruft of replacement parts has accumulated and no one person can really understand the whole thing anymore, and entire sections are not understood and nobody remembers why they’re there or really what they do (but if you get too close to it the lights go off in certain important corner cases).

Wayland is the shiny new SpaceX module that brings a lot of improvements but needs to have its toilet fixed.


> X is like trying to fly modern rockets with Apollo-era computers. The computers will work, until they don’t. In which case the only people able to fix them are retired and you have to salvage parts from aerospace museums. Over the years the cruft of replacement parts has accumulated and no one person can really understand the whole thing anymore, and entire sections are not understood and nobody remembers why they’re there or really what they do (but if you get too close to it the lights go off in certain important corner cases).

And often the response to that it to rewrite it in something "modern," like Electron.

I feel the appropriate response to a situation like you describe is to put in the work to figure the existing thing out, rather than throw it away and put the work into building and debugging a replacement. It's less sexy, but it's the right thing to do.

Now it would be an entirely different matter if the old system could no longer provide adequate performance, etc. I'm only addressing the "it's old and only understood by olds, therefore replace" thought process.


The problem is that X was created over decades for very different eras of computers and it doesn’t really make sense to keep modifying it — there are design limitations to it, and the accumulated complexity has made it unmaintainable. X maintainers have abandoned it and told people to migrate to Wayland. Sometimes you just need a clean slate.


Nah, you figure it out, then throw it out and rebuild a better alternative.

X11 has had too much piled on it already; it needs to go.


Most people who criticise X like you have no idea what they're talking about. The parallels you're drawing are childish and indicate an extremely shallow understanding of the issues at hand.


Well you don’t have to listen to me, the X maintainers have said Wayland is the way forward and they’ve stopped developing X as of several years ago. It’s silly this is still an issue.


They themselves also said there's no technical reason why they couldn't easily keep using X. They just didn't want to. <https://wayland.freedesktop.org/faq.html#heading_toc_j_5>

So it is putting their own selfish desire for fun and personal glory above the good of the linux community and ecosystem. This deserves zero respect.


That’s just bullshit. X is architecturally bad, which makes sense considering that it came from a time when there were no goddamn GPUs at all!

Wayland is closer to the hardware we actually use, so an implementation can actually be more lightweight, and it cuts out all the legacy shit from X and starts from a sane abstraction.

As an actual X maintainer put it: “ You can only apply so much thrust to the pig before you question why you're trying to make it fly at all”


I find it hard to disagree - and I'm (finally) on a full Wayland setup (that's honestly pretty stable/good -- Sway for anyone curious)

The amount of 'lift' involved was absurd. Half of the applications I launch need to be tickled/informed that Wayland is in use.

If I don't do this, most of these things default to XWayland and copying/pasting between that and native things is absolutely broken (often one direction)


I'm in a similar boat as you. I switched from X11 to Wayland (also Sway) and from PulseAudio+Jack2 to Pipewire a couple weeks ago. I am in the extra difficult boat of doing it on a high DPI display (I assume you aren't, or you would have mentioned it, given how annoying it is to get everything working). GDK_DPI_SCALE, ELM_SCALE, QT_SCALE_FACTOR, MOZ_ENABLE_WAYLAND, QT_QPA_PLATFORM, XDG_CURRENT_DESKTOP, XDG_SESSION_TYPE and a few other things are necessary to get everything working.
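For concreteness, the sort of environment block this ends up being — the variable names are the ones listed above, but the values here are illustrative guesses, since the right numbers depend on your display and toolkit versions:

```shell
# Wayland/HiDPI environment for a Sway session (example values only).
export MOZ_ENABLE_WAYLAND=1        # Firefox: use the native Wayland backend
export QT_QPA_PLATFORM=wayland     # Qt: pick the Wayland platform plugin
export QT_SCALE_FACTOR=2           # Qt: 2x scaling for a HiDPI panel
export GDK_DPI_SCALE=0.5           # GTK: compensate font size under scaling
export ELM_SCALE=2                 # EFL apps
export XDG_CURRENT_DESKTOP=sway    # hints for xdg-desktop-portal
export XDG_SESSION_TYPE=wayland
```

Usually this lives in the script or systemd unit that launches the compositor, so every app inherits it.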

Then you have little things like having to set up xdg-desktop-portal for screencasting, and little nits like not being able to screen-share a single window (only a full desktop). It's definitely not a super easy and comfortable transition at the moment. I'd recommend anybody who wants to do it use a DE that takes care of as much of the pain as possible, like modern Gnome or something. Sway itself is nice. I've never used i3, but I've used Awesome for like a decade, so Sway is not a big jump.

Moving to Pipewire was really easy, on the other hand. Mostly drop-in, and I had some annoying audio issues both with PulseAudio and Jack2 that Pipewire completely fixes for me. I absolutely love it, and feel like Linux audio is in an acceptable place for the first time in my life.

edit: oh, and before Firefox 93, I had to force Firefox to run in X11, because extension popup windows were broken on Sway due to a Firefox bug. Much of the testing for Wayland applications is done only on Gnome.


Ouch, I hadn't even considered highDPI!

You're right, haven't crossed that bridge myself. I have three displays but all rather normal pixel density.

Agreed - if someone wants Wayland, I'd have to suggest Gnome (or even KDE, I hear they're doing well).

As usual with window managers, you've got to build your own fun - and moving to Wayland (through Sway) has not been that.

I started with i3 - so I basically had a working config to copy/extend... but the amount of extension necessary - oh buddy.


Another approach would be to integrate Firefox-specific D-Bus proxying inside of waypipe.

Have waypipe listen to where Firefox normally does, if one is spawned on the remote host.

I can imagine this being useful for more things, like portals, going forward: one could imagine forwarding sound, video (both through PipeWire, negotiated over D-Bus) and files this way.


I suppose you could leverage the Firefox remote agent protocol that things like puppeteer use and make yourself a wrapper that emulates the behavior that was deprecated.

I had a somewhat similar situation with WSL where I wanted xdg-open to open Chrome tabs in Windows instead of WSL. Was able to get that working with just WSL->Windows built-in functionality and a *.desktop file.
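Something like the following, in case it helps anyone — the chrome.exe path is just an assumption about a typical Windows install, and the desktop-file name is made up:

```shell
# Register a .desktop entry whose Exec is the Windows Chrome binary (WSL can
# execute .exe files directly), so xdg-open hands http/https URLs to it.
mkdir -p ~/.local/share/applications
cat > ~/.local/share/applications/windows-chrome.desktop <<'EOF'
[Desktop Entry]
Type=Application
Name=Chrome (Windows)
Exec="/mnt/c/Program Files/Google/Chrome/Application/chrome.exe" %u
MimeType=x-scheme-handler/http;x-scheme-handler/https;
EOF
# then register it as the default handler:
#   xdg-settings set default-web-browser windows-chrome.desktop
```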


I've run into the mentioned issue often when an X application in XWayland will fail to open a link in my Wayland Firefox process. I hope they resolve it soon, but I've updated Nightly since this article was written and it still happens, so it looks like not yet. Maybe my installation wasn't built with the necessary flags.


I dug a bit on this just now, and apparently you can force Firefox to always use D-Bus by setting the env variable MOZ_DBUS_REMOTE=1. Finally I can click links in Slack instead of copying them.


Hmm... I had this working, but then it stopped. I think I have that variable set all over the place, but I must not... or something. Must investigate! (I'm on Wayland/Sway)


Thank you very much!


Is it the year of Linux on the desktop yet?


oh yes! this is the single most frustrating thing on Wayland for me right now (not a huge deal, but almost anything else just works these days).


I remember there being a variable that you can pass that fixes it for Firefox.


Why couldn't this have instead been fixed by having Firefox listen on both, and then having remote control check X11 first and then if it didn't find it there, then fall back to D-Bus? Wouldn't that have fixed the problem that motivated the change, but without breaking the use case with X11 forwarding?


It seems to me that this remote control using the X-based protocol was never meant to be a feature. Probably the mechanism was created before D-Bus existed and was meant to be dropped once a better solution was created, but that never happened until now.

If anything, D-Bus seems to be the proper protocol for something like this since, as the author himself said, it is simpler and more reliable.


Hmm, so it would not work on systems without D-Bus? If it is a unix app, why not just use a unix socket?


Dbus is available on most systems now. I don't know the authors' reasoning, but in practice with UNIX socket, you have issues like "which instance owns the socket", "how are the requests distributed", "how does an instance know the current handler died", "how to serialise and version the remote requests", etc. Dbus solves a lot of that without (effectively) designing your own custom underspecified version of half of dbus.


> but in practice with UNIX socket, you have issues like "which instance owns the socket", "how are the requests distributed", "how does an instance know the current handler died"

While the bus-based publish-subscribe paradigm may have some merit in a desktop setting, for direct control client-server is much more straightforward, and in this case these answers are easy. Each instance owns one socket; if there are multiple instances and not a default one, a client needs to know which to connect to (as they are generally not interchangeable), and even if they are, a client can just enumerate the sockets and open the first live one.

> "how to serialise and version the remote requests"

These issues are irrelevant for connection-oriented and reliable unix sockets.

> Dbus solves a lot of that without (effectively) designing your own custom underspecified version of half of dbus.

Not really. A connection-oriented client-server solution is just much simpler than dbus and offers some advantages like implicit state associated with the connection. Dbus makes things much more complicated and then solves some of these complications.


"These issues are irrelevant for connection-oriented and reliable unix sockets."

This comment is really confusing to me. It's not irrelevant, you need to serialize the messages somehow. And if you ever decide you want to add some new messages, then you have to deal with versioning. It's not simple. What you have described is just a cut down implementation of D-Bus. And you are leaving some other things out:

- race-free name resolution, the solution you described has a lot of race conditions

- message ordering, the best way to do this reliably across 3 or more processes is to use a bus-based method

- security, auditing, rate limiting, i.e. how does a system admin manage your service. This is all solved with D-Bus

I've seen lots of complaining about D-Bus over the years about how it's "too complex", but everything in it is there for a reason, and I've never seen anyone actually make a simpler design that works as well. If you implement all the stuff I just talked about, then your solution will be about as complex as D-Bus. So in terms of message buses I actually think it is quite simple compared to other things like ActiveMQ.

Please stop implementing ad-hoc protocols over a socket unless you have a really good reason to; it drives me nuts to see that stuff getting deployed and people going through the motions fixing the same bugs over and over. At the very least you could use protocol buffers, or ASN.1, or re-use the D-Bus wire format, or do anything besides rolling your own.
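To make the serialization point concrete, here's a toy framed message — purely illustrative, not any real browser's wire format. Notice it already needs a version tag and length framing before it could evolve safely, and that's before name resolution, ordering or access control:

```shell
# A toy framed request: version tag, payload length, payload.
msg="open-url https://example.com"
framed=$(printf 'v1 %d %s' "${#msg}" "$msg")
# ...and the receiving side immediately needs parsing code for it:
ver=${framed%% *}        # "v1"
rest=${framed#* }
len=${rest%% *}          # byte count of the payload
payload=${rest#* }
echo "$ver $len: $payload"    # → v1 28: open-url https://example.com
```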


I am curious about this too. On OpenBSD I set

export DBUS_SESSION_BUS_ADDRESS="no"

to keep Firefox from starting dbus, which is a Linuxism ported to BSD just to allow people to execute some applications. That works fine for me.

So since this is removed and Firefox is the only app that I use which needs dbus, I wonder if I will be required to run yet another useless (i.e. not needed by OpenBSD) process just to use Firefox.


It's weird watching the Linux development community go from the position that Windows apps tightly coupling to Windows was bad because it made those programs more of a pain to run under Linux (most of the time requiring Wine) to deciding that sticking with cross-UNIX platforms was just no longer worth their time because Linux "won" the UNIX wars.


DBus (the software, rather than the protocol) isn't required on Linux any more than it is on OpenBSD. It's required by the Plasma and GNOME desktop environments, most KDE and GNOME applications, and any other desktop environments or applications that want to use it.

It's also cross-platform, and runs natively on OpenBSD, NetBSD, and FreeBSD. It does not leverage a compatibility layer, and depending on it is in no way comparable to requiring an emulation layer like WINE.


Don't compare Linux and Windows, that's in bad faith. Developers just want to build the best software they can; it's all done in the open, but that doesn't mean they have to design software for other systems with different capabilities.


I would be really interested to know why people seem to think D-Bus is a "Linuxism". It's a pretty small daemon with an Apache-style license and upstream support for BSD. The only thing I've heard are comments like this where people find that D-Bus is required to run some GTK or Qt application, and those toolkits are viewed as being "Linuxisms", so people put it in that bucket transitively. Is there another reason I missed here?


I build Firefox without dbus for NetBSD, current version works fine.


Can you use Wayland without DBus? I guess in theory yes, but in practice most compositors rely pretty heavily on it for various other things.


Does Sway use D-Bus? I would not guess that. Enlightenment? I have hard time seeing that.


Sway itself may not use dbus, I think that's right, but a desktop workflow built around sway for day-to-day use will typically utilize DBus for e.g., pipewire, a notification daemon, xdg-desktop-portal (screen sharing) at the least. But you are right of course that these are not sway and you don't need to use them. A typical user would need to be motivated to actively avoid it.


Sway does not use d-bus. It listens on a unix socket (different from the wayland socket) for control messages.

Gnome/Kde compositors don't use d-bus either, they instead use wayland extensions for control (since both the compositors and clients have to implement wayland already anyway, this is easier than using d-bus).


The GNOME compositor does actually use D-Bus and it is actually much easier to use D-Bus in it than to add new wayland extensions.


It says if it's built with D-Bus, so presumably you can still just build it without it for now.


No, it wouldn't, because why would you run a system without dbus? That's just silly.


Makes perfect sense - stop using X11 because XWayland needs to be fixed.


Actually it does make perfect sense because X is deprecated by the community and largely abandoned except for XWayland, which is kept around for legacy apps only. Update your workflow to the modern stuff, don't expect the maintainers to keep supporting your use case from decades ago.


Ah yes, surely if we break the entire world again, this year will be the year of the Linux desktop.

The Linux desktop experience is a bit like watching an alcoholic. You try to explain that drinking is making them sick and they should stop, but they just get violent and drink even more.

I think the success of Windows should be proof enough that you can keep compatibility for very old APIs and that it's required for an OS that is to be a platform for other people to build software on, as opposed to a walled garden (Apple) where you sometimes let people build part of it.


The maintainers of X decided they didn't want to keep maintaining X since it has too much baggage. So they designed/implemented a replacement, Wayland, and have given notice that development on X is going to come to a halt. Unless a different group of maintainers steps forward, but that seems unlikely. So like always, open source is a do-ocracy. The people doing the doing get to make the decisions.

Regardless of "Year of the Linux Desktop", what these maintainers are trying to do in general is minimize their time involvement while giving the desktop the features and support (like for 4K monitors) that people want. Secondly, "the Linux Desktop" isn't a single organization the way a commercial entity is. It's a bunch of different groups that all have their own priorities and schedules and use cases and so on. Expecting that process to produce output functionally identical to a single commercial entity's is unrealistic.


> The maintainers of X decided they didn't want to keep maintaining X since it has too much baggage. So they designed/implemented a replacement, Wayland...

> ...what these maintainers are trying to do in general is ... making the desktop have the features and support (like for 4K monitors) that people want.

Was there something about X that was incompatible with 4k monitors?


> Was there something about X that was incompatible with 4k monitors?

The window scaling situation in X is shitty, and basically unsolvable in the framework of X. Wayland solves this problem.


> The window scaling situation in X is shitty, and basically unsolvable in the framework of X. Wayland solves this problem.

By that, do you mean it makes DPI assumptions that you'd want to break with a higher-resolution display?


I mean it's a royal bitch to get different DPI on a screen by screen basis, and impossible to do so in a standardized, consistent way on a window by window basis.


The problem with state-of-the-art X11 is that it (Xinerama or RandR) implements multi-monitor as a single logical monitor with each output stitched together. Getting a different DPI on each monitor is effectively impossible because you only actually have the one logical screen.

I believe it's possible with the older X multihead method, Zaphod mode, to have fully separate X11 screens, which could have different DPIs on each screen. The problem is there's no way to move windows from one screen to the next, and my understanding is that is an architectural limitation of X.


No. Even if there was something, Xorg is open source and can be made to work with anything, it isn't like code is set in stone.


But it kind of is, if the code is so ancient, crufty, and confusing that people don't want to maintain it, which is the case with the X code and a big reason for Wayland in the first place: Wayland was started by X maintainers for the purpose of not having to maintain X. Which they still have to do with Xwayland, but even that will stop as Xwayland was only ever supposed to be a stopgap for legacy applications.


> that *SOME* people don't want to maintain it

FTFY. There are people who want to keep using X, and the only solution to that is to maintain it, as code won't maintain itself. The entire point of open source/free software is that anyone who wants to fix/implement something is free to do so, so for as long as someone wants to keep using X and has the necessary know-how (or the time and will to learn), X will be around.


A counterargument: make and its tab requirement, kept because 20 users shouldn't have had to redo their makefiles once.


It's estimated that more than 6000 people are working on Windows full time.

I don't think that Linux (kernel + DE) has half of that manpower.


> X is deprecated by the community and largely abandoned

Old and functional does not mean deprecated. In most distros and for most users X is not deprecated.

It sounds like you've been living on the bleeding edge for a long time and have lost touch with the reality of linux for most users. The GUI toolkits like Gtk and Qt fully support X. They do not fully support Wayland. When this changes you can call X deprecated.


Yes, it's weird seeing the "X is dead" sentiment considering some of the most popular distros are still using X, and Wayland has only recently caught up on compatibility for workflows like screen capture. I'm unfortunately stuck on X because of my NVIDIA system, but I hadn't been all that interested in Wayland until recently. AFAIK there is also some work to be done on the gaming side, no?


Good news, Nvidia has granted upon thee a driver that might actually work with Wayland


The most popular distros are Ubuntu and Red Hat/Fedora. Both use Wayland by default. Everybody else is part of the long tail.


That is to confuse "long tail" with "doesn't matter", though.

If a supermarket doesn't stock goods in the long tail, its customers shop elsewhere.

Dismiss the "long tail" and we wouldn't have champagne in the supermarket, or classical music on the radio.

The idea that somehow the ever-changing nature of Linux is going to stabilise into something that satisfies _everyone_ seems misguided. Open source code means diverse groups can, and will, decide for themselves what's deprecated, and take on maintenance of the things they want to exist.


If it costs just as much money to support the long tail as it does to support 95% of your user base, guess what? That long tail will go unsupported. This is why the only relevant web standard is "does it work on WebKit (formerly IE)?" And why the only relevant firmware standard is "does it boot Windows?" The major distros ship with Wayland. The major toolkits support Wayland. That's where all the development and maintenance energy is right now. X is still around, but eventually support for it will go away. Once it becomes niche enough that the costs of maintaining an X code path outweigh the benefits of supporting the few users still on X, they will take the X code path out. That's how things go in software.


Well Linux itself is part of the long tail, so by this logic, why bother maintaining such an old fashioned and irrelevant operating system based on POSIX standards from the past?


I agree with you that, "Old and functional does not mean deprecated." but what you asserted is not what deprecation is either. It generally means the developers are giving notice that new development on a feature/feature-set/api is discouraged. That notice was given for X a while ago.

I understand the idea that it would be better if the replacement were 100% ready when that notice was given, but I'm not surprised that wasn't the case with X; those are big shoes to fill.

I can't speak for normal users, but I have seen some reporting that they have been on Wayland for a while now without knowing it.


Fact is, Xorg is more or less effectively abandoned. So yes, while distributions haven't deprecated it yet because there are still a lot of places where X11 is required, this is where it is going.

Hmm, I guess you should let Fedora know that GTK+ and Qt don't support Wayland given they ship with Wayland by default and these... just work.


>GTK+ and Qt don't support Wayland given they ship with Wayland by default and these... just work.

That's not the impression I get watching #gtk on GIMPNet and the bug trackers. There's a lot left to do even if Wayland is enabled now.


I always start Firefox with `-no-remote` to avoid having it talk to other Firefox instances.


Don't know why, but Firefox on my Ubuntu is slow AF. Scrolling pages is just a pain. Graphics acceleration seems to be enabled... Chrome is buttery smooth though, so no idea how to fix FF.


What version of Firefox? Graphics acceleration may be enabled in X11 but not in Firefox. Check the about:support page for status of that in the Graphics section. Last year I believe they enabled it by default in Linux due to improved driver stability, but there is still a blacklist. Chrome and Firefox both have blacklists, each with different card strings in it.

Over here it is: Compositing WebRender (I have layers.acceleration.force-enabled;true and gfx.webrender.all;true)

There's a far less common problem where you are running Firefox in XRDP or ssh -YC and it is being sluggish - for that you have to force-enable xrender (https://bugzilla.mozilla.org/show_bug.cgi?id=1263222).


All the flags to force using acceleration are enabled, but the section with the Graphics driver shows that no driver was found. I do have the latest Nvidia Drivers installed, and my graphic card is a GeForce GTX 1660 Ti Mobile.

* Running Firefox 93.0

*Update: I've rebooted my system, and it shows that my Graphic Card is now in use. Not sure which flag I enabled helped, but pages are buttery smooth as with Chrome. Many thanks!


If you'd flipped any graphics flags in about:config, system reboot was probably overkill (unless you'd updated system drivers too).

Most likely just needed to restart Firefox. But glad you got it working.

And yeah, both Chrome and Firefox on Linux can be rather finicky with graphics cards due to high levels of unreliability in the past. Although the situation has improved over time.


Maybe it's because FF isn't configured to run with wayland? Check "window protocol" in about:support and make sure to run it with the MOZ_ENABLE_WAYLAND=1 env var.


My system runs on X11 though


oh, sorry, I know recent versions of Ubuntu come with Wayland by default


It's actually Pop!_OS 21.04, so pretty recent. Wayland is available if I wanted to use it, but I remember running into other issues when I tried.


You could try disabling acceleration and see if that helps. Sometimes there are just driver issues.


Make a thread on Ask Ubuntu or the forums and try to get help figuring out the cause?


Very glad to hear this, the X11 mechanism has been causing issues for me for a long time, as detailed in the article.


At least as of >10 years ago when I worked on the code, the equivalent feature in Linux Chrome uses a unix socket in your Chrome profile directory. It was a little subtle to implement in that you need to also handle the case where a crashed browser leaves a socket behind.
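Handling the crashed-browser case boils down to: try to bind; if the address is in use, probe it with a connect; a refused connection means the old owner is gone, so unlink the file and bind again. A rough sketch of that logic in Python (not Chrome's actual code; names are made up):

```python
import os
import socket
import tempfile

def bind_singleton_socket(path):
    """Bind a Unix socket at `path`, recovering from a stale socket
    file left behind by a crashed process."""
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        sock.bind(path)
    except OSError:
        # Address in use: probe whether anyone is actually listening.
        probe = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            probe.connect(path)
        except OSError:
            # Connection refused: the previous owner crashed and left
            # the socket file behind. Remove it and bind again.
            os.unlink(path)
            sock.bind(path)
        else:
            sock.close()
            raise RuntimeError("another instance is already running")
        finally:
            probe.close()
    sock.listen(1)
    return sock

path = os.path.join(tempfile.mkdtemp(), "singleton.sock")

# Simulate a crash: bind a socket, then close it without unlinking, so
# the socket file remains on disk with no listener behind it.
dead = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
dead.bind(path)
dead.close()

srv = bind_singleton_socket(path)  # recovers from the stale socket
print("bound:", os.path.exists(path))  # prints "bound: True"
srv.close()
```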


Does that mean you can send links to different instances of Firefox under different profiles, or even to particular windows (e.g. sending a link to the same screen/workspace as the program passing the URL)?

Historically I think the last Firefox to open received all the link-opening requests.


I hear all the pushback on Wayland, with boastful claims about X11.

X11 has been crap since I started exclusively using Linux in 1998

It's junk: unstable, slow, and bloated. Why people cling to the network aspect is beyond me; the model is horrendous, outdated, and little used at that. Better alternatives blew X forwarding out of the water years ago.


I've personally never experienced any problems with X11. What exactly are its problems?

Being old isn't always a bad thing; quite the contrary, things often get old by being quite good and working well.


Sounds like he was considering reinventing SSH tunneling, which is built into SSH.


It’s really nice to see X11 slowly die out.


Somewhat less nice for those of us who are using it and don't appreciate people trying to break our setups.


Software breaks itself simply by being unmaintained. If people were serious about Xorg, it would see more activity.


> Software breaks itself simply by being unmaintained.

It really doesn't; unmaintained software breaks when the world changes around it. Sometimes this is fair, like needing to keep up with changes in how the kernel exposes graphics capabilities, and sometimes it really isn't, like applications deciding to only support wayland when they aren't doing anything that wouldn't work fine on X11.

> If people were serious about Xorg, it would see more activity.

Oh, like people funding new work which allows a new maintainer to work on it? https://news.ycombinator.com/item?id=29017498


Aye, I love suddenly not having X forwarding through SSH. Oh, wait.


Using X11 as some kind of client-to-client IPC was really just a hack in the first place.


Not really, that is the intended purpose of X11 properties after all.


...A 'hack' the system was designed around, sure.

Why do you think it has the odd server-client architecture in the first place?


[flagged]


Depends on the kind of hacker. Some do worse.


No! You're thinking of "crackers." There's a very serious difference. The word "hacker" being co-opted by writers who don't understand it is like the literally-as-figuratively phenomenon: literally doesn't mean figuratively; they're just using the word incorrectly.

https://www.techrepublic.com/blog/it-security/hacker-vs-crac...

http://www.stallman.org/articles/on-hacking.html

http://www.catb.org/jargon/html/C/cracker.html


If popular usage of a word drifts and the new usage is widely and unambiguously understood by most speakers in a target linguistic community, it is the folks writing articles in objection who do not understand its meaning.


So the purpose of this forum is for news and discussion on breaching security systems?


As far as I can tell, the word "cracker" has close to zero uptake outside of the food industry (and, perhaps, people who produce cracks for software). You're far better off letting the context distinguish between the type of hacker, or mentioning it explicitly if the context does not make it clear, since more people will understand what you're talking about.


>Literally doesn't mean figuratively, they're just using the word incorrectly.

Literally has been used to mean figuratively since at least the 1700s[0], and words mean whatever people decide they mean.

[0]https://blogs.illinois.edu/view/25/96439



