
It's lovely.

Some elaboration: Waypipe takes a completely different approach from X forwarding. X forwarding entails shipping a bunch of X drawing commands across the network. This is incredibly inefficient for modern use-cases, where local rendering would be done through direct rendering instead (more details here: https://superuser.com/a/1217295/58251). Waypipe takes a step back and instead just renders a low-latency H264 video stream that it ships across the network. It won't be pixel-perfect, but a lot snappier! (In principle there's no obstacle to doing it pixel-perfect either, at the expense of bandwidth)




I imagine that historically, sending draw commands would have been highly efficient and quite responsive— show these form controls, this text field with these contents, etc. But as common use-cases have gotten more and more graphical, it's made more sense to just render a buffer and send that instead.


X11 forwarding essentially does that anyway nowadays. Modern toolkits don't actually use most of X11 because it's inefficient and unnecessary with the hardware we have.


And because the graphics primitives built into X11 are... well, primitive. The protocol was designed in the mid-1980s, when display resolutions were low and color displays were usually limited to 256 colors or fewer. As a result, the graphics and text primitives don't support antialiased rendering or any kind of color blending -- so anything using those functions looks rather ugly and dated.


I was under the impression that pretty much anything recent used client-side rendering anyway? For example, anything Qt based.


Nope, Qt can be (and, for instance by default in Debian, is) configured with -xcb-native-painting, which does what you'd expect. I use it that way and it's pretty cool: last week I was debugging an app running on a Raspberry Pi behind a university proxy a few hundred kilometers from home and it worked fine. It also has the very nice benefit of using my .Xresources (e.g. for my local screen's high DPI).


IIRC that only applies to Qt Widgets apps, which are a decreasing share of apps. The XCB native painting flag doesn't work with Qt Quick and actually seems to break the apps that use it, because Qt Quick is rendered with GL. Someone correct me if I've got the details wrong, but for me that flag is pretty broken; it falls into the category of "legacy compatibility only", just like everything else in X11...


Actually the flag was added in some Qt 5.x version due to outcry.


Yeah, because people were using that for some older Qt Widgets apps. AFAICT it was never supported for Qt Quick because that doesn't even use the Qt platform style.


Going to ask: do you know _any_ desktop software using Qt Quick?


Yes. KDE has been slowly adopting it for 11 years (!) now: https://aseigo.blogspot.com/2010/10/plasma-in-18-24-months.h...


As a data point, the immense majority of the apps I use are still Widgets-based.


Yeah they will be if you use a lot of those old style toolbars-forms-and-dialogs apps. For other apps I don't see much interest in Qt Widgets.


Wtf. We really live in different worlds. Most programs you use on your desktop are not "toolbars-forms-dialogs-and-buttons"? What exactly do you use? A Braille interface?


Web browsers? Programs based around images/video? LibreOffice? Krita? Blender? Most "productivity" apps I see are based around a canvas or around images in some way. If they can hardware accelerate that, they will.


All of these are "toolbars-forms-dialogs-and-buttons".

Something like this https://github.com/KDE/kongress/blob/master/screenshots/comb... would perhaps not be "toolbars-forms-dialogs-and-buttons", but that is hardly recognizable as an application you find on your desktop PC.


Except they aren't, really: the main widget in all of those is already an OpenGL-accelerated canvas... If your application is primarily a form or a dialog then yeah, but everything else that has images or video or custom drawing is going to want hardware acceleration. Maybe you spend a lot of time filling out forms or in text editors or writing emails? But I think even those text-based apps will want to be hardware accelerated eventually too; just look at the number of GPU-rendered terminal emulators that are popping up.


> Yeah they will be if you use a lot of those old style toolbars-forms-and-dialogs apps.

that's, like, most apps by a gigantic margin.


I can't say that's been my experience!


I've just checked my desktop and (besides a ton of terminals) I have:

- Zim (GTK widgets note taking app): https://zim-wiki.org/

- Strawberry (Qt widgets music player): https://www.strawberrymusicplayer.org/

- QtCreator (Qt widgets IDE)

- KTimeTracker (Qt widgets time tracker)

- Telegram (Qt widgets, even if it does not look like it ;p)

- Firefox (its own thing, hardware-accelerated)

- VSCode (electron, hardware-accelerated)

My document-writing software is TeXStudio, also in Qt Widgets. The main software I work on, https://ossia.io, is too (the canvas can be rendered with Qt's GL painter, but that only improves performance at 4K+ resolutions; at 2K, Qt's software renderer is faster and has lower latency on every system I could try, and a much lower "idle" energy consumption since it doesn't particularly wake the GPU). Other than that, the software I use occasionally is all Widgets-based and doesn't do GPU rendering (AFAIK): LMMS, QLC+...

Anecdotally, I tried every GPU-accelerated terminal I could find and none felt as good as my trusty old urxvt.


> Qt can (and is, for instance by default in Debian) be configured with -xcb-native-painting which does what you'd expect it to

It seems to me that -xcb-native-painting does the exact opposite of what I "expect Qt to do"? In any case, if it can use server-side X11 painting, that doesn't mean that it doesn't support the opposite, which is what I meant -- unless the Qt devs completely removed the code for client-side rendering, it's basically already ready for that scenario (passing pixmaps), isn't it?

> and has the very nice benefit of using my .Xresources (e.g. for my local screen's hi DPI)

Not quite sure how that is relevant. Isn't it possible to query that from the client, then produce a pixmap in an appropriate resolution? That should be transparent.


> It seems to me that -xcb-native-painting does the exact opposite of what I "expect Qt to do"?

I was talking about the flag. Qt by itself can render however you want it to; if someone wanted to make a platform abstraction that rendered Qt apps with ncurses, that would be possible too.

> it's basically already ready for that scenario (passing pixmaps), isn't it?

Sure; if you use it as a local X11 client (or on platforms other than X11), that's what happens too.


I wonder, is this purely a compile-time thing, or is the Qt binary library capable of switching between the two depending on whether you have a remote display or not? It seems like something that should be possible without forcing you to choose whether the performance of local or remote users is prioritized with a compile-time switch.


> is the Qt binary library capable of switching

Yes, that's how a single Qt binary can run on Wayland, X11, raw EGLFS, or even expose itself over VNC (https://doc.qt.io/qt-5/qpa.html).

You can force a platform with QT_QPA_PLATFORM=<...>, where <...> is one of the plugin names in your /usr/lib/qt/plugins/platforms.

The only limit is that the platform can only be chosen at startup and not changed while the application runs, e.g. if for some reason you wanted to switch a Qt app from X11 to Wayland without restarting it.
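For concreteness, here's a minimal sketch (not from any particular project, names made up) of the kind of app this applies to; the same compiled binary picks its QPA platform plugin at startup from the environment or the -platform argument:

    // hello.cpp -- minimal Qt Widgets app; the platform plugin is chosen
    // when QApplication is constructed, e.g.
    //   QT_QPA_PLATFORM=wayland ./hello
    //   QT_QPA_PLATFORM=xcb ./hello
    //   ./hello -platform vnc    (then connect with a VNC client)
    #include <QApplication>
    #include <QPushButton>

    int main(int argc, char *argv[]) {
        QApplication app(argc, argv);   // QPA plugin is loaded here
        QPushButton button("Same binary, different platform plugins");
        button.show();
        return app.exec();
    }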


This clearly makes sense architecturally, but my goodness these envvars are a nuisance in Nix where you have wrapQtAppsHook and a bunch of other hackery to ensure that the proper plugins are findable at runtime.


Heh, I completely forgot about the ability to switch between Wayland/X11 session which makes this necessary in the first place anyway.

Cool, thanks, I learned something new.

> or even expose itself over VNC (https://doc.qt.io/qt-5/qpa.html)

Hmmm, I wonder how difficult it would be to support Ultimate++'s TURTLE protocol (https://www.ultimatepp.org/reference$WebWord$en-us.html)...


Wtf? Sending drawing commands over the wire is WAY more efficient for the vast majority of cases than sending compressed H264 video, and DEFINITELY much lower latency no matter what the codec latency is! It was even mentioned as a con of Wayland in the recent story "What is wrong with desktop Linux" right here on HN.


> Sending drawing commands over the wire is WAY more efficient for the vast majority of cases than sending compressed H264 video, and DEFINITELY much lower latency no matter what the codec latency is!

How closely have you profiled this? Back in the early 2000s I found there was a fairly strong divide between the applications which used X in the traditional manner, where sending the command across the wire could potentially be faster modulo loss of parallelism, and the growing number of applications which were doing more of the graphics internally so they could control font rendering, antialiasing, blending, etc. directly and were just slinging bitmaps back and forth. I'd expect that would have gotten worse over time rather than better.


> > Sending drawing commands over the wire is WAY more efficient for the vast majority of cases than sending compressed H264 video, and DEFINITELY much lower latency no matter what the codec latency is!

> How closely have you profiled this?

Yes. Anyone who actually experimented with tools like X2Go or Xpra/winswitch vs. plain old `ssh -Y -C`, even back in the mid-aughts, would know that the GP is wrong.


I am not sure which GP you are referring to, since I am a heavy user of the original NoMachine, and it has incredibly better latency even when used over long-latency links (or should I say: especially when used over long-latency links). The new commercial NoMachine also uses H264 and I outright refuse to upgrade to it since it's just worse -- though it makes commercial sense for them since they want to support Windows.

But also just ask anyone who has used RDP versus something like VNC.


I was indeed referring to you!

I'm surprised by your account, and glad that you've given it. This is mine:

In high school, I was a Linux hobbyist and I also had a habit of forgetting and losing documents I needed for school. Additionally, the school's computers were not completely locked down, but the selection of software on them was very limited.

So I played with a lot of remote access solutions for accessing my home desktop, both to do things on it that I couldn't do at school (like browsing an uncensored web or using an IDE I couldn't install on the school computers) and to access my files or even my running applications. I tested a lot of stuff, both with Windows clients and with Linux clients (but I was limited to the former at school).

I tried X2Go, TigerVNC, plain Xpra, SSH (with normal and insecure forwarding, with and without compression), NX (FreeNX on the server, but with the proprietary client alongside several open-source clients), and WinSwitch, which was some nice tooling built around Xpra. (On the LAN, I even messed around with VirtualGL for 3D acceleration with remote applications.)

Xpra was the preferred choice, and WinSwitch using Xpra and H.264 encoding worked best for me when I was away from home. Using plain X forwarding, I noticed that some applications were painfully laggy, especially Eclipse and Firefox. But even with my modest home internet connection (with especially low bandwidth on the uplink), the H.264-based solutions were very usable for any application I threw at them.

> But also just ask anyone who has used RDP versus something like VNC.

For connecting to full desktop sessions, I remember tigervnc being way faster than x11rdp.


Your use case is actually quite close to mine, except that NoMachine was the fastest of them all. I didn't try to run Firefox (why? you can proxy and browse locally!), but I had to use it to run software that could only be run on premises. And, basically, anything that was not NX incurred huge latency that made using it a maddening experience.

For me, nxproxy/nxagent is basically the equivalent of mosh compared to SSH. If you have never felt the need to use mosh, then your link is not a high latency one.

Note that to my knowledge there is no actual implementation of an RDP server for Linux; they all just do VNC over RDP (i.e. send bitmaps).


I remember NX being pretty good. I don't really remember why I ended up using Xpra more, but I used it for individual applications. I usually used NX only for full desktop sessions. And I only had whatever version of FreeNX I could get set up server side, which I remember being difficult, so maybe proper NoMachine NX would have been better.

I also never really dug into how Xpra used image encoders in its whole process. There's this diagram in the old (probably outdated) docs:

https://www.xpra.org/trac/attachment/wiki/DataFlow/Xpra-Data...

It seems more sophisticated to me than just a VNC approach, since it lets you send over the individual windows. There's this note on the current docs:

> Choosing which encoding to use for a given window is best left to the xpra engine. It will make this decision using the window's characteristics (size, state, metadata, etc), network performance (latency, congestion, etc), user preference, client and server capabilities and performance, etc

I'm not sure if that's more selective or cleverer than NX's approach, or basically the same. I wonder now whether it was latency, bandwidth, or processing power on the client (all we had were iGPUs) that was scarcest for me back then.

> For me, nxproxy/nxagent is basically the equivalent of mosh compared to SSH. If you have never felt the need to use mosh, then your link is not a high latency one.

I think plain SSH worked fine for me back then, though, so maybe latency was less of a problem for me than it has been for your use cases. Mosh is great though, especially for cellular internet connections.


> But also just ask anyone who has used RDP versus something like VNC.

I'm not sure I follow. While VNC and RDP are very different protocols, they ultimately both work by sending bitmaps of the server screen to the client.

Recent versions of RDP use H264 encoding, under the name "RemoteFX". I believe commercial VNC implementations do the same, although the free ones seem to be sticking with their own encoding schemes, as they believe they're better optimized for desktop content, but they're both still streaming bitmaps.

In any event, like others have mentioned, X11 forwarding has degraded over time to the point that I only use it when working with locally hosted VMs. While old X software works well with it (xterm, Motif toolkit stuff, things directly using xlib/xcb), modern applications like web browsers and the newer Qt/GTK libraries just don't handle latency well at all. They may be sending vector operations, but they're sending tons of small vector operations, which TCP overhead and network latency really hurt. And applications using libraries like Skia (which includes LibreOffice) are doing local rendering and just pushing bitmaps out.

I think X forwarding is conceptually better, but only if applications are really using X idiomatically, which they aren't.


RDP does not stream H264 -- RemoteFX needs a (v)GPU, so it is not the default. In fact you can go and peruse the local/persistent RDP cache and see it full of standard Windows GUI bitmaps.


My point was that both RDP and VNC display the remote screen by sending bitmaps over the wire. You can view the contents of the RDP bitmap cache and see that, while some UI elements are being "intelligently" selected, it's largely rectangular chunks of the screen, not unlike VNC.


Both can send bitmaps, but RDP can and does send graphics (and even text!) rendering commands. You can even influence how text is antialiased on the client. And even "dumb" VNC benefits from having more specific damage (as in, "areas of the screen that changed") than just a generic "the whole screen changed, here's the new framebuffer, go figure it out".


Only with Qt5 did I start seeing some programs not sending raw X acceleration commands; that's way later than "the early 2000s", and they can still be configured to render using X. With a few glaring exceptions (browsers, games, etc.), most programs still issue X rendering commands, even if they don't use font stuff.

But the point is that it's still much faster to send vector drawing commands over the wire. Programs/toolkits no longer caring to do so is a problem which we paper over by finding efficient ways to compress bitmaps (like H264), but almost by definition you cannot ever surpass the benefits of vector graphics.


The problem with X11 drawing commands is that there are a lot of round trips over the network. The client asks for something, has to wait for the server's response before proceeding, then asks for something else, and so on. XCB tries to mitigate problems that Xlib created that were not in the X11 protocol, but it's still highly inefficient.
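As a rough illustration of the round-trip point (a hypothetical standalone snippet, not taken from any toolkit): xcb exposes the request/reply split directly, so a client can pipeline several requests and only block once, whereas the classic Xlib-style pattern waits for each reply in turn.

    // xcb's cookie/reply split: fire off requests, collect replies later.
    #include <xcb/xcb.h>
    #include <cstdio>
    #include <cstdlib>

    int main() {
        xcb_connection_t *c = xcb_connect(nullptr, nullptr);

        // Both requests go out without waiting for any reply yet.
        xcb_intern_atom_cookie_t a = xcb_intern_atom(c, 0, 12, "WM_PROTOCOLS");
        xcb_intern_atom_cookie_t b = xcb_intern_atom(c, 0, 13, "_NET_WM_STATE");

        // Only now do we block; both replies arrive after roughly one
        // round trip of waiting instead of two.
        xcb_intern_atom_reply_t *ra = xcb_intern_atom_reply(c, a, nullptr);
        xcb_intern_atom_reply_t *rb = xcb_intern_atom_reply(c, b, nullptr);
        std::printf("atoms: %u %u\n", ra->atom, rb->atom);

        std::free(ra);
        std::free(rb);
        xcb_disconnect(c);
        return 0;
    }

Toolkits that interleave many small requests, each needing a reply before the next step, can't hide the latency this way, which is where remote X falls apart.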

In practice, running everything locally and then just streaming the framebuffer ends up being faster.

Not that we couldn't come up with a fast-over-the-network protocol. It's just that it's not really here.


> Not that we couldn't come up with a fast-over-the-network protocol. It's just that it's not really here.

In practice the "fast-over-the-network" protocol ends up being HTTPS + HTML + CSS + JavaScript, with a web browser as the client.


I don't know why the parent is being downvoted.

This is exactly why I am building web applications instead of desktop apps if I expect to run the application on a remote machine.

A very good example is the Deluge torrent client.

Conceptually, it is a desktop GUI application.

But it is designed in a client-server mode that allows the actual GUI to run anywhere and access a backend through a thin protocol, which provides a much better experience than remote X would. It has both a traditional GUI interface and a well-designed web interface.

There is no reason we could not be designing more applications like this.


> The problem with X11 drawing commands is that there are a lot of round-trips over the network.

This is not a problem with drawing commands or with the X11 wire protocol supposedly being inefficient (it is actually very efficient). It is a problem of GTK and Qt, which introduce unnecessary round trips. Both toolkits (which are mainly the reason Linux never took off, because they are pure garbage) never cared in the least about remote applications.

As a comparison, libXt-based toolkits like Motif run perfectly fine over the network and are very responsive.


I'd advise against using hyperbole and dismissing projects as "pure garbage". I tried to ask you this before, but I would be really interested to know what your use case for Motif is in 2021. That seems like a recipe for pain and frustration, significantly more so than any pain and frustration you would have had with GTK and Qt.


Instead of playing to the strengths of the UNIX desktop, they are purposefully omitting them and trying to be a subpar copy of Windows/Mac, all while throwing backwards compatibility out of the window for no apparent reason on a regular basis. They are pure garbage to such an extent that they appear to be deliberate sabotage.


> while throwing backwards compatibility out of the window for no apparent reason on a regular basis. They are pure garbage to such an extent that they appear to be deliberate sabotage.

This is exactly how I feel about Wayland. It is gratuitously incompatible and its proponents regularly lie about X to push it. Makes me think they want to give desktop Linux a finishing blow.


I'm not sure who you are referring to when you say "proponents", Wayland and X are mostly being developed by the same people. I don't see why they would lie about their own software. Also, if you look into it, you'll find everything that was redesigned in Wayland and broke backwards compatibility was done for a reason.


It is always your own software that you want to rewrite, because "the second time you'll get it right".


Well, I think Wayland is going to avoid the "second-system effect" as it was intentionally designed to be smaller and less grandiose than X. http://catb.org/jargon/html/S/second-system-effect.html


But the reality is those features were added to the first thing for reasons. Which means there's user pressure to add them again... which means the second system effect is virtually impossible to avoid.

See for yourself:

https://wayland.app/protocols/

And unlike the first system, it doesn't have the experience of use to refine it.


I'm curious as to why you think Motif "plays to the strengths of the Unix desktop" or what those strengths would be. I don't believe Motif has ever been particularly popular among GUI developers, it seems to me the only reason it was used was because it was the only real option on Unix for a while. Also please avoid assuming bad faith and suggesting without evidence that something is "sabotage".


That article wasn't really accurate, most clients aren't using the X drawing commands anymore. The "vast majority of cases" has moved to using GL or Vulkan, which also doesn't serialize over the network.


The number of Vulkan or GL programs running on a desktop Linux system is close to zero (or at most 1: the compositor). Browsers vary, but on most setups they use practically nothing with GL/Vulkan (blame drivers). GTK+ (even 3) still sends X drawing commands. Qt5 does not by default (it renders by itself, without any GL/Vulkan) but can still be configured to use X.


Qt has Qt Quick and QGraphicsScene which will use a GL backend.

GTK4 has a GL backend by default.

Web browsers and Electron use Skia which has GL and Vulkan backends.

Pretty much every video game is using GL and Vulkan already, or Proton which translates D3D to Vulkan.

Video players are using VA-API directly instead of XvMC.

The only thing in your list that doesn't have a GPU accelerated backend is GTK3, which GNOME is currently migrating away from to use GTK4. To my knowledge GTK3 also tries really hard to use Cairo on the client side for as many things as possible, and generally avoids using X drawing commands. I think it should be fairly obvious by now that graphics developers will prefer to use accelerated APIs whenever possible and don't care at all for "network transparency" if it means they have to use an outdated and mostly inadequate API.


I have absolutely no program using Qt Quick on my desktop, nor can I actually remember the name of a single one. Which is funny, since I actually was a Qt Quick developer in the past and can tell you of a shitton of embedded platforms (i.e. cars) that use Qt Quick. Just no desktop programs.

I do not have a single GTK4 program on my desktop either, and I use a quite up-to-date rolling distribution.

Skia having a GL backend does not mean it is used. Firefox does not use it on almost any Linux platform; it still blacklists even the FOSS drivers. Tested today with upstream v93 and the radeon open-source driver.

No idea why you bring VA-API into this. It is practically the same thing as XvMC with a much more generic API, and it is also not necessarily Vulkan or GL. Ironically this is also one area where intelligent remoting tools win (they can send the original compressed video stream down the wire; RDP does it), whereas a plain H264 stream will be forced to recompress, increasing latency at best.

Cairo is also backed by X rendering and this is the default.

Basically, if I ignore the compositor and games, I do not have _any single program_ on my system which uses GL or Vulkan for 2D graphics. Not surprising: in my experience, using GL for 2D graphics (i.e. arcs, lines, etc.) usually ends up in a big slowdown -- and a big increase in memory usage and crashes. It is mostly worth it only when you do pure texture manipulation like scaling, rotating, etc., i.e. final compositing or layering.

And, if, in addition to the above, I ignore the browser, and set the corresponding Qt flag, then _all programs_ in my system render using X rendering.

Easily tested because the performance difference is abysmal when using NX.


I would say that's probably something specific to your setup, and you may want to try some more apps, or ask the developers of your apps what their plans are. If you use Plasma, then you are using some Qt Quick software. Currently only the GNOME extensions panel is ported to GTK4, but more things are aimed for GNOME 42. In Firefox you need to enable WebRender, which is still beta but should be stabilized soon. I used VA-API as an example because that is another thing that only works locally and doesn't touch X. If you are using VA-API to output video to an X window, then it explicitly won't send the original compressed video stream, because the point of VA-API is to use hardware decompression. If you want to stream video, then X and VA-API are both the wrong tools; you have to use something like GStreamer or Phonon.

"Cairo is also backed by X rendering and this is the default."

This is incorrect: Cairo X rendering only happens if you use the Cairo Xlib surface, which GTK3 only uses in some circumstances.
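To illustrate the distinction (a hypothetical standalone snippet, not taken from GTK): with an image surface all drawing happens in client-side memory, while the Xlib-surface variant turns the same drawing calls into X requests.

    // Client-side Cairo rendering: pixels live entirely in this process.
    #include <cairo/cairo.h>

    int main() {
        cairo_surface_t *img =
            cairo_image_surface_create(CAIRO_FORMAT_ARGB32, 256, 256);
        cairo_t *cr = cairo_create(img);

        cairo_set_source_rgb(cr, 0.2, 0.4, 0.8);
        cairo_paint(cr);                          // no X requests involved
        cairo_surface_write_to_png(img, "client-side.png");

        cairo_destroy(cr);
        cairo_surface_destroy(img);
        return 0;
    }
    // The server-side path would instead create the surface with
    // cairo_xlib_surface_create(dpy, drawable, visual, w, h), and the
    // same cairo_paint() would then go out as X RENDER requests.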

Sure, maybe you're using a lot of applications that don't use GL or Vulkan now. But if they are being actively developed, they are probably actively moving towards it. We can revise my original comment if you think it was wrong and want to make it more relevant to your situation: the "vast majority of cases" has moved to using GL or Vulkan or is taking steps to move towards it.


Well, it is a big difference. Someone was saying below that the vast majority of rendering has been done "client-side" since the 2000s, and it actually couldn't be farther from the truth. Even in 2021, the vast majority of programs are still rendering server-side.

And I have also been reading claims that programs are going to switch to OpenGL "any time soon" since the 2000s, with people always backing out of the idea due to "drivers" or the like. My experience trying to accelerate 2D programs with OpenGL has always been a disaster anyway. Maybe Vulkan is more suited to 2D rendering, but I would be surprised.

That is why I don't believe in that argument. Server-side rendering _is_ working as of today. Most programs do not use OpenGL at all. Wayland is breaking all of this; it's not that it was already broken.

> If you use Plasma, then you are using some Qt Quick software.

I was perusing the KDE source and the only program I could find is plasma-widgets. Not surprising: the only program in the entire KDE desktop that uses Qt Quick is a widgets program. It's basically a layering program.

> Currently only the GNOME extensions panel is ported to GTK4

I use latest released version of Gnome and it is not.

> Cairo X rendering only happens if you use the Cairo Xlib surface, which GTK3 only uses in some circumstances.

i.e. ALWAYS when using X as the backend. It is the default. What else are they going to use, the PostScript backend?


"And, I have also been reading the claims of programs going to switch to OpenGL 'any time soon' since the 2000s, with people backing out of the idea always due to 'drivers' or the like."

I really don't know what else to tell you. Like I said, GTK4 and Qt are already using it. Chrome/Electron is using it, and Firefox will have it very soon. Wayland didn't break anything here; that and improvements in the drivers were just the missing pieces needed to finally complete the transition. Really, developers have been trying to ditch X for an extremely long time; you just said you've been hearing them talk about it for 20 years. Well, as you know, it takes a long time to rebuild everything.

"I was perusing the KDE source and the only program I could find is plasma-widgets"

I can think of several: KDE Connect, KSysGuard, System Settings, Kamoso, Kongress, there are more but I can't remember all of them! And yes, the entire shell also uses QML and Qt Quick.

"I use latest released version of Gnome and it is not."

This happened in GNOME 40 so your distro may be behind. See some docs from like 6 months ago: https://gjs.guide/extensions/upgrading/gnome-shell-40.html#p...

"i.e. ALWAYS when using X as backend. It is the default. What else are they going to use, the PostScript backend? "

No, the image backend is probably what you would consider the default, because it works everywhere and can be used for off-screen rendering. In my experience Cairo Xlib surfaces are actually pretty uncommon, because client-side operations are done so frequently.


> I really don't know what else to tell you. Like I said, GTK4 and Qt are already using it. Chrome/Electron is using it, and Firefox will have it very soon.

Well, then don't repeat exactly the same argument. The point is that most desktop software _as of this day_ does not use client-side rendering, and even less desktop software uses OpenGL for rendering. Most of it uses plain classic widgets. We find some exceptions, but that is hardly enough to claim that most desktop software uses OpenGL, not in 2020 and much less in the 2000s.

> I can think of several: KDE Connect, KSysGuard, System Settings, Kamoso, Kongress,

While KDE Connect and Kongress do have some QML for the interface (https://invent.kde.org/deepakarjariya/kdeconnect-kde/-/tree/...), I have not been able to find any QML whatsoever for the rest (e.g. https://github.com/KDE/ksysguard).

Kongress looks like a widget anyway. You can easily recognize these programs by how poorly they integrate with the rest of KDE, and I definitely do not see them with any frequency at all.

> This happened in GNOME 40 so your distro may be behind.

So, apparently, gnome-shell uses it through GJS, which is why I can't find any binary linking directly to gtk4. It's still literally one user.

> No, the image backend is probably what you would consider the default because it works everywhere and can be used for off-screen rendering.

The xlib backend is obviously also capable of off-screen rendering; otherwise all hell would break loose. And the xlib backend is still the default.

Seriously: https://github.com/GNOME/gtk/blob/master/gdk/x11/gdksurface-...

Why don't you just try? It's not hard to get into a situation where you don't have a working OpenGL environment and _all_ software still works. It's not hard to measure the bandwidth used for indirect X. Etc. Etc.


There isn't anything else for me to say, and I am not arguing with you or presenting an argument; this is just a casual conversation. Most of the software I know about uses OpenGL or Vulkan for rendering, and the server-side ones are the exception. I'm repeating it because it didn't seem like you were understanding what I was saying; if you did understand it, then you can disregard the previous comment. You can tell me that you don't use that software, which is fine for you, and I'm happy for you to use whatever suits you, but it's not really a meaningful discussion for us to have either. Please avoid making such comments or suggesting that I don't know what I'm talking about. Of course I can't know exactly what is going on on your computer, so if you want to explain it, then just tell me.

"I have not been able to find any QML whatsoever for the rest"

You won't find some of the QML just from looking at the apps; bits of it are scattered through the KDE frameworks too. I don't think they integrate poorly; work has been done to make them match the Breeze skin.

"It's still literally one user."

Yeah that was kind of a dry run for GTK4 porting. As I said before, everything else is being ported, the next step was to get the support libraries ported over and then everything else can follow. https://gitlab.gnome.org/GNOME/Initiatives/-/issues/26

"The xlib backend is obviously also capable of off-screen rendering"

You usually don't want to do that; it introduces unnecessary round trips when you could just render on the client side and avoid them. That link to the GDK surface code is misleading: even on X11, GTK4 uses the GL renderer by default and is not going to call that or bother creating a Cairo surface. Believe me, I've tried this in a situation without a working OpenGL environment and the performance is degraded.


Then why not use a wire protocol for Vulkan? SPIR-V already exists. It only needs some wrapper code tacked on to work as an X extension, and you have perfectly efficient server-side rendering that potentially works even over the network without breaking anybody's backwards compatibility.


I'm very confused by this comment, SPIR-V is not a wire protocol for Vulkan. And you can't really serialize Vulkan over the network, that breaks things like vkMapMemory.

If you want something like a remote Vulkan, the thing to watch there would be WebGPU.
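To make the vkMapMemory point concrete (a hypothetical fragment, not a complete Vulkan program): the call hands the application a raw CPU pointer into device-visible memory, and the writes that follow are plain memory stores with no API call that a wire protocol could intercept.

    #include <vulkan/vulkan.h>
    #include <cstring>

    // Assumes 'memory' was allocated from a host-visible, host-coherent heap.
    void upload(VkDevice device, VkDeviceMemory memory,
                const void *src, VkDeviceSize size) {
        void *mapped = nullptr;
        vkMapMemory(device, memory, /*offset=*/0, size, /*flags=*/0, &mapped);
        std::memcpy(mapped, src, size);  // ordinary CPU write, invisible to any proxy
        vkUnmapMemory(device, memory);
    }

GLX-style serialization works because every drawing operation is an X request; with Vulkan (and modern GL), much of the traffic is exactly this kind of direct memory access.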


WebGPU translates 1:1 to SPIR-V.


That sentence doesn't really make sense to me, did you mean WGSL translates 1:1 to SPIR-V? "WebGPU" refers to the API, not the shading language.


Yes, I was referring to the "WebGPU Shading Language", which I abbreviated as WebGPU, which should be obvious. The WebGPU API is the boilerplate you need to make WGSL over the wire work. My original proposal was an X extension that, in a similar fashion, makes SPIR-V over the wire work.


I don't see why you would need an X extension, WebGPU works fine within a browser. You don't need to touch X or Wayland or Mesa or anything.


> The "vast majority of cases" has moved to using GL or Vulkan, which also doesn't serialize over the network.

IIRC, GLX did serialize over the network (though AFAIK limited to an older version of OpenGL; the vast majority of cases has moved to a newer version of OpenGL).


Yeah, indirect GLX is limited to OpenGL 1.4 and earlier. It also isn't even enabled by default on recent Xorg releases, you have to opt-into it.


I agree. And that's why Microsoft's RDP is still going.


What about drawing this small bitmap here? Oh wait, it's now animating? Just count the number of those in your average interface and say which is faster: compressing/decompressing on both sides, or uncompressed draw commands that have to transmit the whole thing serially?



