Because rendering on the CPU is CPU-intensive when there's a lot of text scrolling by.
Even on an integrated GPU, text rendering is far faster when you use the GPU to render glyphs to a texture and then display that texture, instead of drawing the glyphs individually with the CPU.
It's comical being downvoted for this without comment. I've actually analyzed terminal performance and optimized terminal code, so this is based on first-hand experience. The vast performance difference between terminals is almost entirely unrelated to rendering the final glyphs.
I'll add it to my (unfortunately far too long) backlog (he says and goes on to write an essay; oh well - for a blog post I'd feel compelled to be more thorough). But the quick and dirty summary:
1. The naive way is to render each change as it occurs. This is fine as long as the output, even unthrottled, changes less than once per frame. That's the normal case for terminals, and why people rarely care. It falls apart when you e.g. accidentally cat a huge file to the terminal.
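Roughly, that naive loop looks like this (just a sketch, not any particular terminal's code; parse_and_update() and redraw_everything() are hypothetical placeholders for whatever the terminal's own escape-sequence parsing and painting do):

    #include <unistd.h>

    void parse_and_update(const char *buf, ssize_t n);  /* hypothetical */
    void redraw_everything(void);                        /* hypothetical */

    void naive_loop(int pty_fd)
    {
        char buf[4096];
        ssize_t n;

        while ((n = read(pty_fd, buf, sizeof buf)) > 0) {
            parse_and_update(buf, n);  /* update the screen model... */
            redraw_everything();       /* ...and immediately repaint, even at
                                          thousands of chunks per second */
        }
    }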
Some numbers with the terminals I have on my system (ignoring a whole bunch of xterm/rxvt aliases, e.g. aterm, lxterm etc.): cat'ing a 10MB file in a terminal filling half of a 1920x1024 screen on a Linux box running X takes (assume an error margin of at least 10% on these; I saw a lot of variability on repeat runs):
Take this with a big grain of salt - they're a handful of runs on my laptop with other things running, but as a rough indicator of relative speed they're ok.
Sorted in ascending order. These basically fall into two groups in terms of the raw "push glyphs to the display" bit: those using DrawText or CompositeGlyphs calls or similar, and those using GPU libs directly.
Put another way: everything can be as fast as rxvt(-unicode); anything slower is down to inefficiencies or additional features. That's fine - throughput is very rarely the issue people make it out to be (rendering latency might matter, and I haven't tried measuring that).
Note that calling everything other than kitty and wezterm "not GPU-accelerated" is not necessarily entirely true, which confuses the issue further. Some of these would likely be slower if run with an X backend with no acceleration support; I've not tried to verify what gets accelerated on mine. But this is really a comparison between "written to depend on GL or similar" and "written to use only the OS/display server's native primitives, which may or may not use a GPU if available".
2. The first obvious fix is to decouple the reading of the app output from the rendering to screen. Rendering to screen more than once per frame achieves nothing, since the content will be overwritten before it is displayed. As such you want one thread processing the app output, and one thread putting whatever actually changed within a frame onto the screen (EDIT: you don't have to multi-thread this; in fact it can be simpler to multiplex "manually", as it saves you locking whatever buffer you use as an intermediary; the important part is the temporal decoupling - reading from the application should happen as fast as possible, while rendering faster than once per frame is pointless).

That involves one big blit to scroll the buffer, unless the old content has scrolled entirely out of view (with the "cat" example it typically will, if the rest of the processing is fast), and one loop over a buffer of what should be visible right now, touching only the lines that have changed.

The decoupling will achieve more for throughput than any optimisation of the actual rendering, because it means that when you try to maximise throughput, most glyphs never make it onto screen. It's valid to not want this, but if you want every character to be visible for at least one frame, then that is a design choice that will inherently bottleneck the terminal far more than CPU rendering. Note that guaranteeing that is also not achieved just by naively rendering as fast as possible, so most of the slow terminals do not achieve it reliably either.
Note that this also tends to "fix" one of the big reasons people take issue with terminal performance in the first place. It's rarely that people expect to be able to see it all, because there's no way they could read it. The issue tends to be the terminal failing to pass on ctrl-c fast enough, or continuing to spew buffered output long after the program has terminated. Decouple these loops and skip rendering that can't be seen, and this tends to go away.
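For what it's worth, a rough sketch of the decoupling, assuming the single-threaded "manual" multiplexing variant over the pty fd with poll() and a ~60Hz frame budget (the two helpers are again hypothetical placeholders):

    #include <poll.h>
    #include <time.h>
    #include <unistd.h>

    void parse_and_update(const char *buf, ssize_t n);  /* hypothetical: updates an off-screen cell buffer */
    void render_dirty_lines(void);                      /* hypothetical: repaints only lines marked dirty */

    static long long now_ms(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec * 1000LL + ts.tv_nsec / 1000000;
    }

    void run(int pty_fd)
    {
        const long long frame_ms = 16;  /* ~60Hz */
        long long next_frame = now_ms() + frame_ms;
        struct pollfd pfd = { .fd = pty_fd, .events = POLLIN };
        char buf[65536];

        for (;;) {
            long long wait = next_frame - now_ms();
            int ready = poll(&pfd, 1, wait > 0 ? (int)wait : 0);

            if (ready > 0 && (pfd.revents & POLLIN)) {
                ssize_t n = read(pty_fd, buf, sizeof buf);
                if (n <= 0)
                    break;                 /* child exited or read error */
                parse_and_update(buf, n);  /* cheap: no drawing happens here */
            }

            if (now_ms() >= next_frame) {  /* at most one repaint per frame */
                render_dirty_lines();
                next_frame = now_ms() + frame_ms;
            }
        }
    }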
3. The second obvious fix is to make sure you cache glyphs: server-side if you let the display server render; on the GPU if you let the GPU render. Terminals are usually monospaced; at most you will need to deal with ligatures if you're being fancy. Some OS/display-server provided primitives are always server-side cached (e.g. DrawText/DrawText16 on X renders server-side fonts). Almost all terminals get this right, on X at least, because the easiest alternative (DrawText/DrawText16) caches for you, and when people "upgrade" to fancier rendering they rarely neglect to keep the glyphs cached.
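A minimal sketch of the client-driven variant of this (an XRender glyph set), assuming you rasterize glyphs yourself, e.g. with FreeType - rasterize_glyph() below is a hypothetical stand-in for that, and error handling is omitted:

    #include <X11/Xlib.h>
    #include <X11/extensions/Xrender.h>

    #define MAX_GLYPHS 0x10000

    static GlyphSet glyphset;
    static char cached[MAX_GLYPHS];   /* 1 once a codepoint has been uploaded */

    /* Hypothetical: fills in metrics and returns an 8-bit alpha bitmap whose
       rows are already padded to 4-byte boundaries, as AddGlyphs expects. */
    const unsigned char *rasterize_glyph(unsigned int codepoint, XGlyphInfo *info);

    void glyphcache_init(Display *dpy)
    {
        XRenderPictFormat *a8 = XRenderFindStandardFormat(dpy, PictStandardA8);
        glyphset = XRenderCreateGlyphSet(dpy, a8);
    }

    /* Make sure 'codepoint' exists in the glyph set; upload it on a cache miss. */
    void glyphcache_ensure(Display *dpy, unsigned int codepoint)
    {
        if (codepoint >= MAX_GLYPHS || cached[codepoint])
            return;

        XGlyphInfo info;
        const unsigned char *bitmap = rasterize_glyph(codepoint, &info);
        Glyph gid = codepoint;               /* use the codepoint as the glyph id */
        int stride = (info.width + 3) & ~3;  /* A8 rows padded to 32 bits */

        XRenderAddGlyphs(dpy, glyphset, &gid, &info, 1,
                         (const char *)bitmap, stride * info.height);
        cached[codepoint] = 1;
    }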
4. The third fix is to batch operations. E.g. the faster X terminals all render whole strips of glyphs in one go. There are several ways of doing that; on X11 the most "modern" (which may be GPU-accelerated server-side) is to use XRender with CreateGlyphSet etc. followed by one of the CompositeGlyphs requests, but there are other ways (e.g. DrawText/DrawText16) which can also be accelerated (CompositeGlyphs is more flexible for the client, in that the client can pre-render the glyphs as it pleases instead of relying on server-side font support). Pretty much every OS will have abstractions that let you draw a sequence of glyphs that may or may not correspond to fonts.
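And the batching bit, as a sketch building on the glyph-set idea above: one request per row rather than one per character. The names and parameters are mine, not from any particular terminal; it assumes the ids in 'codepoints' were uploaded beforehand, that 'src' is a solid-fill Picture in the foreground colour, and that 'dst' is the window's Picture:

    #include <X11/Xlib.h>
    #include <X11/extensions/Xrender.h>

    /* Draw a whole row of already-cached glyphs with a single XRender request. */
    void draw_row(Display *dpy, Picture src, Picture dst, GlyphSet glyphset,
                  const unsigned int *codepoints, int ncells,
                  int x, int baseline_y)
    {
        /* One request covers the entire row; each glyph advances the pen by the
           xOff/yOff it was registered with (the cell width for a monospaced font). */
        XRenderCompositeString32(dpy, PictOpOver, src, dst,
                                 NULL,             /* maskFormat */
                                 glyphset,
                                 0, 0,             /* xSrc, ySrc */
                                 x, baseline_y,    /* pen position for the row start */
                                 codepoints, ncells);
    }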
There is a valid reason why using e.g. OpenGL directly might be preferable here, and that is that if used conservatively enough it's potentially more portable. That's a perfectly fine reason to use it, albeit at the cost of network transparency for those of us still using X.
So to be clear, I don't object to people using GPUs to render text. I only object to the rationale that it will result in so much faster terminals, because as you can see from the span of throughput numbers, while Kitty and Wezterm don't do too badly, they're also nowhere near the fastest. But that's fine - it doesn't matter, because almost nobody cares about the maximum throughput of a terminal emulator anyway.
You're welcome. It's a bit of a pet peeve of mine that people seem to be optimising the wrong things.
That said, to add another reason why the GPU route may well be worthwhile on modern systems anyway, whether or not one addresses the other performance bits: being able to use shaders to add effects is fun. E.g. I hacked an (embarrassingly bad) shader into Kitty at one point to add a crude glow around characters, to make it usable with more translucent backgrounds. Doing that with a CPU-based renderer at a modern resolution would definitely be too slow. I wish these terminals would focus more on exploring what new things GPU-based rendering allows.
In the 80's 'glyph rendering' was usually done right in hardware when generating the video signal though (e.g. the CPU work to render a character was reduced to writing a single byte to memory).
I was specifically thinking of bitmapped machines like the Amiga. Granted, a modern 4K display with 32-bit colour requires roughly three orders of magnitude more memory moves to re-render the whole screen with text than an Amiga (a typical NTSC display would be 640x200 in 2-bit colour for the Workbench). But the ability of the CPU to shuffle memory has gone up by substantially more than that - raw memory bandwidth alone has; even most DDR2 would beat the Amiga by a factor of 1000 in memory bandwidth - and the 68k also had no instruction or data cache, so the amount of memory you could shuffle was further curtailed by instruction fetching. For larger blocks you could make use of the blitter, but for text glyph rendering the setup costs would be higher than letting the CPU do the job.
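(Back of the envelope: 3840x2160 at 4 bytes per pixel is roughly 33MB per full repaint, vs. 640x200 at 2 bits per pixel, which is about 32KB - a factor of around 1000.)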
> but for text glyph rendering the setup costs would be higher than letting the CPU do the job
Depends on how the glyph rendering is done. Modern GPU glyph/vector renderers like Pathfinder [1] or Slug [2] keep all the data on the GPU side (although I must admit that I haven't looked too deeply into their implementation details).
That part was about the Amiga blitter specifically. The setup cost for small blits and the relatively low speed of the blitter made it pointless for that specific use.