WezTerm is a GPU-accelerated cross-platform terminal emulator written in Rust (wezfurlong.org)
175 points by thunderbong on March 13, 2023 | 156 comments



Wez is a really nice guy to work with too; had the pleasure of working with him briefly a bit over a decade ago (we were at sort of sister companies on a project that involved both). Sadly he was impacted by the layoffs at Meta recently, so if you're looking for an incredibly productive and sharp as hell engineer, it might be worth looking him up.


On my machine, when I last tried the various accelerated terminal emulators, I wasn't convinced. At least under plain X, GL context creation adds extra latency when creating new windows (might be different if you use a compositor all the time, I guess). In addition, on terminals such as kitty, the startup time of a full new process was really non-negligible, which I suspect is due to the Python support.

With a tiling window manager, the built-in notebook/tiling functionality is not really useful (the window manager is more flexible and has universal keybindings), so looking at the time required to pop a full new window in either single or shared instance mode, they were actually behind regular xterm. Resource usage wasn't stellar either (xterm was still better than most lightweight libvt-based terminals). Couldn't feel much of a latency improvement (again, X without compositor).

I'm sure at full throughput the difference is there, but who is looking at pages of output you can't read? I do keep terminals open for days, but my most common use case is open window -> run a small session -> close, and I got annoyed fast.


A GPU-accelerated terminal emulator sounds like a nuclear-powered kitchen mixer to me.

Like, why? In over 20 years of using terminal emulators, not once have I thought "Man, I wish my terminal was faster".

Is this just a fun project to do, like, "yay, I wrote a GPU-accelerated terminal emulator!"?


It depends on your workflow and on your resolution too. For example, I do most things exclusively inside the terminal. If you are using vim and making use of splits on a 4K 60Hz (or 1440p 144Hz) screen, and you want to scroll in one split and not the other, you will notice how slow and laggy redraws are. This was especially noticeable on macOS (yay work computers) for me, which led me down the GPU-accelerated terminal rabbit hole. iTerm2 had its Metal renderer, which (at the time) only worked with ligatures disabled, whereas kitty/wez/etc did not have that limitation.

The litmus test I use is how smoothly the terminal emulator can run `cmatrix` at fullscreen.


cmatrix is the benchmark I didn't know I needed. Konsole seems to handle it fine, even in multiple tmux panes at once, or maybe I just can't see.


I've only had the issue on macOS; Konsole on my Linux box works fine. I've stuck with kitty though cuz it works great on both Linux and macOS, and I love the url opening feature as mentioned here: https://news.ycombinator.com/item?id=35140206


Probably inspired by the performance problems with the Windows Terminal [1] and the accelerated terminal [2] developed by Molly Rocket as an 'answer'? A series of videos presents the PoC [3].

[1] https://news.ycombinator.com/item?id=28743687 (It takes a PhD to develop that) [2] https://github.com/cmuratori/refterm [3] https://www.youtube.com/watch?v=hxM8QmyZXtg, https://www.youtube.com/watch?v=pgoetgxecw8


GPU-accelerated terminals have been a thing for a long time.


I've been doing a lot of my non-work computing lately on an actual VT420, which tops out processing bytes coming from the serial line (the computer you're logged in to) at 19.2kbps. I could stand for it to be faster, especially with the screen at 132x48. But never in 30+ years have I ever thought a terminal emulator connected to a session running over a pty on the same machine was slow.

I have started to see "terminal" apps that won't run on a real terminal, though. Using UTF-8 regardless of your locale, using 256-color xterm escapes regardless of your TERM setting, being unreadable without 256 colors, etc, and in general not using termcap/terminfo.


because rendering on the CPU is CPU-intensive when there's a lot of stuff scrolling by.

even on an integrated GPU, text rendering is far faster when you use the GPU to render glyphs to a texture then display the texture instead of just displaying the glyphs individually with the CPU.


Only if the terminal's rendering is extremely naive. That is, not using methods first used in the 80s.


It's comical being downvoted for this without comment. Having actually analyzed terminal performance and optimized terminal code, this is based on first-hand experience. The vast performance difference between terminals is almost entirely unrelated to rendering the final glyphs.


I'd love to read your blog post about your experiences in this matter. We need more of these here on HN.


I'll add it to my (unfortunately far too long) backlog (he says and goes on to write an essay; oh well - for a blog post I'd feel compelled to be more thorough). But the quick and dirty summary:

1. The naive way is to render each change as it occurs. This is fine when the unbottlenecked output changes less than once a frame. This is the normal case for terminals and why people rarely care. It falls apart when you e.g. accidentally cat a huge file to the terminal.

Some numbers with the terminals I have on my system (ignoring a whole bunch of xterm/rxvt aliases, e.g. aterm, lxterm etc.): a cat of a 10MB file on a terminal filling half of a 1920x1024 screen on a Linux box running X takes (assume an error margin of at least 10% on these; I saw a lot of variability on repeat runs):

     * rxvt-unicode: 0.140s
     * kterm: 0.2s
     * kitty: 0.28s (GPU accelerated)
     * xterm: 0.51s
     * wezterm: 0.71s (GPU accelerated)
     * gnome-terminal: 0.86s
     * mlterm: 0.97s
     * pterm: 1.11s
     * st (suckless): 3.4s

Take this with a big grain of salt - they're a handful of runs on my laptop with other things running, but as a rough indicator of relative speed they're ok.

Sorted in ascending order. These basically fall into two groups in terms of the raw "push glyphs to the display" bit: either using DrawText or CompositeGlyphs calls or similar, or using GPU libs directly.

Put another way: Everything can be as fast as rxvt(-unicode); everything else is inefficiencies or additional features. That's fine - throughput is very rarely the issue people make it out to be (rendering latency might matter, and I haven't tried measuring that)

Note that calling everything other than kitty and wezterm non-GPU-accelerated is not necessarily entirely true, which confuses the issue further. Some of these would likely be slower if run with an X backend with no acceleration support; I've not tried to verify what gets accelerated on mine. But this is more of a comparison between "written to depend on GL or similar" vs "written to use only the specific OS/display server's native primitives, which may or may not use a GPU if available".

2. The first obvious fix is to decouple the reading of the app output from the rendering to screen. Rendering to screen more than once per frame achieves nothing since the content will be overwritten before it is displayed. As such you want one thread processing the app output, and one thread putting what actually changed within a frame to screen (EDIT: you don't have to multi-thread this; in fact it can be simpler to multiplex "manually" as it saves you locking whatever buffer you use as an intermediary; the important part is the temporal decoupling - reading from the application should happen as fast as possible, while rendering faster than once per frame is pointless).

That involves one big blit to scroll the buffer unless the old content has scrolled entirely out of view (with the "cat" example it typically will if the rest of the processing is fast), and one loop over a buffer of what should be visible right now on lines that have changed.

The decoupling will achieve more for throughput than any optimisation of the actual rendering, because it means that when you try to maximise throughput most glyphs never make it onto screen. It's valid to not want this, but if you want every character to be visible for at least one frame, then that is a design choice that will inherently bottleneck the terminal far more than CPU rendering. Note that guaranteeing that is also not achieved just through the naive option of rendering as fast as possible, so most of the slow terminals do not achieve this reliably.
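To make that concrete, here's a minimal runnable sketch of the decoupling in Rust, with hypothetical `read_pty` and `render` stand-ins for the real PTY I/O and drawing code (real code would also track which lines changed):

    use std::sync::{Arc, Mutex};
    use std::thread;
    use std::time::{Duration, Instant};

    fn main() {
        // Shared screen model: the reader updates it, the renderer samples it.
        let screen = Arc::new(Mutex::new(Vec::<u8>::new()));

        // Reader thread: drain application output as fast as possible and do
        // nothing else, so the app never blocks on our rendering.
        let model = Arc::clone(&screen);
        thread::spawn(move || loop {
            let bytes = read_pty(); // blocks until the app writes something
            model.lock().unwrap().extend_from_slice(&bytes);
        });

        // Render loop: at most one repaint per frame. Output that scrolled
        // past between two frames is simply never drawn.
        let frame = Duration::from_micros(16_667); // ~60 Hz
        loop {
            let started = Instant::now();
            let snapshot = screen.lock().unwrap().clone();
            render(&snapshot);
            if let Some(left) = frame.checked_sub(started.elapsed()) {
                thread::sleep(left);
            }
        }
    }

    // Stubs so the sketch runs standalone; real code reads a PTY and blits.
    fn read_pty() -> Vec<u8> {
        thread::sleep(Duration::from_millis(1)); // simulate a chatty app
        b"y\n".to_vec()
    }

    fn render(visible: &[u8]) {
        eprintln!("frame: model holds {} bytes", visible.len());
    }

The stubs just simulate a chatty program; the point is that the reader loop never waits on the renderer.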

Note that this also tends to "fix" one of the big reasons why people take issue with terminal performance anyway: it's rarely that people expect to be able to see it all, because there's no way they could read it. The issue tends to be when the terminal fails to pass on ctrl-c fast enough, or fails to stop output fast enough once the program terminates, because of buffering. Decouple these loops and skip rendering that can't be seen, and this tends to go away.

3. Second obvious fix is to ensure you cache glyphs. Server side if letting the display server render; on the GPU if you let the GPU render. Terminals are usually monospaced; at most you will need to deal with ligatures if you're being fancy. Some OS/display server provided primitives will always be server-side cached (e.g. DrawText/DrawText16 on X renders server-side fonts). Almost all terminals do this properly on X at least because it's the easiest alternative (DrawText/DrawText16) and when people "upgrade" to fancier rendering they rarely neglect ensuring the glyphs are cached.
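A minimal sketch of such a cache, with a hypothetical `rasterise` upload step and `TextureId` handle (a real implementation would key on style and font as well as the character):

    use std::collections::HashMap;

    // Handle to a glyph that already lives server-side / in a GPU atlas.
    #[derive(Clone, Copy)]
    struct TextureId(u32);

    struct GlyphCache {
        cached: HashMap<char, TextureId>,
        next_slot: u32,
    }

    impl GlyphCache {
        fn get(&mut self, ch: char) -> TextureId {
            let next = &mut self.next_slot;
            *self.cached.entry(ch).or_insert_with(|| {
                // Taken once per glyph: rasterise and upload. Every later
                // occurrence of `ch` reuses the cached handle.
                let id = TextureId(*next);
                *next += 1;
                rasterise(ch, id);
                id
            })
        }
    }

    fn rasterise(_ch: char, _into: TextureId) { /* font rendering + upload */ }

    fn main() {
        let mut cache = GlyphCache { cached: HashMap::new(), next_slot: 0 };
        let a = cache.get('a');
        let b = cache.get('a'); // cache hit: no second rasterisation
        assert_eq!(a.0, b.0);
    }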

4. Third fix is you want to batch operations. E.g. the faster X terminals all render whole strips of glyphs in one go. There are several ways of doing that, but on X11 the most "modern" (which may be GPU accelerated on the server side) is to use XRender and CreateGlyphSet etc. followed by one of the CompositeGlyphs, but there are other ways (e.g. DrawText/DrawText16) which can also be accelerated (CompositeGlyphs is more flexible for the client in that the client can pre-render the glyphs as it pleases instead of relying on the server side font support). Pretty much every OS will have abstractions to let you draw a sequence of glyphs that may or may not correspond to fonts.
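A sketch of that batching, under the same caveat that `composite_glyph_run` is a hypothetical stand-in for something like X11's CompositeGlyphs or a single instanced GPU draw:

    #[derive(Clone, Copy)]
    struct GlyphId(u32);

    // Assume the glyph is already cached (point 3); lookup is just a map hit.
    fn lookup(ch: char) -> GlyphId {
        GlyphId(ch as u32)
    }

    fn draw_line(line: &str, x: i16, y: i16) {
        // Collect the whole run, then submit it in a single call instead of
        // issuing one draw per character.
        let run: Vec<GlyphId> = line.chars().map(lookup).collect();
        composite_glyph_run(&run, x, y);
    }

    // One request/draw call for the entire run of glyphs.
    fn composite_glyph_run(_run: &[GlyphId], _x: i16, _y: i16) {}

    fn main() {
        draw_line("ls -la", 0, 0); // one batched call for the whole string
    }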

There is a valid reason why using e.g. OpenGL directly might be preferable here, and that is that if used conservatively enough it's potentially more portable. That's a perfectly fine reason to use it, albeit at the cost of network transparency for those of us still using X.

So to be clear, I don't object to people using GPUs to render text. I only object to the rationale that it will result in so much faster terminals, because as you can see from the span of throughput numbers, while kitty and wezterm don't do too badly, they're also nowhere near the fastest. But that's fine - it doesn't matter, because almost nobody cares about the maximum throughput of a terminal emulator anyway.


Bookmarked! Will definitely come in handy when I finally write my own thing. Thank you for this!


You're welcome. It's a bit of a pet peeve of mine that people seem to be optimising the wrong things.

That said, to add another reason why doing the GPU ones may well be worthwhile on modern systems anyway, whether or not one addresses the other performance bits: being able to use shaders to add effects is fun. E.g. I hacked an (embarrassingly bad) shader into kitty at one point to add a crude glow effect around characters, to make it usable with more translucent backgrounds. Doing that with a CPU-based renderer at a modern resolution would definitely be too slow. I wish these terminals would focus more on exploring what new things GPU-based rendering would allow.


In the 80's 'glyph rendering' was usually done right in hardware when generating the video signal though (e.g. the CPU work to render a character was reduced to writing a single byte to memory).


I was specifically thinking of bitmapped machines like the Amiga. Granted, a modern 4K display w/32-bit colour requires roughly three orders of magnitude more memory moves to re-render the whole screen with text than an Amiga (a typical NTSC display would be 640x200 in 2-bit colour for the Workbench), but the ability of the CPU to shuffle memory has gone up by substantially more than that (raw memory bandwidth alone has - already most DDR2 would beat the Amiga by a factor of 1000 in memory bandwidth), and the 68k also had no instruction or data cache, so the amount of memory you could shuffle was substantially curtailed by instruction fetching. For larger blocks you could make use of the blitter, but for text glyph rendering the setup costs would be higher than letting the CPU do the job.


> but for text glyph rendering the setup costs would be higher than letting the CPU do the job

Depends on how the glyph rendering is done. Modern GPU glyph/vector renderers like Pathfinder [1] or Slug [2] keep all the data on the GPU side (although I must admit that I haven't looked too deeply into their implementation details).

[1] https://github.com/pcwalton/pathfinder

[2] https://sluglibrary.com/


That part was about the Amiga blitter specifically. The setup cost for small blits and the relatively low speed of the blitter made it pointless for that specific use.


Then what about the existence of Konsole?


Its existence is irrelevant. How does it perform? This conversation is about performance.


Better in latency and throughput without GPU-accelerated rendering while being feature-rich, that's the point.


Forgive me if I don't take your word for it. I remember K-everything being slow as molasses and have avoided K-anything since.


That's been wrong since, like, KDE 5.10. Better to test first before making assumptions.


if it took 5 major revisions to address obvious and appalling performance problems then I'm quite sure my desire to stay away is justified.


It didn't take 5 major revisions. There was a big regression with KDE 4 compared to previous versions, and that was remedied in KDE 5.


Command-line things that are noisy can legitimately run faster, being bound by the rate at which the terminal can dump characters.

With high refresh rate displays it really helps avoid a blurry mess too

I can read the stream almost like The Matrix


A faster terminal is a great idea, but GPU acceleration is not the way to do this.

GPUs aren't really meant for bit blitting sprite graphics. (Which is what a terminal really does.)


Isn't that literally what GPUs were designed to do?


The term GPU is primarily associated with 3D graphics, and most of what GPUs do is designed for that. Hardware acceleration of 2D graphics existed long before 3D hardware acceleration became common for PCs, but wasn’t called GPU, instead it was simply referred to as a graphics card.


Texture blitting is a very important part of 3D graphics, and is essentially what is required here.


The difference is that applying textures to a 3D object is almost never a pixel-perfect operation, in the sense of texture pixels mapping 1:1 to final screen pixels, whereas for text rendering that’s exactly what you want. Either those are different APIs, or you have to take extra care to ensure the 1:1 mapping is achieved.


There are ways to configure the texture blitter to be precisely 1:1. This is written into the GL/Vulkan standards for exactly this reason, and all hardware supports/special cases it. It is how pretty much every GUI subsystem out there handles windowing.
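For illustration, a sketch of the arithmetic behind that 1:1 mapping (the helper is hypothetical, not from any specific API): put the quad's corners exactly on pixel boundaries in normalized device coordinates and sample with nearest filtering, and each fragment centre then maps to exactly one texel.

    // Maps a pixel-space rectangle to normalized device coordinates so that
    // rasterisation covers exactly those pixels (combine with nearest-
    // neighbour texture sampling for the 1:1 guarantee).
    fn pixel_rect_to_ndc(x: f32, y: f32, w: f32, h: f32,
                         screen_w: f32, screen_h: f32) -> [f32; 4] {
        let x0 = x / screen_w * 2.0 - 1.0;
        let y0 = 1.0 - y / screen_h * 2.0; // NDC y is up, pixel y is down
        let x1 = (x + w) / screen_w * 2.0 - 1.0;
        let y1 = 1.0 - (y + h) / screen_h * 2.0;
        [x0, y0, x1, y1] // two opposite corners of the quad
    }

    fn main() {
        // A 10x16 glyph at pixel (8, 0) on a 1920x1080 framebuffer.
        let quad = pixel_rect_to_ndc(8.0, 0.0, 10.0, 16.0, 1920.0, 1080.0);
        println!("{:?}", quad);
    }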


Yes, my point is that this is a special case separate from normal 3D graphics.


…so why are GPUs not the way to do this, when GPUs are in fact the way it is commonly done?


The transforms are specified so you can position things perfectly these days, when aligned with screen pixels.

Think of the compositing of layers of translucent windows used in modern 2d window managers, while dragging them around. Or even scrolling in a browser. Those rely on the GPU for fast compositing.

Even for 3d, think of the screen-space techniques used in games, where it's necessary to draw a scene in layers combined with each other in various interesting logical ways (for shadows, lighting, surface texture, etc), with the pixels of each layer matching up in a reliable way.


Most of what a GPU does is drawing pixels though (even in 3D games), and that's as 2D as it gets.


If you can do the number crunching for 3D graphics, you sure as hell can do it for 2D graphics.


It’s a different set of operations for the most part, when you look into it. Drawing a 2D line or blitting a 2D sprite is quite different from texture-shading a 3D polygon. It’s not generic “number crunching”.


It's tensor ops all the way down


Ok but just because operations aren't perfectly identical doesn't mean you can't do it and it certainly doesn't mean it will be slow. I have had great success with SDL_gpu.


Actually they were, before 3D acceleration became a thing, 2D acceleration was a thing.


The 2D acceleration wasn’t called GPU at the time though. That denomination only started with 3D acceleration.


The name may have changed, but the task of blitting remained with GPUs. They are good at it.


You are thinking about Trident cards, which had 2D acceleration.

Nowadays, everything is done with the same pipeline as the 3D graphics, there's no need for two pipelines.

GPUs are meant for this. Modern ones. Your knowledge is incomplete, outdated, or both.


In (realtime) rendering the saying goes "bandwidth is everything", and that's exactly what GPUs do really well, moving incredible amounts of data in a very short time.


I agree with you, but I've stuck with wezterm for some time now for its non-GPU-related features. Specifically, the font configuration with fallbacks and configurable font features such as ligatures and glyph variations is nice. I use a tiling window manager and a terminal multiplexer, so I have no use for terminal tabs/splits/panes. I wish there was something as "simple" as alacritty, but with nicer font rendering.


Kitty (the Linux one) has ligatures support, and it is GPU accelerated.


I love wezterm due to its ligature and colourscheme support, and the fact it's very clean and simple compared to, say, Konsole (I also generally use i3 leading to KDE apps not being the prettiest).


> xterm was still better than most lightweight libvt-based terminals

Even worse: although many terminal emulators claim to emulate some "ANSI" terminal or be "VT100 compatible" and so on, most of them aren't at all. Simply run vttest in your terminal of choice and be surprised, especially by how many of them fail at very basic cursor movement tests. One of the few terminal emulators which gets most things right is xterm. It's also one of the very few terminal emulators which even supports exotic graphics capabilities like Sixel/ReGIS/Tek4014. Nobody should underestimate xterm …


The author of Zutty has a pretty comprehensive writeup around that: https://tomscii.sig7.se/2020/12/A-totally-biased-comparison-...


Xterm is like that sleeper car. Looks basic but beats everyone if it comes down to it.


> I'm sure at full throughput the difference is there

I am not. It makes next to no sense to me. Maybe if you have a high-res screen and dedicated VRAM. Otherwise going through the GPU interfacing ceremony just adds overhead.


Yeah, as I keep saying in these threads, the performance needed to do "fast enough" terminals was reached no later than the 1980s, and while bits per pixel and resolution have increased since then, they have increased more slowly than CPU speed. It's not the CPU cost of getting pixels on the screen that bottlenecks most terminals.


In my experience, there are two archetypes of terminal users:

* Open one window and leave it open forever. Reuse that one for all commands.

* Open a window, run a couple commands, and close it.

For the second group, startup perf is everything, because users hit that multiple times a day. For the first group, not so much.

Some of the other tiling functionality is also more helpful for folks that aren't on platforms with window managers as powerful (macOS, Windows).


I am in the second group, kinda - i hit Win+Shift+X (my global key for opening a new terminal) pretty much all the time to enter a few commands. I basically open terminals in a "train of thought"-like fashion, when i think of something that isn't about what i do in one terminal i open another to run/check/etc out. Sometimes i even close those terminals too :-P (when i work on something there might be several terminal windows scattered all over the place in different virtual desktops).

Also i'm using xterm and i always found it very fast, i never thought that i'd like a faster terminal.


i think a very effective workflow is missing from this list: open a long running terminal window but have many tmux panes.

many modern wm's and terminals have multitab and multiwindow features but i invested time only into learning tmux and i can use it anywhere. and of course nohup functionality is builtin by definition.

i have said it before and i can say it again: terminals come and go, tmux stays.


Been using WezTerm as my primary term for ~6 months and been loving it. In particular:

    - Keyboard cut-and-paste.  Ctrl-Shift-Space to highlight likely selections, matching letter to cut, shift letter to cut AND paste.
    - Built in mosh+tmux-like functionality, but at the GUI level.  I do a lot of terminal work at home from my Mac connected to my Linux laptop WezTerm instance, with multiple panes/tabs.


After reading the comments on this thread I decided to give wezterm a try today. It's safe to say that I probably won't be going back to kitty or alacritty any time soon! Such a pleasant experience to configure with lots of very sane defaults. The only things I added were the open-http-links-in-browser binding I found in the comments here, and a quick select matcher for sha256-* strings for when I'm updating manually packaged things in my Nix config.


how do you multiplex with it though? The docs aren't very helpful


What something is written in is irrelevant. Please stop with this "written in Rust"; it is not an automatic badge of quality. If you have to specify that something is written in a particular language (Rust being the latest trend), then it has no merits of its own.


> In what something is written is irrelevant, please stop with this "written in Rust" as it is not an automatic badge of quality.

It is not an automatic badge of quality for the program itself, but it is an indicator that someone, somewhere is using $LANGUAGE to make a program like that.

It isn't so much promotion for the program itself, but for $LANGUAGE, both to attract like-minded programmers who like it as well as to show other programmers that $LANGUAGE is being used for making things - which perhaps will give them an incentive to check out the language too, if for no other reason than answering the curiosity some may have about how a program like that is written in that language.

Hacker News is a place full of programmers after all and programmers do tend to care about programming languages and their uses.


Is it trying to pull merits from the fact that it is written in Rust, though? I think attributing quality claims by default from the language is something you are projecting on your own. The title just says it is "written in Rust", not "written in Rust, therefore better".


After trying out Hyper (electron), Warp (Rust) and iTerm (Objective-C, Swift), I'm not as much interested in the language itself, but at least the fact that it's not an Electron terminal. So I guess I do subconsciously project the "therefore better", not because it's Rust, but because I'm assuming it won't grab a couple GB of memory.


> The title just says it is "written in Rust", not "written in Rust, therefore better".

Why does the title say "written in rust" then, if not because the author believes it's inherently worthy of merit? It's largely irrelevant to users of the software.


For those who may be interested? Why does it bother you when it is not relevant to you?

I follow HN mostly through RSS with various filters. Rust is one of the keywords that I follow.


Not sure if you are referring to the title specifically or the trend in general, but the title is taken almost directly from the one-sentence blurb on the wezterm site. As a general description, I can see why an open source project would find it useful to include the implementation language as one of the first descriptors of the project, and this seems to be fairly common practice. For example, look through just about any This Week In Matrix [0] blog post and you'll see various projects described as "x written in C" or "y written in C++17" or "z written in kotlin" etc. Programmers/sysadmins are curious, so for a tool aimed primarily at that demographic, I don't see why a project should have to leave out that it's written in Rust just to avoid grouping itself with the other trendy Rust projects (you likely wouldn't have complained if it were written in e.g. C++ and advertised that fact in its one-sentence description).

[0] https://matrix.org/blog/posts


It's not irrelevant for open source. As a Rust programmer I find these projects much more interesting than say COBOL, which I don't use. The ability to fix something myself is a big factor in my choice of what software I use.


> In what something is written is irrelevant

Would you say the same if something was written in FORTRAN, COBOL or assembly?


Much as it can be irritating for a hype/hate-intolerant person (I am one too), you cannot possibly contend it's irrelevant here, so you'll just have to put up with it.


Is it really? If I see a game written in, say, Haskell, this grabs my interest. What’s wrong with looking at Rust written software?


Knowing something is written in Rust does tell me that it will be relatively easy to add my own features (possibly upstreaming those changes if others so desire) and fix any bugs I find.

Why? Because a project written in Rust will certainly be using Cargo for package management, which is absolutely delightful relative to just about any other language’s package manager.

I can git clone and start hacking on just about any Rust project almost immediately. If I’m using 100 tools throughout the day (that’s a conservative estimate) I’d much prefer each of those were written in Rust than C or C++, for instance. Otherwise that’s 100 opportunities for me to find a missing feature or bug and end up futzing with autotools and cmake and ninja and distro-packaged but-not-quite-the-right-version dynamic libraries and broken pkg-config files and and and…

As an example, I recently needed to add support to probe-rs for debugging embedded devices via JTAG with a CMSIS-DAP probe. One git clone and I immediately have all of my dependencies resolved, my editor immediately has go to definition and autocomplete working, and I have a host of debug target definitions for debugging probe-rs itself.

OpenOCD doesn’t support the chip I’m targeting, so if I wanted to use that instead of probe-rs I’d have to first set up a dev env for it and add that support myself. That consists of using a tool like bear to trace the build execution so it can spit out a compile_commands.json file so the C and/or C++ language server can make sense of the project. Oh, and I need to repeat that step if I realize later that I missed a couple defines, otherwise the lang server will think huge swathes of code are preprocessed-out. And this is all after I’ve chased down the myriad of build and runtime dependencies.

I find missing functionality and/or critical show-stopping bugs regularly in the tools I use, and it doesn't matter if the tools are commercially funded or super popular or whatever: I opened an issue for a critical mis-linking problem in Google's Bazel build rules for the Go language, and that's still open something like 6 years later. And ironically, I discovered the bug while working on a project that uses Go, C++ and C++ protobufs - two of those things are Google inventions, and the other is a predominant language in the Google stack! Their own stuff doesn't even work together.

So no, it’s not a matter of me choosing bad tools, to be clear. I’m just often put in a position that the best course of action is rolling up my sleeves and fixing whatever issue I’ve discovered.

For someone like me, knowing something is written in Rust gives me some peace of mind, knowing that it'll be (comparatively) easy to work on when I inevitably discover a deficiency. Not because the language is great (though that's also true), but because the package management simply doesn't suck.

So please, please stop with the fallacy that “it has no merits on its own”. It does have merits, though maybe you don’t personally appreciate them. And that’s okay. But some of us do appreciate those merits — and they very much unequivocally, objectively are there.


It is also unlikely, this being a GPU-based terminal, that it is written without lots of unsafe memory munging and other fun things, since the underlying APIs are still C/C++, and even if they aren't, interacting with GPUs and other hardware is just hard sometimes if they haven't been built with your programming language in mind. There is, of course, a difference between wrapping C APIs (which doesn't make them automatically unsafe) and using lots of unsafe Rust like memory transmutation/etc.

In the case of wezterm, a quick glance shows that not only is this correct (lots of unsafe Rust, even beyond the normal use of wrapped functions), there is also not a lot of documentation about what the safety conditions of the various unsafe blocks are (documenting those being the usual good practice in handling this sort of thing).

So to your point, saying it's "written in rust" doesn't matter here.


> since the underlying API's are still C/C++,

If the use of GPUs is via CUDA, there are my wrappers [1] which are RAII/CADRe, and therefore less unsafe. And on the Rust side - don't you need a bunch of unsafe code in the library enabling GPU support?

[1] : https://github.com/eyalroz/cuda-api-wrappers/


You do, but people here really don't like to admit the rust ecosystem is like that. There is plenty of code that is infinitely better and safer than the equivalent C++ would ever be.

There is lots of "bad" unsafe code that is much worse, and over time, they will have just as much trouble trying to handle this overall as C++ does.

(and no the ability to say you don't want unsafe transitive crates doesn't fix anything)


A few years back lots of posts included "written in Go". I don't think we'll ever get rid of it...


Does anyone know the impact on battery compared to alacritty and kitty?


Definitely lower battery consumption than Alacritty, at least while this is still an unresolved issue https://github.com/alacritty/alacritty/issues/3108


It also consumes half the memory compared to Kitty.


Been digging into escape sequences lately and gave WezTerm a spin to test some things out

One thing that put a sour taste in my mouth for WezTerm is that they document all escape sequences [0] using a deviation from standard syntax (sometimes using `:` over `;`, e.g. for RGB colors; at least I've not found any other documentation of this being commonly adopted). They do put a disclaimer at one point in the document, but that only helps if you read it linearly, and it feels weird to document your implementation of a standard so that it only works with your terminal, locking people's applications into it.

As for rendering, I had problems with what look like correct escape sequences not rendering correctly (alternative underline styles, colored underlines). I still need to dig further into where in the stack the problem is (my application or somewhere in WezTerm).

[0] https://wezfurlong.org/wezterm/escape-sequences.html


Colon is the correct separator there. Semicolon separates different instructions; a group of values that forms a single instruction is colon-separated. For example, \e[38:5:196;1m specifies red and bold.

Using ";" is popular, and some parsers even require it, but it makes little sense. A terminal that doesn't know about 38 (anything old) will interpret that 5 as blink if you use ";", which is not what anyone wants.
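A quick demo of both forms (using the escape sequences from the comment above), runnable in a colon-aware terminal:

    fn main() {
        // Standard form: colon-separated subparameters, then ";1" for bold.
        println!("\x1b[38:5:196;1mcolon (standard)\x1b[0m");
        // Legacy form: all semicolons. An old terminal that doesn't know
        // SGR 38 would misread the 5 here as "blink".
        println!("\x1b[38;5;196;1msemicolon (legacy)\x1b[0m");
    }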


From my experience, the colon separator for 256/rgb attributes is a fairly recent clarification to the spec. I think that's the way it was always supposed to be. But when I was initially writing the 256 color support for the Windows Console about 6 years ago, `TERM=xterm-256color` still emitted colors with just the semicolons as a separator.

The colons are more correct, but the semicolons are seemingly more broadly supported.


I don't recall where I saw it, but my understanding was that : was in the original spec but one of the early implementors (Konsole?) misread it and used ; and that erroneous form is what took off, and here we are today.


I'm guessing it may have been one or both of these `xterm`-related documents:

* <https://invisible-island.net/xterm/ctlseqs/ctlseqs.html#h4-F...>

* <https://invisible-island.net/xterm/xterm.faq.html#color_by_n...>

More details in my other comment: https://news.ycombinator.com/item?id=35157058


Part of the problem is probably that termcap uses : as a separator between capabilities. There is \C for colon, but there are implementations that don't support it. If nothing else I know the termcap[info] command in screen doesn't.

That doesn't stop terminals from supporting it of course, just from saying they do.


> Colon is the correct separator there

Where would I find this documented, and how broadly is it supported? So far, the most exhaustive documentation I've found is Wikipedia [0], vt100.net [1], and wezterm, with only wezterm talking about this. I also have not seen `:` handled in the various parsers (e.g. `vte` for alacritty) or generators (e.g. `termcolor`) I've looked at.

[0] https://en.wikipedia.org/wiki/ANSI_escape_code

[1] https://vt100.net/emu/dec_ansi_parser


Okay, it took a little bit of digging but I eventually found more about this via a link from the Wikipedia page.

* via <https://invisible-island.net/xterm/ctlseqs/ctlseqs.html#h4-F...> (search for "CSI Pm m Character Attributes (SGR)"):

    If 88- or 256-color support is compiled, the following apply:
          [snip]
          o   The 88- and 256-color support uses subparameters described
              in ISO-8613-6 for indexed color.  ISO-8613-6 also mentions
              direct color, using a similar scheme.  xterm supports
              that, too.
          o   xterm allows either colons (standard) or semicolons
              (legacy) to separate the subparameters (but after the
              first colon, colons must be used).

          The indexed- and direct-color features are summarized in the
          FAQ, which explains why semicolon is accepted as a
          subparameter delimiter:

            Can I set a color by its number?
(This links to <https://invisible-island.net/xterm/xterm.faq.html#color_by_n...>.)

The `ctlseqs` document continues:

          These ISO-8613-6 controls (marked in ECMA-48 5th edition as
          "reserved for future standardization") are supported by xterm:

            [snip example with colons ":"]

          This variation on ISO-8613-6 is supported for compatibility
          with KDE konsole:

            [snip example with semicolons ";"]
So, the confusion between use of ";" and ":" separators is because the RGB values are not actually parameters (specified as separated by ";"), rather, the RGB values are subparameters (specified as separated by ":"). [Edit #1]

This is covered in more detail by <https://invisible-island.net/xterm/xterm.faq.html#color_by_n...> (mentioned above):

    We used semicolon (like other SGR parameters) for separating the R/G/B
    values in the escape sequence, since a copy of ITU T.416 (ISO-8613-6)
    which presumably clarified the use of colon for this feature was costly.

    Using semicolon was incorrect because some applications could expect their
    parameters to be order-independent. As used for the R/G/B values, that was
    order-dependent. The relevant information, by the way, is part of ECMA-48
    (not ITU T.416, as mentioned in Why only 16 (or 256) colors?). Quoting from
    section 5.4.2 of ECMA-48, page 12, and adding emphasis (not in the
    standard):

      [snip]

    Of course you will immediately recognize that 03/10 is ASCII colon,
    and that ISO 8613-6 necessarily refers to the encoding in a parameter
    sub-string. Or perhaps you will not.
:D

      [snip]

    Later, in 2012 (patch #282), I extended the parser to accommodate the
    corrected syntax. The original remains, simply because of its widespread
    use. As before, it took a few years for other terminal developers to
    notice and start incorporating the improvement. As of March 2016, not
    all had finished noticing.

      [snip]

     * Going forward (e.g., xterm patch #357), these terminfo building blocks
       are used in ncurses:

        * xterm+256color2 is the building-block for standard terminals
        * xterm+256color is a building-block for nonstandard terminals
It's unfortunately seemingly not possible to directly link to the specific text but hopefully that's enough pointers for people to find it themselves.

Once again, seems it's amazing that anything works anywhere. :D

[Edit #1: Corrected from subparameter separator from "," (which is correct for one of the specs I read but incorrect here) to ":".]

[Edit #2: Typo fix.]


For the sake of completeness, a couple of updates[0]:

(1) With regard to the "03/10 is ASCII colon" comment: the "03/10" form is apparently known as "column-row notation" (or "column-line notation", see https://en.wikipedia.org/wiki/ISO/IEC_2022#Notation_and_nome...) and refers to the position of the character in an ASCII table such as: https://en.wikipedia.org/wiki/File:USASCII_code_chart.png

I actually found a reference (https://dicom.nema.org/medical/dicom/2019d/output/html/part0...) that explains the notation: "the [US ASCII] value can be calculated as (column * 16) + row, e.g., 01/11 corresponds to the value 27 (1BH) [i.e. 0x1B]."
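Or, as a trivial code sketch of that notation:

    // "cc/rr" column/row notation denotes the byte cc * 16 + rr.
    fn col_row_to_byte(col: u8, row: u8) -> u8 {
        col * 16 + row
    }

    fn main() {
        assert_eq!(col_row_to_byte(1, 11), 0x1B); // 01/11 = ESC
        assert_eq!(col_row_to_byte(3, 10), b':'); // 03/10 = colon
    }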

It then notes "The column/row notation is used only within Section 6.1 to simplify any cross referencing with applicable ISO standards." :D

A (IMO less effective) description can also be found in ECMA-35 (6th edition; section 5.2 "Code Tables").

(2) A nuance with regard to my "the confusion between use of ';' and ':' separators is because the RGB values are not actually parameters" remark:

The ECMA-48 (5th Edition; section 5.4.2) specification (& ISO/IEC 8613-6 / ITU-T Rec. T.416) talks about "parameter string" & "parameter sub-string" rather than "sub-parameter".

(Also, the "reserved" values mentioned apparently weren't ever actually specified in a standard; and--as noted elsewhere--the format of the ANSI parameter(s) don't exactly match the ODA (ISO/IEC 8613-6 / ITU-T Rec. T.416) format anyway--see ITU-T Rec. T.416 (1993 E) page 41.)

(3) The Wikipedia talk page on ANSI escape codes makes...interesting...reading: https://en.wikipedia.org/wiki/Talk:ANSI_escape_code#Clarific...

[0] i.e. let me inflict this "knowledge" on you. :D

[n] The best explanation for how we ended up in this mess is, I think, found in "Figure 6 - Structure of 8-bit codes" of ECMA-35 (6th Edition). :D


Thanks!


kitty is also worth a look, though it doesn't run on Windows:

https://sw.kovidgoyal.net/kitty/

https://github.com/kovidgoyal/kitty/releases

I used iTerm2 on macOS for many years and still keep it around, but switched to kitty as my daily driver in late 2021 and have been very happy with it.


Not to be confused with KiTTY

http://www.9bis.net/kitty/index.html#!index.md

> KiTTY is a fork from version 0.76 of PuTTY

> KiTTY is only designed for the Microsoft® Windows® platform.


me too...kitty got me out of iterm2, it feels more responsive, faster to close and open new windows/tags


Not a criticism of this app itself, but I could not find an explanation on the website of why I would prefer this terminal emulator & multiplexer to those available with every desktop environment, such as gnome-terminal, lxterminal, qterminal, xfce4-terminal etc.


I've been using it for over a year now, love it.

I have only one complaint: I tried everything to configure it to open at a specific position on the screen (top left corner), and for FSM's sake, it's impossible! (Windows 10)


It's possible now with the nightly version (might need negative X on some machines):

  local wezterm = require("wezterm")
  wezterm.on('gui-startup', function(cmd) -- set startup Window position
    local tab, pane, window = wezterm.mux.spawn_window(cmd or
      {position={x=0,y=0},width=100,height=20}
    )
  end)


Oh, I will try, thanks so much!


I've been using WezTerm for a year more or less and haven't had any problems so far. Whenever I want anything fancy I just go to the docs or dig through the Github repo (issues, PRs, etc) and that's all I need.

I just couldn't stand the petulant attitude of Alacritty's maintainer(s), and it didn't take much to find something else. Although I can't take this for granted, I haven't seen the same sort of behavior on WezTerm.


As a Neovim user I like that the config is in Lua. There are however odd things that it still doesn’t have compared to Kitty, like the ability to have squared off corners in macOS


The killer feature of kitty for me is `ctrl-shift-e` [0]. I really wish more emulators would copy it. Wezterm has explicit hyperlinks [1], but it requires you to click with a mouse, which is the last thing I want to do when inside a terminal.

--

0: https://sw.kovidgoyal.net/kitty/conf/#shortcut-kitty.Open-UR...

1: https://wezfurlong.org/wezterm/hyperlinks.html#explicit-hype...


I used wezterm for a while, and we added this key binding. I think it was from Wez himself:

    { key="e",          mods="CTRL|ALT",      action=wezterm.action{QuickSelectArgs={
            patterns={
               -- one pattern covers both http:// and https:// URLs
               "https?://\\S+"
            },
            action = wezterm.action_callback(function(window, pane)
               local url = window:get_selection_text_for_pane(pane)
               wezterm.open_with(url)
            end)
          } }
    },


Just the thing I was missing. Thank you


is there a way to make this work with Explicit Hyperlinks?


For me the starting reason was the speed of the terminal, but now it's the "open last command output in less" feature [0]. I know no other terminal that does this, and it is a true life saver.

0: https://sw.kovidgoyal.net/kitty/conf/#shortcut-kitty.Browse-...


I switched to WezTerm a few months ago after setting up a new computer and missing an easy way to share the iTerm2 config between my computers just like I do with most of my other tools that use dotfiles.

It’s a very nice application: easy to configure, very customisable, stable. I’ve only experienced one bug with the app; I opened an issue in GH and Wez had a nightly version with a fix in just a few hours.

If you haven’t checked it out yet, I highly recommend it.


WezTerm seems nice in a lot of ways, but it's got some odd quirks, like how it lacks menus on macOS or how on Linux, it uses a hyperminimalist tiling-WM-style titlebar instead of whatever your WM furnishes.

This makes it seem pretty squarely targeted at i3/Sway/vim sorts of users, which doesn't make it bad but an odd fit for more general users.


I tried WezTerm for a few months on macOS. No problem so far (my use case is pretty minimal, e.g. tab 1 for compiling, tab 2 for code editing, tab 3 for SSH, etc), except the lack of a GUI for user configuration. You need to understand Lua for that: https://wezfurlong.org/wezterm/config/files.html

Eventually I uninstalled it and went back to iTerm2. Maybe one day I'll learn Lua and use it again.


I feel like WezTerm has a vision of being something bigger than a traditional VT emulator. It feels more like an environment to live in, an Emacs of terminal emulators, with its own tiling, status bar, dynamic scripting, built-in Lua console, programmable events, very rich graphics. It's not a faithful old-school TTY emulator like xterm, it's not the featureful but boring Konsole, it's not the ascetic but fast Alacritty. It's an experiment.

I don't personally like all of its design decisions, but an Emacs-like vibe really intrigues and attracts me.


I installed the nightly version yesterday, and was pleased to see menus on macOS.


If you're on macOS with a newer device, make sure to set front_end = "WebGpu" in your wezterm.lua file, so WezTerm can utilize Metal drivers.

I spent ages trying to get WezTerm's font rendering to match iTerm2. After spending far too long in the font section of the configuration, it turns out this was the solution.


Serious question: why does a terminal emulator need GPU acceleration? It's usually showing 80x25 characters of text, yeah? I guess text can scroll by quickly sometimes, or it needs maybe... 1,000,000 lines of scrollback.


A couple of points:

    - WezTerm has windows/tabs/panes in it, and one use case would be ~10 panes on a 4K display in various columns/rows.
    - Within those you might have applications that themselves have columns/rows of panes, like vim.
    - Why not use the GPU to draw the graphics so you can offload the CPU?  When there's a lot of text to go through (something scrolling stdout hard, jumping through a big file), the extra performance can be nice.


> It's usually showing 80x25 of text, yeah?

That would be pretty small in modern times. Certainly some people are working with terminals that size, often then spilling out into multiple terminals side by side on the screen, but modern resolutions are good enough to support a lot more text per terminal than that. As a single data point, my half-screen terminal right now is 140x75.


Another data point, I keep a 1920x1200 monitor in portrait mode and have kitty fullscreen on that monitor. My workflow is [emacs, browser] + kitty, I keep emacs and my browsers fullscreen on another monitor.


Text rendering is quite a CPU-intensive thing to do nowadays. It can burn your CPU with unnecessary work moving bitmaps around, while the GPU, which is specifically created and tuned for this kind of task, stays idle.

This is basic mechanical sympathy.


Text rendering in a terminal emulator is not a CPU-intensive thing.


Why do you think pixels constituting text are any different to pixels constituting images?

A terminal emulator needs GPU acceleration for the same reasons any other displayed item needs GPU acceleration.


It doesn't. At least not explicitly (e.g. using the render extension with X11 might well end up using a GPU behind the scenes, but there's no need to write GPU-specific code; a fast terminal requires pretty much two primitives: an ability to copy a rectangle, and an ability to draw a list of textures - if you need to deal with the GPU directly, it's a failure of abstraction).

Fast enough bitmap terminals have existed since the 1980's, and it takes really naive rendering for the rendering to be a bottleneck ever since then. Resolutions have increased, sure, but overall the increase in resolution and colour depth has been slower than the increase in CPU speed.

If you test various terminals, you'll find the difference in throughput is massive and the most popular ones are not necessarily the highest throughput ones, because people rarely run into the use cases where it makes a difference.

(The one step up from naive rendering that makes the most difference: the renderer should run in a separate thread, at most once per frame, and scroll the required number of lines with a single copy - that copy might well be optimised to a blit using the GPU by the X server or whatever, but that's beside the point; do just that, and output does not bottleneck on the rendering as long as the renderer can render individual frames fast enough.)
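A sketch of just those two primitives in use, with hypothetical `copy_area`/`draw_line` stand-ins (think XCopyArea plus a batched glyph draw on X11):

    // Scroll by `n` lines with one blit, then repaint only the exposed rows.
    fn scroll_and_repaint(n: u32, rows: u32, width_px: u32, cell_h: u32) {
        // One copy moves everything that survived the scroll...
        copy_area(0, n * cell_h,                  // source x, y
                  width_px, (rows - n) * cell_h,  // width, height
                  0, 0);                          // destination x, y
        // ...then only the newly exposed lines need glyph rendering.
        for row in (rows - n)..rows {
            draw_line(row);
        }
    }

    fn copy_area(_sx: u32, _sy: u32, _w: u32, _h: u32, _dx: u32, _dy: u32) {}
    fn draw_line(_row: u32) {}

    fn main() {
        // e.g. 3 new lines arrived on an 80x24 grid of 9x18 px cells
        scroll_and_repaint(3, 24, 80 * 9, 18);
    }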


GPUs are tricky to program so it's a nice sized programming puzzle.

In general if GPU style processors had better dev experience, less fragmentation and other chilling effects, we might be running a lot of our software on them for general efficiency.


what's the difference with alacritty?


I am guessing multiplexer. I really miss tmux on windows, hopefully this could be a replacement. But I don't know about the project


FWIW the Windows Terminal also natively supports panes & tabs for multiplexing: https://learn.microsoft.com/en-us/windows/terminal/panes which were highly inspired by tmux.


I live inside tmux inside alacritty running on WSL. I tried wezterm after reading all the great reviews last time it was posted on Hacker News, and I found it severely lacking in the font configuration department, contrary to popular opinion here.


tabs, ligatures and much more flexible configuration with its lua configs


Best terminal for MacOS.


I really wanted WezTerm to be the best terminal on macOS. However, when I tried switching from Kitty to WezTerm a month or so ago I found WezTerm’s latency was noticeably higher than Kitty and WezTerm often painted lines as blank in vim when they actually had content. So I was forced to go back to Kitty on macOS.

On Linux, it’s no contest, I love WezTerm over Kitty on Linux and didn’t have any of these issues.


Downloaded the latest stable for macOS and it still has the same problem as before: if you have a long list of items you want to scroll through, or if you try to simply move the cursor with an arrow (hjkl), you get some weird lags. Neither kitty nor Alacritty has this.


What does this have to offer compared to Alacritty?


Main thing was support for ligatures which Alacritty does not support: https://github.com/alacritty/alacritty/issues/50


As someone who doesn't use tmux, two features pushed me to this over Alacritty: native scrollback with a scrollbar, and native tabs.


Tabs, splits, programmability, best-in-class graphics support, an ssh multiplexor, session management, Lua scripting.

Alacritty and Wezterm have very different--if not opposite--philosophies. Choose Alacritty if you want something simple, fast, focused, done right. Choose WezTerm for flexibility and being a cross-platform environment by itself.


Ligatures is one. I also find the copy mode and the hyperlinks feature [1] handy.

[1]: https://wezfurlong.org/wezterm/hyperlinks.html


Lower OpenGL / GLES version number requirements, so wezterm can run on Arm64 hardware with no hassle.


Honest question: for someone who doesn’t live in a terminal, what does something like this do that a default terminal with OhMyZsh doesn’t do?


Zsh is the shell, which is a mini language and environment, so you can run it in many different terminals. OhMyZsh is a set of configuration scripts and plugins to configure Zsh. A terminal is the thing that renders text, handles input, and deals with all the escaping complexities - so it's usually the UI/colors around the text that the shell provides. Old terminals were even physical devices with ports that allowed connecting to headless computers.


Wezterm, for example, supports various protocols for images in a terminal. Many do, but some standard terminals don't. Gnome-terminal still doesn't. Terminal.app on macOS doesn't.


I'd be interested in latency measurements.

I couldn't care less about the throughput of a terminal; xterm was fast enough 20 years ago.


OSC 52 support is a major plus for me; another one with OSC 52 support is alacritty.
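For context, OSC 52 lets a program set the clipboard through the terminal, which works even over SSH. A minimal sketch; the base64 payload is pre-encoded here ("aGVsbG8=" is "hello") to keep it dependency-free:

    fn main() {
        // OSC 52: ESC ] 52 ; c (clipboard) ; <base64 payload> BEL
        print!("\x1b]52;c;aGVsbG8=\x07");
    }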


145MB-226MB for a terminal

that's worse than an electron terminal

something is wrong


Looks like the macOS binary size is a combination of shipping 3 binaries, each with most of the application compiled in (and statically linked), and containing builds for both Apple Silicon and x86 in each.

  2 % tree .
  .
  └── Contents
      ├── Info.plist
      ├── MacOS
      │   ├── strip-ansi-escapes
      │   ├── wezterm
      │   ├── wezterm-gui
      │   └── wezterm-mux-server
      ├── Resources
      │   ├── terminal.icns
      │   └── wezterm.sh
      └── _CodeSignature
          └── CodeResources

  2 % du -hs Contents/MacOS/*
  2.8M    Contents/MacOS/strip-ansi-escapes
   48M    Contents/MacOS/wezterm
  119M    Contents/MacOS/wezterm-gui
   44M    Contents/MacOS/wezterm-mux-server
A bit of duplicative bloat, but there's nothing wrong with that for a young project trading a bit of space for an easier build/release process.


Keep in mind, wezterm is a statically linked binary, while builtin and distro-provided packages will use dynamic linking. Depending on how it's built, it may also have embedded debuginfo and lack LTO.


It is a bit frustrating how much hate Electron gets when statically compiled languages are all the rage and the general consensus is that they're totally sweet.


It is not just about size of the app but also RAM footprint.


Rust has the NodeJS/Python dependency tree bloat problem.

Unlike other terminal emulators, this project is entirely statically compiled, so the code for the built-in SSH client, serial port support, muxing server/client protocol, shader compilation engines, image decoding libraries, D-Bus clients, threading libraries, and HTTP(S) (1.1/2/3) with several types of compression - and of course the GPU shader compilers and APIs - is all baked into the binary.

I built Wezterm on my machine and the binary is about 21MiB in size. About 12MB of that is code; the rest is debug symbols. 2MB of that is the Rust standard library and another 1MB is a Lua runtime. Then there are several image libraries that take about half a megabyte.

The compiled source code itself is only 893KiB in my file. The rest is just dependencies and random ELF segments.

So yes, the terminal emulator is about one Windows 95 installation in size, but that's all quite explainable if you care about the features.

For comparison, I've run `debtree` on gnome-terminal (3.4KiB on disk) and the dependency tree is reported to be 691MiB. Microsoft's Windows Terminal is distributed in 18.6MiB - 66.4MiB (I don't know the compression msixbundles use if any) which is comparable. iTerm2 seems to be about 74MiB decompressed. Even `xterm` (the most barebones terminal emulator I can find) relies on 20MiB of dependencies.

If xterm is all you need, then there's no need to download anything else. If you want more features in a terminal emulator, you're going to have to deal with larger binaries.


It's easy to downvote, but the question is not totally out of place.

On my machine:

* WezTerm 224 MB

* iTerm2 74 MB

* Terminal (default MacOS) 10 MB

And yet, I have read comments proclaiming that iTerm is bloatware.

p.s. I've been using Wezterm for more than a year, and I love it.


terminal.app doesn't even support true color (for starters)... apples and oranges.


Terminal.app is (was?) very good on the latency side though: https://danluu.com/term-latency/


Yeah it's a shame they aren't updating it because the typing experience is one of the best.


Is this the size of the binary or RAM usage? If the size of the binary, does it matter in most cases? If the size of RAM, yeah, I can see how that seems a lot for a terminal.


The binary is statically linked and embeds several fonts for a consistent, cross-platform, out of the box experience.

The on-disk size != RAM usage, and on-disk size really doesn't matter on modern systems, unless you're on a very resource constrained system, in which case, you probably don't want a feature-heavy GUI anyway.


Wezterm runs in about 134MiB of RAM on my machine. It's pretty consistent above the baseline, growing to 190MiB with 150 tabs open.


well if you're so sure, look at the code and the feature list[0] (to see what it actually does) then tell us what it is doing wrong.

[0]: https://wezfurlong.org/wezterm/features.html


If I compare this feature list to e.g. features of xterm (VT52/VT100/VT220/VT320/VT420/partial VT520 support, graphics via Sixel/ReGIS/Tek4014, …), I don't see a single reason why wezterm should be roughly 100MB large (according to pacman on Archlinux) vs roughly 1MB for xterm.

If you think "but terminal multiplexing and SSH!", then feel free to add 0.9MB for screen (or 1MB for tmux) and 4.8MB for openssh, and you're still way below the size of wezterm.


idk, I am with you and I don't typically care about sizes like this, but this does feel like quite a lot, although not as much as reported above. The split between the two rust programs here is pretty interesting.

  [nix-shell:~]$ du -hs $(nix-build '<nixpkgs>' -A wezterm)
  94M     /nix/store/w7hpqwqqp7xq70wjsm8bd1yara0rhk9v-wezterm-20220408-101518-b908e2dd
  [nix-shell:~]$ du -hs $(nix-build '<nixpkgs>' -A kitty)
  25M     /nix/store/9k0jn3219l6l7ywcqyvbifhd38cagfd9-kitty-0.25.2
  [nix-shell:~]$ du -hs $(nix-build '<nixpkgs>' -A alacritty)
  6.2M    /nix/store/cq7zaagwcwn9blynw027bqszhnld2sk2-alacritty-0.10.1

Is the feature set difference between these really that much? At least, between kitty and wezterm?


Alacritty is basically as minimal a terminal emulator as you're going to get, wezterm is the polar opposite end of the spectrum. Kitty is in the middle, the main thing that wezterm has that kitty doesn't that I'm aware of is the daemon process that allows you to disconnect/reconnect to sessions which is what allows wezterm to function as a tmux replacement (vs kitty can't replace that aspect of tmux). Compared to alacritty, it has multiplexing, ssh sessions (new tabs/panes opened in that session are opened in the context of the remote host without requiring additional authentication), incredibly flexible configuration via lua, etc. And as far as size goes, kitty and wezterm are written in different languages so comparing them directly is not so simple.


This does seem crazy. I have a hobby project that's a statically linked animation editor (so it does a lot of stuff including exporting/encoding GPU streams into a video). It has AV1, Freetype, and a few other large deps included and it only clocks in at 16.7mb



