> That establishes the fact that modern keyboards contribute to the latency bloat we’ve seen over the past forty years
Sometimes I try to imagine what things would be like if people were still trying to optimize their stuff as if it were to be run on a 16 MHz machine with a few KB of RAM. Then I get back to the 6 Electron apps I gotta work with, where the latency between a key showing up in the search box and search results appearing gets close to 10s.
The hardware folks gave us magical exponential growth for a straight 30-50 year run (depending on how you count it).
That led to a "we can add more features or we can make it fast, pick one" and the business always picks more features. (Note: "business" here just means "non-devs" and can include some devs.)
We used that amazing hardware to optimise for delivery speed over everything else (and while we did/do some horrible things, there is no doubt that a modern dev in 2021 commands processing power and vast libraries that would have been a dream to kid-me in the '80s), and when every layer of your stack leaves performance on the table, that adds up cumulatively.
Throw in that, on the time scales of computers, humans are laughably slow, and you can get away with a lot of inefficiency.
It's an interesting trade-off - I figure at some point we'll reach the limits of what we can actually do, and efficiency will shoot back up the importance chart (in some ways it already has for the hyperscale folks: a 5% improvement over millions of boxes adds up).
Saying it's "lazy devs not optimising" isn't fair though; they are optimising, just in other directions.
> That led to a "we can add more features or we can make it fast, pick one" and the business always picks more features.
Is this really true, though? In my estimation, the number of features for any given piece of software since the 90s has gone up a little in some cases and down in others (looking at you, GNOME), leading to a net average of zero.
The software with the biggest gains in features seems to be high-end games and AV editing, which also seem to perform really well, because they aren't typically made in Electron.
What does seem to be true is that developers use frameworks like Electron because they believe it saves them a lot of time, and their time is more valuable than the user's. Well, that and the rampant complexity fetishism.
Performance headroom means having the freedom to choose technology stacks along axes other than performance. Ease of developer use, development speed, debugging, deployment targets, and available market talent are just some of the other things the business of engineering cares about. So of course engineers may pick Electron, because nothing else fits those criteria if you ignore performance. It solves:
* Cross-platform distribution. An Electron app builds off the extensive work Chrome has done supporting the various operating systems (and Chromium has been ported to many others).
* Democratizes native application development. It’s not gatekept behind needing C++ developers and makes it easier to have JS developers work on it (plus their abundance means it’s cheaper for the business).
* Speeds up development. There’s no arguing that ReactJS has been a successful, dominant, and powerful JS UI toolkit.
Sure, do I wish there were better, leaner UI toolkits out there? Yes. If the goal, however, is to enable the far greater pool of web developers to build desktop apps with the same ease as a website, I don’t know that you could do much better than the Electron model (there’s an interesting one that instead uses the browser engine installed in the OS to lower distribution size, which is more of a “different but same” kind of thing). Qt is trying to compete but it’s really struggling, and Rust and other tech stacks are struggling too. They have perf, but are lacking in the “enable large swaths of developers to contribute” piece.
> It’s not gatekept behind needing C++ developers and makes it easier to have JS developers work on it
That’s not true gatekeeping. Any JS dev is free to learn to write C++, and it's a fairly meritocratic process. I could flip it around and make the same accusation: for decades, "the web" was "gatekeeping" because C++ devs had to learn JS to write apps for the web. Not fair, right?
The C++ (or just not-JS) requirement for desktop software was also not necessarily a bad thing, because writing (correct) C++ or other lower-level code is why native apps are better. They’re written by people who are more likely to understand the cost of memory allocations, cache locality, off-main-thread compute, and the inner workings of the OS. The good ones understand how the message pump works and why, just as some things need to happen off the main thread, other things need to happen on it. Web devs are used to everything being on the server (meaning "very, very far away" in terms of latency), but native app devs understood/understand "in memory" vs "on disk" vs "on a different machine." (There are tons of crappy "native devs" who don't get these things, and it's painfully obvious when you use those apps.)
I understand why people might be miffed at that, but if technology x comes out tomorrow that lets VC-backed play dough architects build real skyscrapers, are we better off?
It's amusing to me that in one breath the statement seems to claim it's not gatekeeping & then proceeds to gatekeep & argue the importance of the gate.
The fallacy is in assuming that JS developers don't know the importance of those topics, when in fact I've met many who understand all of that (including the inner workings of the OS). It may not be the dominant trait, but that would be equally true of C++ developers; you'd just be complaining that all the segfaults & security vulnerabilities are because of the people who don't care to learn "correct" C++. It's the no-true-Scotsman fallacy. Also, I suspect many of the same memory issues would appear even in JS land - the JS runtime in browsers seems to be spectacularly good at hiding the costs of things that you might naively expect to perform badly in a native language where the compiler doesn't do such tricks. It's everything else the browser is doing for you that likely accounts for the memory consumption (GPU acceleration, accessibility, etc.).
I think there may be ways to provide alternate toolkits that are more efficient while retaining JS developers (something like React Native layered onto Rust + Iced), but ignoring where the market has clearly spoken about development preferences seems unconstructive unless you have some proposal other than "rewrite everything in C++".
> if technology x comes out tomorrow that lets VC-backed play dough architects build real skyscrapers, are we better off?
Depends on what that means, but probably. Faster iteration cycles & lower overheads are very powerful effects to drive down costs. Think SpaceX. You'd probably still regulate it in centralized areas, but in the middle of nowhere, why not let people experiment with what's possible? "Hobbyists" already break the sound barrier racing around on salt flats.
Before you respond, think about it this way: you're complaining about Electron on an article about keyboard latency, which is measuring low-level latency issues well before anything ever hits Electron. Clearly it's a challenge even for those engineers, & that's a very niche problem. They're writing code in C.
Exactly. Today's dev is optimizing for feature delivery speed. The kid in me that wrote 4k demos dies a little more inside every day at what we've become, but if you had told me back in the '80s that we would receive a customer request in the morning, design and develop the feature in the afternoon, and push it into production for thousands of users across the globe that evening, I'd have thought you were insane. We mourn what we've lost in dev, but we're so used to how amazing things have gotten on the axis of feature delivery speed that we fail to appreciate it.
> That led to a "we can add more features or we can make it fast, pick one" and the business always picks more features.
Right - the question to ask is "what are businesses incentivized to do?" and the answer is "add features".
With some exceptions. Video games, for instance - gamers don't like it when their $60-$80 AAA game runs at 15 FPS on their $3500 rig, so video game developers are actually incentivized to optimize for performance.
If we can change the incentives, then we can influence the outcomes.
One way of changing the incentives is to get users to complain about slow products and use faster ones (although that might be difficult). Another, possibly more feasible, way might be to get federal and state governments to start requiring that user-facing software products they purchase or contract meet maximum latencies on a given baseline spec. You can bet that would cause certain companies to suddenly care about performance.
> I've been doing dev on a citrix desktop, where a single keypress can take seconds to register.
Why would that be slow, though? Citrix would only be like that if you were on a high-latency network connection - that's unavoidable. If it's a LAN connection then the Citrix box is overloaded: add more RAM and processor cores (and disable the page file: disk IO is a big killer of VDI UX).
The problem with the Electron apps mentioned is that they reflect the sad state of affairs in which crappy UX is caused by bad technical decision-making, rather than crappy UX merely because the hardware is under-provisioned.
Yeah, it's natural that remote desktop is bad over a bad connection even if Citrix is well made. But it could be said that adopting a VDI-over-Citrix solution for developer machines is a mistake, a bit like adopting Electron.
> But it could be said that adopting a VDI-over-Citrix solution for developer machines is a mistake
That's debatable - certainly in many cases it makes a lot of sense, e.g. a corporate environment with hot-desking (i.e. Windows Roaming Profiles). But programmer/dev users need a consistent workspace, so they can't use roaming profiles; instead, a consistent and persisted VDI environment makes sense.
Another reason is that you can pre-image and snapshot an entire OS environment more easily (and even more easily if you're using a Hyper-V VDI instead of a multi-user Terminal Services or Citrix VDI), which is great for ensuring everyone on a team has the exact same build/dev/test environment. This is being reinvented as "Codespaces", which GitHub and Visual Studio seem to be really getting behind: https://github.com/features/codespaces and https://github.blog/2021-08-11-githubs-engineering-team-move...
-----
But I'd wager that most develop-in-Citrix/TermServ situations are in "normal" corporate businesses with "no fun allowed" IT policies, and even if they don't have a hot-desk office policy they probably still don't like the idea of anyone having root/admin access to their own machines. (I'm fortunate to have never been in that situation in my working life, but I've heard plenty of horror stories from friends who did short-term contract dev work on-site for companies like Tesco and the like.)
My impression is that some parts of Windows force animations to play out fully (probably not allowed to skip keyframes?) instead of in real time. So when something is animated and the connection is bad, it takes ages for the animation to finish. Also, the fact that animations stay enabled in some places at all when using RDP is a bug in itself. This seems to be mostly the case in Microsoft's own UWP apps. I distinctly remember the pain of the !&§/$+ Edge full-screen logon "make me your browser already" nag; it literally took five minutes for that dumb IE-to-Edge logo animation to play, blocking the entire desktop while doing so.
RDP is insanely optimized, and there's a lot of hard science and math involved in making it work efficiently and effectively for dynamically changing content, from streaming media to editing in a terminal and everything in between. The technology is constantly updating - if your experience is only with RDP from the days of XP, I can understand your frustration. Also, most third-party (non-MSFT) products (servers or clients) don't implement the full spec and stick to the bare-bones, high-latency, low-throughput v1 protocol (which is still RDP, thanks to MS's dedication to backwards compatibility), and that will absolutely give you a subpar experience as well.
>Sometimes I try to imagine what things would be like if people were still trying to optimize their stuff as if it were to be run on a 16 MHz machine with a few KB of RAM
People (mainly EEs) still do this in the embedded space, where most hardware interaction is done by directly writing to/reading from registers.
Latency for most things is on the order of 100ns - 10us, maybe 1ms or so if you're using a particularly slow serial bus.
They need to turn their attention to the pumps at the gas station near me, which seem to be optimized for ear piercing beeps with each key press. I realize the blind may need feedback while they fill up their car, but surely if they can hear well enough to drive they can hear a normal beep.
All the gas stations near me have been spending all their R&D budget on publishing advertisements. They have loud mini TVs embedded into the pump that you can't turn off, churning out the lowest dregs-of-society crap and ads at full blast while you pump. It's now a conflict of interest for them to prioritize pump maintenance and upgrades, because the faster you finish pumping the less ad money they make.
I mean, we also have these massive pipelines and reorder buffers nowadays, and pretty good optimizing compilers, so really tuning all over would be a bit of a nightmare and not super rewarding (just tune GEMM kernels if that is your thing).
I'm surprised by the section on including key travel time in the latency measures. Wasn't the Apple 2 keyboard about comparable in travel distance and resistance to today's typical mechanical keyboard switches?
I still can't find it in me to like the Apple Magic keyboard because I feel like my fingers are smashing into a metal plate hundreds of times a minute. That's a me problem though and others just have better form.
Now I'm curious about the latency of the stuff I've put on Teensys in my own custom keyboards/controllers, and will need to resist the urge to go shopping for a scope.
I had to get to grips with the USB spec when designing my USB scope [1].
Adherence to the USB spec itself doesn't necessarily create human-perceptible latency; HID frames can be transmitted every 1 ms and could be parsed in a negligible number of CPU cycles on an embedded machine with USB-OTG and no operating system.
Like everything, the noticeable lag appears because the application-layer software is going through 15 layers of abstraction rather than talking directly to the input device, and this can become as complex and bloated as it wants.
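For concreteness, here's roughly what the wire format costs to decode: a minimal sketch parsing the standard 8-byte USB HID boot-protocol keyboard report (the fixed layout from the HID spec: modifier bitmap, reserved byte, six keycode slots).

```c
#include <stdint.h>

/* Standard HID boot-protocol keyboard report: 8 bytes, fixed layout. */
typedef struct {
    uint8_t modifiers;  /* bit 0 = LCtrl, bit 1 = LShift, bit 2 = LAlt, ... */
    uint8_t reserved;
    uint8_t keys[6];    /* HID usage IDs of currently held keys; 0 = empty slot */
} hid_boot_kbd_report;

/* Returns 1 if 'usage' is among the held keys in the raw 8-byte report.
   A handful of loads and compares -- the protocol itself is cheap. */
int key_is_down(const uint8_t raw[8], uint8_t usage)
{
    for (int i = 2; i < 8; i++)  /* bytes 2..7 are the keycode slots */
        if (raw[i] == usage)
            return 1;
    return 0;
}
```

Decoding a report is a few cycles; the milliseconds come from everything stacked on top of it.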
A lot of time can be wasted on poor debounce implementations. Cherry claims <8ms bounce for its switches IIRC, but you only need to debounce one of the edges. As soon as you see the key start bouncing, you know it has activated, so you can immediately register the press and then do the debounce; that way you only delay the registration of the release. By using a shift register you can also limit the delay to the actual bounce time, not a predetermined one. I (personally) don't know of any commercial keyboards that implement it this way, but given that their software is closed source, it is very possible that some do use this algorithm.
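A minimal sketch of that eager, shift-register scheme, assuming a 1 kHz scan tick and an active-high key read (the names are illustrative, not any particular firmware's API):

```c
#include <stdint.h>
#include <stdbool.h>

typedef struct {
    uint8_t history;  /* shift register of the last 8 raw samples */
    bool reported;    /* debounced state last reported to the host */
} debounce_state;

/* Called once per 1 ms scan tick with the raw electrical sample.
   Press: reported on the very first 'down' sample, zero added delay.
   Release: reported only after 8 consecutive 'up' samples, i.e. once
   the contact has actually stopped bouncing -- so the delay tracks the
   real bounce time instead of a fixed timeout. */
bool debounce(debounce_state *k, bool raw_down)
{
    k->history = (uint8_t)((k->history << 1) | (raw_down ? 1u : 0u));

    if (!k->reported && raw_down)
        k->reported = true;       /* first edge: register immediately */
    else if (k->reported && k->history == 0x00)
        k->reported = false;      /* stable 'up' for 8 ticks: release */

    return k->reported;
}
```

Press latency is at most one scan tick, and the release delay is however long the switch actually bounced, because the shift register simply waits for eight quiet samples.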
Not only that, but the matrix scanning could be done via interrupts instead of via polling, as is typically[0] done. An interrupt could fire on a level shift on any of the column GPIOs, and then the MCU searches the rows to find the selected key. After the interrupt, the MCU still needs to poll until all the keys are released, so there's another source of latency that could be improved. (A sketch of this scheme follows the footnote below.)
I think the only source of polling left is in USB, but I think that's inherent to the USB protocol (someone correct me if I'm wrong here). Without the USB polling I think it would be possible to have key-press-to-USB-packet completely interrupt driven which should make the latency in the keyboard itself negligible.
[0]: I say typically but like you said a lot of implementations are closed source, so who knows. All of the discussions I've seen on matrix scanning use the polling method, and the open source implementations use polling as well (e.g. Keyberon[1]).
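A minimal sketch of that interrupt-assisted scan, assuming a hypothetical MCU HAL (the gpio_*/exti_* names are placeholders, not a real vendor API):

```c
#include <stdint.h>
#include <stdbool.h>

#define NUM_ROWS 5

/* Hypothetical HAL -- illustrative names, replace with your MCU's calls. */
extern void     exti_disable_column_irqs(void);
extern void     exti_enable_column_irqs(void);
extern void     gpio_drive_row_low(uint8_t row);   /* select one row */
extern void     gpio_release_row(uint8_t row);
extern void     gpio_drive_all_rows_low(void);     /* idle state */
extern uint16_t gpio_read_columns(void);           /* bit set = column pulled low */
extern void     update_key_states(uint8_t row, uint16_t cols); /* debounce + report */
extern void     delay_us(uint32_t us);

static volatile bool scan_pending = false;

/* Idle: all rows driven low, columns pulled up, so the first contact on any
   key pulls a column low and fires this edge interrupt -- no polling at idle. */
void column_edge_isr(void)
{
    exti_disable_column_irqs();   /* avoid re-entry while we scan */
    scan_pending = true;          /* defer the row walk to the main loop */
}

void main_loop_step(void)
{
    if (!scan_pending)
        return;                   /* the MCU can sleep here instead */

    bool any_down = true;
    while (any_down) {            /* poll only while keys are held */
        any_down = false;
        for (uint8_t r = 0; r < NUM_ROWS; r++) {
            gpio_drive_row_low(r);
            uint16_t cols = gpio_read_columns();
            if (cols != 0)
                any_down = true;
            update_key_states(r, cols);
            gpio_release_row(r);
        }
        delay_us(500);            /* scan tick while any key is held */
    }

    gpio_drive_all_rows_low();    /* re-arm the idle edge detection */
    scan_pending = false;
    exti_enable_column_irqs();
}
```

This matches the caveat above: the first edge is interrupt-driven, but the firmware falls back to ordinary polling until the board goes quiet again.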
It’s counter-intuitive (and you probably already know this but for the benefit of others) but depending on frequency and CPU availability, polling can have a lower latency than interrupts. You just have a much smaller window of time to execute your handling code (unless you don’t care).
It’s actually partially why (some?) hard real-time systems eschew interrupts altogether. They introduce a source of non-determinism into the mix as an interrupt handler can stall out non-interrupt performance-sensitive code (or starve it).
The few open source implementations I have looked at scan the matrix as fast as the processor allows. So using interrupts can reduce power consumption a lot, but depending on the actual matrix layout, probably not any worthwhile speedup in the nominal case, as you still have to scan the rows after the interrupt.
AFAIR, the USB poll rate is a function of USB rate, and what the endpoint reports/requests. Low-speed, full-speed and high-speed all have different poll rates.
That's what I thought when I set up my handwired keyboard's firmware settings in QMK: just trigger on the edge. But then occasionally flipping the exhaust-fan switch in the room next door would produce errant keystrokes - not cool - so I decided I could live with a few ms more latency. Maybe a proper PCB like a bought keyboard would be more noise-immune, though.
It was probably the best you could do in your position, but debounce is a hack to work around EMI and not the actual solution. Ideally, the circuit should be electrically resilient to EMI and the debounce should only be handling mechanical noise from the switch itself and not concern itself with any outside interference.
I made an implementation of what I describe for TMK years ago. There is possibly a problem with the hardware design of your keyboard - probably too-weak pull-ups coupled with the EMI susceptibility of the hand-wiring. I'm having no issues with false triggers on my end.
> I'm surprised by the section on including key travel time in the latency measures. Wasn't the Apple 2 keyboard about comparable in travel distance and resistance to today's typical mechanical keyboard switches?
>>A major source of latency is key travel time. It’s not a coincidence that the quickest keyboard measured also has the shortest key travel distance by a large margin. The video setup I’m using to measure end-to-end latency is a 240 fps camera, which means that frames are 4ms apart. When videoing “normal" keypresses and typing, it takes 4-8 frames for a key to become fully depressed. Most switches will start firing before the key is fully depressed, but the key travel time is still significant and can easily add 10ms of delay (or more, depending on the switch mechanism). Contrast this to the Apple "magic" keyboard measured, where the key travel is so short that it can’t be captured with a 240 fps camera, indicating that the key travel time is < 4ms.
Yeah, I don't get that either.
It should be measured from the time the key makes contact to the time the signal is sent. If I were designing that, I would make an external trigger that connects to the solder pads of the key and starts the timer on the logic analyser.
I like my keys with a lot of travel and a good thunk when they send the signal.
This would penalize them extensively for doing exactly what I want.
I mean, if you want fast keyboards, capacitive key caps would be even faster, but I've yet to meet anyone who has replaced their keyboard with a tablet.
Because typing on a capacitive tablet would be tiring. You can't rest your fingers anywhere without registering a touch, so you lose all the benefits of a home row. You need to be able to distinguish between resting contact and a positive press, and capacitives would make that hard. Not impossible, but not easy either, and tablets today don't do it.
Not sure what you're explaining? I'm just pointing out that their metric is extremely flawed and can be easily gamed by an inferior solution. Like the Apple Magic.
I'm not seeing how it's "gaming the metric" to include the travel time of a key on a keyboard. Typing is ultimately a physical interaction with an interface, not just data transmission over a wire. Therefore, if we are measuring the latency of the input, we must include physical travel time. Consider an extreme example: suppose each "key" used a control lever, like the type on a tractor or the thrust levers in an airplane - say 90 degrees of throw with a radius of 8" and a significant linear activation force (relative to a keyboard). It wouldn't matter if the device had a response time that was virtually instant once you reached the activation point; the input latency of this system would be so significant as to be impractical for most typing tasks. If anything, excluding travel time would be gaming the metric.
There is no expectation of a character appearing on screen until the click. So the latency between key touch and click has no feel of latency to a person.
The flip side is that there is no expectation of a character appearing until the click happens. So a keyboard with a click which reduced latency by a lot would start showing characters before the click occurred -- and this would be incorrect behavior.
If I'm touch typing on a keyboard with full travel distance, I may have more than one key in motion at the same time, so the latency of each keypress doesn't exactly add.
Additionally, if you're talking gaming keyboards, as the article does, you don't time your "shot" for when you first touch the key. The timing of the shot is relative to when the click happens. You might even milk the key the same way you milk a trigger on a gun. Or, for rapid fire, you flutter back and forth across the threshold with a clickless switch like Cherry Reds, not using the full travel of the key. In either case, the latency measure in the article doesn't line up with real-world use.
>I'm not seeing how it's "gaming the metric" to include the travel time of a key on a keyboard.
Because if we remove travel time by using a capacitive keyboard, you end up with something that is objectively a worse keyboard. Replacing the capacitive sensors with strain gauges so you can rest your fingers on the home row without activating the keys results in a worse keyboard yet again.
Any metric that when optimized produces an objectively worse product is a bad metric and optimizing a bad metric is gaming it.
This is quite literally about as subjective as it gets. You like a thunk when you type, and there isn’t anything wrong with that, but other people really like short-stroke keys, and they might even find the keyboard you described delightful, especially - I would imagine - if it was paired with a nice, timely audio cue and haptics.
I know several people who absolutely loved the butterfly keyboards, and I found them a joy to type on in short bursts. However I, personally, have too heavy of hands/fingers for them and prefer ~80g actuation force so I can completely rest my fingers on the keys without activating them.
>if it was paired with a nice, timely audio cue and haptics.
So now you have extra latency between key activation, audio activation and haptic activation. You know what gives you all of those for free and ensures they are always timed correctly?
A physical key.
>I know several people who absolutely loved the butterfly keyboards, and I found them a joy to type on in short bursts. However I, personally, have too heavy of hands/fingers for them and prefer ~80g actuation force so I can completely rest my fingers on the keys without activating them.
Activation force and key travel distance have nothing to do with each other. I'm half tempted to build a strain gauge keyboard with zero travel just so people can pay me to see how bad they are.
The less physical feedback a keyboard gives you the more typos you make using it. Arguing that keyboards which encourage you to make more typos are good is as stupid as saying that camouflaged stop signs are as good as the old fashioned red ones.
>The less physical feedback a keyboard gives you the more typos you make using it. Arguing that keyboards which encourage you to make typos are good is as stupid… this is objectively true.
This seems like a tenuous assumption. It completely ignores my main point, which is that different users have different preferences. That’s why I brought up activation force: my problem with the Magic keyboard wasn’t the feedback, which I absolutely loved, but the activation force of the keys. I and many others have never found the amount of feedback on the Magic keyboard to be inadequate. You are projecting your personal perception that a large amount of key travel makes you type more accurately and is therefore desirable (to you) into an axiomatic truth about the nature of keyboards for everyone (i.e. that significant travel distance is needed to make a high-quality keyboard and therefore shouldn’t be counted in a metric about input latency).
It’s not an “objectively worse product”. It’s a product that does well on some metrics (size, apparently latency) and does poorly on others (famously, durability). Then there’s the completely subjective aspect of what sort of feel you want from the keyboard — which is why Cherry has a bazillion different switch types (and why I have a tester set on my desk right now because I’m trying to decide on a new keyboard).
Prioritising each of those is up to you. Your priorities are different from mine, and my preferences don’t make your keyboard of choice “objectively worse” overall.
You're not waiting for the character to appear before starting to press the next key - if you touch type, you have multiple keys "in flight". So the latency is less than it appears, since the bulk of that time is eaten by the fact that you start pressing the next key before the first one has even registered.
Literally the last half of your last sentence. The reason nobody uses capacitive tablets as keyboards has nothing to do with speed, however it's measured.
I think you are right that there are good keyboards that would get penalized by this metric. But I'm not sure it makes sense to try to improve the measurement system - it is possible that this just isn't a very important measurement.
There’s a great video here that goes through all this - the gist is that USB keyboards specify what polling rate to use: 1000 Hz for the best ones, maybe 125 Hz for the cheaper ones. An interesting thing about PS/2 is that even though it doesn’t have to wait for the host to poll, it transmits data at a much slower speed than USB (kHz vs MHz), which somewhat offsets the advantage it gets from being interrupt-driven.
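Back-of-the-envelope, here's what those polling rates cost, assuming the host honors the endpoint's requested interval and a keypress lands at a random point in the poll window:

```c
#include <stdio.h>

int main(void)
{
    /* Common USB HID polling rates (from the endpoint's bInterval). */
    const double rates_hz[] = { 1000.0, 500.0, 250.0, 125.0 };

    for (int i = 0; i < 4; i++) {
        double worst_ms = 1000.0 / rates_hz[i]; /* press just after a poll */
        printf("%6.0f Hz poll: worst case +%.1f ms, average +%.1f ms\n",
               rates_hz[i], worst_ms, worst_ms / 2.0);
    }
    return 0;
}
/* 1000 Hz adds at most 1 ms; 125 Hz adds up to 8 ms before the host
   even sees the report. */
```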
That bit made no sense to me. It might have made some sense if it was measured from the moment you get physical feedback from the keyboard that the key has been pressed. But I suspect that isn't as straightforward as it is tempting to believe (for instance, if the mechanical feedback and the moment of contact do not occur at exactly the same moment).
Seeing this again makes me want to dig out the sketches I had for a FPGA-based keyboard controller. There's no poll latency when nothing is polled! (Okay, okay, there's a few clocks.)
The trouble with that project was always that I hate prototyping with BGAs, having to deal with carrier boards sucks, and all the QFP or similar FPGAs were limited to 144 pins, which is just not quite enough for 108 keys given the number of other demands for I/Os, and the number of I/Os you actually get on a 144-lead package.
I wasn't much looking forward to dealing with the USB either (I seem to remember it would require at least a small softcore?) but at least there's plenty of open or vendor implementations there to "borrow".
If anyone wants to rip this off, go right ahead as long as you let me know... this controller was only one piece of a larger idea, and I love it when people do my job for me :)
108 keys sounds like 4 keys too many to me, but I wonder if you could use multiple FPGAs and include a USB hub (soft or real), if you split the ten-key out, that might be enough to do the rest of the keyboard with a 144 pin chip, and the tenkey with a smaller chip.
108 includes 104 plus 4 media keys. I think there are also a few international layouts with more keys than US-standard, but I never got that far.
> I wonder if you could use multiple FPGAs and include a USB hub (soft or real)
That... is a recipe for pain. The complexity of small hardware projects is basically Ackermann in the number of processors you're coordinating. Ideally, in a product you really, really want only one firmware blob to update at a time.
> if you split the ten-key out, that might be enough to do the rest of the keyboard with a 144 pin chip, and the tenkey with a smaller chip.
I don't know if alt-codes would work with this strategy.
Regardless, none of this is worth it just to avoid prototyping with a BGA or dev board. (I'm not even trying to avoid those for production; this is my day job so I'm not that afraid of big BGAs. I just don't want to ever have to solder or probe them, and the gods alone save you if you have to rework a joint!)
In case anyone is wondering about the keyboard prices, I looked through the list and also looked up the unavailable ones on the Internet. Here are their current prices at a glance, in USD:
| Keyboard | Latency (ms) | Connection | Price (USD) |
|---|---:|---|---:|
| chinfai silicone | 35 | USB FS | 17 |
| genius luxemate i200 | 55 | USB | 21 |
| logitech k360 | 60 | unifying | 25 |
| logitech k120 | 30 | USB | 29 |
| easterntimes i500 | 50 | USB FS | 31 |
| razer ornata chroma | 35 | USB FS | 59 |
| MS comfort 5000 | 40 | wireless | 71 |
| kinesis freestyle 2 | 30 | USB FS | 89 |
| apple magic (usb) | 15 | USB FS | 99 |
| das 3 | 25 | USB | 101 |
| unicomp model M | 30 | USB FS | 104 |
| hhkb lite 2 | 20 | USB FS | 112 |
| pok3r vortex | 30 | USB FS | 129 |
| filco majestouch | 30 | USB | 129 |
| topre type heaven | 55 | USB FS | 149 |
| olkb planck rev 4 | 40 | USB FS | 169 |
| kinesis advantage | 50 | USB FS | 319 |
| ergodox | 40 | USB FS | 354 |
| MS natural 4000 | 20 | USB | 399 |
Personally, I have a Logitech K120 at home as a backup keyboard, and as far as membrane keyboards go, it's a solid and affordable choice. I'm not sure whether it's regional differences or something else, but I got my keyboard for about 10-15 euros. I guess the fact that it's extremely "boring" also adds to the charm, at least when compared to designs like the Logitech K360.
Also, I wish I could vouch for the Unicomp Model M keyboards; however, shipping one to Europe would cost me almost as much as the keyboard itself, and I'm currently not sure I can afford to splurge like that. Regardless, have a look at their site; they're essentially carrying on the legacy of the IBM Model M keyboards: https://www.pckeyboard.com/
From what I've seen while occasionally visiting their site, they also seem to introduce new varieties occasionally. From what I've heard, those keyboards aren't exactly ideal for gaming (a regular mechanical keyboard might be better there), but are really good for typing because of their buckling-spring design. LinusTechTips also had a video on the Model M keyboards a while ago: https://www.youtube.com/watch?v=D7wmMZmMinM
I second the K120. I switched to it from my Das Keyboard Pro 4 after I got fed up maintaining it. The 3-key rollover isn't ideal, though, but otherwise I think it's solid: light to the touch, sturdy build, unassuming design, nothing weird going on with the layout.
I use the K120, and when I first bought it a few years ago it was $12. I just bought another one and found it bundled with a mouse (listed as "MK120" but really the same keyboard) for $15.
I've been thinking about building a new PC, mostly for development (if the budget allows also for a bit of gaming), and was wondering what components to pick to reduce latency and to get a satisfying feeling of "immediacy". Like when I first got an SSD and apps started instantly. Or when I got my first > GHz PC and optical mouse, and the cursor moved so smoothly it was uncanny.
For input-to-screen latency, I think a wired keyboard and mouse would be important. For "instant app opening", a recent CPU, enough RAM, and an NVMe SSD. But beyond that I'm not sure. Does anybody have any recommendations or tips? Does a 144 Hz or G-Sync monitor help? Are there any pitfalls (I've heard anecdotally that some CPU power-saving states can cause micro-stutter)?
Some days it feels like I have the last non 16:9 monitor ever made (and it’s not even 4:3, it’s just 16:10).
Every year I search for a true 10-bit IPS 16:10 (or better) hi-dpi or hi-FPS (ideally both) monitor and every year I’m disappointed. My ZR30W is the oldest part of my PC (Model F keyboard notwithstanding) and has survived maybe four builds.
A high-refresh-rate monitor will cut latency by 10-15 ms. It'll also look/feel much smoother. You won't notice the 1 ms added by a good wireless mouse, in my experience. (I'm happy with my wireless Logitech G305 mouse on Linux; it's supported by the kernel, so it's plug and play. Also surprisingly good battery life.)
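Rough arithmetic behind that 10-15 ms figure: a frame that finishes rendering just after scanout begins waits up to a full refresh period to be shown, and about half of one on average.

```c
#include <stdio.h>

int main(void)
{
    const double refresh_hz[] = { 60.0, 144.0, 240.0 };

    for (int i = 0; i < 3; i++) {
        double period_ms = 1000.0 / refresh_hz[i];
        printf("%5.0f Hz: period %.1f ms, average wait ~%.1f ms\n",
               refresh_hz[i], period_ms, period_ms / 2.0);
    }
    return 0;
}
/* 60 Hz -> 16.7 ms period vs 240 Hz -> 4.2 ms: that gap is where the
   ~10-15 ms saving comes from (display processing aside). */
```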
There are tests out there measuring the latency of wireless mice, and unless you are buying a dirt-cheap one, you are fine. There is no real difference in latency versus wired.
> There is no real difference in latency versus wired.
I disagree, but that’s not important. What is important is that there’s a huge difference in the standard deviation of latency (i.e. jitter) and that is - to me - unacceptable.
Well, that's why I'm holding on to my decades old Cherry PS/2 keyboard. No polling, no latency, no ghosting, no key rollover issues.
Unfortunately the PS/2 port is becoming rare, I'm limited to MSI for my upcoming AMD B550 motherboard because it's the only brand that offers it on a nicely specced ATX board.
Apparently [1], PS/2 is limited by a slower data rate than USB 2.0, which cancels out almost all the advantage of its lower initial latency in beginning the transmission, making them both about equally capable in actual latency - at least in theory.
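The arithmetic behind that, assuming the commonly cited PS/2 parameters (11-bit frames clocked by the keyboard at roughly 10-16.7 kHz):

```c
#include <stdio.h>

int main(void)
{
    const double bits_per_frame = 11.0;  /* start + 8 data + parity + stop */
    const double clock_khz[] = { 10.0, 16.7 };

    for (int i = 0; i < 2; i++)
        printf("PS/2 @ %.1f kHz: %.2f ms per scan-code byte\n",
               clock_khz[i], bits_per_frame / clock_khz[i]);
    return 0;
}
/* ~0.7-1.1 ms just to clock out one byte -- the same order of magnitude
   as a 1000 Hz USB keyboard's worst-case polling delay. */
```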
I wonder how much CPU load changes the equation. I've experienced USB input locking up or stuttering as a result of high CPU load, which presumably wouldn't happen with PS/2.
I recently switched to an Ergodox keyboard and was wondering about latency, since it felt like the keyboard had lag, but I attributed it to the learning period.
I was like: no way this is happening, since the Ergodox is using a USB cable, whereas I used Apple's keyboard before. This seems to confirm my feeling.
Interestingly, I think it's most noticeable in Slack, which is slow enough bloatware to have a lot of latency already; those extra milliseconds just add up. The simpler the editor, the smaller the impact, I think.
I assume your Ergodox is running QMK, which, depending on some configuration values, has had a relatively slow debouncing mechanism. I think times were around 20ms. Checking https://www.youtube.com/watch?v=jWL4nU-vtWs which compares it with ZMK, it seems that's down to 8ms, but ZMK is still faster (1ms).
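For anyone wanting to tune this, QMK exposes both the debounce window and the algorithm. A hedged sketch of the relevant settings (option names have moved around across QMK versions, so check the docs for your build):

```c
/* config.h -- debounce window in milliseconds (QMK's modern default is 5;
   older builds and some keymaps shipped larger values). */
#define DEBOUNCE 5

/* In rules.mk (make syntax, shown here as a comment):
 *   DEBOUNCE_TYPE = sym_eager_pk
 * "Eager" per-key debouncing reports the press on the first edge and only
 * delays the release, much like the scheme discussed upthread. */
```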
Not really, as it's not that big of an issue. It's just something I'm feeling, and it can be attributed to many different factors: keyboard switch, key travel, software sluggishness. However, being a 100+ WPM typist, those couple of ms could contribute to that feeling.
I just want to chime in and say that with my personal computer setup I was not able to recreate these findings at all. I was getting keypress-to-character latencies of around 30ms in Windows gVim, as measured by averaging multiple runs with a 240 fps camera. I don't even think I have a particularly low-latency setup, as I'm using an eGPU.
I'm surprised that he includes the key travel time. If a keyboard maker claims low latency, I assume they mean from the point when the switch actuates, not when the keypress starts. It seems like he is mostly measuring travel distance and actuation point position, which are things you can find out without a high speed camera.
A related issue: this article measures the latency of a large selection of terminal emulators, with interesting results: https://lwn.net/Articles/751763/
A repeat of my 2017 post for the original article.
The issue not mentioned in the article is the number of lines on the keyboard matrix used to detect keypresses. Cheaper non-gaming keyboards can only detect up to 4 simultaneous keystrokes; gaming keyboards can detect up to 6. That may not sound like a significant difference, but if you're moving diagonally (e.g. W and D), running (Shift), holding or using an item (E), and jumping (Space), that's 5 keys which need to be processed. Moving from my older gaming keyboard to a generic Logitech keyboard, I was no longer able to run diagonally in FIFA games while doing trick moves. So the non-gaming keyboard made me stop playing that game.
The feature you are talking about is called n-key rollover. Cheap keyboards generally don't have it. Similar to you, I discovered the issue in the 90s with FPS games.
Keyboards have been designed with n-key rollover as a feature since the late 70s, believe it or not. Keyboard technology has, in many ways, regressed since then. Fortunately, good keyboards are finally being made again - they just need to work on appearance and decent layouts again.
I guess it wouldn't work with the lab setup, but it'd be interesting to see how various laptop keyboards factor in. Mine has very short travel (similar to an Apple Magic keyboard), and is the ribbon-cable-to-motherboard connection going to be quicker than USB?
I'm dubious about the accuracy of their lab setup, though. From the write-up: "The start-of-input was measured by pressing two keys at once -- one key on the keyboard and a button that was also connected to the logic analyzer." That seems like a very weak point in what is otherwise a great experiment, although they do state "The median jitter was < 1ms", which surprises me.
I once was in an environment where keystrokes for everyday applications, like Windows instant-messenger clients, traveled through three states with hundreds of millis of latency. I complained that this would make everyone slow and crazy. They didn't like the complaints and pushed back. I think they should have fixed it instead.
Does anyone else experience massive latency with Apple's keyboard/case for iPad Pro? Mine is unusable in Pages (better in Notes), and often the first keystroke is missed, regardless of which app I'm in. Super annoying defect in an otherwise enjoyable keyboard.
In a lot of games like WoW or SC2 you need to press a lot of different buttons, to the point that it's not really practical to preload. In CS, maybe you can preload, but it would be nice to look at some footage and see if professional players actually do.
A while ago there was a submission and discussion on HN looking at the latency between a keystroke and something appearing on screen, through the layers of OS and application, but I can't find it - anyone?
And I thought human reaction time was 100-200ms+ anyway. Or would training on a particular game/movement sequence improve that?
Noticing latency is very context sensitive. A fun example is drawing lines on a touchscreen. Even ~20 milliseconds manifests as a visible gap between the tip of the pen and the line that "chases" it.
Temporal resolution of human hearing obviously varies with what exactly you measure but I believe it is in a single digit millisecond range for discerning between two clicks in quick succession and a single click.
While participants performed dragging and scribbling tasks, very low levels of latency could be discriminated, i.e., ~1 versus 2 milliseconds [...] [0]
This has been posted here a few times, but one interesting caveat is which systems these were tested on. The author prefaces their analysis by comparing a mysterious 'Kaby Lake' system with their Mac, then proceeds to crown the Apple HID as the winner.
An easy way to cook the results would be to test the Apple peripheral on a Mac and run everything else on your Windows desktop. Frankly, the BSD I/O stack will destroy the one on Windows any day, so it would stand to reason that it might be more of a software bottleneck than a hardware one.
I'm not accusing them of getting anything wrong though, I simply assume they aren't withholding any important testing info.
> Sometimes I try to imagine what things would be like if people were still trying to optimize their stuff as if it were to be run on a 16 MHz machine with a few KB of RAM. Then I get back to the 6 Electron apps I gotta work with, where the latency between a key showing up in the search box and search results appearing gets close to 10s.
Somehow, things went wrong somewhere.