Some countries selectively put some people in the "you have a very long notice period" and "you will not be working for us, but still be paid, during your notice period" (so-called gardening leave).
Not unusual in the finance industry, somewhat unusual (but I have heard of it) in "more pure tech". Probably also more common the further up you get in the corporate hierarchy.
In Sweden, at many places that offer a materials science engineering degree, it historically sits under "mining engineering" rather than "mechanical engineering".
The speed of light in vacuum is a hard upper limit. Most signal paths will be dominated by fibre optics (about 70% of c) and switching (which adds more delay).
But, yes, TrueTime will not magically allow data to propagate at faster-than-light speeds.
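To put rough numbers on that (a back-of-the-envelope sketch; the 70%-of-c figure from above and the 10,000 km path are assumptions, not measurements):

    # Propagation floor for a fibre path, assuming ~70% of c in glass.
    C_KM_PER_S = 299_792        # speed of light in vacuum, km/s
    FIBRE_FRACTION = 0.70       # assumed effective speed in fibre

    def one_way_delay_ms(distance_km: float) -> float:
        """Lower bound on one-way delay; switching only adds to this."""
        return distance_km / (C_KM_PER_S * FIBRE_FRACTION) * 1000.0

    print(f"{one_way_delay_ms(10_000):.1f} ms")  # ~47.7 ms one way

So a ~10,000 km path costs you close to 50 ms one way before any switching, and roughly double that for a round trip. No clever clock API changes that.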
My take-away from interviewing is that as a candidate, getting a "you did not get the job" doesn't change anything. Since I've had the privilege of mostly being employed while interviewing for new jobs, this is a "meh, status quo" thing. Nothing changes, I have no decisions to make. If I get an offer, I have a decision to make.
Does it sting when I get a "No"? Yes, a little, but I did my best and (presumably) someone else did better. So, I take solace in that I did not have to make a (relatively large) decision.
I am curious what you mean by "LeetCode-style". My understanding is "present a problem, only take the resulting code, judge that".
I did a fair number of coding interviews (for SRE positions) at Google (I don't actually know how many in total, but 25-30 is probably a safe guess) and, yes, it started with a small problem that should be solvable in well under half the interview time. I only used problems that passed my "is this fair to the candidate" screening (look at problem, try to write a working solution, if that takes more than 7 minutes, the problem is not fair).
The value in the interview comes from (a) listening to the candidate narrating their thought process during the coding, (b) discussing the solution with the candidate, sometimes in terms of complexity, sometimes in terms of correctness, depending on what is on the board, and (c) refining the code written (add more functionality, change functionality).
For a language I knew, I tended to overlook small syntactic mistakes (a whiteboard does not have any syntax highlighting), and if there were any questions about library functions, I would give an answer (and note down what I said, so that would be the standard used when judging). The bulk of the score came from the discussion about the code, not the code itself.
If that matches what you mean by "LeetCode-style interview", that's certainly been in place since at least 2011. If it does not, things may have changed, but at least the aim with a coding interview back when I was still at Google was less "get some code, judge the code" and more "get some code, judge the discussion about the code". It is also entirely possible that interviewing for SWE positions was different from hiring for SRE positions.
Even so, it's probably not going to be super-effective, unless each workplace is willing to have enough useful small things that do not require knowing substantial chunks of the existing code base.
It is probably realistic to expect someone to write something useful (although possibly small) in three days. It is less realistic to expect someone to write a useful component integrated into a large system that they first have to learn, in three days.
In an ideal situation, you'd get all "planned maintenance" emails for things you care about and no emails for the rest of them.
That (probably) means that the system for dealing with planned maintenance (well, usually, approving it) needs to have a sufficiently good understanding of which humans care about which changes.
At a previous job, the planned-change tracking system was REALLY good at tracking which specific compute facility was going to be impacted by any specific change taking place in that facility. And it had a really good way of letting you filter for "only places I have stuff running" (and, I think, even some breakdown by general change type as well).
It was, however, not easy to get notified of "there will be maintenance on submarine cable C, taking it off-line for 4 hours" or "there will be maintenance at cable station CS, taking cables C1, C2, and C3 down for 3 hours". And as one of the things we (the team I worked in at the time) were doing was world-wide low-latency replication of data, we did actually care that cable C was going to be down. But the only way we could find out was to read all upcoming changes and stick the relevant ones in the team calendar.
Was it good? Eh, it worked. Was it the best process I've seen? Probably, yes.
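To make the gap concrete, a minimal sketch of the matching problem (all names and types here are hypothetical): a subscription keyed only on facilities never fires for a change whose affected asset is a cable, even if a team depends on that cable.

    from dataclasses import dataclass

    @dataclass
    class Change:
        asset_kind: str   # e.g. "facility" or "cable"
        asset_id: str     # e.g. "DC-7" or "C"

    @dataclass
    class Subscription:
        asset_kind: str
        asset_ids: set[str]

        def matches(self, change: Change) -> bool:
            return (change.asset_kind == self.asset_kind
                    and change.asset_id in self.asset_ids)

    # A team running in (hypothetical) facility DC-7 that also depends
    # on cable C needs both subscriptions; with only the first one,
    # cable maintenance stays invisible and you are back to reading
    # every upcoming change by hand.
    subs = [Subscription("facility", {"DC-7"}),
            Subscription("cable", {"C"})]

    for change in (Change("facility", "DC-7"), Change("cable", "C")):
        notified = any(s.matches(change) for s in subs)
        print(change, "-> notify" if notified else "-> silent")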
If you aim to be "human-friendly" (and that is, as I understand it, the raison d'être for YAML), there is a subtle semantic difference between "true" and "on" (and "false" and "off"), and as a human it may be nice to express that semantic difference.
As for that semantic difference, if we expect the light source to have one of exactly two states (that is, "not a dimmable light"), we probably want to express that as "lightsource: on" rather than "lightsource: true".
And that is where the friction between "human-friendly" and "computer-friendly" starts being problematic. Computer-to-computer protocols should be painfully strict and unambiguous; human-to-computer interfaces should be as adapted to humans as they can be, erring on the side of "expressive" rather than "strict".
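For what it's worth, a YAML 1.1 parser (PyYAML, for instance) resolves both spellings to the same boolean, so the distinction only survives in the source text, while YAML 1.2 went the other way and treats "on" as a plain string:

    import yaml  # PyYAML implements YAML 1.1

    print(yaml.safe_load("lightsource: on"))    # {'lightsource': True}
    print(yaml.safe_load("lightsource: true"))  # {'lightsource': True}

    # By the time the computer sees the value, the human-facing nuance
    # between "on" and "true" is gone; it lives only in the source file.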
I am also not sure if I am happy or sad that the set of configuration languages in the original article didn't include Flabbergast[1], which was heavily inspired by what may simultaneously be the best and worst configuration language I have seen, BCL (a language that I was once very relieved to never have to see again, and nine months later missed so INCREDIBLY much, because all the other ones are sadly more horrible).
How many seconds in an hour? Most of the time 3600, occasionally 3601, and very rarely 3599. Hours in a day? Mostly 24, but 23 once a year and 25 once a year.
These all seem like good reasons to make them functions (taking a timestamp as an argument) rather than mostly-correct constants.
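A quick sketch of the "function of a timestamp" point for the DST half (the zone and dates are just examples, using the 2025 EU transitions; the leap-second half needs TAI/UTC tables that the standard library does not carry):

    from datetime import datetime, timedelta
    from zoneinfo import ZoneInfo  # Python 3.9+

    def day_length(year: int, month: int, day: int, zone: str) -> timedelta:
        """Elapsed time from local midnight to the next local midnight."""
        start = datetime(year, month, day, tzinfo=ZoneInfo(zone))
        end = start + timedelta(days=1)  # same wall-clock time, next day
        return end - start               # aware subtraction happens in UTC

    print(day_length(2025, 3, 30, "Europe/Stockholm"))   # 23:00:00
    print(day_length(2025, 7, 1, "Europe/Stockholm"))    # 1 day, 0:00:00
    print(day_length(2025, 10, 26, "Europe/Stockholm"))  # 1 day, 1:00:00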
I swear, the more I learn about calendars and timekeeping, the more I realise I never ever want to deal with it.
Same thing for the ZX-80 and ZX-81. If you were careful with your text prompts, you could save several bytes by injecting a keyword token instead of typing it out, but anything that was "start of line only" could be tricky to inject into a string.