If you use vim/neovim, you can use digraphs: press `<C-k>` followed by a 2-character code to get a symbol (see `:help digraphs` and `:help digraph-table`).
For example,
- `<C-k> ->` gives →
- `<C-k> =>` gives ⇒
- `<C-k> d*` gives δ (the Greek letters are generally all _letter_ + `*`)
- `<C-k> D*` gives Δ
Some of the combinations are a little weird to remember, but if you use them regularly (like the Greek letters or arrows) it's easy enough.
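And if a symbol you use often has no built-in digraph, you can define your own; a small sketch (`:digraphs` takes the two-character code followed by the decimal codepoint, and `ll` here is just my choice of code):

```
" in your vimrc: make <C-k> l l insert λ (U+03BB = 955 decimal)
digraphs ll 955
```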
In my editor I just press Ctrl+x, 8, Enter, and then I can look up Unicode codepoints by either hex code or name. Alternatively, you could go the whole Unix 'do one thing and do it well' route and use a character-selection program that lets you look up a character (I've got KCharSelect; Windows has a similar tool called Character Map).
- Like the sibling mentioned, you can use a “compose key” (it’s built in on Linux and can be installed on other platforms). There are other OS-wide options like the TeX input method for Gnome, or TextExpander snippets on Mac.
- In Vim, Ctrl-K in insert mode lets you enter “digraphs”. For instance, Ctrl-K *s will insert a Greek letter “sigma”.
- In Emacs, C-\ will activate an “input method” such as the “TeX” input method that can insert any math or Greek Unicode symbol using LaTeX notation. (I personally like Emacs’ “transient” input method that is activated for one symbol at a time, since it interferes less with coding.)
- In Sublime Text and VSCode there are third-party plugins that insert Unicode symbols via e.g. TeX notation.
- Every Julia editor extension, as well as the Julia REPL, lets you type a LaTeX-like name and press Tab to convert it into a Unicode symbol in a common way (e.g. `\alpha` then Tab gives `α`).
- Most editors have support for snippets of some kind, and many have available snippet packs for common math and Greek symbols.
Depending on your operating system, you might have (the option to remap a key to) a compose key. Then you can press compose followed by, for example, <= to make ≤. This also works for letters with diacritics on them, like "o for ö or 'o for ó, currency like =C for € or =L for ₤, or other symbols like tm for ™.
You might also be able to set up your own compose codes in ~/.XCompose, but that didn't work for me. Could be another casualty of Wayland or just some quirk in my setup.
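For reference, entries in that file look like this (a minimal sketch; the system Compose files under /usr/share/X11/locale use the same format, and Wayland support varies by compositor and toolkit):

```
# ~/.XCompose
include "%L"    # keep the system-wide defaults

<Multi_key> <l> <e> <q> : "≤"  U2264
<Multi_key> <l> <a> <m> : "λ"  U03BB
```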
Because as an industry, we are bad at our jobs. Network-facing software has critical security vulnerabilities. Even security folks accept that as the way of the world.
At the point the software is released it has (hopefully) no known security vulnerabilities, which is a reasonably secure situation to be in.
However, eventually some of them will become known, and that is not safe.
There are plenty of reasons not to want your IoT bulb to be insecure that are unrelated to people mining crypto.
A pwned IoT lightbulb can be used to help DDoS sites, relaying attack traffic and eating your own bandwidth. It can constantly probe the other devices on your network looking for vulnerabilities, until it pwns something else and is able to slurp down your passwords and credit card numbers.
Are you seriously suggesting that having an actively malicious computing device inside your home network is no big deal?
If it has a camera, it can be used to steal your security keys if it can see the power LED on your device (or potentially even just if something connected to your device has a power LED).
But a "critical security vulnerability " depends on the use. My daily driver? Yes, I want all of the security updates. A raspberry pi for playing arcade games that I occasionally scp a ROM over to? I really don't care if someone hacks in.
We, as an industry, are bad about pushing "every device that is on the internet needs to be as up to date as possible all the time" when it reality there is a lot of unimportant stuff on the internet.
It's like locks. I wouldn't secure my house with a bike lock, but it's fine for my bike. My bike is less full of important stuff.
At best, that means you're externalizing the costs, i.e. now your device is part of a botnet and becomes a problem for other people. But of course that assumes that it doesn't become a problem for you as well; a compromised device on your network is a great launching point for local attacks and a way to send illegal traffic out through your internet connection.
Ah yeah, I need to drop everything at a moment's notice because some CVE bros have to immediately update all my things, to hedge the risk of organized crime targeting my $0-value data like I'm that casino.
Meanwhile in reality, no one gives a f about the rPi you use for your guinea pig feeder.
The "security" industry is unfortunately full of corpo-authoritarians. Once they realised a lot of the population can be forced to do anything if they can be convinced it's for "security", they've been doubling down on that.
Well, a common thing with open computing resources these days is cryptominers. Sure, you don't care about updates, until someone puts a miner on it and you have to go in and try to fix it. It wouldn't matter that your single device doesn't have enough processing power when there are tens of thousands of similarly vulnerable devices to hijack.
According to the parent post, two families of languages. I disagree, though. Each standard _is_ a language. But which one does your compiler implement?
Exactly, it's "a language" but what you say will be interpreted differently depending on which specific version of which specific compiler you happen to be using today.
An interpreter with a JIT compiler is able to do more optimizations because it has the runtime context to make decisions, while an AOT (ahead-of-time) compiler doesn't know anything about what happens at runtime.
This is why some JIT'd languages (like JavaScript) can sometimes be faster than C.
Can you give a simple example, for the folks in the back, of how JIT'd languages can be faster than C? I think most people are under the impression that statically compiled languages are "always faster."
> Can you give a simple example, for the folks in the back, of how JIT'd languages can be faster than C?
If the JIT has instrumentation that analyzes execution traces, then it can notice that a call through a function pointer always goes to the same function. It can then recompile that code to use a static function call instead, which is considerably faster.
Basically, in simple cases it can perform a similar set of optimizations to a static compiler plus profiling information. In more advanced scenarios, it specializes the program based on the runtime input, which profiling can't do for all possible inputs; e.g., say the above function-pointer call only happens for input B but not for input C.
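To make that concrete, here is a minimal sketch in Python of what a tracing JIT conceptually does to such a call site (names are illustrative; a real JIT performs this rewrite on compiled code, not on source):

```python
def scale(x):
    return x * 2

def process(items, handler):
    total = 0
    for item in items:
        total += handler(item)  # indirect call through a function pointer
    return total

# After tracing shows `handler` is always `scale`, the JIT emits the
# equivalent of this specialized version, protected by a cheap guard:
def process_specialized(items, handler):
    if handler is scale:            # guard: does the assumption still hold?
        total = 0
        for item in items:
            total += scale(item)    # direct call, now eligible for inlining
        return total
    return process(items, handler)  # guard failed: deoptimize to generic path

# Usage: process_specialized([1, 2, 3], scale) == process([1, 2, 3], scale) == 12
```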
In theory, some execution patterns are knowable only at runtime, and could be optimized after the code has already been running for a while.
In practice, static AOT compilation is essentially always faster for a couple reasons. The various types of overhead associated with supporting dynamic re-compilation usually aren't offset by the gains. Re-compiling code at runtime is expensive, so it is virtually always done at a lower optimization level than AOT compilation to minimize side-effects. CPU silicon is also quite good at efficiently detecting and optimizing execution of many of these cases in static AOT code. You can also do static optimization based on profiling runtime execution, which is almost (but not quite) the same thing with more steps.
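For reference, that static profile-guided route looks roughly like this with GCC (a hedged sketch; `-fprofile-generate` and `-fprofile-use` are real GCC flags, while the file and workload names are illustrative):

```
gcc -O2 -fprofile-generate app.c -o app   # 1. build an instrumented binary
./app representative-workload             # 2. run it to record an execution profile
gcc -O2 -fprofile-use app.c -o app        # 3. rebuild, optimizing with that profile
```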
Honestly, it always depends on what "faster" means for you. For one crowd, faster means "fast number crunching" (e.g. anything AI these days). There, statically compiled code reigns supreme, because it is mostly about how fast your very specialized code (e.g. matrix multiplication) runs, and it doesn't hurt to just ship a specialized, statically compiled version for every possible target. (IIRC GCC can do something like that: building generic code that dispatches to different instruction sets (SSE, AVX, etc.) depending on what is available at runtime.)
For another crowd, "fast" means that the code they haphazardly threw together in an interpreted language runs fast enough that nobody is negatively affected (which is a completely valid use case, not judging here).
And to answer your question with some examples:
An interpreter with a JIT compiler might, for example, notice that a for loop always runs for the same number of iterations, unroll the loop, and at the same time vectorize the instructions for an immediate 4x gain in execution speed.
OTOH, Java's HotSpot JIT compiler tracks how often code is called and, once a "hot spot" is identified, compiles that part of the program.
Last example: if you are using an interpreted language (say Python), every round trip through "Python-land" costs you. A simple for loop that runs a simple instruction (say: `acc = 0; for x in xs: acc += x`) will be orders of magnitude slower than calling a dedicated function (`numpy.sum(xs)`); JIT-compiling that loop (e.g. with numba) removes the round trip through Python and achieves similar speeds, as in the sketch below.
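A minimal sketch of that last example (assumes `numpy` and `numba` are installed; the exact speedup will vary by machine):

```python
import numpy as np
from numba import njit

def py_sum(xs):
    acc = 0.0
    for x in xs:      # every iteration round-trips through the interpreter
        acc += x
    return acc

@njit                 # numba compiles this to native code on first call
def jit_sum(xs):
    acc = 0.0
    for x in xs:      # same loop, but now a plain machine-code loop
        acc += x
    return acc

xs = np.random.rand(10_000_000)
# py_sum(xs) is typically orders of magnitude slower than jit_sum(xs)
# or np.sum(xs), which run at broadly similar speeds.
```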
This is all in theory. Everyone says this like it's already here, but it's really not (in the sense that these fast JITs are still mostly worse than well-written C).
But what it is is mostly choosing optimizations based on runtime characteristics. It's a dynamic analogue to profile-guided optimization. Like you might have an optimization that trades code size for CPU cycles, which you could choose not to do at runtime if you're low on memory bandwidth instead of CPU time. Stuff like that.
I'm not an Emacs user so what I'm about to say may be based on totally incorrect information.
It seems to me that Wdired is just a hack that represents file and directory metadata in a textual format (a buffer, in Emacs parlance). It's not obvious to anyone that editing a filename in this buffer actually goes to the file system and changes the name of the file. It's even less obvious that removing rows removes files or directories. It actually sounds rather dangerous to use.
This is not functionality that emerges from some elegant design of Emacs, it seems it's just a hack.
First, Wdired is not the normal mode for Dired. You have to hit a key combo (C-x C-q by default) to activate it.
Second, it doesn't actually make any changes to the file system until you hit the key combo to commit them (C-c C-c). And the files aren't deleted immediately, even when you do commit; they're just flagged for deletion in the standard Dired interface. You'll have to hit another key (`x`) to actually "expunge" them from there, and that asks for confirmation again before doing so.
Think of Wdired more as a convenient way to build up a transaction of FS changes.
And since everything is a buffer, undo works just fine until you commit. Or you can load up a fresh Dired view of the original metadata in another buffer and ediff them to verify your changes before you commit. :-)