There are theories [1] that part of the brain's function might work at the quantum level. If true, we probably won't be able to really understand what happens by measuring it this way...
Classical computers also do things "at the quantum level"; CPU gates rely on quantum effects, the HDD probably does too, etc. But that doesn't prevent understanding, because each of those parts implements a simpler interface.
The brain probably isn't a quantum computer, or else we'd be able to factor integers quickly in our heads.
According to philosopher Paavo Pylkkänen, Bohm's suggestion of the quantum mind "leads naturally to the assumption that the physical correlate of the logical thinking process is at the classically describable level of the brain, while the basic thinking process is at the quantum-theoretically describable level". [1]
Factoring integers is a logical operation (and thus not performed at the quantum level). But an operation like identifying an object or a smell (which is what the article here is about) could be performed at a deeper level using quantum mechanics.
Hmm… it seems like that's only true if it doesn't actually mean anything impressive. Your neurons or nose can obviously rely on quantum effects in the same way a modern semiconductor transistor process does. But that doesn't imply anything huge like "consciousness can't be emulated on a classical computer."
There is also the almost-definitely-true idea that the mind is an emergent property of the body. We can't understand emergent phenomena using reductive approaches, in the same way that we can't understand quantum phenomena using classical approaches.
> I just don't understand why so many trillions of dollars around the world are being spent on military power that just sits by and watches dictatorships kidnap protestors from other countries and execute them.
We make wars to control energy sources, trade routes, etc. The West just as happily puts a dictator in place of a democratic government when it's in its interest (or in the best interest of the men in power and their supporters).
You also need an opposition figure who you can tell, ahead of time, is likely to succeed with your assistance, and who can make credible promises to pay you back (i.e. with contracts for the rebuilding, contracts for oil extraction, etc.).
It's very clear how this played out in Libya, for instance, if you read the leaked Hillary Clinton emails. [0]
A friend of mine from Romania has fond memories of his youth under the communist rule of Nicolae Ceaușescu: he was constantly reminded how great the leader was, and as a kid he admired him.
He later discovered he had been manipulated into that, but he still counts these as good memories.
It's a bit like those old adverts: it doesn't mean it's a good thing to perpetuate.
On a Mac I can't live without "Magnet" [1]. It lets you organize your windows into halves or thirds of the screen with simple keystrokes. That should be part of the OS.
I use Rectangle [0] for the same purpose; it has a few more bells and whistles and is open source. It does have a bit of a debounce problem on multiple screens though (one tap might move the window two positions).
Will try it out. Having said that, I get this on my Linux machine "for free". I'm not even quite sure I'm allowed to use these paid apps on the MacBook Pro I've been given.
For Linux users on Gnome looking for similar functionality, I use the gTile extension to accomplish this. When I first got my ultrawide display on macOS, Divvy was critical to be able to do this. gTile was a similar enough replacement to get me my workflow back.
It's not even a problem exclusive to software development. You might create a product, sell it on Amazon, and then notice Amazon selling its own AmazonBasics version of it after a while.
After years of struggling with config files in heterogeneous production environments, I'll argue the opposite: environment variables are the BEST option for managing your configuration.
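To make that concrete, here's a minimal sketch (TypeScript/Node; the variable names APP_PORT, DB_URL and LOG_LEVEL are made up) of the pattern: every setting comes from an env var with a sane default, so the same artifact runs unchanged in dev, staging and prod.

```typescript
// Minimal sketch: all configuration comes from environment variables,
// each with a default, so the same build runs in any environment.
interface Config {
  port: number;
  dbUrl: string;
  logLevel: string;
}

function loadConfig(env: NodeJS.ProcessEnv = process.env): Config {
  return {
    port: Number(env.APP_PORT ?? "8080"),
    dbUrl: env.DB_URL ?? "postgres://localhost:5432/dev",
    logLevel: env.LOG_LEVEL ?? "info",
  };
}

const config = loadConfig();
console.log(`listening on :${config.port}, log level ${config.logLevel}`);
```

No files to mount and no per-environment branches in the code; the deployment environment owns the values.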
The nocebo effect is proven for vaccines. Even if we inject salt water, many people will develop fevers, arm pain, headaches, nausea, etc., because they expect such symptoms. In some cases, as many as 90% of observed side effects are nocebo: i.e. if 100 people get the real vaccine and 100 people get salt water, 10 in the first group will report side effects and 9 in the control group will report the same ones.
Great achievement. One remark though: it sounds like the author makes a lot of assumptions that something is better without actually benchmarking before and after. On this kind of project, one should measure the impact of each step. Maybe the new version is only faster because it uses WebGL, maybe the WASM code is actually slower... or is it the opposite?
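By "measure the impact of each step" I mean something as simple as this rough sketch: time each variant of the hot path on identical input, so every change gets its own before/after number (the two workloads below are just made-up stand-ins for whatever is actually being compared).

```typescript
// Rough sketch: a tiny benchmark harness so each optimization step
// gets measured on the same input instead of being assumed faster.
function bench(label: string, fn: () => void, iterations = 100): void {
  fn(); // warm-up run so JIT compilation doesn't skew the first samples
  const start = performance.now();
  for (let i = 0; i < iterations; i++) fn();
  const ms = (performance.now() - start) / iterations;
  console.log(`${label}: ${ms.toFixed(3)} ms/iteration`);
}

// Hypothetical usage: substitute the real code paths being compared.
const data = new Float32Array(1_000_000).map((_, i) => i % 97);
bench("sum: plain loop", () => { let s = 0; for (const x of data) s += x; });
bench("sum: reduce", () => data.reduce((s, x) => s + x, 0));
```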
In my youth, I did a lot of x86 assembly programming. It's very easy to end up with code that's slower than what compiled high-level languages produce. Here's an example: aligning memory buffers made a piece of code 50% faster (the bottleneck was memory bandwidth). That's the sort of optimization a compiler might (or might not) do for you. With assembly you have the control, so you're responsible for doing it.
Michael Abrash's Black Book is a bible in terms of approach to software optimization. It's old but a nice read. It's out of print, but a free ebook is maintained here: https://github.com/jagregory/abrash-black-book
I agree, a lot of assumptions about why things are faster. As far as I know, WASM is not necessarily faster than native JS if you write the JS code properly (typed arrays, don't generate garbage, object pooling, etc.).
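"Writing the JS code properly" can look roughly like this sketch: positions live in one preallocated typed array and temporaries come from a small object pool, so the hot loop allocates nothing per frame and the GC has nothing to do (the sizes and names are made up).

```typescript
// Sketch: a hot loop with no per-iteration allocation.
const N = 10_000;
const positions = new Float32Array(N * 2);   // x,y pairs, allocated once
const velocities = new Float32Array(N * 2);

// Small pool of reusable temporary vectors instead of `new` in the loop.
interface Vec2 { x: number; y: number; }
const pool: Vec2[] = Array.from({ length: 64 }, () => ({ x: 0, y: 0 }));
const acquire = (): Vec2 => pool.pop() ?? { x: 0, y: 0 };
const release = (v: Vec2): void => { pool.push(v); };

function step(dt: number): void {
  for (let i = 0; i < N; i++) {
    const v = acquire();                      // reused object, no garbage
    v.x = velocities[i * 2];
    v.y = velocities[i * 2 + 1];
    positions[i * 2] += v.x * dt;
    positions[i * 2 + 1] += v.y * dt;
    release(v);
  }
}

step(1 / 60);
```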
That's a nice tool; this is a feature I enjoyed in the Mint-like proprietary tools I used to use.
What's your workflow with it? Do you run `ledger-guesser <transaction details> >> journal_file.txt`? I'd somewhat prefer typing out the details in an editor, but I suppose I could type a bunch of transactions one per line in a file and xargs that into ledger-guesser.
What does it do when your transaction is of a new type that it wasn't trained on? Does it have a confidence threshold below which it tells you "I cannot guess this one"?
I plug this tool into my scripts that import transactions from banks (and Stripe).
The scripts extract transaction details (amount, payee, currency), then they use `ledger-guesser` to create ledger entries and add them to the journal.
The generated entries are "uncleared". Then I manually "clear" the entries. Review. Commit. (You can also use a tag for reviewed transactions if you already use ledger's "cleared" indicator for something else).
For the best results, I have 1 journal per bank account. So I have different training data for each bank account.
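Roughly, the glue looks like this simplified sketch (I'm assuming here that the transaction details are passed as a single argument and that the generated entry comes back on stdout; the field names and sample transaction are made up):

```typescript
// Simplified sketch of the import glue: extract details from the bank
// export, ask ledger-guesser for an entry, append it to that account's journal.
import { execFileSync } from "node:child_process";
import { appendFileSync } from "node:fs";

interface BankTransaction { date: string; payee: string; amount: string; currency: string; }

function importTransactions(txs: BankTransaction[], journal: string): void {
  for (const tx of txs) {
    const details = `${tx.date} ${tx.payee} ${tx.amount} ${tx.currency}`;
    const entry = execFileSync("ledger-guesser", [details], { encoding: "utf8" });
    appendFileSync(journal, entry + "\n");    // lands "uncleared", reviewed by hand later
  }
}

// One journal per bank account keeps the training data separate.
importTransactions(
  [{ date: "2023/05/04", payee: "ACME SARL CARD PAYMENT", amount: "-42.50", currency: "EUR" }],
  "bank-checking.journal",
);
```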
There's no confidence threshold. When a new type of transaction is encountered, the guesser will choose the account with the highest probability.
The guesses are made based on the words found in the payee (and the date). Unknown transactions generally contain a few known tokens. Example: "INCOMING TRANSFER FROM NEW_CLIENT"... the classifier will probably classify that entry as "Incomes:OurLargestClient". In that case I just have to fix the entry to change the client; all the rest is good, and it still saves a good amount of typing.
Right. Yeah, in truth I think the big studios were all built off a single hit. Those hits keep paying out for a long time, and the studios either leverage the brand to build sequels or just start stringing out new titles hoping another one will "hit". Cross-marketing is huge too.
You can be successful building mobile games, just like you can win the lottery. :)
[1] https://en.wikipedia.org/wiki/Quantum_mind