This is the clear replacement for computational resources when the EMP apocalypse that so much fiction predicts finally comes about.
There will be accelerated breeding programs selecting for ever smaller crab sizes, and while Intel is stuck trying to perfect 10 centimeter crabs, TSMC will be creating cutting-edge football-field-sized processors running on 3 centimeter crabs.
Garbage collection in code will take on a whole new meaning as dead crabs are aggressively removed and repurposed into the local food supply, while protestors picket TSMC because the edible crab meat from 3cm crabs is minuscule & uneconomical to extract.
Bitcoin will become massively deflationary as hash rates plummet and vast quantities of the world's computation & food supply are turned towards mining 1 or 2 blocks a year, while transaction fees skyrocket.
Hilarious comment. But this is HN so I have to nitpick the minor flaws I noticed:
> Bitcoin will become massively deflationary as hash rates plummet and vast quantities of the world's computation & food supply are turned towards mining 1 or 2 blocks a year, while transaction fees skyrocket.
(1) Mining difficulty is scaled to the hashpower of the network. In the long run the expected time between blocks stays at ten minutes.
(2) Fun fact, BTC tx fees already spiked high years back, during the bitcoin civil war. The day I had to pay an $80 transaction fee was the day I dumped BTC and never looked back. The tx fees were not due to some true technical limitation but rather an artificially imposed constraint when BTC got taken over from within by a faction that wrested control of /r/bitcoin and censored all dissent, convincing the unending hordes of speculators that the block size was a holy constant that could never be touched without ruining the vaguely-defined “decentralization”. Just a fun bit of history that felt relevant.
> Mining difficulty is scaled to the hashpower of the network. In the long run the expected time between blocks stays at ten minutes.
The key part here is “in the long run”. IIRC the difficulty is only adjusted after a certain number of mined blocks, so in case of a mining power collapse you would still need to mine blocks at the old difficulty for a while, and in the aforementioned scenario that would take a really long time (more than a thousand years) before the adjustment ever kicks in.
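Rough back-of-the-envelope numbers (mine, but using Bitcoin's actual retarget rule of every 2016 blocks): if hash power collapsed by a factor of a million right after an adjustment, the next 2016 blocks would still have to be mined at the old difficulty before the rule could adapt.

    # 2016 blocks and the 10-minute target are Bitcoin's real parameters; the collapse factor is assumed.
    blocks_per_retarget = 2016
    target_minutes = 10
    collapse_factor = 1_000_000

    minutes_at_old_difficulty = blocks_per_retarget * target_minutes * collapse_factor
    print(minutes_at_old_difficulty / (60 * 24 * 365.25))  # ~38,000 years until the next adjustment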
Anyway, in such a scenario you would probably not have a worldwide reliable network anymore, so the whole concept of bitcoin would just be obsolete.
> the block size could never be touched without ruining the vaguely-defined “decentralization”
Fun fact: All forks of Bitcoin which attempted to remove it actually ruined their decentralization in the process and are now either dead or valued <1/100 of Bitcoin. Seems like the market values a stable and secure Layer 1 over altcoins promising free lunches.
The market capitalization of a cryptocurrency has no relation to its utility as a currency, except insofar as volatility (particularly in the downward direction) hampers its usage as a currency unless you convert from USD at the time of the transaction.
I would disagree with calling "ethereum" the fork. But even accepting that premise, it's still an example where exactly one version gets to live, and the one made by a group of average people is the one that died. There is effectively no forking allowed, and implementing a fork of bitcoin is just a slipshod alternative to BIPs.
If you redefine 'forks' to refer to the minority chains you are of course correct that "popularity contests don't think much of forks", but this would be a) tautological and b) not the correct use of a well-defined technical term.
ETC is not a fork and it wasn't 'made' by anyone. It is just the original, unaltered transaction history of Ethereum. ETH was 'made' by introducing a hard-fork and altering parts of Ethereum's transaction history, diverging from prior consensus rules.
It's not tautological to say that you get 100% or 0%. You can't effectively split the population. It almost happened with ethereum, and has never really come close with bitcoin.
It's also not tautological to say "The thing called 'bitcoin' at the time has always been the winner. The thing called 'ethereum' at the time has always been the winner."
> not the correct use of a well-defined technical term
If you want to be really technical, every single update is a fork, and every single time two blocks are mined at the same time it's a fork.
I would rather go with what people were actually calling 'bitcoin' or 'ethereum' the day after any forking of the chain. I don't think it's reasonable to say that we're on a fork of a fork of a fork of a fork of a fork of a fork of bitcoin or ethereum.
> ETC is not a fork and it wasn't 'made' by anyone. It is just the original, unaltered transaction history of Ethereum. ETH was 'made' by introducing a hard-fork and altering parts of Ethereum's transaction history, diverging from prior consensus rules.
All the blocks after the divergence point were made by someone. The original transaction history is part of both.
And I would say that using consensus to change the consensus rules is part of the consensus rules.
> It's not tautological to say that you get 100% or 0%.
I was referring to your argument that "popularity contests don't think much of unpopular (i.e. minority) forks".
> The thing called 'ethereum' at the time has always been the winner.
The thing called 'Ethereum' at the time is now called ETC, and it lost.
> every single update is a fork, and every single time two blocks are mined at the same time it's a fork.
Only updates which change consensus are considered forks. Two blocks mined at the same time is not a fork (because consensus rules have not been changed), but a temporary chain split. Forks can sometimes cause permanent splits though (ETH, Bitcoin alts) when both sides retain hashrate to mine/support it.
> I was referring to your argument that "popularity contests don't think much of unpopular (i.e. minority) forks".
So am I, but you're interpreting my words wrong.
I wasn't saying that "in the realm of blockchain forking, the less popular one will be less popular", which would be a tautology. I was saying that "in the realm of blockchain forking, the less popular one will get demolished".
> The thing called 'Ethereum' at the time is now called ETC, and it lost.
Right before the fork, they were both called Ethereum.
Right after the fork, ETH was called Ethereum. And it won.
"at the time" and "the day after" are supposed to be roughly synonyms in my comment there.
And you didn't address the issue of how validation logic has changed many other times on coins, with the original continuing to mine for a while before sputtering out and dying. Is "Bitcoin" dead? Nobody is running the original logic anymore.
> I was saying that "in the realm of blockchain forking, the less popular one will get demolished".
Sorry I still find myself reading your words the same way. If you can only determine if a fork was "demolished" by rating its popularity (i.e. market value or hashrate), you are back to the tautology:
"The most popular fork wins" seems to be your argument. But "winning" here means having higher market cap/more hashrate. And being more popular also means having higher market cap/more hashrate.
Let's say a fresh fork happens causing a chain split. Both chains have 50% market cap and hashrate. How do you measure "popularity" at this point so that your prediction of "the most popular fork wins" is actually helpful? ETH forking away from ETC was a very controversial topic at the time.
> And you didn't address the issue of how validation logic has changed many other times on coins, with the original continuing to mine for a while before sputtering out and dying.
This is exactly what we are discussing with ETC. ETH changed consensus and the original (ETC) continues to mine for a while before sputtering out (it is not dead... yet).
> Is "Bitcoin" dead? Nobody is running the original logic anymore.
Not sure what this is supposed to support? There are still Bitcoin Qt 0.8.x nodes on the network which are ~10 years old.
> Let's say a fresh fork happens causing a chain split. Both chains have 50% market cap and hashrate. How do you measure "popularity" at this point so that your prediction of "the most popular fork wins" is actually helpful? ETH forking away from ETC was a very controversial topic at the time.
You have to go forward into the future to evaluate the claim. The claim is that if you look at things... let's say 5 years later, one of those is going to be dead. If you find that one of the forks still has, for example, 20% as much hash power as the other, you have disproved the claim. So it's not a tautology.
Another way to formulate that part of the claim would be to say something like "the half-life of a fork is only X amount of time, so it's a very bad idea to try to pull one off".
> There are still Bitcoin Qt 0.8.x nodes on the network which are ~10 years old.
There's no way they're fully compatible. At the very least, consider the block that generated 184 billion bitcoins and forced an immediate code change. And look at all of these: https://blog.bitmex.com/bitcoins-consensus-forks/
> The claim is that [...] one of those [forks] is going to be dead.
This is a much weaker claim than your original one (~"the market values popularity over technical features") and a much less controversial one.
> something like "the half-life of a fork is only X amount of time, so it's a very bad idea to try to pull one off".
> If you find that one of the forks still has, for example, 20% as much hash power as the other, you have disproved the claim.
Again, ETH was forked off the original Ethereum chain and won (it has ~98% hashrate) so it is not necessarily a bad idea to attempt one.
> There's no way [~10 year old Bitcoin 0.8.x nodes] are fully compatible.
You can download v0.8.1 here (https://github.com/bitcoin/bitcoin/releases/tag/v0.8.1) and it should be able to fully sync the Bitcoin blockchain (i.e. including blocks mined today). If you use older clients for mining you might generate blocks that are ignored by other miners but your old client will still keep in sync and follow the same correct chain. (Something that is not true for ETH or Bitcoin forks).
According to your link you can apparently sync even older clients (e.g. v0.3.24) if you take care of non-consensus related bug fixes that would otherwise crash your client.
> This is much weaker claim than your original one (~"the market values popularity over technical features") and a much less controversial one.
You keep putting different versions of words in my mouth, but I would endorse that formulation of the "original". Popularity will crush technical features in a pretty short timeframe. The technical features have an opportunity to pull people in, but it is very limited and then that version will be pushed toward 100% or 0%. It is not sustainable to split the market, unlike most markets. You can compare different markets this way, seeing if it's true or false for that market, so it's definitely not a truism.
> Again, ETH was forked off the original Ethereum chain and won (it has ~98% hashrate) so it is not necessarily a bad idea to attempt one.
It means it's a bad idea to try to split things, because even in that case splitting didn't work.
So you'd better be damn sure that a powerful majority will want your version. You can't rely on merely many people wanting it.
Also, you don't think that having it be named Ethereum is a bit of a special case?
If you fork bitcoin or ethereum, you won't be taking the name with you.
Forks are arbitrary. There's no need to literally gather 51% consensus; it just means users of Fork A are in an entirely different universe than users of Fork B, and that's fine.
For example when BTC split into BTC and BCH, for me the ticker for Bitcoin went from BTC to BCH because BCH was the continuation of "real bitcoin". And the absurdly-small-blocks crowd obviously felt the opposite. And that's completely fine.
One of the most pernicious beliefs of the BTC crowd is that a hard fork is some evil, unforgivable thing. It shows a total misunderstanding of how free software works.
> One of the most pernicious beliefs of the BTC crowd is that a hard fork is some evil, unforgivable thing. It shows a total misunderstanding of how free software works.
It's not about software, it's about network effects when the only thing giving a coin value is those network effects. Nobody cares if you fork the software. Diverse software is encouraged!
When the hash rate only yields about one block per month, I don't think we will have enough hashing power, probabilistically, to hit the ten-minute block target. Verification of the hash to validate and reach consensus on a block will also be at roughly one per month, for what it's worth.
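A quick sanity check on that intuition (my own numbers): block discovery is roughly a Poisson process, so at an average of one block per month the chance of a block landing inside any particular ten-minute window is tiny.

    import math

    mean_interval_min = 30 * 24 * 60  # assumed average of ~1 block per month, in minutes
    p_within_10_min = 1 - math.exp(-10 / mean_interval_min)
    print(f"{p_within_10_min:.5f}")  # ~0.00023, i.e. about a 0.02% chance per 10-minute window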
> Ethical note: No specific license was required for this work. The duration of any single experiment was so short that each crab never reached dangerous level, that the crabs were kept in comfortable condition, and that after all experiments the crabs were released to their natural habitats. Furthermore, on visual inspection, no crabs appeared to have been injured or adversely affected by the experiments.
The way this ethical note is phrased seems to suggest the experimenters basically went to the beach and picked up a bunch of crabs and then released them again, as opposed to any sort of standardized procurement process, which I find amusing.
It's particularly amusing given that, once the experiment was done, it would have been perfectly acceptable to toss them all into a boiling pot of water at a local restaurant.
As a matter of fact, loosely construed, that might be what they mean by "natural habitat"
How would one prove that? It's easy to show crabs can make logic gates - just do it. How would you prove wombats can't ever implement logic gates? Maybe you just used the wrong treats, and peanut butter snacks would make them motivated enough to train.
But again, how do you prove wombats + peanut butter won't ever make logic gates? Maybe they will if the temperature and humidity are at certain levels? Or maybe different levels of lights or sound? It's really, really hard to account for all variables with a negative result.
> It's easy to show crabs can make logic gates - just do it
That's just an artifact of the way we reason about the world. We take as an assumption that induction works for "positive" events (we showed that these crabs can make logic gates) and can't accept the same evidence when it pertains to "negative" events (we showed that crab movements _are not_ uniformly random -- no you didn't; what if you didn't feed the crabs the right kind of peanut butter?).
Yes, but the hypothesis in the paper (soldier crabs can implement logic gates) is _also_ of the latter form, despite its positive construction. Being usable for a logic gate implies an element of control and repeatability, and unlike a statement like "the sky can be blue" a point observation doesn't suffice as proof.
They won't be annoyed if they use language normally. But if someone is trying to be prescriptivist, they deserve some annoyance. And I didn't say it was clever.
What's more complex: the logical gate made of crabs or an individual crab? What's more complex, a corporation or a person? A worm or a single cell in a worm?
It would take an AWFUL lot of crabs acting as logic gates to model a crab brain or an entire crab. A single logic gate by itself is clearly less complex than a soldier crab.
But if you really could model a crab brain perfectly in a Turing machine (not clear you can), then let's make a crab brain out of crab logic gates, and then use those mega-crab-brains to make more mega logic gates, and put together a bunch of those to... you'd probably run out of crabs.
Although now I question my original statement -- is a single logic gate made out of crabs clearly less complex than a crab? I mean, it's made out of multiple crabs, how can it be less complex than one crab?
As a software engineer I'm so used to thinking in abstractions that I just sealed it off as "a logic gate, don't look below the covers".
A logic gate made of crabs is a "leaky abstraction" when the crabs start dying, need to be fed, etc.
IANA physicist or neuroscientist but I think one open question is whether mechanistic causality is sufficient to describe/model living beings, or even non-life.
See Wheeler and Feynman's interpretation of electrodynamics for instance, which rejects time-reversal symmetry breaking and proposes that both the retarded and advanced solutions of Maxwell's equations are metaphysically meaningful. I am not aware of any consensus on how causality actually works.
Any physicists reading this, yell at me if I'm wrong.
I'm not aware of any data that says either that the brain relies on quantum effects, that quantum effects are needed to solve the binding problem, or even for sure that the binding problem is a problem that needs solving.
I also know of no such data, and only arguments that are not commonly accepted. Anyhow, what prevents a TM from simulating a system 'at the quantum level'? A TM has infinite storage by definition, so the state space is not a limiter.
Lots of reasons. You need energy to store the information. You wouldn't be gaining anything, only losing efficiency. The quantum level is the "lowest" level of existence, there's nothing "lower" to use to simulate it. Furthermore, "random" is a critical concept in QM, and one can never achieve true random with a TM. What's the digital algorithm to generate a truly random number?
Edit: in a way simulating a crab with a bunch of crabs would be kind of like simulating energy dynamics/QM with energy
A Turing machine, that is, the mathematical formalism, can definitely simulate quantum mechanics.
A classical Turing machine simulating a quantum Turing machine, or another model of quantum computation, would, AIUI, incur a super-polynomial slowdown (maybe exponential? My impression is that it might not be known to be exponential, but it's at worst basically exponential).
The randomness is not an issue. Just don’t add any wave function collapse, or just list the probability of each outcome.
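A minimal sketch of what that looks like (my own illustration, not from the paper or thread): the classical program just tracks the full state vector and lists outcome probabilities instead of collapsing anything. The cost is the catch mentioned above: the vector has 2^n amplitudes.

    import numpy as np

    # Build a Bell state classically: Hadamard on qubit 0, then CNOT (control = qubit 0).
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    I = np.eye(2)
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]])

    state = np.zeros(4)
    state[0] = 1.0                   # start in |00>
    state = np.kron(H, I) @ state    # Hadamard on the first qubit
    state = CNOT @ state             # entangle the two qubits

    # No wave function collapse: just list the probability of each outcome.
    for idx, amp in enumerate(state):
        print(f"|{idx:02b}>: {abs(amp) ** 2:.2f}")  # 0.50 for |00> and |11>, 0.00 for the rest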
This is just not the case, even with infinite energy and time. There are properties of quantum Turing machines that are not reproducible with classical Turing machines.
That abstract appears to be referring to the superpolynomial slowdown in simulating one, which I already pointed out.
(If there is more than the abstract there, the scrolling isn’t working on my phone.)
There is no function that a QTM can compute that a TM cannot. But a QTM can compute some functions much faster.
Or, as phrased in the abstract “these do not include the computation of any non-recursive function”.
Edit: of course, there are things that can be done with QM that can't be with a TM (such as the entangled multi-party prover/verifier setup), but none of them are “compute this function (with no limit on how long it takes)” or “simulate this situation (with no limit on how long it takes)”
Quantum may not be the lowest level: Nobel-prize-winning physicist Gerard 't Hooft speculates that a more orderly layer underlies the quantum layer.
Otherwise, I see no reason why a thing cannot be used to simulate a thing of the same kind: Computers are used to simulate other computers on a daily basis.
Your left and right visual fields are united in one "space". Classical computation cannot achieve such unity. Digital computation concerns discrete parts changing in discrete steps, which is not like a field at all.
Sure they can; self-driving cars do it with a half dozen or more visual fields combining into a single model of the world around them.
Also, your visual system is fairly discrete, from the receptive fields of individual retinal cells, to the granularity of the optic nerves, to how the bottom end of the visual cortex samples all of that.
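A toy illustration of that point (mine, not anything an actual car ships): discrete per-camera detections, reported in a shared world frame, merge into one unified map with nothing more exotic than a set union.

    # Each "visual field" is a set of occupied grid cells in a common world frame (made-up data).
    left_field = {(0, 0), (0, 1), (1, 1)}
    right_field = {(1, 1), (2, 1), (2, 2)}

    world_model = left_field | right_field  # one model spanning both fields
    print(sorted(world_model))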
Self-driving cars don't experience visual qualia. A model is not the same as an actual experience. There's no binding-problem to solve with self-driving cars because there's no attempts to make them conscious. It's a completely different thing.
There's no reason to believe qualia arise in a given discrete computation. Why would they? In what steps in the algorithm do qualia arise and why, what characteristics do they have, what causal roles do they play, etc.
It's completely self-evident we experience qualia. It's what our experiences are made of. There wouldn't be anything to experience or discuss if we didn't. The brain is not a deliberate, man-made object like a computer is, which is why it can possess these properties without us being aware of how (they were selected for via evolution), while a computer cannot.
The input system is discrete, but the end results, our conscious experiences of our world-simulations (made up of visual qualia), are not discrete. They are unified.
An example of how this could be implemented (not saying this is the case, just one of several possibilities):
But that whole argument comes from the a priori idea that you can't build a singular model of the world from discrete inputs. There's no evidence or even logical chain for that conclusion.
And even the quantum world is discrete. That's why it's called a 'quantum'. There are fixed-size quantities moving through the field.
No, it's not. The argument rests on the fact that it can't be solely discrete across the entire system. The discrete information needs to "come-together" in a non-discrete way, e.g. something like quantum coherence. This is the binding-problem in a nutshell.
All quanta arise from the wave-equations and can be modeled with continuous mathematics.
> The argument rests on the fact that it can't be solely discrete across the entire system. The discrete information needs to "come-together" in a non-discrete way, e.g. something like quantum coherence.
That's just conjecture though. There's no real evidence that a system of discrete components can't work together to create a single unified system. Stating that they can't with no evidence is an a priori conclusion.
Unless you have a compelling reason for why brains are a unique type of matter whose functions are partly derived from quantum phenomena (more so than other things, that is) then I see no reason why this is true. Or, if it is true despite any lack of uniqueness by brains, then nothing can be simulated because all simulation would be restricted by the quantum barrier.
The law of large numbers lets you abstract in such a way that the group can appear simpler than the individual unit. Of course, to fully simulate it across every single possible scenario, one would need to model all the individual units in order to accurately model the group, and so technically the group is more complex to fully model.
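A quick numeric illustration of that first point (my own, just to make it concrete): individual units are erratic, but the average over a large group barely moves from run to run.

    import random
    import statistics

    random.seed(0)

    def unit():
        return random.uniform(0, 1)  # one noisy, unpredictable unit

    # Five independent groups of 10,000 units each; the group means cluster tightly near 0.5.
    groups = [statistics.mean(unit() for _ in range(10_000)) for _ in range(5)]
    print([round(g, 3) for g in groups])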
Unclear if the question is genuine, but I would say that each level is capable of performing computations (and/or acting in the world) in ways which are inaccessible to other layers.
I know saying 'things are hard' is an obvious conclusion, but that doesn't make it wrong!
'Soldier crab' is a name attached to several unrelated crustaceans. The crustaceans studied in the paper are a variety of soldier crabs that are, in fact, true crabs.
I clicked the link without even looking at the url, hoping for maybe an HTML5 version of the logic gates where you could play with the number of crabs and see a real time simulation of the resulting logic gate.