All you need is λ, part one: booleans (antitypical.com)
133 points by azhenley on Sept 26, 2020 | 31 comments



The first part of the first paragraph is wrong:

> Nearly a century ago, Alonzo Church invented the simple, elegant, and yet elusive lambda calculus. Along with Alan Turing, he then proved the Church-Turing thesis: that anything computable with a Turing machine can also be computed in the lambda calculus.

The Church-Turing thesis is not something one can prove. It's more like an intuition and/or definition we have. It states that anything that is computable can be computed with a Turing machine. See [1].

"Anything computable with a Turing machine can also be computed in the lambda calculus" this is true but it not the Church-Turing thesis and its called a Turing-equivalence. This was also not proved by Church, but by Turing in an appendix to his paper [2].

About a computation model being Turing-equivalent: the fact that most reasonable computation models we came up with were proven to be Turing-equivalent is one of the reasons we believe in the Church-Turing thesis.

[1] https://plato.stanford.edu/entries/church-turing/

[2] https://link.springer.com/article/10.1007/s10506-017-9200-2


Another statement of the Church-Turing thesis is that the functions computable by a Turing machine (or lambda-computable, or general recursive) make a good proxy for the intuitive concept of computability.

From this perspective, "thesis" is a misnomer: it just happens to be a useful definition for "computability".

That's not to say that there aren't edge cases where intuitive "computability" and Turing computability disagree; for most people, any number of oracle machine constructions will fall under the former but not the latter. We've just chosen to define the term in a way that's "easy" to define somewhat rigorously and has useful properties.

It's kind of like if we were to call the usual definition of the reals "Cauchy's thesis" or some such.


The Church-Turing proposition carries an implicit challenge: find an algorithm to evaluate a function which is not Turing computable. Because it can be challenged in this way, it's not just a definition (a mere stipulation of how to use the word "computable").


Back in 200 B.C. we had something called the "Euclid-Archimedes Thesis" (okay, it wasn't literally called this), which stated that every measurable curve was either a line or a circle. Measuring the length of an arbitrary irregular curve was held to be simply impossible. And this thesis went unchallenged for an amazing 1,800 years, until the 17th century, when several mathematicians constructed measures for various irregular curves.

Be very skeptical about "theses" that are handwavy about a frontier subject (pure, abstract geometry was just as much of a frontier subject then as abstract machines are now) and suspiciously lack any proof.


It's pretty easy to come up with "algorithms" in the intuitive sense that are not Turing computable; just violate any of the finiteness constraints.

The fact that you might object to considering such procedures algorithms rests on the fact that the "thesis" is providing a definition.


> The fact that you might object to considering such procedures algorithms rests on the fact that the "thesis" is providing a definition.

Unfortunately, many people don't seem to understand this.

An interesting paper related to this issue is The Myth of Hypercomputation [1].

Basically, it is easy to compute something that is uncomputable by using uncomputable inputs.

[1] https://link.springer.com/chapter/10.1007/978-3-662-05642-4_...


Thank you for the pointer to this. Searching for the paper led me to a few things. If anyone would like to read the paper mentioned in OP's post, you can find it here: https://www.researchgate.net/publication/243784599_The_Myth_...

This is a response to "The Myth of Hypercomputation" (entitled "The Myth of 'The Myth of Hypercomputation'"): http://kryten.mm.rpi.edu/PRES/TURKUHYPER/NSG_SB_MoMoH_presen...

This is an interesting blog post that explains a bit about both positions, without addressing the rebuttal to Davis' paper: https://paulcockshott.wordpress.com/2018/02/13/no-mysteries-...


Turing machines and Turing computability don't require any finiteness constraints. Both unbounded time and unbounded storage space are expected for a universal Turing machine. We obviously have never built one.


There are many finiteness constraints. First, there is no infinite time [1]. The most famous uncomputable problem is the halting problem (given an algorithm A and an input x, will A stop on x?). Although we have an infinite tape, since the time of a computation is finite, the space it uses is finite as well. Second, we describe a Turing machine using finite sets.

[1] Actually, as described by Turing, a Turing machine never stops, but this is only because he was interested in computing real numbers (for example, pi). But even using an original Turing machine, we do not "compute" pi; we only "compute" pi to some precision.
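(A side note, not from the comment above: the classic diagonal argument for why no halting decider can exist is short enough to sketch in Python. The halts function here is a hypothetical oracle; the whole point is that no correct implementation of it can be written.)

    # Hypothetical decider: assume halts(prog, arg) always returns True or False,
    # correctly reporting whether prog(arg) eventually halts. This stub only
    # stands in for that assumption; no such total, correct function can exist.
    def halts(prog, arg):
        raise NotImplementedError("assumed oracle; cannot actually exist")

    def paradox(prog):
        # If prog would halt when run on itself, loop forever; otherwise halt.
        if halts(prog, prog):
            while True:
                pass

    # paradox(paradox) halts if and only if it does not halt: contradiction.
    # So the assumed halts() cannot exist, no matter how much time we allow.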


In a way, anything discrete is a logical assertion that exists purely in imagination. Base reality is continuous in every way. It’s imaginary human reality to separate things apart from the whole. Physics and philosophy have struggled with this over atom vs non atom view of “base reality” - interesting to see how it shows itself in many different ways. Anything quantized runs into issues with consistency, as the quantity only works in an imaginary model. No model is sufficient as it has a fixed size of requirements to be a model.


I think the jury is still out on whether space is quantized.

Maybe wave function values can’t be quantized, but that’s not “in every way”.

And I’m pretty sure the different eigenstates for spin operator in a given direction are distinct and not a continuum?


(I’ve also been wondering for a few years whether it is possible to make a version of quantum mechanics in which the wavefunctions take values from a (very large) finite field instead of the complex numbers. My impression so far is “probably not”, because you need exponentials, and the multiplicative group of a finite field doesn’t have a nontrivial group homomorphism to/from the additive group. Also, the simple way of defining differentiation for functions over finite fields has 0 as the only eigenvalue of differentiation, and I’m thinking that the other definition I made up for it does too. I think I saw some definition of differentiation for finite fields which maybe does have a non-zero eigenvalue; yeah, found it: https://arxiv.org/abs/1501.07502 . But this doesn’t have the “derivative” of a constant be 0, so I’m not sure that it would be suitable either.

So my guess is that one probably can’t make quantum mechanics work with wave functions whose values all come from a finite field. Still going to keep looking a bit more, though.)
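(For the curious, the homomorphism obstruction mentioned above is standard group theory; a quick sketch, not from the parent comment:)

    Let $q = p^n$. Any group homomorphism
    $\varphi : (\mathbb{F}_q, +) \to (\mathbb{F}_q^\times, \cdot)$
    satisfies $\varphi(x)^p = \varphi(p \cdot x) = \varphi(0) = 1$ for every $x$,
    because the additive group has exponent $p$. But
    $|\mathbb{F}_q^\times| = q - 1$ is coprime to $p$, so $y^p = 1$ forces $y = 1$,
    and $\varphi$ must be trivial. Hence there is no finite-field analogue of
    $x \mapsto e^{x}$ turning sums into products.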


Maybe your intuition provides an analogous but different conclusion that may be more suitable to novel conceptual models of reality. Pursue your intuition, I’d say. I’m on a similar path.

Oddly, I find music to be a good hint of reality lately. I wasn’t too interested in music theory before, but now that I’m digging into it, the geometric spaces it shadows are inspiring. Projective geometry allows continuity. Axiomatically it also allows arithmetic, among other things. Very intriguing.


The unfortunate truth is that once you pick apart the definitions of deep things like logic and math, you see that we don’t even all agree on them. We like to think that because we can describe something with math it’s inherently true, but the answer is pretty much always up for debate.

We found flaws in basic set theory and had to move to the updated Zermelo-Fraenkel set theory because of them. People try to create new foundations of math all the time, for varying reasons.

Anyway, I see so many people speak with such conviction about truth and objectivity, that it’s refreshing to see someone show humility about what we actually know about our universe. Lots is still up for debate, is the sad truth.


> In a way, anything discrete is a logical assertion that exists purely in imagination. Base reality is continuous in every way.

This is a strong statement. What about any thing that can be counted? Apples, sheep, atoms in a molecule, permutations of objects? Certainly these are discrete quantities?


It’s also an assertion that defies quantum physics as we know it. The whole point is that things are discrete at the smallest levels.


Just because you can recognize something as a unit doesn’t mean that it isn’t continuous fundamentally.

If you look at your hand you may count your fingers but they’re still part of your body as a whole. Any “border” is an arbitrary limit that is fully in the domain of human imagination. Our senses of the world allow us to conceptualize arbitrary borders and units of a continuum.

Base reality is continuous, absolutely nothing exists without the whole. That is to say, only the whole exists. Anything in the finite will never be adequate to describe true reality.

Physics actually does understand this quite a bit. Much of theoretical physics is divorced from reality to the point where quantum physics itself is nowhere near describing reality. It may help describe certain properties based on our observations but many postulations are dependent on phenomena that are entirely non physical.

There are factions within physics and the “atomic mathematical model centric” faction has the popular voice. There exists within physics those who do not see the universe as atomic. You don’t hear much about them today because information is gated.


Even if something is continuous, that doesn’t imply that all borders within it are arbitrary. And even among borders that are somewhat arbitrary, not all of them are equally arbitrary. Whether a metal is iron or lead is a kind of question that aligns more with the workings of the universe than the question of whether the current month is September or October.

The goal of cleaving reality at the joints is not entirely hopeless.

In a bimodal distribution, there is a point which is a local minimum for the probability density which lies between two local maxima of probability density.

As to whether an infinite amount of classical information is needed to perfectly describe the behavior of physical objects, I am agnostic. As to whether a bounded-in-size physical object can be used to store an unbounded amount of classical information, I’m fairly confident that this is not possible (as a thing that happens to be true of the universe, not something required by logic alone).

(I am not arguing that “whether a sample is entirely lead” is a perfectly discrete question, as, I hear that it is thought that on extremely extremely long time scales that quantum tunneling may cause collections of lighter-than-iron elements to combine and form iron, and so even in a sample of “pure” [some element lighter than iron], maybe there might basically immediately be some minuscule component of the wave function in the “these atoms combined into some heavier element” direction? I’m not sure. I don’t know whether that would decohere or whatever immediately and therefore with high probability go back to a state of purely the one element? Idk. What I am claiming is that even if there is a continuous path connecting states we would call element A with those we would call element Z, it is nonetheless natural and _Correct_ to make a distinction between different elements, and these distinctions are more natural and less arbitrary than other distinctions we might make.)


Yeah I appreciate your thoughts in this realm. It’s balanced. Good to think about. I am of the perspective that it’s all a whole and interactions within a whole create observational effects. I don’t think there’s an absolute answer as much as the consideration of being able to maintain multiple perspectives of thought as equally valid depending on the need.

Distinction is almost strictly a utility of communication. But that could be expanded into language and mind itself and gets meta quickly.

Maybe my poking is more related to an irk that language itself being used is in the context of finality and truth. Every sentence is a statement of absolute truth. No wonder everyone argues endlessly over what is.

Again. Appreciate the reply.


My understanding is that the halting problem is not computable even given infinite time.

It might be possible to show that with some degree of infinity the halting problem is solvable, but giving yourself infinite tape and time emphatically does not solve the problem.


Just last night I found this excellent introduction to the lambda calculus by Gabriel Lebec [1]. It's one of the best I've come across: it briefly covers the history, builds up combinators (using Smullyan's names) leading to the Church encodings, and gives JavaScript equivalents (or approximations when an exact one isn't possible). The slides themselves are also very aesthetically pleasing; Gabriel has a good sense of colour and design.

[1] https://youtu.be/3VQ382QG-y4


You're right about the aesthetically pleasing slides. I usually find slides a distraction. Here they're pure elucidation.


see also John Tromp's 2012 entry to the IOCCC: https://www.ioccc.org/2012/tromp/hint.html

> This program celebrates the close connection between obfuscation and conciseness, by implementing the most concise language known, Binary Lambda Calculus (BLC).

> BLC was developed to make Algorithmic Information Theory, the theory of smallest programs, more concrete. It starts with the simplest model of computation, the lambda calculus, and adds the minimum amount of machinery to enable binary input and output.

> More specifically, it defines a universal machine, which, from an input stream of bits, parses the binary encoding of a lambda calculus term, applies that to the remainder of input (translated to a lazy list of booleans, which have a standard representation in lambda calculus), and translates the evaluated result back into a stream of bits to be output.

> Lambda is encoded as 00, application as 01, and the variable bound by the n'th enclosing lambda (denoted n in so-called De Bruijn notation) as 1^{n}0. That’s all there is to BLC!

> For example the encoding of lambda term S = \x \y \z (x z) (y z), with De Bruijn notation \ \ \ (3 1) (2 1), is 00 00 00 01 01 1110 10 01 110 10
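To see how mechanical the scheme is, here's a tiny sketch in Python (the term representation and function names are mine, not from the IOCCC entry):

    # Terms: ("lam", body), ("app", fun, arg), ("var", n) with 1-based De Bruijn indices.
    def blc(term):
        tag = term[0]
        if tag == "lam":                  # lambda       -> 00 <body>
            return "00" + blc(term[1])
        if tag == "app":                  # application  -> 01 <fun> <arg>
            return "01" + blc(term[1]) + blc(term[2])
        if tag == "var":                  # variable n   -> n ones, then a zero
            return "1" * term[1] + "0"
        raise ValueError("unknown tag: " + tag)

    # S = \x \y \z (x z) (y z), i.e. \ \ \ (3 1) (2 1) in De Bruijn notation
    S = ("lam", ("lam", ("lam",
        ("app", ("app", ("var", 3), ("var", 1)),
                ("app", ("var", 2), ("var", 1))))))
    print(blc(S))  # -> 00000001011110100111010, matching the example above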


After a few failed attempts to begin to understand the Lambda calculus, this is the one that finally "stuck" with me. Looking forward to future articles in this series!

This part:

> encoding a datatype with lambdas means representing the datatype as functions supporting all of its operations

was the key to making practical sense of its purpose.
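Concretely, for the booleans the article starts with, that idea looks something like this in Python (a sketch of the standard Church encoding, not the article's own code):

    # A Church boolean *is* its own if-then-else: it selects one of two arguments.
    true  = lambda t: lambda f: t    # pick the first branch
    false = lambda t: lambda f: f    # pick the second branch

    # The operations are just more functions built out of selection:
    not_ = lambda b: lambda t: lambda f: b(f)(t)
    and_ = lambda a: lambda b: a(b)(a)
    or_  = lambda a: lambda b: a(a)(b)

    # "Using" a boolean means applying it to the two possible outcomes:
    print(and_(true)(false)("yes")("no"))  # -> no
    print(or_(true)(false)("yes")("no"))   # -> yes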


On the same topic, Jim Weirich's 2012 talk "Y Not - Adventures in Functional Programming" on lambda calculus and the (little 'c') Y combinator is fantastic.

https://www.youtube.com/watch?v=FITJMJjASUs
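The heart of that talk, translated to a strict language, is roughly this sketch in Python (using the Z combinator rather than Y, since the plain Y combinator diverges under eager evaluation):

    # Z combinator: the Y combinator eta-expanded so it works with strict evaluation.
    Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

    # Anonymous recursion: a factorial that never refers to itself by name.
    fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
    print(fact(10))  # -> 3628800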


In roughly this vein, I wrote a brainfuck interpreter in the lambda calculus a while back, it was fun

https://github.com/MichaelBlume/lambdafuck


I just wanted to say that this project both horrifies and impresses me.


A friend of mine implemented a lambda evaluator that he used to teach various concepts in the lambda calculus (https://srtobi.github.io/lambda/docs/).

You can find examples here: https://github.com/SrTobi/lambda/blob/master/stddef/stddef.l...


Another approach, without closures, lexical syntax, or laziness, can be seen here: http://lambdaway.free.fr/lambdawalks/?view=fromroots2canopy . I don't know if it's still lambda calculus, but we can still have fun.


Another take on the same topic:

http://www.flownet.com/ron/lambda-calculus.html

Culminating in computing the factorial of 10 in pure lambda calculus on an actual computer in non-geologic time.
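For a taste of what that involves, here's an unofficial Python sketch with Church numerals (not Ron's implementation; see his page for the real thing). It reuses the strict Y-combinator trick (Z) mentioned elsewhere in the thread, and it really does take a noticeable moment to run, which is the joke behind "non-geologic time":

    # Church numerals: the number n is "apply f n times".
    zero = lambda f: lambda x: x
    succ = lambda n: lambda f: lambda x: f(n(f)(x))
    mult = lambda m: lambda n: lambda f: m(n(f))

    # Predecessor via the classic pairing trick: step (a, b) -> (b, b+1).
    pair = lambda a: lambda b: lambda s: s(a)(b)
    fst  = lambda p: p(lambda a: lambda b: a)
    snd  = lambda p: p(lambda a: lambda b: b)
    pred = lambda n: fst(n(lambda p: pair(snd(p))(succ(snd(p))))(pair(zero)(zero)))

    # Zero test, returning a Church boolean (\t \f t or \t \f f).
    is_zero = lambda n: n(lambda _: lambda t: lambda f: f)(lambda t: lambda f: t)

    # Z combinator for anonymous recursion under strict evaluation.
    Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

    one = succ(zero)
    # Branches are thunked so the recursive case isn't evaluated when n is zero.
    fact = Z(lambda rec: lambda n:
             is_zero(n)(lambda _: one)(lambda _: mult(n)(rec(pred(n))))(0))

    ten = zero
    for _ in range(10):
        ten = succ(ten)

    to_int = lambda n: n(lambda k: k + 1)(0)
    print(to_int(fact(ten)))  # -> 3628800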


"First define your notation. Then define your terms"

Shadowed? Is that like.. peter pan?



