I'm sure if you _just_ use the right experimental language extensions and strangle your program under a mountain of indecipherable nonsense, performing arithmetic at the type-level, you can trivially prevent that though!
This is the greatest sarcastic comment I've seen all year. You nailed the Haskell zealot to a tee. Meanwhile the Forth zealot is getting IBS just hearing about the code complexity of Haskell and the myriad language extensions needed to bring it closer to actual mathematics.
Don't get me wrong, Haskell is really, really cool and some brilliant coders use it. One day I'll actually make it through the Haskell book.
See, the whole type-level versus value-level thing is a false dichotomy; that's the whole idea behind dependent types. There shouldn't be a difference between a type and a value.
A (data) type is a set, and a value is an element in that set. So you're arguing that set versus element is a false dichotomy, which is preposterous. Even a set which contains just one element is still distinct from that element itself.
Rather, the whole idea behind dependent types (if dependent types can be glibly summarized by a "whole idea") is that logical predicates give rise to sets. E.g. "all integers that are squares" or "prime numbers". Or all points <x, y> such that x^2 + y^2 <= r^2.
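For concreteness, here's a watered-down rendering of that last example in plain GHC Haskell, as a smart constructor rather than a true dependent type; the names are mine and this is only a sketch of the "predicate carves out a set" idea:

```haskell
-- "All points <x, y> such that x^2 + y^2 <= r^2", approximated with a
-- smart constructor: the predicate is checked at run time, and InDisk
-- only ever holds points that passed the check. (A real dependent type
-- could carry the radius and the proof at the type level.)
newtype InDisk = InDisk (Double, Double)
  deriving (Show)

mkInDisk :: Double -> (Double, Double) -> Maybe InDisk
mkInDisk r (x, y)
  | x * x + y * y <= r * r = Just (InDisk (x, y))
  | otherwise              = Nothing
```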
A proposition is not the same thing as the domain of that proposition's parameters; but it does denote a subset of that domain.
A dependent type can contain a large number of values, and that will often be the case.
If the program has to calculate a value that is a member of some dependent type containing billions of values, and only one value is correct, there is the risk that we calculate a wrong value which is nevertheless still in that set. The dependent type will not save our butt in that case.
We have to isolate the interesting value in a dependent type which contains only that value; "correct value" then becomes synonymous with "element of the set that contains just the correct value".
Still, that doesn't mean that the set is the same thing as that value.
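To make that concrete, here is a rough GHC-flavoured sketch (my own names, and only an approximation of real dependent types): a length-indexed vector type has an enormous number of inhabitants for any given length, whereas a singleton type pins down exactly one value.

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

data Nat = Z | S Nat

-- Indexed by its length only: a type like Vec ('S ('S 'Z)) Int still has
-- an enormous number of inhabitants, so the type alone cannot tell a
-- correct result from an incorrect one.
data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- A singleton type: for each n there is exactly one value of type SNat n,
-- which is the kind of type you need if "well-typed" is to mean "correct".
data SNat (n :: Nat) where
  SZ :: SNat 'Z
  SS :: SNat n -> SNat ('S n)
```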
A big problem with dependent types is that if the propositions on values which define types can make use of the full programming language, then the dependent type system is Turing-complete. That means the type checker has its own run-time; the idea that the program can always be type-checked at compile time, in a fixed amount of time, goes out the window. Not to mention that a program can encode a self-reference like "my type structure is not correct": if that compiles, it must be true, but then the program should not have compiled ...
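GHC is an easy place to see this: with a few extensions the type checker evaluates recursive type functions itself, and with UndecidableInstances it stops even trying to prove that they terminate. A minimal sketch (not the only way to demonstrate it):

```haskell
{-# LANGUAGE DataKinds, TypeFamilies, UndecidableInstances #-}

data Nat = Z | S Nat

-- The type checker evaluates Add by rewriting, i.e. it computes at
-- compile time.
type family Add (n :: Nat) (m :: Nat) :: Nat where
  Add 'Z     m = m
  Add ('S n) m = 'S (Add n m)

-- Nothing stops a type-level definition from recursing forever; asking
-- GHC to reduce Loop just runs it into its reduction-depth limit.
type family Loop :: Nat where
  Loop = 'S Loop
```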
> A (data) type is a set, and a value is an element in that set.
Nope, a type is not a set. Type theories are separate formal systems designed to be (among other things) computationally aware alternatives to set theory, and they are defined by "terms", "types" and "rewrite rules". The whole point of a dependent type theory is to be able to encode things like sets and propositions in the same formal system.
The rest of your comment is wildly off base because of this misunderstanding.
What the GP was probably referring to is the perspective on dependent types from the lambda cube, where you have three different extensions to the simply typed lambda calculus's regular functions from value to value:
1. Parametric polymorphism, which gives functions from type to value
2. Type operators, which give functions from type to type
3. Dependent types, which give functions from *value to type*
Since dependent types allow functions from value to type, it kinda erases the artificial barrier between type-level and value-level. In reality it doesn't really erase it so much as replace it with an infinite hierarchy of levels to avoid Girard's paradox. A rough sketch of all three corners in GHC Haskell is below.
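The sketch (GHC only approximates the third corner by promoting values to the type level, so treat this as an illustration, not the real thing):

```haskell
{-# LANGUAGE DataKinds, ExplicitForAll, GADTs, KindSignatures #-}

-- 1. Parametric polymorphism: a term abstracted over a type.
identity :: forall a. a -> a
identity x = x

-- 2. A type operator: a type abstracted over a type.
newtype Pair a = Pair (a, a)

-- 3. "Value to type", approximated: DataKinds promotes Nat's value
--    constructors to the type level, so Vec's length index is (a
--    promoted copy of) a value.
data Nat = Z | S Nat

data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a
```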
No, they really should be, since (1) it gives more guarantees about your program at compile time, (2) it lets you express more assumptions about your program to your compiler, and (3) it does so for free.
Their (3) is the only debatable point (aside from the inherent subjectivity of statements involving the word "should", which applies equally to you), since dependently typed languages tend to put a lot of the burden of proof onto the programmer. The other points are practically true by definition.
If types and values were identical, there would be no need for a runtime in any program, and that is simply not possible. There are many things which can only be modeled as values and not as types. (Randomness, for instance.) Dependently typed languages allow a little bit of value-level feedback into the type system, but they certainly do not eliminate the distinction.
And certainly none of this is free. Indirection has costs, and trying to model the entire behavior of a program in a compile-time system is going to have costs. Some of those costs will push solvable problems into an unsolvable state, and then things break down. (For another example of this: software can try to emulate hardware, but there are things that hardware can do that software cannot and vice versa, so it wouldn't make sense to say they are identical.)
>The other points are practically true by definition.
"More guarantees" and "more assumptions" are not definable terms. You can't count the number of theorems that need to be true for some model to be accurate.
You seem to be operating from an overly narrow perspective. Type theories were invented for mathematical logic and were only co-opted for programming later, so it doesn't make sense to try to prove things about type systems by referring to a runtime. The rest of your objections show similar misunderstandings of the question. While I said myself that a dependent type system is not free, its costs have nothing to do with "indirection", whether you mean type-system complication or pointer chasing (the latter is more likely to be reduced). Randomness is about computational purity, which is nearly orthogonal to the expressiveness of your type system. I'm not completely sure what you're trying to say about hardware, but I'm pretty sure it's a red herring, at least from the theoretical perspective relevant to characterizing dependent types.
I don't know what you're trying to say. The point I am trying to refute is "the whole type-level and value-level is a false dichotomy", and the existence of dependent types does not support that claim; they only provide a narrow domain where the distinction is blurred.
> Randomness is about computational purity, which is nearly orthogonal to the expressiveness of your type system.
If types and values are to be considered identical, then random types should be just as useful as random values. They are not. That alone should indicate that values and types have a well-motivated distinction.
Additionally, dependent types offer no additional capabilities, only convenience. Having a type defined in terms of a value communicates nothing more to a compiler than a functionally equivalent assertion would, though it is certainly easier to use and work with in some situations.
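For example (a sketch in GHC-flavoured Haskell, with my own names): a length index on the argument and a hand-written assertion communicate the same non-emptiness fact, the former just gets checked earlier and mechanically.

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

data Nat = Z | S Nat

data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- Non-emptiness carried in the type: misuse is rejected at compile time.
safeHead :: Vec ('S n) a -> a
safeHead (VCons x _) = x

-- The "functionally equivalent assertion": the same information, checked
-- by hand at run time instead.
headOrError :: [a] -> a
headOrError (x:_) = x
headOrError []    = error "assertion failed: expected a non-empty list"
```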
I'm not trying to say anything about dependent types here. I'm trying to say that the goal of producing a language which makes no distinction between type and value is neither useful nor possible.
To be quite blunt, what I'm saying is that you don't have enough context to have an informed opinion on the relationship between dependent types and the distinction between types and values. It's pretty clear you don't have a clear idea of what a dependent type system actually is. People who study type theory and logic are pretty much on the same page about this.