There are many paradigms in programming, and each has its strengths. A purely functional approach à la Haskell is not the only way. Based on your comment, it would seem no one should use C/C++. Yet many do. It depends on what you want to achieve, what your abstraction budget is, your performance requirements, legacy code...
OCaml offers a pragmatic functional approach to programming. And now you are going to be able to have your OCaml code run in a truly parallel fashion on your multicore CPU.
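Roughly the kind of thing that becomes possible, as a minimal sketch assuming the multicore runtime's Domain module ships with the Domain.spawn / Domain.join shape currently proposed:
    (* sum 1..n with a plain loop; pure OCaml, no IO *)
    let sum n =
      let s = ref 0 in
      for i = 1 to n do s := !s + i done;
      !s

    let () =
      (* the spawned domain runs pure OCaml truly in parallel with the main domain *)
      let d = Domain.spawn (fun () -> sum 50_000_000) in
      let here = sum 50_000_000 in
      Printf.printf "%d\n" (here + Domain.join d)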
There are plans to add typing to effects in the future, though. (There is support for effects currently, but it's untyped and experimental.) When that happens you can track changes to state (which is a kind of "effect") if you want to...
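For reference, this is roughly what the experimental handlers look like in the development branches (module and function names such as Effect.Deep.try_with are whatever the branch exposes today and may change). Note that nothing in comp's type records that it performs Xchg, which is exactly what the planned typing work would add:
    open Effect
    open Effect.Deep

    (* declare an effect: ask the handler to exchange an int *)
    type _ Effect.t += Xchg : int -> int Effect.t

    let comp () = perform (Xchg 0) + perform (Xchg 1)

    let () =
      let result =
        try_with comp ()
          { effc = fun (type a) (eff : a Effect.t) ->
              match eff with
              | Xchg n -> Some (fun (k : (a, _) continuation) ->
                  (* the handler decides what the effect means: here, n + 10 *)
                  continue k (n + 10))
              | _ -> None }
      in
      (* prints 21: (0 + 10) + (1 + 10) *)
      Printf.printf "%d\n" result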
> Based on your comment, it would seem no one should use C/C++
Maybe; there are better options out there these days for parallel and concurrent programs. Parallel programming in C and C++ is extremely fraught for the very reasons the parent brings up. There are so many footguns, starting from the fact that the favorite debugging method of C programmers, printf(), is not even thread safe.
It’s so bad that Rust markets “fearless concurrency” as a feature, capitalizing on the recognition that the prevailing emotional state of a C or C++ dev writing concurrent or parallel programs is one of fear.
And the very thing that makes Rust concurrency fearless over C and C++ is that borrowing and mutability are explicitly tracked. As we enter a world where over a dozen CPU cores are the norm, we are learning what works and what doesn’t in writing programs for these machines, and integrating those learnings into new languages.
One detail that is usually left out of the fearless concurrency story is that it only works in-process, across threads; it does very little to help with distributed concurrency across multiple processes accessing shared data, possibly written in various languages.
Which in the age of micro-services is quite relevant as well.
Definitely better than other languages; it still doesn't spare one from actually thinking about the end-to-end overall system design.
I prefer printf because it works pretty much anywhere, handles concurrency by default (as in you can see the interleavings, though the log call itself is locked), and allows me to have a custom tailored view of the state I want to see.
The last point may not be obvious, but debuggers have tons of noise for complex programs. In practice I just want to see how my program state changes over time while a debugger shows the entire program state or a large subset of it.
I think the future of debugging is going to be structured program state logging. Ideally we should be able to take our logs and partially reconstruct program state over time. For example, in addition to source location, we should save the lexical information for each variable logged so you can have interactivity with your logs and source code.
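A hand-wavy sketch of the idea in OCaml: each log entry carries the source location and the variable's name next to its value, so a tool could later tie entries back to the code. The log function and its JSON shape are made up for illustration:
    let log ~loc ~name to_string value =
      Printf.printf "{\"loc\": %S, \"var\": %S, \"value\": %S}\n"
        loc name (to_string value)

    let () =
      let total = 2 + 2 in
      (* __LOC__ expands to the file/line/character range of this call site *)
      log ~loc:__LOC__ ~name:"total" string_of_int total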
I sympathize with what he's saying, and I imagine he's correct for his context, but some of us don't work on never-ending code and just want to write the best code we can, as quickly as possible, so we can move on to the next challenge. For those who are more pragmatic and enjoy working on the hard problems the code is trying to solve rather than the hard problems of the code, using a debugger can be beneficial.
There are tons of ways to use debuggers, even in places where it might seem one has to content oneself with printf debugging; what many miss is learning what is actually available.
So you end up with legions of Linuses hating on debuggers and priding themselves on never having to use one, some kind of macho thing I guess.
I've been programming for years and I still find printfs an extremely useful debugging technique. They let you debug an entire program in parallel, as opposed to the more focused single-snapshot debugging you do with a debugger.
> A purely functional approach ala Haskell is not the only way.
that wasn't the OP's argument though, the argument was that OCaml is somehow generally better for engineers.
> OCaml offers a pragmatic functional approach to programming.
an evaluation of something as pragmatic depends purely on what one wishes to practice. There's no universally objective notion of pragmatism.
> And now you are going to be able to have your OCaml code run in a truly parallel fashion on your multicore CPU.
no, it won't be able to do that automatically. Your code will have to respect certain invariants to function properly, and you as a developer will have to enforce these invariants with the available tooling at hand. Haskell has purity, guaranteed STM, and `par` labels for that. OCaml doesn't have those and the existing codebases will have to eliminate their thread-unsafe public interfaces first.
> When that happens you can track changes to state
how are you planning to track state changes without purity?
> > And now you are going to be able to have your OCaml code run in a truly parallel fashion on your multicore CPU.
> no, it won't be able to do that automatically. [...]
I was comparing it to the old situation in OCaml, where it was impossible for threads executing pure OCaml code that were _not_ IO-bound to run truly in parallel. That limitation is removed.
To me it is pretty obvious that you will need to use things like atomics, thread-safe data structures, mutexes, etc. to ensure your code runs properly in multicore OCaml. It was implicit in my response, but I should have been more explicit.
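A rough sketch of what I mean, assuming the Domain and Atomic modules land as proposed; the invariant here is enforced by choosing Atomic.t over a plain mutable int, not by the type system:
    let () =
      let counter = Atomic.make 0 in
      let bump () = for _ = 1 to 1_000_000 do Atomic.incr counter done in
      let d1 = Domain.spawn bump in
      let d2 = Domain.spawn bump in
      Domain.join d1;
      Domain.join d2;
      (* with a plain ref and incr, this count could come out short *)
      assert (Atomic.get counter = 2_000_000)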
> that wasn't the OP's argument though, the argument was that OCaml is somehow generally better for engineers.
Engineers tend to be pragmatic. OCaml is pragmatic. So, in short, OCaml may be better for solving a certain kind of problem than something more pure and abstract like Haskell. It was intended as an informal argument and not an argument in a court of law :-).
Maybe. But at this point it just looks like a mean dig at Haskell. A better word would be opinionated. Haskell is opinionated, and that is fine.
The biggest bait in this thread is using the term "engineers". It should rather be: people trained in imperative programming in mainstream imperative languages. Then it makes sense.
Gosh, yes, engineers would rather not use math. If there are approximations that work well enough (and that, when wrong, err in the direction that keeps buildings from falling down), engineers definitely like to use less math.
For one thing, less math means less room to make mistakes, and often ugly-but-working is better than elegant with a higher chance of failing because people screwed up.
> And approximations are not math all of the sudden?
The context was Haskell versus OCaml, so yeah: Haskell is the more academic, pure math. OCaml (in this context) is the more approximate but more practical option for larger projects. In practice people use things like C and Python because spherical cows are close enough.
Ok, you are another person in this chain who decided to just drop "more practical" and call it a day. Why bother.
It is especially funny in a thread about OCaml getting multicore support soon (2022, maybe). Something this less practical, ivory-tower Haskell had before it was even cool.
> How can parallel untracked mutations of untracked state be better for engineers?
What is untracked? Is
unsafePerformIO (putStrLn ...)
tracked? What is tracked? Is
foo :: IO ()
thread-safe? Maybe, maybe not. What meaningful information does this signature give me? That it does some IO? That's extremely useless information, especially if most of your code is IO something.
What granularity does IO have? Does
foo :: IO ()
throw any exceptions? Maybe, maybe not.
The need for effect tracking for writing correct programs is way overstated by some Haskell programmers. It's usually much more prudent to write DSLs which hide effects like variable mutation and logging than to expose them.
For example, you can start your program with a pure, correct-by-construction core DSL, and then add logging and mutable variables where needed underneath the DSL's terms without breaking the semantics of the DSL (a rough sketch follows below). With effect tracking you are doomed to either reinvent custom effects to be able to switch interpreters painlessly, or break the DSL by adding effects to it.
Neither is prudent in the real world; neither gives more value than it takes. What is really funny is that some Haskell programmers believe that logging should be tracked, but allocation apparently should not (yes, it's a side effect too).
When in fact "effects" need to be tracked only when they are meaningful, i.e. when they are part of our DSL's semantics, and not some hidden part of the interpreter I don't need to know about.
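To make that concrete, here is one way the sketch could look; the expr type and both evaluators are made up for illustration. The DSL's semantics stay pure, and a second interpreter adds logging and a mutable step counter underneath without touching any of the DSL's terms:
    type expr = Lit of int | Add of expr * expr

    (* the "correct by construction" core: a pure evaluator *)
    let rec eval = function
      | Lit n -> n
      | Add (a, b) -> eval a + eval b

    (* same semantics, but this interpreter hides effects the terms never see *)
    let eval_logged e =
      let steps = ref 0 in
      let rec go = function
        | Lit n -> incr steps; n
        | Add (a, b) ->
            incr steps;
            let r = go a + go b in
            Printf.eprintf "Add -> %d\n" r;
            r
      in
      let r = go e in
      Printf.eprintf "evaluated in %d steps\n" !steps;
      r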
Imagine a theorem prover function which creates a conjunction of two terms:
val conj : term -> term -> term
It doesn't matter if it allocates or logs; at the precision that interests us, it's just a function creating a conjunction of two terms.
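For illustration only, one possible conj along those lines; the term type and its constructors are made up. It allocates and mutates a memo table internally, yet to the caller it is still just "build a conjunction":
    type term = Var of string | And of term * term

    let table : (term * term, term) Hashtbl.t = Hashtbl.create 97

    let conj a b =
      match Hashtbl.find_opt table (a, b) with
      | Some t -> t          (* hidden sharing: an "effect" nobody tracks *)
      | None ->
          let t = And (a, b) in
          Hashtbl.add table (a, b) t;
          t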
This repetitive talking point is getting boring. Go figure whether it's tracked now:
{-# LANGUAGE Safe #-}
-distrust-all-packages
> What meaningful information does this signature says to me? That it does some IO? That's an extremely useless information, especially if most of your code is IO something.
That's an extremely narrow view which you wouldn't have if you ever tried to implement a safe sandboxed environment.
> What granularity does IO have? Does [...] throws any exceptions? Maybe, maybe not.
what does it have to do with tracked parallel mutations?
> Neither is prudent in real world, neither gives more value than takes
define prudent and define real-world.
> It doesn't matter if it allocates or logs
have you heard about referential transparency? It's the thing your "theorem prover" example does not have.