I think this demonstrates a real weakness of Haskell. Several times I've started a new project only to spend several hours trying to work out the correct way to structure what I want. Often it is hours before I even get my first successful compile, exactly as shown in this blog post. While some people enjoy this kind of tinkering, for me it really quickly saps all my motivation for the task. I just want to get something, anything, working. I work best jumping directly into a task and understanding it from the inside out, learning from my mistakes. And if I do find a quick and dirty way to get something working in Haskell, refactoring it often becomes pretty difficult too, which turns me off again. Neither of these is a problem I have in other languages.
Arguments for which approach is better aside, it is no wonder that lots of people who learn and approach tasks like me feel a little betrayed by Haskell. I've used it for a bunch of things, and I'm past the steep learning curve - but it seems I'm still not reaping the rewards. Is my philosophy of jumping into projects really so bad? Is it Haskell's place to question such an approach, when it has served me so well elsewhere?
It takes practice to be as fluent in FP as in imperative programming. Even when I had the basics down, I still couldn't work as quickly in Haskell as in other languages. It took me at least a year to become quick, and apart from the weak library support (most libraries are a bit over the top for beginners), I find Haskell amazingly productive. It's not _that_ different after you get used to it. Never again having to write another for loop in a language that doesn't support generics is alone worth it.
The language itself, if you don't venture too far into the depths of the latest research, is quite comfortable and clean. The fact that I can express my thoughts effortlessly in a few keystrokes, and that the types (ADTs, generics, typeclasses) are so descriptive, makes it my favourite language for day to day tasks.
At least for the time being... until a dependently typed language becomes usable.
Haskell forces a programmer to create structure. There's no way around it. And it's (nearly) impossible to hack things together without failing early and often.
As a result, before the problem you're trying to solve can be explored, you're stuck making guesses on how the problem should even be structured. You spend a few hours with one guess, it turns out not to get you far, and you try another.
The unfortunate part is that these guesses aren't letting you get to the meat of the problem effectively; you're stuck trying to solve a meta-problem.
Of course, once you have solved the meta-problem well enough, you can start exploring your problem. Unfortunately, you may find that your meta-solution doesn't actually let you answer questions you didn't know you wanted to answer from the get-go, and now you're back to square one.
When everything is right, the program is usually very beautiful, safe, and—if you're skilled enough—efficient.
Perhaps not all Haskell programmers have this issue for very non-trivial programs, though I certainly do.
I don't really agree. The greatest structure in most of my Haskell programs is some kind of ErrorT or something. That requires me to structure the way I handle IO and the way that Errorable things happen, but beyond that, `a` is just as generic as any Python type. Sure, a lot of libraries do require you to accept certain premises, which may be a problem, but most of them provide ways to break out of their abstractions: ErrorT provides `runErrorT :: (Monad m, Error e) => ErrorT e m a -> m (Either e a)` which allows me to break out very easily, and due to the composable nature of Haskell, even if the library does not provide a function, it is often far from hard to write one yourself.
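To make that concrete, here's a minimal sketch of breaking out of such an abstraction, using ExceptT (the modern replacement for ErrorT in the transformers package; the fetchAge name is hypothetical):

```haskell
import Control.Monad.Trans.Except (ExceptT, runExceptT, throwE)

-- A computation that can fail with a String error, layered over IO.
fetchAge :: String -> ExceptT String IO Int
fetchAge "alice" = return 30
fetchAge name    = throwE ("no such user: " ++ name)

-- runExceptT unwraps the transformer, so callers who don't want the
-- abstraction just get back a plain IO (Either String Int).
fetchAgePlain :: String -> IO (Either String Int)
fetchAgePlain = runExceptT . fetchAge
```

The point being: the structure the library imposes is one function call away from plain old values.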
I agree that Haskell forces the programmer to create structure, but I don't agree with your meta-problem hypothesis. I think the structure actually is a big part of the real problem, and Haskell just prevents you from implementing incomplete or incorrect solutions.
Those solutions might in another language require a hack or leaky abstractions to work, if they would be possible to get to work at all.
I've walked into problems like the OP has before with Haskell, and when there seems to be no nice way of defining a type structure to model the problem, it usually meant that the idea in my head had a fundamental problem. If I look at this tree I get the feeling that perhaps he just has two problems, and he's looking at it like he's got only one.
And once you do get your type structure neatly in place, often times the implementation will just flow out of your fingertips. And when it does, it will be powerful, flexible and robust.
It's more like your weakness, and I don't see what it has to do with Haskell.
And if you really don't want to think about correct types, just use the first ones that come to your head. You can always change them later. And the compiler will tell you every single place you forgot to change.
I agree with this. If it's not natural for you to think ahead and create the structure you need before writing it up, Haskell is not the right language for you. It's good there are languages that fit your needs, nonetheless.
> If it's not natural for you to think ahead, create the structure you need before writing it up, Haskell is not the right language for you.
No, Haskell is probably even more fit here. Haskell is a godsend for prototyping! Because you throw any crap into the editor and with Haskell you'll know immediately what's wrong with said crap. And then when you refactor it later the compiler will guide and help you.
I feel like people just haven't got balls to see many compilation errors and prefer buggy, non-working, but “see! no errors, not like in those haskells!” code.
Please don't inject such language into technical discussions on Hacker News. The last thing we want is to swerve into low-quality flamewars.
Orangeduck, reikonomusha and others have posted fine comments in this thread. If they're wrong, the thing to do is show, not tell, that they're wrong. Then disagreement will make the discussion better instead of worse.
I think part of the problem is that buggy code that compiles produces instant gratification. You see results, even if incomplete. Maybe down the road you'll reach a dead-end due to a problem you didn't foresee, and the buggy code didn't help you find it because it compiled anyway, and you didn't have to think too hard about all cases or about the structure of the problem. And then it may be too late and you'll have to throw everything away and start from scratch.
But in the meantime, you get the feeling you're making progress, even if you're not. I think that's the psychology of the matter.
I'm a Haskell fan (and still a newbie, unfortunately) and I sometimes fall prey to this feeling.
There's a lot of study in psychology about how our mental models of our future selves shape the choices we make in the present. I know quantifiably, from having worked in both ML-family languages and the popular dynamic scripting languages, that the amortized time I spend debugging software is far less with the ML-style languages. Yet with the scripting languages there's definitely a psychological appeal to seeing something appear to "work" (it really doesn't) and then debugging it into existence, although the engineer in me knows that this is a really bad way to develop solid software in the long run.
So perhaps rapid prototyping and exploration should be done in a worse-is-better language like Python, then when the elegant structure is thought of, the code can be recast in Haskell?
Do you think to program or program to think? It seems like Haskell is biased to the former.
It's not uncommon for C++ programmers to first prototype in a simpler language like Python, figure out the structure, then reimplement in C++ for performance. No reason that can't be a thing with Haskell too.
Very true. I remember reading on StackOverflow where a guy commented that while programming in Haskell he was thinking more and programming less - he seemed to indicate that this was the most important thing. While design and structure are important, I think the reason he was "thinking so much" is that Haskell forces you to think non-trivially even for simple problems in many cases. One common argument is that a loop is much more natural in many real-life scenarios than recursion or folds - the mental model of the problem matches closely with the machine model - which is not the case when you are forced to use recursion.
I completely disagree. And I'd go so far as to say that a lot of that criticism comes from a lack of familiarity with functional programming/Haskell compared to imperative programming. The argument that a loop is more natural is horrible imo, as both refer to performing processes, which are abstract concepts: my mind probably views these differently to yours, and most likely very differently to a lay person, who will never have had to consider the idea.
I gave a talk recently, and I briefly explained how to decode a protocol buffer varint: if the last bit of a byte is true, shift the first 7 bits and repeat the process on the next byte. Does that map better to
    getVarInt :: G.Get Int
    getVarInt = G.getWord8 >>= getVarInt'
      where
        getVarInt' n
          | testBit n 7 = do
              m <- G.getWord8 >>= getVarInt'
              return $ shiftL m 7 .|. clearBit (fromIntegral n) 7
          | otherwise = return $ fromIntegral n
or
    def read_varint(self):
        run, value = 0, 0
        while True:
            bits = self.read(8)
            value |= (bits & 0x7f) << run
            run += 7
            if not (bits >> 7) or run == 35:
                break
        return value
I'm perfectly happy saying it is equally well represented on both of them. I know I much prefer the first, but that's probably just because I prefer recursion to an explicit loop. From the sounds of it, you'd prefer the second, and I believe that's just because you prefer loops to recursion.
In my experience the vast majority of people find explicit loops easier to understand. Thinking in terms of a function that calls itself is a struggle for most people. FP advocates that really care about language adoption should be more willing to recognize that this is a hurdle and not wave it away as a personal preference, IMO.
In my experience, the vast majority of people do not learn a functional language to the point where they are productive in it. My hypothesis is that the correlation between people who find explicit loops easier to understand and people who do not learn functional languages to the point where they are productive is evidence of causation.
In my experience, people who are ideological about FP often claim that the vast majority of people do not just get FP, and their preferences are misguided and not natural.
Yet, sequential and direct control flow are common in human language, we know how to tell or be told to do something N times. Recursive formulations are harder to explain, and most people don't learn math as easily as they do language.
It's all about precision. To tell someone how to do the dishes seems easy. But give the same instructions to someone who has never done dishes and you quickly see the problem. It goes the other way too. I dare you to try and explain a simple repeating algorithm in prose. No lists or numberings allowed.
Simple recursions work exactly the same as the equivalent iterations. They branch on a condition, execute a step and repeat. How state is handled is the key difference. As Haskell has no concept of a mutable variable, the only way to rebind a name is to call the function again.
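A tiny sketch of that correspondence (the sumLoop/go names are hypothetical):

```haskell
-- Imperative version: total = 0; while the list isn't empty: total += head.
-- The recursive version threads the same state, rebinding 'total' on
-- each call instead of mutating a variable.
sumLoop :: [Int] -> Int
sumLoop = go 0
  where
    go total []       = total             -- loop exit condition
    go total (x : xs) = go (total + x) xs -- "total += x", then repeat
```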
I was going to say that a professional in our field who has problems with recursion and the basics of discrete math should take a look in the mirror. But maybe we have managed to raise the abstractions high enough so that one can be productive without knowing the fundamentals of computing. I probably need to broaden my concept of a professional in our field.
Even if someone hasn't done the dishes before, contorting the explanation through recursion is not helpful. Recipes and instruction manuals are not recursive, and are rich in direct control flow and avoid indirect (higher order) control flow like the plague. Recursion for human discourse is just not used often, it is a relatively recent concept to us.
There is plenty of program writing to do that doesn't involve deep knowledge of maths; in fact I would say the vast overwhelming majority of work companies need done is not helped by, and possibly even hindered by, an intricate understanding and application of recursion (keep in mind, recursion is actually dangerous in strict languages). Most of us are more like police detectives using well-worn tooling and intuitive problem solving skills to get the job done. Sometimes we might even need the help of a mathemagician, but not most of the time.
Recipes and other manuals are also extremely vague when compared to computer programs. Humans have such vast amounts of context and prior knowledge available to them that pedantic instructions are not needed. The only prior knowledge the computer has is the rest of the program. For this reason I don't like the comparison of natural language and programming languages very much.
I would argue that we use recursion quite a lot. For example, here's how to clean a bunch of dishes. If the bunch is empty, you're done. Otherwise, take a dish and clean it. Then clean the rest of the dishes like you did with the first one. We often leave out the end condition as it is clear from the context.
I'm not advocating a deep knowledge of math as I know from experience that it has very few applications in software development. But to me a professional is someone who is not just skilled but knows the history and fundamentals of his trade. He knows not just that for (i = 10; i > 0; i--) terminates but also has an idea why that is so (and what it means that a program terminates).
Why would recursion be any more dangerous than iteration?
That is not recursion, but basic procedure call; from wiki:
> Recursion is related to, but not the same as, a reference within the specification of a procedure to the execution of some other procedure. For instance, a recipe might refer to cooking vegetables, which is another procedure that in turn requires heating water, and so forth. However, a recursive procedure is where (at least) one of its steps calls for a new instance of the very same procedure, like a sourdough recipe calling for some dough left over from the last time the same recipe was made. This of course immediately creates the possibility of an endless loop; recursion can only be properly used in a definition if the step in question is skipped in certain cases so that the procedure can complete, like a sourdough recipe that also tells you how to get some starter dough in case you've never made it before. Even if properly defined, a recursive procedure is not easy for humans to perform, as it requires distinguishing the new from the old (partially executed) invocation of the procedure; this requires some administration of how far various simultaneous instances of the procedures have progressed. For this reason recursive definitions are very rare in everyday situations.
Recursion quickly causes a stack overflow when you get it wrong in a strict language (and sometimes even when you get it right!), iteration does not (rather, the CPU just spins and memory is safe). Also, iteration is intrinsically less expressive than recursion, meaning it follows to use it via the principle of least force.
Recursion can cause stack overflow in a lazy language too. But lazy or strict, that's more of an implementation detail, although a very visible one. An iteration that appends to a list will run out of memory just the same. I also don't see how a recursion that runs out of stack space is any more dangerous than an iteration that goes into an infinite loop.
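For instance, in GHC the classic culprit is lazy foldl, which accumulates a chain of unevaluated thunks that can blow the stack when finally forced; the strict foldl' avoids it. A minimal sketch (safeSum is a made-up name):

```haskell
import Data.List (foldl')

-- foldl (+) 0 [1..n] builds the thunk ((((0+1)+2)+3)+...) and can
-- overflow the stack for large n even though the language is lazy;
-- foldl' forces the accumulator at each step and runs in constant space.
safeSum :: [Int] -> Int
safeSum = foldl' (+) 0
```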
That is just a loop in my opinion; it would be easier and more clear to express it as a loop. The power of recursion isn't really necessary, nor is it somehow more enlightened.
Exhausting your call stack in most languages is really bad as far as debugging is concerned: it loses the ability to even form a decent error message (stack overflow), and you are left with poor content for diagnosing the problem. Exhausting the heap or having an infinite loop, in comparison, are vastly easier to debug (the latter being much easier than the former, thrashing the VM system is also a pain in the arse, though less so than exhausting the stack).
The problem is possibly a little more fundamental, though. Explaining a looping function to someone can be as easy as "think of how you do the dishes, you take one... repeat until there are no dishes/soap/water/room left."
This is very close to how every repeating process is taught and understood by folks: you take something, and repeat until a specific condition holds. Compare that with a recursive solution, which takes one of a few forms, one being to take your problem, break it into a series of identical problems, and repeatedly restart with the results of the previous one.
I grant that a recursive solution is not that much more difficult to phrase this way. "Take a dish, clean... start over unless there are no dishes/soap/whatever left." I just don't know if I have ever heard something stated this way.
I don't understand why you have "run += 7" and break if run == 35 in the Python code. Does the Python code have more error checking than the Haskell code?
The Haskell code checks this implicitly with the use of the Get monad, I believe. In Python, you have to manually tell the code to not reach for bits that aren't there. In Haskell, the concept of a failure state being implicitly dealt with is used all over.
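As a sketch of what "implicitly dealt with" means here: with the binary package's Data.Binary.Get, running a parser on truncated input yields a failure value rather than silently reading garbage (decodeByte is a hypothetical name):

```haskell
import Data.Word (Word8)
import qualified Data.Binary.Get as G
import qualified Data.ByteString.Lazy as BL

-- runGetOrFail surfaces the Get monad's implicit failure state as an
-- Either, so reaching for bytes that aren't there is an error value,
-- not an out-of-bounds read you have to guard against by hand.
decodeByte :: BL.ByteString -> Either String Word8
decodeByte bs = case G.runGetOrFail G.getWord8 bs of
  Left  (_, _, err) -> Left err
  Right (_, _, w)   -> Right w
```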
Also, the reason the Python code is a `while True` with an explicit counter and check at the bottom is because Python lacks a do-while loop.
Huh, yeah, didn't notice that implementation difference. In the Python code, the longest valid varint is 5 bytes (want the result to fit into int32 I guess?). In this case the bug is not strictly a problem as the Haskell code is used to read varints from trusted inputs only, whereas the Python is designed to read untrusted input. But good catch
I think the real problem here, and it is a true problem with Haskell development, is perfectionism.
Code in Haskell can be so elegant, and examples of beautiful code are so abundant, that hacking together a bunch of hacks to see if they work doesn't seem like a conceivable way of doing things.
But of course, Haskell can be used that way to figure out and explore your problem. You can throw IORefs around, do everything with ugly IO effects, pass tons of parameters around, etc.
Perfectionism is a problem I had with Scala a while back. My penance was to program for the next 7 years in C# (at least I didn't go back to Java). Sometimes having no hope of achieving elegance can be empowering in getting things done.
I don't think Haskell is at all designed for quick and dirty prototyping and exploration. The whole library and tool mindset is against that kind of developer activity.
I've never found a language better suited to quick and dirty prototyping and exploration than Haskell. I was part of a small team that built a core web service from scratch in Haskell. We had no idea what we were doing (feature-wise) when we started, and just put some stuff down. Make it compile, see how writing the next bit of code works out. If it's terrible, refactor. Every part of the Haskell toolchain is about empowering refactoring. Not with silly automated tools - with a set of language features that tell you when you didn't think about something.
Even after the service was up and handling hundreds of requests per second (yes, really - it was hardly the company's first foray, and we had a lot of customers lined up to use this as soon as it went live) we often found that the current design was severely hindering a new feature we wanted to add. So we'd just change the design. It wasn't uncommon for an update to require changes to 20% of the lines in 75% of the files. And it was no big deal. The compiler had our backs, and made sure we made all the changes in a way that made sense. We didn't always get it right - sometimes the system tests caught things the compiler missed, and on rare occasions a bug even slipped into production. Such is life in a startup, right?
But here's the thing. At no point in the process was there a monolithic design done ahead of time. In fact, it can hardly be said that there was any designing done ahead of time. Pretty much all the design work was a result of aggressive refactoring when adding new features. The whole process over years of development was nothing more than turning a quick and dirty exploratory prototype into a real, solid production system.
For contrast, an associated service was written using Rails, because we thought it'd be easy for what that service needed to do. It ended up being a nightmare to refactor and add new features. We were just never sure we got all the mechanical cases when something deep needed to be changed. Between the two, there's just no question which system was better when you start out not having any clue what the product needs to be.
There is more to quick and dirty programming than refactoring. I would say refactoring doesn't even play much of a role here. What is key is the ability to be in a broken state for long periods of time while still being able to test and execute pieces of the system. This doesn't work in Haskell, since Haskell is not just statically typed, it is Statically Typed! Type inference and code generation are slaves to types, and don't handle errors very well. That could be fixed, but it doesn't seem to be a priority (generating code for an incorrect program seems to be taboo).
More crucially, Haskell programmers tend to live in their compiler and not their debugger. The adage "well-typed programs don't go wrong" means you spend a lot of time just getting the code to compile, fitting types together, and not much time seeing how things work together. Much of this is because the community emphasizes the compiler, and in fact good graphical debuggers for Haskell aren't there yet (and are even difficult to design given lazy evaluation). Bret Victor didn't do his "Inventing on Principle" talk with Haskell for good reason!
Python and Ruby programmers live in the REPL and debugger. They don't get to observe type errors early like Haskell programmers do, but they immediately get to see real interactions. I can then see why going between Haskell and Ruby would be a disaster.
You can compile Haskell programs with type errors if you want (in GHC, anyway, via -fdefer-type-errors). They crash when you attempt to execute code that has a type error, but if that's what you want, feel free to use it.
It's not something I've ever needed. It's got little to do with quickly exploring design space.
Edit:
I guess I should include a couple words about why it's not really necessary in real coding. A side effect of good module boundaries is that you can test breaking changes in isolation in the REPL. Change a few things that are tightly coupled at the same time, load their module, don't worry about fixing any other modules until the first one works the way you like. In practice, this means you almost never need the option to defer type errors to runtime.
"real interactions"? It is disingenuous to imply that all Haskell programmers do is see type errors while Python and Ruby programmers have 'real interactions'.
For instance, today I wrote tons of stuff in the IO Monad and had what you call 'real interactions'.
Also, more time spent in the debugger doesn't necessarily mean a faster working program. The psychology of it is more rewarding, but there's nothing necessarily more "real" about it.
A very small tip for people learning Haskell (coming from a fellow newcomer to the language): as you read the code, mentally replace "::" with "has type of."
Nah, it is a type of numeric IDs for different types of things. The parameter 'a' is just a phantom type parameter to keep you from mixing up a mesh ID with a texture ID.
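The trick looks roughly like this (all names hypothetical; 'a' appears only on the left-hand side of the definition, hence "phantom"):

```haskell
-- Both kinds of ID are just Ints at runtime, but the phantom tag
-- keeps them apart at compile time.
newtype Id a = Id Int deriving (Eq, Show)

-- Empty tag types: they have no values and exist only at the type level.
data Mesh
data Texture

describeMesh :: Id Mesh -> String
describeMesh (Id n) = "mesh #" ++ show n

-- describeMesh (Id 3 :: Id Texture)  -- rejected by the type checker
```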