Heterogeneous lists come up constantly when people try to translate OO ideas into Haskell. It's great that this article used the barrier of heterogeneity as a reason to think harder about the design instead of barreling forward.
In particular, heterogeneity causes a form of information loss via type erasure (existential typing). The problem is that this is pretty heavy machinery and is not always well-suited to a problem as simple as a heterogeneous list. Additionally, once you've produced a type like
[forall a . Renderable a => a] -- not a valid Hs type, but close
frequently you've reduced your options for what to do down to really just a single choice. In this case, `render`. This is what's sometimes called the Existential Antipattern—if you have an existential type where only one way forward remains... you may as well just take that way forward. This is especially easy in a lazy language like Haskell.
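For concreteness, here is roughly what the heavier existential route looks like. This is a hedged sketch, assuming the article's class (written `Render` elsewhere in this thread, with `render :: a -> IO ()`) and the article's Ball/Player types:

{-# LANGUAGE ExistentialQuantification #-}

-- A valid spelling of the pseudo-type above: each element carries its own
-- Render dictionary.
data SomeRenderable = forall a. Render a => SomeRenderable a

scene :: Ball -> Player -> Player -> [SomeRenderable]
scene b p1 p2 = [SomeRenderable b, SomeRenderable p1, SomeRenderable p2]

renderOne :: SomeRenderable -> IO ()
renderOne (SomeRenderable x) = render x

renderScene :: [SomeRenderable] -> [IO ()]
renderScene = map renderOne

But since `render` is the only thing you can ever do with a `SomeRenderable`, you may as well drop the wrapper and keep a plain list of the actions themselves: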
renderEm :: Ball -> Player -> Player -> [IO ()] -- homogeneous!
renderEm b p1 p2 = [render b, render p1, render p2]
Note that this is functionally very similar to the implementation of `render` for Game as given in the article. In particular, the only difference is that the article's `render` method destroys the barriers between these `IO` actions by sequencing:
instance Render Game where
  render (Game b p1 p2) = sequence (renderEm b p1 p2) >> return ()
where `sequence` sequences a list of monadic actions
-- try thinking about this as "distributing" a monad over a list
-- much like you distribute multiplication over addition
--
-- x (a + b) --> x a + x b
--
-- and then go look up Data.Traversable
sequence :: Monad m => [m a] -> m [a]
sequence []       = return []
sequence (ma:mas) = do
  a  <- ma
  as <- sequence mas
  return (a : as)
This tension between a "reified list of actions" and a composite action is pretty much the heart of the expression problem—the reified list is an initial encoding and the final action the final encoding. This kind of thing shows up all the time when dealing with heterogenous lists. The reason being that OO basically favors final encodings to the extinction of initial ones... but initial encodings are what generate type information.
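A hedged illustration of that split, using the renderEm example from above:

-- Initial encoding: the structure is still data. You can count the actions,
-- reorder them, or interleave them with others before ever running them.
type SceneInitial = [IO ()]

-- Final encoding: only the composite interpretation remains; the list
-- structure is gone for good.
type SceneFinal = IO ()

interpret :: SceneInitial -> SceneFinal
interpret = sequence_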
Could you explain what you mean here by initial/final encoding, and why you say OO favors final encodings but initial encodings generate type information? I'm familiar with the notions of initial algebra, and slightly less so with final coalgebras, but I don't quite see what you're getting at.
I'm running pretty fast and loose here, but you can see objects as being ADTs defined by their eliminators [0] while constructors produce new forms and their relevant type information. Eliminators can do that as well (in negative positions), but I want to drive home that a lot of what feels weird about Haskell comes from the effects of working with constructors/pattern matching. A similar comparison might be made between case classes and "regular" classes in Scala, but I don't know Scala as well.
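A rough sketch of the two styles (nothing here is from the article, just an illustration):

-- Defined by constructors: values are built up and consumed by pattern
-- matching, which is where the type information "comes from".
data List a = Nil | Cons a (List a)

-- Defined by eliminators: all you are given is what you can observe about a
-- value. This is the codata/object flavour.
data Stream a = Stream { headS :: a, tailS :: Stream a }

countdown :: Int -> Stream Int
countdown n = Stream n (countdown (n - 1))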
"[I]f you have an existential type where only one way forward remains... you may as well just take that way forward. This is especially easy in a lazy language like Haskell."
This doesn't seem especially easy in Haskell (in that it doesn't seem harder elsewhere), but it does seem especially the case in a lazy language like Haskell. In an eager language, (forall a . Renderable a => a) is isomorphic to (() -> IO ()) and subtly distinct from (IO ()).
On a related note: The article (quite reasonably) avoids discussion of the graphics library, but I want to know more about that side of things. I wish graphics got more attention in the Haskell ecosystem in general.
The options right now are pretty dismal. There is quite literally not a single Haskell graphics or GUI package that I've been able to install on OS X. I'd love to use Haskell to build games or desktop GUI apps, but without a library for OpenGL, windowing, etc., it's not practical.
This isn't just whining, though. I bring this up to ask: What can I, as a relatively green Haskell developer, do to improve the situation? Is there a realistic path for a new Haskell user to get involved in the package ecosystem?
I'd be willing to bet that a great many developers like me wanted to try Haskell, but gave up when they found out how many Cabal packages don't compile. I think fixing that would do a lot for Haskell's mainstream acceptance.
It's not just OS X; it's a problem (in general) on Windows, too. I was trying to create my first Haskell project, a web crawler. Every direction I turned I had issues installing libraries, to the point where I decided it was a deal breaker because I can't write code on my preferred OS.
Even if they eventually make these transitive dependencies OS agnostic, the libraries that already exist will depend on the versions that aren't.
> I'd be willing to bet that a great many developers like me wanted to try Haskell, but gave up when they found out how many Cabal packages don't compile. I think fixing that would do a lot for Haskell's mainstream acceptance.
Likewise. The more I learn about Haskell, the more I'm blown away by its power and elegance. It's been a deeply rewarding experience. But spending days fixing dependencies to update the compiler on OS X and Linux, followed by Cabal packages failing to compile, is no fun. Hopefully the increased attention Haskell is receiving nowadays will help resolve these growing pains.
As a fellow green Haskell developer, I get the sense that the best option is going to be GHCJS[0]. The browser is far and away the best environment for graphics programming because of its ubiquity and easy to use APIs. Sure, someone could step up to the plate and write a nice, idiomatic wrapper for SDL and OpenGL and whatever, but a good Haskell library that targets the browser would spread like wildfire in comparison.
I hear that, and I hope that project works out. But I'm still interested in developing native apps. There's a reason so many professional applications (games, intensive apps like Photoshop and Blender, etc) are still native.
You may have seen The Birth and Death of JavaScript:
I don't know whether that prediction will come true. But obviously it hasn't thus far. For now, if I need native performance, I need native code. Sadly, that means C, C++, or Java until such time as Haskell libraries compile reliably.
- ghc-pkg gets itself into an error state pretty quickly after installing a fresh Haskell Platform, and ghc-pkg recache must be run.
- You must manually request the latest version of cabal-install. cabal install cabal-install by default gives you a version without sandboxes.
These are just a few of the issues that come to mind; I've run into quite a few more in my Haskell explorations. You can imagine how this state of affairs would cause a lot of developers to give up. I'm not at all blaming anyone in the truly wonderful Haskell community. Everything about Haskell is done by volunteers, and I'm grateful for their work. I'm just offering a theory as to why Haskell hasn't been embraced as warmly as, say, Node or Go.
"- You must manually request the latest version of cabal-install. cabal install cabal-install by default gives you a version without sandboxes."
So far as I'm aware, doing a
cabal update
cabal install cabal-install
should always get you the latest version of cabal-install.
One thing people encounter frequently, which may be confused for this, is that cabal doesn't install things system-wide, so you have to 1) make sure that the place cabal is installing things is in your PATH, and 2) possibly make sure your shell hasn't cached the location of the old system-wide version of cabal (e.g. with `hash -r` in bash).
It's totally possible that there's a bug that's more specifically as you describe that I just haven't heard of, of course - just trying to help if I'm able.
I do try and make sure that HsQML works on Windows and MacOS in addition to Linux. Although, I admit I haven't tried MacOS 10.9 as my Mac is still running 10.8.
I don't think I've ever been contacted concerning issues with Windows or MacOS, which is probably indicative of the size of the user base combined with the probability of any one person deciding to go to the effort. On the other hand, I have fixed genuine build failures for Linux users, because that platform is diverse enough that it "worked on my system" but not theirs.
In any case, my contact details are on the HsQML Hackage page. If you'd like to send me the error message you're seeing, I'd be happy to try and help you get it working.
Get in touch with the developers and work with them to get it building. Often times they don't have access to your platform, so just being someone who can test on it helps. Anything you can do on top of that is gravy.
I wasn't making any point. Fwiw, I have been able to meet my own (quite limited) gui needs in Haskell with gtk...
Edit: On reflection, I find your assertion a little strange here. An anecdotal failure to build on one particular setup doesn't seem like a particularly strong indictment of the bindings.
I can add another anecdote. In almost 10 years now of using Haskell on Mac, Linux and Windows, I've never once had any success building any GUI binding for Mac OS, despite trying every one I could find (gtkhs included) at least once a year on many different OS/hardware combinations.
(unless you count HOC, but I never actually got it working as-was - I brought it back from bit-rot on a couple occasions, adding ObjC 2 support and rewriting a fair chunk of the low-level stuff, but never did find the time to update the fragile header-parsing stuff for the actual Cocoa-binding generator)
Huh. It is sounding like there's an issue with Mac GUIs, then. Maybe the Mac/Haskell overlap is just too small? Both are small-to-niche... It would probably make sense to get a group oriented around getting that fixed up. For myself, I'm meeting my needs - and both 1) more GUI apps and 2) more Mac support are almost entirely orthogonal to them, so I'm not likely to participate.
> Maybe the Mac/Haskell overlap is just too small?
Probably so. Though I would suggest that neither one is a small niche in itself. Mac especially--it's the favored platform for every developer I know except one. Yes, it's less popular outside tech circles (probably due in part to the price).
Imagine this scenario: There are tons of Mac users who want to learn Haskell. They try it, but can't install libraries. The Haskell community never hears from them; one could say the system failed silently. Meanwhile, Linux works fine, and the Haskell-Linux community keeps growing.
So perhaps there's a self-reinforcing Linux-centric bias in the developer population.
I wouldn't at all say either is "a small niche." As niches go, they're both pretty large... I actually have no idea how the prevalence of Mac differs inside and outside tech circles. It's a decided minority in every case, with Windows still dominant and likely Linux still dominated (though I'm far less confident about that in dev circles than I used to be). For what it's worth, virtually every developer I know well enough to know what they prefer uses either Linux or Windows, with the exception of my mother who decided some few years back that Mac is "Unix enough" now. I expect that there's a lot of clustering, though, and neither of our experience represents a uniform sampling.
Your general point - that it's likely self-reinforcing - is certainly strong. I'd even expect it to be exacerbated a bit in this case by it being GUI things in particular showing issues, where (at the risk of stereotyping) there is probably a correlation between those who prefer a Mac and those who prefer a GUI.
> virtually every developer I know well enough to know what they prefer uses either Linux or Windows
Interesting. It must be clustered, as you say.
I use a Mac largely because there are a handful of professional apps that don't run on Linux. (Otherwise, I'd probably go with Mac hardware and a Linux OS.) Which implies that the demands of my industry are what pushed me (and perhaps the people I know) onto Macs.
> there is probably a correlation between those who prefer a Mac and those who prefer a GUI.
It's not so much that I personally prefer a GUI. (I don't, in general.) It's that I make a lot of software for other people, so GUIs aren't a matter of preference but of professional obligation. Also, there are certain applications I'd like to do where a GUI is pretty much the only sensible option: Design tools, certain types of games, etc.
Sometimes I do. Often, if a Haskell package doesn't have a Github repo, I don't know how to contact the developers or submit a bug report. Is there a standard place on Hackage or in ghc-pkg where one can find that info?
"Is there a standard place on Hackage or in ghc-pkg where one can find that info?"
Both! It's in the ghc-pkg dump output, though there's doubtless a better way of getting at that. As for Hackage, if you just pare the above link back to https://hackage.haskell.org/package/hsqml you'll find package meta-info which includes:
Author Robin KAY
Maintainer komadori@gekkou.co.uk
Home page http://www.gekkou.co.uk/software/hsqml/
Source repository head: darcs get http://hub.darcs.net/komadori/HsQML/
I'll try installing QML on Mavericks tomorrow and debugging it. Feel free to list any more that failed to compile for you and I'll try to do the same for those.
I'm not sure that this works in more general cases, though.
If I have a more general game with more objects of more types in it, adding each type to the render function is going to get old. In object oriented programming, I'd just call render() on each entry in the list of game objects. But this approach is going to lead me to:
- render each entry in the list of Foo objects
- render each entry in the list of Bar objects
- render each entry in the list of Baz objects
- ...
which doesn't seem to me to work out very well as the game grows more complicated.
(Sorry about all the line breaks - I can't seem to figure out how to get HN to display it right without them.)
This approach seems to be what people new to Haskell generally go to first (probably because it is the natural solution given the type system). However, as much as we Haskellers hate to admit it, there are design patterns in Haskell that can offer more maintainable solutions than what the language naively presents.
In this case, a common pattern is to copy the OO concept of casting. For example, instead of having:
class Render a where
  render :: a -> IO ()
we could have:
class Renderable a where
  toRender :: a -> Render
along with:
data Render = ...

render :: Render -> IO ()
render r = ...
This leads to some noise with needing to put a bunch of toRender functions in a short amount of code, but this problem does not get worse as complexity increases.
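To make that concrete, a minimal sketch of the pattern being described; Ball, Player, and their draw actions are hypothetical stand-ins:

data Render = Render (IO ())   -- one possible concrete representation

render :: Render -> IO ()
render (Render act) = act

class Renderable a where
  toRender :: a -> Render

data Ball   = Ball
data Player = Player

instance Renderable Ball   where toRender _ = Render (putStrLn "drawing a ball")
instance Renderable Player where toRender _ = Render (putStrLn "drawing a player")

-- The "casts" are noisy, but the list itself stays homogeneous:
sceneItems :: [Render]
sceneItems = [toRender Ball, toRender Player, toRender Player]

renderScene :: IO ()
renderScene = mapM_ render sceneItems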
That problem gets especially hairy once you consider that every subsystem in the game is going to behave similarly: AI, physics, sound, etc.
It seems like the right answer is the same as it is in OO languages: favor composition. Grab the Renderable out of the Foos, Bars, and Bazes, then pass that list to the render function. It follows the article's suggestion to think about your types more. Why does anyone think the render function needs the entire player state instead of just the part it needs to render something on the screen?
My preferred way to approach this, in Haskell specifically, is to use records as a naïve encoding of objects or interfaces. For example, expanding on the functionality a little bit:
data GameEntity = GameEntity
  { render      :: IO ()
  , getPosition :: Point
  , setPosition :: Point -> GameEntity
  }

makeBall :: Point -> GameEntity
makeBall pos = GameEntity { render      = myRender
                          , getPosition = pos
                          , setPosition = mySetPos
                          }
  where myRender        = return () {- draw the ball somehow -}
        mySetPos newPos = makeBall newPos
{- and similar for makePlayer -}
What I've done is used a record type to encode the interface that it's supposed to expose, while hiding exactly what the particular implementations of the interface are. It's also nicely extensible; if I wanted to add in some other kind of entity—say, a turtle—all I'd need to do is add a function like
makeTurtle :: Point -> ShellColor -> TurtleDisposition -> GameEntity
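Consuming the record-as-interface stays uniform as well; a hedged usage sketch (origin and makePlayer are assumed to exist):

sceneEntities :: [GameEntity]
sceneEntities = [makeBall origin, makePlayer origin]

-- `render` here is just the record field selector, GameEntity -> IO ()
renderScene :: [GameEntity] -> IO ()
renderScene = mapM_ render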
> What I've done is used a record type to encode the interface that it's supposed to expose,
I love how Haskellers (in general, not you in particular) bash OO and then come up with the exact same technique of simulating OO that is used in C (a struct of function pointers). Something which OO languages provide out of the box (interfaces, virtual methods, etc).
Now, let's take this a step further: how would you simulate double-dispatch, or multiple dispatch in Haskell? This is sorely missing from mainstream OO languages.
I can't speak for the community as a whole, but the Haskellers I know generally "bash" OO with respect to its use of pervasive mutable state and open recursion, and not because the notion of "object" is an inherently wrong one. Those features must be explicitly included in the above model of object (by manually including reference cells and manually invoking a "method" with its own "instance", respectively) whereas they are implicitly included in every object in most commonly-used OO languages.
As for multiple dispatch, you could always use multi-param type classes:
{-# LANGUAGE MultiParamTypeClasses #-}

class Say a b where say :: a -> b -> IO ()

instance Say Int ()  where say _ _ = putStrLn "Case one"
instance Say Int Int where say _ _ = putStrLn "Case two"
which corresponds roughly to the CLOS snippet
(defgeneric say (a b))
(defmethod say ((a integer) (b null)) (format t "Case one"))
(defmethod say ((a integer) (b integer)) (format t "Case two"))
I do not believe that there is a way to use the record-based model of objects I showed earlier to do the same thing, but perhaps there is some way that is presently eluding me.
There is a slight technical difference that winds up being a more significant practical difference between the C solution (struct of function pointers) and the Haskell solution (record of Haskell functions): the Haskell functions can close over arbitrary data.
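A tiny illustration of that difference (names are made up):

-- Each Greeter value carries whatever its function closed over; a bare C
-- function pointer would need a separate, manually threaded context argument.
data Greeter = Greeter { greet :: IO () }

mkGreeter :: String -> Greeter
mkGreeter name = Greeter { greet = putStrLn ("hello, " ++ name) }  -- closes over name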
I don't think Haskellers bash OO that much. They bash some of the trappings of it. Most GoF patterns become ordinary type-safe library code in Haskell because the language has better abstraction capabilities. Functions form better fundamental units than objects do (but are a form of "object" themselves). Composition is better than inheritance. Classes are not generally so useful.
Really, I think Haskell has a super great object system. It's just (a) not the central conceit and (b) so naturally embedded in the language that you can program for a long time without even noticing it's there.
Codata typically indicates you're looking at an object. In Haskell, due to it being non-terminating as a language, codata and data are unified so most "objects" look identical to their non-object form. You can also notice them by definitions based on eliminators. Automata are a good example
{-# LANGUAGE ExistentialQuantification #-}

newtype Auto f i o = Auto { run :: i -> f (Auto f i o, o) }

data ADT f = forall st.
  ADT { unfold :: st -> f st
      , state  :: st
      }
Here, the internal state type `st` is closed over as an existential value—when you create a new ADT you can pick what st is but the type system then ensures that nobody ever can access that type again. Instead, you have to use the `unfold` elimination form which projects the state into a "class" `f` (represented as a Functor, but that's immaterial) which defines a signature of methods over the abstract state. Auto, from before, is definable this way.
data AutoClass o st =
  AutoClass { next :: st
            , out  :: o
            }

type Auto o = ADT (AutoClass o)
Subtyping occurs naturally with quantified types like the existentially-typed `st` variable or the universally quantified types you see all the time in type signatures. It's a more natural form of subtyping since it's all defined by increasing typeclass bounds. Something like
{C} : set of constraints Ci : constraint
------------------------------------------------------
forall a . {C} a => a :> forall a . ({C} a, Ci a) => a
This guarantees that these two types are good "subtypes on eliminators", which satisfies the Liskov Substitution Principle if not some of the stricter definitions of subtype "niceness".
Double/Multiple dispatch is trivially handled in Haskell since polymorphism is solved by an entire Prolog embedded in the type system.
The thing is that Haskell also has initial data: things like finite lists are better thought of as a big tree of constructors
1:(2:(3:(4:(5:[]))))
and then pattern matched upon (defining catamorphisms)
sum :: [Int] -> Int
sum (a:bs) = a + sum bs
sum [] = 0
and this pattern is perhaps a little emphasized in mainstream Haskell because it sits on the "functional" side of the expression problem. It doesn't mean, though, that Haskell doesn't like the "object oriented" side, but rather that Haskell favors both.
Sure it does. In object oriented languages, you still have to write all of the same code, you just group it by what data it affects. In (strongly typed) FP, you're writing the same code, but now it's grouped by functionality, rather than by data type.
> However, I don’t think this is a good use case. We can get around this problem in a cleaner and safer way by using the type system rather than subverting it.
Right, I know he said that. But my point is, his solution doesn't scale well to a more complicated problem. (Unless I misunderstood either his solution or your point?)
There are more real solutions which scale better, but in these toy problems it's hard to get to the meat of the problem. Sometimes the "existential antipattern" is a good choice (see Oleg's finally tagless encoding of, say, the linear lambda calculus). Sometimes creative use of static structure can scale much more neatly than lists of concrete objects.
Seeing so much stuff about Haskell lately, but there seems to be a curious dearth of actual software written in it, if it's so great. How is it that janky hacked together languages like JS and PHP have huge numbers of projects built with them, while a supposedly superior language like Haskell is mostly academic? If it really makes you that much faster, where are the apps?
I understand the critique being made, but this seems like a slightly outdated view of Haskell. There is quite a bit of software being written in Haskell these days. One project that got some press lately is a long-existing project that recently went open source, Cryptol [0].
While lots of projects in Haskell continue to be libraries written for Haskell, there are also lots of languages using Haskell as their implementation language. Browse the GitHub trending repositories listing for Haskell [1] for an idea of what is being done with it.
To be fair most of the trending github projects are libraries for use in Haskell. There are very few actual applications - there is pandoc, git-annex and hakyll and that is it.
It's worth pointing out that the presenter's first language was Haskell and he's been coding in it for over a decade.
LYAH won't get you from beginner to expert in weeks or even months; more likely years.
Consider me skeptical -- needing to build the latest and greatest of Haskell [7.8] from source on a modern Linux distro (CentOS binary with antiquated libgmp.so.3 dependency, seriously?) is a gigantic PITA compared to virtually every other language where you just download a standalone binary of the latest & greatest from language X, modify your PATH, and hit the ground running.
Why do you need to build the latest and greatest? Building the very latest gcc/clang is also going to be a PITA. In either case, there's a perfectly serviceable binary distribution and the stuff packaged in my OS's repo is still plenty usable.
> Why do you need to build the latest and greatest?
Why shouldn't I? When Scala 2.11 was released I downloaded the binary, changed my PATH, fired up a new terminal and started exploring the latest release. Takes 2 minutes or so.
I'd like to do the same with Haskell. 7.8 looks to have significant language improvements that I want to explore vs. read about and be stuck on 7.4.1 (Fedora 18's provided version).
Indeed I did just that, the issue is that the only binary distributions for Linux are CentOS 6 and some flavor of Debian, both of which are dinosaurs compared to any modern distro. The long and short is the installation fails due to a missing dependency on antiquated libgmp.so.3, thus cooking my CPU for an hour and building from source.
If you want to talk about barriers to Haskell adoption, this is certainly one of them.
"both of which are dinosaurs compared to any modern distro."
Without any clue of what constitutes "any modern distro" in your mind, I don't see how this can proceed further. Note that Centos 6.5 and Debian wheezy are the latest from their respective projects. I believe the Debian version will happily install under recent Ubuntu and derivatives.
I'm sorry your preferred distro doesn't have better support.
On average, people who don't know much about software development, but want to make software, will do it using a language that has a lower cost of entry. For all the elegance and purity of a language like Haskell, it seems to be completely overwhelming for most beginners.
Not all software is created by such people, but I think it explains quite a bit.
I think the package ecosystem is a big barrier. I code on OS X, and it seems like half the Haskell packages I try to install fail to compile. I always get super motivated to do my next project in Haskell, but then give up when I can't install the required libraries. Maybe the situation is better on Linux.
In any case, I think Haskell will need reliable package management on at least OS X and Linux before most developers consider it a serious choice for real projects.
I'm not sure which issues you're running into, but for what it's worth, I've had a lot better success using the newer cabal sandboxes[1] than just running `cabal install foo` all the time. (If you're from the Ruby world, it acts a lot more like bundler with `--standalone`).
Previously, I had more issues with conflicting version constraints which I think is more of an issue with library authors and not necessarily cabal itself.
I'd love to, but Haskell Platform ships with an old version of Cabal that doesn't have sandboxes. The new version of Cabal itself doesn't compile on the Mac. (In keeping with the theme of "nothing compiles.")
Edit: At least, I can't get the new version of Cabal/Cabal Install working on my Mac.
There was a relatively serious bug caused by Mavericks' clang being a non-standard CPP, but I think that's fixed in newer versions of GHC. It's certainly possible to get a Cabal compiled on Mac, though.
Against the usual recommendations, I install the Haskell platform with Homebrew. After that a 'cabal update ; cabal install cabal-install' gives a newer version without any problems.
Manual is OK if there's at least a clear path to getting something to install. I don't see that path, though.
My biggest issue is that I don't have the expertise to debug an obscure Haskell compilation error. I won't develop that expertise unless I can use Haskell over the long term on real projects. I can't do that unless libraries are available. So it's a chicken and egg problem.
I think the same is true for many people who'd like to dive deeper into Haskell. If we can't initially lean on the work of expert package maintainers, we can't ever become Haskell experts ourselves. I believe the developer community could expand very quickly if this problem could be solved.
Certainly the case. Much eased (though not eliminated) by the recent addition of cabal sandboxes. There's still no good way to see all the native libraries required by a cabal install, and occasionally there are actual conflicts between packages... I've been meaning to populate http://en.wikibooks.org/wiki/Haskell/Resolving_Cabal_Hell but have been kinda hoping (almost certainly in vain) that someone with deeper knowledge beats me to it.
People generally learn by forming patterns from many examples, not by studying the patterns themselves. Attempts to directly communicate abstract patterns generally fail (any school anywhere: "this is boring because we are never going to use it").
Programmers are inherently attracted to building on imperfect abstractions, because that brokenness is something to latch on and set about easily solving (as it's been solved many times before and they don't even need to solve it perfectly). If the abstraction did exactly what they wanted, they would have to recognize that, understand what was given to them, and then confront the essential complexity of their problem that much sooner.
Honestly, it seems to me like if you don't understand category theory and type theory well, using Haskell will be either hard or impossible. That's what people who are into Haskell are into, and they seem to be a relatively rare breed. (I have a lot of trouble understanding these subjects, though I continue to try. I still don't know what the hell a monad really is.)
Do you want to know "what a monad is" in math or Haskell?
In programming, a monad is a particular design pattern, which is exposed in a particular interface (appropriately called Monad) in Haskell. The design pattern supports a certain way of combining things. The fact that so many disparate things support this interface (State, IO, Software Transactional Memory, Readers, Writers, Continuations...) means that all of the code we write that generically talks to that interface can talk about any of those things, and that's pretty powerful. The fact that one of those things is, in a certain sense, voodoo (IO) shouldn't lead you to think they all are - Monad is just an interface for combining things according to certain patterns.
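For reference, the interface in question is essentially just this; a sketch of the Prelude's Monad class, renamed here to avoid clashing with it:

class Monad' m where
  return' :: a -> m a                      -- wrap a plain value
  bind'   :: m a -> (a -> m b) -> m b      -- usually written >>=: run one thing,
                                           -- feed its result to the next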
>if you don't understand category theory and type theory well, using Haskell will be either hard or impossible
Absolutely not true. I never completed a single credit of university and struggled with high school math. I don't know any type theory or category theory.
Not only am I comfortable in Haskell, I teach Haskell.
Just throwing my voice into the chorus, I know nothing about category theory either, and I don't think that hinders learning Haskell.
I do think there is a subset of Haskellers who use category theory to prove certain ideas and they are able to easily express that in code. However I haven't found it necessary to understand the theory behind why something is sound in order to practically use their work in my projects. That says more about Haskell's expressiveness than the target audience to me.
Setting aside years of imperative (and also OO) programming and learning a different approach to solving problems has been the biggest challenge for me, by far. That's why I agree wholeheartedly with https://news.ycombinator.com/item?id=7687200
I think a more apt analogy would be: no furnace to forge your own cooking utensils (hour long build from source to get latest and greatest [7.8] installed) which you use to prepare the meals (wait for your application to compile) that you then stuff into your mouth (deploy to server).
The 7.6 packaged in Debian is perfectly usable and compatible with everything I've wanted to grab from Hackage. There's a few new bells and whistles in 7.8 (like TypedHoles and -fdefer-type-errors) that I'm looking forward to, but their lack doesn't mean I can't build existing code and when developing new code it just means I lack some new tools. Since these tools are lacking everywhere else, their temporary lack is obviously not keeping people away from Haskell.
Basically, you can get yourself access to a furnace to forge your own melon baller, but you've already got a spoon, and someone else will be shipping you a melon baller next month.
> Well, JS is used because it's the only way to run code on a client system.
Not sure what you mean by this - I guess you're assuming the "client system" is always a web browser?
Currently, I'm writing code for a piece of middleware that is a client to a server that models networking equipment. This client then pushes hundreds of thousands of responses into a fast message queue. None of this is written in JS.
The ecosystem plays a big part in this. NPM is actually a pretty nice package manager to work with. Cabal, on the other hand, took me quite some time to setup and use.
Cabal is actually an extremely nice package manager... it just, like much of Haskell, likes to tell you things won't work far before they fail. In particular, building things with Cabal requires that everything compiles together successfully. This is a much stronger requirement than NPM's and has pushed some more sophistication into the dependency resolution Cabal does.
Indeed, if we deferred all type errors and compile failures from Cabal to runtime like JavaScript/NPM then nobody would complain about Cabal! I think it's unfortunate that Cabal gets so much unnecessary blame when the problem is often somewhere else entirely.
This is mostly true, but I think we do see more dependency related issues than other languages. My working theory is that this is because we wind up building more small, generally useful packages that get used by lots of things and which can therefore be in conflict. I haven't set out to carefully validate this, though.
> This is why we hear that Haskell reprise if it compiles, it works.
If this were true, then functions would not need bodies; you would just define their signatures and move on with life.
The truth is that even with its superb type system, Haskell still needs to run your code. Your code might be statically correct but its runtime is up to you.
I would prefer it if people rephrased this claim like "If it compiles in Haskell, it's more likely to run than if it compiles in Java".
Of course it's not at all true that anything in Haskell that compiles works for any task X. It's not even quite true that, setting out to build something that accomplishes X, X will always be accomplished as soon as you get it to compile. However, in my experience, it is surprisingly often the case: I put together something somewhat large and it works the first time, where in another language (that I might even know better) I'd expect to have a few bugs to fix. People joke around, but I don't think anyone actually makes either of the stronger assertions and expects to be believed, in which case I don't really think there's a problem (but I'm sorry if it bugs you!). On the other hand, maybe I'm being overly charitable and people really do intend the stronger forms...
Well, as you go to the next steps you can use proof search techniques to do exactly that: write your types and your programs write themselves as the "only possible implementation".
This is already possible sometimes in Haskell so long as we restrict ourselves from pathological values like exceptions and non-termination. In fact, the first place this phrase shows up is Russell O'Connor talking about highly polymorphic lens code.
This kind of type-driven limitation of possible implementations is called parametricity, and it is hard to come by even in most typed languages, as it requires purity.
In practice, types often winnow the possible implementations down to a relatively small set. The effect is stronger still if you also include notions of law-abiding implementations, as Haskellers often do. At the end of the day, implementations don't (yet) write themselves; more realistically, the constraints of type and theory drive you naturally toward the correct solution even if you never once figure out the "operational" aspect.
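A standard toy example of parametricity doing the winnowing (ignoring bottom/undefined):

-- The only total function with this type swaps the pair; the type alone
-- forces the implementation.
swap' :: (a, b) -> (b, a)
swap' (x, y) = (y, x)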
I don't think anyone believes that types are sufficient, at least outside of a dependently typed language (at which point you'll find more diversity of opinion).
"If this were true then functions would not need bodies, you would just define their signatures and move on with life."
That's because both addition and multiplication, for example, have the same type signature.
Prelude> :type (+)
(+) :: Num a => a -> a -> a
Prelude> :type (*)
(*) :: Num a => a -> a -> a
A better rephrasing might well be "If it compiles and you've used the right operations, which is made easier because most of the wrong operations will blow chunks all over the place, it works."
Dependent typing anyone? (I think what you're looking for goes by the name "code extraction" in Coq, but I've never gotten into it as a programming environment.)
Yeah, it's a bullshit line. There is a certain class of functions in Haskell that can be completely derived from their signatures (see djinn), but Haskell's type system is not strong enough for automatic formal verification.
I would prefer it if people just said "I program faster in Haskell".
And you know what I am tired of reading, Cedric Buest?
You trolling every programming language discussion with fake names and sock puppets relating your fake made up experiences with functional programming. Do you have no dignity?
You know you're not covering your tracks very well when there are full-fledged watch accounts named after you which are trying to keep your uninformed opinion in check lol
"A type class defines a set of functions which must be implemented for a type to be considered in that type class. Other functions can then be written which operate not on one specific type, but on any type which is in its given class constraint."
Call me crazy but this just sounds like a Java interface to me.
Edit: On further thought, I guess the difference is that in Java, the interface itself is a type. So all instances of classes which implement IShowable can be said to be of type IShowable. Whereas it seems that in Haskell a typeclass is not, itself, a type.
There are a few other differences as well. I'm not completely familiar with the ins and outs of Java interfaces, but:
* You can instantiate types to classes at any point: at the type's definition, at the class's definition, or even elsewhere entirely (orphan instances, though that last category is frowned upon).

* Typeclasses indicate typing bounds but do not destroy type information. This means that we can define things like

    showableId :: Show a => a -> a
    showableId x = x

  which allows only showable types to pass but does not destroy type information:

    > showableId (3 :: Int)
    3 :: Int
    > showableId (id :: Int -> Int)
    !! Type error

* Typeclasses can dispatch on *any* type in the signature. This includes the famous "return type polymorphism", but more generally it means that typeclass resolution involves solving a terminating form of Prolog during typechecking. Type information flows forward and backward over judgements, which allows for greater inference.

* Typeclasses can abstract over higher-kinded types. So we can write something like

    count :: Traversable t => t a -> Int
    count = getSum
          . getConst
          . traverse (const $ Const (Sum 1))

  which generically counts the elements in any container instantiating the "interface" Traversable.
There's also some even funkier techniques you can use when you start involving MultiParamTypeClasses, FunctionalDependencies, or TypeFamilies.
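As one taste of those funkier techniques, a hedged sketch using an associated type family, something a plain Java interface has no direct analogue for:

{-# LANGUAGE TypeFamilies #-}

class Container c where
  type Elem c                -- each instance picks its own element type
  empty  :: c
  insert :: Elem c -> c -> c

instance Container [a] where
  type Elem [a] = a
  empty  = []
  insert = (:)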
An interface in OOP langs is more similar to the Existential data type which the author is trying to avoid using in this post, because it is sometimes seen as an anti-pattern in Haskell (although some people take this to the extreme and tell you to avoid Existentials completely).
If we take for example, a simple IRenderable interface
interface IRenderable {
void Render();
}
There's a bit of boilerplate to add, but we can get something pretty similar in Haskell:
class Renderable a where
  render :: a -> IO ()

data Render = forall a. Renderable a => Render a

instance Renderable Render where
  render (Render a) = render a
The obvious difference between the author's proposed solution and this OOP-style interface is the open- versus closed-world assumption. By having an interface, we have an open world in which we can easily add new types to render without changing existing code, only adding new instances. In the solution proposed by the author (and the various other "solutions"), they break the open-world assumption and fall back to a closed world, where you need to create specific types which encapsulate all the known types that can be rendered. This is demonstrated by the author's Game and ExtendedGame types: if you need to create a new "ExtendedExtendedGame" type each time you add a new renderable type, this solution obviously does not scale, does it?
With the Existential now, we can create a list of Render, which would be similar to having a list of IRenderable in Java. We can't do anything with the list other than call render on each item - which is almost always what we want to do anyway - so the oft-reported "existential anti-pattern" is usually no such thing. In fact, the claim that this is an antipattern stems from the idea that existentials are "type-erasing" - that you might want to convert back from a Render to a Ball for instance (or from an IRenderable to a Ball).
Even in Java, this would be a bad idea - it requires an explicit cast which could fail at runtime - you simply wouldn't do such thing unless you were certain of its type, or you guarded such cast by first using the `instanceof` operator.
if (r instanceof Ball) {
Ball b = (Ball)r;
...
}
You would not normally do this directly on each type, as you'd be back to a closed-world assumption. Instead, you'd usually create a mapping of types->functions, where you can dynamically test a type and do the relevant action, and you can continue to add new items to the map.
In the event we do want a list of renderable items, and we don't want to lose type information - Haskell can also provide a similar type-cast to the one you'd use in Java - it's rightly called `unsafeCoerce` because it is unsafe - as is the Java version, which throws a ClassCastException when used incorrectly.
If we modify the existential data type to also include type information, via Haskell's Data.Typeable module, we can encapsulate the bad behavior of unsafeCoerce, and ensure that we only expose a safe version - one that returns "Maybe x" instead of "x, but may fail".
{-# LANGUAGE ExistentialQuantification #-}
module X.Render (
  Renderable(..),
  Render,
  toRender,
  fromRender
) where

import Data.Typeable
import Unsafe.Coerce

-- the Renderable class from earlier in this comment
class Renderable a where
  render :: a -> IO ()

data Render = forall a. (Typeable a, Renderable a) => Render TypeRep a

instance Renderable Render where
  render (Render _ a) = render a

toRender :: (Typeable a, Renderable a) => a -> Render
toRender a = Render (typeOf a) a

fromRender :: (Typeable a, Renderable a) => Render -> Maybe a
fromRender (Render t a) =
  case unsafeCoerce a of
    x | t == typeOf x -> Just x
      | otherwise     -> Nothing
You can even go as far as emulating Java's instanceof operator (although not quite the same - it doesn't handle subtyping): just create an instanceOf function in a typeclass and use it as an infix operator.
class InstanceOf a where
  instanceOf :: a -> TypeRep -> Bool

instance InstanceOf Render where
  instanceOf (Render tReal _) tWanted | tReal == tWanted = True
  instanceOf _ _ = False
...
(toRender Ball) `instanceOf` (typeOf Ball) == True
Frequently there will be many different ways to solve or architect a problem in Haskell, for better or worse depending on your viewpoint. Personally I think it is better.
But anyways, another option would be to make a type for your list which wraps each possible element, then you can just pattern match on the ADT.
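A hedged sketch of that option, with Ball and Player standing in for the article's types (their render methods are assumed):

data Entity = EBall Ball | EPlayer Player   -- closed: all cases known up front

renderEntity :: Entity -> IO ()
renderEntity (EBall b)   = render b
renderEntity (EPlayer p) = render p

renderAll :: [Entity] -> IO ()
renderAll = mapM_ renderEntity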
I'm also slightly concerned that this post's language targets beginners with almost zero Haskell knowledge: I am not sure quick anecdotes like this will do more to help than to confuse complete beginners.
I still think those interested should start with LYAH which does a good job giving enough context with the flurry of new things to learn.
I suppose it's inevitable that if Haskell gets more popular there will be more posts like this (which I like), but I'm not sure if it's the right way to onboard newcomers.
I'm all for more posts like this and I hope we see more. I think it helps tear down the misconception that Haskell is only for people who do a ton of research beforehand.
The problem with this import solution is that almost nobody actually does it! Nearly every module you will ever import exposes most of the constructors of the ADTs it defines, because Haskell encourages it: it's much simpler and cleaner to pattern match over constructors than over the functions you'd use in their place to encapsulate the constructors, which you need guards to match against instead.
You could also pattern-match over them with View Patterns[^1], i.e. export an alternate ADT that is the 'acceptable' view on the data and a function that takes the encapsulated implementation and turns it to the alternate representation—but I have literally never seen this done, save in the documents describing the motivations for View Patterns.
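For the curious, a minimal sketch of what that looks like; everything here is hypothetical, not from the article:

{-# LANGUAGE ViewPatterns #-}

newtype Temp = MkTemp Double              -- real constructor stays hidden behind the module boundary

data TempView = Freezing | Above Double   -- the exported, "acceptable" view

viewTemp :: Temp -> TempView
viewTemp (MkTemp t)
  | t <= 0    = Freezing
  | otherwise = Above t

describe :: Temp -> String
describe (viewTemp -> Freezing) = "freezing"
describe (viewTemp -> Above t)  = "above zero by " ++ show t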
I think a better example is needed. The example's solution to the Haskell heterogeneous list "problem" could be implemented in other statically typed languages (C++, Java) although it wouldn't be necessary to do so.
I don't see how the Haskell type system is safer in this example but I feel there's something interesting there which I don't understand. Can someone explain the advantage of type classes over what could be done with interfaces and templates in C++?
That's actually the point. (Have you read the "wearing the hair shirt" paper?)
One of the hardest problems in moving from other languages to Haskell is that many of the solutions that you would immediately default to rely on run-time information or behavior, like the heterogeneous list. Those don't translate well, if at all. Instead, you need to step back and change the problem, by relying more on compile-time, type-level information.