My biggest piece of advice for people using lenses is to ditch all the operators. Things like ^. or ^.. or ^? or ^@.. or even <<|>~ are all real operators, yet they look like line noise, and nobody fully remembers them anyway. Just ditch all operators and use named functions. The function toListOf makes it immediately clear what it does (it converts a structure to a list using a fold), but ^.. does not.
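For example, a minimal sketch (using the lens package; the data is made up) of the named function next to the operator spelling:

```Haskell
import Control.Lens

pairs :: [(Int, String)]
pairs = [(1, "a"), (2, "b")]

-- Named-function style: toListOf spells out the intent.
firsts :: [Int]
firsts = toListOf (traverse . _1) pairs   -- [1,2]

-- Operator style: same result, terser.
firsts' :: [Int]
firsts' = pairs ^.. traverse . _1         -- [1,2]
```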
In general I avoid all custom operators and only use operators from packages that ship with the compiler (basically just base and containers).
I strongly recommend using the lens operators. They are uniformly named such that you can trivially identify their behavior based on their lexical construction, and using them reduces mental parsing overhead significantly.
For the former assertion: ^. means "get a single result". ^.. means "get multiple results". ^? means "get zero or one result". ^@.. means "get multiple results, along with their indices". <<|>~ means "modify a value by combining the target with the |> operator from Snoc, then return a tuple of the old target value and the full structure including the combined value". There is a tiny language in the pattern of operator names, and it's worth the 3 minutes of work it takes to learn it.
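A hedged sketch of that mini-language in action (lens package; the data is made up):

```Haskell
import Control.Lens

xs :: [(Int, Char)]
xs = [(1, 'a'), (2, 'b')]

single   = (3, 'c') ^. _1          -- get a single result: 3
allChars = xs ^.. traverse . _2    -- get multiple results: "ab"
firstHit = xs ^? ix 0              -- get zero or one result: Just (1,'a')
indexed' = xs ^@.. itraversed . _2 -- results with their indices: [(0,'a'),(1,'b')]
```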
And as a reward for learning it, you get to write expressions with far fewer parentheses. This is a massive win. Parenthesized expressions introduce a miserable minigame during reading, where you have to properly match each paren to its correct partner while keeping a mental stack to handle nesting. By contrast, the lens operators give you the far simpler mental parsing task of separating the optic, the input, and the operation on the input. There's no nesting involved. The process is a simple visual scan that doesn't require keeping a mental stack. It's a lot easier to quickly read and comprehend.
About the only thing you lose is the ability to easily read code out loud. I don't limit myself to thinking in sounds, but I guess for some people it's important to communicate code out loud. For those kinds of pedagogical purposes, I guess it's ok to pass on the operators. But for code I'm going to work with over a long period of time I'd much rather have the readability advantages of the operators.
Having fewer parentheses is not a win: it makes more things implicit and forces everyone to remember operator precedence. In my opinion, operator precedence is never worth memorizing beyond plus, minus, multiply, and divide.
I find heavily parenthesized expressions easy to read, just because I tend to break them into multiple lines and the indentation serves as a guide. Don't put too many of them on a single line.
That might be a strong argument in many languages, but in Haskell you really don't need to memorize operator precedence. In nearly every case, the types tell you the precedence. Not literally, but most expressions only type check under one particular parse tree.
As a result, you just don't think about precedence when reading code. If you assume the code type checked correctly, you know that it all just makes sense. You don't need to create a parse tree. You just trust.
(Actually, this is the huge advantage of Haskell in most every case. You don't need to understand everything. You just trust that it does what makes sense, and you're right. The compiler enforces it.)
You may want to check out J as a language. It is wonderfully terse and allows for point-free programming, and has all of the advantages you point to above.
I disagree. There are many operators that you'll never use, but if you memorize (^.), (.~), and (%~), you're pretty much set for a lot of real-world software development (a quick sketch follows the mnemonics below).
Per Kmett’s original talk/video on the subject, I can confirm my brain shifted pretty quickly to look at them like OOP field accessors. And for the three above, the mnemonics are effective:
"^." is like an upside-down "v" for view.
".~" looks like a backwards "s" for setters.
"%~" has a tilde so it's a type of setter, and "%" has a circle over a circle, so it's over.
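Putting those three together in a sketch (lens package; the Point record is made up):

```Haskell
{-# LANGUAGE TemplateHaskell #-}
import Control.Lens

data Point = Point { _px :: Int, _py :: Int } deriving Show
makeLenses ''Point

p :: Point
p = Point 1 2

viewed :: Int
viewed = p ^. px           -- view: 1

updated :: Point
updated = p & py .~ 10     -- set:  Point {_px = 1, _py = 10}

bumped :: Point
bumped = p & px %~ (+ 1)   -- over: Point {_px = 2, _py = 2}
```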
I'll also add that, in my experience, things get even nicer in recent versions of PureScript: visible type application lets you define record accessors on the fly, like:
foo ^. ln@"bar" <<< ln@"baz"
"." is unfortunately a restricted character and not the composition operator as in Haskell, but I alias "<<<" as "..".
The pretty obvious question with the above is: why don't you just write "foo.bar.baz"? In my case I use a framework that takes lenses as arguments for IoC, but I think "%~" is always nicer and less repetitive than the built-in alternative.
Maybe a text-editor should allow the user to look at source code through different "lenses" (pun intended) and show the meanings of symbols whenever the user wants to see them.
Emacs (of course) has `prettify-symbols-mode`, which lets you describe symbols (e.g. lambda) and replacement characters (e.g. λ); the effect is purely in the display system, so the underlying buffer does not get modified.
I agree strongly with this and take it one step further: I avoid the infix backticks that turn functions `into` operators.
But I'm not a hardliner. I do use backticks sometimes when building joins with Esqueleto and I do use a limited set of lens operators, like ^. and sometimes the %= variants if the situation calls for it.
I understand where you are coming from, but in a large number of cases this advice does not really make sense.
Sure, a Haskell programmer does write toList when relevant. The lens library also has named combinators like each, over, and _Left.
But let's say I write <$> and you tell me to use fmap. Is that any better? Not particularly. If you didn't know what Functor is, you wouldn't automatically be able to infer it by reading the name `fmap`.
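For instance, both spellings below (everything here is from base) do exactly the same thing; knowing the name fmap doesn't help unless you already know what Functor is:

```Haskell
halved :: [Double]
halved = fmap (/ 2) [1, 2, 3]    -- [0.5,1.0,1.5]

halved' :: [Double]
halved' = (/ 2) <$> [1, 2, 3]    -- same result, operator spelling
```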
A large number of Haskell concepts simply have no analogue in everyday English.
Let’s just say that if you wanted to understand lenses, this is not where you should start; and if you wanted to move to more advanced scenarios, I wouldn’t start here either.
Arrow is how I went from minimal FP to bloody good at it. The Arrow devs are also very giving with their time, and we still interact occasionally online even though I've moved on to Unison for the most part.
Uhhh... Haskell syntax is simpler than python's or javascript's. It's neither obscure nor impenetrable, but it sounds like it's different than what you're used to.
This is such low hanging bait that I'm not even interested in interacting further with it than: Haskell's syntax is obscure and impenetrable for the vast majority of software engineers because it was designed by FP nerds with zero interest in ergonomics.
It doesn't make it a bad syntax. It is, however, objectively terrible for anyone unfamiliar with it.
"infix", "Functor", and "as" are the only words in this code. Everything else is single letters (thanks math traditions..) and punctuation.
What's a <&>? <$>? ::? We've got two different kinds of arrows, => and ->.
-- is obviously enough a line comment.
At least I know what = means.. give or take its constantly ambiguous meaning across languages: assignment and/or equality testing.
And this isn't even delving into the black arts of defining types, where the really ugly punctuation toolkits get opened.
I don't care whether or not they represent regular functions nor what their calling syntax is. What I care is that the base language has many many dozens of them to remember and then to parse in the wild, and then that authors are encouraged to continue proliferating more of them:
```Haskell
-- What does this 'mouse operator' mean? :thinking_suicide:
(~@@^>) :: Functor f => (a -> b) -> (a -> c -> d) -> (b -> f c) -> a -> f d
```
You're just not familiar with it. Like when you ask what "a <&>" means: well, this code is literally defining what it means. It means whatever comes after the equals sign.
In fact the example you picked is trivially simple even if you don't really know the syntax. All you need to do is not rage out and stop thinking. It's literally saying that "as <&> f" is equal to "f <$> as". So you don't even need to know what <$> is; it's extremely straightforward that <&> is just flipping the order of the arguments to <$>.
Also, if you read the type signature, there really aren't many possible things this function could do. You start with a functor of a and a function that maps from a to b, and you end up with a functor of b. There's really only one possible implementation of that (applying the function (a -> b) to the a inside the functor).
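For reference, the definition being discussed is essentially how <&> is written in base's Data.Functor, modulo the fixity declaration; the example value below is made up:

```Haskell
(<&>) :: Functor f => f a -> (a -> b) -> f b
as <&> f = f <$> as

example :: Maybe Int
example = Just 3 <&> (+ 1)   -- Just 4: fmap with the arguments flipped
```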
Like, I'm curious: what would you even name these things to make it clearer? It's so abstract and generic in its very concept that it's hard to come up with any more specific naming.
Look, the code comment goes so far as to say that <punctuationA> is a flipped version of <punctuationB> so in this example that part isn't even the part that's not clear to somebody who hasn't memorized all of the language's in-built punctuation (nor new crap invariably invented by libraries).
But what is a <$>? Should one google something like that and hope that Google honors it like a word instead of ignoring it like it usually does punctuation? (In this case Google does appear to honor it as a keyword.. in 9/10 circumstances with blocks of punctuation, however, it does not.) Or ask an LLM and get a confidently wrong answer there?
Is it even one piece of punctuation, or two or three pieces placed adjacent to one another? Like maybe the <> are like parentheses and the $ is an argument in there.. who knows until you study the nuts and bolts of how this particular language lexes things. (I'm just kidding though.. I get that Haskell would never, never be so cruel as to use <> as parentheses anywhere ;)
Any keyboard bash of punctuation can be defended as understandable once you've memorized all of the symbols and composition rules to parse it. People who code Brainfuck all of the time can read it quite readily, and might even argue that it's overblown that it was ever named "Brainfuck" to begin with.
> Functor functor => functor someType -> (someType -> someOtherType) -> functor someOtherType
> is that better?
Moderately, though there remains room for improvement.
Personally I'm able to follow what's happening to the right of the fat arrow because I've cut my teeth on that style of type signature in Elm and Lean. It takes as its first argument a functor over one type ("typeA" might be an even better name); the second argument is a function that in turn accepts a typeA and emits a typeB; and the signature's output is a functor over typeB.
Unfortunately I've mostly forgotten what a functor was after once learning it (IIRC it's an isomorphism in category theory of some kind or another? I can remember what monads are and how to use them in coding, and a monad over a type makes sense to me.. it's basically a tagged type), and in that light the left side of the fat arrow being "Functor functor" doesn't clarify much. The lowercase "functor" (or "f") is obviously like a type variable, though probably specifically for functors instead of types. The uppercase one.. I don't know.. says something about the shape of the operator being declared?
In any event, finding a way to make the variable name descriptive without echoing the keyword at the beginning would be helpful, just as in prose we often use synonyms to avoid confusing word re-use.
And finally: bear in mind this isn't primarily a discussion about "I lack exposure to X therefore fill me in on X", but instead about "lots of people lack exposure to X therefore being easier to digest in the first place would have been preferable".
If you're in the Clojure world and feel an appetite for something like optics, check out the Specter library from RedPlanetLabs/Nathan Marz; it's optics by another name, functionally and philosophically quite similar.
I learned lenses from the mentioned Edward Kmett video but wish I'd learned from the "Optics by Example" book instead; it's more cohesive, comprehensive and can save you a bunch of time - https://leanpub.com/optics-by-example/
I don't understand why Haskell can't provide an imperative interface (at the grammar level, not semantic level) to get/set values in a type. If you can provide the do-notation to "simulate" imperative code, then why not?
Haskell's design prioritizes referential transparency and equational reasoning, which would be compromised by imperative get/set operations that mutate state directly - lenses provide a purely functional alternative that maintains these properties.
That's why I said at the grammar level, not the semantic level. I also brought up the do-notation example, which kinda gives an imperative interface to the various monads.
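For what it's worth, lens plus do-notation already gets fairly close to that at the grammar level via its MonadState operators. A sketch, assuming the lens and mtl packages; the Config record and its fields are invented:

```Haskell
{-# LANGUAGE TemplateHaskell #-}
import Control.Lens
import Control.Monad.State
import Data.Char (toUpper)

data Config = Config { _host :: String, _port :: Int } deriving Show
makeLenses ''Config

tweak :: State Config ()
tweak = do
  port .= 8080          -- reads like an assignment
  host %= map toUpper   -- reads like an in-place update

main :: IO ()
main = print (execState tweak (Config "localhost" 80))
-- Config {_host = "LOCALHOST", _port = 8080}
```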
Everyone is like "Haskell is such a cool language, it's so much more clear concise and understandable than that stupid language you like so much" (their words, not mine). Then you ask them how they write `foo.bar.baz = 1` and you get 50k words of documentation, 113 new operators[1] like `<<<>~`, and a library with 20 new dependencies. I make fun of them only because I love them - I think Haskell has brought us a lot of cool things like Maybe and Either - but how has no one ever taken a step back and gone "wow, this seems a tad complex for what we're trying to accomplish"?
Because foo.bar.baz = 1 has a side-effect, and side-effects, though powerful, are extremely prone to error. Lenses take more effort, but give us the same amount of power without all the errors. Many people believe the trade-off is worthwhile.
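To make that concrete, here is a hedged sketch of the pure equivalent of foo.bar.baz = 1 (the nested records are invented; needs the lens package):

```Haskell
{-# LANGUAGE TemplateHaskell #-}
import Control.Lens

data Inner = Inner { _baz :: Int } deriving Show
data Outer = Outer { _bar :: Inner } deriving Show
makeLenses ''Inner
makeLenses ''Outer

foo :: Outer
foo = Outer (Inner 0)

-- No mutation: this builds a new Outer with baz replaced.
updated :: Outer
updated = foo & bar . baz .~ 1   -- Outer {_bar = Inner {_baz = 1}}
```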
Thanks, I just took a look. It works by relaxing constraints to allow mutation on a “draft” copy of the data. Interesting idea! (But verboten in Haskell, of course.)
But there's nothing 'verboten' about this in Haskell! Haskell could allow for the exact same syntax, and do the exact same thing behind the scenes. Oh but no, LENSES of course, you must rewrite obvious code in the most obtuse manner possible. :)
But lenses already do the same thing, just with a worse syntax. do notation already rewrites a lot of messy Haskell into cleaner Haskell. It's not that different!
A monad is just a flatmappable. The end. That's the whole tutorial. If you're coming from JS/TS and know how to construct a singleton array and can use Array.prototype.flatMap, you already can do monads. Anything else "monadic" is not a monad. It's a property of something else that can be derived from what I wrote above, OR it's a property of one specific monad not monads in general.
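The same one-liner in Haskell, for the record: the list monad's (>>=) is exactly flatMap.

```Haskell
-- [1,2,3].flatMap(x => [x, x * 10]) in JS, written with the list monad's (>>=):
doubledUp :: [Int]
doubledUp = [1, 2, 3] >>= \x -> [x, x * 10]   -- [1,10,2,20,3,30]
```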
Monads are a set of annotated functions or methods that participate in shared encapsulating middleware. It's kind of like writing an interpreter for existing code by changing shape of the inputs, outputs, and possibly even flow control of the execution of those functions -- but without writing an interpreter.
The easiest example would be something like wrapping a bunch of arithmetic operations with a "cumulative" monad. Effectively this changes your add, sub, mul, div functions such that instead of taking 2 floats and returning a float, they take a hashmap and return a hashmap. The hashmap consists of the original args as well as the cumulative total, for whatever reason. The details of the hashmap are hidden from you; you use the functions as normal.
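One hedged way to read that "cumulative" example in Haskell is a Writer monad carrying a running total; the operation names below are invented, and this needs the mtl package:

```Haskell
import Control.Monad.Writer
import Data.Monoid (Sum (..))

-- Each operation still looks like plain arithmetic at the call site, but the
-- monad threads a cumulative total of the results behind the scenes.
addW, mulW :: Double -> Double -> Writer (Sum Double) Double
addW a b = let r = a + b in tell (Sum r) >> pure r
mulW a b = let r = a * b in tell (Sum r) >> pure r

demo :: (Double, Sum Double)
demo = runWriter $ do
  x <- addW 1 2   -- 3
  mulW x 4        -- 12
-- demo == (12.0, Sum {getSum = 15.0})
```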
You could also make the wrapper monad have some state, and then batch the operations while making them appear to execute sequentially, or make it appear you are doing pure logic when I/O is happening under the hood.
While you can do monads in dynamic languages, it can be hard to reason about changes to the code without strong compiler support, so typically you see it more often implemented in statically typed languages.
In dynamic languages such as lisp you might be better off writing a small interpreter, and in OO languages there are other patterns that might serve the purpose better.
I still don't know what a monoid is though. Or an applicative.