preommr 1 day ago

To explain what amounts to two fundamental rules (we can make wrapper types, and we can flatmap) I will:

- Write 5 paragraphs setting up an imaginary scenario involving fantasy elements of aliens, dragons, and a magical kingdom where they speak using message boxes

- Introduce basic category theory by starting with what a functor is

- Explain all the effects of a monad in such general terms that it basically amounts to anything and everything - since a function can be anything and do everything and it's just function composition

- Write some snippets of Haskell, and just assume that you're familiar with the syntax

- Talk about how delicious burritos are

  • alper 1 day ago

    I've read more than my fair share of these tutorials, and I'd like to be proven wrong here, but I don't think I've ever seen one that explains what the point of these functional constructs is (similarly with Applicative etc.).

    "You can do IO now." So what? I could do IO before that as well.

    Very rarely are practical explanations discussed. Even if they are discussed, the treatment is shallow and useless.

    • lmm 1 day ago
      • amenghra 1 day ago

        From the very beginning of the article (level 1), I don't see what's wrong with code that looks like the following. Early return seems to fix the "typing this makes me feel ill" part? To me, the following code seems perfectly readable without requiring the reader to know about function composition.

          def doFunctionsInSequence1(): Option[Set[Int]] = {
            val r1 = f1(null)
            if(r1.isEmpty) {
              return None
            }
        
            val r2 = f2(r1.get)
            if(r2.isEmpty) {
              return None
            }
        
            return f3(r2.get)
          }
        • lmm 1 day ago

          I find that pretty repetitive, but more, having to reason about branching control flow adds a lot of mental overhead that I'd rather spend on my business logic.
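
          To illustrate, here is a minimal Python sketch of the same chain without explicit branching (f1/f2/f3 are hypothetical stand-ins for the functions in the Scala snippet above, with None playing the role of Scala's Option):

```python
# Minimal sketch: chaining optional-returning functions without if/return branching.
# flat_map runs f only when a value is present; None short-circuits the chain.
def flat_map(value, f):
    return None if value is None else f(value)

# Hypothetical stand-ins for f1/f2/f3 from the Scala snippet above.
def f1(_): return {1, 2}
def f2(s): return {n * 10 for n in s}
def f3(s): return s | {0}

def do_functions_in_sequence():
    # Each step runs only if the previous one produced a value.
    return flat_map(flat_map(f1(None), f2), f3)
```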

    • w4rh4wk5 1 day ago

      From my experience having used Haskell (a long time ago), the main benefit of Monads is the `do` and <- syntax. Once you got your thing to satisfy the Monad interface, you unlocked the nice syntax for writing code. That, and compatibility with transformers.

      Whether this is the best thing since sliced bread or not, is left as an exercise to the reader.

      • codebje 23 hours ago

        Hah, I like that: the main benefit of monads is turning your functional language back into an imperative one...

        IMO it's because option is a monad, list is a monad, io is a monad, async is a monad, try-except is a monad, why invent different magic syntax and semantics for all of them when there's a perfectly good abstraction that covers the lot, and that lets you write functions that are agnostic to which particular monad they're in to boot.
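
        As a loose sketch of that claim (illustrative Python, duck-typed rather than typesafe; all names here are made up): a function written once against flat_map/unit works for an Option-like and a List-like type alike.

```python
# Sketch of "one abstraction covers the lot": add_pair is written once
# against a flat_map/unit interface and reused for two different monads.
class Option:
    def __init__(self, value): self.value = value
    @staticmethod
    def unit(x): return Option(x)
    def flat_map(self, f):
        # An absent value short-circuits the chain.
        return self if self.value is None else f(self.value)

class ListM:
    def __init__(self, items): self.items = items
    @staticmethod
    def unit(x): return ListM([x])
    def flat_map(self, f):
        # Apply f to each element and flatten the resulting lists.
        return ListM([y for x in self.items for y in f(x).items])

def add_pair(ma, mb):
    # Monad-agnostic: only uses flat_map and unit.
    return ma.flat_map(lambda a: mb.flat_map(lambda b: type(ma).unit(a + b)))

assert add_pair(Option(2), Option(3)).value == 5
assert add_pair(ListM([1, 2]), ListM([10, 20])).items == [11, 21, 12, 22]
```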

      • lmm 9 hours ago

        > From my experience having used Haskell (a long time ago), the main benefit of Monads is the `do` and <- syntax. Once you got your thing to satisfy the Monad interface, you unlocked the nice syntax for writing code.

        Nah, I don't even use the syntax much any more. The main benefit is the huge library ecosystem that works generically with any monad, so that if you want to e.g. traverse over a datastructure with your effectful action you can just use cataM or whatnot from recursion-schemes instead of writing it yourself, if you want to compose pipelines of them you just use Conduit, etc.

    • jappgar 1 day ago

      Haskell is primarily a bunch of type gymnastics designed to give the impression of "purity" when no such thing exists in the world.

    • hypendev 1 day ago

      A joke says that it's because once you get it, you lose the ability to explain it like a normal person :)

      And another joke says the best way to explain a monad tutorial is to write another one, so sorry for this.

      Just think of it as a box.

      If Amazon shipped items as-is, without boxes, they would be hard to pack, there would be no way to standardize, and things would often break or get lost.

      Now, if you put it into one of the standardized boxes, that makes things 100x easier. Now you can put these on a conveyor belt, now you can have robots sorting these, now you can use tape to close them, standardization becomes easy as it's not "t-shirt,tennis ball,drill" but just "box box box".

      So now you can do all kinds of things because it's all a box. And you can also stress test the box.

      It's the same with these.

      A. You can just have a function that: calls something over IO, maps its values, does a calculation, retries if wrong, stores the result, spits it out.

      Or B. you can have functions that call any function over IO, functions that map any value to any other value, functions that take any other function and, if that function fails, call another function or retry, one that stores any value given to it and returns information on whether it saved or not, etc.

      The result is the same in the end, but while A makes the workflow strictly defined for that one case, where you have to handle every twist and turn manually (did the save succeed? what if not? write a check, write a test that ensures the check catches the failure, same for success...), B lets you define workflows with pre-tested, pre-built blocks that work with any part of your codebase.

      And it makes your life 1000x easier because now you have common components that work with any data type inside your codebase, do things your way always, are 100% tested and make it easier to handle good cases, bad cases, wiring and logistics. And you can build pipelines out of them. Because at the end, what it does is just lets you chain functions that return wrapped values.

      And you end up with code like:

        val profileData = asAsync { network.userData(userId) } // returns an Async<Result<UserData, Error>>
          .withRetries(3) // works on Async, returns Result, retries the async call if it fails
          .withTraceId(userId) // wrapped flatmap that wraps success into Trace<T> and adds a traceId
          .mapTrace(onError = { ErrorMappingProfile }, { user -> Profile(user.name, user.profileId) }) // our mapTrace is a flatMap for Trace objects, so it knows how to extract them, call the functions and wrap them again
          .store("profile_data") // wrapped mapCatching for storage that works on Trace objects: knows how to unwrap them and store them
          .logInto(ourLogger) // maps trace objects into the shared logger

      Each of these things would before have to be manually written inside the function, the whole function tested for each edge case. if/else's, try/catch, match/when/switch.

      This way, the only thing you need to cover with tests is `network.userData()`, as all the other parts are already written, tested, and do what they say they do. And you can reuse this everywhere in your projects. Instead of being a function you call with data, it becomes a function you give a box and it returns a box. Then you can give it to any other function that needs a box. If boxes make no sense, think of the little connectors on Lego bricks, or pipe connectors in plumbing, or stacking USB adapters or power strips.

      I can't stress enough how much this approach helped me in real life cases - refactoring old codebases especially, as once you establish some base primitives, the surface area starts massively collapsing as the test surface area increases.

    • PunchyHamster 23 hours ago

      I feel like the moment you understand what it is in Haskell you lose the ability to explain it to people without a heavy math theory background

      But from what I observed, it's a group of fancy foreach loops that they put under the same name for some reason

    • jerf 23 hours ago

      You may appreciate my own contribution, https://www.jerf.org/iri/post/2958/ , which includes an entire section titled "If They're So Wonderful Why Aren't They In My Favorite Language?", a section explaining why IO is not a good lens to understand monads and why "monads" don't really have anything to do with "making IO possible" (a very common misconception), as well as what I believe to be one of the more practical applications of monads: a way of generating an audit log of how a particular value came to be what it is. That example specifically arose from one of the rare instances I used the monad pattern in my own real code. Though I still didn't abstract out the monad interface, because if you only have one, that does you no good; the entire point of an interface is to have multiple implementations. It just happens to be a data type that could have implemented the monad interface, if there had been any use for such a thing in my code, which there wasn't.
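
      A loose Python sketch of the audit-log idea (my own illustration, not the article's code): pair each value with the log of how it came to be, and let flat_map thread the log through each step.

```python
# Writer-style sketch: a value carrying the audit log of how it was produced.
class Traced:
    def __init__(self, value, log):
        self.value, self.log = value, log

    def flat_map(self, f):
        # Run the next step on the value, appending its log entries to ours.
        nxt = f(self.value)
        return Traced(nxt.value, self.log + nxt.log)

# Hypothetical steps whose effects we want recorded.
def base_price(x):
    return Traced(x, [f"base price = {x}"])

def add_tax(x):
    return Traced(x * 1.2, [f"applied 20% tax to {x}"])

total = base_price(100).flat_map(add_tax)
# total.value holds the final number; total.log explains how it got there.
```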

      • scythmic_waves 22 hours ago

        I read this years ago and I think it's the best one I've read. Thanks for writing it!

      • tayo42 19 hours ago

        > If They're So Wonderful Why Aren't They In My Favorite Language?

        Aren't they now though? Like option is everywhere lately

        • jerf 19 hours ago

          Supporting "Option" is not "having monad". An Option data type can implement a Monad interface, but you can have an Option data type with no particular monad support in your language, or you can have an Option data type that implements something like "bind" or "join" but there's no interface that it conforms to.

          If that sounds like gibberish it's because you don't have the right definitions loaded into your head. You can read the article I linked to fix that.

          In this case note that what you are calling "Option" is called "Maybe" in Haskell and also in that article. There is an entire subsection explaining why using Maybe/Option as a lens to understand "monad" is a bad idea because by monad standards, it's degenerate, and degenerate instances of an interface make for bad examples. Just as if you're going to explain "iterator" to someone, starting out with "the iterator that returns nothing" isn't really a good idea, because it's not good to try to explain a concept with something that right out of the gate in some sense denies everything about that concept.

          It's a common mistake. There's also some people who think that by adding flatmap to their list/array data type they've "implemented monads". No, they've just implemented flatmap on their list/array; they don't "support monads" by doing that. There are plenty of monad implementations that can't be understood as "flatmap", such as STM. ("flatmap" completely fails to capture the idea that a monad implementation may carry around additional data not visible from the level you're using the implementation on. That's one of the main reasons my example is structured the way it is in the article.) "flatmap" isn't "monad" in exactly the same way that "walk the next item in the array" isn't "iterator", or even more simply, "red" isn't the same as "color". Flatmap is an implementation of monad, walk the next item in the array is an implementation of iterator, red is an implementation of color.

        • lmm 10 hours ago

          Very few languages let you write a function that works for both Option and for other not particularly related monadic types (e.g. Future), while being fully typesafe, which is what I'd call "having monads".

      • thefunkychook 9 hours ago

        I enjoyed your article, thanks for sharing.

        As I understand it, one thing the tutorial didn't go into, which I think is an important subtlety, is that it's not enough to have an implementation of "bind" to have a monad. You also need an implementation of "return : a -> m a" (i.e. a way of lifting a plain 'a' into the monad), AND a proof that these implementations together satisfy the monad laws (i.e. that they "play nicely" together).

        Without all three components, you can have something that "looks like" a monad, in that it has definitions for "bind" and "return", but isn't actually one, because those particular definitions don't also satisfy the monad laws.
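
        For instance, here is a spot-check (not a proof, and purely illustrative) of the three laws for a minimal Maybe-like encoding in Python, with tuples standing in for the constructors:

```python
# Spot-checking the three monad laws for a Maybe-like type.
# 'unit' is the "return : a -> m a" mentioned above; 'bind' is flatMap.
def unit(x):
    return ("Just", x)

def bind(m, f):
    # Apply f to the contents of a Just; propagate Nothing untouched.
    return f(m[1]) if m[0] == "Just" else m

f = lambda x: unit(x + 1)
g = lambda x: unit(x * 2)
m = unit(10)

# Left identity: bind(unit(a), f) == f(a)
assert bind(unit(3), f) == f(3)
# Right identity: bind(m, unit) == m
assert bind(m, unit) == m
# Associativity: bind(bind(m, f), g) == bind(m, lambda x: bind(f(x), g))
assert bind(bind(m, f), g) == bind(m, lambda x: bind(f(x), g))
```

A real proof has to cover all values, not just these samples, which is why the laws live outside the type signature.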

    • ddellacosta 23 hours ago

      I think it's worth reading this if you want to understand the initial motivation for introducing Monads to Haskell: https://homepages.inf.ed.ac.uk/wadler/papers/marktoberdorf/b...

      (And in the context of the previous paper, this one motivates Applicative well I think: https://www.staff.city.ac.uk/~ross/papers/Applicative.pdf)

      That said, I've never really understood the enthusiasm the industry has for introducing Monads outside of Haskell. As I understand it, at the time Philip Wadler wrote his paper, Haskell was pretty painful to use due to its adherence to purity. Monads were presented as a way to maintain purity while providing a principled way to support all kinds of effectful computations. But without some of the features Haskell provides (I'm thinking of typeclasses and HKTs in particular), and given that almost any language you'll be introduced to outside of Haskell already has ways to do e.g. IO or whatnot, it almost always ends up feeling like bolting something on with not a lot of benefit.

      Don't get me wrong, I think there's value in stuff like https://github.com/fantasyland/fantasy-land --I find organizing how I think about computations around these algebraic concepts helps me a lot, personally. But that's distinct from introducing these concepts into day-to-day work in a non-Haskell language, especially on a team, which is often more trouble than it's worth unless everyone has already bought into it and is willing to deal with the meaningful friction introducing this stuff produces.

      I assume the overabundance of Monad tutorials and libraries has to do with the cachet of knowing this relatively obscure, intellectual thing and being able to explain it to your peers, or to be more charitable, perhaps it's a byproduct of getting excited about learning this new, distinct way to approach computation and wanting to share it with everyone. But the end result is that now we have tons of ridiculous tutorials and useless Monad libraries in tons of languages.

    • manoDev 22 hours ago

      Nobody will explain it to you like this, but the main point was being able to satisfy the compiler without introducing an escape hatch into the language.

      Haskell is based on Miranda, and Miranda is based on Hope. Purely functional languages were really purely functional, academic experiments with no way to express side effects, so no way to express practical programs.

      Philip Wadler took the monad (the name already existed in category theory) and showed how computations could be expressed in Haskell with the “do notation” as an example. That made Haskell practical without breaking the “beauty” of the language, i.e. without having to introduce new special syntax or something outside the type checker's capacity.

      So, I don’t think there’s a motivation besides being an exercise in expressivity within the limitations of pure functional programming. Similar ideas describing computation as lazily executed instructions already existed elsewhere, like the interpreter pattern.

      • yobbo 21 hours ago

        The point is rather that in a pure language, each io operation needs to be dependent on a sort of "world state" which is updated for each operation. They chose to implement this state as the io monad but there could have been other ways.
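
        One hedged way to picture that (an illustrative sketch, not how GHC actually implements IO): each action takes the world and returns a result plus a new world, so sequencing falls out of the data dependency.

```python
# World-passing sketch: an "action" is a function from world -> (result, world).
def put_line(s):
    def action(world):
        # The new world records the output, forcing an order on the steps.
        return None, world + [("out", s)]
    return action

def then(a, b):
    # Sequence two actions: b can only run once a has produced the new world.
    def action(world):
        _, w1 = a(world)
        return b(w1)
    return action

program = then(put_line("Hello "), put_line("World!"))
_, final_world = program([])
# final_world now lists the outputs in the order they were forced to happen.
```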

      • ux266478 18 hours ago

        > and showed how computations could be expressed in Haskell with the “do notation” as an example

        To be clear, do notation is new special syntax that was added to make monads more ergonomic. Traditionally you used >> or >>=, which looks a lot more like closures.

    • noelwelsh 22 hours ago

      * Composition and reasoning. Standard things in FP. Build big things from little pieces. Understand them the same way.

      * Explicitly define the order of evaluation (important in Haskell, where lazy evaluation makes the default order of evaluation difficult to trace)

      * Useful mental model that helps with 1) design and 2) understanding new concepts

      * Abstraction. Ignore irrelevant details. Write the standard library once, use it in many different situations.

    • ux266478 19 hours ago

      Within most languages, you're operating at a semantic level where much of the "point" is already obviated for you. They deal with fundamental structure that you take completely for granted, and you use it all implicitly. A monad is very simple at its core: it's an ordered collection, flattened into a single context. What you're collecting, what that ordering means, what that context is, etc. define what the monad is used for.

      You could do IO? IO requires temporal ordering. Take for instance:

          print("Hello ")
          print("World!\n")
      

      Would obviously result in:

          Hello World!
      

      But would it? You are implicitly assuming that the first line will be evaluated and print before the second. It's a reasonable assumption to make, most programming languages embed that in their execution semantics. What if I told you that the assumption isn't actually guaranteed? What if we didn't give that temporal ordering in the same way? What if for instance, a function could return a result without evaluating its arguments? This is called non-strict evaluation (note: this does not necessarily mean lazy evaluation). In the case of a non-strict language, you would need some way to tell the program that the first line should happen before the second before you can do any kind of IO. For a strict language, the IO monad doesn't make sense because you don't need to tell the program that.

      Haskell is almost like a metalanguage. You're describing a program, but it's not like describing a program in Python or Scheme. You are expressing a program in graph reduction, and that's very different compared to how you're used to thinking of computer programs. That's the practical reason why Haskell has the IO and State monads, because they reify as a temporal grounding for instructions. Your program has a completely different concept of flow than in the real world, and these are tools you have to bridge that gap. It's important to note, this is just a very specific usecase of monads.

      If you find the treatment shallow, it's probably because you're looking for answers in shallow contexts. I used to be as confused as you, and the answer I eventually discovered is that I was ignorant of my own ignorance. I needed a healthy dose of computational philosophy to broach the subject. As someone else has said, once you understand it, it can be hard to explain it to someone who doesn't. It's not a short topic to be learned in a series of twitter posts or a blog. It's something you come to understand after a lot of exposure, study, and careful rumination. And of course, primary sources.

bedobi 21 hours ago

Here’s my monad tutorial for programmers

A monad is anything you can flatmap with

The monad of list is you flatmap a list on a list and instead of getting a list of lists, as you would if you just mapped, you get a single flattened list

The monad of Result is you flatmap many function calls (like http requests or whatever) on each other and instead of getting many results, you get a single flattened result
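
In Python terms (a hedged sketch; Result is modeled as a plain tuple and `fetch` is a stand-in for an http call):

```python
# List: map gives a list of lists; flat_map gives one flattened list.
def flat_map_list(xs, f):
    return [y for x in xs for y in f(x)]

assert list(map(lambda x: [x, -x], [1, 2])) == [[1, -1], [2, -2]]   # nested
assert flat_map_list([1, 2], lambda x: [x, -x]) == [1, -1, 2, -2]   # flattened

# Result: flat_map chains calls that may fail; the first error short-circuits.
def flat_map_result(r, f):
    return f(r[1]) if r[0] == "ok" else r

fetch = lambda uid: ("ok", {"id": uid})      # stand-in for an http request
double_id = lambda d: ("ok", d["id"] * 2)

assert flat_map_result(fetch(21), double_id) == ("ok", 42)
assert flat_map_result(("err", "timeout"), double_id) == ("err", "timeout")
```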

Most of you already know this, without necessarily even knowing what a Monad is

Monad literally just means "one thing" - you take many things, and flatmap them into one

Thanks for attending my ted talk

  • throw_await 21 hours ago

    And by introducing Monad, we gain the ability to abstract over these things: List, Option, Result, Functions, State, ...

    • bedobi 18 hours ago

      yes, which is why Monad should be an interface that types like List, Option, Result etc implement, instead of flatMap being just a random discrete function that exists on random types by accident, with no common abstract link between them
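
      A sketch of what that common abstract link could look like (illustrative Python, since Python has no such interface built in; all names are made up):

```python
# Monad as an explicit interface that concrete types implement,
# rather than flatMap existing as an unrelated method on each type.
from abc import ABC, abstractmethod

class Monad(ABC):
    @abstractmethod
    def flat_map(self, f): ...

class Some(Monad):
    def __init__(self, value): self.value = value
    def flat_map(self, f): return f(self.value)

class Empty(Monad):
    def flat_map(self, f): return self   # absence short-circuits

def halve(x):
    # Succeeds only for even numbers.
    return Some(x // 2) if x % 2 == 0 else Empty()

assert isinstance(Some(8).flat_map(halve), Monad)   # the common abstract link
assert Some(8).flat_map(halve).value == 4
assert isinstance(Some(3).flat_map(halve), Empty)
```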

      Kotlin, Java, basically every other language except Haskell, F# etc didn't get that memo

cubefox 1 day ago

I understood the monad concept for a few months in university. After the exam was over, I soon stopped understanding it. The same thing happened with the concept of VC dimension. It's kind of interesting, because we usually don't think of "understanding" as something that comes with a time limit.

  • alper 1 day ago

    It happens all the time. For a brief period I understood musical notation and rhythm and then it was gone. Similarly I had a time in my life where I knew by feeling whether a French noun was le or la.

    • cubefox 22 hours ago

      The sneaky thing here is that understanding, and knowledge in general, disappear silently. So you don't notice it when you unlearn and forget something. Only if a situation comes up where you need that understanding again do you notice that it is gone. Coming to understand something is conscious, losing that understanding is unconscious.

cpa 1 day ago

Pretty cool!

I've spent a lot of time wrapping my head around monads; whenever I thought I "got it," I would come across some exotic monad that completely blew my mind. The best way to understand them is not to rely on analogies but just follow the rules—everybody says that, but it took me a while to truly realize it.

See, for example, the Tardis monad or the Cont monad: https://www.reddit.com/r/haskell/comments/446d13/exotic_mona...

  • ducklord 1 day ago

    I tried understanding the rules but actually using it helped me to get it. Especially when I was using a parser combinator to parse a programming language.

LukeHoersten 22 hours ago

2015 was the best for Haskell. Definitely had a bit of a moment.

12_throw_away 19 hours ago

Honestly, it seems like the common denominator for all this confusion is Haskell, and specifically its IO system, not the monad interface itself. E.g., lots of languages have something like an "Iterable" interface, which - while it may be non-trivial for beginners to learn - absolutely does not require tortured metaphors to explain it. No one has ever needed burritos to explain Result::and_then [1].

[1] https://doc.rust-lang.org/std/result/enum.Result.html#method...

armchairhacker 1 day ago

I still don’t understand why it’s named from Gnosticism (https://en.wikipedia.org/wiki/Monad_(Gnosticism))

  • urxvtcd 1 day ago

    Yeah, would like to know as well. I think the applicative functor was originally called "Idiom", another weird name.

  • yccs27 1 day ago

    Monads got their name from monoids (being a monoid in the category of endofunctors). Monoids are equivalent to one-object categories, so the name uses the greek syllable "mono" for one.

petesergeant 22 hours ago

It took me a long time to write an explainer on embeddings, and one day I will finally finish my Monad tutorial. I think fundamentally you need to have needed them to solve a problem to get them, and outside of pure languages you have to do a lot of special-condition setup to explain why you need them. "You Could Have Invented Monads"[0] is probably my favourite existing one.

0: https://blog.sigfpe.com/2006/08/you-could-have-invented-mona...

Dig1t 1 day ago

So weird, on the front page at the same time as this: "Biology is a burrito"

When I saw that link it immediately reminded me of this: https://blog.plover.com/prog/burritos.html

>Monads are like burritos

And then a few links down is this link to monad tutorials.

Weird coincidence.

FrustratedMonky 1 day ago

A good explanation I read once.

That the best way to understand Monads is to write a tutorial about Monads.

Which does make sense. To understand a subject, the best way is to teach the subject.

ReptileMan 1 day ago

I have always thought that monads are just side effects and that's it.

  • hocuspocus 22 hours ago

    Most monadic effects aren't executing side effects.

    • lmm 10 hours ago

      On the contrary, every monadic effect is a side effect. It's just that what exactly that means is specific to the monad in question.

  • chuckadams 17 hours ago

    Monads are popular for side effects because they have an implicit notion of sequencing, so evaluating a monadic expression enforces the sequence of operations. Works out nice for IO and Futures and so on. But List is also a monad, and flatMap (as many other languages call it) doesn't inherently have any side effects at all. Same goes for Maybe/Option (essentially a list of zero or one element), and State (which does take advantage of sequencing).

    It's all about getting an intuition for how many things fit the shape:

        flatMap :: m a -> (a -> m b) -> m b
    

    Where "flatMap" might have different names in different types. Once you see that pattern in some code, you'll start seeing it in a lot of other places.