###

Question:

While monads are represented in Haskell using the bind and return functions, they can also have another representation using the join function, such as discussed here. I know the type of this function is M(M(X))->M(X), but what does this actually do?

###

Solution:1

Actually, in a way, `join` is where all the magic really happens--`(>>=)` is used mostly for convenience.

All `Functor`-based type classes describe additional structure using some type. With `Functor` this extra structure is often thought of as a "container", while with `Monad` it tends to be thought of as "side effects", but those are just (occasionally misleading) shorthands--it's the same thing either way and not really anything special^{[0]}.

The distinctive feature of `Monad` compared to other `Functor`s is that it can embed *control flow* into the extra structure. The reason it can do this is that, unlike `fmap`, which applies a single flat function over the entire structure, `(>>=)` inspects individual elements and builds *new* structure from that.

With a plain `Functor`, building new structure from each piece of the original structure would instead nest the `Functor`, with each layer representing a point of control flow. This obviously limits the utility, as the result is messy and has a type that reflects the structure of the control flow used.

Monadic "side effects" are structures that have a few additional properties^{[1]}:

- Two side effects can be grouped into one (e.g., "do X" and "do Y" become "do X, then Y"), and the order of grouping doesn't matter so long as the order of the effects is maintained.
- A "do nothing" side effect exists (e.g., "do X" and "do nothing" grouped is the same as just "do X").

The `join` function is nothing more than that grouping operation: a nested monad type like `m (m a)` describes two side effects and the order they occur in, and `join` groups them together into a single side effect.

So, as far as monadic side effects are concerned, the bind operation is a shorthand for "take a value with associated side effects and a function that introduces new side effects, then apply the function to the value while combining the side effects for each".
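That shorthand can be made concrete. Here is a minimal sketch of how `(>>=)` decomposes into `fmap` followed by `join` (using `join` from `Control.Monad`; the name `bind` is just for illustration):

```haskell
import Control.Monad (join)

-- Bind is "map, then flatten": fmap builds a nested structure,
-- and join groups the two layers of side effects into one.
bind :: Monad m => m a -> (a -> m b) -> m b
bind ma f = join (fmap f ma)

-- For lists, the nesting and the flattening are directly visible:
nested :: [[Int]]
nested = fmap (\x -> [x, x * 10]) [1, 2]   -- [[1,10],[2,20]]

flat :: [Int]
flat = [1, 2] `bind` (\x -> [x, x * 10])   -- [1,10,2,20]
```

Each inner list is one "point of control flow" built from an element; `join` merges them back into a single structure.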

**[0]:** Except `IO`. `IO` is *very* special.

**[1]:** If you compare these properties to the rules for an instance of `Monoid`, you'll see close parallels between the two--this is not a coincidence, and is in fact what that "just a monoid in the category of endofunctors, what's the problem?" line is referring to.

###

Solution:2

What join *does* has been adequately described by the other answers so far, I think. If you're looking for a more intuitive understanding...if you're wondering what join "means"...then unfortunately the answer is going to vary depending on the monad in question, specifically on what M(X) "means" and what M(M(X)) "means".

If M is the List monad, then M(M(X)) is a list of lists, and join means "flatten". If M is the Maybe monad, then an element of M(M(X)) could be `Just (Just x)`, `Just Nothing`, or `Nothing`, and join means to collapse those structures in the logical way to `Just x`, `Nothing`, and `Nothing` respectively (similar to camccann's answer of join as combining side effects).
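Those cases are easy to check directly with `join` from `Control.Monad`:

```haskell
import Control.Monad (join)

-- List: join means "flatten".
flattened :: [Int]
flattened = join [[1, 2], [3]]          -- [1,2,3]

-- Maybe: join collapses the three nested shapes in the logical way.
collapsed1, collapsed2, collapsed3 :: Maybe Int
collapsed1 = join (Just (Just 5))       -- Just 5
collapsed2 = join (Just Nothing)        -- Nothing
collapsed3 = join Nothing               -- Nothing
```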

For more complicated monads, M(M(X)) becomes a very abstract thing and deciding what M(M(X)) and join "mean" becomes more complicated. In every case it's kinda like the List monad case, in that you're collapsing two layers of Monad abstraction into one layer, but the meaning is going to vary. For the State monad, camccann's answer of combining two side effects is bang on: join essentially means to combine two successive state transitions. The Continuation monad is especially brain-breaking, but mathematically join is actually rather neat here: M(X) corresponds to the "double dual space" of X, what mathematicians might write as `X**` (continuations themselves, i.e. maps from X->R where R is a set of final results, correspond to the single dual space `X*`), and join corresponds to an extremely natural map from `X****` to `X**`. The fact that Continuation monads satisfy the monad laws corresponds to the mathematical fact that there's generally not much point to applying the dual space operator `*` more than twice.
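For the curious, here is a sketch of that double-dual picture, written with a bare function type rather than the `Cont` newtype from the `transformers` library (the names `C`, `joinC`, and `nestedC` are made up for illustration):

```haskell
-- A continuation computation over result type r: the "double dual" of a.
type C r a = (a -> r) -> r

-- join for this type: feed the outer computation a continuation
-- that runs the inner computation with the original continuation.
joinC :: C r (C r a) -> C r a
joinC mma = \k -> mma (\ma -> ma k)

-- A tiny usage example: a nested computation that ultimately supplies 42.
nestedC :: C Int (C Int Int)
nestedC = \k -> k (\k' -> k' 42)

result :: Int
result = joinC nestedC id   -- 42
```

`C r (C r a)` plays the role of `X****`, `C r a` the role of `X**`, and `joinC` is the natural collapsing map between them.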

But I digress.

Personally I try to resist the urge to apply a single analogy to all possible types of monads; monads are just too general a concept to be pigeonholed by a single descriptive analogy. What join means is going to vary depending on which analogy you're working with at any given time.

###

Solution:3

From the same page we recover this information: `join x = x >>= id`. With knowledge of how the `bind` and `id` functions work, you should be able to figure out what `join` does.
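Spelling that hint out a little (the name `joinViaBind` is just for illustration): `id` fits the `a -> m b` slot of bind because here the element type is itself `m a`.

```haskell
-- x :: m (m a); its "elements" have type m a, so id :: m a -> m a
-- is a valid Kleisli argument, and bind's flattening does the rest.
joinViaBind :: Monad m => m (m a) -> m a
joinViaBind x = x >>= id

example1 :: Maybe Int
example1 = joinViaBind (Just (Just 7))   -- Just 7

example2 :: [Int]
example2 = joinViaBind [[1], [2, 3]]     -- [1,2,3]
```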

###

Solution:4

What it does, conceptually, can be determined just by looking at the type: It unwraps or flattens the outer monadic container/computation and returns the monadic value(s) produced therein.

How it actually does this is determined by the kind of Monad you are dealing with. For example, for the List monad, `join` is equivalent to `concat`.
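That equivalence is easy to verify:

```haskell
import Control.Monad (join)

-- For the list monad, join coincides with concat.
xs :: [[Int]]
xs = [[1, 2], [], [3]]

agree :: Bool
agree = join xs == concat xs   -- True; both give [1,2,3]
```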

###

Solution:5

The bind operation maps: `ma -> (a -> mb) -> mb`. In `ma` and (the first) `mb`, we have two `m`s. To my intuition, understanding bind and monadic operations has come to lie, largely, in understanding that and how those two `m`s (instances of monadic context) will get combined. I like to think of the Writer monad as an example for understanding `join`. Writer can be used to log operations. `ma` has a log in it. `(a -> mb)` will produce another log on that first `mb`. The second `mb` combines both those logs.
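A minimal, self-contained sketch of that Writer behavior (a stripped-down stand-in for the real `Writer` in the `mtl` package):

```haskell
-- A minimal Writer monad: a value paired with an accumulated log.
newtype Writer w a = Writer { runWriter :: (a, w) }

instance Functor (Writer w) where
  fmap f (Writer (a, w)) = Writer (f a, w)

instance Monoid w => Applicative (Writer w) where
  pure a = Writer (a, mempty)
  Writer (f, w1) <*> Writer (a, w2) = Writer (f a, w1 <> w2)

instance Monoid w => Monad (Writer w) where
  Writer (a, w1) >>= f = let Writer (b, w2) = f a
                         in Writer (b, w1 <> w2)   -- bind joins the two logs

-- ma has a log; (a -> mb) produces another; bind combines both.
step1 :: Writer [String] Int
step1 = Writer (3, ["got 3"])

step2 :: Int -> Writer [String] Int
step2 x = Writer (x * 2, ["doubled it"])

combined :: (Int, [String])
combined = runWriter (step1 >>= step2)   -- (6, ["got 3","doubled it"])
```

The `w1 <> w2` in the `Monad` instance is exactly the "combine the two `m`s" step the text is describing.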

(And a bad example to think of is the Maybe monad, because there `Just` + `Just` = `Just` and `Nothing` + anything = `Nothing` (or F# `Some` and `None`) are so uninformative you overlook the fact that something important is going on. You can tend to think of `Just` as simply a single condition for proceeding and `Nothing` as simply a single flag to halt--like signposts on the way, left behind as the computation proceeds. (Which is a reasonable impression, since the final `Just` or `Nothing` appears to be created from scratch at the last step of the computation, with nothing transferred into it from the previous ones.) When really you need to focus on the combinatorics of `Just`s and `Nothing`s at every occasion.)

The issue crystallized for me in reading Miran Lipovaca's Learn You a Haskell For Great Good!, Chapter 12, the last section on Monad Laws (http://learnyouahaskell.com/a-fistful-of-monads#monad-laws), Associativity. This requirement is: "Doing `(ma >>= f) >>= g` is just like doing `ma >>= (\x -> f x >>= g)` [I use `ma` for `m`]." Well, on both sides the argument passes first to `f`, then to `g`. So then what does he mean, "It's not easy to see how those two are equal"? It's not easy to see how they can be different!

The difference is in the associativity of the `join`ings of `m`s (contexts)--which the `bind`ings do, along with mapping. Bind unwraps or goes around the `m` to get at the `a` which `f` is applied to--but that's not all. The first `m` (on `ma`) is held while `f` generates a second `m` (on `mb`). Then `bind` combines--`join`s--both `m`s. The key to `bind` is as much in the `join` as it is in the unwrap (`map`). And I think confusion over `join` is indicative of fixating on the unwrapping aspect of `bind`--getting the `a` out of `ma` in order to match the signature of `f`'s argument--and overlooking the fact that the two `m`s (from `ma` and then `mb`) need to be reconciled. (Discarding the first `m` may be the appropriate way to handle it in some cases (Maybe)--but that's not true in general--as Writer illustrates.)

On the left, we `bind` `ma` to `f` first, then to `g` second. So the log will be like: `("before f" + "after f") + "after g"`. On the right, while the functions `f` and `g` are applied in the same order, now we *bind* to `g` first. So the log will be like: `"before f" + ("after f" + "after g")`. The parens are not in the string(s), so the log is the same either way and the law is observed. (Whereas if the second log had come out as `"after f" + "after g" + "before f"`--then we would be in mathematical trouble!)
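The same point can be checked on the list monad plus ordinary string concatenation (the helpers `f` and `g` here are hypothetical, chosen just for the demonstration):

```haskell
f :: Int -> [Int]
f x = [x, x + 10]

g :: Int -> [Int]
g y = [y * 2]

-- The two groupings from the associativity law give the same result:
lhs, rhs :: [Int]
lhs = ([1, 2] >>= f) >>= g          -- [2,22,4,24]
rhs = [1, 2] >>= (\x -> f x >>= g)  -- [2,22,4,24]

-- The log analogy: string concatenation is associative, so the
-- parenthesization of the joins never shows up in the final log.
logsAgree :: Bool
logsAgree = ("before f" ++ "after f") ++ "after g"
         == "before f" ++ ("after f" ++ "after g")   -- True
```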

Recasting `bind` as `fmap` plus `join` for Writer, we get `fmap f ma`, where `f :: a -> mb`, resulting in `m (mb)`. Think of the first `m` on `ma` as "before f". The `f` gets applied to the `a` inside that first `m`, and now a second `m` (or `mb`) arrives--inside the first `m`, where the mapping of `f` takes place. Think of the second `m` on `mb` as "after f". `m (mb)` = ("before f" ("after f" `b`)). Now we use `join` to collapse the two logs, the `m`s, making a new `m`. Writer uses a monoid and we concatenate. Other monads combine contexts in other ways--obeying the laws. Which is maybe the main part of understanding them.
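That fmap-then-join walkthrough can be traced by hand with Writer values as plain (value, log) pairs (the names `W`, `ma`, `fw`, etc. are made up for the walkthrough):

```haskell
-- A Writer value as a bare pair: the value plus its log.
type W a = (a, String)

ma :: W Int
ma = (5, "before f;")            -- the first m, "before f"

fw :: Int -> W Int
fw x = (x + 1, "after f")        -- f introduces the second m, "after f"

-- fmap applies fw inside the first m, producing the nested m (mb):
nestedW :: W (W Int)
nestedW = let (a, w) = ma in (fw a, w)   -- ((6,"after f"),"before f;")

-- join collapses the two logs with the monoid operation (++):
joinedW :: W Int
joinedW = let ((b, w2), w1) = nestedW in (b, w1 ++ w2)
-- (6,"before f;after f")
```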
