Continuing in the vein of cool Haskell examples I find on the internet, this post is going to be about a particularly epic fizzbuzz implementation that I saw in a three-year-old Reddit thread. Now, the OP in that thread had a serviceable but run-of-the-mill fizzbuzz implementation, but what caught my eye was the top-voted comment. The author (who has since, sadly, deleted their account, or I would have credited them here) had accomplished fizzbuzz in a mere 2 lines of code! Here is the snippet copied verbatim:
let (m ~> str) x = str <$ guard (x `mod` m == 0)
in map (fromMaybe . show <*> 3 ~> "fizz" <> 5 ~> "buzz")
Seeing this was one of those moments where you just say "oh man, I have to understand how this works!". Luckily there were a few people in that thread who had already hashed out explanations, so I could already get the gist of what was going on. This post is going to be an attempt to explain the above two lines to myself.
The fizzbuzz two-liner is a single expression with a let binding that defines an operator called ~>. We shall put the let binding to one side for the moment and concentrate just on the core expression:
map (fromMaybe . show <*> 3 ~> "fizz" <> 5 ~> "buzz")
OK so we're using the function map, which has the signature map :: (a -> b) -> [a] -> [b], and we've applied it to a single argument, meaning that the bit in parentheses must be a function a -> b. Now, the core of fizzbuzz is all about turning integers into strings (arbitrary integers into their string representation, multiples of 3 into "fizz" etc.) so we can probably assume that we will be mapping over a list of integers and producing a list of strings.
We can test this hypothesis by loading the two-liner into GHCi (we have to add the imports, which I got by hoogling the function names that GHCi didn't know about).
λ> import Control.Monad (guard)
λ> import Data.Monoid ((<>))
λ> import Data.Maybe (fromMaybe)
λ> let (m ~> str) x = str <$ guard (x `mod` m == 0)
λ> let core = (fromMaybe . show <*> 3 ~> "fizz" <> 5 ~> "buzz")
λ> :t core
core :: (Show a, Integral a) => a -> String
This seems to check out; the type signature looks a bit weird because Haskell
derives the most general signature it can, but we can interpret it as
Integer -> String.
From abstract to concrete
OK, so now we're going to start from the core expression (adding clarifying parentheses):
(fromMaybe . show) <*> (3 ~> "fizz" <> 5 ~> "buzz")
Let's analyse this from the outside in by first looking at the types of the arguments on either side of the <*>:
λ> :t (fromMaybe . show)
fromMaybe . show :: Show a => a -> Maybe String -> String
λ> :t (3 ~> "fizz" <> 5 ~> "buzz")
... :: (Alternative f, Integral a) => a -> f String
Hmm, the first one is kind of understandable, but the second one is still quite abstract. In order to make this more concrete we could try to glue these pieces together with <*>. Let's remind ourselves of the signature for <*>:
λ> :t (<*>)
(<*>) :: Applicative f => f (a -> b) -> f a -> f b
Now we have all the ingredients; let's try and match the type signatures for the previous expressions with the (very abstract) one for <*>:
f (a -> b)                     -> f a              -> f b
a' -> (Maybe String -> String) -> a' -> f' String  -> a' -> String
We can see that the f matches up with the a' ->, and the f' matches up with the Maybe. Given that we know that the whole combination needs to give something of type Integer -> String, this fixes the a' in the above to Integer.
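Since f matches up with a' ->, the instance doing the work here is the Applicative instance for functions, where f <*> g applied to x gives f x (g x): the same argument is fed to both sides. A tiny standalone illustration (the name combined is mine, not from the thread):

```haskell
-- For functions, (<*>) feeds the same argument to both sides:
--   (f <*> g) x = f x (g x)
-- So combined is \x -> x + (x * 10).
combined :: Integer -> Integer
combined = (+) <*> (* 10)
```

For example, combined 3 evaluates to 33, just as the two-liner feeds the same integer to both fromMaybe . show and the fizz/buzz rules.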
Just to make things crystal clear let's rewrite the signatures for the two sub-expressions using the concrete types that we managed to deduce:
(fromMaybe . show)           :: Integer -> Maybe String -> String
(3 ~> "fizz" <> 5 ~> "buzz") :: Integer -> Maybe String
This is pretty cool; by combining several expressions that individually have very abstract types we've managed to deduce concrete types for these expressions!
We can also see that by using <*> we're exploiting the Applicative instance of functions to elide the Integer parameter to the two sub-expressions. We could rewrite core like so:
core n = fromMaybe (show n) $ (3 ~> "fizz" <> 5 ~> "buzz") n
which is, in my opinion, more explicit but much less readable!
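To convince ourselves that the explicit rewrite behaves like the original, here is a self-contained sketch (repeating the definitions from the post so the snippet can be loaded on its own):

```haskell
import Control.Monad (guard)
import Data.Maybe (fromMaybe)

-- The ~> operator from the two-liner: Just str when x is a
-- multiple of m, Nothing otherwise.
(~>) :: Integral a => a -> b -> a -> Maybe b
(m ~> str) x = str <$ guard (x `mod` m == 0)

-- The explicit rewrite of core: try the fizz/buzz rules and
-- fall back to show n when they both come up Nothing.
core :: Integer -> String
core n = fromMaybe (show n) $ (3 ~> "fizz" <> 5 ~> "buzz") n
```

Loading this into GHCi, map core [1..15] evaluates to ["1","2","fizz","4","buzz","fizz","7","8","fizz","buzz","11","fizz","13","14","fizzbuzz"].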
Down the layers
Now that we have these concrete types we can start understanding how everything fits together.
fromMaybe has signature a -> Maybe a -> a; it takes a default value and a Maybe value, and returns the contained value if the Maybe is a Just, or the default value if it is Nothing. In code:
fromMaybe a (Just b) = b
fromMaybe a Nothing  = a
In core the default value is show n, where n is the number we're fizz-buzzing. This makes sense, as if n is not divisible by 3 or 5 then we should show just the number itself.
We can therefore see that 3 ~> "fizz" <> 5 ~> "buzz" takes n and should evaluate to Nothing if n is not divisible by 3 or 5, and to a Just value containing the appropriate string otherwise.
Given this, it makes sense to first look at 3 ~> "fizz" in isolation. If we look at the type signature for <>:
λ> :t (<>)
(<>) :: Monoid m => m -> m -> m
we see that it takes two things of type m and produces a third thing of the same type. We can therefore deduce that the type of 3 ~> "fizz" is the same as that of the whole expression 3 ~> "fizz" <> 5 ~> "buzz", and is therefore Integer -> Maybe String.
To understand how 3 ~> "fizz" works we'll first have to look at the definition of ~>:
(m ~> str) x = str <$ guard (x `mod` m == 0)
OK, the last bit, x `mod` m == 0, is clearly checking whether x is divisible by m. Let's look at the signatures of <$ and guard:
λ> :t (<$)
(<$) :: Functor f => a -> f b -> f a
λ> :t guard
guard :: Alternative f => Bool -> f ()
<$ seems to take two arguments, the second one being a functorial one,
and returns the first value in the functorial context of the second value. If I
had to guess I would say that it's implemented like so:
a <$ fb = fmap (const a) fb
or, in point free style:
(<$) = fmap . const
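We can check this guess against a few concrete values (myImpl, ex1, and ex2 are just names I made up for the comparison):

```haskell
-- A hand-rolled version of (<$), following the guess above.
myImpl :: Functor f => a -> f b -> f a
myImpl = fmap . const

-- Both versions keep the functorial shape and replace the contents:
ex1, ex2 :: Maybe String
ex1 = "fizz" <$ Just ()               -- Just "fizz": shape kept, contents replaced
ex2 = "fizz" <$ (Nothing :: Maybe ()) -- Nothing: no contents to replace
```

myImpl agrees with (<$) on these inputs, which at least supports the guess.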
Looking at the definition of ~> again we can see that the expression evaluates to str put into the functorial context of guard (x `mod` m == 0). What the hell does that mean?
Once again we're getting hit by the fact that the type signatures of the individual pieces are too general; we need to put stuff back into context and "match up the types" to understand what is really going on.
We know that str <$ guard (x `mod` m == 0) must have type Maybe String, and we know that str has type String and that guard returns an f () where f is some functor (Alternative being a subclass of Functor). We can therefore deduce that guard (x `mod` m == 0) must have type Maybe (). This means that the only values this expression can have are Just () and Nothing. Combined with the <$ we can therefore see that (m ~> str) x evaluates to Just str when x is divisible by m, and to Nothing otherwise.
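Concretely, at the Maybe type this gives us (repeating the definition of ~> so the snippet stands alone; hit and miss are my own names):

```haskell
import Control.Monad (guard)

-- The ~> operator specialised to Maybe results.
(~>) :: Integral a => a -> b -> a -> Maybe b
(m ~> str) x = str <$ guard (x `mod` m == 0)

hit, miss :: Maybe String
hit  = (3 ~> "fizz") 9   -- guard gives Just (), so <$ fills it with "fizz"
miss = (3 ~> "fizz") 4   -- guard gives Nothing, so the whole thing is Nothing
```

Here hit is Just "fizz" and miss is Nothing, exactly the two cases described above.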
Down, down, down
So now we've understood that layer of structure, let's see if we can understand the combination 3 ~> "fizz" <> 5 ~> "buzz". Because we'll be referring to this thing a few times, I'm going to give it the name buzzer:
buzzer :: Integer -> Maybe String
buzzer = 3 ~> "fizz" <> 5 ~> "buzz"
The expressions on either side of the <> are functions from Integer to Maybe String. <> is defined as follows between functions:
(f <> g) x = f x <> g x
so clearly for this to work f and g must have the same signature, and the return value must itself be a monoid. We know that this return type is Maybe String in our case.
Maybe is indeed a monoid if the thing that it contains is also a monoid; we just treat Nothing as the monoidal identity and we're done. String is, of course, a monoid with the empty string as its identity element and concatenation as its binary operation.
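Here are both monoid instances at work in isolation (small examples of my own, not from the thread):

```haskell
-- The Maybe monoid: Nothing is the identity, and two Justs
-- combine their contents with the inner monoid (here String).
a, b, c :: Maybe String
a = Nothing     <> Just "buzz"   -- Just "buzz"
b = Just "fizz" <> Nothing       -- Just "fizz"
c = Just "fizz" <> Just "buzz"   -- Just "fizzbuzz"

-- The function monoid: (f <> g) x = f x <> g x.
both :: Integer -> Maybe String
both = const (Just "fizz") <> const (Just "buzz")
```

Applying both to any number gives Just "fizzbuzz", because the function monoid feeds the argument to each side and then combines the two Maybe results.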
Putting all this together we can see how buzzer works. We can explicitly treat each case: not divisible by 3 or 5, divisible by either 3 or 5, divisible by both 3 and 5.
When we apply buzzer to a number that is divisible by neither 3 nor 5, both of the subexpressions evaluate to Nothing and we get Nothing <> Nothing, which is just Nothing. In the second case we get Just "fizz" <> Nothing or Nothing <> Just "buzz", which evaluate to Just "fizz" and Just "buzz" respectively (thanks to the monoid on Maybe). In the final case we get Just "fizz" <> Just "buzz", which evaluates to Just ("fizz" <> "buzz"), which is Just "fizzbuzz".
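We can watch all three cases happen by evaluating buzzer directly (the definitions are repeated so the snippet runs on its own):

```haskell
import Control.Monad (guard)

(~>) :: Integral a => a -> b -> a -> Maybe b
(m ~> str) x = str <$ guard (x `mod` m == 0)

buzzer :: Integer -> Maybe String
buzzer = 3 ~> "fizz" <> 5 ~> "buzz"

-- buzzer 7  == Nothing          (Nothing <> Nothing)
-- buzzer 9  == Just "fizz"      (Just "fizz" <> Nothing)
-- buzzer 10 == Just "buzz"      (Nothing <> Just "buzz")
-- buzzer 15 == Just "fizzbuzz"  (Just "fizz" <> Just "buzz")
```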
Putting it all together
Now comes the question of how we would rewrite this fizzbuzz so that it's easier to understand. On one hand we want to use abstraction to help us reveal the actual structure of the problem (without getting bogged down in the messy details) and on the other hand we don't want to abstract into the stratosphere so that it's no longer clear what our intention is.
My compromise would probably look something like this:
import Data.Monoid ((<>))
import Data.Maybe (fromMaybe)

(m ~> str) x = if x `mod` m == 0 then Just str else Nothing

fizz_or_buzz :: Integer -> Maybe String
fizz_or_buzz = 3 ~> "fizz" <> 5 ~> "buzz"

fizzbuzz :: Integer -> String
fizzbuzz = fromMaybe <$> show <*> fizz_or_buzz

main = traverse putStrLn $ map fizzbuzz [1..100]
Essentially I made the following changes:
- I preferred an explicit 'if-then-else' over the use of <$ and guard, but did not apply a type signature to ~>, as I feel it would obscure, rather than clarify, meaning.
- I put an explicit type signature on the piece that handles the fizzing and buzzing, but kept the abstract monoidal composition. I think that even if someone is not 100% clear on how all the monoid instances interact, the signature and definition make it obvious what this piece is doing. In addition the formatting makes it easy for someone else to modify the code, say to add printing of "baz" if the number is divisible by 7, or to reverse the order of "fizz" and "buzz".
- I preferred using applicative style for both of the arguments to fromMaybe; in my opinion this clarifies intent drastically.
So in the end we have not actually changed too much: the code still works in essentially the same way; I just clarified intent by adding explicit names to things, adding type signatures, and using explicit language features as opposed to what I consider excessively abstract logic.
Of course, the changes I made are coming from a place of ignorance; I am a total Haskell noob, so the things that are not obvious to me could well be obvious to a Haskell veteran. For example, the fact that I chose to keep fromMaybe <$> show <*> fizz_or_buzz is due to the fact that I understand and know how to use the applicative instance of functions; maybe if I had more familiarity with <$ I would find the initial two-liner clearer than my explicit 'if-then-else'. I guess only time will tell.
People complain about object-oriented programming because when you make a method call you have no idea what code is actually getting called (because of dynamic dispatch plus having to follow the object's method resolution order). I would posit that finding the definition of any of the functionality defined in a typeclass is the same kind of problem: from a function definition it is sometimes impossible to know what code will actually be run because it can depend on the type of the arguments; you need to go to the call site to find out what will happen.
In addition, I find that the abstract level at which Haskell operates sometimes confuses more than it helps. Even though a -> b and Maybe a both have monoid instances, the meaning is totally different for the two. In my opinion this is a case where treating things too generally can actually obscure meaning.
Work from the outside in
I found that the complexity from overgeneralising can be combated by working top-down: first figure out the type of the top-level/outermost expression and work inwards. If you start out trying to understand the types of the constituent expressions, they will often be too general for you to be able to understand why they are being used in the first place.
By starting from the outermost expression you can apply the technique of "matching up the types" to figure out what is going on one layer down, and then carry on recursively like this until you have the concrete types for the innermost expressions.
Is there such a thing as being too general?
Abstraction is in some sense the essence of programming computers. It allows us
to see the forest instead of the trees and often enables thinking about
problems in a more fruitful way, i.e. closer to the domain in which the problem
was originally defined. Many languages define abstract (as opposed to concrete)
concepts. Python (my go-to language) has the concept of an iterable, a mapping, etc. These are all useful concepts, as they signal intent; we can define an algorithm that works on any iterable, and this gives us the freedom to pass it an array, linked list, or anything else that can be iterated over. Someone reading the algorithm doesn't need to care about the actual type that is passed in to understand what is going on.
Haskell takes this one step further with typeclasses like Monoid and Applicative, but I am yet to be convinced that this is actually useful for a wide variety of cases. Even if Maybe and List can both formally be considered applicative functors, the applicative instances for these two types mean totally different things: Maybe represents computations that can fail, whereas the List Applicative instance represents all possible combinations of the provided computations. If I write some code that does something with a general Applicative, I don't really know what the code means before I apply it to concrete types. This means that even if I can formulate an algorithm using Applicative, naming this thing sensibly is going to be a real challenge.
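To make that concrete, here is the same applicative expression at both types (maybeSum, failed, and listSums are my own names):

```haskell
-- Identical shape, very different meanings.

-- At Maybe: a computation that can fail; any Nothing sinks the whole thing.
maybeSum, failed :: Maybe Int
maybeSum = (+) <$> Just 1 <*> Just 2   -- Just 3
failed   = (+) <$> Nothing <*> Just 2  -- Nothing

-- At lists: all combinations of the two inputs.
listSums :: [Int]
listSums = (+) <$> [1, 2] <*> [10, 20] -- [11, 21, 12, 22]
```

Nothing in the expression (+) <$> x <*> y itself tells you which of these behaviours you will get; that is decided entirely by the concrete type.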
On the other hand, some very smart people clearly think that thinking at this level of abstraction does produce better software, and I am still very new to Haskell and functional programming in general. I would really like to see a good set of concrete examples that show how abstracting into the stratosphere like this is actually beneficial and produces code that is more maintainable.