Obviously, interacting with ListenBrainz requires some sort of IO so whatever API I will be offering has to live within some sort of monad. Currently, there are three major options:

*Supply an API targeting a concrete monad stack.* Under this option, our API would have types such as

```
submitListens :: ... -> M ()
getListens :: ... -> M Listens
```

where `M` is some *particular* monad (or monad transformer).

*Supply an API using type classes.* This is the `mtl` approach. Rather than choosing which monad my users have to work in, my API can be polymorphic over monads that support accessing the ListenBrainz API. This means my API is more like:

```
submitListens :: MonadListenBrainz m => ... -> m ()
getListens :: MonadListenBrainz m => ... -> m Listens
```

*Use an extensible effects framework.* Extensible effects are a fairly new entrant, and something of a mix of the above two options. We target a family of concrete monads - `Eff` - but the extensible effects framework lets our effect (querying ListenBrainz) seamlessly compose with other effects. Using `freer-effects`, our API would be:

```
submitListens :: Member ListenBrainzAPICall effects => ... -> Eff effects ()
getListens :: Member ListenBrainzAPICall effects => ... -> Eff effects Listens
```

So, which do we choose? Evaluating the options, I have some concerns.

For option one, we impose pain on all our users who want to use a different monad stack. It’s unlikely that your application is going to be written solely to query ListenBrainz, which means client code becomes littered with `lift`. You may write that off as syntactic noise, but there is another problem - we have committed to an interpretation strategy. Rather than describing API calls, my library now skips directly to prescribing how to run API calls. However, it’s entirely possible that you want to intercept these calls - maybe introducing a caching layer or additional logging. Your only option is to duplicate my API into your own project, wrap each function call, and then change your program to use your API rather than mine. Essentially, the program itself is no longer a first class value that you can transform.

Extensible effects give us a solution to both of the above. The use of the `Member` type class automatically reshuffles effects so that multiple effects can be combined without syntactic overhead, and we only commit to an interpretation strategy when we actually run the program. `Eff` is essentially a *free monad*, which captures the syntax tree of effects, rather than the result of their execution.

Sounds good, but extensible effects come with (at least) two problems that make me hesitant: they are experimental and esoteric, and it’s unclear that they are performant. By using *only* extensible effects, I am forcing an extensible effects framework on my users, and I’d rather not dictate that. Of course, extensible effects can be composed with traditional monad transformers, but I’ve still imposed an unnecessary burden on my users.

So, what do we do? Well, as Old El Paso has taught us: why don’t we have both?

It’s trivial to support both a monad transformer stack *and* extensible effects by using an `mtl` type class. As I argue in Monad transformers, free monads, mtl, laws and a new approach, I think the best pattern for an `mtl` class is to be a monad homomorphism from a program description, and often a free monad is a fine choice to lift:

```
class Monad m => MonadListenBrainz m where
  liftListenBrainz :: Free f a -> m a
```

But what about `f`? As observed earlier, extensible effects are basically free monads, so we can actually share the same implementation. For `freer-effects`, we might describe the ListenBrainz API with a GADT such as:

```
data ListenBrainzAPICall returns where
  GetListens :: ... -> ListenBrainzAPICall Listens
  SubmitListens :: ... -> ListenBrainzAPICall ()
```

However, this isn’t a functor - it’s just a normal data type. In order for `Free f a` to actually be a monad, we need `f` to be a functor. We could rewrite `ListenBrainzAPICall` into a functor, but it’s even easier to just fabricate a functor for free - and that’s exactly what `Coyoneda` will do. Thus our `mtl` type class becomes:

```
class Monad m => MonadListenBrainz m where
  liftListenBrainz :: Free (Coyoneda ListenBrainzAPICall) a -> m a
```
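For reference, `Coyoneda` (from the `kan-extensions` package) pairs an operation with a deferred `fmap`, which is what yields a `Functor` for *any* type constructor. A minimal sketch of its definition:

```
{-# LANGUAGE GADTs #-}

-- A sketch of Coyoneda, as found in the kan-extensions package:
-- it stores an operation `f x` alongside a pending function `x -> a`.
data Coyoneda f a where
  Coyoneda :: (x -> a) -> f x -> Coyoneda f a

-- The Functor instance just composes onto the pending function,
-- so no Functor instance for `f` itself is ever needed.
instance Functor (Coyoneda f) where
  fmap g (Coyoneda k fx) = Coyoneda (g . k) fx

-- Inject a plain operation with the identity continuation.
liftCoyoneda :: f a -> Coyoneda f a
liftCoyoneda = Coyoneda id
```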

We can now provide an implementation in terms of a monad transformer:

```
instance Monad m => MonadListenBrainz (ListenBrainzT m) where
  liftListenBrainz f =
    iterM (join . lowerCoyoneda . hoistCoyoneda go) f
    where
      go :: ListenBrainzAPICall a -> ListenBrainzT m a
```

or extensible effects:

```
instance Member ListenBrainzAPICall effs => MonadListenBrainz (Eff effs) where
  liftListenBrainz f = iterM (join . lowerCoyoneda . hoistCoyoneda send) f
```

or maybe directly to a free monad for later inspection:

```
instance MonadListenBrainz (Free (Coyoneda ListenBrainzAPICall)) where
  liftListenBrainz = id
```
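With these instances in place, the user-facing operations only need to be written once, against the type class. A sketch, assuming (hypothetically) that `GetListens` takes a `UserName` - its real arguments are elided in the GADT above:

```
import Control.Monad.Free (liftF)
import Data.Functor.Coyoneda (liftCoyoneda)

-- Hypothetical argument type: the real arguments to GetListens are
-- elided ("...") in the GADT above.
getListens :: MonadListenBrainz m => UserName -> m Listens
getListens user =
  liftListenBrainz (liftF (liftCoyoneda (GetListens user)))
```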

For the actual implementation of performing the API call, I work with a concrete monad transformer stack:

```
performAPICall :: Manager -> ListenBrainzAPICall a -> IO (Either ServantError a)
```

which is called both by my extensible effects “run” function and by the `go` function in the `iterM` call for `ListenBrainzT`’s `MonadListenBrainz` instance.

In conclusion, I’m able to offer my users a choice of either:

- a traditional monad transformer approach, which doesn’t commit to a particular interpretation strategy, by using an `mtl` type class
- extensible effects

All without extra syntactic burden, a complicated type class, or duplicating the implementation.

You can see the final implementation of `listenbrainz-client` here.

The ReaderT design pattern has been mentioned recently, so where does this fit in? There are two options if we wanted to follow this pattern:

- We require a HTTP `Manager` in our environment, and commit to using this. This has all the problems of providing a concrete monad transformer stack - we are committing to an interpretation.
- We require a family of functions that explain how to perform each API call. This is kind of like a van Laarhoven free monad, or really just explicit dictionary passing. I don’t see this really gaining much on abstracting with type classes.

I don’t feel like the ReaderT design pattern offers anything that isn’t already dealt with above.

My previous post discussed `mtl`-like type classes based on the idea of lifting languages for a given effect into larger monad transformer stacks. This approach felt so mechanical to me that I set about exploring a way to formalise it, and am happy to announce a new experimental library - `transformers-eff`.

`transformers-eff` takes inspiration from the work on algebraic effects and handlers, and splits each effect into composable programs for introducing effects, and handlers that eliminate these effects. As the name indicates, this work is also closely related to monad transformer stacks, as they provide the implementation of the specific effects. I believe the novelty in my approach is that we can do this entirely within the system of monad transformers, and this observation makes it very convenient to create re-usable effects.

Before looking at an example, I want to start by presenting the core API. First, we have the `Eff` monad transformer:

```
data Eff (f :: * -> *) (m :: * -> *) (a :: *)
```

If you squint, you’ll see that `Eff` has the familiar shape of a *monad transformer* - it transforms a given monad `m`, providing it access to effects described by `f`. As `Eff f m` is itself a monad, it’s possible to stack `Eff`s together. The type parameter `f` is used to indicate which effects this `Eff` transformer talks about.

Next, the library provides a way to eliminate `Eff` by *translating* it into a concrete monad transformer:

```
translate :: (Monad m, Monad (t m), MonadTrans t)
          => (forall x r. f x -> ContT r (t m) x)
          -> Eff f m a
          -> t m a
```

Translations are defined by a single function that is very similar to the type of “lifts” we saw in my previous blog post. The difference here is that the homomorphism maps into `ContT`, which allows the translation to adjust control flow. For many effects it will be enough to simply `lift` directly into this, but it can be useful to inspect the continuation, for example to build non-deterministic computations.

Finally, we have one type class method:

```
interpret :: (Monad m) => f a -> m a
```

However, this type class is fairly constrained in its instances, so you should read `m` as actually being some sort of monad transformer stack containing `Eff f`.

Let’s dive in and look at some examples.

Last post we spent a lot of time looking at various representations of the reader monad, so let’s see how this looks under `transformers-eff`.

We already have a definition for our language, `r -> a`, as we saw last week. While we could work directly with this, we’ll be interpreting into `ReaderT`, so I’ll use the `Reader` newtype for a little extra readability. Given this language, we just need to write a translation into a concrete monad transformer, which will be `ReaderT`:

```
effToReaderT :: Monad m => Eff (Reader e) m a -> ReaderT e m a
effToReaderT = translate (\r -> lift (hoist generalize r))
```

This is a little dense, so let’s break it down. When we call `translate`, we have to provide a function with the type:

```
forall a m. Reader r a -> ContT _ (ReaderT r m) a
```

The `ReaderT r m` part is coming from the type we gave in the call to `translate`, that is - the type of `effToReaderT`. We don’t really need to concern ourselves with continuations for this effect, as reading from a fixed environment does not change the flow of control - so we’ll begin with `lift`. We now have to produce a `ReaderT r m a` from a `Reader r a`. If we notice that `Reader r a = ReaderT r Identity a`, we can make use of the tools in the `mmorph` library, which lets us map that `Identity` to any `m` via `hoist generalize`.
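Concretely, `hoist` and `generalize` come from the `mmorph` package: `generalize :: Monad m => Identity a -> m a`, and `hoist` applies it underneath `ReaderT`. A tiny sketch of this step in isolation:

```
import Control.Monad.Morph (generalize, hoist)
import Control.Monad.Trans.Reader (Reader, ReaderT)

-- Reader r a is ReaderT r Identity a, so hoisting generalize
-- turns it into ReaderT r m a for any monad m.
generalise :: Monad m => Reader r a -> ReaderT r m a
generalise = hoist generalize
```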

We still need a way to easily introduce these effects into our programs, and that means writing an `mtl` type class. However, the instances require almost no work on our behalf *and* we only have to provide two, making this a very quick process:

```
class (Monad m) => EffReader env m | m -> env where
  liftReader :: Reader env a -> m a

instance Monad m => EffReader env (Eff (Reader env) m) where
  liftReader = interpret

instance {-# OVERLAPPABLE #-} EffReader env m =>
           EffReader env (Eff effects m) where
  liftReader = lift . liftReader
```

I then provide a user-friendly API built on this lift operation:

```
ask :: EffReader e m => m e
ask = liftReader (Reader id)
```

Finally, most users are probably more interested in running the effect rather than just translating it to `ReaderT`, so let’s provide a convenience function to translate and run all in one go:

```
runReader :: Eff (Reader r) m a -> r -> m a
runReader eff r = runReaderT (effToReaderT eff) r
```

In total, the reader effect is described as:

```
class (Monad m) => EffReader env m | m -> env where
  liftReader :: Reader env a -> m a

instance Monad m => EffReader env (Eff (Reader env) m) where
  liftReader = interpret

instance {-# OVERLAPPABLE #-} EffReader env m =>
           EffReader env (Eff effects m) where
  liftReader = lift . liftReader

ask :: EffReader e m => m e
ask = liftReader (Reader id)

effToReaderT :: Monad m => Eff (Reader e) m a -> ReaderT e m a
effToReaderT = translate (\r -> lift (hoist generalize r))
```
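As a quick sanity check, here’s a tiny usage sketch (assuming the definitions above compile as given): the effectful program is written against `ask`, and only `runReader` commits to an interpretation:

```
-- A small program using the reader effect, run with environment 41.
example :: Monad m => m Int
example = runReader (fmap (+ 1) ask) 41
```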

We also looked at a logging effect last week, and this can also be built using `transformers-eff`:

```
data LoggingF message a = Log message
  deriving (Functor)

class (Monad m) => EffLog message m | m -> message where
  liftLog :: Free (LoggingF message) a -> m a

instance Monad m => EffLog message (Eff (Free (LoggingF message)) m) where
  liftLog = interpret

instance {-# OVERLAPPABLE #-} EffLog message m =>
           EffLog message (Eff effects m) where
  liftLog = lift . liftLog

log :: EffLog message m => message -> m ()
log = liftLog . liftF . Log

runLog :: (MonadIO m)
       => Eff (Free (LoggingF message)) m a
       -> (message -> IO ())
       -> m a
runLog eff io =
  runIdentityT (translate (iterM (\(Log msg) -> liftIO (io msg))) eff)
```

The interpretation here is given an `IO` action to perform whenever a message is logged. I could have implemented this in a few ways - perhaps lifting the whole computation into `ReaderT (message -> IO ())` - but instead I have just used `IdentityT` as the target monad transformer, and added a `MonadIO` constraint onto `m`. Whenever a message is logged, we’ll directly call the given `IO` action. As you can also see, I’ve used a free monad as the source language for the effect. This example demonstrates that we are free to mix a variety of tools (here free monads, `MonadIO` and the identity transformer) in order to get the job done.
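A usage sketch, assuming the definitions above: the program only says *what* to log, and the handler passed to `runLog` decides what that means in `IO`:

```
import Prelude hiding (log)

-- Log two messages, interpreting each one by printing to stdout.
main :: IO ()
main = runLog (log "hello" >> log "world") putStrLn
```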

We saw above that when we introduced our `EffLog` type class, it was immediately available for use alongside `EffReader` effects - and we didn’t have to do anything extra! To me, this is a huge win - I frequently find myself frustrated with the amount of work required when composing many different projects together with `mtl`, and this is not just a theoretical frustration. To provide just one example from today, I wanted to use `ListT` with some Yesod code that required `MonadLogger`. There is obviously no `MonadLogger` instance for `ListT`, and it’s almost unsolvable to provide such an instance without using orphan instances - neither one of those libraries should need to depend on the other, so we’re stuck! If you stay within `Eff`, this problem doesn’t occur.

Many will be quick to point out that in `mtl` it doesn’t necessarily make sense to have all transformers compose due to laws (despite the lack of any laws actually being stated…), and I’m curious whether this is true here. In this library, due to the limitation of having to write your effectful programs in terms of an underlying algebra, I’m not sure it’s possible to introduce the problematic type class methods like `local` and `catch`.

In the `mtl` approach a single monad transformer stack might be able to deal with a whole selection of effects in one go. However, I’ve found that this can actually make it quite difficult to reason about the flow of code. To provide an example, let’s consider this small API:

```
findOllie :: (MonadDb m, MonadPlus m) => m Person
findOllie = do
  x <- dbLookup (PersonId 42)
  guard (personName x == "Ollie")
  return x

type QueryError = String

dbLookup :: (MonadDb m, MonadError QueryError m) => PersonId -> m Person

data DbT m a
instance Monad m => Monad (DbT m)
instance Monad m => MonadDb (DbT m)

runDb :: (MonadIO m) => DbT m a -> m a
```

If we just try and apply `runDb` to `findOllie`, we’ll get

```
runDb findOllie :: (MonadError QueryError m, MonadIO m, MonadPlus m) => m Person
```

We still need to take care of `MonadError` and `MonadPlus`. For `MonadError` I’ll use `ExceptT`, and for `MonadPlus` I’ll use `MaybeT`:

```
runMaybeT (runExceptT (runDb findOllie)) :: IO (Maybe (Either QueryError Person))
```

Next, let’s consider a few scenarios. Firstly, the case where everything succeeds -

```
> runMaybeT (runExceptT (runDb findOllie))
Just (Right Person ...)
```

However, that query could fail, which would cause an error

```
> runMaybeT (runExceptT (runDb findOllie))
Just (Left "Table `person` not found")
```

Still as expected. Finally, person 42 might not actually be me, in which case we get

```
> runMaybeT (runExceptT (runDb findOllie))
Just (Left "")
```

Huh? What’s happened here is that we’ve hit the `MonadPlus` instance for `ExceptT`, and because our `QueryError` is a `String` we have a `Monoid` instance, so we were given an “empty” error. This is not at all what we were expecting!
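To see where that empty error comes from, note that the `Alternative`/`MonadPlus` instance for `ExceptT` in `transformers` requires `Monoid e`, and its `empty` is `Left mempty`. A minimal, self-contained reproduction:

```
import Control.Monad (guard)
import Control.Monad.Trans.Except (ExceptT, runExceptT)

-- guard False calls `empty`, which for ExceptT is `Left mempty`;
-- with e ~ String, mempty is "".
main :: IO ()
main = runExceptT (guard False :: ExceptT String IO ()) >>= print
-- prints: Left ""
```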

While this example is a contrived one, I am very nervous that this accidental choice of instances could happen deep within another section of code, for example where I expect to do some local error handling and accidentally eliminate a chance of failure that I was expecting to deal with elsewhere.

In `transformers-eff` this is not possible, as each `Eff` deals with one *and only one* effect at a time. While this could be done with `mtl` by introducing a separate type class for failure and only adding an instance for `MaybeT`, that works around the problem by convention, and I would much rather bake it into the types.

The underlying implementation of `Eff` is built on top of continuations, and due to aggressive inlining, GHC is able to work some serious magic. In fact, in all the benchmarks I’ve produced so far, `Eff` is as fast as `transformers`, and even comes out slightly faster in one (though within the same order of magnitude).

As `Eff` is just another monad transformer, you can stack in other monad transformers. Note that by doing this you may lack the type class instances you need, so explicit `lift`ing might be necessary. I mainly expect this to be useful by putting `Eff` “on the top” - for example, I can use `Eff` locally within a `Snap` monad computation, provided I eventually run back down to just `Snap`. This is the same pattern as locally using `transformers`.

- **Extensible**. The approach we take should be *open*, allowing us to define new effects.
- **Composable**. It should be possible to mix different effects with well defined, predictable behaviour.
- **Efficient**. We should only have to pay a minimal cost for the use of the abstraction.
- **Terse**. Haskell is generally not verbose, and whatever system we use should allow us to avoid excessive verbosity. The system should work with us; we should not have to work for it.

I would also add a fifth point:

- **Inferable**. Type annotations should not be required for successful compilation.

With this list in mind, what are the current solutions, and how do they measure up?

Starting with the most basic, we can simply choose a concrete monad that does everything we need and work entirely in that - which is usually going to be `IO`. In a sense this is composable - certainly all programs in one monad compose together - but it’s composable in the same sense that dynamically typed languages fit together. Often a single monad that can do everything is too much for each individual computation, and it becomes very difficult to work out exactly what effects are being used in our individual functions: does this computation use `IO`? Will it throw exceptions? Fork threads? You don’t know without reading the source code.

Building a concrete monad can also be a lot of work. Consider a computation that needs access to some local state, a fixed environment and arbitrary `IO`. This has a type such as

```
newtype M a = M (Environment -> State -> IO (a, State))
```

However, to actually interact with the rest of the Haskell ecosystem we need to define (at least) instances of `Functor`, `Applicative` and `Monad`. This is boilerplate code and entirely determined by the choice of effects - which means we should strive to have the compiler write it for us.

To combat this, we can make use of monad transformers. Unlike monads, monad transformers compose, which means we can build larger monads by stacking a collection of monad transformers together. The above monad `M` can now be defined using off-the-shelf components, but crucially we can derive all the necessary type classes in one fell swoop with the `GeneralizedNewtypeDeriving` language extension:

```
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
newtype M a = M (ReaderT Environment (StateT State IO) a)
  deriving (Functor, Applicative, Monad)
```

This saves typing considerably, and is a definite improvement. We’ve achieved more of points 1 and 2 (extensibility and composability) by having both programs *and* effects compose. Point 4 (terseness) is improved by the use of `GeneralizedNewtypeDeriving`. There is a slight risk in terms of efficiency, but I believe if `transformers` would just `INLINE` a few more definitions, the cost can be entirely erased. All of this code will infer as we’d expect, as we’re working entirely with explicit types.

However, while we had to type less to *define* the effects, we have to type more to *use* the effects! If we want to access the environment, for example, we can use the `ask` operation from `Control.Monad.Trans.Reader`, but we have to wrap this up in the `M` `newtype`:

```
env :: M Environment
env = M ask
```

However, if we want to retrieve the current state in the computation, we can use `get` from `Control.Monad.Trans.State`, but we also have to `lift` that into the `ReaderT` monad that is wrapping `StateT`:

```
currentState :: M State
currentState = M (lift get)
```

This is unfortunate - `lift` is mostly noise that we don’t want to be concerned with. There is also the problem that the amount of `lift`s to perform is tied directly to the underlying definition of `M`. If I later decide I want to layer in the chance of failure (perhaps with `MaybeT`), I now have to change almost *all* code using `lift`, by adding an extra one in!

`lift` is a mechanical operation that is determined by the type of the monad transformer stack and the operation that we want to perform. As we noted, for different stacks the amount of `lift`ing will vary, but it is determined by the type of the stack. This suggests that these `lift`s could be inferred by the use of type classes, and this is the purpose of the monad transformer library - `mtl`.

The `mtl` is a library consisting of type classes that abstract over the operations provided by each monad transformer. For `ReaderT`, we have the `ask` operation, and likewise for `StateT` we have the `get` and `put` operations. The novelty in this library is that the instances for these type classes are defined inductively over monad transformer stacks. A subset of the instances for `MonadReader`, for example, shows:

```
class MonadReader r m | m -> r where
  ask :: m r

instance Monad m => MonadReader r (ReaderT r m) where
  ask = Control.Monad.Trans.Reader.ask

instance (MonadReader r m) => MonadReader r (StateT s m) where
  ask = lift ask
```

We can read this as:

- a *base case*: if the outermost transformer is `ReaderT`, no `lift`ing has to be performed.
- an *inductive case*, stating that *if* we know there is a `MonadReader` instance somewhere within the stack (that is, somewhere in the stack we are using `ReaderT`), then the outer monad transformer (in this case `StateT`) is also an instance of `MonadReader`, by simply passing those operations through to the underlying instance via one application of `lift`.

With these instances, the lifting now happens automatically at the use of the respective operations. But not only does it become easier to use the operations, our programs also become more generic and easier to reason about. For example, while `env` previously had the type `M Environment`, it can now generalise to simply:

```
env :: (MonadReader Environment m) => m Environment
env = ask
```

stating that `env` is reusable in *any* computation that has access to `Environment`. This leads to both more options for composition (we’re not tied to working in `M`), but also types that are more expressive of what effects are actually being used by the computation. In this case, we didn’t use `StateT`, so we didn’t incur a `MonadState` type class constraint on `m`.
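To make the inductive lifting concrete, here’s a small sketch (using `mtl`) of a computation that mixes reader and state effects with no manual `lift`s at all - the instances resolve the lifting for us:

```
{-# LANGUAGE FlexibleContexts #-}

import Control.Monad.Reader
import Control.Monad.State

-- Reads a step size from the environment and advances a counter;
-- both effects are used without a single explicit `lift`.
tick :: (MonadReader Int m, MonadState Int m) => m Int
tick = do
  step <- ask
  modify (+ step)
  get

main :: IO ()
main = print (runReader (evalStateT tick 0) 5)
-- prints: 5
```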

Type classes open up a risk of losing type inference, and the approach in `mtl` is to use functional dependencies to retain it. This comes at a compositional cost - the selected effect proceeds by induction from the outermost monad transformer until we reach the *first* matching instance. This means that even if there are multiple possible matches, the first one encountered will be selected. The following program demonstrates this, and will fail to type check:

```
getTheString :: ReaderT Int (ReaderT String IO) String
getTheString = ask
```

```
Couldn't match type ‘Int’ with ‘[Char]’
  arising from a functional dependency between:
    constraint ‘MonadReader String (ReaderT Int (ReaderT String IO))’
    arising from a use of ‘ask’
```

When we use `ask`, induction proceeds from the outermost transformer - `ReaderT Int`. This is an instance of `MonadReader`, and due to the functional dependency it will be selected even though it doesn’t contain the `String` that we’re looking for. This manifests as a type error, which can be frustrating.

In practice, I’m not convinced this is really a problem, but in the scenario where environments don’t match up we have a few options:

- Adapt the environment with tools like `mapReaderT` or `magnify` (from `lens`).
- Use `monad-classes`, which uses a little more type level computation to allow this to work. I’m not entirely sure what the story for inference is here, but I think there *may* be a risk of less inference.
- Forgo the functional dependencies, as in `mtl-unleashed`. In this case you really do give up type inference, and I don’t consider it a viable option (it fails to satisfy point 5 in my criteria in the intro).
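The first option - adapting the environment - can be sketched as follows, assuming a hypothetical `Env` record that holds both values; `withReaderT` projects the larger environment down before `ask` runs:

```
import Control.Monad.Trans.Reader (ReaderT, ask, withReaderT)

-- Hypothetical combined environment holding both values we need.
data Env = Env { envInt :: Int, envString :: String }

-- Project the String out of Env, so the inner computation only sees a String.
getString :: Monad m => ReaderT Env m String
getString = withReaderT envString ask
```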

Interestingly, the generality we gained by being polymorphic over our choice of monad also opens the door to something we couldn’t do with monad transformers, which is to choose a different implementation of the type class. For example, here’s a different implementation of `MonadReader` for `M`:

```
instance MonadReader Environment M where
  ask = do
    env <- M ask
    liftIO (putStrLn "Requesting environment")
    liftIO (putStrLn ("It is currently " ++ show env))
    return env
```

While a slightly contrived example, we see that we now have the ability to provide a different interpretation for `ask`, which makes use of the underlying `IO` in `M` by logging whenever a computation looks at the environment. This technique is even more useful when you start defining domain specific effects, as it gives you the option to provide a pure variant that uses mock data, which can be useful for unit testing.

Let’s move away from monad transformer stacks and see what the other options are. One option that’s getting a lot of attention is the use of *free monads*. A free monad is essentially a construction that adds just enough structure over some data in order to have the structure of a monad - and nothing extra. We spend our days working with monads, and the reason the approach afforded by free monads is appealing is the way that we build them - namely, we just specify the syntax! To illustrate this, let me consider the almost traditional example of free monads: the syntax of “teletype” programs.

To begin with, I have to define the syntax of teletype programs. These programs have access to two operations - printing a line to the screen, and reading a line from the operator.

```
data TeletypeF a = PrintLine String a
                 | GetLine (String -> a)
  deriving (Functor)
```

This functor defines the syntax of our programs - namely programs that read and write to the terminal. The parameter `a` allows us to chain programs together, such as this `echo` program that prints whatever the user types:

```
echo :: TeletypeF (TeletypeF ())
echo = GetLine (\line -> PrintLine line ())
```

However, this is kind of messy. The free monad construction allows us to generate a monad out of this functor, which provides the following presentation:

```
echo :: Free TeletypeF ()
echo = do
  l <- getLine
  printLine l

getLine :: Free TeletypeF String
getLine = liftF (GetLine id)

printLine :: String -> Free TeletypeF ()
printLine l = liftF (PrintLine l ())
```

This definition of `echo` looks much more like the programs we are used to writing.

The remaining step is to provide an interpretation of these programs, which means we can actually run them. We can interpret our teletype programs by using `STDOUT` and `STDIN` from `IO`:

```
runTeletype :: Free TeletypeF a -> IO a
runTeletype =
  iterM (\op ->
    case op of
      GetLine k     -> Prelude.getLine >>= k
      PrintLine l k -> putStrLn l >> k)
```

This rather elegant separation between syntax and semantics suggests a new approach to writing programs – rather than working under a specific monad, we can instead work under a free monad for some suitable functor that encodes all the operations we can perform in our programs.
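Because interpretation is separate from syntax, nothing forces us to use `IO` - a pure mock interpreter (a hypothetical helper, not from the original) can feed canned input and collect the output, which is handy for testing:

```
import Control.Monad.Free (Free (..))

-- Run a teletype program purely: consume canned inputs, collect printed lines.
runMock :: [String] -> Free TeletypeF a -> (a, [String])
runMock inputs prog = go inputs prog
  where
    go _ (Pure a) = (a, [])
    go ins (Free (PrintLine s k)) =
      let (a, out) = go ins k in (a, s : out)
    go ins (Free (GetLine k)) =
      case ins of
        (i : rest) -> go rest (k i)
        []         -> go [] (k "")  -- no more input: supply an empty line
```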

That said, the approach we’ve looked at so far is not particularly extensible between different classes of effects, as everything is currently required to be in a single functor. Knowing that free monads are generated by functors, we can start to look at the constructions we can perform on functors. One very nice property of functors is that given *any* two functors, we can combine them. The following functors witness three possible ways to combine functors:

```
data Sum f g a = InL (f a) | InR (g a) deriving (Functor)
data Product f g a = Product (f a) (g a) deriving (Functor)
data Compose f g a = Compose (f (g a)) deriving (Functor)
```

Assuming `f` and `g` are `Functor`s, all of these are also `Functor`s - which means we can use them to build monads with `Free`.

The most interesting of these constructions (for our purposes) is `Sum`, which lets us choose between two different `Functor`s. Taking a more concrete example, I’ll repeat part of John A. De Goes’ “Modern FP” article. In this, he defines two independent functors: one for programs that can access files in the cloud, and another for programs that can perform basic logging.

```
data CloudFilesF a
  = SaveFile Path Bytes a
  | ListFiles Path (List Path -> a)
  deriving (Functor)

data LoggingF a
  = Log Level String a
  deriving (Functor)
```

Both of these can now be turned into monads with `Free` as we saw before, but we can also combine them to write programs that have access to both the `CloudFilesF` API *and* `LoggingF`:

```
type M a = Free (Sum CloudFilesF LoggingF) a
```

However, in order to use our previous API, we’ll have to perform another round of lifting:

```
-- API specific to individual functors
log :: Level -> String -> Free LoggingF ()
log l s = liftF (Log l s ())

saveFile :: Path -> Bytes -> Free CloudFilesF ()
saveFile p b = liftF (SaveFile p b ())

-- A program using multiple effects
saveAndLog :: Free (Sum CloudFilesF LoggingF) ()
saveAndLog = do
  liftLeft (log Info "Saving...")
  liftRight (saveFile "/data" "\x42")

-- Lifting operations
liftLeft :: Free f a -> Free (Sum f g) a
liftLeft = hoistFree InL

liftRight :: Free g a -> Free (Sum f g) a
liftRight = hoistFree InR
```

This is a slightly unfortunate outcome - while we’ve witnessed that there is extensibility, without more work the approaches don’t compose particularly well.

The need to lift everything leads us to want an `mtl`-like solution in the realm of free monads - that is, a system that automatically knows how to lift individual functors into our composite functor. This is essentially what’s happening in the `extensible-effects` library - as a user you define each individual `Functor`, and then `extensible-effects` provides the necessary type class magic to combine everything together.

We should also mention something on efficiency while we’re here. Free monads have at least two presentations that have different use cases. One of these is extremely easy to inspect (that is, write interpreters for) but has a costly implementation of `>>=`. We know how to solve this problem, but the trade-off switches over to being costly to inspect. Recently, we learnt how to perform reads and binds in linear time, but the constant factors are apparently a little too high to be competitive with raw `transformers`. So all in all, there is an efficiency cost to *just working with* a free monad approach.

**`mtl` and laws**

I want to now return to the monad transformer library. To recap, the definition of `MonadReader` is:

```
class MonadReader r m | m -> r where
  ask :: m r
```

But this alone makes me a little uneasy. Why? I am in the class of Haskellers who consider a type class without a law a smell, as it leaves us unable to reason about what the type class is even doing. For example, it doesn’t require much imagination to come up with nonsense implementations of `ask`:

```
newtype SomeM a = SomeM (StateT Int IO a)
  deriving (Functor, Applicative, Monad)

instance MonadReader Int SomeM where
  ask = SomeM $ do
    i <- get
    put (i + 1)
    return i
```

But then again - who’s to say this is nonsense? Given that we were never given a specification for what `ask` should do in the first place, this is actually perfectly reasonable! For this reason, I set out searching for a way to reason about `mtl`-style effects, such that we could at least get *some* laws.

The `transformers` library also gives us `mtl`-like type classes, one of which is `MonadIO`. This type class, however, does come with laws:

```
-- liftIO . return = return
-- liftIO (f >>= g) = liftIO f >>= liftIO . g
class MonadIO m where
  liftIO :: IO a -> m a
```
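As a quick concrete check of these laws (my own example, not from the post), the stock `ReaderT` instance from `transformers` lifts a compound `IO` action and the composition of its lifted parts to the same computation:

```haskell
-- Check one instantiation of the MonadIO homomorphism law on ReaderT:
-- liftIO (f >>= g) should agree with liftIO f >>= liftIO . g.
import Control.Monad.IO.Class (liftIO)
import Control.Monad.Trans.Reader (runReaderT)

checkLaw :: IO Bool
checkLaw = do
  lhs <- runReaderT (liftIO (pure (1 :: Int) >>= \x -> pure (x + 1))) ()
  rhs <- runReaderT (liftIO (pure (1 :: Int)) >>= \x -> liftIO (pure (x + 1))) ()
  return (lhs == rhs)
```

Of course, one agreeing run is evidence, not a proof - the law quantifies over all `f` and `g`.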

This law is an example of a *homomorphism*. To quote Wikipedia on the subject:

> A homomorphism is a structure-preserving map between two algebraic structures (such as groups, rings, or vector spaces).

In this case the algebraic structure is the monad structure of `IO`. Any monad that is an instance of `MonadIO` has the ability to lift `IO` operations, and as this lifting is a homomorphism, the laws state that it preserves the underlying structure of `IO`.

It’s currently unclear how to apply this type of reasoning to `MonadReader`, given its current definition - `ask` is just a value, it doesn’t even take an argument - so how can we even try and preserve anything?

Let’s take some inspiration from free monads, and consider the effect language for `MonadReader`. If we only have `(Monad m, MonadReader r m)`, then the only thing we can do on top of the normal monad operations is `ask` for the environment. This suggests a suitable functor would be:

```
data AskF r a = Ask (r -> a)
  deriving (Functor)
```

I can now wrap this up in `Free` in order to write programs with the ability to `ask`:

`type Ask r a = Free (AskF r) a`

Now we have an algebraic structure with properties (`Ask r` is a `Monad`) that we would like to preserve, so we can write this alternative form of `MonadReader`:

```
-- liftAsk . return = return
-- liftAsk (f >>= g) = liftAsk f >>= liftAsk . g
class Monad m => MonadReader r m | m -> r where
  liftAsk :: Ask r a -> m a

ask :: MonadReader r m => m r
ask = liftAsk (liftF (Ask id))
```
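For reference, here is a self-contained sketch of a law-abiding instance (`Free` and `AskF` are re-defined locally so it compiles on its own, and `ask'` stands in for the class-level `ask`): `ReaderT` supplies one fixed environment to the entire `Ask` program, which is exactly what makes `liftAsk` a monad homomorphism.

```haskell
{-# LANGUAGE DeriveFunctor, FlexibleInstances, FunctionalDependencies #-}

import Control.Monad.Trans.Reader (ReaderT (..))

data Free f a = Pure a | Free (f (Free f a)) deriving Functor

instance Functor f => Applicative (Free f) where
  pure = Pure
  Pure g <*> x = fmap g x
  Free f <*> x = Free (fmap (<*> x) f)

instance Functor f => Monad (Free f) where
  Pure a >>= k = k a
  Free f >>= k = Free (fmap (>>= k) f)

data AskF r a = Ask (r -> a) deriving Functor

type Ask r = Free (AskF r)

ask' :: Ask r r
ask' = Free (Ask Pure)

class Monad m => MonadReader r m | m -> r where
  liftAsk :: Ask r a -> m a

-- Evaluate an entire Ask program against a single environment.
runAsk :: Ask r a -> r -> a
runAsk (Pure a)       _ = a
runAsk (Free (Ask k)) r = runAsk (k r) r

-- The reference instance: every Ask sees the same environment.
instance Monad m => MonadReader r (ReaderT r m) where
  liftAsk program = ReaderT (\r -> return (runAsk program r))
```

Because `runAsk` threads the *same* `r` through the whole program, `liftAsk (f >>= g)` and `liftAsk f >>= liftAsk . g` are forced to agree.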

Et voilà! We now have an equally powerful `MonadReader` type class, except this time we have the ability to reason about it and its instances. If we return to the instance that I was questioning earlier, we can redefine it under the new API:

```
instance MonadReader Int SomeM where
  liftAsk askProgram = SomeM $ do
    x <- get
    out <- iterM (\(Ask k) -> k x) askProgram
    put (x + 1)
    return out
```

Now that we have some laws, we can ask: is this a *valid* definition of `MonadReader`

? To check, we’ll use equational reasoning. Working through the first law, we have

```
liftAsk (return a)
  = { definition of return for Free }
liftAsk (Pure a)
  = { definition of liftAsk for SomeM }
SomeM $ do
  x <- get
  out <- iterM (\(Ask k) -> k x) (Pure a)
  put (x + 1)
  return out
  = { evaluate iterM for Pure a }
SomeM $ do
  x <- get
  out <- return a
  put (x + 1)
  return out
  = { monad laws }
SomeM $ do
  x <- get
  put (x + 1)
  return a
```

Already we have a problem. While we can see that this does return the original `a` it was given, it does so in a way that also incurs some side effects. That is, `liftAsk (return a)` is *not* the same as `return a`, so this isn’t a valid definition of `MonadReader`. Back to the drawing board… Now, it’s worth noting that there is an instance that *is* law abiding, but might still be considered surprising:

```
instance MonadReader Int SomeM where
  liftAsk = iterM (\(Ask k) -> do
    x <- SomeM get
    SomeM (put (x + 1))
    k x)
```

Applying the same equational reasoning to this is much easier, and shows that the first law is satisfied:

```
liftAsk (return a)
  = { definition of liftAsk }
iterM (\(Ask k) -> do
  x <- SomeM get
  SomeM (put (x + 1))
  k x)
  (return a)
  = { definition of return for Free }
iterM (\(Ask k) -> do
  x <- SomeM get
  SomeM (put (x + 1))
  k x)
  (Pure a)
  = { definition of iterM given Pure }
return a
```

For the second law, I’ll omit the proof, but I want to demonstrate two sessions in GHCi:

```
> let runSomeM (SomeM m) = evalStateT m 0
> runSomeM (liftAsk (ask >>= \r1 -> ask >>= \r2 -> return (r1, r2)))
(0,1)
> runSomeM (liftAsk ask >>= \r1 -> liftAsk ask >>= \r2 -> return (r1, r2))
(0,1)
```

So while the answers agree - they probably don’t agree with your intuition! This is only surprising in that we have some assumption of how `Ask` programs should behave. Knowing more about `Ask`, we might seek this further law:

`ask >> ask = ask`

This law can also be seen as a reduction step in the classification of our `Ask` programs, but a `Free` monad is not powerful enough to capture that. Indeed, the documentation of `Free` mentions exactly this:

> A free `Monad` is one that does no work during the normalisation step beyond simply grafting the two monadic values together. `[]` is not a free `Monad` (in this sense) because `join [[a]]` smashes the lists flat.

The law `ask >> ask = ask` follows by normalisation of our “reader” programs, so a free monad will be unable to capture that by construction - the best we can do is add an extra law to our type class. However, what we can also do is play a game of normalisation by evaluation. First, we write an evaluator for `Free (AskF r)` programs:

```
runAsk :: Free (AskF r) a -> (r -> a)
runAsk f r = iter (\(Ask k) -> k r) f
```

and then witness that we can reify these `r -> a` terms back into `Free (AskF r) a`:

```
reify :: (r -> a) -> Free (AskF r) a
reify = liftF . Ask
```

You should also convince yourself that `r -> a` really is a normal form; you may find the above linked article useful for formal proofs (search for “normalisation”). What we’ve essentially shown is that *every* `Free (AskF r) a` program can be expressed as a single `r -> a` function. The normal form of `ask >> ask` is now - by definition - a single `ask`, which is the law we were originally having to state.

As we’ve witnessed that `r -> a` is the normal form of `Free (AskF r) a`, this suggests that we could just as well write:

```
-- liftAsk . return = return
-- liftAsk (f >>= g) = liftAsk f >>= liftAsk . g
class Monad m => MonadReader r m | m -> r where
  liftAsk :: (r -> a) -> m a
```

(The structure preserved by the homomorphism is the monad structure of `(r -> a)` - the reader monad.)

Our strange instance now becomes

```
instance MonadReader Int SomeM where
  liftAsk f = SomeM $ do
    x <- get
    put (x + 1)
    return (f x)
```

With a little scrutiny, we can see that this is not going to satisfy the homomorphism laws. Not only does it fail the `return` law (for the same reason as before), the second law states that `liftAsk (f >>= g) = liftAsk f >>= liftAsk . g`. Looking at our implementation, this would mean that we would have to increase the state based on the number of binds performed in `f >>= g`. However, we also know that `>>=` for `r -> a` simply reduces to another `r -> a` function - the implication being that it’s impossible to know how many binds were performed.

Here a counter-example will help convince us that the above is wrong. First, we know

`liftAsk (ask >> ask) = liftAsk ask`

because `ask >> ask = ask` by definition.

By the homomorphism laws, we must also have

`liftAsk (ask >> ask) = liftAsk ask >> liftAsk ask`

Combining these, we expect

`liftAsk ask = liftAsk (ask >> ask) = liftAsk ask >> liftAsk ask`

However…

```
> runSomeM (liftAsk ask)
0
> runSomeM (liftAsk (ask >> ask))
0
> runSomeM (liftAsk ask >> liftAsk ask)
1
```

Now we can see that `SomeM`’s current definition of `MonadReader` fails the laws. It’s much harder to write a law-abiding form of `MonadReader Int SomeM` - it will essentially require some *fixed* data throughout the scope of the computation. The easiest fix is of course to change the definition of `SomeM`:

```
newtype SomeM a = SomeM (ReaderT Int IO a)
  deriving (Functor, Applicative, Monad)

instance MonadReader Int SomeM where
  liftAsk f = SomeM (ReaderT (return . f))
```

You should convince yourself that this instance is now law abiding - for example by considering the above counter-example, or by performing equational reasoning.

The process we underwent to reach the new form of the `MonadReader` type class extends well to many different type classes, and suggests a new pattern for `mtl`-like type class operations. Here’s a rough framework that I’m having a lot of success with:

To begin, think about the language that your effect will talk about. For the reader monad, we defined the `AskF` functor, and the same can be done for the exception monad, the failure monad, the state monad, and so on. For more “domain specific” operations, a free monad also scales well - one could imagine a language for interacting with general relational databases, with operations to `SELECT`, `UPDATE`, `DELETE`, and so on.

Individual operations are not enough - we also need a way to write programs using this language. This amounts to finding a suitable way to compose these operations together. An easy first approximation is to use a free structure, again - as we did for the reader monad. In the case of the aforementioned domain-specific relational database example, the free monad might be as far as we want to go.

It’s also worth exploring whether there is a normal form that more succinctly captures the operations in your language, along with equational reasoning to reach it. We saw that the normal form of `Free (AskF r) a` was `r -> a`, and the same process can be run for `Free (StateF s) a` - reaching `s -> (a, s)` as a normal form. It’s important that if you go through the process of normalisation by evaluation, you also make sure you can reify your evaluation result back into the original language. To illustrate why, consider the hypothetical relational database language:

```
data DatabaseF a = Query SqlQuery (Results -> a)
  deriving (Functor)

runDb :: Free DatabaseF a -> (DatabaseHandle -> IO a)
runDb f h = iterM (\(Query q k) -> query h q >>= k) f
```

This is fine for an interpreter, but `DatabaseHandle -> IO a` is not a normal form, because we can’t reify these terms *back* into `DatabaseF`. This matters, because working with a normal form means that you can define a whole range of interpreters that see the necessary structure of the original programs. To illustrate one problem with `DatabaseHandle -> IO a`: if we attempted to write a pure interpreter, we would be unable to see which queries were performed in order to produce the data under `a` (not to mention the limitation that working in `IO` would cause).
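For comparison, here is the state version of that normalisation sketched out (my own sketch, with `Free` re-defined locally so it compiles on its own). Unlike the database example, `s -> (a, s)` *is* a normal form, because we can reify it back:

```haskell
{-# LANGUAGE DeriveFunctor #-}

data Free f a = Pure a | Free (f (Free f a)) deriving Functor

data StateF s a = Get (s -> a) | Put s a deriving Functor

-- Evaluate a state program to the candidate normal form s -> (a, s).
runState :: Free (StateF s) a -> s -> (a, s)
runState (Pure a)          s = (a, s)
runState (Free (Get k))    s = runState (k s) s
runState (Free (Put s' k)) _ = runState k s'

-- Reify a normal form back into the language: read the state, run the
-- function, write the final state, return the result.
reifyState :: (s -> (a, s)) -> Free (StateF s) a
reifyState f =
  Free (Get (\s -> let (a, s') = f s in Free (Put s' (Pure a))))
```

The round trip `runState . reifyState` is the identity, which is the witness that no information was lost in normalisation.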

With your effect language defined, the next step is to define a type class for homomorphisms from this effect language into larger monad stacks. Often this will be a monad homomorphism - much as we saw with `MonadReader` and `MonadIO` - but it need not be. For example, if your source effect language is a simple monoid, then the homomorphism will be a monoid homomorphism. We’ll see an example of this shortly.

With a type class of homomorphisms, we can now export a cleaner API. For `MonadReader`, this means exporting a convenience `ask` operation defined in terms of `liftAsk` with the appropriate program in our `AskF` language.

I also suggest providing a “reference” implementation of this type class. For `MonadReader`, this reference implementation is `ReaderT`. The idea is that users can immediately take advantage of the effect we’re defining by introducing the appropriate monad transformer into their monad stack.

The type class allows them to more efficiently define the operations in terms of existing monadic capabilities (e.g., `IO`), but for many users simply reusing a transformer will be sufficient.

To conclude this article, I want to explore one more application of this pattern, applied to building a logging effect. In fact, it is this very problem that motivated the research for this blog post, and so we’ll end up building the foundations of my `logging-effect` library.

The first step is to identify a language for programs that can perform logging. There’s not much involved here, simply the ability to append to the log at any point in time. Let’s formalise that idea with the appropriate functor:

```
data LoggingF message a = AppendLogMessage message a
  deriving (Functor)
```

This functor is parameterised by the type of log messages. The only constructor for `LoggingF` takes a log message and the rest of the computation to run. We could stop here and lift `Free (LoggingF message) a` programs, but I want to go a bit further and see if there are any other ways to express this. I’ll use normalisation by evaluation again, and see what happens.

```
runFreeLogging :: Free (LoggingF message) a -> (a, [message])
runFreeLogging (Pure a) = (a, [])
runFreeLogging (Free (AppendLogMessage m next)) =
  case runFreeLogging next of
    (a, messages) -> (a, m : messages)
```

We can also take an `(a, [message])` and turn it back into the equivalent `Free (LoggingF message) a`, so `(a, [message])` is another candidate for the language of our logging programs.
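The reification direction of that claim can be spelled out as a sketch (`Free` and `LoggingF` are re-defined locally here so the example stands alone):

```haskell
{-# LANGUAGE DeriveFunctor #-}

data Free f a = Pure a | Free (f (Free f a)) deriving Functor

data LoggingF message a = AppendLogMessage message a deriving Functor

runFreeLogging :: Free (LoggingF message) a -> (a, [message])
runFreeLogging (Pure a) = (a, [])
runFreeLogging (Free (AppendLogMessage m next)) =
  let (a, messages) = runFreeLogging next in (a, m : messages)

-- Rebuild a logging program from its normal form: one AppendLogMessage
-- layer per message, finishing with the final result.
reifyLogging :: (a, [message]) -> Free (LoggingF message) a
reifyLogging (a, ms) =
  foldr (\m k -> Free (AppendLogMessage m k)) (Pure a) ms
```

`runFreeLogging . reifyLogging` is the identity, witnessing that `(a, [message])` really does capture these programs.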

But this `a` bothers me. It occurs in `LoggingF message` only to capture the rest of the computation, but the result of logging never affects the choice of what that next computation is. This suggests that it’s mostly noise, and maybe we can just erase it. That would leave us with logging programs of type `[message]`. This type is no longer the right kind for our lifting operation to be a monad homomorphism, which means we have to identify another algebraic structure. Well, lists are certainly a composable structure - they have all the properties of a *monoid*.

With that in mind, we need to consider what it means to be a monoid homomorphism into some monad. First, observe that monads also have monoid-like operations:

```
monadMempty :: Monad m => m ()
monadMempty = return ()

monadMappend :: Monad m => m () -> m () -> m ()
monadMappend l r = l >> r
```

We can now write our lifting type class with the laws of a monoid homomorphism:

```
-- liftLog mempty   = mempty           ( = return () )
-- liftLog (x <> y) = liftLog x <> liftLog y   ( = liftLog x >> liftLog y )
class Monad m => MonadLog message m | m -> message where
  liftLog :: [message] -> m ()
```

While we reached this type class by normalisation-by-evaluation and then a little bit of fudging, there is another way we could have got here. In a sense, `[]` can be seen as another construction like `Free` - given any type `a`, `[a]` is the free monoid generated by `a`. An easier route to this type class would have been to describe the individual operations in our logging programs by:

`data LoggingOp message = LogMessage message`

and then using `[]` as our free construction. As `LoggingOp message` ~ `Identity message` ~ `message`, we know we could also just use `[message]`, and we’re back at the type class above.

(In my `logging-effect` library I chose a slightly different representation of the free monoid. Theoretically, this is a sounder way to talk about free monoids, but I’m mostly interested in the slight efficiency win from not having to build up lists only to immediately deconstruct them.)

The last steps are to provide polymorphic operations and a reference implementation that satisfies the laws:

```
logMessage :: MonadLog message m => message -> m ()
logMessage message = liftLog [message]

newtype LoggingT message m a = LoggingT (ReaderT (message -> IO ()) m a)
  deriving (Functor, Applicative, Monad)

instance MonadIO m => MonadLog message (LoggingT message m) where
  liftLog messages =
    LoggingT (ReaderT (\dispatchLog -> liftIO (for_ messages dispatchLog)))
```
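To actually use the transformer we need to discharge the `ReaderT` by supplying a dispatch action. A minimal sketch follows - `runLoggingT` is my name for it here, and the library's actual API may differ:

```haskell
import Control.Monad.Trans.Reader (ReaderT (..))

newtype LoggingT message m a = LoggingT (ReaderT (message -> IO ()) m a)

-- Run a LoggingT computation by saying what to do with each message
-- (print it, append it to a file, send it over a socket, ...).
runLoggingT :: LoggingT message m a -> (message -> IO ()) -> m a
runLoggingT (LoggingT m) dispatch = runReaderT m dispatch
```

For example, `runLoggingT program putStrLn` would run `program` with every log message written to stdout.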

Does this reference implementation satisfy the monoid homomorphism laws required by `MonadLog`?

```
liftLog mempty
= { definition of mempty for lists }
liftLog []
= { definition of liftLog for LoggingT }
LoggingT (\dispatchLog -> liftIO (for_ [] dispatchLog))
= { definition of for_ for [] }
LoggingT (\dispatchLog -> liftIO (return ()))
= { liftIO . return = return }
LoggingT (\dispatchLog -> return ())
= { definition of return for LoggingT }
return ()
```

So far so good!

```
liftLog (x <> y)
  = { definition of liftLog for LoggingT }
LoggingT (\dispatchLog -> liftIO (for_ (x ++ y) dispatchLog))
  = { for_ distributes over ++ }
LoggingT (\dispatchLog -> liftIO (for_ x dispatchLog >> for_ y dispatchLog))
  = { liftIO (f >>= g) = liftIO f >>= liftIO . g }
LoggingT (\dispatchLog -> liftIO (for_ x dispatchLog) >> liftIO (for_ y dispatchLog))
  = { definition of (>>) for LoggingT }
LoggingT (\dispatchLog -> liftIO (for_ x dispatchLog)) >>
  LoggingT (\dispatchLog -> liftIO (for_ y dispatchLog))
  = { definition of liftLog for LoggingT }
liftLog x >> liftLog y
```

Bingo!

In this post I presented a pattern for building `mtl`-like type classes in a mechanical fashion, which suggests that maybe some of the details can be dealt with automatically. In the next few days I’ll be presenting my `algebraic-transformers` library, which will show exactly that.

For a while, we’ve had bindings to SDL 2 on Hackage, but these bindings are as close to 1:1 as you can get in Haskell. This results in a library that certainly *can* be used in Haskell, but does not feel particularly like writing ordinary Haskell! A real concern here is that this raises the barrier to entry for those new to either game programming or writing games in Haskell (or both!) - a barrier that I would certainly like to see lowered. To address this, many others and I have spent the last year working on high-level bindings to abstract away the C-like feel of the existing library, and to present a more idiomatic Haskell interface.

To give you an idea of how things look, here’s a basic application that opens a window, clears the screen, and quits when the user presses ‘q’:

```
{-# LANGUAGE OverloadedStrings #-}
module Main where

import SDL
import Linear (V4 (..))
import Control.Monad (unless)

main :: IO ()
main = do
  initialize [InitEverything]
  window <- createWindow "My SDL Application" defaultWindow
  renderer <- createRenderer window (-1) defaultRenderer
  appLoop renderer

appLoop :: Renderer -> IO ()
appLoop renderer = do
  events <- pollEvents
  let eventIsQPress event =
        case eventPayload event of
          KeyboardEvent keyboardEvent ->
            keyboardEventKeyMotion keyboardEvent == Pressed &&
            keysymKeycode (keyboardEventKeysym keyboardEvent) == KeycodeQ
          _ -> False
      qPressed = any eventIsQPress events
  rendererDrawColor renderer $= V4 0 0 255 255
  clear renderer
  present renderer
  unless qPressed (appLoop renderer)
```

Hopefully you’ll agree that the code above is close to idiomatic Haskell.

We’ve tried to be extensive with the bindings, and at the moment the following should (!) all be working:

- Graphics routines have been our highest priority. The bindings should give you full control over window management, access to SDL’s new hardware-accelerated 2D rendering routines, and also the ability to set up an OpenGL context (which is compatible with the `OpenGL` and `gl` libraries).
- SDL’s audio abstraction is available, and the bindings allow you to open audio devices and stream audio data.
- A clean implementation of SDL’s event system, designed for use with pattern matching.
- Access to input devices, including keyboard, mouse, pointer devices, and joysticks.
- A large collection of example code, ported from the popular “lazyfoo” and “twinklebear” tutorials.

The bindings are not 100% exhaustive - we’ve omitted some routines that are already provided by the Haskell runtime, but we also currently lack bindings to the following:

- Force-feedback (SDL’s “haptic” functionality). While we do have some code in the repository here, none of the contributors own a device that is compatible with SDL2 to actually test this work. If you do, please drop us a line and help out!
- Gesture recording for touch screens. We’re currently targeting desktops and laptops, but SDL has support for Android and iOS. Hopefully when GHC is easier to target at these devices, we can start to explore these SDL bindings.
- Other SDL2-compatible libraries, such as `SDL2_net` and `SDL2_ttf`. We’d love for these projects to have the same treatment, and are more than happy to host them under the `haskell-game` GitHub account.

We hope this enables more people to begin building interactive software and games in Haskell. It’s still early days for these bindings, so if you find any bugs (runtime problems or API bugs), or if you find the bindings lacking in any way, please don’t hesitate to open an issue on our issue tracker.

Happy hacking!

Recently there has been some discussion around the `optional-args` library, which provides new types for optional arguments along with heavy syntactic overloading. To follow that, Dimitri Sabadie published a blog post discouraging the use of the currently popular `Default` type class. These are both good discussions, and as with any good discussion have been lingering around in the back of my head.

Since those discussions took place, I’ve been playing with my point in the FRP-web-framework design space - Francium. I made some big refactorings on an application using Francium, mostly extending so-called “component” data types (buttons, checkboxes, etc.), and was frustrated with how much code broke just from introducing new record fields. The Commercial Haskell group published an article on how to design for extensibility back in March, so I decided to revisit that.

It turns out that with a little bit of modification, the approach proposed in designing for extensibility also covers optional arguments pretty well!

First, let’s recap what it means to design for extensibility. The key points are:

- Functions take `Settings` values, which specify a general configuration.
- These `Settings` values are opaque, meaning they cannot be constructed by a data constructor - they have a smart constructor instead. This smart constructor allows you to provide default values.
- Provide get/set functions for all configurable fields in your `Settings` data type, preventing the use of record syntax for updates (which leaks implementation details).

Regular Haskell users will already be familiar with a pattern that can be seen in point 3: we often use a different piece of technology to solve this problem - lenses. Lenses are nice here because they reduce the surface area of our API - two exports can be reduced to just one, which I believe reduces the time to learn a new library. They also compose very nicely, in that they can be embedded into other computations with ease.

With point 3 amended to use some form of lens, we end up with the following style of presentation. Take an HTTP library, for example. Our hypothetical library would have the following exports:

```
data HTTPSettings
httpKeepAlive :: Lens HTTPSettings Bool
httpCookieJar :: Lens HTTPSettings CookieJar
defaultHTTPSettings :: HTTPSettings
httpRequest :: HTTPSettings -> HTTPRequest -> IO Response
```

which might have usage

```
httpRequest
  (defaultHTTPSettings & httpKeepAlive .~ True)
  aRequest
```

This is an improvement, but I’ve never particularly liked the reverse function application stuff with `&`. The repeated use of `&` is essentially working in an `Endo` `Writer` monad, or more generally - a state monad. The `lens` library ships with operators for working specifically in state monads (of course it does), so let’s use those:

```
httpRequest :: State HTTPSettings x -> HTTPRequest -> IO Response
...

httpRequest
  (do httpKeepAlive .= True)
  aRequest
```

It’s a small change here, but when you are overriding a lot of parameters, the sugar offered by the use of `do` is hard to give up - especially when you throw in more monadic combinators like `when` and `unless`.

With this seemingly simple syntactic change, something interesting has happened; something which is easier to see if we break open `httpRequest`:

```
httpRequest :: State HTTPSettings x -> HTTPRequest -> IO Response
httpRequest mkConfig request =
  let config = execState mkConfig defaultHTTPSettings
  in ...
```

Now the default configuration has moved *inside* the HTTP module, rather than being supplied by the user. All the user provides is essentially a function `HTTPSettings -> HTTPSettings`, dressed up in a state monad. This means that to use the default configuration, we simply provide a do-nothing state composition: `return ()`. We can even give this a name:

```
def :: State a ()
def = return ()
```
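Here is the whole pattern as a toy, self-contained sketch - all the names are hypothetical, and plain setter functions stand in for lenses so it carries no dependencies beyond `transformers`:

```haskell
import Control.Monad.Trans.State (State, execState, modify)

data HTTPSettings = HTTPSettings
  { keepAlive      :: Bool
  , timeoutSeconds :: Int
  } deriving (Eq, Show)

-- The defaults live inside the module, not with the caller.
defaultHTTPSettings :: HTTPSettings
defaultHTTPSettings = HTTPSettings False 30

-- A stand-in for a lens-based setter.
setKeepAlive :: Bool -> State HTTPSettings ()
setKeepAlive b = modify (\s -> s { keepAlive = b })

-- The do-nothing configuration: keep every default.
def :: State a ()
def = return ()

-- The function interprets the configuration program against the
-- defaults; callers never touch a Settings value directly. (Returning
-- the settings stands in for actually performing a request.)
httpRequest :: State HTTPSettings x -> String -> IO HTTPSettings
httpRequest mkConfig _request =
  return (execState mkConfig defaultHTTPSettings)
```

Usage is then `httpRequest def "GET /"` for the defaults, or `httpRequest (setKeepAlive True) "GET /"` to override one field.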

and voilà, we now have the lovely name overloading offered by `Data.Default`, but without the need to introduce a lawless type class!

To conclude: in this post I’ve shown that by slightly modifying the presentation of an approach to building APIs with extensibility in mind, we retain the main benefit of `Data.Default`. This main benefit - the *raison d’être* of `Data.Default` - is the ability to use the single symbol `def` whenever you just want *a* configuration, but don’t care what it is. We still have that ability, and we didn’t have to rely on an ad hoc type class to get there.

However, it’s not all rainbows and puppies: we did have to give something up to get here, and what we’ve given up is compiler-enforced consistency. With `Data.Default`, there is only a single choice of default configuration for a given type, so you know that `def :: HTTPSettings` will be the same set of defaults *everywhere*. With my approach, exactly what `def` means is down to the function you’re calling and how it wants to interpret `def`. In practice, due to the lack of laws on `def`, there wasn’t much reasoning you could do about that single instance anyway, so I’m not sure much is lost. I try to keep to a single interpretation of `def` in my libraries by still exporting `defaultHTTPSettings`, and then using `execState mkConfig defaultHTTPSettings` whenever I need to interpret a `State HTTPSettings x`.

- Benjamin Kovach wrote about rebindable syntax and list comprehensions.
- Andraz Bajt wrote about GHC’s various “deriving” mechanisms.
- ertes took us through the concept of higher rank types, and their implementation in GHC.
- Roman Cheplyaka explained how to use existential quantification.
- Tim Docker showed us how the apparently simple “scoped type variables” extension is both useful and necessary.
- Tom Ellis showed us GHC’s special support for arrow notation.
- Sean Westfall showed us how to use Template Haskell.
- Mathieu Boespflug showed us the brand new static pointers extension, forthcoming in future GHC versions.
- Everyone else who submitted pull requests or otherwise informed me of minor typos.

I feel the guest posts have added a lot of variety to the series, and this year each post has consistently gone above and beyond my expectations, delivering incredibly high quality content. Once again, thank you all for your hard work - 24 DOGE wouldn’t be the same without you!

Over the course of the month, we’ve looked at just over 20 extensions - but as I mentioned in the opening post, the story certainly doesn’t stop there. GHC is full of many more interesting extensions - I was hoping to get on to looking at GADTs and data kinds, but alas - there are only so many days in the month. For an example of how these extensions all interact when we write “real-world” software, readers may be interested in viewing my recent Skills Matter talk - strongly typed publish/subscribe over websockets via singleton types.

I’ve been really happy to see comments this year from people who have learnt about new extensions, seen previous extensions in a different light, or simply formed a deeper understanding of extensions they were already using. While I was a little nervous about the series at the start, I’m now confident it’s been a great success. A huge thank you to everyone who participated in the discussions - as with 24 Days of Hackage in previous years, I feel the discussion around these posts is just as important.

Finally, a thank you to everyone who donated during the series - these tokens of appreciation are gratefully received.

To close 24 DOGE, well… a picture speaks a thousand words.

`Static` Programming

GHC already features quite the zoo of pointer types. There are bare `Ptr`s (for marshalling to and from foreign objects), `ForeignPtr`s (smart pointers that allow automatic memory management of the target object), weak pointers (references to objects that are ignored by the garbage collector), and `StablePtr`s (pointers to objects that are pinned to a specific location in memory). GHC 7.10 will add a new beast to this list: `StaticPtr`, the type of pointers whose value never changes across program runs, even across program runs on different machines. The objects pointed to by static pointers are *static*, much in the same sense of the word as in other programming languages: their value is known at compile time and their lifetime extends across the entire run of the program. GHC 7.10 also comes with a new language extension to *safely* create static pointers: `StaticPointers`.

Why yet another pointer type? And why grace it with yet another extension?

Static pointers turn out to be incredibly useful for distributed programming. Imagine that you have a fleet of networked computers, abstractly called *nodes*. You’d like these nodes to collaborate, say because you also have a fair amount of data you’d like to crunch through, or because some of these nodes provide services to other nodes. Static pointers help solve the age-old question of distributed programming: how can nodes easily delegate tasks to each other?

For most programming languages, this is a thorny question: support for distributing computations comes as an afterthought, so there is no first-class support. But there are exceptions: Erlang is one example of a language that has escaped from research labs one way or another and natively speaks distributed. Erlang supports literally sending the code for any native (non-foreign) function from node to node. Delegating a task called `myfun` is a case of saying:

`spawn(There, myfun)`

where `There` is a variable containing some node identifier. This capability comes at a cost, however. It is in general hard to share optimized compiled code across a cluster of machines, which may not be running the exact same operating system or have the same system libraries available. So Erlang keeps to comparatively slow but easy to handle and easy to distribute interpreted bytecode instead. Moreover, if new code can be loaded into a running program at any moment or existing code monkey patched on-the-go, what tools do we have to reason about the resulting state of the program?

Haskell too natively speaks distributed, at least in its bleeding edge GHC variant. But at much lower cost. In a world where complete systems can be containerized using language agnostic technology, and shipped and deployed within minutes across a full scale cluster, do we really need our language runtimes to distribute *code*? Are we willing to accept the compromises involved? Perhaps that is a problem best solved once, for all programs in any language, using the likes of Docker or Rocket. And once our entire cluster is running instances of the same program by dint of distributing containers, all we need is a means to control which computations happen when, and where, by sharing *references* to functions. This works because, if all nodes are running the same program, then they all have access to the same functions.

Turning on `-XStaticPointers` adds a new keyword `static` and a new syntactic form to the language for *safely* creating such references: if expression `e` has type `a`, then `static e` has type `StaticPtr a`.

For example, here’s a program that obtains a static pointer to `fact`, and prints the info record associated with it:

```
{-# LANGUAGE StaticPointers #-}
module Main where

import GHC.StaticPtr

fact :: Int -> Int
fact 0 = 1
fact n = n * fact (n - 1)

main :: IO ()
main = do
  let sptr :: StaticPtr (Int -> Int)
      sptr = static fact
  print (staticPtrInfo sptr)
  print (deRefStaticPtr sptr 10)
```

The body of a static form can be any top-level identifier, or indeed an arbitrary expression, *so long as the expression is closed*, meaning that all variable names used in it are either bound within the expression itself or refer to top-level identifiers. That is, so long as the value of the expression could in principle be computed statically.

Given a static pointer, we can get back the value it points to using

`deRefStaticPtr :: StaticPtr a -> a`

Notice that we could just as well have used a plain string to refer to `fact`

in the above program and constructed a string table, so that if the program were distributed, each process could communicate strings in lieu of functions to indicate tasks to run remotely, using the string table to map strings back to functions. Something like this:

```
module Main where

import GHC.StaticPtr
import Data.Dynamic

fact :: Int -> Int
fact 0 = 1
fact n = n * fact (n - 1)

computation1 :: IO ()
computation1 = print $ fact 10

stringTable :: [(String, Dynamic)]
stringTable =
  [ ("fact", toDyn fact)
  , ("computation1", toDyn computation1)
  ]

main :: IO ()
main = send "some-node" "computation1"
```

where one could imagine node “some-node” running something like

```
serverLoop :: IO ()
serverLoop = forever $ do
  name <- expect
  case lookup name stringTable >>= fromDynamic of
    Just computation -> computation
    Nothing -> error "unknown or ill-typed computation"
```

assuming we have a `send`

function for sending serializable values as messages to nodes and an `expect`

function to receive them.

Values in the string table are wrapped into `Dynamic`

to make them all have a uniform type (that way a simple homogeneous list works just fine as a data structure). But there are three problems with this approach:

1. Constructing the string table is error-prone: we might accidentally map the string `"fact"` to an entirely different function.

2. No type safety: `fromDynamic` performs a type cast, which can fail if the type of the value in the string table doesn’t match the expected type, making the program partial.

3. It is anti-modular: each module needs its own string table, which we then need to combine into a global string table for the whole program. If we add any new module anywhere in the program, we also need to modify the construction of the string table - or accidentally forget to do so, which would constitute a bug.

(Some of these problems can be mitigated with some clever Template Haskell hackery, but that solution is still fundamentally anti-modular, as well as contrived to use.)
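To make the failure modes concrete, the `Dynamic` casts involved can be demonstrated in isolation. This is a small self-contained sketch; the table entries (`"inc"`, `"greet"`) are made up for illustration:

```
import Data.Dynamic

-- A toy string table mapping names to dynamically wrapped values.
table :: [(String, Dynamic)]
table =
  [ ("inc",   toDyn ((+ 1) :: Int -> Int))
  , ("greet", toDyn "hello")
  ]

main :: IO ()
main = do
  -- A successful cast: "inc" really holds an Int -> Int.
  let mf = lookup "inc" table >>= fromDynamic :: Maybe (Int -> Int)
  print (fmap ($ 41) mf)                                  -- Just 42
  -- A failed cast: "greet" holds a String, not an Int.
  print (lookup "greet" table >>= fromDynamic :: Maybe Int)  -- Nothing
```

The second `print` is exactly the partiality problem: nothing stops us from asking for the wrong type, and we only find out at runtime.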

It is for these three reasons that the `StaticPointers`

language extension comes in handy. There is no need for manually constructing tables. Constructing and dereferencing static pointers is type safe because the type of a static pointer is related to the type of the value that it points to. Separate modules are not a problem, because the compiler takes care of collecting the set of all static pointers in a program into its own internal table that it embeds in the binary.

This all sounds rather nice, but the static pointer type is kept abstract, as it should to ensure safety, so how can we serialize a static pointer to send over the wire, and deserialize it on the remote end to reconstruct the static pointer? The `GHC.StaticPtr`

module exports a few primitives to deal with just that. The idea is that each static pointer in a program is assigned a unique key (a `StaticKey`

). We can obtain the key for a static pointer using

```
type StaticKey = Fingerprint

-- Defined in GHC.Fingerprint.
data Fingerprint = Fingerprint {-# UNPACK #-} !Word64 {-# UNPACK #-} !Word64
  deriving (Generic, Typeable)

staticKey :: StaticPtr a -> StaticKey
```

The type of keys is concrete (a key is a 128-bit hash), so keys can easily be encoded and decoded on the wire, using the `Binary`

type class provided by the binary package:

```
-- Automatically derived instance, using `DeriveGeneric`.
instance Binary Fingerprint
```

Provided a key, we can map it to a `StaticPtr`

using

`unsafeLookupStaticPtr :: StaticKey -> Maybe (StaticPtr a)`

Hold on a minute! This type is telling us that using `unsafeLookupStaticPtr`

we can map the key to a static pointer of any type, which we can then `deRefStaticPtr`

to a value of arbitrary type… Have we just lost type safety? In GHC 7.10, yes we have! In GHC 7.12, we will have a much safer lookup function:

```
lookupStaticPtr :: StaticKey
                -> (forall a. Typeable a => StaticPtr a -> b)
                -> Maybe b
```

(observe that this is a rank-2 type), or equivalently

```
data DynStaticPtr = forall a. Typeable a => DynStaticPtr (StaticPtr a)
lookupStaticPtr :: StaticKey -> Maybe DynStaticPtr
```

This type says that, provided a key and a continuation, `lookupStaticPtr`

will resolve the key to a static pointer and, if successful, feed it to the continuation. The type of the static pointer is not known a priori, but we can query it inside the continuation using the supplied `Typeable`

constraint. The reason only the unsafe variant will ship in GHC 7.10 is that the safe variant requires a change to the `Data.Typeable`

API to be truly safe (see here for details), and because we do not yet store `Typeable`

constraints in the internal compiler-generated table mentioned above. In the meantime, this shouldn’t be a problem in practice: higher level libraries like Cloud Haskell and HdPH hide all uses of `lookupStaticPtr`

behind an API that does guarantee type safety - it’s just that we have to *trust* that their implementations always call `lookupStaticPtr`

at the right type, when ideally we wouldn’t need to entrust type safety to any library code at all, just the compiler.
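The safety story hinges on that `Typeable` constraint. While the safe `lookupStaticPtr` isn’t available yet, the pattern it relies on - an existential wrapper whose payload can only be recovered at the right type - can be demonstrated with ordinary values. The `DynVal` wrapper below is a hypothetical stand-in for `DynStaticPtr`, using plain values in place of static pointers:

```
{-# LANGUAGE ExistentialQuantification #-}
import Data.Typeable

-- A stand-in for DynStaticPtr: the payload's type is hidden,
-- but a Typeable constraint is kept around.
data DynVal = forall a. Typeable a => DynVal a

-- Recovering the payload requires naming its type; the cast
-- returns Nothing on a mismatch instead of crashing.
resolve :: Typeable b => DynVal -> Maybe b
resolve (DynVal x) = cast x

main :: IO ()
main = do
  print (resolve (DynVal (42 :: Int)) :: Maybe Int)    -- Just 42
  print (resolve (DynVal (42 :: Int)) :: Maybe String) -- Nothing
```

The `cast` from `Data.Typeable` is what turns a would-be unchecked coercion into a checked, total operation.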

Static pointers turn out to be surprisingly powerful. As it stands, the language extension nominally only allows sharing references to static values across the wire. But it’s easy to build a lot more power on top. In particular, it would be nice if programs could transmit not just static values over the wire, but indeed (nearly) arbitrary closures. Consider the following `main`

function:

```
main = do
  putStrLn "Hi! Give me a number..."
  x <- read <$> getLine
  send "some-node" $ closure (static fact) `closureAp` closurePure x
```

The idea (first found in the “Towards Haskell in the Cloud” paper) is to introduce a datatype of closures (which we’ll define concretely later), along with three combinators to create `Closure`

s from `StaticPtr`

s and from other `Closure`

s:

```
data Closure a
closure :: StaticPtr a -> Closure a
closurePure :: Serializable a => a -> Closure a
closureAp :: Closure (a -> b) -> Closure a -> Closure b
```

Notice that this datatype is nearly, but not quite, an applicative functor. We can only lift “serializable” values to a closure, not just any value. Given two existing `Closure`

s, we can create a new `Closure`

by “applying” one to another. Morally, we are making it possible to pass around not just static pointers to top-level values or purely static expressions, but things that represent (partially) applied static pointers. `Closure`

s are not always static: their value may depend on values known only at runtime, as in the example above.

Come to think of it, a `Closure`

very much acts like the closures that one would find deep in the bowels of GHC for representing partially applied functions during program execution. A closure is morally a code pointer paired with an *environment*, i.e. a list of actual arguments. Closures accumulate arguments as they are applied. In our case, the `StaticPtr`

represents a code pointer, and the environment grows every time we `closureAp`

a `Closure`

to something else.

We’ll turn to how `Closure`

is defined in a minute, but first let’s talk about what it really means to be “serializable”:

```
data Dict c = c => Dict

class (Binary a, Typeable a) => Serializable a where
  serializableDict :: StaticPtr (Dict (Serializable a))
```

This class definition says that if a value can be encoded/decoded to a `ByteString`

(see the binary package), and it can be queried for a representation of its type at runtime, then the value is *serializable*. However, serializable values also need to make it possible to obtain concrete “evidence” that the value really is serializable, in the form of a *static dictionary*. The idea is a neat trick. For all serializable values, we want to be able to obtain a static pointer to the evidence (or “dictionary”) associated with a class constraint. Because if we do, then we can “send” class dictionaries across the wire (or at least references to them)! But we can only take the static pointer of a value, so how does one make a dictionary a first-class value? The trick is to define a proxy datatype of dictionaries, using the `ConstraintKinds`

extension (the `Dict`

datatype). Any `Dict`

value is a value like any other, but it embeds a constraint in it, which at runtime corresponds to a dictionary.

For example, any concrete value of `Dict (Eq Int)`

carries a dictionary that can be seen as providing evidence that values of `Int`

type can indeed be compared for equality. For any type `a`

, `Dict (Serializable a)`

carries evidence that values of type `a`

are serializable. Any instance of `Serializable`

makes it possible to query for this evidence - for example:

```
instance Serializable Int where
serializableDict = static Dict
```
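The `Dict` trick itself doesn’t need static pointers to be useful. Here is a minimal sketch (without the `Serializable` machinery) showing a constraint reified as a value and later brought back into scope by pattern matching; `showWith` is a made-up name for illustration:

```
{-# LANGUAGE ConstraintKinds, GADTs #-}

-- A first-class dictionary: matching on Dict brings the
-- constraint c back into scope.
data Dict c where
  Dict :: c => Dict c

-- Use a captured Show dictionary: no Show constraint appears
-- on this function's context; the evidence travels inside the
-- Dict value itself.
showWith :: Dict (Show a) -> a -> String
showWith Dict x = show x

main :: IO ()
main = putStrLn (showWith Dict (42 :: Int))  -- prints 42
```

This is exactly the move that lets `serializableDict` package up an instance as a plain value that `static` can point at.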

Now we can turn to the definition of `Closure`

and its combinators:

```
data Closure a where
  StaticPtr :: StaticPtr b -> Closure b
  Encoded :: ByteString -> Closure ByteString
  Ap :: Closure (b -> c) -> Closure b -> Closure c
  deriving (Typeable)

closure :: StaticPtr a -> Closure a
closure = StaticPtr

closureAp :: Closure (a -> b) -> Closure a -> Closure b
closureAp = Ap

closurePure :: Serializable a => a -> Closure a
closurePure x =
    StaticPtr (static decodeD) `closureAp`
    closure serializableDict `closureAp`
    Encoded (encode x)
  where
    decodeD :: Dict (Serializable a) -> ByteString -> a
    decodeD Dict = decode
```

(There are many ways to define `Closure`

, but this definition is perhaps most intuitive.)

As we can see from the definition, a `Closure`

is not only a (quasi) applicative functor, but in fact a (quasi) *free* applicative functor. Using the `Ap`

constructor, we can chain closures into long sequences (i.e. build environments). Using `StaticPtr`

and `Encoded`

, we can further make any serializable value a `Closure`

of the following shape:

`Ap (Ap (StaticPtr sptr_decodeD) csdict) bs`

where `sptr_decodeD`

is the static pointer to `decodeD`

, `csdict`

is a static serialization dictionary, and `bs`

is a value encoded as a byte string.
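To see the free-applicative reading in action, here is a deliberately simplified variant: `Pure` stands in for the `StaticPtr`/`Encoded` machinery, since real serialization isn’t needed to show how environments accumulate and how a closure is ultimately evaluated. The `unclosure` interpreter is an illustrative name, not part of any library:

```
{-# LANGUAGE GADTs #-}

-- A simplified Closure: Pure replaces StaticPtr and Encoded,
-- so this version is evaluable but not serializable.
data Closure a where
  Pure :: a -> Closure a
  Ap   :: Closure (b -> c) -> Closure b -> Closure c

-- Interpreting a closure: apply the function part to each
-- accumulated argument in turn.
unclosure :: Closure a -> a
unclosure (Pure x)   = x
unclosure (Ap cf cx) = unclosure cf (unclosure cx)

fact :: Int -> Int
fact 0 = 1
fact n = n * fact (n - 1)

main :: IO ()
main = print (unclosure (Ap (Pure fact) (Pure 5)))  -- 120
```

A receiving node in the real design does the analogous thing: decode the wire format back into this tree, then fold it down to a value.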

Notice that any concrete `Closure`

type is itself serializable:

```
instance Binary (Closure a) where
  put (Ap (Ap (StaticPtr sptr) dict) (Encoded bs)) =
    putWord8 0 >> put sptr >> put dict >> put bs
  put (StaticPtr sptr) = putWord8 1 >> put sptr
  put (Ap cf cx) = putWord8 2 >> put cf >> put cx

  get = do
    hdr <- getWord8
    case hdr of
      0 -> do sptr <- get
              dict <- get
              bs <- get
              return $ Ap (Ap (StaticPtr sptr) dict) (Encoded bs)
      1 -> StaticPtr <$> get
      2 -> Ap <$> get <*> get
      _ -> fail "unknown Closure tag"

instance Serializable (Closure Int) where
  serializableDict = static Dict
```

(Note that for most types, manually defined `Binary`

instances as above are unnecessary - any datatype with a `Generic`

instance can have its `Binary`

instance derived automatically).

Therefore, surprisingly, adding just static pointers as a primitive datatype in the compiler is all that’s necessary to be able to conveniently send even nearly arbitrary closures down the wire. It turns out that we don’t need to add full-blown support for serializing arbitrary closures as an extra primitive to the compiler. That can all be done in user space, and with better control by the user over exactly how. The only limitation is that in effect the environment part of the closure needs to be serializable, but that’s a feature: it means that we can statically rule out accidentally serializing closures that capture gnarly things that we *don’t* want to serialize down the wire: think file handles, locks, sockets and other system resources, none of which the remote end would be able to make any sense of.

Static pointers are a lightweight extension to GHC, with direct applications to distributed programming, or in general, any form of pointer sharing across processes with distinct address spaces. As first observed in a seminal paper about distributed programming in Haskell, this extension adds just enough power to the GHC compiler and runtime to conveniently and safely send arbitrary serializable closures across the wire.

Distributed programming in Haskell is a reality today: there are several frameworks, most prominently Cloud Haskell, with several industrial applications. But the `StaticPointers`

extension is brand new, and in fact no compiler release including it has shipped yet! Framework and application support for it is still lagging behind, but you can help. In particular, adding support to distributed-static and distributed-process would be a great step forward in the usability Cloud Haskell. Other next steps include: adding support for interoperating multiple versions of a program in a cluster, fully implementing `lookupStaticPtr`

(see above), or improving the robustness and speed of message serialization (see for example these great results for an idea of what’s possible here). Those are just some ideas. If you’re interested in participating, the [GHC wiki][ghc-wiki-dH] contains quite a few pointers, and the cloud-haskell-developers@ and distributed-haskell@ mailing lists are good places to coordinate efforts. See you there!

Template Haskell is an extension of Haskell 98 that allows for compile-time metaprogramming – allowing one to directly convert back and forth between concrete Haskell syntax and the underlying abstract syntax tree (AST) of GHC. Anyone familiar with Lisp’s macro system will immediately recognize the similarities – though in Haskell, specific datatypes are used to represent an AST that is used to draw and splice back in code fragments. The ability to generate code at compile time allows one to implement macro-like expansions, polytypic programs, user directed optimization (such as inlining), and the generation of supporting data structures and functions from existing data structures and functions.^{1}

In brief, Oxford brackets `[|`

and `|]`

are used to get the abstract syntax tree for the enclosed expression and ‘splice’ brackets `$(`

and `)`

are used to convert from the abstract syntax tree back into Haskell. The Quotation Monad is used to give unique names to the parsed tokens from the supplied Haskell code, and reification can be used to look up the name, type, constructor, and state of expression, and as well as the AST of Haskell types.^{2}

Template Haskell was introduced by Tim Sheard and Simon Peyton Jones in their paper “Template Meta-Programming for Haskell” (the original paper can be found here) in 2002, though it’s changed quite a bit since (see here). It was inspired by C++ templates, though TH is functionally more similar to a macro system. Quasiquotation is often used in conjunction with Template Haskell, but it makes up a pretty big topic of its own, so I will only briefly describe it here; only a full article of its own could do quasiquotation justice.

In the wild, Template Haskell is used extensively by Yesod for routing and HTML template binding.^{3} Outside of Haskell, compile-time metaprogramming is used for the creation of Domain Specific Languages (DSLs), typically in the domains of testing and modeling, and generative metaprogramming (compile-time or not) for object-relational mapping, typically for mapping database schemas to non-compiled code. And within Lisp, which is famous for its macro system, metaprogramming is used to create syntax extensions (syntactic sugar), such as the syntax used in list comprehensions.^{4}

*All code in this guide was executed with GHCi version 7.6.3 and Template Haskell version 2.9.0.0*

To get started, start up GHCi with the Template Haskell extension by including `-XTemplateHaskell`

, then load the AST datatypes:

```
$ ghci -XTemplateHaskell
Prelude> :m + Language.Haskell.TH
Prelude Language.Haskell.TH>
```

To see the AST syntax of some Haskell code insert valid Haskell syntax into oxford brackets and run it through `runQ`

which runs a computation in the Q (quotation) monad:

```
> runQ [| 1 + 2 |]
InfixE (Just (LitE (IntegerL 1))) (VarE GHC.Num.+) (Just (LitE (IntegerL 2)))
```

If you parse through the parentheses you’ll see the return expression forms a tree – an abstract syntax tree!

Check out the `Lift` class source, which is what’s being invoked by the oxford brackets. The `Language.Haskell.TH.Syntax` module contains the definitions of all the types used in the AST. Using these types, it’s possible to construct any fragment of the Haskell language. Have a look at the `Lit` data type as an example. `Lit` stands for literal,

```
data Lit = CharL Char
         | StringL String
         | IntegerL Integer     -- ^ Used for overloaded and non-overloaded
                                --   literals. We don't have a good way to
                                --   represent non-overloaded literals at
                                --   the moment. Maybe that doesn't matter?
         | RationalL Rational   -- Ditto
         | IntPrimL Integer
         | WordPrimL Integer
         | FloatPrimL Rational
         | DoublePrimL Rational
         | StringPrimL String   -- ^ A primitive C-style string, type Addr#
         deriving( Show, Eq, Data, Typeable )
```

tokens represented by it make up the literals defined throughout your syntax in the AST, as you can see in our example AST above. Within `Language.Haskell.TH.Syntax`, 35 generic data types are declared with the Data.Data module. If you’re curious about what the AST syntax is referring to, study the source.

The Q monad handles the expression’s typing via context, and also gives it a unique name by appending an integer to the end of the expression name to handle scoping distinctions. Quotations are lexically scoped and the Q monad handles this using its naming scheme. (See the user’s guide wiki for a more in-depth explanation of TH’s lexical scoping.)

Let’s bind the returned AST expression from the example above to Haskell and evaluate it, using the splice brackets:

```
> $( return (InfixE (Just (LitE (IntegerL 1))) (VarE (mkName "+")) (Just (LitE (IntegerL 2)))))
3
```

Ta-da, you converted concrete Haskell to an AST and then evaluated it. Though, as you can see, identifiers have to be constructed with `mkName`

in the AST to evaluate properly.
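The same AST can also be built and pretty-printed as a plain value, without ever splicing it. A small sketch using `pprint` from `Language.Haskell.TH` (here the TH library is used as an ordinary library, so no extension is needed):

```
import Language.Haskell.TH

-- The AST for `1 + 2`, built by hand with mkName.
expr :: Exp
expr = InfixE (Just (LitE (IntegerL 1)))
              (VarE (mkName "+"))
              (Just (LitE (IntegerL 2)))

main :: IO ()
main = putStrLn (pprint expr)
```

Pretty-printing is the inverse direction to quotation: quotation takes concrete syntax to an AST, `pprint` renders an AST back to concrete syntax.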

It’s possible to avoid having to modify the AST to splice it back, but you’ll have to bind it to a variable, as my next example illustrates:

In this example, the Fibonacci sequence is generated using zipWith:^{5}

```
let fibs :: [Integer]
    fibs = 0 : 1 : zipWith (+) fibs (tail fibs)
let fibQ :: Int -> Q Exp
    fibQ n = [| fibs !! n |]
```

Now run `$( ... )`

to execute the expansion:

```
> $(fibQ 22)
17711
```

TH splices in the expression that `fibQ`

represents along with the variable (that is `fibs !! n`

).

Note, expressions and splices can be nested:

```
> $(runQ [| fibs !! $( [| 8 |]) |])
21
```

I’ll explain TH’s syntax next – but after, I’ll show some more impressive examples that show the possibilities of splicing and ASTs.

Template Haskell quotation expressions come in four different parser types, plus an optional fifth that allows one to define their own types of quotations, called quasi-quotations.

`[| ... |]`

, or`[e| ... |]`

, generates expression AST syntax; it has the type`Q Exp`

.`> runQ [| 1 + 2 |] InfixE (Just (LitE (IntegerL 1))) (VarE GHC.Num.+) (Just (LitE (IntegerL 2)))`

`[d| ... |]`

, generates a list of top-level declaration AST syntax; it has the type `Q [Dec]`

.`> runQ [d|x = 5|] [ValD (VarP x_4) (NormalB (LitE (IntegerL 5))) []]`

`[t| ... |]`

, generates a type AST syntax; it has the type`Q Type`

.`> runQ [t|Int|] ConT GHC.Types.Int`

`[p| ... |]`

, generates a pattern AST syntax; it has the type`Q Pat`

.`> runQ [p|(x,y)|] TupP [VarP x_5,VarP y_6]`

Custom “quasi-quotations”, have the form

`["quoter"| ... |]`

. The “quoter” can be anything except e, d, t, and p, and the token cannot contain spaces. All GHC is doing is determining which parser to use based on the context within the oxford brackets.^{6} Quasi-quotation is a big second part to meta-programming; it’s essentially what makes it possible to write DSLs. I’m not going to cover it here since this guide is pretty long as it is, but if you’re interested, there are many guides to using quasi-quotations - find them here, here, and here (this last one assumes you’re familiar with Parsec parsing).

An important restriction on Template Haskell to remember is *when inside a splice you can only call functions defined in imported modules, not functions defined elsewhere in the same module.* Splices and the definitions they use have to live in separate modules, otherwise you’ll see this error:

```
GHC stage restriction:
`...' is used in a top-level splice or annotation,
and must be imported, not defined locally
```

Though, if you’re just binding variables in GHCi with `let`

, you don’t have to worry about this – only when you’re compiling Haskell.

You’re probably wondering if you can evaluate a Q expression the other way, to see what the splice would produce. Of course you can – run `runQ expr >>= putStrLn . pprint`

to see what an expression with a `Q Exp`

type will evaluate to:

```
> let myExp :: Q Exp; myExp = runQ [| 1 + 2 |]
> runQ(myExp) >>= putStrLn.pprint
1 GHC.Num.+ 2
```

If you want to see the expansion of splices, use the flag `-ddump-splices`

when starting GHCi : `ghci -XTemplateHaskell -ddump-splices`

.

Now let’s test it on another fun example with primes:^{7}

```
let isPrime :: (Integral a) => a -> Bool
    isPrime k | k <= 1 = False
              | otherwise = not $ elem 0 (map (mod k) [2..k-1])
let nextPrime :: (Integral a) => a -> a
    nextPrime n | isPrime n = n
                | otherwise = nextPrime (n+1)
-- returns a list of all primes between n and m, using the nextPrime function
let doPrime :: (Integral a) => a -> a -> [a]
    doPrime n m
      | curr > m = []
      | otherwise = curr : doPrime (curr+1) m
      where curr = nextPrime n
-- and our Q expression
let primeQ :: Int -> Int -> Q Exp
    primeQ n m = [| doPrime n m |]
```

```
> $(primeQ 0 67)
<interactive>:18:3-13: Splicing expression
primeQ 0 67 ======> doPrime 0 67
[2,3,5,7,11,13,17,19,23,29,31,37,41,43,47,53,59,61,67]
```

Try it on a nested expression, to really see how useful the dump-splices flag is:

```
> $(primeQ ($(primeQ 0 23) !! 3) 167)
<interactive>:20:13-23: Splicing expression
primeQ 0 23 ======> doPrime 0 23
<interactive>:20:3-34: Splicing expression
primeQ ($(primeQ 0 23) !! 3) 167 ======> doPrime 7 167
[7,11,13,17,19,23,29,31,37,41,43,47,53,59,61,67,71,73,79,83,89,97,101,103,107,109,113,127,131,137,139,149,151,157,163,167]
```

`-ddump-splices`

and `>>= putStrLn.pprint`

should come in handy when debugging.

Now for what is probably the hardest aspect of Template Haskell to understand – reification.

Reification allows one to query the state of Haskell `Name`

s and get information about them. Specifically, `reify` returns a data type called `Info`

– which carries data in a specific format about any `Name`

in Haskell, where the format and information depend on whether it’s being interpreted in a type context or an expression context.

TH introduces two new identifiers specifically for reification: Prefix `Name`

s to be evaluated in an expression context with a single quote, and prefix `Name`

s to be evaluated in a type context with a double quote. Though, `Name`

s must be interpretable within those contexts to be reified. (If you intend to use reify on expressions, don’t use quotes in the names of those expressions – otherwise it won’t parse correctly.)

To use reify on a type, use double quotes:

```
> $(stringE . show =<< reify ''Bool)
"TyConI (DataD [] GHC.Types.Bool [] [NormalC GHC.Types.False [],NormalC GHC.Types.True []] [])"
```

Reifying a type returns the AST as represented by TH; here’s a diagram of the AST of the boolean type from above:

The AST of a simple primitive type like Bool produces a small tree, but when used on types deeper down the module chain, relatively large ASTs will be generated. Try reify on `''Lit`

or `''Exp`

to see what I mean, though reify can work on any Haskell type.

To reify an expression, use single quotes, here’s an example with our `primeQ`

expression from above:

```
> $(stringE . show =<< reify 'primeQ)
"VarI primeQ_1627395913 (AppT (AppT ArrowT (ConT GHC.Types.Int)) (AppT (AppT ArrowT (ConT GHC.Types.Int)) (AppT (ConT Language.Haskell.TH.Syntax.Q) (ConT Language.Haskell.TH.Syntax.Exp)))) Nothing (Fixity 9 InfixL)"
```

As you can see `info`

returns different information depending on whether it’s a type or an expression. A type returns its structure in TH’s AST semantics. An expression returns information regarding its name, its type, its constructor, and its fixity.

Use reification of expressions to extract the types associated with the construction of an expression, then reify those types to get their structure as ASTs. This allows one to generate the AST of any data type in Haskell – no matter how deep into Haskell it gets.
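The GHCi one-liners above also work from a compiled module. A minimal sketch that embeds the rendered `Info` for `Bool` as a string at compile time (the splice only uses imported functions, so the stage restriction is satisfied):

```
{-# LANGUAGE TemplateHaskell #-}
import Language.Haskell.TH

-- The Info record for Bool, rendered to a String at compile time.
boolInfo :: String
boolInfo = $(stringE . show =<< reify ''Bool)

main :: IO ()
main = putStrLn boolInfo
```

Because the reification happens during compilation, the resulting binary contains only the finished string, not any TH machinery.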

Reification is very useful from the standpoint of what one can do with an AST – drawing out and splicing back in code fragments within a programming language. Below, in Examples, the second example shows how one can use reify to extract the types from a record’s constructor to write a generic Show function that can generate a `Show`

for any record.

A good example to show what one can do with Template Haskell is a type-safe Haskell version of C’s printf function (from stdio.h):^{8}

*Main.hs*

```
{-# LANGUAGE TemplateHaskell #-}

-- Import our template "printf"
import PrintF (printf)

-- The splice operator $ takes the Haskell source code
-- generated at compile time by "printf" and splices it into
-- the argument of "putStrLn".
main = do
  putStrLn $ $(printf "Hello %s %%x%% %d %%x%%") "World" 12
  putStrLn $ $(printf "Hello %s %s %s %d") "Russian" "with" "Love" 5000
```

*PrintF.hs*

```
{-# LANGUAGE TemplateHaskell #-}
module PrintF where

-- NB: printf needs to be in a separate module to the one where
-- you intend to use it.

-- Import some Template Haskell syntax
import Language.Haskell.TH

-- Possible string tokens: %d %s and literal strings
data Format = D | S | L String
  deriving Show

-- a poor man's tokenizer
tokenize :: String -> [Format]
tokenize [] = []
tokenize ('%':c:rest) | c == 'd' = D : tokenize rest
                      | c == 's' = S : tokenize rest
tokenize (s:str) = L (s:p) : tokenize rest -- so we don't get stuck on weird '%'
  where (p,rest) = span (/= '%') str

-- generate argument list for the function
args :: [Format] -> [PatQ]
args fmt = concatMap (\(f,n) -> case f of
                                  L _ -> []
                                  _   -> [varP n]) $ zip fmt names
  where names = [ mkName $ 'x' : show i | i <- [0..] ]

-- generate body of the function
body :: [Format] -> ExpQ
body fmt = foldr (\e e' -> infixApp e [| (++) |] e') (last exps) (init exps)
  where exps = [ case f of
                   L s -> stringE s
                   D   -> appE [| show |] (varE n)
                   S   -> varE n
               | (f,n) <- zip fmt names ]
        names = [ mkName $ 'x' : show i | i <- [0..] ]

-- glue the argument list and body together into a lambda
-- this is what gets spliced into the haskell code at the call
-- site of "printf"
printf :: String -> Q Exp
printf format = lamE (args fmt) (body fmt)
  where fmt = tokenize format
```

Notice that we had to separate the splicing and the expression definitions in separate modules, as mentioned in the syntax section above.

Compile the example with:

`$ ghc --make Main.hs -o main`

Running `main` will print out:

```
$ ./main
Hello World %%x%% 12 %%x%%
Hello Russian with Love 5000
```

Now for an example that shows what one can do with reify – a Generic Show that can produce a `Show`

for any record type:^{9}

*Main.hs*

```
{- Main.hs -}
module Main where
import Derive
data T = A Int String | B Integer | C
$(deriveShow ''T)
main = print [A 1 "s", B 2, C] -- prints exactly <<[A 1 "s",B 2,C]>>
```

*Derive.hs*

```
{- Derive.hs -}
module Derive where

import Language.Haskell.TH
import Control.Monad

data T1 = T1
data T2 a = T2 a

deriveShow t = do
  -- Get list of constructors for type t
  TyConI (DataD _ _ _ constructors _) <- reify t

  -- Make `show` clause for one constructor:
  --   show (A x1 x2) = "A "++show x1++" "++show x2
  let showClause (NormalC name fields) = do
        -- Name of constructor, i.e. "A". Will become string literal in generated code
        let constructorName = nameBase name
        -- Get variables for left and right side of function definition
        (pats, vars) <- genPE (length fields)
        -- Recursively build (" "++show x1++...++"") expression from [x1...] variables list
        let f []       = [| "" |]
            f (v:vars) = [| " " ++ show $v ++ $(f vars) |]
        -- Generate function clause for one constructor
        clause [conP name pats]                                 -- (A x1 x2)
               (normalB [| constructorName ++ $(f vars) |]) []  -- "A "++show x1++" "++show x2

  -- Make body for function `show`:
  --   show (A x1 x2) = "A "++show x1++" "++show x2
  --   show (B x1)    = "B "++show x1
  --   show C         = "C"
  showbody <- mapM showClause constructors

  -- Generate template instance declaration and then replace
  -- type name (T1) and function body (\x -> "text") with our data
  d <- [d| instance Show T1 where
             show x = "text"
       |]
  let [InstanceD [] (AppT showt (ConT _T1)) [FunD showf _text]] = d
  return [InstanceD [] (AppT showt (ConT t)) [FunD showf showbody]]

-- Generate n unique variables and return them in form of patterns and expressions
genPE n = do
  ids <- replicateM n (newName "x")
  return (map varP ids, map varE ids)
```

Compile the example with:

`$ ghc --make Main.hs -o main`

Running `main` will print out:

```
$ ./main
[A 1 "s",B 2,C]
```

This guide was for the most part written by collecting information from other guides on Template Haskell, quasi-quoting, and Lisp macros – from online, wiki, and academic sources. Please check my bibliography to see what came from where, so credit can be properly given where it’s due.

Meta-programming is a powerful programming technique that allows for the generation of user-defined syntax extensions and DSLs. This is useful in that it lets a programmer build custom code-generating syntax extensions without otherwise having to change the core language. Template Haskell in particular is especially powerful compared to similar programming language constructs (i.e. the C preprocessor, Lisp’s macro system) in that it makes use of ASTs, reification (through a specific function), and – much in the spirit of Haskell – type safety. The examples presented above only scratch the surface of what’s possible with reification – imagine the ability to construct entire systems, use reify to build ASTs, and then swap in and out entire modules, entirely with the use of Template Haskell.

Some questions that have arisen for me from writing this article are: What are the limits of TH’s data type system? Is it truly possible for TH to represent all of Haskell with the finite set of data types written into the module? Is it possible for future language features to defy this set? What are the limits of meta-programming – TH, macros, and similar meta-programming constructs make it possible to write code that writes code – but are there limits to this? Is it possible to write a macro that can generate a macro, and so on indefinitely?

Don’t forget to check out the API. Everything you need to know, you can for the most part find in the source. Also, TH does in fact have bugs - check the issue tracking page if you’re dealing with a known issue: see here.

(The MIT License)

Copyright (c) 2014 Sean Westfall

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the ‘Software’), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED ‘AS IS’, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.


First, the language extension and imports we need to work with the `Arrow` type class:
```
> {-# LANGUAGE Arrows #-}
>
> import Control.Monad (guard)
> import Control.Monad.Identity (Identity, runIdentity)
> import Control.Arrow (returnA, Kleisli(Kleisli), runKleisli)
```

In Haskell we use the `Monad` typeclass to provide an interface for encoding certain forms of computation. The `Arrow` typeclass provides an interface that is in many ways similar. Everything that can be made an instance of `Monad` (in a law-abiding way) can also be adapted to be an `Arrow` (in a law-abiding way). This means that the `Arrow` interface is less powerful than `Monad`, but that more things can be made instances of `Arrow`.
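The adaptation is witnessed by the `Kleisli` wrapper from `Control.Arrow` (imported above): any monadic function `a -> m b` becomes an arrow from `a` to `b`. A small sketch with the `Maybe` monad (the `safeHead` and `half` helpers are illustrative, not part of this post's later code):

```haskell
import Control.Arrow (Kleisli(Kleisli), runKleisli, (>>>))

safeHead :: [a] -> Maybe a
safeHead (x:_) = Just x
safeHead []    = Nothing

half :: Int -> Maybe Int
half n = if even n then Just (n `div` 2) else Nothing

-- (>>>) composes arrows left to right, like a flipped (.)
headThenHalve :: Kleisli Maybe [Int] Int
headThenHalve = Kleisli safeHead >>> Kleisli half

-- runKleisli headThenHalve [4,5,6] == Just 2
-- runKleisli headThenHalve [3]     == Nothing
```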

Working directly with `Monad` combinators like `>>=` and `>>` can sometimes be awkward, so Haskell provides "`do` notation" as an alternative way of writing monad expressions in a way that can often be clearer. Likewise there is an "arrow notation" for writing arrow expressions. It is not enabled in GHC by default but can be turned on with the `Arrows` language extension.

The subject of this blog post is how to understand and use arrow notation. It will also serve as an introduction to the `Arrow` typeclass whilst keeping mention of the arrow combinators to a minimum. We will develop our intuition for arrow computation rather than learn any technical and formal definitions!

Let's kick off by refreshing our memory of `do` notation. A very basic way to think of `do` notation is that it is similar to a sequence of let bindings. For example, a basic Haskell expression to perform a numerical calculation might be

```
> f :: Int -> (Int, Int)
> f = \x ->
>   let y = 2 * x
>       z1 = y + 3
>       z2 = y - 5
>   in (z1, z2)
>
> -- ghci> f 10
> -- (23, 15)
```

`do` notation supports expressing the exact same computation inside the `Identity` monad, that is, a monad that has no "side effects".

```
> fM :: Int -> Identity (Int, Int)
> fM = \x -> do
>   y <- return (2 * x)
>   z1 <- return (y + 3)
>   z2 <- return (y - 5)
>   return (z1, z2)
>
> -- ghci> runIdentity (fM 10)
> -- (23,15)
```

The `let` bindings in `f` become `<-` under `do` notation. (For technical reasons we have to wrap every intermediate expression in `return` to lift them into the `Identity` monad.) Arrow notation supports a similar translation:

```
> fA :: Int -> (Int, Int)
> fA = proc x -> do
>   y <- (2 *) -< x
>   z1 <- (+ 3) -< y
>   z2 <- (subtract 5) -< y
>   returnA -< (z1, z2)
>
> -- ghci> fA 10
> -- (23,15)
```

In arrow notation `proc` plays the part of "lambda", i.e. backslash, `\`, `<-` plays the part of `=`, and `-<` feeds an argument into a "function". We use `returnA` instead of `return`.
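For the curious, here is roughly what `fA` desugars to in terms of the arrow combinators (a sketch for intuition, not the exact code GHC generates):

```haskell
import Control.Arrow (arr, (>>>), (&&&))

-- arr lifts a pure function into an arrow; (&&&) runs two arrows
-- on the same input and pairs up their results.
fA' :: Int -> (Int, Int)
fA' = arr (2 *) >>> (arr (+ 3) &&& arr (subtract 5))

-- fA' 10 == (23, 15)
```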

The benefit of `do` notation comes when we want to encode a computation that can't be written using pure `let` bindings alone. Here's an example that uses the list monad to generate all coordinates within a given radius of the origin:

```
> range :: Int -> [Int]
> range r = [-r..r]
>
> cM :: Int -> [(Int, Int)]
> cM = \r -> do
>   x <- range 5
>   y <- range 5
>   guard (x*x + y*y <= r*r)
>   return (x, y)
>
> -- ghci> take 10 (cM 5)
> -- [(-5,0),(-4,-3),(-4,-2),(-4,-1),(-4,0),(-4,1),(-4,2),(-4,3),(-3,-4),(-3,-3)]
```

We read this as

- for each `x` in -5 to 5
- for each `y` in -5 to 5
- where `x*x + y*y <= r*r`
- return the pair of `x` and `y`

Now let's see how to use arrow notation to express the same computation. For trivial technical reasons we need to wrap the list monad to make it suitable for use with arrow notation. The wrapping and unwrapping don't actually do anything except shuffle some type parameters around. In arrow computations we will use `K [] a b` instead of `a -> [b]`. We'll use abbreviated versions of the relevant wrapping and unwrapping functions:

```
> type K = Kleisli
>
> k :: (a -> m b) -> Kleisli m a b
> k = Kleisli
>
> runK :: Kleisli m a b -> (a -> m b)
> runK = runKleisli
```

Then we can use arrow notation to implement the radius list computation as follows:

```
> cA :: Kleisli [] Int (Int, Int)
> cA = proc r -> do
>   x <- k range -< 5
>   y <- k range -< 5
>   k guard -< (x*x + y*y <= r*r)
>   returnA -< (x, y)
>
> -- ghci> take 10 (runK cA 5)
> -- [(-5,0),(-4,-3),(-4,-2),(-4,-1),(-4,0),(-4,1),(-4,2),(-4,3),(-3,-4),(-3,-3)]
```

What's the point of arrow notation? So far we have only seen that it is able to replicate some examples in `do` notation. Well, the point is that arrow notation forbids some computations that `do` notation allows. In particular all "arrow actions" must be "statically known". That sentence was a mouthful! What does it mean? I am calling the expression that comes between `<-` and `-<` in a row of arrow notation the "arrow action". "Statically known" means that if we have a couple of rows of arrow notation

```
> -- y <- action1 -< x
> -- z <- action2 -< y
```

then the expression `action2` cannot depend on `x` or indeed anything bound on the left hand side of an arrow notation row.

This restriction has important practical consequences. For example, our Haskell IO system might be based on the following primitives

```
> getLineM :: String -> IO String
> getLineM prompt = do
>   print prompt
>   getLine
>
> printM :: String -> IO ()
> printM = print
>
> writeFileM :: (FilePath, String) -> IO ()
> writeFileM (filePath, string) = writeFile filePath string
```

then we could use `do`

notation to write a procedure which reads a line of user input and either prints something out or writes to a file based on that input.

```
> procedureM :: String -> IO ()
> procedureM = \prompt -> do
>   input <- getLineM prompt
>   if input == "Hello"
>     then printM "You said 'Hello'"
>     else writeFileM ("/tmp/output", "The user said '" ++ input ++ "'")
>
> -- ghci> procedureM "Say something"
> -- "Say something"
> -- Hello
> -- "You said 'Hello'"
>
> -- ghci> procedureM "Say something"
> -- "Say something"
> -- Bye bye
> -- (Writes to /tmp/output)
```

However, there is no way to express this in arrow notation using only the same primitives.

```
> -- procedureA :: K IO String ()
> -- procedureA = proc prompt -> do
> --   input <- k getLineM -< prompt
> --   if input == "Hello"
> --     then printM
> --     else writeFileM "/tmp/output" ...
>
> -- Oh no! This won't work because we were trying to refer to a <-
> -- bound variable on the left hand side of a -<
```

Why do we want to use arrows when they have these restrictions? Going into details would take a whole other blog post of its own, but I will mention briefly a few places where the full generality of monads is too much.

Firstly, an arrow-only interface can often allow you to take advantage of optimizations that a monadic interface could not. For example, parsers written using parser combinators can be made more memory efficient if we know statically the parsing action they are going to perform. Similarly it can help to reduce the chance of memory leaks in functional reactive programming (e.g. with `netwire`) if actions cannot depend in an unrestrained way on the result of previous actions.

In embedded domain specific languages, forcing this non-dependence can make code generation more easily match a target language. For example, in the SQL-generating relational query language Opaleye, queries are built up from arrows called `QueryArr`. Using an arrow rather than a monad allows the semantics of the domain specific language to more closely match the semantics of the underlying SQL language.

So in summary, arrows are a Haskell abstraction, similar to monads, and arrow notation is a way of writing arrow expressions that is similar to `do` notation. However, there are some restrictions on what you can do with arrows that are not shared by monads. The benefit of the restriction is that you can often receive a performance benefit, or use your more specific knowledge about the structure of an arrow computation to your advantage.

*This post is part of 24 Days of GHC Extensions - for more posts like this, check out the calendar*.

I’m a fairly conservative haskell developer. Most of my code is close to haskell 2010, with just the odd extension enabled here and there. I like learning new language features, but I’m hesitant to use them in day to day coding until I feel I understand them well.

ScopedTypeVariables is a straightforward extension that I use often. In this post, I’ll explain why.

Type inference is one of haskell's many appealing features. But writing explicit type signatures is still important, for several reasons:

- Type signatures are excellent documentation.
- Type signatures are a way to confirm that the compiler's "understanding" of the code matches my mental model.
- Certain type system extensions require type signatures.

It is considered good haskell style to write type signatures for all top level definitions. And, in complex code it's often beneficial to write signatures for key subterms. The issue is that, in haskell 2010, explicit type signatures cannot be written for all well typed subterms.

Hence the motivation for ghc’s ScopedTypeVariables extension, as quoted from the original paper:

it should be possible for the programmer to write an explicit type signature for any sub-term of the program.

I am a lazy coder - sometimes the code just flows through my fingers into emacs, but as function types get more complex, I have to think harder. Hence, I like to use the compiler as an assistant, moving from an empty edit buffer to working code, with ghc checking each step along the way. I’ll work through a short example showing this.

For some reason, calls to `foldr` and `foldl` have never flowed easily for me - the intuition of folding is natural enough, but the code always needs a little thought, particularly when the folds are nested.
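As a quick reminder of the shapes of the two folds (just the standard associativity, spelled out with strings so it is visible):

```haskell
-- foldr f z [a,b,c] associates to the right: a `f` (b `f` (c `f` z))
-- foldl f z [a,b,c] associates to the left:  ((z `f` a) `f` b) `f` c
rightAssoc :: [String] -> String
rightAssoc = foldr (\x acc -> "(" ++ x ++ "+" ++ acc ++ ")") "z"

leftAssoc :: [String] -> String
leftAssoc = foldl (\acc x -> "(" ++ acc ++ "+" ++ x ++ ")") "z"

-- rightAssoc ["a","b","c"] == "(a+(b+(c+z)))"
-- leftAssoc  ["a","b","c"] == "(((z+a)+b)+c)"
```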

Consider a function where I want to insert a list of values into a map, combining new values with existing ones. Our function will look like this:

```
import qualified Data.Map as Map

insertMany :: Ord k => (v -> v -> v) -> [(k,v)] -> Map.Map k v -> Map.Map k v
insertMany acc vs m = undefined
```

Now, clearly this is a fold of some function over `vs`, so let's write it as such, and get ghc to check it:

```
insertMany :: Ord k => (v -> v -> v) -> [(k,v)] -> Map.Map k v -> Map.Map k v
insertMany accf vs m = foldr f1 m vs
  where
    f1 = undefined
```

All good. So, what is the type of f1? I could read the documentation for `foldr`, but given the above code type checks, the compiler already knows the type. Let's have it tell us, by forcing a type error:

```
insertMany :: Ord k => (v -> v -> v) -> [(k,v)] -> Map.Map k v -> Map.Map k v
insertMany accf vs m = foldr f1 m vs
  where
    f1 :: Int
    f1 = undefined
```

results in the error:

```
Couldn't match expected type ‘(k, v) -> Map.Map k v -> Map.Map k v’
with actual type ‘Int’
Relevant bindings include
...
In the first argument of ‘foldr’, namely ‘f1’
```

Let’s paste that type in:

```
insertMany :: Ord k => (v -> v -> v) -> [(k,v)] -> Map.Map k v -> Map.Map k v
insertMany accf vs m = foldr f1 m vs
  where
    f1 :: (k, v) -> Map.Map k v -> Map.Map k v
    f1 = undefined
```

Again, ghc is happy, and now we can clearly see what we need to implement. Conveniently, Map has the `insertWith` function, which looks fit for the task:

```
insertMany :: Ord k => (v -> v -> v) -> [(k,v)] -> Map.Map k v -> Map.Map k v
insertMany accf vs m = foldr f1 m vs
  where
    f1 :: (k, v) -> Map.Map k v -> Map.Map k v
    f1 (k,v) m = Map.insertWith accf k v m
```

But now, ghc is not happy at all:

```
Could not deduce (v ~ v1)
from the context (Ord k)
bound by the type signature for
insertMany :: Ord k =>
(v -> v -> v) -> [(k, v)] -> Map.Map k v -> Map.Map k v
at /Users/timd/annex/blog/scoped-type-variables/Test.hs:3:16-78
‘v’ is a rigid type variable bound by
the type signature for
insertMany :: Ord k =>
(v -> v -> v) -> [(k, v)] -> Map.Map k v -> Map.Map k v
at /Users/timd/annex/blog/scoped-type-variables/Test.hs:3:16
‘v1’ is a rigid type variable bound by
the type signature for
f1 :: (k1, v1) -> Map.Map k1 v1 -> Map.Map k1 v1
at /Users/timd/annex/blog/scoped-type-variables/Test.hs:6:11
Expected type: v1 -> v1 -> v1
Actual type: v -> v -> v
Relevant bindings include
...
In the first argument of ‘Map.insertWith’, namely ‘accf’
In the expression: Map.insertWith accf k v m
```

Yikes! Yet it compiles if I comment out the type annotation on f1:

```
insertMany :: Ord k => (v -> v -> v) -> [(k,v)] -> Map.Map k v -> Map.Map k v
insertMany accf vs m = foldr f1 m vs
  where
    -- f1 :: (k, v) -> Map.Map k v -> Map.Map k v
    f1 (k,v) m = Map.insertWith accf k v m
```

and it works too:

```
> insertMany (+) [("a",5),("b",4),("c",2),("b",11)] Map.empty
fromList [("a",5),("b",15),("c",2)]
>
```

I could delete the signature and get on with coding, but it makes useful documentation. And why doesn’t it type check anyway? The key lines from the error message are:

```
Could not deduce (v ~ v1)
‘v’ is a rigid type variable bound by
the type signature for
insertMany :: Ord k => (v -> v -> v) -> [(k, v)] -> Map.Map k v -> Map.Map k v
‘v1’ is a rigid type variable bound by
the type signature for
f1 :: (k1, v1) -> Map.Map k1 v1 -> Map.Map k1 v1
```

Note that it has reproduced my two type signatures, but the signature for f1 has different type variables. The compiler is telling me that, as written, v and v1 are not the same type. This is the crux of the problem - by writing

`f1 :: (k, v) -> Map.Map k v -> Map.Map k v`

I have promised that this function will work for *any* types k and v. But as I use accf within the implementation of f1, v in f1 is constrained to match the v in insertMany.

The code type checks if I delete the type signature because haskell 2010 can *infer* the correct type for f1. But there is no facility to directly express this type in a signature. In haskell 2010, a type variable is only scoped over the type signature in which it appears - I can't directly "connect" type variables in different type signatures.

The ScopedTypeVariables extension lets us extend the scope of type variables into the corresponding expression, and hence to internal type signatures. Here’s the example, making use of the extension:

```
{-# LANGUAGE ScopedTypeVariables #-}
import qualified Data.Map as Map
insertMany :: forall k v . Ord k => (v -> v -> v) -> [(k,v)] -> Map.Map k v -> Map.Map k v
insertMany f vs m = foldr f1 m vs
  where
    f1 :: (k, v) -> Map.Map k v -> Map.Map k v
    f1 (k,v) m = Map.insertWith f k v m
```

We enable the extension with the `LANGUAGE` pragma, and then declare that the type variables in the `insertMany` signature are to be scoped over the corresponding expression with the `forall k v` quantifier. Hence, `v` now means the same type wherever it appears in the implementation of `insertMany`. And hence we have our completed code, annotated as desired.

This is a relatively simple example where I have incrementally written the code whilst using the compiler to ensure that, at each stage, I am proceeding towards a correct solution. It’s longwinded to describe the process, but in practice, with fast error checking from ghc-mod, the sequence of steps above takes only a minute or two. As the types become more complex, I find this technique very helpful. Using types to guide the writing of the code is sometimes referred to as “hole-driven development”. Mathew Brecknell has created a nice demonstration that shows the process in more detail.

A key part of this technique is writing the type signatures before the code. The ScopedTypeVariables extension makes it possible to write type signatures which cannot be expressed in haskell 2010.

One thing that I didn't know until I wrote this post is that the `forall` syntax is not the only way of introducing a scoped type variable. It turns out there are three other ways - see the ghc documentation for details.
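One of those alternatives is a pattern type signature: annotating a function argument can itself bring a type variable into scope. A small sketch (the `lengthAndReverse` function is a hypothetical example, not from the code above):

```haskell
{-# LANGUAGE ScopedTypeVariables #-}

-- The signature on the argument pattern binds the type variable a,
-- so the inner signature for rev refers to the same a.
lengthAndReverse :: [a] -> (Int, [a])
lengthAndReverse (xs :: [a]) = (length xs, rev)
  where
    rev :: [a]
    rev = reverse xs
```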

A final note - the forall keyword is not part of haskell 2010, but is used by several different extensions (including ScopedTypeVariables, ExistentialQuantification, RankNTypes). So when you see it in code, it’s worth checking which of these extensions are active.

*This post is part of 24 Days of GHC Extensions - for more posts like this, check out the calendar*.