The *coin change problem* is a combinatorial problem in which, given a set of coin values and a target value, the goal is to count the number of coin combinations that sum to the target value. Because of its combinatorial nature, the naive search space explodes very quickly. However, we can alleviate the hardships by employing dynamic programming with memoization and by utilizing a neat little property of the problem. To stay true to our values, we solve this problem in Haskell, a purely functional language close to our hearts! :)

This blog post was inspired by a problem on Project Euler. Project Euler is a web platform containing a collection of over 700 puzzles that combine math and programming. If those topics are your thing, we highly recommend giving some problems a shot. You do not have to be a math genius (or have liked/understood anything Euler discovered) to solve the majority of these problems. A pen and paper, a few coding techniques and focus should suffice.

Project Euler strongly discourages posting solutions to their problems online. To uphold the integrity of Project Euler, we will not provide the problem number this blog post stems from, nor use the same values they used in the problem description.

The problem is fairly easy to state:

Given a set of coin values, how many ways are there to pick coins from each value so the sum equals the target value?

We can assume we have **infinite coins** of each value. The order of how the coins are picked does **not** matter.

As an example:

```
Set of coin values: {1, 2, 5, 10}
Target value: 15
Some combinations which all sum up to 15:
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
[1, 2, 2, 5, 5]
[5, 10] ([10, 5] is the same solution)
[1, 1, 1, 1, 1, 10]
...
```

There are 22 ways to reach the target of `15` with the coin values `{1, 2, 5, 10}`.

22 is a fairly small number, and with 10 minutes on our hands we could list these off manually.
This, however, becomes a very cumbersome task with higher target values and even more coin values, so let's set up such a difficult problem.
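For the skeptical, these 22 combinations can also be enumerated directly with a short recursive sketch; the helper `combos` below is our own and not part of the solver we develop in this post:

```
-- Sketch: enumerate every non-decreasing combination that sums to the target.
combos :: [Int] -> Int -> [[Int]]
combos _ 0 = [[]]                 -- target hit: one (empty) way to finish
combos [] _ = []                  -- no coins left, target not hit: dead end
combos (c : cs) n
  | c > n     = combos cs n       -- coin too big, skip it
  | otherwise = map (c :) (combos (c : cs) (n - c))  -- use coin c (again)
             ++ combos cs n                           -- or never use c again

-- length (combos [1, 2, 5, 10] 15) indeed yields 22
```

Each returned list is non-decreasing by construction, so permutations of the same combination are counted only once.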

The problem we will try to solve is the following:

```
Set of coin values: {1, 2, 5, 10, 20, 50, 100, 150, 200, 300}
Target value: 300
```

The number of combinations that reach the target value has six digits, so please don't try this by hand :). Let's instead get to our favorite part: programming!

An initial attempt might be to create, for each coin value, every "partition" (a list repeating that coin) whose sum is smaller than or equal to the target value. Then, create the cross product of all these partitions and remove all the combinations whose sum does not equal the target value. For our problem, this tactic might look like this:

```
1.) List every partition (smaller or equal to 300) for every coin value
For 1: [], [1], [1, 1], [1, 1, 1], ... all the way up to 300x1
For 2: [], [2], [2, 2], [2, 2, 2], ... all the way up to 150x2
For 5: [], [5], [5, 5], ...
...
2.) Create the cross product of all these partitions
[1], [1, 2], [1, 5], [1, 2, 5], [2, 5], ...
3.) Remove all the lists where the sum does not equal to 300
```

Our Haskell code might look like this (for simplicity's sake, we will work with `Int` and not `Integer`):

```
import Data.List (inits)

coins :: [Int]
coins = [1, 2, 5, 10, 20, 50, 100, 150, 200, 300]

-- Create all partitions for a number, with sum smaller than or equal to 300
partitions :: Int -> [[Int]]
partitions nr = inits $ replicate (300 `div` nr) nr

-- Create the cross product with sequence (since List is a monad)
cartesian :: [[[Int]]]
cartesian = sequence $ map partitions coins

-- Keep only the sums that are equal to 300
targets :: [Int]
targets = filter (== 300) $ map (sum . concat) cartesian
```

Unfortunately, this code **won't** finish running. Let's quickly check why not:

```
-- In GHCI
-- First, we check how many partitions are created for each coin value
map (length . partitions) coins
-- Gives [301, 151, 61, 31, 16, 7, 4, 3, 2, 2]
-- So there are 301 partitions for 1, 151 for 2 etc.
-- The cross product combines each with each, so we multiply all values
product [301, 151, 61, 31, 16, 7, 4, 3, 2, 2]
-- Gives 462 055 593 216
-- That's 462 billion LISTS
```

We can see that this just generates **way too many** lists to sum up and filter through. Also, this approach generates a lot of nonsense lists like `[100, 100, 100, 150, 150, 200, 300]`. This list is way over target and should not even be considered. We need a better approach.

Let's instead use the following approach:

Start with `0` and keep adding coin values one by one. If `300` is reached, we add `1` to our solution counter. If we go over `300`, we don't add anything. Every time we hit `300` or go over it, we remove the last added coin and try the next one. If our coin list is exhausted, we remove the **last two** coins and try the next coin on that level. This process is repeated until we have swapped out the coin we started with. Let's shrink our problem a little to demonstrate how this would work.

```
Set of coin values: {2, 5, 10}
Target value: 10
```

If we try this approach manually, it would look something like this:

```
2-2-2-2-2 (is == 10, increment counter and replace with next coin)
2-2-2-2-5 (is > 10, replace with next coin)
2-2-2-2-10 (is > 10, tried all coins, remove last 2 coins and continue with next)
2-2-2-5 (is > 10, replace with next coin)
2-2-2-10 (is > 10, tried all coins, remove last 2 coins and continue with next)
2-2-5 (is < 10, add the first coin from our list)
2-2-5-2 (is > 10, replace with next coin)
...
```

This might seem confusing, but it becomes clearer if we think of this as building up a ternary tree (ternary because we have three coin values) where each path from the root to a leaf represents one of these coin-adding chains. The leaves are the nodes where the accumulator is bigger than or equal to `10`. Removing the last coin is equivalent to backtracking once and attempting the next element.

The following visualization of the tree should help clarify what is going on:

We first follow the orange path, because we start with `2` and keep adding `2`. Once we reach a node where the accumulated sum equals or exceeds `10`, we backtrack once (remove the `2`) and try the next coin in the list, `5`. The same procedure applies there: keep going until the sum equals or exceeds `10`. In the case of `5`, we are over immediately with 13, so we continue with the next coin, `10`. We are over again with 18, and now the list is exhausted; all coins were tried. This shows why in this step two coins have to be removed: we jump up to the parent (where the accumulated sum is `6`) and continue with our coin list there, with `5`. So the order of visiting the tree is orange, green, red, blue, etc. This is also called a pre-order traversal of the tree.

Let's code this for our small example:

```
coins' :: [Int]
coins' = [2, 5, 10]

naive :: Int -> Int -> Int
naive acc coinval
  | acc + coinval > 10  = 0
  | acc + coinval == 10 = 1
  | otherwise           = sum $ map (naive $ acc + coinval) coins'

sol :: Int
sol = sum $ map (naive 0) coins'
```

For each node, we apply the `naive` function recursively to each child (coin value) of that node. Each call returns the number of target hits, and these are summed up in the parent node. The base case is reached when the accumulator plus the coin equals or exceeds the target `10`. As depicted in the image, this function alone only builds up the tree for **one** starting coin, so calling `naive 0 2` creates the tree depicted above. We want to build the tree for every coin and sum up all target hits. This is exactly what `sol` does: it starts `naive` with an empty accumulator for each coin value.

Tracing this function shows that, indeed, the traversal works as expected:

```
import Debug.Trace

naive :: Int -> Int -> Int
naive acc coinval
  | acc + coinval > 10  = trace ("over 10 with " <> show (acc + coinval)) 0
  | acc + coinval == 10 = trace "hit 10" 1
  | otherwise =
      sum $
        map
          (\coin ->
             trace
               ("call with acc: "
                  <> show (acc + coinval)
                  <> " and coin: "
                  <> show coin)
               (naive (acc + coinval) coin))
          coins'
```

```
Calling sol in GHCI
Output:
call with acc: 2 and coin: 2
call with acc: 4 and coin: 2
call with acc: 6 and coin: 2
call with acc: 8 and coin: 2
hit 10
call with acc: 8 and coin: 5
over 10 with 13
call with acc: 8 and coin: 10
over 10 with 18
call with acc: 6 and coin: 5
over 10 with 11
call with acc: 6 and coin: 10
over 10 with 16
call with acc: 4 and coin: 5
call with acc: 9 and coin: 2
over 10 with 11
...
```

Calling the solver:

```
*Main> sol
3
```

This seems to be correct, since our mini problem has the following three solutions:

```
[2, 2, 2, 2, 2]
[5, 5]
[10]
```

If it works for this small problem, it will work for all problem sizes, right?

Let's change the target to `11` and run the solver again:

```
*Main> sol
4
```

The solver says there are `4` solutions. We can only think of one solution with the coin values we have, though:

```
[2, 2, 2, 5]
```

The attentive reader might've already realized that something's not entirely right with how we are building the tree. Let's go back to the visualization and see what's going on: There are two solutions in this tree, the red path and the blue path (the purple path is the shared path). They are:

```
[2, 2, 2, 5]
[2, 2, 5, 2]
```

The problem becomes obvious. This tree assumes that the order of the coins picked **matters**, thus coming up with these two solutions, even though they are the same solution for our problem. As stated in the beginning, the order is **irrelevant**, and the four solutions our `sol` comes up with are likewise all equivalent.

Luckily, we can get out of this pickle fairly easily. We simply require that each child of a node is bigger than or equal to the node itself. This means `2` can no longer be a child of `5`. It implicitly enforces that each solution is **ascendingly** ordered, because the values on a path from the root to a leaf can never decrease; they always stay the same or increase. This automatically eliminates all possible permutations a solution could have, leaving only unique solutions.

This is very easy to add to our code; only a minor adjustment to the `naive` function is required:

```
naive :: Int -> Int -> Int
naive acc coinval
  | acc + coinval > 11  = 0
  | acc + coinval == 11 = 1
  | otherwise           = sum $ map (naive $ acc + coinval) $ dropWhile (< coinval) coins'
```

For each node, we simply drop (`dropWhile`) all the coin values from the list that are smaller than the node's value. So a `2` would be dropped if our parent node were a `5`. We then, as before, apply the function recursively to these child nodes.

We verify:

```
*Main> sol
1
```

We prune away **large** chunks of the tree this way, and besides getting the correct answer, we speed up* our solver by a lot. Let's demonstrate this by changing our target value to `65`. First we enable timing and memory information in `ghci` with `:set +s`, then run `sol`:

```
-- Without adjustment
*Main> sol
2007064
(13.76 secs, 11,177,397,344 bytes)
-- With adjustment:
*Main> sol
28
(0.03 secs, 793,976 bytes)
```

The solution is now correct and the execution time is **massively** reduced. Before we run our solver on the original problem, let's look at one more idea for cutting down the execution time.

Let's say we are in a child node, have added that coin value, and see that we are over the target value or have reached it exactly. According to our algorithm, we backtrack once and try the next coin. The problem is that the next coin is always bigger than the current coin, so it is **guaranteed** to be over target as well. This means that if we reach the target or go over it in a **direct** child, we shouldn't try the next coins. Looking at the tree visualization again, we can observe this problem:

After the target of `10` is reached in the leftmost path, there is no point in going back and checking the coins `5` and `10`; they are guaranteed to be over target. A possible fix is to check, once we hit `10` or go over it, and not process the remaining coins in that node anymore, backtracking immediately instead. To implement this, we change the last guard of `naive` from

```
| otherwise = sum
$ map (naive $ acc + coinval)
$ dropWhile (< coinval) coins'
```

to

```
| otherwise = sum
$ map (naive $ acc + coinval)
$ dropWhile (\c -> c < coinval || c + acc > 10) coins'
```

With this adjustment, we drop, directly in the parent node, all coins that would make the accumulator in the leaves shoot over `10`, so we don't even descend into those leaves. Although this change gets rid of some unnecessary operations, adding the condition oddly cuts performance roughly in half. This might be because the additional boolean check runs on many nodes in the tree, creating a lot of extra work. You might now ask: "But isn't `||` short-circuited thanks to laziness?" While that is indeed so, short-circuiting only happens when `c < coinval` is `True`. That happens quite often, but it is also `False` very often, which then leads to the evaluation of `c + acc > 10` as well, adding a lot of overhead. Swapping the two conditions around makes it even slower, since `c + acc > 10` holds only in some leaves, so at even fewer nodes, leading to the evaluation of both conditions even more often.

Ok, so this won't work nicely. However, there is still a very simple adjustment that improves performance by a lot: reversing the coin list and inverting the coin-dropping condition from `<` to `>`:

```
coins' :: [Int]
coins' = reverse [2, 5, 10]
-- coins' = [2, 5, 10]

naive :: Int -> Int -> Int
naive acc coinval
  | acc + coinval > 10  = 0
  | acc + coinval == 10 = 1
  | otherwise           = sum $ map (naive $ acc + coinval) $ dropWhile (> coinval) coins'
  -- | otherwise = sum $ map (naive $ acc + coinval) $ dropWhile (< coinval) coins'
```

This might seem way too easy. What does this improve? Well, this will construct our tree in an entirely different way. We're going to look at this reduced problem:

```
Set of coin values: {2, 5, 10}
Target value: 20
```

We are going to call `naive 0 10`. This generates the following tree:

This looks different from the trees before. Because we reversed the coin list, we work through the largest coin values first, so the leftmost node is the largest coin value, while the rightmost is the smallest. Inverting the coin-dropping condition now also causes all child nodes to be smaller than or equal to their parent. So, instead of monotonically **increasing** values on our paths from root to leaf, they are now monotonically **decreasing**. This means `5` can no longer be a child of `2`. The uniqueness of our solutions is still maintained this way. This is also why we called `naive 0 10`: if we called `naive 0 2` again, it would just produce a long chain of `2`s, since only `2` can be a child of `2` with our adjustment.

In the illustration above, the behavior from before can no longer be observed. The left path found a solution, but since the middle and right paths both continue with smaller coin values, they might yield solutions too; unlike before, they won't overshoot the target value right away.

To give a concrete example of what the improvement actually does:

```
A path like this was constructed in the non-reversed list:
2-2-2-2-2-2-2-2-2-10 (accumulator 28)
With the reversed list, that is not possible. The path would be pruned early:
10-2-2-2-2-2 (accumulator 20)
```

The pruning happens because we allocate the big coin values first and then try to add smaller, more fine-grained values, so in many cases we detect overshooting the target a lot earlier. We run both the ascending and the descending version on our original problem. As a quick reminder, this was the original problem we wanted to solve:

```
Set of coin values: {1, 2, 5, 10, 20, 50, 100, 150, 200, 300}
Target value: 300
```

We run `sol` with the old, ascending version:

```
*Main> sol
493238
(116.98 secs, 62,083,710,928 bytes)
```

And with the new, descending version:

```
*Main> sol
493238
(65.50 secs, 24,040,201,216 bytes)
```

Apparently, the answer to how many ways you can reach the target value of `300` with the coins `{1, 2, 5, 10, 20, 50, 100, 150, 200, 300}` is `493238`. This nicely shows how quickly the problem grows as you increase the target value and the number of available coins.

Besides that, there is almost a twofold increase in performance and almost a threefold decrease in space with the improved version! It seems we are constructing a tree with only a fraction of the nodes from before. This neatly illustrates how a very simple change, found by looking at the problem a little more thoroughly, can yield a massive performance gain. :)

The "quick" solution still takes an entire minute, which is not acceptable. So we are not entirely done yet; there is one final (and the most impactful) improvement!

The current problem is that we are redoing a ton of computations. Let's quickly check which computations we repeat, and how often, in this smaller example:

```
Set of coin values: {2, 5, 10}
Target value: 100
```

We are trying to find out with which arguments `naive` is being called, and how many times. One way to do this is to set up a nested `Map` as an occurrence count, pass it into each recursive call, and increment the counters. However, this involves tedious wrapping and unwrapping of the `Map` (unless we start using more complex constructs like a `Writer` monad). Instead, we will abuse the `trace` function a little and do some processing with Unix commands.

We add the following `trace`s to our `naive` function:

```
naive :: Int -> Int -> Int
naive acc coinval
  | acc + coinval > 100 =
      trace ("Call with acc: " <> show acc <> " coin: " <> show coinval) 0
  | acc + coinval == 100 =
      trace ("Call with acc: " <> show acc <> " coin: " <> show coinval) 1
  | otherwise =
      sum $
        map
          (trace
             ("Call with acc: " <> show acc <> " coin: " <> show coinval)
             (naive $ acc + coinval))
          (dropWhile (> coinval) coins')
```

Every time `naive` is called, it prints out which arguments it was given. A call of `naive 10 5` writes `"Call with acc: 10 coin: 5"` to stderr. Since we want to do some processing, the output should not stay in `ghci`; we want an executable. To get one, we create a `main` function like so:

```
main :: IO ()
main = do
  print sol
  return ()
```

`sol` is the same function as before. We `print` it so it is actually run and evaluated. Using stack, compile the code with `stack ghc Coinchange.hs`; this creates the executable. We run it and pipe stderr to a file with `./Coinchange 2>out.txt`, which creates a 57 kB `out.txt` file. (As a quick fun fact, doing this with the ascending version creates a 70 kB file.) Running `sort out.txt | uniq -c | sort -nr` gives us the following output:

```
55 Call with acc: 99 coin: 2
55 Call with acc: 98 coin: 2
55 Call with acc: 97 coin: 2
55 Call with acc: 96 coin: 2
55 Call with acc: 95 coin: 2
55 Call with acc: 94 coin: 2
55 Call with acc: 92 coin: 2
55 Call with acc: 90 coin: 2
45 Call with acc: 93 coin: 2
45 Call with acc: 91 coin: 2
45 Call with acc: 89 coin: 2
...
```

The command counts all duplicate lines and sorts them by occurrence count. The top line indicates that the call `naive 99 2` happened 55 times in our call stack, and every time it was recalculated anew. Herein lies the core realization of the problem: **55** is not some arbitrary number. It is the number of ways you can represent `99` with the coins `{2, 5, 10}`. For every single one of these 55 ways, you can add `2` as a child, since `100` is not yet reached and `2` can always be a child of any node in the descending version; hence, the function is called 55 times with these arguments. This happens for many, many values; we just listed the most common duplicates here.

It would be great if we could just store the work we have already done. If we already know all the ways to reach `98`, we just combine that count with the number of ways to get from `98` to our target `100`. Storing the work that has already been done for a part of the problem and then using it to construct the solution for the entire problem is called *dynamic programming*. The solution counter in every node of the tree our solver constructs depends on the solution counts the child nodes/subtrees return. This means we are building the solution counts from the leaves of the tree up to the root using recursion, as illustrated by the pre-order traversal. Building the solutions this way is called the top-down approach in dynamic programming. Storing our work in this approach requires so-called *memoization*. Memoizing a function simply means creating a lookup table that maps the possible argument values of the function to the function outputs. This way the function does not have to be computed again when it is run with the same arguments; instead, the output can be read directly from the lookup table. So if we memoize `naive`, calling `naive 99 2` (with the problem listed above) will write `55` into a map under the combined key `99 2`. Next time we call `naive 99 2`, the result is a simple lookup away.

In Haskell, we don't have shared mutable state to represent the lookup table. So how would one go about implementing something like this? We could, as in the occurrence-counter example before, simply introduce a `Map` or `Writer` holding the solution counts and pass it around. We could then first check whether the solution for the current arguments is already contained in the structure and act accordingly. Again, this change is tedious due to the recursive nature of our solution. We have a better approach.
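To see why we call this tedious, here is a rough sketch of what explicitly threading a `Map` through the recursion could look like for the small `{2, 5, 10}` / target `10` problem (the names `treeM`, `sumChildren` and `coinsSmall` are hypothetical, our own):

```
import qualified Data.Map.Strict as M

type Memo = M.Map (Int, Int) Int

coinsSmall :: [Int]
coinsSmall = reverse [2, 5, 10]

-- The memo table has to be threaded in and out of every single call.
treeM :: Memo -> Int -> Int -> (Memo, Int)
treeM m acc c
  | Just v <- M.lookup (acc, c) m = (m, v)        -- cache hit: plain lookup
  | acc + c > 10                  = (M.insert (acc, c) 0 m, 0)
  | acc + c == 10                 = (M.insert (acc, c) 1 m, 1)
  | otherwise =
      let (m', v) = sumChildren m (acc + c) (dropWhile (> c) coinsSmall)
      in (M.insert (acc, c) v m', v)

-- Manually folding the table through the children: the tedium in question.
sumChildren :: Memo -> Int -> [Int] -> (Memo, Int)
sumChildren m0 _   []       = (m0, 0)
sumChildren m0 acc (x : xs) =
  let (m1, v)  = treeM m0 acc x
      (m2, vs) = sumChildren m1 acc xs
  in (m2, v + vs)

solM :: Int
solM = snd (sumChildren M.empty 0 coinsSmall)  -- 3, as before
```

Every call site has to accept and return the table, which infects every signature. This is exactly the plumbing the approach below avoids.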

Haskell will, if done right, not recompute the same function calls multiple times. The key point here is "if done right". This is achieved by sharing a *thunk*, which is basically an unevaluated value.
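A classic illustration of thunk sharing (not specific to our problem) is the self-referential Fibonacci list: every element is a thunk that, once forced, is never recomputed, because later elements reference the very same list cells:

```
-- Each element of fibs is evaluated at most once; zipWith walks the
-- already-built prefix of the very list it is defining.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

-- fibs !! 10 forces the first eleven thunks exactly once and yields 55
```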

Let's quickly look at a memoization function:

```
memo :: (Int -> Int) -> [Int]
memo f = map f [0..]
```

Given a function `f` (which maps an `Int` to an `Int`), `memo` creates an infinite list of function values, where the value at index `i` is `f i`. This only works because Haskell is lazy: the entire infinite list is not evaluated as soon as a function is memoized with `memo fun`; only the values that are actually fetched from the list are ever evaluated.
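To watch this laziness in action, we can wrap a toy function with `trace`, in the same debugging style as before (`slow` and `table` are hypothetical names of our own):

```
import Debug.Trace

memo :: (Int -> Int) -> [Int]
memo f = map f [0 ..]

-- A toy function that announces every real computation.
slow :: Int -> Int
slow n = trace ("computing " ++ show n) (n * n)

table :: [Int]
table = memo slow

-- Forcing table !! 7 prints "computing 7" once; forcing it again is silent,
-- because the thunk at index 7 has been overwritten with the value 49.
```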

The function we want to memoize has the type `Int -> Int -> Int`, so it should memoize not just a single argument, but two. The first argument looks up a list, which is in turn another lookup table for the second argument. This construct would look something like this:

```
memo2 :: (Int -> Int -> Int) -> [[Int]]
memo2 f = map (\x -> map (f x) [0 ..]) [0 ..]
```

This creates the nested list, apparent from the `[[Int]]` output type. We use our regular "coin-tree" building function, memoize it, and access the values in it with a lookup function that takes two indices:

```
lookup_table :: [[Int]]
lookup_table = memo2 tree

lookup_h :: Int -> Int -> Int
lookup_h acc coinval = lookup_table !! acc !! coinval

tree :: Int -> Int -> Int
tree acc coinval
  | acc + coinval > 300  = 0
  | acc + coinval == 300 = 1
  | otherwise = sum $ map (lookup_h $ acc + coinval) $ dropWhile (> coinval) coins

-- coins is now the descending full list:
-- coins = reverse [1, 2, 5, 10, 20, 50, 100, 150, 200, 300]
sol :: Int
sol = sum $ map (lookup_h 0) coins
```

`tree` is the function that builds the coin tree (it is not called `naive` anymore, to illustrate the improvement :)). It uses `lookup_h` to look up the values in `lookup_table`, and `lookup_table` is the memoized `tree` function, so these functions are all intertwined. We illustrate how the memoization works on an example:

1. `tree 20 10` is called.
2. `lookup_h` is called with the accumulator plus the coin value (`30`) and the first coin that is smaller than or equal to the coin value (`10`), i.e. `lookup_h 30 10`.
3. The index `30` `10` is accessed in the nested `lookup_table`.
4. If `lookup_h` is called with `30` `10` for the first time, that index is now needed, so, because of laziness, it is evaluated in `lookup_table` by calling `tree 30 10`, which in turn will use `lookup_h` again.
5. If `lookup_h` has already been called with `30` `10`, the value was already evaluated and can simply be accessed with a lookup.

This sharing of evaluated values works because of referential transparency: `lookup_h` is pure and accesses the top-level value `lookup_table`, which guarantees that the returned values never change when `lookup_h` is called repeatedly with the same arguments. We run this version:

```
*Main> sol
493238
(0.06 secs, 1,363,512 bytes)
```

Now this is a **major** step-up in performance! That is roughly a 1000-fold decrease in computation time and a 17,000-fold decrease in space usage compared to the non-memoized version.

While this is a good enough solution, this problem can be scaled up even more, forcing us to grab deeper into our bag of tricks. Some of these tricks include:

- Since the calculations are independent from each other, branches of the tree can be parallelized. Memoization then has to be taken care of (because of sharing a thunk between threads).
- `memo2` memoizes into a nested list (singly linked in Haskell). Lookup time for these lists is O(n). Using a nested tree as our memoization data structure would improve it to O(log n).
- While the stack didn't blow up in our example, this is a possibility, so applying tail recursion to our mutually recursive functions `tree`, `lookup_h` and `lookup_table` could alleviate that problem, should it arise.
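As a taste of the second point, swapping the lookup structure is mostly mechanical. Below is a sketch that uses an immutable `Data.Array` instead (giving O(1) lookups rather than the O(log n) of a balanced tree); the laziness of the array elements still performs the memoization for us. The names `table` and `look` are our own:

```
import Data.Array

target :: Int
target = 300

coinsDesc :: [Int]
coinsDesc = reverse [1, 2, 5, 10, 20, 50, 100, 150, 200, 300]

-- Array elements are lazy, so this self-referential table is filled on
-- demand, exactly like the infinite-list version, but with O(1) indexing.
table :: Array (Int, Int) Int
table = array bnds [(ix, uncurry tree ix) | ix <- range bnds]
  where
    bnds = ((0, 0), (target, target))

look :: Int -> Int -> Int
look acc coinval = table ! (acc, coinval)

tree :: Int -> Int -> Int
tree acc coinval
  | acc + coinval > target  = 0
  | acc + coinval == target = 1
  | otherwise = sum $ map (look $ acc + coinval) $ dropWhile (> coinval) coinsDesc

sol :: Int
sol = sum $ map (look 0) coinsDesc  -- 493238, as before
```

The self-reference works because each array cell holds a thunk for `tree acc coinval` that is forced at most once.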

We think this problem nicely shows how analyzing a problem can give insights into its nature, and how massive the resulting improvements in computation speed and space usage can be. Additionally, this problem can be illustrated nicely in visual form, and its premise is very simple to explain. If you found this write-up interesting and would like to have a go at this and similar problems, we strongly recommend giving Project Euler a shot!

If you would like to hire us to help with coin-change problems or work on (functional) web projects, don't hesitate to contact us. :)

*All computations are run on an Intel Core i7-6500U CPU @ 2.50GHz