Thursday, November 19, 2015

Simple construction of fixed-point data types

(This post is literate Haskell, available here.)

This post will show a trick for making it easier to work with terms represented as fixed-points of functors in Haskell. It is assumed that the reader is familiar with this representation.

First some extensions:

{-# LANGUAGE DeriveFunctor #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE TypeFamilies #-}

Representing terms as fixed-points

A common generic term representation is as a fixed-point of a functor:

newtype Term f = In { out :: f (Term f) }

A simple example functor is the following, representing arithmetic expressions:

data Arith a
    = Num Int
    | Add a a
  deriving (Functor)

One problem with the Term representation is that it’s quite cumbersome to construct terms with it. For example, the following is the representation of the expression (1+2)+3:

term1 :: Term Arith
term1 =
    In (Add
      (In (Add
        (In (Num 1))
        (In (Num 2))))
      (In (Num 3)))

This post will show a simple trick for inserting the In constructors automatically, allowing us to write simply:

term2 :: Term Arith
term2 = deepIn10 $ Add (Add (Num 1) (Num 2)) (Num 3)

Deep conversion of terms

First, let’s have a look at the type of the above expression:

_ = Add (Add (Num 1) (Num 2)) (Num 3) :: Arith (Arith (Arith a))

We see that the type has three levels of Arith because the tree has depth three. So in general, we need a function with the following type scheme:

f (f (f ... (f (Term f)))) -> Term f

The purpose of this function is to insert an In at each level, from bottom to top. The reason for having Term f in the argument type (rather than just a) is that In expects an argument of type Term f.

This function has to be overloaded on the argument type, so we put it in a type class:

class Smart f a where
    deepIn  :: f a -> Term f
    deepOut :: Term f -> f a

Here f is the functor and a can be Term f, f (Term f), f (f (Term f)), etc. The class also contains the inverse function deepOut for reasons that will be explained later.

First we make the base instance for when we only have one level:

instance Smart f (Term f) where
    deepIn  = In
    deepOut = out

Next, we make a recursive instance for when there is more than one level:

instance (Smart f a, Functor f) => Smart f (f a) where
    deepIn  = In . fmap deepIn
    deepOut = fmap deepOut . out

And that’s it!

Ambiguous types

Now there’s only one problem. If we try to use deepIn $ Add (Add (Num 1) (Num 2)) (Num 3), GHC will complain:

No instance for (Smart Arith a0) arising from a use of ‘deepIn’

As said earlier, the type of the given expression is Arith (Arith (Arith a)). The type variable a is ambiguous, so it’s not possible to know which of the above instances to pick for a.

So we need to constrain the type before calling deepIn. One idea might be to use a type function for constraining the a. Unfortunately, as far as I know, it’s not possible to make a type function that maps e.g.

Arith (Arith ... (Arith a))

to

Arith (Arith ... (Arith (Term Arith)))

This is because a is a polymorphic variable, and type functions cannot case split on whether or not a type is polymorphic.

But there is one thing we can do. The type Arith (Arith (Arith a)) shows that the expression has (at most) three levels. But there is nothing stopping us from adding more levels in the type.

Why not make a version of deepIn, deepIn10, that forces 10 levels in the type?

type Iter10 f a = f (f (f (f (f (f (f (f (f (f a)))))))))

deepIn10 :: (Functor f, a ~ Iter10 f (Term f)) => f a -> Term f
deepIn10 = deepIn

This function can be used to insert In constructors in terms of at most depth 10.

Constructing terms from existing terms

The above example, term2, was constructed from scratch. We might also want to construct terms that contain other, already constructed, terms. This is where deepOut comes into play:

term3 = deepIn10 $ Add (deepOut term2) (Num 4)
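For easy experimentation, the whole construction fits in one small standalone module. The definitions are repeated from above; the evaluator eval is my own addition, used only to check that the constructed terms denote the expected values:

```haskell
{-# LANGUAGE DeriveFunctor #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE TypeFamilies #-}

newtype Term f = In { out :: f (Term f) }

data Arith a = Num Int | Add a a deriving (Functor)

class Smart f a where
    deepIn  :: f a -> Term f
    deepOut :: Term f -> f a

instance Smart f (Term f) where
    deepIn  = In
    deepOut = out

instance (Smart f a, Functor f) => Smart f (f a) where
    deepIn  = In . fmap deepIn
    deepOut = fmap deepOut . out

type Iter10 f a = f (f (f (f (f (f (f (f (f (f a)))))))))

deepIn10 :: (Functor f, a ~ Iter10 f (Term f)) => f a -> Term f
deepIn10 = deepIn

-- A small evaluator (my addition), used to check the constructions
eval :: Term Arith -> Int
eval (In (Num n))   = n
eval (In (Add a b)) = eval a + eval b

term2 :: Term Arith
term2 = deepIn10 $ Add (Add (Num 1) (Num 2)) (Num 3)   -- (1+2)+3

term3 :: Term Arith
term3 = deepIn10 $ Add (deepOut term2) (Num 4)         -- ((1+2)+3)+4
```

Evaluating term2 gives 6 and term3 gives 10, confirming that the In constructors end up in the right places.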

Monday, October 26, 2015

Sudoku solver using SAT+

(This post is literate Haskell, available here.)

Inspired by an old(-ish) Galois post about using its SAT back end to solve sudokus, here is a similar sudoku solver written in Haskell using Koen Claessen’s SAT+ library.

Representing sudokus

We represent sudokus as a simple list of rows:

type Sudoku = [[Maybe Int]]

A valid sudoku will have 9 rows and 9 columns and each cell is either Nothing or a value between 1 and 9.
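This validity condition can be stated directly as a small helper (my own, not part of the solver):

```haskell
import Data.Maybe (catMaybes)

type Sudoku = [[Maybe Int]]

-- Check that a grid has 9 rows of 9 cells, each cell empty or in 1-9
isValidSudoku :: Sudoku -> Bool
isValidSudoku rows =
     length rows == 9
  && all ((== 9) . length) rows
  && all (all inRange . catMaybes) rows
  where
    inRange n = 1 <= n && n <= 9

-- The empty puzzle: 9 rows of 9 blank cells
emptySudoku :: Sudoku
emptySudoku = replicate 9 (replicate 9 Nothing)
```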

The rules of sudoku mandate that each of the 27 “blocks” (rows, columns and 3×3 squares) only contain distinct numbers in the interval 1-9. So the first thing we need is a function for extracting the blocks of a sudoku:

blocks :: [[a]] -> [[a]]

The argument is a sudoku (9×9 elements) and the result is a list of blocks (27×9 elements). We use a polymorphic function in order to make it work also for the SAT representation of sudokus below.

(We omit the definition of blocks from this post, because it’s part of a lab here at Chalmers.)

Solving constraints using SAT+

SAT+ is a library that adds abstractions and convenience functions on top of the SAT solver MiniSAT. Here we will only make use of quite a small part of the library.

The inequality constraints of sudokus can be expressed using the following functions:

newVal   :: Ord a => Solver -> [a] -> IO (Val a)
val      :: a -> Val a
notEqual :: Equal a => Solver -> a -> a -> IO ()

Function newVal creates a logical value that can take on any of the values in the provided list. A specific known value is created using val, and finally, inequality between two values is expressed using notEqual.

The Solver object required by newVal and other functions is created by newSolver:

newSolver :: IO Solver

After a solver has been created and constraints have been added using the functions above, we can use solve to try to find a solution:

solve :: Solver -> [Lit] -> IO Bool

If solve returns True we can then use modelValue to find out what values were assigned to unknowns:

modelValue :: Solver -> Val a -> IO a

These are all the SAT+ functions we need to solve a sudoku.

The solver

In order to solve a sudoku, we first convert it to a “matrix” of logical values:

type Sudoku' = [[Val Int]]

fromSudoku :: Solver -> Sudoku -> IO Sudoku'
fromSudoku sol = mapM (mapM fromCell)
  where
    fromCell Nothing  = newVal sol [1..9]
    fromCell (Just c) = return (val c)

This simply converts the matrix of Maybe Int to a matrix of Val Int, such that Just c is translated to the logical value val c and Nothing is translated to a logical value in the range 1-9.

Next, we need to add inequality constraints according to the rules of sudoku. After obtaining the blocks using blocks, we just add constraints that all values in a block should be different:

isOkayBlock :: Solver -> [Val Int] -> IO ()
isOkayBlock sol cs = sequence_ [notEqual sol c1 c2 | c1 <- cs, c2 <- cs, c1 /= c2]

isOkay :: Solver -> Sudoku' -> IO ()
isOkay sol = mapM_ (isOkayBlock sol) . blocks
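The pairwise constraint has a simple pure-list analogue (a helper of my own, independent of SAT+), which may clarify what the solver is being asked to enforce: a block is fine if its known cells are pairwise distinct.

```haskell
import Data.List (nub)

-- Pure analogue of isOkayBlock: the filled-in cells of a block must be
-- pairwise distinct (empty cells are ignored)
okayBlock :: [Maybe Int] -> Bool
okayBlock b = xs == nub xs
  where xs = [x | Just x <- b]
```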

After solving a sudoku, we need to be able to convert it back to the original representation. This is done using modelValue to query for the value of each cell:

toSudoku :: Solver -> Sudoku' -> IO Sudoku
toSudoku sol rs = mapM (mapM (fmap Just . modelValue sol)) rs

Now we have all the pieces needed to define the solver:

solveSud :: Sudoku -> IO (Maybe Sudoku)
solveSud sud = do
    sol  <- newSolver
    sud' <- fromSudoku sol sud
    isOkay sol sud'
    ok <- solve sol []
    if ok
      then Just <$> toSudoku sol sud'
      else return Nothing

Tuesday, September 29, 2015

Strictness can fix non-termination!

(This post is literate Haskell, available here.)

I’ve always thought that strictness annotations can only turn terminating programs into non-terminating ones. Turns out that this isn’t always the case. As always, unsafePerformIO changes things.

import Data.IORef
import System.IO.Unsafe

Consider the following function for turning an IORef into a pure value:

unsafeFreezeRef :: IORef a -> a
unsafeFreezeRef = unsafePerformIO . readIORef

(I explain why I’m interested in this function below.)

This function is of course unsafe, but we may naively expect it to work in the following setting where it’s used inside a function that returns in IO:

modRef :: (a -> a) -> IORef a -> IO ()
modRef f r = writeIORef r $ f $ unsafeFreezeRef r

However, this doesn’t work, because the readIORef inside unsafeFreezeRef happens to be performed after writeIORef, leading to a loop.
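The deferred read can be demonstrated in isolation with a sketch that terminates (the names here are mine): the readIORef inside unsafePerformIO runs not where the value is defined, but where it is forced.

```haskell
import Data.IORef
import System.IO.Unsafe

-- Illustration of the deferred read: the unsafePerformIO read happens
-- only when the value is forced, which here is after the write
demoDeferredRead :: IO Int
demoDeferredRead = do
  r <- newIORef 1
  let lazyRead = unsafePerformIO (readIORef r)  -- no read performed yet
  writeIORef r 2
  return $! lazyRead  -- forcing performs the read now: we see 2, not 1
```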

So, strangely, the solution is to use a strictness annotation to force the read to happen before the write:

modRef' :: (a -> a) -> IORef a -> IO ()
modRef' f r = writeIORef r $! f $ unsafeFreezeRef r

This is the first time I’ve seen a strictness annotation turn a non-terminating program into a terminating one.

Here is a small test program that terminates for modRef' but not for modRef:

test = do
    r <- newIORef "hello"
    modRef' tail r
    putStrLn =<< readIORef r

I’m not sure this tells us anything useful. We should probably stay away from such ugly programming anyway… But at least I found this example quite interesting and counterintuitive.

Why on earth would anyone write unsafeFreezeRef?

In case you wonder where this strange example comes from, I’m working on a generic library for EDSLs that generate C code. Among other things, this EDSL has references that work like Haskell’s IORefs.

The purpose of unsafeFreezeRef in the library is to avoid temporary variables in the generated code when we know they are not needed. The idea is that the user of unsafeFreezeRef must guarantee that the result is not accessed after any subsequent writes to the reference. Reasoning about when a value is used is quite easy in the EDSL due to its strict semantics. For example, we don’t need any strictness annotation in the definition of modRef due to the fact that setRef (the equivalent of writeIORef) is already strict.

The reason I noticed the problem of laziness and unsafeFreezeRef was that the EDSL evaluator turned out to be too lazy when writing to references. So I had a program which produced perfectly fine C code, but which ran into a loop when evaluating it in Haskell. The solution was to make the evaluator more strict; see this commit and this.

Thursday, January 22, 2015

Trying out the Bound library

[UPDATE, 2015-02-23: The post has been slightly updated based on comments by Edward Kmett.]

I am trying to understand the Bound library, and in particular its limitations (if any) in terms of expressiveness and efficiency. I will here try to implement a small subset of the Feldspar EDSL using Bound.

DISCLAIMER: Statements in this text may very well reflect my misunderstandings rather than actual truths about Bound.

Note that I’m assuming basic familiarity with Bound. If you’re not familiar with the library, see its documentation.

I would be happy to receive feedback and hints for how to improve the code.

The whole code can be found here: FeldsparBound.lhs

Implementing Feldspar using Bound

Here is the subset of Feldspar that we will work with:

data Data a
    = Var a
    | Int Int
    | Add (Data a) (Data a)
    | Ix (Data a) (Data a)
    | Par (Data a) (Scope () Data a)
  deriving (Eq,Show,Functor,Foldable,Traversable)

Ix represents array indexing, and Par represents array construction, corresponding to the following function on lists:

parList :: Int -> (Int -> a) -> [a]
parList l f = [f i | i <- [0 .. l-1]]

For simplicity, I chose to put the Scope directly on Par instead of going through a lambda as we do in Feldspar.

The Monad instance defines substitution in a standard way:

instance Monad Data where
    return = Var
    Var a   >>= k = k a
    Int a   >>= k = Int a
    Add a b >>= k = Add (a >>= k) (b >>= k)
    Ix a i  >>= k = Ix (a >>= k) (i >>= k)
    Par l f >>= k = Par (l >>= k) (f >>>= k)

instance Applicative Data where pure = return; (<*>) = ap

User interface

Here I choose to represent free variables as strings:

type Name = String

type D = Data Name

The user interface is then as follows:

v :: Name -> D
v = Var

instance Num D where
    fromInteger = Int . fromInteger
    (+) = Add

(!) :: D -> D -> D
(!) = Ix

par :: Name -> D -> D -> D
par i l f = Par l $ abstract1 i f

Here are two simple example expressions:

ex1 = par "i" 10 $ v "i" + 0
  -- Corresponds to `parList 10 $ \i -> i + 0`

ex2 = par "i" 10 $ v "arr" ! (v "i" + 0)
  -- Corresponds to `parList 10 $ \i -> arr !! (i + 0)`

In the definition of par, we use abstract1 to bind the free variable (converting it to deBruijn). It has to traverse the whole body, so the complexity of term construction is quadratic in the number of nested binders.


Now to the interesting question of how one goes about transforming terms in the above representation.

Ideally, we want to do this without looking at deBruijn indices at all. Two questions arise:

  1. How to apply a transformation under a Scope?
  2. How to pattern match on binders and variables and rewrite them?

For the first question, my initial naive attempt was to use a combination of instantiate1 and abstract1 to lift a transformation on D to a transformation on Scope:

transScopeBAD :: (D -> D) -> Scope () Data Name -> Scope () Data Name
transScopeBAD f = abstract1 "x" . f . instantiate1 (Var "x")

The use of instantiate1 changes all occurrences of the bound variable to the free variable Var "x" and abstract1 re-binds that variable after f has been applied to the body. Unfortunately, this doesn’t work. If there are nested scopes, their corresponding variables will all be renamed to "x", so the innermost abstract1 is going to capture all of them. Also, since abstract1 has to traverse the whole body, nested uses of transScope are going to be inefficient.

A better approach is to work more directly with the representation and use fromScope and toScope (here with the type specialized to Data):

transScope :: (Data (Var b a) -> Data (Var b a)) -> Scope b Data a -> Scope b Data a
transScope f = toScope . f . fromScope

(This version of transScope was suggested by Edward. In my initial attempt, I used hoistScope instead.)

Note how the type clearly expresses the fact that the term passed to the transformation function f has one extra variable (of type b) in addition to the free variables (of type a). This information was missing in transScopeBAD.

Using transScope, we can now write the following simplifier:

simplify :: Data a -> Data a
simplify e = simp $ case e of
    Add a b -> Add (simplify a) (simplify b)
    Ix a i  -> Ix (simplify a) (simplify i)
    Par l f -> Par (simplify l) $ transScope simplify f
    _ -> e
  where
    simp (Add (Int 0) b) = b
    simp (Add a (Int 0)) = a
    -- The catch-all case `simp e = e` will come later

Note that it is important for simplify to be polymorphic in the free variables since we’re using it at a different type when passing it to transScope. This requirement was not obvious to me at first. (In retrospect, it is obvious that the polymorphic type is preferred, since I don’t want the simplifier to do anything with the free variables.)

To make the simplifier more interesting, we add a case corresponding to the following rule for parList (defined above):

parList l f !! i   ==>   f i

This turns out to be easy:

    simp (Ix (Par l f) i) = instantiate1 i f

Another similar rule that we may want to express is

parList l (\i -> arr !! i)   ==>   arr   (where i `notElem` freeVars arr)

(In both of these rules, we ignore out-of-bounds indexing for simplicity.)
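Before encoding them with Bound, both rules can be sanity-checked at the list level, using parList as defined earlier (rule1/rule2 are names of mine; rule 2 is checked under the extra assumption that l equals the length of the array):

```haskell
parList :: Int -> (Int -> a) -> [a]
parList l f = [f i | i <- [0 .. l-1]]

-- parList l f !! i  ==>  f i   (for 0 <= i < l)
rule1 :: Bool
rule1 = parList 10 (+ 1) !! 3 == (+ 1) 3

-- parList l (\i -> arr !! i)  ==>  arr   (when l == length arr)
rule2 :: Bool
rule2 = parList (length arr) (arr !!) == arr
  where arr = [5, 6, 7 :: Int]
```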

To express this rule, we need to pattern match on binders and variables. My initial attempt was the following:

--     simp (Par l f)
--       | Ix arr i <- instantiate1 (v "i") f
--       , i == v "i"
--       , "i" `Foldable.notElem` arr
--       = arr

This looks quite nice from a readability perspective, but it doesn’t work because transScope does not permit free variables as strings. (And, in general, using instantiate1 in this way has the same problem as transScopeBAD above, that the variable "i" might easily clash with other instantiated variables.)

Instead, it seems that one has to work more directly with the deBruijn representation:

    simp (Par l f)
      | Ix arr i   <- fromScope f
      , Var (B ()) <- i
      , Just arr'  <- lower arr
      = arr'

    -- Catch-all case to complete the definition of `simp`:
    simp e = e

The lower function tries to lower the scope of an expression, and succeeds if the expression does not refer to the closest bound variable:

lower :: Data (Var b a) -> Maybe (Data a)
lower = traverse $ unvar (const Nothing) Just

Note the use of fromScope rather than unscope above. This is to change the generalized deBruijn representation into the standard representation which lower expects.
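The idea behind lower can be seen with plain Either standing in for Bound’s Var type (a standalone sketch of mine: Left () plays the role of the bound variable B (), Right the role of a free variable F). Lowering succeeds exactly when the bound variable does not occur:

```haskell
-- Succeeds iff no Left () (i.e. no occurrence of the bound variable)
lowerVars :: [Either () a] -> Maybe [a]
lowerVars = traverse (either (const Nothing) Just)
```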

I would have hoped to avoid matching explicitly on the deBruijn representation of i (namely Var (B ())). But I guess this is just unavoidable?

The check that i does not occur in arr is replaced by the pattern match on the result of lower. One nice thing here is that the type system does not allow us to return arr directly, so we cannot accidentally forget to check whether i occurs in arr (which I tend to forget every time!).


*Main> ex1
Par (Int 10) (Scope (Add (Var (B ())) (Int 0)))

*Main> simplify ex1
Par (Int 10) (Scope (Var (B ())))

*Main> ex2
Par (Int 10) (Scope (Ix (Var (F (Var "arr"))) (Add (Var (B ())) (Int 0))))

*Main> simplify ex2
Var "arr"


At first I was tempted to use instantiate1 and abstract1 everywhere and to represent free variables as strings. Although this leads to code that looks readable, it does not work in general:

  • Representing free variables as strings can easily lead to capturing bugs
  • It leads to inefficient code:
    • abstract1 has to traverse the whole expression, so it should be used sparingly
    • instantiate1 may not have to traverse the whole expression due to the generalized deBruijn representation, but I’m not sure what this means in practice

From a capturing point of view, it may be considered OK to use abstract1 and variables as strings the way I did in the user interface above. The user will just have to make sure to use distinct variable names. However, due to the nested uses of abstract1, it is still very inefficient to construct terms in this way. A better way is probably to construct terms in a different representation and then convert them all to deBruijn in a single pass.

Once I realized that I couldn’t use instantiate1 and abstract1 for transformations under a Scope, I was forced to use fromScope/toScope, which expose the representation of variables a bit. However, this turned out to be a very good thing that just made the code safer.

The resulting code does not use abstract1, so it avoids that source of inefficiency. There is one use of instantiate1, but that is a substitution coming from the rewrite rule – it has nothing to do with the Bound library (and it could be avoided by rewriting to a let binding instead of performing the substitution).

In the end, there appears to be only one problematic inefficiency left: the lower function. In order to know whether an expression refers to the closest bound variable, lower naturally has to traverse the whole expression. Of course, this is true regardless of how variables are represented. But in a standard named representation of variables, it is e.g. possible to return a set of free variables from simplify, so that membership can be tested without traversing the expression. I don’t see how that could be done here.

In conclusion, I see many advantages of using Bound, especially when it comes to safety and not worrying about capturing. However, there are some things that make me hesitate on adopting it for something like Feldspar:

  • I wasn’t able to completely hide the deBruijn indices in the simplifier. Can it be done? Is this going to be a big problem for people trying to express rewrite rules?
  • The lower function is a potential source of inefficiency in a simplifier. Can it be fixed? Will there be other inefficiencies as the rules get more complex?

If someone has the answers to those questions, I would be happy to know!