Friday, November 9, 2012

Relating Syntactic and CompData

Update: toAST/fromAST have been rewritten using catamorphisms.

Syntactic and CompData are two packages for modeling compositional data types. Both packages use the technique from Data Types à la Carte to model polymorphic variants, but they model recursion quite differently: Syntactic uses an application tree, while CompData uses a type-level fixed-point.

This post demonstrates the relation between the two approaches by showing how to translate one to the other. As a side effect, I also show how to obtain pattern functors for free, avoiding the need to derive instances for various type classes in CompData.

To remain focused on the matter at hand, I will only introduce Syntactic briefly. For more information, see this paper. I will also assume some familiarity with CompData.


We start by importing the necessary stuff from Syntactic:

import Language.Syntactic
    ( AST (..)
    , Full
    , (:->)
    , DenResult
    , Args (..)
    , mapArgs
    , appArgs
    , fold
    )
Data types in Syntactic correspond to Generalised Compositional Data Types in CompData. These are found under Data.Comp.Multi in the module hierarchy:
import Data.Comp.Multi
    ( Cxt (Term)
    , Term
    , HFunctor (..)
    , cata
    )

import Data.Comp.Multi.HFoldable (HFoldable (..))

Cxt is the type-level fixed-point operator, which I will use by its alias Term. The type classes HFunctor and HFoldable are used for traversing terms.

Our example will also make use of the Monoid class.

import Data.Monoid


The idea behind Syntactic is to model a GADT such as the following

data Expr a where
    Int :: Int -> Expr Int
    Add :: Expr Int -> Expr Int -> Expr Int

as a simpler non-recursive data type:

data ExprSym a where
    IntSym :: Int -> ExprSym (Full Int)
    AddSym :: ExprSym (Int :-> Int :-> Full Int)

If we think of Expr as an AST, then ExprSym defines the symbols in this tree, but not the tree itself. The type parameter of each symbol is called the symbol signature. The signature for Add – (Int :-> Int :-> Full Int) – should be read as saying that Add expects two integer sub-expressions and produces an integer result. In general, an ordinary constructor of the type

C :: T a -> T b -> ... -> T x

is modeled as a symbol of the type

CSym :: TSym (a :-> b :-> ... :-> Full x)

The AST type can be used to make an actual AST from the symbols:

type Expr' a = AST ExprSym (Full a)

This type is isomorphic to Expr. So, in what way is Expr' better than Expr? The answer is that Expr' can be traversed generically by matching only on the constructors of AST. For example, to compute the size of an AST, we only need two cases:

size :: AST sym sig -> Int
size (Sym _)  = 1
size (s :$ a) = size s + size a

The first case says that a symbol has size 1, and the second says how to join the sizes of a symbol and a sub-expression. (Note that s can contain nested uses of (:$).) Another advantage of using AST is that it supports composable data types.

Listing arguments

The size function is a bit special in that it operates on every symbol application in a compositional way. For more complex generic traversals, it is often necessary to operate on a symbol and all its arguments at once. For example, Syntactic provides the following function for folding an AST:

fold :: (forall sig . sym sig -> Args c sig -> c (Full (DenResult sig)))
     -> AST sym (Full a)
     -> c (Full a)

The caller provides a function that, given a symbol and a list of results for its sub-expressions, produces a result for the combined expression. fold uses this function to compute a result for the whole AST. The type of fold requires some explanation. The c type (the folded result) is indexed by the type of the corresponding sub-expression; i.e. folding a sub-expression of type AST sym (Full a) results in a value of type c (Full a). The type function DenResult gives the result type of a symbol signature; e.g. DenResult (Int :-> Full Bool) = Bool.

For our current purposes, the most important part of the type is the argument list. Args c sig is a list of results, each one corresponding to an argument in the signature sig (again, more information is found in the Syntactic paper). Args is defined so as to ensure that the length of the list is the same as the number of arguments in the signature. For example, Args c (Int :-> Int :-> Full Int) is a list of two elements, each one of type c (Full Int).
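To make the shape of Args concrete, here is a simplified, self-contained version of the data type (my own rendering; the definition in Syntactic differs in details) together with a length function and a two-element example. The Size result type is a hypothetical stand-in for a folded result:

```haskell
{-# LANGUAGE GADTs #-}
{-# LANGUAGE TypeOperators #-}
{-# LANGUAGE EmptyDataDecls #-}

-- Simplified stand-ins for Syntactic's Full and (:->)
data Full a
data a :-> b
infixr 5 :->

-- One element of type c (Full a) per argument position in sig
data Args c sig where
  Nil  :: Args c (Full a)
  (:*) :: c (Full a) -> Args c sig -> Args c (a :-> sig)
infixr 5 :*

-- The signature forces the length of the list
numArgs :: Args c sig -> Int
numArgs Nil       = 0
numArgs (_ :* as) = 1 + numArgs as

-- A hypothetical result type, and a list matching the
-- signature Int :-> Int :-> Full Int (exactly two elements)
newtype Size a = Size Int

twoResults :: Args Size (Int :-> Int :-> Full Int)
twoResults = Size 1 :* Size 1 :* Nil
```

Trying to add or remove an element of twoResults is a type error, which is exactly the length guarantee described above.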

Pattern functors for free

Now to the gist of this post. Using Args, we can easily define a pattern functor (actually a higher-order functor) corresponding to a symbol:

data PF sym f a where
    PF :: sym sig -> Args f sig -> PF sym f (Full (DenResult sig))

A pattern functor here is simply a symbol paired with an argument list. Each argument is of type f (Full b), where b is the type of the corresponding sub-expression. The HFunctor instance is straightforward (mapArgs is provided by Syntactic):

instance HFunctor (PF sym) where
    hfmap f (PF s as) = PF s (mapArgs f as)

Now we can actually use the symbols defined earlier to make an expression type using CompData’s fixed-point operator:

type Expr'' = Term (PF ExprSym)

The correspondence between expressions based on AST and expressions based on Term can be seen by the following functions mapping back and forth between them (appArgs is provided by Syntactic):1

toAST :: Term (PF sym) (Full a) -> AST sym (Full a)
toAST = cata (\(PF s as) -> appArgs (Sym s) as)

fromAST :: AST sym (Full a) -> Term (PF sym) (Full a)
fromAST = fold (\s as -> Term (PF s as))

The translations are simply defined using catamorphisms; cata for CompData terms and fold for Syntactic terms.
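For readers without either package installed, the whole construction can be replayed in miniature. The following self-contained sketch uses simplified, hand-rolled stand-ins for the relevant types of both libraries (the real definitions differ in details; fold is written directly rather than via cata), and round-trips an example term:

```haskell
{-# LANGUAGE GADTs #-}
{-# LANGUAGE TypeOperators #-}
{-# LANGUAGE TypeFamilies #-}
{-# LANGUAGE RankNTypes #-}
{-# LANGUAGE ScopedTypeVariables #-}
{-# LANGUAGE EmptyDataDecls #-}

-- Simplified stand-ins for Syntactic's types
data Full a
data a :-> b
infixr 5 :->

type family DenResult sig
type instance DenResult (Full a)    = a
type instance DenResult (a :-> sig) = DenResult sig

data AST sym sig where
  Sym  :: sym sig -> AST sym sig
  (:$) :: AST sym (a :-> sig) -> AST sym (Full a) -> AST sym sig
infixl 1 :$

data Args c sig where
  Nil  :: Args c (Full a)
  (:*) :: c (Full a) -> Args c sig -> Args c (a :-> sig)
infixr 5 :*

mapArgs :: (forall a . c (Full a) -> d (Full a)) -> Args c sig -> Args d sig
mapArgs _ Nil       = Nil
mapArgs f (a :* as) = f a :* mapArgs f as

-- Apply a symbol to a full list of arguments
appArgs :: AST sym sig -> Args (AST sym) sig -> AST sym (Full (DenResult sig))
appArgs s Nil       = s
appArgs s (a :* as) = appArgs (s :$ a) as

-- Fold an AST by collecting sub-results into an Args list
fold :: forall sym c a .
        (forall sig . sym sig -> Args c sig -> c (Full (DenResult sig)))
     -> AST sym (Full a) -> c (Full a)
fold f a0 = go a0 Nil
  where
    go :: AST sym sig -> Args c sig -> c (Full (DenResult sig))
    go (Sym s)  as = f s as
    go (s :$ b) as = go s (fold f b :* as)

-- Simplified stand-in for CompData's Term, and the free pattern functor
newtype Term f a = Term (f (Term f) a)

data PF sym f a where
  PF :: sym sig -> Args f sig -> PF sym f (Full (DenResult sig))

toAST :: Term (PF sym) (Full a) -> AST sym (Full a)
toAST (Term (PF s as)) = appArgs (Sym s) (mapArgs toAST as)

fromAST :: AST sym (Full a) -> Term (PF sym) (Full a)
fromAST = fold (\s as -> Term (PF s as))

-- The example symbols from the post
data ExprSym a where
  IntSym :: Int -> ExprSym (Full Int)
  AddSym :: ExprSym (Int :-> Int :-> Full Int)

size :: AST sym sig -> Int
size (Sym _)  = 1
size (s :$ a) = size s + size a

e1 :: AST ExprSym (Full Int)
e1 = Sym AddSym :$ Sym (IntSym 1) :$ Sym (IntSym 2)
-- size (toAST (fromAST e1)) == size e1 == 3
```

The miniature makes the isomorphism tangible: fromAST flattens each spine of (:$) applications into a PF node, and toAST rebuilds the spine with appArgs.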

To make the experiment a bit more complete, we also give an instance of HFoldable. For this, we need a helper function which is currently not included in Syntactic:

foldrArgs
    :: (forall a . c (Full a) -> b -> b)
    -> b
    -> (forall sig . Args c sig -> b)
foldrArgs f b Nil       = b
foldrArgs f b (a :* as) = f a (foldrArgs f b as)

After this, the instance follows naturally:

instance HFoldable (PF sym) where
    hfoldr f b (PF s as) = foldrArgs f b as


To try out the code, we define a small expression using the Expr' representation:

expr1 :: Expr' Int
expr1 = int 2 <+> int 3 <+> int 4

int :: Int -> Expr' Int
int i = Sym (IntSym i)

(<+>) :: Expr' Int -> Expr' Int -> Expr' Int
a <+> b = Sym AddSym :$ a :$ b

Next, we define the size function for expressions based on CompData’s Term (Sum is provided by Data.Monoid):

size' :: HFoldable f => Term f a -> Sum Int
size' (Term f) = Sum 1 `mappend` hfoldMap size' f

And, finally, the test:

*Main> size expr1
5

*Main> size' $ fromAST expr1
Sum {getSum = 5}


Hopefully, this post has provided some insights into the relation between Syntactic and CompData. The key observation is that symbol types defined for Syntactic give rise to free pattern functors that can be used to define terms in CompData. As we can translate freely between the two representations, there doesn't appear to be any fundamental difference between them. A practical difference is that CompData uses type classes like HFunctor and HFoldable for generic traversals, while Syntactic provides more direct access to the tree structure. As an interesting aside, the free pattern functors defined in this post avoid the need to use Template Haskell to derive instances for higher-order functor type classes in CompData.

The code in this post is available for download and has been tested with GHC 7.6.1, syntactic-1.5.1 and compdata-0.6.1.

Download the source code (literate Haskell).

  1. As noted by Patrick Bahr, the toAST/fromAST translations destroy compositionality since a compound domain (S1 :+: S2) in Syntactic is mapped to a monolithic functor PF (S1 :+: S2) in CompData. However, with suitable type-level programming it should be possible to make the mapping to the compound type (PF S1 :+: PF S2) instead.

Tuesday, April 17, 2012

DSLs Are Generic Solutions

Last week, I wrote a grant proposal to develop generic methods for implementation of embedded domain-specific languages (DSLs). I tried to argue that DSLs are becoming increasingly important as software development is getting more and more complicated. The goal of the project is to make it dead easy to implement high-quality DSLs embedded in Haskell. As I was trying to promote this idea, I realized that it might not at all be obvious to the reviewer that an ever-increasing number of DSLs is a good thing. Is there not a risk of just causing a confusing jungle of different language flavors?

This is a valid concern; if the primary purpose of introducing a DSL is to provide domain-specific syntax, then having a multitude of different DSLs is indeed going to be confusing. However, the most important part of DSLs is not their syntax (even if syntax is important). My view of a DSL is this:1

A DSL is a way for an experienced programmer to make a generic solution available to other (perhaps less experienced) programmers in such a way that only problem-specific information needs to be supplied.

The Unix DSL jungle

Consider the following example: A Unix shell has access to lots of small programs aimed to solve specific classes of problems. For example, the program ls is a solution to the problem of listing the contents of a directory. Moreover, it is a generic solution, since it can list the contents in many different ways. Specific solutions are obtained by providing options at the command line. The command ls -a will include hidden files in the listing, and ls -a -R will additionally list the contents of sub-directories recursively. The different options to ls can be seen as a DSL to specialize ls to specific use-cases. ls is a very successful DSL – compare to programming the same functionality using lower-level constructs, e.g. looping over the directory contents testing whether or not a file is hidden, etc.2 Still, the important aspect of ls is not its syntax (though, again, that is also important), but rather the program ls itself (the DSL implementation), which is a generic solution that can be reused in many different situations.

If we consider all commands in a Unix shell environment, it may appear as a jungle of different DSLs. However, this is not at all a problem. Every command is a rather limited sub-DSL which can usually be summarized in a relatively short manual page. Moreover, the interaction between the sub-DSLs is done using a common data format, text strings, which makes it possible to write shell programs that seamlessly combine different sub-DSLs.

What would be confusing is if there were several different versions of ls, all with slightly different syntax. Luckily that is not the case.

The Hackage library jungle

We can also draw a parallel to programming libraries. A library can be seen as a DSL. (Especially when considering embedded DSLs, the border to libraries is not very clear.) The Haskell package database Hackage has, since its launch in 2007(?), received nearly 4000 packages (as of 2012-04-16). There are quite some occurrences of confusingly similar libraries, which may give a feeling of Hackage as a jungle, but all in all Hackage has been a huge success. The risk of confusion does not mean that we should stop developing libraries, merely that some extra care is needed to specify the purpose of and relation between different libraries.

Reusability is key

I have claimed that a DSL is a way for an experienced programmer to make a generic solution available to other (perhaps less experienced) programmers in such a way that only problem-specific information needs to be supplied. Clearly, syntax is an important component to this definition (“only problem-specific information”), but I think the main focus should be on “making a generic solution available”. It is all about reusing programming effort! We might just as well say that a DSL is a way to have a one-time implementation effort potentially benefit a large number of DSL users for a long time.

As long as there are generalizable solutions out there, there will be a need for new DSLs!

  1. The general definition is, of course, broader.

  2. OK, I know at least one language where looping and testing can be expressed very succinctly, but you get the point.

Difference between LTL and CTL

(Note that this article assumes some intuitive understanding of CTL.)

I’m currently teaching a course on Hardware Description and Verification, in which we cover computation tree logic (CTL) model checking. In preparing for the course, I wanted to fully understand the differences between CTL and LTL. The common informal explanation that LTL has linear time while CTL has branching time is useless without further explanation. I had a certain understanding of CTL, and my initial assumption was that LTL is what you get by excluding all E formulas from CTL. This assumption turned out to be wrong.

My assumption would imply that LTL is a subset of CTL, but the Wikipedia page claims that neither logic is a subset of the other. It seems like I’m not alone in being confused about the difference between the two logics: This Stack Exchange page has the same question. The answer on that page talks about the LTL formula

FGp

which, apparently, is not the same as the CTL formula

AFAGp
(as I previously thought). After reading the Stack Exchange answer I started to get a sense of the difference, but I still wasn’t able to formulate it easily. So I had no more excuses not to look into the actual formal semantics of LTL and CTL. This survey by Emerson served as a good reference for that.

Common superset: CTL*

The difference between LTL and CTL is easiest to understand by considering CTL*, which has both LTL and CTL as subsets. CTL* has two syntactic categories giving rise to two different kinds of formulas: state formulas and path formulas. A state formula is something that holds for a given state in a system (modeled as a Kripke structure), while a path formula holds for a given path. A path is an infinite sequence of states such that each consecutive pair of states is in the transition relation; i.e. a trace of an infinite run of the system. Note that, even though a state formula is a property of a single state, it can still talk about future states by using the path quantifiers A and E.

Now, the important thing to note is that an LTL formula is a (restricted) CTL* path formula while a CTL formula is a (restricted) CTL* state formula. Path formulas are not very interesting on their own, since one is usually interested in expressing properties of all or some paths from the initial state. Hence, the real meaning of an LTL formula is obtained by adding an implicit A quantifier to it turning it into a state formula. Thus the meaning of the LTL formula f is the meaning of the CTL* formula Af.

The example

With this understanding, we can try to reformulate the above examples in English:

The LTL formula FGp can be read as

“Every path (implicit A) will eventually (F) reach a point after which p is always true (Gp).”

Or perhaps more clearly:

“Every path has a finite prefix after which p is always true.”

The CTL formula AFAGp can be read as

“Every path (A) will eventually (F) reach a state from which every path (A) has the property that p is always true (Gp).”
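Spelled out in path semantics (notation mine: paths(s) is the set of paths starting in s, and π_i is the i-th state of path π), the two readings become:

```latex
% LTL FGp, with the implicit A:
s \models A\,FG\,p
  \iff \forall \pi \in \mathrm{paths}(s).\;
       \exists i.\; \forall j \geq i.\; \pi_j \models p

% CTL AFAGp:
s \models AF\,AG\,p
  \iff \forall \pi \in \mathrm{paths}(s).\;
       \exists i.\; \forall \pi' \in \mathrm{paths}(\pi_i).\;
       \forall j.\; \pi'_j \models p
```

The difference sits in the inner quantifier: in the LTL formula the witnessing position i only has to work along π itself, while in the CTL formula the state π_i must satisfy Gp along all paths continuing from it.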

These formulas are subtly different, and they can be distinguished by the following system of three states:

S1 → S1 or S2 (atomic: p)

S2 → S3 (atomic: ¬p)

S3 → S3 (atomic: p)

Every run of this system will either (1) stay in S1 forever, or (2) wait some finite time and then make a transition to S2 and on to S3. In both cases, the trace has a finite prefix after which p is always true. Hence the LTL formula holds.

However, the CTL formula does not hold, because there is a path that never reaches a state at which AGp holds. This is the path in which the system stays in S1 forever. Even if it stays in S1 forever, it always has the possibility to escape to S2.
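The CTL side of this argument can be checked mechanically, since CTL formulas are evaluated by fixpoint iteration over the finite state set. Below is a small self-contained sketch (the encoding and all names are mine, not from any model-checking library) that computes AG p and then AF AG p for the three-state system:

```haskell
import Data.List (nub)

-- Hand-rolled encoding of the three-state system above
states :: [Int]
states = [1, 2, 3]

succs :: Int -> [Int]
succs 1 = [1, 2]
succs 2 = [3]
succs 3 = [3]
succs _ = []

-- p holds in S1 and S3 but not in S2
holdsP :: Int -> Bool
holdsP s = s /= 2

-- AG p: greatest fixpoint of the states where p holds and
-- every successor remains inside the set
agP :: [Int]
agP = gfp (filter holdsP states)
  where
    gfp xs | xs == xs' = xs
           | otherwise = gfp xs'
      where xs' = [ s | s <- xs, all (`elem` xs) (succs s) ]

-- AF phi: least fixpoint of the states where phi holds now,
-- or all successors are already in the set
af :: [Int] -> [Int]
af phi = lfp phi
  where
    lfp xs | xs == xs' = xs
           | otherwise = lfp xs'
      where xs' = nub (xs ++ [ s | s <- states, all (`elem` xs) (succs s) ])

-- agP == [3]; af agP contains S2 and S3 but not S1,
-- confirming that AF AG p fails at S1
```

Running the iteration by hand: AG p shrinks {S1, S3} to {S3} (S1 loses membership because its successor S2 is outside), and AF AG p then grows {S3} only to {S2, S3}, never absorbing S1.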


As demonstrated by the above example, the difference between LTL and CTL (apart from the absence of E in LTL) is that an LTL formula can be verified by considering each path in isolation. If each individual path obeys the path formula, the LTL formula is true. To verify a CTL formula, one may also have to consider the alternative possibilities for each state in a path.