In a language like Haskell, where lists are defined as <hask>Nil | Cons a (List a)</hask>, creating data structures like cyclic or doubly-linked lists seems impossible. However, this is not the case: laziness allows for such definitions, and the procedure of doing so is called ''tying the knot''. The simplest example:

<haskell>
cyclic = let x = 0 : y
             y = 1 : x
         in x
</haskell>

This creates the cyclic list consisting of 0 and 1. It is important to stress that this procedure allocates only two numbers, 0 and 1, in memory, making this a truly cyclic list.
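To see the sharing in action, we can take a finite prefix of the infinite unfolding (the definition of <code>cyclic</code> is repeated here so the snippet stands on its own):

```haskell
-- the knot-tied list from above
cyclic :: [Int]
cyclic = let x = 0 : y
             y = 1 : x
         in x

-- taking a finite prefix terminates, even though the list never ends
prefix :: [Int]
prefix = take 6 cyclic   -- [0,1,0,1,0,1]
```

No matter how long a prefix we take, only the two original cons cells are ever traversed.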

The knot analogy stems from the fact that we produce two open-ended objects and then link their ends together. Evaluation of the above therefore looks something like

<haskell>
cyclic
= x
= 0 : y
= 0 : 1 : x    -- Knot! Back to the beginning.
= 0 : 1 : 0 : y
= ...          -- etc.
</haskell>

It can twist your brain a bit the first few times you do it, but it works fine; remember, Haskell is a ''lazy'' language. This means that while you are building the node, you can set the children to the final values ''straight away'', even though you don't know them yet!
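For instance, here is a node type whose <code>next</code> field is set, at construction time, to the very node being built (an illustrative sketch; these names are not from the article):

```haskell
data Node a = Node { value :: a, next :: Node a }

-- tie the knot: the node's next pointer is the node itself
selfLoop :: a -> Node a
selfLoop x = let n = Node x n in n

-- following next any number of times stays on the same node
third :: Node Char
third = next (next (selfLoop 'a'))
```

The child (<code>n</code>) is referenced in the very expression that defines it; laziness means the reference is a promise that is fulfilled the moment the node exists.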

== Overview ==

This example illustrates different ways to define recursive data structures.

To demonstrate the different techniques, we show how to solve the same problem (writing an interpreter for a simple programming language) in three different ways. This is a nice example because (i) it is interesting, (ii) the abstract syntax of the language contains mutually recursive structures, and (iii) the interpreter illustrates how to work with the recursive structures.

(''It would be useful to have some more text describing the examples.'')
== Download the files ==

== Other Examples ==

* [https://www.twanvl.nl/blog/haskell/Knuth-Morris-Pratt-in-Haskell Knuth-Morris-Pratt algorithm for substring matching]
== How to build a cyclic data structure ==

Here's an example. Say you want to build a circular, doubly-linked list, given a standard Haskell list as input. The back pointers are easy, but what about the forward ones?

<haskell>
data DList a = DLNode (DList a) a (DList a)

mkDList :: [a] -> DList a
mkDList [] = error "must have at least one element"
mkDList xs = let (first,last) = go last xs first
             in  first
  where go :: DList a -> [a] -> DList a -> (DList a, DList a)
        go prev []     next = (next,prev)
        go prev (x:xs) next = let this        = DLNode prev x rest
                                  (rest,last) = go this xs next
                              in  (this,last)
</haskell>
<haskell>
takeF :: Integer -> DList a -> [a]
takeF 0 _                 = []
takeF n (DLNode _ x next) = x : takeF (n-1) next

takeR :: Show a => Integer -> DList a -> [a]
takeR 0 _                 = []
takeR n (DLNode prev x _) = x : takeR (n-1) prev
</haskell>

(<code>takeF</code> and <code>takeR</code> are simply to let you look at the results of <code>mkDList</code>: they take a specified number of elements, either forward or backward.)

The trickery takes place in <code>go</code>. <code>go</code> builds a segment of the list, given a pointer to the node off to the left of the segment and to the node off to the right. Look at the second case of <code>go</code>: we build the first node of the segment, using the given <code>prev</code> pointer for the left link and the node pointer we are ''about'' to compute in the next step for the right link.

This goes on right the way through the segment. But how do we manage to create a ''circular'' list this way? How can we know right at the beginning what the pointer to the end of the list will be?

Take a look at <code>mkDList</code>. Here, we simply take the <code>(first,last)</code> pointers we get from <code>go</code> and ''pass them back in'' as the <code>next</code> and <code>prev</code> pointers respectively, thus tying the knot. This all works because of lazy evaluation.
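Putting it all together, here is a self-contained sketch (restating the doubly-linked-list machinery so it can be loaded on its own, with local names changed slightly to avoid shadowing <code>last</code>) together with the behavior it exhibits:

```haskell
data DList a = DLNode (DList a) a (DList a)

mkDList :: [a] -> DList a
mkDList [] = error "must have at least one element"
mkDList xs = let (first, lst) = go lst xs first in first
  where
    go :: DList a -> [a] -> DList a -> (DList a, DList a)
    go prev []       next = (next, prev)
    go prev (x:rest) next =
        let this        = DLNode prev x more
            (more, lst) = go this rest next
        in (this, lst)

takeF :: Integer -> DList a -> [a]
takeF 0 _                 = []
takeF n (DLNode _ x next) = x : takeF (n-1) next

takeR :: Integer -> DList a -> [a]
takeR 0 _                 = []
takeR n (DLNode prev x _) = x : takeR (n-1) prev

-- takeF 7 (mkDList [1,2,3]) yields [1,2,3,1,2,3,1]
-- takeR 5 (mkDList [1,2,3]) yields [1,3,2,1,3]
```

Walking forward cycles through the elements in order; walking backward from the first node immediately reaches the last one, confirming that both knots were tied.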
== Tying bigger knots ==


The above works for simple cases, but sometimes you need to construct some very complex data structures, where the pattern of recursion is not known at compile time. If this is the case, you may need to use an auxiliary dictionary data structure to help you tie your knots.

Turning the indirect recursion into direct recursion requires tying knots, and it's not immediately obvious how to do this by holding onto lazy pointers, because any state can potentially point to any other state (or, indeed, every other state).

What we can do is introduce a dictionary data structure to hold the (lazily evaluated) new states; introducing a recursive reference can then be done with a simple dictionary lookup. In principle, you could use any dictionary data structure (e.g. <code>Map</code>). However, in this case, the state numbers are dense integers, so it's probably easiest to use an array:
<haskell>
indirectToDirect :: IndirectDfa a -> DirectDfa a
</haskell>
As noted previously, pretty much any dictionary data structure will do. Often you can even use structures with slow lookup (e.g. association lists). This is because fully lazy evaluation ensures that you only pay for each lookup the first time you use it; subsequent uses of the same part of the data structure are effectively free.
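The array-based knot can be sketched as follows (with made-up types, since the article's <code>IndirectDfa</code> and <code>DirectDfa</code> definitions are not shown in this revision): each directly recursive state is built by indexing into the very array being defined.

```haskell
import Data.Array

-- a directly recursive DFA state: accepting flag plus transitions on 0 and 1
data Direct = Direct Bool Direct Direct

-- build from rows of (acceptingFlag, nextStateOn0, nextStateOn1),
-- with states numbered densely from 0; state 0 is the start state
fromTable :: [(Bool, Int, Int)] -> Direct
fromTable tbl = arr ! 0
  where
    -- the knot: the array's elements refer back into the array itself
    arr = listArray (0, length tbl - 1)
                    [ Direct acc (arr ! t0) (arr ! t1) | (acc, t0, t1) <- tbl ]

-- run the resulting directly recursive automaton on a string of 0s and 1s
runDirect :: Direct -> [Int] -> Bool
runDirect (Direct acc _ _) []     = acc
runDirect (Direct _ z o)   (b:bs) = runDirect (if b == 0 then z else o) bs
```

For example, <code>fromTable [(False,0,1),(True,1,0)]</code> builds a two-state automaton accepting strings with an odd number of 1s; the integer indices have disappeared from the result, leaving only direct lazy pointers.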

== Transformations of cyclic graphs and the Credit Card Transform ==

Cycles certainly make it difficult to transform graphs in a pure non-strict language. Cycles in a source graph require us to devise a way to mark traversed nodes; however, we cannot mutate nodes, and cannot even compare nodes with a generic (''derived'') equality operator. Cycles in a destination graph require us to keep track of the already constructed nodes so we can complete a cycle.

An obvious solution is to use a state monad and <tt>IORef</tt>s. There is also a monad-less solution, which is less obvious: seemingly we cannot add a node to the dictionary of already constructed nodes until we have built the node. This fact means that we cannot use the updated dictionary when building the descendants of the node, which need the updated dictionary to link back. The problem can be overcome, however, with a ''credit card transform'' (a.k.a. the "buy now, pay later" transform). To avoid hitting the bottom, we just have to "pay" by the "due date".

For illustration, we will consider the problem of printing out a non-deterministic finite automaton (NFA) and transforming it into a deterministic finite automaton (DFA). Both the NFA and the DFA are represented as cyclic graphs. The problem has been discussed on the Haskell/Haskell-Cafe mailing lists. The automata in question were to recognize strings over a binary alphabet.

A state of an automaton is represented by the following record:
<haskell>
data FaState l =
    FaState { label   :: l
            , acceptQ :: Bool
            , trans0  :: [FaState l]
            , trans1  :: [FaState l] }
</haskell>

whose fields have the obvious meaning. <code>label</code> is used for printing out and comparing states. The flag <code>acceptQ</code> tells if the state is final. Since an <code>FaState</code> can generally represent a non-deterministic automaton, transitions are ''lists'' of states.
An automaton is then a list of starting states.

<haskell>
type FinAu l = [FaState l]
</haskell>

For example, an automaton equivalent to the regular expression <code>0*(0(0+1)*)*</code> could be defined as:
<haskell>
dom18 = [one]
</haskell>

Printing a <code>FaState</code>, however, poses a slight problem. For example, the state labeled 1 in the automaton <code>dom18</code> refers to itself. If we blindly "follow the links", we will loop forever. Therefore, we must keep track of the already printed states. We need a data structure for such an occurrence check, with the obvious operations: create an empty collection, check whether a state has been seen, and record a state as seen.

In this article, we realize such a data structure as a list; in the future, we could pull in something fancier from the ''Edison'' collection.

We are now ready to print an automaton. To be more precise, we traverse the corresponding graph depth-first, pre-order, and keep track of the already printed states. A <code>states_seen</code> datum accumulates the shown states, so we can be sure we print each state only once and thus avoid the looping.

The acceptance function for our automata can be written as follows. The function takes the list of starting states and a string over the boolean alphabet; it returns <code>True</code> if the string is accepted.
<haskell>
finAuAcceptStringQ start_states str =
    any (\l -> acceptP l str) start_states
    where acceptP (FaState _ acceptQ _ _) []       = acceptQ
          acceptP (FaState _ _ t0 t1)     (s:rest) =
              finAuAcceptStringQ (if s then t1 else t0) rest
</haskell>

We are now ready to write the NFA→DFA conversion: a determinization of the NFA. We implement the textbook algorithm of tracing sets of NFA states. A state in the resulting DFA corresponds to a list of NFA states. A DFA is generally a cyclic graph, often with cycles of length 1 (self-referencing nodes). To be able to "link back" as we build DFA states, we have to remember the already constructed states. We need a data structure: a dictionary of states.

For now, we realize this dictionary as an associative list. If performance matters, we can use a fancier dictionary from the ''Edison'' collection.

The work of the NFA→DFA conversion is done by the function <code>determinize_cc</code>. The function takes a list of NFA states and the dictionary of the already built states, and returns a pair <hask>([dfa_state], updated_dictionary)</hask>, where <hask>[dfa_state]</hask> is a singleton list.
<haskell>
determinize_cc states converted_states =
    -- first, check the cache to see if the state has been built already
    case dfa_label `locate` converted_states of
      Just dfa_state -> ([dfa_state], converted_states)
      Nothing        -> build_state
  where
    -- [NFA_labels] -> DFA_labels
    det_labels = sort . nub . map label
    dfa_label  = det_labels states

    -- collect all the NFA states reachable on 0 and on 1
    (t0,t1) = foldr (\st (f0,f1) -> (trans0 st ++ f0, trans1 st ++ f1))
                    ([],[]) states
    acceptQ' = any acceptQ states

    -- really build the dfa state and return ([dfa_state],updated_cache)
    build_state = let
        -- note that the dfa_state is computed _below_
        converted_states1 = (dfa_label,dfa_state) `putd` converted_states
        (t0',converted_states2) = determinize_cc t0 converted_states1
        (t1',converted_states3) = determinize_cc t1 converted_states2
        dfa_state = FaState dfa_label acceptQ' t0' t1'
        in ([dfa_state],converted_states3)
</haskell>

The front end of the NFA→DFA transformer:
<hask>finAuDeterminize states = fst $ determinize_cc states []</hask>

At the heart of the credit card transform is the phrase from the above code:

<haskell>
converted_states1 = (dfa_label,dfa_state) `putd` converted_states
</haskell>

The phrase expresses the addition to the <code>converted_states</code> dictionary of a <code>dfa_state</code> that we haven't built yet. The computation of the <code>dfa_state</code> is written a few lines below the phrase in question. Because <code>(,)</code> is non-strict in its arguments and <code>locate</code> is non-strict in its result, we can get away with a mere promise to "pay".

Note that the computation of the <code>dfa_state</code> needs <code>t0'</code> and <code>t1'</code>, which in turn rely on <code>converted_states1</code>. This shows that we can tie the knot by making a promise to compute a state, adding this promise to the dictionary of the built states, and using the updated dictionary to build the descendants. Because Haskell is a non-strict language, we don't need to do anything special to make the promise: every computation in Haskell is by default a promise.
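The same trick can be seen in miniature: we can insert a pair into an association list before its value has been computed, and the value's own computation may consult that very list (an illustrative sketch; the names here are made up):

```haskell
-- "buy now": the entry for "x" goes into the table immediately...
pairs :: [(String, Int)]
pairs = ("x", xval) : [("y", 1)]
  where
    -- ...and we "pay later": xval is computed by consulting the very
    -- table that already contains it. lookup compares only the keys
    -- and never forces the unevaluated xval, so there is no loop.
    xval = case lookup "y" pairs of
             Just v  -> v + 1
             Nothing -> error "missing"
```

Looking up <code>"x"</code> forces the promise, which in turn performs a lookup in the same table; the "due date" is met because the key comparison never demands the promised value itself.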

We can print the DFA for <code>dom18</code> to see what we've got:
<haskell>
CCardFA> finAuDeterminize dom18
CCardFA> {State [] False [[]] [[]] }@}]
</haskell>

which is indeed a DFA (which happens to be minimal) recognizing <code>(0+1)* - 1(0+1)*</code>.
We can run the determinized FA using the same function <code>finAuAcceptStringQ</code>:
<haskell>
test1' = finAuAcceptStringQ (finAuDeterminize dom18) $ map (> 0) [0,1,0,1]
</haskell>

The complete code for this example is in http://pobox.com/~oleg/ftp/Haskell/CCard-transform-DFA.lhs.

Another example of tying a knot in the case of forward links, by using a fixed-point combinator, is discussed in http://www.mail-archive.com/haskell@haskell.org/msg10687.html.


=== Improved error-recovery for transformations of cyclic graphs ===

<blockquote>

<tt>(...some observations about the aforementioned [https://www.mail-archive.com/haskell@haskell.org/msg10687.html ''forward links/fixed-point combinator''] example)</tt>

For a long time, I've had an issue with Oleg's reply to Hal Daume III, the "forward links" example. The problem is that it doesn't really exploit laziness or circular values. Its solution would work even in a strict language. It's simply a functional version of the standard approach: build the result with markers and patch it up afterwards.

It is a fairly clever way of doing purely something that is typically done with references and mutable update, but it doesn't really address what Hal Daume III was after. Fixing Hal Daume's example so that it won't loop is relatively trivial: simply change the <tt>case</tt> to a <tt>let</tt>, or equivalently use a lazy pattern match in the case. However, if that's all there was to it, I would've written this a long time ago.


The problem is that it no longer gives you control of the error message or any way to recover from it. With GHC's extensions to exception handling you could do it, but you'd have to put <code>readDecisionTree</code> in the <code>IO</code> monad to recover from it; and if you wanted better messages, you'd have to put most of the parsing in the <code>IO</code> monad so that you could catch the error earlier, provide more information, and then rethrow.


What's kept me is that I couldn't figure out a way to tie the knot when the environment had a type like <code>Either String [(String,DecisionTree)]</code>. This is because it's impossible for this case; we decide whether to return:

* <code>Left "could not find subtree"</code>, or

* <code>Right someValue</code>

and therefore whether the environment is <code>Left</code> or <code>Right</code> based on whether we could find the subtree in the environment. In effect, we need to look up a value in an environment we may return in order to know whether to return it. Obviously this is a truly circular dependency.

This made me think that Oleg's solution was as good as any other and better than some (actually, ironically, Oleg's solution also uses a <tt>let</tt> instead of a <tt>case</tt>; however, there's nothing stopping it from being a <tt>case</tt>, but it still would provide no way to recover without effectively doing what is mentioned below). Recently, I've thought about this again, and the solution is obvious and follows directly from the original definition modified to use <tt>let</tt>.

It doesn't loop because only particular values in the lookup table fail; in fact, you might never know there was a variable lookup error if you didn't touch all of the tree. This translates directly into the environment having type <code>[(String,Either String DecisionTree)]</code>.
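An illustrative sketch of this idea, with a made-up mini-language standing in for the decision trees: the environment is tied into a knot as before, but each entry carries its own <code>Either</code>, so a failed lookup poisons only the entries that depend on it.

```haskell
-- a definition body is a list of literals and references to other entries
data Expr = Lit Int | Ref String

-- Build the environment cyclically: each definition may reference others
-- through the final environment itself. A missing reference becomes a
-- Left for that entry alone; the other entries stay Right.
resolve :: [(String, [Expr])] -> [(String, Either String Int)]
resolve defs = env
  where
    env = [ (name, fmap sum (mapM eval es)) | (name, es) <- defs ]
    eval (Lit i) = Right i
    eval (Ref r) = case lookup r env of
                     Just v  -> v
                     Nothing -> Left ("couldn't find " ++ r)
```

The <code>lookup</code> into <code>env</code> forces only the keys, never the value thunks, so the knot is safe as long as the definitions themselves are not (unproductively) self-referential.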

There are several benefits to this approach compared to Oleg's:

# it solves my original problem; you are now able to specify the error messages (Oleg's can do this),
# it goes beyond that (and beyond Hal Daume's original "specification") and also allows you to recover from an error without resorting to the <code>IO</code> monad and/or extensions (Oleg's can't do this),
# it does implicitly what Oleg's version does explicitly,
# because of (3) it shares properly, while Oleg's ''does not'',
# both the environment and the returned value are made up of showable values, not opaque functions,
# it requires fewer changes to the original code and is more localized than Oleg's solution; only the variable lookup and top-level function need to change.

To recover, all one needs to do is make sure all the values in the lookup table are <code>Right</code> values. If they aren't, there are various ways you could collect the information; there are also variations on how to combine error information and what to provide. Even without a correctness check, you can still provide better error messages for the erroneous thunks.
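Such a check might look like the following (a hypothetical helper, not from the original code): it forces every value in the table, and either reports all the <code>Left</code>s or certifies the whole environment.

```haskell
-- Walk the whole table: collect every error, or strip the Rights.
-- Forcing every value is the point here: this is the moment we "pay"
-- for all the promises at once.
checkEnv :: [(String, Either String a)] -> Either [String] [(String, a)]
checkEnv env
    | null errs = Right [ (n, v) | (n, Right v) <- env ]
    | otherwise = Left errs
  where
    errs = [ e | (_, Left e) <- env ]
```

If any entry's thunk is erroneous, its message is collected rather than thrown, which is exactly the recovery the <code>IO</code>-monad approach could not offer.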

A possible variation that loses some of the benefits is to change the <code>DecisionTree</code> type (or have a different version; [[IndirectComposite]] comes to mind here) that has <code>Either ErrorInfo ErrorDecisionTree</code> subnodes, which will allow you to recover at any time (though, if you want to make a normal <code>DecisionTree</code> out of it, you will lose sharing). Also, the circular dependency only comes up if you need to use the environment to decide on an error.

For example:

* a plain old syntactic parse error can cyclically use an <code>Either ErrorInfo [(String,DecisionTree)]</code> perfectly fine (pass in <code>fromRight env</code> where <code>fromRight ~(Right x) = x</code>). It will also work even with the above approach, giving the environment the type <code>Either ErrorInfo [(String,Either ErrorInfo DecisionTree)]</code>. Below is code for a simplified scenario that does most of these things:

:<haskell>
module Main where

import Maybe ( fromJust )

-- ...

parse env input = Left $ "parse error with: " ++ unwords input
</haskell>

:<code>checkedFixup</code> demonstrates how you could check and recover, but since the environment is the return value, neither <code>fixup</code> nor <code>checkedFixup</code> quite illustrates having potentially erroneous thunks in the actual return value. Some example runs:
:{| style="text-align: center"
| '''input''' || '''outputs'''
|-
| <pre>
define x *y *y
define y a b
</pre>
| <haskell>
Right [("x",Right "a b a b"),
       ("y",Right "a b")]
</haskell>
|-
| <pre>
define x *y *y
aousht
define y a b
</pre>
| <haskell>
Left "parse error with: aousht"
</haskell>
|-
| <pre>
define x *y *z
define y a b
define z *w
</pre>
| <haskell>
Right [("x",Left "couldn't find w in z"),
       ("y",Right "a b"),
       ("z",Left "couldn't find w in z")]
</haskell>
|}

* Consider a tree <tt>Y</tt> that contains the subtree <tt>X</tt> twice:

: With Oleg's version, when we resolve the <code>X</code> variable, we look up a (manually) delayed tree and then build <tt>X</tt>. Each subtree of <tt>Y</tt> will build its own version of <tt>X</tt>.


: With the truly circular version, each subtree of <tt>Y</tt> will be the same, possibly erroneous, thunk that builds <tt>X</tt>; if the thunk isn't erroneous, then when it is updated, both of <tt>Y</tt>'s subtrees will point to the same <tt>X</tt>.

[[User:DerekElkins|Derek Elkins]]

</blockquote>

[[Category:Code]]
Latest revision as of 17:16, 28 June 2021
In a language like Haskell, where Lists are defined as Nil  Cons a (List a)
, creating data structures like cyclic or doubly linked lists seems impossible. However, this is not the case: laziness allows for such definitions, and the procedure of doing so is called tying the knot. The simplest example:
cyclic = let x = 0 : y
y = 1 : x
in x
This creates the cyclic list consisting of 0 and 1. It is important to stress that this procedure allocates only two numbers  0 and 1  in memory, making this a truly cyclic list.
The knot analogy stems from the fact that we produce two openended objects, and then link their ends together. Evaluation of the above therefore looks something like
cyclic
= x
= 0 : y
= 0 : 1 : x  Knot! Back to the beginning.
= 0 : 1 : 0 : y
=  etc.
It can twist your brain a bit the first few times you do it, but it works fine  remember, Haskell is a lazy language. This means that while you are building the node, you can set the children to the final values straight away, even though you don't know them yet!
== Overview ==

This example illustrates different ways to define recursive data structures. To demonstrate the different techniques we show how to solve the same problem, writing an interpreter for a simple programming language, in three different ways. This is a nice example because (i) it is interesting, (ii) the abstract syntax of the language contains mutually recursive structures, and (iii) the interpreter illustrates how to work with the recursive structures.
(It would be useful to have some more text describing the examples.)
== Download the files ==

== Other Examples ==

== How to build a cyclic data structure ==
Here's an example. Say you want to build a circular, doubly-linked list, given a standard Haskell list as input. The back pointers are easy, but what about the forward ones?
<haskell>
data DList a = DLNode (DList a) a (DList a)

mkDList :: [a] -> DList a
mkDList [] = error "must have at least one element"
mkDList xs = let (first,last) = go last xs first
             in  first
  where go :: DList a -> [a] -> DList a -> (DList a, DList a)
        go prev []     next = (next,prev)
        go prev (x:xs) next = let this        = DLNode prev x rest
                                  (rest,last) = go this xs next
                              in  (this,last)
</haskell>
<haskell>
takeF :: Integer -> DList a -> [a]
takeF 0 _                 = []
takeF n (DLNode _ x next) = x : takeF (n-1) next

takeR :: Show a => Integer -> DList a -> [a]
takeR 0 _                 = []
takeR n (DLNode prev x _) = x : takeR (n-1) prev
</haskell>
(<hask>takeF</hask> and <hask>takeR</hask> are simply to let you look at the results of <hask>mkDList</hask>: they take a specified number of elements, either forward or backward.)
The trickery takes place in <hask>go</hask>. <hask>go</hask> builds a segment of the list, given a pointer to the node off to the left of the segment and off to the right. Look at the second case of <hask>go</hask>. We build the first node of the segment, using the given <hask>prev</hask> pointer for the left link, and the node pointer we are about to compute in the next step for the right link.
This goes on right the way through the segment. But how do we manage to create a circular list this way? How can we know right at the beginning what the pointer to the end of the list will be?
Take a look at <hask>mkDList</hask>. Here, we simply take the <hask>(first,last)</hask> pointers we get from <hask>go</hask>, and pass them back in as the <hask>next</hask> and <hask>prev</hask> pointers respectively, thus tying the knot. This all works because of lazy evaluation.
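The same knot can also be tied by hand for a fixed ring of two nodes, which may make the general <hask>go</hask> easier to follow. A small self-contained sketch (it repeats the type and <hask>takeF</hask> from above; <hask>ring2</hask> is a made-up helper, not part of the text):

<haskell>
data DList a = DLNode (DList a) a (DList a)

takeF :: Integer -> DList a -> [a]
takeF 0 _                 = []
takeF n (DLNode _ x next) = x : takeF (n-1) next

-- A two-node ring tied by hand: each node names the other
-- before either has been fully constructed.
ring2 :: a -> a -> DList a
ring2 a b = na
  where na = DLNode nb a nb
        nb = DLNode na b na
</haskell>

Here <hask>takeF 5 (ring2 'x' 'y')</hask> yields <hask>"xyxyx"</hask>.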
== Tying bigger knots ==
The above works for simple cases, but sometimes you need to construct some very complex data structures, where the pattern of recursion is not known at compile time. If this is the case, you may need to use an auxiliary dictionary data structure to help you tie your knots.
Consider, for example, how you would implement deterministic finite automata (DFAs). One possibility is:
<haskell>
type IndirectDfa a = (Int, [IndirectState a])

data IndirectState a =
    IndirectState Bool [(a, Int)]
</haskell>
That is, a DFA is a set of states, one of which is distinguished as being the "start state". Each state has a number of transitions which lead to other states, as well as a flag which specifies whether or not the state is final.
This representation is fine for manipulation, but it's not as suitable for actually executing the DFA as it could be because we need to "look up" a state every time we make a transition. There are relatively cheap ways to implement this indirection, of course, but ideally we shouldn't have to pay much for it.
What we really want is a recursive data structure:
<haskell>
data DirectDfa a
  = DirectState Bool [(a, DirectDfa a)]
</haskell>
Then we can just execute the DFA like this:
<haskell>
runDfa :: (Eq a) => DirectDfa a -> [a] -> Bool
runDfa (DirectState final trans) []
  = final
runDfa (DirectState final trans) (x:xs)
  = case [ s | (x',s) <- trans, x == x' ] of
      []    -> False
      (s:_) -> runDfa s xs
</haskell>
(Note: We're only optimising state lookup here, not deciding which transition to take. As an exercise, consider how you might optimise transitions. You may wish to use RunTimeCompilation.)
Turning the indirect recursion into direct recursion requires tying knots, and it's not immediately obvious how to do this by holding onto lazy pointers, because any state can potentially point to any other state (or, indeed, every other state).
What we can do is introduce a dictionary data structure to hold the (lazily evaluated) new states; then introducing a recursive reference can be done with a simple dictionary lookup. In principle, you could use any dictionary data structure (e.g. <hask>Map</hask>). However, in this case, the state numbers are dense integers, so it's probably easiest to use an array:
<haskell>
indirectToDirect :: IndirectDfa a -> DirectDfa a
indirectToDirect (start, states)
  = tieArray ! start
  where
    tieArray = array (0, length states - 1)
                     [ (i, direct s) | (i,s) <- zip [0..] states ]

    direct (IndirectState final trans)
      = DirectState final [ (x, tieArray ! s) | (x,s) <- trans ]
</haskell>
Note how similar this is to the technique of MemoisingCafs. In fact what we've done here is "memoised" the data structure, using something like HashConsing.
As noted previously, pretty much any dictionary data structure will do. Often you can even use structures with slow lookup (e.g. association lists). This is because fully lazy evaluation ensures that you only pay for each lookup the first time you use it; subsequent uses of the same part of the data structure are effectively free.
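To illustrate the point about association lists, here is a sketch of the same knot-tying trick over a lazy association list; the <hask>Node</hask> type and <hask>tieWithAList</hask> are invented for this example and are not part of the text above:

<haskell>
-- A hypothetical graph node: a label plus the nodes it links to.
data Node = Node String [Node]

nodeLabel :: Node -> String
nodeLabel (Node l _) = l

-- Tie the knot through a lazy association list: every lookup returns
-- a thunk for the finished node, even while the list is still being built.
tieWithAList :: [(String,[String])] -> String -> Node
tieWithAList spec start = find start
  where
    table  = [ (l, Node l (map find ls)) | (l,ls) <- spec ]
    find l = maybe (error ("unknown label: " ++ l)) id (lookup l table)
</haskell>

For example, <hask>tieWithAList [("a",["b"]),("b",["a"])] "a"</hask> builds a two-node cycle; each lookup in the list is paid for at most once.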
== Transformations of cyclic graphs and the Credit Card Transform ==
Cycles certainly make it difficult to transform graphs in a pure non-strict language. Cycles in a source graph require us to devise a way to mark traversed nodes; however, we cannot mutate nodes, and cannot even compare nodes with a generic (derived) equality operator. Cycles in a destination graph require us to keep track of the already constructed nodes so we can complete a cycle.
An obvious solution is to use a state monad and <hask>IORef</hask>s. There is also a monad-less solution, which is less obvious: seemingly we cannot add a node to the dictionary of already constructed nodes until we have built the node. This fact means that we cannot use the updated dictionary when building the descendants of the node, which need the updated dictionary to link back. The problem can be overcome, however, with a credit card transform (a.k.a. "buy now, pay later" transform). To avoid hitting the bottom, we just have to "pay" by the "due date".
For illustration, we will consider the problem of printing out a nondeterministic finite automaton (NFA) and transforming it into a deterministic finite automaton (DFA). Both NFA and DFA are represented as cyclic graphs. The problem has been discussed on the Haskell/HaskellCafe mailing lists. The automata in question were to recognize strings over a binary alphabet.
A state of an automaton over a binary alphabet is a data structure:
<haskell>
data (Ord l,Show l) => FaState l =
    FaState { label   :: l,
              acceptQ :: Bool,
              trans0  :: [FaState l],
              trans1  :: [FaState l] }
</haskell>
whose fields have the obvious meaning. The label is used for printing out and comparing states. The flag <hask>acceptQ</hask> tells if the state is final. Since an <hask>FaState</hask> can generally represent a nondeterministic automaton, transitions are lists of states.
An automaton is then a list of starting states.
<haskell>
type FinAu l = [FaState l]
</haskell>
For example, an automaton equivalent to the regular expression <tt>0*(0(0+1)*)*</tt> could be defined as:
<haskell>
dom18 = [one]
    where one = FaState 1 True [one,two] []
          two = FaState 2 True [two,one] [one,two]
</haskell>
using the straightforward translation from a regular expression to an NFA.
We would like to compare and print automata and their states:
<haskell>
instance (Ord l,Show l) => Eq (FaState l) where
    (FaState l1 _ _ _) == (FaState l2 _ _ _) = l1 == l2
</haskell>
Printing an <hask>FaState</hask> however poses a slight problem. For example, the state labeled 1 in the automaton <hask>dom18</hask> refers to itself. If we blindly "follow the links", we will loop forever. Therefore, we must keep track of the already printed states. We need a data structure for such an occurrence check, with the following obvious operations:
<haskell>
class OCC occ where
    empty :: occ a
    seenp :: (Eq a) => a -> occ a -> Bool  -- occurrence check predicate
    put   :: a -> occ a -> occ a           -- add an item
</haskell>
In this article, we realize such a data structure as a list. In the future, we can pull in something fancier from the Edison collection:
<haskell>
instance OCC [] where
    empty = []
    seenp = elem
    put   = (:)
</haskell>
We are now ready to print an automaton. To be more precise, we traverse the corresponding graph depth-first, pre-order, and keep track of the already printed states. A <hask>states_seen</hask> datum accumulates the shown states, so we can be sure we print each state only once and thus avoid the looping.
<haskell>
instance (Ord l,Show l) => Show (FaState l) where
    show state = "{@" ++ showstates [state] (empty::[FaState l]) "@}"
      where
        -- showstates worklist seen_states suffix
        showstates [] states_seen suffix = suffix
        showstates (st:rest) states_seen suffix
            | st `seenp` states_seen = showstates rest states_seen suffix
        showstates (st@(FaState l accept t0 t1):rest) states_seen suffix =
            showstate st
            $ showstates (t0++t1++rest) (st `put` states_seen) suffix

        showstate (FaState l accept t0 t1) suffix
            = "{State " ++ (show l) ++
              " " ++ (show accept) ++ " " ++ (show $ map label t0) ++
              " " ++ (show $ map label t1) ++ "}" ++ suffix
</haskell>
Now,
<pre>
CCardFA> print dom18 -- prints as
CCardFA> [{@{State 1 True [1,2] []}{State 2 True [2,1] [1,2]}@}]
</pre>
The acceptance function for our automata can be written as follows. The function takes the list of starting states and the string over the boolean alphabet. The function returns <hask>True</hask> if the string is accepted.
<haskell>
finAuAcceptStringQ start_states str =
    any (\l -> acceptP l str) start_states
  where acceptP (FaState _ acceptQ _ _) [] = acceptQ
        acceptP (FaState _ _ t0 t1) (s:rest) =
            finAuAcceptStringQ (if s then t1 else t0) rest
</haskell>
To test the automata, we can try
<haskell>
test1 = finAuAcceptStringQ dom18 $ map (>0) [0,1,0,1]
test2 = finAuAcceptStringQ dom18 $ map (>0) [1,1,0,1]
test3 = finAuAcceptStringQ dom18 [True]
test4 = finAuAcceptStringQ dom18 [False]
</haskell>
We are now ready to write the NFA→DFA conversion, a determinization of an NFA. We implement the textbook algorithm of tracing sets of NFA states. A state in the resulting DFA corresponds to a list of the NFA states. A DFA is generally a cyclic graph, often with cycles of length 1 (self-referencing nodes). To be able to "link back" as we build DFA states, we have to remember the already constructed states. We need a data structure, a dictionary of states:
<haskell>
class StateDict sd where
    emptyd :: sd (l,FaState l)
    locate :: (Eq l) => l -> sd (l,FaState l) -> Maybe (FaState l)
    putd   :: (l,FaState l) -> sd (l,FaState l) -> sd (l,FaState l)
</haskell>
For now, we realize this dictionary as an associative list. If performance matters, we can use a fancier dictionary from the Edison collection:
<haskell>
instance StateDict [] where
    emptyd = []
    locate = lookup
    putd   = (:)
</haskell>
The work of the NFA→DFA conversion is done by the following function <hask>determinize_cc</hask>. The function takes a list of NFA states, the dictionary of the already built states, and returns a pair <hask>([dfa_state], updated_dictionary)</hask> where <hask>[dfa_state]</hask> is a singleton list.
<haskell>
-- [nfa_state] -> dictionary_of_seen_states ->
--     ([dfa_state],updated_dictionary)
-- [dfa_state] is a singleton list
determinize_cc states converted_states =
    -- first, check the cache to see if the state has been built already
    case dfa_label `locate` converted_states of
      Nothing        -> build_state
      Just dfa_state -> ([dfa_state],converted_states)
  where
    -- [NFA_labels] -> DFA_labels
    det_labels = sort . nub . map label
    dfa_label  = det_labels states

    -- find out NFA-followers for [nfa_state] upon ingestion of 0 and 1
    (t0_followers,t1_followers) =
        foldr (\st (f0,f1) -> (trans0 st ++ f0, trans1 st ++ f1))
              ([],[]) states
    acceptQ' = any acceptQ states

    -- really build the dfa state and return ([dfa_state],updated_cache)
    build_state = let
        -- note, the dfa_state is computed _below_
        converted_states1 = (dfa_label,dfa_state) `putd` converted_states
        (t0', converted_states2) =
            determinize_cc t0_followers converted_states1
        (t1', converted_states3) =
            determinize_cc t1_followers converted_states2
        dfa_state =
            FaState dfa_label acceptQ' t0' t1'
      in ([dfa_state],converted_states3)
</haskell>
The front end of the NFA→DFA transformer:
<haskell>
finAuDeterminize states = fst $ determinize_cc states []
</haskell>
At the heart of the credit card transform is the phrase from the above code:
<haskell>
converted_states1 = (dfa_label,dfa_state) `putd` converted_states
</haskell>

The phrase expresses the addition to the dictionary of converted states of a <hask>dfa_state</hask> that we haven't built yet. The computation of the <hask>dfa_state</hask> is written four lines below the phrase in question. Because <hask>(,)</hask> is non-strict in its arguments and <hask>locate</hask> is non-strict in its result, we can get away with a mere promise to "pay".
Note that the computation of the <hask>dfa_state</hask> needs <hask>t0'</hask> and <hask>t1'</hask>, which in turn rely on <hask>converted_states1</hask>. This fact shows that we can tie the knot by making a promise to compute a state, add this promise to the dictionary of the built states, and use the updated dictionary to build the descendants. Because Haskell is a non-strict language, we don't need to do anything special to make the promise. Every computation in Haskell is by default a promise.
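The "promise now, compute later" pattern can be seen in miniature in an environment defined by lookups into itself; <hask>selfEnv</hask> below is invented for illustration and is not part of the automaton code:

<haskell>
import Data.Maybe (fromJust)

-- Each entry is inserted into the dictionary before its value exists;
-- laziness lets later entries look the earlier promises up.
selfEnv :: [(String, Int)]
selfEnv = [ ("one",   1)
          , ("two",   fromJust (lookup "one" selfEnv) + 1)
          , ("three", fromJust (lookup "two" selfEnv) + 1) ]
</haskell>

Here <hask>lookup "three" selfEnv</hask> evaluates to <hask>Just 3</hask>; each promised value is forced only when first demanded.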
We can print the DFA for <hask>dom18</hask> to see what we've got:
<pre>
CCardFA> finAuDeterminize dom18 -- which shows
CCardFA> [{@{State [1]   True  [[1,2]] [[]]   }
CCardFA>    {State [1,2] True  [[1,2]] [[1,2]]}
CCardFA>    {State []    False [[]]    [[]]   }@}]
</pre>
which is indeed a DFA (which happens to be minimal) recognizing <tt>(0+1)* - 1(0+1)*</tt>.
We can run the determinized FA using the same function <hask>finAuAcceptStringQ</hask>:
<haskell>
test1' = finAuAcceptStringQ (finAuDeterminize dom18) $ map (>0) [0,1,0,1]
test2' = finAuAcceptStringQ (finAuDeterminize dom18) $ map (>0) [1,1,0,1]
</haskell>
The complete code for this example is in http://pobox.com/~oleg/ftp/Haskell/CCard-transform-DFA.lhs.
Another example of tying a knot in the case of forward links, by using a fixed-point combinator, is discussed in http://www.mail-archive.com/haskell@haskell.org/msg10687.html.
== Improved error-recovery for transformations of cyclic graphs ==
''(...some observations about the aforementioned forward links/fixed-point combinator example)''
For a long time, I've had an issue with Oleg's reply to Hal Daume III, the "forward links" example. The problem is that it doesn't really exploit laziness or circular values. Its solution would work even in a strict language. It's simply a functional version of the standard approach: build the result with markers and patch it up afterwards.
It is a fairly clever way of doing purely something that is typically done with references and mutable update, but it doesn't really address what Hal Daume III was after. Fixing Hal Daume's example so that it won't loop is relatively trivial; simply change the case to a let or equivalently use a lazy pattern match in the case. However, if that's all there was to it, I would've written this a long time ago.
The problem is that it no longer gives you control of the error message or any way to recover from it. With GHC's extensions to exception handling you could do it, but you'd have to put <hask>readDecisionTree</hask> in the <hask>IO</hask> monad to recover from it, and if you wanted better messages you'd have to put most of the parsing in the <hask>IO</hask> monad so that you could catch the error earlier and provide more information, then rethrow.

What's kept me is that I couldn't figure out a way to tie the knot when the environment had a type like <hask>Either String [(String,DecisionTree)]</hask>. This is because it's impossible in this case: we decide whether to return <hask>Left "could not find subtree"</hask> or <hask>Right someValue</hask>, and therefore whether the environment is <hask>Left</hask> or <hask>Right</hask>, based on whether we could find the subtree in the environment. In effect, we need to look up a value in an environment we may return to know whether to return it. Obviously this is a truly circular dependency. This made me think that Oleg's solution was as good as any other and better than some (actually, ironically, Oleg's solution also uses a let instead of a case; however, there's nothing stopping it from being a case, but it still would provide no way to recover from it without effectively doing what is mentioned below). Recently, I've thought about this again and the solution is obvious and follows directly from the original definition modified to use let.
It doesn't loop because only particular values in the lookup table fail; in fact, you might never know there was a variable lookup error if you didn't touch all of the tree. This translates directly into the environment having type <hask>[(String,Either String DecisionTree)]</hask>. There are several benefits to this approach compared to Oleg's:
# it solves my original problem: you are now able to specify the error messages (Oleg's can do this);
# it goes beyond that (and beyond Hal Daume's original "specification") and also allows you to recover from an error without resorting to the <hask>IO</hask> monad and/or extensions (Oleg's can't do this);
# it does implicitly what Oleg's version does explicitly;
# because of (3) it shares properly while Oleg's does not;
# both the environment and the returned value are made up of showable values, not opaque functions;
# it requires fewer changes to the original code and is more localized than Oleg's solution; only the variable lookup and the top-level function need to change.
To recover, all one needs to do is make sure all the values in the lookup table are <hask>Right</hask> values. If they aren't, there are various ways you could collect the information; there are also variations on how to combine error information and what to provide. Even without a correctness check, you can still provide better error messages for the erroneous thunks.

A possible variation that loses some of the benefits is to change the <hask>DecisionTree</hask> type (or have a different version; <hask>IndirectComposite</hask> comes to mind here) that has <hask>Either ErrorInfo ErrorDecisionTree</hask> subnodes, which will allow you to recover at any time (though, if you want to make a normal <hask>DecisionTree</hask> out of it you will lose sharing). Also, the circular dependency only comes up if you need to use the environment to decide on an error. For example:
* A plain old syntactic parse error can cyclically use an <hask>Either ErrorInfo [(String,DecisionTree)]</hask> perfectly fine (pass in <hask>fromRight env</hask> where <hask>fromRight ~(Right x) = x</hask>). It will also work even with the above approach, giving the environment the type <hask>Either ErrorInfo [(String,Either ErrorInfo DecisionTree)]</hask>.

Below is code for a simplified scenario that does most of these things:
<haskell>
module Main where

import Maybe ( fromJust )
import Monad

main :: IO ()
main = do
    input <- getContents
    length input `seq` print (fixup input)

instance Monad (Either s) where
    return = Right
    m >>= f = either Left f m

isLeft :: Either a b -> Bool
isLeft (Left _) = True
isLeft _        = False

fromRight :: Either a b -> b
fromRight ~(Right x) = x

fixup :: String -> Either String [(String,Either String String)]
fixup input = env
    where env = mapM (parse (fromRight env) . words) (lines input)

checkedFixup :: String -> Either String [(String,String)]
checkedFixup input =
    case fixup input of
        Left err  -> Left err
        Right env ->
            case filter (isLeft . snd) env of
                []             -> Right $ map (\(n,Right v) -> (n,v)) env
                (_,Left err):_ -> Left err

parse :: [(String,Either String String)] -> [String]
         -> Either String (String,Either String String)
parse env ("define":name:values) = Right (name,values')
    where values' = liftM unwords $ mapM lookupRef values
          lookupRef ('*':word) =
              maybe (Left $ "couldn't find "++word++" in "++name)
                    id (lookup word env)
          lookupRef word = Right word
parse env input = Left $ "parse error with: "++unwords input
</haskell>

<hask>checkedFixup</hask> demonstrates how you could check and recover, but since the environment is the return value, neither <hask>fixup</hask> nor <hask>checkedFixup</hask> quite illustrates having potentially erroneous thunks in the actual return value. Some examples:
{| border="1"
! input !! outputs
|-
|<pre>define x *y *y
define y a b</pre>
|<haskell>Right [("x",Right "a b a b"),
       ("y",Right "a b")]</haskell>
|-
|<pre>define x *y *y
aousht
define y a b</pre>
|<haskell>Left "parse error with: aousht"</haskell>
|-
|<pre>define x *y *z
define y a b
define z *w</pre>
|<haskell>Right [("x",Left "couldn't find w in z"),
       ("y",Right "a b"),
       ("z",Left "couldn't find w in z")]</haskell>
|}
* Consider a tree <tt>Y</tt> that contains the subtree <tt>X</tt> twice:
: With Oleg's version, when we resolve the <code>X</code> variable we look up a (manually) delayed tree and then build <tt>X</tt>. Each subtree of <tt>Y</tt> will build its own version of <tt>X</tt>.
: With the truly circular version each subtree of <tt>Y</tt> will be the same, possibly erroneous, thunk that builds <tt>X</tt>; if the thunk isn't erroneous then when it is updated both of <tt>Y</tt>'s subtrees will point to the same <tt>X</tt>.

[[User:DerekElkins|Derek Elkins]]

[[Category:Code]]