<br />
= A Practical Approach to Graph Manipulation =<br />
''by JeanPhilippeBernardy for The Monad.Reader Issue 5''<br />
''2005-07-08''
<br />
'''Abstract.'''<br />
<br />
Tree-based data structures are easy to deal with in Haskell.
However, working with graph-like structures in practice is much less obvious.
In this article I present a solution that has worked for me in many cases.
<br />
<br />
== Introduction ==<br />
<br />
I have always found dealing with graphs in Haskell a tricky subject.
Even something like implementing a depth-first search, which is
trivially achieved in an imperative language, deserves an article
of its own for Haskell [[#dfs 4]].
A PhD thesis has been written on the subject of graphs and functional programming [[#king-thesis 2]], and it seems that it still
doesn't exhaust the
design space: radically different ideas have been proposed since [[#induct 3]].
<br />
In this article I'll present (a simplified version of) a solution that
I think deserves more coverage [[#cycle-therapy 1]]. The idea is to abstract graph
manipulation behind anamorphisms and catamorphisms.
This approach features "separation of concerns" and "composability", hence it
can be readily applied to
practical problems.
<br />
* Section 2 shows how anamorphisms and catamorphisms can be generalised to graphs.
* Section 3 details the data structures used to represent graphs.
* Section 4 discusses various problems where cata/anamorphisms can be applied.
* Section 5 gives a sample implementation of the catamorphism and anamorphism.
* Section 6 concludes.
<br />
=== Nota ===<br />
<br />
This article has been generated from a literate Haskell source.
So, although the text of this wiki page will not compile, all the examples are
real and run. The source can be accessed here: attachment:PracticalGraphHandling.lhs

We will assume you know the hierarchical libraries. Refer to http://haskell.org/ghc/docs/latest/html/libraries/index.html in case of doubt.
<br />
== Origami with Graphs ==<br />
<br />
=== Fold & Unfold (the big deal) ===<br />
<br />
Most of you probably know what a "fold" (also known as a catamorphism)
is. For those who don't: intuitively, it's a higher-order operation
that reduces a complex structure to a single value. It applies a
function given as a parameter to each node, propagating the results
up to the root. This is a highly imprecise definition; for more
details please read [[#bananas-lenses 5]].
<br />
For example, the fold operation on lists can be typed as follows:<br />
{{{#!syntax haskell
foldr :: (a -> b -> b) -- ^ operation to apply
      -> b             -- ^ initial value
      -> [a]           -- ^ input list
      -> b             -- ^ result
}}}
<br />
Conversely, "unfold" builds a complex structure out of a building<br />
function, applying it iteratively.<br />
<br />
{{{#!syntax haskell
unfoldr :: (b -> Maybe (a, b)) -- ^ building function (Nothing => end of list)
        -> b                   -- ^ seed value
        -> [a]                 -- ^ result
}}}
<br />
The second argument is the initial value from which the
whole resulting list will be derived, by applying the first argument.
In the following we'll refer to it as the "seed".
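To make the pair concrete, here is a tiny standalone example (the names {{{countdown}}} and {{{sumTo}}} are ours, not part of the article's library): an unfold that grows a list from a seed, and a fold that collapses it back to a single value.

```haskell
import Data.List (unfoldr)

-- Grow [5,4,3,2,1] from the seed 5, stopping at 0.
countdown :: Int -> [Int]
countdown = unfoldr (\n -> if n <= 0 then Nothing else Just (n, n - 1))

-- Collapse the generated list back to a single value.
sumTo :: Int -> Int
sumTo = foldr (+) 0 . countdown
```

Composing an unfold with a fold like this is the simplest instance of the produce-then-consume pattern that the rest of the article generalises to graphs.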
<br />
<br />
The catamorphism/anamorphism abstractions have proven to be
very useful in practice. They're ubiquitous in Haskell
programming, either explicitly or implicitly (hidden in
higher-level operations). In this article I'll show how
those abstractions can be generalised to graph structures,
and argue that they are equally useful there.
<br />
The rest of the article assumes the reader is fairly familiar <br />
with fold and unfold. Fortunately there are many articles on the<br />
subject. For example you can refer to [[#bananas-lenses 5]] if you ever feel uncomfortable.<br />
<br />
=== Generalisation ===<br />
<br />
Let's examine how fold/unfold can be generalized for graphs.<br />
Since we are working on graphs instead of lists, we must account for<br />
<br />
1. Any number of children for a node;<br />
1. "Backwards" arcs (cycles);<br />
1. Labelled edges.<br />
<br />
The most relevant point being 2, of course.<br />
<br />
==== unfoldG ====<br />
<br />
From the above, we can deduce that the type of unfoldG will be:<br />
<br />
{{{#!syntax haskell<br />
unfoldG :: (Ord s) => (s -> (n, [(e, s)])) -> s -> (Vertex, LabGraph n e)<br />
unfoldG f r = (r', res)<br />
where ([r'], res) = unfoldGMany f [r]<br />
}}}<br />
where {{{s}}} is the seed type, {{{n}}} the type of node labels, and {{{e}}} the type of edge labels.
<br />
The {{{Ord s}}} constraint reflects point 2 above.
It is needed because the {{{unfoldG}}} function must record every
seed value encountered.
Whenever a seed is seen a second time, {{{unfoldG}}} will recognize
it and create a "backward arc".
We use {{{Ord}}} instead of {{{Eq}}} because a mere equality test would rule out using {{{Data.Map}}}.
<br />
The attentive reader will note that we return an additional
{{{Vertex}}} value. This is needed to identify which node the root
seed corresponds to.
<br />
In order to get an intuitive feeling of how {{{unfoldG}}} works,<br />
let's examine a simple example.<br />
<br />
{{{#!syntax haskell
gr1 :: LabGraph Int Char
(_, gr1) = unfoldG gen (0 :: Int)
  where gen x = (x, [('a', (x+1) `mod` 10), ('b', (x+2) `mod` 10)])
}}}
<br />
{{{gr1}}} being defined as above, its structure is:<br />
<br />
attachment:gr1.png<br />
<br />
Because we might want to build a graph from a set of seeds <br />
instead of a single one, we will also need the following function:<br />
{{{#!syntax haskell<br />
unfoldGMany :: (Ord s) => (s -> (n, [(e, s)])) -> [s] -> ([Vertex], LabGraph n e)<br />
unfoldGMany f roots = runST ( unfoldGManyST f roots ) -- detailed later<br />
}}} <br />
<br />
{{{unfoldG}}}, alone, is already a very practical tool, because it
lets you reify a function ({{{a -> a}}}) as a graph. The graph can then be examined,
processed, etc., whereas the function can only be evaluated.
<br />
==== foldG ====<br />
<br />
On a graph, the catamorphism (fold) type will become:<br />
{{{#!syntax haskell<br />
foldG :: (Eq r) => r -> (Vertex -> [(e, r)] -> r) -> Graph e -> Vertex -> r<br />
foldG i f g v = foldGAll i f g ! v<br />
}}}<br />
<br />
As for {{{unfoldG}}}, the {{{foldG}}}
function must include a special mechanism to handle cycles.
The idea is to apply the operation iteratively until the result
converges. The purpose of the first
parameter is to "bootstrap" the process:
it will be used as the initial value.
<br />
Thus, {{{foldG i f g v}}} will iteratively
apply {{{f}}} to the nodes of graph {{{g}}},
using {{{i}}} as the "bottom" value. It will return
the value computed at vertex {{{v}}}.
Of course, this will work only if {{{f}}} is well-behaved:
it must converge at some point.
I won't delve into the theoretical details
here; see [[#cycle-therapy 1]] for a
formal explanation.
<br />
Notice that {{{foldG}}} can work on a graph without node labels. <br />
If the parameter function needs to access node labels, it can <br />
do so without {{{foldG}}} needing to know.<br />
<br />
It's also worth noting that, in our implementation, the
information is propagated in the reverse direction of the arcs.
<br />
It's very common to need the result value for each vertex,
hence the need for:
<br />
{{{#!syntax haskell<br />
foldGAll :: (Eq r) => r -> (Vertex -> [(e, r)] -> r) -> Graph e -> Table r<br />
}}}<br />
<br />
The implementation of these functions doesn't matter much.
The point of the article is not how they can be implemented,
but how they can be used for daily programming tasks.
For completeness, though, we'll provide a
sample implementation at the end of the article.
<br />
== Data Structure & Accessors ==<br />
<br />
Without further ado, let's define the data structures we'll work on. <br />
<br />
<br />
{{{#!syntax haskell<br />
type Vertex = Int<br />
type Table a = Array Vertex a<br />
type Graph e = Table [(e, Vertex)]<br />
type Bounds = (Vertex, Vertex)<br />
type Edge e = (Vertex, e, Vertex)<br />
}}}<br />
A graph is a mere adjacency list table, tagged with edge labels.<br />
<br />
The above structure lacks labels for nodes.<br />
This is easily fixed by adding a labeling (or coloring) function.<br />
{{{#!syntax haskell<br />
type Labeling a = Vertex -> a<br />
data LabGraph n e = LabGraph (Graph e) (Labeling n)<br />
<br />
vertices (LabGraph gr _) = indices gr<br />
<br />
labels (LabGraph gr l) = map l (indices gr)<br />
}}}<br />
<br />
<br />
The above departs slightly from what's prescribed in [[#cycle-therapy 1]]. Instead of
a ''true graph'' built by knot-tying, we chose to use an {{{Array}}}
with integers as explicit vertex references.
This closely follows
Data.Graph in the hierarchical libraries,
the only difference being that we have labelled edges.
<br />
Not only is this simpler, it also has the advantage that we can reuse
most of the algorithms from Data.Graph with only minor changes:
<br />
{{{#!syntax haskell<br />
-- | Build a graph from a list of edges.<br />
buildG :: Bounds -> [Edge e] -> Graph e<br />
buildG bounds0 edges0 = accumArray (flip (:)) [] bounds0 [(v, (l,w)) | (v,l,w) <- edges0]<br />
<br />
-- | The graph obtained by reversing all edges.<br />
transposeG :: Graph e -> Graph e<br />
transposeG g = buildG (bounds g) (reverseE g)<br />
<br />
reverseE :: Graph e -> [Edge e]<br />
reverseE g = [ (w, l, v) | (v, l, w) <- edges g ]<br />
}}}<br />
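As a quick sanity check, these definitions can be exercised on a small example. The sketch below repeats the type synonyms and functions from above so that it stands alone; {{{g12}}}, a two-vertex cycle, is our own toy graph, not from the article.

```haskell
import Data.Array

type Vertex = Int
type Table a = Array Vertex a
type Graph e = Table [(e, Vertex)]
type Bounds = (Vertex, Vertex)
type Edge e = (Vertex, e, Vertex)

-- | Build a graph from a list of edges.
buildG :: Bounds -> [Edge e] -> Graph e
buildG bounds0 edges0 = accumArray (flip (:)) [] bounds0 [(v, (l, w)) | (v, l, w) <- edges0]

edges :: Graph e -> [Edge e]
edges g = [(v, l, w) | v <- indices g, (l, w) <- g ! v]

-- | The graph obtained by reversing all edges.
transposeG :: Graph e -> Graph e
transposeG g = buildG (bounds g) [(w, l, v) | (v, l, w) <- edges g]

-- A two-vertex cycle with labelled arcs (our own example).
g12 :: Graph Char
g12 = buildG (1, 2) [(1, 'x', 2), (2, 'y', 1)]
```

Transposing swaps the endpoints of each labelled arc, and transposing twice gives back the original graph.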
<br />
<br />
However, as previously said, we'll try to abstract
away from the details of the structure.
This is not always possible, but in such cases
I believe the array representation to be
a good choice, because it's easy to work with.
If anything, one can readily use all the
standard array functions.
<br />
For example, here's the function to output a graph as a GraphViz file:<br />
{{{#!syntax haskell<br />
showGraphViz (LabGraph gr lab) =
    "digraph name {\n" ++
    "rankdir=LR;\n" ++
    (concatMap showNode $ indices gr) ++
    (concatMap showEdge $ edges gr) ++
    "}\n"
  where showEdge (from, t, to) = show from ++ " -> " ++ show to ++
                                 " [label = \"" ++ show t ++ "\"];\n"
        showNode v = show v ++ " [label = " ++ (show $ lab v) ++ "];\n"

edges :: Graph e -> [Edge e]
edges g = [ (v, l, w) | v <- indices g, (l, w) <- g!v ]
}}}
<br />
<br />
== Applications ==<br />
<br />
I'll now enumerate a few problems where the "origami" approach can be applied successfully.<br />
<br />
=== Closure ===<br />
<br />
A simple application (special case) of {{{unfoldG}}} is the
computation of the transitive closure of a non-deterministic function.
<br />
{{{#!syntax haskell
closure :: Ord a => (a -> [a]) -> (a -> [a])
closure f i = labels $ snd $ unfoldG f' i
  where f' x = (x, [((), fx) | fx <- f x])
}}}
<br />
In this context, "non-deterministic" means that it yields many
values, as a list. As noted before, this will work only when
everything remains finite in size.
<br />
<br />
For example, if we define<br />
<br />
{{{#!syntax haskell<br />
interleave (x1:x2:xs) = (x1:x2:xs) : (map (x2:) (interleave (x1:xs)))<br />
interleave xs = [xs]<br />
<br />
interleave "abcd" ==> ["abcd","bacd","bcad","bcda"]<br />
}}}<br />
<br />
a very bad way to compute the permutations of a list can be
<br />
{{{#!syntax haskell<br />
permutations = closure interleave<br />
<br />
permutations "abcd" ==> ["abcd","bacd","acbd","cabd","abdc","badc", <br />
"adbc","dabc","dbac","bdac","dacb","adcb",<br />
"dcab","cdab","cadb","acdb","cdba","dcba",<br />
"cbda","bcda","bdca","dbca","bcad","cbad"]<br />
}}}<br />
<br />
But sometimes the function to 'close' is more complicated than {{{interleave}}} and<br />
then {{{closure}}} becomes really useful.<br />
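The graph machinery is not essential to the closure idea itself: a plain worklist over a {{{Data.Set}}} computes the same set of labels. The standalone sketch below ({{{closureSet}}} is a hypothetical stand-in for the {{{unfoldG}}}-based {{{closure}}}) reproduces the permutations example.

```haskell
import qualified Data.Set as Set

-- Transitive closure of a non-deterministic function, as a plain
-- worklist over a Set instead of a reified graph.
closureSet :: Ord a => (a -> [a]) -> a -> Set.Set a
closureSet f i = go (Set.singleton i) [i]
  where
    go seen []     = seen
    go seen (x:xs) =
      let new = [y | y <- f x, not (y `Set.member` seen)]
      in go (foldr Set.insert seen new) (new ++ xs)

-- The article's interleave function, repeated verbatim.
interleave :: [a] -> [[a]]
interleave (x1:x2:xs) = (x1:x2:xs) : map (x2:) (interleave (x1:xs))
interleave xs         = [xs]
```

On {{{interleave}}}, {{{closureSet interleave "abcd"}}} yields the same 24 strings as listed above.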
<br />
<br />
=== Shortest Path ===<br />
<br />
Let us now examine the toy problem of finding the distance
to a given node from all the other nodes of the graph.
Most readers probably know Dijkstra's algorithm for
computing a solution to this problem. We will not try
to reproduce it here; instead, we will define the computation in terms of {{{foldG}}}.
<br />
Here it goes:<br />
{{{#!syntax haskell
-- | Compute the distance to v for every vertex of gr.
distsTo :: Vertex -> Graph Float -> Table Float
distsTo v gr = foldGAll infinite distance gr
  where infinite = 10000000 -- well, you get the idea
        distance v' neighbours
          | v == v'   = 0
          | otherwise = minimum [arcWeight + distV | (arcWeight, distV) <- neighbours]
}}}
<br />
This is so clear that it barely needs to be explained. :)
Just notice how the {{{distance}}} function assumes that the
distance is already computed for all of a vertex's neighbours.
This works because {{{foldG}}} will iterate until it finds the fixed point.
<br />
On this simple graph,<br />
<br />
{{{#!syntax haskell<br />
grDist = buildG (1,5) [(1,5.0,2), (2,5.0,3), (2,7.0,4), (3,5.0,4), (4,5.0,5), (4,3.0,1)]<br />
}}}<br />
<br />
attachment:grdist.png<br />
<br />
the result of
{{{#!syntax haskell
dists = distsTo 5 grDist
}}}
is
<br />
attachment:grdist2.png<br />
<br />
(labeling each node with its result, i.e. its distance to vertex {{{5}}})
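For the curious, the whole example can be run standalone. The sketch below repeats the type synonyms, {{{buildG}}}, and a naive fixed-point {{{foldGAll}}} in the spirit of the sample implementation from the Implementation section, then applies {{{distsTo}}} to {{{grDist}}}.

```haskell
import Data.Array

type Vertex = Int
type Table a = Array Vertex a
type Graph e = Table [(e, Vertex)]

buildG :: (Vertex, Vertex) -> [(Vertex, e, Vertex)] -> Graph e
buildG bnds es = accumArray (flip (:)) [] bnds [(v, (l, w)) | (v, l, w) <- es]

-- Naive fixed-point fold: re-evaluate every vertex until the table stabilizes.
foldGAll :: Eq r => r -> (Vertex -> [(e, r)] -> r) -> Graph e -> Table r
foldGAll bot f gr = fixedPoint updateTbl initialTbl
  where
    bnds = bounds gr
    initialTbl = listArray bnds (replicate (rangeSize bnds) bot)
    updateTbl tbl = listArray bnds [f v [(e, tbl ! w) | (e, w) <- gr ! v] | v <- indices gr]
    fixedPoint g x = let x' = g x in if x == x' then x else fixedPoint g x'

-- Distance to v from every vertex (assumes every vertex other than v
-- has at least one outgoing arc, as in grDist).
distsTo :: Vertex -> Graph Float -> Table Float
distsTo v gr = foldGAll infinite distance gr
  where
    infinite = 10000000
    distance v' neighbours
      | v == v'   = 0
      | otherwise = minimum [arcWeight + distV | (arcWeight, distV) <- neighbours]

grDist :: Graph Float
grDist = buildG (1, 5) [(1,5.0,2), (2,5.0,3), (2,7.0,4), (3,5.0,4), (4,5.0,5), (4,3.0,1)]
```

Evaluating {{{distsTo 5 grDist}}} converges in a handful of iterations to the distances shown in the picture.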
<br />
=== Finite Automaton ===<br />
<br />
Finite automata are basically graphs, so let's see how we can apply the
framework to their analysis.
<br />
First, let's define an automaton. For our purposes, it is a graph
of states and transitions, with one state marked as initial and some marked as final.
<br />
{{{#!syntax haskell<br />
type Automaton t = (Vertex, Graph t, Set Vertex) -- ^ Initial, transitions, finals<br />
}}}<br />
<br />
For starters, here is how the {{{showGraphViz}}} function can be applied to automaton display:<br />
<br />
{{{#!syntax haskell
automatonToGraphviz (i, gr, fs) = showGraphViz (LabGraph gr lab)
  where lab :: Labeling String
        lab v = (if v == i then (">" ++) else id) $
                (if v `Set.member` fs then (++ "|") else id) []
}}}
<br />
Nothing groundbreaking. We only label the nodes according to
their initial or final status.
<br />
{{{#!syntax haskell<br />
aut1 = (1, buildG (1,3) [(1,'a',2),(2,'a',2),(2,'b',2),(2,'c',3),(1,'a',3)], Set.fromList [3])<br />
}}}<br />
<br />
attachment:aut1.png<br />
<br />
A more interesting example is transforming a non-deterministic
automaton into an equivalent deterministic one. The underlying idea
is that non-deterministic execution of the automaton is equivalent
to deterministic execution on all possible transitions at once.
Refer to [[#hop&ull 6]] for details. This is relatively easy to do using {{{unfoldG}}}.
{{{#!syntax haskell
simpleGenerator f x = (x, f x)

nfaToDfa :: Ord e => Automaton e -> Automaton e
nfaToDfa (initial1, aut1, finals1) = (initial2, aut2, finals2)
  where (initial2, LabGraph aut2 mapping) = unfoldG (simpleGenerator build) seed
        seed = Set.singleton initial1
        build state = Map.toList $ Map.fromListWith Set.union $ map lift $
                      concatMap (aut1 !) $ Set.toList state
        lift (t, s) = (t, Set.singleton s)
        isFinal = setAny (`Set.member` finals1) . mapping
        finals2 = Set.fromList $ filter isFinal $ indices aut2
        setAny f = any f . Set.toList
}}}
<br />
The {{{build}}} function is the tricky part. Yet it's not as complicated as it seems: all it does is
1. find all nodes reachable from a set of nodes;
1. classify them by transition label;
1. build the target state-sets accordingly.
<br />
{{{#!syntax haskell<br />
aut2 = nfaToDfa aut1<br />
}}}<br />
<br />
attachment:aut2.png<br />
<br />
Another thing we may wish to compute is the set of
strings accepted by the automaton (aka the language it
defines). Most of the time this will be infinite, so
we will limit ourselves to strings of length at most {{{n}}}.
We need finiteness because otherwise {{{foldG}}} would not find
a fixed point: the string sets would keep growing indefinitely.
<br />
{{{#!syntax haskell
-- gather what's accepted at all final states
accepted n (initial1, aut1, finals1) = Set.unions [resultTable ! v | v <- Set.toList finals1]
  where resultTable = foldGAll Set.empty step (transposeG aut1)
        step v trans = Set.unions ((if v == initial1 then Set.singleton [] else Set.empty) :
                                   [Set.map ((++ [t]) . take (n-1)) s | (t, s) <- trans])
}}}
<br />
Notice that we need to reverse the graph arcs, otherwise the information propagates in the wrong direction.<br />
<br />
With <br />
{{{#!syntax haskell<br />
accAut1 = accepted 4 aut1<br />
accAut2 = accepted 4 aut2<br />
}}}<br />
we have <br />
{{{#!syntax haskell<br />
accAut1 == accAut2 == {"a","aaac","aabc","aac","abac","abbc","abc","ac"}<br />
}}}<br />
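As a cross-check, the language of {{{aut1}}} can also be computed by brute force: simulate the NFA directly on every string over {{{"abc"}}} of length at most 4. The standalone sketch below hard-codes {{{aut1}}}'s transitions as a plain list; the helper names are ours, not the article's.

```haskell
import Control.Monad (replicateM)
import qualified Data.Set as Set

-- aut1's transition table, as a plain list of arcs.
trans :: [(Int, Char, Int)]
trans = [(1,'a',2), (2,'a',2), (2,'b',2), (2,'c',3), (1,'a',3)]

-- One non-deterministic step: all states reachable on character c.
step :: Set.Set Int -> Char -> Set.Set Int
step ss c = Set.fromList [t | s <- Set.toList ss, (s', c', t) <- trans, s' == s, c' == c]

-- A word is accepted iff running it from state 1 can end in the final state 3.
accepts :: String -> Bool
accepts w = 3 `Set.member` foldl step (Set.singleton 1) w

-- All accepted strings over "abc" of length at most n.
acceptedUpTo :: Int -> Set.Set String
acceptedUpTo n = Set.fromList [w | k <- [0 .. n], w <- replicateM k "abc", accepts w]
```

{{{acceptedUpTo 4}}} agrees with the eight strings listed above.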
<br />
=== LALR Automaton ===<br />
<br />
Another area where I have applied graph (un)folding is LALR(1) parser generation. The detailed code
depends on too many things to fit in this article,
so we will only sketch how the pieces fit
together. Also, since a course on parsing is clearly beyond the scope of this article,
please refer to the dragon book [[#dragon 7]] for details on the method.
<br />
In the process of generating tables for an LALR automaton,
there are three steps amenable to implementation with {{{foldG}}} and {{{unfoldG}}}.
<br />
1. Construction of the closure of an LR-item kernel. This is very similar to the {{{closure}}} function described above, except that we don't discard the graph structure; it will be of use in step 3.
1. LR(0) automaton generation. Then again a use for {{{unfoldG}}}.
1. Propagation of the lookaheads. This is a fold over the whole graph of LR-items, basically using set union as the coalescing operation. It is very similar to the computation of accepted strings above.
<br />
<br />
== Implementation ==<br />
<br />
=== UnfoldG ===<br />
<br />
For the sake of completeness, here's how to implement the {{{unfoldG}}} function.<br />
<br />
The algorithm is effectively a depth-first search, written in imperative style.
The only difference is that the search graph is remembered and returned as the result.
<br />
{{{#!syntax haskell
unfoldGManyST :: (Ord a) => (a -> (c, [(b, a)]))
              -> [a] -> ST s ([Vertex], LabGraph c b)
unfoldGManyST gen seeds =
    do mtab <- newSTRef Map.empty
       allNodes <- newSTRef []
       vertexRef <- newSTRef firstId
       let allocVertex =
             do vertex <- readSTRef vertexRef
                writeSTRef vertexRef (vertex + 1)
                return vertex
       let cyc src =
             do probe <- memTabFind mtab src
                case probe of
                  Just result -> return result
                  Nothing -> do
                    v <- allocVertex
                    memTabBind src v mtab
                    let (lab, deps) = gen src
                    ws <- mapM (cyc . snd) deps
                    let res = (v, lab, zip (map fst deps) ws)
                    modifySTRef allNodes (res:)
                    return v
       mapM_ cyc seeds
       list <- readSTRef allNodes
       seedsResult <- (return . map fromJust) =<< mapM (memTabFind mtab) seeds
       lastId <- readSTRef vertexRef
       let cycamore = array (firstId, lastId - 1) [(i, k) | (i, _, k) <- list]
       let labels = array (firstId, lastId - 1) [(i, a) | (i, a, _) <- list]
       return (seedsResult, LabGraph cycamore (labels !))
  where firstId = 0 :: Vertex
        memTabFind mt key = return . Map.lookup key =<< readSTRef mt
        memTabBind key val mt = modifySTRef mt (Map.insert key val)
}}}
<br />
Notice how, the first time a seed is encountered, its corresponding vertex number is stored.
Whenever the seed is encountered again, the stored vertex is simply returned.
<br />
<br />
=== FoldG ===<br />
<br />
{{{#!syntax haskell
foldGAllImplementation bot f gr = finalTbl
  where finalTbl = fixedPoint updateTbl initialTbl
        initialTbl = listArray bnds (replicate (rangeSize bnds) bot)

        fixedPoint g x = fp x
          where fp z = if z == z' then z else fp z'
                  where z' = g z

        updateTbl tbl = listArray bnds $ map recompute $ indices gr
          where recompute v = f v [(b, tbl!k) | (b, k) <- gr!v]

        bnds = bounds gr
}}}
<br />
<br />
The proposed implementation of {{{foldG}}} is rather bold:
it just applies the coalescing
function repeatedly until it converges.

While this is not ideal, it's perfectly suited for a first-trial
implementation, or for when performance is not crucial.
<br />
If execution time becomes critical, more specialised
versions can be crafted.
In the case of the shortest-path algorithm, for example,
one could take advantage
of the nice properties of the coalescing function to use
a priority queue and greedily
find the fixed point. This would restore the familiar O(E log V) complexity of Dijkstra's algorithm.
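As an illustration of that idea, here is a hypothetical priority-queue version for the shortest-path example, using {{{Data.Set}}} as a simple priority queue over the reversed graph (so that distances flow toward the target vertex). None of this code appears in the article itself; it is a sketch of the specialisation suggested above.

```haskell
import qualified Data.Map as Map
import qualified Data.Set as Set

-- Dijkstra from a single source, with Data.Set as a makeshift priority queue.
dijkstra :: Map.Map Int [(Float, Int)] -> Int -> Map.Map Int Float
dijkstra adj src = go (Set.singleton (0, src)) Map.empty
  where
    go pq done = case Set.minView pq of
      Nothing -> done
      Just ((d, v), pq')
        | v `Map.member` done -> go pq' done      -- stale queue entry, skip
        | otherwise ->
            let done' = Map.insert v d done
                next  = [ (d + w, u)
                        | (w, u) <- Map.findWithDefault [] v adj
                        , not (u `Map.member` done') ]
            in go (foldr Set.insert pq' next) done'

-- grDist with its arcs reversed, so dijkstra revGr 5 gives distances *to* vertex 5.
revGr :: Map.Map Int [(Float, Int)]
revGr = Map.fromListWith (++)
          [ (w, [(l, v)])
          | (v, l, w) <- [(1,5.0,2), (2,5.0,3), (2,7.0,4), (3,5.0,4), (4,5.0,5), (4,3.0,1)] ]
```

On the example graph this computes the same distances as the {{{foldGAll}}} version, each vertex settled exactly once in order of increasing distance.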
<br />
<br />
== Conclusion ==<br />
<br />
The approach presented may not be excellent for controlling details of the implementation
and tuning run-time performance, but I think that's not the point
of Haskell programming anyway.
On the other hand, it is very good for quickly implementing
a large range of graph algorithms. The fact that it's mostly based on a
generalisation of fold and unfold should appeal to Haskell
programmers.
<br />
<br />
== References ==<br />
<br />
*[[Anchor(cycle-therapy)]] [1] ''Cycle Therapy: A Prescription for Fold and Unfold on Regular Trees'', F. Turbak and J.B. Wells, http://cs.wellesley.edu/~fturbak/pubs/ppdp01.pdf<br />
*[[Anchor(king-thesis)]] [2] ''Functional Programming and Graph Algorithms'', D. J. King, http://www.macs.hw.ac.uk/~gnik/publications <br />
*[[Anchor(induct)]] [3] ''Inductive Graphs and Functional Graph Algorithms'', Martin Erwig, http://web.engr.oregonstate.edu/~erwig/papers/abstracts.html<br />
*[[Anchor(dfs)]] [4] ''Structuring Depth-First Search Algorithms in Haskell'', D. J. King and John Launchbury, http://www.cse.ogi.edu/~jl/Papers/dfs.ps
*[[Anchor(bananas-lenses)]] [5] ''Functional Programming with Bananas, Lenses, Envelopes and Barbed Wire'', Erik Meijer, Maarten Fokkinga, Ross Paterson. http://citeseer.ist.psu.edu/meijer91functional.html<br />
*[[Anchor(hop&ull)]] [6] ''Introduction to Automata Theory, Languages, and Computation'', JE Hopcroft, and JD Ullman, http://www-db.stanford.edu/~ullman/ialc.html <br />
*[[Anchor(dragon)]] [7] ''Compilers: Principles, Techniques and Tools'', Alfred V. Aho, Ravi Sethi, and Jeffrey D. Ullman. (Addison-Wesley 1986; ISBN 0-201-10088-6)<br />
<br />
----<br />
CategoryArticle