HaskellWiki user contributions feed for Julianporter [en], retrieved 2015-06-30T20:31:36Z (MediaWiki 1.19.14+dfsg-1)

MapReduce with CloudHaskell (https://wiki.haskell.org/MapReduce_with_CloudHaskell), revised 2011-11-01T16:31:33Z by Julianporter: /* Storage */
<hr />
<div>[[Category:Applications]][[Category:Monad]][[Category:Libraries]][[Category:Concurrency]][[Category:Parallel]][[Category:Research]]<br />
<br />
==Introduction==<br />
<br />
This page documents my work on developing a proof-of-concept demonstrator for MapReduce, using [[GHC/CloudAndHPCHaskell|CloudHaskell]] to provide a framework for distributed applications, together with the monadic approach to MapReduce described [[MapReduce_as_a_monad|here]].<br />
<br />
==Status==<br />
<br />
===Storage===<br />
<br />
I have developed a very simple distributed storage service that provides what is needed to ship data to each processing node at the start of each round of processing, and then to assemble the nodes' outputs into the input for the next round.<br />
<br />
* Description of what I have done [http://jpembeddedsolutions.files.wordpress.com/2011/10/storage.pdf here]<br />
* A working application can be found at [http://github.com/Julianporter/Distributed-Haskell git://github.com/Julianporter/Distributed-Haskell.git]<br />
<br />
The code will be placed on Hackage once it is more robust.<br />
<br />
===Distributed job scheduler===<br />
<br />
Not yet started.<br />
<br />
[[User:Julianporter|julianporter]] 18:20, 31 October 2011 (UTC)</div>

MapReduce as a monad (https://wiki.haskell.org/MapReduce_as_a_monad), revised 2011-10-31T18:22:05Z by Julianporter: /* The monad transformer approach */
<hr />
<div>[[Category:Applications]][[Category:Monad]][[Category:Libraries]][[Category:Concurrency]][[Category:Parallel]][[Category:Research]]<br />
<br />
==Introduction==<br />
<br />
MapReduce is a general technique for massively parallel programming developed by Google. It takes its inspiration from ideas in functional programming, but has moved away from that paradigm towards a more imperative approach.<br />
I have noticed that MapReduce can be expressed naturally, using functional programming techniques, as a form of monad. The standard implementation of MapReduce is the Java-based Hadoop framework, which is very complex and somewhat temperamental; moreover, it requires Hadoop-specific code to be written into mappers and reducers. My prototype library takes about 100 lines of code and can wrap generic mapper / reducer functions.<br />
<br />
Having shown that we can implement MapReduce as a generalised monad, it turns out that we can generalise still further and define a <hask>MapReduceT</hask> monad transformer, so that there is a MapReduce type and operation associated with any monad. In particular, the <hask>State</hask> monad is just the MapReduce type of the monad <hask>Hom a</hask> of maps <hask>h -> a</hask>, where <hask>h</hask> is some fixed type.<br />
<br />
==Initial Approach==<br />
<br />
===Why a monad?===<br />
<br />
What the monadic implementation lets us do is the following:<br />
*Map and reduce look the same.<br />
*You can write a simple wrapper function that takes a mapper / reducer and wraps it in the monad, so authors of mappers / reducers do not need to know anything about the MapReduce framework: they can concentrate on their algorithms.<br />
*All of the guts of MapReduce are hidden in the monad's <hask>bind</hask> function.<br />
*The implementation is naturally parallel.<br />
*Making a MapReduce program is trivial:<br/><br />
<hask><br />
... >>= wrapMR mapper >>= wrapMR reducer >>= ...<br />
</hask><br/><br />
<br />
===Details===<br />
Full details of the implementation and sample code can be found [http://jpembeddedsolutions.wordpress.com/2011/04/02/mapreduce/ here]; this page gives only the highlights.<br />
<br />
====Generalised mappers / reducers====<br />
One can generalise MapReduce a bit, so that each stage (map, reduce, etc) becomes a function of signature<br/><br />
<hask><br />
a -> ([(s,a)] -> [(s',b)])<br />
</hask><br/><br />
where <hask>s</hask> and <hask>s'</hask> are data types and <hask>a</hask> and <hask>b</hask> are key values. <br />
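To make this concrete, here is a hypothetical word-counting map stage with this signature (my illustration, not from the original article). Note that, matching the library code below, each pair puts the value first and the key second.<br />

```haskell
-- Hypothetical example of a stage with signature a -> ([(s,a)] -> [(s',b)]).
-- Here s = String (an input line), a = () (a dummy key),
--      s' = Int (a count of 1), and b = String (the word, the new key).
wordCountMap :: () -> [(String, ())] -> [(Int, String)]
wordCountMap () kvs = [ (1, w) | (line, ()) <- kvs, w <- words line ]
```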
<br />
====Generalised Monad====<br />
Now, this is suggestive of a monad, but we can't use a monad ''per se'', because the transformation changes the key and value types, and we want to be able to access them separately. Therefore we do the following. <br />
<br />
Let <hask>m</hask> be a <hask>Monad'</hask>, a type with four parameters: <hask>m s a s' b</hask>.<br />
<br />
Generalise the monadic <hask>bind</hask> operation to:<br/><br />
<hask><br />
m s a s' b -> ( b -> m s' b s'' c ) -> m s a s'' c<br />
</hask><br/><br />
<br />
See [http://blog.sigfpe.com/2009/02/beyond-monads.html Parametrized monads].<br />
<br />
Then clearly the generalised mapper/reducer above can be written as a <hask>Monad'</hask>, meaning that we can write MapReduce as<br/><br />
<hask><br />
... >>= mapper >>= reducer >>= mapper' >>= reducer' >>= ...<br />
</hask><br />
<br />
====Implementation details====<br />
<br />
<hask><br />
class Monad' m where<br />
return :: a -> m s x s a<br />
(>>=) :: (Eq b) => m s a s' b -> ( b -> m s' b s'' c ) -> m s a s'' c<br />
<br />
newtype MapReduce s a s' b = MR { runMR :: ([(s,a)] -> [(s',b)]) }<br />
<br />
retMR :: a -> MapReduce s x s a<br />
retMR k = MR (\ss -> [(s,k) | s <- fst <$> ss])<br />
<br />
bindMR :: (Eq b,NFData s'',NFData c) => MapReduce s a s' b -> (b -> MapReduce s' b s'' c) -> MapReduce s a s'' c<br />
bindMR f g = MR (\s -><br />
let<br />
fs = runMR f s<br />
gs = P.map g $ nub $ snd <$> fs<br />
in<br />
concat $ map (\g' -> runMR g' fs) gs)<br />
</hask><br/><br />
The key point here is that <hask>P.map</hask> is a parallel version of the simple <hask>map</hask> function.<br />
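The article does not show how <hask>P.map</hask> is defined; one plausible sketch (an assumption on my part, not the author's code) uses <hask>parMap</hask> from the parallel package, which would also explain the <hask>NFData</hask> constraints on <hask>bindMR</hask>:<br />

```haskell
import Control.Parallel.Strategies (parMap, rdeepseq)
import Control.DeepSeq (NFData)

-- Assumed definition of P.map: apply f to each element and evaluate
-- each result to normal form in parallel (one spark per element).
pmap :: NFData b => (a -> b) -> [a] -> [b]
pmap = parMap rdeepseq
```

Semantically <hask>pmap f</hask> agrees with <hask>map f</hask>; only the evaluation strategy differs, so parallelism does not affect the monad laws.<br />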
<br />
Now we can write a wrapper function<br/><br />
<hask><br />
wrapMR :: (Eq a) => ([s] -> [(s',b)]) -> (a -> MapReduce s a s' b)<br />
wrapMR f = (\k -> MR (g k))<br />
where<br />
g k ss = f $ fst <$> filter (\s -> k == snd s) ss<br />
</hask><br/><br />
which takes a conventional mapper / reducer and wraps it in the <hask>Monad'</hask>. Note that this means that the mapper / reducer functions ''do not need to know anything about the way MapReduce is implemented''. So a standard MapReduce job becomes<br/><br />
<hask><br />
mapReduce :: [String] -> [(String,Int)]<br />
mapReduce state = runMapReduce mr state<br />
where<br />
mr = return () >>= wrapMR mapper >>= wrapMR reducer<br />
</hask><br/><br />
I have tested the implementation with the standard word-counter mapper and reducer, and it works perfectly (full code is available via the link above).<br />
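For reference, a minimal word-count mapper / reducer pair consistent with <hask>wrapMR</hask>'s type might look as follows; this is my reconstruction, and the author's actual code is at the link above. The mapper emits each word keyed by itself, so the reducer, which only sees the values sharing one key, still knows which word it is counting.<br />

```haskell
-- Reconstructed example, not the author's exact code.
-- The mapper receives all input lines as values and emits
-- (value, key) pairs, keying each word by itself:
mapper :: [String] -> [(String, String)]
mapper ls = [ (w, w) | l <- ls, w <- words l ]

-- The reducer receives the values sharing a single key (here,
-- n copies of one word) and emits a (word, count) pair:
reducer :: [String] -> [(String, Int)]
reducer [] = []
reducer ws = [(head ws, length ws)]
```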
<br />
==The monad transformer approach==<br />
<br />
Define the monad transformer type <hask>MapReduceT</hask> by:<br/><br />
<br />
<hask><br />
newtype (Monad m) => MapReduceT m t u = MR {run :: m t -> m u}<br />
</hask><br />
<br />
with operations<br/><br />
<br />
<hask><br />
lift :: (Monad m) => m t -> MapReduceT m t t<br />
lift x = MR (const x)<br />
<br />
return :: (Monad m) => t -> MapReduceT m t t<br />
return x = lift (return x)<br />
<br />
bind :: (Monad m) => MapReduceT m u u -> MapReduceT m t u -> (u -> MapReduceT m u v) -> MapReduceT m t v<br />
bind p f g = MR (\ xs -> ps xs >>= gs xs)<br />
where<br />
ps xs = (f >>> p) -< xs<br />
gs xs x = (f >>> g x) -< xs<br />
</hask><br />
<br />
where <hask> >>> </hask> and <hask> -< </hask> are the obvious arrow-like operations on <hask>MapReduceT</hask> types.<br />
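The paper linked below gives the precise definitions; plausible readings of these two operations, consistent with the <hask>MapReduceT</hask> type (my assumption), are left-to-right composition and application:<br />

```haskell
-- Datatype context omitted; it adds nothing to the definitions here.
newtype MapReduceT m t u = MR { run :: m t -> m u }

-- Assumed definitions: (>>>) composes two pipeline stages left to
-- right, and (-<) feeds a monadic value into a stage.
(>>>) :: MapReduceT m t u -> MapReduceT m u v -> MapReduceT m t v
f >>> g = MR (run g . run f)

(-<) :: MapReduceT m t u -> m t -> m u
f -< x = run f x
```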
<br />
Then we show in [http://media.jpembeddedsolutions.com/pdf/mrmonad.pdf this paper] that:<br />
* <hask>MapReduce = MapReduceT []</hask> with <hask> (>>=) = bind nub</hask><br />
* For a suitable choice of <hask>p</hask> the standard <hask>State</hask> monad is <hask>MapReduceT Hom</hask> where<br />
<br />
:<hask><br />
data Hom a b = H {run :: (a -> b)}<br />
<br />
return x = H (const x)<br />
f >>= g = H (\ x -> g' (f' x) x)<br />
where <br />
f' = run f <br />
g' x y = run (g x) y<br />
</hask><br />
<br />
==Future Directions==<br />
<br />
*My code so far runs concurrently and in multiple threads within a single OS image; it won't work on clustered systems. I have started work on this: see [[MapReduce_with_CloudHaskell|here]].<br />
*Currently all of the data is sent to all of the mappers / reducers at each iteration. This is okay on a single machine, but may be prohibitive on a cluster.<br />
<br />
I would welcome collaboration on taking this forward.<br />
<br />
[[User:Julianporter|julianporter]] 18:10, 31 October 2011 (UTC)</div>

User:Julianporter (https://wiki.haskell.org/User:Julianporter), revised 2011-10-31T18:13:54Z by Julianporter: /* MapReduce */
<hr />
<div>=About my work=<br />
==Background==<br />
My particular areas of interest in programming are:<br />
*Functional programming<br />
*Formal modelling / model based programming<br />
*Concurrency / cloud programming<br />
*Embedded systems<br />
I am also establishing a small business developing control systems and software for robots. The key idea is to make the robot part of the cloud rather than a stand-alone device. Further information:<br />
*[http://www.jpembedded.co.uk Company website]<br />
*[http://jpembeddedsolutions.wordpress.com Blog] (also contains research papers on Haskell and functional programming)<br />
<br />
==Current projects==<br />
<br />
===MapReduce===<br />
<br />
I am looking at ways of implementing MapReduce-type algorithms using the functional approach. The key insight is that a generalised MapReduce algorithm is simply the repeated application of a sequence of <hask> >>= </hask> operations in a suitable monad. There are two strands of activity:<br />
<br />
*Development of a [[MapReduce_as_a_monad|monadic view of MapReduce]]<br />
*Developing a [[MapReduce_with_CloudHaskell|proof-of-concept demonstrator for monadic MapReduce]], using [[GHC/CloudAndHPCHaskell|CloudHaskell]] as a framework for distributed Haskell applications.<br />
<br />
The second activity is undertaken with the support of the authors of CloudHaskell. I would be very happy if others joined in the development effort.<br />
<br />
===Catskell===<br />
<br />
I'm defining and then coding a language ([[Catskell]]) in the spirit of [http://lolcode.com/ LOLCODE] which is basically a feline-friendly subset of Haskell. My intention is to write a Catskell-to-Haskell translator. This should be a good exercise in making sure I really understand the language.<br />
<br />
=About me=<br />
<br />
By training I am a mathematician. I have been programming computers of some form or other since the early 1980s. I also have a keen interest in philosophy and music. My personal website is [http://www.porternet.org here].</div>

Catskell (https://wiki.haskell.org/Catskell), revised 2011-04-03T09:21:40Z by Julianporter: The CATSKELL manifesto
<hr />
<div>[[Category:Humor]]<br />
==O HAI!==<br />
For too long, the feline programming community has been restricted to imperative languages. Kittehs have been mewing, nay yowling, for a chance to use the paradigm of the future: functional programming.<br />
<br />
And now their time has come. For I am pleased to announce the foundation of the Catskell project. My intention is, in the spirit of [http://lolcode.com LOLCODE], to create a feline-friendly language which provides the core functionality of a pure functional language. Which might be suspiciously like Haskell if you happen to look at it closely enough . . . And when it is done, I will have proved that I am not just any old nerd, but a nerd who is a friend to kittehs everywhere.<br />
<br />
==O RLY?==<br />
And no, this is no joke. In technical terms, what I intend to do is to:<br />
<br />
*Define a language which corresponds with core Haskell and its basic types (atomic types, algebraic types, list), monads (including the IO monad) and basic library functions (folds, map, filter, list manipulation)<br />
*Write a translator, which will convert Catskell code into Haskell, allowing it to be compiled and executed<br />
*(If I’m feeling really brave) an interpreter like GHCi<br />
<br />
==SRSLY==<br />
<br />
So there. Work has already begun. If you want to take part, get in touch, and we will see what we can do.<br />
<br />
==KTHXBAI==</div>Julianporterhttps://wiki.haskell.org/MapReduce_as_a_monadMapReduce as a monad2011-04-03T09:18:24Z<p>Julianporter: /* Implementation details */</p>
<hr />
<div>[[Category:Applications]][[Category:Monad]][[Category:Libraries]][[Category:Concurrency]][[Category:Parallel]][[Category:Research]]<br />
<br />
==Introduction==<br />
<br />
MapReduce is a general technique for massively parallel programming developed by Google. It takes its inspiration from ideas in functional programming, but has moved away from that paradigm to a more imperative approach.<br />
I have noticed that MapReduce can be expressed naturally, using functional programming techniques, as a form of monad.<br />
<br />
The standard implementation of MapReduce is the JAVA-based HADOOP framework, which is very complex and somewhat temperamental. Moreover, it is necessary to write HADOOP-specific code into mappers and reducers. My prototype library takes about 100 lines of code and can wrap generic mapper / reducer functions.<br />
<br />
==Why a monad?==<br />
<br />
What the monadic implementation lets us do is the following:<br />
*Map and reduce look the same.<br />
*You can write a simple wrapper function that takes a mapper / reducer and wraps it in the monad, so authors of mappers / reducers do not need to know anything about the MapReduce framework: they can concentrate on their algorithms.<br />
*All of the guts of MapReduce are hidden in the monad's <hask>bind</hask> function<br />
*The implementation is naturally parallel<br />
*Making a MapReduce program is trivial:<br/><br />
<hask><br />
... >>= wrapMR mapper >>= wrapMR reducer >>= ...<br />
</hask><br/><br />
<br />
==Details==<br />
Full details of the implementation and sample code can be found [http://jpembeddedsolutions.wordpress.com/2011/04/02/mapreduce/ here]. I'll just give highlights here.<br />
<br />
===Generalised mappers / reducers===<br />
One can generalise MapReduce a bit, so that each stage (map, reduce, etc) becomes a function of signature<br/><br />
<hask><br />
a -> ([(s,a)] -> [(s',b)])<br />
</hask><br/><br />
where <hask>s</hask> and <hask>s'</hask> are data types and <hask>a</hask> and <hask>b</hask> are key values. <br />
<br />
===Generalised Monad===<br />
Now, this is suggestive of a monad, but we can't use a monad ''per se'', because the transformation changes the key and value types, and we want to be able to access them separately. Therefore we do the following. <br />
<br />
Let <hask>m</hask> be a <hask>Monad'</hask>, a type with four parameters: <hask>m s a s' b</hask>.<br />
<br />
Generalise the monadic <hask>bind</hask> operation to:<br/><br />
<hask><br />
m s a s' b -> ( b -> m s' b s'' c ) -> m s a s'' c<br />
</hask><br/><br />
Then clearly the generalised mapper/reducer above can be written as a <hask>Monad'</hask>, meaning that we can write MapReduce as<br/><br />
<hask><br />
... >>= mapper >>= reducer >>= mapper' >>= reducer' >>= ...<br />
</hask><br />
<br />
[[User:Julianporter|julianporter]] 18:32, 2 April 2011 (UTC)</div>Julianporterhttps://wiki.haskell.org/MapReduce_as_a_monadMapReduce as a monad2011-04-03T09:18:02Z<p>Julianporter: /* Generalised Monad */</p>
<hr />
<div>[[Category:Applications]][[Category:Monad]][[Category:Libraries]][[Category:Concurrency]][[Category:Parallel]][[Category:Research]]<br />
<br />
==Introduction==<br />
<br />
MapReduce is a general technique for massively parallel programming developed by Google. It takes its inspiration from ideas in functional programming, but has moved away from that paradigm to a more imperative approach.<br />
I have noticed that MapReduce can be expressed naturally, using functional programming techniques, as a form of monad.<br />
<br />
The standard implementation of MapReduce is the Java-based Hadoop framework, which is very complex and somewhat temperamental. Moreover, it forces Hadoop-specific code into mappers and reducers. My prototype library takes about 100 lines of code and can wrap generic mapper / reducer functions.<br />
<br />
==Why a monad?==<br />
<br />
What the monadic implementation lets us do is the following:<br />
*Map and reduce look the same.<br />
*You can write a simple wrapper function that takes a mapper / reducer and wraps it in the monad, so authors of mappers / reducers do not need to know anything about the MapReduce framework: they can concentrate on their algorithms.<br />
*All of the guts of MapReduce are hidden in the monad's <hask>bind</hask> function.<br />
*The implementation is naturally parallel.<br />
*Making a MapReduce program is trivial:<br/><br />
<hask><br />
... >>= wrapMR mapper >>= wrapMR reducer >>= ...<br />
</hask><br/><br />
<br />
==Details==<br />
Full details of the implementation and sample code can be found [http://jpembeddedsolutions.wordpress.com/2011/04/02/mapreduce/ here]. What follows are just the highlights.<br />
<br />
===Generalised mappers / reducers===<br />
One can generalise MapReduce a bit, so that each stage (map, reduce, etc.) becomes a function with the signature<br/><br />
<hask><br />
a -> ([(s,a)] -> [(s',b)])<br />
</hask><br/><br />
where <hask>s</hask> and <hask>s'</hask> are data types and <hask>a</hask> and <hask>b</hask> are key types. <br />
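For concreteness (an illustrative choice of types, not taken from the original), a word-count mapper stage could take <hask>s = String</hask> (a line of input), <hask>a = ()</hask> (a dummy initial key), <hask>s' = String</hask> (a word) and <hask>b = String</hask> (each word serving as its own key):<br />

```haskell
-- Hypothetical mapper stage: given its key (here the dummy ()), it
-- consumes all (line, key) pairs and emits every word keyed by itself.
mapperStage :: () -> ([(String, ())] -> [(String, String)])
mapperStage _ ss = [(w, w) | (line, _) <- ss, w <- words line]
```

For example, <hask>mapperStage () [("a b",()),("b",())]</hask> evaluates to <hask>[("a","a"),("b","b"),("b","b")]</hask>.<br />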
<br />
===Generalised Monad===<br />
Now, this is suggestive of a monad, but we can't use a monad ''per se'', because the transformation changes the key and value types, and we want to be able to access them separately. Therefore we do the following. <br />
<br />
Let <hask>m</hask> be a <hask>Monad'</hask>, a type with four parameters: <hask>m s a s' b</hask>.<br />
<br />
Generalise the monadic <hask>bind</hask> operation to:<br/><br />
<hask><br />
m s a s' b -> ( b -> m s' b s'' c ) -> m s a s'' c<br />
</hask><br/><br />
Then clearly the generalised mapper/reducer above can be written as a <hask>Monad'</hask>, meaning that we can write MapReduce as<br/><br />
<hask><br />
... >>= mapper >>= reducer >>= mapper' >>= reducer' >>= ...<br />
</hask><br />
<br />
===Implementation details===<br />
<br />
<hask><br />
-- Needs: import Data.List (nub); a parallel map P.map (e.g. parMap<br />
-- from Control.Parallel.Strategies, hence the NFData contexts)<br />
class Monad' m where<br />
    return :: a -> m s x s a<br />
    (>>=) :: (Eq b) => m s a s' b -> ( b -> m s' b s'' c ) -> m s a s'' c<br />
<br />
newtype MapReduce s a s' b = MR { runMR :: [(s,a)] -> [(s',b)] }<br />
<br />
-- return: attach the key k to every datum, discarding the old keys<br />
retMR :: a -> MapReduce s x s a<br />
retMR k = MR (\ss -> [(s,k) | s <- fst <$> ss])<br />
<br />
-- bind: run f, then run g once for each distinct key in f's output<br />
bindMR :: (Eq b,NFData s'',NFData c) => MapReduce s a s' b -> (b -> MapReduce s' b s'' c) -> MapReduce s a s'' c<br />
bindMR f g = MR (\s -><br />
    let<br />
        fs = runMR f s<br />
        gs = P.map g $ nub $ snd <$> fs<br />
    in<br />
        concat $ map (\g' -> runMR g' fs) gs)<br />
</hask><br/><br />
The key point here is that <hask>P.map</hask> is a parallel version of the simple <hask>map</hask> function.<br />
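<hask>P.map</hask> itself is not shown in the excerpt. One plausible definition (an assumption on my part: <hask>parMap</hask> from the <hask>parallel</hask> package, which would also account for the <hask>NFData</hask> constraints on <hask>bindMR</hask>) is:<br />

```haskell
import Control.Parallel.Strategies (parMap, rdeepseq)
import Control.DeepSeq (NFData)

-- Hypothetical stand-in for P.map: evaluate each element of the result
-- to normal form in parallel. Semantically identical to Prelude map.
pMap :: NFData b => (a -> b) -> [a] -> [b]
pMap = parMap rdeepseq
```

Whatever the actual definition, it must agree with <hask>map</hask> on values; parallelism only changes the evaluation order.<br />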
<br />
Now we can write a wrapper function<br/><br />
<hask><br />
wrapMR :: (Eq a) => ([s] -> [(s',b)]) -> (a -> MapReduce s a s' b)<br />
wrapMR f = (\k -> MR (g k))<br />
    where<br />
        -- feed f only the values whose key matches k<br />
        g k ss = f $ fst <$> filter (\s -> k == snd s) ss<br />
</hask><br/><br />
which takes a conventional mapper / reducer and wraps it in the <hask>Monad'</hask>. Note that this means that the mapper / reducer functions ''do not need to know anything about the way MapReduce is implemented''. So a standard MapReduce job becomes<br/><br />
<hask><br />
mapReduce :: [String] -> [(String,Int)]<br />
mapReduce state = runMR mr [(s,()) | s <- state]  -- pair each input with a dummy key<br />
    where<br />
        mr = return () >>= wrapMR mapper >>= wrapMR reducer<br />
</hask><br/><br />
I have tested the implementation with the standard word-count mapper and reducer, and it works perfectly (full code is available via the link above).<br />
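The word-count pair itself is not reproduced on this page; a minimal, self-contained sketch (the <hask>mapper</hask> / <hask>reducer</hask> bodies are my assumptions, and plain <hask>map</hask> stands in for <hask>P.map</hask> so that it runs with base alone) might look like:<br />

```haskell
import Data.List (nub)

-- The page's machinery, inlined; P.map is replaced by the ordinary
-- map, which changes performance but not results.
newtype MapReduce s a s' b = MR { runMR :: [(s, a)] -> [(s', b)] }

retMR :: a -> MapReduce s x s a
retMR k = MR (\ss -> [(s, k) | s <- fst <$> ss])

bindMR :: Eq b => MapReduce s a s' b -> (b -> MapReduce s' b s'' c) -> MapReduce s a s'' c
bindMR f g = MR (\s ->
    let fs = runMR f s
        gs = map g (nub (snd <$> fs))
    in  concat (map (\g' -> runMR g' fs) gs))

wrapMR :: Eq a => ([s] -> [(s', b)]) -> (a -> MapReduce s a s' b)
wrapMR f k = MR (\ss -> f (fst <$> filter (\s -> k == snd s) ss))

-- Assumed word-count stages: the mapper keys every word by itself;
-- the reducer counts the occurrences of its (single, shared) key.
mapper :: [String] -> [(String, String)]
mapper ls = [(w, w) | l <- ls, w <- words l]

reducer :: [String] -> [(String, Int)]
reducer ws = [(head ws, length ws)]

wordCount :: [String] -> [(String, Int)]
wordCount ls = runMR mr [(l, ()) | l <- ls]
  where
    mr = retMR () `bindMR` wrapMR mapper `bindMR` wrapMR reducer
```

Here <hask>wordCount ["the cat sat","the mat"]</hask> evaluates to <hask>[("the",2),("cat",1),("sat",1),("mat",1)]</hask>.<br />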
<br />
==Future Directions==<br />
<br />
*My code so far runs concurrently and in multiple threads within a single OS image. It won't work on clustered systems. This is clearly where work should go next.<br />
*Currently all of the data is sent to all of the mappers / reducers at each iteration. This is okay on a single machine, but may be prohibitive on a cluster.<br />
<br />
I would welcome collaboration on taking this forward.<br />
<br />
[[User:Julianporter|julianporter]] 18:32, 2 April 2011 (UTC)</div>Julianporterhttps://wiki.haskell.org/User:JulianporterUser:Julianporter2011-04-03T09:10:30Z<p>Julianporter: /* Current projects */</p>
<hr />
<div>=About my work=<br />
==Background==<br />
My particular areas of interest in programming are:<br />
*Functional programming<br />
*Formal modelling / model based programming<br />
*Concurrency / cloud programming<br />
*Embedded systems<br />
I am also establishing a small business developing control systems and software for robots. The key idea is to make the robot part of the cloud rather than a stand-alone device. Further information:<br />
*[http://www.jpembedded.co.uk Company website]<br />
*[http://jpembeddedsolutions.wordpress.com Blog] (also contains research papers on Haskell and functional programming)<br />
<br />
==Current projects==<br />
<br />
*I have developed a simple implementation of MapReduce using a modified form of monad. It's described [[MapReduce_as_a_monad|here]]. I would be very happy if others joined in the development effort.<br />
*I'm defining and then coding a language ([[Catskell]]) in the spirit of [http://lolcode.com/ LOLCODE] which is basically a feline-friendly subset of Haskell. My intention is to write a Catskell-to-Haskell translator. This should be a good exercise in making sure I really understand the language.<br />
<br />
=About me=<br />
<br />
By training I am a mathematician. I have been programming computers of some form or other since the early 1980s. I also have a keen interest in philosophy and music. My personal website is [http://www.porternet.org here].</div>Julianporterhttps://wiki.haskell.org/Talk:MapReduce_as_a_monadTalk:MapReduce as a monad2011-04-02T21:16:17Z<p>Julianporter: /* Monads */</p>
<hr />
<div>== Monads ==<br />
<br />
Why is it mapreduce as a ''monad''? Map just requires Functor, and reduce sounds like `mappend`, so it'd just be MapReduce as a monoid. --[[User:Gwern|Gwern]] 20:31, 2 April 2011 (UTC)<br />
<br />
Because the key point is that both Map and Reduce can be seen as monadic functions, and so then MapReduce is just a matter of repeated bind operations. Think of it as a generalised State monad. [[User:Julianporter|julianporter]] 21:16, 2 April 2011 (UTC)</div>Julianporterhttps://wiki.haskell.org/User:JulianporterUser:Julianporter2011-04-02T18:37:38Z<p>Julianporter: /* Background */</p>
<hr />
<div>=About my work=<br />
==Background==<br />
My particular areas of interest in programming are:<br />
*Functional programming<br />
*Formal modelling / model based programming<br />
*Concurrency / cloud programming<br />
*Embedded systems<br />
I am also establishing a small business developing control systems and software for robots. The key idea is to make the robot part of the cloud rather than a stand-alone device. Further information:<br />
*[http://www.jpembedded.co.uk Company website]<br />
*[http://jpembeddedsolutions.wordpress.com Blog] (also contains research papers on Haskell and functional programming)<br />
<br />
==Current projects==<br />
<br />
*I have developed a simple implementation of MapReduce using a modified form of monad. It's described [[MapReduce_as_a_monad|here]]. I would be very happy if others joined in the development effort.<br />
<br />
=About me=<br />
<br />
By training I am a mathematician. I have been programming computers of some form or other since the early 1980s. I also have a keen interest in philosophy and music. My personal website is [http://www.porternet.org here].</div>Julianporterhttps://wiki.haskell.org/MapReduce_as_a_monadMapReduce as a monad2011-04-02T18:35:54Z<p>Julianporter: /* Details */</p>
<hr />
<div>[[Category:Applications]][[Category:Monad]][[Category:Libraries]][[Category:Concurrency]][[Category:Parallel]][[Category:Research]]<br />
<br />
==Introduction==<br />
<br />
MapReduce is a general technique for massively parallel programming developed by Google. It takes its inspiration from ideas in functional programming, but has moved away from that paradigm to a more imperative approach.<br />
I have noticed that MapReduce can be expressed naturally, using functional programming techniques, as a form of monad.<br />
<br />
The standard implementation of MapReduce is the JAVA-based HADOOP framework, which is very complex and somewhat temperamental. Moreover, it is necessary to write HADOOP-specific code into mappers and reducers. My prototype library takes about 100 lines of code and can wrap generic mapper / reducer functions.<br />
<br />
==Details==<br />
Full details of the implementation and sample code can be found [http://jpembeddedsolutions.wordpress.com/2011/04/02/mapreduce/ here]. I'll just give highlights here.<br />
<br />
===Generalised mappers / reducers===<br />
One can generalise MapReduce a bit, so that each stage (map, reduce, etc) becomes a function of signature<br/><br />
<hask><br />
a -> ([(s,a)] -> [(s',b)])<br />
</hask><br/><br />
where <hask>s</hask> and <hask>s'</hask> are data types and <hask>a</hask> and <hask>b</hask> are key values. <br />
<br />
===Generalised Monad===<br />
Now, this is suggestive of a monad, but we can't use a monad per se, because the transformation changes the key and value types, and we want to be able to access them separately. Therefore we do the following. Let <hask>m</hask> be a <hask>Monad'</hask>, a type with four parameters. Generalise monadic binding to:<br/><br />
<hask><br />
m s a s' b -> ( b -> m s' b s'' c ) -> m s a s'' c<br />
</hask><br/><br />
Then clearly the generalised mapper/reducer above can be written as a <hask>Monad'</hask>, meaning that we can write MapReduce as<br/><br />
<hask><br />
... >>= mapper >>= reducer >>= mapper' >>= reducer' >>= ...<br />
</hask><br />
<br />
===Implementation details===<br />
<br />
<hask><br />
class Monad' m where<br />
return :: a -> m s x s a<br />
(>>=) :: (Eq b) => m s a s' b -> ( b -> m s' b s'' c ) -> m s a s'' c<br />
<br />
newtype MapReduce s a s' b = MR { runMR :: ([(s,a)] -> [(s',b)]) }<br />
<br />
retMR :: a -> MapReduce s x s a<br />
retMR k = MR (\ss -> [(s,k) | s <- fst <$> ss])<br />
<br />
bindMR :: (Eq b,NFData s'',NFData c) => MapReduce s a s' b -> (b -> MapReduce s' b s'' c) -> MapReduce s a s'' c<br />
bindMR f g = MR (\s -><br />
let<br />
fs = runMR f s<br />
gs = P.map g $ nub $ snd <$> fs<br />
in<br />
concat $ map (\g' -> runMR g' fs) gs)<br />
</hask><br/><br />
The key point here is that <hask>P.map</hask> is a parallel version of the simple <hask>map</hask> function.<br />
<br />
Now we can write a wrapper function<br/><br />
<hask><br />
wrapMR :: (Eq a) => ([s] -> [(s',b)]) -> (a -> MapReduce s a s' b)<br />
wrapMR f = (\k -> MR (g k))<br />
where<br />
g k ss = f $ fst <$> filter (\s -> k == snd s) ss<br />
</hask><br/><br />
which takes a conventional mapper / reducer and wraps it in the <hask>Monad'</hask>. Note that this means that the mapper / reducer functions ''do not need to know anything about the way MapReduce is implemented''. So a standard MapReduce job becomes<br/><br />
<hask><br />
mapReduce :: [String] -> [(String,Int)]<br />
mapReduce state = runMapReduce mr state<br />
where<br />
mr = return () >>= wrapMR mapper >>= wrapMR reducer<br />
</hask></br><br />
I have tested the implementation with the standard word-counter mapper and reducer, and it works perfectly (full code is available via the link above).<br />
<br />
==Future Directions==<br />
<br />
*My code so far runs concurrently and in multiple threads within a single OS image. It won't work on clustered systems. This is clearly where work should go next.<br />
*Currently all of the data is sent to all of the mappers / reducers at each iteration. This is okay on a single machine, but may be prohibitive on a cluster.<br />
<br />
I would be eager for collaborative working on taking this forward.<br />
<br />
[[User:Julianporter|julianporter]] 18:32, 2 April 2011 (UTC)</div>Julianporterhttps://wiki.haskell.org/MapReduce_as_a_monadMapReduce as a monad2011-04-02T18:34:41Z<p>Julianporter: /* Implementation details */</p>
<hr />
<div>[[Category:Applications]][[Category:Monad]][[Category:Libraries]][[Category:Concurrency]][[Category:Parallel]][[Category:Research]]<br />
<br />
==Introduction==<br />
<br />
MapReduce is a general technique for massively parallel programming developed by Google. It takes its inspiration from ideas in functional programming, but has moved away from that paradigm to a more imperative approach.<br />
I have noticed that MapReduce can be expressed naturally, using functional programming techniques, as a form of monad.<br />
<br />
The standard implementation of MapReduce is the JAVA-based HADOOP framework, which is very complex and somewhat temperamental. Moreover, it is necessary to write HADOOP-specific code into mappers and reducers. My prototype library takes about 100 lines of code and can wrap generic mapper / reducer functions.<br />
<br />
==Details==<br />
Full details of the implementation can be found [http://jpembeddedsolutions.files.wordpress.com/2011/04/mapreduceweb.pdf here]. I'll just give highlights here.<br />
<br />
===Generalised mappers / reducers===<br />
One can generalise MapReduce a bit, so that each stage (map, reduce, etc) becomes a function of signature<br/><br />
<hask><br />
a -> ([(s,a)] -> [(s',b)])<br />
</hask><br/><br />
where <hask>s</hask> and <hask>s'</hask> are data types and <hask>a</hask> and <hask>b</hask> are key values. <br />
<br />
===Generalised Monad===<br />
Now, this is suggestive of a monad, but we can't use a monad per se, because the transformation changes the key and value types, and we want to be able to access them separately. Therefore we do the following. Let <hask>m</hask> be a <hask>Monad'</hask>, a type with four parameters. Generalise monadic binding to:<br/><br />
<hask><br />
m s a s' b -> ( b -> m s' b s'' c ) -> m s a s'' c<br />
</hask><br/><br />
Then clearly the generalised mapper/reducer above can be written as a <hask>Monad'</hask>, meaning that we can write MapReduce as<br/><br />
<hask><br />
... >>= mapper >>= reducer >>= mapper' >>= reducer' >>= ...<br />
</hask><br />
<br />
===Implementation details===<br />
<br />
<hask><br />
class Monad' m where<br />
return :: a -> m s x s a<br />
(>>=) :: (Eq b) => m s a s' b -> ( b -> m s' b s'' c ) -> m s a s'' c<br />
<br />
newtype MapReduce s a s' b = MR { runMR :: ([(s,a)] -> [(s',b)]) }<br />
<br />
retMR :: a -> MapReduce s x s a<br />
retMR k = MR (\ss -> [(s,k) | s <- fst <$> ss])<br />
<br />
bindMR :: (Eq b,NFData s'',NFData c) => MapReduce s a s' b -> (b -> MapReduce s' b s'' c) -> MapReduce s a s'' c<br />
bindMR f g = MR (\s -><br />
let<br />
fs = runMR f s<br />
gs = P.map g $ nub $ snd <$> fs<br />
in<br />
concat $ map (\g' -> runMR g' fs) gs)<br />
</hask><br/><br />
The key point here is that <hask>P.map</hask> is a parallel version of the simple <hask>map</hask> function.<br />
<br />
Now we can write a wrapper function<br/><br />
<hask><br />
wrapMR :: (Eq a) => ([s] -> [(s',b)]) -> (a -> MapReduce s a s' b)<br />
wrapMR f = (\k -> MR (g k))<br />
where<br />
g k ss = f $ fst <$> filter (\s -> k == snd s) ss<br />
</hask><br/><br />
which takes a conventional mapper / reducer and wraps it in the <hask>Monad'</hask>. Note that this means that the mapper / reducer functions ''do not need to know anything about the way MapReduce is implemented''. So a standard MapReduce job becomes<br/><br />
<hask><br />
mapReduce :: [String] -> [(String,Int)]<br />
mapReduce state = runMapReduce mr state<br />
where<br />
mr = return () >>= wrapMR mapper >>= wrapMR reducer<br />
</hask></br><br />
I have tested the implementation with the standard word-counter mapper and reducer, and it works perfectly (full code is available via the link above).<br />
<br />
==Future Directions==<br />
<br />
*My code so far runs concurrently and in multiple threads within a single OS image. It won't work on clustered systems. This is clearly where work should go next.<br />
*Currently all of the data is sent to all of the mappers / reducers at each iteration. This is okay on a single machine, but may be prohibitive on a cluster.<br />
<br />
I would be eager for collaborative working on taking this forward.<br />
<br />
[[User:Julianporter|julianporter]] 18:32, 2 April 2011 (UTC)</div>Julianporterhttps://wiki.haskell.org/User:JulianporterUser:Julianporter2011-04-02T18:33:40Z<p>Julianporter: /* Current projects */</p>
<hr />
<div>=About my work=<br />
==Background==<br />
I have been working in IT for about 20 years. My particular areas of interest in programming are:<br />
*Functional programming<br />
*Formal modelling / model based programming<br />
*Concurrency / cloud programming<br />
*Embedded systems<br />
I am also establishing a small business developing control systems and software for robots. The key idea is to make the robot part of the cloud rather than a stand-alone device. Further information:<br />
*[http://www.jpembedded.co.uk Company website]<br />
*[http://jpembeddedsolutions.wordpress.com Blog] (also contains research papers on Haskell and functional programming)<br />
<br />
==Current projects==<br />
<br />
*I have developed a simple implementation of MapReduce using a modified form of monad. It's described [[MapReduce_as_a_monad|here]]. I would be very happy if others joined in the development effort.<br />
<br />
=About me=<br />
<br />
By training I am a mathematician. I have been programming computers of some form or other since the early 1980s. I also have a keen interest in philosophy and music. My personal website is [http://www.porternet.org here].</div>Julianporterhttps://wiki.haskell.org/MapReduce_as_a_monadMapReduce as a monad2011-04-02T18:32:34Z<p>Julianporter: A description of a prototype MapReduce library</p>
<hr />
<div>[[Category:Applications]][[Category:Monad]][[Category:Libraries]][[Category:Concurrency]][[Category:Parallel]][[Category:Research]]<br />
<br />
==Introduction==<br />
<br />
MapReduce is a general technique for massively parallel programming developed by Google. It takes its inspiration from ideas in functional programming, but has moved away from that paradigm to a more imperative approach.<br />
I have noticed that MapReduce can be expressed naturally, using functional programming techniques, as a form of monad.<br />
<br />
The standard implementation of MapReduce is the JAVA-based HADOOP framework, which is very complex and somewhat temperamental. Moreover, it is necessary to write HADOOP-specific code into mappers and reducers. My prototype library takes about 100 lines of code and can wrap generic mapper / reducer functions.<br />
<br />
==Details==<br />
Full details of the implementation can be found [http://jpembeddedsolutions.files.wordpress.com/2011/04/mapreduceweb.pdf here]. I'll just give highlights here.<br />
<br />
===Generalised mappers / reducers===<br />
One can generalise MapReduce a bit, so that each stage (map, reduce, etc) becomes a function of signature<br/><br />
<hask><br />
a -> ([(s,a)] -> [(s',b)])<br />
</hask><br/><br />
where <hask>s</hask> and <hask>s'</hask> are data types and <hask>a</hask> and <hask>b</hask> are key values. <br />
<br />
===Generalised Monad===<br />
Now, this is suggestive of a monad, but we can't use a monad per se, because the transformation changes the key and value types, and we want to be able to access them separately. Therefore we do the following. Let <hask>m</hask> be a <hask>Monad'</hask>, a type with four parameters. Generalise monadic binding to:<br/><br />
<hask><br />
m s a s' b -> ( b -> m s' b s'' c ) -> m s a s'' c<br />
</hask><br/><br />
Then clearly the generalised mapper/reducer above can be written as a <hask>Monad'</hask>, meaning that we can write MapReduce as<br/><br />
<hask><br />
... >>= mapper >>= reducer >>= mapper' >>= reducer' >>= ...<br />
</hask><br />
<br />
===Implementation details===<br />
<br />
<hask><br />
class Monad' m where<br />
return :: a -> m s x s a<br />
(>>=) :: (Eq b) => m s a s' b -> ( b -> m s' b s'' c ) -> m s a s'' c<br />
<br />
newtype MapReduce s a s' b = MR { runMR :: ([(s,a)] -> [(s',b)]) }<br />
<br />
retMR :: a -> MapReduce s x s a<br />
retMR k = MR (\ss -> [(s,k) | s <- fst <$> ss])<br />
<br />
bindMR :: (Eq b,NFData s'',NFData c) => MapReduce s a s' b -> (b -> MapReduce s' b s'' c) -> MapReduce s a s'' c<br />
bindMR f g = MR (\s -><br />
    let<br />
        fs = runMR f s<br />
        gs = P.map g $ nub $ snd <$> fs<br />
    in<br />
        concat $ map (\g' -> runMR g' fs) gs)<br />
</hask><br/><br />
The key point here is that <hask>P.map</hask> is a parallel version of the simple <hask>map</hask> function.<br />
<br />
Now we can write a wrapper function<br/><br />
<hask><br />
wrapMR :: (Eq a) => ([s] -> [(s',b)]) -> (a -> MapReduce s a s' b)<br />
wrapMR f = (\k -> MR (g k))<br />
    where<br />
        g k ss = f $ fst <$> filter (\s -> k == snd s) ss<br />
</hask><br/><br />
which takes a conventional mapper / reducer and wraps it in the <hask>Monad'</hask>. Note that this means that the mapper / reducer functions ''do not need to know anything about the way MapReduce is implemented''. So a standard MapReduce job becomes<br/><br />
<hask><br />
mapReduce :: [String] -> [(String,Int)]<br />
mapReduce state = runMapReduce mr state<br />
    where<br />
        mr = return () >>= wrapMR mapper >>= wrapMR reducer<br />
</hask><br />
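Putting the pieces together, here is a self-contained, sequential sketch of the whole scheme. It is not the library itself: the ordinary <hask>map</hask> stands in for the parallel <hask>P.map</hask>, the <hask>NFData</hask> constraints are dropped, <hask>runMapReduce</hask> is an assumed helper that tags the initial data with a dummy key, and the word-count <hask>mapper</hask> / <hask>reducer</hask> are hypothetical examples:<br />

```haskell
import Data.List (nub)

newtype MapReduce s a s' b = MR { runMR :: [(s, a)] -> [(s', b)] }

retMR :: a -> MapReduce s x s a
retMR k = MR (\ss -> [ (s, k) | (s, _) <- ss ])

bindMR :: Eq b => MapReduce s a s' b
       -> (b -> MapReduce s' b s'' c) -> MapReduce s a s'' c
bindMR f g = MR $ \s ->
  let fs = runMR f s
      gs = map g (nub (map snd fs))   -- sequential stand-in for P.map
  in concat [ runMR g' fs | g' <- gs ]

wrapMR :: Eq a => ([s] -> [(s', b)]) -> a -> MapReduce s a s' b
wrapMR f k = MR (\ss -> f [ s | (s, a) <- ss, a == k ])

-- Assumed driver: tag the input with a dummy key and run the chain.
runMapReduce :: MapReduce s () s' b -> [s] -> [(s', b)]
runMapReduce mr xs = runMR mr [ (x, ()) | x <- xs ]

-- Hypothetical word-count mapper and reducer: emit each word keyed
-- by itself, then count the occurrences gathered under each key.
mapper :: [String] -> [(String, String)]
mapper ls = [ (w, w) | l <- ls, w <- words l ]

reducer :: [String] -> [(String, Int)]
reducer ws = [ (w, length ws) | w <- take 1 ws ]

wordCount :: [String] -> [(String, Int)]
wordCount =
  runMapReduce (retMR () `bindMR` wrapMR mapper `bindMR` wrapMR reducer)
```

Under these assumptions, <hask>wordCount ["hello world", "hello"]</hask> evaluates to <hask>[("hello",2),("world",1)]</hask>.<br />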
<br />
==Future Directions==<br />
<br />
*My code so far runs concurrently, in multiple threads within a single OS image; it will not work on clustered systems. This is clearly where work should go next.<br />
*Currently all of the data is sent to all of the mappers / reducers at each iteration. This is okay on a single machine, but may be prohibitive on a cluster.<br />
<br />
I would welcome collaboration on taking this forward.<br />
<br />
[[User:Julianporter|julianporter]] 18:32, 2 April 2011 (UTC)</div>Julianporterhttps://wiki.haskell.org/User:JulianporterUser:Julianporter2011-04-02T18:03:00Z<p>Julianporter: New page: =About my work= ==Background== I have been working in IT for about 20 years. My particular areas of interest in programming are: *Functional programming *Formal modelling / model based pr...</p>
<hr />
<div>=About my work=<br />
==Background==<br />
I have been working in IT for about 20 years. My particular areas of interest in programming are:<br />
*Functional programming<br />
*Formal modelling / model-based programming<br />
*Concurrency / cloud programming<br />
*Embedded systems<br />
I am also establishing a small business developing control systems and software for robots. The key idea is to make the robot part of the cloud rather than a stand-alone device. Further information:<br />
*[http://www.jpembedded.co.uk Company website]<br />
*[http://jpembeddedsolutions.wordpress.com Blog] (also contains research papers on Haskell and functional programming)<br />
<br />
==Current projects==<br />
<br />
*I have developed a simple implementation of MapReduce using a modified form of monad. It's described [http://jpembeddedsolutions.wordpress.com/2011/04/02/mapreduce/ here]. I'll be posting code, etc on this wiki soon. I would be very happy if others joined in the development effort.<br />
<br />
=About me=<br />
<br />
By training I am a mathematician. I have been programming computers of some form or other since the early 1980s. I also have a keen interest in philosophy and music. My personal website is [http://www.porternet.org here].</div>Julianporter