https://wiki.haskell.org/api.php?action=feedcontributions&user=Falcon&feedformat=atomHaskellWiki - User contributions [en]2019-11-19T09:08:10ZUser contributionsMediaWiki 1.27.4https://wiki.haskell.org/index.php?title=Talk:Applicative_functor&diff=16119Talk:Applicative functor2007-10-09T02:23:59Z<p>Falcon: Better explanation of AF?</p>
<hr />
<div>There has been some talk of using Applicative Functors for streams/data flow programming [1,2]. Perhaps someone can explain that at an introductory level?<br />
<br />
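A minimal sketch of the stream reading of applicative functors (my own illustration, not from the references below): an infinite stream is an Applicative functor whose <hask>pure</hask> is a constant stream and whose <hask><*></hask> applies pointwise, which is exactly the lifting used in dataflow-style signal networks.<br />

```haskell
-- Sketch (assumed, not from this page): streams form an Applicative
-- functor by pointwise application.
data Stream a = Cons a (Stream a)

instance Functor Stream where
  fmap f (Cons x xs) = Cons (f x) (fmap f xs)

instance Applicative Stream where
  pure x                  = Cons x (pure x)          -- constant stream
  Cons f fs <*> Cons x xs = Cons (f x) (fs <*> xs)   -- pointwise apply

-- Helpers for demonstration only:
takeS :: Int -> Stream a -> [a]
takeS n (Cons x xs)
  | n <= 0    = []
  | otherwise = x : takeS (n - 1) xs

nats :: Stream Integer
nats = go 0 where go n = Cons n (go (n + 1))
```

For example, <hask>takeS 3 ((+) <$> nats <*> nats)</hask> yields <hask>[0,2,4]</hask>: every lifted operation runs element-by-element down the streams, which is the essence of the dataflow interpretation.<br />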
While we are at it, perhaps someone can expand on the "Another Essence of Dataflow Programming" paper [2].<br />
<br />
[1] http://haskell.org/haskellwiki/Phooey<br />
[2] http://citeseer.ist.psu.edu/denckla06another.html</div>Falconhttps://wiki.haskell.org/index.php?title=Wanted_libraries&diff=16093Wanted libraries2007-10-07T05:06:04Z<p>Falcon: </p>
<hr />
<div>[[Category:Libraries]]<br />
This page provides a list of missing and found libraries for solving<br />
common ''real world'' programmer tasks.<br />
<br />
If you think there should be a library for some common task, add it to<br />
this page, and someone may well step up and write it. If you know of a<br />
library to solve an open problem domain, link to it.<br />
<br />
The first place to look is [[Libraries_and_tools|the full libraries list]].<br />
<br />
<br />
== Missing libraries (by problem domain) ==<br />
<br />
=== IMAP ===<br />
A functional IMAP library would be nice, particularly one that can be used in conjunction with encryption and authentication. References have been found to a Summer of Code project, although it appears to be derelict. <br />
<br />
=== Numerical algorithms ===<br />
Stuff to compute Bessel functions, the Struve function H1, etc.<br />
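A sketch of the sort of thing such a library might export (hypothetical code, not an existing package): the Bessel function of the first kind computed from its power series. This is adequate for small arguments only; a real library would also need asymptotic expansions for large <hask>x</hask>.<br />

```haskell
-- Hypothetical sketch: Bessel function of the first kind, J_n(x),
-- for n >= 0, summed from its power series
--   J_n(x) = sum_{k>=0} (-1)^k / (k! (k+n)!) * (x/2)^(2k+n).
besselJ :: Int -> Double -> Double
besselJ n x = sum (take 30 terms)
  where
    terms = [ fromIntegral ((-1 :: Integer) ^ k)
              / (factorial k * factorial (k + n))
              * (x / 2) ^^ (2 * k + n)
            | k <- [0 ..] ]
    factorial :: Int -> Double
    factorial m = product [1 .. fromIntegral m]
```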
<br />
=== .NET/Java Interoperability ===<br />
We could use a .NET FFI library that lets you easily call into the .NET framework. Perhaps the (abandoned?) [http://galois.com/~sof/hugs98.net/ Hugs .NET FFI library] could be resurrected/updated and ported to GHC? Maybe a tool could even be created that automates the interoperability of Haskell and arbitrary .NET libraries?<br />
<br />
: I'm working on a Haskell to .NET bridge for my undergraduate thesis. The goal is to provide access to arbitrary .NET libraries from Haskell, with a convenient syntax. I expect to have something releasable in a few weeks. [[User:AndrewA|AndrewA]] 13:33, 6 October 2007 (UTC)<br />
<br />
=== AMQP Client ===<br />
It would be great to have a client for the open source protocol AMQP (http://en.wikipedia.org/wiki/Advanced_Message_Queuing_Protocol). This is a new transport protocol that solves problems currently addressed by Java Message Service, MQ, Tibco RV, etc. Financial companies use such infrastructure to distribute massive amounts of market data (although the protocol can be used for efficiently distributing anything).<br />
<br />
== Could be improved ==<br />
<br />
Problems with existing solutions, that could be improved.<br />
<br />
* The core HTTP libraries need a lot of work. There are the beginnings of a useful library there, but many HTTP status codes are not handled, and there should be simple 'get' and 'head' functionality. <br />
* Binary serialisation support should be ported to bytestring, and added to the extralibs bundle<br />
* Support for wider Word values in bytestring.<br />
* Adding bytestring support to Parsec.<br />
* Simple process forking/popen, of type <hask>String -> IO String</hask>, is needed (the implementation should be in base)<br />
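The last item can be sketched today on top of the process package (its <hask>readCreateProcess</hask> API is newer than this wish and lives outside base, so this only illustrates the desired <hask>String -> IO String</hask> shape, not the proposed base implementation):<br />

```haskell
-- Sketch using the process package (an assumption; the wish above asks
-- for this in base): run a shell command with empty stdin, return stdout.
import System.Process (readCreateProcess, shell)

popen :: String -> IO String
popen cmd = readCreateProcess (shell cmd) ""
```

For example, <hask>popen "echo hello"</hask> returns <hask>"hello\n"</hask>.<br />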
<br />
If you have code that improves on something in the base libraries, you might consider [[Library_submissions|submitting it to the libraries process]], so it can appear in the standard libraries.<br />
<br />
== Existing libraries (by problem domain) ==<br />
<br />
See [[Libraries_and_tools|the full libraries list]].<br />
<br />
=== GUIs ===<br />
<br />
See [[Libraries_and_tools/GUI_libraries|GUI libraries]]<br />
<br />
=== High performance string IO ===<br />
<br />
See [[Libraries_and_tools/Data_structures#Strings|String libraries]]<br />
<br />
=== Binary IO ===<br />
<br />
See [[Libraries_and_tools/Data_structures#Serialising_data|Binary IO]]<br />
<br />
=== Compression ===<br />
<br />
See [[Libraries_and_tools/Data_structures#Compressing_data|Compressing data]]<br />
<br />
=== Encryption ===<br />
<br />
See [[Libraries_and_tools/Cryptography|Crypto libraries]]<br />
<br />
=== Web frameworks ===<br />
<br />
See [[Libraries_and_tools/Web_programming#Web_frameworks|Web frameworks]]<br />
<br />
=== XML processing ===<br />
<br />
See [[Libraries_and_tools/Web_programming#XML|XML handling]]<br />
<br />
=== Database access ===<br />
<br />
See [[Libraries_and_tools/Database_interfaces|Databases]]<br />
<br />
== Libraries to package ==<br />
<br />
Several cool libraries are out there that are not yet in Hackage/Cabal format.<br />
Some on the hit list that would be great to package up:<br />
<br />
* [http://www.eyrie.org/~zednenem/2004/hsce/Control.Comonad.html Control.Comonad]<br />
* [http://okmij.org/ftp/Computation/monads.html#LogicT LogicT]<br />
* [http://okmij.org/ftp/Computation/monads.html#random-var-monad probabilistic monads]<br />
* delimited continuations<br />
<br />
== C libraries ==<br />
<br />
C libraries are often very popular, and cheap to bind to. A good task<br />
would be to just write bindings to C libraries, sorted by popularity.<br />
<br />
Possible library rankings are provided by:<br />
<br />
* Debian packages ranked by:<br />
** [http://popcon.debian.org/source/by_inst Installations]<br />
** [http://popcon.debian.org/source/by_vote Votes]</div>Falconhttps://wiki.haskell.org/index.php?title=Talk:Grapefruit&diff=15963Talk:Grapefruit2007-09-30T23:31:53Z<p>Falcon: Hi Wolfgang, could you post some screenshots of Grapefruit</p>
<hr />
<div>Which are the main differences between Grapefruit and wxFruit? -- [[User:ARG|ARG]] 14:40, 3 May 2007 (UTC)<br />
: Hello ARG,<br />
<br />
: sorry that I didn’t answer earlier but it was problematic for me to do so.<br />
<br />
: My following statements about [[wxFruit]] are based on the paper “wxFruit: A Practical GUI Toolkit for Functional Reactive Programming” by Bart Robinson. Is there any newer information about wxFruit? Or did wxFruit die after Bart had written his paper?<br />
<br />
: Now the differences:<br />
<br />
:; Implementation and efficiency: The implementation of Grapefruit differs notably from wxFruit’s. Grapefruit’s implementation doesn’t use Yampa; it is very similar to FranTk’s “data-driven” approach instead. Internally, a GUI description is an action which builds the GUI and does any necessary event handler registration. The registered event handlers do the necessary updates and notifications of other components autonomously. Grapefruit’s approach is probably much more efficient than wxFruit’s. I still have to have a closer look at wxFruit’s and Yampa’s source code to give a more detailed comparison.<br />
<br />
:; Signals: Signals are implicit in wxFruit but explicit in Grapefruit. While explicitness of signals makes Grapefruit not as declarative as I would like, it might be necessary for efficiency. Implicit signals may lead to unnecessary updates. Consider the arrows <hask>pure id</hask> and <hask>pure $ \(a,b) -> (b,a)</hask> in wxFruit. Both have type <hask>Arrow arrow => arrow (a,a) (a,a)</hask>. If only the first component of the input changes, only consumers of one of the output components have to react. In the first case, these are the consumers of the first component; in the second case, these are the consumers of the second component. Since you cannot analyse a function in Haskell, the GUI library cannot know at GUI construction time which components have to be updated. So it might have to unnecessarily update all components which depend on the output of the pure arrow. Checking for actual changes at runtime might help but I’m currently not sure about the consequences.<br />
<br />
:; Event streams: In wxFruit, event streams are basically just signals with values of a <hask>Maybe</hask> type. However, this is too broad. Dense event streams are actually forbidden but I cannot see a way to enforce this restriction via Haskell’s type system. In Grapefruit, event streams are not special signals but different from signals. This seems much more reasonable to me. It is another reason for the explicitness of signals (and event streams), since this explicitness enables you to distinguish between signals and event streams via their types.<br />
<br />
:; Graphics and animations: Graphics are currently not supported by Grapefruit but hopefully will be in a few months. The same holds for animations.<br />
<br />
:; Dynamic user interfaces: Support for dynamic user interfaces is under development. wxFruit has no support for them and adding this support to wxFruit is difficult.<br />
<br />
:; Event cascades: If an event causes several other events, Grapefruit handles those other events one after another, whereby each handling can cause new events which are handled immediately. wxFruit probably behaves differently here and its behavior might be better.<br />
<br />
: -- [[User:Wolfgang Jeltsch|Wolfgang Jeltsch]] 17:07, 11 May 2007 (UTC)<br />
<br />
::Thanks for the detailed reply. [[User:ARG|ARG]] 12:13, 12 May 2007 (UTC)<br />
<br />
Hi Wolfgang, could you post some screenshots of Grapefruit? Thanks. --[[User:Falcon|Falcon]] 23:31, 30 September 2007 (UTC)</div>Falconhttps://wiki.haskell.org/index.php?title=Simonpj/Talk:LwConc&diff=13833Simonpj/Talk:LwConc2007-06-28T14:13:21Z<p>Falcon: </p>
<hr />
<div>= Talk page for "Lightweight concurrency primitives for GHC" =<br />
<br />
This is a discussion page for the paper [http://research.microsoft.com/~simonpj/papers/lw-conc Lightweight concurrency primitives for GHC].<br />
<br />
If you are kind enough to read this paper, you may like to jot down any thoughts it triggers off, and see what others have written. This talk-page lets you do just that. <br />
<br />
You can identify your entries by preceding them with four tildes. Doing so adds your name, and the date. Thus:<br />
<br />
:[[User:Simonpj|Simonpj]] 08:42, 19 April 2007 (UTC) Note from Simon<br />
<br />
If you say who you are in this way, we'll be able to acknowledge your help in a revised version of the paper.<br />
<br />
--------------------------------</div>Falconhttps://wiki.haskell.org/index.php?title=Simonpj/Talk:ListComp&diff=13618Simonpj/Talk:ListComp2007-06-20T18:53:55Z<p>Falcon: </p>
<hr />
<div>= Talk page for "Comprehensive Comprehensions" =<br />
<br />
This is a discussion page for the paper [http://research.microsoft.com/~simonpj/papers/list-comp Comprehensive Comprehensions].<br />
<br />
If you are kind enough to read this paper, you may like to jot down any thoughts it triggers off, and see what others have written. This talk-page lets you do just that. <br />
<br />
You can identify your entries by preceding them with four tildes. Doing so adds your name, and the date. Thus:<br />
<br />
:[[User:Simonpj|Simonpj]] 08:42, 19 April 2007 (UTC) Note from Simon<br />
<br />
If you say who you are in this way, we'll be able to acknowledge your help in a revised version of the paper.<br />
<br />
--------------------------------<br />
<br />
== [[User:MichaelAdams|MichaelAdams]] 14:51, 19 June 2007 (UTC) ==<br />
<br />
In theory these operators (order-by and group-by)<br />
should generalize to monads and once you do<br />
several other design options open up. These two operators could even<br />
be unified into a single operator.<br />
<br />
Starting with sort-by, I think the monadic version is fairly obvious.<br />
Take something like the following code.<br />
<br />
<haskell><br />
do a <- ma<br />
...<br />
b <- mb<br />
c <- mc<br />
sort by (b, c) using foo<br />
d <- md<br />
...<br />
return (a, b, c, d)<br />
</haskell><br />
<br />
It would de-sugar to:<br />
<br />
<haskell><br />
((do a <- ma<br />
...<br />
b <- mb<br />
c <- mc<br />
return ((b, c), (a, b, c))<br />
) `foo` fst) >>= \result -><br />
do let (a, _, _) = result<br />
(_, b, _) = result<br />
(_, _, c) = result<br />
d <- md<br />
...<br />
return (a, b, c, d)<br />
</haskell><br />
<br />
Where we have:<br />
<haskell><br />
foo :: forall a. (a -> t) -> m a -> m a<br />
</haskell><br />
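For concreteness (my addition, not part of the original post, and with an extra <hask>Ord</hask> constraint that sorting needs): in the list monad, one inhabitant of this signature is a keyed sort.<br />

```haskell
import Data.List (sortBy)
import Data.Ord (comparing)

-- A list-monad instance of the foo signature above, modulo the Ord
-- constraint: sort the gathered comprehension results by key.
sortUsing :: Ord t => (a -> t) -> [a] -> [a]
sortUsing key = sortBy (comparing key)
```

With <hask>foo = sortUsing</hask>, the desugaring above orders the comprehension's results by the extracted key.<br />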
<br />
==== Generalizing Order-by to Group-by ====<br />
In fact after a few tweaks it turns out that order-by and group-by could operate under the exact same de-sugaring.<br />
<br />
Suppose we let the type of foo be:<br />
<haskell><br />
foo :: (Functor n) => forall a. (a -> t) -> m a -> m (n a)<br />
</haskell><br />
Notice that I said "m (n a)" and not "m (m a)". The group-by de-sugaring that I'm going to show works for any Functor "n", and if we imagine "n" to be some type like the following then order-by is just a special case of group-by.<br />
<haskell><br />
type Ident a = a<br />
</haskell><br />
(It wouldn't actually be valid Haskell code to use a type synonym in that way, but it conveys the idea.)<br />
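To make the <hask>Functor n</hask> concrete (my example, not from the post, with <hask>n = []</hask>): for the list monad, grouping by a key has exactly the <hask>(a -> t) -> m a -> m (n a)</hask> shape, again modulo an <hask>Ord</hask> constraint.<br />

```haskell
import Data.Function (on)
import Data.List (groupBy, sortBy)
import Data.Ord (comparing)

-- n = [] here: each group becomes one element of the outer list,
-- matching foo :: (a -> t) -> m a -> m (n a) with m = n = [].
groupUsing :: Ord t => (a -> t) -> [a] -> [[a]]
groupUsing key = groupBy ((==) `on` key) . sortBy (comparing key)
```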
<br />
The de-sugaring to support this would take something like the following (inspired by arrow syntax).<br />
<br />
<haskell><br />
do a <- ma<br />
b <- mb<br />
c <- mc<br />
foo args $-< (a, b) -- group by (a, b) using (foo args)<br />
d <- md<br />
return (a, b, c, d)<br />
</haskell><br />
<br />
It would produce:<br />
<br />
<haskell><br />
(ma >>= \a -><br />
mb >>= \b -><br />
mc >>= \c -><br />
return ((a, b), (a, b, c))) `foo` args >>= \result -><br />
let a = fmap (\(_, (a, _, _)) -> a) result<br />
b = fmap (\(_, (_, b, _)) -> b) result<br />
c = fmap (\(_, (_, _, c)) -> c) result in<br />
md >>= \d -><br />
return (a, b, c, d)<br />
</haskell><br />
<br />
(Note that after the "foo", the types of "a", "b" and "c" have changed. Their types before "foo" get wrapped by "n" after "foo".)<br />
<br />
==== Even more generalization ====<br />
<br />
Once we have done this, another possibility opens up.<br />
Notice that the first of each pair in "result" was never used except<br />
from within "foo". It doesn't do any work in the result.<br />
Now suppose we change the type of foo to:<br />
<br />
<haskell><br />
foo :: (Functor n) => forall a. m (t, a) -> m (n (u, a))<br />
</haskell><br />
<br />
Now "foo" can not only read existing bindings from "t", but<br />
it can also create new bindings with "u".<br />
(This hard wires the extraction function to always be "fst",<br />
but it simplifies the presentation a bit.<br />
The other form with a general extraction could still be used.)<br />
<br />
The previous syntax could then be extended to something like:<br />
<br />
<haskell><br />
do a <- ma<br />
b <- mb<br />
c <- mc<br />
(d, e, f) <-$ foo args $-< (a, b)<br />
g <- mg<br />
return (a, b, c, d, e, f, g)<br />
</haskell><br />
<br />
This would then de-sugar to:<br />
<br />
<haskell><br />
(ma >>= \a -><br />
mb >>= \b -><br />
mc >>= \c -><br />
return ((a, b), (a, b, c))) `foo` args >>= \result -><br />
let d = fmap (\((d, _, _), (_, _, _)) -> d) result<br />
e = fmap (\((_, e, _), (_, _, _)) -> e) result<br />
f = fmap (\((_, _, f), (_, _, _)) -> f) result<br />
a = fmap (\((_, _, _), (a, _, _)) -> a) result<br />
b = fmap (\((_, _, _), (_, b, _)) -> b) result<br />
c = fmap (\((_, _, _), (_, _, c)) -> c) result in<br />
mg >>= \g -><br />
return (a, b, c, d, e, f, g)<br />
</haskell><br />
<br />
==== Conclusion ====<br />
<br />
The above might have gone too far down the generalization road and<br />
put too many bells and whistles on the thing,<br />
so it may be worth trimming it down.<br />
I also haven't given any thought to what applications would need this.<br />
I just wanted to consider how far these operations could be generalized.<br />
<br />
--------------------------------<br />
<br />
Just a few little things:<br />
# Page 2, "sortBy is part of the Haskell Prelude" - it's actually in the List module. (I just spotted you've got the same thing in SYB with Class, 7.1)<br />
# The Down trick is very neat, perhaps that should be a part of the standard libraries.<br />
# Page 5, MSFT is in 'quotes', but should be in "quotes".<br />
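For readers who haven't met it, the Down trick from point 2 is tiny (and a <hask>Down</hask> newtype did later land in the standard libraries, in Data.Ord): an order-reversing wrapper, so sorting by <hask>Down . key</hask> sorts descending.<br />

```haskell
import Data.List (sortBy)
import Data.Ord (comparing)

-- Order-reversing wrapper; comparing (Down . key) sorts descending.
newtype Down a = Down a deriving (Eq, Show)

instance Ord a => Ord (Down a) where
  compare (Down x) (Down y) = compare y x

descending :: Ord a => [a] -> [a]
descending = sortBy (comparing Down)
```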
<br />
Your syntax requires four new keywords, at least one of which is already a standard function (group). Plus, with knowledge of the keywords, the parse tree is entirely different. From your paper:<br />
<br />
order by x >= y using takeWhile<br />
<br />
At first reading I parsed >= as the root node, since in Haskell that would be the way it works. Your 'then' syntax in 6.1 seems preferable as it doesn't take any additional keywords.<br />
<br />
--[[User:NeilMitchell|Neil Mitchell]] 16:04, 19 June 2007 (UTC)<br />
------------------------------<br />
The paper also mentions a function "the." I wasn't able to find this function through Hoogle or ":t the" in GHCi. Perhaps you could add a one-line description the way "nub" is described.<br />
<br />
:see section 3.4, it's a custom function complete with implementation --[[User:NeilMitchell|Neil Mitchell]] 23:28, 19 June 2007 (UTC)<br />
<br />
::Thanks Neil, I missed it.<br />
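For reference, <hask>the</hask> from section 3.4 is small enough to restate (a version of it later shipped in GHC.Exts alongside the generalized comprehension support): it returns the common value of a non-empty list whose elements are all equal, and is partial otherwise.<br />

```haskell
-- Returns the sole value when all elements are equal; errors otherwise.
the :: Eq a => [a] -> a
the (x : xs) | all (== x) xs = x
the _ = error "the: empty list or non-identical elements"
```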
<br />
As a non-expert, the sense I get is that whenever I see "by" in a list comprehension, I should expect functions or expressions that operate on lists and not on individual elements of lists.<br />
<br />
When I first started reading the paper, I was going to recommend another extension (for time series) to allow things like moving averages...imagine my surprise when I find exactly the example I was going to suggest :)<br />
[[User:Falcon|Falcon]] 17:10, 19 June 2007 (UTC)<br />
<br />
Maybe I didn't get some part of the paper, but is it really necessary to have 'order by' or 'group by' as syntax extensions? Isn't it possible to allow developers to use any function as long as it matches the types of 'order' or 'group'?<br />
[[User:Falcon|Falcon]] 18:53, 20 June 2007 (UTC)</div>Falconhttps://wiki.haskell.org/index.php?title=Talk:Applicative_data-driven_programming&diff=13374Talk:Applicative data-driven programming2007-06-04T05:52:48Z<p>Falcon: </p>
<hr />
<div>Hi Conal - Great to see some programming research here! So far I've just done a cursory review. The first thing that jumps out at me is that I like the style of the paper, the level of coding and the accessibility of the examples. One part that feels like it is missing is a section (or subsection) discussing ''why'' you chose applicative functors rather than monads or arrows. That is, what is missing or too heavy in those Haskell standards? I'm going to give it a more detailed read and may have more comments later. [[User:BrettGiles|BrettGiles]] 14:54, 2 June 2007 (UTC)<br />
<br />
: I'm glad to know the examples work for you. :) I like your suggestion about comparing with monadic and arrow approaches (which I also implemented for [[Phooey]]). I'll add that comparison. My short answer is that a ''monadic'' formulation forces me to expose sources and more explicit lifting, while an ''arrow'' formulation (my previous favorite) requires an input and an output type, when it's very rare to use both. Personally, my favorite approach is none of these, but rather the [[TV]] style, because of its composability. [[User:Conal|Conal]] 18:41, 2 June 2007 (UTC)<br />
<br />
----<br />
<br />
Hello Conal, could you explain what advantages the use of unique IDs brings? You give adding a value to itself as an example. However, if you use <hask>liftA2 (+) s s</hask>, the effect of the signal <hask>s</hask> is doubled, meaning that the widgets belonging to <hask>s</hask> are created twice&mdash;with different IDs for the different instances. So you have no advantage here. On the other hand, if you use <hask>liftA (\a -> a + a) s</hask>, there is no double notification since you use a unary function in fact. Am I missing something here? -- [[User:Wolfgang Jeltsch|Wolfgang Jeltsch]] 17:20, 2 June 2007 (UTC)<br />
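The duplicated-effect point above can be illustrated with a toy writer-style applicative. The <hask>Src</hask> type below is a hypothetical stand-in for the paper's Source, whose "effect" here is simply a log of the widgets created:<br />

```haskell
import Control.Applicative (liftA2)

-- Hypothetical stand-in for a Source: a value plus a log of
-- the widgets created to produce it.
data Src a = Src [String] a deriving Show

instance Functor Src where
  fmap f (Src w a) = Src w (f a)

instance Applicative Src where
  pure = Src []
  Src w1 f <*> Src w2 a = Src (w1 ++ w2) (f a)

s :: Src Int
s = Src ["slider"] 3

main :: IO ()
main = do
  print (liftA2 (+) s s)        -- widget log appears twice
  print (fmap (\a -> a + a) s)  -- unary function: log appears once
```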
<br />
: I think the answer depends on which applicative functor (AF): data-driven computations (<hask>DataDriven nfr xtr</hask>, e.g. Source) or UIs. Redundant notification (eliminated with unique IDs) can arise from direct use of <hask>Source</hask>. With AF-based or arrows-based UIs, on the other hand, I think your point is right on, since the abstraction prevents direct access to value sources. Any ''monadic'' UI formulation I've imagined would expose sources, in which case the tagging trick would be helpful. Thanks again for your penetrating insights! --[[User:Conal|Conal]] 18:41, 2 June 2007 (UTC)<br />
<br />
----<br />
<br />
Conal, ever since I read "Another Essence of Dataflow," I've been trying to understand AFs. Your paper doesn't explain them but makes me more interested in using them for my problem. My interest is in using AFs for dealing with streaming data, as in stock market quotes. I would love to be able to say something like: "buy (.1 * lastquantity) msft at bidprice" (in a more Haskellish syntax obviously). In the previous expression, the system is asked to buy Microsoft's stock, the number of shares to buy is 10% of some value in a stream and the price to buy it is equal to another value in another stream. The system may have two different streams [(time,symbol,lastprice,lastquantity)] and [(time,symbol,bidprice,askprice,bidquantity,askquantity)]. So, can the ideas in your paper be used to solve such 'event processing' problems without littering my code with callbacks or polling current values every few milliseconds? Thanks! --[[User:Falcon|Falcon]] 21:26, 2 June 2007 (UTC)<br />
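For what it's worth, the pointwise combination of streams described above can already be sketched with the standard <hask>ZipList</hask> applicative; the stream names and numbers below are made up for illustration:<br />

```haskell
import Control.Applicative

-- Hypothetical quote streams, element-wise aligned by time.
lastQuantities :: [Double]
lastQuantities = [1000, 1200, 800]

bidPrices :: [Double]
bidPrices = [30.10, 30.12, 30.08]

-- (shares, price) orders: 10% of the last traded quantity,
-- combined pointwise with the current bid price.
orders :: [(Double, Double)]
orders = getZipList $
  (\q p -> (q / 10, p)) <$> ZipList lastQuantities <*> ZipList bidPrices

main :: IO ()
main = print orders
```

A real event-processing system would of course replace the finite lists with genuinely reactive sources, which is where the paper's data-driven machinery comes in.<br />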
<br />
: I'd like to explore your application. If you contact me with more info, we'll see what we can come up with. I'm delighted with the AF abstraction. I used them for years without realizing it: Fran/FRP behaviors, Pan images. --[[User:Conal|Conal]] 01:36, 3 June 2007 (UTC)<br />
<br />
: Hello Falcon, have you read [http://www.soi.city.ac.uk/~ross/papers/Applicative.pdf Applicative programming with effects] by Conor McBride and Ross Paterson? In my opinion, it’s a very comprehensible introduction to applicative functors. -- [[User:Wolfgang Jeltsch|Wolfgang Jeltsch]] 20:16, 3 June 2007 (UTC)<br />
<br />
::Wolfgang, I have indeed. Although I must say that it still takes me a while to understand most Haskell papers (and I do read many :) ). --[[User:Falcon|Falcon]] 05:52, 4 June 2007 (UTC)<br />
<br />
----<br />
<br />
Typo: In 1.2, third paragraph "... an widget ... " -> " ... a widget ..." -- [[User:Bas van Dijk|Bas van Dijk]] 21:32, Jun 3 2007 (UTC)</div>Falconhttps://wiki.haskell.org/index.php?title=AI&diff=13122AI2007-05-21T00:46:27Z<p>Falcon: </p>
<hr />
<div>[[Category:Community]]<br />
== Introduction ==<br />
This is the home for the Haskell AI Strike Force! Here we will collect code, problems, papers, ideas, and people for putting together a flexible AI toolkit in Haskell.<br />
<br />
== People ==<br />
If interested in contributing to or monitoring this project, please put your name, nickname (if applicable - e.g., if you talk on #haskell), and email address so we can keep each other up-to-date.<br />
<br />
Andrew Wagner (chessguy) <wagner dot andrew at gmail><br />
<br />
Bryan Green (shevek) <dbryan dot green at gmail><br />
<br />
Ricardo Herrmann <rherrmann at gmail><br />
<br />
Dan Doel (dolio) <dan dot doel at gmail><br />
<br />
Chung-chieh Shan (ccshan) <ccshan at cs dot rutgers dot edu><br />
<br />
Adam Wyner (Lawman) <adam dot wyner dot info><br />
<br />
Allan Erskine (thedatabase) <allan dot erskine at gmail><br />
<br />
Dave Tapley (DukeDave) <dukedave at gmail><br />
<br />
Lloyd Allison <lloyd dot allison at infotech dot monash dot edu dot au><br />
<br />
Paul Berg (Procyon) <procyon at procyondevelopments dot com><br />
<br />
Eric Kow (kowey) <eric dot kow at gmail> [watching on the sidelines]<br />
<br />
Charles Blundell <blundellc at gmail><br />
<br />
David Amos <polyomino (at) f2s (dot) com><br />
<br />
Mathew Mills (mathewm) <mathewmills (at) gmail (dot) com><br />
<br />
Jason Morton (inverselimit) <jason.morton at gmail><br />
<br />
Jiri Hysek (dvekravy) <xhysek02 at stud dot fit dot vutbr dot cz><br />
<br />
Shahbaz Chaudhary <shahbazc at gmail> [interested in GP]<br />
<br />
== Ideas ==<br />
<br />
This is where we need to start. Please put your ideas here for how to structure the contents of this toolkit/wiki-page(s). I've ripped and wiki-fied the table of contents of the main sections of Russell and Norvig's classic "Artificial Intelligence: A Modern Approach", for inspiration. One way of structuring things would be to turn various section names of this into links to new pages. If we do this, we should agree on the format for each new page to link to: e.g., each page could have a list of papers, links to code, a general discussion area, and a list of benchmark problems for that particular topic. Comments, please!<br />
<br />
* In short, parts of this project can range from established ideas to new syntheses. ccshan: The high level of domain-specific abstraction that Haskell enables is ideal for AI, because AI programs are often "meta": we need to model agents who model the world, and sometimes to model agents who model agents who model the world, etc. In particular, monads are a good way to structure and solve decision processes, [http://conway.rutgers.edu/~ccshan/wiki/cs504/posts/Second_week.html as I've started to explore as part of a course on computational modeling that I'm teaching]. Given that [http://www.cs.yale.edu/homes/hudak-paul/hudak-dir/ACM-WS/position.html Haskell is a good language for modular interpreters and compilers], it would also be nice to create and refactor in Haskell an implementation of a [http://ai.stanford.edu/~shoham/www%20papers/RatProg.pdf rational programming language] like [http://www.eecs.harvard.edu/~avi/ Avi Pfeffer]'s [http://www.eecs.harvard.edu/~avi/IBAL/index.html IBAL] -- not only [http://www.eecs.harvard.edu/~nr/pubs/pmonad-abstract.html is probability distribution a monad], I just realized that [http://ttic.uchicago.edu/~dmcallester/bayes.ps a certain kind of variable elimination] is simply garbage collection in a call-by-need language!<br />
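As a small taste of the probability-monad idea mentioned above, here is a minimal sketch; the <hask>Dist</hask> type is illustrative and not taken from PFP or IBAL:<br />

```haskell
import Control.Applicative (liftA2)

-- Minimal probability monad: a weighted list of outcomes.
newtype Dist a = Dist { runDist :: [(a, Rational)] }

instance Functor Dist where
  fmap f (Dist xs) = Dist [ (f a, p) | (a, p) <- xs ]

instance Applicative Dist where
  pure a = Dist [(a, 1)]
  Dist fs <*> Dist xs = Dist [ (f a, p * q) | (f, p) <- fs, (a, q) <- xs ]

instance Monad Dist where
  Dist xs >>= k = Dist [ (b, p * q) | (a, p) <- xs, (b, q) <- runDist (k a) ]

coin :: Dist Bool
coin = Dist [(True, 1/2), (False, 1/2)]

-- Probability that two independent fair coins both come up heads.
main :: IO ()
main = print (sum [ p | (True, p) <- runDist (liftA2 (&&) coin coin) ])
```

Martin Erwig's PFP library (linked below) develops this idea much further, with sampling and visualization.<br />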
<br />
== Things that need a home ==<br />
<br />
If there are things that should be included in the project, but you're not sure where it should go, place it here! I'll start with:<br />
* http://catenova.org/~awagner/Simplifier<br />
**This was given to me by Alfonso Acosta (mentioned recently on haskell-cafe)<br />
<br />
*http://catenova.org/~awagner/GPLib<br />
**[[GPLib]] is a work in progress by yours truly, hopefully a future framework for genetic algorithms in haskell.<br />
<br />
*http://www.haskell.org/haskellwiki/Libraries_and_tools/Linguistics<br />
<br />
I've proposed a machine learning library for this year's Google Summer of Code. [http://hackage.haskell.org/trac/summer-of-code/ticket/1127] There have been a few interested (and seemingly well qualified) students, too. I'm not sure if it qualifies as "AI", but if you are interested in this project (as a potential student, mentor, or just...well, interested), please add yourself to the above link, and/or get in touch with me at <ketil at malde dot org>. --[[User:Ketil|Ketil]] 07:46, 26 March 2007 (UTC)<br />
<br />
Martin Erwig's probabilistic functional programming (PFP) project, including an implementation of the probability monad:<br />
*http://web.engr.oregonstate.edu/~erwig/pfp/<br />
<br />
Culmination of some recent posts about the probability monad on Random Hacks (including a darcs repository):<br />
*http://www.randomhacks.net/articles/2007/03/03/smart-classification-with-haskell <br />
<br />
sigfpe's coverage and highly algebraic view of the probability monad in Haskell:<br />
*http://sigfpe.blogspot.com/2007/02/monads-for-vector-spaces-probability.html<br />
<br />
Two links I found today that are interesting:<br />
*http://perception.inf.um.es/darcs/darcsweb.cgi<br />
*http://www-student.cs.york.ac.uk/~cb224/<br />
<br />
Polytypic unification - unification seems particularly useful for AI tasks (at least natural language stuff)... wouldn't be nice to have a generic library that does it for you?<br />
*http://www.cs.chalmers.se/~patrikj/poly/unify/<br />
<br />
== Table of Contents for AI: A Modern Approach ==<br />
*Part I: Artificial Intelligence<br />
**1. Introduction ... 1<br />
***1.1. What is AI? ... 1<br />
***1.2. The Foundations of Artificial Intelligence ... 5<br />
***1.3. The History of Artificial Intelligence ... 16<br />
***1.4. The State of the Art ... 27<br />
***1.5. Summary ... 28<br />
**2. Intelligent Agents ... 32<br />
***2.1. Agents and Environments ... 32<br />
***2.2. Good Behavior: The Concept of Rationality ... 34<br />
***2.3. The Nature of Environments ... 38<br />
***2.4. The Structure of Agents ... 44<br />
***2.5. Summary ... 54<br />
*Part II: Problem-solving<br />
**3. Solving Problems by Searching ... 59<br />
***3.1. Problem-Solving Agents ... 59<br />
***3.2. Example Problems ... 64<br />
***3.3. Searching for Solutions ... 69<br />
***3.4. Uninformed Search Strategies ... 73<br />
***3.5. Avoiding Repeated States ... 81<br />
***3.6. Searching with Partial Information ... 83<br />
***3.7. Summary ... 87<br />
**4. Informed Search and Exploration ... 94<br />
***4.1. Informed (Heuristic) Search Strategies ... 94<br />
***4.2. Heuristic Functions ... 105<br />
***4.3. Local Search Algorithms and Optimization Problems ... 110<br />
***4.4. Local Search in Continuous Spaces ... 119<br />
***4.5. Online Search Agents and Unknown Environments ... 122<br />
***4.6. Summary ... 129<br />
**5. Constraint Satisfaction Problems ... 137<br />
***5.1. Constraint Satisfaction Problems ... 137<br />
***5.2. Backtracking Search for CSPs ... 141<br />
***5.3. Local Search for Constraint Satisfaction Problems ... 150<br />
***5.4. The Structure of Problems ... 151<br />
***5.5. Summary ... 155<br />
**6. Adversarial Search ... 161<br />
***6.1. Games ... 161<br />
***6.2. Optimal Decisions in Games ... 162<br />
***6.3. Alpha-Beta Pruning ... 167<br />
***6.4. Imperfect, Real-Time Decisions ... 171<br />
***6.5. Games That Include an Element of Chance ... 175<br />
***6.6. State-of-the-Art Game Programs ... 180<br />
***6.7. Discussion ... 183<br />
***6.8. Summary ... 185<br />
*Part III: Knowledge and reasoning<br />
**7. Logical Agents ... 194<br />
***7.1. Knowledge-Based Agents ... 195<br />
***7.2. The Wumpus World ... 197<br />
***7.3. Logic ... 200<br />
***7.4. Propositional Logic: A Very Simple Logic ... 204<br />
***7.5. Reasoning Patterns in Propositional Logic ... 211<br />
***7.6. Effective propositional inference ... 220<br />
***7.7. Agents Based on Propositional Logic ... 225<br />
***7.8. Summary ... 232<br />
**8. First-Order Logic ... 240<br />
***8.1. Representation Revisited ... 240<br />
***8.2. Syntax and Semantics of First-Order Logic ... 245<br />
***8.3. Using First-Order Logic ... 253<br />
***8.4. Knowledge Engineering in First-Order Logic ... 260<br />
***8.5. Summary ... 266<br />
**9. Inference in First-Order Logic ... 272<br />
***9.1. Propositional vs. First-Order Inference ... 272<br />
***9.2. Unification and Lifting ... 275<br />
***9.3. Forward Chaining ... 280<br />
***9.4. Backward Chaining ... 287<br />
***9.5. Resolution ... 295<br />
***9.6. Summary ... 310<br />
**10. Knowledge Representation ... 320<br />
***10.1. Ontological Engineering ... 320<br />
***10.2. Categories and Objects ... 322<br />
***10.3. Actions, Situations, and Events ... 328<br />
***10.4. Mental Events and Mental Objects ... 341<br />
***10.5. The Internet Shopping World ... 344<br />
***10.6. Reasoning Systems for Categories ... 349<br />
***10.7. Reasoning with Default Information ... 354<br />
***10.8. Truth Maintenance Systems ... 360<br />
***10.9. Summary ... 362<br />
*Part IV: Planning<br />
**11. Planning ... 375<br />
***11.1. The Planning Problem ... 375<br />
***11.2. Planning with State-Space Search ... 382<br />
***11.3. Partial-Order Planning ... 387<br />
***11.4. Planning Graphs ... 395<br />
***11.5. Planning with Propositional Logic ... 402<br />
***11.6. Analysis of Planning Approaches ... 407<br />
***11.7. Summary ... 408<br />
**12. Planning and Acting in the Real World ... 417<br />
***12.1. Time, Schedules, and Resources ... 417<br />
***12.2. Hierarchical Task Network Planning ... 422<br />
***12.3. Planning and Acting in Nondeterministic Domains ... 430<br />
***12.4. Conditional Planning ... 433<br />
***12.5. Execution Monitoring and Replanning ... 441<br />
***12.6. Continuous Planning ... 445<br />
***12.7. MultiAgent Planning ... 449<br />
***12.8. Summary ... 454<br />
*Part V: Uncertain knowledge and reasoning<br />
**13. Uncertainty ... 462<br />
***13.1. Acting under Uncertainty ... 462<br />
***13.2. Basic Probability Notation ... 466<br />
***13.3. The Axioms of Probability ... 471<br />
***13.4. Inference Using Full Joint Distributions ... 475<br />
***13.5. Independence ... 477<br />
***13.6. Bayes' Rule and Its Use ... 479<br />
***13.7. The Wumpus World Revisited ... 483<br />
***13.8. Summary ... 486<br />
**14. Probabilistic Reasoning ... 492<br />
***14.1. Representing Knowledge in an Uncertain Domain ... 492<br />
***14.2. The Semantics of Bayesian Networks ... 495<br />
***14.3. Efficient Representation of Conditional Distributions ... 500<br />
***14.4. Exact Inference in Bayesian Networks ... 504<br />
***14.5. Approximate Inference in Bayesian Networks ... 511<br />
***14.6. Extending Probability to First-Order Representations ... 519<br />
***14.7. Other Approaches to Uncertain Reasoning ... 523<br />
***14.8. Summary ... 528<br />
**15. Probabilistic Reasoning over Time ... 537<br />
***15.1. Time and Uncertainty ... 537<br />
***15.2. Inference in Temporal Models ... 541<br />
***15.3. Hidden Markov Models ... 549<br />
***15.4. Kalman Filters ... 551<br />
***15.5. Dynamic Bayesian Networks ... 559<br />
***15.6. Speech Recognition ... 568<br />
***15.7. Summary ... 578<br />
**16. Making Simple Decisions ... 584<br />
***16.1. Combining Beliefs and Desires under Uncertainty ... 584<br />
***16.2. The Basis of Utility Theory ... 586<br />
***16.3. Utility Functions ... 589<br />
***16.4. Multiattribute Utility Functions ... 593<br />
***16.5. Decision Networks ... 597<br />
***16.6. The Value of Information ... 600<br />
***16.7. Decision-Theoretic Expert Systems ... 604<br />
***16.8. Summary ... 607<br />
**17. Making Complex Decisions ... 613<br />
***17.1. Sequential Decision Problems ... 613<br />
***17.2. Value Iteration ... 618<br />
***17.3. Policy Iteration ... 624<br />
***17.4. Partially observable MDPs ... 625<br />
***17.5. Decision-Theoretic Agents ... 629<br />
***17.6. Decisions with Multiple Agents: Game Theory ... 631<br />
***17.7. Mechanism Design ... 640<br />
***17.8. Summary ... 643<br />
*Part VI: Learning<br />
**18. Learning from Observations ... 649<br />
***18.1. Forms of Learning ... 649<br />
***18.2. Inductive Learning ... 651<br />
***18.3. Learning Decision Trees ... 653<br />
***18.4. Ensemble Learning ... 664<br />
***18.5. Why Learning Works: Computational Learning Theory ... 668<br />
***18.6. Summary ... 673<br />
**19. Knowledge in Learning ... 678<br />
***19.1. A Logical Formulation of Learning ... 678<br />
***19.2. Knowledge in Learning ... 686<br />
***19.3. Explanation-Based Learning ... 690<br />
***19.4. Learning Using Relevance Information ... 694<br />
***19.5. Inductive Logic Programming ... 697<br />
***19.6. Summary ... 707<br />
**20. Statistical Learning Methods ... 712<br />
***20.1. Statistical Learning ... 712<br />
***20.2. Learning with Complete Data ... 716<br />
***20.3. Learning with Hidden Variables: The EM Algorithm ... 724<br />
***20.4. Instance-Based Learning ... 733<br />
***20.5. Neural Networks ... 736<br />
***20.6. Kernel Machines ... 749<br />
***20.7. Case Study: Handwritten Digit Recognition ... 752<br />
***20.8. Summary ... 754<br />
**21. Reinforcement Learning ... 763<br />
***21.1. Introduction ... 763<br />
***21.2. Passive Reinforcement Learning ... 765<br />
***21.3. Active Reinforcement Learning ... 771<br />
***21.4. Generalization in Reinforcement Learning ... 777<br />
***21.5. Policy Search ... 781<br />
***21.6. Summary ... 784<br />
*Part VII: Communicating, perceiving, and acting<br />
**22. Communication ... 790<br />
***22.1. Communication as Action ... 790<br />
***22.2. A Formal Grammar for a Fragment of English ... 795<br />
***22.3. Syntactic Analysis (Parsing) ... 798<br />
***22.4. Augmented Grammars ... 806<br />
***22.5. Semantic Interpretation ... 810<br />
***22.6. Ambiguity and Disambiguation ... 818<br />
***22.7. Discourse Understanding ... 821<br />
***22.8. Grammar Induction ... 824<br />
***22.9. Summary ... 826<br />
**23. Probabilistic Language Processing ... 834<br />
***23.1. Probabilistic Language Models ... 834<br />
***23.2. Information Retrieval ... 840<br />
***23.3. Information Extraction ... 848<br />
***23.4. Machine Translation ... 850<br />
***23.5. Summary ... 857<br />
**24. Perception ... 863<br />
***24.1. Introduction ... 863<br />
***24.2. Image Formation ... 865<br />
***24.3. Early Image Processing Operations ... 869<br />
***24.4. Extracting Three-Dimensional Information ... 873<br />
***24.5. Object Recognition ... 885<br />
***24.6. Using Vision for Manipulation and Navigation ... 892<br />
***24.7. Summary ... 894<br />
**25. Robotics ... 901<br />
***25.1. Introduction ... 901<br />
***25.2. Robot Hardware ... 903<br />
***25.3. Robotic Perception ... 907<br />
***25.4. Planning to Move ... 916<br />
***25.5. Planning uncertain movements ... 923<br />
***25.6. Moving ... 926<br />
***25.7. Robotic Software Architectures ... 932<br />
***25.8. Application Domains ... 935<br />
***25.9. Summary ... 938<br />
*Part VIII: Conclusions<br />
**26. Philosophical Foundations ... 947<br />
***26.1. Weak AI: Can Machines Act Intelligently? ... 947<br />
***26.2. Strong AI: Can Machines Really Think? ... 952<br />
***26.3. The Ethics and Risks of Developing Artificial Intelligence ... 960<br />
***26.4. Summary ... 964<br />
**27. AI: Present and Future ... 968<br />
***27.1. Agent Components ... 968<br />
***27.2. Agent Architectures ... 970<br />
***27.3. Are We Going in the Right Direction? ... 972<br />
***27.4. What if AI Does Succeed? ... 974<br />
*A. Mathematical background ... 977<br />
**A.1. Complexity Analysis and O() Notation ... 977<br />
**A.2. Vectors, Matrices, and Linear Algebra ... 979<br />
**A.3. Probability Distributions ... 981<br />
*B. Notes on Languages and Algorithms ... 984<br />
**B.1. Defining Languages with Backus-Naur Form (BNF) ... 984<br />
**B.2. Describing Algorithms with Pseudocode ... 985<br />
**B.3. Online Help ... 985<br />
<br />
AI: A Modern Approach by Stuart Russell and Peter Norvig Modified: Dec 14, 2002</div>Falcon