# AI

*Revision as of 19:34, 23 March 2007*

## Introduction

This is the home for the Haskell AI Strike Force! Here we will collect code, problems, papers, ideas, and people for putting together a flexible AI toolkit in Haskell.

## People

If you are interested in contributing to or monitoring this project, please add your name, nickname (if applicable, e.g. if you talk on #haskell), and email address so we can keep each other up to date.

Andrew Wagner (chessguy) <wagner dot andrew at gmail>

Bryan Green (shevek) <dbryan dot green at gmail>

Ricardo Herrmann <rherrmann at gmail>

Dan Doel (dolio) <dan dot doel at gmail>

Chung-chieh Shan (ccshan) <ccshan at cs dot rutgers dot edu>

## Ideas

This is where we need to start. Please put your ideas here for how to structure the contents of this toolkit/wiki-page(s). I've ripped and wiki-fied the table of contents of the main sections of Russell and Norvig's classic "Artificial Intelligence: A Modern Approach", for inspiration. One way of structuring things would be to turn various section names of this into links to new pages. If we do this, we should agree on the format for each new page to link to: e.g., each page could have a list of papers, links to code, a general discussion area, and a list of benchmark problems for that particular topic. Comments, please!

- In short, parts of this project can range from established ideas to new syntheses. ccshan: The high level of domain-specific abstraction that Haskell enables is ideal for AI, because AI programs are often "meta": we need to model agents who model the world, and sometimes agents who model agents who model the world, and so on. In particular, monads are a good way to structure and solve decision processes, [as I've started to explore as part of a course on computational modeling that I'm teaching](http://conway.rutgers.edu/~ccshan/wiki/cs504/posts/Second_week.html). Given that [Haskell is a good language for modular interpreters and compilers](http://www.cs.yale.edu/homes/hudak-paul/hudak-dir/ACM-WS/position.html), it would also be nice to create and refactor in Haskell an implementation of a [rational programming language](http://ai.stanford.edu/~shoham/www%20papers/RatProg.pdf) like [Avi Pfeffer](http://www.eecs.harvard.edu/~avi/)'s [IBAL](http://www.eecs.harvard.edu/~avi/IBAL/index.html): not only [is probability distribution a monad](http://www.eecs.harvard.edu/~nr/pubs/pmonad-abstract.html), but [a certain kind of variable elimination](http://ttic.uchicago.edu/~dmcallester/bayes.ps) turns out to be simply garbage collection in a call-by-need language!
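To make the "probability distribution is a monad" point concrete, here is a minimal sketch of such a monad using only the Prelude. The names (`Dist`, `uniform`, `probOf`) are our own illustrative choices, not IBAL's or any library's API:

```haskell
-- A distribution is a list of (outcome, probability) pairs.
newtype Dist a = Dist { runDist :: [(a, Double)] }

instance Functor Dist where
  fmap f (Dist xs) = Dist [ (f x, p) | (x, p) <- xs ]

instance Applicative Dist where
  pure x = Dist [(x, 1.0)]
  Dist fs <*> Dist xs = Dist [ (f x, p * q) | (f, p) <- fs, (x, q) <- xs ]

instance Monad Dist where
  -- Bind multiplies path probabilities, exactly as in probability trees.
  Dist xs >>= k = Dist [ (y, p * q) | (x, p) <- xs, (y, q) <- runDist (k x) ]

-- Uniform distribution over a list of outcomes.
uniform :: [a] -> Dist a
uniform xs = Dist [ (x, 1 / fromIntegral (length xs)) | x <- xs ]

-- Total probability mass on outcomes satisfying a predicate.
probOf :: (a -> Bool) -> Dist a -> Double
probOf f (Dist xs) = sum [ p | (x, p) <- xs, f x ]

-- Two fair dice, composed with do-notation like any other monad.
twoDice :: Dist Int
twoDice = do
  d1 <- uniform [1..6]
  d2 <- uniform [1..6]
  return (d1 + d2)

main :: IO ()
main = print (probOf (== 7) twoDice)  -- probability of rolling 7, i.e. 1/6
```

The point of the monadic structure is that `twoDice` reads like ordinary sequential code while the instance silently tracks all outcomes and their weights; a smarter implementation would collapse duplicate outcomes, which is where the variable-elimination connection comes in.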

## Things that need a home

If there are things that should be included in the project but you're not sure where they should go, place them here! I'll start with:

- http://catenova.org/~awagner/Simplifier
  - This was given to me by Alfonso Acosta (mentioned recently on haskell-cafe)
- http://catenova.org/~awagner/GPLib
  - This is a work in progress by yours truly, hopefully a future framework for genetic algorithms in Haskell
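Since a genetic-algorithms framework is mentioned, here is a rough sketch of the core loop such a framework abstracts over, evolving bitstrings toward all-ones ("one-max"). Every name here (`Genome`, `generation`, `child`, ...) is illustrative and is not GPLib's actual API:

```haskell
module Main where

import System.Random (StdGen, mkStdGen, random, randomR)
import Data.List (sortBy)
import Data.Ord (comparing, Down(..))

type Genome = [Bool]

-- Fitness: count the True bits.
fitness :: Genome -> Int
fitness = length . filter id

-- n random booleans, threading the generator explicitly.
randGenome :: Int -> StdGen -> (Genome, StdGen)
randGenome 0 g = ([], g)
randGenome n g = let (b, g')   = random g
                     (bs, g'') = randGenome (n - 1) g'
                 in (b : bs, g'')

-- One child: one-point crossover of two parents, then flip one bit.
child :: Genome -> Genome -> StdGen -> (Genome, StdGen)
child a b g = (flipAt i (take cut a ++ drop cut b), g2)
  where (cut, g1) = randomR (0, length a) g
        (i,   g2) = randomR (0, length a - 1) g1
        flipAt k xs = [ if j == k then not x else x | (j, x) <- zip [0 :: Int ..] xs ]

-- One generation: keep the fitter half, refill with children of the survivors.
generation :: StdGen -> [Genome] -> (StdGen, [Genome])
generation g pop = go g (length pop - length elite) elite
  where elite = take (length pop `div` 2)
                     (sortBy (comparing (Down . fitness)) pop)
        go g' 0 acc = (g', acc)
        go g' k acc =
          let (ia, ga) = randomR (0, length elite - 1) g'
              (ib, gb) = randomR (0, length elite - 1) ga
              (c,  gc) = child (elite !! ia) (elite !! ib) gb
          in go gc (k - 1) (c : acc)

main :: IO ()
main = do
  let popN 0 g = ([], g)
      popN k g = let (x, g')   = randGenome 16 g
                     (xs, g'') = popN (k - 1 :: Int) g'
                 in (x : xs, g'')
      (pop, g1) = popN 20 (mkStdGen 42)
      run _ p 0 = p
      run g p n = let (g', p') = generation g p in run g' p' (n - 1 :: Int)
  print (maximum (map fitness (run g1 pop 50)))
```

Because the fitter half survives each generation unchanged, the best fitness never decreases; a real framework would additionally parameterize the selection, crossover, and mutation strategies.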

## Table of Contents for AI: A Modern Approach

- Part I: Artificial Intelligence
- 1. Introduction ... 1
- 1.1. What is AI? ... 1
- 1.2. The Foundations of Artificial Intelligence ... 5
- 1.3. The History of Artificial Intelligence ... 16
- 1.4. The State of the Art ... 27
- 1.5. Summary ... 28

- 2. Intelligent Agents ... 32
- 2.1. Agents and Environments ... 32
- 2.2. Good Behavior: The Concept of Rationality ... 34
- 2.3. The Nature of Environments ... 38
- 2.4. The Structure of Agents ... 44
- 2.5. Summary ... 54

- Part II: Problem-solving
- 3. Solving Problems by Searching ... 59
- 3.1. Problem-Solving Agents ... 59
- 3.2. Example Problems ... 64
- 3.3. Searching for Solutions ... 69
- 3.4. Uninformed Search Strategies ... 73
- 3.5. Avoiding Repeated States ... 81
- 3.6. Searching with Partial Information ... 83
- 3.7. Summary ... 87

- 4. Informed Search and Exploration ... 94
- 4.1. Informed (Heuristic) Search Strategies ... 94
- 4.2. Heuristic Functions ... 105
- 4.3. Local Search Algorithms and Optimization Problems ... 110
- 4.4. Local Search in Continuous Spaces ... 119
- 4.5. Online Search Agents and Unknown Environments ... 122
- 4.6. Summary ... 129

- 5. Constraint Satisfaction Problems ... 137
- 5.1. Constraint Satisfaction Problems ... 137
- 5.2. Backtracking Search for CSPs ... 141
- 5.3. Local Search for Constraint Satisfaction Problems ... 150
- 5.4. The Structure of Problems ... 151
- 5.5. Summary ... 155

- 6. Adversarial Search ... 161
- 6.1. Games ... 161
- 6.2. Optimal Decisions in Games ... 162
- 6.3. Alpha-Beta Pruning ... 167
- 6.4. Imperfect, Real-Time Decisions ... 171
- 6.5. Games That Include an Element of Chance ... 175
- 6.6. State-of-the-Art Game Programs ... 180
- 6.7. Discussion ... 183
- 6.8. Summary ... 185

- Part III: Knowledge and reasoning
- 7. Logical Agents ... 194
- 7.1. Knowledge-Based Agents ... 195
- 7.2. The Wumpus World ... 197
- 7.3. Logic ... 200
- 7.4. Propositional Logic: A Very Simple Logic ... 204
- 7.5. Reasoning Patterns in Propositional Logic ... 211
- 7.6. Effective Propositional Inference ... 220
- 7.7. Agents Based on Propositional Logic ... 225
- 7.8. Summary ... 232

- 8. First-Order Logic ... 240
- 8.1. Representation Revisited ... 240
- 8.2. Syntax and Semantics of First-Order Logic ... 245
- 8.3. Using First-Order Logic ... 253
- 8.4. Knowledge Engineering in First-Order Logic ... 260
- 8.5. Summary ... 266

- 9. Inference in First-Order Logic ... 272
- 9.1. Propositional vs. First-Order Inference ... 272
- 9.2. Unification and Lifting ... 275
- 9.3. Forward Chaining ... 280
- 9.4. Backward Chaining ... 287
- 9.5. Resolution ... 295
- 9.6. Summary ... 310

- 10. Knowledge Representation ... 320
- 10.1. Ontological Engineering ... 320
- 10.2. Categories and Objects ... 322
- 10.3. Actions, Situations, and Events ... 328
- 10.4. Mental Events and Mental Objects ... 341
- 10.5. The Internet Shopping World ... 344
- 10.6. Reasoning Systems for Categories ... 349
- 10.7. Reasoning with Default Information ... 354
- 10.8. Truth Maintenance Systems ... 360
- 10.9. Summary ... 362

- Part IV: Planning
- 11. Planning ... 375
- 11.1. The Planning Problem ... 375
- 11.2. Planning with State-Space Search ... 382
- 11.3. Partial-Order Planning ... 387
- 11.4. Planning Graphs ... 395
- 11.5. Planning with Propositional Logic ... 402
- 11.6. Analysis of Planning Approaches ... 407
- 11.7. Summary ... 408

- 12. Planning and Acting in the Real World ... 417
- 12.1. Time, Schedules, and Resources ... 417
- 12.2. Hierarchical Task Network Planning ... 422
- 12.3. Planning and Acting in Nondeterministic Domains ... 430
- 12.4. Conditional Planning ... 433
- 12.5. Execution Monitoring and Replanning ... 441
- 12.6. Continuous Planning ... 445
- 12.7. Multiagent Planning ... 449
- 12.8. Summary ... 454

- Part V: Uncertain knowledge and reasoning
- 13. Uncertainty ... 462
- 13.1. Acting under Uncertainty ... 462
- 13.2. Basic Probability Notation ... 466
- 13.3. The Axioms of Probability ... 471
- 13.4. Inference Using Full Joint Distributions ... 475
- 13.5. Independence ... 477
- 13.6. Bayes' Rule and Its Use ... 479
- 13.7. The Wumpus World Revisited ... 483
- 13.8. Summary ... 486

- 14. Probabilistic Reasoning ... 492
- 14.1. Representing Knowledge in an Uncertain Domain ... 492
- 14.2. The Semantics of Bayesian Networks ... 495
- 14.3. Efficient Representation of Conditional Distributions ... 500
- 14.4. Exact Inference in Bayesian Networks ... 504
- 14.5. Approximate Inference in Bayesian Networks ... 511
- 14.6. Extending Probability to First-Order Representations ... 519
- 14.7. Other Approaches to Uncertain Reasoning ... 523
- 14.8. Summary ... 528

- 15. Probabilistic Reasoning over Time ... 537
- 15.1. Time and Uncertainty ... 537
- 15.2. Inference in Temporal Models ... 541
- 15.3. Hidden Markov Models ... 549
- 15.4. Kalman Filters ... 551
- 15.5. Dynamic Bayesian Networks ... 559
- 15.6. Speech Recognition ... 568
- 15.7. Summary ... 578

- 16. Making Simple Decisions ... 584
- 16.1. Combining Beliefs and Desires under Uncertainty ... 584
- 16.2. The Basis of Utility Theory ... 586
- 16.3. Utility Functions ... 589
- 16.4. Multiattribute Utility Functions ... 593
- 16.5. Decision Networks ... 597
- 16.6. The Value of Information ... 600
- 16.7. Decision-Theoretic Expert Systems ... 604
- 16.8. Summary ... 607

- 17. Making Complex Decisions ... 613
- 17.1. Sequential Decision Problems ... 613
- 17.2. Value Iteration ... 618
- 17.3. Policy Iteration ... 624
- 17.4. Partially Observable MDPs ... 625
- 17.5. Decision-Theoretic Agents ... 629
- 17.6. Decisions with Multiple Agents: Game Theory ... 631
- 17.7. Mechanism Design ... 640
- 17.8. Summary ... 643

- Part VI: Learning
- 18. Learning from Observations ... 649
- 18.1. Forms of Learning ... 649
- 18.2. Inductive Learning ... 651
- 18.3. Learning Decision Trees ... 653
- 18.4. Ensemble Learning ... 664
- 18.5. Why Learning Works: Computational Learning Theory ... 668
- 18.6. Summary ... 673

- 19. Knowledge in Learning ... 678
- 19.1. A Logical Formulation of Learning ... 678
- 19.2. Knowledge in Learning ... 686
- 19.3. Explanation-Based Learning ... 690
- 19.4. Learning Using Relevance Information ... 694
- 19.5. Inductive Logic Programming ... 697
- 19.6. Summary ... 707

- 20. Statistical Learning Methods ... 712
- 20.1. Statistical Learning ... 712
- 20.2. Learning with Complete Data ... 716
- 20.3. Learning with Hidden Variables: The EM Algorithm ... 724
- 20.4. Instance-Based Learning ... 733
- 20.5. Neural Networks ... 736
- 20.6. Kernel Machines ... 749
- 20.7. Case Study: Handwritten Digit Recognition ... 752
- 20.8. Summary ... 754

- 21. Reinforcement Learning ... 763
- 21.1. Introduction ... 763
- 21.2. Passive Reinforcement Learning ... 765
- 21.3. Active Reinforcement Learning ... 771
- 21.4. Generalization in Reinforcement Learning ... 777
- 21.5. Policy Search ... 781
- 21.6. Summary ... 784

- Part VII: Communicating, perceiving, and acting
- 22. Communication ... 790
- 22.1. Communication as Action ... 790
- 22.2. A Formal Grammar for a Fragment of English ... 795
- 22.3. Syntactic Analysis (Parsing) ... 798
- 22.4. Augmented Grammars ... 806
- 22.5. Semantic Interpretation ... 810
- 22.6. Ambiguity and Disambiguation ... 818
- 22.7. Discourse Understanding ... 821
- 22.8. Grammar Induction ... 824
- 22.9. Summary ... 826

- 23. Probabilistic Language Processing ... 834
- 23.1. Probabilistic Language Models ... 834
- 23.2. Information Retrieval ... 840
- 23.3. Information Extraction ... 848
- 23.4. Machine Translation ... 850
- 23.5. Summary ... 857

- 24. Perception ... 863
- 24.1. Introduction ... 863
- 24.2. Image Formation ... 865
- 24.3. Early Image Processing Operations ... 869
- 24.4. Extracting Three-Dimensional Information ... 873
- 24.5. Object Recognition ... 885
- 24.6. Using Vision for Manipulation and Navigation ... 892
- 24.7. Summary ... 894

- 25. Robotics ... 901
- 25.1. Introduction ... 901
- 25.2. Robot Hardware ... 903
- 25.3. Robotic Perception ... 907
- 25.4. Planning to Move ... 916
- 25.5. Planning Uncertain Movements ... 923
- 25.6. Moving ... 926
- 25.7. Robotic Software Architectures ... 932
- 25.8. Application Domains ... 935
- 25.9. Summary ... 938

- Part VIII: Conclusions
- 26. Philosophical Foundations ... 947
- 26.1. Weak AI: Can Machines Act Intelligently? ... 947
- 26.2. Strong AI: Can Machines Really Think? ... 952
- 26.3. The Ethics and Risks of Developing Artificial Intelligence ... 960
- 26.4. Summary ... 964

- 27. AI: Present and Future ... 968
- 27.1. Agent Components ... 968
- 27.2. Agent Architectures ... 970
- 27.3. Are We Going in the Right Direction? ... 972
- 27.4. What if AI Does Succeed? ... 974

- A. Mathematical background ... 977
- A.1. Complexity Analysis and O() Notation ... 977
- A.2. Vectors, Matrices, and Linear Algebra ... 979
- A.3. Probability Distributions ... 981

- B. Notes on Languages and Algorithms ... 984
- B.1. Defining Languages with Backus-Naur Form (BNF) ... 984
- B.2. Describing Algorithms with Pseudocode ... 985
- B.3. Online Help ... 985

*AI: A Modern Approach*, by Stuart Russell and Peter Norvig. Table of contents last modified Dec 14, 2002.