Online Slides
September 2, 2002
These are slides from Computational Intelligence: A Logical Approach, Oxford University Press, 1998. Copyright ©David Poole, Alan Mackworth, Randy Goebel and Oxford University Press, 1999-2002. You may prefer the PDF interface for which these slides were designed (you can read it using the free Acrobat Reader).
Computational Intelligence
A Logical Approach
David Poole
Alan Mackworth
Randy Goebel
Oxford University Press
1998
- What is Computational Intelligence?
- Agents acting in an environment
- Representations
Computational intelligence is the study of the design of intelligent agents.
An agent is something that acts in an environment.
An intelligent agent is an agent that acts intelligently:
- its actions are appropriate for its goals and circumstances
- it is flexible to changing environments and goals
- it learns from experience
- it makes appropriate choices given perceptual limitations and finite computation
- The field is often called Artificial Intelligence.
- Scientific goal: to understand the principles that make intelligent behavior possible, in natural or artificial systems.
- Engineering goal: to specify methods for the design of useful, intelligent artifacts.
- Analogy between studying flying machines and thinking machines.
Symbol-system hypothesis:
- Reasoning is symbol manipulation.
Church-Turing thesis:
- Any symbol manipulation can be carried out on a Turing machine.
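As a toy illustration of reasoning as symbol manipulation, here is a minimal sketch (not from the book): symbolic differentiation, where a program rewrites nested tuples of uninterpreted symbols by fixed rules. The expression encoding and rule set are assumptions chosen for the example.

    # Expressions are nested tuples of symbols, e.g. ("+", "x", ("*", "x", "x"))
    # encodes x + x*x.  deriv rewrites these symbols by fixed rules; it never
    # evaluates or "understands" them -- pure symbol manipulation.
    def deriv(e, x):
        if e == x:                      # d x / d x = 1
            return 1
        if not isinstance(e, tuple):    # a constant or another variable
            return 0
        op, a, b = e
        if op == "+":                   # sum rule
            return ("+", deriv(a, x), deriv(b, x))
        if op == "*":                   # product rule
            return ("+", ("*", deriv(a, x), b), ("*", a, deriv(b, x)))
        raise ValueError("unknown operator: " + str(op))

    print(deriv(("+", "x", ("*", "x", "x")), "x"))
    # ('+', 1, ('+', ('*', 1, 'x'), ('*', 'x', 1)))  i.e., 1 + (x + x)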
Example agent: autonomous robot
- actions: movement, grippers, speech, facial expressions,...
- observations: vision, sonar, sound, speech recognition, gesture recognition,...
- goals: deliver food, rescue people, score goals, explore,...
- past experiences: effect of steering, slipperiness, how people move,...
- prior knowledge: which features are important, categories of objects, what a sensor tells us,...
Example agent: teacher
- actions: present new concept, drill, give test, explain concept,...
- observations: test results, facial expressions, errors, focus,...
- goals: particular knowledge, skills, inquisitiveness, social skills,...
- past experiences: prior test results, effects of teaching strategies,...
- prior knowledge: subject material, teaching strategies,...
Example agent: medical doctor
- actions: operate, test, prescribe drugs, explain instructions,...
- observations: verbal symptoms, test results, visual appearance,...
- goals: remove disease, relieve pain, increase life expectancy, reduce costs,...
- past experiences: treatment outcomes, effects of drugs, test results given symptoms,...
- prior knowledge: possible diseases, symptoms, possible causal relationships,...
Example agent: infobot
- actions: present information, ask user, find another information source, filter information, interrupt,...
- observations: user's request, information retrieved, user feedback, facial expressions,...
- goals: present information, maximize useful information, minimize irrelevant information, privacy,...
- past experiences: effects of presentation modes, reliability of information sources,...
- prior knowledge: information sources, presentation modalities,...
Example representations: machine language, C, Java, Prolog, natural language
We want a representation to be
- rich enough to express the knowledge needed to solve the problem.
- as close to the problem as possible: compact, natural and maintainable.
- amenable to efficient computation; able to express features of the problem we can exploit for computational gain.
- learnable from data and past experiences.
- able to trade off accuracy and computation time.
Problem => representation => computation
A representation and reasoning system (RRS) consists of
- A language to communicate with the computer.
- A way to assign meaning to the symbols.
- Procedures to compute answers or solve problems.
Example RRSs:
- Programming languages: Fortran, C++,...
- Natural Language
We want something between these extremes.
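To make the three RRS components concrete, here is a minimal sketch of a toy RRS in Python (not from the book, which develops its own definite-clause RRS later): the language is if-then rules over atoms, the meaning is whatever the user intends each atom to stand for, and the procedure answers queries by backward chaining. The atom names are loosely inspired by the book's electrical-wiring example but are invented here.

    # (1) Language: an atom is a string; rules[h] lists alternative bodies,
    #     and an empty body makes h a fact.  (Assumes the rules are acyclic.)
    rules = {
        "lit_l1":       [["live_w0", "ok_l1"]],
        "live_w0":      [["live_outside", "up_s2"]],
        "live_outside": [[]],
        "up_s2":        [[]],
        "ok_l1":        [[]],
    }

    # (2) Meaning: the symbols mean whatever the user intends them to mean,
    #     e.g. "lit_l1" can be read as "light l1 is lit".

    # (3) Procedure: backward chaining -- an atom is provable if some rule
    #     for it has a body whose atoms are all provable.
    def prove(atom):
        return any(all(prove(b) for b in body)
                   for body in rules.get(atom, []))

    print(prove("lit_l1"))   # True: follows from the facts and rules
    print(prove("lit_l2"))   # False: nothing is known about lit_l2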
- Autonomous delivery robot roams around an office environment and delivers coffee, parcels,...
- Diagnostic assistant helps a human troubleshoot problems and suggests repairs or treatments, e.g., electrical problems, medical diagnosis.
- Infobot searches for information on a computer system or network.
Example inputs for the delivery robot:
- Prior knowledge: its capabilities, objects it may encounter, maps.
- Past experience: which actions are useful and when, what objects are there, how its actions affect its position.
- Goals: what it needs to deliver and when, tradeoffs between acting quickly and acting safely.
- Observations: about its environment from cameras, sonar, sound, laser range finders, or keyboards.
Sample tasks:
- Determine where Craig's office is, where the coffee is, and so on.
- Find a path between locations (see the sketch after this list).
- Plan how to carry out multiple tasks.
- Make default assumptions about where Craig is.
- Make tradeoffs under uncertainty: should it go near the stairs?
- Learn from experience.
- Sense the world, avoid obstacles, pick up and put down coffee.
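As one concrete reading of the path-finding task, here is a minimal sketch (not from the book): it assumes the office is given as a graph of named locations (the map below is invented for the example) and uses breadth-first search, one of the search methods the book treats in depth later.

    from collections import deque

    # A hypothetical office map: each location lists its adjacent locations.
    office_map = {
        "coffee_room":   ["hallway"],
        "hallway":       ["coffee_room", "craigs_office", "mail_room"],
        "mail_room":     ["hallway", "stairs"],
        "craigs_office": ["hallway"],
        "stairs":        ["mail_room"],
    }

    def find_path(graph, start, goal):
        """Breadth-first search: a shortest path (fewest hops), or None."""
        frontier = deque([[start]])
        visited = {start}
        while frontier:
            path = frontier.popleft()
            if path[-1] == goal:
                return path
            for neighbor in graph[path[-1]]:
                if neighbor not in visited:
                    visited.add(neighbor)
                    frontier.append(path + [neighbor])
        return None

    print(find_path(office_map, "coffee_room", "craigs_office"))
    # ['coffee_room', 'hallway', 'craigs_office']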
Example inputs for the diagnostic assistant:
- Prior knowledge: how switches and lights work, how malfunctions manifest themselves, what information tests provide, the side effects of repairs.
- Past experience: the effects of repairs or treatments, the prevalence of faults or diseases.
- Goals: fixing the device, and tradeoffs between fixing or replacing different components.
- Observations: symptoms of a device or patient.
Sample tasks:
- Derive the effects of faults and interventions.
- Search through the space of possible fault complexes.
- Explain its reasoning to the human who is using it.
- Derive possible causes for symptoms and rule out other causes (see the sketch after this list).
- Plan courses of tests and treatments to address the problems.
- Reason about the uncertainties and ambiguities given symptoms.
- Trade off alternate courses of action.
- Learn which symptoms are associated with which faults, the effects of treatments, and the accuracy of tests.
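To make "derive possible causes and rule out other causes" concrete, here is a minimal sketch (not from the book): it assumes each candidate fault is listed with the symptoms it would produce, keeps the faults that account for an observed symptom, and rules out any fault that predicts a symptom known to be absent. The fault model is invented for the example.

    # Hypothetical fault model: each fault -> symptoms it would produce.
    fault_model = {
        "blown_fuse":     {"light_off", "outlet_dead"},
        "broken_bulb":    {"light_off"},
        "flipped_switch": {"light_off"},
    }

    def candidate_faults(model, present, absent):
        """Faults that account for an observed symptom and predict
        no symptom known to be absent."""
        candidates = []
        for fault, predicted in model.items():
            explains = bool(predicted & present)     # covers an observation
            contradicted = bool(predicted & absent)  # predicts a non-symptom
            if explains and not contradicted:
                candidates.append(fault)
        return candidates

    # The light is off, but the outlet is known to be working:
    print(candidate_faults(fault_model,
                           present={"light_off"},
                           absent={"outlet_dead"}))
    # ['broken_bulb', 'flipped_switch']  -- blown_fuse is ruled out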
Infobot interacts with an information environment:
- It takes in high-level, perhaps informal, queries.
- It finds relevant information.
- It presents the information in a meaningful way.
Example inputs for the infobot:
- Prior knowledge: the meaning of words, the types of information sources, and how to access information.
- Past experience: where information can be obtained, the relative speed of various servers, and information about the preferences of the user.
- Goals: the information it needs to find out; tradeoffs between the volume and quality of information and the expense involved.
- Observations: what information is at the current sites, what links are available, the load on various connections.
Sample tasks:
- Derive information that is only implicit in a knowledge base (see the sketch after this list).
- Interact in natural language.
- Find good representations of knowledge.
- Explain how an answer was derived and why some information was unavailable.
- Make conclusions about the lack of knowledge or about conflicting knowledge.
- Make default inferences about where to find information.
- Make tradeoffs between information quality and cost.
- Learn the preferences of users.
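A minimal sketch of deriving information that is only implicit in a knowledge base (not from the book): the knowledge base is assumed to be a set of known facts plus if-then rules, and forward chaining applies the rules until nothing new follows. The facts and rules are invented for the example.

    # Hypothetical knowledge base: known facts plus rules (body -> head).
    facts = {"server(alpha)", "holds(alpha, ai_slides)"}
    rules = [
        ({"server(alpha)", "holds(alpha, ai_slides)"}, "can_fetch(ai_slides)"),
        ({"can_fetch(ai_slides)"}, "answered(query_about_ai_slides)"),
    ]

    def forward_chain(facts, rules):
        """Forward chaining: derive everything the rules make implicit."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for body, head in rules:
                if body <= derived and head not in derived:
                    derived.add(head)
                    changed = True
        return derived

    print(forward_chain(facts, rules) - facts)
    # {'can_fetch(ai_slides)', 'answered(query_about_ai_slides)'}
    # (set printing order may vary)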
- Modeling the environment: build models of the physical environment, patient, or information environment.
- Evidential reasoning or perception: given observations, determine what the world is like.
- Action: given a model of the world and a goal, determine what should be done.
- Learning from past experiences: learn about the specific case and the population of cases.
- Our goal is to study these four tasks.
- We build the tools needed from the bottom up.
- We start with some restrictive simplifying assumptions and lift them
as we get more sophisticated representations and more powerful
reasoning strategies.
- The theory and practice are built from solid foundations.