6.7 References and Further Reading
Introductions to probability theory from an AI perspective, and to belief (Bayesian) networks, are given by Darwiche (2009), Koller and Friedman (2009), Pearl (1988), Jensen (1996), and Castillo et al. (1996). Halpern (1997) reviews the relationship between logic and probability. Bacchus et al. (1996) present a random worlds approach to probabilistic reasoning.
Variable elimination for evaluating belief networks is presented in Zhang and Poole (1994) and Dechter (1996). Treewidth is discussed by Bodlaender (1993).
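As a small illustration of the variable elimination idea cited above, the following is a minimal sketch on a hypothetical chain network A → B → C with made-up binary conditional probability tables; summing out A and then B leaves a factor on C, which is P(C). This is only a toy instance of the technique, not the algorithm of Zhang and Poole (1994) in full generality.

```python
# Hypothetical binary tables for the chain A -> B -> C.
p_a = [0.6, 0.4]                        # P(A)
p_b_given_a = [[0.7, 0.3], [0.2, 0.8]]  # P(B | A), rows indexed by A
p_c_given_b = [[0.9, 0.1], [0.5, 0.5]]  # P(C | B), rows indexed by B

def eliminate(prior, cpt):
    """Sum the parent out of prior[parent] * cpt[parent][child],
    returning a factor on the child."""
    return [sum(prior[p] * cpt[p][c] for p in range(len(prior)))
            for c in range(len(cpt[0]))]

p_b = eliminate(p_a, p_b_given_a)  # factor on B after summing out A
p_c = eliminate(p_b, p_c_given_b)  # factor on C after summing out B
print([round(x, 3) for x in p_c])  # -> [0.7, 0.3]
```

On a chain the eliminations are just matrix-vector products; the general algorithm interleaves multiplying and summing out factors in an elimination ordering, whose cost is governed by the treewidth discussed by Bodlaender (1993).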
For comprehensive reviews of information theory, see Cover and Thomas (1991) and Grünwald (2007).
For discussions of causality, see Pearl (2000) and Spirtes et al. (2000).
For introductions to stochastic simulation, see Rubinstein (1981) and Andrieu et al. (2003). Forward sampling in belief networks is based on Henrion (1988), who called it logic sampling. The use of importance sampling in belief networks described here is based on Cheng and Druzdzel (2000), who also consider how to learn the proposal distribution. Doucet et al. (2001) collect articles on particle filtering.
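The forward (logic) sampling cited above can be sketched in a few lines: sample each variable in topological order from its conditional distribution given the already-sampled values of its parents, and estimate a probability as a sample frequency. The two-variable network and its numbers below are hypothetical.

```python
import random

# Hypothetical network A -> B:
#   P(a) = 0.4,  P(b | a) = 0.9,  P(b | not a) = 0.2

def sample_once(rng):
    """Draw one world (a, b), each variable given its sampled parents."""
    a = rng.random() < 0.4
    b = rng.random() < (0.9 if a else 0.2)
    return a, b

def estimate_p_b(n=100_000, seed=0):
    """Estimate P(b) as the fraction of sampled worlds in which b holds."""
    rng = random.Random(seed)
    return sum(sample_once(rng)[1] for _ in range(n)) / n

# Exact answer for comparison: P(b) = 0.4 * 0.9 + 0.6 * 0.2 = 0.48
print(round(estimate_p_b(), 2))
```

Conditioning on observations by simply discarding inconsistent samples (rejection sampling) wastes work when the evidence is unlikely, which motivates the importance sampling schemes of Cheng and Druzdzel (2000).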
HMMs are described by Rabiner (1989). Dynamic Bayesian networks were introduced by Dean and Kanazawa (1989). Markov localization, and other aspects of the relationship between probability and robotics, are described by Thrun et al. (2005). The use of particle filtering for localization is due to Dellaert et al. (1999).
The annual Conference on Uncertainty in Artificial Intelligence, and the general AI conferences, provide up-to-date research results.