N-body methods: KD trees, multipole methods & distance transform
This package includes software for fast multipole methods, the distance transform, KD-trees and dual trees.
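The package's own interfaces are documented in the code itself. Purely as an illustration of what the distance transform computes (and of the brute-force cost the fast methods are designed to avoid), here is a minimal MATLAB sketch:

% Minimal brute-force Euclidean distance transform, for illustration only.
% This exhaustive O(N*M) pairwise computation is exactly what the fast
% methods in the package avoid.
BW = false(5); BW(3,3) = true;            % binary image with one "on" pixel
[r, c] = find(BW);                        % coordinates of the on-pixels
[X, Y] = meshgrid(1:size(BW,2), 1:size(BW,1));
D = inf(size(BW));
for k = 1:numel(r)
    D = min(D, sqrt((Y - r(k)).^2 + (X - c(k)).^2));
end
disp(D)     % distance from each pixel to the nearest on-pixel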
Click here for program code
Rao-Blackwellised Particle Filtering for Dynamic Mixtures of Gaussians
The software implements particle filtering and Rao-Blackwellised particle filtering for conditionally Gaussian models. The Rao-Blackwellised algorithm can be interpreted as an efficient stochastic mixture of Kalman filters. The software also includes efficient state-of-the-art resampling routines, which are generic and suitable for any application.
For details, please refer to Rao-Blackwellised Particle Filtering for Fault Diagnosis and On Sequential Simulation-Based Methods for Bayesian Filtering.
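As a taste of the resampling routines mentioned above, here is a minimal sketch of systematic resampling, one standard generic scheme (the function name and interface are illustrative, not the package's own):

function idx = systematic_resample(w)
% Systematic resampling: returns indices of the selected particles.
% w: normalised importance weights (column vector summing to one).
N = numel(w);
edges = cumsum(w);
edges(end) = 1;                       % guard against round-off
u = ((0:N-1)' + rand) / N;            % N stratified points with one shared offset
idx = zeros(N, 1);
j = 1;
for i = 1:N
    while u(i) > edges(j)             % single pass over the CDF: O(N) overall
        j = j + 1;
    end
    idx(i) = j;
end
end

Systematic resampling runs in O(N) and has lower variance than multinomial resampling, which is why it is a common default.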
After downloading the file, type "tar -xf demo_rbpf_gauss.tar" to uncompress it. This creates the directory webalgorithm containing the required m-files. Go to this directory, start MATLAB and run the demo.
Click here for program code
Rao-Blackwellised Particle Filtering for Dynamic Bayesian Networks
In this demo, we show how to use Rao-Blackwellised particle filtering to exploit the conditional independence structure of a simple DBN.
The derivation and details are presented in A Simple Tutorial on Rao-Blackwellised Particle Filtering for Dynamic Bayesian Networks. This detailed discussion of the ABC network should complement the UAI 2000 paper by Arnaud Doucet, Nando de Freitas, Kevin Murphy and Stuart Russell.
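To make the idea concrete, here is a hedged sketch of one Rao-Blackwellised step for a switching linear-Gaussian model: the discrete state is sampled, while the linear-Gaussian part is marginalised exactly with a Kalman filter. The interface and names below are illustrative, not the demo's own code:

function [z, m, P, logw] = rbpf_step(z, m, P, y, T, A, C, Q, R)
% One RBPF step for a switching linear-Gaussian model (illustrative).
% z: discrete states (N x 1); m: conditional means (d x N);
% P: conditional covariances (d x d x N); y: current observation;
% T: transition matrix over z; A, C, Q, R: cell arrays of per-regime
% model matrices. logw: log incremental importance weights.
N = numel(z);
logw = zeros(N, 1);
for i = 1:N
    z(i) = find(rand < cumsum(T(z(i), :)), 1);      % sample the discrete state
    k = z(i);
    mp = A{k} * m(:, i);                            % Kalman predict
    Pp = A{k} * P(:, :, i) * A{k}' + Q{k};
    S  = C{k} * Pp * C{k}' + R{k};                  % innovation covariance
    K  = Pp * C{k}' / S;                            % Kalman gain
    e  = y - C{k} * mp;                             % innovation
    m(:, i) = mp + K * e;                           % exact conditional update
    P(:, :, i) = Pp - K * C{k} * Pp;
    logw(i) = -0.5 * (e' / S * e + log(det(2 * pi * S)));  % predictive likelihood
end
end

After this step one normalises the weights and resamples as usual; only the discrete state is sampled, which is what reduces the variance relative to a plain particle filter.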
After downloading the file, type "tar -xf demorbpfdbn.tar" to uncompress it. This creates the directory webalgorithm containing the required m-files. Go to this directory, start MATLAB 5 and type "dbnrbpf" to run the demo.
Click here for program code
Unscented Particle Filter
These demos illustrate the extended Kalman filter (EKF), the unscented Kalman filter (UKF), the standard particle filter (a.k.a. condensation, survival of the fittest, bootstrap filter, SIR, sequential Monte Carlo, etc.), the particle filter with MCMC steps, the particle filter with EKF proposal and the unscented particle filter (particle filter with UKF proposal) on a simple state estimation problem and on a financial time series forecasting problem. The algorithms are coded in a way that makes it trivial to apply them to other problems. Several generic routines for resampling are provided.
The derivation and details are presented in: Rudolph van der Merwe, Arnaud Doucet, Nando de Freitas and Eric Wan. The Unscented Particle Filter. Technical report CUED/F-INFENG/TR 380, Cambridge University Department of Engineering, May 2000.
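The key ingredient that distinguishes the UKF-based filters is the unscented transform, which propagates a set of sigma points through the nonlinearity instead of linearising it. A minimal sketch, using the common alpha/beta/kappa parameterisation (illustrative, not the demo's code):

function [mu_y, S_y] = unscented_transform(f, mu, P, alpha, beta, kappa)
% Scaled unscented transform: pushes a Gaussian (mu, P) through the
% nonlinearity f via 2d+1 sigma points.
d = numel(mu);
lambda = alpha^2 * (d + kappa) - d;
L = chol((d + lambda) * P, 'lower');      % matrix square root of the scaled covariance
X = [mu, repmat(mu, 1, d) + L, repmat(mu, 1, d) - L];   % sigma points (columns)
Wm = [lambda / (d + lambda), repmat(1 / (2 * (d + lambda)), 1, 2 * d)];
Wc = Wm; Wc(1) = Wc(1) + (1 - alpha^2 + beta);
Y = zeros(size(f(mu), 1), 2 * d + 1);
for j = 1:2 * d + 1
    Y(:, j) = f(X(:, j));                 % propagate each sigma point
end
mu_y = Y * Wm';                           % weighted mean
D = Y - repmat(mu_y, 1, 2 * d + 1);       % deviations from the mean
S_y = D * diag(Wc) * D';                  % weighted covariance
end

Typical choices such as alpha = 1, beta = 2 (optimal for Gaussian priors) and kappa = 3 - d keep d + lambda positive, so the Cholesky factorisation above is well defined.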
After downloading the file, type "tar -xf upf_demos.tar" to uncompress it. This creates the directory webalgorithm containing the required m-files. Go to this directory, start MATLAB 5 and type "demo_MC" to run the demo.
Click here for program code
EM for neural networks
In this demo, I use the EM algorithm, with a Rauch-Tung-Striebel smoother in the E step and a recently derived M step, to train a two-layer perceptron to classify medical data (kindly provided by Steve Roberts and Will Penny of EE, Imperial College). The data and simulations are described in: Nando de Freitas, Mahesan Niranjan and Andrew Gee. Nonlinear State Space Estimation with Neural Networks and the EM algorithm.
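For concreteness, here is a hedged sketch of the backward Rauch-Tung-Striebel pass, written for a linear-Gaussian state space model x_{t+1} = A x_t + w_t (the demo uses smoothing of this type inside its E step; the interface below is illustrative):

function [ms, Ps] = rts_smoother(mf, Pf, mp, Pp, A)
% Backward Rauch-Tung-Striebel pass (illustrative).
% mf, Pf: filtered means (d x T) and covariances (d x d x T);
% mp, Pp: one-step predicted means/covariances from the forward pass.
T = size(mf, 2);
ms = mf; Ps = Pf;
for t = T-1:-1:1
    J = Pf(:, :, t) * A' / Pp(:, :, t+1);                    % smoother gain
    ms(:, t) = mf(:, t) + J * (ms(:, t+1) - mp(:, t+1));
    Ps(:, :, t) = Pf(:, :, t) + J * (Ps(:, :, t+1) - Pp(:, :, t+1)) * J';
end
end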
After downloading the file, type "tar -xf EMdemo.tar" to uncompress it. This creates the directory EMdemo containing the required m-files. Go to this directory, start MATLAB 5 and type "EMtremor". The figures will then show the simulation results, including ROC curves, likelihood plots, decision boundaries with error bars, etc.
WARNING: Do make sure that you monitor the log-likelihood and check that it is increasing. Due to numerical errors, it might show glitches for some data sets.
Click here for program code
Reversible Jump MCMC Bayesian Model Selection
This demo illustrates the use of the reversible jump MCMC algorithm for neural networks. It uses a hierarchical full Bayesian model that treats the model dimension (number of neurons), model parameters, regularisation parameters and noise parameters as random variables to be estimated. The derivations and proof of geometric convergence are presented, in detail, in: Christophe Andrieu, Nando de Freitas and Arnaud Doucet.
Robust Full Bayesian Learning for Neural Networks. Technical report CUED/F-INFENG/TR 343, Cambridge University Department of Engineering, May 1999.
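For orientation, a dimension-changing move from (k, theta) to (k', theta'), built by drawing auxiliary variables u ~ q and applying a deterministic mapping, is accepted with the usual reversible jump probability (notation follows Green's general formulation rather than the report's):

\alpha = \min\left\{ 1,\;
\frac{p(k',\theta' \mid y)\, j(k \mid k')\, q_{k' \to k}(u')}
     {p(k,\theta \mid y)\, j(k' \mid k)\, q_{k \to k'}(u)}
\left| \det \frac{\partial(\theta',u')}{\partial(\theta,u)} \right| \right\}

where j(k' | k) is the probability of proposing a jump from model k to model k', and the Jacobian accounts for the change of variables between the two parameter spaces.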
After downloading the file, type "tar -xf rjMCMC.tar" to uncompress it. This creates the directory rjMCMC containing the required m-files. Go to this directory, start MATLAB 5 and type "rjdemo1". In the header of the demo file, you can choose to monitor the simulation progress (with par.doPlot=1) and modify the simulation parameters.
Click here for program code
Reversible Jump MCMC Simulated Annealing
This demo illustrates the use of reversible jump MCMC simulated annealing for neural networks. This algorithm enables us to maximise the joint posterior distribution of the network parameters and the number of basis functions. It performs a global search in the joint space of the parameters and number of parameters, thereby surmounting the problem of local minima. It allows the user to choose among various model selection criteria, including AIC, BIC and MDL. The derivations and proof of convergence are presented, in detail, in: Christophe Andrieu, Nando de Freitas and Arnaud Doucet.
Robust Full Bayesian Learning for Neural Networks. Technical report CUED/F-INFENG/TR 343, Cambridge University Department of Engineering, May 1999.
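For reference, the standard forms of these criteria, for a model with k parameters, maximised likelihood \hat{L} and n data points, are (quoted here as the usual textbook definitions, not from the report):

\mathrm{AIC} = -2\ln\hat{L} + 2k, \qquad \mathrm{BIC} = -2\ln\hat{L} + k\ln n

and MDL in its simplest two-part coding form yields the same penalty as BIC.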
After downloading the file, type "tar -xf rjMCMCsa.tar" to uncompress it. This creates the directory rjMCMCsa containing the required m-files. Go to this directory, start MATLAB 5 and type "rjdemo1sa". In the header of the demo file, you can choose to monitor the simulation progress (with par.doPlot=1), modify the simulation parameters and select the model selection criterion.
Click here for program code
Adaptive Online Learning of Neural Networks with the EKF
In this demo, I use the EKF and the EKF with noise adaptation to train a neural network on data generated from a nonlinear, non-stationary state space model. Adaptation is done by matching the ensemble covariance of the innovations to their covariance over time, so that the one-step-ahead prediction errors become white (i.e., all the information in the data is absorbed by the model). All the derivations are presented, in detail, in:
Nando de Freitas, Mahesan Niranjan and Andrew Gee. Hierarchical Bayesian models for regularisation in sequential learning. Neural Computation, Vol. 12, No. 4, pages 955-993.
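A hedged sketch of this kind of covariance-matching measurement update follows; the adaptation rule and interface are one simple illustrative variant, not necessarily the demo's exact scheme:

function [x, P, R, Shat] = ekf_update_adapt(x, P, y, h, H, R, Shat, rho)
% EKF measurement update with a simple covariance-matching noise adaptation.
% x, P: predicted state mean and covariance; y: observation;
% h: measurement function; H: its Jacobian at x; rho: forgetting factor in (0,1).
e = y - h(x);                               % innovation
Shat = rho * Shat + (1 - rho) * (e * e');   % running estimate of the innovation covariance
S = H * P * H' + R;                         % innovation covariance predicted by the filter
R = R + (1 - rho) * (Shat - S);             % nudge R toward the observed mismatch
R = (R + R') / 2;                           % keep R symmetric (in practice also enforce positivity)
S = H * P * H' + R;                         % recompute with the adapted R
K = P * H' / S;                             % Kalman gain
x = x + K * e;                              % state update
P = P - K * H * P;                          % covariance update
end

When the adaptation is working, the empirical innovation covariance tracks its predicted value and the prediction errors are white, which is exactly the criterion described above.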
After downloading the file, type "tar -xf demo3.tar" to uncompress it. This creates the directory demo3 containing the required m-files. Go to this directory, start MATLAB and type "ekfdemo1". Figure 1 will then show the simulation results.
Click here for program code