We will assume that turbidity is measured on a scale of 1 to 4, where 1
represents 'very safe', and 4 represents 'very unsafe'. We will assume that the water system can be
represented by the following graph:
The node labeled 'Source' is the location where water enters the system.
Similarly, the nodes labeled 'Tap' are locations where water
exits the system, while nodes labeled 'I' are internal nodes.
The graph structure above is a 'tree', meaning that there are no loops in the graph and
the graph is connected (i.e., there exists a path of edges between all pairs of nodes).
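As an aside, the tree property can be checked numerically once the adjacency matrix adj is loaded (this happens in the code further below): a connected graph with no loops has exactly nNodes-1 edges, and every node is reachable from every other node. A small sketch of such a check (not part of the demo code), assuming adj is a symmetric 0/1 matrix with no self-loops:

nNodes = length(adj);
nEdges = sum(adj(:))/2;                       % each edge appears twice in a symmetric adjacency matrix
hasTreeEdgeCount = (nEdges == nNodes-1);
reach = (eye(nNodes) + adj)^(nNodes-1) > 0;   % entry (i,j) is positive iff j is reachable from i
isConnected = all(reach(:));
isTree = hasTreeEdgeCount && isConnected      % should display 1 for the water system graph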
We will assume that the node potentials for the source node are:
State | Potential |
1 | 0.9 |
2 | 0.09 |
3 | 0.009 |
4 | 0.001 |
We will also assume that the edge potentials along each edge are given by the following matrix, where the 'from' state is the state of the node closer to the source and the 'to' state is the state of the node further from the source:
From/to | 1 | 2 | 3 | 4 |
1 | 0.9890 | 0.0099 | 0.0010 | 0.0001 |
2 | 0.1309 | 0.8618 | 0.0066 | 0.0007 |
3 | 0.0420 | 0.0841 | 0.8682 | 0.0057 |
4 | 0.0667 | 0.0333 | 0.1667 | 0.7333 |
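Note that each row of this matrix sums to one, so along an edge it can be read as the probability of the downstream node's state given the upstream node's state (this interpretation is discussed further below). A quick check of this, using the transition variable defined in the code below:

% Each row of the edge potential matrix sums to 1 (up to the rounding in the table)
sum(transition,2)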
We first load the adjacency matrix and make the edgeStruct:

load('waterSystem.mat'); % Loads adj
nNodes = length(adj);
nStates = 4;
edgeStruct = UGM_makeEdgeStruct(adj,nStates);

In the adjacency matrix, node 4 is the source node. We therefore make the nodePot matrix using:
source = 4;
nodePot = ones(nNodes,nStates);
nodePot(source,:) = [.9 .09 .009 .001];

Making the edgePot array is slightly more difficult, because we need to know which end of each edge is closer to the source. Further, if the second node of the edge is closer to the source, we need to use the transpose of the edge potential matrix above. We do this in a fairly simple (but not particularly efficient) way:
transition = [0.9890 0.0099 0.0010 0.0001
    0.1309 0.8618 0.0066 0.0007
    0.0420 0.0841 0.8682 0.0057
    0.0667 0.0333 0.1667 0.7333];

colored = zeros(nNodes,1);
colored(source) = 1;
done = 0;
edgePot = zeros(nStates,nStates,edgeStruct.nEdges);
while ~done
    done = 1;
    colored_old = colored;
    for e = 1:edgeStruct.nEdges
        if sum(colored_old(edgeStruct.edgeEnds(e,:))) == 1
            % Determine direction of edge and color nodes
            if colored(edgeStruct.edgeEnds(e,1)) == 1
                edgePot(:,:,e) = transition;
            else
                edgePot(:,:,e) = transition';
            end
            colored(edgeStruct.edgeEnds(e,:)) = 1;
            done = 0;
        end
    end
end
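The repeated sweeps above are simple but wasteful. The same orientation can instead be computed with a single breadth-first traversal from the source. The following is a minimal sketch (not UGM code), re-using the adj, source, transition, nNodes, nStates, and edgeStruct variables from above; the visited and queue variables are just for illustration, and for full efficiency one would precompute a list of the edges incident to each node rather than scanning edgeEnds:

% Alternative: orient each edge during one breadth-first traversal from the source
edgePot = zeros(nStates,nStates,edgeStruct.nEdges);
visited = false(nNodes,1);
visited(source) = true;
queue = source;
while ~isempty(queue)
    n = queue(1); queue(1) = [];                    % pop the next node
    for e = find(any(edgeStruct.edgeEnds == n,2))'  % edges touching node n
        ends = double(edgeStruct.edgeEnds(e,:));
        other = ends(ends ~= n);
        if ~visited(other)
            if ends(1) == n                         % first end of the edge is closer to the source
                edgePot(:,:,e) = transition;
            else
                edgePot(:,:,e) = transition';
            end
            visited(other) = true;
            queue(end+1) = other;
        end
    end
end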
In the last demo, we mentioned that the forward-backward algorithms would give the same result if we reversed the order of the nodes. That is, instead of picking the last node as the 'end' and the first node as the 'start', we could have made the first node the 'end' and the last node the 'start', and we would get the same result. Towards extending this method to general trees, consider the following variation: we pick some node in the middle of the chain, and define this to be the 'end'. We then pick TWO 'start' nodes, the first node and the last node. We then run the forward pass of the algorithm starting from both 'start' nodes until both passes reach the 'end' node in the middle of the chain. We then combine the 'messages' coming from both directions, compute the optimal value of the 'end' node, and then 'backtrack' in both directions to get the optimal values for the rest of the chain.
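To make this concrete, here is a minimal sketch (not UGM code) of the two-sided max-product scheme on a chain. The variable names here refer to a hypothetical chain-structured problem, not the water-system variables above: nodePot is nNodes-by-nStates, and edgePot(:,:,e) is the potential between chain nodes e and e+1, with rows indexed by the state of node e.

mid = ceil(nNodes/2);                        % the 'end' node picked in the middle of the chain

% Forward pass from the first node towards the middle
msgF = zeros(nNodes,nStates); argF = zeros(nNodes,nStates);
msgF(1,:) = 1;
for i = 2:mid
    tmp = repmat((nodePot(i-1,:).*msgF(i-1,:))',1,nStates).*edgePot(:,:,i-1);
    [msgF(i,:),argF(i,:)] = max(tmp,[],1);   % maximize over the state of node i-1
end

% Forward pass from the last node towards the middle
msgB = zeros(nNodes,nStates); argB = zeros(nNodes,nStates);
msgB(nNodes,:) = 1;
for i = nNodes-1:-1:mid
    tmp = edgePot(:,:,i).*repmat(nodePot(i+1,:).*msgB(i+1,:),nStates,1);
    [maxVal,maxInd] = max(tmp,[],2);         % maximize over the state of node i+1
    msgB(i,:) = maxVal'; argB(i,:) = maxInd';
end

% Combine both messages at the middle node, then backtrack in both directions
decoding = zeros(nNodes,1);
[maxVal,decoding(mid)] = max(nodePot(mid,:).*msgF(mid,:).*msgB(mid,:));
for i = mid:-1:2
    decoding(i-1) = argF(i,decoding(i));
end
for i = mid:nNodes-1
    decoding(i+1) = argB(i,decoding(i));
end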
It is straightforward to apply this idea to a general tree-structured graph:
optimalDecoding = UGM_Decode_Tree(nodePot,edgePot,edgeStruct)

In our model, the optimal decoding is that all nodes in the water system are 'very safe'.
[nodeBel,edgeBel,logZ] = UGM_Infer_Tree(nodePot,edgePot,edgeStruct)

nodeBel =

    0.9103    0.0779    0.0110    0.0008
    0.9110    0.0771    0.0111    0.0008
    0.9129    0.0750    0.0114    0.0008
    0.9000    0.0900    0.0090    0.0010
    0.9073    0.0814    0.0104    0.0008
    0.9129    0.0750    0.0114    0.0008
    0.9116    0.0764    0.0112    0.0008
    0.9085    0.0800    0.0106    0.0008
    ...

(remaining rows omitted; nodeBel contains one row of four marginals for each node)

We see in row 4 that the source node's marginals match its node potentials. We also see that, for all other nodes, the probability of the water being 'very safe' is higher than the source's marginal, while the probabilities of the other levels are decreased.
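These observations can be checked directly from the returned beliefs (a quick check, not part of the demo code; the setdiff call just selects all nodes other than the source):

nodeBel(source,:)            % row 4: matches nodePot(source,:) = [.9 .09 .009 .001]
otherNodes = setdiff(1:nNodes,source);
min(nodeBel(otherNodes,1))   % greater than 0.9: 'very safe' is more likely than at the source
max(nodeBel(otherNodes,2))   % less than 0.09: the other levels are less likely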
In this specific model, the node potentials of the source node can be interpreted as a marginal probability and the edge potentials can be interpreted as transition probabilities. Further, it is possible to use simpler forward inference and sampling methods, similar to the Chapman-Kolmogorov equations for Markov chains (in this case, we apply the analogous equations to an appropriately defined branching process, where we start at the source, branch at internal nodes, and terminate at tap nodes). This probabilistic interpretation will not hold for general tree-structured UGMs. However, the methods implemented in UGM handle the case of general tree-structured UGMs, and can therefore handle extensions such as non-unit node potentials at nodes other than the source, or edge potentials that are not normalized probabilities as we move away from the source.
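To illustrate this, here is a minimal sketch (not part of UGM) of the simpler forward inference for this particular model: the marginal of each node is obtained by multiplying its parent's marginal by the transition matrix, moving outward from the source. It re-uses the adj, source, transition, nNodes, and nStates variables from above; forwardBel, visited, and queue are names used only for this illustration.

% Forward (Chapman-Kolmogorov-style) computation of the node marginals for this model:
% propagate the source distribution down the tree, away from the source.
forwardBel = zeros(nNodes,nStates);
forwardBel(source,:) = [.9 .09 .009 .001];
visited = false(nNodes,1);
visited(source) = true;
queue = source;
while ~isempty(queue)
    n = queue(1); queue(1) = [];
    for m = find(adj(n,:))                             % neighbours of node n
        if ~visited(m)
            forwardBel(m,:) = forwardBel(n,:)*transition;  % child marginal = parent marginal * transition
            visited(m) = true;
            queue(end+1) = m;
        end
    end
end
% For this model, forwardBel should match (up to rounding) the nodeBel returned by UGM_Infer_Tree.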
edgeStruct.maxIter = 100;
samples = UGM_Sample_Tree(nodePot,edgePot,edgeStruct);

Here is a visualization of 100 samples:
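In addition to the visualization, a quick count (not part of the demo code) shows how often each turbidity state appears across all nodes and samples; the orientation of the samples matrix does not matter for this count:

% Count occurrences of each state over all nodes and samples
stateCounts = histc(double(samples(:)),1:nStates)
% stateCounts(4) is 0 for these 100 samples, since state 4 is so unlikely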
In the samples generated above, the state '4' was
never encountered. Although we would eventually generate samples where state 4
was encountered, knowing how the model behaves in this scenario is clearly
important and we may not want to wait for these unlikely scenarios to
occur. In the next demo, we examine performing conditional
decoding/inference/sampling, where we perform these operations under the
assumption that the values of one or more nodes are known (i.e., what
happens to the other nodes when the source node is in state 4).