
1 Decision-tree Learning
Consider the data on 4 Boolean attributes a, b, c, and d, where d is
the target classification.
|    | a     | b     | c     | d     |
| e1 | true  | true  | false | false |
| e2 | false | true  | false | true  |
| e3 | false | true  | true  | true  |
| e4 | false | false | true  | false |
| e5 | true  | false | false | false |
In this question we will consider decision-tree learning based on this
data.
- What is a good attribute to split on first? Explain why. (A way to compare the candidate attributes numerically is sketched after this list.)
- Draw a decision tree that the top-down myopic decision tree learning
algorithm could build. For each node (including the leaves) show which
examples are used to determine the classification at that node. (The
root node of the tree will be labelled with the list of all of the
examples.) A sketch of such a learner follows this list.
- Explain how the learning bias inherent in learning decision trees can
be used to classify unseen instances. Give an instance that is not in
the training data and show how the above tree classifies that instance.
Justify why this is an appropriate classification.
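
One way to make "a good attribute" precise for a myopic learner is to compare how much each attribute reduces the uncertainty about the target d. The following Python sketch is not part of the original exercise; it assumes information gain as the splitting criterion and computes the gain of a, b, and c over the five examples in the table (the attribute and example names come from the table, everything else is illustrative).

    from math import log2

    # The five training examples from the table above; d is the target.
    examples = [
        {"a": True,  "b": True,  "c": False, "d": False},  # e1
        {"a": False, "b": True,  "c": False, "d": True},   # e2
        {"a": False, "b": True,  "c": True,  "d": True},   # e3
        {"a": False, "b": False, "c": True,  "d": False},  # e4
        {"a": True,  "b": False, "c": False, "d": False},  # e5
    ]

    def entropy(exs):
        # Entropy (in bits) of the target d over a list of examples.
        if not exs:
            return 0.0
        p = sum(e["d"] for e in exs) / len(exs)
        if p in (0.0, 1.0):
            return 0.0
        return -p * log2(p) - (1 - p) * log2(1 - p)

    def information_gain(exs, attr):
        # Reduction in entropy of d obtained by splitting exs on attr.
        parts = {}
        for e in exs:
            parts.setdefault(e[attr], []).append(e)
        remainder = sum(len(p) / len(exs) * entropy(p)
                        for p in parts.values())
        return entropy(exs) - remainder

    for attr in ("a", "b", "c"):
        print(attr, round(information_gain(examples, attr), 3))

An attribute with a larger gain leaves purer partitions of the examples, which is what the first question asks you to argue informally.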
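
The second and third questions concern the top-down myopic learner itself and the way the resulting tree classifies unseen instances. The sketch below is again illustrative rather than the book's code; it reuses examples, entropy and information_gain from the previous sketch, greedily splits on the attribute with the highest gain, labels leaves with the majority classification, and finally classifies one instance that does not occur in the training data.

    def majority(exs):
        # Most common value of d among the examples (ties resolve to true).
        trues = sum(e["d"] for e in exs)
        return trues >= len(exs) - trues

    def learn_tree(exs, attrs):
        # Top-down myopic learner: stop when the examples agree (or no
        # attributes remain), otherwise split on the highest-gain attribute.
        if not attrs or all(e["d"] == exs[0]["d"] for e in exs):
            return majority(exs)                 # leaf classification
        best = max(attrs, key=lambda a: information_gain(exs, a))
        rest = [a for a in attrs if a != best]
        children = {}
        for value in (True, False):
            part = [e for e in exs if e[best] == value]
            children[value] = learn_tree(part, rest) if part else majority(exs)
        return (best, children)                  # internal node: test on best

    def classify(tree, instance):
        # Follow the tests until a leaf classification is reached.
        while isinstance(tree, tuple):
            attr, children = tree
            tree = children[instance[attr]]
        return tree

    tree = learn_tree(examples, ["a", "b", "c"])
    # An instance that is not one of e1-e5, chosen only for illustration:
    print(classify(tree, {"a": True, "b": False, "c": True}))

The classification an unseen instance receives is determined by the training examples that end up in the same leaf, which is the learning bias the last question asks you to discuss.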
Computational Intelligence online material, © David Poole, Alan Mackworth and Randy Goebel, 1998
