# Appendix 6: Decision Trees

Decision Tree Methodology: Decision trees help structure decision making. The idea is to concretely identify the choice points and map the sequence of decisions from beginning to end. The advantage is that how a decision is made becomes explicit, and others faced with the same questions can reuse the decision tree.

A decision tree starts with a decision that must be made--in this example, whether or not to arrest a suspect. A square representing this decision is drawn on the left-hand side of the page. From this box, lines are drawn out to the right, one for each possible choice, and the choices are written along those lines. At the end of each line, the result is considered. If the result is outside the decision-maker's control--that is, if nature makes the next move--a circle is drawn at the end of the line. If the result is another decision, a square is drawn. If it is a final consequence, a solid dot is drawn.

[ILLUSTRATION OMITTED]

The procedure for choosing a decision strategy from a decision tree is called backward induction analysis, which can be summarized as follows. First, each terminal node (marked by a solid dot) is assigned a number representing the worth, or utility, of the final consequence to the decision maker. For example, at the top of the figure above, the consequence "describes plot to attack" is given a high value of 10; at the bottom right-hand corner, the consequence "released on bail" is given a low value of 1. Second, each event node (a circle) is assigned a number representing its expected utility--the probability-weighted average of the utilities of its branches. For node (1), for example, this is the utility of "describes plot to attack" multiplied by the estimated likelihood of that outcome (p = 1/10, or 0.1), giving 10*0.1 = 1.0, plus the utility of "no evidence of plot" multiplied by the estimated likelihood of that outcome (p = 8/10, or 0.8), giving 2*0.8 = 1.6; that is, 10(.1) + 2(.8) = 2.6. Finally, each decision node is assigned the maximum value of the nodes that branch out from it. Thus, at decision point [2], since 2.6 > 1, "interrogate" is the best choice.

Working from right to left, these calculations are:

[ILLUSTRATION OMITTED]
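The right-to-left calculation above can be sketched in code. The following is a minimal sketch in Python: the utilities 10, 2, and 1 and the probabilities 0.1 and 0.8 come from the text, but the exact shape of decision node [2] (an "interrogate" branch versus an alternative valued 1) is an assumption standing in for the omitted illustration.

```python
# Backward induction on a small decision tree, sketched from the arrest
# example. Terminal nodes (solid dots) carry utilities; event nodes
# (circles) average over nature's moves; decision nodes (squares) take
# the best available choice.

def value(node):
    """Return the backward-induction value of a node."""
    kind = node["kind"]
    if kind == "terminal":  # solid dot: utility of the final consequence
        return node["utility"]
    if kind == "event":     # circle: probability-weighted average of branches
        return sum(p * value(child) for p, child in node["branches"])
    if kind == "decision":  # square: maximum over the available choices
        return max(value(child) for _, child in node["choices"])
    raise ValueError(f"unknown node kind: {kind!r}")

def best_choice(node):
    """For a decision node, return the label of the highest-valued choice."""
    return max(node["choices"], key=lambda choice: value(choice[1]))[0]

# Event node (1): interrogation outcomes, with the utilities and
# probabilities given in the text.
interrogate = {
    "kind": "event",
    "branches": [
        (0.1, {"kind": "terminal", "utility": 10}),  # describes plot to attack
        (0.8, {"kind": "terminal", "utility": 2}),   # no evidence of plot
    ],
}

# Decision node [2]: interrogate, or a hypothetical alternative valued 1
# (matching the "released on bail" utility from the figure).
decide_2 = {
    "kind": "decision",
    "choices": [
        ("interrogate", interrogate),
        ("release", {"kind": "terminal", "utility": 1}),
    ],
}

print(value(interrogate))    # 10*0.1 + 2*0.8 = 2.6
print(best_choice(decide_2))
```

Because `value` recurses from the terminal nodes back toward the root, it performs exactly the right-to-left pass described above: event nodes are averaged, decision nodes are maximized, and the label returned by `best_choice` is the recommended strategy at that decision point.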

The value of decision trees is that (1) the choices are made explicit; (2) each choice is explicitly evaluated in terms of both the importance of its outcome and the probability of that outcome; and (3) how a decision will be made--or was made--can be communicated to another person. As with any technique or tool suggested here, decision trees should guide decisions, not make them. The final decision is left to the operator.

Publication: Countering Terrorism: Integration of Practice and Theory

Date: Feb 28, 2002