### Quantitative finance ideas in decision making

Suppose you have a decision to make and you know some quantitative finance. How can you use what you know to help in your decision?


First you construct an outcome metric. This is just a function that expresses whether one distribution of outcomes is better or worse than another. Next you deduce the distribution of outcomes for various choices, often by building a model of how the initial conditions determine the outcome, then determining the distribution of initial conditions and putting that through the model*. Apply the metric and make the decision.
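The recipe above can be sketched in a few lines of code. Everything here is invented for illustration - the two choices, the payoff model, and the mean-minus-spread metric are hypothetical, not a recommendation:

```python
import random

random.seed(0)

def simulate_outcome(choice, shock):
    # Hypothetical model: the outcome depends on the choice and on a
    # random initial condition ("shock"). Choice "a" is safe, "b" risky.
    payoff = {"a": 1.0, "b": 1.5}[choice]
    risk = {"a": 0.2, "b": 1.0}[choice]
    return payoff + risk * shock

def outcome_metric(outcomes):
    # One possible metric over a distribution of outcomes: the mean,
    # penalised for spread (more is better).
    n = len(outcomes)
    mean = sum(outcomes) / n
    var = sum((x - mean) ** 2 for x in outcomes) / n
    return mean - 0.5 * var ** 0.5

# Distribution of initial conditions, pushed through the model.
shocks = [random.gauss(0.0, 1.0) for _ in range(10_000)]
scores = {c: outcome_metric([simulate_outcome(c, s) for s in shocks])
          for c in ("a", "b")}

# Apply the metric and make the decision.
decision = max(scores, key=scores.get)
print(decision, scores)
```

Note that the whole decision hinges on the penalty coefficient in the metric: halve the spread penalty and the risky choice looks even better, double it and the safe choice wins.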

Now here's where it gets interesting. Being a quantitative person, you know that there is model risk. Specifically, there are three kinds of model risk here:

- Your outcome metric might be wrong. One common way this can happen is that there is something that you have neglected entirely - unexpected consequences.
- The distribution of outcomes is wrong because the model is wrong.
- The distribution of outcomes is wrong because the future does not behave like the past.
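One crude defence against the second and third kinds of risk is to perturb the model's assumptions and see whether the decision survives. A minimal sketch, reusing the invented two-choice example (all numbers hypothetical):

```python
def decide(mean_b, vol_b):
    # Hypothetical two-choice model: choice "a" is fixed; choice "b"'s
    # assumed mean and volatility are the model inputs we are unsure of.
    score_a = 1.0 - 0.5 * 0.2
    score_b = mean_b - 0.5 * vol_b
    return "b" if score_b > score_a else "a"

base = decide(1.5, 1.0)

# Model-risk check: scale the assumed mean and volatility of "b" and
# collect the decisions. If the set is not a singleton, the decision is
# fragile and should be flagged for regular review.
perturbed = {decide(1.5 * m, 1.0 * v)
             for m in (0.8, 1.0, 1.2) for v in (0.5, 1.0, 2.0)}
robust = perturbed == {base}
print(base, perturbed, robust)
```

Here the decision flips under quite modest perturbations, which is exactly the signal that a monitoring step is needed rather than a one-off choice.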

Two things stimulated me to write this account: one was the shameful mess that is UK energy policy, mentioned earlier in the week; the other was hearing an item on a new book, *Mistakes Were Made (But Not by Me)*, on the Today programme. The book discusses how, when faced with evidence that our decisions are bad, rather than recant and change our minds, we engage in self-justification. I will let the authors take over at this point:

> The engine that drives self-justification ... [is] cognitive dissonance. Cognitive dissonance is a state of tension that occurs whenever a person holds two cognitions (ideas, attitudes, beliefs, opinions) that are psychologically inconsistent, such as "Smoking is a dumb thing to do because it could kill me" and "I smoke two packs a day." Dissonance produces mental discomfort, ranging from minor pangs to deep anguish; people don't rest easy until they find a way to reduce it. In this example, the most direct way for a smoker to reduce dissonance is by quitting. But if she has tried to quit and failed, now she must reduce dissonance by convincing herself that smoking isn't really so harmful, or that smoking is worth the risk because it helps her relax or prevents her from gaining weight (and after all, obesity is a health risk, too), and so on. Most smokers manage to reduce dissonance in many such ingenious, if self-deluding, ways.

One of the most obvious examples is of course Blair and the Iraq war, but there are many, many more.

The quantitative way of thinking - acknowledging up front that we do not have all the relevant information (and might never have it), so that we need to review the decision regularly - is a good way of depersonalising matters. We do not need to engage in self-justification because it is not

*our* decision: it is the result of some modelling. If it goes wrong, we fix the model. Of course there may very well be opinions that go into the model, but by explicitly including the monitoring step we acknowledge *before we make the decision* that it might be wrong. That must be helpful.

* This is a stylised version of what any kind of economic capital model, such as VaR, does. The outcome metric in finance is sometimes obvious - it's expected return, with more being better - and one of the many things that makes non-financial decisions harder is that such an obvious metric is not easy to find. In particular, away from the risk-neutral measure, how much compensation should you require for uncertainty in outcomes? (Oh, and if there are any mathematicians reading, it does not need to be a metric in the technical sense: a well-founded total order with all GLBs and LUBs will do.)
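To make the footnote's VaR reference concrete: value-at-risk is just a quantile of the modelled outcome distribution. A minimal sketch, assuming a hypothetical normally distributed one-day P&L (real economic capital models are far richer):

```python
import random

random.seed(1)

# One-day P&L scenarios from a toy model: normal with a 1m standard
# deviation, purely for illustration.
pnl = sorted(random.gauss(0.0, 1.0e6) for _ in range(100_000))

# 99% value-at-risk: the loss exceeded in only 1% of scenarios
# (about 2.33 standard deviations for a normal distribution).
var_99 = -pnl[int(0.01 * len(pnl))]
print(round(var_99))
```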

Labels: Decision Making
