Tuesday 25 September 2007

Model risk in economics and finance

Over at Overcoming Bias, there is a discussion of model checking in the economics literature:

One thing that bugs me is that there seems to be so little model checking done in statistics. Data-based model checking is a powerful tool for overcoming bias, and it's frustrating to see this tool used so rarely. [...]

But, why, if this is such a good idea, do people not do it?

I don't buy the cynical answer that people don't want to falsify their own models. My preferred explanation might be called sociological and goes as follows: We're often told to check model fit. But suppose we fit a model, write a paper, and check the model fit with a graph. If the fit is ok, then why bother with the graph: the model is OK, right? If the fit shows problems (which, realistically, it should, if you think hard enough about how to make your model-checking graph), then you better not include the graph in the paper, or the reviewers will reject, saying that you should fix your model. And once you've fit the better model, no need for the graph.

The result is: (a) a bloodless view of statistics in which only the good models appear, leaving readers in the dark about all the steps needed to get there; or, worse, (b) statisticians (and, in general, researchers) not checking the fit of their model in the first place, so that neither the original researchers nor the readers of the journal learn about the problems with the model.

Sadly this rings all too true in finance as well. If a firm has a single pricing model and no really incisive model verification, it is often happy. If it has more than one model, or a good model verification unit, the scope for error, uncertainty and failure of assumptions becomes visible. People dislike uncertainty, so this kind of mature attitude can be rare. I don't go quite as far as this (from Equity Private):

Large unwieldy models are almost universally produced by financial "professionals" who have no clue whatsoever about their predictive value (hint: it is vanishingly small) and therefore the size of the model is, in my view, inversely proportional to the technical competence of the employee.

But I would like to see something like this:

We are doing our best with this product given the resources available, but the model might be wrong. By stressing parameter inputs, using other models, and reviewing assumptions, we have quantified our likely error as $28M.

It might appear to be a mess, but it does properly characterise (to use what is becoming the word of the week) the epistemology of the situation.
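As a concrete illustration, here is a minimal Python sketch of what that kind of stress-based error estimate might look like. It uses a textbook Black-Scholes call as a stand-in for a real pricing model, and the parameter values and bump sizes are purely illustrative assumptions, not anyone's actual book or the $28M figure above.

import math

def black_scholes_call(spot, strike, rate, vol, t):
    """Standard Black-Scholes price of a European call option."""
    d1 = (math.log(spot / strike) + (rate + 0.5 * vol**2) * t) / (vol * math.sqrt(t))
    d2 = d1 - vol * math.sqrt(t)
    # Standard normal CDF via the error function.
    n = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))
    return spot * n(d1) - strike * math.exp(-rate * t) * n(d2)

# Illustrative base parameters (hypothetical, not market data).
base = dict(spot=100.0, strike=105.0, rate=0.05, vol=0.20, t=1.0)
base_price = black_scholes_call(**base)

# Bump each input up and down and collect the stressed prices.
# Bump sizes are illustrative assumptions.
bumps = {"spot": 1.0, "rate": 0.005, "vol": 0.02}
stressed_prices = []
for name, bump in bumps.items():
    for sign in (+1, -1):
        stressed = dict(base)
        stressed[name] += sign * bump
        stressed_prices.append(black_scholes_call(**stressed))

# Report the worst-case deviation as a crude model-risk number.
model_risk = max(abs(p - base_price) for p in stressed_prices)
print(f"base price {base_price:.2f}, worst-case stress deviation {model_risk:.2f}")

A real model verification unit would also vary the model itself and the assumptions behind it, not just the inputs, but even this crude spread is more honest than a single point estimate.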
