Non-classical Cost Benefit Analysis
Remember Schrödinger's cat? Poor moggy is trapped in a box with a radioactive isotope. The isotope decays randomly: say there is a one in two chance that a detector inside the box registers a beta particle during some time interval. If the particle is detected, poison gas is released and the cat dies. If not, it lives.
This setup is usually used to explain superposition of states: basically until you open the box, the cat is in the superposed state 'dead and alive'. You force it to be one or the other by observing it.
I want to use it to talk about something else, though - cost-benefit analysis. If you like cats, then you will want to save moggy. But there is a cost to opening the box early, a pound say. The expected loss is half the cat's value, so if you judge the worth of a cat at more than two pounds, you'd spend the pound and guarantee that the cat is safe. In general, 'classical' cost-benefit analysis says that if a bad event has cost x and probability p, then it is worth spending up to px to prevent it.
Concretely this is usually expressed with less likely events: a certain kind of brake failure on your car is a one in a million event, and the average cost of an accident if your brakes fail is ten thousand pounds, so it is worth spending at most a penny to prevent the problem.
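The arithmetic is simple enough to set out explicitly. A minimal sketch in Python (the function name is mine; the figures are the ones above):

```python
def max_worthwhile_spend(p, x):
    """Classical rule: the expected loss p * x is the most it is
    rational to spend to prevent the bad event."""
    return p * x

# The cat: p = 1/2, cat valued at two pounds -> break even at one pound.
print(max_worthwhile_spend(0.5, 2.0))       # 1.0

# Brake failure: one-in-a-million event, ten thousand pound accident
# -> expected loss of 0.01 pounds, i.e. a penny.
print(max_worthwhile_spend(1e-6, 10_000))   # 0.01
```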
This classical cost-benefit analysis makes two big assumptions. First, it assumes that the bad events are independent and identically distributed; in particular, that p does not change in response to what you do about the risk. In many applications this is not true: making the system safer sometimes encourages people to take more risk (and not fixing an obvious safety issue makes them more careful). There is good evidence for this from the development of ABS brakes, amongst other things: the safety gains were far smaller than the developers hoped, since drivers absorbed the margin by driving more aggressively.
In the cat experiment, you can think of this as moggy learning (perhaps by cat telepathy) that if it gets between the isotope and the detector, the particle is less likely to be detected, and hence it is more likely to live. This changes p from 1/2 to 1/3, say, and now your pound is only worth spending if you think a cat's life is worth more than three pounds.
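Put in the same terms, moral hazard moves p, and with it the valuation at which opening the box breaks even. A small sketch, keeping the one pound cost of opening the box (the function name is again mine):

```python
def break_even_value(p, cost):
    """Spending `cost` to save the cat is justified only when
    p * value > cost, i.e. when the cat is worth more than cost / p."""
    return cost / p

print(break_even_value(1/2, 1.0))   # 2.0 pounds: the original setup
print(break_even_value(1/3, 1.0))   # 3.0 pounds: after moggy games the detector
```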
The second major problem is that your estimate of the probability of failure, p, is likely to be wrong, particularly when p is small. Your estimate of the cost, x, might be wrong too.
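To see how fragile the rule is for rare events, consider what an order-of-magnitude error in p does to the answer; a hypothetical sketch using the brake-failure figures above:

```python
# Sensitivity of the p * x rule to a misestimated probability.
x = 10_000        # estimated cost of an accident, in pounds
p_est = 1e-6      # estimated failure probability
p_true = 1e-5     # suppose the true probability is ten times higher

print(p_est * x)    # 0.01 pounds: the rule says spend a penny
print(p_true * x)   # 0.10 pounds: the right answer was ten pence

# Any error in p (or in x) scales the conclusion by the same factor,
# and for rare events order-of-magnitude errors are easy to make.
```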
As James Kwak says:

imagine that the government had considered the idea of systemic risk regulation five years ago. It would have cost money; it would have created new disclosure requirements for banks and possibly hedge funds; it would have required countercyclical measures in a boom that would dampen economic growth. Those are the costs of regulation. And how would anyone have estimated the benefits? No one would have estimated the scenario we face today – trillions of dollars of asset writedowns, 3.3% contraction in the U.S. economy and counting, even more severe damage elsewhere in the world economy. And as a result, the regulation would have died.

... it's a mistake to think that all policies can be boiled down to cost-benefit calculations when one side of the equation is difficult or impossible to measure accurately, and the last thing we need today is more economics-based overconfidence.

In other words, cost-benefit analysis is all very well if you can measure the costs, the benefits, and their probability distribution accurately. But if you can't, as with the costs of financial regulation versus its benefits, then you shouldn't use spurious theory to try to justify the decision you wanted anyway. If you don't know the probability of the particle being detected, you simply have to fall back on an ethical judgement: killing cats is wrong.
Labels: Cost Benefit Analysis
1 Comments:
I agree with your conclusions: that uncertainty and moral hazard can make CBA unreliable and sometimes it is better to rely on qualitative objectives.
But I disagree with your applying these to systemic risk. Firstly, with systemic risk it is the "worst-case scenario" that is important. If regulators had used the Great Depression as the worst-case scenario, they wouldn't have been far wrong.
Secondly, I can't see how systemic risk regulation would cause bankers to take greater risks. So, I don't see where moral hazard fits in.
Thirdly, how do you take a "moral" position on systemic risk? I don't think this gets you very far.
Finally, the main impact of systemic risk regulation would be to encourage smaller banking/trading institutions. I would think that this is a good thing in itself. And I disagree with James Kwak that "countercyclical measures in a boom dampen economic growth". Surely the opposite is true (in the long run).
So the cost of systemic risk regulation might even be negative.