Through the maths darkly
A recent Boing Boing post on a talk by Mandelbrot reminded me of a worry I have had for a while about probability...
It's like this. The classical, frequentist reading of Kolmogorov's formulation of probability theory is based on the idea of repeated identical experiments: we take many copies of a system, perform a measurement lots of times, and review the distribution of outcomes.
This makes sense for many application areas: the most obvious place it works is statistical mechanics, where we look at a physical system composed of many, many identical units, each taking its properties from the same distribution. But for everything from the mathematics of poker to the behaviour of random biological mutations, the idea of having samples from the same random process makes sense. Even if we only have one experiment, as long as we can imagine conducting further tests on the same system, the set-up remains reasonable.
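To make the repeated-experiment picture concrete, here is a minimal Python sketch (the biased coin and its 0.6 probability are illustrative assumptions, not anything from the post): draw from one fixed distribution again and again and watch the empirical frequencies settle towards the underlying probabilities.

```python
# Sketch of the repeated-experiment picture: sample one fixed distribution
# (a hypothetical biased coin) many times and look at the empirical frequencies.
import random
from collections import Counter

random.seed(0)
p_heads = 0.6  # assumed 'true' probability of heads; illustrative only

for n in (10, 100, 10_000):
    outcomes = ['H' if random.random() < p_heads else 'T' for _ in range(n)]
    freq = Counter(outcomes)
    print(n, {k: round(v / n, 3) for k, v in freq.items()})
```

The frequencies wobble for small n and settle near the assumed probabilities for large n, which is exactly the sense in which repeated sampling gives the distribution meaning.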
One of Mandelbrot's areas of interest is finance, and here, when we are examining the random behaviour of markets, we can think of the underlying process as being locally stable. The FTSE is not the same today as ten years ago, nor does it behave in a similar fashion, but it does seem (mostly) similar enough today to yesterday that (at least for short periods of time) we can talk about the random process generating the FTSE. This still makes sense since today's observation is made on a system that is fairly close to the one used for yesterday's. (Of course whether that process is Lévy, Fréchet or whatever is another matter entirely, and I am assuming there wasn't a crash today or yesterday.)
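As a rough illustration of why the choice of generating process matters, the sketch below compares how often 'large' daily moves occur under a Gaussian model versus a fatter-tailed Student-t model (a simple stand-in for the Lévy-type alternatives Mandelbrot favoured; the parameters are made up for illustration, not calibrated to the FTSE).

```python
# Compare tail behaviour of a Gaussian model and a heavy-tailed Student-t model.
import numpy as np

rng = np.random.default_rng(42)
n_days = 250_000  # number of simulated 'daily' returns

gaussian = rng.normal(0.0, 1.0, n_days)
fat_tailed = rng.standard_t(3, n_days)  # df=3: heavy tails, finite variance

for name, returns in (("gaussian", gaussian), ("student-t(3)", fat_tailed)):
    # fraction of days whose move exceeds four of that series' own standard deviations
    frac = np.mean(np.abs(returns) > 4 * returns.std())
    print(f"{name}: fraction of |moves| beyond 4 sigma = {frac:.2e}")
```

The heavy-tailed series produces far more extreme moves than the Gaussian one, even though both are 'random processes' in the same formal sense.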
The problem comes when we cannot, even in principle, imagine conducting the same experiment more than once. For instance, one 'theory' of jet lag is that the body clock goes 'chaotic'. The idea here is that if I fly from London to New York, rather than my body clock moving smoothly from London time to New York time, it becomes disrupted and only eventually settles at the new time. The word 'chaos' is helpful in that it highlights the disruption (and the need for forcing to speed the return to normality), but it is unhelpful in that it suggests it makes sense to talk about what 'time' my body clock is showing while I am jet lagged. It doesn't, since you cannot copy my body clock and run the same bout of jet lag several times to see how tired I feel in various situations, or when I wake up under various tests. If you cannot sample the process more than once, is this really a situation where the idea of a random variable taking a value is meaningful?
For that matter, one might have a similar philosophical issue with using Poisson (or any other kind of random) processes to model corporate defaults. Not many corporates default more than once, so again, is this a situation where the idea of an underlying random process makes sense? How do we even know that there is a stable generating process if we cannot, even in theory, do more than one experiment? Epistemologically it looks dodgy to me...
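For what it's worth, here is a sketch of the reduced-form picture being questioned: each firm's default time is drawn once from an exponential distribution governed by a hazard rate, i.e. the intensity of a Poisson process. The 2% intensity and five-year horizon are illustrative assumptions, not figures from anywhere in particular.

```python
# Sketch of an intensity-based default model: one exponential default time per firm.
import math
import random

random.seed(1)
hazard_rate = 0.02      # assumed annual default intensity (illustrative)
horizon_years = 5.0
n_firms = 100_000

# each firm's single default time ~ Exponential(hazard_rate); count defaults within the horizon
defaults = sum(random.expovariate(hazard_rate) < horizon_years for _ in range(n_firms))

print(f"simulated 5y default rate:     {defaults / n_firms:.3%}")
print(f"model-implied 5y default rate: {1 - math.exp(-hazard_rate * horizon_years):.3%}")
```

The simulation only 'works' because we can pretend there are 100,000 exchangeable firms; for a single firm that can default at most once, the sampling story behind that exponential draw is exactly what is in doubt.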
Labels: Kolmogorov, Probability Theory, Random Process