I am currently enjoying Math on Trial by Schneps and Colmez. The book deals with abuses of mathematics in courtrooms. I am not sure whether lawyers really are as dumb as described in the book. It could be that the book somewhat exaggerates the role of the mathematical arguments in the trials. I am also not always fond of the statistical explanations given in the text.

The first case is about sudden infant death syndrome (SIDS). This sad event struck one family twice, in spite of close surveillance after the death of the first child. The case is brilliantly described in the book. In court, the argument was made that, by observation, cases of SIDS occur in the social class of the parents with a likelihood of about 1 in 8000, so the probability that it happens twice should be the square, 1/8000^2, which is too small to be explained by randomness. In fact, the mother was convicted of murder. I cannot believe that the jury followed this fishy argument, but read for yourself.

For me, this is an interesting argument against statistics without a precise experimental plan. Statistics tries to predict the outcome of experiments in the long run. That is the meaning of probability, and nothing else. In our case the experiment is to select one child of the social class at random. By comparison with previous data, we predict to find 1 case of SIDS in 8000 picks. We say that the probability is 1/8000. If we pick another child at random, we can easily see that we can expect both to have SIDS once in 8000^2 cases. This is a matter of counting all possible outcomes. So the argument would be correct if we picked twice from all kids.
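The "picking twice from all kids" experiment can be sketched in a few lines of Python. To keep the simulation fast, I use a made-up rate of 1 in 80 instead of 1 in 8000; under two truly independent picks, both children are affected about once in 80^2 = 6400 pairs.

```python
import random

# Hypothetical illustration: with a made-up rate of 1 in 80,
# two *independent* random picks should both be affected
# about once in 80^2 = 6400 pairs.
p = 1 / 80
trials = 1_000_000
random.seed(1)

both = sum(
    (random.random() < p) and (random.random() < p)
    for _ in range(trials)
)
print(both / trials)  # close to 1/6400
```

The squaring is nothing but counting: of the 6400 equally likely pairs of outcomes, exactly one has both picks affected.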

However, taking the second child from the same family is not picking at random. In common speech, the second pick depends on the first one. There is a problem here due to the difference between the colloquial and the mathematical definitions of independence. Mathematically, two events are independent if the probability of both occurring is the product of the two probabilities. On the basis of the mathematical definition, the problem as described in the book is a tautology and does not make sense. Looked at closely, the mathematical definition of independence seems worthless if we want to check real experiments.
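The mathematical definition can be checked by exact counting on a toy example. Here is a sketch with two fair dice (36 equally likely outcomes): the events "first die shows 6" and "second die is even" satisfy the product rule, and that equation *is* their independence.

```python
from fractions import Fraction

# Mathematical independence is *defined* by the product rule:
# A and B are independent iff P(A and B) = P(A) * P(B).
# Check it exactly on two fair dice (36 equally likely outcomes).
outcomes = [(i, j) for i in range(1, 7) for j in range(1, 7)]
P = lambda event: Fraction(sum(event(o) for o in outcomes), len(outcomes))

A = lambda o: o[0] == 6          # first die shows 6:   P = 1/6
B = lambda o: o[1] % 2 == 0      # second die is even:  P = 1/2

print(P(lambda o: A(o) and B(o)) == P(A) * P(B))  # True
```

Note that the check says nothing about *why* the dice do not influence each other; it only verifies the multiplication, which is the point of the tautology complaint above.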

Of course, the correct way of analyzing the case in the book is that in some families cases of SIDS will be quite frequent. If it happens to 50% of their kids, it will happen twice in 25% of such families with two kids. We can discuss whether surveillance of the second kid helps or harms; that can be answered only by studies.
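A quick simulation shows how badly the squaring argument fails in a heterogeneous population. The numbers below are made up purely for illustration: a tiny fraction of families carries a high familial risk, everyone else has the low background risk. Double cases then occur orders of magnitude more often than 1/8000^2 predicts.

```python
import random

random.seed(2)

# Made-up numbers for illustration only:
HIGH_RISK_SHARE = 0.001   # 1 in 1000 families carries a familial risk
HIGH_RISK_P = 0.5         # per-child risk in those families
LOW_RISK_P = 1 / 8000     # per-child risk everywhere else

families = 1_000_000
double = 0
for _ in range(families):
    p = HIGH_RISK_P if random.random() < HIGH_RISK_SHARE else LOW_RISK_P
    if random.random() < p and random.random() < p:
        double += 1

naive = families / 8000**2  # what the squaring argument predicts (~0.016)
print(double, naive)        # hundreds of double cases vs. ~0.016
```

Almost all double cases come from the rare high-risk families, which is exactly the 25%-of-such-families effect described above.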

Now to mathematical independence. We can reduce our thinking to the case of n equal events: events where none is preferred over another, by assumption or on grounds of logic. In the long run, we expect each to come up with the same frequency. If we repeat the experiment twice we get n^2 possible outcomes, which are again equal. By simple counting, we get the expected frequencies, also known as probabilities. Note that the implied logic is that repeating twice leads to n^2 equal outcomes. So the multiplication rule is where independence comes from. This is a subtle argument.
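The counting argument can be made concrete. A sketch for n = 6: repeating the experiment twice gives exactly n^2 equally likely pairs, and every probability is a ratio of counts.

```python
from itertools import product
from fractions import Fraction

# Repeating an experiment with n equally likely outcomes twice
# yields n*n equally likely pairs; probabilities are just counts.
n = 6
single = range(1, n + 1)
pairs = list(product(single, repeat=2))
assert len(pairs) == n * n  # the assumed multiplication of outcomes

# Example: probability that both repetitions give the same outcome.
same = sum(1 for a, b in pairs if a == b)
print(Fraction(same, len(pairs)))  # 6/36 = 1/6
```

The subtle step is the `assert` line: that the repeated experiment has n*n *equal* outcomes is an assumption about the world, not a theorem; the multiplication rule follows from it.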

Unequal events with given probabilities can be approximated by equal events as closely as desired, and continuous events likewise. E.g., a computer will produce a random number obeying a distribution F by selecting one of n evenly distributed numbers in [0,1] at "random" and applying the inverse of F to this number. Usually n is very large, but of course it is not infinite. So everything can be approximated by a finite experiment with n equal events.
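This recipe is known as inverse transform sampling. A sketch for the exponential distribution, where F(x) = 1 - exp(-lam*x) and the inverse is F^{-1}(u) = -ln(1-u)/lam; the uniform draw really is one of finitely many machine floats:

```python
import math
import random

# Inverse transform sampling: pick u uniformly in [0, 1) and apply
# the inverse of the target CDF F. For the exponential distribution
# with rate lam: F(x) = 1 - exp(-lam*x), so F^{-1}(u) = -ln(1-u)/lam.
def sample_exponential(lam, rng):
    u = rng.random()  # one of finitely many machine floats in [0, 1)
    return -math.log(1 - u) / lam

rng = random.Random(3)
samples = [sample_exponential(2.0, rng) for _ in range(100_000)]
print(sum(samples) / len(samples))  # close to the mean 1/lam = 0.5
```

The finiteness of the float grid is usually invisible, but it is there: the computer experiment is literally a choice among n equal events with n very large.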

In fact, the Lebesgue measure can be explained by simple counting if we use non-standard methods. But that would carry us too far.