That's the same kind of reasoning that sent a mom whose babies died to prison for decades. (I can't remember her name so can't link, sorry - it's a well-known story of a mother getting convicted of double child murder because a doctor did some shitty statistics in court.)
Sure, the chance is high. But if there is no conclusive evidence, then there's no conclusive evidence.
That might be enough for angry Twitter mobs, but I sure hope that it's not enough for court.
> Clark's first son died in December 1996 within a few weeks of his birth. Her second son died in similar circumstances in January 1998. A month later, Clark was arrested and tried for both deaths. The defence argued that the children had died of sudden infant death syndrome (SIDS). The prosecution case relied on flawed statistical evidence presented by paediatrician Professor Sir Roy Meadow, who testified that the chance of two children from an affluent family suffering SIDS was 1 in 73 million. He had arrived at this figure erroneously by squaring 1 in 8500, as being the likelihood of SIDS in similar circumstances. The Royal Statistical Society later issued a statement arguing that there was no statistical basis for Meadow's claim, and expressed concern at the "misuse of statistics in the courts".
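The flawed step described in that quote is easy to reproduce. This is just the arithmetic from the testimony, not a claim about the real probabilities: Meadow treated the two deaths as independent events and squared the single-case odds.

```python
# The erroneous calculation from the Meadow testimony quoted above:
# treat two SIDS deaths in one family as independent events and
# square the single-case odds of 1 in 8500.
p_one_sids = 1 / 8500                 # claimed odds of SIDS in similar circumstances
p_two_independent = p_one_sids ** 2   # the "1 in 73 million" figure

print(f"1 in {1 / p_two_independent:,.0f}")  # 1 in 72,250,000

# The Royal Statistical Society's objection: SIDS cases in the same
# family are not independent (shared genetics, shared environment),
# so squaring badly understates the chance of two deaths.
```

Note that squaring is only valid when the two events are independent, which is exactly the assumption the Royal Statistical Society rejected.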
No, that's not the argument. The argument is that something that's highly improbable for this person to do is probable for someone to do.
Consider 1,000,000 people each flipping a coin 20 times. You'd expect about one of them to flip 20 heads in a row, since the odds are 1 in 2^20, roughly 1 in a million. But if you took the person who did it to court, you could argue: it's highly unlikely this person could flip 20 heads in a row. They must be cheating!
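The coin-flip argument above is easy to check numerically. A quick sketch, both computing the expectation and simulating it:

```python
import random

# With enough people, a 1-in-a-million streak is expected
# to happen to someone.
random.seed(0)

n_people = 1_000_000
n_flips = 20

p_streak = 0.5 ** n_flips          # chance one person flips 20 heads
expected = n_people * p_streak     # expected number of "suspects"

streaks = sum(
    all(random.random() < 0.5 for _ in range(n_flips))
    for _ in range(n_people)
)

print(f"P(20 heads) = 1 in {1 / p_streak:,.0f}")
print(f"Expected lucky flippers: {expected:.2f}, simulated: {streaks}")
```

So convicting the streak-holder on the streak alone convicts whoever happened to win the lottery of large numbers.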
I don't know enough about this case to say if it's correct, but OP is saying more than "dur people use bad statistics". They're saying it's the same fallacy. In the case of the woman with the babies, SIDS is rare, but in a country with millions of mothers it can happen to the same one twice. In this case, playing perfect hands is rare, but in a country of millions of players, you'd expect it to happen to someone, somewhere, maybe in a podunk town with small-stakes games.
Of course, the fallacy doesn't make forward predictions. If he continues to win improbably after he's been identified, then luck-of-the-crowd can no longer explain it.
It's not a fallacy; it's that using a purely probabilistic approach here is not warranted. It requires some Bayesian reasoning.
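To make the Bayesian point concrete, here is a sketch with entirely made-up numbers (the prior and both likelihoods are invented for illustration, not drawn from the case): what matters is not just that the evidence is improbable for an honest player, but how that compares to its probability under cheating.

```python
# Illustrative only: invented numbers showing why a purely
# probabilistic "somebody somewhere" argument isn't enough.
prior_cheat = 0.001              # hypothetical prior that a given player cheats
p_evidence_given_cheat = 0.5     # a cheater would plausibly produce these results
p_evidence_given_honest = 1e-6   # an honest player almost never would

# Total probability of seeing this evidence at all
p_evidence = (p_evidence_given_cheat * prior_cheat
              + p_evidence_given_honest * (1 - prior_cheat))

# Bayes' rule: posterior probability of cheating given the evidence
posterior_cheat = p_evidence_given_cheat * prior_cheat / p_evidence
print(f"P(cheating | evidence) = {posterior_cheat:.4f}")
```

Even with a tiny prior, the likelihood ratio between the two hypotheses dominates; that ratio is what the pure "rare things happen to someone" framing ignores.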
The main difference is in the SIDS case, you have a sample size of 2.
In the Postle case, we have hundreds of hands he is involved in, across dozens of sessions, with multiple instances where he makes incredibly good reads contrary to optimal theoretical play. (The article also notably does not detail any incorrect reads.)
We also have a good understanding of how playing a lot of hands (around 50%) should affect his variance, yet he seems to come out of almost every session a winner. He should be coming away a loser a lot more often.
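A toy Monte Carlo makes the variance point vivid. Every parameter here (hands per session, win rate, per-hand standard deviation) is invented for illustration; the point is only that even a genuinely winning player with a loose, high-variance style should lose a large share of sessions.

```python
import random

# Toy model: a winning but high-variance player, with all
# parameters invented for illustration.
random.seed(1)

hands_per_session = 300
winrate_bb = 0.05    # hypothetical mean profit per hand, in big blinds
stdev_bb = 10.0      # hypothetical per-hand standard deviation

sessions = 1000
losing = sum(
    sum(random.gauss(winrate_bb, stdev_bb) for _ in range(hands_per_session)) < 0
    for _ in range(sessions)
)
print(f"Losing sessions: {losing / sessions:.0%}")
```

Under these assumptions nearly half the sessions end in a loss, which is why "cashes almost every time" is itself statistical evidence, independent of any single hand.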
The idea in popular culture that great poker players can "read" other players like a book is overblown, and a bit obsolete. Modern live players have learned to conceal their intentions much better than in the past. Yes, some players may still have slight tells, but unless his opponents' eyes are bulging out of their heads like a cartoon wolf when they get a flush, that can only account for a slight edge.
So sure, if you take a purely probabilistic approach and say "well, somebody somewhere could go on that sort of run", then it looks like a fallacy. But if you take the other data into account: that he doesn't play in other cash games, that he cashes almost every time despite a high-variance style, that he makes theoretically unsound plays at critical times and they always turn out to be correct, that other legendary players do not have these sorts of results...then it seems to me there is a very high likelihood that he is cheating.
No, it merely means that there is no proof, there is just a very high likelihood. That may be good enough for you but in the case of a crime I love me some proof before getting to a conviction.
Statistical evidence can be used as proof in court.
Even when it's not given in mathematical form, common-sense reasoning uses common-sense statistics every day. Otherwise anyone could always claim a freak accident.