>The Science link is basically a retraction from the journalist who wrote the original summary article. She quotes the study authors making excuses about not having enough time to explain the subtleties under press deadlines. I've never read anything quite like it.
I don't agree with that characterization of the Science news article at all. It's a summary of the secondary reaction to the press coverage, but I can't construe it as a retraction of the original news coverage. For example, here's what David Spiegelhalter, whose recent research has focused on accurately communicating risk, says in it:
>In this case, he felt, “the gist of the coverage is very reasonable—most cases of cancer are due to chance.”
I too feel that the coverage was quite accurate, perhaps the most accurate I've seen of a science tidbit in quite some time. If one's entire research program is about minimizing environmental causes of cancer, I can see how it would feel as though one's research were being dismissed entirely and required a vigorous defense. However, these "defenses", in particular Aaron Meyer's, seem to be fact-free. In contrast, the epidemiology world seems to agree almost exactly with the Vogelstein paper's estimate of 65% chance and 35% modifiable causes (though perhaps somewhat by chance, as their measurements are not exactly of the same thing):
>I don't agree with that characterization of the Science news article at all.
Here's the section of the news article that, to me, reads like the journalist walking back the central claim of her original piece. It also seemed that she was attempting to pin the blame for the misunderstanding on Tomasetti (by quoting from her initial interview with him, and by reporting that he had vetted her initial piece):
>...[W]as the “two-thirds” figure actually referring to a fraction of cancer cases? Tomasetti had explained to Science that “if you go to the American Cancer Society website and you check what are the causes of cancer, you will find a list of either inherited or environmental things. We are saying two-thirds is neither of them.” He also confirmed the news story's language describing the study before it was published. In a follow-up interview [...] Tomasetti clarified that the study argued that bad luck explained two-thirds of the variation in cancer rates in different tissues—a subtly different claim.

>Despite the confusion among reporters, Tomasetti did not feel they had been careless[...] And, he believes, he did his best to convey his findings to nonexperts. “If given enough time, or space, I can explain the subtleties of any given scientific result to anyone really,” but there were only so many hours he could spend speaking with reporters on deadline. The material is complicated even for statistical gurus, he believes. He has been busy preparing a technical report with additional details, and Johns Hopkins also sent a follow-up explainer to journalists and posted it online.
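The "subtly different claim" distinction in that passage is easy to miss, so here's a toy calculation that illustrates it. All numbers below are made up for illustration only (they are not from the paper): a high R^2 between stem-cell divisions and cancer rates across tissues is a statement about between-tissue variation, not about what fraction of cases are due to chance.

```python
import math

# Hypothetical tissues: (lifetime stem-cell divisions, lifetime cancer risk).
# These values are invented for illustration, not taken from the study.
tissues = [
    (1e9,  1e-4),
    (1e10, 8e-4),
    (1e11, 5e-3),
    (1e12, 6e-2),
]

# Log-log scales, as in the paper's tissue-level correlation.
xs = [math.log10(d) for d, _ in tissues]
ys = [math.log10(r) for _, r in tissues]

n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
vx = sum((x - mx) ** 2 for x in xs)
vy = sum((y - my) ** 2 for y in ys)
r2 = cov * cov / (vx * vy)  # squared Pearson correlation

print(f"R^2 across tissues (log-log): {r2:.2f}")
# A high R^2 here says divisions explain most of the spread BETWEEN tissues.
# By itself it says nothing about what fraction of cases WITHIN a tissue
# are due to chance rather than modifiable causes.
```

That is, the headline number quantifies explained variance across tissue types, and translating it into "two-thirds of cancer cases" requires the extra step the follow-up coverage argued about.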
I don't agree at all with your contention that the criticisms of the original paper are 'fact-free' and based on researchers feeling threatened. There are lots of problems with this paper; the fact that their conclusions may agree with data from other fields (or may even be right) doesn't change that.
http://www.thelancet.com/journals/lancet/article/PIIS0140-67...
If there's broadly accepted epidemiological data that contradicts the molecular approach from Vogelstein, I would love to see it.