--- Or, How to Get Your Research Published
The front-page, right-upper-corner headline in my local newspaper last Saturday (January 2, 2015) was "Cancer often just random bad luck." I immediately figured this statement was wrong. Many studies have shown that behavior (e.g., smoking), environment (e.g., sun exposure), and genetics (e.g., kidney cancer and some breast cancers) can all affect a person's chances of getting cancer. So I assumed this was some ill-informed science writer who had gotten the details wrong. I was wrong about that, though.
Apparent digression which however will turn out to be important to the rest of this post:
Last semester I sat in on a graduate-level entomology seminar which focused not only on biological methods of controlling pests but also on what makes research papers more attractive to, and therefore more likely to be published in, major scholarly journals.
With each of the four articles we discussed each session, we talked about the issues the authors addressed (such as the harmful effects of many pesticides, the effectiveness of biological controls, and the possible good or bad effects of combining the two methods), as well as the experimental design, conclusions, flaws in the study, and how similar research might be done in the future.
Because it is possible to do some ground-breaking research in the field of farming (puns intended) without anyone noticing if you don't get your work published; and you won't get your work published unless it's obvious early in the editorial/publishing process that your article will make the journal look good.
I bring this up because it turns out that the science reporter in this case did not get it wrong. Instead, the authors of the study made a very effective pitch to Science, the scientific journal in which it was published.
Not only that, but the publicity people at Johns Hopkins, where authors Cristian Tomasetti and Bert Vogelstein performed their research, made an even more effective pitch to the public via the media.
Because here's the thing: even though the study does not report ANY NEW RESULTS whatsoever, many science and medical writers took those headlines about "random bad luck" directly from the subject line of the Johns Hopkins news release, "Bad Luck of Random Mutations Plays Predominant Role in Cancer, Study Shows." Here's part of that news release:
"Some tissue types give rise to human cancers millions of times more often than other tissue types. Although this has been recognized for more than a century, it has never been explained. Here, we show that the lifetime risk of cancers of many different types is strongly correlated (0.81) with the total number of divisions of the normal self-renewing cells maintaining that tissue’s homeostasis. These results suggest that only a third of the variation in cancer risk among tissues is attributable to environmental factors or inherited predispositions. The majority is due to “bad luck,” that is, random mutations arising during DNA replication in normal, noncancerous stem cells. This is important not only for understanding the disease but also for designing strategies to limit the mortality it causes."

In other words, "random mutations" during DNA replication cause cancer in cells that were normal before replication, and this happens in more than two-thirds of the cancers this research team studied. Wait! We already knew that, didn't we? Yes, we did!
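Where do "only a third" and "two-thirds" come from? Squaring the reported correlation gives the fraction of variation statistically "explained" by stem-cell divisions. A quick arithmetic check (my own illustration; the paper's correlation is computed on log-scale figures, which this sketch glosses over):

```python
# The abstract reports a correlation of 0.81 between lifetime cancer risk
# and total stem-cell divisions across tissue types.
r = 0.81

# Squaring the correlation gives the fraction of variation "explained":
r_squared = r ** 2
print(f"explained:   {r_squared:.3f}")      # about 0.656, i.e. roughly two-thirds

# ...which leaves about one-third attributed to everything else
# (environment, inherited predisposition):
print(f"unexplained: {1 - r_squared:.3f}")  # about 0.344
```

That is the entire basis for the headline numbers: two-thirds of the *variation* tracks cell divisions, leaving about a third for everything else.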
[Image: From an Evolution 101 class at U.C. Berkeley, but we've all seen illustrations like this in high-school textbooks, haven't we?]
From the body of the article, which I cannot access(*) but which science writer David Gorski accessed and quoted in his analysis of the article:
"In formal terms, our analyses show only that there is some stochastic factor related to stem cell division that seems to play a major role in cancer risk. This situation is analogous to that of the classic studies of Nordling and of Armitage and Doll (10, 29). These investigators showed that the relationship between age and the incidence of cancer was exponential, suggesting that many cellular changes, or stages, were required for carcinogenesis. On the basis of research since that time, these events are now interpreted as somatic mutations. Similarly, we interpret the stochastic factor underlying the importance of stem cell divisions to be somatic mutations. This interpretation is buttressed by the large number of somatic mutations known to exist in cancer cells (14–16, 30)."

Gorski comments, "In other words, even if taken at face value as reported in the media, Tomasetti and Vogelstein haven’t really demonstrated anything new. We’ve known for a long time that there is a strong stochastic (probabilistic) component to cancer development" (my emphasis).
Another take comes from GrrlScientist** at The Guardian, who titles the piece "Bad luck, bad journalism, and cancer."
They write, "These data suggest there is a relationship between risk of cancer and number of cell divisions. But it says nothing about the proportion of cancers due to cell division."
They continue, "So where did this two-thirds ratio come from? It is the proportion of variation in the log of the cancer risk that can be explained by cell divisions. But this variation could be the same regardless of whether the baseline risk is high or low. For example, the depth of the water in the Marianas Trench goes up and down with the position of the moon, so this explains a bit of the variation in its depth. But that reveals bugger all about the absolute depth of the trench" (again, the emphasis is mine).
And, they add, again using figures from the paper:
"We can see this visually below: in the two data sets, x explains just under 80 percent of the variation. In the black points, x explains more about the absolute risk rates (about 75 percent). But it explains less in the red points (which is about 30 percent, as it happens) because there is much more risk (i.e. more cases of cancer) when x is zero (i.e. it has no effect). So adding some cases to that baseline only increases the total risk by a small percentage."
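The distinction being drawn here --- variance explained versus share of absolute risk --- can be made concrete with a toy calculation. These are my own made-up numbers, not figures from the paper:

```python
# Two toy data sets with the same linear dependence on x (same slope, so
# the same variance explained), but very different baseline risk at x = 0.
xs = list(range(1, 11))

slope = 2.0
low_baseline  = [1.0  + slope * x for x in xs]   # like the "black points"
high_baseline = [50.0 + slope * x for x in xs]   # like the "red points"

def fraction_of_risk_from_x(risks, baseline):
    """Share of the total (absolute) risk contributed by the x-dependent part."""
    total = sum(risks)
    from_x = sum(r - baseline for r in risks)
    return from_x / total

# Both data sets correlate perfectly with x (r^2 = 1.0 in each case),
# yet x accounts for wildly different shares of the absolute risk:
print(fraction_of_risk_from_x(low_baseline, 1.0))    # about 0.917
print(fraction_of_risk_from_x(high_baseline, 50.0))  # about 0.180
```

Here the correlation with x is identical in both data sets, but x accounts for over 90 percent of the absolute risk in one and under 20 percent in the other --- exactly why "explains the variation" cannot be read as "causes that proportion of cancers."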
They point out another flaw in the title/headline for the paper:
"How could we decide how much of the cancer risk is due to bad luck? Well, first, we have to decide what is bad luck, which is an entirely different argument. But after we’ve done that, the only real way to suss out how much of the cancer risk is due to bad luck is to either estimate the rates at which people get cancer through bad luck, or (perhaps easier) to estimate the non-bad luck rates" (yes, again, my emphasis).
Another analyst, Andrew Maynard, concluded that it wasn't the science writers but the authors themselves, and the Johns Hopkins publicists, who reported the story wrong:
"In the case of this paper, it’s hard to see clear evidence of bad reporting. There is a lack of balance and contextualization though that, it seems, has its roots in the original paper" (my emphasis).

Maynard goes on to say that he's not criticizing the paper, but asks, "...how can we encourage exploratory risk research without it prematurely impacting consumer and regulatory decisions?”
And I would ask this question, which I think is more important: How can we encourage researchers and research journals to be more honest, even if less headline-producing, in their reporting of research results?(***)
(*I will be writing more about this --- accessibility of articles from science, medical, and other research journals --- in the future.)
(***I will be writing more about this, too --- the reliability, or unreliability, of published research results --- in the future.)