Science, health, medical news freaking you out? Do the Double X Double-Take first

Handy short-form version.

Have you seen the headlines? Skip them
You’ve probably seen a lot of headlines lately about autism and various behaviors, ways of being, or “toxins” that, the headlines tell you, are “linked” to it. Maybe you’re considering having a child and are mentally tallying up the various risk factors you have as a parent. Perhaps you have a child with autism and are now looking back, loaded with guilt that you ate high-fructose corn syrup or were overweight or too old or too near a freeway or not something enough that led to your child’s autism. Maybe you’re an autistic adult who’s getting a little tired of reading in these stories about how you don’t exist or how using these “risk factors” might help the world reduce the number of people who are like you.

Here’s the bottom line: No one knows precisely what causes the extremely diverse developmental difference we call autism. Research from around the world suggests a strong genetic component [PDF]. What headlines in the United States call an “epidemic” is, in all likelihood, largely attributable to expanded diagnostic inclusion, better identification, and, ironically, greater awareness of autism. In countries that have been able to assess overall population prevalence, such as the UK, rates seem to have held steady at about 1% for decades, which matches the rate currently identified among 8-year-olds in the United States.

What anyone needs when it comes to headlines honking about a “link” to a specific condition is a mental checklist of what the article–and whatever research underlies it–is really saying. Previously, we brought you Real vs Fake Science: How to tell them apart. Now we bring you our Double X Double-Take checklist. Use it when you read any story about scientific research and human health, medicine, biology, or genetics.

The Double X Double-Take: What to do when reading science in the news
1. Skip the headline. Headlines are often misleading, at best, and can be wildly inaccurate. Forget about the headline. Pretend you never even saw the headline.

2. What is the basis of the article? Science news originates from several places. Often it’s a scientific paper. These papers come in several varieties. The ones that report a real study–lots of people or mice or flies, lots of data, lots of analysis, a hypothesis tested, statistics done–are considered “original research.” Those papers are the only ones that report genuinely original scientific studies. Words to watch for–terms that suggest no original research at all–are “review,” “editorial,” “perspective,” “commentary,” “case study” (these typically involve one or only a handful of cases, so no statistical analysis), and “meta-analysis.” None of these represents original findings from a scientific study. All but the last two are opinion. Also watch for “scientific meeting” and “conference.” Those mean the information was presented at a scientific meeting without peer review. It hasn’t been vetted in any way.

3. Look at the words in the article. If what you’re reading contains words like “link,” “association,” “correlation,” or “risk,” then what the article is describing is a mathematical association between one thing (e.g., autism) and another (e.g., eating ice cream). It is likely not describing a biological connection between the two. In fact, popular articles very rarely cover scientific research that homes in on the biological connections. Why? Because those findings usually come in little bits and pieces that over time–often quite a bit of time–build into a larger picture showing a biological pathway by which Variable 1 leads to Outcome A. That’s not generally a process that’s particularly newsworthy, and the pathways can be both too specific and extremely confusing.
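To see how easily such a “link” can appear, here’s a minimal sketch in Python (entirely made-up numbers, not data from any study): two outcomes that have nothing to do with each other correlate strongly simply because both track a third variable, the weather.

```python
# Made-up illustration: ice cream sales and sunburns are both driven by
# temperature, so they correlate strongly despite no causal connection.
import numpy as np

rng = np.random.default_rng(42)

temperature = rng.uniform(0, 35, size=1_000)                  # shared driver
ice_cream = 2.0 * temperature + rng.normal(0, 5, size=1_000)  # outcome 1
sunburns = 0.5 * temperature + rng.normal(0, 3, size=1_000)   # outcome 2

r = np.corrcoef(ice_cream, sunburns)[0, 1]
print(f"ice cream vs. sunburns: r = {r:.2f}")  # large r, zero causation
```

The correlation is mathematically real; it just tells you nothing about whether one thing causes the other.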

4. Look at the original source of the information. Google is your friend. Is the original source a scientific journal? At the very least, especially for original research, the abstract will be freely available. A news story based on a journal paper should provide a link to that abstract, but many, many news outlets do not do this–a huge disservice to the interested, engaged reader. At any rate, the article probably includes the name of a paper author and the journal of publication, and a quick Google search on both terms along with the subject (e.g., autism) will often find you the paper. If all you find is a news release about the paper–at outlets like ScienceDaily or PhysOrg–you are reading marketing materials. Period. And if there is no mention of publication in a journal, be very, very cautious in your interpretation of what’s being reported.

5. Remember that every single person involved in what you’re reading has a dog in the hunt. The news outlet wants clicks. For that reason, the reporter needs clicks. The researchers probably want attention to their research. The institutions where the researchers do their research want attention, prestige, and money. A website may be trying to scare you into buying what it’s selling. Some people are not above using “sexy” science topics to achieve all of the above. Caveat lector.

6. Ask a scientist. Twitter abounds with scientists and sciencey types who may be able to evaluate an article for you. I receive daily requests via email, Facebook, and Twitter for exactly that assistance, and I’m glad to provide it. Seriously, ask a scientist. You’ll find it hard to get us to shut up. We do science because we really, really like it. It sure ain’t for the money. [Edited to add: But see also an important caveat and an important suggestion from Maggie Koerth-Baker over at Boing Boing and, as David Bradley has noted over at ScienceBase, always remember #5 on this list when applying #6.] 

——————————————————————————

Case Study
Lately, everyone seems to be using “autism” as a way to draw eyeballs to their work. Below, I’m giving my own case study of exactly that phenomenon as an example of how to apply this checklist.

1. Headline: “Ten chemicals most likely to cause autism and learning disabilities” and “Could autism be caused by one of these 10 chemicals?” Double X Double-Take 1: Skip the headline. Check. Especially advisable as there is not one iota of information about “cause” involved here.

2. What is the basis of the article? Editorial. Conference. In other words, those 10 chemicals aren’t something researchers identified in careful studies as having a link to autism but instead are a list of suspects the editorial writers derived, a list that they’d developed two years ago at the conference mentioned.

3. Look at the words in the articles. Suspected. Suggesting a link. In other words, what you’re reading below those headlines does not involve studies linking anything to autism. Instead, it’s based on an editorial listing 10 compounds [PDF] that the editorial authors suspect might have something to do with autism (NB: Both linked stories completely gloss over the fact that most experts attribute the rise in autism diagnoses to changing and expanded diagnostic criteria, a shift in diagnosis from other categories to autism, and greater recognition and awareness–i.e., not to genetic changes or environmental factors. The editorial does the same). The authors do not provide citations for studies that link each chemical cited to autism itself, and the editorial itself is not focused on autism, per se, but on “neurodevelopmental” derailments in general.

4. Look at the original source of information. The source of the articles is an editorial, as noted. But one of these articles also provides a link to an actual research paper. The paper doesn’t even address any of the “top 10” chemicals listed but instead is about cigarette smoking. News stories about this study describe it as linking smoking during pregnancy and autism. Yet the study abstract states that they did not identify a link, saying, “We found a null association between maternal smoking in pregnancy and ASDs and the possibility of an association with a higher-functioning ASD subgroup was suggested.” In other words: no link between smoking and autism. But the headlines and the way the articles are written would lead you to believe otherwise.

5. Remember that every single person involved has a dog in this hunt. Read with a critical eye. Ask yourself: What are people saying, and what real support exists for their assertions? Who stands to gain, and in what way, from having this information publicized? Think about the current culture–does the article or the research drag in “hot” topics (autism, obesity, fats, high-fructose corn syrup, “toxins,” Kim Kardashian) without any real basis for doing so?

6. Ask a scientist. Why, yes, I am a scientist, so I’ll respond. My field of research for 10 years happens to have been endocrine-disrupting compounds. I’ve seen literally one drop of a compound dissolved in a trillion drops of solvent shift development of a turtle from male to female. I’ve seen the negative embryonic effects of pesticides and an over-the-counter antihistamine on penile development in mice. I know well the literature that runs to the thousands of pages indicating that we’ve got a lot of chemicals around us and in us that can have profound influences during sensitive periods of development, depending on timing, dose, species, and what other compounds may be involved. Endocrine disruptors or “toxins” are a complex group with complex interactions and effects and can’t be treated as a monolith any more than autism should be.

What I also know is that synthetic endocrine disruptors have been around for more than a century and natural ones for far, far longer. Do I think that the “top 10” chemicals require closer investigation and regulation? Yes. But not because I think they’re causative in some autism “epidemic.” We’ve got sufficiently compelling evidence of their harm already without trying to use “autism” as a marketing tool to draw attention to them. Just a couple of examples: If coal-burning pollution (e.g., mercury) were causative in autism, I’d expect some evidence of high rates in, say, Victorian London, where the average household burned 11 tons of coal a year. If modern lead exposures were causative, I’d expect records from notoriously lead-burdened ancient Rome to describe the autism epidemic that surely overtook it.

Bottom line: We’ve got plenty of reasons for concern about the developmental effects of the compounds on this list. But we’ve got very limited reasons to make autism a focal point for testing them. Using the Double X Double-Take checklist helps demonstrate that.

By Emily Willingham, DXS managing editor 

28 thoughts on “Science, health, medical news freaking you out? Do the Double X Double-Take first”

  1. I’ve been getting some comments regarding the observation in this article that “What headlines in the United States call an (autism) ‘epidemic’ is, in all likelihood, largely attributable to expanded diagnostic inclusion, better identification, and, ironically, greater awareness of autism.” This statement is not an unequivocal exclusion of other possibilities, but it is based on scientific findings. The following sources provide more information for anyone interested:

    First, 20 years ago, monitoring and diagnosis for autism were very, very different from what they are today. Increased diagnosis is, indeed, a very logical explanation for identifying rates today that match those in countries where data are far more carefully collected.

    Here are some links to studies regarding better diagnosis:
    About the UK study here: http://arstechnica.com/science/news/2011/05/autism-epidemic-more-likely-were-just-better-at-diagnosis.ars

    The CDC summarizes that 1% rates have been identified elsewhere, too: http://www.cdc.gov/ncbddd/autism/data.html

    Example of decrease in other diagnostic categories concordant with increase in autism dx: http://www.springerlink.com/content/yf24aj15k9b7kcp1/

    “Health officials attribute the increase largely to better recognition of cases, through wide screening and better diagnosis.”
    http://healthland.time.com/2012/03/29/autism-rates-up-screening-better-diagnosis-cited/#ixzz1tLpkVYry

    From Thomas Insel, director of the National Institute of Mental Health: “Total population epidemiological studies suggest much or all of the increase is due to better and wider detection.”

    Study after study supports these explanations for the US data, yet the increases are often reported as unexplained.

  2. As I became more interested in science reporting and started paying attention to the process, I also learned that headlines at some outlets are not written by the author of the piece. They can be written later by people who don’t understand the full details and who need quick, attention-grabbing, short fodder.

    A blogger might have control over their headlines, but many writers don’t. And I’m trying not to hold them responsible for outrageous headlines now.

  3. @Mary Thanks for the insightful comment. Indeed, writers very often aren’t responsible for headlines; I have had uncomfortable experiences with that, too. Another good reason just to skip the headline.

  4. I love how you’ve approached this as an opportunity to provide some helpful guidelines for assessing news stories. After years of working in science, I’ve only recently really started paying attention to science and health news, and it is still shocking to me how much of a disconnect there is between science and the way it is reported. I’ve been thinking particularly about how much science news is targeted towards parents, I guess playing on all our natural fears and insecurities about raising kids. We want to do the best we can for our kids, dammit, and so we make a perfect audience for these stories. And yet, so few parents have the background to sort through the hype and find the science, and it isn’t their fault! Your tips are great for bridging that gap. We really need to be teaching these same skills in high school biology. Thank you for your awesomeness, Emily and DoubleXSci!

  5. Thank you, Alice. I know that the headlines and stories can lead to needless worry and anxiety because worried and anxious people send me stories like this to ask about validity, how much they should be worried, etc. Yes, most of us are scared to death at our core about bollixing something up as parents, and these kinds of stories really do feed that. Agreed about teaching these skills–I think the first skill we should teach ANY one of ANY age is to ask: “Really? What’s your source on that?” and never stop.

  6. Excellent list. I’m going to use it for thinking about creationism, since you’ve produced a good one for the anti-vaccine lunacy. I’m also big on the impact factor of journals–Nature is a more important source than, say, the Journal of How Vaccines Kill Everyone. (Oops, I’m guilty of a false dichotomy.) Of course, I think the Lancet is a high-impact journal, and see what that got us with Andy Wakefield and his fraud.

    Anyways, thanks for the list. I’m appropriating it for my blog (though I will give you due credit). Maybe if we find just one person to employ it when they see something on the internet, we will have converted one person from using homeopathy to cure rectal cancer.

    • I wish people would actually read Wakefield et al (1998; retracted) rather than assuming that news stories about it are an accurate summary.

      Step #4, right? It’s not hard to find.

      On reading, one finds that they do not make a link between autism and MMR vaccination. They quote others who suggest such a link, of which 8 are anecdotal (parents of subjects) and 2 are other papers (citations 16 and 17), but they never make any link themselves. They also only have 12 subjects, total, which is a preliminary study at best…

      Of course, “scientists use the words ‘autism’ and ‘vaccine’ in the same sentence” doesn’t really make a good headline, no matter how accurately it might describe the situation.

  7. Hi, Michael– You are welcome to use it, of course, with a link back and credit. :)

    I do think lists like this one and the Real vs Fake Science can be applicable for a wide variety of situations, as their basis is really just using a critical eye.

    Impact factors (ratings of a journal’s impact in terms of how much others cite the papers it publishes, etc.) make me edgy because niche journals for subspecialties can be quite good places for research, but they’ll never have huge impact factors simply because they are niche.

  8. Thanks, Emily! Great list! It’s only going to get more difficult to judge studies in the media, if only because of the sheer number of media sources and the increasing complexity of science. Sometimes I think researchers could use a review of these basics, too. It’s not only the media at fault. When a researcher has built a career on an endocrine disruptor, breastmilk, or whatever, they may not be motivated to find, let alone publish, conflicting or null results. We know the journals don’t publish a lot of null results.

    • Thanks, Polly. Whether you’re motivated to publish null results or not, it can be difficult to do so (as you note), which is unfortunate. As for conflicting results, my experience has been that it’s kind of fun to publish those, especially that free-wheeling part in the discussion where one can speculate wildly about why they might conflict. “The reasons for this discrepancy are unclear, but we suggest…” ;)

  9. Maggie Koerth-Baker has written about this list at Boing Boing (http://boingboing.net/2012/04/30/how-to-read-science-news.html) and added an *excellent* tip and caveat about asking a scientist. About asking a scientist, she writes, “It’s not something most people can easily do. Twitter helps, but only if you’re already tied into social networks of scientists and science writers. Again, most people aren’t. If you want to connect to these networks, I’d recommend starting out by picking up a copy of The Open Laboratory, an annual anthology of the best science writing on the web. Use that to find scientists who write for the public and whose voice you enjoy.” A great suggestion. More on Open Lab is available here: http://blogs.scientificamerican.com/cocktail-party-physics/2011/12/06/open-lab-2011-and-the-finalists-are/

    Koerth-Baker also notes that not just any scientist will do, writing, “An expert in one subject is not the same thing as an expert. It doesn’t make sense to ask a mechanical engineer for their opinion on cancer treatments. It doesn’t make sense to ask an oncologist about building better engines.”

    Many thanks to Maggie Koerth-Baker for highlighting the post and adding this suggestion and caveat.

  10. Good tips here. Solid good sense. The main thing I do differently is, I use the headlines as a BS indicator — it tells me what the bias is in the publication or the author’s mind. This is often a good indicator of which data were cherry-picked and which were ignored, so I can read with more awareness. Since I’m liable to read at least 4 articles on an interesting science topic, this isn’t wasted information and it helps remind me what to look for & how to weight things.

    There’s a lively article on reading science on my medical blog. I’m a sometime RN and permanent geek with an intractable illness, so it’s a big part of my life, but I refuse to not have fun with something so important. If you can stand a little wit and a few ferrets, it dovetails nicely with your clear and solid piece here, going a bit further with just how to dissect an article as you read it: http://biowizardry.blogspot.com/2011/05/numeric-literacy-and-mental-integrity.html

  11. @Isy Thanks for commenting. The only problem with using headlines as a barometer is that a completely good story can get a headline that’s not at all a good reflection of what’s reported. That arises from the fact that writers often don’t get to write their own heds–someone else does that, often with an eye to SEO (search engine optimization) rather than to accurate reflection of the article’s content. It’s just a fact of life in this search-engine-driven world, which is one reason I’d just advise skipping the headline entirely.

  12. Great post! I’ve shared it with all my librarian colleagues–it has great relevance to what we teach: information literacy. Thanks! -Jeff

  13. Excellent checklist! I’m going to translate it for Italian readers (I’ll post it on Facebook and Twitter with a link to your great blog!).
    I want to add two more points that help in judging the reliability of science news. It’s a good sign if both a and b hold:
    “a- The article reports the opinions of at least two other scientists on the original research, which helps critical reading. But beware: pay more attention to the phrases actually quoted and less to the conclusions, which are often enthusiastic and inaccurate (like headlines).
    b- The article offers the chance (unfortunately rare) to leave comments at the end: they are often the more interesting read.”
    Thanks!
    Tiziana

  14. Only two kinds of reliable data exist: experimental studies and case reports. Why case reports? Because unlike any other, they’re the ones where the greatest number of variables are measured, and with the most precision – two aspects essential to science. Experimental is self-explanatory. Without poking and prodding, it’s impossible to determine causality of the observed association. Did it rain because I danced? Or did I dance because it rained? Or is there some other as-of-yet undetermined cause? Let’s dance to find out. Or wait for the rain to see if it drives me to dance.

    Lots of subjects does not good science make. It’s precise measurements and repeatability. Does it matter if a study had 2 million subjects when another study of the same number of subjects contradicts the first one? Yes, it means the hypotheses generated by both studies are wrong. Two case reports that agree with each other are many orders of magnitude more reliable. I will go as far as to say that more progress is made with case reports being read by other doctors who then apply them to their own patients than with entire populations who try to emulate other entire populations.

    • Case reports are great and make the best reading, but they are not data acquired via testing of a hypothesis, and they don’t supply either correlation or causation.

    • That’s not entirely true. A case report isn’t just a report on a condition but also on a treatment. That treatment was determined by a hypothesis about the known aspects of the case, and it constitutes the experiment, the test of the hypothesis. For example, one doctor in Italy successfully restored normal testosterone production in a young man with a drug not normally used for this (on the contrary, it was used to inhibit testosterone production, i.e., to chemically castrate). He wrote a case report, and this case report has been used by at least one other doctor to treat at least one other patient with the same condition, with equal success. That doctor is planning to write a case report as well, which will add to the empirical evidence reported in the first. Basically, it’s experimental science, but one subject at a time.

    • I didn’t say case reports are a “report on a condition,” which would be an odd thing to say.

      It’s the “one subject at a time” that makes a case report nothing more than anecdote in scientific form. I love case reports. I read several each week, and they are my favorite kind of scientific paper to read. But a report of a single case or a handful of cases frequently gets translated in news media reports as “x causes y” or “x cures y,” which is why it’s important for readers to know that case reports are nice stories, but they usually represent an n of 1.

    • When taken literally, science is merely a series of anecdotes neatly arranged for statistical analysis.

      When the effect is very obvious, as in the testosterone case, then only a few subjects are sufficient to confirm or refute the hypothesis. But when the effect is subtle, we need a much greater pool of subjects for the effect to show up at all in statistical analysis. The point here is that there’s no difference between a series of n-of-1 anecdotes and one giant study with millions of subjects.

      Granted, one case study does not make science. But think of it as merely patient zero.
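      To make that concrete, here’s a quick simulation with invented effect sizes (a sketch only, assuming a simple two-sample t-test on normally distributed data):

      ```python
      # Invented numbers: how often does a two-sample t-test reach p < 0.05
      # for an obvious effect (2 SD) vs. a subtle one (0.2 SD)?
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)

      def detection_rate(effect_sd, n_per_group, trials=2_000, alpha=0.05):
          """Fraction of simulated studies in which the effect is detected."""
          hits = 0
          for _ in range(trials):
              control = rng.normal(0.0, 1.0, n_per_group)
              treated = rng.normal(effect_sd, 1.0, n_per_group)
              if stats.ttest_ind(control, treated).pvalue < alpha:
                  hits += 1
          return hits / trials

      for label, effect in [("obvious (2 SD)", 2.0), ("subtle (0.2 SD)", 0.2)]:
          for n in (5, 50, 500):
              print(f"{label}, n={n}: detected {detection_rate(effect, n):.0%}")
      ```

      Even at n=5 per group, the 2-SD effect turns up most of the time, while the 0.2-SD effect is rarely distinguishable from noise until the groups run into the hundreds.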

    • In the context of this post, it’s simply useful for a general reader to understand that it’s generally an n of 1 with no statistical analysis at all and should be approached with caution, especially in the context of extraordinary claims.

      I’d take you up on the rest of that last comment, but that would go well off the topic of this post. You are always free to email me at ejwillingham at ye olde mail o’ G.

  15. @Emily: There are only a few people like you who not only share informative material but also clear up visitors’ doubts in detail. Thanks for everything!
