Is health news helpful or just hype?
Knowing the basics of scientific research and statistics can help you understand what medical studies really say
In the late 1990s, word that selenium and vitamin E might lower the risk of prostate cancer was reported by newspapers and magazines, broadcast on television and radio, and announced on Web sites. Eager to prevent the disease — and convinced that vitamins and minerals couldn’t be harmful — men around the world began taking the supplements.
But at the end of October 2008, the National Cancer Institute stopped a study, dubbed SELECT, which was designed to test whether selenium and vitamin E, alone or in combination, really could lower prostate cancer risk. The trial was slated to run until 2011, but it came to a halt when researchers grew concerned that the supplements were actually causing more harm than good. (For details on SELECT, see Volume 3, Number 1, of Perspectives.)
Men who heeded the early advice to take selenium and vitamin E undoubtedly felt angry and frustrated. How was it possible that scientific studies, published in renowned medical journals, came to such different conclusions? Why can’t the researchers get it right? And given the sudden “flip-flop,” how can anyone take the right steps to protect their health?
Knowing what’s best for you means being an intelligent consumer of health information. You don’t have to become a scientist or doctor to make wise decisions about your health. But you should understand the different types of research you are likely to hear about, familiarize yourself with some basic statistical terms, and know how to respond to the barrage of medical bulletins you see or hear about every week.
What kind of study is it?
To get answers about the causes, diagnosis, and treatment of disease, scientists use a variety of methods. Not all studies are created equal. Check to see what type of study was conducted to decide if the results apply to you.
Laboratory studies are test-tube experiments that typically use animal or human cells. One advantage of laboratory studies is that a large number of tests can be performed because human subjects aren’t required. Laboratory studies provide the scientific basis for all of clinical medicine, but most findings are years away from having any practical application in medical tests and treatments. In general, you should not rush to change your lifestyle because of test-tube triumphs.
Animal studies often bridge the gap between test-tube experiments and observations in humans. But here, too, perspective is crucial; if you acted on every animal study, your health decisions might begin to resemble the moves of a mouse in a maze.
Human studies bring science into your daily life. Here’s how researchers study human health:
Case reports are the simplest, most limited type of study. Doctors record medical details about individual patients or a small group of patients, generally people with rare or unusual problems. These reports — little more than anecdotes — can help other practitioners recognize and possibly treat diseases they haven’t seen before. They can also prompt research, but don’t stake your health on them.
Observational studies provide systematic, objective information about large groups of patients using one of these techniques:
- Cohort analysis begins when researchers recruit apparently healthy volunteers to serve as the study group, or cohort. They observe the cohort over time, relying on questionnaires, medical tests, and health records to track the group. Finally, the investigators compare members of the cohort who have remained healthy with those who have fallen ill, trying to identify the factors associated with illness. These studies are considered prospective because researchers wait for disease to develop.
- Case-control studies have the same goals as cohort analyses, but they proceed from the opposite direction. Instead of observing a group of initially healthy people, researchers begin case-control studies by identifying a group of patients who are already ill. (Because these studies look back at participants’ lifestyle and other factors, they are considered retrospective.) Next, researchers compare the patients with an equal number of demographically similar healthy people to identify factors that may account for the difference between illness and health.
Clinical trials are “active” studies. Rather than wait to see what happens to study volunteers, researchers randomly assign some of their subjects to take medications or undergo new procedures (the experimental group) while others receive standard interventions or a placebo (the control group). Often, neither researchers nor participants know who is in which group, which means the study is double-blind. The goal is to prevent people’s expectations from affecting the results. By comparing outcomes between the experimental and control groups, scientists can determine which intervention is best — or, for that matter, if an intervention is better than no treatment at all.
The randomized double-blind placebo-controlled trial is the gold standard for clinical research: it is really the only way to prove whether an intervention is beneficial.
Meta-analysis is a research technique that attempts to combine the results of many different studies, especially studies that might not have been large enough to yield meaningful data on their own. But it’s not easy. Researchers have to select studies that have similar hypotheses, screen them to be sure they are all technically competent, and then use sophisticated statistical techniques to analyze the pooled data.
Regardless of the type of study, it will have strengths and limitations. For example, even though one of the early studies that suggested selenium might reduce prostate cancer risk was a randomized controlled trial (a strength), it wasn’t specifically designed to examine a relationship between selenium and prostate cancer (a limitation). Instead, the primary goal was to see if selenium supplementation would prevent skin cancer. Although the apparent protection against prostate cancer was intriguing, it can’t be considered conclusive.
It is important that studies undergo peer review, an evaluation by experts who did not participate in the research, to ensure that strengths and limitations of the work are represented accurately in medical and scientific journals. You can’t rely on newspaper articles to present this information fairly, so you should consider reading the original study yourself. Strengths and limitations are generally outlined near the end, just before the conclusions.
You’ll also want to consider bias, or influences on the study that may affect the outcome. One bias often cited in medical studies is that people who are healthier than average tend to enroll in clinical trials because, the thinking goes, they are more interested in maintaining their good health. (See “Self-selection bias.”) This phenomenon was apparent in the European Randomized Study of Screening for Prostate Cancer (ERSPC). There was no difference in the number of deaths between the experimental and control arms, but overall mortality among study participants was lower than expected compared with the general population. (For more information on the ERSPC, see Volume 3, Number 1, of Perspectives.)
Other types of biases include these:
Lead-time bias. To understand this, consider identical twins who die of prostate cancer at age 75. One had regular PSA screening and was diagnosed and treated with radical prostatectomy at age 60, though his cancer later recurred. The other didn't undergo regular screening and didn't know he had cancer until he experienced symptoms at age 70. Early diagnosis and treatment appear to be the winner because the first brother survived longer after diagnosis (15 years versus five), but in reality, there was no difference in the outcome. Both men died at the same age.
Length-time bias. In Facts and Figures 2006, the American Cancer Society states that the five-year survival rates for all stages of prostate cancer have increased during the last 20 years, from 67% to nearly 100%. Sounds impressive, but is the change due to earlier diagnosis of prostate cancer with PSA testing or better treatment?
Selection bias. In many studies, prostate cancer patients who are treated surgically tend to be younger and to have earlier disease and better overall health than patients who receive radiotherapy. Similarly, patients who are observed without treatment tend to be older and have poorer general health. These differences produce selection bias, which can muddy comparisons between various types of treatment.
Self-selection bias: Otto SJ, Schröder FH, de Koning HJ. Low All-Cause Mortality in the Volunteer-Based Rotterdam Section of the European Randomized Study of Screening for Prostate Cancer: Self-Selection Bias? Journal of Medical Screening 2004;11:89–92. PMID: 15153324.
Test the evidence and interpret the results
An old adage reminds us that nothing in life is certain except death and taxes. That’s why scientists examine the probability of a particular outcome happening again under the same or similar conditions. The numerical expression of probability in scientific and medical studies is called the P value. The P value helps answer the question of whether the result could have occurred by chance alone. The lower the P value, the less likely the findings are due to chance; the higher the P value, the more likely the findings are due to chance.
So what constitutes a low P value? By convention, a P value of .05 or less is considered low; it means that there's a 5% or smaller chance that the findings are due to chance alone. P values of .05 or less are deemed statistically significant.
Consider the first study on perineural invasion cited in “Impact of PNI.” The researchers found that the estimated prostate cancer–specific mortality — dying of prostate cancer rather than of another cause — for patients who had radiation therapy was 14% after eight years for those with perineural invasion versus 5% for those without it. The P value was .0008, meaning there was just a .08% chance that the result was due to chance. In other words, the difference is very unlikely to have arisen by chance alone.
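If you're curious how researchers arrive at a number like that, the short Python sketch below runs a standard chi-square test on a made-up version of the comparison. The patient counts are hypothetical, since the article doesn't report the study's actual group sizes, so the point is the method rather than the exact figure.

```python
# A rough sketch of how a P value is calculated for a comparison like this
# one, using SciPy's chi-square test of independence. The patient counts
# below are hypothetical, chosen only to mirror the 14% vs. 5% mortality
# figures quoted above; they are not the study's actual data.
from scipy.stats import chi2_contingency

observed = [
    [28, 172],   # with perineural invasion: 28 of 200 died of prostate cancer
    [10, 190],   # without perineural invasion: 10 of 200 died
]

chi2, p_value, dof, expected = chi2_contingency(observed)

print(f"P value: {p_value:.4f}")
print(f"Statistically significant at the .05 level? {p_value < 0.05}")
```

Whatever the exact counts, clearing the conventional .05 bar is all that "statistically significant" means here.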
But even statistical significance may need to be probed further, especially when it comes to cause and effect. For example, just because all of the male relatives of one man with prostate cancer develop the disease, too, doesn’t mean that all cases of prostate cancer are hereditary. Other factors, such as diet, may be to blame.
Nor does a statistically significant finding make it relevant for you. You need to look at your own situation to determine whether study findings are personally significant. You are likely to find a result important if the study was conducted in people like you, if you are at risk for the disease being investigated, and if you are willing to make the changes suggested by the findings.
Personal significance gets at the question of risks and benefits. If you are like most men, the first question that you’ll ask when you read about prostate cancer is, “Will it happen to me?” Despite the simplicity of this question, it does not have a simple answer. Instead, it usually gets a statistical reply: compared with an average person, an individual with your health history is X times as likely to become ill. In this case, X marks relative risk.
If your relative risk of prostate cancer is greater than 1, you are more likely than average to be stricken; a relative risk of 2, for example, means you are twice as likely to develop prostate cancer. Men with fathers or brothers who’ve had prostate cancer are 1.5 to 3 times as likely to develop the disease as men with no family history, and if multiple relatives have been diagnosed before age 55, a man’s risk rises even further. Although even a small risk can be important when your health is involved, you should interpret relative risks below 1.5 with caution.
Relative risk is important, but to learn what it means for you, you must also consider your absolute risk. Suppose, for example, that a study finds that someone like you is three times as likely to develop Disease Y compared with the average man. Your relative risk of 3 might sound scary, but if the average man’s risk is only one in a million, your risk of three in a million shouldn’t keep you awake at night. In the words of one statistician, a difference, to be a difference, must make a difference.
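To make that concrete, here is a minimal sketch of the arithmetic, using the hypothetical Disease Y numbers from the example above.

```python
# Converting a relative risk into an absolute risk, using the
# hypothetical Disease Y numbers from the example above.
baseline_risk = 1 / 1_000_000   # the average man's risk: one in a million
relative_risk = 3               # someone like you is three times as likely

your_absolute_risk = baseline_risk * relative_risk
print(f"Your absolute risk: {your_absolute_risk:.6%}")   # 0.000300%, or three in a million
```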
Benefit is the converse of risk. In the Prostate Cancer Prevention Trial (see “Can prostate cancer be prevented?”), researchers found that taking finasteride could cut the risk of prostate cancer by 25%. A 25% reduction in risk sounds great, and it is — especially if you have a high risk to begin with. Approximately 16% of men born today will be diagnosed with prostate cancer at some point in their lifetime. So taking finasteride might provide an absolute risk reduction of 4% (25% of 16%). Worth it? Only you can decide after weighing the potential risks, benefits, and costs.
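If you want to check that arithmetic yourself, the short sketch below multiplies the reported relative risk reduction by the lifetime risk quoted above; the figures come from this article, not from the trial report.

```python
# Translating a relative risk reduction into an absolute risk reduction,
# using the figures quoted above.
lifetime_risk = 0.16             # roughly 16% of men will be diagnosed with prostate cancer
relative_risk_reduction = 0.25   # finasteride cut that risk by about 25%

absolute_risk_reduction = lifetime_risk * relative_risk_reduction
print(f"Absolute risk reduction: {absolute_risk_reduction:.0%}")   # about 4%
```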
Admittedly, a one-in-six chance of being diagnosed with prostate cancer sounds scary, but keep in mind, that’s just the risk of being diagnosed with the disease. The risk of dying from prostate cancer is different: a white American man’s risk is only about 3%; an African American’s, about twice that. For all men, the chances of dying from cardiovascular disease are greater than dying from prostate cancer.
The public has a voracious appetite for health information, and the media are eager to feed that hunger. Too often, though, complex findings are reduced to a sound bite or a headline proclaiming “New Hope” or “No Hope.” To be an informed health consumer, you will have to read beyond the headlines. Focus on results that have been published in peer-reviewed medical journals. Note any potential conflicts of interest, particularly when research is funded by pharmaceutical companies and other commercial enterprises. (Conflicts of interest don’t necessarily invalidate research, but they should encourage you to read the findings with a slightly more critical eye.)
Most importantly, when you read about medical research, see how new information fits into your personal health puzzle before you decide to change your ways. Keep the big picture in mind, and remember to factor in your personal preferences and priorities. If you have lingering questions, discuss them with your doctor.