
Interpreting research studies.

Valuable research is often communicated in technical language and rigid formats that make it difficult to interpret and evaluate the findings. This document is intended to help demystify social science research for those who could make use of the findings but lack specialized training in research methods. It identifies the key questions to ask when evaluating a research report, explains why the answers matter and offers tips on where to find the information in the body of the report.

What makes the study important?

What makes a study newsworthy, or useful for informing policies and programs? It depends on how the study contributes to what we already know. This information may be summarized in the abstract of a scientific journal article or the executive summary of a longer report. More in-depth information is usually found in the "Discussion" or "Conclusions" sections at the end of the report. Look for answers to these questions:

* Does the study answer a previously unaddressed question?

* Does it address an old question in a new way or with surprising results?

* Does it confirm the results of previous studies, strengthening the evidence or showing that a program can be effective in multiple settings?

* Does it build on past work to show trends over time?

Reading through the abstract or executive summary with these questions in mind can help you evaluate the study's relevance even before you review the full publication.

Do the findings make sense?

The abstract or summary will also present the study's key "findings" or "results." Do they make sense, given what you already know about the subject? And are they rooted in the existing body of research? A scientific report should be properly referenced, with original sources for all factual statements and data from other research clearly cited.

But just because a study's findings challenge conventional wisdom does not mean they are incorrect. One function of research is to test common assumptions and reexamine earlier findings. A study with unexpected results can be particularly important or newsworthy, as it can lead to new insights and approaches. Findings that go against the conventional wisdom, however, require more careful evaluation.

Who conducted the research and wrote the report?

It is important to consider whether the study results could be influenced by a researcher's conflict of interest. You cannot always know this just by reading a report, but some knowledge of the field can guide you. Are the authors well regarded in the scientific community? What are their professional credentials? Have they published previously and, if so, in what journals?

Studies generally indicate where the authors work and who funded their research. Are the researchers independent, or could their work have been influenced by the company, government agency or advocacy group that employed or funded them? Who might stand to profit from the findings? Any potential conflict of interest should be identified up front. That said, researchers have opinions and beliefs just like everyone else; good researchers committed to a political or social agenda can still conduct unbiased, trustworthy studies that can withstand independent evaluation, provided they follow practices designed to protect the quality and integrity of research.

Who published the report?

Social science research is often disseminated through journal articles. An article published in a peer-reviewed journal has been evaluated by experts in the field to help ensure that it meets high scientific standards. Each field has its own hierarchy of journals; if you are familiar with the field, you can look to the prestige of the journal as one indication of a study's quality. If you are not sure how a journal ranks, look on its front pages for a statement that it is peer-reviewed and a list of who serves on its editorial committee or review board (if one exists).

Studies from sources other than journals (including reports that research institutions publish themselves) may also contain solid, useful information. Look to the "acknowledgments" (usually at the very beginning of a report) to see if the authors mention outside sources of input and advice, such as an expert advisory panel or external reviewers.

With the exception of some online journals, information on the Internet is not reviewed as rigorously before being posted, but some sites do have a review process. In general, if an external review process is not mentioned, you should assume that one does not exist--which means you will need to be cautious about accepting the study's conclusions.

Did the researcher select an appropriate group for study?

A social scientist's work is about people, either as individuals or as part of a social institution (for example, a school, a hospital, a religious group or a branch of government). Although the question motivating a researcher's work may be general ("What does the public think about abortion?" "At what age do teenagers begin to have sex?"), in practical terms a study often focuses on a subset, or sample, of the larger population. This sample must be selected carefully to ensure that the study results are applicable to the relevant general population.

The selection of the study group should be described in the "Methods" section of an article or report.

Using a representative sample is the best way to ensure that findings can be generalized to all members of the target population. If the researcher uses a representative sample, the report will typically state this specifically. There are many ways to achieve a representative sample, and selecting a true random sample is only one of them. Other common approaches are acceptable and--with appropriate statistical adjustments for weighting--can produce valid and representative results.
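
As a rough illustration of how weighting works, the sketch below uses made-up numbers: when urban residents are oversampled relative to their share of the population, each group's responses can be weighted by the ratio of its population share to its sample share, so that the weighted estimate reflects the population rather than the sample. The groups and figures are hypothetical, not drawn from any actual survey.

# A minimal sketch with hypothetical numbers: a survey oversamples urban residents,
# so each group's answers are weighted by (population share / sample share) before
# estimating a population-wide proportion.
sample = {
    # group: (number surveyed, number answering "yes", share of the real population)
    "urban": (600, 360, 0.40),
    "rural": (400, 120, 0.60),
}
total_sampled = sum(n for n, _, _ in sample.values())

weighted_yes = 0.0
for n, yes, pop_share in sample.values():
    weight = pop_share / (n / total_sampled)   # inverse of the relative over/under-sampling
    weighted_yes += yes * weight

print("Unweighted estimate:", (360 + 120) / total_sampled)   # 0.48, skewed toward urban views
print("Weighted estimate:  ", weighted_yes / total_sampled)  # 0.42, reflects the population mix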

Sometimes, however, a researcher may have good reasons to select the target population in a different way. Perhaps there is no list of the general population available. Perhaps the behavior in question is particularly prevalent among a subgroup, so it makes the most sense to concentrate the study among this group in order to get results quickly. For example, to study HIV transmission among individuals with multiple sex partners, it may make sense to focus the study among commercial sex workers. Researchers do not always have to select a representative sample, but they should explain the reason for selecting their study population, and you should consider the extent to which their findings are applicable to other groups.

If comparison groups are used, how similar are they?

If a study is comparing two or more groups (to evaluate the effects of an intervention, for example), the results will be valid only if the groups are similar in all ways other than their exposure to the intervention being studied. Any preexisting differences between the groups could account for different outcomes. For example, a study evaluating the effectiveness of a sex education program may find that students at an urban school who have received sex education are more knowledgeable about HIV prevention than those at a rural school who have not. Since urban students are also more likely to have been exposed to public education campaigns via radio and television, the researchers will have to ask additional questions or do further analysis to be sure that the urban students' knowledge actually resulted from the school program.

In the best study designs, participants are randomly assigned to the study groups. But when differences do exist between the groups, researchers can use statistical techniques to control for them. The way the study groups were selected should be described in a report's "Methods" section; a comparison of the groups in terms of age, educational attainment, socioeconomic status and other variables should appear in the "Results" section. Your own experience and common sense can help determine whether the differences among the groups are important for the study.

What has changed since the information was collected?

A report should generally state in the abstract or summary when the information was collected. Ideally, the data used in a study will have been collected recently so that the information reflects the current situation. However, because national-level surveys can be quite expensive and time-consuming, data may not become public for several years, and special analyses may extend over several years more. For example, in the United States, data from the large National Survey of Family Growth, which was conducted in 2002, became public only in late 2004, and analyses of the new data are still ongoing. Such delays are not necessarily a problem, but you may want to think about changes that have occurred in the intervening period (such as new policies, or big shifts in economic conditions) and whether the outcomes that were measured would be different today because of these changes.

Are the methods appropriate to the research purpose?

All research methods have advantages and disadvantages. The research question should drive the choice of research methods, but logistical matters, resource availability and ethical concerns can also influence that choice. To evaluate the findings properly, you should consider the method used in relation to the research question, keeping that method's strengths and limitations in mind.

Social science studies can rely on either qualitative or quantitative methods or a combination of the two. As a rule, quantitative techniques (collecting and analyzing measurements such as height and weight, number of visits to a doctor's office, whether a person is currently using a contraceptive method, etc.) are best for answering questions such as "How much?" "How many?" "How often?" or "When?" By examining associations or correlations between factors, quantitative studies can also indicate important relationships, such as whether poor women are more likely than better-off women to have more children than they want. Qualitative techniques (recording and analyzing interactions with people through techniques such as in-depth interviews, focus groups or participant observation) may be more useful if the goal is obtaining a better understanding of complex contextual, attitudinal or behavioral issues or documenting a process.

Does the study establish causation?

Often, the goal of a study is to determine the effect of something: for example, a cancer-fighting drug, a youth development program or a social welfare policy. However, because social science takes place in the real world, it is usually difficult to isolate the effects of one discrete factor from all the other things going on in people's lives. Even if the study shows that a particular outcome occurred after a drug was administered, a program got under way or a policy was implemented, it can be difficult to prove that this intervention caused the outcome.

A researcher may observe events as they happen, without deliberately intervening, or may purposefully experiment by altering some aspects of a situation and testing the effects. Experimental designs require that information be collected both before and after the intervention, and ideally that the results be compared with those for a control group that was not exposed to the intervention. The controlled setting of an experiment enables a researcher to draw firmer conclusions about cause and effect. However, ethical factors often prohibit the use of experimental designs in work with human beings. For example, to investigate the impact of a new family planning clinic on the occurrence of unplanned pregnancy, a researcher could not ethically force some women to use the clinic's services and deny services to others.

By "controlling for" certain variables, the researcher can also rule out some possible explanations for the study results, even in the absence of an experimental design. For example, a data set might show that young women using oral contraceptives contract more sexually transmitted infections than young women who are not using them. This could suggest that the pills are causing infection; on the other hand, the young women using birth control may be more likely to be sexually active (and, thus, exposed to greater risk of infection) than those who are not. By using statistical techniques (for example, multivariate analysis or stratification stratification (Lat.,=made in layers), layered structure formed by the deposition of sedimentary rocks. Changes between strata are interpreted as the result of fluctuations in the intensity and persistence of the depositional agent, e.g. ) that eliminate the effects of sexual activity on the results, the researcher can determine which explanation is more likely to be correct.

A study's authors will report on what they think the study proves in the "findings" or "results" section (also summarized in the abstract). In general, studies--particularly observational studies--can prove only that an outcome is "associated with" or "correlated with" (rather than "caused by") a characteristic or intervention. The information may still be extremely useful, but be alert to researchers who make claims about cause and effect that seem dubious or who ignore other possible explanations for their findings.

Is the time frame long enough to identify an impact?

Studies can either follow their subjects over time, checking in with them at various intervals (a longitudinal study), or take a "snapshot" of subjects at a single moment in time (a cross-sectional study). A cross-sectional study is good for comparing groups, such as men and women, or teenagers in Kenya and teenagers in the Philippines. A series of cross-sectional studies conducted within the same general population (but selecting a different group of people each time) can also provide information on trends over time, such as changes in HIV prevalence, as long as the groups sampled are truly comparable.

Because a longitudinal study (also sometimes called a panel study or cohort study) follows the same group of individuals over time, it can be better for examining the effects of a particular intervention, as long as it allows enough time for adequate follow-up and is able to retain a sufficient number of participants throughout the course of the study. For example, to evaluate the impact of a sex education program, researchers should ideally study students not only before, during and immediately after the program, but also months or even years later to determine its long-term effect, since some students will not begin to have sex until long after the program ends.

Could the data be biased as a result of poor research design?

The wording and order of questions in a poll or survey can affect the answers participants provide. When possible, researchers should provide the actual wording of questions so that readers can evaluate whether the questions encouraged one response over another. Even when researchers make a great effort to word questions neutrally, some participants with low literacy may not understand a survey question, or cultural factors may affect how respondents interpret it.

One indication that survey results could be flawed is a low response rate. If the response rate is low (say, fewer than 70% of those selected), then the results may be biased because the people who participated are not representative of the target group as a whole. The response rate should be explicitly noted in the "Results" section of a report or article.

Studies of sexual and reproductive behavior face another hurdle: participants do not always answer sensitive questions truthfully. For example, adolescent boys tend to overreport sexual activity, while adolescent girls tend to underreport it.

Are the results statistically significant?

When a quantitative study uses a sample (as opposed to surveying an entire population), it is important to determine mathematically that there is little probability that the result occurred by chance--that is, that a different sample would have produced a different result. In the social sciences, a study finding generally is considered statistically significant if there is no more than a 5% probability that it could have occurred by chance (often expressed as a "p-value" of 0.05 or less). Researchers must report the results for all hypotheses, regardless of whether they reach statistical significance.

Statistical significance alone is not enough to prove cause and effect, but it lends credibility to an argument. Statistical significance also does not necessarily mean an association has substantive significance; that is, it does not necessarily make a study finding important. In a large enough sample, a small difference can be statistically significant but of limited real-world importance.
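
To make the idea concrete, the sketch below runs a standard two-proportion z-test on hypothetical survey figures. The numbers are invented, and real analyses may use different tests, but it shows both how a p-value is obtained and how a very large sample can make a trivial difference statistically significant.

# A minimal sketch (hypothetical numbers) of a two-proportion z-test, the kind of
# calculation behind a reported p-value.
import math

def two_proportion_p_value(success_a, n_a, success_b, n_b):
    """Two-sided p-value for the difference between two proportions (normal approximation)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal distribution
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Small study: 55% vs 45% in groups of 100 each
print(two_proportion_p_value(55, 100, 45, 100))               # ~0.16, not significant
# Huge survey: 50.5% vs 49.5% in groups of 100,000 each
print(two_proportion_p_value(50500, 100000, 49500, 100000))   # <0.001, significant but trivial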

The answers to these 12 questions should help you evaluate and interpret reports of research findings. Of course, a study may be flawlessly designed, conducted without bias, appropriately analyzed and statistically significant, yet convey nothing important to you. But if the findings are something that you care about, and you believe that the research is sound, you are in a position to play a critical role in social science research--interpreting the findings and transmitting them to the wider world to have a greater impact.

CREDITS

This publication was written by Jennifer Nadeau and Sharon Camp and shaped by the valuable input of many Guttmacher colleagues and partners. It is part of the Protecting the Next Generation project, which collects, analyzes and communicates new knowledge about the sexual and reproductive health needs of young people. The project is supported by The Bill & Melinda Gates Foundation, the Rockefeller Foundation and the National Institute of Child Health and Human Development (grant no. 5R24 HD043610).

Key Questions to Ask When Reading a Social Science Research Report

* What makes the study important?

* Do the findings make sense?

* Who conducted the research and wrote the report?

* Who published the report?

* Did the researcher select an appropriate group for study?

* If comparison groups are used, how similar are they?

* What has changed since the information was collected?

* Are the methods appropriate to the research purpose?

* Does the study establish causation?

* Is the time frame long enough to identify an impact?

* Could the data be biased as a result of poor research design?

* Are the results statistically significant?

Research Designs

Surveys gather information from relatively large numbers of individuals. Polls, for example, collect people's opinions on an issue or their reactions to an event. Larger surveys gather more detailed information about people's background and behavior as well as attitudes and beliefs. When repeated on a regular basis, surveys can document trends, and sophisticated analyses can suggest the reasons behind the trends.

As long as the group of people surveyed is scientifically selected, surveys are good for explaining what people in general think or do and for identifying subgroup differences. Statistical analysis allows a researcher to draw a more comprehensive picture of the study population by breaking down the information in various ways (For example, are the women in a group more likely than the men to talk with their children about sex? What about urban parents vs. rural parents?). By examining relationships among many variables, the researcher can understand which factors are most relevant.
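
A small sketch of this kind of subgroup breakdown, using a handful of made-up survey records (the variables and values are illustrative only):

# Hypothetical survey records: does the respondent talk with their children about sex?
respondents = [
    {"sex": "female", "residence": "urban", "talks_about_sex": True},
    {"sex": "female", "residence": "rural", "talks_about_sex": True},
    {"sex": "female", "residence": "rural", "talks_about_sex": False},
    {"sex": "male",   "residence": "urban", "talks_about_sex": True},
    {"sex": "male",   "residence": "urban", "talks_about_sex": False},
    {"sex": "male",   "residence": "rural", "talks_about_sex": False},
]

def share(rows, key, value):
    # Proportion answering "yes" within the subgroup where rows[key] == value
    group = [r for r in rows if r[key] == value]
    return sum(r["talks_about_sex"] for r in group) / len(group)

print("Women:", share(respondents, "sex", "female"), "Men:", share(respondents, "sex", "male"))
print("Urban:", share(respondents, "residence", "urban"), "Rural:", share(respondents, "residence", "rural"))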

Qualitative research can provide rich detail and insights into the complexity of human behavior. Unlike surveys, though, qualitative designs do not produce findings meant to apply to the population as a whole.

In focus groups, several people discuss a topic with guidance from a moderator. These discussions can document a general consensus among a group. In-depth interviews collect information from individuals one-by-one, like polls and surveys, but generally the researcher does not quantify the findings for statistical analysis, instead using interviewees' stories for deeper understanding of an issue.

Ethnographic studies and case studies provide in-depth analysis of a small number of cases (individuals, neighborhoods, clinics) over time. They provide very rich data and can offer powerful, illustrative stories--valuable for journalists trying to convey a situation to readers. However, their very depth and specificity mean that the results cannot be easily generalized to other situations.

Trials test the effect of an intervention, such as a vaccine or an educational program. In an uncontrolled trial, the researcher examines a subject group before and after applying the intervention and measures the difference. In a controlled trial, the researcher adds a "control group," which is comparable in every important way to the subject group but does not receive the intervention. If the groups are truly similar at the beginning of the study and carefully monitored to limit influences (other than the intervention) that might affect outcomes, then changes that occur in the subject group, but not in the control group, can be said to result from the intervention.
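
As a simple illustration of that logic, the sketch below uses hypothetical before-and-after scores: the change observed in the control group is subtracted from the change in the intervention group, so that improvement occurring for reasons unrelated to the intervention is not credited to it.

# A minimal sketch with hypothetical knowledge scores from a controlled trial.
def mean(xs):
    return sum(xs) / len(xs)

intervention_before = [52, 48, 55, 50]   # e.g., scores before a sex education program
intervention_after  = [70, 66, 74, 68]
control_before      = [51, 49, 53, 50]
control_after       = [58, 55, 60, 56]   # some improvement even without the program

intervention_change = mean(intervention_after) - mean(intervention_before)
control_change      = mean(control_after) - mean(control_before)

print("Intervention change:", intervention_change)                    # 18.25
print("Control change:     ", control_change)                         # 6.5
print("Estimated effect:   ", intervention_change - control_change)   # 11.75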
