
Methodological reporting in qualitative, quantitative, and mixed methods health services research articles.

As the health services research field continues to evolve, so too do its methods. Mixed methods research capitalizes on the strengths of both qualitative and quantitative methodologies by combining approaches in a single research study to increase the breadth and depth of understanding (Johnson, Onwuegbuzie, and Turner 2007). Mixed methods can be a better approach than either quantitative-only or qualitative-only methods when a single data source is not sufficient to understand the topic, when results need additional explanation, when exploratory findings need to be generalized, or when the complexity of research objectives is best addressed with multiple phases or types of data (Brannen 1992; Creswell and Plano Clark 2011). Rigorous mixed methods approaches require that individual components (qualitative or quantitative) adhere to their respective established standards (Curry, Nembhard, and Bradley 2009; Creswell and Plano Clark 2011). Despite recent guidelines on frameworks for conducting mixed methods research (e.g., Curry, Nembhard, and Bradley 2009; Creswell and Plano Clark 2011), a critical challenge has been ensuring that reports from mixed methods studies transparently discuss the methodological components integral to the conduct of the studies. Health services researchers and reviewers need clear guidelines regarding research methodology, including the methodological components that should be expected in mixed methods papers to indicate that they are sufficiently rigorous.

Mixed Methods in Health Services Research

Health services research is the study of how social factors, financing systems, organizational structures and processes, health technologies, and personal behaviors affect access to health care, the quality and cost of health care, and ultimately, health and well-being (Lohr and Steinwachs 2002). As a result of the breadth of topics addressed, health services research draws upon methods and concepts from many fields, including medicine, epidemiological and economic studies, and the evaluation of services and interventions (Field, Tranquada, and Feasley 1995). Health services researchers increasingly work in interdisciplinary partnerships (e.g., Aboelela et al. 2007) and use innovative methods, including mixed methods, to more fully understand health services phenomena. Mixed methods approaches are also consistent with suggestions to extend scientific and contextual health knowledge beyond randomized trials (Berwick 2005).

Mixed methods research capitalizes on the strengths of both qualitative and quantitative methodology by combining both components in a single research study to increase breadth and depth of understanding (Johnson, Onwuegbuzie, and Turner 2007). Qualitative and quantitative methods can be integrated for different purposes to provide a more comprehensive picture of health services than either method can alone. Mixed methods are appropriate in the following situations: (1) when researchers would like to converge different methods or use one method to corroborate the findings from another about a single phenomenon (triangulation); (2) when researchers would like to use one method to elaborate, illustrate, enhance, or clarify the results from another method (complementarity); (3) when researchers would like to use results from one method to inform another method, such as in creating a measure (development); (4) when researchers would like to use one method to discover paradoxes and contradictions in findings from another method that can suggest reframing research questions (initiation); and (5) when researchers seek to expand the breadth and depth of the study by using different methods for different research components (expansion) (Greene, Caracelli, and Graham 1989). Bryman (2006) modified and expanded this list to add that mixed methods can also be useful in obtaining diversity of views, illustrating concepts, and developing instruments.

Quantitative and qualitative research can be distinguished by the philosophical assumptions brought to the study (e.g., deductive versus inductive), the types of research strategies (e.g., experiments versus case studies), and the specific research methods used in the study (e.g., structured survey versus observation) (Creswell 2008). Qualitative health services research, for example, is a method in which the researcher collects textual material derived from speech or observation and attempts to understand the phenomenon of interest in terms of the meanings people bring to it (Denzin and Lincoln 1994; Shortell 1999; Giacomini and Cook for the Evidence-Based Medicine Working Group 2000; Malterud 2001; Bradley, Curry, and Devers 2007). Certain characteristics are typical of qualitative research, including a naturalistic setting (as opposed to a laboratory), a focus on participants' perspectives and their meaning, an emphasis on process rather than product, and data collected as words or images (Padgett 2008).

Guidelines for Conducting Mixed Methods Research

The National Institutes of Health noted the need for rigor in combining qualitative and quantitative methods to study complex health issues in their recent publication, Best Practices for Mixed Methods in Health Sciences (Creswell, Klassen, Plano Clark, and Smith for the Office of Behavioral and Social Sciences Research 2011). There are several frameworks to guide the rigorous conduct and evaluation of mixed methods research (Collins, Onwuegbuzie, and Sutton 2006; Curry, Nembhard, and Bradley 2009; Tashakkori and Teddlie 2010; Creswell and Plano Clark 2011). Collectively, these frameworks recommend that the conduct of mixed method studies--and reports of mixed method research, including peer-reviewed publication--demonstrate explicit rationales for all decisions regarding study design, including the purpose of including both qualitative and quantitative methods. They specifically advise that each component (qualitative or quantitative) should be conducted with a level of rigor in accordance with established principles in its field, and that researchers be transparent in methodological reporting. For example, sampling design should be specified as identical, parallel, nested, or mixed (Onwuegbuzie and Collins 2007); the level of mixing methods (fully versus partially) should be described, as should time orientation (sequential or concurrent components of research) and emphasis (equal importance of methodological approaches or one more dominant) (Leech and Onwuegbuzie 2009).

Conducting and evaluating mixed methods research present unique methodological challenges, particularly related to rigor. Quantitative studies typically rely on quality criteria such as internal validity, generalizability, and reliability (Campbell 1957; Campbell and Stanley 1963; Messick 1989, 1995; Onwuegbuzie and Daniel 2002, 2004; Onwuegbuzie 2003), whereas qualitative studies have roughly comparable quality criteria of credibility, transferability, and dependability (Lincoln and Guba 1985; Guba and Lincoln 1989; Miles and Huberman 1994; Maxwell 2005; Pope and Mays 2006). For example, questions asked when evaluating a qualitative study might include the following: "Were participants relevant to the research question and was their selection well reasoned?" and "Was the data collection comprehensive enough to support rich and robust descriptions of the observed events?" (Giacomini and Cook for the Evidence-Based Medicine Working Group 2000). In addition to determining whether methodological approaches unique to qualitative or quantitative research were employed, an evaluation of a mixed methods study should assess aspects unique to mixed methods, such as how multiple components are integrated and how consistency and discrepancy between findings from each method are managed (Sale and Brazil 2004; O'Cathain, Murphy, and Nicholl 2007). Qualitative, quantitative, and mixed methodologists agree that study procedures should be reported transparently, including sufficient detail to allow the reader to make inferences about study quality (Lincoln and Guba 1985; Giacomini and Cook for the Evidence-based Medicine Working Group 2000; O'Cathain, Murphy, and Nicholl 2007; Armstrong et al. 2008; Creswell 2008; Curry, Nembhard, and Bradley 2009; Leech et al. 2009; Teddlie and Tashakkori 2009).

Several researchers have proposed specific techniques to assess the overall methodology of mixed methods research and assess the methodological components of the qualitative, quantitative, and mixed portions of the studies (e.g., Pluye et al. 2009; O'Cathain 2010; Tashakkori and Teddlie 2010; Creswell and Plano Clark 2011; Leech, Onwuegbuzie, and Combs 2011). For example, O'Cathain (2010) assessed quality of mixed methods research by evaluating transparency and clarity in reporting planning, design, data, interpretive rigor, inference transferability, reporting quality, synthesizability, and utility. Others have suggested alternative methods for assessing quality, but criteria often are not elucidated or are vague. Further, those frameworks typically address quality of the study design as opposed to the characteristics provided in the published article. By contrast, Sale and Brazil (2004) proposed a structured framework for the evaluation of mixed methods publications by identifying key methodological components that should be included for both qualitative and quantitative portions of studies. Despite these advances, we found few published accounts of the rigor of published mixed methods research. Our article has three specific research questions: (1) How has the frequency of mixed methods studies published in health services journals changed over time? (2) How are mixed methods articles being used to elucidate health services? and (3) To what extent do mixed methods reports differ in methodological content compared to qualitative-only or quantitative-only articles?

METHOD

This systematic review assessed the frequency of mixed methods publications in top health services research journals and compared the frequency of key methodological components in qualitative, quantitative, and mixed method studies. We first reviewed articles in health services research journals to determine the prevalence of mixed methods designs and the presence of key methodological components. Then, we conducted statistical analyses of trends over time in the frequency of mixed methods articles and in the presence of key methodological components of those articles. Because this was an analysis of published data, no ethical oversight was required.

Identification of Mixed Methods Articles

We examined four journals: Health Affairs, Health Services Research, Medical Care, and Milbank Quarterly, which had 5-year impact factors of 2.94-4.71. Journals were selected by reviewing the Institute for Scientific Information (2007) rankings for the top 10 journals in health care sciences and services. Of these 10, we included all journals that focused generally on health services research and excluded journals with narrower foci (Value in Health, Journal of Health Economics, Journal of Pain and Symptom Management, Statistical Methods in Medical Research, Quality and Safety in Health Care, and Quality of Life Research). Although 2001 marked a turning point in the proliferation of mixed methods studies published in major electronic bibliographic databases such as PubMed (Collins, Onwuegbuzie, and Jiao 2007), we chose to examine articles from 2003 to 2007 because 2003 marks publication of the first edition of Tashakkori and Teddlie's landmark Handbook of Mixed Methods in Social and Behavioral Research, which provided the first comprehensive collection of mixed method theory, methodology, and application. Five years represents a sufficient period of time to examine trends of published articles following the publication of a landmark methodological work.

We reviewed empirical articles to determine whether each represented a quantitative, qualitative, or mixed methods study. This entailed using all the information presented in the abstract and the body of the article to identify the research design either as stated or implied by the author(s). We excluded non-empirical articles (book reviews, literature reviews, commentaries and opinion articles, letters to the editor, policy statements) and articles from a special issue of Milbank Quarterly (Volume 83, Number 4) that included only articles published between 1932 and 1998.

We classified articles as quantitative if they included (1) a primary goal of testing theories or hypotheses about relationships between/among variables, or (2) quantitative data and methodology, such as hierarchical linear modeling, multiple regression, or Markov modeling. We classified articles as qualitative if they included either (1) a primary goal of exploring or understanding the meaning ascribed to a specific phenomenon or experience, or (2) qualitative data such as observations, unstructured or semi-structured interviews, or focus group interviews or methodologies such as thematic analysis. Although more complex definitions of mixed method studies exist (e.g., Johnson, Onwuegbuzie, and Turner 2007; Creswell and Plano Clark 2011), we classified articles as mixed methods if they integrated or combined both quantitative and qualitative methods in a single study (Sale and Brazil 2004). This definition reflects the general definitions of mixed methods and the lack of consensus on a specific definition across all multidisciplinary mixed methods researchers.

We used spreadsheets to track classifications, with cells containing articles' abstracts and our field notes. Two authors read and classified articles in batches of 50 according to type, conferring as needed until agreement was achieved (n = 300 articles); the remaining articles (n = 1,351) were each coded by one author. For the few articles for which methodology was ambiguous (n = 58, 3.5 percent of all empirical articles), classification was resolved in consultation with a third author. Similar methods have been used in other evaluations of mixed methods articles (Powell et al. 2008).

Assessments of Articles

We identified all mixed methods articles (n = 47) and equal random samples (n = 47) of quantitative articles (from 1,502 articles) and qualitative articles (from 102 articles) (total n = 141) in the four journals. Random samples of qualitative and quantitative articles were selected using a random number generator and did not adjust for journal or year. We assessed the frequency of key methodological components reported across articles, then compared rates by article type. The methodological components we focused on were drawn from two conceptual frameworks. The first included Sale and Brazil's (2004) criteria: (1) internal validity for quantitative findings and credibility for qualitative findings, (2) external validity for quantitative findings and transferability or fittingness for qualitative findings, (3) reliability for quantitative findings and dependability for qualitative findings, and (4) objectivity for quantitative findings and confirmability for qualitative findings (specific criteria are listed in Table 3). The second was O'Cathain's transparency criteria for mixed methods studies (O'Cathain, Murphy, and Nicholl 2007; O'Cathain 2010), which specify that mixed methods studies should state the (1) priority of methods (primarily quantitative, primarily qualitative, or equal priority), (2) purpose of mixing methods (e.g., triangulation, complementarity, initiation, development, or expansion), (3) sequence of methods (qualitative first, quantitative first, or simultaneous), and (4) stage of integration of both types of data (e.g., data collection, analysis, interpretation). We assessed four additional components of mixed methods studies: (1) whether qualitative and quantitative components were integrated, (2) whether limitations of design were detailed, (3) whether areas of consistency between qualitative and quantitative components were elucidated, and (4) whether areas of inconsistency between components were described.
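The equal-allocation sampling described above can be sketched as follows. This is an illustrative sketch, not the authors' code; the article identifiers are hypothetical placeholders, and the key point is that the draw is simple random sampling without stratification by journal or year.

```python
# Illustrative sketch: draw equal random samples of quantitative and
# qualitative articles to match the 47 mixed methods articles.
# Article IDs are hypothetical placeholders, not the study's data.
import random

random.seed(42)  # a fixed seed makes the draw reproducible

quantitative_ids = list(range(1502))  # placeholder IDs for 1,502 quantitative articles
qualitative_ids = list(range(102))    # placeholder IDs for 102 qualitative articles

# random.sample draws without replacement and without stratification
quant_sample = random.sample(quantitative_ids, 47)
qual_sample = random.sample(qualitative_ids, 47)
print(len(quant_sample), len(qual_sample))  # 47 47
```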

We assessed components using categories of 0 (not described), 1 (described), or not applicable (e.g., for criteria referencing control groups in a study that had none, or ethical review for a study with no human subjects data) (O'Cathain, Murphy, and Nicholl 2007). We identified only whether the study contained or did not contain each methodological component and did not attempt to assess quality or appropriateness of each component within the context of the study. For example, we assessed whether the publication stated that missing data were addressed but not whether the methods to address missing data were the best methods for that particular research design. Similar to initial article classification, two authors read and coded articles to assess presence/absence of each criterion, with any ambiguity resolved in consultation with a third author.

Quantitative Analyses of Trends and Rigor

Once all articles were coded, we conducted a statistical analysis to determine whether there were trends over time in the prevalence of mixed methods articles. To assess this, we used linear regression to test the hypothesis that there would be an increase in the prevalence of the number of mixed methods articles over time. We also conducted chi-square tests to assess differences between mixed methods, qualitative, and quantitative articles on both quantitative and qualitative criteria. We tested whether each criterion was present in the same proportion of quantitative studies as in the quantitative portion of the mixed methods studies and in the same proportion of qualitative studies as in the qualitative portion of the mixed methods studies.
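The two analyses above can be sketched in code. This is an illustrative sketch rather than the authors' analysis: the yearly counts and the journal-by-type contingency table below are hypothetical placeholders, and only the form of the tests (linear regression of prevalence on year; chi-square test on a contingency table) reflects the text.

```python
# Illustrative sketch (not the authors' code): a linear trend test on
# yearly prevalence and a chi-square test of type-by-journal counts.
import numpy as np
from scipy import stats

years = np.array([2003, 2004, 2005, 2006, 2007])
mixed_counts = np.array([9, 12, 13, 11, 9])   # hypothetical yearly counts
totals = np.array([330, 325, 340, 328, 328])  # hypothetical totals

# Linear regression of mixed methods prevalence on publication year
prevalence = mixed_counts / totals
slope, intercept, r, p, se = stats.linregress(years, prevalence)
print(f"slope={slope:.5f}, R^2={r**2:.3f}, p={p:.3f}")

# Chi-square test on a 2 x 4 table of (mixed, other) counts per journal
table = np.array([[14, 13, 17, 3],        # hypothetical mixed counts
                  [154, 175, 405, 382]])  # hypothetical other counts
chi2, p_chi, df, expected = stats.chi2_contingency(table, correction=False)
print(f"chi2={chi2:.2f}, df={df}, p={p_chi:.4f}")
```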

RESULTS

In general, coders could easily categorize the type of study. Challenges arose when transparency about methods was inadequate (n = 58, 3.5 percent of all empirical articles). For example, some papers indicated that data from interviews were included but did not provide details about who was interviewed, what was asked in the interviews, how the interview data were analyzed, or how the interview data were integrated into the overall study.

Research Question 1: How has the frequency of mixed methods studies published in health services journals changed over time?

Table 1 presents a summary of the types of articles published in four major health services research journals from 2003 through 2007. Only 2.85 percent (n = 47) of empirical articles were mixed methods studies; 6.18 percent (n = 102) of empirical studies represented qualitative research. Quantitative research represented 90.98 percent (n = 1,502) of empirical articles. The journal containing the highest proportion of empirical studies employing a mixed methods design was Milbank Quarterly (8.33 percent), followed by Health Affairs (6.91 percent), Health Services Research (4.03 percent), and Medical Care (0.78 percent). A chi-square test showed a significant difference in these proportions (χ² = 34.67, df = 3, p < .0001).

To detect temporal trends in the frequency of mixed methods research in the health services literature, articles were collapsed across journal and examined by publication year. Table 2 presents the frequency of article type for each of the 5 years. All journals combined published an average of 10.8 mixed method articles per year, or 3.27 percent of empirical articles annually. A quadratic trend was seen across the 5 years (R² = 0.65), indicating a slight increase in mixed method articles in the first 2 years and then a decrease for the remaining years.

Research Question 2: How are mixed methods articles being used to elucidate health services research?

Mixed methods articles were categorized into four overlapping categories: Articles on organizational and individual decision-making processes (n = 18 studies) combined qualitative interviews with quantitative administrative data analyses to assess decision making about processes or impediments to processes. Examples include a study of formulary adoption decisions (Dranove, Hughes, and Shanley 2003) and states' decisions to reduce Medicaid and other public program funding (Hoadley, Cunningham, and McHugh 2004).

Sixteen articles described outcomes or effects of policies or initiatives by combining administrative health record or performance data with interviews of health administrators, providers, or executives. Examples include papers describing outcomes of pay-for-performance changes to Medicaid (Felt-Lisk, Gimm, and Peterson 2007; Rosenthal et al. 2007) and hospital patient safety initiatives (Devers, Pham, and Liu 2004).

Thirteen measurement development articles employed mixed methods to create measurement tools to assess, for example, caregiver burden (Cousineau et al. 2003), patient activation (Hibbard et al. 2004), and the development of a Healthcare Effectiveness Data and Information Set (HEDIS) smoking measure (Pbert et al. 2003). These studies typically examined qualitative data from individual or focus group interviews first to inform creation and testing of a survey.

Articles on experiences and perceptions were the least common category (n = 8), typically combining surveys and interviews. These included family physicians' perceptions of the effect of medication samples on their prescribing practices (Hall, Tett, and Nissen 2006); caregivers' experiences of the termination of home health care for stroke patients (Levine et al. 2006); and consumer enrollment experiences in the Cash and Counseling program (Schore, Foster, and Phillips 2007).

Only five of the mixed methods articles (10.64 percent of the total mixed methods sample) used the terms "mixed method" or "multimethod" in the abstract or text, although four articles (8.51 percent) referred to "qualitative and quantitative" data.

Research Question 3: Do mixed methods articles report qualitative and quantitative methodology differently than methodology is reported in qualitative-only or quantitative-only articles?

Table 3 presents a summary of the frequency of key methodological components present in quantitative articles, qualitative articles, and mixed methods articles (each n = 47). For quantitative methodological components (32 items), mixed methods articles (M = 7.02 [21.94 percent], SD = 6.24) averaged statistically significantly fewer (t(92) = -4.50, p < .00001, Cohen's d effect size = 0.93) components than did quantitative articles (M = 15.06 [47.07 percent], SD = 10.53). For qualitative methodological components (35 items), mixed methods articles (M = 7.17 [21.34 percent], SD = 6.36) did not average a statistically significantly different proportion of components (t(92) = -1.10, p = .14, d = 0.23) than did qualitative articles (M = 8.91 [25.47 percent], SD = 8.83). No article met all criteria, and no criterion was met by all articles. For comparative analyses at a statistical significance level of α = 0.05, power to detect a medium difference (Cohen's h = 0.50) and a large difference (Cohen's h = 0.80) was 78 and 99 percent, respectively.
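The power figures for Cohen's h can be reproduced with a short calculation. This is an illustrative sketch, not the authors' code; a one-sided z test at alpha = .05 with n = 47 per group is assumed here because it yields the reported 78 and 99 percent.

```python
# Illustrative sketch: approximate power for comparing two proportions
# via Cohen's h, assuming n = 47 per group and a one-sided test at .05.
import math
from scipy.stats import norm

def cohens_h(p1, p2):
    """Cohen's h: arcsine-transformed difference between two proportions."""
    return 2 * math.asin(math.sqrt(p1)) - 2 * math.asin(math.sqrt(p2))

def power_two_proportions(h, n_per_group, alpha=0.05):
    """Approximate one-sided power for an effect of size h with equal group sizes."""
    z_alpha = norm.ppf(1 - alpha)
    z = abs(h) * math.sqrt(n_per_group / 2)
    return float(norm.cdf(z - z_alpha))

print(round(power_two_proportions(0.50, 47), 2))  # 0.78 (medium effect)
print(round(power_two_proportions(0.80, 47), 2))  # 0.99 (large effect)
```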

Of quantitative components, mixed methods studies were most likely to describe sources of data and data collection instruments (61.70 percent of studies), state the purpose/objective of the paper (59.57 percent), state the source of subjects (58.70 percent), and define/describe the study population (51.06 percent). Most mixed methods studies did not include control and intervention groups, which excluded related criteria. Quantitative studies tended to contain more key methodological components, with more than 90 percent of studies defining outcome measures (93.48 percent), defining/describing study population (91.49 percent), describing statistical procedures (95.74 percent), and stating hypotheses (97.87 percent). Quantitative studies were more likely than the quantitative portion of mixed methods studies to describe study characteristics (e.g., study design, subject recruitment), identify or control for confounding variables, provide probability values or confidence intervals, state hypotheses, or acknowledge both statistical and clinical significance (see Table 3).

For qualitative methodological components, mixed methods studies were most likely to state the purpose/objective of the paper (72.34 percent), triangulate qualitative sources (e.g., use both individual and focus group interviews; 53.19 percent), and describe data-gathering procedures (53.19 percent). More than 50 percent of qualitative studies triangulated qualitative sources (57.45 percent), stated the purpose/objective of the paper (57.45 percent), and described the study setting (80.43 percent), how the setting was selected (63.04 percent), the participants (55.56 percent), and data-gathering procedures (76.60 percent). Qualitative studies were more likely than the qualitative portions of the mixed methods studies to describe the study setting, justify the sampling strategy, and describe the participants and data-gathering procedures.

For criteria regarding method integration, few authors justified the use of mixed methods or clearly described the priority, purpose, and sequence of methods, and the stage of integration. Most articles, however, integrated qualitative and quantitative components (85.11 percent); examination of articles indicated components were most frequently integrated in the interpretation phase. Across all studies, few articles stated that informed consent was obtained, ethical review was undertaken, or that subjects' confidentiality was protected.

DISCUSSION

Previous reports indicate mixed methods articles comprised <1 percent of empirical health articles examined in 2000 (McKibbon and Gadd 2004). Since then, however, the National Institutes of Health has increased funding for mixed methods research, with the proportion of funded research projects up to 5 percent of studies in some institutes (Plano Clark 2010). In the United Kingdom, the proportion of funded research that uses mixed methods is at 17 percent and continuing to increase (O'Cathain, Murphy, and Nicholl 2007). We found that the use of mixed methods in articles published in top health services research journals was generally consistent between 2003 and 2007 at approximately 3 percent of all empirical articles, lower than would be expected given the complexity and depth of health services research questions for which mixed methods would be appropriate. The presence of key methodological components was variable across type of article, but the quantitative portion of mixed methods articles included consistently fewer methodological components than quantitative-only studies and the qualitative portion of mixed methods articles included about the same proportion of methodological components as qualitative-only articles. Mixed methods articles also generally did not address the priority, purpose, and sequence of methods or the integration of methods as suggested by experts in mixed methods (e.g., Creswell and Tashakkori 2008; O'Cathain 2010; Creswell and Plano Clark 2011).

Key methodological components that cut across qualitative and quantitative methodologies were often missing from mixed methods publications. Descriptions of sample selection and sampling procedures, the study context, and data-gathering procedures are essential aspects of interpreting study findings, and mixed methods studies should not be exempt from these basic research requirements. Many mixed methods studies did not include the level of detail that would likely be required for a qualitative or quantitative paper to be accepted in these high-ranking journals. Further, the studies appeared not to follow available guidance on the structure and components of mixed methods studies that discuss basic quality criteria, data collection strategies, methods of data analysis, procedures for integration of methods, processes of making inferences from text, and recommendations for adequate reporting of results (e.g., Giacomini and Cook for the Evidence-based Medicine Working Group 2000; Curry, Nembhard, and Bradley 2009; O'Cathain 2010; Tashakkori and Teddlie 2010; Creswell and Plano Clark 2011). In some ways this finding is not surprising because guidance on mixed methods standards is still emerging. We expect that the National Institutes of Health publication, Best Practices for Mixed Methods in Health Sciences (Creswell, Klassen, Plano Clark, and Smith for the Office of Behavioral and Social Sciences Research 2011), will lead to increased standardization of mixed methods approaches.

Although they reported more key methodological components on average than the mixed methods articles, quantitative articles in this analysis had some surprising gaps as well, including low reporting of power analyses, how missing data were addressed, and descriptions of control/comparison groups. It should be noted, however, that quantitative articles with large sample sizes do not necessarily need power analyses. With regard to single-method qualitative articles, low proportions described the study context, coding techniques, or data analysis. Few articles with human subjects involvement included statements that the research was conducted with ethical oversight, promised confidentiality, or obtained consent. These findings suggest that the issue of poor transparency in reporting methodology is not limited to mixed methods studies.

Recommendations for Mixed Methods Reporting

The methodological components reported here are not optimal indicators of the quality of mixed methods publications; an article could conceivably have all of these components and yet still be a poor research study. These components are, however, a useful starting point for a systematic evaluation of the rigor of qualitative and quantitative portions of mixed methods studies. Some journals require inclusion of other criteria (e.g., Consolidated Standards of Reporting Trials 2010) to guide reporting of highly structured methodologies (e.g., randomized clinical trials); it would be useful to examine researchers' and editors' perspectives on the validity of the methodological components in this study for mixed method publications. It is difficult, however, to identify measurable criteria that capture the breadth of study designs in health services. Further, determination of what indicators of rigor would be appropriate might reasonably vary by study design, topic, scope, and even journal, and qualified judgment is required to determine which criteria are appropriate for each study. These findings suggest mixed methods researchers should provide enough detail on methodology and methodological decisions to allow reviewers to judge quality.

Researchers face challenges writing and publishing mixed methods articles, including communicating with diverse audiences who are familiar with only one methodological approach (i.e., quantitative research or qualitative research), determining the most appropriate language and terminology to use, complying with journal word counts, and finding appropriate publishing outlets with reviewers who have expertise in mixed methods research techniques and who are not biased against mixed methods studies (Leech and Onwuegbuzie 2010; Leech, Onwuegbuzie, and Combs 2011). Our findings suggest that Sale and Brazil's (2004) criteria and existing guidance on conducting mixed methods research (e.g., Collins, Onwuegbuzie, and Sutton 2006; Tashakkori and Teddlie 2010; Creswell and Plano Clark 2011) might be useful frameworks for health services researchers as they work to improve methodological rigor. Journal editors might also encourage the publication of mixed methods projects by (1) publishing guidelines for rigor in mixed methods articles (e.g., Sale and Brazil 2004), (2) identifying experienced reviewers who can provide competent and ethical reviews of mixed methods studies, and (3) requiring transparency of methods for all studies so that (4) rigor and quality can be assessed to the same extent they are in quantitative studies. These modifications might require (5) some flexibility in word count or allowance of online appendices to allow mixed methods researchers to describe fully and concisely both qualitative and quantitative components, methods for integrating findings, and appropriate details.

Limitations

In this study, assessment was limited to only published articles. We did not contact authors to determine specific study activities, and studies may have included methodological components (e.g., consenting) not reported in publications. We assessed only whether publications reported the methodological component, but we did not evaluate whether each component was fully and appropriately implemented in the research.

CONCLUSIONS

Mixed methods studies have utility in providing a more comprehensive picture of health services than either method can alone. Researchers who use mixed methods techniques should use rigorous methodologies in their mixed methods research designs and explicitly report key methodological components of those designs and methods in published articles. Similarly, journal editors who publish mixed methods research should provide guidance to reviewers of mixed methods articles to assess the quality of manuscripts, and they must be prepared to provide adequate space for authors to report the necessary methodological information. Frameworks are now available to guide both the design and evaluation of mixed methods research studies and published works. Whatever frameworks are used, it is essential that authors who engage in mixed methods research studies meet two primary goals (developed by the American Educational Research Association 2006): Mixed methods researchers should (1) conduct and report research that is warranted or defensible in terms of documenting evidence, substantiating results, and validating conclusions; and (2) ensure that the conduct of research is transparent in terms of clarifying the logic underpinning the inquiry.

DOI: 10.1111/j.1475-6773.2011.01344.x

ACKNOWLEDGMENTS

Joint Acknowledgment/Disclosure Statement: The authors appreciate funding from the National Institute on Drug Abuse (K23 DA020487) and comments and feedback on an earlier draft from the anonymous reviewers, John Creswell, PhD, Alicia O'Cathain, PhD, Hilary Vidair, PhD, Susan Essock, PhD, and Sa Shen, PhD. Portions of this manuscript were presented at the International Mixed Methods Conference in July 2010 in Baltimore, Maryland.

Disclosures: None.

Disclaimers: None.

REFERENCES

Aboelela, S. W., E. Larson, S. Bakken, O. Carrasquillo, A. Formicola, S. A. Glied, J. Haas, and K. M. Gebbie. 2007. "Defining Interdisciplinary Research: Conclusions from a Critical Review of the Literature." Health Services Research 42 (1, part 1): 329-46.

American Educational Research Association. 2006. "Standards for Reporting on Empirical Social Science Research in AERA Publications." Educational Researcher 35 (6): 33-40.

Armstrong, R., E. Waters, L. Moore, E. Riggs, L. G. Cuervo, P. Lumbiganon, and P. Hawe. 2008. "Improving the Reporting of Public Health Intervention Research: Advancing TREND and CONSORT." Journal of Public Health 30 (1): 103-9.

Berwick, D. 2005. "The John Eisenberg Lecture: Health Services Research as a Citizen in Improvement." Health Services Research 40 (2): 317-36.

Bradley, E. H., L. A. Curry, and K.J. Devers. 2007. "Qualitative Data Analysis for Health Services Research: Developing Taxonomy, Themes, and Theory." Health Services Research 42 (4): 1758-72.

Brannen, J. (Ed.). 1992. Mixing Methods: Qualitative and Quantitative Research. Aldershot, England: Ashgate.

Bryman, A. 2006. "Integrating Quantitative and Qualitative Research: How Is It Done?" Qualitative Research 6: 97-113.

Campbell, D. T. 1957. "Factors Relevant to the Validity of Experiments in Social Settings." Psychological Bulletin 54:297-312.

Campbell, D. T., and J. C. Stanley. 1963. Experimental and Quasi-Experimental Designs for Research. Chicago, IL: Rand McNally.

Collins, K. M. T., A. J. Onwuegbuzie, and Q. G. Jiao. 2007. "A Mixed Methods Investigation of Mixed Methods Sampling Designs in Social and Health Science Research." Journal of Mixed Methods Research 1: 267-94.

Collins, K. M. T., A. J. Onwuegbuzie, and I. L. Sutton. 2006. "A Model Incorporating the Rationale and Purpose for Conducting Mixed Methods Research in Special Education and Beyond." Learning Disabilities: A Contemporary Journal 4: 67-100.

Consolidated Standards of Reporting Trials. 2010. Website [accessed on October 18, 2011]. Available at http://www.consort-statement.org/

Cousineau, N., I. McDowell, S. Hotz, and P. Hebert. 2003. "Measuring Chronic Patients' Feelings of Being a Burden to Their Caregivers." Medical Care 41:110-8.

Creswell, J. 2008. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches. Thousand Oaks, CA: Sage.

Creswell, J. W., A. C. Klassen, V. L. Plano Clark, and K. C. Smith for the Office of Behavioral and Social Sciences Research. 2011. Best Practices for Mixed Methods Research in the Health Sciences. National Institutes of Health [accessed on October 18, 2011]. Available at: http://obssr.od.nih.gov/mixed_methods_research

Creswell, J., and V. Plano Clark. 2011. Designing and Conducting Mixed Methods Research. 2nd Edition. Thousand Oaks, CA: Sage.

Creswell, J. W., and A. Tashakkori. 2008. "Developing Publishable Mixed Methods Manuscripts." Journal of Mixed Methods Research 1:107-11.

Curry, L. A., I. M. Nembhard, and E. H. Bradley. 2009. "Qualitative and Mixed Methods Provide Unique Contributions to Outcomes Research." Circulation 119: 1442-52.

Dranove, D., E. F. X. Hughes, and M. Shanley. 2003. "Determinants of HMO Formulary Adoption Decisions." Health Services Research 38 (1): 169-90.

Denzin, N. K., and Y. S. Lincoln (Eds.). 1994. Handbook of Qualitative Research. Thousand Oaks, CA: Sage.

Devers, K.J., H. H. Pham, and G. Liu. 2004. "What Is Driving Hospitals' Patient-Safety Efforts?" Health Affairs 23 (2): 103-15.

Felt-Lisk, S., G. Gimm, and S. Peterson. 2007. "Making Pay-For-Performance Work in Medicaid." Health Affairs 26:w516-27.

Field, M.J., R. E. Tranquada, and J. C. Feasley (Eds.). 1995. Health Services Research: Workforce and Educational Issues. Washington, DC: National Academies Press.

Giacomini, M. K., and D.J. Cook; for the Evidence-based Medicine Working Group. 2000. "Users' Guides to the Medical Literature. XXIII. Qualitative Research in Health Care A. Are the Results of the Study Valid?" Journal of the American Medical Association 284: 357-62.

Greene, J. C., V. J. Caracelli, and W. F. Graham. 1989. "Toward a Conceptual Framework for Mixed-Method Evaluation." Educational Evaluation and Policy Analysis 11: 255-74.

Guba, E. G., and Y. S. Lincoln. 1989. Fourth Generation Evaluation. Newbury Park, CA: Sage.

Hall, K. B., S. E. Tett, and L. M. Nissen. 2006. "Perceptions of the Influence of Prescription Medicine Samples on Prescribing by Family Physicians." Medical Care 44 (4): 383-7.

Hibbard, J. H., J. Stockard, E. R. Mahoney, and M. Tusler. 2004. "Development of the Patient Activation Measure (PAM): Conceptualizing and Measuring Activation in Patients and Consumers." Health Services Research 39 (4, part 1): 1005-26.

Hoadley, J. F., P. Cunningham, and M. McHugh. 2004. "Popular Medicaid Programs Do Battle with State Budget Pressures: Perspectives from Twelve States." Health Affairs 23 (2): 143-54.

Institute for Scientific Information. 2007. "Web of Knowledge Journal Citation Report for Health Care Sciences and Services" [accessed on October 18, 2011]. Available at http://admin-apps.isiknowledge.com/JCR/JCR?RQ=LIST_SUMMARY JOURNAL

Johnson, R. B., A. J. Onwuegbuzie, and L. A. Turner. 2007. "Toward a Definition of Mixed Methods Research." Journal of Mixed Methods Research 1 (2): 112-33.

Leech, N. L., A. B. Dellinger, K. B. Brannigan, and H. Tanaka. 2009. "Evaluating Mixed Research Studies: A Mixed Methods Approach." Journal of Mixed Methods Research 4 (1): 17-31.

Leech, N. L., and A.J. Onwuegbuzie. 2009. "A Typology of Mixed Methods Research Designs." Quality & Quantity: International Journal of Methodology 43: 265-75.

--. 2010. "Guidelines for Conducting and Reporting Mixed Research in the Field of Counseling and Beyond." Journal of Counseling and Development 89 (1): 61-70.

Leech, N. L., A.J. Onwuegbuzie, and J. P. Combs. 2011. "Writing Publishable Mixed Research Articles: Guidelines for Emerging Scholars in the Health Sciences and Beyond." Mixed Methods Research in the Health Sciences 5 (1): 7-24.

Levine, C., S. M. Albert, A. Hokenstad, D. E. Halper, A. Y. Hart, and D. A. Gould. 2006. "'This Case Is Closed': Family Caregivers and the Termination of Home Health Care Services for Stroke Patients." Milbank Quarterly 84 (2): 305-31.

Lincoln, Y. S., and E. G. Guba. 1985. Naturalistic Inquiry. Beverly Hills, CA: Sage.

Lohr, K. N., and D. M. Steinwachs. 2002. "Health Services Research: An Evolving Definition of the Field." Health Services Research 37: 15-7.

Malterud, K. 2001. "Qualitative Research: Standards, Challenges, and Guidelines." Lancet 358: 483-8.

Maxwell, J. A. 2005. Qualitative Research Design: An Interactive Approach. 2nd Edition. Newbury Park, CA: Sage.

McKibbon, K. A., and C. S. Gadd. 2004. "A Quantitative Analysis of Qualitative Studies in Clinical Journals for the 2000 Publishing Year." BioMed Central Medical Informatics and Decision Making 4 (11) [accessed on October 18, 2011]. Available at http://www.biomedcentral.com/1472-6947/4/11

Messick, S. 1989. "Validity." In Educational Measurement, 3rd Edition, edited by R. L. Linn, pp. 13-103. Old Tappan, NJ: Macmillan.

--. 1995. "Validity of Psychological Assessment: Validation of Inferences from Persons' Responses and Performances as Scientific Inquiry into Score Meaning." American Psychologist 50: 741-9.

Miles, M., and A. M. Huberman. 1994. Qualitative Data Analysis: An Expanded Sourcebook. 2nd Edition. Thousand Oaks, CA: Sage.

O'Cathain, A. 2010. "Assessing the Quality of Mixed Methods Research: Toward a Comprehensive Framework." In Sage Handbook of Mixed Methods in Social and Behavioral Research, 2nd Edition, edited by A. Tashakkori and C. Teddlie, pp. 531-57. Thousand Oaks, CA: Sage.

O'Cathain, A., E. Murphy, and J. Nicholl. 2007. "Integration and Publications as Indicators of 'Yield' from Mixed Methods Studies." Journal of Mixed Methods Research 1 (2): 147-63.

Onwuegbuzie, A.J. 2003. "Expanding the Framework of Internal and External Validity in Quantitative Research." Research in the Schools 10 (1): 71-90.

Onwuegbuzie, A.J., and K. M. T. Collins. 2007. "A Typology of Mixed Methods Sampling Designs in Social Science Research." Qualitative Report 12: 281-316.

Onwuegbuzie, A.J., and L. G. Daniel. 2002. "A Framework for Reporting and Interpreting Internal Consistency Reliability Estimates." Measurement and Evaluation in Counseling and Development 35: 89-103.

--. 2004. "Reliability Generalization: The Importance of Considering Sample Specificity, Confidence Intervals, and Subgroup Differences." Research in the Schools 11 (1): 61-72.

Padgett, D. 2008. Qualitative Methods in Social Work Research. 2nd Edition. Thousand Oaks, CA: Sage.

Pbert, L., N. Vukovic, J. K. Ockene, J. F. Hollis, and K. Riedlinger. 2003. "Developing and Testing New Smoking Measures for the Health Plan Employer Data and Information Set." Medical Care 41 (4): 550-9.

Plano Clark, V. L. 2010. "The Adoption and Practice of Mixed Methods: U.S. Trends in Federally Funded Health-Related Research." Qualitative Inquiry 16: 428-40.

Pluye, P., M. Gagnon, F. Griffiths, and J. Johnson-Lafleur. 2009. "A Scoring System for Appraising Mixed Methods Research and Concomitantly Appraising Qualitative, Quantitative and Mixed Methods Primary Studies in Mixed Studies Reviews." International Journal of Nursing Studies 46: 529-46.

Pope, C., and N. Mays. 2006. Qualitative Research in Health Care. 3rd Edition. Malden, MA: Blackwell Publishing.

Powell, H., S. Mihalas, A.J. Onwuegbuzie, S. Suldo, and C. E. Daley. 2008. "Mixed Methods Research in School Psychology: A Mixed Methods Investigation of Trends in the Literature." Psychology in the Schools 45: 291-309.

Rosenthal, M. B., B. E. Landon, K. Howitt, H. R. Song, and A. M. Epstein. 2007. "Climbing Up the Pay-For-Performance Learning Curve: Where Are The Early Adopters Now?" Health Affairs 26 (6): 1674-82.

Sale, J. E. M., and K. Brazil. 2004. "A Strategy to Identify Critical Appraisal Criteria for Primary Mixed-Method Studies." Quality and Quantity 38 (4): 351-65.

Schore, J., L. Foster, and B. Phillips. 2007. "Consumer Enrollment and Experiences in the Cash and Counseling Program." Health Services Research 42 (1, part 2): 446-66.

Shortell, S. 1999. "The Emergence of Qualitative Methods in Health Services Research." Health Services Research 34: 1083-90.

Tashakkori, A., and C. Teddlie (Eds.). 2010. Sage Handbook of Mixed Methods in Social and Behavioral Research. 2nd Edition. Thousand Oaks, CA: Sage.

Teddlie, C., and A. Tashakkori. 2009. Foundations of Mixed Methods Research: Integrating Quantitative and Qualitative Approaches in the Social and Behavioral Sciences. Thousand Oaks, CA: Sage.

SUPPORTING INFORMATION

Additional supporting information may be found in the online version of this article:

Appendix SA1: Author Matrix.

Please note: Wiley-Blackwell is not responsible for the content or functionality of any supporting materials supplied by the authors. Any queries (other than missing material) should be directed to the corresponding author for the article.

Address correspondence to Jennifer P. Wisdom, Ph.D., M.P.H., Psychiatry Department, Columbia University and New York State Psychiatric Institute, 1051 Riverside Drive Box 100, New York, NY 10032; e-mail: jpw2129@columbia.edu. Mary A. Cavaleri, Ph.D., L.C.S.W., is with the Psychiatry Department at Columbia University and New York State Psychiatric Institute, New York, NY. Anthony J. Onwuegbuzie, Ph.D., is with the Department of Educational Leadership and Counseling at Sam Houston State University, Huntsville, TX. Carla A. Green, Ph.D., M.P.H., is with the Kaiser Permanente Northwest Center for Health Research, Portland, OR.
Table 1: Type and Design of Empirical Articles Published in
Health Services Research Journals from 2003 to 2007, Data
Presented by Journal

Journal Quant Qual Mixed Total

Health Affairs 305 49 21 375
 81.33% 13.07% 5.60%

Health Services Research 428 26 17 471
 90.87% 5.52% 3.61%

Medical Care 751 12 6 769
 97.66% 1.56% 0.78%

Milbank Quarterly 18 15 3 36
 50.00% 41.67% 8.33%

Total 1,502 102 47 1,651
 90.98% 6.18% 2.85%

Note. Mixed, mixed method articles; Qual, qualitative articles;
Quant, quantitative articles.

Table 2: Type and Design of Empirical Articles Published in Four
Health Services Research Journals from 2003 to 2007, Data Presented
by Year

Year Quant Qual Mixed Total

2003 260 21 7 288
 90.28% 7.29% 2.43%

2004 295 18 13 326
 90.49% 5.52% 3.99%

2005 282 17 8 307
 91.86% 5.54% 2.61%

2006 321 25 10 356
 90.17% 7.02% 2.81%

2007 344 21 9 374
 91.98% 5.61% 2.41%

Total 1,502 102 47 1,651
 90.98% 6.18% 2.85%

Note. Mixed, mixed method articles; Qual, qualitative articles;
Quant, quantitative articles.

Table 3: Key Methodological Components in Mixed Methods,
Quantitative, and Qualitative Health Services Research Articles

 Mixed Method Studies (n = 47)

 % with
 Yes No N/A Component ([dagger])
Key quantitative
methodological components

 Truth value (internal
 validity)
 Ethical review undertaken 9 37 1 19.57
 Informed consent stated 5 21 21 19.23
 Identifying or controlling 7 40 0 14.89
 for extraneous/
 confounding
 variables ***
 Confidentiality protected 3 42 2 6.67
 Comparability of control 0 0 47 0.00
 to intervention
 groups at baseline
 Control/comparison groups 0 0 47 0.00
 treated similarly
Applicability (external
 validity/generalizability)
 Outcome measures defined 7 0 40 100.00
 Control/comparison group 2 0 45 100.00
 described
 Data collection instruments 29 18 0 61.70
 /source of data
 described ***
 Statement of purpose/ 28 19 0 59.57
 objective **
 Source of subjects stated 27 19 1 58.70
 (sampling frame) **
 Study population defined or 24 23 0 51.06
 described ***
 Source of control/comparison 1 1 45 50.00
 group stated
 Selection of control/ 1 1 45 50.00
 comparison group
 described
 Data gathering procedures 23 24 0 48.94
 described *
 Description of setting/ 22 24 1 47.83
 conditions under which
 data collected *
 Statistical procedures 19 28 0 40.43
 referenced or
 described ***
 Subject recruitment or 17 30 0 36.17
 sampling selection
 described ***
 Statement about 16 31 0 34.04
 nonrespondents, dropouts,
 deaths
 p-Values stated *** 16 31 0 34.04
 Both statistical and 13 34 0 27.66
 clinical significance
 acknowledged ***
 Study design stated 11 36 0 23.40
 explicitly **
 Inclusion/exclusion criteria 10 36 1 21.74
 stated explicitly ***
 Missing data addressed 10 37 0 21.28
 At least one hypothesis 10 37 0 21.28
 stated *
 Sample randomly selected 6 39 2 13.33
 Confidence intervals given 5 42 0 10.64
 for main results ***
 Power calculation provided 1 46 0 2.13
 Description of intervention 0 2 45 0.00
 Assessment of outcome 0 0 47 0.00
 blinded
 Consistency (reliability)
 Standardization of observers 3 44 0 6.38
 described
 Neutrality (objectivity)
 Statement of researcher's 5 42 0 10.64
 assumptions/perspective

Key qualitative methodological
components

 Truth value (credibility)
 Triangulation of qualitative 25 22 0 53.19
 sources
 Triangulation of qualitative 16 31 0 34.04
 methods
 Use of exemplars 13 34 0 27.66
 Ethical review undertaken 10 37 0 21.28
 Triangulation of 7 40 0 14.89
 investigators
 Informed consent stated 6 41 0 12.77
 Member checks 4 43 0 8.51
 Confidentiality protected 4 43 0 8.51
 Consent procedures described 3 44 0 6.38
 Peer debriefing 2 45 0 4.26
 Negative case analysis 1 46 0 2.13
 (searching for
 disconfirming evidence)
 Triangulation of theory/ 0 47 0 0.00
 perspective

Applicability (transferability
/fittingness)

 Statement of purpose 34 13 0 72.34
 /objective
 Data gathering procedures 25 22 0 53.19
 described *
 Description of study context 20 27 0 42.55
 or setting ***
 Phenomenon of study stated 18 29 0 38.30
 Sampling procedure described 18 29 0 38.30
 Rationale for qualitative 17 30 0 36.17
 methods
 Description of participants 16 31 0 34.04
 /informants *
 Statement of research 15 32 0 31.91
 questions
 Statement of how setting 15 32 0 31.91
 was selected
 Data analysis described 15 32 0 31.91
 Transcription procedures 11 36 0 23.40
 described
 Coding techniques described 9 38 0 19.15
 Justification or rationale 8 39 0 17.02
 for sampling strategy *
 Audiotaping procedures 8 39 0 17.02
 described
 Statement about 6 41 0 12.77
 nonrespondents, dropouts,
 deaths
 Description of raw data 3 44 0 6.38
 Rationale for tradition 2 45 0 4.26
 within qualitative
 methods
 Data collection to 2 45 0 4.26
 saturation specified
 Statement that reflexive 2 45 0 4.26
 journals, logbooks,
 notes were kept
Consistency (dependability)
 External audit of process 0 47 0 0.00
Neutrality (confirmability)
 External audit of data 2 45 0 4.26
 Bracketing or epoche 0 47 0 0.00
 Statement of researcher's 0 47 0 0.00
 assumptions or
 perspective

Key mixed methods
methodological components

 Integration of qualitative 40 7 -- 85.11
 and quantitative
 components
 Sequence of methods 10 37 -- 27.03
 specified
 Areas of consistency between 12 35 -- 25.53
 methods stated
 Areas of inconsistency 6 41 -- 12.77
 between methods stated
 Stage of integration 5 42 -- 11.90
 specified
 Priority of methods 2 45 -- 4.44
 specified
 Purpose of mixing methods 2 45 -- 4.44
 specified
 Limitations of mixed methods 2 45 -- 4.26
 stated

 Quantitative Studies (n = 47)

 % with
 Yes No N/A Component ([dagger])
Key quantitative
methodological components

 Truth value (internal
 validity)
 Ethical review undertaken 9 37 1 19.57
 Informed consent stated 5 38 4 11.63
 Identifying or controlling 33 14 0 70.21
 for extraneous/
 confounding
 variables ***
 Confidentiality protected 2 42 3 4.55
 Comparability of control 8 36 3 18.18
 to intervention
 groups at baseline
 Control/comparison groups 3 40 4 6.98
 treated similarly
Applicability (external
 validity/generalizability)
 Outcome measures defined 43 3 1 93.48
 Control/comparison group 11 33 3 25.00
 described
 Data collection instruments 46 1 0 97.87
 /source of data
 described ***
 Statement of purpose/ 40 7 0 85.11
 objective **
 Source of subjects stated 41 6 0 87.23
 (sampling frame) **
 Study population defined or 43 4 0 91.49
 described ***
 Source of control/comparison 8 36 3 18.18
 group stated
 Selection of control/ 8 36 3 18.18
 comparison group
 described
 Data gathering procedures 33 14 0 70.21
 described *
 Description of setting/ 32 15 0 68.09
 conditions under which
 data collected *
 Statistical procedures 45 2 0 95.74
 referenced or
 described ***
 Subject recruitment or 35 12 0 74.47
 sampling selection
 described ***
 Statement about 21 25 1 45.65
 nonrespondents, dropouts,
 deaths
 p-Values stated *** 41 6 0 87.23
 Both statistical and 41 6 0 87.23
 clinical significance
 acknowledged ***
 Study design stated 26 21 0 55.32
 explicitly **
 Inclusion/exclusion criteria 28 19 0 59.57
 stated explicitly ***
 Missing data addressed 18 29 0 38.30
 At least one hypothesis 23 24 0 48.94
 stated *
 Sample randomly selected 12 35 0 25.53
 Confidence intervals given 26 21 0 55.32
 for main results ***
 Power calculation provided 7 40 0 14.89
 Description of intervention 7 36 4 16.28
 Assessment of outcome 2 41 4 4.65
 blinded
 Consistency (reliability)
 Standardization of observers 7 40 0 14.89
 described
 Neutrality (objectivity)
 Statement of researcher's 4 43 0 8.51
 assumptions/perspective

Key qualitative methodological components: (no entries)

Key mixed methods methodological components: (no entries)

 Qualitative Studies (n = 47)

 % with
 Yes No N/A Component ([dagger])
Key quantitative methodological components: (no entries)

Key qualitative methodological
components

 Truth value (credibility)
 Triangulation of qualitative 27 20 0 57.45
 sources
 Triangulation of qualitative 13 34 0 27.66
 methods
 Use of exemplars 14 33 0 29.79
 Ethical review undertaken 8 30 9 21.05
 Triangulation of 3 44 0 6.38
 investigators
 Informed consent stated 3 35 9 7.89
 Member checks 2 45 0 4.26
 Confidentiality protected 3 35 9 7.89
 Consent procedures described 2 36 9 5.26
 Peer debriefing 0 47 0 0.00
 Negative case analysis 0 47 0 0.00
 (searching for
 disconfirming evidence)
 Triangulation of theory/ 4 43 0 8.51
 perspective

Applicability (transferability
/fittingness)

 Statement of purpose 36 11 0 76.60
 /objective
 Data gathering procedures 36 11 0 76.60
 described *
 Description of study context 38 9 0 80.43
 or setting ***
 Phenomenon of study stated 24 23 0 51.06
 Sampling procedure described 22 23 2 48.89
 Rationale for qualitative 12 35 0 25.53
 methods
 Description of participants 25 20 2 55.56
 /informants *
 Statement of research 21 26 0 44.68
 questions
 Statement of how setting 30 17 0 63.04
 was selected
 Data analysis described 20 27 0 42.55
 Transcription procedures 13 28 6 31.71
 described
 Coding techniques described 17 30 0 36.17
 Justification or rationale 18 27 2 40.00
 for sampling strategy *
 Audiotaping procedures 12 29 6 29.27
 described
 Statement about 4 34 9 10.53
 nonrespondents, dropouts,
 deaths
 Description of raw data 4 43 0 8.51
 Rationale for tradition 2 45 0 4.26
 within qualitative
 methods
 Data collection to 1 44 2 2.22
 saturation specified
 Statement that reflexive 3 44 0 6.38
 journals, logbooks,
 notes were kept
Consistency (dependability)
 External audit of process 0 47 0 0.00
Neutrality (confirmability)
 External audit of data 0 47 0 0.00
 Bracketing or epoche 0 47 0 0.00
 Statement of researcher's 2 45 0 4.26
 assumptions or
 perspective

Key mixed methods methodological components: (no entries)

Note. * p < .05;

** p < .01;

*** p < .001.

([dagger]) Percent with quality indicator is calculated as
n(yes)/(n - n(N/A)).
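As a check on the ([dagger]) footnote, the percentages in Table 3 can be reproduced from each row's Yes/No/N/A counts. A minimal sketch (the function name is illustrative; the two example rows are taken from the mixed methods panel above):

```python
# Table 3 footnote: percent with component = n(yes) / (n - n(N/A)),
# where n is the row total (yes + no + N/A).
def percent_with_component(n_yes: int, n_no: int, n_na: int) -> float:
    n = n_yes + n_no + n_na              # total articles scored for this row
    return round(100 * n_yes / (n - n_na), 2)

# Rows from the mixed methods panel (n = 47):
print(percent_with_component(9, 37, 1))   # ethical review undertaken -> 19.57
print(percent_with_component(5, 21, 21))  # informed consent stated -> 19.23
```

Both values match the "% with Component" column, confirming that N/A articles are excluded from the denominator rather than counted as "No."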
COPYRIGHT 2012 Health Research and Educational Trust
Title Annotation: METHODS ARTICLE
Author: Wisdom, Jennifer P.; Cavaleri, Mary A.; Onwuegbuzie, Anthony J.; Green, Carla A.
Publication: Health Services Research
Date: April 1, 2012