
Deployment of a mixed-mode data collection strategy does not reduce nonresponse bias in a general population health survey.

Survey participation is declining (Hox and de Leeuw 1994; Hartge 1999; Steeh et al. 2001; de Leeuw and de Heer 2002; Tickle et al. 2003; Curtin, Presser, and Singer 2005; Morton, Cahill, and Hartge 2006; Berk, Schur, and Feldman 2007); this trend is of great concern because response rate is the most widely used measure of survey quality (Atrostic et al. 2001) and nonresponse bias can be a serious threat to the validity of survey estimates (Sackett 1979; Barton et al. 1980). In an effort to increase response rates, and potentially reduce nonresponse bias, household surveys are increasingly turning to mixed-mode designs, whereby instruments are designed to be administered in more than one mode (mail, web, telephone, and/or in-person) and respondents are allowed to respond in the mode of their choice (De Leeuw 2005; Dillman, Smyth, and Christian 2009b). The attraction of mixed-mode designs is that the characteristics of nonrespondents may vary by the mode of data collection (Groves 2006), so a second mode may bring in different types of respondents. For this reason (among others), the data collection protocols for three major surveys, the Consumer Assessment of Healthcare Providers and Systems, the Experience of Care and Health Outcomes studies, and the American Community Survey (ACS), call for an initial contact by mail with telephone follow-up to encourage initial nonrespondents to mail in their completed questionnaires or to complete a telephone interview.

Available evidence supports the notion that some respondents exhibit mode preferences (Siemiatycki 1979; Brambilla and McKinlay 1987; Link and Mokdad 2005) and that a sequential strategy of multiple contacts, which allows prospective respondents to reply in a particular mode, improves response rates. For example, in work evaluating the effect of pairing a mixed mail and telephone methodology with a prepaid cash incentive in a survey of Medicaid enrollees, response rates increased considerably after telephone follow-up, from 54 to 69 percent in the incentive condition and from 45 to 64 percent in the nonincentive condition (Beebe et al. 2005). Similarly, Gallagher, Fowler, and Stringfellow (2000) found that approximately 34 percent of a sample of Medicaid enrollees responded to a mailed survey and another 10-13 percent responded by telephone. Finally, the ACS, a large national demographic survey conducted by the U.S. Census Bureau, achieves a response rate of 56.2 percent to an initial mailed survey, which increases to 63.5 percent after telephone follow-up and to a final 95.4 percent after face-to-face interviews (Griffin and Obenski 2002).

Although these studies demonstrate the ability of mixed-mode surveys to increase response rates, they do not clarify their effect on nonresponse bias because little information on nonrespondents is available. Some research suggests that switching modes does bring in a different population from the one that responds to the initial mode. For example, Fowler et al. (2002) found that telephone interviews with mail nonrespondents produced a less biased final sample in terms of gender and age in a sample of 800 health plan enrollees. In one of the few mixed-mode studies to have more detailed health-related information on the full sample of 1,900 adult patients enrolled in a randomized controlled trial to promote smoking cessation, a telephone-followed-by-mail design improved representativeness in a number of health-related areas, such as seeking treatment, cardiopulmonary comorbidities, and substance abuse (Baines et al. 2007). However, these studies had limited information on respondents and nonrespondents (Fowler et al. 2002), used an atypical sequential strategy (i.e., telephone followed by mail rather than mail followed by telephone) (Baines et al. 2007), and focused on specialized patient populations (Fowler et al. 2002; Baines et al. 2007), all of which render the generalizability of their results unclear.

This article reports a systematic analysis of nonresponse bias in a general population survey that used a mixed-mode, mail-followed-by-telephone data collection approach, drawing on extensive sociodemographic and health-related information on both respondents and nonrespondents. Our primary focus is to assess whether nonresponse bias was reduced by the use of this mixed-mode, mail and telephone design.

METHODS

Survey and Procedures

The data on response status come from a sequential mixed-mode, mail and telephone survey on recent gastrointestinal symptoms conducted between September 2005 and April 2006 by the Mayo Clinic Survey Research Center. Further details of the study and its methods are available elsewhere (Beebe et al. 2007, 2011). The survey population included noninstitutionalized residents of Olmsted County, Minnesota, aged 18 and older, as identified in a purchased list-based sample.

The study population comprises the 6,939 eligible cases that were sent a mailed survey packet. Initial nonrespondents were sent a second survey 3 weeks later, and a telephone interview was attempted approximately 2 weeks after that for remaining nonrespondents. The overall response rate for the survey was 51.2 percent (American Association for Public Opinion Research 2006). The cumulative response rates after the first and second mailings were 24.1 and 38.3 percent, respectively.
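These phase-specific figures are consistent with the respondent counts in Table 1 (1,617 first-mailing, 957 second-mailing, and 863 telephone respondents among the 6,716 REP-matched cases). A minimal Python sketch of that arithmetic follows; it is illustrative only, since the published rates follow AAPOR (2006) definitions and the official denominator may differ slightly.

```python
# Back-of-the-envelope check of the cumulative response rates, using the
# REP-matched counts reported in Table 1 (1,617 + 957 + 863 respondents out
# of 6,716 cases). Illustrative only; the published rates follow AAPOR (2006)
# definitions, so the official denominator may differ slightly.
eligible = 6716
respondents_by_phase = [("first mailing", 1617), ("second mailing", 957), ("telephone", 863)]

cumulative = 0
for phase, n in respondents_by_phase:
    cumulative += n
    print(f"after {phase}: {cumulative}/{eligible} = {100 * cumulative / eligible:.1f}%")
# Prints roughly 24.1%, 38.3%, and 51.2%, matching the rates quoted above.
```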

The sampling frame for the study was linked to administrative data from the Rochester Epidemiology Project (REP). Each health care provider in Olmsted County (home of Mayo Clinic, Olmsted Medical Center, and the Rochester Family Medicine Clinic) uses a unit medical record system whereby all data collected on an individual are assembled in one place. Each participating site also solicits and documents permission from patients for their records to be used; currently, 95 percent of patients have granted this permission. The REP includes medical diagnoses, hospital admissions and surgical procedures, and demographic information. Overall, at least 98 percent of the Olmsted County population has been seen by a REP provider at some point (Melton 1996; St Sauver et al. 2011). Approximately 97 percent of the cases in the sample file were matched to members in the REP database, and primary analyses focused on the 6,716 individuals for whom health care information was available. This study was approved by the Mayo Clinic and Olmsted Medical Center IRBs.

Measures

Respondents include those who completed a mailed survey or telephone interview (at least two-thirds of the items completed). Nonrespondents include those who refused or could not be contacted. Respondents are further categorized by whether they completed the survey at the first or second mailings, or completed the telephone interview.

Selected demographic variables were obtained from the REP frame, including age, gender, and race/ethnicity. Race/ethnicity was classified as white versus other because sample sizes did not permit analysis of specific minority cultural groups. All medical and surgical diagnoses received by patients at a health care site participating in the REP are coded using either the Hospital Adaptation of the International Classification of Diseases (Commission on Professional and Hospital Activities 1973) or International Classification of Diseases, 9th Revision (ICD-9), codes. Also included was the formal diagnosis in the past decade of a number of disease statuses (see Table 1), dichotomized as the presence or absence of each condition. The severity-weighted Charlson Index (Charlson et al. 1987; Deyo, Cherkin, and Ciol 1992) based on these diagnoses was used to provide a summary score of comorbidity. The Charlson measure is an effective method of estimating future morbidity and mortality in longitudinal studies (Charlson et al. 1987) and therefore has utility as a measure of current health.
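For readers unfamiliar with the index, the sketch below illustrates how a severity-weighted Charlson score can be assembled from dichotomous diagnosis flags. The weights shown are the commonly cited Charlson/Deyo values for the conditions listed in Table 1; this is a sketch of the general technique, not a reproduction of the authors' REP coding.

```python
# A minimal sketch (not the authors' REP coding) of a severity-weighted
# Charlson comorbidity score built from dichotomous diagnosis flags.
# Weights follow the commonly cited Charlson/Deyo values for the conditions
# listed in Table 1.
CHARLSON_WEIGHTS = {
    "myocardial_infarct": 1, "congestive_heart_failure": 1,
    "peripheral_vascular_disease": 1, "cerebrovascular_disease": 1,
    "chronic_pulmonary_disease": 1, "ulcer": 1, "mild_liver_disease": 1,
    "diabetes": 1, "rheumatologic_disease": 1,
    "diabetes_with_organ_damage": 2, "moderate_severe_renal_disease": 2,
    "other_cancer": 2,
    "moderate_severe_liver_disease": 3,
    "metastatic_solid_tumor": 6,
}

def weighted_charlson(diagnoses):
    """Sum the weights of the conditions flagged as present.

    Full implementations also let the severe form of a condition supersede the
    milder one (e.g., diabetes with organ damage over uncomplicated diabetes);
    that refinement is omitted here for brevity.
    """
    return sum(w for cond, w in CHARLSON_WEIGHTS.items() if diagnoses.get(cond, False))

# Example: congestive heart failure (1) + diabetes with organ damage (2) = 3,
# which falls in the "2+" category analyzed in Table 1.
print(weighted_charlson({"congestive_heart_failure": True,
                         "diabetes_with_organ_damage": True}))  # 3
```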

Also ascertained was whether each subject had a surgical or nonsurgical procedure at one of the hospitals in Olmsted County in the past decade. Finally, the numbers of emergency room (ER) visits, outpatient clinic visits, and hospital admissions during the 2 years in which the survey was in the field (2005 and 2006) were calculated. Utilization was dichotomized, with cut-offs chosen to facilitate analysis and interpretation, informed by the items' marginal distributions to identify natural breaks, and designed to accord with prior authorization studies in Olmsted County using the REP (Jacobsen et al. 1999).
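The cut-offs themselves can be read from Table 1: three or more clinic office visits, any ER visit, any hospital admission, and one or more procedures in the previous decade. A small sketch of that dichotomization, with hypothetical field names, follows.

```python
# Sketch of the utilization dichotomization, with hypothetical field names;
# cut-offs are those shown in Table 1.
def dichotomize_utilization(clinic_visits, er_visits, hospital_admissions, procedures_10yr):
    """Collapse utilization counts into the binary indicators analyzed in Table 1
    (visit and admission counts cover 2005-2006; procedures cover the prior decade)."""
    return {
        "clinic_3plus": clinic_visits >= 3,        # 3+ clinic office visits
        "any_er": er_visits >= 1,                  # any ER admission
        "any_hospital": hospital_admissions >= 1,  # any hospital admission
        "procedures_1plus": procedures_10yr >= 1,  # 1+ procedures in last 10 years
    }

print(dichotomize_utilization(clinic_visits=5, er_visits=0,
                              hospital_admissions=1, procedures_10yr=2))
```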

Statistical Analysis

The key research question was, "What effect did deploying a mixed-mode, mail and telephone data collection strategy have on nonresponse bias?" Using the distribution of the total eligible sample as the population estimate, we used chi-square goodness-of-fit tests to compare respondents by mode (first mailing, second mailing, and phone) to the population. Although throughout we refer to the survey protocol as reflecting a mixed-mode design, we acknowledge that part of our analysis, comparing response patterns between the first and second mailings, is not an evaluation of mixing modes but rather an evaluation of a second contact in the same mode. Multivariable logistic regression analysis, including all sociodemographic and health-related variables, was used to assess whether our mixed-mode design affected sample representation across data collection phases. Three regression models were analyzed, considering three outcomes: (1) the probability of responding to the first mailing (versus second mailing, phone, or nonresponse), (2) the probability of responding to either mailing (versus phone or nonresponse), and (3) the probability of any response (versus nonresponse). Odds ratios (adjusted for all predictors included in the model) and 95 percent confidence intervals were estimated. All analyses were performed using SAS v. 9.1 software; p-values less than 0.05 were considered statistically significant.
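To make the analysis plan concrete, the sketch below mirrors its structure on simulated data: a chi-square goodness-of-fit test comparing one response phase to the total eligible sample, followed by three logistic regressions with nested response definitions. The authors used SAS v. 9.1; this Python (pandas/scipy/statsmodels) version, its variable names, and its simulated values are illustrative assumptions, not their code or data.

```python
# Sketch of the analysis structure on simulated data; illustrative only.
import numpy as np
import pandas as pd
from scipy.stats import chisquare
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "phase": rng.choice(["mail1", "mail2", "phone", "nonresponse"],
                        size=n, p=[0.24, 0.14, 0.13, 0.49]),
    "male": rng.integers(0, 2, n),
    "white": rng.integers(0, 2, n),
    "clinic_3plus": rng.integers(0, 2, n),
    "any_er": rng.integers(0, 2, n),
})

# (1) Chi-square goodness-of-fit: compare one response phase (here, the first
# mailing) to the total eligible sample on a categorical variable (here, gender).
mail1 = df[df["phase"] == "mail1"]
observed = mail1["male"].value_counts().sort_index()
expected = df["male"].value_counts(normalize=True).sort_index() * len(mail1)
print(chisquare(f_obs=observed, f_exp=expected))

# (2) Three multivariable logistic regressions with nested response definitions.
# With purely random data the odds ratios hover around 1; the point is the
# structure of the comparisons, not the estimates.
outcomes = {
    "Model 1: first mailing vs. all others": df["phase"].eq("mail1"),
    "Model 2: either mailing vs. all others": df["phase"].isin(["mail1", "mail2"]),
    "Model 3: any response vs. nonresponse": df["phase"].ne("nonresponse"),
}
for label, y in outcomes.items():
    data = df.assign(responded=y.astype(int))
    fit = smf.logit("responded ~ male + white + clinic_3plus + any_er",
                    data=data).fit(disp=0)
    ors = pd.concat([np.exp(fit.params), np.exp(fit.conf_int())], axis=1)
    ors.columns = ["OR", "2.5%", "97.5%"]
    print(label, ors.round(2), sep="\n")
```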

RESULTS

Table 1 assesses the differences between respondents and the population by response mode, comparing respondents reached after the first mailing, after the second mailing, and via telephone (the first three columns) to the total eligible sample. Demographically, men are under-represented in the first mail contact, with 48.9 percent being male compared with 52.6 percent of the population. Older people, particularly those over 65, and white individuals are over-represented in the first mail contact.

With respect to health status, individuals with a severity-weighted Charlson score of two or more are over-represented by about 12 percent in the first mail contact. For most of the measured health conditions, the sample reached by mail (either contact) closely matched the population, with the exception of other cancer: the sample responding to the first mailing was significantly more likely to have other cancer (15 percent) than the population (11.8 percent), an over-representation of approximately 27 percent. The telephone mode brought in respondents who were less representative of the population on some of the other health conditions; that is, telephone respondents were less likely to have congestive heart failure, cerebrovascular disease, moderate/severe renal disease, and other cancer than the total eligible sample. With respect to office visits and procedures, early respondents were heavier utilizers than the population, as were, to a lesser degree, those responding to the second mailing with respect to office visits. The sample obtained with the second mail survey was less likely to have a hospital admission than the population. Finally, the sample obtained after the first mailing significantly under-represented individuals who had used the ER.
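As a point of clarification, the "approximately 27 percent" figure quoted above is a relative difference rather than an absolute percentage-point difference; a one-line check using the other cancer row of Table 1:

```python
# Relative over-representation of "other cancer" among first-mailing respondents
# (Table 1): 15.0% of first-mailing respondents vs. 11.8% of the total sample.
respondent_share, population_share = 0.150, 0.118
relative_overrepresentation = (respondent_share - population_share) / population_share
print(f"{relative_overrepresentation:.1%}")  # 27.1%
```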

Table 2 provides the results of the multivariable logistic regression analyses that included all sociodemographic and health-related variables. The first of the three regression analyses (Model 1) shows the likelihood of response to the first mail contact compared to not responding to that initial contact, revealing biases in the sample gathered from the first mail contact. Adjusting for selected demographics, health status and utilization, older adults (50-65 and 65+) are more likely to respond to the first mail contact than are 18- to <35-year-olds. White individuals are more likely to respond as are those with three or more office visits. Individuals with one or more ER visits are less likely to respond. Most important, the results indicate younger people, those from minority cultural groups, ER users, and those who have fewer doctor visits would have been under-represented if estimates had been based only on respondents to the initial mailing.

With a few exceptions, the above biases persist after considering the sample characteristics following the second mailing of the survey (Model 2), and after additional respondents completed the survey by phone (Model 3). Adding a second mailing and a phone mode did not measurably reduce the biases that were observed in the mail sample; however, it does appear to reduce, but not eliminate, the over-representation of older persons that was observed in the first mailing. The over-representation of high-utilizers of clinics and low users of the ER that was observed after the first mail contact remains substantially unchanged after the second mail attempt and after the phone mode.

Interestingly, individuals with congestive heart failure were less likely to be respondents once the mode switched to telephone (odds ratio [OR] = 0.61, p = 0.001), indicating that the third contact resulted in a respondent population that may be less representative of the underlying population. Of note, in a similar set of models that used the severity-weighted Charlson score as a predictor of response status instead of the individual diseases, all of the demographic and utilization relationships remained the same (data not shown). Across all three models, however, individuals with a weighted Charlson score of two or more were less likely to be respondents.

DISCUSSION

There is ample evidence that attaining high levels of survey participation is increasingly difficult (Hox and de Leeuw 1994; Hartge 1999; Steeh et al. 2001; Tickle et al. 2003; Curtin, Presser, and Singer 2005; Morton, Cahill, and Hartge 2006; Berk, Schur, and Feldman 2007) and that deployment of a mixed-mode data collection protocol can be an effective way of increasing survey response rates (Gallagher, Fowler, and Stringfellow 2000; Griffin and Obenski 2002; Beebe et al. 2005). However, emerging evidence suggests that a low response rate does not necessarily portend major study bias (Groves 2006; Groves and Peytcheva 2008), and there is little evidence that mixing modes minimizes such bias. In our general population survey, with an overall response rate of 51.2 percent, we found, contrary to expectations, that switching modes from a mail survey to a telephone interview did not uniformly increase the representativeness of the responding sample. Indeed, we found evidence that switching modes may make the sample less representative of the population in terms of at least one clinical variable. Incidentally, we also found that a second contact in the same mode did not increase sample representativeness either.

Our finding that switching modes did not increase the representativeness of the final sample runs counter to the few studies investigating this issue. In the two studies most similar to ours with respect to order of contact, this approach yielded a more representative sample, although only one of them had health and health care utilization information for nonrespondents (Gallagher, Fowler, and Stringfellow 1999; Fowler et al. 2002). However, neither study's population was representative of the general population, and such specialized populations may be more attuned to the nuances of data collection strategies and more susceptible to the deployment of specific modes. Tacit support for this notion is supplied by the juxtaposition of two studies deploying a mixed-mode design that is the converse of ours: initial telephone contact followed by another mode (e.g., mail, web). Whereas switching to a mailed survey after a telephone interview reached a segment of the population quite different from the segment that would have been reached through telephone alone among adult patients enrolled in a trial to promote treatment for relapsed smokers at five Veterans Administration centers (Baines et al. 2007), a similar effect was not seen in a similarly designed general population survey of close to 9,000 households, albeit in an area unrelated to health (Dillman et al. 2009a).

For general populations, switching modes may be more akin to a multiple-attempt strategy, perceived only as an increased effort on our part to enlist cooperation rather than as the introduction of a new method per se. As such, our results are more aligned with the literature investigating the effects of multiple attempts on response rates (Keeter et al. 2000; Davern et al. 2010). Additional measures to enlist participation, such as multiple contacts and/or switching modes, may actually bring in respondents for whom the topic is less salient, leading to an under-representation of those who are less healthy and higher utilizers. This interpretation is consistent with Leverage-Salience Theory proposed by Groves and colleagues (Groves, Singer, and Corning 2000; Groves, Presser, and Dipko 2004), which posits that survey features, such as mode, can have variable leverage for different types of sample members; switching modes may thus make a given survey more or less salient for certain types of people, increasing or decreasing their participation. Regardless of the cause, it appears that a mixed-mode approach is not an unqualified good for general population samples, particularly if the topic of the survey pertains to health.

In considering our findings, we note potentially important limitations. Our data may not be generalizable to the U.S. population because the study population is predominantly white and the prevalence of clinical disease may vary by ethnicity; at a minimum, however, our data are probably generalizable to the U.S. white population. Additionally, our study relied on the medical chart to determine disease status and utilization, which may be subject to underreporting of mild symptoms or conditions. However, we assume that more severe symptoms or disease conditions would have been charted and that utilization history was accurately characterized, as payment is based on such documentation. Finally, this relatively health-literate population has been heavily surveyed and lives in close proximity to a well-known medical center with close community ties, which may have reduced nonresponse bias; the results may not apply to all other U.S. population-based studies.

Survey researchers usually work with fixed resources and face difficult choices about how to allocate effort to maximize study goals. The choice to use multiple modes of data collection is increasingly popular because it is assumed to serve multiple goals. First, starting with a relatively inexpensive mode such as mail allows one to reach a substantial proportion of the sample at relatively low cost. Second, multiple modes are typically effective at achieving higher response rates. The research presented here, however, suggests that it is overly simplistic to assume that achieving higher response rates is in itself consistent with the goal of reduced bias. Finally, sample size is also an important goal of survey research, especially when it comes to providing precise estimates for small subpopulations. Balancing the competing goals of survey research will always prove difficult, but further study of which types of designs actually reduce nonresponse bias is essential for informed decisions about how to allocate effort.

ACKNOWLEDGMENTS

Joint Acknowledgment/Disclosure Statement: Supported by funds from the National Cancer Institute (R03 CA132974; PI: Beebe) and the Mayo Clinic Foundation for Education and Research. The study was made possible by the Rochester Epidemiology Project (R01 AG034676 from the National Institute on Aging; PI: Rocca).

Disclosures: None.

Disclaimers: None.

REFERENCES

American Association for Public Opinion Research. 2006. Standard Definitions: Final Dispositions of Case Codes and Outcome Rates for Surveys. Lenexa, KS: AAPOR.

Atrostic, B. K., N. Bates, G. Burt, and A. Silberstein. 2001. "Nonresponse in US Government Household Surveys: Consistent Measures, Recent Trends, and New Insights." Journal of Official Statistics 17: 209-26.

Baines, A. D., M. R. Partin, M. Davern, and T. H. Rockwood. 2007. "Mixed-Mode Administration Reduced Bias and Enhanced Poststratification Adjustments in a Health Behavior Survey." Journal of Clinical Epidemiology 60 (12): 1246-55.

Barton, J., C. Bain, C. H. Hennekens, B. Rosner, C. Belanger, A. Roth, and F. E. Speizer. 1980. "Characteristics of Respondents and Non-Respondents to a Mailed Questionnaire." American Journal of Public Health 70 (8): 823-5.

Beebe, T. J., M. E. Davern, D. D. McAlpine, K. T. Call, and T. H. Rockwood. 2005. "Increasing Response Rates in a Survey of Medicaid Enrollees: The Effect of a Prepaid Monetary Incentive and Mixed Modes (Mail and Telephone)." Medical Care 43 (4): 411-4.

Beebe, T. J., N. J. Talley, M. Camilleri, S. M. Jenkins, K. J. Anderson, and G. R. Locke 3rd. 2007. "The HIPAA Authorization Form and Effects on Survey Response Rates, Nonresponse Bias, and Data Quality: A Randomized Community Study." Medical Care 45 (10): 959-65.

Beebe, T. J., J. Y. Ziegenfuss, J. L. St Sauver, S. M. Jenkins, L. R. Haas, M. E. Davern, and N. J. Talley. 2011. "HIPAA Authorization and Survey Nonresponse Bias." Medical Care 49 (4): 365-70.

Berk, M. L., C. L. Schur, and J. Feldman. 2007. "Twenty-Five Years of Health Surveys: Does More Data Mean Better Data?" Health Affairs (Millwood) 26 (6): 1599-611.

Brambilla, D. J., and S. M. McKinlay. 1987. "A Comparison of Responses to Mailed Questionnaires and Telephone Interviews in a Mixed Mode Health Survey." American Journal of Epidemiology 126 (5): 962-71.

Charlson, M. E., P. Pompei, K. L. Ales, and C. R. MacKenzie. 1987. "A New Method of Classifying Prognostic Comorbidity in Longitudinal Studies: Development and Validation." Journal of Chronic Disease 40 (5): 373-83.

Commission on Professional and Hospital Activities. 1973. H-ICDA: Hospital Adaptation of ICDA. Ann Arbor, MI: Commission on Professional and Hospital Activities.

Curtin, R., S. Presser, and E. Singer. 2005. "Changes in Telephone Survey Nonresponse over the Past Quarter Century." Public Opinion Quarterly 69 (1): 87-98.

Davern, M., D. McAlpine, T. J. Beebe, J. Ziegenfuss, T. Rockwood, and K. T. Call. 2010. "Are Lower Response Rates Hazardous to Your Health Survey? An Analysis of Three State Telephone Health Surveys." Health Services Research 45 (5 Pt 1): 1324-44.

De Leeuw, E. 2005. "To Mix or Not to Mix Data Collection Modes in Surveys." Journal of Official Statistics 21: 233-55.

Deyo, R. A., D. C. Cherkin, and M. A. Ciol. 1992. "Adapting a Clinical Comorbidity Index for Use with ICD-9-CM Administrative Databases." Journal of Clinical Epidemiology 45 (6): 613-9.

Dillman, D. A., G. Phelps, R. Tortora, K. Swift, J. Kohrell, J. Berck, and B. L. Messer. 2009a. "Response Rate and Measurement Differences in Mixed-Mode Surveys Using Mail, Telephone, Interactive Voice Response (IVR) and the Internet." Social Science Research 38 (1): 1-18.

Dillman, D. A., J. D. Smyth, and L. M. Christian. 2009b. Internet, Mail and Mixed-Mode Surveys: The Tailored Design Method. Hoboken, NJ: Wiley & Sons.

Fowler, F. J. Jr, P. M. Gallagher, V. L. Stringfellow, A. M. Zaslavsky, J. W. Thompson, and P. D. Cleary. 2002. "Using Telephone Interviews to Reduce Nonresponse Bias to Mail Surveys of Health Plan Members." Medical Care 40 (3): 190-200.

Gallagher, P. M., F. J. Fowler Jr, and V. L. Stringfellow. 1999. The Nature of Nonresponse in a Medicaid Survey: Causes and Consequences. International Conference on Survey Nonresponse, Portland, OR.

--. 2000. Notes from the Field: Experiments Influencing Response Rates from Medicaid Enrollees. 55th Annual Conference of the American Association for Public Opinion Research, Portland, OR.

Griffin, D. H., and S. M. Obenski. 2002. "Meeting 21st Century Demographic Needs: Implementing the American Community Survey. Report 2: Demonstrating Survey Quality." Washington, DC: US Department of Commerce, Economics and Statistics Administration; US Census Bureau.

Groves, R. M. 2006. "Nonresponse Rates and Nonresponse Bias in Household Surveys." Public Opinion Quarterly 70 (5): 646-75. Doi: 10.1093/poq/nfl033.

Groves, R. M., and E. Peytcheva. 2008. "The Impact of Nonresponse Rates on Nonresponse Bias: A Meta-Analysis." Public Opinion Quarterly 72 (2): 167-89. doi: 10.1093/poq/nfn011.

Groves, R. M., E. Singer, and A. Corning. 2000. "Leverage-Saliency Theory of Survey Participation: Description and an Illustration." Public Opinion Quarterly 64 (3): 299-308.

Groves, R. M., S. Presser, and S. Dipko. 2004. "The Role of Topic Interest in Survey Participation Decisions." Public Opinion Quarterly 68 (1): 2-31.

Hartge, P. 1999. "Raising Response Rates: Getting to Yes." Epidemiology 10 (2): 105-7.

Heilbrun, L. K., P. D. Ross, R. D. Wasnich, K. Yano, and J. M. Vogel. 1991. "Characteristics of Respondents and Nonrespondents in a Prospective Study of Osteoporosis." Journal of Clinical Epidemiology 44 (3): 233-9.

Hox, J., and E. de Leeuw. 1994. "A Comparison of Nonresponse in Mail, Telephone, and Face-to-Face Surveys: Applying Multilevel Modeling to Meta-Analysis." Quality and Quantity 28 (4): 329-44.

Jacobsen, S. J., Z. Xia, M. E. Campion, C. H. Darby, M. F. Plevak, K. D. Seltman, and L. J. Melton 3rd. 1999. "Potential Effect of Authorization Bias on Medical Record Research." Mayo Clinic Proceedings 74 (4): 330-8.

Keeter, S., C. Miller, A. Kohut, R. M. Groves, and S. Presser. 2000. "Consequences of Reducing Nonresponse in a National Telephone Survey." Public Opinion Quarterly 64 (2): 125-48. doi: 10.1086/317759.

de Leeuw, E., and W. de Heer. 2002. "Trends in Household Survey Nonresponse: A Longitudinal and International Comparison." In Survey Nonresponse, edited by R. Groves, D. Dillman, J. Eltinge, and R. Little, pp. 41-54. New York: Wiley.

Link, M. W., and A. H. Mokdad. 2005. "Alternative Modes for Health Surveillance Surveys: An Experiment with Web, Mail, and Telephone." Epidemiology 16 (5): 701-4.

Melton, L. J. 3rd. 1996. "History of the Rochester Epidemiology Project." Mayo Clinic Proceedings 71 (3): 266-74.

Morton, L. M., J. Cahill, and P. Hartge. 2006. "Reporting Participation in Epidemiologic Studies: A Survey of Practice." American Journal of Epidemiology 163 (3): 197-203.

Sackett, D. L. 1979. "Bias in Analytic Research." Journal of Chronic Disease 32 (1-2): 51-63.

Siemiatycki, J. 1979. "A Comparison of Mail, Telephone, and Home Interview Strategies for Household Health Surveys." American Journal of Public Health 69 (3): 238-45.

St Sauver, J. L., B. R. Grossardt, B. P. Yawn, L. J. Melton 3rd, and W. A. Rocca. 2011. "Using a Medical Records Linkage System to Enumerate a Dynamic Population over Time: The Rochester Epidemiology Project." American Journal of Epidemiology 173 (9): 1059-68.

Steeh, C., N. Kirgis, B. Cannon, and B. DeWitt. 2001. "Are They Really as Bad as They Seem? Nonresponse Rates at the End of the Twentieth Century." Journal of Official Statistics 17: 227-47.

Tickle, M., K. M. Milsom, A. S. Blinkhorn, and H. V. Worthington. 2003. "Comparing Different Methods to Detect and Correct Nonresponse Bias in Postal Questionnaire Studies." Journal of Public Health Dentistry 63 (2): 112-8.

SUPPORTING INFORMATION

Additional supporting information may be found in the online version of this article:

Appendix SA1: Author Matrix.

Please note: Wiley-Blackwell is not responsible for the content or functionality of any supporting materials supplied by the authors. Any queries (other than missing material) should be directed to the corresponding author for the article.

Address correspondence to Timothy J. Beebe, Ph.D., Associate Professor of Health Services Research, Mayo Clinic College of Medicine, 200 First Street SW, Rochester, MN 55905; e-mail: beebe.timothy@mayo.edu. Timothy J. Beebe, Ph.D., Lindsey Haas, B.A., and Jeanette Y. Ziegenfuss, Ph.D, are with the Division of Health Care Policy & Research, Mayo Clinic, and Survey Research Center, Mayo Clinic, Rochester, MN. Donna D. McAlpine, Ph.D., is with the Division of Health Policy & Management, University of Minnesota School of Public Health, Minneapolis, MN. Sarah Jenkins, M.S., is with the Division of Biomedical Statistics and Informatics, Mayo Clinic, Rochester, MN. Michael E. Davern, Ph.D., is with NORC at University of Chicago, IL.

DOI: 10.1111/j.1475-6773.2011.01369.x
Table 1: Characteristics of the Population by Survey Response Phase

                                    First Mail     Second Mail    Phone          Total          Total Sample
Variable                            (N = 1,617)    (N = 957)      (N = 863)      Responders     (N = 6,716)
                                                                                 (N = 3,437)
Background
 Male                               790 (48.9%)    515 (53.8%)    451 (52.3%)    1,756 (51.1%)  3,530 (52.6%)
 Age
  18 to <35                         209 (12.9%)    147 (15.4%)    147 (17%)      503 (14.6%)    1,127 (16.8%)
  35 to <50                         403 (24.9%)    315 (32.9%)    356 (41.3%)    1,074 (31.2%)  2,201 (32.8%)
  50 to <65                         510 (31.5%)    304 (31.8%)    245 (28.4%)    1,059 (30.8%)  1,887 (28.1%)
  65+                               495 (30.6%)    191 (20%)      115 (13.3%)    801 (23.3%)    1,501 (22.3%)
 White                              1,467 (90.7%)  858 (89.7%)    761 (88.2%)    3,086 (89.8%)  5,833 (86.9%)
Clinical
 Charlson (weighted), 2+            451 (27.9%)    222 (23.2%)    190 (22%)      863 (25.1%)    1,652 (24.6%)
 Myocardial infarct                 72 (4.5%)      37 (3.9%)      36 (4.2%)      145 (4.2%)     308 (4.6%)
 Congestive heart failure           80 (4.9%)      35 (3.7%)      20 (2.3%)      135 (3.9%)     334 (5%)
 Peripheral vascular disease        90 (5.6%)      44 (4.6%)      38 (4.4%)      172 (5%)       334 (5%)
 Cerebrovascular disease            124 (7.7%)     58 (6.1%)      40 (4.6%)      222 (6.5%)     467 (7%)
 Chronic pulmonary disease          253 (15.6%)    137 (14.3%)    145 (16.8%)    535 (15.6%)    1,011 (15.1%)
 Ulcer                              103 (6.4%)     48 (5%)        47 (5.4%)      198 (5.8%)     374 (5.6%)
 Mild liver disease                 37 (2.3%)      19 (2%)        18 (2.1%)      74 (2.2%)      154 (2.3%)
 Diabetes                           170 (10.5%)    84 (8.8%)      86 (10%)       340 (9.9%)     661 (9.8%)
 Diabetes with organ damage         46 (2.8%)      23 (2.4%)      27 (3.1%)      96 (2.8%)      199 (3%)
 Moderate/severe renal disease      93 (5.8%)      45 (4.7%)      31 (3.6%)      169 (4.9%)     373 (5.6%)
 Moderate/severe liver disease      9 (0.6%)       1 (0.1%)       3 (0.31%)      13 (0.4%)      28 (0.4%)
 Metastatic solid tumor             34 (2.1%)      17 (1.8%)      15 (1.7%)      66 (1.9%)      134 (2%)
 Rheumatologic disease              43 (2.7%)      20 (2.1%)      16 (1.9%)      79 (2.3%)      153 (2.3%)
 Other cancer                       242 (15%)      113 (11.8%)    80 (9.3%)      435 (12.7%)    790 (11.8%)
Utilization
 3+ Clinic office visits            1,169 (72.3%)  636 (66.5%)    530 (61.4%)    2,335 (67.9%)  4,192 (62.4%)
 Any ER admission                   451 (27.9%)    276 (28.8%)    278 (32.2%)    1,005 (29.2%)  2,027 (30.2%)
 Any hospital admission             365 (22.6%)    183 (19.1%)    186 (21.6%)    734 (21.4%)    1,468 (21.9%)
 1+ Surgical or nonsurgical         1,039 (64.3%)  552 (57.7%)    494 (57.2%)    2,085 (60.7%)  3,873 (57.7%)
  procedures in last 10 years

* Significant difference from total sample, p < 0.05.

Table 2: Multivariable Logistic Regression Models Examining the Effect of Response Phase on Each Variable

Model 1: Response after First Mailing versus All Others
Variable (comparison)                              OR (95% CI)         p
Background
 Gender (males vs. females)                        0.91 (0.81, 1.02)   0.10
 Age (referent: 18 to <35)
  35 to <50                                        0.95 (0.78, 1.15)   0.58
  50 to <65                                        1.51 (1.25, 1.83)   <0.001
  65+                                              2.12 (1.72, 2.62)   <0.001
 Race (white vs. other)                            1.40 (1.15, 1.70)   <0.001
Clinical (present vs. absent)
 Myocardial infarct                                0.77 (0.57, 1.04)   0.09
 Congestive heart failure                          0.82 (0.61, 1.11)   0.20
 Peripheral vascular disease                       0.96 (0.73, 1.25)   0.76
 Cerebrovascular disease                           0.88 (0.70, 1.12)   0.30
 Chronic pulmonary disease                         0.92 (0.78, 1.09)   0.34
 Ulcer                                             1.02 (0.80, 1.30)   0.89
 Mild liver disease                                0.93 (0.63, 1.36)   0.69
 Diabetes                                          0.92 (0.73, 1.14)   0.43
 Diabetes with organ damage                        0.81 (0.55, 1.20)   0.30
 Moderate/severe renal disease                     0.83 (0.63, 1.09)   0.17
 Metastatic solid tumor                            0.69 (0.45, 1.06)   0.09
 Rheumatologic disease                             0.89 (0.61, 1.28)   0.53
 Other cancer                                      1.17 (0.97, 1.40)   0.10
Utilization
 Clinic office visits 2005-2006 (3+ vs. <3)        1.73 (1.51, 1.97)   <0.001
 ER admission 2005-2006 (any vs. none)             0.70 (0.61, 0.82)   <0.001
 Hospital admission 2005-2006 (any vs. none)       1.02 (0.86, 1.20)   0.86
 Surgical or nonsurgical procedures in
  last 10 years (1+ vs. none)                      1.11 (0.98, 1.26)   0.10

Model 2: Response after Mailing 1 or 2 versus All Others
Variable (comparison)                              OR (95% CI)         p
Background
 Gender (males vs. females)                        0.97 (0.87, 1.08)   0.57
 Age (referent: 18 to <35)
  35 to <50                                        1.00 (0.85, 1.17)   0.98
  50 to <65                                        1.53 (1.30, 1.81)   <0.001
  65+                                              1.84 (1.53, 2.21)   <0.001
 Race (white vs. other)                            1.51 (1.29, 1.78)   <0.001
Clinical (present vs. absent)
 Myocardial infarct                                0.78 (0.60, 1.02)   0.07
 Congestive heart failure                          0.77 (0.59, 1.01)   0.06
 Peripheral vascular disease                       0.98 (0.77, 1.25)   0.86
 Cerebrovascular disease                           0.88 (0.71, 1.09)   0.24
 Chronic pulmonary disease                         0.91 (0.79, 1.06)   0.22
 Ulcer                                             0.98 (0.79, 1.23)   0.88
 Mild liver disease                                0.85 (0.61, 1.20)   0.36
 Diabetes                                          0.88 (0.72, 1.08)   0.23
 Diabetes with organ damage                        0.80 (0.56, 1.13)   0.20
 Moderate/severe renal disease                     0.84 (0.66, 1.07)   0.15
 Metastatic solid tumor                            0.71 (0.48, 1.04)   0.08
 Rheumatologic disease                             0.88 (0.63, 1.23)   0.45
 Other cancer                                      1.16 (0.98, 1.37)   0.09
Utilization
 Clinic office visits 2005-2006 (3+ vs. <3)        1.77 (1.57, 1.99)   <0.001
 ER admission 2005-2006 (any vs. none)             0.75 (0.66, 0.85)   <0.001
 Hospital admission 2005-2006 (any vs. none)       0.92 (0.80, 1.07)   0.28
 Surgical or nonsurgical procedures in
  last 10 years (1+ vs. none)                      1.06 (0.95, 1.19)   0.27

Model 3: Any Response versus No Response
Variable (comparison)                              OR (95% CI)         p
Background
 Gender (males vs. females)                        0.96 (0.87, 1.06)   0.45
 Age (referent: 18 to <35)
  35 to <50                                        1.13 (0.98, 1.31)   0.10
  50 to <65                                        1.47 (1.26, 1.72)   <0.001
  65+                                              1.41 (1.18, 1.68)   <0.001
 Race (white vs. other)                            1.57 (1.35, 1.83)   <0.001
Clinical (present vs. absent)
 Myocardial infarct                                0.90 (0.70, 1.17)   0.45
 Congestive heart failure                          0.61 (0.47, 0.80)   <0.001
 Peripheral vascular disease                       1.03 (0.81, 1.31)   0.80
 Cerebrovascular disease                           0.81 (0.66, 1.01)   0.06
 Chronic pulmonary disease                         1.03 (0.89, 1.18)   0.71
 Ulcer                                             1.05 (0.84, 1.30)   0.69
 Mild liver disease                                0.78 (0.56, 1.09)   0.14
 Diabetes                                          0.97 (0.80, 1.18)   0.79
 Diabetes with organ damage                        0.86 (0.61, 1.20)   0.37
 Moderate/severe renal disease                     0.80 (0.63, 1.01)   0.06
 Metastatic solid tumor                            0.80 (0.55, 1.16)   0.23
 Rheumatologic disease                             0.85 (0.61, 1.19)   0.35
 Other cancer                                      1.08 (0.91, 1.28)   0.37
Utilization
 Clinic office visits 2005-2006 (3+ vs. <3)        1.66 (1.49, 1.86)   <0.001
 ER admission 2005-2006 (any vs. none)             0.83 (0.73, 0.94)   0.003
 Hospital admission 2005-2006 (any vs. none)       0.93 (0.81, 1.08)   0.35
 Surgical or nonsurgical procedures in
  last 10 years (1+ vs. none)                      1.10 (0.99, 1.23)   0.07