
The Effect of Income Question Design in Health Surveys on Family Income, Poverty, and Eligibility Estimates

Income is an important and challenging concept to measure in household health surveys. Income is highly associated with a wide array of important health, economic, and sociological outcomes (Williams 1990; Link and Phelan 1995; Krieger, Williams, and Moss 1997; Norris et al. 2003) and determining income levels is critical to policy analysts because public programs specify income cutoff points beyond which people are no longer eligible (e.g., Temporary Assistance to Needy Families, Medicaid and State Children's Health Insurance Program) (Dubay and Kenney 2000). There are two principal challenges in measuring income in surveys. First, there are many potential sources of income. Asking about these multiple sources and the exact amounts deriving from each source can prove quite burdensome for respondents. For example, people can have earned income from a job, dividend income from stocks, interest income from a savings account, government transfer program income, self-employment income from a business, and self-employment income from various odd jobs or consulting. There can also be losses of income because of stock transactions and privately owned business losses.

The second major challenge to measuring income is that people do not like to divulge how much money they earn. Asking many income questions can be intrusive for a respondent. As a result of the question's sensitive nature, most survey income items generate a sizable proportion of missing data. Around 10-15 percent of the income data on surveys is not answered because either the respondent refuses to answer the question or because they do not remember (Moore, Stinson, and Welniak 2000; Moore and Loomis 2001).

Many demographic household surveys devote a substantial portion of the interview to collecting income data from respondents. Surveys like the Current Population Survey's Demographic Supplement (CPS-DS), the Survey of Income and Program Participation (SIPP), and the Survey of Consumer Finances (SCF) ask about multiple income sources and about the specific amounts of income deriving from each source (Kennickell 2001; U.S. Census Bureau 2001, 2009). Furthermore, these surveys collect income amounts for everyone in the appropriate age range within a household and/or the principal family within the household. These individual amounts are then summed over everyone in the family to estimate the aggregated total family income. These aggregated family income amounts, along with family relationship information, are then used to derive estimates of the family's income relative to the federal poverty level (FPL). The percent of the family's income relative to the FPL, in turn, forms the basis of eligibility determination for public programs.
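The aggregate-then-compare logic described above can be sketched as follows. This is a minimal illustration only: the poverty thresholds are assumed placeholder values, not the official Census Bureau thresholds (which vary by year and family composition), and the function names are invented for this example.

```python
# Sketch of the aggregated-income approach: sum every income source
# reported by every family member, then express the family total as a
# percent of the federal poverty level (FPL).
# NOTE: thresholds below are illustrative placeholders, NOT official values.
FPL_BY_FAMILY_SIZE = {1: 9_000, 2: 12_000, 3: 14_000, 4: 18_000}

def family_income(members):
    """Sum every income source reported by every family member."""
    return sum(amount for person in members for amount in person["incomes"])

def percent_of_fpl(members):
    total = family_income(members)
    threshold = FPL_BY_FAMILY_SIZE[len(members)]
    return 100.0 * total / threshold

family = [
    {"incomes": [22_000, 350]},  # wages plus interest income
    {"incomes": [8_000]},        # part-time earnings
]
print(round(percent_of_fpl(family)))  # total $30,350 vs. $12,000 -> 253
```

A program with an eligibility cutoff of, say, 200 percent of FPL would classify this family as ineligible under the aggregated measure.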

Of the major national health surveys only one, the Medical Expenditure Panel Survey's Household Component (MEPS-HC) uses an aggregated income measurement. Most of the other health surveys, such as the National Health Interview Survey (NHIS), the Behavioral Risk Factor Surveillance System (BRFSS), the Medicare Current Beneficiaries Survey (MCBS), the Coordinated State Coverage Survey (CSCS), and other state surveys of health insurance coverage, do not devote as many questions on the survey instrument to collecting income data (Centers for Disease Control and Prevention 2001; National Center for Health Statistics 2001; SHADAC 2003). (1) Whereas the CPS-DS, SIPP, and SCF are all concerned primarily with measuring income, labor force participation, and/or wealth within the population, surveys like the NHIS, BRFSS, and CSCS are more concerned with measuring health-related characteristics (e.g., health conditions, health risk factors, or health insurance coverage). In the health survey context, income is typically used as a correlate of a host of health related variables (e.g., Nelson et al. 1999; Ayanian et al. 2000; Norris et al. 2003; Cohen and Ni 2004) but it is also used to measure poverty and estimate the number of eligible people for federal and state programs. For example, health surveys such as the CSCS attempt to estimate the number of uninsured respondents who may be eligible for public health care coverage based on their household income and family size (e.g., Beebe, McElhinney, and Johnson 2003a, b, c). Measuring income can also provide estimates of financial burden imposed by health care expenditures and underinsurance by measuring health expenditures as a function of income (Blewett et al. 2004). 
Reflecting their emphasis on health rather than income as a principal content area, most health surveys rely on a single "omnibus" family income question that asks for total family income, recorded either as a continuous amount or as a categorical range.

In this paper, we use the aggregated family income estimates as the standard against which to compare the omnibus family income measures. We do this for two main reasons. First, when compared with total aggregated estimates obtained from independent record systems, survey respondents tend to underreport income such as interest and dividend income when generating their income estimates, even when prompted for these sources (Moore, Stinson, and Welniak 2000; Roemer 2000). Given the survey research literature documenting the importance of such prompting in clarifying the definitional ambiguity of general concepts such as income (Fowler 1995; Moore, Stinson, and Welniak 2000), as well as its beneficial effect on the cueing of recall (Tulving 1983), we believe the aggregated methodology that decomposes income into a series of items reflecting its component parts is more accurate than only asking, "What is your total family income?" (2)

Second, we believe that the aggregated methodology has more face validity (i.e., whether the item looks like it is measuring what it is supposed to measure) than the omnibus measure of family income for health policy research. Health policy research is concerned with, among other things, the ability of people to pay for health care costs, the effects of poverty on health care access, and whether people are eligible for public programs. In order to measure these complex concepts, a good accounting of the various sources of income and the amounts of income is necessary. In the case of eligibility, there are complicated rules for what type of income to include in or exclude from the total family income. The survey instrument used to measure whether the person is eligible should mimic the enrollment criteria (Dubay and Kenney 2000). In addition, the aggregated measure of income is used for the official poverty estimates that are produced with CPS-DS data (even though the CPS-DS has an omnibus measure as well), and systematic variation from the official standard constitutes bias.

We expect to find that the omnibus income amount tends to be lower than the aggregated income amount, reflecting underreporting (Moeller and Mathiowetz 1994). Flowing from this expectation, we also expect that poverty (and eligibility) will be overstated. Our hypothesis stems from the fact that personal income tends to be one of the most inconsistently defined and sensitive topics for respondents, and that, in survey research, asking specific questions about a topic generally yields more information than asking one summary item. As stated earlier, income items have very high rates of refused and "don't know" responses relative to other items (Moore and Loomis 2001). In addition to high levels of missing data, Moore, Stinson, and Welniak (2000, p. 354) state, "... respondents' sensitivity about discussing their income may also lead to 'motivated mis-remembering.'" Moreover, Bradburn, Sudman, and Associates (1979, p. 15) state, "Threatening behavioral topics invariably elicit underreporting; thus, higher reporting levels can be interpreted as a reduction in negative response effect rather than an increase in positive response effect." Translating these observations to the present investigation, we hypothesize that income levels obtained by the aggregated measure should be higher than those generated by the omnibus question. And, because the estimates are higher, we contend that the information obtained via the aggregated method is the more accurate.

We will also examine the differences between the omnibus measure and the aggregated measure of income to understand what sources of income as well as demographic characteristics are related to the differences. And finally, we will examine whether there is a bias from using the omnibus income item to estimate the number of people below a certain level of poverty. This is an important question because poverty is a major consideration in determining eligibility for many of the largest federal programs targeting the poor and working poor. To answer our major research questions we make use of the CPS-DS data from 2001. The CPS-DS contains both a general omnibus family income item early in the survey as well as many individual survey items based on income sources and amounts later in the survey.

DATA AND METHODS

The CPS is a monthly survey conducted by the U.S. Census Bureau and the Bureau of Labor Statistics to estimate monthly labor force characteristics. During a three-month period (February through April 2001), 78,000 households were interviewed using the CPS-DS (U.S. Census Bureau 2001). We use the data from the 2001 CPS-DS to study the differences between asking an omnibus income question and asking each respondent about various sources and amounts of income and summing the individual amounts to a grand total. The CPS-DS contains both an omnibus family income question at the beginning of the survey and a module designed to measure the aggregated income amount by asking about multiple sources and amounts. (3) The vast majority of CPS-DS data users rely on the aggregated amounts because these are what the Census Bureau uses to determine the official poverty thresholds (Proctor and Dalaker 2003). Having both types of questions on the same survey, however, allows us to examine whether the differences between the aggregated income amounts and the omnibus income amounts form a specific pattern and to what extent using the omnibus income measure biases estimates of poverty.

The CPS sample design uses a rotating panel in which households are in the monthly sample for 4 months, then they are out of the sample for 8 months, and finally the household is back in the CPS sample for 4 additional months. The omnibus family income question is asked in the first and fifth month that the same household and/or reference family is in the CPS sample and, therefore, we limit our analysis to households that are in their first or fifth month in the sample. We impose two additional limitations in our analysis. First, we limit our analysis to the family within the household containing the household reference person (some households have more than one family). Second, we limit our analysis to families that report an omnibus income amount. By limiting our analysis to reported omnibus income amounts of the family of the household reference person who is in their first or fifth month of interviewing we ensure that the responses given to the questions are given as part of the same monthly interview. (4)

Variable Descriptions

The dependent variable has three levels and measures the concordance between the omnibus and aggregated income measures. The first level is whether the aggregated income amount falls within the same income category the respondent provides on the omnibus income item (the response values for the omnibus income item are provided in Figure 1); the second level is whether the aggregated income amount falls one or more categories lower than the omnibus income category; and the third level is whether the aggregated income amount falls one or more categories higher than the omnibus category.
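As a minimal sketch, the three-level concordance variable can be expressed as a comparison between the category implied by the aggregated amount and the reported omnibus category. The category lower bounds below approximate the 14 CPS-style omnibus ranges mentioned in the text and are included for illustration only.

```python
# Classify a family's aggregated income relative to its reported
# omnibus income category: "same", "lower", or "higher".
import bisect

# Lower bounds of 14 illustrative omnibus categories (approximating CPS ranges).
CUTS = [0, 5_000, 7_500, 10_000, 12_500, 15_000, 20_000, 25_000,
        30_000, 35_000, 40_000, 50_000, 60_000, 75_000]

def category(amount):
    """Map a dollar amount to its omnibus category index (0-13)."""
    return bisect.bisect_right(CUTS, amount) - 1

def concordance(omnibus_cat, aggregated_amount):
    agg_cat = category(aggregated_amount)
    if agg_cat == omnibus_cat:
        return "same"
    return "lower" if agg_cat < omnibus_cat else "higher"

print(concordance(4, 13_000))  # aggregated in $12,500-$14,999 -> "same"
print(concordance(4, 21_000))  # aggregated two categories up   -> "higher"
```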

[FIGURE 1 OMITTED]

We construct two additional variables to specifically address possible discrepancies between the two different types of income questions. The first one is a variable that indicates whether any of the specific aggregated income variables were imputed. Income data tend to have more missing data than other types of survey items because people refuse to answer them (Moore, Stinson, and Welniak 2000). Imputation is the process by which the Census Bureau replaces missing values using a statistical model to make their best guess as to what the income may actually be based on related characteristics. This is done to test whether any mismatch of income amount is because of the Census Bureau's imputation process. The second variable is an indicator variable for those families that have someone under 24 years of age with earned income who is not the reference person. Household reference people under 24 years of age are not included in this variable (but they are included in the analysis). This is done to specifically see if the income of young family members is not captured in the respondent's answer to the omnibus income question.

We use a series of indicator variables that reflect whether any member of the family reports a particular type of income among the aggregated income items. These include alimony, child support, dividend, disability, education assistance, financial assistance, interest, public assistance, retirement, rent, self-employment, survivor, supplemental security income, social security, workers compensation, veterans, unemployment compensation, wage and salary, and "other income." These indicators are included to determine whether the mismatch between the omnibus income item and the aggregated income amount could be attributed to respondents consistently missing or overcounting particular sources of aggregated income in the omnibus income item.

The last series of income variables are constructed to be used as control variables. We made each of the 14 income categories included in the omnibus income question its own binary indicator variable. The highest category (family income $75,000 and over) was the reference category. This is an important control because families in the lowest (or highest) omnibus income categories cannot report lower (or higher), by design of the dependent variable. Furthermore, we expect a tendency toward omnibus-aggregated family income mismatch in the lower- to middle-income categories because more families fall in these categories and the category intervals are narrower.

Various demographic variables are used in the analysis as well. In the multinomial logistic regression model, these data are drawn from the household reference person's record.

Methods

We first assess the level of agreement between the two income sources by showing the percentage of respondents in the same income category, those whose aggregated amount was lower than their reported omnibus category, and those whose aggregated amount was higher than their reported omnibus category for each of the 14 omnibus income categories. We then employ two different methodologies to evaluate the mismatch between the omnibus income amount and the aggregated income amount. The first approach attempts to answer the question of whether the mismatch is predictable and which respondent characteristics and/or income sources differentially contribute to disagreement. To evaluate this question we use a multinomial logistic regression model to predict the concordance using a maximum-likelihood estimation with discrete dependent variables (Greene 2003).

We present the coefficients in terms of a relative risk ratio, which is simply the exponentiated value of the coefficient (StataCorp 1999). The relative risk is measured for any given category relative to the reference category of no difference between the omnibus and aggregated income amounts. A relative risk ratio less than 1 means that the group is less likely than the reference group to report an aggregated income that differs from its omnibus income. For example, the relative risk of reporting an aggregated income lower than the omnibus income is 0.76 for whites relative to nonwhites. The standard errors for this model were computed using the Taylor series adjustment in Stata 7 (StataCorp 1999), following the CPS-DS public-use specification laid out by Davern et al. (2003).
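Concretely, a relative risk ratio is just the exponential of the multinomial-logit coefficient. A small sketch, using a coefficient back-computed to match the 0.76 example in the text (the coefficient value itself is illustrative, not taken from Table 1):

```python
# A relative risk ratio (RRR) is the exponentiated multinomial-logit
# coefficient: values below 1 indicate lower risk relative to the
# reference outcome of no omnibus-aggregated difference.
import math

def relative_risk_ratio(coefficient):
    return math.exp(coefficient)

coef = -0.274  # illustrative coefficient, back-computed from RRR = 0.76
print(round(relative_risk_ratio(coef), 2))  # -> 0.76
```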

The second methodology used to evaluate the mismatch forces the aggregated income to be within the omnibus income's interval. Forcing the aggregated income to be equal to the omnibus income allows us to examine the question, "If we only had the omnibus income question on a survey would it affect the number of people we estimated to be living in poverty?" If an aggregated income response is within the interval of the omnibus income response then it is not changed. However, when the two income amounts are not equal, we recode the aggregated income to be "missing." Then we use a hotdeck imputation (5) procedure to match up those with "missing" income to an aggregated income amount from within the reported omnibus range. (6) This forces the aggregated and omnibus incomes to fall within the same income category. We then use these hotdecked aggregated family income amounts to develop person-level poverty estimates for everyone within a household reference person's family. We compare the number of people estimated to be in poverty using the original aggregated income amounts for everyone within a household reference person's family (the numbers that are used to establish the official Census Bureau poverty estimates from the CPS-DS) to the hot-decked amounts forcing the aggregated amounts to fall within the omnibus income category.
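A minimal sketch of this constrained hotdeck step follows, with invented record and function names; the actual Census procedure also matches donors on additional characteristics, which is omitted here for brevity.

```python
# Constrained hotdeck sketch: when a family's aggregated income falls
# outside its reported omnibus category, replace it with a donor value
# drawn at random from families whose aggregated income DID fall inside
# that same omnibus category.
import random
from collections import defaultdict

def hotdeck_within_omnibus(records, category, seed=0):
    """records: dicts with 'omnibus_cat' and 'aggregated' keys;
    category(amount): maps a dollar amount to its omnibus category."""
    rng = random.Random(seed)
    # Build donor pools from consistent records, keyed by omnibus category.
    donors = defaultdict(list)
    for r in records:
        if category(r["aggregated"]) == r["omnibus_cat"]:
            donors[r["omnibus_cat"]].append(r["aggregated"])
    imputed = []
    for r in records:
        if category(r["aggregated"]) == r["omnibus_cat"]:
            imputed.append(r["aggregated"])  # consistent: keep as reported
        else:
            # Mismatch: treat as missing and impute from a donor in range.
            imputed.append(rng.choice(donors[r["omnibus_cat"]]))
    return imputed

cat = lambda amt: amt // 10_000  # toy categories: $10k-wide bands
recs = [
    {"omnibus_cat": 2, "aggregated": 25_000},  # consistent, kept as-is
    {"omnibus_cat": 2, "aggregated": 41_000},  # mismatch, donor imputed
]
print(hotdeck_within_omnibus(recs, cat))  # -> [25000, 25000]
```

This forces every imputed aggregated amount to lie inside the family's reported omnibus range, which is the constraint the comparison in Table 2 evaluates.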

FINDINGS

There is a great deal of mismatch between the aggregated income amounts and the omnibus income categories. On average, only 31 percent of the income amounts match between the omnibus measure and the aggregated measure. The highest level of agreement occurs in the highest income category (also the broadest income category), with over 80 percent agreement, and the lowest is 20.9 percent in the $12,500-$14,999 omnibus income category (among the narrowest income categories). Figure 1 illustrates the degree of this mismatch by showing, for each omnibus income category, the percent who report an aggregated income within the same omnibus category, the percent whose aggregated income falls one omnibus category higher or lower, and the percent whose aggregated income falls two or more omnibus income categories higher or lower. For example, respondents in the $60,000-$74,999 range who report a lower aggregated income are only off by one omnibus income category, whereas those in the $10,000-$12,499 range have a much higher percentage reporting two or more categories lower (36 percent) than only one category lower (25 percent).

The relative risk ratios from the multinomial logistic regression are reported in Table 1. These coefficients support the idea that the differences between the omnibus income amounts and the aggregated income amounts are not completely random. The differences are consistent with our expectation that the omnibus income underestimates family income. Having additional people in the family makes a family more likely to report a higher aggregated income amount relative to their omnibus income and families with three or more people are also less likely to report their aggregated incomes lower relative to their omnibus income. In addition, married families and families that have members under 24 years of age with earnings are more likely to report higher aggregated income and are less likely to report lower aggregated income relative to being in the reference category of no difference.

Families with a household member under 24 with earnings, reporting receipt of disability income, educational assistance, rental income, earnings from self-employment, social security income, survivor income, income from veterans' programs, or income from workers' compensation or unemployment compensation, are more likely to report a higher aggregated income relative to being in the reference category of no difference. Families with some income sources, such as dividends, interest, retirement income, and wage/ salary earnings, are more likely to report higher aggregated and are less likely to report lower aggregated income relative to the reference category of no difference. Finally, for those families that had any one of the income variables imputed, the pattern of reporting is not as clear. In these cases, the analysis indicates that families with imputed income sources are more likely to both report higher and lower aggregated incomes relative to being in the reference category of no difference.

The results of the hotdeck imputation procedure are reported in Table 2. Significantly more people fall below 100 percent of FPL when the hotdeck imputation restricts the aggregated income amount to the omnibus income category than are in poverty according to the aggregated amounts.

The pattern of overestimating poverty using the imputed aggregated amounts is more of a problem for people whose characteristics are associated with a higher probability of being in poverty. For example, blacks and Hispanics are more likely to be in poverty in general, and they are also much more likely to be in poverty using the hotdecked aggregated amounts than the aggregated amounts themselves. (7) The same holds for 18-24 year olds and those 65 years of age or older. People without a high school diploma, noncitizens, people not working, and people with public health insurance coverage also show a significantly higher percentage in poverty when the hotdeck imputes aggregated values within the omnibus family income range than with the aggregated amounts themselves.

DISCUSSION

Income is difficult to measure because household income can derive from a variety of sources for each person within a household. Omitting amounts from specific sources or difficulty recalling amounts of income can introduce significant error to the estimates, which becomes even more serious if that error is systematic. As expected, the omnibus income amount underestimates family income and, as shown in Figure 1, there is limited direct concordance between the information yielded by the omnibus household income question and the aggregated income amount derived from every family member. As shown in Table 1 the differences between the omnibus and aggregated income amounts are associated with certain types of income and demographic characteristics. And finally, as shown in Table 2, the estimates of poverty are overstated using an omnibus measure.

When a researcher is determining the level of income detail to be collected in a health survey, the most important question is: Why are the income data being collected? If they are being collected as a correlate of health or some other demographic trait, then less detail is needed and an omnibus income question is adequate. If, however, the survey is being conducted for the purpose of estimating poverty or eligibility for public programs, then more detail is needed. Dubay and Kenney (2000) show that it is important to exclude from income amounts the sources that public programs disregard; not doing so will underestimate the number of people who are eligible. In addition, it is important to measure some expenses in order to adjust income. Certain types of expenses are subtracted from a household's income when determining eligibility for public programs. For example, child support payments and childcare expenses, as well as some earned income, may be disregarded when determining whether someone is eligible for a particular public program (Dubay and Kenney 2000; Dubay, Haley, and Kenney 2000). (8) Estimating eligibility with an omnibus question has limited face validity (i.e., whether the item looks like it is measuring what it is supposed to measure) given all of the criteria required to know whether someone is actually eligible. (9)

One potential outcome of our analysis is to create a methodology that could be used to impute aggregated income amounts into omnibus income ranges in order to better estimate poverty and eligibility on health surveys that, given space and resource constraints, only ask omnibus income questions. Surveys have made use of a similar "unfolding brackets" (Heeringa, Hill, and Howell 1995) methodology for estimating income and asset amounts in household surveys. (10) The idea behind unfolding brackets hotdeck imputation is that if the respondent refuses to answer a continuous amount--or does not know the continuous amount--then the instrument follows up with ranges of amounts of income or assets. If the respondent reports a specific range then it is used in the hotdeck procedure to impute a continuous amount within the reported range. This procedure forces the continuous amount to be within the reported range; however, for the concept of family income this may not be appropriate, as shown by the comparisons in Table 2. A slightly altered approach is one in which a researcher would use the omnibus income categories to link up donors from one survey (e.g., CPS) with respondents in another (e.g., NHIS, BRFSS), but the final aggregated amount would not be limited to falling within the omnibus range. This approach would allow the researcher to take advantage of the strengths of the various surveys mixing the health-related detail (NHIS, BRFSS) with the income-related detail (CPS) in such a fashion as to provide more sound estimates of eligibility and poverty.

Our hotdeck procedure shows that forcing the aggregated income and omnibus income to be equal will bias the estimates of poverty (and the related eligibility concept) to find more people in poverty and eligible for public programs. If, for example, one developed a hotdeck income procedure that matched CPS respondents to BRFSS respondents based on their CPS omnibus income to BRFSS omnibus income amounts, then the bias may be reduced. After a BRFSS and CPS respondent have been matched through the hotdeck procedure using omnibus income and other variables, then relevant portions of the CPS family's earnings record could be imputed on the BRFSS record. This could allow for more detailed estimates of poverty and public program eligibility than is currently possible with these surveys.

The process for matching a CPS family to a NHIS family in order to impute the detailed income information is likely to be different from the BRFSS to CPS match. The NHIS instrument asks respondents whether people in the family had income from a variety of sources (e.g., transfer programs, and interest income) before asking the omnibus income item. This "cueing" may prompt the respondent to report income that would be a closer match to an aggregated income amount (Tulving 1983). Further work should be done to see whether an omnibus to aggregated match for purposes of imputing household income data for NHIS respondents provides reasonable estimates, or whether the NHIS to CPS match should be done with aggregated to aggregated categories as well.

Another future enhancement could be to work off the framework provided by Schenker et al. (2004) to use multiple imputation to replace missing values in the NHIS. Multiple imputation has superior statistical properties to single hotdeck imputation (Rubin 1996) but has not yet been widely adopted by the federal data system. Schenker et al. (2004) developed a strategy for imputing missing data in the NHIS using NHIS data, but this could be expanded to take into account the problems associated with under-reporting of the omnibus income question and use information from the CPS in the imputation model. This methodology and the hotdeck method described here should be compared in future research to see whether they are able to make the health surveys more usable for detailed income analysis.

CONCLUSION

Based on the findings of this research, the use of an omnibus household income question by itself as a basis for estimating eligibility for public programs should be done only after acknowledging its likely bias. The omnibus income question results in biased estimates of those in poverty, thereby overestimating the number of people eligible for public insurance programs. We propose a possible methodological solution matching respondents from the CPS to the NHIS or BRFSS in order to take advantage of both the health-related detail on these surveys and the detailed earnings data from the CPS. Although our process will work for some estimates of eligibility and program "take up," it will not work for models that use particular sources and amounts of income from those sources as predictors of various health-related phenomena in the NHIS or BRFSS.

ACKNOWLEDGMENTS

This paper was presented at the American Association for Public Opinion Research meeting in Phoenix, Arizona, in May 2004. Preparation of this manuscript was funded by grant no. 038846 from the Robert Wood Johnson Foundation.

REFERENCES

Ayanian, J. Z., J. S. Weissman, E. C. Schneider, J. A. Ginsburg, and A. M. Zaslavsky. 2000. "Unmet Health Needs of Uninsured Adults in the United States." Journal of the American Medical Association 284 (16): 2061-9.

Beebe, T. J., J. McElhinney, and K. Johnson. 2003a. "2003 Alabama Health Care Insurance and Access Survey: Select Results." (Final Report for the Alabama Department of Public Health). State Health Access Data Assistance Center (SHADAC). Minneapolis, MN.

--. 2003b. "2003 Health Insurance for Indiana's Families Survey: Select Results." (Final Report for the Indiana Family and Social Services Administration). State Health Access Data Assistance Center (SHADAC). Minneapolis, MN.

--. 2003c. "2003 Virgin Islands Health Care Insurance and Access Survey: Select Results." (Final Report for the Office of the Governor, Bureau of Economic Research). State Health Access Data Assistance Center (SHADAC). Minneapolis, MN.

Blewett, L. A., A. Ward, T. Beebe, and T. Wood. 2004. "Underinsurance: An Overview of Definitions, Data and Approaches to Estimates." SHADAC Working Paper. Minneapolis, MN.

Bradburn, N. M., S. Sudman, and Associates. 1979. Improving Interview Method and Questionnaire Design. San Francisco: Jossey-Bass.

Centers for Disease Control and Prevention. 2001. Overview: Behavioral Risk Factor and Surveillance System 2001. Atlanta: U.S. Department of Health and Human Services, Centers for Disease Control and Prevention.

Cohen, R. A., and H. Ni. 2004. "Health Insurance Coverage: Estimates from the National Health Interview Survey, January-June 2003." Division of Health Interview Statistics, National Center for Health Statistics. Available at http://www.cdc.gov/nchs/data/nhis/earlyrelease/insur200401.pdf

Davern, M., J. M. Lepkowski, G. Davidson, and L. A. Blewett. 2003. "Evaluating Various Methods of Standard Error Estimation for Use with the Current Population Survey's Public Use Data," Presented to the Section of Survey Research Methods at the Joint Statistical Meetings, San Francisco (August 1-7, 2003).

Davern, M., L. A. Blewett, B. Bershadsky, and N. Arnold. 2004. "Missing the Mark? Examining Imputation Bias in the Current Population Survey's State Income and Health Insurance Coverage Estimates." Journal of Official Statistics 20 (3): 249-64.

Dubay, L., and G. Kenney. 2000. "Assessing SCHIP Effects Using Household Survey Data: Promises and Pitfalls." Health Services Research 35 (5): 112-27.

Dubay, L., J. Haley, and G. Kenney. 2000. "Children's Eligibility for Medicaid and SCHIP: A View from 2000." Number B-41 in Series, "New Federalism: National Survey of America's Families." Washington, DC: Urban Institute.

Fowler, F. 1995. Improving Survey Questions, Vol. 38, Applied Social Research Methods Series. Thousand Oaks, CA: Sage Publications.

Greene, W. H. 2003. Econometric Analysis Fifth Edition. Upper Saddle River, NJ: Prentice-Hall.

Heeringa, S., D. Hill, and D. Howell. 1995. "Unfolding Brackets for Reducing Item Nonresponse in Economic Surveys." Health and Retirement Study Working Paper: 94-009.

Hirsch, B. T., and E.J. Schumacher. 2004. "Match Bias in Wage Gap Estimates Due to Earnings Imputation." Journal of Labor Economics 22 (3): 689-722.

Kennickell, A. 2001. Survey of Consumer Finances, 1998 [Computer File]. ICPSR Version. Washington, DC: Board of Governors of the Federal Reserve System [producer]. Ann Arbor, MI: Inter-University Consortium for Political and Social Research [distributor].

Krieger, N., D. R. Williams, and N. E. Moss. 1997. "Measuring Social Class in U.S. Public Health Research: Concepts, Methodologies, and Guidelines." Annual Review of Public Health 18: 341-78.

Link, B. G., and J. Phelan. 1995. "Social Conditions as Fundamental Causes of Disease." Journal of Health & Social Behavior 35 (Special Issue): 80-94.

Moeller, J., and N. Mathiowetz. 1994. "Problems of Screening for Poverty Status." Journal of Official Statistics 10 (3): 327-37.

Moore, J., and L. S. Loomis. 2001. "Reducing Income Nonresponse in a Topic-Based Interview." Paper Prepared for the 2001 AAPOR Meetings, Montreal, May 17-20, 2001. Center for Survey Methods Research/SRD. U.S. Census Bureau.

Moore, J., L. L. Stinson, and E.J. Welniak Jr. 2000. "Income Measurement Error in Surveys." Journal of Official Statistics 16 (4): 331-61.

National Center for Health Statistics. 2001. National Health Interview Survey, 2001 [Computer file]. ICPSR version. Hyattsville, MD: U.S. Department of Health and Human Services, National Center for Health Statistics [producer]. Ann Arbor, MI: Inter-University Consortium for Political and Social Research [distributor].

Nelson, D. E., B. L. Thompson, S. D. Bland, and R. Rubinson. 1999. "Trends in Perceived Cost as a Barrier to Medical Care, 1991-1996." American Journal of Public Health 89 (9): 1410-3.

Norris, J. C., M.J. van der Laan, S. Lane, J. N. Anderson, and G. Block. 2003. "Nonlinearity in Demographics and Behavioral Determinants of Morbidity." Health Services Research 38 (6): 1791-818.

Proctor, B., and J. Dalaker. 2003. Poverty in the United States: 2002. Washington, DC: U.S. Census Bureau.

Roemer, M. 2000. Assessing the Quality of the March Current Population Survey and the Survey of Income and Program Participation Income Estimates 1990-1996. Washington, DC: Income Survey Branch, Housing and Household Economic Statistics Branch, U.S. Census Bureau.

Rubin, D. 1996. "Multiple Imputation after 18 + Years." Journal of the American Statistical Association 91: 473-89.

Schenker, N., T. E. Raghunathan, P.-L. Chiu, D. M. Makuc, G. Zhang, and A.J. Cohen. 2004. Multiple Imputation of Family Income and Personal Earnings in the National Health Interview Survey: Methods and Examples. Hyattsville, MD: National Center for Health Statistics.

SHADAC (State Health Access Data Assistance Center). 2003. Coordinated State Coverage Survey. Minneapolis, MN: SHADAC. Available at http://www.shadac.org/collecting/cscs.asp

StataCorp. 1999. Stata Statistical Software: Release 7.0. College Station, TX: Stata Corporation.

Tulving, E. 1983. Elements of Episodic Memory. New York: Oxford University Press.

U.S. Census Bureau. 2001. Current Population Survey Annual Demographic Supplement, 2001 [Computer file]. Washington, DC: U.S. Department of Commerce, U.S. Census Bureau [producer]. Ann Arbor, MI: Inter-University Consortium for Political and Social Research [distributor].

--. 2002. Survey of Income and Program Participation (SIPP) 1996 Panel [Computer file]. Washington, DC: U.S. Department of Commerce, U.S. Census Bureau [producer]. Ann Arbor, MI: Inter-University Consortium for Political and Social Research [distributor].

Williams, D. R. 1990. "Socioeconomic Differentials in Health: A Review and Redirection." Social Psychology Quarterly 53 (2): 81-99.

NOTES

(1.) The NHIS does have several questions identifying a variety of sources of income for family members, but the amount of family income is an omnibus amount.

(2.) There are two sources of potential discrepancy between the omnibus income measure and the aggregated income measure. The first is that the rounding can occur in the omnibus measure because respondents have to choose the best fitting categorical income range. The second has to do with who defines "income." The omnibus income item lets the respondent define what types of income should be included in the total, whereas the aggregated total sums up the components that the survey analyst defines as the total income amount.

(3.) There could also be an order effect in that the omnibus income question comes before the aggregated measures.

(4.) We do include imputed aggregated income amounts. In the statistical analysis, we explicitly control for whether any of the family income amounts were imputed.

(5.) Hotdeck imputation is a process by which a respondent's valid value for a specific variable is assigned to another respondent who lacks a valid value for that variable. The respondent with the valid value is called a "donor" and the person with the invalid value is called a "recipient." Potential donors are sectioned into homogeneous groups called "cells" that can be defined by many parameters. For example, all black, employed, college-educated females over the age of 65 with a valid value for the specific variable can be placed into one cell, while all white, unemployed, high school graduate males over 65 can be placed into another cell. Recipients are matched to these homogeneous cells of donors based on their characteristics, and a random donor selected from the matching cell supplies his/her value to the recipient. In our hotdeck routine we used the omnibus income category as the only hotdeck parameter; valid values were those for which the aggregated income matched the omnibus income range, and invalid values were those that did not match.

(6.) No other variables were used in the hotdeck procedure.
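The hotdeck procedure described in notes 5 and 6 can be sketched as follows. This is a minimal illustration, not the authors' production routine: the function name, record layout, and field names are hypothetical, and, as in the paper's routine, the omnibus income category is the only cell parameter.

```python
import random

def hotdeck_impute(records, cell_key, value_key, is_valid):
    """Hotdeck sketch: within each cell, replace each recipient's
    invalid value with the value of a randomly chosen valid donor."""
    # Pool donor values within each homogeneous cell.
    donors = {}
    for r in records:
        if is_valid(r):
            donors.setdefault(r[cell_key], []).append(r[value_key])
    # Each recipient draws a random donor's value from its matching cell.
    for r in records:
        if not is_valid(r) and r[cell_key] in donors:
            r[value_key] = random.choice(donors[r[cell_key]])
    return records
```

Here a record would be "valid" when its aggregated income falls inside its reported omnibus income range, and invalid otherwise; recipients with no donors in their cell are left unimputed.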

(7.) Because we only used the omnibus income variable in the hotdeck procedure and did not use others (such as race, education, gender, ethnicity, and employment), we would expect the relationship between these variables and poverty to be slightly underestimated because of imputation bias (Davern et al. 2004; Hirsch and Schumacher 2004). The hotdecked estimates of poverty are more likely to be underestimates for these groups than if these characteristics were taken into account.

(8.) In addition, there are also various asset tests, such as limits on the value of automobiles.

(9.) There are other important variables to measure concerning eligibility, and some surveys do a better job of measuring them than others. Citizenship and legal alien status are important considerations (Dubay, Haley, and Kenney 2000). There are also family relationship issues. When constructing Medicaid eligibility units it is essential to know how people within the household are related and the income amounts for each person. If grandparents, parents, and children all live within the same household, being able to assign the various income types and amounts to these people to form Medicaid eligibility units is important for determining eligibility. Omnibus income items for an entire household do not allow for this type of disaggregation.

(10.) Unfolding brackets are usually applied to impute a specific amount of income (e.g., interest income) as opposed to imputing the entire amount of income at once (as an omnibus measure attempts to do).
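As an illustration of note 10, an unfolding-bracket sequence narrows a refused or forgotten amount to a range through a short series of yes/no questions ("Was it more than $X?"). The breakpoints and function name below are hypothetical, chosen only to show the mechanics:

```python
def unfolding_brackets(answers_above, breakpoints=(5_000, 25_000, 50_000)):
    """Return (low, high) bounds for an amount the respondent
    would not report directly; high is None for an open upper bound.

    answers_above -- callable answering "was the amount above x?"
    """
    low, high = 0, None
    for x in breakpoints:
        if answers_above(x):
            low = x          # amount exceeds this breakpoint; keep probing
        else:
            high = x         # first "no" caps the range; stop asking
            break
    return (low, high)
```

The resulting bracket can then be used to impute a specific income component (e.g., interest income), in contrast to an omnibus measure that asks for the entire total at once.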

Address correspondence to Michael Davern, Ph.D., Assistant Professor, State Health Access Data Assistance Center, University of Minnesota, School of Public Health, 2921 University Avenue, Suite 345, Minneapolis, MN 55414. Holly Rodin, M.P.A., is with the University of Minnesota, Timothy J. Beebe, Ph.D., Associate Professor, is with Health Services Research, Mayo Medical School. Kathleen Thiede Call, Ph.D., Associate Professor, is with the School of Public Health, University of Minnesota, Minneapolis, MN.
Table 1: Relative Risk Ratio of Aggregated Income Being Lower Than the
Omnibus Income, and Aggregated Income Being Higher Than the Omnibus
Income, Relative to There Being No Difference between the Two

 Relative Risk Ratios with
 the Same Category as the
 Reference

 Risk of Risk of
 Aggregated Aggregated
 Being Lower Being Higher
Covariates Than Omnibus Than Omnibus

High school degree 1.07 1.31 ***
Some college 1.02 1.24 **
College degree 0.79 * 1.42 ***
White 0.76 *** 0.89
Male 0.96 1.01
At least good health 1.05 0.90
Married 0.59 *** 1.20 **
Two people in family 0.88 2.23 ***
Three or more people in family 0.70 *** 3.03 ***
Metro area resident 1.02 1.13 *
Born U.S. citizen 0.83 * 1.06
Income type
 Alimony 1.15 2.13
 Child support 1.13 1.17
 Disability 0.91 1.85 *
 Dividends 0.74 *** 1.33 ***
 Educational assistance 1.17 1.82 ***
 Financial assistance 3.84 *** 3.00 ***
 Farm income 1.18 1.27
 Interest 0.63 *** 1.14 *
 Other income 0.82 1.27
 Public assistance 3.45 *** 0.86
 Retirement 0.68 *** 1.89 ***
 Rental 1.20 1.38 **
 Self-employment 0.94 2.45 ***
 SSI income 0.71 1.01
 Social security 1.16 2.38 ***
 Survivor 0.73 1.47 *
 Unemployment compensation 1.16 1.32 *
 Veterans 0.79 1.90 **
 Workers compensation 0.92 1.95 **
 Wage/salary 0.37 *** 2.15 ***
Additional income mismatch covariates
 Aggregated includes imputed amount(s) 1.29 *** 1.63 ***
 Includes earnings of someone under 24 0.82 * 1.99 ***

Source: 2001 Current Population Survey Annual Demographic Supplement.

* p < .05;

** p < .01;

*** p < .001.

n = 16,739 (weighted n = 22,954,000).

Table 2: Comparison of Poverty Rates by Selected Characteristics
Using the Hotdeck Imputed Aggregated Income within the Omnibus
Income Range versus the Original Aggregated Income Amount

 Imputed
 Aggregated Aggregated Difference
Percent of Poverty Level Income (%) Income (%) (%)

Total 11.5 10.5 -1.1 ***
Sex
 Female 12.6 11.7 -1.0 **
 Male 10.3 9.2 -1.1 ***
Race/Ethnicity
 White 9.2 8.5 -0.7 **
 Black 25.7 22.6 -3.1 ***
 American Indian 21.7 19.0 -2.7
 Asian 9.2 7.9 -1.3
 Hispanic 24.3 21.1 -3.2 ***
Age (years)
 <18 16.4 15.5 -0.9
 18-24 14.1 12.2 -1.9 *
 25-34 10.5 9.4 -1.2
 35-44 8.3 7.4 -0.9
 45-64 7.3 6.9 -0.4
 65+ 11.6 9.4 -2.2 **
Citizenship
 Non-U.S. citizen 17.6 14.8 -2.9 **
 U.S. citizen 10.8 9.9 -0.8 ***
Education
 Less than high school 22.8 18.9 -4.0 ***
 HS graduate 10.7 9.2 -1.5 **
 Some college 6.0 5.8 -0.2
 College 2.5 2.9 0.4
Work status
 Full time 4.1 3.4 -0.7 **
 Part time 9.4 8.0 -1.4
 Not working 18.3 16.5 -1.8 **
Insurance status
 Private 3.4 3.1 -0.3
 Public 30.3 26.8 -3.6 ***
 Uninsured 23.1 22.3 -0.9

Source: 2001 Current Population Survey Annual Demographic
Supplement.

* p < .05;

** p < .01;

*** p < .001;

n = 45,852 (weighted n = 57,800,000) except for education
n = 33,162 (44,090,000) and work status n = 32,982 (34,870,000).
COPYRIGHT 2005 Health Research and Educational Trust

Article Details
Title Annotation: Methods
Author: Davern, Michael; Rodin, Holly; Beebe, Timothy J.; Call, Kathleen Thiede
Publication: Health Services Research
Geographic Code: 1USA
Date: Oct 1, 2005
Words: 6825