
RELEASING THE TRAP: A METHOD TO REDUCE INATTENTION BIAS IN SURVEY DATA WITH APPLICATION TO U.S. BEER TAXES.

I. INTRODUCTION

Surveys are a mainstay in the social sciences, and choice experiments, in particular, have become a widely used tool to elicit consumer preferences in the health economics literature (de Bekker-Grob, Ryan, and Gerard 2012). Yet eliciting quality responses has become an increasingly difficult task (Curtin, Presser, and Singer 2005; Meyer, Mok, and Sullivan 2015). Individuals have limited capacities for processing information, making it reasonable for a survey respondent to inattentively complete a survey. In an effort to identify the most problematic participants, some researchers now include "screeners" or "trap questions" to identify which participants are most likely to be inattentive (Oppenheimer, Meyvis, and Davidenko 2009). Asking questions that have an unambiguously correct answer allows survey designers to identify who is and who is not paying attention during the survey. Between a quarter and a half of the respondents in previous surveys miss the trap question, indicating that they are not paying attention (Maniaci and Rogge 2014). The convention has been to delete these inattentive participants from the sample. However, this practice has the potential to restrict the study's representativeness and external validity (Aronow, Baron, and Pinson 2016). Furthermore, omitting up to half of a sample is likely to be costly. As such, there is a clear need for a method that might reduce participant inattention without inviting selection bias.

If inattention simply increased noise, the error term might exhibit higher variance, but parameters would remain unbiased. Unfortunately, research has shown that inattention can substantively bias policy-relevant estimates, making inattention bias an important issue for survey research (Malone and Lusk 2018a). In the case of choice experiments, inattentive survey respondents tend to pay less attention to price changes, resulting in higher willingness-to-pay estimates. In this article, we propose an easy-to-implement method to reduce the measurement error associated with inattention bias. Before the battery of choice questions, we ask a question with an obvious answer designed to "trap" inattentive respondents into incorrectly answering. If the participant misses the trap question, we gently nudge them to pay attention and provide them an opportunity to revise the incorrect response. Our results show that removing participants who do not revise their responses improves data quality. Additionally, we find that this simple reminder can improve subsequent policy recommendations without introducing additional bias.

By applying this method to an online discrete choice experiment for beer, we seek to contribute to an ongoing policy discussion. While alcohol consumption has been heavily studied in the health economics literature (e.g., Wagenaar, Salois, and Komro 2009 identified over a thousand demand elasticity estimates in the literature), most estimates generally focus on beer as a commodity. Moreover, previous approaches have relied on aggregate time-series data that are likely to suffer from problems associated with endogeneity. Only rarely have any segment-specific estimates been published (Toro-Gonzalez, McCluskey, and Mittelhammer 2014), making it difficult to determine the consequences of policies that exempt certain types of beer from taxation.

These shortcomings are particularly problematic for the beer market in the United States, where the craft beer segment has experienced rapid expansion. As such, the United States is home to more breweries today than ever before (Hahn 2016). While macrobrewers like Anheuser-Busch InBev and MillerCoors still comprise nearly three-quarters of the domestic market (Tremblay and Tremblay 2011), craft breweries comprise the largest growth area for the beer market. Policymakers across the country are considering changes to promote the burgeoning craft breweries; for example, the "Tax Cuts and Jobs Act of 2017" reduced the federal excise tax paid by craft brewers from $7 per barrel to $3.50 per barrel (Stein 2018). Those tax policy changes have the potential to substantially influence the size and structure of the craft beer market (Elzinga, Tremblay, and Tremblay 2015). A complete analysis of such public policies requires data on segment- or brand-specific own- and cross-price demand elasticities because some tax policies may induce substitution effects, which may counteract the intended effects of policies (i.e., to reduce the externalities associated with alcohol consumption). To date, few demand studies account for the likely differences between craft and macrosegments (Bray, Loomis, and Engelen 2009; Malone and Lusk 2018b; Toro-Gonzalez, McCluskey, and Mittelhammer 2014), and those that have generally focused on purchases for consumption at home rather than consumption away from home (i.e., at a restaurant or bar). Our choice experiment on beer builds on the existing literature by focusing on macro- and microbrands in an away-from-home setting, thereby adding policy-relevant information regarding alcohol taxes and the projection of the effects of such taxes on public health outcomes.

The overall objective of this article is to estimate the effects of changes in U.S. beer policy on alcohol consumption via a discrete choice experiment that includes a new method to reduce measurement error caused by inattention. In the following section, we provide a background on the literature relevant to inattention bias in surveys and then introduce some of the literature related to beer taxes. In the third section, we describe the data and methods in further detail, providing additional motivation for controlling for inattention, and explicitly describe how we use trap questions with feedback to minimize the consequences of inattention bias. Our aim is to identify a method that might encourage participants to consider their responses more attentively. Fourth, we discuss the results of our discrete choice experiment and implications for beer taxes. The final section concludes with a brief review of our findings and recommendations for future research.

II. BACKGROUND

While stated preference techniques can provide more control for making causal inferences, surveys are hampered by the quality of responses. This article explores a new approach that minimizes measurement error due to inattention bias. Related to this approach is the use of "cheap talk" and consequential scripts, which inform participants of the tendency to exaggerate willingness-to-pay (Cummings and Taylor 1999; Jacquemet et al. 2013). Cheap talk scripts can, in some instances, reduce hypothetical bias by reminding participants that their responses are consequential (List 2001; Lusk 2003). Unlike hypothetical bias, however, inattention is clearly and quickly identifiable at the individual level, and unlike cheap talk scripts, our method delivers a "prompt" only to those respondents who need it.

We focus on choice experiments, as they have become a popular way to determine impacts of consumer-focused policies (Bryan and Dolan 2004). Dozens of journal articles were published in health economics between 2001 and 2014 that utilized choice experiments, covering a wide range of topics (Clark et al. 2014). As choice experiments have become more common, so too have concerns regarding the validity of the method's results (Bryan and Dolan 2004; de Bekker-Grob, Ryan, and Gerard 2012). A common concern is that participants choose to ignore one or more attributes when deciding between alternatives. In some instances, participants might neglect an attribute because they are indifferent to one or all of the attributes or attribute levels (Carlsson 2011). It might also be that the nonattendance is not created by inattention, but rather, the participant simply has a dominant preference for a specific attribute level, making other attributes irrelevant (Scott 2002). Regardless of the cause, "attribute nonattendance" has the potential to bias policy parameters (Hole 2011). To adjust for these issues, some researchers recommend asking participants ex post whether they ignored any of the attributes (Hole, Kolstad, and Gyrd-Hansen 2013) while others infer attribute nonattendance econometrically (Hensher and Greene 2010). In this study, we focus on a related issue: that some proportion of participants inattentively answer entire survey questions (not just particular attributes), creating measurement error due to inattention bias. That is, we study inattention broadly--as whether someone pays attention to the survey itself--as opposed to the narrow way it has been studied in the attribute nonattendance literature, which confounds preferences for attributes (or lack thereof) with careless inattention.

A. Inattention

A main purpose of this article is to test a method for identifying inattention while preserving the external validity of the discrete choice experiment. Specifically, we test the effectiveness of providing feedback to individuals who miss "trap questions." (1) These types of questions are crafted to identify inattention on surveys, often classifying a third to a half of all participants as "satisficing" or inattentive (Berinsky, Margolis, and Sances 2014; Oppenheimer, Meyvis, and Davidenko 2009). A typical trap question seeks to identify participants who briefly skim a task's instructions by providing the real subdirections hidden within the larger overall instructions (e.g., a participant might be asked simply to check "strongly agree" to a particular item). Those who miss trap questions tend to be willing to pay more and are not as consistent in their responses, suggesting the possibility of higher error variance and even bias among people who miss trap questions (Gao, House, and Xie 2016; Jones, House, and Gao 2015; Malone and Lusk 2018a).

Often, the convention has been to delete these participants from the sample, as eliminating these observations has been shown to increase statistical power (Oppenheimer, Meyvis, and Davidenko 2009). However, this convention can prove problematic as data collection is costly, and throwing out responses is akin to throwing away money. Furthermore, deleting these participants has the potential to threaten the survey's external validity by biasing the survey sample (Berinsky, Margolis, and Sances 2014; Lancsar and Louviere 2006). Instead, we argue that there are at least two types of people who might misrespond to a trap question--the inattentive participant who is unconcerned with providing honest answers to the survey, and the inattentive participant who simply needs to be reminded to pay more attention. (2) Thus, we propose a simple approach that might "rescue" inattentive respondents: provide a simple prompt to people who miss a trap question requesting that they read more carefully.

B. Beer Taxes

An extensive literature has identified negative externalities associated with heavy drinking. Beer constitutes more than half of the ethanol consumed in America, making governments keenly interested in the way policies might influence consumption habits (LaVallee and Yi 2011). Additionally, beer has been identified as the alcoholic beverage most commonly consumed by binge drinkers (Naimi et al. 2007). Historically, restrictions on alcohol distribution have been used to minimize alcohol consumption (Fosdick and Scott 1933), but those restrictions have the potential to reduce business development (Malone and Lusk 2016) and subsequently raise concerns regarding corruption (Gohmann 2016).

Another classic response to these negative externalities is a Pigouvian tax (Cesur and Kelly 2014). By incorporating alcohol's negative externalities into the price of a pint, the consumption effects can be more directly targeted. Most research indicates that aggregate beer demand is generally price and income inelastic, making the beverage nearly "recession-proof" (Freeman 2001) and implying that beer taxes might be less effective than other methods of reducing consumption (Nelson 2014). Furthermore, beer-centric government policies must balance the benefits of moderate alcohol consumption against the negative externalities associated with alcohol abuse (Peters and Stringham 2006; Stringham and Pulan 2006). As noted by Tremblay and Tremblay (2005, 275), "the issue is more complex for alcoholic beverages, however, since moderate consumption generates positive externalities and excessive consumption generates negative externalities." By extension, taxes have the potential to reduce overall constituent welfare if moderate drinkers reduce or quit drinking alcoholic beverages and alcohol abusers continue to drink at the same rate.

Partially in response to these challenges, some policymakers have advocated modified tax structures that effectively impose minimum prices on alcoholic beverages (Craven, Marlow, and Shiers 2013; Ludbrook 2009). The notion of taxing alcohol via a minimum price mechanism even merited a special section of the journal Alcohol and Alcoholism (Callinan, Room, and Dietze 2015). European countries such as Scotland have already implemented a similar policy, and some have argued that the United States should follow suit (Brennan et al. 2015). There appears to be growing interest among American policymakers in this tax scheme given the aforementioned shift toward craft-beer consumption; however, as we will show, this sort of policy may have some unintended consequences.

Of primary interest to this article is the influence of tax policies on consumer choice. We evaluate the consequences of a minimum price per pint, as the effectiveness of this policy is likely to be influenced by substitution effects across beer brands. For this study, we determine the minimum price that would have to be set to reduce the beer consumed away from home in the United States by 1%. A choice experiment is appropriate to determine the effects of the policy as some beers would not be subject to this type of pricing policy. As such, substitution effects have the potential to mitigate the impacts of this policy, as consumers are likely to substitute toward similar products such as craft beer, which are not taxed as heavily. Assuming away substitution effects might indicate to policymakers that minimum pricing would more effectively reduce drinking habits than would be the case in reality. In the following section, we explain our data-collection and estimation methods.

III. DATA AND METHODS

We employ a discrete choice experiment for beer choice with the population of interest being beer drinkers in the United States. The discrete choice experiment has become increasingly popular in the health economics literature (de Bekker-Grob, Ryan, and Gerard 2012). While there are concerns about so-called hypothetical bias, previous research has shown that such bias is less of a problem when estimating marginal changes (Carlsson and Martinsson 2001; Lusk and Schroeder 2004). Moreover, prior research has shown that preferences measured via stated preference choice experiments are consistent with those inferred from revealed preference data (Hensher, Louviere, and Swait 1998; Louviere, Hensher, and Swait 2000), and by combining both types of data, improved predictions can be obtained (Brooks and Lusk 2010; Swait and Andrews 2003). Choice experiment data have also been shown to exhibit high levels of external validity (Chang, Lusk, and Bailey Norwood 2009). Choice experiments can complement the extant literature on beer demand relying on secondary data in several ways. Choice experiments provide increased control by avoiding potential context-specific confounds. By designing experiments in which attributes are uncorrelated with one another, the researcher also avoids endogeneity concerns and concerns about unobserved quality attributes. Finally, choice experiments allow the researcher to identify individual-specific information more clearly than secondary data.

A. Data

We utilized a simple "branded" choice experiment, where individuals chose between six different beer brands at a given set of prices. A main-effects orthogonal fractional factorial design was employed to assign prices to brands; the final design consisted of eight choice questions in which the price of each brand was uncorrelated with the price of other brands. Each person answered all eight choice questions (Figure 1). To control for effects related to beer type, all beers were lagers. Participants were given the following instructions: "We are interested in the types of beer you like to buy. Imagine you're at a bar or restaurant. In what follows, we will ask you 8 different choice questions, and in each question we would like to know which type of beer you most prefer when you buy a pint of beer." Participants chose between six randomly ordered beers: Miller Lite, Budweiser, Corona, Sam Adams Boston Lager, Marshall Old Pavilion Pilsner, and Oskar Blues Mama's Little Yella Pils at price combinations of $3 and $6, or respondents could choose "none." We included Miller Lite and Budweiser to represent domestic macrobreweries. As a proxy for import brands, we used Corona, as it is America's largest import (Tremblay and Tremblay 2011). We included Sam Adams Boston Lager as Sam Adams is the largest American craft brewery. Two smaller beer brands (Mama's Little Yella Pils from Oskar Blues Brewing in Colorado and Old Pavilion Pilsner from Marshall Brewing in Oklahoma) were included to represent microbreweries. Characteristics of the beers in the sample can be found in the Appendix.
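
To illustrate the design property described above (each brand's price uncorrelated with the prices of the other brands), the following minimal sketch constructs an eight-run, two-level orthogonal main-effects array and checks the pairwise correlations. It is a generic Python illustration, not the authors' design code; the $3 and $6 price levels are taken from the text and the column assignment is an assumption.

```python
import numpy as np
from itertools import product

# Base 2^3 full factorial in coded units (-1, +1): eight runs (choice tasks).
base = np.array(list(product([-1, 1], repeat=3)))
a, b, c = base[:, 0], base[:, 1], base[:, 2]

# An eight-run orthogonal array supports up to seven two-level factors; six
# columns (one per brand) are built from the base factors and their interactions.
design = np.column_stack([a, b, c, a * b, a * c, b * c])

# Map the coded levels to the two experimental prices ($3 and $6).
prices = np.where(design == -1, 3.0, 6.0)
brands = ["Miller Lite", "Budweiser", "Corona",
          "Sam Adams", "Oskar Blues", "Marshall"]

# Main-effects orthogonality: brand prices are pairwise uncorrelated
# across the eight choice tasks.
corr = np.corrcoef(prices, rowvar=False)
off_diagonal = corr[~np.eye(len(brands), dtype=bool)]
print(dict(zip(brands, prices.T.tolist())))   # price path of each brand
print(np.allclose(off_diagonal, 0.0))         # True
```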

We used a between-subject design to test for the effect of different trap question approaches. Participants were randomly assigned into a control group (N = 547), a group given a version of a trap question embedded in a Likert scale (N = 559), and a group given a long version of a trap question (N = 591). The first (embedded) trap question directed participants to select "High" on a Likert scale if they live in the United States (Figure 2A). If participants "straight-line" through the scale, they will be unlikely to see these directions, as the instructions originally ask the participant, "How would you rate your familiarity with each of the following brands?" For the second (long) trap question, we use a multiple-choice question where the true instructions direct the participant to click "None of the above" (Figure 2B). Inattentive participants would be more likely to skim the long instructions and then select their current emotional state. We hypothesize that this style of trap question will catch a higher number of inattentive participants than will the first trap question, as the cognitive effort required for a correct response is higher. Additionally, we hypothesize that a larger portion of participants will correctly revise their response to the long trap question once provided feedback, as a simple reading of the full instructions makes it clear which response is correct. By contrast, Likert scales are often tedious for participants, and requiring an already-inattentive participant to revise responses on a Likert scale is likely to make the process even more tedious. (3)

Instead of simply identifying survey participants who might be inattentive, we notified participants who responded incorrectly. Participants who responded incorrectly were given the following prompt: "You appear to have misunderstood the previous question. Please be sure to read all directions clearly before you respond." The respondent then had the chance to revise their answer to the trap question they missed.
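
The survey-flow logic is simple enough to summarize in a short sketch. The code below only illustrates the branching just described; the function and variable names are hypothetical and do not correspond to the survey platform's interface, and only the prompt wording comes from the text.

```python
# Illustrative sketch of the trap-question feedback flow; not the survey
# platform's actual branching logic. `ask_question` is a hypothetical callable
# that returns the respondent's answer.

PROMPT = ("You appear to have misunderstood the previous question. "
          "Please be sure to read all directions clearly before you respond.")

def administer_trap(ask_question, correct_answer):
    """Ask the trap question; if missed, show the prompt and allow one revision."""
    first = ask_question()
    if first == correct_answer:
        return {"attentive": True, "revised": False}
    print(PROMPT)                     # gentle nudge to the inattentive respondent
    second = ask_question()
    if second == correct_answer:
        return {"attentive": True, "revised": True}    # "untrapped"
    return {"attentive": False, "revised": False}      # persistently inattentive
```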

Data were collected in May 2015 with the population of interest being U.S. beer consumers. Participants were recruited online through the company Survey Sampling, International (SSI) and the survey was conducted using Qualtrics (N = 1,697). SSI recruits its panel of participants through a variety of means, including phone calls and online ads, and offers a nominal reward of approximately $1.50 per survey in gift cards. We utilized a screener question that eliminated nonbeer drinkers from the sample. Men make up 54.15% of our sample, and 42.07% of the participants are under 45 years of age. Nearly 15% of our participants live in a city of more than 1,000,000 residents, and 34.84% of the participants identified themselves as drinking craft beer at least once a week. Our data indicate millennials aged 21 to 34 tend to be most likely to drink craft beer frequently, while adults 55 years or older are not as likely to drink craft beer. (4)

B. Methods

We estimate a series of random parameter logit models to compare the choice behavior of participants who either correctly responded to the trap question or correctly revised their response with the behavior of participants who did not correctly revise their response. We define participant n's utility of selecting beer choice j in choice question s as:

(1) $U_{njs} = \alpha PRICE_{js} + \beta_{jn} + \epsilon_{njs}$

where $PRICE_{js}$ is the price of choice j in choice question s, $\alpha$ is the marginal (dis)utility of price, $\beta_{jn}$ indicates the utility of beer j relative to the "none" option, which is normalized to zero for identification purposes, and $\epsilon_{njs}$ is the unobserved portion of the utility function.

To account for heterogeneity, we estimate random parameter logit models. (5) Each of the "alternative-specific constants" $\beta_{jn}$ is assumed to follow a normal distribution with mean $\bar{\beta}_j$ and standard deviation $\sigma_j$, so that the alternative-specific constant can be written as $\beta_{jn} = \bar{\beta}_j + \lambda_{jn}\sigma_j$, where $\lambda_{jn}$ is a draw from the standard normal distribution (Train 2009). Assuming the $\epsilon_{njs}$ are independent and identically distributed Type I extreme value, the probability of individual n choosing beer j in choice question s is:

(2) $Prob_{njs} = \exp\left(\alpha PRICE_{js} + \bar{\beta}_j + \lambda_{jn}\sigma_j\right) / \sum_{k}\exp\left(\alpha PRICE_{ks} + \bar{\beta}_k + \lambda_{kn}\sigma_k\right)$, where the summation in the denominator runs over all alternatives in the choice set (the six beers and the "none" option).

The above probability statement contains the random terms $\lambda_{jn}$, which must be integrated out of the likelihood function. To do so, we use simulated maximum likelihood estimation with 1,000 Halton draws for each $\lambda_{kn}$.
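
As a rough illustration of Equation (2) and the simulation step, the sketch below computes simulated choice probabilities for a single choice task using Halton draws transformed to standard normals via scipy's quasi-Monte Carlo module. All parameter values are placeholders rather than our estimates; a full estimation routine would nest this calculation inside the simulated log likelihood and maximize over $\alpha$, $\bar{\beta}_j$, and $\sigma_j$.

```python
import numpy as np
from scipy.stats import norm, qmc

# Simulated choice probabilities for the random parameter logit in Equation (2).
# All parameter values below are placeholders, not the estimates in Tables 1-3.
J = 6                       # beer alternatives (plus the "none" option)
R = 1000                    # Halton draws per respondent
alpha = -0.5                # price coefficient (placeholder)
beta_bar = np.zeros(J)      # mean alternative-specific constants (placeholders)
sigma = np.ones(J)          # std. deviations of the random constants (placeholders)
price = np.array([3.5, 3.5, 5.0, 5.0, 5.25, 5.25])     # prices in one choice task

# Halton sequence mapped to standard normal draws, one column per alternative.
uniforms = qmc.Halton(d=J, scramble=True, seed=42).random(R)
lam = norm.ppf(np.clip(uniforms, 1e-12, 1 - 1e-12))     # shape (R, J)

# Utility of each beer under each draw; "none" is normalized to zero utility.
v = alpha * price + beta_bar + lam * sigma              # (R, J)
exp_v = np.exp(v)
denom = 1.0 + exp_v.sum(axis=1, keepdims=True)          # the 1 is exp(0) for "none"

# Averaging over draws gives the simulated probabilities used in the likelihood.
prob_beer = (exp_v / denom).mean(axis=0)
prob_none = (1.0 / denom).mean()
print(prob_beer, prob_none)
```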

In this study, we provided inattentive participants with the opportunity to revise their incorrect response, effectively "untrapping" themselves, which we interpret as a signal that they will be more attentive thereafter. As a prelude to the policy analysis, we explore the price sensitivity of attentive and inattentive respondents. In particular, we estimate the change in the probability of not purchasing a beer when all beer prices are increased 1% (percentage changes can be found in the Appendix). Base prices for each beer j are based on national averages generated by the consulting firm Restaurant Sciences, LLC (Jennings 2013), where the prices of Budweiser and Miller Lite are $3.50, Corona and Sam Adams are $5.00, and Marshall and Oskar Blues are $5.25.
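
For the price-sensitivity measure just described, the following minimal sketch computes the percentage change in the probability of "no purchase" from an across-the-board 1% price increase. For brevity it uses fixed placeholder coefficients rather than the random-parameter draws; only the base prices come from the text.

```python
import numpy as np

# Percent change in the probability of "no purchase" from a 1% increase in all
# beer prices. Coefficients are placeholders; base prices follow the text.
alpha = -0.5                                      # placeholder price coefficient
beta = np.array([1.5, 1.8, 2.0, 1.6, 0.8, 1.1])   # placeholder brand constants
base_prices = np.array([3.5, 3.5, 5.0, 5.0, 5.25, 5.25])

def prob_none(prices):
    v = alpha * prices + beta                     # "none" has utility zero
    return 1.0 / (1.0 + np.exp(v).sum())

p0 = prob_none(base_prices)
p1 = prob_none(base_prices * 1.01)                # across-the-board 1% increase
print(100 * (p1 - p0) / p0)                       # % change in P(no purchase)
```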

To estimate the consequences of minimum pricing legislation, we use the same base prices and consider several minimum price policies starting with a minimum price policy of $3.50/pint and working up (in $0.50 increments) to a minimum price policy of $5.50/pint. The outcome of interest is the effect of the policy on the probability of not choosing a beer (since the goal of the policy is to reduce overconsumption). Because craft beers are already more highly priced ex ante, minimum price policies will have differential effects on macro and craft beers. For example, a $5.00/pint minimum price policy increases the price of Bud and Miller from $3.50 to $5 (a 42.8% increase), but the policy does not affect the prices of the microbreweries (which are already above the minimum at $5.25). As a result, the policy will cause a shift away from macro to craft beer and not toward "no purchase" as the policy was perhaps intended. To derive standard errors on the probability of "no purchase," we follow the method outlined by Krinsky and Robb (1986) with 1,000 random draws.
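
The policy simulation and the Krinsky and Robb (1986) procedure can be sketched as follows: draw parameter vectors from the estimated asymptotic distribution, recompute the outcome of interest under each draw and each price floor, and take percentiles. The coefficients and covariance matrix below are placeholders, not the estimates in Tables 1-3, and the simple logit ignores the random-parameter heterogeneity for brevity; the craft share is reported alongside "no purchase" to foreshadow the substitution results discussed later.

```python
import numpy as np

# Sketch of the minimum-price simulation with Krinsky-Robb confidence intervals.
rng = np.random.default_rng(0)
theta_hat = np.array([-0.5, 1.5, 1.8, 2.0, 1.6, 0.8, 1.1])   # [alpha, 6 constants] (placeholders)
vcov = np.eye(7) * 0.01                                       # placeholder covariance matrix
base_prices = np.array([3.5, 3.5, 5.0, 5.0, 5.25, 5.25])
craft = np.array([False, False, False, True, True, True])     # Sam Adams, Oskar Blues, Marshall

def shares(theta, prices):
    v = theta[0] * prices + theta[1:]
    expv = np.exp(v)
    denom = 1.0 + expv.sum()                    # 1 = exp(0) for the "none" option
    return 1.0 / denom, expv[craft].sum() / denom     # P(none), P(craft)

draws = rng.multivariate_normal(theta_hat, vcov, size=1000)   # Krinsky-Robb draws
for floor in [3.5, 4.0, 4.5, 5.0, 5.5]:
    prices = np.maximum(base_prices, floor)     # prices below the floor rise to it
    sims = np.array([shares(d, prices) for d in draws])
    lo, hi = np.percentile(sims[:, 0], [2.5, 97.5])
    print(f"${floor:.2f} floor: P(none) = {sims[:, 0].mean():.3f} "
          f"[{lo:.3f}, {hi:.3f}], P(craft) = {sims[:, 1].mean():.3f}")
```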

IV. RESULTS

A total of 547, 559, and 591 participants were randomly assigned to the control, embedded trap question, and long trap question treatment groups, creating at least 4,300 individual choices per treatment. A chief concern about the inclusion of trap questions is the potential for widespread protest-like behavior, although trap questions themselves have not been found to bias responses (Berinsky, Margolis, and Sances 2014). It might be that the obvious nature of the trap question offends respondents, thereby confounding the results from the choice experiment. It might also be that our feedback has the potential to increase social desirability bias, as warning messages embedded within a choice experiment have been shown to do (Clifford and Jerit 2015). To address this concern, we tested whether the choice experiment model parameters were the same for the treatment and control groups in aggregate. Table 1 shows the random parameter logit estimation results for the control treatment and the full dataset. A likelihood ratio test fails to reject the null that the preference parameters are equal across the two treatments and the control, indicating that inclusion of the trap question, per se, does not significantly alter parameter estimates ($\chi^2_{df=26} = 18.6$, p value = .881). Another concern regarding the use of trap questions is that they might increase attrition rates. We do not find this to be the case. In fact, our findings indicate the opposite: across all treatments, 91.04% of all participants who opened the survey completed it, while the completion rate for the control group was only 80.6%.
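
The pooling test just described is a standard likelihood ratio comparison: fit the model once on the pooled data and once per group, then compare twice the difference in log likelihoods to a $\chi^2$ distribution with degrees of freedom equal to the number of additional parameters in the unrestricted fit. A minimal sketch with placeholder log likelihoods (not the fitted values reported in the tables):

```python
from scipy.stats import chi2

# Likelihood ratio test for equality of preference parameters across groups.
# The log likelihoods below are placeholders, not the fitted values in Table 1.
ll_pooled = -1000.0                   # model estimated on the pooled sample
ll_by_group = [-520.0, -470.0]        # model estimated separately for each group
k = 13                                # parameters per model (price + 6 means + 6 SDs)

lr_stat = 2 * (sum(ll_by_group) - ll_pooled)
df = k * (len(ll_by_group) - 1)       # extra parameters in the unrestricted fit
p_value = chi2.sf(lr_stat, df)
print(lr_stat, df, p_value)
```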

Signs and significance of parameter estimates in Table 1 are all as expected. In the full dataset, the average participant appears to derive the most utility from Corona or Sam Adams as the mean utilities are 3.306 and 3.442 (estimated relative to "none" which is normalized to zero for identification). There also appears to be substantial heterogeneity in preferences, as the estimated standard deviations are all over three. In the control group, the random parameter estimate of the mean preference for Oskar Blues was not statistically different from zero, although the estimated standard deviation for Oskar Blues is 4.437, indicating that approximately half of the control sample have a positive utility for the brand, and half negative.

A. Embedded Trap Question--Results

In the treatment with the trap question embedded in a Likert scale, 21.82% of the 559 participants incorrectly responded to the first iteration. When notified of their wrong response, 53 of the 122 participants who originally responded incorrectly changed their response to the correct answer. Relative to the control, a larger percentage of participants in this treatment actually completed the survey, as the completion rate for this treatment was 96.88%. Time to complete did not vary by attentiveness; participants who responded correctly to the trap question completed the survey in 19 minutes, 47 seconds, while participants who responded incorrectly took 19 minutes, 30 seconds. A likelihood ratio test indicates participants who correctly answered the embedded trap question the first time are not statistically different ($\chi^2_{df=13} = 20$, p value = .52) from the participants who correctly revised the embedded trap question after our prompt (estimates for these separate groups can be found in the Appendix), suggesting our prompt caused inattentive participants to become attentive.

Table 2 shows the parameter estimates for those in the embedded trap question treatment. Parameter estimates for persons who revised their responses were statistically different from those who did not revise their responses and were persistently inattentive ($\chi^2_{df=13} = 48.4$, p value < .001). We conclude that attentive and persistently inattentive participants exhibit different preferences. Estimated standard deviations are all smaller for the persistently inattentive participants than for the attentive participants, and parameter estimates for Corona, Sam Adams, and Marshall are all larger for attentive participants. Most importantly, the parameter estimate for price is significantly lower (more negative) for attentive participants (-0.613) than for inattentive participants (-0.348). This difference indicates that persistently inattentive participants on average have a lower marginal disutility for price.

B. Long Trap Question--Results

An even larger percentage of the sample completed the survey in the long trap question treatment than in the embedded trap question treatment, as the completion rate was 97.20%. Of the 591 participants randomly allocated to this treatment, 25.38% answered the long version of the trap question incorrectly. When notified of their wrong answer in the long trap question, 98 of the 150 originally incorrect responses were changed to the correct answer. Incorrect respondents took 13 minutes, 10 seconds on average to complete the survey while correct respondents took 15 minutes, 30 seconds, although this difference was not statistically significant (t-stat = 1.16, p value = .24). This group completed the survey in a notably shorter period as participants were not required to complete a Likert-type scale in the same fashion as the previous treatment. A likelihood ratio test indicates that parameter estimates for participants who correctly answered the long trap question the first time were still statistically different ($\chi^2_{df=13} = 183.2$, p value < .001) from those for the participants who correctly revised the long trap question after our prompt (estimates can be found in the Appendix). An additional likelihood ratio test indicates that participants who revised incorrectly and were persistently inattentive generated statistically different parameter estimates than did the participants who correctly revised their response to the long trap question after the prompt ($\chi^2_{df=13} = 49.4$, p value < .001). In other words, while participants who revised were still different from the participants who responded correctly the first time, they were also different from the persistently inattentive participants.

Table 3 shows the parameter estimates for those who missed the trap questions and did not revise to correct their wrong response (i.e., those who were persistently inattentive) versus those who did revise or correctly answered the first iteration of the trap question. Parameter estimates for persistently inattentive participants were statistically different from those who correctly revised their responses or correctly responded to the first iteration of the trap question ($\chi^2_{df=12} = 276.8$, p value < .001). Standard deviations are all larger for attentive participants than for inattentive participants, but there is no clear pattern in the differences between random parameter estimates. Most importantly, the parameter estimate for price is significantly lower (more negative) for attentive participants (-0.829) than for inattentive participants (-0.102). This difference again indicates that persistently inattentive participants appear to have a substantively lower marginal disutility for price. As a result, demand estimated from inattentive responses appears more inelastic than it actually is, potentially leading to different elasticities and policy recommendations.

C. Policy Implications

Inattention bias has the potential to significantly influence policy estimates (Malone and Lusk 2018a). Table 4 displays the elasticity of beer for each group of participants. Results from the control group are consistent with the previous literature, as the elasticity for beer in this treatment is -0.202%, which closely matches the demand elasticity estimated by Nelson (2014). Participants in the embedded trap question treatment were less price sensitive than those in the long trap question treatment. Both treatments, however, yield different elasticities than the control treatment; demand for beer was more elastic for correct respondents in both treatments than for the control group, even though the 95% confidence intervals of the control and embedded trap question treatments overlapped. Indeed, for both the embedded and long trap questions, the price increase affects persistently inattentive respondents less than correct respondents. Specifically, respondents who did not revise their answers in the embedded trap question treatment decrease their probability of drinking by only 0.094% in response to an across-the-board 1% increase in beer prices, compared to 0.182% for respondents who correctly answered or revised. For the long trap question, respondents who did not revise decreased their probability of drinking by only 0.013%, compared with 0.330% for attentive respondents. At this point, it is unclear why the attentive respondents in the long trap question treatment were more elastic than attentive respondents in the embedded trap question treatment. The key point is that the respondents who remained persistently inattentive even after the prompt were less price sensitive than the control group or the attentive respondents, and that the demand implied by the persistently inattentive respondents is much more inelastic, and therefore biased, in comparison with what has been estimated in previous literature.

We now turn toward estimating the effects of minimum price policies. Table 5 shows the probability of not selecting a beer for several minimum price policies. When only Budweiser and Miller Lite are priced at the minimum price of $3.50, we estimate that between 5.45% and 8.22% of the control group do not drink beer. After implementing the nudge and omitting persistently inattentive participants in the long trap question treatment, we find that up to 10.99% of our sample would not select a beer. By contrast, the embedded trap question treatment indicates that fewer people would change their drinking habits, although this estimate is not statistically different from the control group's.

While the intended effect of a minimum pricing policy is to reduce the amount of alcohol consumed, unintended consequences arise when there are substitution possibilities. Because all of the craft options in our experiment cost at least $5.00, a minimum beer price might cause some consumers to opt for a craft beer substitute whose price remains unaffected. Table 6 shows the increase in the purchase of craft beers (Sam Adams, Oskar Blues, and Marshall) as the minimum price increases in $0.50 increments. When the minimum price is $4.00, we would expect a 1% increase in the demand for craft beer. Contrast this with the change in the share choosing "none," for which a $5.00 minimum price would be required to achieve a similar change. Given that the stated goal of such a policy is to curb overconsumption of alcohol, this substitution toward craft beers is potentially problematic, as craft beers generally have more alcohol by volume (ABV) than their macro counterparts. As such, it is possible that this policy would reduce beer consumption but at the same time increase overall alcohol consumption.

V. CONCLUSION

Instead of generating quality responses, surveys can incentivize inattention by tying payment to survey completion. We contribute to the literature on survey quality by testing an ex ante method to deal with inattention in choice experiments through feedback. Although not all participants correctly revised their answers, those who did provided statistically different answers. While notifying participants that they incorrectly answered a trap question might not entice all misresponders to pay more attention, it does have the potential to identify the most problematic respondents in the sample. We take our results to mean that the answers of "untrapped" participants are more consistent with a thoughtful response.

Trap questions can be useful for identifying the inattentive participants who might introduce measurement error, yet not all trap questions are equally successful at identifying respondent inattention. Of the two most commonly used trap questions, the long trap question appears to be the more appropriate for our method. Rather than simply catching inattentive participants, we show that providing feedback to those who incorrectly respond to this trap question significantly alters parameter estimates. Nudging participants with a long trap question appears to deal most appropriately with inattention bias. Practitioners might benefit from taking this approach, as it shows potential to minimize measurement error in online surveys. Future research might consider alternative data-collecting methods as they have the potential to generate different parameter estimates (Mjelde, Kim, and Lee 2016). It is possible that reminding inattentive participants to pay attention will be incentive enough to reduce inattention bias, but we cannot conclusively claim that simply catching participants entirely eliminates participant inattention.

Even when we control for inattention, we would only expect slight changes in beer consumption habits from a tax policy. On the upper end, a 1% tax on all beers would not even lead to a 0.2% decrease in the number of beers demanded away from home. It would take a minimum beer price of at least $5.00 to reduce beer consumption by 2% from current levels. Even those changes are likely to be overestimated for heavy drinkers, as it is generally believed that binge drinkers are less sensitive to price than the general population. It is also important to remember that taxes are likely to have varying effects across socioeconomic groups. As such, a minimum tax proposal is likely to be inequitable: it has the potential to harm impoverished drinkers but is unlikely to have any effect on wealthier drinkers. Future research might consider changes in consumer behavior across socioeconomic characteristics.

Despite these shortcomings, the primary contribution of this paper is to encourage future researchers to more deeply consider how to obtain valid survey data. Although this method might not have achieved substantial changes in the elasticity estimates, the increasing concerns about survey response quality imply even marginal improvements are worthy of consideration. For example, rather than simply nudging inattentive participants, future studies might "punish" poor respondents by paying them less. Relatedly, the Mechanical Turk platform has implemented respondent rating systems, which have led to increases in attention (Hauser and Schwarz 2016). As such, in contrast to the likely ineffective tax analyzed in this article, this "tax" on survey participant inattention might actually be an efficient mechanism for improving survey quality.

APPENDIX

Table A1 displays the characteristics of the beers in the sample. In an effort to focus on brand-specific perceptions, all beers in the sample are lagers. Alcohol content ranged from 4.17% ABV (Miller Lite) to 5.30% ABV (Oskar Blues Mama's Little Yella Pils). Beer Advocate rates the Sam Adams Boston Lager as the highest quality option in our sample (86), while Corona and Miller Lite were rated as the lowest quality options (55). Note that the data shown in Table A1 on origin and ABV were not shown to participants and are reported here for informational purposes only. Participants were heterogeneous in their familiarity with the brands; as expected, they were most familiar with Budweiser and least familiar with Oskar Blues and Marshall. For a more thorough discussion of the role of perceptions in choice experiment methods, see Malone and Lusk (2018b).

By comparing models that allow for differences in scale variance, we can determine whether differences in models represent differences in preferences or differences in relative scale (Swait and Louviere 1993). Table A2 shows model estimates for the treatment with the embedded short trap question. The likelihood ratio test indicates that adding a scale parameter does not significantly improve the model's fit; however, dividing the dataset into two groups is a better fit. By conducting another likelihood ratio test, we determine that responses from those who incorrectly answered the trap question were statistically different from those who correctly answered the trap question the first time ($\chi^2_{df=1} = 126$, p value < .001). We conclude that participants who correctly and incorrectly responded to the trap question displayed different preferences from each other. The parameters that inflated were those for the most popular beer choices, while the parameters for the craft options actually converged toward zero. For example, the price parameter for the Marshall Pilsner falls (in absolute value) to -0.082 for incorrect respondents, making it not significantly different from zero at the 5% level.

Table A3 shows model estimates for the treatment with the long trap question. In this instance, the likelihood ratio test indicates that adding a relative scale parameter improves the goodness-of-fit. By conducting a likelihood ratio test, we determine that responses from those who incorrectly answered the trap question were statistically different from those who correctly answered the trap question the first time ($\chi^2_{df=12} = 145$, p value < .001). The results in this treatment follow a pattern similar to that exhibited by the short trap question treatment above, although more parameters lose their statistical significance. Five of the incorrect-participant model's 12 parameter estimates are not statistically different from zero. Again, parameter estimates for incorrect participants are not all higher than those for participants who responded correctly. For example, while the Budweiser parameter estimate is 1.621 for incorrect participants, it is 1.968 for correct participants. For Miller Lite, we observe a parameter of 1.821 for incorrect participants and a parameter of 1.167 for correct participants.

Table A4 shows the parameter estimates for those who missed the trap questions and did not revise to correct their wrong response versus those who did revise. When notified of their wrong response in the short trap question group, 53 of the 122 originally incorrect responses were changed to the correct answer. Parameter estimates for persons who revised their responses were statistically different from those who did not revise their responses ($\chi^2_{df=12} = 35$, p value < .001). For example, the Corona parameter nearly doubled when comparing revising participants with nonrevising participants. When notified of their wrong answer in the long trap question, 98 of the 150 originally incorrect responses were changed to the correct answer. Parameter estimates for participants who correctly revised their responses to the long trap question were statistically different from those who did not correctly revise their responses ($\chi^2_{df=12} = 32$, p value < .001). Those who did not revise their responses responded more randomly than those who did revise--fewer parameters were statistically significant at the 0.05 level. The participants who did revise their responses to the long trap question are statistically different from the participants who correctly responded the first time ($\chi^2_{df=12} = 46$, p value < .001), but the majority of the variation comes from those who did not revise their answers, as those participants were statistically different from the rest of the sample ($\chi^2_{df=12} = 83$, p value < .001).
TABLE A1
Characteristics of the Beers in the Sample

Beer                   Style                  Brewery

Budweiser              American Adjunct       Anheuser-Busch InBev
                         Lager
Miller Lite            Light Lager            Miller Brewing
                                                Company
Corona                 American Adjunct       Grupo Modelo
                         Lager
Sam Adams              Vienna Lager           Boston Beer Company
Mama's Little Yella    Czech Pilsner Lager    Oskar Blues Brewing
  Pils                                          Company
Old Pavilion Pilsner   German Pilsner Lager   Marshall Brewing
                                                Company

Beer                   Origin               IBUs (a)   ABV (b)

Budweiser              Missouri, USA           12       5.00%

Miller Lite            Wisconsin, USA          10       4.17%

Corona                 Mexico                  19       4.60%

Sam Adams              Massachusetts, USA      30       4.90%
Mama's Little Yella    Colorado, USA           35       5.30%
  Pils
Old Pavilion Pilsner   Oklahoma, USA           28       5.00%

                               Brand
Beer                      Familiarity (c)

Budweiser              2.43 [2.40, 2.47] (d)

Miller Lite              2.54 [2.50, 2.57]

Corona                   2.40 [2.37, 2.43]

Sam Adams                2.57 [2.54, 2.60]
Mama's Little Yella      1.34 [1.31, 1.37]
  Pils
Old Pavilion Pilsner     1.35 [1.32, 1.38]

(a) International Bitterness Units.

(b) Alcohol by Volume.

(c) Brand familiarity evaluated on a 3-point scale (3 = high
familiarity, 1 = low familiarity).

(d) Numbers in brackets are 95% confidence interval.

TABLE A2
MNL Model Estimates for the Embedded Trap Question Treatment

                                                   Relative Scale
Variable                           MNL               Parameter

Price of Miller Lite      -0.253 * (0.030) (a)  -0.231 * (0.030)
Price of Budweiser          -0.246 * (0.025)    -0.224 * (0.026)
Price of Corona             -0.226 * (0.023)    -0.203 * (0.024)
Price of Sam Adams Lager    -0.219 * (0.028)    -0.195 * (0.028)
Price of Oskar Blues        -0.334 * (0.049)    -0.306 * (0.047)
  Pilsner
Price of Marshall           -0.284 * (0.037)    -0.258 * (0.036)
  Pilsner
Miller Lite                  1.697 * (0.142)      1.524 * (0.161)
Budweiser                    2.124 * (0.124)      1.918 * (0.159)
Corona                       2.260 * (0.117)      2.037 * (0.161)
Sam Adams Lager              1.743 * (0.134)      1.555 * (0.158)
Oskar Blues Pilsner          1.080 * (0.208)      0.993 * (0.194)
Marshall Pilsner             1.412 * (0.166)      1.287 * (0.163)
Scale parameter (c)                               1.139 * (0.083)
Log likelihood                  -7,929.5             -7,928
AIC                               15,883               15,882
Number of observations            4,472                4,472
Number of participants             559                  559

                                 Correct             Incorrect
Variable                       Respondents          Respondents

Price of Miller Lite      -0.285 * (b) (0.038)  -0.197 * (0.053)
Price of Budweiser          -0.284 * (0.029)    -0.131 * (0.051)
Price of Corona             -0.231 * (0.026)    -0.215 * (0.055)
Price of Sam Adams Lager    -0.201 * (0.032)    -0.272 * (0.057)
Price of Oskar Blues        -0.370 * (0.055)    -0.215 * (0.104)
  Pilsner
Price of Marshall           -0.316 * (0.040)     -0.082 (0.098)
  Pilsner
Miller Lite                  1.626 * (0.171)      2.026 * (0.267)
Budweiser                    2.227 * (0.141)      1.828 * (0.262)
Corona                       2.328 * (0.130)      2.005 * (0.273)
Sam Adams Lager              1.605 * (0.153)      2.192 * (0.277)
Oskar Blues Pilsner          1.225 * (0.233)       0.590 (0.466)
Marshall Pilsner             1.632 * (0.179)       0.068 (0.468)
Scale parameter (c)
Log likelihood                   -6,158             -1,708.5
AIC                               12,340               3,441
Number of observations            3,496                 976
Number of participants             437                  122

Note: AIC, Akaike information criterion.

(a) Numbers in parentheses are standard errors.

(b) To compare these parameter estimates to the MNL with a relative
scale parameter, the parameters for the MNL with correct respondents
should be multiplied by the scale parameter value (1.139).

(c) Scale parameter is the scale of correct participant relative to
the incorrect participants, with the latter normalized to one.

* Designates statistical significance at the 5% level.

TABLE A3
MNL Model Estimates for the Long Trap Question Treatment

                                                   Relative Scale
Variable                           MNL               Parameter

Price of Miller Lite      -0.291 * (0.032) (a)  -0.230 * (0.029)
Price of Budweiser          -0.250 * (0.024)    -0.206 * (0.021)
Price of Corona             -0.266 * (0.023)    -0.215 * (0.022)
Price of Sam Adams Lager    -0.301 * (0.029)    -0.245 * (0.026)
Price of Oskar Blues        -0.416 * (0.057)    -0.347 * (0.049)
  Pilsner
Price of Marshall           -0.289 * (0.037)    -0.236 * (0.032)
  Pilsner
Miller Lite                  1.270 * (0.142)      0.970 * (0.129)
Budweiser                    1.801 * (0.113)      1.417 * (0.123)
Corona                       1.941 * (0.111)      1.523 * (0.126)
Sam Adams Lager              1.548 * (0.130)      1.206 * (0.126)
Oskar Blues Pilsner          0.661 * (0.231)      0.560 * (0.186)
Marshall Pilsner             0.910 * (0.164)      0.736 * (0.133)
Scale parameter (c)                               1.366 * (0.098)
Log likelihood                  -8,307.5            -8,296.5
AIC                               16,639               16,619
Number of observations            4,728                4,728
Number of participants             591                  591

                                 Correct             Incorrect
Variable                       Respondents          Respondents

Price of Miller Lite      -0.311 * (b) (0.038)  -0.254 * (0.059)
Price of Budweiser          -0.331 * (0.028)     -0.054 (0.044)
Price of Corona             -0.325 * (0.027)     -0.087 (0.046)
Price of Sam Adams Lager    -0.380 * (0.036)    -0.136 * (0.050)
Price of Oskar Blues        -0.526 * (0.075)    -0.213 * (0.093)
  Pilsner
Price of Marshall           -0.356 * (0.044)     -0.093 (0.074)
  Pilsner
Miller Lite                  1.167 * (0.167)      1.821 * (0.277)
Budweiser                    1.968 * (0.131)      1.621 * (0.232)
Corona                       2.093 * (0.126)      1.629 * (0.240)
Sam Adams Lager              1.644 * (0.156)      1.631 * (0.253)
Oskar Blues Pilsner          0.858 * (0.287)       0.625 (0.417)
Marshall Pilsner             1.086 * (0.187)       0.529 (0.356)
Scale parameter (c)
Log likelihood                   -6,104              -2,131
AIC                               12,232               4,286
Number of observations            3,528                1,200
Number of participants             441                  150

Note: AIC, Akaike information criterion.

(a) Numbers in parentheses are standard errors.

(b) To compare these parameter estimates to the MNL with a relative
scale parameter, the parameters for the MNL with correct respondents
should be multiplied by the scale parameter value (1.366).

(c) Scale parameter is the scale of correct participant relative to
the incorrect participants, with the latter normalized to one.

* Designates statistical significance at the 5% level.

TABLE A4
MNL Model for Participants Who Missed the Trap Question in the Beer
Choice Experiment

Variable                            Embedded Short Trap Question

                            Revised Correctly     Revised Incorrectly

Price of Miller Lite      -0.204 * (0.081) (a)   -0.191 * (0.070)
Price of Budweiser           -0.073 (0.075)      -0.180 * (0.070)
Price of Corona             -0.322 * (0.090)     -0.144 * (0.071)
Price of Sam Adams Lager    -0.318 * (0.087)     -0.237 * (0.076)
Price of Oskar Blues         -0.141 (0.121)       -0.420 (0.221)
  Pilsner
Price of Marshall            -0.012 (0.161)       -0.125 (0.125)
  Pilsner
Miller Lite                  2.630 * (0.438)        1.711 * (0.344)
Budweiser                    2.218 * (0.428)        1.692 * (0.342)
Corona                       2.949 * (0.456)        1.470 * (0.350)
Sam Adams Lager              2.997 * (0.449)        1.720 * (0.363)
Oskar Blues Pilsner          1.375 * (0.599)         0.370 (0.878)
Marshall Pilsner              0.172 (0.801)          0.070 (0.581)
Log likelihood                    -730                -960.5
AIC                               1,484                 1,945
Number of observations             424                    552
Number of participants              53                    69

Variable                              Long Trap Question

                           Revised Correctly   Revised Incorrectly

Price of Miller Lite       -0.301 * (0.074)     -0.167 (0.098)
Price of Budweiser         -0.058 (0.055)      -0.047 (0.073)
Price of Corona           -0.162 * (0.056)       0.088 (0.086)
Price of Sam Adams Lager   -0.115 (0.065)     -0.170 * (0.080)
Price of Oskar Blues      -0.293 * (0.129)     -0.110 (0.137)
  Pilsner
Price of Marshall          -0.168 (0.096)        0.019 (0.118)
  Pilsner
Miller Lite                 1.839 * (0.337)      1.922 * (0.497)
Budweiser                   1.421 * (0.284)      2.128 * (0.416)
Corona                      1.881 * (0.281)      1.067 * (0.480)
Sam Adams Lager             1.249 * (0.320)      2.434 * (0.433)
Oskar Blues Pilsner          0.606 (0.550)        0.908 (0.663)
Marshall Pilsner             0.595 (0.445)        0.636 (0.611)
Log likelihood                 -1,380               -735
AIC                              2,784                1,494
Number of observations            784                  416
Number of participants            98                   52

Note: AIC, Akaike information criterion.

(a) Numbers in parentheses are standard errors.

* Designates statistical significance at the 5% level.


ABBREVIATIONS

ABV: Alcohol by Volume

IBU: International Bitterness Unit

IMCs: Instructional Manipulation Checks

MNL: Multinomial Logit

SSI: Survey Sampling, International

doi: 10.1111/ecin.12706

REFERENCES

Aronow, P. M., J. Baron, and L. Pinson. "A Note on Dropping Experimental Subjects Who Fail a Manipulation Check." Unpublished manuscript. 2016.

de Bekker-Grob, E. W., M. Ryan, and K. Gerard. "Discrete Choice Experiments in Health Economics: A Review of the Literature." Health Economics, 21(2), 2012, 145-72.

Berinsky, A. J., M. Margolis, and M. Sances. "Separating the Shirkers from the Workers? Making Sure Participants Pay Attention on Internet Surveys." American Journal of Political Science, 58(3), 2014, 739-53.

Bray, J. W., B. R. Loomis, and M. Engelen. "You Save Money When You Buy in Bulk: Does Volume-Based Pricing Cause People to Buy More Beer?" Health Economics, 18(5), 2009, 607-18.

Brennan, A., P. Meier, R. Purshouse, R. Rafia, M. Yang, D. Hill-Macmanus, C. Angus, and J. Holmes. "The Sheffield Alcohol Policy Model--A Mathematical Description." Health Economics, 24(10), 2015, 1368-88.

Brooks, K., and J. L. Lusk. "Stated and Revealed Preferences for Organic and Cloned Milk: Combining Choice Experiment and Scanner Data." American Journal of Agricultural Economics, 92(4), 2010, 1229-41.

Bryan, S., and P. Dolan. "Discrete Choice Experiments in Health Economics: For Better or for Worse?" European Journal of Health Economics, 5(3), 2004, 199-202.

Callinan, S., R. Room, and P. Dietze. "Alcohol Price Policies as an Instrument of Health Equity: Differential Effects of Tax and Minimum Price Measures." Alcohol and Alcoholism, 50(6), 2015, 629-30.

Carlsson, F. "Non-Market Valuation: Stated Preference Methods," in The Oxford Handbook of the Economics of Food Consumption and Policy, edited by J. L. Lusk, J. Roosen, and J. Shogren. New York: Oxford University Press, 2011, 181-214.

Carlsson, F., and P. Martinsson. "Do Hypothetical and Actual Marginal Willingness to Pay Differ in Choice Experiments? Application to the Valuation of the Environment." Journal of Environmental Economics and Management, 41(2), 2001, 179-92.

Cesur, R., and I. R. Kelly. "Who Pays the Bar Tab? Beer Consumption and Economic Growth in the United States." Economic Inquiry, 52(1), 2014, 477-94.

Chang, J. B., J. L. Lusk, and F. Bailey Norwood. "How Closely Do Hypothetical Surveys and Laboratory Experiments Predict Field Behavior?" American Journal of Agricultural Economics, 91(2), 2009, 518-34.

Clark, M. D., D. Determann, S. Petrou, D. Moro, and E. W. de Bekker-Grob. "Discrete Choice Experiments in Health Economics: A Review of the Literature." PharmacoEconomics, 32(9), 2014, 883-902.

Clifford, S., and J. Jerit. "Do Attempts to Improve Participant Attention Increase Social Desirability Bias?" Public Opinion Quarterly, 79(3), 2015, 790-802.

Craven, B. M., M. L. Marlow, and A. F. Shiers. "The Economics of Minimum Pricing for Alcohol." Economic Affairs, 33(2), 2013, 174-89.

Cummings, R. G., and L. O. Taylor. "Unbiased Value Estimates for Environmental Goods: A Cheap Talk Design for the Contingent Valuation Method." American Economic Review, 89(3), 1999, 649-65.

Curtin, R., S. Presser, and E. Singer. "Changes in Telephone Survey Nonresponse over the Past Quarter Century." Public Opinion Quarterly, 69(1), 2005, 87-98.

Elzinga, K. G., C. H. Tremblay, and V. J. Tremblay. "Craft Beer in the United States: History, Numbers, and Geography." Journal of Wine Economics, 10(3), 2015, 242-74.

Fosdick, R. B., and A. L. Scott. Toward Liquor Control. New York: Harper and Brothers, 1933.

Freeman, D. G. "Beer and the Business Cycle." Applied Economics Letters, 8(1), 2001, 51-54.

Gao, Z., L. A. House, and J. Xie. "Online Survey Data Quality and Its Implication for Willingness-To-Pay: A Cross-Country Comparison." Canadian Journal of Agricultural Economics, 64(2), 2016, 199-221.

Gohmann, S. F. "Why Are There So Few Breweries in the South?" Entrepreneurship Theory and Practice, 40(5), 2016, 1071-92.

Hahn, F. "America Now Has More Breweries than Ever. And That Might Be a Problem." Washington Post, January 18, 2016.

Hauser, D. J., and N. Schwarz. "Attentive Turkers: MTurk Participants Perform Better on Online Attention Checks than Do Subject Pool Participants." Behavior Research Methods, 48(1), 2016, 400-7.

Hensher, D. A., and W. H. Greene. "Non-Attendance and Dual Processing of Common-Metric Attributes in Choice Analysis: A Latent Class Specification." Empirical Economics, 39(2), 2010, 413-26.

Hensher, D., J. Louviere, and J. Swait. "Combining Sources of Preference Data." Journal of Econometrics, 89(1), 1998, 197-221.

Hole, A. R. "A Discrete Choice Model with Endogenous Attribute Attendance." Economics Letters, 110(3), 2011, 203-5.

Hole, A. R., J. R. Kolstad, and D. Gyrd-Hansen. "Inferred vs. Stated Attribute Non-Attendance in Choice Experiments: A Study of Doctors' Prescription Behaviour." Journal of Economic Behavior & Organization, 96, 2013, 21-31.

Jacquemet, N., R.-V. Joule, S. Luchini, and J. F. Shogren. "Preference Elicitation under Oath." Journal of Environmental Economics and Management, 65(1), 2013, 110-32.

Jennings, L. "Alcohol Prices Rise at Restaurants, Bars." Restaurant News, May 24, 2013.

Jones, M. S., L. A. House, and Z. Gao. "Participant Screening and Revealed Preference Axioms: Testing Quarantining Methods for Enhanced Data Quality in Web Panel Surveys." Public Opinion Quarterly, 79(3), 2015, 687-709.

Krinsky, I., and A. L. Robb. "On Approximating the Statistical Properties of Elasticities." Review of Economics and Statistics, 68(4), 1986, 715-19.

Lancsar, E., and J. Louviere. "Deleting 'Irrational' Responses from Discrete Choice Experiments: A Case of Investigating or Imposing Preferences?" Health Economics, 15(8), 2006, 797-811.

LaVallee, R. A., and H.-y. Yi. "Apparent Per Capita Alcohol Consumption: National, State, and Regional Trends, 1977-2009." Surveillance Report No. 92. Bethesda, MD: National Institute on Alcohol Abuse and Alcoholism, 2011.

List, J. A. "Do Explicit Warnings Eliminate the Hypothetical Bias in Elicitation Procedures? Evidence from Field Auctions for Sportscards." American Economic Review, 91(5), 2001, 1498-507.

Louviere, J. J., D. A. Hensher, and J. D. Swait. Stated Choice Methods: Analysis and Applications. Cambridge: Cambridge University Press, 2000.

Ludbrook, A. "Minimum Pricing of Alcohol." Health Economics, 18(12), 2009, 1357-60.

Lusk, J. L. "Effects of Cheap Talk on Consumer Willingness-to-Pay for Golden Rice." American Journal of Agricultural Economics, 85(4), 2003, 840-56.

Lusk, J. L., and T. C. Schroeder. "Are Choice Experiments Incentive Compatible? A Test with Quality Differentiated Beef Steaks." American Journal of Agricultural Economics, 86(2), 2004, 467-82.

Malone, T., and J. L. Lusk. "Brewing Up Entrepreneurship: Government Intervention in Beer." Journal of Entrepreneurship and Public Policy, 5(3), 2016, 325-42.

--. "Consequences of Participant Inattention with an Application to Carbon Taxes for Meat Products." Ecological Economics, 145, 2018a, 218-30.

--. "A Simple Diagnostic Measure of Inattention Bias in Discrete Choice Models." European Review of Agricultural Economics, 45(3), 2018b, 455-62.

--. "If You Brew It, Who Will Come? Market Segments in the American Beer Market." Agribusiness: An International Journal, 34(2), 2018c, 204-21.

Maniaci, M. R., and R. D. Rogge. "Caring about Carelessness: Participant Inattention and Its Effects on Research." Journal of Research in Personality, 48, 2014, 61-83.

Meyer, B. D., W. K. C. Mok, and J. X. Sullivan. "Household Surveys in Crisis." Journal of Economic Perspectives, 29(4), 2015, 1-29.

Mjelde, J. W., T.-K. Kim, and C.-K. Lee. "Comparison of Internet and Interview Survey Modes When Estimating Willingness to Pay Using Choice Experiments." Applied Economics Letters, 23(1), 2016, 74-7.

Naimi, T. S., R. D. Brewer, J. W. Miller, C. Okoro, and C. Mehrotra. "What Do Binge Drinkers Drink? Implications for Alcohol Control Policy." American Journal of Preventive Medicine, 33(3), 2007, 188-93.

Nelson, J. P. "Estimating the Price Elasticity of Beer: Meta-Analysis of Data with Heterogeneity, Dependence, and Publication Bias." Journal of Health Economics, 33, 2014, 180-87.

Oppenheimer, D., T. Meyvis, and N. Davidenko. "Instructional Manipulation Checks: Detecting Satisficing to Increase Statistical Power." Journal of Experimental Social Psychology, 45(4), 2009, 867-72.

Peters, B. L., and E. Stringham. "No Booze? You May Lose: Why Drinkers Earn More Money than Nondrinkers." Journal of Labor Research, 27(3), 2006, 411-21.

Scott, A. "Identifying and Analysing Dominant Preferences in Discrete Choice Experiments: An Application in Health Care." Journal of Economic Psychology, 23(3), 2002, 383-98.

Stein, J. "Republicans Say They've Slashed Taxes on Small Breweries. But Big Alcohol May Be the Biggest Winners." Washington Post: Wonkblog Analysis, January 3, 2018.

Stringham, E., and I. Pulan. "Evaluating Economic Justifications for Alcohol Restrictions." American Journal of Economics and Sociology, 65(4), 2006, 971-90.

Swait, J., and R. L. Andrews. "Enriching Scanner Panel Models with Choice Experiments." Marketing Science, 22(4), 2003, 442-60.

Swait, J., and J. Louviere. "The Role of the Scale Parameter in the Estimation and Comparison of Multinomial Logit Models." Journal of Marketing Research, 30(3), 1993, 305-14.

Toro-Gonzalez, D., J. J. McCluskey, and R. C. Mittelhammer. "Beer Snobs Do Exist: Estimation of Beer Demand by Type." Journal of Agricultural and Resource Economics, 39(2), 2014, 174-87.

Train, K. Discrete Choice Methods with Simulation. Cambridge: Cambridge University Press, 2009.

Tremblay, V. J., and C. H. Tremblay. The US Brewing Industry: Data and Economic Analysis. Cambridge, MA: MIT Press, 2005.

Tremblay, C. H., and V. J. Tremblay. "Recent Economic Developments in the Import and Craft Segments of the U.S. Brewing Industry," in The Economics of Beer, edited by J. Swinnen. Oxford: Oxford University Press, 2011, 141-60.

Wagenaar, A. C., M. J. Salois, and K. A. Komro. "Effects of Beverage Alcohol Price and Tax Levels on Drinking: A Meta-Analysis of 1003 Estimates from 112 Studies." Addiction, 104(2), 2009, 179-90.

Malone: Assistant Professor, Department of Agricultural, Food, and Resource Economics, Michigan State University, East Lansing, MI 48824. Phone 316-990-6182, Fax 517-432-1800, E-mail tmalone@msu.edu

Lusk: Distinguished Professor and Department Head, Department of Agricultural Economics, Purdue University, West Lafayette, IN 47907. Phone 765-494-4191, Fax 765-494-4191, E-mail jlusk@purdue.edu

(1.) Trap questions are synonymous with "validation" or "red herring" questions. Additionally, "screeners" or Instructional Manipulation Checks (IMCs) are a specific type of trap question where a participant is instructed to ignore the response format and select a specific answer (Berinsky, Margolis, and Sances 2014; Oppenheimer, Meyvis, and Davidenko 2009). For clarity, we refer to them throughout as trap questions.

(2.) We argue that these are the two most likely types of inattentive participants. It is possible that there are additional types of malrespondents. For example, some participants might be unwilling to answer a question honestly, which could create the impression that the participant is inattentive.

(3.) Rather than using trap questions, practitioners might also consider the "Random Response Share" as an alternative method for identifying inattentive respondents in discrete choice models (Malone and Lusk 2018b).

(4.) For a thorough discussion of survey participants, see Malone and Lusk (2018c).

(5.) It could be that differences between coefficients across treatments are due to differences in error variance (or scale) rather than differences in underlying preferences (Swait and Louviere 1993). As such, we also estimated models that allow for differences in scale across treatments, but determined that estimating separate models fit the data better. Results from these tests, along with standard multinomial logit (MNL) models, are included in the Appendix.
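The scale comparison mentioned in footnote 5 can be illustrated with a minimal sketch (in Python; not the authors' code). It fits a basic conditional logit to each treatment separately, then fits a pooled model while grid-searching over the relative scale of the second treatment in the spirit of Swait and Louviere (1993), and compares log likelihoods. The data arrays X_a, y_a, X_b, y_b and the attribute count k are hypothetical placeholders, and the conditional logit is a simplified stand-in for the random parameter logit used in the article.

import numpy as np
from scipy.optimize import minimize

def clogit_loglik(beta, X, y):
    # X: (n_choice_sets, n_alternatives, n_attributes); y: index of chosen alternative
    v = X @ beta                                     # systematic utilities
    p = np.exp(v - v.max(axis=1, keepdims=True))     # numerically stable logit probabilities
    p /= p.sum(axis=1, keepdims=True)
    return np.log(p[np.arange(len(y)), y]).sum()

def fit_clogit(X, y, k):
    res = minimize(lambda b: -clogit_loglik(b, X, y), np.zeros(k), method="BFGS")
    return res.x, -res.fun                           # coefficients, maximized log likelihood

def pooled_with_scale(X_a, y_a, X_b, y_b, k, scale_grid):
    # Grid search over the relative scale of treatment B (Swait-Louviere style):
    # multiplying B's attributes by mu rescales B's systematic utilities by mu.
    best = (-np.inf, None, None)
    for mu in scale_grid:
        X = np.concatenate([X_a, mu * X_b])
        y = np.concatenate([y_a, y_b])
        beta, ll = fit_clogit(X, y, k)
        if ll > best[0]:
            best = (ll, mu, beta)
    return best                                      # (log likelihood, relative scale, coefficients)

# Hypothetical comparison (X_a, y_a, X_b, y_b assumed to exist):
# ll_a = fit_clogit(X_a, y_a, k)[1]
# ll_b = fit_clogit(X_b, y_b, k)[1]
# ll_p, mu_hat, _ = pooled_with_scale(X_a, y_a, X_b, y_b, k, np.linspace(0.2, 3.0, 29))
# lr = 2 * ((ll_a + ll_b) - ll_p)                    # compare to a chi-squared critical value
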
TABLE 1
Random Parameter Logit Estimates for Treatments and All Data

                                 Control         Long Trap Question

Mean of random parameter
  Budweiser                1.812 * (0.276) (a)    2.044 * (0.287)
  Miller Lite                0.841 * (0.332)      1.451 * (0.331)
  Corona                     3.063 * (0.256)      3.282 * (0.218)
  Sam Adams                  4.111 * (0.234)      3.390 * (0.227)
  Oskar Blues               -0.479 (0.375)        0.311 (0.336)
  Marshall                   0.673 * (0.317)      1.341 * (0.284)

Nonrandom parameter
  Price                    -0.659 * (0.024)     -0.563 * (0.021)

Standard deviation of random parameter
  Budweiser                  4.035 * (0.287)      4.059 * (0.320)
  Miller Lite                4.752 * (0.402)      3.607 * (0.252)
  Corona                     3.885 * (0.249)      3.276 * (0.215)
  Sam Adams                  3.675 * (0.229)      3.620 * (0.245)
  Oskar Blues                4.437 * (0.348)      3.122 * (0.266)
  Marshall                   4.044 * (0.304)      3.053 * (0.234)

No. of observations               4,376                4,472
No. of respondents                 547                  559
Log likelihood                 -4,727.9             -5,004.5

                           Embedded Trap Question     Full Dataset

Mean of random parameter
  Budweiser                   1.892 * (0.297)       1.976 * (0.170)
  Miller Lite                 1.009 * (0.354)       1.206 * (0.184)
  Corona                      2.530 * (0.276)       3.306 * (0.135)
  Sam Adams                   3.504 * (0.243)       3.442 * (0.129)
  Oskar Blues                  0.103 (0.365)        0.508 * (0.172)
  Marshall                     0.668 (0.357)        0.785 * (0.190)

Nonrandom parameter
  Price                      -0.692 * (0.024)     -0.634 * (0.013)

Standard deviation of random parameter
  Budweiser                   4.468 * (0.296)       4.007 * (0.162)
  Miller Lite                 4.157 * (0.307)       4.198 * (0.180)
  Corona                      4.351 * (0.238)       3.674 * (0.139)
  Sam Adams                   4.095 * (0.278)       4.032 * (0.140)
  Oskar Blues                 3.674 * (0.403)       3.159 * (0.136)
  Marshall                    3.851 * (0.300)       3.834 * (0.171)

No. of observations                4,728                 13,576
No. of respondents                  591                  1,697
Log likelihood                   -5,082.9            -14,824.6

(a) Numbers in parentheses are standard errors.

* Designates statistical significance at the 5% level.

TABLE 2

Embedded Trap Question Random Parameter Logit Results

                                 Correct             Correct or
                               Respondents       Revised Respondents

Mean of random parameter
  Budweiser                1.253 * (0.322) (a)     1.062 * (0.392)
  Miller Lite                1.656 * (0.331)       1.336 * (0.336)
  Corona                     3.363 * (0.277)       2.759 * (0.345)
  Sam Adams                  3.696 * (0.290)       3.990 * (0.311)
  Oskar Blues                 0.161 (0.351)       -0.118 (0.448)
  Marshall                   1.657 * (0.267)       1.286 * (0.317)

Nonrandom parameter
  Price                    -0.635 * (0.026)     -0.613 * (0.024)

Standard deviation of random parameter
  Budweiser                  5.411 * (0.450)       4.917 * (0.429)
  Miller Lite                3.693 * (0.303)       4.302 * (0.334)
  Corona                     3.442 * (0.262)       3.861 * (0.371)
  Sam Adams                  4.249 * (0.357)       3.660 * (0.277)
  Oskar Blues                3.365 * (0.301)       3.715 * (0.347)
  Marshall                   3.234 * (0.284)       3.168 * (0.256)

No. of observations               3,496                 3,920
No. of respondents                 437                   490
Log likelihood                 -3,784.7             -4,274.3

                           Respondents Who       Correctly
                            Did Not Revise        Revised

Mean of random parameter
  Budweiser                1.908 * (0.454)    2.835 * (0.617)
  Miller Lite              1.712 * (0.562)    2.960 * (0.787)
  Corona                   2.225 * (0.433)    3.661 * (0.626)
  Sam Adams                1.757 * (0.557)    3.014 * (0.628)
  Oskar Blues              -0.596 (0.647)    -0.989 (1.639)
  Marshall                 -0.996 (0.959)     0.525 (0.858)

Nonrandom parameter
  Price                   -0.348 * (0.049)  -0.435 * (0.062)

Standard deviation of random parameter
  Budweiser                2.570 * (0.493)    2.318 * (0.499)
  Miller Lite              3.164 * (0.514)    3.789 * (0.754)
  Corona                   2.195 * (0.374)    2.969 * (0.559)
  Sam Adams                3.025 * (0.544)    2.991 * (0.603)
  Oskar Blues              1.297 * (0.360)    3.712 * (1.118)
  Marshall                 3.126 * (0.787)    2.120 * (0.517)

No. of observations              552                424
No. of respondents                69                 53
Log likelihood                 -706.0            -479.6

                             Incorrectly
                              Responded

Mean of random parameter
  Budweiser                2.335 * (0.378)
  Miller Lite              2.223 * (0.436)
  Corona                   2.702 * (0.376)
  Sam Adams                2.271 * (0.373)
  Oskar Blues              -1.014 (0.934)
  Marshall                 -0.607 (0.751)

Nonrandom parameter
  Price                   -0.381 * (0.039)

Standard deviation of random parameter
  Budweiser                2.291 * (0.337)
  Miller Lite              3.117 * (0.457)
  Corona                   2.492 * (0.308)
  Sam Adams                3.027 * (0.507)
  Oskar Blues              3.374 * (0.811)
  Marshall                 2.960 * (0.530)

No. of observations              976
No. of respondents               122
Log likelihood                -1,188.5

(a) Numbers in parentheses are standard errors.

* Designates statistical significance at the 5% level.

TABLE 3
Long Trap Question Random Parameter Logit Results

                                 Correct             Correct or
                               Respondents       Revised Respondents

Mean of random parameter
  Budweiser                2.289 * (0.395) (a)     1.692 * (0.398)
  Miller Lite                1.155 * (0.535)       1.425 * (0.390)
  Corona                     4.085 * (0.350)       3.232 * (0.280)
  Sam Adams                  4.585 * (0.360)       4.094 * (0.253)
  Oskar Blues                 0.793 (0.470)       -0.366 (0.551)
  Marshall                    0.785 (0.484)        0.935 * (0.368)

Nonrandom parameter
  Price                    -1.055 * (0.038)     -0.829 * (0.028)

Standard deviation of random parameter
  Budweiser                  5.859 * (0.442)       5.246 * (0.544)
  Miller Lite                5.565 * (0.485)       4.691 * (0.345)
  Corona                     5.323 * (0.387)       4.962 * (0.285)
  Sam Adams                  5.560 * (0.423)       4.362 * (0.266)
  Oskar Blues                3.669 * (0.270)       4.417 * (0.479)
  Marshall                   5.631 * (0.407)       4.508 * (0.385)

No. of observations               3,528                 4,312
No. of respondents                 441                   539
Log likelihood                 -3,328.5             -4,348.8

                           Respondents Who Did      Correctly
                               Not Revise            Revised

Mean of random parameter
  Budweiser                  1.812 * (0.464)     1.408 * (0.718)
  Miller Lite                1.652 * (0.419)     1.038 * (0.498)
  Corona                     2.135 * (0.469)     1.644 * (0.535)
  Sam Adams                  1.509 * (0.552)     2.299 * (0.525)
  Oskar Blues                0.898 * (0.425)     -0.753 (0.707)
  Marshall                  -0.419 (0.700)      -0.148 (0.677)

Nonrandom parameter
  Price                    -0.102 * (0.047)    -0.324 * (0.043)

Standard deviation of random parameter
  Budweiser                  2.232 * (0.461)     3.145 * (0.586)
  Miller Lite                1.201 * (0.316)     2.948 * (0.508)
  Corona                     2.231 * (0.412)     4.282 * (0.636)
  Sam Adams                  2.204 * (0.441)     3.310 * (0.581)
  Oskar Blues                0.955 * (0.334)     3.024 * (0.645)
  Marshall                   2.692 * (0.726)     2.708 * (0.485)

No. of observations                416                 784
No. of respondents                 52                   98
Log likelihood                  -595.7              -928.7

                             Incorrectly
                              Responded

Mean of random parameter
  Budweiser                2.014 * (0.353)
  Miller Lite              0.907 * (0.315)
  Corona                   1.788 * (0.347)
  Sam Adams                1.871 * (0.365)
  Oskar Blues               0.504 (0.345)
  Marshall                 -0.362 (0.471)

Nonrandom parameter
  Price                   -0.227 * (0.032)

Standard deviation of random parameter
  Budweiser                2.330 * (0.313)
  Miller Lite              2.523 * (0.283)
  Corona                   3.414 * (0.364)
  Sam Adams                2.823 * (0.372)
  Oskar Blues              1.746 * (0.326)
  Marshall                 2.792 * (0.661)

No. of observations             1,200
No. of respondents               150
Log likelihood                -1,549.1

(a) Numbers in parentheses are standard errors.

* Designates statistical significance at the 5% level.

TABLE 4

Beer Elasticities Derived by Estimating the Percent Change in the
Probability Chosen Given a 1% Increase in the Price of All the Beers

                                     Control           Embedded TQ         Long TQ

All respondents                      -0.202%             -0.141%            -0.268%
                                 [-0.253, -0.162]    [-0.18, -0.109]    [-0.321, -0.216]

Correct respondents                                      -0.183%            -0.503%
                                                     [-0.236, -0.139]   [-0.641, -0.378]

Correct or revised respondents                           -0.182%            -0.330%
                                                     [-0.232, -0.143]   [-0.408, -0.252]

Respondents who did not revise                           -0.094%            -0.013%
                                                     [-0.152, -0.057]   [-0.031, -0.001]

Notes: Estimates are based on the random parameter logit models; attentive
participants are those who either correctly responded to or revised the trap
question. Numbers in brackets are 95% confidence intervals as calculated by
the method derived by Krinsky and Robb (1986). TQ, trap question.
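The Krinsky and Robb (1986) intervals referenced in the note to Table 4 can be illustrated with a minimal sketch (not the authors' code): draw parameter vectors from the estimated asymptotic normal distribution, recompute the elasticity for each draw, and take percentile bounds. The simple logit with a zero-utility no-purchase option, the diagonal covariance matrix, and the commented placeholder values are illustrative assumptions; the reported results come from the random parameter logit estimates.

import numpy as np

rng = np.random.default_rng(0)

def prob_any_beer(beta_price, alphas, prices):
    # P(choose some beer) in a simple logit with a no-purchase option whose utility is 0
    v = alphas + beta_price * prices
    s = np.exp(v).sum()
    return s / (1.0 + s)

def krinsky_robb_elasticity(theta_hat, vcov, prices, n_draws=1000):
    # Percent change in P(choose a beer) from a 1% increase in all prices,
    # with percentile bounds taken over draws from the asymptotic distribution.
    draws = rng.multivariate_normal(theta_hat, vcov, size=n_draws)
    out = np.empty(n_draws)
    for i, th in enumerate(draws):
        beta_price, alphas = th[0], th[1:]
        p0 = prob_any_beer(beta_price, alphas, prices)
        p1 = prob_any_beer(beta_price, alphas, prices * 1.01)
        out[i] = 100.0 * (p1 - p0) / p0
    return out.mean(), np.percentile(out, [2.5, 97.5])

# Hypothetical usage (rounded placeholder values, not the reported estimates):
# theta_hat = np.r_[-0.6, 2.0, 1.2, 3.3, 3.4, 0.5, 0.8]   # price coefficient, brand constants
# vcov = np.diag(np.full(7, 0.2)) ** 2                    # diagonal covariance as a simplification
# prices = np.array([3.00, 3.00, 4.50, 4.50, 6.00, 6.00])
# point, ci = krinsky_robb_elasticity(theta_hat, vcov, prices)
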

TABLE 5
The Probability of Not Selecting a Beer for Alternative Minimum Price
Policies for the Attentive Participants

                                             Embedded TQ
Min.        Beers at the
Price       Minimum Price       Control Group   Correct or Revise

$3.50   Budweiser,                  6.70%             6.49%
        Miller Lite             [5.45, 8.22]      [5.19, 8.06]
$4.00   Budweiser,                  7.20%             6.91%
        Miller Lite             [5.83, 8.82]      [5.55, 8.59]
$4.50   Budweiser,                  7.71%             7.34%
        Miller Lite             [6.24, 9.42]      [5.93, 9.10]
$5.00   Budweiser,                  8.20%             7.75%
        Miller Lite,            [6.65, 9.99]      [6.30, 9.60]
        Sam Adams, Corona
$5.50   Budweiser,                  9.88%             9.24%
        Miller Lite,            [8.07, 11.97]     [7.53, 11.32]
        Sam Adams, Corona,
        Oskar Blues, Marshall

                                 Embedded TQ         Long TQ
Min.        Beers at the
Price       Minimum Price        No Revision    Correct or Revise

$3.50   Budweiser,                  6.31%             8.75%
        Miller Lite              [4.21,9.30]      [6.95, 11.15]
$4.00   Budweiser,                  6.71%             9.49%
        Miller Lite             [4.45, 9.84]      [7.54, 12.01]
$4.50   Budweiser,                  7.12%            10.24%
        Miller Lite             [4.76, 10.36]     [8.18, 12.90]
$5.00   Budweiser,                  7.52%            10.99%
        Miller Lite,            [5.03, 10.91]     [8.81, 13.75]
        Sam Adams, Corona
$5.50   Budweiser,                  8.46%            13.29%
        Miller Lite,            [5.67, 12.46]    [10.82, 16.49]
        Sam Adams, Corona,
        Oskar Blues, Marshall

                                  Long TQ
Min.        Beers at the
Price       Minimum Price       No Revision

$3.50   Budweiser,                 2.96%
        Miller Lite             [1.70,4.97]
$4.00   Budweiser,                 3.01%
        Miller Lite             [1.73,5.05]
$4.50   Budweiser,                 3.07%
        Miller Lite             [1.76,5.15]
$5.00   Budweiser,                 3.12%
        Miller Lite,            [1.79, 5.26]
        Sam Adams, Corona
$5.50   Budweiser,                 3.26%
        Miller Lite,            [1.87,5.50]
        Sam Adams, Corona,
        Oskar Blues, Marshall

Notes: Numbers in brackets are 95% confidence intervals calculated
using 1,000 draws from the method derived by Krinsky and Robb (1986).
TQ, trap question.
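The minimum-price simulations summarized in Table 5 can likewise be illustrated with a minimal sketch (not the authors' code): any menu price below the policy floor is raised to the floor, and the probability of the no-purchase option is recomputed. A simple logit with hypothetical brand constants stands in for the estimated random parameter logit, and Krinsky-Robb draws as sketched above would supply the confidence intervals.

import numpy as np

def prob_no_purchase(beta_price, alphas, prices, min_price):
    # P(choose none of the beers) after imposing a price floor:
    # prices below the floor rise to the floor; others are unchanged.
    floored = np.maximum(prices, min_price)
    v = alphas + beta_price * floored
    return 1.0 / (1.0 + np.exp(v).sum())        # no-purchase utility normalized to zero

# Hypothetical menu (Budweiser, Miller Lite, Corona, Sam Adams, Oskar Blues, Marshall):
# prices = np.array([3.00, 3.00, 4.50, 4.50, 6.00, 6.00])
# alphas = np.array([2.0, 1.2, 3.3, 3.4, 0.5, 0.8])
# for floor in (3.50, 4.00, 4.50, 5.00, 5.50):
#     print(floor, prob_no_purchase(-0.6, alphas, prices, floor))
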

TABLE 6
Increase in the Market Share for Craft Beer (a) Given Alternative
Minimum Price Policies for the Attentive Participants

                                           Embedded TQ
Min.    Beers at the
Price   Minimum Price           Control Group   Correct or Revise

$3.50   Budweiser,                 13.47%            11.87%
        Miller Lite             [8.66, 19.19]     [8.46, 15.79]
$4.00   Budweiser,                 14.00%            12.80%
        Miller Lite             [9.08, 19.83]     [9.20, 17.76]
$4.50   Budweiser,                 14.52%            13.35%
        Miller Lite             [9.44, 20.47]     [9.62, 18.44]
$5.00   Budweiser,                 15.00%            13.92%
        Miller Lite,            [9.84,21.07]     [10.12, 19.05]
        Sam Adams, Corona
$5.50   Budweiser,                 15.48%            14.31%
        Miller Lite,            [10.21,21.65]    [10.38, 19.56]
        Sam Adams, Corona,
        Oskar Blues, Marshall

                                 Embedded TQ         Long TQ
Min.    Beers at the
Price   Minimum Price            No Revision    Correct or Revise

$3.50   Budweiser,                  6.53%            12.14%
        Miller Lite             [3.52, 11.28]     [8.59, 16.77]
$4.00   Budweiser,                  6.81%            12.76%
        Miller Lite             [3.71, 11.70]     [9.03, 17.43]
$4.50   Budweiser,                  7.12%            13.32%
        Miller Lite             [3.87, 12.09]     [9.48, 18.10]
$5.00   Budweiser,                  7.40%            13.83%
        Miller Lite,            [4.04, 12.50]     [9.90, 18.73]
        Sam Adams, Corona
$5.50   Budweiser,                  7.63%            14.24%
        Miller Lite,            [4.20, 12.86]    [10.15, 19.20]
        Sam Adams, Corona,
        Oskar Blues, Marshall

                                   Long TQ
Min.    Beers at the
Price   Minimum Price            No Revision

$3.50   Budweiser,                 13.85%
        Miller Lite             [8.89, 19.79]
$4.00   Budweiser,                 14.00%
        Miller Lite             [8.98,20.11]
$4.50   Budweiser,                 14.17%
        Miller Lite             [9.18, 20.34]
$5.00   Budweiser,                 14.33%
        Miller Lite,            [9.31, 20.56]
        Sam Adams, Corona
$5.50   Budweiser,                 14.53%
        Miller Lite,            [9.44, 20.78]
        Sam Adams, Corona,
        Oskar Blues, Marshall

Notes: Numbers in brackets are 95% confidence intervals derived by
the method outlined in Krinsky and Robb (1986). TQ, trap question.

(a) Craft beer options in the sample are Sam Adams Boston Lager,
Oskar Blues Mama's Little Yella Pils, and Marshall Old Pavillion
Pilsner.
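The craft-share calculations behind Table 6 follow the same logic; a minimal sketch (not the authors' code, reusing the hypothetical menu from the Table 5 sketch) compares the craft brands' share of beer choices before and after the price floor is imposed. Because the exact reporting convention of Table 6 is not restated here, the sketch simply returns both shares so either a level or a change can be read off.

import numpy as np

def craft_share(beta_price, alphas, prices, craft_idx):
    # Craft brands' share of beer purchases, conditional on choosing a beer
    expv = np.exp(alphas + beta_price * prices)
    return expv[craft_idx].sum() / expv.sum()

def craft_share_under_floor(beta_price, alphas, prices, craft_idx, min_price):
    # Craft share before and after raising sub-floor prices to the minimum price
    before = craft_share(beta_price, alphas, prices, craft_idx)
    after = craft_share(beta_price, alphas, np.maximum(prices, min_price), craft_idx)
    return before, after

# craft_idx = [3, 4, 5]   # Sam Adams, Oskar Blues, Marshall in the hypothetical menu above
# before, after = craft_share_under_floor(-0.6, alphas, prices, craft_idx, 4.50)
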

FIGURE 1

Example Choice Experiment Question

Given these options, which beer would you choose?

Miller Lite    $3.00
Corona Extra   $3.00
Budweiser      $3.00
Yella Pils     $6.00
Marshall       $6.00
Samuel Adams   $3.00

I would choose none of these.

FIGURE 2
(A) Short Trap Question Embedded in a List and (B) Long Trap Question

A

How would you rate your familiarity with each of the following brands?

                                        Low   Medium   High

Dogfish Head 90 Minute IPA              ()      ()      ()

If you live in the U.S., select High.   ()      ()      ()

Bell's Amber Ale                        ()      ()      ()

New Glarus Spotted Cow                  ()      ()      ()

B

Recent research on decision making shows that choices are affected by
context.

Differences in how people feel, their previous knowledge and
experience, and their environment can affect choices. To help us
understand how people make decisions, we are interested in
information about you. Specifically, we are interested in whether you
actually take the time to read the directions; if not, some results
may not tell us very much about decision making in the real world. To
show that you have read the instructions, please ignore the question
below about how you are feeling and instead check only the "none of
the above" option as your answer. Please click on the word that
describes how you are currently feeling.

Note: Twenty different emotions are listed after the question, with
the final option being "none of the above."