Investigating the impact of inquiry mode on self-reported sexual behavior: theoretical considerations and review of the literature

In 1941, Alfred Kinsey began one of the first large-scale studies of sexual behavior in modern scientific history--an effort that was met with substantial resistance at the time (Bullough, 1998). Many of the early questions raised regarding Kinsey's research focused on whether sexual behavior should be studied at all. Kinsey forged on, despite his critics, and in the 70 years following this groundbreaking effort, sexual behavior research has grown exponentially, as has the recognition of its benefit to society. The emergence of the AIDS epidemic and increasing rates of other sexually transmitted infections (STIs), along with a greater awareness of the impact of sexual violence, have convincingly demonstrated the importance of a scientific understanding of sexual behavior for protecting public health. Sexual behavior research has also provided insight into the prevalence and correlates of positive sexual functioning, sexual pleasure, and sexual satisfaction, as well as the importance of sexuality in the lives and relationships of men and women across the lifespan. Today, far more people recognize the value of sexual behavior research; however, controversy remains as to how these behaviors are best studied.
One such point of controversy, which can be traced back to the early work of Kinsey, is how to ask people about their sexual behavior. Kinsey himself advocated for the necessity of face-to-face interviewing, as he saw self-administered paper-and-pencil questionnaires as an invitation for dishonest responding (Bullough, 1998). Researchers today share Kinsey's concerns about the accuracy of self-report data collected through self-administered questionnaires and face-to-face interviews, although the issue has become considerably more complex. Modern technology has given rise to a range of methods, or modes of inquiry, in self-report data collection, as researchers have begun capitalizing on the ubiquity of telephones, computers, and the Internet. These rapid advances have provided researchers with broad and affordable access to large and diverse samples of research participants but have done little to resolve questions regarding the accuracy of reporting and have made it even more difficult to establish equivalence across modes of inquiry.
Although the questions regarding inquiry mode and the accuracy of self-reported sexual data date all the way back to the early stages of sexual behavior research, they are by no means merely academic or inconsequential. Sexual behavior research is one of the most directly applied research areas in existence today, with applications ranging from informing public policy and legal statutes in the United States to evaluating the efficacy of AIDS interventions in sub-Saharan Africa (e.g., Bloom et al., 2000). Given these broad applications, it is essential that sexual behavior researchers be aware of the degree to which methodological decisions may impact their findings and, in turn, the application of those findings to policies, laws, and clinical settings.
Measuring Sexual Behavior
Much as Kinsey did in the 1940s, researchers today rely heavily on self-reports in studying sexual behaviors, although such reports are inherently problematic. One of the major limitations of any self-report is a dependence on accurate recollection and accurate reporting of the targeted behavior by participants. Recognizing this major limitation, many researchers have sought to identify alternative methods of data collection to serve as a point of comparison for self-report data.
For example, biomarkers, such as the presence of semen in women's urine, have been suggested as points of comparison for particular self-reported behaviors, such as engagement in intercourse (Langhaug, Sherr, & Cowan, 2010). However, major restrictions inherent to the applicability of such markers (e.g., measures of semen in urine are applicable only to women engaging in unprotected vaginal intercourse with a man) greatly limit their utility. Further, even biological tests with high levels of specificity are bound to generate high rates of false positives, particularly in large, epidemiological studies of low-prevalence behaviors (Hamilton & Morris, 2010). Biomarkers also fail to capture information related to private attitudes and beliefs relating to sexual behavior, which are often essential to understanding and impacting decision making surrounding sexual behavior (e.g., MacPhail & Campbell, 2001).
Due to the inherent limitations of the other methods available, self-report remains the dominant method of sexual behavior data collection. As such, a focus on optimizing self-report methodology has the greatest potential for improving sexual behavior research as a whole. Complex cognitive processes are involved in providing self-report responses. Participants must appraise the meaning of questions, search their memory for appropriate responses, and provide those responses in the correct answer format (e.g., Schwarz, 1999). Research has provided important insight into the degree to which question formats impact the self-report answers that participants provide (e.g., for a review, see Sudman, Bradburn, & Schwarz, 1996). Beyond question format, methodological researchers have also acknowledged the importance of the context in which self-report questions are asked; this context, however, has received somewhat less research attention.
In regard to sexual behavior, self-report is not a unitary methodology but rather a broad category encompassing a range of inquiry modes employed in an effort to capture attitudes and behaviors. Each of these modes engenders a unique contextual environment that has the potential to impact a participant's responding. Reviewing the most frequently used modes of inquiry and the impact these modes may have on participants' responding elucidates the current understanding of an important methodological consideration in sexual behavior research.
Modes of Self-Report
Self-report modes of inquiry all elicit responses from participants about thoughts, behaviors, or experiences that have happened in the past. However, the means by which these responses are elicited vary greatly. Classically, researchers had few modes of inquiry from which to choose (Knapp & Kirk, 2003). Perhaps the first option considered for self-report data collection was the face-to-face interview, in which a researcher would sit down with a participant and ask him or her questions. For a long time, the only available alternative to the face-to-face interview was the paper-and-pencil questionnaire, in which participants were prompted by the text on a page to provide responses.
Modern technology has since provided a number of alternative iterations of the classic modes of inquiry. Interviews can now be conducted over the phone--a process that preserves some of the human contact of face-to-face interviews while offering participants a greater sense of confidentiality. Some researchers have further limited the experimenter contact involved in phone-based interviews by allowing participants to respond to questions using buttons on a touch-tone phone, rather than verbalizing their responses (Knapp & Kirk, 2003). Further, paper-and-pencil surveys are now frequently replaced by computer-based surveys, sometimes called computer-assisted self-interviews (CASI), which can be completed either on site at a research facility or in a location of the participant's choosing, using the Internet as a means of data collection. In addition, an audio computer-assisted self-interview (A-CASI) can be used to effectively replace a human interviewer and actively question participants using a prerecorded audio component, negating any literacy requirements (Knapp & Kirk, 2003). More recently, mobile technologies, such as personal digital assistants (PDAs) and smartphones, have provided researchers with other convenient alternatives to the paper-and-pencil questionnaire (e.g., Vannier & O'Sullivan, 2008). These modern inquiry modes share many common elements with the more traditional forms of self-report but also provide additional benefits, including easier data entry, more precision in the delivery of complex instructions, the ability to use skip patterns to avoid unnecessary or redundant questions, and the ability to enhance the representativeness of a sample through strategies such as random digit dialing. Further, because of the elements unique to each mode of self-report, each may be differentially impacted by the sources of bias to which all forms of self-report research are potentially vulnerable.
Participation Bias in Sexual Behavior Research
Much like any other social science research, sexual behavior research is largely dependent on the ability of researchers to recruit a representative sample of a desired population. Given the personal nature of sexual behavior and the wide range of views on the appropriateness of discussing such behaviors, sex researchers have reason to be particularly concerned with participation bias--the systematic decision by certain types of individuals to seek out or avoid participation in a study (Catania, Gibson, Chitwood, & Coates, 1990). Given the unique demands of various inquiry modes, there exists the potential for differential levels of sampling or participation bias across the various modes of inquiry.
Survey-based research, such as that most commonly conducted in sexual behavior studies, is largely dependent on contacting individuals to solicit participation in a study. Depending on the mode of data collection, participation may involve showing up on site, mailing back a survey, logging on to a computer, or answering a telephone. The unique demands of various modes may give certain types of participants incentives or disincentives to participate; given that potential participants will have varying levels of motivation, these demands will likely impact which individuals ultimately agree to take part.
There are indications that modes of inquiry have a substantial impact on the response rate of targeted participants. An accumulation of research suggests that Web-based surveys on a variety of topics yield response rates approximately 11% lower than other modes of data collection, such as on-site interviews or paper-and-pencil questionnaires (Manfreda et al., 2008). There also are indications that these disparities grow larger when Web-based participation is solicited through non-computer-based methods, such as postal mail.
Concerns with lower rates of participation in Web-based research are tempered by research reviews indicating that Web-based samples are typically more diverse than traditional samples with respect to gender, socioeconomic status, geographic location, and age (Gosling, Vazire, Srivastava, & John, 2004). Further, Web-based samples appear to be relatively equivalent to traditional samples with respect to race. It is also worth noting that, although early critics of Web-based research suggested that participants may be particularly psychologically dysfunctional or maladjusted, a number of studies provide evidence that counters this assumption (Gosling et al., 2004).
More pertinent to sexual behavior research is the relationship between response rates to Internet surveys and the types of questions being asked. It would be problematic for online sex research if a certain subset of the population refused to participate due to concerns about providing information about their private behaviors through such a medium. However, if people decide not to participate in online survey research regardless of question content (i.e., due to disinterest in research participation, lack of familiarity with the Internet, or some other nonsystematic factor external to the survey topic), the concerns for sex researchers, specifically, might not be as great. There is some indication that response rates to Web-based surveys are not significantly related to question sensitivity, suggesting that individuals do not appear to be self-selecting out of studies to avoid answering sensitive questions (Cook, Heath, & Thompson, 2000).
Although participants do not seem deeply concerned with question sensitivity, there are several factors that impact a decision to participate in Web-based research. Two factors that appear to be particularly important in the decision-making process are saliency and confidentiality (Tourangeau & Yan, 2007). In other words, participants are more motivated to participate in a Web-based study if they believe the topic is relevant to them as individuals and are reasonably confident that their confidentiality will be protected. It is not clear to what extent these factors are specific to participation in Web-based research; relevance and confidentiality are likely to be important considerations for individuals who are invited to participate in other modes of data collection, too. Although it seems unlikely that mode of inquiry would have much of an impact on perceived relevance, it is reasonable to expect differences across modes in terms of perceived confidentiality (or even anonymity).
It is worth noting that some researchers have suggested methods for increasing relevance within Web-based research. For example, in relation to personality research, it has been proposed that offering participants feedback immediately following completion of a study may increase their motivation to participate fully (Gosling et al., 2004). Although the type of feedback that might be useful or appropriate in sexual behavior research is somewhat less clear than that provided by personality researchers, this strategy may still be useful in certain circumstances.
Sources of Bias within Self-Reports of Sexual Behavior
Beyond recruiting a representative sample, researchers must also work to obtain accurate data from those who volunteer to participate. The accuracy of self-report data is dependent on participants' ability to correctly remember behaviors in which they previously engaged and their willingness to accurately report those behaviors to examiners. Although this challenge is not unique to sexual behavior research, the cultural expectations tied to sexual behavior, along with the highly variable frequencies of targeted behaviors, make sexual behavior reports particularly vulnerable to biases that threaten accurate reporting (Schroder, Carey, & Vanable, 2003). Although many factors are involved, much of the distortion in reported sexual behavior can be captured under three main sources of bias. Recall, or the ability of a participant to accurately remember the frequencies of behaviors over a variable span of time, is one of the major potential sources of bias. Another important source of bias is social desirability, or an individual's motivation to be viewed in a favorable light. A final issue is item response rates. Much like social desirability, item response bias largely relates to the social pressures surrounding sexual behavior. Many individuals may avoid specific questions about sexual behavior by omitting responses, responding randomly, or repeatedly providing "zero" or "I don't know" responses. These three sources of bias have the potential to distort the results of sexual behavior research and, although entirely eliminating these confounds is impossible, steps may be taken to minimize their impact. One possible way to reduce bias may be through a careful consideration of mode of inquiry--a methodological component that has been linked to each of these three sources of bias.
Accuracy and Bias: Evaluating Sexual Behavior Research
Bias, as it is typically understood in research, is a systematic deviation from the "truth." As such, a conversation about bias requires some consideration of what is meant by truth or accuracy in the context of sexual behavior research. In terms of self-report, accuracy is dependent on a participant's ability to recall past behaviors or attitudes and their willingness to report those behaviors to researchers. Both of these factors play an important role in the information that is ultimately reported, and both are potentially vulnerable to various sources of bias.
The issues of accurate recollection and reporting are not unique to sex research, but they are particularly relevant to it. Recall is particularly important in that sex researchers are often interested in the number of times participants have engaged in a specific behavior over a specific period of time. When participants are less than perfect in their efforts to recollect that information, the possibility of memory bias is introduced, wherein some external factor (e.g., recency or salience of the event or frequency of the behavior) could systematically impact which behaviors are recalled. Even when participants are able to accurately recall information, the possibility remains that they may elect not to share that information with researchers. Given the sensitive nature of sexual behaviors and attitudes, participants may feel intruded upon or may provide responses that conform to various social expectations.
One of the greatest challenges in sexual behavior research is addressing the issue of accuracy. As mentioned earlier, social policies, education programs, and intervention efforts worldwide are dependent on the information gained through sex research, and the stakes associated with significant inaccuracies are very high. As such, researchers are continually striving to provide more accurate measures of sexual behavior to better inform these applications. However, without a "gold standard" measurement technique to serve as a point of comparison, it is difficult to gauge the accuracy of a measure (e.g., Schroder et al., 2003). Sex researchers often operate on the "more is better" principle, assuming that measures or techniques that elicit higher rates of reported sexual behavior are getting closer to the actual rates at which the behavior took place (Tourangeau & Yan, 2007). This assumption is based on the observation that sexual behavior is personal, private, and sometimes socially unacceptable or embarrassing, and, as such, participants are more likely to underreport than overreport behaviors (e.g., Catania et al., 1990; Gillmore, Leigh, Hoppe, & Morrison, 2010). Although researchers have developed a number of ways of estimating participants' behaviors "more accurately," it is important to note that any comparisons made between retrospective reports are estimates and cannot establish true accuracy.
A great deal of research has focused on question content, examining how factors such as question wording, types of questions, ordering, and specific instructions can impact the accuracy of responses being provided (e.g., Schwarz, 1999; Sudman et al., 1996). This body of research consistently suggests that individual question content and wording and the broader organization of questionnaires have a significant impact on participant responding. Researchers have been developing techniques to select item wording in a manner that minimizes the aforementioned sources of bias (e.g., Catania et al., 1996). Further, although a thorough review of item content considerations is beyond the scope of this article, a wealth of research exists to guide researchers through the process of developing the language of individual questions, selecting response fields or choices, and ordering questions within a survey in such a manner as to promote thoughtful responding to self-report questions (e.g., Dillman, 1999; Sudman & Bradburn, 1983; Sudman et al., 1996). The relationship between inquiry mode and participant responding has been studied less than the relationship between item content and accuracy, and more research on mode and responding is needed.
Accurate Recollection of Sexual Behavior
Sexual behavior researchers routinely ask participants to look back at their own behaviors over long periods of time--sometimes a participant's entire lifespan--a challenging task that requires participants to compute the frequency or number of specific behaviors, such as episodes of unprotected vaginal intercourse or unique sexual partners. Inevitably, some participants make mistakes in recollection. However, it is difficult to determine the extent of these memory errors without a more reliable point of comparison.
One method for promoting accurate recall is to minimize the recall period. This is often accomplished through the use of daily behavioral diaries, which can provide a near-instant self-report by having participants log relevant behaviors in a diary soon after they happen, minimizing the potential for forgetting or misremembering a behavior. However, one drawback of this method is the possibility that involvement in a diary-based study may lead participants to shift their behavior, possibly in an effort to become more congruent with social expectations or their own values. One example of such a shift was seen in a study examining mode-dependent effects in a sample of Hispanic college students (Schroder, Johnson, & Wiebe, 2007). The study sought to compare a number of inquiry modes with a daily diary. Researchers found that female participants began reporting significantly less sexual behavior as the study progressed--a trend that was interpreted to be an indication of reduced behavior in response to daily diary monitoring.
Despite some indications of behavior change associated with the mode of data collection, daily diaries have frequently been used as a point of comparison for more traditional retrospective self-report measures because of their potential to minimize memory bias. One such study found substantial discrepancies between daily diaries and retrospective reports made three months later (McAuliffe, DiFranceisco, & Reed, 2007). Based on comparisons with daily diaries, results indicated that participants both overreported and underreported sexual behaviors in the retrospective condition, although the majority of participants reported fewer behaviors overall in the retrospective condition as compared to the daily diary condition. Participants in the study tended to report fewer sexual partners, less intercourse, and less unprotected sex on retrospective measures when compared to daily diaries. The discrepancies in reports were substantial, as exemplified by a 31% mean difference between diary and retrospective reports for number of sexual partners. Such differences call into question the degree of confidence placed in retrospective self-reports. Other researchers have found similar results when comparing daily diaries and retrospective reports (e.g., Gillmore et al., 2010; Graham, Catania, Brand, Duong, & Canchola, 2002).
Although some degree of memory inaccuracy is unavoidable, there is some indication that the mode of retrospective self-report can modestly impact consistency between diaries and later reporting of sexual behavior. One such example comes from a study examining adult sexual behavior using several different modes of retrospective self-reports--including paper-and-pencil questionnaire, CASI, and A-CASI--and comparing them to reports made using a daily diary technique (McAuliffe et al., 2007). The results from the study indicated that participants in the CASI and A-CASI conditions made retrospective reports that were somewhat more consistent with daily diaries than participants in the paper-and-pencil questionnaire condition. Given that previous research has consistently found daily diaries to be less vulnerable to memory bias than retrospective reports, the greater consistency observed between diaries and CASI-based retrospective reports may suggest that participants in the computer conditions were less impacted by memory bias than those in the paper-and-pencil conditions. More research is needed to confirm these findings and expand them to other modes of self-report data collection.
Research to date provides strong support for recall as an important factor in the degree to which sexual behavior is accurately reported but has not advanced far enough to establish specific methodological guidelines to minimize memory bias. There is also some indication that, for certain populations, computer-based modes of inquiry may increase respondents' comfort with and interest in questionnaires, which, in turn, may increase their motivation to complete the surveys (Vannier & O'Sullivan, 2008). It is possible that this increased motivation to participate translates into a greater willingness to meet the recall demands of a retrospective questionnaire, although no study to date has directly examined this. Clearly, more research is needed to make any conclusions about the impact of inquiry mode on recall and to expand the recall literature to examine other modes of administration (e.g., Internet-based surveys and PDAs).
Social Desirability and Self-Reported Sexual Behavior
Social desirability generally refers to an effort by participants to be favorably evaluated. Researchers have long been concerned about the impact of social desirability on the content participants are willing to report. Social desirability is a particular concern for the measurement of sexual behaviors, which are typically kept private and are rarely disclosed to strangers. The concept of social desirability has been further broken down by researchers recognizing that participants not only aim for positive evaluation by others but also strive to protect their own self-image (e.g., Paulhus, 1984). Impression management refers to participants' efforts to tailor their responses in such a way as to maintain or project a pro-social image to others who may be viewing the results. This is distinct from self-deception, which is conceptualized as an unconscious effort by participants to respond in an overly favorable way to protect or inflate their self-image (Paulhus, 1984). As impression management is the best-studied source of social desirability bias to date, a thorough review of it is needed to establish any mode of inquiry-related considerations that might be made.
Impression management. The vast majority of research relating to the relationship between mode and social desirability has been focused on impression management (e.g., Richman, Weisband, Kiesler, & Drasgow, 1999; Testa, Livingston, & VanZile Tamsen, 2005; Wood, Nosko, Desmarais, Ross, & Irvine, 2006). Theoretically, individuals reporting details about their sexual behaviors to an interviewer sitting across from them may be more likely to engage in impression management than those completing a paper-and-pencil or computer-based questionnaire in private. This expectation has been supported by an accumulation of research suggesting that participants score higher on measures of socially desirable responding in face-to-face interviews than in computer-based questionnaires (Richman et al., 1999).
There are a number of identified factors that make impression management efforts more likely. One important factor is the type of question being asked. Questions that relate to sensitive information and tap into gender or cultural roles or some form of stigma are more likely to elicit motivated "editing," or impression management efforts (Tourangeau & Yan, 2007). This is further amplified when participants view questions as intrusive or have concerns about possible negative repercussions for disclosing sensitive information to researchers. The relationship between these factors clearly supports the possibility of an inquiry mode-dependent effect in sexual behavior research. Sexual behaviors are considered to be private and are typically tied to both gender roles and cultural values. Further, many people see questions about sexual behaviors as somewhat intrusive and may be concerned about these behaviors being made public. Some researchers have suggested that the perceived level of intrusiveness or threat of disclosure may vary across self-report modes.
Social desirability theory would predict higher rates of impression management regarding sensitive topics, such as sexual behavior, particularly in modes in which social interaction is more obvious (Meston, Heiman, Trapnell, & Paulhus, 1998). Existing research related to sexual behavior is mixed but generally shows support for a possible mode-dependent effect, which may impact the reporting of some behaviors but not others. For example, in regard to lifetime number of sexual partners, a review of seven large-scale, population-based surveys has not shown support for variation across inquiry modes, perhaps suggesting that the question may not be as sensitive as previously thought or that the sensitivity of the question may not be the only or primary determinant of impression management (Hamilton & Morris, 2010). In contrast, one study examining a wider range of behaviors provides some indication that mode-dependent differences may exist for some behaviors, such as unprotected oral sex and recent sexual partners, with more of these specific behaviors being reported in anonymous CASI conditions than on self-administered paper-and-pencil questionnaires; this relationship was not found for other behaviors, such as lifetime sexual partners (Brown & Vanable, 2009). It is possible that topics such as multiple sexual partners are commonly discussed or widely experienced and, thus, do not evoke significant impression management efforts.
It appears that there is a threshold of sensitivity that, when crossed, leads participants to engage in impression management at higher rates for some inquiry modes than others. Environmental factors that vary across modes present one possible mechanism through which social desirability might operate. Differences in inquiry mode may impact important factors, such as proximity to the experimenter and degree of anonymity (or the participants' perception of anonymity). These factors may, in turn, be an important source of systematic variability in responding. Accumulating research suggests that mode itself is not sufficient to predict socially desirable responding, but may interact with other factors, such as question content or presence of others, to impact distortion efforts (Richman et al., 1999). Such findings are in keeping with social desirability theory and support the possibility of inquiry mode-related distortion effects in sexual behavior research. Although a meta-analysis of impression management and mode of inquiry research revealed no significant overall difference between computer-based and paper-and-pencil questionnaires, with consideration of moderators, participants completing computer-administered questionnaires scored significantly lower on measures of socially desirable responding than those completing paper-and-pencil questionnaires, suggesting less distortion in their responses (Richman et al., 1999). Specifically, when participants were alone and were able to skip questions and backtrack, they showed less distortion in computer-based conditions than in paper-and-pencil conditions. Other studies have supported the finding that participants are more candid when responding to computer-based questionnaires than face-to-face interviewing or paper-and-pencil formats (Feigelson & Dwight, 2000).
Researchers also have sought to identify the mechanisms contributing to inquiry mode-dependent distortion. One study examining participants' responding through paper-and-pencil, on-site computer-based, and off-site Internet questionnaires revealed a number of notable differences (Bates & Cox, 2008). Participants' perceptions of anonymity varied across conditions, with higher rates of perceived anonymity reported in the computer-based administration conditions. Participants also believed that the accuracy of their responses varied across inquiry modes, with higher rates of perceived anonymity being positively associated with perceived accuracy. It is interesting to note that, despite this self-perceived inaccuracy in some conditions, no significant differences were observed in the behaviors participants reported across conditions. This inconsistency highlights the complexity of the mixed findings relating mode of inquiry to impression management. Participants themselves seem unclear about the degree to which inquiry mode impacts their responding.
The majority of sex studies specifically related to mode and impression management have focused on measuring number of sexual partners, frequency of masturbation, and frequency of vaginal intercourse (Catania et al., 1990) and have been conducted with college students; more diverse community populations have been largely ignored (Weinhardt, Forsyth, Carey, Jaworski, & Durant, 1998). This is a notable limitation in that research focusing largely on common sexual behaviors engaged in by White, middle-class populations is far less likely to capture mode-dependent differences than research on less common behaviors or on the behaviors of minority populations, as social pressure for conformity may be weaker for common behaviors than for uncommon ones, given the disproportionate costs of deviance. Further research is needed to determine the impact of these factors on impression management in sexual behavior reporting.
Self-deception. There is some indication that certain types of sex-related questions are more likely to activate self-deceptive efforts than others. For example, it has been found that participants who score highly on measures of self-deception are also likely to provide an overly positive view of their sexual adjustment, likely in an effort to maintain the self-perception that they are sexually well-adjusted (Meston et al., 1998).
It has been suggested that individuals who have more perceived control over a situation may be less motivated to protect themselves with deceptive efforts (Fox & Schwartz, 2002). Such a relationship would predict less self-deception in more independent collection modes, which afford participants a greater degree of control. An examination of this hypothesis using paper-and-pencil surveys along with computer-based questionnaires found no significant differences across modes for a measure of self-deception (Fox & Schwartz, 2002). However, other studies have found differences between group-administered paper-and-pencil surveys and computer-based or individually administered questionnaires, with the individually based administration yielding higher rates of self-deception (Lautenschlager & Flaherty, 1990). This seems inconsistent with the idea that self-deception should be lower in situations involving greater perceived control. One possible way to interpret these results is that self-deception is more likely when questionnaires are completed independently of social contact and plays less of a role when other participants or evaluators are immediately present, as this latter condition may shift an individual's focus from self-evaluation (i.e., self-deception) to social evaluation (i.e., impression management).
The mixed results regarding mode and self-deception indicate that more research is needed. The limited research currently available suggests that self-deception plays at least a minor role in responding to certain sexual domains, although there has been little support for an impact on reporting of specific sexual behaviors. Self-deception may be more likely to play a role in inquiry modes that provide the participant with no other peers or social reference points with whom to compare one's self, and less likely to impact responding in the presence of others.
Social desirability and candor. Although social desirability is often thought to distort the accuracy of self-report data, experimental manipulations have exploited social desirability in an effort to increase accurate responding. As candor is typically considered a socially desirable trait, researchers have employed the "bogus pipeline" paradigm, a deceptive technique used to convince participants that inaccurate responses will be detected via a lie-detector device. When used in the context of sexual behavior research, the bogus pipeline manipulation has been associated with higher reported rates of sensitive behaviors (e.g., masturbation) than other self-report modes of data collection, although the effect was only significant for women (Alexander & Fisher, 2003).
Given the contrived nature of the bogus pipeline and the pronounced gender effect that has been reported, this research does not provide a practical means for combating the distorting impact of social desirability in all sex research, but it does provide further evidence that conscious editing is typically taking place in self-reports related to sexual behavior. The observed gender difference may relate to greater social pressure on women than men to be chaste or conservative in their sexual behavior; in turn, this may lead women to be more likely than men to edit their self-reported behavior, which is consistent with the expectations of social desirability theory. Alternatively, it is possible that women tend to place a higher value on being perceived as honest than do men, more strongly motivating them to reveal, in response to the bogus pipeline, sensitive behaviors that they would otherwise prefer to keep private.
Overall, there is substantial evidence to support a relationship between social desirability and self-reported sexual behavior that, under some circumstances, appears to be moderated by the mode of inquiry. There is also some indication that the sensitivity of a question is a key factor in the degree to which editing takes place and, in turn, the degree to which inquiry mode impacts reporting. Researchers also have demonstrated that competing motivations may exist within social desirability, with participants who may otherwise want to present socially desirable responses to sexual behavior questions ultimately deciding to provide accurate responses in an effort to be viewed as honest by evaluators. Further research is needed to better understand the factors that contribute to editing, the degree to which self-deception occurs, and how participants decide between competing socially desirable traits, as these factors may all contribute to the impact of inquiry mode on participant responding.
Item-Level Responding
Participants who begin surveys frequently omit responses to certain items, discontinue prior to completion, respond randomly, or deny a history of any behaviors. If missed items, random responding, or discontinuation points are systematic, this type of behavior can lead to biased results. It is assumed that these behaviors are often the result of conscious decisions and relate to low motivation, social discomfort, or a desire for privacy.
There is some indication that participants are more likely to omit answers or to provide "zero" or "never" responses to questions about atypical sexual behaviors (e.g., sexual violence and extramarital sex) than questions about more common sexual behaviors (Catania et al., 1990). This latter tendency is particularly problematic as zero responses, unlike omitted responses, leave researchers with a difficult decision regarding the interpretation of the data. Whereas some participants may have genuinely never engaged in an infrequent or uncommon behavior, others may endorse a "never" response in an effort to comply with social demands or to protect sensitive personal information.
There appears to be some evidence of the impact of inquiry mode on item non-response rates in sexual behavior research. Paper-and-pencil questionnaires containing items relating to specific sexual behaviors have been shown to yield significantly more omissions than otherwise identical online questionnaires (Wood et al., 2006). Further, participants tend to skip more items toward the end of paper-and-pencil conditions as compared to the beginning; this problem is not as notable in computer-based conditions, suggesting that participant fatigue may be a greater concern in paper-and-pencil modes than in computer-based collection modes.
Paralleling the issue of participant fatigue is the issue of motivation. Participants with a high level of motivation may thoughtfully answer every question on a survey, regardless of the length of the survey or the specific demands of an individual question. However, participants with less motivation may opt to discontinue, respond randomly, or simply provide a negative response. Although the issue has not yet been directly researched, it is possible that participants may be more motivated to thoughtfully complete a survey when questions are presented in one mode of inquiry versus another.
Overall, it is clear that the questions that individuals agree to answer play a central role in the type and quality of the data obtained. In sexual behavior research, there are always concerns surrounding the degree to which participants answer all questions fully and candidly. There is some evidence that participants are more likely to omit responses to sensitive questions about sexual behaviors than responses to other types of questions (Catania et al., 1990) and that these omissions are more frequent in some data collection modes than others (e.g., Wood et al., 2006). This finding presents a possible route for inquiry mode-dependent differences to emerge. More research is needed to understand why differences exist between inquiry modes, what specific rationales participants have for omitting responses, and the degree to which motivation plays a role in responding across modes. Recently, researchers have had some success in applying motivational theories toward increasing response rates for postal mail-based survey research (Wenemark, Persson, Brage, Svensson, & Kristenson, 2011). Researchers used motivational theory to redesign and streamline a personal health survey and to modify the collection procedure for a mail-based self-administered survey (an additional pre-notification letter and a shortened survey with a second reminder). These procedural modifications boosted response rates from the 54.6% obtained using standard methods (longer survey, initial mailing, and two reminder letters) to 63.8% for the collection methods inspired by motivational theory. It may be possible to facilitate increased response rates within other modes of inquiry as well, although more research is needed to determine the success of such manipulations for modes beyond mail-based paper-and-pencil surveys.
Possible Mediating or Moderating Factors for Inquiry Mode and Self-Reported Sexual Behavior
Much of the research that has been reviewed thus far has shown tentative support for a possible relationship between inquiry mode and self-reported sexual behaviors. However, this support has been modest, and has been tempered by inconsistent or conflicting findings. One possible explanation for such inconsistencies is unidentified underlying factors that are preventing straightforward interpretation of mode-dependent impact. Many such underlying factors may also lead to nonlinear or bidirectional effects, which would make interpretation of results even more challenging (Richman et al., 1999). Currently, there has been very little research aimed at identifying underlying factors that might sway the impact of inquiry mode on self-reports. There are a number of factors that will need to be examined in the future to make determinations about this relationship. A brief overview of some of these factors and the theoretical means by which they may impact responding will help to highlight the importance of considering mediating and moderating factors in this area.
Conceptually, there are a number of reasons why gender is a possible moderator that should be considered when examining inquiry mode-dependent impact on self-reported sexual behavior. As previously mentioned, there is a long-standing assumption within sexual behavior research that the more behaviors being reported, the closer researchers are to tapping into the "true" number of participants' behaviors. However, the different cultural expectations for men and women regarding sex often challenge this assumption. For example, due to a "sexual double standard" in many Western cultures, men reporting higher numbers of sexual partners or more casual sexual partners may be seen as more attractive or sexually accomplished than men reporting lower numbers of partners, whereas women reporting higher numbers of partners may be seen as immoral or promiscuous (e.g., Crawford & Popp, 2003). These differing social expectations may lead women to underreport sexual behaviors and men to overreport them in an effort to be socially desirable (e.g., Schroder et al., 2003; Smith, 1992).
Another related factor may be concordant versus discordant gender in data collection modes, which require interaction between participants and researchers. Participants who are interacting with a same-sex researcher may be more or less likely to edit their responses, depending on the type of question being asked, than participants interacting with a researcher of a different gender. This effect has been well-established for face-to-face interviews, with concordant gender pairs yielding higher rates of reported sexual behavior than discordant gender pairs (Catania et al., 1996). More research is needed to determine the impact of researchers' gender on participants' responding across modes of inquiry. Conceptually, any experimenter effects would likely be stronger in modes with higher rates of interaction, such as a face-to-face interview, and less pronounced in modes with limited interaction, such as a Web-based survey.
Another important potential moderator is age. As mentioned earlier, technology is playing an increasingly important role in data collection. This technology has been rapidly evolving over the past century, and as such, the age of a participant likely plays an important role in his or her views of technology, comfort with that technology, and ability to use it properly. The recent explosion in popularity of social media serves as an example of how quickly attitudes about technology can change, as evidenced by the willingness of the youngest generations to provide a great deal of identifiable and publicly available personal information on the Internet. It is possible that differing views of privacy within technology could lead to very different response patterns across cohorts and that such patterns may be more or less relevant depending on the degree of technological reliance in the mode of inquiry employed.
Age also plays a role in the way technology is viewed. Some of the older research with computer-based inquiry suggested that participants believed that computers possessed special insight into participant truthfulness or that participants viewed computers as intimidating, perhaps due, in part, to the limited exposure participants had to computers at the time (Feigelson & Dwight, 2000). It is unlikely that younger generations would have the same attitudes toward computers, and this differing view may impact responding across various cohorts.
Cohort effects may also play an important role in how sexual behavior is viewed and the willingness to share information about private behaviors with strangers. Younger individuals may be generally more comfortable disclosing private sexual behaviors than older individuals. These differing attitudes may impact whether participants self-select out of certain studies or omit responses to specific questions. Some limited support for this possibility can be drawn from a study examining demographic characteristics of nonresponders to items relating to sexual experiences, which found that older participants were more likely to omit responses to these items (Wiederman, 1993). More research is needed to determine if such effects are a function of participants' age or the cohort they belong to. Further, more research is needed to assess the degree to which age-dependent sexual attitudes differentially impact sexual behavior responding across different inquiry modes. In the absence of evidence-based recommendations, it stands to reason that researchers who are particularly focused on the sexual behavior of older adults may be better served by more traditional means of data collection (e.g., face-to-face interviews or paper-and-pencil questionnaires), whereas those who are studying younger generations may have more flexibility in the modes of inquiry utilized.
Much of the discussion up to this point about participants' views of sexual behavior, technology, and social interaction has been largely based on the cultural assumptions of White Americans. Open discussions about sexual behaviors are viewed very differently by different cultures (e.g., Langhaug et al., 2010). Cultural effects may be globally related to all sexual behaviors or specific to a certain behavior (e.g., masturbation and premarital sex), which may have special significance for that group. The relationship between mode of inquiry and willingness to report may vary across cultures as well.
As with gender, the concordance or discordance of the cultural background of the examiner and the participant may play an important role in the degree of socially desirable editing in which the participant engages and perhaps the degree of motivation behind remembering and thoroughly answering questions as well. As previously mentioned, decreasing social distance (e.g., through matching of researcher and participant cultural backgrounds) may lead to more candid reporting. Cultural allegiance may also increase a participant's motivation, which has been linked to more accurate recall and higher completion rates (Morrison-Beedy, Carey, & Tu, 2006). However, this is contrasted by cultural conformity as a motivation, which may lead participants to edit their responses to be more in line with traditional cultural values when interacting with a researcher from a similar background. There is also a possibility that an examiner from the cultural majority will lead participants to conform to the values of the majority culture or cue cultural stereotypes, leading participants to modify their responses to be more in line with majority values or stereotypes.
Cultural factors are important in nearly all domains of research, but the practical applications of sexual behavior research add an additional layer of significance. Sexual behavior research is conducted across a broad range of cultural groups, making it imperative that researchers understand the degree to which inquiry modes impact responding and which modes may be more or less suited to various cultural groups. A review of extant research related to mode of inquiry in developing countries provides support for a possible mode-dependent effect (Langhaug et al., 2010). Specifically, there appears to be some indication that CASI yields much higher rates of reported sexual behavior than paper-and-pencil or face-to-face inquiry modes. Although it is not immediately clear whether culture plays a role in these differences, these findings support further research into the degree of impact of inquiry modes across different cultural groups.
Overall, cultural factors have been largely overlooked in existing mode of inquiry research. As others have observed, more research is needed to determine the degree to which cultural identity interacts with inquiry modes to impact self-reported sexual behavior (Vereecken & Maes, 2006). Specifically, future research should address the impact of modes of inquiry on various cultural groups, and assess the degree to which social distance impacts responding.
Literacy and Numeracy
An obvious, but important, consideration regarding the impact of inquiry modes is literacy. Several of the most common inquiry modes are heavily dependent on reading ability. This may lead potential participants to self-select out of studies in which literacy is an obvious requirement or to respond randomly or omit answers to complex questions or questions requiring more advanced reading capabilities (Schroder et al., 2003). There is also some indication that individuals with limited literacy are more likely to enlist the help of others in completing paper-and-pencil questionnaires and, further, that those who complete surveys with outside assistance report fewer sexual behaviors than those who complete the questionnaires independently (Couper & Stinson, 1999). Clearly, individuals who are illiterate or who have limited reading abilities are much more likely to present accurate information in modes that do not require reading ability. The A-CASI provides an alternative to face-to-face interviews for working with these populations (e.g., Gribble, Miller, Rogers, & Turner, 1999; Schroder et al., 2003; Tourangeau & Smith, 1996).
A related issue, which has received even less attention, is numeracy, or basic arithmetic skills. Most sexual behavior questionnaires require participants to mentally calculate the number of specific behaviors in which they have engaged or the number of unique partners they have had, perhaps calculating the proportion of total sexual acts in which they engaged in unprotected sex or calculating a weekly or monthly average number of sexual acts. McAuliffe, DiFranceisco, and Reed (2010) proposed that numeracy, like literacy, may have an impact on participants' abilities to accurately respond to sexual behavior questionnaires. To test this hypothesis, they examined the correlation between daily diary-based reports of sexual behavior and retrospective reports, and found that the correlation grew stronger as a function of numeracy, indicating that participants with stronger arithmetic skills may have been better able to accurately calculate the number of behaviors they engaged in over a longer period of time. Unlike literacy, numeracy demands do not decrease in interview or computer-assisted conditions, but variable cognitive demands across inquiry modes may leave participants with fewer resources available for numerical computation of behaviors. McAuliffe et al. (2010) did assess for possible inquiry mode-dependent differences, comparing A-CASI to paper-and-pencil questionnaires, and found greater discrepancies in the paper-and-pencil condition, although these results were not statistically significant. Further research is needed to confirm these results and further examine the role of numeracy in accurate reporting of sexual behaviors across different modes. The importance of numeracy becomes more pronounced for high-frequency behaviors (e.g., number of sexual acts) or behaviors occurring over a long period of time (e.g., lifetime sex partners), as these require more complicated or sophisticated means of calculation.
Frequency of Behavior Being Measured
The frequency of behaviors plays an important role in the ability of participants to accurately recall those behaviors (Schroder et al., 2003). High-frequency behaviors may be relatively easy to recall, but it may be difficult for participants to provide accurate estimates of the exact frequencies of such behaviors. In contrast, low-frequency behaviors may be easily forgotten, or they may be memorable and, thus, may be reported more accurately than high-frequency behaviors. Different strategies or levels of motivation may be necessary for accurate recollection of behaviors that are more or less frequent. Further, it is possible that different modes of inquiry lend themselves to more accurate recollection of higher- or lower-frequency behaviors as a consequence of factors such as motivation or concentration.
For example, in a study examining participants' cognitive approaches to recalling sexual behaviors, researchers found that, whereas participants tended to "add up" low-frequency behaviors, they used rate-based estimates or other shortcuts to estimate their participation in higher-frequency behaviors (Bogart et al., 2007). This suggests that participants employ different strategies to arrive at responses, depending on the type of question being asked and how that question applies to their own experience. Bogart et al. suggested that researchers utilize interview prompts to direct respondents to recall sexual behaviors in small, manageable chunks. Whereas implementation of this recommendation would be relatively straightforward for face-to-face interviews, it would be more challenging for other inquiry modes. No study, to date, has examined the relationship between inquiry mode and the approach participants select to calculate behaviors, although it is possible that participants may be more likely to employ a given strategy in one mode than another. It is also possible that efforts to facilitate the use of a certain recall strategy may be more successful in some modes than others.
Tourangeau and Smith (1996) suggested that face-to-face interaction may allow interviewers to maintain participants' motivation; alternatively, the presence of the interviewer may act as a distraction to the participant. Similarly, it is possible that participants who are less familiar with computers may be distracted by the practicalities of computer use or may be more engaged by the novelty of a new experience. Although such possibilities are speculative at this point, variability in distraction or motivation has the potential to impact participants' abilities to accurately recollect both high- and low-frequency behaviors by depleting available cognitive resources. At this point, there has not been any systematic review of an interaction between behavior frequency and inquiry mode. A better understanding of this relationship may allow researchers to optimize the methodology for a study based on the types of questions being asked.
The content being assessed is also an important consideration. The majority of research relating to sexual behavior and inquiry mode has focused on the most basic of sexual behavior questioning, such as lifetime number of sexual partners. Very little research has focused on a broad range of sexual behaviors, such as specific activities or safe-sex behaviors.
Surprisingly, condom use, among the behaviors most relevant to public health, has received little attention in mode of inquiry research. This omission is problematic because information about a participant's number of sexual partners has limited utility as a measure of HIV or STI risk without questions about condom use behavior (Noar, Cole, & Carlyle, 2006). Further, cultural and social beliefs about condom use differ from those about sexual behavior in general, and it is certainly possible that the impact of inquiry mode may be different for this type of questioning.
Mode of inquiry research also has largely ignored the impact of mode on self-reported autoerotic behaviors, such as masturbation. This is an important topic and, given the highly sensitive nature of the subject, it may be particularly vulnerable to mode effects. Previous research has shown that participants in anonymous test conditions are considerably more likely than those in confidential conditions to report engaging in masturbation (Ong & Weiss, 2000). Given the research suggesting a relationship between inquiry mode and perceptions of anonymity (Bates & Cox, 2008), it seems plausible that inquiry modes that involve less apparent experimenter involvement are likely to yield higher rates of reported masturbation. However, more research is needed to determine if this is indeed the case.
Another important and understudied type of sexual behavior questioning relates to nonconsensual and abusive sexual experiences. There is evidence that victims frequently underreport nonconsensual and abusive sexual experiences. For example, a longitudinal study comparing young adults' self-reported history of childhood sexual abuse (CSA) to documented history of CSA found that only 16% of male victims and 64% of female victims disclosed that they were victims of sexual abuse (Widom & Morris, 1997). Although that study relied solely on face-to-face interviews, it highlighted the importance of improving self-reports relating to sexual victimization.
Currently, there is limited and mixed research related to modes of inquiry and questions regarding nonconsensual sex in adulthood, with existing studies generally focused on female victimization. There is some indication that women are more likely to disclose sexual assault related to alcohol use through a Web survey than they are in a phone interview (Parks, Pardi, & Bradizza, 2006). Another study indicated higher rates of sexual assault disclosure by participants through paper-and-pencil inquiries than those observed in CASIs (Testa et al., 2005). However, the latter study was limited by low response rates in the computer condition, which exemplifies concerns about inquiry mode-dependent participation bias, as only 61.4% of contacted participants showed up for the computer condition, in comparison to the 87.6% of participants who completed and returned paper-and-pencil surveys.
Research on mode of inquiry and reporting perpetration of nonconsensual sex or sexual abuse is absent from the existing literature. Reporting of sexual aggression perpetration raises issues related to the potential legal ramifications of participants' responding (e.g., Tourangeau & Yan, 2007). Some sexual aggression perpetration is illegal. Regardless of assurances of confidentiality, participants may resist providing incriminating responses in inquiry modes that do not provide complete anonymity. Given the importance of sexual behavior research in understanding sexual perpetrators' behaviors, more research is needed to identify the optimal mode of inquiry and to estimate the degree of distortion across the most commonly used methods of assessment.
Although the aforementioned areas of sexual behavior research provide important examples of understudied topics in mode research, this is by no means an exhaustive list. The wide range of topics that exist under the sexual behavior umbrella make it impossible to generalize findings from one type of behavior to another. Topics such as sexual dysfunction and positive sexual functioning deserve attention as well.
Implications for Future Research Relating to Self-Report Methodology and Sexual Behavior Data Collection
An ideal outcome of this review would be to arrive at specific recommendations for sexual behavior researchers as to an optimal inquiry mode for obtaining accurate self-report data. However, the state of the current literature clearly prevents any such definitive recommendations. A number of the studies reviewed showed weak or moderate mode-dependent differences (Brown & Vanable, 2009; Kissinger et al., 1999; McAuliffe et al., 2007; Morrison-Beedy et al., 2006; Parks et al., 2006; Reddy et al., 2006; Testa et al., 2005; Tourangeau & Smith, 1996) in responding for one or more measures related to sexual behavior, whereas a number showed no significant differences based on mode (Bates & Cox, 2008; DiLillo, DeGue, Kras, Di Loreto-Colgan, & Nash, 2006; Hines, Douglas, & Mahmood, 2010; Knapp & Kirk, 2003; Mangunkusumo et al., 2005; Rosenbaum, Rabenhorst, Reddy, Fleming, & Howells, 2006).
The existing research does suggest that inquiry mode may have some degree of impact on participants' responding to certain questions in certain situations, but the research has not progressed enough to isolate which mode is best suited for which situation. This suggests that, eventually, researchers may identify situations, questions, or populations for which a certain mode of inquiry is superior to other modes, but it is unlikely that one inquiry mode will become the clear choice for all sexual behavior research. As such, the wealth of literature reviewed suggests there is value in the continued use of the wide range of inquiry modes available. Hopefully, a continued consideration of inquiry modes will contribute to a refinement of sexual behavior research methodologies as they continue to evolve.
Although there are indications that future mode research may ultimately lead to improved methodology in the domain of sex research, there are also a number of substantial gaps within the literature that must be addressed. First, as noted earlier, many of the published studies related to mode of inquiry and self-reported sexual behavior have found no significant differences. When the "file drawer effect" is taken into consideration, one could reasonably argue that mode differences are the exception, rather than the rule. Further, within the studies that have been published, the observed effect of inquiry mode has been modest at best, and no study to date has convincingly concluded that mode of inquiry effects have impacted the conclusions being drawn about sexual behaviors or attitudes. At this stage, researchers looking for methodological guidance are better off considering the available research related to item content and wording or employing methods aimed at encouraging thoughtful and motivated participation by emphasizing a study's value (e.g., De Leeuw, Callegaro, Hox, Korendijk, & Lensvelt-Mulders, 2007) than relying on the limited research on mode of self-report. However, the possibility remains that continued inquiry mode research may eventually support the development of methodological guidelines aimed at certain research questions or specific populations.
One potential area of future research may be an examination of the degree of variability in the format and display of items across inquiry modes. For example, participants' ability to view all questionnaire items simultaneously varies from one mode of inquiry to the next: in paper-and-pencil questionnaires, participants generally have the option to skip ahead or to revisit previously answered questions; this ability varies in computer-based questionnaires and is unavailable in face-to-face interviews. Previous research has indicated that participants who have the ability to revisit previous questions on computer surveys answer them more candidly (e.g., Richman et al., 1999). Although backtracking is generally available on most electronic surveys, it may not always mirror navigation through traditional paper-and-pencil surveys. For example, the amount of information presented on a PDA screen is far less than could be found on a printed page. Further, the ability to move back and forth from page to page may be less straightforward when using a stylus or a touchscreen. As such, it may be more difficult for participants to thoughtfully consider complex instructions or a chain of responses when responding in computer-based modes as compared to paper-and-pencil modes. Some modes also may better allow for the presentation of lengthy instructions or the inclusion of prompts or reminders.
Another issue worthy of consideration in relation to inquiry modes is that of consistent responding. When participants are providing responses to similar or linked behavioral questions, inconsistencies can make interpretation challenging. Different modes of inquiry afford researchers different opportunities to address inconsistencies and, in turn, may provide different levels of consistency as a result. For example, whereas interviewers and some computer surveys can request clarification from participants regarding a given inconsistency, traditional paper-and-pencil surveys are limited to "self-policing" by participants to ensure consistent responses. Research is needed to address the degree to which consistency is facilitated across modes of inquiry and the degree to which such efforts impact participant responding.
Even as research continues to expand and better address understudied areas related to inquiry mode, the lack of a gold standard point of comparison in sexual behavior research will remain a significant barrier to improving data collection methods. Diary research is not without its own sources of bias and may be impacted by inquiry mode-related effects as well (as diaries can be completed in paper-and-pencil, electronic, or interview formats). Other attempts at a more accurate comparison, such as test-retest paradigms or the utilization of partner reports, may compound measurement errors and provide more opportunities for distortion. Also, as previously discussed, efforts at identifying useful biomarkers of sexual behaviors have not been successful. Ultimately, ethical standards and practicalities prevent researchers from ever having a "truth" with which to compare self-reported sexual behaviors, and this reality constrains their ability to evaluate the accuracy of self-reports and the impact of inquiry modes on those reports. Given these constraints, it is unlikely that researchers will ever have a gold standard; however, ongoing work to develop and improve on comparison points, such as daily diaries that are closer to the gold standard ideal, will greatly advance researchers' abilities to evaluate the methods that are commonly being used today and will likely advance the understanding of inquiry mode effects as well (e.g., Schroder et al., 2003).
The Evolution of Data Collection and Underlying Factors
Nearly all of the research reviewed focused primarily on demonstrating equivalency between modes of inquiry in sexual behavior research. This question has been recognized as important since the early use of computers in self-report research. However, decades of accumulated research have failed to answer it unequivocally, and it is unlikely that a clear answer can be found. A primary reason for this is that the measurement modes being compared are constantly changing. A departure from sole reliance on face-to-face interviews and paper-and-pencil questionnaires came with the emergence of telephones and personal computers. Since then, landline phones have been largely replaced by cell phones, and computers have evolved at a feverish pace. This trend is problematic for methodological researchers in that the "shelf life" of technologies is extremely short, and participants' attitudes regarding such technologies also rapidly evolve. Unlike paper-and-pencil surveys, which are practically the same as they were 100 years ago, the personal computer of today greatly differs from what was available a mere 10 years ago. Similarly, the software available for designing surveys has steadily improved and become more visually appealing and easier to use. The implication of this evolution is that any comparison made between interview or paper-and-pencil methodology and computer-based inquiry must be reevaluated every several years to account for technological advances and changing participant attitudes. Such efforts are further complicated by the introduction of research using handheld devices and other new technologies that were practically nonexistent 10 years ago (e.g., Vannier & O'Sullivan, 2008).
One question raised by this evolution of technology is whether these new modes of data collection are actually useful in increasing our precision or accuracy of measurement; in other words, do these new modes of data collection actually improve the quality of our data? This is a question that needs to be answered by future research on the impact of inquiry modes.
However, given the pace of technological evolution, the current efforts to capture inquiry mode-related differences are inefficient at best. It is very difficult to anticipate specifically how self-report data will be collected in the future or how participants might react to such technologies. As such, efforts to understand inquiry modes and their effects may be more productive if the focus shifts to the identification of underlying factors such as social desirability, motivation, recall, or participation decisions, which may systematically contribute to differences across modes. This would allow researchers to generalize findings relating to identified underlying factors and to speculate about the degree to which they might affect other inquiry modes, including the inquiry modes of the future.
The continued spread of STIs around the globe, an increased awareness of sexual perpetration, and a growing acceptance of sexuality as an important component of well-being across the lifespan all suggest that the importance of sexual behavior research will continue to grow for a long time. Advancing the current understanding of underlying factors inherent to self-report inquiry modes will be important for improving the quality of current and future sexual behavior research.
Alexander, M. G., & Fisher, T. D. (2003). Truth and consequences: Using the bogus pipeline to examine sex differences in self-reported sexuality. Journal of Sex Research, 40, 27-35. doi: 10.1080/00224490309552164
Bates, S. C., & Cox, J. M. (2008). The impact of computer versus paper-pencil survey, and individual versus group administration, on self-reports of sensitive behaviors. Computers in Human Behavior, 24, 903-916. doi: 10.1016/j.chb.2007.02.021
Bloom, S. S., Banda, C., Songolo, G., Mulendema, S., Cunningham, A. E., & Boerma, J. T. (2000). Looking for change in response to the AIDS epidemic: Trends in AIDS knowledge and sexual behavior in Zambia, 1990 through 1998. Journal of Acquired Immune Deficiency Syndromes, 25, 77-85. doi: 10.1097/00126334-200009010-00011
Bogart, L. M., Walt, L. C., Pavlovic, J. D., Ober, A. J., Brown, N., & Kalichman, S. C. (2007). Cognitive strategies affecting recall of sexual behavior among high-risk men and women. Health Psychology, 26, 782-793. doi: 10.1037/0278-6133.26.6.782
Brown, J. L., & Vanable, P. A. (2009). The effects of assessment mode and privacy level on self-reports of risky sexual behaviors and substance use among young women. Journal of Applied Social Psychology, 39, 2756-2778. doi: 10.1111/j.1559-1816.2009.00547.x
Bullough, V. L. (1998). Alfred Kinsey and the Kinsey report: Historical overview and lasting contributions. Journal of Sex Research, 35, 127-131. doi: 10.1080/00224499809551925
Catania, J. A., Binson, D., Canchola, J., Pollack, L. M., Hauck, W., & Coates, T. J. (1996). Effects of interviewer gender, interviewer choice, and item wording on responses to questions concerning sexual behavior. Public Opinion Quarterly, 60, 345-375. doi: 10.1086/297758
Catania, J. A., Gibson, D. R., Chitwood, D. D., & Coates, T. J. (1990). Methodological problems in AIDS behavioral research: Influences on measurement error and participation bias in studies of sexual behavior. Psychological Bulletin, 108, 339-362. doi: 10.1037/0033-2909.108.3.339
Cook, C., Heath, F., & Thompson, R. L. (2000). A meta-analysis of response rates in Web- or Internet-based surveys. Educational and Psychological Measurement, 60, 821-836. doi: 10.1177/00131640021970934
Couper, M., & Stinson, L. (1999). Completion of self-administered questionnaires in a sex survey. Journal of Sex Research, 36, 321-330. doi: 10.1080/00224499909552004
Crawford, M., & Popp, D. (2003). Sexual double standards: A review and methodological critique of two decades of research. Journal of Sex Research, 40, 13-26. doi: 10.1080/00224490309552163
De Leeuw, E., Callegaro, M., Hox, J., Korendijk, E., & Lensvelt-Mulders, G. (2007). The influence of advance letters on response in telephone surveys: A meta-analysis. Public Opinion Quarterly, 71, 413-443. doi: 10.1093/poq/nfm014
DiLillo, D., DeGue, S., Kras, A., Di Loreto-Colgan, A., & Nash, C. (2006). Participant responses to retrospective surveys of child maltreatment: Does mode of assessment matter? Violence and Victims, 21, 410-424. doi: 10.1891/vivi.21.4.410
Dillman, D. A. (1999). Mail and Internet surveys: The tailored design method (2nd ed.). New York, NY: Wiley.
Feigelson, M., & Dwight, S. (2000). Can asking questions by computer improve the candidness of responding? A meta-analytic perspective. Consulting Psychology Journal: Practice and Research, 52, 248-255. doi: 10.1037/1061-4087.52.4.248
Fox, S., & Schwartz, D. (2002). Social desirability and controllability in computerized and paper-and-pencil personality questionnaires. Computers in Human Behavior, 18, 389-410. doi: 10.1016/S0747-5632(01)00057-7
Gillmore, M., Leigh, B., Hoppe, M., & Morrison, D. (2010). Comparison of daily and retrospective reports of vaginal sex in heterosexual men and women. Journal of Sex Research, 47, 279-284.
Gosling, S., Vazire, S., Srivastava, S., & John, O. (2004). Should we trust Web-based studies? A comparative analysis of six preconceptions about Internet questionnaires. American Psychologist, 59, 93-104. doi: 10.1037/0003-066X.59.2.93
Graham, C., Catania, J., Brand, R., Duong, T., & Canchola, J. (2002). Recalling sexual behavior: A methodological analysis of memory recall bias via interview using the diary as the gold standard. Journal of Sex Research, 40, 325-332. doi: 10.1080/00224490209552198
Gribble, J., Miller, H., Rogers, S., & Turner, C. (1999). Interview mode and measurement of sexual behaviors: Methodological issues. Journal of Sex Research, 36, 16-24. doi: 10.1080/00224499909551963
Hamilton, D., & Morris, M. (2010). Consistency of self-reported sexual behavior in surveys. Archives of Sexual Behavior, 39, 842-860. doi: 10.1007/s10508-009-9505-7
Hines, D. A., Douglas, E. M., & Mahmood, S. (2010). The effects of survey administration on disclosure rates to sensitive items among men: A comparison of an Internet sample with a RDD telephone sample. Computers in Human Behavior, 26, 1327-1335. doi: 10.1016/j.chb.2010.04.006
Kissinger, P., Rice, J., Farley, T., Trim, S., Jewitt, K., Margavio, V., & Martin, D. H. (1999). Application of computer-assisted interviews to sexual behavior research. American Journal of Epidemiology, 149, 950-954.
Knapp, H., & Kirk, S. (2003). Using pencil and paper, Internet and touch-tone phones for self-administered surveys: Does methodology matter? Computers in Human Behavior, 19, 117-134. doi: 10.1016/S0747-5632(02)00008-0
Langhaug, L., Sherr, L., & Cowan, F. (2010). How to improve the validity of sexual behaviour reporting: Systematic review of questionnaire delivery modes in developing countries. Tropical Medicine & International Health, 15, 362-381. doi: 10.1111/j.1365-3156.2009.02464.x
Lautenschlager, G., & Flaherty, V. (1990). Computer administration of questions: More desirable or more social desirability? Journal of Applied Psychology, 75, 310-314. doi: 10.1037/0021-9010.75.3.310
MacPhail, C., & Campbell, C. (2001). "I think condoms are good but, aai, I hate those things": Condom use among adolescents and young people in a Southern African township. Social Science & Medicine, 52, 1613-1627. doi: 10.1016/S0277-9536(00)00272-0
Manfreda, K., Bosnjak, M., Berzelak, J., Haas, I., Vehovar, V., & Berzelak, N. (2008). Web surveys versus other survey modes: A meta-analysis comparing response rates. International Journal of Market Research, 50, 79-104.
Mangunkusumo, R. T., Moorman, P. W., Van Den Berg-de Ruiter, A. E., Van Der Lei, J., De Koning, H. J., & Raat, H. (2005). Internet-administered adolescent health questionnaires compared with a paper version in a randomized study. Journal of Adolescent Health, 36, 70-76. doi: 10.1016/j.jadohealth.2004.02.020
McAuliffe, T., DiFranceisco, W., & Reed, B. (2007). Effects of question format and collection mode on the accuracy of retrospective surveys of health risk behavior: A comparison with daily sexual activity diaries. Health Psychology, 26, 60-67. doi: 10.1037/0278-6133.26.1.60
McAuliffe, T., DiFranceisco, W., & Reed, B. (2010). Low numeracy predicts reduced accuracy of retrospective reports of frequency of sexual behavior. AIDS and Behavior, 14, 1320-1329. doi: 10.1007/s10461-010-9761-5
Meston, C., Heiman, J., Trapnell, P., & Paulhus, D. (1998). Socially desirable responding and sexuality self-reports. Journal of Sex Research, 35, 148-157. doi: 10.1080/00224499809551928
Morrison-Beedy, D., Carey, M., & Tu, X. (2006). Accuracy of audio computer-assisted self-interviewing (ACASI) and self-administered questionnaires for the assessment of sexual behavior. AIDS and Behavior, 10, 541-552. doi: 10.1007/s10461-006-9081-y
Noar, S., Cole, C., & Carlyle, K. (2006). Condom use measurement in 56 studies of sexual risk behavior: Review and recommendations. Archives of Sexual Behavior, 35, 327-345. doi: 10.1007/s10508-006-9028-4
Ong, A. D., & Weiss, D. J. (2000). The impact of anonymity on responses to "sensitive" questions. Journal of Applied Social Psychology, 30, 1691-1708. doi: 10.1111/j.1559-1816.2000.tb02462.x
Parks, K., Pardi, A., & Bradizza, C. (2006). Collecting data on alcohol use and alcohol-related victimization: A comparison of telephone and Web-based survey methods. Journal of Studies on Alcohol and Drugs, 67, 318-323.
Paulhus, D. (1984). Two-component models of socially desirable responding. Journal of Personality and Social Psychology, 46, 598-609. doi: 10.1037/0022-3514.46.3.598
Reddy, M. K., Fleming, M. T., Howells, N. L., Rabenhorst, M. M., Casselman, R., & Rosenbaum, A. (2006). Effects of method on participants and disclosure rates in research on sensitive topics. Violence and Victims, 21, 499-506. doi: 10.1891/vivi.21.4.499
Richman, W., Weisband, S., Kiesler, S., & Drasgow, F. (1999). A meta-analytic study of social desirability response distortion in computer-administered and traditional questionnaires and interviews. Journal of Applied Psychology, 84, 754-775. doi: 10.1037/0021-9010.84.5.754
Rosenbaum, A., Rabenhorst, M. M., Reddy, M. K., Fleming, M. T., & Howells, N. L. (2006). A comparison of methods for collecting self-report data on sensitive topics. Violence and Victims, 21, 461-471. doi: 10.1891/vivi.21.4.461
Schroder, K., Carey, M., & Vanable, P. (2003). Methodological challenges in research on sexual risk behavior: II. Accuracy of self-reports. Annals of Behavioral Medicine, 26, 104-123. doi: 10.1207/S15324796ABM2602_03
Schroder, K., Johnson, C., & Wiebe, J. (2007). Interactive voice response technology applied to sexual behavior self-reports: A comparison of three methods. AIDS and Behavior, 11, 313-323. doi: 10.1007/s10461-006-9145-z
Schwarz, N. (1999). Self-reports: How questions shape the answers. American Psychologist, 54, 93-105. doi: 10.1037/0003-066X.54.2.93
Smith, T. (1992). A methodological analysis of the sexual behavior questions on the General Social Surveys. Journal of Official Statistics, 8, 309-325.
Sudman, S., & Bradburn, N. M. (1983). Asking questions. San Francisco, CA: Jossey-Bass.
Sudman, S., Bradburn, N. M., & Schwarz, N. (1996). Thinking about answers: The application of cognitive processes to survey methodology. San Francisco, CA: Jossey-Bass.
Testa, M., Livingston, J., & VanZile-Tamsen, C. (2005). The impact of questionnaire administration mode on response rate and reporting of consensual and nonconsensual sexual behavior. Psychology of Women Quarterly, 29, 345-352. doi: 10.1111/j.1471-6402.2005.00234.x
Tourangeau, R., & Smith, T. (1996). Asking sensitive questions: The impact of data collection mode, question format, and question context. Public Opinion Quarterly, 60, 275-304. doi: 10.1086/297751
Tourangeau, R., & Yan, T. (2007). Sensitive questions in surveys. Psychological Bulletin, 133, 859-883. doi: 10.1037/0033-2909.133.5.859
Vannier, S. A., & O'Sullivan, L. F. (2008). The feasibility and acceptability of handheld computers in a prospective diary study of adolescent sexual behaviour. Canadian Journal of Human Sexuality, 17, 183-192.
Vereecken, C., & Maes, L. (2006). Comparison of a computer-administered and paper-and-pencil-administered questionnaire on health and lifestyle behaviors. Journal of Adolescent Health, 38, 426-432. doi: 10.1016/j.jadohealth.2004.10.010
Weinhardt, L., Forsyth, A., Carey, M., Jaworski, B., & Durant, L. (1998). Reliability and validity of self-report measures of HIV-related sexual behavior: Progress since 1990 and recommendations for research and practice. Archives of Sexual Behavior, 27, 155-180. doi: 10.1023/A:1018682530519
Wenemark, M., Persson, A., Brage, H. N., Svensson, T., & Kristenson, M. (2011). Applying motivational theory to achieve increased response rates, respondent satisfaction and data quality. Journal of Official Statistics, 27, 393-414.
Widom, C. S., & Morris, S. (1997). Accuracy of adult recollections of childhood victimization: Part 2. Childhood sexual abuse. Psychological Assessment, 9, 34-46. doi: 10.1037/1040-3590.9.1.34
Wiederman, M. (1993). Demographic and sexual characteristics of nonresponders to sexual experience items in a national survey. Journal of Sex Research, 30, 27-35. doi: 10.1080/00224499309551675
Wood, E., Nosko, A., Desmarais, S., Ross, C., & Irvine, C. (2006). Online and traditional paper-and-pencil survey administration: Examining experimenter presence, sensitive material and long surveys. Canadian Journal of Human Sexuality, 15, 147-155.
Ethan B. McCallum and Zoe D. Peterson
Department of Psychology, University of Missouri-St. Louis
We are grateful to Kristin Carbone-Lopez, Steven E. Bruce, and Matthew J. Taylor for their feedback on an earlier draft of this article.
Correspondence should be addressed to Ethan B. McCallum, Department of Psychology, University of Missouri--St. Louis, 1 University Blvd., 325 Stadler Hall, St. Louis, MO 63121. E-mail: firstname.lastname@example.org