Experience design methodology: the four questions.
An approach to developing a research method course for user experience design is discussed. The interplay between objective scientific study and interpretive humanistic inquiry is readily seen in this emergent field. A framework based on four paradigmatic questions is developed. Experience design is explored in relationship to experiments, surveys, qualitative descriptive inquiry, and rhetorical research. Students learn that research questions drive the choice of methodologies and methods. The key is for students to gain the skills to apply the correct methodologies to the design situation.
Information design, user experience design, and new product development use methods from both the sciences and the humanities. In product development, different business disciplines such as technical communication, usability research, marketing, and engineering work toward the development of new products (Keiman, Anschuetz, & Rosenbaum, 2002). "Experience design or experience-driven design can be considered as a new strategy in industrial design and major corporations (Nokia, Philips, Nike) claim to have adopted it for their product development" (Hekkert, Mostert, & Stompff, 2003, p. 114). This article provides a case analysis of how best to develop a user experience research design course in which methodologies from the humanities and sciences are used in an interdisciplinary fashion. Students studying experience design should learn both rationalistic and humanistic research approaches. With rationalist measurements, students learn the basics of traditional human factors principles that "objectively" measure human-machine interaction. Furthermore, experience design implies social connectedness and action, which requires understanding how information transfers meaning between human and machine, as well as between the people using the technology (i.e., the machine). In this methodological framework, the technical artifact is considered part of the user experience within a context.
The main question becomes: what methodologies and methods are best used for interaction and experience design? Strauss and Corbin (1998) stated that methodology is a "way of gathering knowledge about the world" (p. 4). Experience design research melds methodologies from the humanities and sciences to establish knowledge of how people live with and shape the uses of consumer products. Experience design brings "together ideas from a wide array of disciplines with exciting results, including economics, electronic commerce, psychology, sociology, communications, artificial intelligence, and other specialty areas in computer science such as virtual reality and persuasive technologies, as well as theatre and entertainment" (McLellan, 2000, p. 60). More specifically, "the goal of experience design is to orchestrate experiences that are not only functional and purposeful, but also engaging, compelling, memorable, and enjoyable" (McLellan, 2000, pp. 59-60). Information and communication technology (ICT) products, in particular, are designed to become ever more ubiquitous. Thus, our design methodologies need to be more sensitive to how people and products can seamlessly interact.
While experience design is a potentially powerful approach, it is "still in its infancy" (Hekkert, Mostert, & Stompff, 2003, p. 119). The rise of products such as ICTs signals the need to develop an experience design discipline with a stockpile of research methods. One area fostering this development is human factors, which is traditionally cognitive and behavioral. Yet experience design calls for a wider range of approaches: "... we need to better understand how the different approaches relate to each other ... What is needed is a framework that articulates experience in a way that does not rely on the point of view of any single discipline, but provides a common design-oriented frame of reference for all the relevant actors involved in design" (Forlizzi & Battarbee, 2004, p. 261). In particular, research needs to elicit how users experience their interactions with products. This shift in focus also changes the underlying paradigmatic assumptions. The experience design field may be modernized and invigorated by adopting a communicative focus. Research methods are critical to any field. In this article, new possibilities for experience design research methods are explored from both objective and interpretive views. For brevity, ICT products serve as the main example of product design.
Rationale for Approach: The Four Questions
Design research efforts, especially in designing ICT products, draw on multiple traditions encompassing humanistic and scientific inquiry. User experience design is a newly emerging field that pulls from research traditions in human factors, ergonomics, information systems, heuristics, marketing, psychology, sociology, anthropology, ethnography, and communication. Students must learn to call upon the correct mixture of research techniques. This proficiency can be developed by making explicit a structure for analysis and inquiry. Starting with basic questions, a trans-disciplinary approach to user experience research is developed that more readily allows research findings to be useful within both academia and business. Even "the traditional model of topical research often ignores the powerful impact of asking a 'good' question" (Hertel, 2004, p. 268). Four essential questions provide a framework for the curriculum development of a user experience research design course.
* How do we measure behaviors and predict causation for experience design with a product?
* How do we measure general attitudes of a population for experience design with a product?
* How do we determine the meanings users create for experience design with a product?
* How do we critically analyze the technology-business-society (i.e., culture) relationships for experience design with a product?
In general, information sciences in theory and practice call for an understanding of which methodologies, methods, and analysis tools to apply to differing technology and product development situations (Ford, 1999). Question one (how do we measure behaviors and predict causation of product interaction?) adopts a quantitative approach that is either experimental or quasi-experimental. Question two (how do we measure general attitudes of a population about a design?) adopts a quantitative survey method. Questions three (meanings users create) and four (critical analysis) both primarily use qualitative methods. Both quantitative (i.e., positivistic) and qualitative (i.e., interpretive) research methods are employed in designing products.
While these methodologies and methods can be used in conjunction to triangulate data sets and analyses, this discussion sheds light on the distinct attributes of each individual approach (see Appendix A for a list of common attributes that align with each methodology).
Quantitative: Objective Measurement
In positivistic research, the underlying philosophy is rational in nature. The goal of research is objective evaluation and measurement. In the logical positivist paradigm, the underlying philosophical stance is that reality is objective and that objective truth can be found through systematic investigation and measurement. This paradigm can be further split into two areas: experimental research and descriptive research (i.e., survey design). Each is primarily quantitative in nature and answers different fundamental research questions. A commonality is the predefinition of variables. Variables to be studied, such as ease of use and rate of task completion, are defined and ultimately measured by the researchers. All variables are operationalized and distilled into abstract definitions from a "neutral" or objective point of view so that they can be measured. Thus, at the outset, the views and experiences of the people using a product are limited to predefined categories: by identifying in advance the variables to be measured, researchers restrict the viewpoints and the full range of possible user experiences that the research process can capture.
Question 1: How to measure behaviors and predict causation of product interaction?
Experimental--Test Causation/Prediction. Experimental design answers questions of causation and is designed for replication of findings. In a replication of the design, the same causation should occur under the same conditions. An inherent attribute of the experiment is that causation can be measured and predicted under defined circumstances. This is readily seen in ergonomic design, where physical measurement is the goal. Items such as driver control panels need accurate, objective measurement of use by operators. The testing of such products requires highly controlled experimental design and measurement in order to assure users' physical safety. In general, ICT products should not be physically risky to use; therefore, designing highly controlled experiments is usually unnecessary. Instead, quasi-experiments and natural experiments can be used to test ICT products. These latter two experimental designs still measure variables. Specifically, each design can be useful for measuring tasks and performance levels in experience design research for ICT products.
Product usability testing can be conceptualized as quasi- and natural experiments. In product usability testing, researchers assess user performance on products. Instead of randomly assigning participants to different treatment conditions to assess an independent variable's influence on a dependent variable, usability specialists and product designers need to analyze how people "naturally" interact with products. Usability testing traditionally takes place in the laboratory to measure user task performance through direct observation. The observation is highly controlled through a series of pre-defined tasks that users perform on products. Participants' behaviors and/or attitudes are measured by observation or questionnaires in relation to predefined variables. The measured causation is assumed to be due to "naturally" occurring interactions between people and the technological artifact. The goal is to objectively measure user behavior with the product artifact. As a data collection method, usability testing relies most heavily on the observational technique of the think-aloud protocol. The think-aloud method developed from cognitive approaches to the study of decision making beginning in the late 1960s (Williamson, Ranyard, & Cuthbert, 2000). Participants verbalize their thought processes as they go about each pre-defined usability task. With the researchers' notes plus screen video and audio capture, the observation of user behavior with the technology is quantified. This can include measuring task completions and false starts. Through observation and objective user surveys, measurements of user task and overall performance with products are calculated. The analysis offers usability test results on task analysis and performance that rate how well users can interact with a product. The objectively observed behavioral data, as collected in usability testing, aid product developers' decisions on which usability problems to fix for a product.
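The kind of quantification described above (task completions and false starts tallied across pre-defined tasks) can be sketched as a small tally. The sketch below is purely illustrative: the session records, task names, and metric choices are hypothetical assumptions, not part of any particular usability protocol.

```python
from collections import defaultdict

# Hypothetical session records distilled from think-aloud observation notes
# and screen/audio capture: (participant, task, completed?, false starts).
sessions = [
    ("P1", "find_contact", True, 0),
    ("P2", "find_contact", True, 2),
    ("P3", "find_contact", False, 1),
    ("P1", "send_message", True, 1),
    ("P2", "send_message", False, 3),
    ("P3", "send_message", True, 0),
]

def task_metrics(records):
    """Completion rate and mean false starts per pre-defined task."""
    by_task = defaultdict(list)
    for _, task, completed, false_starts in records:
        by_task[task].append((completed, false_starts))
    metrics = {}
    for task, obs in by_task.items():
        n = len(obs)
        metrics[task] = {
            "completion_rate": sum(c for c, _ in obs) / n,
            "mean_false_starts": sum(f for _, f in obs) / n,
        }
    return metrics

for task, m in task_metrics(sessions).items():
    print(task, m)
```

Numbers like these give product developers the "objectively observed" behavioral data the text describes, but note the limitation discussed next: only the pre-established categories are measured.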
One weakness, from a user experience perspective, is that the research is designed to measure pre-established categories defined by researchers. The experiences of the participants will not fully emerge. Instead, a priori experience categories are assumed to be important to users and are thus measured. It is an objective research activity of quantifiable measurement; it is neither emergent nor conceptual.
Question 2: How to measure general attitudes of a population about a design?
Survey Research--Measurement to Generalize. Survey design methodology is used to gauge and generalize the behaviors and attitudes of a population based on a sample of the targeted population. While survey methodologies are not the most appropriate primary research choice for an experience design approach, some methods within these methodologies can be used in conjunction with more suitable research designs. These methods can be used to elicit how strongly people feel about a certain aspect of a product design. However, the number of people needed to form a satisfactory sample in order to generalize users' attitudes and behaviors based on an actual product test is unrealistic. For instance, usability professionals indicate that the standard number of participants for most usability tests is five (Caulton, 2001). Five users would not even be sufficient to generalize findings to a population of 10 people at a confidence level of .05 (see Krejcie & Morgan, 1970, p. 608). While usability attitude rating scales are sometimes used before and after usability testing, the generalizability of the findings is faulty under the logical assumptions of survey methodology. Even if marketers run several focus groups with 10 people in each study, reaching a large enough number of study participants to generalize to the larger population would be highly unlikely.
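The sample-size point can be checked against the formula Krejcie and Morgan (1970) used to build their table. The sketch below is an illustrative implementation using that article's standard assumptions: chi-square value 3.841 (1 degree of freedom at the .05 level), population proportion P = .5, and degree of accuracy d = .05.

```python
import math

CHI_SQ = 3.841  # chi-square for 1 df at the .05 confidence level

def krejcie_morgan(N, P=0.5, d=0.05):
    """Required sample size s for a population of size N
    (formula from Krejcie & Morgan, 1970)."""
    s = (CHI_SQ * N * P * (1 - P)) / (d * d * (N - 1) + CHI_SQ * P * (1 - P))
    return math.ceil(s)

# A population of 10 requires essentially the entire population,
# so five usability participants cannot support generalization:
print(krejcie_morgan(10))   # -> 10
print(krejcie_morgan(100))  # -> 80
```

The result matches the published table: even for a tiny population of 10, all 10 members would be needed at the .05 level, which is why generalizing from a five-person usability test is logically faulty.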
True survey methodology can test a product concept, but only attitudinally. Survey methodology, intended to generalize, cannot feasibly assess products at the level of real user-product interaction. This is why products can be rated highly on concept by survey methodology, yet fail in the marketplace when people actually use them. The wrong methodology has been applied to the situation. "Often the main problem in cognitive research is the kind of perspectives that are omitted because of implicit rationalistic assumptions" (Hjørland, 2002, p. 257). The purpose of survey design is to generalize research findings to larger groups of a population based on studying a subset of the target population. In summary, survey design answers questions of association and correlation based on participants' reports of their attitudes and behaviors. In both experimental and survey research, data reduction and the reporting of findings are done with statistical testing and measurement. Both lack the perspective of users and their experiences with products.
Qualitative inquiry is the other dominant research paradigm. Sometimes labeled heuristic, two major research traditions prevail here, in which the underlying philosophy posits that reality is subjective. This means that through interaction and communication with each other and with artifacts, people create reality. Qualitative research is "... a nonmathematical process of interpretation, carried out for the purpose of discovering concepts and relationships in raw data and then organizing these into a theoretical explanatory scheme" (Strauss & Corbin, 1998, p. 11). The first major approach in this paradigm is descriptive qualitative research. This approach answers or "interprets" questions of process and understanding in the context of the human condition from the participants' perspectives. The second methodological stance is critical (i.e., rhetorical) research, in which the researcher interprets social meaning and action based on social symbols, actions, environments, and artifacts. In qualitative research, there are no predefined variables to be measured. Descriptive qualitative research seeks out how people define the "variable." This is the flip side of quantitative methodology, which sets pre-defined variables to be measured on participants. In essence, qualitative reasoning allows the significant themes or "variables" to be defined by participants. This holds true for critical and rhetorical research as well, since the important categorical "variables" or cultural patterns are defined by cultural rules and interaction patterns.
Question 3: How to determine the meanings users create about these products?
Descriptive--Interprets Meaning & Understanding. This approach involves interpretive and generative research. Understanding participants' experience is a primary goal: to explore and understand how people encounter the world from their own points of view. The research findings emerge from study participants' experience, as opposed to pre-defined variables. Interpretive data reduction is accomplished through techniques that allow participants' voices and texts to be represented in the research reporting; this may include using representative participant quotations as supporting evidence from the data sets. This approach stands in direct opposition to experimental and survey research designs, in which participants' behaviors and attitudes are directed. User experience design is first and foremost an interpretive methodology. Collecting and analyzing data on users' interactions with products provides knowledge that can be used strategically to improve products in numerous business areas such as engineering, information design, research and development, public relations, and marketing. Providing the best methods to collect and analyze user information can be critical to marketplace success. Two methodologies that fit well with investigating user experience design are presented below.
Grounded Theory. Grounded theory can be one of the most useful methodologies for user experience research. Grounded theory seeks to understand the "lifeworld" of individuals as close to their viewpoints as possible. Its basic premise is to understand how people live in and understand their world through data collected by in-depth interviewing and contextual analysis. Beyond observation, researchers comb through participant interview data and field notes to recognize emergent themes. These patterns are developed through constant comparison between individual sets of data, such as interviews, and the emerging themes. From the data properties in participants' interview transcripts and researchers' notes, themes or patterns emerge across the data. Grounded theory can be used to ask people about their experiences with products. Instead of being bound by the rigid construct categories offered in experimental designs, participants are unrestricted in their discussions and opinions about products.
Ethnographic Inquiry/Field Research. Ethnography is a methodology for interpreting cultures through their symbols and meaning systems. Researchers must construct ways of observing and/or interacting with a culture and its members to understand and interpret its everyday rules and rituals. This may include how people perform tasks, accomplish work, and interact with others to construct a culture. Culture has been studied and defined in numerous ways; researchers have studied work groups, families, and online groups as cultures. Taken-for-granted rules and rituals form cultural meaning systems that are known by cultural group members. In product design, ethnographic field methods are utilized to observe and analyze how people use technological artifacts in their daily lives. Researchers conduct field research to analyze how people interact with products. The goal is to better design existing products, or develop new ones, to fit the needs of consumers.
A host of data-gathering methods are employed, including observation, participation, and interviewing. The data are captured in field notes and possibly audio- and video-recorded. While traditional ethnography can take many years, product development field research may take only a number of days. The focus is on a task and the artifacts that support the accomplishment of that task. It is a micro-ethnography that zeroes in on an artifact and on interaction analysis in the surrounding context. Instead of people and their culture being the main focal point, the tools the culture uses for daily life become the objects of interest. The resulting data, and the meaningful patterns that emerge from their analysis, are used to build interaction scenarios, typical user personas, and storyboards of product usage. These tools (i.e., scenarios, personas, and storyboards) can then be used by product developers in building user-centered products. The best possible "user experience" with the new product is built into the design and development process.
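The distillation of field data into personas and scenarios can be sketched, very loosely, as simple data structures. The Python below is purely illustrative; the field names, persona, and sample values are hypothetical assumptions rather than any standard persona format.

```python
from dataclasses import dataclass, field

@dataclass
class Persona:
    """A composite user distilled from (hypothetical) field notes."""
    name: str
    role: str
    goals: list = field(default_factory=list)
    frustrations: list = field(default_factory=list)

@dataclass
class Scenario:
    """One situated interaction narrative built around a persona."""
    persona: Persona
    context: str
    steps: list = field(default_factory=list)

# Illustrative instances a field team might construct:
commuter = Persona(
    name="Ana",
    role="Daily transit commuter",
    goals=["check messages one-handed", "keep interactions short"],
    frustrations=["small touch targets", "deep menu trees"],
)
scenario = Scenario(
    persona=commuter,
    context="Standing on a crowded train",
    steps=["unlock phone", "open messages", "dictate a short reply"],
)
print(scenario.persona.name, "-", scenario.context)
```

Structuring field findings this way keeps the emergent, participant-defined patterns in a form product developers can act on when building user-centered products.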
Question 4: How to critically analyze the technology-business-society relationships in the design process?
Critical Research--Critiques Social Meaning and Action. Rhetorical and critical research is performed by a person, usually a cultural insider, who is able to articulate detailed knowledge of social rules and structures. In product design, the critiqued social artifacts come from particular "cultural" domain experts. Representative methods are expert reviews and heuristic evaluations. The expert reviewing the product design has a specialized understanding of particular products and users. With this in-depth knowledge, the evaluator analyzes the fit between product and user. There is somewhat of a pre-defined structure: instead of a freely formed analysis, the critical lens and scope are curbed to fit a "cookie-cutter" approach. Some evaluations have checklists that include categories such as user control, helpful documentation, error recovery, and ease of use. These and other categories are developed from best practices of past information design.
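A checklist-driven expert review of this kind can be sketched as a small rollup. The category names below echo those mentioned in the text; the severity scale and the evaluator data are hypothetical assumptions, not any standard heuristic-evaluation instrument.

```python
# Checklist categories drawn from the text; other details are illustrative.
CHECKLIST = ["user control", "helpful documentation",
             "error recovery", "ease of use"]

def severity_summary(ratings):
    """Average severity per category across evaluators
    (0 = no problem .. 4 = severe problem; hypothetical scale)."""
    summary = {}
    for category in CHECKLIST:
        scores = [r[category] for r in ratings if category in r]
        summary[category] = sum(scores) / len(scores) if scores else None
    return summary

# Two hypothetical expert evaluators rate a prototype:
expert_a = {"user control": 1, "error recovery": 3, "ease of use": 2}
expert_b = {"user control": 2, "error recovery": 4, "ease of use": 2,
            "helpful documentation": 1}
print(severity_summary([expert_a, expert_b]))
```

The pre-defined category list is exactly what makes the method "cookie-cutter": the experts' critical lens is scoped to the checklist rather than formed freely.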
Using a question-oriented, problem-based approach to an experience design methodology course provides a way to reach beyond disciplinary boundaries in the humanities and sciences. It gives students the knowledge and skill to define and use the best methodologies and methods in any given product design situation. Developing a course around the four paradigmatic questions presented in this article allows the exploration of the strengths and weaknesses of research designs in problem-based situations. While quasi-experimental design might be best for studying the cognitive difficulty of completing physical tasks on products, a qualitative meaning-based approach like ethnography will show how the product is used and given meaning in everyday life. Students learn how to focus their research by allowing research questions to drive the choice of methodologies and methods.
Caulton, D. A. (2001). Relaxing the homogeneity assumption in usability testing. Behaviour & Information Technology, 20(1), 1-7.
Ford, N. (1999). The growth of understanding in information science: Towards a developmental model. Journal of the American Society for Information Science, 50(12), 1141-1152.
Forlizzi, J., & Battarbee, K. (2004). Understanding experience in interactive systems. Proceedings of DIS 2004 (pp. 261-268). Cambridge, MA: ACM.
Hekkert, P., Mostert, M., & Stompff, G. (2003). Dancing with a machine: A case of experience-driven design. Proceedings of DPPI '03 (pp. 114-119). Pittsburgh, PA: ACM.
Hertel, K. (2004). Jumpstarting research with essential questions. Academic Exchange Quarterly, 8(4), 268-272.
Hjørland, B. (2002). Epistemology and the socio-cognitive perspective in information science. Journal of the American Society for Information Science and Technology, 53(4), 257-270.
Kanigel, R. (1997). The one best way: Frederick Winslow Taylor and the enigma of efficiency. New York: Viking.
Keiman, T., Anschuetz, L., & Rosenbaum, S. (2002). Combining usability research with documentation development for improved user support. Proceedings of SIGDOC 2002 (pp. 84-89). ACM.
Krejcie, R. V., & Morgan, D. W. (1970). Determining sample size for research activities. Educational and Psychological Measurement, 30, 607-610.
McLellan, H. (2000). Experience design. CyberPsychology & Behavior, 3(1), 59-69.
Strauss, A., & Corbin, J. (1998). Basics of qualitative research (2nd ed.). Thousand Oaks, CA: Sage Publications.
Williamson, J., Ranyard, R., & Cuthbert, L. (2000). A conversation-based process tracing method for use with naturalistic decisions: An evaluation study. British Journal of Psychology, 91, 203-221.
Linda M. Gallant, Bentley College, MA
Linda Gallant, Ph.D., is an Assistant Professor in the Information Design and Corporate Communication Department. She investigates how design processes shape information technology products.
Author: Gallant, Linda M.
Publication: Academic Exchange Quarterly
Date: Jun 22, 2006