Retrieval from a case-based reasoning database.
We examined users' information retrieval performance, and their perceptions of that performance, in a case-based reasoning knowledge repository in order to determine what support users need to use such a system effectively. The users were able to retrieve information from the system and noticed a difference between traditional keyword searching and case-based reasoning searching, but they did not understand what caused the difference. This lack of understanding may limit the full potential of alternative search engines that present results differently than familiar search engines do.
Teachers and those who educate them are continually searching for innovative and effective tools to assist with learning to integrate technology into teaching. When teachers are seeking resources for solving a technology integration problem, they turn to databases, work groups, and communities of practice. They look for similar situations to see how problems were solved and then adapt the information that they find to fit their own needs. The Knowledge Innovation for Technology in Education (KITE) project is a web-based knowledge repository with nearly 1000 stories or cases describing the real-life experiences of in-service teachers as they integrate technology into their teaching (Wang, Moore, Wedman, & Shyu, 2003). The technological design of this repository relies on information retrieval with case-based reasoning (CBR), which means that the users can search for cases that are either identical or similar to the desired criteria. Although the repository provides access to a wealth of knowledge, many users are not familiar with the search principles of CBR retrieval systems.
With the proliferation of new search engines, it may be necessary to create support tools, such as job aids and tutorials, that will assist users in using a search interface effectively. To determine what type of support users need, we examined their search experience while using the KITE search engine. A survey by Jonassen and Erdelez (2005) identified unfamiliarity with the new search environment as the most consistent problem in using the CBR database. In another study, Wu (2006) identified the importance of providing users with a conceptual description of the search engine in order to facilitate higher search correctness and user satisfaction. Our study builds on this prior research by applying task-based evaluation conducted in a usability laboratory[1]. In particular, the study collected data about users' search sessions in response to assigned search tasks and about users' perceptions of searching with the CBR and keyword search interfaces.
The Principles of Case-Based Reasoning
According to Jonassen and Hernandez-Serrano (2002), we can prepare professionals to deal with ill-defined and ill-structured problems in the workplace by exposing them to stories generated in the same environment. People naturally take what they have learned from a previous problem and apply it to a new one. This principle creates the basis for utilizing cases to generate solutions, and it is used extensively in everyday, common-sense reasoning. The process of problem solving, which includes analysis, evaluation, synthesis, and conceptualization, produces better judgment and decision making. From a CBR perspective, the problem-solving approach involves retrieving and adapting cases with associated solutions (Bradley, 1994). One assumption is that individuals have numerous experiences indexed in their memory that they and other people can draw on in new situations. An additional assumption is that community knowledge is stored, and can be captured, in the form of stories (Schank, 1990).
The KITE knowledge repository contains a library of technology integration cases, with each case including a description of the problem and the solution and/or the outcome (Wang, Moore, Wedman, & Shyu, 2003). Each case reflects the knowledge and reasoning process used by the teacher to solve the problem. Although this process is not labeled, it is implicit in the solution. Thus, the repository can be used as an anchor for instructional activities integrated within a teacher education course. Students can analyze and synthesize multiple cases given an instructional situation, then develop new lesson plans or activities based on the same thread of cases. Case-based reasoning includes four major steps: retrieve, reuse, revise, and retain. When solving a current problem, the user searches for similar cases in the database. The retrieved cases are used to suggest a solution, which is reused and tested for success. If necessary, the solution is then revised. Finally, the current problem and the final solution are retained as part of a new case.
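The four steps above can be sketched as a single pass through the CBR cycle. The following is a minimal, hypothetical sketch in Python, not KITE's actual implementation; the retrieval, adaptation, test, and revision strategies are placeholders supplied by the caller:

```python
def cbr_solve(problem, case_base, retrieve, adapt, test, revise):
    """One pass through the CBR cycle: retrieve, reuse, revise, retain."""
    best = retrieve(problem, case_base)           # 1. retrieve the most similar case
    solution = adapt(best["solution"], problem)   # 2. reuse (adapt) its solution
    if not test(solution, problem):               # 3. revise if the test fails
        solution = revise(solution, problem)
    case_base.append({"problem": problem,         # 4. retain the new case
                      "solution": solution})
    return solution

# Toy demo: problems and solutions are numbers, retrieval is nearest-neighbor,
# and a "correct" solution is ten times the problem value.
base = [{"problem": 2, "solution": 20}, {"problem": 9, "solution": 90}]
answer = cbr_solve(
    problem=3,
    case_base=base,
    retrieve=lambda p, cb: min(cb, key=lambda c: abs(c["problem"] - p)),
    adapt=lambda s, p: s + 10,       # naive adaptation of the old solution
    test=lambda s, p: s == p * 10,
    revise=lambda s, p: p * 10,      # fall back to a corrected solution
)
```

The point of the sketch is the control flow: each solved problem is appended back into the case base, so the library grows with use.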
There are three options for accessing the case library: Browsing, Keyword Search, and Super Search. Browsing provides an approach for users to navigate the entire library by predefined index terms, such as type of technology used, standards, and subject. The Keyword Search finds information by counting the term frequencies of keywords in text. The searching is more complex for the Super Search, because the search engine retrieves cases based on semantic meanings (i.e., similarities) and the "distance" between the meanings. A meaningful distance measurement between cases is the key to the search engine. The distance between two values ranges from 0 to 1, with 1 being the maximum distance, which means no similarity at all. When a query case is submitted, the engine first computes the distances between the query case and all cases in the database. A shorter distance is expected when two cases are more similar. The engine then ranks the distances to determine the order of retrieved cases so that the users are presented with the best-matched case first. If the initial retrieval results are not satisfactory, the users can revise the query case or create a new query case. For instance, imagine that a user submits a search request for stories about technology use in third grade classes. If the search engine cannot find third grade cases, it will first return second and fourth grade cases, then first and fifth grade cases, and so on. If the engine cannot find science cases, it will return math cases before returning language arts cases, since math is arguably closer to science than language arts. As such, users may not find perfectly matched cases, but the Super Search always returns other closely related cases. Consequently, previous search experiences with tools such as Yahoo or Google can affect how users perceive the results retrieved from the Super Search. The potential for misperceptions of the search functions was the basis for our study.
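The distance-and-rank behavior described above, including the third-grade and science examples, can be illustrated with a short sketch. The field names, equal weighting, and the specific distance values here are assumptions for illustration only, not KITE's actual metric:

```python
# Illustrative sketch of similarity-based retrieval: distances lie in
# [0, 1], where 0 means identical and 1 means no similarity at all.

GRADES = ["first", "second", "third", "fourth", "fifth"]

def grade_distance(a, b):
    """Normalized distance between grade levels on an ordinal scale."""
    return abs(GRADES.index(a) - GRADES.index(b)) / (len(GRADES) - 1)

def subject_distance(a, b):
    """Hypothetical semantic distances between subjects (symmetric)."""
    table = {frozenset({"science", "math"}): 0.3,
             frozenset({"science", "language arts"}): 0.8,
             frozenset({"math", "language arts"}): 0.7}
    return 0.0 if a == b else table[frozenset({a, b})]

def case_distance(query, case):
    """Average the per-field distances (equal weights assumed)."""
    return (grade_distance(query["grade"], case["grade"])
            + subject_distance(query["subject"], case["subject"])) / 2

def super_search(query, cases):
    """Rank all cases by distance to the query; best match first."""
    return sorted(cases, key=lambda c: case_distance(query, c))

cases = [
    {"id": 1, "grade": "second", "subject": "math"},
    {"id": 2, "grade": "fifth", "subject": "language arts"},
    {"id": 3, "grade": "fourth", "subject": "science"},
]
query = {"grade": "third", "subject": "science"}
ranked = super_search(query, cases)
```

Even though no case matches the query exactly, every case receives a finite distance, so the engine always returns a fully ranked list; here the fourth-grade science case comes first.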
We employed a combination of qualitative and quantitative methods for collecting and analyzing data. The participants were 21 college students (16 females and 5 males). Seventeen students were enrolled in a teacher education program, and the remaining four were enrolled in College of Education courses. To obtain user profile information regarding computer technology experience, we asked five questions relating to self-perception of comfort with computers and with the Web, and to experience with the Web, search engines, and electronic databases. The data were collected in a usability laboratory[1] equipped with a personal computer connected to the Internet. The participants used the same web browser to access KITE. The study participants were presented with four search tasks to find related cases in the repository. Each search task required two search sessions, wherein users searched using the Keyword (traditional search) and Super Search (CBR search) options. A search session was operationalized as all of the queries submitted in the process of finding the answer to the search task.
There were two "positive" and two "negative" search tasks. A "positive" search task means that there was at least one case in the case repository that was a "perfect" match for all of the elements in the search scenario. For example, one of the positive tasks was worded as: "Find a case that describes how students use PowerPoint and digital cameras for creating presentations." For a "negative" search task, the results did not provide cases that included all of the criteria stated in the scenario. The example scenario for one of the negative tasks was: "A third grade science teacher has 20 years of teaching experience but no technology integration experience. He teaches in an urban school with Internet connection and wants to use videoconferencing to conduct distance education activities." Once the user identified a case for a search task, they rated the relevance of the case to the scenario. Other data collection instruments were observations of users performing search tasks and search task response forms completed by participants. On the search task response forms, the participants described their search process and the differences between the Keyword and Super Search outcomes.
The demographics survey comprised two components: general information (e.g., name, age, gender, academic status) and computer technology experience. The majority of the students rated themselves as comfortable with both computers (71%) and the Web (71%), and perceived themselves as intermediate users in terms of both web experience (71%) and search engine experience (62%). For their experience level with electronic databases, 48% of the participants chose the nonuser label, but only 5% considered themselves advanced users. The remaining 47% were evenly distributed between novice and intermediate users. Using a 5-point scale, the participants rated the relevance of retrieved cases. The average relevance rating for cases retrieved with the Keyword Search tool was 3.12, and the average for Super Search was 3.69. A paired-samples t-test indicated a statistically significant difference (p = .002) between the relevance ratings for Super Search and Keyword Search. That is, subjects rated cases found using Super Search as more relevant to the search tasks than cases found using the Keyword Search tool.
The qualitative data indicate that the participants' previous search experience did affect their perception of the KITE search engine. The observations and search task responses revealed that most users perceived the Super Search as a method that provides more options for narrowing their search. They believed that choosing more search criteria (e.g., grade level, subject area, type of technology) would "limit" the number of cases retrieved. However, the search engine actually uses these criteria to expand the number of potential matches, not to narrow it. There was an additional misperception among the participants. They believed that when the Super Search returned more results than the Keyword Search, it was an indication that the Super Search performed better, even though many retrieved cases did not "perfectly" match the search scenario. Many participants commented that they expected to find cases that could be immediately applied to their specific instructional environment. Therefore, the main issue was how the information could be used once it was retrieved, more so than how to use the search options.
The positive and negative search tasks represented realistic situations that users encounter when using a database. Oftentimes, users might believe that they are not using the system appropriately because they do not receive any results. In reality, some users fail to realize that databases may not contain a solution or case that matches their search query. This fact was evident during the negative search tasks. In our study, the participants did not know that the negative tasks would not produce a relevant case matching the criteria stated in the search scenario. When asked about their search performance and results, several participants expressed disappointment with their queries and assumed that they were not performing the search accurately. The authors realize that this perception may also have been influenced by the artificial nature of the research study. That is, when participants were asked to perform a search task, some may have assumed that there must be at least one case that fit the task. Also, some participants may have believed that the researchers would not have them perform a search task for which there was no case that fit the search scenario.
Our findings indicate that the participants in our study successfully used the KITE retrieval system and were capable of noticing differences in the results between traditional keyword searching and CBR search tools. The users were satisfied with their own performance and with the relevance of retrieved cases; however, they failed to understand the underlying operation of the search engine. While the primary task of finding relevant cases (for positive tasks) may be successfully completed with either traditional keyword searching or CBR searching, the lack of understanding of the underlying principles may prevent users from taking full advantage of similar systems. For example, numerous related examples rather than exact matches to an instructional problem may not be the results that teachers typically expect from a database. However, teachers must learn that examining the differences and similarities of technology solutions can help them generate innovative activities. This cognitive model of reasoning can encourage reflection and improve problem-solving skills, which is critical for adapting to the technology situations that teachers will encounter in the classroom.
In summary, the developers of online learning environments that rely on CBR should not expect users to fully understand the potential of this type of searching on their own, nor through repetitive system use. Our task-oriented methodology confirmed the findings of prior studies: conceptual training about the characteristics of a CBR search system, and its differences from a traditional keyword search system, is needed to ensure that users take full advantage of this type of retrieval system. The training should not focus only on the search interface but should explain the conceptual differences between the two approaches to searching. With such training, users may be able to reach higher levels of problem solving and learning.
Bradley, A. P. (1994). Case-based reasoning: Business applications. Communications of the ACM, 37(3), 40-42.
Jonassen, D. H., & Erdelez, S. (2005). Usability of case libraries by teachers. Journal of Computing in Teacher Education, 22(2), 67-74.
Jonassen, D. H., & Hernandez-Serrano, J. (2002). Case-based reasoning and instructional design: Using stories to support problem solving. Educational Technology Research and Development, 50(2), 65-77.
Knowledge Innovation for Technology in Education (n.d.). Retrieved October 20, 2006 from http://kite.missouri.edu/
Kolodner, J. L. (1993). Case-based reasoning. San Mateo, CA: Morgan Kaufmann Publishers, Inc.
Schank, R. C. (1990). Tell me a story: Narrative and intelligence. Evanston, IL: Northwestern University Press.
Wang, F., Moore, J. L., Wedman, J., & Shyu, C. (2003). Developing a case-based reasoning knowledge repository to support the technology integration community. Educational Technology Research and Development, 51(3), 45-62.
Wu, H. (2006). The effects of conceptual description and search practice on users' mental models and information seeking in a case library with a best match search mechanism. Unpublished doctoral dissertation. University of Missouri-Columbia.
[1] The study was conducted in the Information Experience Laboratory at the University of Missouri-Columbia. http://ielab.missouri.edu
Joi L. Moore, University of Missouri-Columbia
Sanda Erdelez, University of Missouri-Columbia
Wu He, Old Dominion University
Joi L. Moore, Ph.D. and Sanda Erdelez, Ph.D. are Associate Professors in the School of Information Science & Learning Technologies. Wu He, Ph.D. is an Instructional Technologist at the Center for Learning Technologies.
Publication: Academic Exchange Quarterly, December 22, 2006