Evaluation of a third-generation zoo exhibit in relation to visitor behavior and interpretation use.
Zoo exhibits have gone through a dramatic evolution over the last 100 years. This change has often been described in terms of the transition from first- to third-generation exhibits (Campbell, 1984). First-generation exhibits, often housing solitary animals, were bare and featureless, and either completely barred or relying on deep pits for animal containment. Second-generation exhibits (of which many remain) can still be fairly austere, with modest attempts at including "cage" furniture. They are typically constructed of inorganic materials such as concrete and often surrounded by a water-filled moat. They are designed, at least in part, with the welfare of the animal in mind. Third-generation exhibits keep animals in species-appropriate group sizes and are planted and themed to resemble the native ecosystem of the species. The barriers between visitors and animals are normally concealed. Often, the term immersive or immersion is used to describe third-generation exhibits. By adopting this approach, the mood of the exhibit can contribute subliminally towards public education, and when supported by a number of interpretive elements, including signs, sensory experiences, and interactives, the exhibit message can be further consolidated (Coe, 1987).
The idea that zoos should promote conservation education is not new. Most zoos in the UK have had formal education programs for schoolchildren since the 1970s or '80s (Woollard, 1998). In more recent times, education for all of the zoo-going public has become an increasing priority. In the UK, in line with other European Union member countries, zoos now have a legal obligation to actively promote conservation education "by providing information about the species exhibited and their natural habitats" (DEFRA, 2004). The World Association of Zoos and Aquariums (WAZA) refers to conservation education as having a "central role for all zoos and aquariums" and forming a "critical component of field conservation, building awareness and support" (WAZA, 2005, p. 35). Tribe and Booth (2003) also favor zoo education for all of the visiting public, stating that because zoos attract around 600 million visitors, they have a great opportunity to engage in public education. Brewer (2001) provides support for the value of conservation education as a whole by describing it as "the practical nuts and bolts of connecting teaching with learning to cultivate conservation literacy" (p. 1203). However, in recent years there has been a call for more research to be conducted into the effectiveness of conservation education within zoos (Balmford et al., 2007; RSPCA, 2006). WAZA (2005) also recognizes the key importance of evaluating education programs.
Traditional educational interventions such as teaching schoolchildren are routinely evaluated in many zoos. Exhibits are part of the educational landscape of zoos, particularly if they are third-generation and contain interpretive elements. Their effectiveness as public education conduits should also be evaluated. This may provide valuable information to drive remedial work or retrofitting, provided that during the planning phase, clearly defined exhibit aims are identified. See Sickler, Fraser, Gruber, Boyle, Webler, and Reiss, 2006, or Hayward and Rothenberg, 2004, for examples of comprehensive evaluations of exhibits, including pilot phases similar to this study.
Summary of Exhibit and Interpretation Evaluation
Zoo exhibit studies have often mirrored the techniques pioneered in museum studies. These include the collection of data relating to visitor behavior and the variables that may determine it (Ross & Gillespie, 2009; Zwinkels, Oudegeest, & Laterveer, 2009; Moss, Francis, & Esson, 2008; Nakamichi, 2007; Ross & Lukas, 2005; Johnston, 1998; Philpott, 1996; Bitgood, Patterson, & Benefield, 1988; Marcellini & Jenssen, 1988; Derwin & Piper, 1988). The most common method used can be described in general terms as "visitor tracking" or "behavior mapping." This involves the unobtrusive observation of visitors and recording how they behave in exhibits. Yalowitz and Bronnenkant (2009) provide an overview of this approach. Typically, data such as the number of stops at animal viewing windows or interpretive elements are collected, together with the viewing time for each and the overall dwell time for the exhibit as a whole. Bitgood, Patterson, and Benefield (1986) give an overview of ten variables that can affect visitor behavior in zoo exhibits. Such studies allow a detailed picture of exhibit usage to be revealed. However, from an educational perspective, we cannot deduce that learning has taken place on the basis of a visitor stopping at an animal viewing window or apparently attending to an interpretive sign.
There is a growing number of researchers involved in the evaluation of interpretation (as opposed to the evaluation of whole exhibits) and studies are commonly undertaken at heritage sites, national parks, museums, and science centers (Yamada & Knapp, 2010; Hughes, Ham, & Brown, 2009; Weiler & Smith, 2009; Ballantyne, Packer, & Hughes, 2008). Much of this work concentrates on the impact of interpretation on visitors--this could mean uncovering what visitors might have learned from interpretation, or how their behavior or actions might have changed because of its influence. For example, Kim, Airey, and Szivas (2010) used a survey instrument to assess attitudes and environmental intentions of visitors to a coastal area in the UK and found that interpretation had some positive, site-specific impact but was not as effective on influencing longer-term or broader conservation issues. Tubb (2003) reported similar results, finding that visitors to a UK national park generally showed only site-specific attitude changes when exposed to interpretation. Barriault and Pearson (2010) present a "Visitor Engagement Framework"--a system that allows observations of visitors to be assigned to one or more hierarchical, behavior categories.
Interpretation research undertaken in zoo settings is becoming more common (Fraser, Bicknell, Sickler, & Taylor, 2009; Fuhrman & Ladewig, 2008; Povey & Rios, 2002; Broad & Weiler, 1998; Woods, 1998) and there have been some highly innovative and diverse studies. For example, Fraser et al. (2009) provide an insight into what zoo visitors want to read on species signs--namely factual information: conservation status, where animals live, any unusual adaptations or behaviors. In a similar vein, Fuhrman and Ladewig (2008) reviewed the characteristics of species utilized by zoos in interpretation. Broad and Weiler (1998) present a detailed content and visitor analysis of two tiger exhibits and suggest a link between interpretive content and the learning that took place.
Post-occupancy Studies in Zoos
There have been a limited number of post-occupancy exhibit studies (those conducted after the completion of new exhibits) undertaken in other zoos. For example, Shettel-Neuber (1988) used four methods (including visitor tracking and questionnaires) to assess two second-generation (older style) and two third-generation (modern, immersive) exhibits that housed the same species. The attitudinal preference of visitors was for the modern exhibits, but the conclusions from the visitor behavior data were less clear-cut. Derwin and Piper (1988) used a mix of unobtrusive observation correlated with visitor recall of information, finding a strong correlation between dwell time and exhibit elements explored. Wilson, Kelling, Poline, Bloomsmith, and Maple (2003) conducted interviews with visitors with regard to a new giant panda exhibit, finding that visitors felt positively towards many aspects of the new exhibit, with particular reference to the educational opportunities this exhibit offered.
Rationale for This Work
In this paper, we provide a detailed analysis of a replacement immersive, third-generation orangutan exhibit at Chester Zoo, UK, using primarily quantitative methods. Visitor exhibit use was observed in the second-generation exhibit and compared with that in the newly built one and new, prototype interpretation was pre-tested with zoo visitors. Quantitative methods were selected for their ease of use and speed of data analysis, allowing for rapid feedback during the design process and in the immediate post-occupancy period. However, it is recognized that a mixed-methods approach could have provided a more powerful overall assessment, particularly when considering the educational impact of the exhibit.
The main motivation for this research was to utilize an evidence-based approach to improve interpretation use and performance, but also to provide a better overall exhibit experience for visitors, with interpretation being a key part of this.
Chester Zoo is one of the leading zoos in the world (Forbes, 2007) and is the most visited in the UK (attracting around 1.4 million visitors each year). It is situated in the northwest of England and is within a 90-minute drive of some of the most populated areas of the UK, including Manchester, Liverpool, Birmingham, and Leeds. Chester Zoo covers an area of approximately 50 hectares and is home to around 400 different species.
The Old Exhibit
Before the construction of the Realm of the Red Ape (RORA) exhibit began, evaluation was conducted in the Orangutan Breeding Centre (OBC), which was built in 1969 and was very much a second-generation exhibit, more functional than aesthetic. Visitor behavior was recorded in this exhibit using visitor tracking methods (see below for specific details) and overall dwell time and visitor stops were recorded. The aim was to compare visitor use of orangutan exhibit elements in the old and new exhibit (controlling for variables). Two hundred groups were recorded. This data was collected between October 2004 and March 2005.
The New Exhibit
The study took place between 2007 and 2008 in the RORA exhibit at Chester Zoo and the data was collected by experienced visitor researchers from Chester Zoo's Education Division. RORA opened in June 2007 after two years of construction and replaced the second-generation exhibit, OBC.
RORA sought to recreate a southeast Asian tropical forest canopy in an indoor exhibit and the design theme was "Life in the Canopy." Visitors entered the exhibit at canopy level via a raised walkway (as they had in OBC) and then made their way around a climate-controlled linear walkway, themed with authentic tropical plants, including orchids and pitcher plants. Water was sprayed into the exhibit with misting machines, for the plants' benefit and to add to the immersive experience. The orangutans (Bornean, Pongo pygmaeus, and Sumatran, Pongo abelii) could be viewed through a number of viewing windows along the walkway--some with an outside aspect, some looking into the indoor enclosures. A family of lar gibbons (Hylobates lar) shared the enclosures with the orangutans, forming a mixed-species exhibit.
The walkway experience included smaller enclosures housing canopy species that share the same distribution as orangutans, such as Salvador's monitor (Varanus salvadorii), reticulated python (Python reticulatus), and dead leaf praying mantids (Deroplatys desiccata). The planting, atmosphere, and variety of canopy species all contributed to the third-generation immersive exhibit experience.
Orangutan interpretation was principally in the form of thirteen 2-meter-high panels with high-impact photography; the text was based around the educational theme of "Kinship," making connections that emphasized the relatedness between humans and orangutans in order to evoke an emotional response in visitors. The interpretive scheme included some traditional read-only signage, primarily for the other canopy species, and the installation of interactive elements: a "virtual storm," where a back-lit panel displayed the changing moods and weather of a rainforest, complete with sounds; a large "driptip" leaf model for children to shelter under (with a photographic panel that showed comparative behavior in a young orangutan) as a photo opportunity; and a model orangutan nest for children to sit in (accompanied by a photographic comparison), again as a photo opportunity. Two computer-driven interactive comparison games were also included, using the same large panels but with touch-pad controls and monitors.
Pre-evaluation of Interpretive Concepts for RORA
As this was a multi-million pound exhibit, it was seen as important to test the prior knowledge and understanding of the proposed design theme "Life in the Canopy," the educational theme "Kinship," and the proposed branding of the exhibit "Realm of the Red Ape." This was done in two ways.
First, 74 zoo visitors were randomly stopped and asked three questions:
* Which part of a forest is the canopy?
* What do you understand by the word kinship?
* If one of the zoo's exhibits was called "Realm of the Red Ape," what animal would you expect to find there?
The second approach involved the design of prototype orangutan interpretive elements to explore visitor understanding further, testing the education theme in each panel. Random groups of zoo visitors were brought together to view an audio-visual presentation of the panels. They were asked to rate each prototype theme on how closely related each one made them feel towards orangutans. Visitor preferences and feedback were recorded using software designed to assess educational performance, utilizing handheld voting pads (CPS™); this allowed data to be recorded automatically but also meant that participants were less influenced by one another's responses. Participants rated their responses to questions on a five-point Likert scale, namely: Very closely related, Quite closely related, Related, Quite distantly related, Very distantly related. A total of 183 visitors participated.
Evaluation of RORA
The main body of the study focused on the evaluation of the completed exhibit. This involved a visitor tracking method based on that provided by Serrell (1998). Visitor groups were randomly selected for tracking as they entered the main exhibit (i.e., not on the raised walkway to the entrance). When one member of that group made a definitive movement towards an exhibit element (animal viewing or interpretation), they became the group's representative and the sole target of tracking for the rest of that particular group's visit. Using a plan of the exhibit, the researcher (who was dressed in plain clothes and not identifiable as a member of zoo staff) recorded the behavior of the tracking target. This required the researcher to record each stop at an exhibit element in addition to the time spent at the element in question. Overall dwell time for the stay in the exhibit was also recorded. This data allowed the calculation of the percentage of visitor groups that stopped at each particular element (attracting power) and how long they were attentive (holding or viewing time). This allowed accurate "time budgets" for the 151 visitor groups that were tracked to be constructed.
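To make the two tracking metrics concrete, the calculation they imply can be sketched as follows. This is an illustrative sketch only, not the study's actual tooling; the record structure, element names, and sample values are hypothetical.

```python
# Sketch: deriving attracting power and holding time from tracking records.
# Each record represents one tracked group's stops: {element: seconds attended}.
# Element names and timings below are hypothetical examples.
from statistics import median

tracked_groups = [
    {"orangutan_window_1": 120, "virtual_storm": 35},
    {"orangutan_window_1": 90},
    {"leaf_model": 20, "orangutan_window_1": 60},
]

def attracting_power(groups, element):
    """Percentage of tracked groups that stopped at the element."""
    stops = sum(1 for g in groups if element in g)
    return 100.0 * stops / len(groups)

def holding_time(groups, element):
    """Median seconds spent by those groups that stopped at the element."""
    times = [g[element] for g in groups if element in g]
    return median(times) if times else 0

print(attracting_power(tracked_groups, "orangutan_window_1"))  # 100.0
print(holding_time(tracked_groups, "orangutan_window_1"))      # 90
```

Summing each group's recorded seconds across all elements, and comparing against that group's overall dwell time, yields the per-group "time budgets" described above.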
Additional methods were employed to assess the degree to which visitors were engaging with some of the computer-driven interactive games. After extensive prior observations of visitor interaction with the games, a four-level visitor engagement scale was developed (similar in concept to the framework described by Barriault and Pearson, 2010):
Level 1: Glances at the element, or presses button but does not stop.
Level 2: Stops at element, but either does not interact or fails to reach question stage. This behavior is often characterized by misuse of an interactive, such as repeated pressing of button without purpose.
Level 3: Reaches question stage of the game but may not answer all the questions. Visitors interact and may discuss the questions (perhaps incorrectly) but less enthusiastically than at level 4.
Level 4: Visitor reaches question stage and completes the majority of questions and may repeat the question cycle. This behavior is often characterized by enthusiasm and/ or animated visitor conversation.
Data was collected for a number of variables, including nominal and scale measures. Some scale data did not fit a normal distribution; in those cases, averages were reported as median figures and non-parametric methods of inferential statistics were used, such as the Mann-Whitney test for independent samples. Nominal data was subject to chi-square tests of independence where appropriate.
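For readers unfamiliar with the Mann-Whitney test used here, the U statistic can be computed directly from ranks. The sketch below is a minimal stdlib implementation (ties ignored, no significance calculation); the dwell-time samples are hypothetical, chosen to show the complete-separation case.

```python
# Minimal sketch of the Mann-Whitney U statistic for two independent
# samples. Tie handling is omitted for brevity; sample data is hypothetical.
from itertools import chain

def mann_whitney_u(x, y):
    """Return the smaller of the two U statistics for samples x and y."""
    pooled = sorted(chain(((v, "x") for v in x), ((v, "y") for v in y)))
    rank_sum_x = sum(rank for rank, (_value, label)
                     in enumerate(pooled, start=1) if label == "x")
    n1, n2 = len(x), len(y)
    u1 = rank_sum_x - n1 * (n1 + 1) / 2
    return min(u1, n1 * n2 - u1)

# Hypothetical dwell times (seconds); every RORA value exceeds every OBC
# value, so the samples separate completely and U takes its minimum of 0.
rora = [610, 583, 540, 700, 495]
obc = [190, 188, 150, 210, 175]
print(mann_whitney_u(rora, obc))  # 0.0
```

In practice a statistics package would be used to obtain the z and p values reported below; the point here is only what the U statistic measures: the degree of overlap between the two rank distributions.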
Pre-evaluation and Testing of Exhibit Themes
As explained above, three questions were put to a sample of zoo visitors. Table 1 displays the outcome of this survey. Clearly, visitors were comfortable with the idea of "kinship," with almost 90 percent of respondents giving an acceptable answer. As a consequence, this education theme was adopted for the exhibit. However, visitors were less confident in their understanding of the terms "canopy" and "red ape." As a result of this finding, the word "canopy" was substituted by the words "tree tops" and the exhibit design theme was re-named "Life in the Treetops." An image of an orangutan was incorporated into the branding of the exhibit whenever Realm of the Red Ape was used, to provide a visual prompt.
As expected, visitor groups involved in the testing of the prototype panels that explored "Kinship" felt particularly strongly towards content that had a human interest (see Table 2). For example, the panels "Mirror Image" (explaining how, like humans, orangutans can recognize their reflection in a mirror) and "Growing Up Takes Longer" (showing how similar child-rearing behaviors are in human and orangutan mothers) were given high "relatedness" ratings by the visitors. Other, more scientifically based panels were scored less favorably. For example, the "Same Deep Down" panel (looking at the biological similarities--DNA, large brain size, etc.--between humans and orangutans) did not invoke the same feelings of kinship in visitors. In response, the scientific content of this particular panel was toned down, while another poor-scoring panel that investigated the digestive systems of humans and orangutans was dropped altogether.
Comparisons Between RORA and the OBC
The obvious comparison between the two exhibits is how long visitors spent in each exhibit (Table 3). From this it is clear that RORA held visitors for a longer period than OBC. This equates to a median dwell time of 583 seconds in RORA (9 minutes, 43 seconds) in comparison with a median dwell time of 188 seconds in OBC (3 minutes, 8 seconds). There was a significant difference in overall dwell time between the two exhibits (U=1240, z=-14.726, p<.001).
However, this is not necessarily a fair comparison, as the visitor floor area of the two exhibits differs greatly (122 m² for the OBC, 336 m² for RORA). When we control for visitor floor area (Table 3), we find that the two exhibits are more similar in visitor use than first thought (1.74 seconds/m² for RORA and 1.54 seconds/m² for the OBC), although still significantly different (U=12278, z=-2.998, p=.003).
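The floor-area adjustment is simply median dwell time divided by visitor floor area, which can be verified directly from the figures reported above:

```python
# Worked check of the floor-area-controlled dwell times: median dwell
# time (seconds) divided by visitor floor area (square meters).
exhibits = {
    "RORA": {"median_dwell_s": 583, "floor_area_m2": 336},
    "OBC":  {"median_dwell_s": 188, "floor_area_m2": 122},
}

for name, d in exhibits.items():
    rate = d["median_dwell_s"] / d["floor_area_m2"]
    print(f"{name}: {rate:.2f} s/m^2")
# RORA: 1.74 s/m^2
# OBC: 1.54 s/m^2
```

This reproduces the reported 1.74 and 1.54 seconds/m² and makes clear why the raw dwell-time gap narrows once exhibit size is taken into account.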
Quantitative Evaluation of Visitor Behavior in RORA
The collection of holding time data for all visitor stops allows for a more complete analysis of behavior. Figure 1 shows the breakdown of this into the four main categories of stop. Clearly, animal viewing took up the majority of visitor time in the exhibit (187 seconds for flagship species--orangutans, 93 seconds for the other species). Visitors spent more time viewing the orangutans when compared to the other species. Overall, time spent interacting with interpretation was much less. Visitors spent, on average, over twice the amount of time using interactive interpretation compared to read-only signs. The remainder of the time (50 percent, 290 seconds) was spent passively (that is, walking through the exhibit, standing, waiting for others, etc.).
Interactive Interpretation Use and Retrofitting
Figure 2 shows the relative attracting power and holding times for the interactive interpretation in RORA. From this we found that two pieces were not as appealing to visitors as we would have hoped (when compared to the usage of the other interactives): the "Virtual Storm" and the "Leaf Model." As a result of this data, some simple retrofitting was conducted on these elements. This involved the fitting of brightly colored "cue" signs, clearly advertising the interactives in question. Previously, both the "Virtual Storm" and the "Leaf Model" had been either missed completely or appeared to have been misunderstood by visitors. Photographs 1 and 2 show details of this for the "Virtual Storm"; Photographs 3 and 4 show details for the "Leaf Model."
Attracting power and holding time data was then collected after the fitting of these signs. Table 4 shows the revised data after the changes were made. Here we can see that the attracting power and holding time significantly increased for both interactives.
Specific Examples of Visitor Engagement with Interpretation
As a further exploration into interpretation use, two examples of computer-driven interactive games were evaluated using a devised "scoring matrix" of engagement. The two examples used were:
The Same Deep Down Interactive that explores the similarities between humans and orangutans. Touch pads and monitors allow visitors to pair match images. Examples of images include an x-ray of a human and an orangutan skull (see Photograph 5).
Growing Up Takes Longer Interactive that explores similarities in child/parent relationships between humans and orangutans. Again, touch pads and monitors are used to allow visitors to compare photographs of various behaviors, touching the pad when they match a pair (see Photograph 6).
As detailed in the methods section, a four-level scale was used to rank visitor engagement; this could then be converted into a percentage figure within each of the four levels. Table 5 compiles data collected at these two interpretive elements. The mean level of engagement was fairly high (on the four-point scale) for both interactives, although when explored further we find that there was a greater proportion of visitors reaching levels 3 & 4 (63 percent) at "The Same Deep Down" when compared to "Growing Up Takes Longer" (40 percent). Few visitors who stopped remained at engagement level 1 (1 percent and 11 percent respectively), indicating a good transition from stop to interaction.
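The conversion from observed engagement levels to the percentage figures reported in Table 5 can be sketched as follows. The observation list is hypothetical; only the conversion logic is being illustrated.

```python
# Sketch: converting per-visitor engagement observations (levels 1-4)
# into a percentage figure within each level, as reported in Table 5.
# The observed levels below are hypothetical example data.
from collections import Counter

observed_levels = [4, 3, 3, 2, 4, 1, 3, 2, 4, 3]  # one entry per visitor

counts = Counter(observed_levels)
n = len(observed_levels)
percentages = {lvl: 100.0 * counts.get(lvl, 0) / n for lvl in range(1, 5)}

# Share of visitors reaching the deeper engagement levels (3 and 4),
# the comparison made between the two interactives in the text.
deep_engagement = percentages[3] + percentages[4]
```

With real data, `deep_engagement` corresponds to the 63 percent and 40 percent figures quoted for the two interactives.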
Further investigation revealed an interesting relationship between engagement level and holding time (Figure 3). There appears to be a strong positive relationship between the two measures.
Pre-testing of Visitor Understanding and Interpretation Prototyping
Using a combination of face-to-face interviews, electronic voting, and unobtrusive observations, a broad understanding of visitor prior knowledge was obtained. From this it was quite clear that visitors had a grasp of the meaning of "Kinship" and to a lesser extent "Canopy." The proposed branding of the exhibit "Realm of the Red Ape" was tested and there was found to be a lack of visitor understanding of the term "Red Ape." Nearly half of those surveyed did not make the connection with orangutans. Further probing revealed that visitors did have an awareness of orangutans but did not associate the species with the term "Red Ape." The assumption was that if a zoo exhibit was called "Realm of the Red Ape" it would contain "red apes"--a species of ape.
[FIGURE 3 OMITTED]
Prototyping revealed that participants were very much attracted to sentimental imagery. Interpretation that explored the anthropomorphic relationships between humans and orangutans promoted a more affiliative response in visitors, particularly when compared to interpretation that explored the colder scientific basis of our similarities. These scientific educational themes clearly invoked a lesser response from the visitor groups tested. This would suggest that interpretation dealing with purely scientific topics may need to be reassessed in terms of how it is presented to visitors, to avoid this science-laden turn-off.
Evaluation of RORA--General Discussion
Exhibit dwell time comparison clearly tells us that visitors spend more time in the RORA exhibit than in OBC (Table 3), whether we compare directly or control for visitor floor area. What is less clear is whether this makes RORA in some way more successful as a result. By looking in more detail at what visitors are doing, for how long, and in which part of the exhibit, this method becomes much more useful as an evaluative tool. For example, in some exhibits it might be useful to plot attracting power and/or holding times on a floor plan of the exhibit. From this, it is possible to find "hot" or "cold" spots within the exhibit space--that is, elements that are more or less popular relative to others of the same genre.
This method also allows for the relative comparison of interpretive elements within the same space; for example, we can provide a detailed breakdown of which interpretive elements are the most attractive and which hold attention for the longest. Those that under-perform relative to others need to be assessed carefully to uncover any potential problems. For example, does an interpretive element attract visitors but fail to hold attention? If so, this would suggest that something is not quite right with the presentation of the content. Conversely, an element with a low attracting power but high holding time suggests that the element is working well but visitors are simply not seeing it. This is exactly what we uncovered with the "Virtual Storm" and "Leaf Model" interactives (Figure 2), except in these cases, both attracting power and holding times were low (in relative terms). By adding brightly colored signs that cued in visitors to wait for the storm to develop in the case of the storm and advertising the photographic opportunity for children in the case of the leaf model, the effectiveness of both pieces increased greatly. The important point here is that without this research, there would not necessarily have been any awareness of this under-performance.
Limitations of Study and Recommendations
The limitation of using only quantitative data to evaluate exhibits, particularly when considering educational impact, is that time and stopping data cannot tell us what visitors are thinking. These data can therefore only ever be used as an indicator of educational impact and not as a direct measure.
It is tempting to assume that the longer a visitor engages with an exhibit element, the greater the likelihood that some sort of learning experience is taking place and conversely, the shorter the time involved, the less likely it is that a visitor will assimilate knowledge or understanding. On the other hand, there is the chance that a longer holding time simply indicates increased confusion or, in the case of an interactive, a lack of understanding of how to operate it. This was the purpose of the four-point "engagement" scale--to see what visitors were doing during the time spent at the computer-driven interactive games.
We found a close positive relationship between the level of engagement achieved and element holding time (Figure 3). This relationship appears to be almost directly proportional (and therefore predictable), although of course, we can only confirm this pattern in these two interactives--it would be useful to have more data. However, there are two potential weaknesses in inferring too much from this relationship. First, although the four levels of engagement were carefully determined by extensive prior visitor observations, the scoring matrix is still a subjective rating of behavior, which is more prone to error and inconsistency than more easily quantified measures. Second, there is an almost automatic implication that as holding time increases so does the level of engagement, making the comparison between the two potentially flawed. Whether this suggests that the method is inappropriate (or in need of refinement) or that there really is a relationship between time and visitor engagement is currently unresolved.
With hindsight, it would have been useful to employ a mixed-methods approach to data collection to explore visitor thoughts and feelings, using qualitative methods such as conversation analysis or the use of Personal Meaning Maps (Falk, Moussouri, & Coulson, 1998). Without doubt, the inclusion of techniques like this would add another dimension to the data collection and increase understanding of the exhibit as a whole. Quantitative methods used on their own fail to encompass the full richness and depth of experience that an immersive exhibit can offer. Because of the diverse range of prior knowledge and experience each visitor brings to their exhibit experience, it is logical to assume that the outcomes for individual visitors may be multiple, highly diverse, and not necessarily as initially intended by educators and exhibit planners (Rennie & Johnston, 2004).
This is not to say that exhibit themes cannot be planned. The education theme in RORA was clearly defined from the project conception as being our feelings of "Kinship" towards our close relatives, the orangutans. This theme is very much in keeping with global zoo conservation strategy (WAZA, 2005) in that it was designed to "induce a feeling of wonder and respect for the web of life and our role in it; it should engage the emotions and build on this experience to create a conservation ethic that can be carried into action" (p. 38). The quantitative methods we employed have enabled us to construct a detailed model for visitor behavior within the exhibit and, as a consequence, allow us to infer educational impact to some degree. Future work should certainly focus on a more multi-method approach to help uncover the kind of "meanings" people are taking away from the exhibit experience. This is important if zoos are to further evidence their educational influence on their visitors.
Visitor dwell time was increased in the RORA exhibit when compared to the exhibit it replaced, OBC. This pattern was also observed when we controlled for visitor floor area. The usefulness of the simple pre-testing of exhibit and educational themes was demonstrated. The most notable findings were that visitors understood the concept of "Kinship" (the proposed interpretive theme) but failed to uniformly understand the link between the exhibit title "Realm of the Red Ape" and orangutans. This may have an impact on visitor way-finding and understanding of zoo exhibits if names are ambiguous. Interpretation prototyping was instrumental in the fine-tuning of interpretation that was more palatable to visitor ideas of kinship and human/nonhuman animal similarities.
Visitor tracking in RORA revealed patterns of behavior that aid comparative exhibit element evaluation and therefore the implementation of remedial measures. The use of stopping and time data in the evaluation of educational impact was less clear-cut, although a tentative relationship between increasing holding time and visitor engagement was proposed.
Ballantyne, R., Packer, J., & Hughes, K. (2008). Environmental awareness, interests and motives of botanic gardens visitors: Implications for interpretive practice. Tourism Management, 29(3), 439-444.
Balmford, A., Leader-Williams, N., Mace, G. M., Manica, A., Walter, O., West, C., et al. (2007). Message received? Quantifying the impact of informal conservation education on adults visiting UK zoos. In A. Zimmermann, M. Hatchwell, L. Dickie & C. West (Eds.), Zoos in the 21st Century: Catalysts for Conservation? Cambridge: Cambridge University Press.
Barriault, C., & Pearson, D. (2010). Assessing Exhibits for Learning in Science Centers: A Practical Tool. Visitor Studies, 13(1), 90-106.
Bitgood, S., Patterson, D., & Benefield, A. (1986). Understanding Your Visitors: Ten Factors that Influence Visitor Behavior. Technical report, Psychology Institute, Jacksonville State University, 86-60, 1-17.
Bitgood, S., Patterson, D., & Benefield, A. (1988). Exhibit Design and Visitor Behavior: Empirical Relationships. Environment and Behavior, 20(4), 474-491.
Brewer, C. (2001). Cultivating Conservation Literacy: Trickle-Down Education Is Not Enough. Conservation Biology, 15(5), 1203-1205.
Broad, S., & Weiler, B. (1998). Captive animals and interpretation--a tale of two tiger exhibits. Journal of Tourism Studies, 9(1), 14-27.
Campbell, S. (1984). A new zoo? Zoonooz, 55(9), 4-7.
Coe, J. (1987). What's the Message? Exhibit Design for Education. AAZPA Northeastern Regional Proceedings, 19-23.
DEFRA. (2004). The Secretary of State's Standards of Modern Zoo Practice. Chapter 7: Conservation and Education measures. Retrieved from http://www.defra.gov.uk/wildlifepets/zoos/documents/zoo-standards/chap7.pdf.
Derwin, C. W., & Piper, J. B. (1988). The African Rock Kopje Exhibit: Evaluation and Interpretive Elements. Environment and Behavior, 20(4), 435-451.
Falk, J. H., Moussouri, T., & Coulson, D. (1998). The effect of visitors' agendas on museum learning. Curator, 41, 106-120.
Fraser, J., Bicknell, J., Sickler, J., & Taylor, A. (2009). What Information Do Zoo & Aquarium Visitors Want on Animal Identification Labels? Journal of Interpretation Research, 14(2), 7-19.
Fuhrman, N. E., & Ladewig, H. (2008). Characteristics of Animals Used in Zoo Interpretation: A Synthesis of Research. Journal of Interpretation Research, 13(2), 31-42.
Hayward, J., & Rothenberg, M. (2004). Measuring Success in the "Congo Gorilla Forest" Conservation Exhibition. Curator: The Museum Journal, 47(3), 261-282.
Hughes, M., Ham, S. H., & Brown, T. (2009). Influencing Park Visitor Behavior: A Belief-based Approach. Journal of Park & Recreation Administration, 27(4), 38-53.
Johnston, R. J. (1998). Exogenous Factors and Visitor Behavior: A Regression Analysis of Exhibit Viewing Time. Environment and Behavior, 30(3), 322-347.
Kim, A. K., Airey, D., & Szivas, E. (2010). The Multiple Assessment of Interpretation Effectiveness: Promoting Visitors' Environmental Attitudes and Behavior. Journal of Travel Research, 20(10), 1-14.
Marcellini, D. L., & Jenssen, T. A. (1988). Visitor behavior in the National Zoo's reptile house. Zoo Biology, 7(4), 329-338.
Moss, A., Francis, D., & Esson, M. (2008). The Relationship Between Viewing Area Size and Visitor Behavior in an Immersive Asian Elephant Exhibit. Visitor Studies, 11(1), 26-40.
Phillpot, P. (1996). Visitor viewing behaviour in the Gaherty Reptile Breeding Centre, Jersey Wildlife Preservation Trust: A preliminary study. The Dodo, Journal of the Wildlife Preservation Trusts, 32, 193-202.
Povey, K., & Rios, J. (2002). Using interpretive animals to deliver affective messages in zoos. Journal of Interpretation Research, 7(2), 19-28.
Rennie, L. J., & Johnston, D. J. (2004). The nature of learning and its implications for research on learning from museums. Science Education, 88(S1), 4-16.
Ross, S., & Lucas, K. (2005). Zoo Visitor Behavior at an African Ape Exhibit. Visitor Studies Today, 8(1), 4-12.
Ross, S. R., & Gillespie, K. L. (2009). Influences on visitor behavior at a modern immersive zoo exhibit. Zoo Biology, 28(5), 462-472.
RSPCA. (2006). Evaluation of the effectiveness of zoos in meeting conservation and education objectives. In The Welfare State: Measuring Animal Welfare in the UK, 2006 (pp. 95-98). Royal Society for the Prevention of Cruelty to Animals.
Serrell, B. (1998). Paying Attention: Visitors and Museum Exhibitions. Washington DC: American Association of Museums.
Shettel-Neuber, J. (1988). Second and Third-Generation Zoo Exhibits: A Comparison of Visitor, Staff, and Animal Responses. Environment and Behavior, 20(4), 452-473.
Sickler, J., Fraser, J., Gruber, S., Boyle, P., Webler, T., & Reiss, D. (2006). Thinking about dolphins thinking (WCS Working Paper No. 27). New York: Wildlife Conservation Society.
Tribe, A., & Booth, R. (2003). Assessing the Role of Zoos in Wildlife Conservation. Human Dimensions of Wildlife, 8(1), 65-74.
Tubb, K. N. (2003). An Evaluation of the Effectiveness of Interpretation within Dartmoor National Park in Reaching the Goals of Sustainable Tourism Development. Journal of Sustainable Tourism, 11(6), 476-498.
WAZA (2005). The World Zoo and Aquarium Conservation Strategy: Building a future for wildlife. World Association of Zoos and Aquariums, Berne, Switzerland.
Weiler, B., & Smith, L. (2009). Does more interpretation lead to greater outcomes? An assessment of the impacts of multiple layers of interpretation in a zoo context. Journal of Sustainable Tourism, 17(1), 91-105.
Wilson, M., Kelling, A., Poline, L., Bloomsmith, M., & Maple, T. (2003). Post-occupancy evaluation of zoo Atlanta's Giant Panda Conservation Center: Staff and visitor reactions. Zoo Biology, 22(4), 365-382.
Woods, B. (1998). Animals on Display: Principles for Interpreting Captive Wildlife. Journal of Tourism Studies, 9(1), 28-39.
Woollard, S. P. (1998). The development of zoo education. International Zoo News, 45(7), 422-426.
Yalowitz, S. S., & Bronnenkant, K. (2009). Timing and Tracking: Unlocking Visitor Behavior. Visitor Studies, 12(1), 47-64.
Yamada, N., & Knapp, D. (2010). Participants' Preferences for Interpretive Programs and Social Interactions at a Japanese Natural Park. Visitor Studies, 13(2), 206-221.
Zwinkels, J., Oudegeest, T., & Laterveer, M. (2009). Using Visitor Observation to Evaluate Exhibits at the Rotterdam Zoo Aquarium. Visitor Studies, 12(1), 65-77.
Education Research Officer
North of England Zoological Society, Chester Zoo, UK
Education Programmes Manager
North of England Zoological Society, Chester Zoo, UK
Formerly of North of England Zoological Society
Now at British Museum, London
Table 1. Pre-testing of visitor knowledge with regard to interpretive concepts and exhibit ideas for RORA. N=74.

Question (accepted answers)                                            % acceptable response
What do you understand by the word "kinship"?                          87.8
  (family, friends, friendship, relative, clan, tribe)
Which part of a forest is the canopy?                                  60.8
  (highest layer of foliage, high up, in the trees)
If one of the Zoo's exhibits was called "Realm of the Red Ape",        54.1
  what animal would you expect to find there? (orangutan)

Table 2. Five-point Likert responses to prototype panels. All respondents were asked to rate how closely related to orangutans each of the prototypes made them feel. N=183.

Interpretation                        Mean kinship rating   Standard deviation
The Same Deep Down (brain folds)      3.59                  0.99
The Same Deep Down (brain size)       3.59                  0.87
The Same Deep Down (chromosomes)      3.61
Indigestion!                          3.65                  1.16
Bedtime                               3.82                  1.06
Handyman Hints                        4.04                  0.90
Growing up takes longer               4.13                  1.01
Staying Dry ...                       4.19
Mirror Image                          4.25                  0.91

Table 3. Overall median dwell time comparison between the previous orangutan exhibit (Orangutan Breeding Centre) and the new exhibit (Realm of the Red Ape).

Exhibit                         Overall median dwell time   Median dwell time per unit
                                (seconds)                   exhibit area (seconds/m²)
Realm of the Red Ape (RORA)     583                         1.74
Orangutan Breeding Centre       188                         1.54

Significance testing (Mann-Whitney U): overall dwell time, U=1240, z=-14.726, p<.001*; dwell time per unit exhibit area, U=12278, z=-2.998, p=.003*.
* denotes significant result

Table 4. Comparison of attracting power and holding time before and after retrofitting exercise for the leaf model and virtual storm interactives.

Interactive           Attracting power (%)   Attracting power (%)    Significance
                      before                 with signage
Drip-tip Leaf model   14.34                  32.69                   χ²=20.377, df=1, p<.001*
Virtual Storm         15.97                  38.64                   χ²=23.887, df=1, p<.001*

Interactive           Median holding time    Median holding time     Significance
                      (seconds) before       (seconds) with signage
Drip-tip Leaf model   8                      22                      U=389, z=-5.051, p<.001*
Virtual Storm         10                     13                      U=704, z=-2.202, p=.028*
* denotes significant result

Table 5. Levels of engagement for two pieces of interactive interpretation, "The Same Deep Down" and "Growing Up Takes Longer." Mean holding times are used because the distributions were approximately normal. N=100 for both interactives.

Interactive                Mean level of engagement (1-4 scale)   Mean holding time (seconds)
The Same Deep Down         2.96                                   41.5
Growing Up Takes Longer    2.43                                   38

Figure 1. Time budget for the average (median) visitor to the RORA exhibit (data from pie chart): flagship animal viewing 187 seconds (32.08%); integral species viewing 93 seconds (15.95%); non-interactive interpretation 1 second (<1%); interactive interpretation 12 seconds (2.06%); passive time 290 seconds (49.74%). Overall active time = 50%.

Figure 2. Comparison of interactive interpretation use (attracting power and holding time) before retrofitting work (data from bar graph). Interactives are listed in the order visitors would approach them through the exhibit; the "Map" interactive is encountered first, "Growing up takes longer" last.

Interactive                Attracting power (%)   Holding time (seconds)
Map                        11                     33
The same deep down         40                     38
Virtual storm              14                     8
Leaf model                 16                     10
Mirror                     21                     19
Partnerships (phones)      13                     32
Growing up takes longer    29                     18
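The two metrics underlying Tables 4 and Figure 2 can be derived from visitor-tracking records, and the before/after change in attracting power tested with a chi-square test of independence. The sketch below is illustrative only: the tracking records and stop/pass counts are invented for demonstration, and only the metrics (attracting power as the percentage of visitors who stop; holding time as the median stop duration) and the choice of test follow the paper.

```python
# Sketch: attracting power and holding time from tracking records,
# plus a chi-square test for a change in attracting power after
# signage was added. All data below are hypothetical.
from statistics import median
from scipy.stats import chi2_contingency

# Each record: (visitor_id, stopped_at_element, seconds_at_element)
tracked = [
    (1, True, 8), (2, False, 0), (3, True, 22), (4, False, 0),
    (5, True, 10), (6, False, 0), (7, False, 0), (8, True, 15),
]

stop_times = [t for _, stopped, t in tracked if stopped]
attracting_power = 100 * len(stop_times) / len(tracked)  # % of visitors who stop
holding_time = median(stop_times)                        # median stop duration (s)

print(f"Attracting power = {attracting_power:.1f}%")
print(f"Median holding time = {holding_time} s")

# Chi-square: did the proportion of visitors stopping change after signage?
# Rows = before/after signage; columns = stopped / passed by (counts invented).
before = [12, 70]
after = [32, 66]
chi2, p, dof, _ = chi2_contingency([before, after])
print(f"chi2 = {chi2:.3f}, df = {dof}, p = {p:.4f}")
```

Holding-time changes, being duration data rather than counts, are compared with a Mann-Whitney U test in Table 4; the chi-square applies only to the stop/pass proportions.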