
Extending the role of audit trails: a modular approach.

Audit trails are records of user activities within educational technology environments (ETEs). (1) A generic yet flexible audit trail system has been developed that fulfils a wide range of auditing roles and, through a modular approach, ensures scalability and simplifies data management. While designed specifically for use with Macromedia's Director[R] authoring software, the system's framework ensures future compatibility with many widely used script-based authoring environments.

The audit trail system consists of a series of functions that create and service a master record (the audit trail), comprising any number of object records corresponding to specific user activities. Five types of objects are supported by the system: timer, counter, string, numeric, and boolean. A separate history function can be used to create a detailed sequential record of the timing and sequence of user activities. The system includes a number of developer support tools designed to simplify its implementation within target ETEs.

Three examples of implementations of the audit trail system into medical courseware are described. These examples were designed to illustrate the range and type of auditing roles that the system can fulfil and to assess its performance against a series of predefined design criteria.

**********

Audit trails are electronic records of users' activities within an educational technology environment (ETE), referred to by Williams and Dodge (1993) as users' "what, where, and when." Although the use of audit trails by developers and researchers is relatively commonplace, most attention has focused on the interpretation and analysis of data generated by them (Misanchuk & Schwier, 1992; Salter, 1995; Chavero, Carrasco, Rossell, & Vega, 1998), rather than how the data is actually collected. This is despite the fact that, in many cases, developers and researchers must create and implement customised and often complex data collection solutions. This article proposes a more generic and systematic approach to the design and implementation of audit trail systems. It describes a new system based on modular reusable components and explores potential applications of this system through a series of pilot implementations.

PURPOSES OF AUDIT TRAILS

Misanchuk and Schwier (1992) define and ascribe four primary roles to audit trails:

1. formative evaluation of instructional design--that is, evaluation aimed at optimising the performance of an ETE;

2. basic research in instructional design--for example, investigating how different users interact with an ETE;

3. usage audits for unstructured or public environments--that is, determining which paths or components of an ETE are most visited by particular users; and

4. counselling and advising--that is, supporting the user's decision making within an ETE based on the actions of prior users.

The first three of these roles (formative evaluation, basic research, and usage audits) are closely related. In each case, data are collected within an ETE and written to an external destination for further analysis. Differentiation of these three roles lies, therefore, not in the implementation of the audit trail, but in the analysis and further application of the data it includes. All three roles are also conceptually linked as each is concerned on some level with research and evaluation. All of these roles are grouped under the domain of "research and evaluation" (Table 1).

The fourth role of audit trails defined by Misanchuk and Schwier (1992), counselling and advising, represents one of a number of related roles of audit trails that the authors placed within a second domain, that of "user support." As the name suggests, user support involves using audit trail data to support user activities within ETEs. Rather than being collected for external analysis, here the audit trail data is used within the ETE for the purposes of internal reporting and decision-making. The authors recognised three categories of user support based on the audit trail data's source and its mode of implementation. With respect to the data's source, "individual" data was defined as that generated by discrete users and "group" data, as that comprising the individual data of many users. With respect to the implementation of audit trail data, "no user control" denotes that the application of audit trail data within a user support role is predefined by the developer and accepts no additional user input. "User control," on the other hand, denotes that individual users assume some responsibility for the interpretation and application of the audit trail data within the target ETE.

The three categories of user support are directive support, individual feedback, and group feedback. Each of these categories is explained by the following:

* Directive support (no user control, based on individual data). This category refers to the use of audit trail data to drive adaptive decision logic within the target ETE. For example, if a user does not satisfactorily complete a specified task, they are directed to a review screen rather than automatically progressing to the next screen.

* Individual feedback (user control, based on individual data). This category describes the use of an individual's audit trail data to provide users with information that may assist them with decision making or further actions. Examples of this type of user support could include a site map that is dynamically updated as a user progresses through an ETE, or a text entry that must be edited or reviewed at several locations within an ETE.

* Group feedback (user control, based on group data). This category is similar to Misanchuk and Schwier's (1992) "counselling and advising" role. An example of its application is where, at various locations within an ETE, the user is presented with information on which paths have been selected by previous users. Such information could be based on all prior users or on those of peers or individuals with similar interests to the current user.

Unlike the majority of audit trail implementations involving directive support and individual feedback, which make use of individual data, group feedback requires a means of storing and retrieving audit trail data generated by multiple users. These implementations may be relatively straightforward where users interact with the software through a single client, as with public kiosks, but in multi-client situations such as student computer laboratories, some form of networked database is required.

PROBLEMS WITH AUDIT TRAILS

Researchers and developers can encounter a number of problems when implementing and using audit trails in ETEs. These include difficulties associated with managing large data sets, with analysing and presenting data, and with interpreting data meaningfully. The problems associated with data management are discussed by Misanchuk and Schwier (1992) in some detail. Briefly, as the number of user activities recorded increases so must the physical size of the data--to the point where it can become unmanageable for the researcher or developer. A good example is the use of server log files, which log all requests to the server, to track user movements within web-based environments. While it is possible to generate accurate usage data from these automatically generated files (Peled & Rashty, 1999), targeting specific data can be difficult and may require elaborate parsing mechanisms.

The analysis and presentation of audit trail data can also present challenges to researchers. In complex ETEs, though it may be relatively straightforward to record the navigational paths of users, interpreting these paths in a meaningful way can be extremely difficult. A number of authors have proposed ways in which a variety of types of data can be presented to aid in interpretation (Misanchuk & Schwier, 1992; Fritze, 1994; Chavero et al., 1998; Salter, 1995). These include both qualitative and quantitative descriptions and various graphical approaches to data presentation.

A final difficulty is assigning meaning to the data. There are times when this will be relatively straightforward, such as when using audit trails to establish basic usage audits. However, interpretation becomes more difficult when researchers are dealing with more complex questions (such as why users are interacting with an ETE in a particular way). When audit trails are used for formative evaluation and basic research, researchers and developers must infer external meaning from users' behavioral responses within the ETE. This is an inherently difficult task as users may choose similar pathways in an ETE for quite different reasons (Misanchuk & Schwier, 1992; Salter, 1995).

It should be noted that these three difficulties are not inherent weaknesses of audit trail systems per se. As indicated by Misanchuk and Schwier (1992), data management usually becomes a problem when researchers and developers adopt an all-inclusive rather than a targeted approach to data collection. Well designed audit trail systems should assist researchers and developers with data management by allowing them to restrict the amount of data collected and to be specific about the type of data required. The way in which data is restricted will be determined--not by the audit trail system--but by the specific research or evaluation questions of interest. The audit trail system should assist researchers in the process of meaningful data collection by being capable of handling a variety of data types, thereby indirectly assisting researchers and developers with appropriate data analysis and presentation. Finally, the audit trail system itself has little impact on the interpretation of data. A well designed research or evaluation program and the use of alternative data collection techniques (such as observation and interviews) will assist researchers in interpreting audit trail data in meaningful ways (Misanchuk & Schwier, 1992; Salter, 1995).

DESIGNING A BETTER SYSTEM

No two ETEs are the same--they differ markedly in content, scope, and complexity and, where audit trails are implemented, in the potential roles that these audit trails may be required to fulfil. However, each (and possibly all) of the roles of audit trails outlined in Table 1 could potentially be required within a single ETE at some stage during its development and implementation. Thus, rather than adopting ad hoc approaches to satisfy requirements on an application by application basis, the need for a generic system was recognised--one with the capacity to satisfy equally any, or all, of these roles.

With this as the starting point, a series of essential criteria for a robust and flexible audit trail system was developed. The six criteria established were:

* Flexibility--the system should be capable of fulfilling all of the roles of audit trails outlined in Table 1. It should be able to store a range of data types.

* Efficiency--the system should manage data efficiently. It should only record and deliver information specifically requested by the developer/researcher and any information collected should be clearly associated with a specific user action or activity.

* Scalability--the system should be capable of handling an unlimited number of records and of dynamically adding or deleting records.

* Portability--the system should adopt an architecture that can be implemented across a range of authoring environments/software.

* Retrospective compatibility--the system should be easily fitted to existing ETEs.

* Ease of use--the system should be easily implemented and require minimal programming effort by developers/researchers.

The first two criteria are general and address the various roles of audit trails and the need to provide a robust yet flexible framework and to avoid indiscriminate data collection. The third and fourth criteria are essentially technical requirements that speak to the need to develop a robust yet generic approach to audit trail systems. The final two criteria are concerned with adoption and implementation; ensuring the system is both useful to and useable by the developer/researcher.

A NEW AUDIT TRAIL SYSTEM

An audit trail system that seeks to address each of these criteria was designed. As most of the ETEs produced within the authors' own development unit are created using Macromedia Director[R], the authors elected to develop the system initially in and for that authoring environment. However, in keeping with the design criterion of portability, the system's architecture was structured so that it could be readily adapted to a range of other scriptable multimedia authoring systems.

Overview

The audit trail system is built around a library of functions that read and write individual records of user activities to a master record (the audit trail) within the target ETE. There are three main function types (Figure 1):

* Administrative--functions for creating and managing the audit trail.

* Object--functions for creating and managing individual object records within the audit trail.

* History--a single function for creating and managing a detailed history entry within the audit trail that records the timing and sequence of user activities.

System Functions

Administrative. The system's administrative functions fall within three broad categories: (a) establishment, (b) service, and (c) export. Establishment functions are used to create the audit trail. A newly created audit trail includes the target ETE's name and the audit trail's creation time and date.

If user identification is required for either evaluation or internal reference, a login function can be used either to create an anonymous identification number or to record a typical user name and password combination. An import function can be used to populate the audit trail with data created and saved during previous sessions.
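By way of illustration, the establishment functions might take the following shape. This is a minimal sketch in Python rather than the Lingo used by the actual system; the function names and the dictionary layout of the trail are illustrative assumptions, not the system's actual interface:

    import time
    import uuid

    def create_audit_trail(ete_name):
        # A new audit trail records the target ETE's name and the
        # trail's creation time and date (hypothetical structure).
        return {
            "ete": ete_name,
            "created": time.strftime("%Y-%m-%d %H:%M:%S"),
            "user": None,
            "objects": {},  # parent records, keyed by object type
            "history": [],  # serviced by the History function
        }

    def login(trail, username=None, password=None):
        # Either generate an anonymous identification number or
        # record a user name and password combination.
        if username is None:
            trail["user"] = {"anonymous_id": uuid.uuid4().hex}
        else:
            trail["user"] = {"name": username, "password": password}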

Service functions are used to process data within the audit trail. For example, a number function is used to convert numbers between various formats (e.g., integer, floating point) and an encode/decode function is used to replace certain characters in text-based entries (e.g., carriage returns) that would otherwise compromise the integrity of individual object records within the audit trail.
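A minimal sketch of such an encode/decode pair, assuming a simple percent-escaping scheme (the article does not specify the encoding the system actually uses):

    # Characters that would break the delimited record format are
    # replaced with escape sequences before submission. "%" is
    # escaped first so that decoding, in reverse order, is lossless.
    _ESCAPES = [("%", "%25"), ("\r", "%0D"), ("\n", "%0A"),
                (",", "%2C"), (":", "%3A")]

    def encode(text):
        for raw, escaped in _ESCAPES:
            text = text.replace(raw, escaped)
        return text

    def decode(text):
        for raw, escaped in reversed(_ESCAPES):
            text = text.replace(escaped, raw)
        return text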

Export functions are used to process (parse) the audit trail and export the resulting data. These data can have any of three destinations: (a) internal (stored within the target ETE), (b) local (written to a file on the client computer), or (c) network (posted using standard Internet protocols to a receiving server-side script or application). The parsing functions specify which components of the audit trail are exported. The default method is to export all records but records can also be marked for export on the basis of object type (e.g., timer objects only) or on an object-by-object basis. Alternatively, the audit trail data can be exported in its raw format. Raw data can be reimported by the system (e.g., in a user support role) or can be post-processed by a utility application that provides similar parsing capabilities to the system's export functions.
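Continuing the Python sketch, an export function along these lines would cover the three destinations described above; the file name and URL are placeholders, and the real system exposes equivalent Lingo functions rather than this interface:

    import urllib.parse
    import urllib.request

    def export_records(trail, object_types=None, destination="internal"):
        # Default is to export all records; otherwise records are
        # selected on the basis of object type (e.g., timers only).
        selected = {otype: recs for otype, recs in trail["objects"].items()
                    if object_types is None or otype in object_types}
        if destination == "internal":
            return selected  # retained within the target ETE
        if destination == "local":
            with open("audit_trail.log", "a") as f:  # client-side file
                f.write(repr(selected) + "\n")
        elif destination == "network":
            # Posted using standard Internet protocols to a receiving
            # server-side script (placeholder address).
            data = urllib.parse.urlencode({"trail": repr(selected)}).encode()
            urllib.request.urlopen("http://example.org/cgi-bin/audit", data)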

Object. The system uses five core Object functions (Figure 1). In object-oriented terms these equate to classes, from which any number of instances (objects), each with its own unique properties, can be generated. Each instance of a particular class is stored within a larger "parent" record within the audit trail. The basic architecture of all parent records is the same, that is, an item-delimited list of instances, each comprising a unique identifier, an object description, and a list of properties.
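Using the encode function sketched earlier, this record architecture might be rendered as follows; the particular delimiters are assumptions, as the article specifies only that records are item-delimited:

    def make_instance(identifier, description, properties):
        # Each instance comprises a unique identifier, an object
        # description, and a list of properties.
        fields = [identifier, description] + [str(p) for p in properties]
        return ":".join(encode(f) for f in fields)

    def make_parent_record(instances):
        # A parent record is an item-delimited list of instances.
        return ",".join(instances)

    # e.g., a parent record holding two counter instances:
    record = make_parent_record([
        make_instance("c01", "visits to screen X", [3]),
        make_instance("c02", "attempts at question Y", [1]),
    ])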

The five core Object functions are:

(1.) Timer--objects created with this function maintain the following properties: the current status of the timer (on or off), the time when the timer was initialised, the time when the timer was last started (equivalent to the initialisation time if the timer has never been run), and the elapsed time (calculated whenever the timer is started, stopped or reset). Timer objects can be started, stopped, or reset and both the status and elapsed time of the timer can be queried. Examples of where Timer objects could be employed include recording the total time the user spends within the ETE, the time spent on selected screens, or the time taken to complete a particular interactive task.

(2.) Counter--objects created with this function maintain a single property: the count. Counter objects can be incremented by any integer value (the default value is +1), reset, and the value of the counter queried. Examples of where Counter objects may be used include the number of times the user visits a particular screen or the number of attempts they have made on a multiple-choice question.

(3.) String--objects created with this function maintain a single property, in the form of a user- or predefined text string. Because of the potential for text strings to include punctuation characters that may interfere with the demarcation of object and parent records in the audit trail, all text entries are encoded before being submitted. String objects can be set and queried. Examples of where String objects could be implemented include open-ended responses or menu selections.

(4.) Numeric--objects created with this function maintain a single property, in the form of a user- or predefined integer or floating-point number. Numeric objects can be set and queried. Examples of where Numeric objects could be used include storing user-entered values in simulations, numbers required for internal calculations or numeric representations of navigational paths.

(5.) Boolean--objects created with this function maintain a single property: an integer value denoting whether the object's state is true (1) or false (0). An optional third state (2) is provided for additional flexibility (e.g., if the true and false states were used to designate whether a task had or had not been attempted, then the third state could be used to flag that the task had been successfully completed). Boolean objects can be set, reset, and queried. Examples of where Boolean objects could be used include recording whether particular screens within an ETE have been visited and logging responses to true and false questions.

Developers can also create custom Object functions that write to the audit trail, provided the records they produce conform to the standard record architecture.
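To make the object model concrete, the following sketch renders the Timer function as a Python class exposing the properties and operations listed above. The original functions are Lingo handlers, so this translation is an assumption about form, not a record of the system's code:

    import time

    class Timer:
        def __init__(self):
            self.running = False            # current status (on or off)
            self.initialised = time.time()  # initialisation time
            self.last_started = self.initialised
            self.elapsed = 0.0              # updated on stop or reset

        def start(self):
            self.last_started = time.time()
            self.running = True

        def stop(self):
            if self.running:
                self.elapsed += time.time() - self.last_started
                self.running = False

        def reset(self):
            self.elapsed = 0.0
            self.running = False

        def query(self):
            # Returns the status and elapsed time, including any time
            # accumulated in the current (unstopped) run.
            extra = time.time() - self.last_started if self.running else 0.0
            return self.running, self.elapsed + extra

The Counter, String, Numeric, and Boolean functions would follow the same pattern, each maintaining the properties and operations described above.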

History. The History function is used to service a single history record within the audit trail. User activities are recorded by passing an object reference (e.g., "Screen X") and optional description (e.g., "multiple-choice questions on Y") to the function. This adds a date and time stamp to the submitted information and appends it to the history record. In many respects, this record acts as an audit trail within the audit trail and can, on its own, be used to generate accurate usage data. It also maintains much of the flexibility of the object records as single (or all) instances of individual objects within the record can be retrieved using their object references.
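Under the illustrative trail structure used in the earlier sketches, the History function might look like this:

    import time

    def record_history(trail, object_ref, description=""):
        # Adds a date and time stamp to the submitted information
        # and appends it to the single history record.
        trail["history"].append({
            "when": time.strftime("%Y-%m-%d %H:%M:%S"),
            "ref": object_ref,      # e.g., "Screen X"
            "desc": description,    # e.g., "multiple-choice questions on Y"
        })

    def query_history(trail, object_ref):
        # Retrieve all instances of an object via its object reference.
        return [e for e in trail["history"] if e["ref"] == object_ref]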

Support Tools

To simplify implementation of the system, two developer support tools were developed:

* A graphical user interface (GUI) to the system that allows the developer to easily select and view records within the audit trail. The developer can also select from a range of export options covering the type and format of records and the data's destination. For example, in Figure 2, the developer has chosen to export only timer, counter, and numeric records, and has specified a CGI application on a remote server as the export destination.

* A library of "drag and drop" behaviors (2) that allows a developer to use the basic functionality of all of the system's Object and History functions without requiring any hand coding.

IMPLEMENTING THE NEW SYSTEM

The following three examples describe pilot implementations of the audit trail system using ETEs developed by the Biomedical Multimedia Unit at the University of Melbourne. The first two relate to ETEs currently in use and are designed to demonstrate the audit trail system's application in the domains of research and evaluation and user support respectively. The ETE described in the third example is currently under development. It illustrates an application of the system in both research and evaluation and user support roles, including the use of both individual and group data.

Example 1: Research and Evaluation (formative evaluation, usage audits)

Medical Genetix is a courseware package used by medical students to investigate biomedical and clinical aspects of genetic disorders (e.g., cystic fibrosis). Each disorder is divided into three primary sections: Clinical Diagnosis, Laboratory Diagnosis, and Counselling and Ethics. Each of these sections is further divided into a number of subsections--the Clinical Diagnosis section, for example, comprises Clinical Features, Family Histories and Pedigrees, and Molecular Pathogenesis. Each subsection consists of a series of screens that allow the user to review material, perform construction tasks, and test their knowledge and understanding of a particular topic (Metcalfe, Williamson, & Bonollo, 1999).

A series of audit trail objects was used to track user movement and activity within a single subsection of Medical Genetix. Timer and Counter objects were created for each screen within this subsection. In addition, a String object was used to track user activity within a single interactive task--the drag and drop construction of a family pedigree, in which the user drags tiles from a palette onto blank nodes on a pedigree on the basis of a provided family history (Figure 3). Incorrect placements are rejected and the user can make as many attempts as required to complete the task. This String object was dynamically updated by several associated Timer, Counter, and Boolean objects to create a detailed log of the pedigree's construction, including the order in which tiles were placed into the pedigree, whether a correct choice was made, how much time the user took to place each tile, and the number of times they consulted the history. When users quit the package, the audit trail data were parsed by the system and written to a series of log files on a server. This implementation of the audit trail system was designed to test the system's capacity to record, store, and deliver data destined for research and evaluation purposes. Specifically, data were collected for the formative evaluation of the pedigree construction task and to track usage of a single subsection within the wider package.

The results from this implementation of the audit trail system are described in detail elsewhere (Kennedy & Judd, 2000). However, to illustrate the type and scope of data returned by the system, a short descriptive summary follows. All results were readily obtained by applying simple summary statistics to the raw data following their extraction from the log files.

Data summary. Seventy-eight students used the program during the two-week period over which submissions were logged. Forty-nine accessed the Family History and Pedigree subsection of the Cystic Fibrosis module and of these, 42 reached the interactive pedigree task. Thirty-four students attempted the task. Only one student successfully completed the task. On average, students spent a little over four minutes on the task, made 15 correct tile placements (out of a total of 20), six errors, and accessed the family history, which provided the necessary information to complete the task, 15 times.

Example 2: User Support (individual data)

Sleep Health is a courseware package used by science and medical students to investigate the causes and treatment of sleep disorders including sleep apnoea. It consists of two main sections. In the first, the user participates in a hypothetical public forum comprising a series of question and answer sessions dealing with different aspects of sleep (e.g., the functions of sleep or causes of daytime sleepiness) and is required to submit typed responses to individual questions and complete revision tasks at the end of each session. In the second section, the user is presented with four cases of individuals displaying various symptoms of disturbed sleep. For each case, the user incrementally develops an informed diagnosis and treatment plan by drawing on a series of general and case-specific resources. This second section also includes several self-contained modules dealing with the use and interpretation of specific biomedical resources.

The audit trail system was implemented extensively throughout Sleep Health. Boolean objects were used to record which screens were visited, which resources were accessed and to determine whether particular interactive tasks were completed. String objects were used to record all user submissions including open text responses and predefined or constructed entries created by the various interactive tasks.

The audit trail system was also closely integrated with the navigational framework of the program. For example:

* Boolean objects were associated with hurdle tasks on key screens and were used to selectively disable navigational elements or prevent access to various resources until such tasks were successfully completed.

* Several tasks required the user to incrementally revise text submissions that they had made earlier in the program (e.g., preliminary vs. informed diagnoses). String objects were used to update the onscreen text in one or more locations as required.

* The package includes a navigable sitemap that provides a visual record of the user's progress. Each element of this sitemap is hyperlinked to a section of the package, thereby providing an alternative referential navigation system to the program's primary, and superficially linear, navigation system. A series of Boolean objects was used to dynamically display the user's progress on the sitemap and to selectively enable or disable particular components contingent on the completion of various hurdle tasks. For example, in Figure 4, the user has completed the introduction, attempted each of the three sessions within the "Public Forum" section and completed four of the six tasks within the first session.

This implementation of the audit trail system within Sleep Health fell squarely within the domain of user support and included both directive support (e.g., hurdle tasks) and individual feedback (revision of submissions, navigable progress/sitemap). However, the type and scope of data collected to update the sitemap was clearly sufficient to support a number of research and evaluation related activities should they be required in the future.

Example 3: Research and Evaluation (formative evaluation) and User Support (individual and group data)

The Personal Learning Planner (PLP) is a software support tool designed to assist medical students with the planning, organization, and management of their studies within a problem-based curriculum (Kennedy, Petrovic, Judd, Lawrence, Dodds, Delbridge, & Harris, 2000). It comprises three primary phases: search, plan, and review. The searching phase allows students to conduct a keyword search of a database of medical resources, view abstracts of found resources, and create and save a shortlist of selected resources (Figure 5). The planning phase allows students to create a concept map of the relationships between various learning issues associated with a particular medical problem (conceptual planning) and to plan a specific course of action with regard to their investigation of the problem (action planning). The review phase of the PLP allows students to review their searching and planning activities and to evaluate the "success" of their investigations by comparing their searching and planning activities with those of their peers. The searching and planning phases of the PLP have been pilot tested; the review phase is under development.

Unlike the previous two examples, which were authored using Director[R], the PLP was created in SuperCard[R], a Macintosh-based multimedia authoring/rapid application development tool. The audit trail system implemented within the PLP therefore differed somewhat from that described previously but adopted the same basic architecture. Audit trails were used to record users' activities within the search phase. Each time a user searched the resource database, a String object was updated with a list of keywords used as search terms and the found resources associated with each term. At the completion of each session of searching, a second String object was used to record the user's short list of selected resources and the audit trail data posted to a custom server-side application. This application was responsible for parsing and redirecting the incoming audit trail data.

The data from the first String object was appended to a log file. Data from the second String object was used to update the profiles of both individual users and their peer group (summary data based on all individual users) within a networked database. The log file data was used by the PLP's developers to evaluate various aspects of the implementation of the resources database--that is, for research and evaluation. The database of user profiles will form the basis of the PLP's review phase. Within this phase, and for any given problem, users will be able to query the profiles' database and compare the list of resources they have selected with those selected by the group as a whole. Users will thereby be able to identify resources that the majority of their peers have selected but which they themselves have missed or ignored. Moreover, they will be able to view the abstracts for any such resources and, if desired, carry them over into the planning phase to incorporate in their conceptual or action planning activities. This application of audit trail data will encompass both individual and group support.
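The peer comparison at the heart of the review phase can be sketched as follows, assuming the profiles database can return per-resource selection counts for the peer group; the function and its threshold are illustrative assumptions rather than the PLP's actual design:

    def resources_missed(user_shortlist, group_counts, group_size,
                         threshold=0.5):
        # Resources selected by at least `threshold` of the peer group
        # but absent from the individual user's own shortlist.
        popular = {r for r, n in group_counts.items()
                   if n / group_size >= threshold}
        return sorted(popular - set(user_shortlist))

    # e.g., with a peer group of 40 students:
    missed = resources_missed(
        user_shortlist=["r12", "r30"],
        group_counts={"r12": 35, "r07": 28, "r30": 19, "r99": 4},
        group_size=40,
    )
    # -> ["r07"]  (selected by 70% of peers, missed by this user)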

DISCUSSION

In the introductory sections of this article the various roles of audit trails in ETEs were identified and placed within two broad domains: research and evaluation, and user support (Table 1). The need for a generic audit trail system that was capable of fulfilling these various roles was proposed and a set of criteria that such a system would be required to meet was presented. With these criteria in mind, an audit trail system was subsequently designed, developed, and then tested by way of pilot implementations in three ETEs.

These implementations were designed to test the audit trail system's ability to fulfil the various roles of audit trails identified and, in doing so, to address each of the established criteria. With one exception, practical applications of each of these roles were successfully demonstrated. An application of the remaining role, that is, user control based on group data, was described and will be implemented in the future. While it was not possible to test all features of the system in these pilot implementations, they nevertheless provide a thorough preliminary examination of the system's scope and functionality as measured against the aforementioned criteria:

* Flexibility--the three example implementations include applications covering the three roles of audit trails identified under each of the domains of research and evaluation and user support. Furthermore, the system proved capable of storing and retrieving a mixture of data types.

* Efficiency--the modular data structure adopted proved extremely efficient for storing and retrieving records and for managing data in general. In particular, it allowed the authors to accurately target specific user activities during each of the described implementations and subsequently, when analysing the resultant data, to clearly and readily associate these with individual records within the audit trail.

* Scalability--again, the system's modular data structure allowed the generation and successful management of small or large numbers of records. For example, the pilot implementation involving the Sleep Health courseware package generated several hundred individual records comprising a variety of data types in a typical user session.

* Portability--our third pilot implementation involved using a variant of the system based on the original architecture in a second authoring environment (SuperCard[R]) with relatively few modifications.

* Retrospective compatibility--the example implementation involving the Medical Genetix courseware package was added on top of the existing programming, that is, while new code was added, no recoding of existing functions was required to accommodate the audit trail system.

* Ease of use--implementation of the system proved relatively straightforward. The authors were able to install basic implementations of the system within Director[R] using its "drag-and-drop" behaviors and export GUI with little or no additional scripting.

While each of the pilot implementations generated substantial amounts of data, neither the management of this data nor its analysis proved especially troublesome. In two of the pilot implementations (examples 2 and 3) the data collected were destined primarily for reuse within the target ETE, that is, in a user support role; thus much of the interpretation of the audit trail data was handled internally by the system. And, as already mentioned, the flexible, efficient, and scalable nature of the system handled this task admirably. In the remaining implementation (example 1) the audit trail data was collected solely for the purposes of evaluation. In this case, while the potential for data overload clearly existed, the researchers were careful to avoid it by (a) restricting the scope of the implementation within the target ETE and, more importantly, by (b) developing clear and cogent research/evaluation questions prior to commencing the implementation. Nevertheless, the design of this implementation was such that, while localised to a particular section of the target ETE, it was sufficiently broad to facilitate a thorough preliminary exploration of the data and then, once an unexpected user behaviour was discovered (i.e., noncompletion of an interactive task), to accurately compare and contrast the behaviour of individual users within the task. Although it was not used in this way, the ability to selectively enable and disable individual records or data types within the audit trail using the system's GUI means that "blanket" implementations of the system can be installed during development and then customised to satisfy particular research and evaluation questions as required.

Table 1
Roles of Audit Trails Based on the Domain, Destination, and Eventual Application of the Audit Trail Data

Domain: Research & Evaluation
  Destination: External analysis
  Roles: Formative evaluation of instructional design; Basic research; Usage audits

Domain: User Support
  Destination: Internal reporting and decision-making
  Categories: Directive support (no user control, individual data); Individual feedback (user control, individual data); Group feedback (user control, group data)


Notes

(1.) In this article, the term "educational technology environment" is used to refer to any educational or instructional multimedia or hypermedia presentation or product.

(2.) Director[R] supports the implementation of reusable programming routines called "behaviours" that can be applied to individual screens or screen objects by dragging and dropping them onto an appropriate target.

References

Chavero, J.C., Carrasco, J., Rossell, M.A., & Vega, J.M. (1998). A graphical tool for analyzing navigation through educational hypermedia. Journal of Educational Multimedia and Hypermedia, 7(1), 33-49.

Fritze, P. (1994). A visual mapping approach to the evaluation of multimedia learning materials. In K. Beattie, C. McNaught, & S. Wills (Eds.), Interactive multimedia in university education: Designing for change in teaching and learning (pp. 273-285). Amsterdam: Elsevier.

Kennedy, G., & Judd, T. (2000). Pilot testing of a system of electronic evaluation. In R. Sims, M. O'Reilly, & S. Sawkins (Eds.), Learning to choose: Choosing to learn (Short Papers and Works in Progress) (pp. 187-192). Lismore, NSW, Australia: Southern Cross University Press.

Kennedy, G., Petrovic, T., Judd, T., Lawrence, J., Dodds, A., Delbridge, L., & Harris, P. (2000). The personal learning planner: A software support tool for self directed learning. In R. Sims, M. O'Reilly, & S. Sawkins (Eds.), Learning to choose: Choosing to learn. Proceedings of the 17th Annual ASCILITE Conference (pp. 541-550). Lismore, NSW, Australia: Southern Cross University Press.

Metcalfe, S.A., Williamson, B., & Bonollo, A. (1999). Using multimedia to enhance teaching of contemporary medical genetics. American Journal of Human Genetics, 65(Suppl.), A386.

Misanchuk, E.R., & Schwier, R. (1992). Representing interactive multimedia and hypermedia audit trails. Journal of Educational Multimedia and Hypermedia, 1(3), 355-372.

Peled, A., & Rashty, D. (1999). Logging for success: Advancing the use of WWW logs to improve computer mediated distance learning. Journal of Educational Computing Research, 21(4), 413-431.

Salter, G. (1995). Quantitative analysis of multimedia audit trails. In J.M. Pearce, A. Ellis, U. Hart, & C. McNaught (Eds.), Learning with technology: Proceedings of the 12th Annual ASCILITE Conference (pp. 456-461). Melbourne, VIC: Science Multimedia Teaching Unit, The University of Melbourne.

Williams, M.D., & Dodge, B.J. (1993). Tracking and analysing learner-computer interaction. In M.R. Simonson & K. Abu-Omar (Eds.), Annual Proceedings of Selected Research and Development Presentations at the 1993 National Convention of the Association for Educational Communications and Technology (pp. 1115-1129). New Orleans: AECT.