A pilot investigation comparing instructional packages for MTS training: "manual alone" vs. "manual-plus-computer-aided personalized system of instruction".

Matching-to-sample (MTS) training consists of tasks in which a stimulus is presented as a sample, followed by two or more stimuli called comparisons. The subject's choice among the comparisons establishes conditional stimulus relations between the sample and its correct comparison (Green & Saunders, 1998). A number of studies have investigated MTS training as a means to produce stimulus equivalence, defined as responding in accord with behavioral tests of reflexivity, symmetry, and transitivity (Arntzen & Lian, 2010), and to teach a wide range of academic repertoires, such as reading (Sidman, 1971), writing (Stromer & Mackay, 1993), statistics (Critchfield & Fienup, 2010; Fields, Travis, Roy, Yadlovker, de Aguiar-Rocha, & Sturmey, 2009), manual signs (Elias, Goyos, Saunders, & Saunders, 2008), mathematics skills (Lynch & Cuvo, 1995), braille literacy skills (Toussaint & Tiger, 2010), and brain-behavior relations (Fienup, Covey, & Critchfield, 2010). MTS tasks have also been used to investigate the acquisition of more elementary verbal relations, such as tacts and prerequisites for mands (Ribeiro, Elias, Goyos, & Miguel, 2010). These results suggest that MTS training may be a powerful teaching tool for educational settings.

To date, however, few studies have investigated applications of MTS training in the classroom (Rehfeldt, 2011). In one of the first attempts to address this issue, Stromer, Mackay, and Stoddard (1992) published an article aimed at providing an applied approach to using MTS training to teach repertoires such as reading. In that work, the authors define reading in terms of equivalence networks involving dictated words, pictures, printed words, and words spoken by the student (Figure 1).

Accordingly, using MTS training to teach reading usually involves explicit teaching of relations between dictated words and pictures (AB) and between dictated words and printed words (AC). In an MTS task involving the relation AB, a dictated word is presented as the sample. As soon as the child responds to the sample (e.g., by pointing to it), two or more pictures are presented as comparisons. The child then selects one of the comparisons and, as a result, receives either praise for selecting the comparison arbitrarily defined as correct or a brief time-out period for selecting any other comparison. MTS tasks involving the relation AC are presented in a similar manner, except that printed words are presented as comparisons. The child's performance on these tasks cannot yet be considered reading, however, because the accuracy criterion could be met simply by memorizing the words. Even if the child also passes tasks in which she or he is presented with a picture and asked to say its name (relation BD), and tasks in which she or he is presented with a printed word and asked to say its name (relation CD), explicit teaching of the relation AB is necessary for reading comprehension, which is inferred from the emergence of relations between pictures and printed words (BC) and between printed words and pictures (CB) (Sidman, 1971).
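
To make the trial structure concrete, here is a minimal sketch of a single MTS trial in Python. All names (run_mts_trial, get_choice, and the consequence callbacks) are our own illustration; they are not part of any published protocol or of the MestreLibras program.

    import random

    def run_mts_trial(sample, comparisons, correct, get_choice,
                      on_correct=lambda: print("praise"),
                      on_error=lambda: print("brief time-out")):
        """Present one matching-to-sample trial and consequate the learner's choice.

        For the relation AB the sample is a dictated word and the comparisons are
        pictures; for AC the comparisons are printed words.
        """
        order = random.sample(comparisons, len(comparisons))  # vary comparison positions
        choice = get_choice(sample, order)  # learner observes the sample, then selects
        if choice == correct:
            on_correct()  # e.g., praise for the comparison defined as correct
            return True
        on_error()        # e.g., a brief time-out for any other comparison
        return False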

[FIGURE 1 OMITTED]

The work by Stromer and colleagues (1992) represents an important step in the effort to transfer teaching methods derived from basic research to practical settings such as schools, an issue currently receiving great emphasis within the field of behavior analysis (Mace & Critchfield, 2010). However, at least two issues need to be taken into account: (1) the suitability of MTS training for application in the classroom; and (2) the development of an approach for training teachers to conduct such training to teach reading.

With regard to the first issue, research on stimulus equivalence has employed procedures that can be broadly categorized as tabletop and automated (Saunders & Williams, 1998). In tabletop procedures, trials and delivery of consequences are arranged by the experimenter, whereas in automated procedures they are computer-controlled (Dymond, Rehfeldt, & Schenk, 2005).

The use of tabletop procedures has been criticized in the literature (Saunders & Williams, 1998). First, cuing effects are more likely to occur when tabletop procedures are used, raising questions about whether positive results are produced by experimenter cuing rather than by the training itself. Second, tabletop procedures require accurate and careful methodological control, which, from a practical standpoint, may be a barrier for teachers applying them in the classroom.

Rehfeldt (2011) carried out a citation analysis of studies that aimed to teach academic repertoires such as reading in practice settings using derived stimulus relations technology, a behavioral approach for producing emergent behaviors; specifically, behaviors involving relating events that were never directly related to one another, after some other behaviors have been explicitly taught. Equivalence relations are an example. The analysis showed that 65% of the studies used automated procedures whereas only 38% used tabletop procedures, with some studies using both in combination (Rehfeldt, 2011). These results support the suitability of automated procedures for applied educational settings.

Accordingly, we suggest that automated procedures can reasonably be used in developing an approach for training teachers to conduct MTS training to teach reading in the classroom. In the present study, we used MestreLibras (Elias & Goyos, 2010), a stimulus-equivalence-based computer program developed to teach stimulus relations and to test for emergent relations such as those involved in basic reading (see Figure 1). The main features of the program are: (1) stimulus databases; (2) automated trial presentation; (3) MTS task databases; and (4) a performance data report. The program has been widely tested for teaching repertoires such as reading and writing (Souza & Goyos, 2003), mathematical skills (Rossit & Goyos, 2009), and manual signs (Elias, Goyos, Saunders, & Saunders, 2008). The results of these studies are consistent with those found in stimulus-equivalence research (e.g., Sidman, 1971), supporting the effectiveness of the program as a teaching tool.

With regard to the second issue mentioned above, the development of effective training approaches for teachers requires addressing teachers' needs. For instance, on-the-job training programs for teachers can be time intensive; an online approach can therefore be beneficial in at least two ways: (1) cost effectiveness, since it is possible to teach individuals in different places around the world using online course material; and (2) time effectiveness, since it is possible to teach many individuals at the same time without the instructor being present (Scherman, 2010).

Computer-Aided Personalized System of Instruction (CAPSI) is an online version of the Personalized System of Instruction (PSI; Keller, 1968). CAPSI has been validated as a teaching tool in courses taught at the University of Manitoba and other postsecondary institutions (Kinsner & Pear, 1988; Pear & Crone-Todd, 1999; Pear & Novak, 1996; Pear, Schnerch, Silva, Svenningsen, & Lambert, 2011). CAPSI-taught courses are based on textual material (e.g., a textbook) divided into study units, each with its own study questions. During a course, students read the material at a place of their choosing, learn the answers to the study questions, and take online tests via CAPSI. The unit tests are typically composed of three or four study questions randomly selected by the system. As soon as a test is submitted for marking (i.e., grading), the system assigns it either to the instructor or to two other students, called peer-reviewers, who have already completed the study unit in question. Marking consists of providing written feedback on the student's answers and can designate either: (1) a "pass", which indicates that the student is ready to proceed to the next unit test; or (2) a "re-study", which requires the student to take a new test on the same unit after an hour of re-studying time.
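
As a rough sketch of the workflow just described, the core test-building and marker-assignment logic might look as follows in Python. This is our own simplification under hypothetical names, not CAPSI's actual code.

    import random

    def build_unit_test(study_questions, n=3):
        """Randomly select n study questions (typically 3 or 4) for a unit test."""
        return random.sample(study_questions, n)

    def assign_markers(submitter, students, unit, instructor):
        """Assign a submitted test to two eligible peer-reviewers, else the instructor.

        Peer-reviewers must have already completed (passed) the unit being tested.
        Marking then yields either a "pass" or a "re-study" outcome.
        """
        eligible = [s for s in students
                    if s is not submitter and unit in s["units_passed"]]
        if len(eligible) >= 2:
            return random.sample(eligible, 2)  # two peer-reviewers mark the test
        return [instructor]                    # otherwise the instructor marks it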

Although CAPSI has been widely tested for teaching psychology courses, its effectiveness as a Web-based tool for training individuals working in applied settings has been addressed only recently. For example, Scherman (2010) was the first to investigate CAPSI as an educational tool for delivering a self-instructional manual on Discrete-Trials Teaching (DTT; Fazzio & Martin, 2011). The study used an ABA design with five participants. During baseline, the participants were assessed on their skills in using DTT to teach three tasks--imitation, matching, and pointing--to a confederate role-playing a child with autism. Training consisted of studying the DTT manual, completing online tests via CAPSI, and performing DTT during sessions scheduled with the researcher prior to post-training. During post-training, the participants were again assessed on their skills in implementing DTT to teach the above tasks. Results suggested that CAPSI in combination with the DTT manual was effective for training university students to use DTT to teach children with autism.

Hu, Pear, and Yu (2012) investigated a multiple-component training package consisting of: (1) a self-instructional manual on the Assessment of Basic Learning Abilities (ABLA; available at http://www.stamant.mb.ca/abla); (2) CAPSI; and (3) tutorial videos, used to teach individuals ABLA concepts as well as the implementation of the ABLA tasks, called "levels". During baseline, the participants completed written tests on ABLA concepts and administered the ABLA levels with the first author role-playing a client. During training, the participants studied the ABLA manual, completed online tests via CAPSI, and watched tutorial videos, which were delivered conditional upon the completion of each online test. During post-training, participants were assessed on the same skills assessed during baseline and, in addition, answered a survey on the usefulness of each training component. Results showed that participants' accuracy on the written knowledge tests increased from a mean of 42% during baseline to 82.6% after training and 85.6% during follow-up. Moreover, participants' accuracy on the application tests increased from a mean of 26% during baseline to 86% after training and 90% during follow-up. These data suggest that CAPSI combined with both the ABLA manual and the tutorial videos was effective in teaching university students not only ABLA concepts but also the implementation of the ABLA levels.

These studies demonstrated the feasibility of CAPSI as a Web-based educational tool for training individuals to implement behavioral procedures such as DTT and the ABLA levels. In the present study, we evaluated the effectiveness of two packages: (1) a manual on the use of MTS training by itself; and (2) the manual in combination with CAPSI.

The findings presented here are preliminary results from an investigation addressing two specific questions: (1) would a comparison between the two packages show the manual-plus-CAPSI package to be more effective than the manual alone? and (2) would the manual-plus-CAPSI package produce an improvement from baseline to 90% accuracy or higher in conducting the MTS training?

* METHOD

PARTICIPANTS, SETTINGS, AND MATERIALS

Six students enrolled in an undergraduate second-year distance education psychology course taught at the University of Manitoba signed up to participate in the study. They had no previous experience in conducting automated MTS training. The participants were randomly assigned to one of the two groups, control or experimental. Four of them--two in the experimental group and two in the control group, as described below--successfully completed the study. The participants received credit toward a project in their course for their participation in the research.

Baseline and post-training sessions took place in a room at the University of Manitoba containing a table and two chairs placed side by side in front of the table. The MestreLibras computer program installed on a laptop, a 16-page description of its use, a mouse, and a 4-page summary of steps on how to conduct automated MTS training to teach reading were used during baseline and post-training. A 20-item checklist was used to evaluate the accuracy with which participants implemented the tasks. A video camera on a tripod was placed approximately 1 m from the table and oriented at 220° toward the table.

Training occurred at a place of the participants' choosing. Training was based on a 27-page manual on how to conduct automated MTS training to teach reading, adapted from Goyos and Almeida (1994). The manual is composed of five chapters, each associated with a study unit in CAPSI. The chapters provide brief descriptions of the following topics (one topic per chapter): (1) stimulus equivalence; (2) MTS procedures; (3) reading comprehension; (4) creating MTS tasks; and (5) implementing MTS tasks. The manual also contains 10 study questions per chapter corresponding to the topics described and, for the last two chapters, exercises in which the student is asked to perform skills involved in conducting automated MTS training to teach reading. The CAPSI program was used to deliver unit tests. Participants required a personal computer and an Internet connection to access CAPSI. A computer with an Internet connection was used to give participants brief instruction on how to use CAPSI before training started.

SCRIPT

Conducting automated MTS training required the participants to: (1) create four MTS tasks; (2) evaluate reading repertoires; (3) conduct teaching sessions; and (4) conduct testing sessions. In steps 3 and 4, the experimenter (the first author) role-played a child. In steps 2-4, a script specified the percentage of correct responses by the "child" on each MTS task. During the reading repertoire evaluation, the percentage of role-played correct responses was 25% for all MTS tasks (AB, AC, BC, and CB). During teaching sessions, each of the MTS tasks AB and AC was presented three times, with role-played correct responding of 25%, 50%, and 50% on the three presentations, respectively. During testing sessions, the percentage of correct responses was 100% for each of the MTS tasks BC and CB.
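
For clarity, the script can be summarized compactly. The following encoding (in Python) is our own illustration of the percentages reported above, not a materials artifact from the study.

    # Scripted percentage of correct "child" responses per phase and MTS task.
    # The teaching tasks (AB, AC) were each presented three times, so their
    # entries list one value per presentation.
    SCRIPT = {
        "evaluation": {"AB": 25, "AC": 25, "BC": 25, "CB": 25},
        "teaching":   {"AB": [25, 50, 50], "AC": [25, 50, 50]},
        "testing":    {"BC": 100, "CB": 100},
    }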

DESIGN, RESPONSE MEASUREMENT, AND INTEROBSERVER RELIABILITY (IOR)

A group design composed of a control and an experimental group was used to compare the effectiveness of the two packages, "manual alone" and "manual-plus-CAPSI", for training university students to conduct automated MTS training to teach reading. Performance accuracy was defined as correctly performing the following steps:

1. Creating four MTS tasks involving dictated words and pictures (AB), dictated words and printed words (AC), pictures and printed words (BC), and printed words and pictures (CB). Creating each task consisted of: (a) choosing sample and comparison stimuli, (b) arranging trials, and (c) choosing the correct response for each trial.

2. Evaluating an individual's reading repertoire, defined as: (a) providing instructions for each task, (b) presenting 12 trials of each MTS task, and (c) not presenting feedback on tasks.

3. Conducting teaching sessions, defined as: (a) providing instructions for each task, (b) presenting two MTS tasks AB and AC, (c) providing feedback for each task, and (d) repeatedly presenting each task until criterion was met on that task.

4. Conducting testing sessions, defined as: (a) providing instructions for each task, (b) presenting two MTS tasks BC and CB, (c) not presenting feedback on tasks, and (d) repeatedly presenting the same task until criterion was met on that task.

Sessions were videotaped, and the participants' accuracy in conducting automated MTS training to teach reading was scored using a 20-item checklist. The percentage of steps performed correctly was calculated by dividing the number of steps performed correctly by the total number of steps on the checklist and converting this ratio to a percentage. Interobserver reliability (IOR) was assessed by having two observers independently score 20% of the sessions; one observer watched the sessions live and the other watched videotapes of the sessions. Using the same 20-item checklist, they independently scored whether the participant performed each step correctly or incorrectly. A step was scored as an agreement if both observers scored it identically; otherwise, it was scored as a disagreement. Percent agreement was calculated by dividing the number of agreements by the number of agreements plus disagreements and multiplying by 100 (Martin & Pear, 2011). Mean percentage agreement was 93.75%, ranging from 87.5% to 100%. Two observers determined procedural integrity using a 10-item procedural integrity checklist, placing a checkmark in the "yes" column if the experimenter implemented a procedural step as indicated on the checklist, and in the "no" column otherwise. Procedural integrity checks were conducted for 20% of the sessions. According to these data, the experimenter correctly followed 100% of the procedural steps.
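
In code, the two checklist computations described above reduce to the following minimal Python sketch (function and variable names are ours).

    def percent_correct(steps_correct, total_steps=20):
        """Checklist accuracy: correct steps divided by total steps, as a percentage."""
        return 100.0 * steps_correct / total_steps

    def percent_agreement(observer_a, observer_b):
        """Point-by-point agreement (Martin & Pear, 2011):
        agreements / (agreements + disagreements) * 100.
        """
        agreements = sum(a == b for a, b in zip(observer_a, observer_b))
        return 100.0 * agreements / len(observer_a)

    # Example: observers agreeing on 18 of 20 checklist items yields 90.0.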

PROCEDURE

Baseline. During baseline, participants were assessed individually. At the beginning of the session, each participant was provided with a laptop, the MestreLibras computer program, the 16-page description of its use, and the 4-page summary of steps giving abbreviated instructions on how to conduct automated MTS training to teach reading. Participants were asked to read the material and then, using the 4-page summary of steps as a guide, to attempt to teach reading with the experimenter role-playing a child. The participants used the MestreLibras 16-page description to locate functions in the computer program related to the steps they were asked to perform. After reading the first page of the summary of steps, the participants attempted to create four MTS tasks--AB, AC, BC, and CB--each having 12 trials, using the words "bee", "bed", and "cat". After reading the second page, the participants attempted to evaluate reading repertoires with the experimenter role-playing a child. Twelve trials of each of the four MTS tasks were presented in the following manner: a trial started with a sample stimulus presented alone; as soon as the "child" clicked on the sample with the mouse, three comparison stimuli were presented and the "child" then chose one of them. No feedback on the "child's" responses was given. A new trial started after a 2-s inter-trial interval. After reading the third page of the summary of steps, the participants attempted to conduct teaching sessions. Participants presented MTS tasks in the same manner as described above, except that: (1) the MTS tasks presented were those involving the AB and AC relations; (2) feedback on the "child's" performance was provided, consisting of an animation on the screen following correct responses and a black screen following incorrect responses; and (3) each MTS task was presented until a criterion of 100% correct responses was met. After reading the fourth page of the summary of steps, the participants attempted to conduct testing sessions. Participants presented MTS tasks in the same manner as described above, except that: (1) the MTS tasks were those involving the BC and CB relations; and (2) each MTS task was presented until a criterion of 100% correct responses was met. (Note that these are ideal descriptions of a baseline session; since participants' performance during baseline was not completely accurate, the descriptions did not necessarily correspond to the behaviors that were observed.) No feedback on the participants' performance was provided during baseline sessions, and participants were told at the beginning of the session that their questions would not be answered. Baseline sessions were videotaped, and participants' accuracy in conducting automated MTS training to teach reading was assessed using the 20-item checklist mentioned previously.
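
The block structure of these tasks can be sketched schematically as follows. This Python sketch follows the description above rather than MestreLibras's actual implementation; run_block and run_trial are hypothetical names.

    import time

    def run_block(trials, run_trial, feedback, repeat=True,
                  criterion=1.0, iti_seconds=2):
        """Present a block of MTS trials, optionally repeating it to criterion.

        trials    -- list of (sample, comparisons, correct) tuples (12 per task here)
        run_trial -- callable returning True if the "child's" response was correct
        feedback  -- True for teaching blocks (AB, AC); False otherwise
        repeat    -- False for the one-shot evaluation block; True for teaching
                     and testing blocks, which repeat until criterion is met
        """
        while True:
            n_correct = 0
            for sample, comparisons, correct in trials:
                if run_trial(sample, comparisons, correct, feedback=feedback):
                    n_correct += 1
                time.sleep(iti_seconds)  # 2-s inter-trial interval
            if not repeat or n_correct / len(trials) >= criterion:
                return

Under this scheme, a teaching block would be run with feedback=True and repeat=True, the evaluation block with feedback=False and repeat=False, and a testing block with feedback=False and repeat=True.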

Training. After baseline, participants were randomly assigned to either the control or the experimental group. Participants in the control group were given a hard copy of the 27-page manual mentioned previously, plus a CD-ROM containing the MestreLibras computer program and the 16-page description of its use, and were asked to study the manual and answer the study questions it provided. Participants in the experimental group were given the same materials as the control group. In addition, they were given a CAPSI username and password and brief instruction on how to use CAPSI for taking unit tests and for peer-reviewing, that is, marking tests of other participants on units they had already completed. Participants in the experimental group were asked to study the manual, answer its study questions, and take unit tests on CAPSI. They were also asked to peer-review a test within 12 h of its being assigned to them; however, they had no opportunities to peer-review, owing to the small number of participants. First, one participant started taking tests late; because two participants were not available for peer-reviewing, which was a requirement in the course, tests were automatically assigned to the instructor for marking. Second, one participant dropped out of the study after a week, so the remaining participant was no longer assigned tests to mark.

As mentioned previously, the manual contained five chapters, and each chapter had an associated unit test delivered through CAPSI. The unit tests consisted of three questions randomly selected by the system. Participants were able to proceed to the next unit test only if their current unit test was marked as demonstrating mastery of the tested material. Because of the small number of participants in the CAPSI condition, there were no peer-reviewers and all tests were assigned to the experimenter for marking, which was done within 24 hours of the completion of the test. Marking consisted of providing detailed written feedback on participants' answers and could designate either a "pass" or a "re-study". A pass indicated that the participant could proceed to the next unit; a re-study indicated that the participant could request a new test on the same unit after an hour of re-studying the manual. Initially, all participants were given two weeks to complete the training at a place of their choosing. However, participants in the experimental group had some difficulty mastering the unit tests. After two of them appealed (e.g., by saying that they would like more time to pass the units), a third week was given to participants in both the control and experimental groups.

Post-training. Post-training occurred after all participants had completed the training. Participants were assessed for accuracy in conducting automated MTS training to teach reading in the same manner as described for baseline.

* RESULTS AND DISCUSSION

Table 1 presents percentage accuracy in conducting automated MTS training to teach reading. Only four participants--two in each group--completed the study; therefore, only data for these participants were analyzed. The percentage of correct responses for the participants in each group and the combined mean score for each group are presented for baseline and post-training.

The combined scores during baseline were 72.5% and 72.7% accuracy for the control and experimental groups, respectively. During post-training, the combined scores were 100% and 95% accuracy for the control and experimental groups, respectively. High baseline accuracy and post-training accuracy greater than 90% were observed for both groups. According to these results, the two packages--"manual alone" and "manual-plus-CAPSI"--produced similar effects on the control and experimental groups' post-training accuracy in conducting automated MTS training to teach reading.

We suggest two main reasons for this outcome. First, the high accuracy observed during baseline produced a ceiling effect, making it difficult to infer clear effects of each package on the improvements observed in participants' post-training performance. Second, because the sample size was very small, it is difficult to determine whether the results obtained were due to group effects or to random variation.

These findings bear on current efforts to use equivalence-based methods to teach reading in applied educational settings (Rehfeldt, 2011). First, the results may point to a fruitful route for developing effective approaches to directly training individuals to conduct automated MTS training to teach reading. Second, the data showed that individuals can perform such skills with high levels of accuracy after a relatively short period of time (e.g., three weeks), indicating the time effectiveness of the training approaches investigated in the present study. Third, to our knowledge, this is the first study to combine two different areas of research within behavior analysis--stimulus equivalence and CAPSI--to address this issue. This can be considered an important addition to the line of research investigating behavioral approaches to training professionals in the use of behavior-analytic procedures in applied settings such as clinics and schools. In what follows, we discuss some of the findings in more detail.

In post-training, accuracy greater than 90% in conducting automated MTS training to teach reading was observed for both groups. With respect to the contents of the manual, we highlight the fact that, for some chapters, the study questions required participants not only to demonstrate written knowledge but also to perform skills involved in conducting automated MTS training.

Recall that in Scherman's (2010) study, for the units of the DTT manual that required practice in conducting the DTT procedure, participants were asked to schedule an appointment with the researcher to perform their DTT skills prior to post-training. However, this was not part of the "pass" requirements for those units, and feedback on the participants' implementation of the DTT procedure was not given. Unlike in Scherman's study, participants in the present study were required simply to study the contents of the manual during training; for the units of the manual involving practice in conducting MTS, they were not required to perform the skills in scheduled sessions prior to post-training. Future studies are needed to systematically evaluate the effect of prior practice sessions on participants' accuracy during post-training.

The high accuracy observed for participants in both groups during baseline deserves further attention. First, their baseline performance was not totally naive: the participants were provided with a summary of steps for conducting automated MTS training to teach reading and with the 16-page description of how to use the MestreLibras computer program. We felt this was necessary to prevent a large number of errors and, moreover, to make representative samples of task-related behaviors likely to occur.

During training, participants in the experimental group reported difficulties passing the unit tests. All participants in the experimental group received a re-study on at least one unit test, and one participant dropped out of the study after several unsuccessful attempts to pass the units. Because of these difficulties, as mentioned previously, it was necessary to give all participants an additional week to complete the training. These events may have added aversive features to the experimental conditions, whose exact effects on participants' overall performance are unknown. We therefore suggest that future research address this issue.

For example, future studies could take care to minimize any aversive features produced by CAPSI (e.g., by providing smaller units of material) or make CAPSI a more positive experience (e.g., by including practical components in addition to written material). In particular, substantial improvements in CAPSI as a tool for training professionals such as teachers, educators, and staff working with children could be achieved by investigating its interaction with demonstration videos, in which the practical use of a technique is taught through videotaped examples (Catania, Almeida, Liu-Constant, & Reed, 2009).

Although participants in both groups performed MTS training with high levels of accuracy during post-training, the two packages investigated in this study--"manual alone" and "manual-plus-CAPSI"--produced similar effects on that accuracy. We suggest that the combination of the manual with CAPSI deserves further investigation, for instance by addressing the aforementioned issues regarding CAPSI's features.

Two limitations of the study are as follows: (1) The participants' performances were assessed under simulated conditions that may poorly represent the contingencies in effect in real-world classrooms. Specifically, because the experimenter role-played a child, it is difficult to infer the extent to which the participants could adequately conduct automated MTS training to teach reading to an actual child following training. (2) The training focused on a circumscribed set of skills. According to some authors (Goyos & Freire, 2000; Green & Saunders, 1998), the implementation of equivalence-based methods requires more complex skills, such as: (a) deciding which behaviors to teach in order to obtain the desired emergent behaviors; (b) carefully analyzing the child's performance session by session; and (c) deciding whether the training and testing protocols are adequate to produce effective learning.

Considering that CAPSI has been used to teach higher-order thinking repertoires (Crone-Todd & Pear, 2001), these skills may be more suitable for teaching via CAPSI than the procedures taught in the present study. Future studies could investigate the effectiveness of CAPSI not only for teaching performance-based skills in implementing equivalence-based methods, but also for teaching knowledge about stimulus equivalence, with knowledge defined in accord with Bloom's (1956) taxonomy.

In addition, future investigations should aim at evaluating the peer-reviewing component, which was not possible in this study because only two participants in the experimental group completed it. (An experimenter or instructor could, however, specify only one peer-reviewer per test, thus providing peer-reviewing experience for at least one participant.) Such evaluations could improve CAPSI's use in the development of teacher training programs in at least two ways: (1) the peer-reviewing component helps ensure that feedback is given within a short period of time; and (2) peer-reviewing may also contribute to the learning of those who do the reviewing, although its role in learning deserves further analysis (Martin, Pear, & Martin, 2002).

Despite the limitations discussed, the present study is encouraging in light of the recent debate on the need to bring technology derived from behavior-analytic research into practical settings. Accordingly, it represents a concrete, research-based step toward applying stimulus equivalence in applied educational settings such as classrooms.

This research was part of a dissertation submitted by the first author in partial fulfillment of the requirements for a doctoral degree at the Federal University of Sao Carlos. It was supported by grants from the Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) to M. Oliveira, Application Number 200938/2010-0, and Celso Goyos, Application Number 306921/2010-3; a grant from the Sao Paulo Research Foundation (FAPESP) to C. Goyos, Application Number 2011/22244-3; and grant KAL 114098 from the Knowledge Translation Branch of the Canadian Institutes of Health Research (CIHR) to J. J. Pear.

* REFERENCES

Arntzen, E., & Lian, T. (2010). Trained and derived relations with pictures versus abstract stimuli as nodes. The Psychological Record, 60, 659-678.

Bloom, B. S. (1956). Taxonomy of Educational Objectives: Cognitive and Affective Domains. New York: David McKay.

Catania, C. N., Almeida, D., Liu-Constant, B., & Reed, F. D. (2009). Video modeling to train staff to implement discrete-trial instruction. Journal of Applied Behavior Analysis, 42(2), 387-392.

Critchfield, T. S., & Fienup, D. M. (2010). Using stimulus equivalence technology to teach about statistical inference in a group setting. Journal of Applied Behavior Analysis, 43(4), 437-462.

Crone-Todd, D. E., & Pear, J. J. (2001). Application of Bloom's taxonomy to PSI. The Behavior Analyst Today, 2(3), 204-210.

Dymond, S., Rehfeldt, R. A., & Schenk, J. (2005). Nonautomated procedures in derived stimulus relations research: A methodological note. The Psychological Record, 55, 461-481.

Elias, N. C., & Goyos, C. (2010). MestreLibras no ensino de sinais: Tarefas informatizadas de escolha de acordo com o modelo e equivalência de estímulos. In E. G. Mendes & M. A. Almeida (Orgs.), Das margens ao centro: Perspectivas para as políticas e práticas educacionais no contexto da educação especial inclusiva (1st ed., pp. 223-234). São Carlos: Junqueira & Marin Editora.

Elias, N. C., Goyos, C., Saunders, M., & Saunders, R. R. (2008). Teaching manual signs to adults with mental retardation using matching-to-sample procedures and stimulus equivalence. The Analysis of Verbal Behavior, 24, 1-13.

Fazzio, D., & Martin, G. L. (2011). Discrete-trials teaching with children with autism: A self-instructional manual (pp. 21-24). Winnipeg: Hugo Science Press.

Fields, L., Travis, R., Roy, D., Yadlovker, E., de Aguiar-Rocha, L., & Sturmey, P. (2009). Equivalence class formation: A method for teaching statistical interactions. Journal of Applied Behavior Analysis, 42(3), 575-593.

Fienup, D. M., Covey, D. P., & Critchfield, T. S. (2010). Teaching brain-behavior relations economically with stimulus equivalence technology. Journal of Applied Behavior Analysis, 43(1), 19-33.

Goyos, C., & Almeida, J. C. (1994). Mestre 1.0 [Computer software]. São Carlos: Mestre Software.

Goyos, C., & Freire, A. F. (2000). Programando ensino informatizado para indivíduos deficientes mentais. In E. Manzini (Org.), Educação Especial: Temas atuais (pp. 57-74). Marília: UNESP-Marília Publicações.

Green, G., & Saunders, R. R. (1998). Stimulus equivalence. In K. A. Lattal & M. Perone (Eds.), Handbook of research methods in human operant behavior (pp. 229-262). New York: Plenum.

Hu, L., Pear, J. J., & Yu, C. T. (2012). Teaching university students to implement the Assessment of Basic Learning Abilities using Computer-Aided Personalized System of Instruction. Journal of Developmental Disabilities, 18, 12-19.

Keller, F. S. (1968). "Good-bye, teacher...". Journal of Applied Behavior Analysis, 1(1), 79-89.

Kinsner, W., & Pear, J. J. (1988). Computer-aided personalized system of instruction for the virtual classroom. Canadian Journal of Educational Communication, 17, 21-36.

Lynch, D. C., & Cuvo, A. J. (1995). Stimulus equivalence instruction of fraction-decimal relations. Journal of Applied Behavior Analysis, 28(2), 115-126.

Mace, F. C., & Critchfield, T. S. (2010). Translational research in behavior analysis: Historical traditions and imperative for the future. Journal of the Experimental Analysis of Behavior, 93(3), 293-312.

Martin, G., & Pear, J. J. (2011). Behavior modification: What it is and how to do it (9th ed., pp. 266-268). Englewood Cliffs, NJ: Prentice-Hall.

Martin, T. L., Pear, J. J., & Martin, G. L. (2002). Analysis of proctor marking accuracy in a computer-aided personalized system of instruction course. Journal of Applied Behavior Analysis, 35(3), 309-312.

Pear, J. J., & Crone-Todd, D. E. (1999). Personalized system of instruction in cyberspace. Journal of Applied Behavior Analysis, 32(2), 205-209.

Pear, J. J., & Novak, M. (1996). Computer-aided personalized system of instruction: A program evaluation. Teaching of Psychology, 23, 119-123.

Pear, J. J., Schnerch, G. J., Silva, K. M., Svenningsen, L., & Lambert, J. (2011). Web-based computer-aided personalized system of instruction. In W. Buskist & J. E. Groccia (Eds.), New directions for teaching and learning: Vol. 128. Evidence-based teaching (pp. 85-94). San Francisco, CA: Jossey-Bass.

Rehfeldt, R. A. (2011). Toward a technology of derived stimulus relations: An analysis of articles published in the Journal of Applied Behavior Analysis, 1992-2009. Journal of Applied Behavior Analysis, 44(1), 109-119.

Ribeiro, D. M., Elias, N. C., Goyos, C., & Miguel, C. (2010). The effects of listener training on the emergence of tact and mand signs by individuals with intellectual disabilities. The Analysis of Verbal Behavior, 26, 65-72.

Rossit, R. A. S., & Goyos, C. (2009). Deficiência intelectual e aquisição matemática: Currículo como rede de relações condicionais. Psicologia Escolar e Educacional, 13, 1-15.

Saunders, K. J., & Williams, D. C. (1998). Stimulus control procedures. In K. A. Lattal & M. Perone (Eds.), Handbook of research methods in human operant behavior (pp. 71-90). New York: Plenum Press.

Scherman, A. Z. (2010). Using computer-aided personalized system of instruction (CAPSI) to teach discrete-trials teaching (DTT) for educating children with autism spectrum disorders (ASDs). Unpublished master's thesis, University of Manitoba, Canada.

Sidman, M. (1971). Reading and auditory-visual equivalences. Journal of Speech and Hearing Research, 14, 5-13.

Sidman, M. (1992). Equivalence relations: Some basic considerations. In S. C. Hayes & L. J. Hayes (Eds.), Understanding verbal relations (pp. 15-28). Reno, NV: Context Press.

Souza, S. R. de, & Goyos, C. (2003). Ensino de leitura e escrita por mães de crianças com dificuldades de aprendizagem. In M. C. Marquezine, M. A. Almeida, S. Omote, & E. D. O. Tanaka (Orgs.), O papel da família junto ao portador de necessidades especiais (1st ed., pp. 69-78). Londrina: EDUEL.

Stromer, R., & Mackay, H. A. (1993). Delayed identity matching to complex samples: Teaching students with mental retardation spelling and the prerequisites for equivalence classes. Research in Developmental Disabilities, 14, 19-38.

Stromer, R., Mackay, H. A., & Stoddard, L. T. (1992). Classroom applications of stimulus equivalence technology. Journal of Behavioral Education, 2, 225-256.

Toussaint, K., & Tiger, J. H. (2010). Teaching early braille literacy skills within a stimulus equivalence paradigm to children with degenerative visual impairments. Journal of Applied Behavior Analysis, 43(2), 181-194.

Marileide Oliveira

Department of Psychology

Federal University of Sao Carlos

Celso Goyos

Department of Psychology

Federal University of Sao Carlos

Joseph Pear

Department of Psychology

University of Manitoba

* AUTHOR CONTACT INFORMATION

MARILEIDE OLIVEIRA

Phone number: (55 16) 3351-8498

Mailing address for correspondence:

Rodovia Washington Luis, km 235--SP-310

Sao Carlos--Sao Paulo--Brasil CEP 13565-905

E-mail address: marileide.antunes@yahoo.com.br

CELSO GOYOS

Phone number: (55 16) 3351-8498

Mailing address for correspondence:

Rodovia Washington Luis, km 235--SP-310

Sao Carlos--Sao Paulo--Brasil CEP 13565-905

E-mail address: celsogoyos@hotmail.com

JOSEPH PEAR

Phone number: (1 204) 480-1466

Mailing address for correspondence:

Department of Psychology

190 Dysart Road Winnipeg, Manitoba, Canada R3T 2N2

E-mail address: pear@cc.umanitoba.ca
Table 1. Percentage accuracy on conducting MTS training from
baseline to post-training for control and experimental groups.

                                       Baseline   Post-training

Control group        P1                  70%          100%
                     P2                  75%          100%
                     Combined scores    72.5%         100%

Experimental group   P3                  85%          100%
                     P4                 60.41%         90%
                     Combined scores    72.7%          95%