
The use of scoring rubrics in management accounting.

ABSTRACT

The first two courses in accounting, Principles I and II, elicit fear and misunderstanding from most business students. Reinforcing the importance of accounting as a foundational building block in business education is critical to students' success in later business courses. Yet a large number of students who exit the accounting principles courses are not trained in using accounting for business decisions. In this study the treatment was a rubric assignment in one section of an accounting principles II course; another section was maintained under the lecture, homework, and exam format. Results indicated that students using scoring rubrics in the course initially struggled to incorporate the method into their learning process. Even after students were familiar with the rubric process, they did not show improvement over the control group. Although initial findings were not significant, issues discovered in the current study will be used to refine future research.

INTRODUCTION

The recent accounting scandals, ranging from Enron to WorldCom, that have rocked the business world, as reported in the Wall Street Journal, Business Week, and other media outlets, emphasize the need for change in accounting education. All business students now need to be able to accurately assess the financial statements and accounting records of business organizations. The conventional wisdom that accounting skills should be developed only by those intending to be accountants has proven to be a costly mistake. All managers now have the responsibility to identify accounting inaccuracies within their own organizations. The reaction from governmental bodies has centered on increasing the validity of publicly issued accounting information, as provisions of the Sarbanes-Oxley Act of 2002 were partly intended to do.

One of the main goals behind recent changes in accounting has been to make non-accountants (top management) responsible for publicly released accounting information. This outcome has led other non-accountants to reevaluate their accounting skills. Business schools have responded by reemphasizing the principles of accounting courses and by developing courses in ethics and corporate responsibility. Focusing on changing course content and adding new courses, however, does not address the fundamental problem of poor performance by students in the initial principles of accounting courses.

There are several possible reasons for the overall poor performance in the two initial accounting courses: 1) U.S. GAAP (Generally Accepted Accounting Principles), while highly developed, is not always intuitive. 2) Many students who enroll in the principles of accounting courses are non-accounting majors; these students are typically neither interested in the material nor motivated to perform well. 3) Many students fear accounting because they perceive it as difficult to learn.

All three reasons indicate the need for developing more efficient methods of delivering accounting knowledge to business students. If students understand what they are supposed to learn from a course and have guidelines on how they will be evaluated, then even students who are not accounting majors will be able to understand the basics of GAAP and other accounting methods. Clearly stated guidelines can minimize or eliminate students' fear of accounting.

One of the tools available to enhance student learning in accounting courses is the scoring rubric. Arter and McTighe (2001) define rubrics as "scoring tools containing criteria and a performance scale that allows us to define and describe the most important components that comprise complex performances and products" (p. 8). Criteria are "standards by which something can be judged or valued" (Gregory, Cameron, and Davies, 1997, p. 7). By specifying the particular qualities or processes that must be exhibited, an instructor provides students with a clear description of and expectations for performance. The rubric clearly highlights the important components that comprise a particular problem or performance.

The purpose of this study is to use a scoring rubric to introduce students enrolled in a principles of accounting class to interpreting accounting information and to using that interpretation to enhance their decision making. This paper extends the literature in accounting education by showing the effect of a scoring rubric on students' examination performance in an introduction to managerial accounting course.

REVIEW OF LITERATURE

Accounting education has historically been considered a necessary but misunderstood area in business schools. Accounting knowledge is recognized as an essential part of the foundation of all business education programs; in reality, many students consider accounting coursework nothing more than a hurdle or impediment to their immediate goal of graduating or even surviving the current semester. The accounting education research on how to help more students succeed in understanding accounting concepts and methods is extensive.

Catanach, Croll and Grinaker (2000) found evidence that by introducing a creative approach to teaching in intermediate financial accounting courses, students learned accounting concepts in a more detailed and applicable way than in courses relying on traditional instructional methods. This creative approach, called the "Business Activity Model" (BAM), focused on developing accounting students' critical thinking, communication, and research skills. The AICPA (American Institute of Certified Public Accountants) has identified these three skills as important in understanding and delivering accounting information. Connecting accounting concepts to "real world" issues drove students' desire to understand the issues beyond what was necessary for passing the course.

Springer and Borthick (2004) also introduced real world issues into the accounting classroom with defined objectives. In several introductory accounting courses, students were given a business simulation built around eight different fundamental accounting concepts. By solving the simulation problems collaboratively, students developed critical thinking skills specifically focused on accounting issues. Their ability to work together in groups and produce required written summaries influenced their accounting learning experience positively.

The use of real world exercises is powerful and motivates students to explore accounting issues. A key element to make real world exercises relevant for students is to make sure they have the basic decision making and critical thinking skills necessary to comprehensively examine accounting problems as presented in case scenarios.

Ammons and Mills (2005) move the literature in this area forward with their article on course-embedded assessments. They found that by developing decision scenarios for accounting students that required input from marketing, operations, and other functions, students thought "outside" the accounting box and connected the interrelationships between functional business areas. Their introduction of a scoring rubric gave students a tool for identifying the performance criteria on which they would be evaluated.

The Ammons and Mills paper introduces the rubric into the accounting education area and helps to establish the interconnectivity of improved accounting education and rubric development. Kealey, Holland and Watson (2005) provided further evidence of a distinct connection between students possessing general critical thinking skills and success in accounting. Students lacking the ability to think "critically" are at much higher risk of performing poorly in the first accounting course than students who enter the course with elemental skills in critical thinking. While other factors are involved, their paper does highlight the developing theory of student preparedness as a precursor to success in accounting.

Using various techniques to stimulate critical thinking responses continues to be a goal within the university system and, more recently, based on changes in accounting practices, in accounting courses offered through schools of business. Critical thinking, however, is difficult to define, and much confusion surrounds the teaching of critical thinking skills. In 1987 a panel of experts gathered to generate a consensus statement regarding critical thinking and the ideal critical thinker. The following statement comes from the Delphi Report.
 ... The ideal critical thinker is habitually inquisitive,
 well-informed, trustful of reason, open-minded, flexible, fair
 minded in evaluation, honest in facing personal biases, prudent in
 making judgments, willing to reconsider, clear about issues,
 orderly in complex matters, diligent in seeking relevant
 information, reasonable in selection of criteria, focused in
 inquiry, and persistent in seeking results which are as precise as
 the subject and the circumstances of inquiry permit ... (Facione,
 1990, p. 2)


For business educators, the goal is to work towards this ideal standard by establishing instructional practices that cultivate good critical thinking. Business educators have been attracted to critical thinking methods and approaches which produce employees who exemplify such dispositions and uphold these ideals. Paul and Elder (2001) profess that students need to learn to use critical thinking strategies which help them effectively think through complex problems encountered on the job and in daily life. This is done by identifying the logic of each task which includes the following elements of thought: 1) Identify goals and purposes; 2) Gather relevant information; 3) Formulate questions clearly and precisely; 4) Determine (and evaluate) assumptions; 5) Think through the implications of decisions; 6) Make logical and accurate inferences and interpretations; 7) Articulate clearly the concepts or ideas that are guiding their thinking; and 8) Consider alternate ways of looking at situations. The scoring rubric used in this research was developed using some of Paul and Elder's (2001) Universal Intellectual Standards: Clarity, accuracy, precision, relevance, depth, breadth, logic, significance, and fairness.

Traditional education methods dominate business education courses. Teaching tends to concentrate on presentational methods such as lecture. Students absorb information by listening to presentations made in the classroom and are expected to read the textbook and complete exercises. After several weeks of instruction, students are assessed on their knowledge of the content through a traditional test of factual information and basic concepts consisting of multiple choice, true/false, and fill-in-the-blank items. These types of questions are called "selected response" questions. They are easy to score because each has a right or wrong answer.

Students also need to indicate that they understand and can apply their learning. "Constructed response" assessments include essays and performance assessments requiring students to construct a product or perform a demonstration to show what they understand (Arter and McTighe, 2001). These constructed response measurement procedures require students to generate rather than select responses (Popham, 2002). Typically, in traditional classrooms, application of learning is assessed using essay questions or problem solving questions on an examination.

The difficulty in evaluating constructed responses is that the criteria used for evaluation are sometimes unclear to students. Students are either left to their own devices to figure out how they will be judged or must wait until the test is returned. Even after the test is returned, the evaluation criteria are sometimes unclear. Students need to understand the criteria by which their work will be judged. If students know the criteria in advance, they have clear targets and clear goals, which can improve their work and enhance their learning (Arter and McTighe, 2001).

Current pedagogical scoring tools which include criteria for determining the quality of student performance are called scoring rubrics, or simply "rubrics." According to Wiggins (1998), rubrics tell potential performers and judges which elements of performance matter most and how the work to be judged will be distinguished in terms of relative quality. Rubrics typically contain a scale of possible points and provide descriptors for each level of performance. These descriptors contain criteria which describe conditions that any performance must meet to be successful and they define what meeting the task requirements entails (Wiggins, 1998).

This research is intended to assess whether scoring rubrics used in a management accounting course improve student performance. The following hypothesis was proposed:

H1: Students who receive the scoring rubric will perform better on subsequent exams than students who do not receive the scoring rubric.

METHOD AND DESIGN

Participants in the study were 60 students in two sections of the Introductory Managerial Accounting course offered during the spring semester at a small public university.

Students were traditional in nature, representing a range of academic abilities and an ethnically diverse population. All students in the College of Business were required to take this course, and several other majors in the university, such as Agriculture, also required it. All students had taken an introductory financial accounting course prior to enrolling in the course.

Both classes met on Tuesday and Thursday for one hour and 15 minutes. The control group met at 11:00 a.m. and the treatment group met at 2:00 p.m. Course material followed typical AACSB (Association to Advance Collegiate Schools of Business) guidelines for content in an introductory managerial accounting course.

The same instructor taught both sections. Courses were mainly lecture format with regular breakout sessions. During breakout sessions students worked in groups of 3-5 on problem solving exercises. An introductory managerial accounting textbook was utilized as the primary reading material.

Students were evaluated on four (4) examination or "exam" scores, quizzes, homework, and participation. Exams covered the material presented in class and in the textbook and were a combination of multiple choice and problem solving exercises. Quizzes were essay in nature, designed to elicit critical thinking; they were unannounced, given randomly throughout the semester, and involved hypothetical scenarios created to exercise students' critical thinking skills. Homework was assigned after each class period, and homework problems were aligned with the current material covered in class.

The two sections were divided with 38 students in the control group and 22 students in the treatment group. Random assignment determined treatment and control group sections.

For the first three weeks of the semester, both treatment and control groups received the same instruction and assignments from the instructor. At the end of the first three weeks, the instructor administered identical exams to both groups; exam one served as the baseline for comparison. Following exam one, the rubric was introduced to the treatment group.

A copy of the rubric was distributed to individual students, and the instructor displayed the rubric on the overhead projector. The purpose for the rubric was explained along with descriptions of the criteria. Criteria included: clarity, relevance, precision, and accuracy. The scoring scale ranged from 0-10 points for each criterion. Figure 1 contains a copy of the accounting rubric. Following exam one, the instructor took a sample of one of the problems and reviewed it using a "think aloud" to model how to utilize the rubric criteria. The purpose of this demonstration was to encourage and teach students to use the rubric to develop critical thinking skills.
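To make the mechanics concrete, the sketch below illustrates one way a single exam problem could be scored against the four announced criteria, each rated on the 0-10 scale and summed into a problem score. This is only an illustration of the rubric's arithmetic; the summing rule, function name, and example ratings are our own assumptions, not part of the study's grading procedure.

# Hypothetical sketch: tally a rubric score for one exam problem from the
# four criteria announced to the treatment group (0-10 points each).
# The criterion names come from the paper; everything else is assumed.
CRITERIA = ("clarity", "relevance", "precision", "accuracy")

def score_problem(ratings):
    """Sum the 0-10 ratings across the four rubric criteria."""
    for name in CRITERIA:
        if not 0 <= ratings[name] <= 10:
            raise ValueError(f"{name} rating must be between 0 and 10")
    return sum(ratings[name] for name in CRITERIA)

# Example: clear, relevant work with imprecise labels and a minor miscalculation.
example = {"clarity": 10, "relevance": 10, "precision": 5, "accuracy": 5}
print(score_problem(example))  # 30 out of a possible 40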

To reinforce the elements on the rubric, the instructor reviewed the rubric criteria relative to the homework assignment. This occurred six (6) times during the semester. The instructor would put up the rubric matrix and review the criteria with the students. The second time the rubric criteria were reviewed, a model example was reviewed and scored for the students. The fourth time, a top-scoring answer from a student in the control group was used in the treatment class to go over the rubric grading criteria. Reviewing the rubric criteria took approximately 5-10 minutes; reviewing the criteria with an actual scoring example took approximately 15 minutes.

The control group was not introduced to the rubric and continued to work in the same manner as both groups during the first three weeks in the course. All other instructional methods remained the same for both groups. At the end of the semester the treatment group participated in a qualitative survey. Sample survey questions were: Have you ever used a rubric before? Did you use the rubric in this class? What impact do you feel the rubric had on your learning in this class? In your opinion, what were some of the benefits and drawbacks of using the rubric?

The dependent variables were the students' percentage test and quiz scores. Exams were a combination of multiple choice questions and problems/essays. The independent variable in this research was the scoring rubric. The control group did not receive instruction on the scoring rubric; students in the treatment group were shown the scoring rubric prior to the second test.

SCORING RUBRIC DEVELOPMENT

The first step in developing the rubric was to identify questions on the chapter tests which could be assessed using a scoring rubric. The constructed response items, or open ended problem solving essay questions, were selected. Based on previous experience with problem solving questions on tests, the features of the quality performance criteria were identified. Specific language for each criterion was developed using Paul and Elder's (2001) Universal Intellectual Standards. These intellectual standards check the quality of reasoning about a problem, issue, or situation. A few samples of student work were used to refine the scoring rubric. Figure 1 shows the rubric used with the treatment group.

STATISTICAL ANALYSIS AND RESULTS

Correlation analysis and a regression model were developed based on the previously mentioned hypothesis. Changes in students' exam scores were the main variables analyzed. The regression model looked at students' final exam score as a function of their change in performance from exam one through exam three. Tables I and II show the correlations between the final exam score and the changes in exam score from the first to the second exam and from the second to the third exam for the treatment and control groups, respectively.

For the control group, both changes in exam score are correlated with the final exam score and with each other at the .10 to .001 levels (Table II). For the treatment group, the only significant finding was the relationship between the change in exam score from exam I to exam II and the change in exam score from exam II to exam III, which was significant at the .05 level (Table I).

The regression model "Final Exam Score = Intercept + change1 + change2 + error" was estimated separately for the control group and the treatment group. The control group regression model F-value of 2.67 is significant at the .10 level, i.e., the larger the improvement from exam I to exam II and from exam II to exam III, the higher the final exam score. The t-values for the individual parameters, however, do not show a significant relationship between either independent variable and the final exam score (change1 t-value 1.27, change2 t-value 0.80). The results for the treatment group's regression model show no significant relationship between the dependent variable and the independent variables, either as a group or individually (F-value 0.42, change1 t-value -0.53, and change2 t-value -0.39).
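For readers who want to see the model form, the following is a minimal sketch (not the authors' code) of estimating this regression separately for each group with ordinary least squares. The data frame, column names, and values below are illustrative assumptions; only the model specification comes from the paper.

# Sketch of the regression "final exam = intercept + change1 + change2 + error",
# fit separately for the control and treatment groups. Data are made up.
import pandas as pd
import statsmodels.formula.api as smf

scores = pd.DataFrame({
    "group":      ["control"] * 6 + ["treatment"] * 6,
    "final_exam": [78, 85, 90, 70, 88, 74, 82, 76, 88, 80, 71, 90],
    "change1":    [5, 8, 12, -3, 9, 1, -2, 1, 4, 0, -6, 5],   # exam I to exam II
    "change2":    [2, 6, 9, -1, 4, 0, 3, -4, 2, 1, -2, 6],    # exam II to exam III
})

for name, grp in scores.groupby("group"):
    model = smf.ols("final_exam ~ change1 + change2", data=grp).fit()
    print(name, "F =", round(model.fvalue, 2), "t =", model.tvalues.round(2).to_dict())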

The average change in exam grade was also examined from the first exam to the second, from the second to the third, and from the third to the final; change1 (exam I to exam II), change2 (exam II to exam III), and change3 (exam III to final) capture the difference in average change between the treatment and control groups. The treatment group performed worse as a group in its change in exam score (at the .10 level) from exam I to exam II. After that, however, no difference in change was noted.

Table III reports individual t-tests between the control group and treatment group changes in exam score from exam I to II, II to III, and III to IV (the final exam). The change in score from exam I to exam II is significantly higher for the control group than for the treatment group, at the .10 level. The changes in score from exam II to exam III and from exam III to IV, however, show no significant difference between the control group and the treatment group. Further discussion of the implications of these findings follows.
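The group comparisons in Table III use an unequal-variance (Satterthwaite) t-test. A minimal sketch of that test with scipy is shown below; the arrays are illustrative placeholders, not the study data.

# Welch/Satterthwaite t-test on the change in exam score from exam I to II,
# comparing control and treatment groups. Values below are made up.
import numpy as np
from scipy import stats

control_change1   = np.array([5, 8, 12, -3, 9, 1])
treatment_change1 = np.array([-2, 1, 4, 0, -6, 5])

t_stat, p_value = stats.ttest_ind(control_change1, treatment_change1,
                                  equal_var=False)  # unequal variances assumed
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")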

DISCUSSIONS AND LIMITATIONS

One of the limitations of this research was the small sample size. The treatment group had only 22 participants, while the control group had 38. Ideally, the treatment group would have had more participants.

Another limitation is that the treatment group entered the research study with a higher degree of knowledge in management accounting. As evidenced by the exam I scores, students in the treatment group possessed greater competency in the subject than the control group.

The reliability of the scoring rubric used may have been influenced by the design of the rubric. Typical scoring ranges from 3 to 7 score points; the number of score points depends on the purpose of the rubric and the nature of what is to be assessed. The score point range used in this research was 10 points in order to facilitate grading on the exam and calculating final grades for the semester. It may be that having too many score points on the rubric made it difficult for students to distinguish the difference between score points.

When using a scoring rubric, evaluative judgments are made; hence, student work samples and anchor papers (what a score point of "2" looks like) are needed to increase consistency. Student work samples were used once, along with a model work example; however, anchor papers might have more clearly shown the different levels of quality on the scoring rubric. In the qualitative survey, students reported that the rubric "was too cluttered ... make it a checklist". Other comments were that [the rubric] was "too outlined" and "Not easily accessible, would be easier to remember if it was kept in a condensed version." Perhaps two independent raters should have been used to acquire consistent scores, thus increasing reliability.

Initially students were puzzled by the rubric. They were unaware of the purpose for using a scoring rubric and did not understand how to use it. One comment from a student was "make it integral from the outset, not when we mess up." It was apparent that students were apprehensive about the usefulness of the scoring rubric, and there was some uncertainty about how to use it effectively in a managerial accounting setting. Data collected in the qualitative survey indicated that students varied in their experience with rubrics and in their perceived benefit of the scoring rubric.

Students in the treatment group completed a survey about their prior experiences with scoring rubrics and how they used the rubric in the class. Of the students who completed the questionnaire, 62.5% reported that they had never used a rubric prior to the research study. This is a substantial percentage of the students in the class, and it shows that, for the most part, students were unfamiliar with the purpose of a scoring rubric and could not build on prior knowledge to apply it in the management accounting course. The survey also indicates that 50% of the students reported that they used the rubric in the management accounting class. If only 50% of the students actually used the scoring rubric, then the independent variable, the scoring rubric, may not have been very effective in accounting for differences between the control and treatment groups.

Of the 37.5% who did have experience, only 12.5% reported using the scoring rubric to assist them. Students' prior experiences with the use of scoring rubrics may have helped or hindered their perceived benefit of the scoring rubric and affected their use of scoring rubrics in the study. One student reported not using the scoring rubric because "I think the concept of the rubric was good, but it's kind of hard to study it to actually learn how to implement it." Some of the perceived benefits were: "it helped me to understand ... it helps you to organize everything", [it helped] "answer all problems as completely as possible", and "written clearly."

Prior research indicates that while rubrics are necessary, they are insufficient for good assessment and feedback. "To know what the rubric's language really means, both the student and judge need to see examples of work considered persuasive or organized" (Wiggins, 1998, p. 158). Students need to know the purpose for using the rubric and how to use it, and they need to see its relevance. As part of instruction, students need to be made aware of what it means to meet the criteria and why the rubric is being used.

Another consideration which may have affected student use of the rubric was difficulty with the critical thinking terminology used to define the criteria. According to Moskal (2003), "the criteria set forth within a scoring rubric should be clearly aligned with the requirements of the task and the stated goals or objectives" (p. 2). The critical thinking terms used as the criteria for evaluation were clarity, relevance, precision, and accuracy. These terms may have posed a problem and caused confusion for students. Perhaps, if students had better understood or used the critical thinking terminology as part of everyday instruction, they could have more effectively applied the knowledge on their examinations.

Despite the limitations mentioned above, 54% of the students surveyed reported that using the scoring rubric in the management accounting course had a positive impact on their learning, 18% reported little impact, and 27% said that the rubric had no impact on their learning in the course.

FUTURE RESEARCH

Students seemed to be confused by the scoring rubric and did not have "buy-in" or appreciate the benefits of using the scoring rubric. Further research may identify the benefits of developing a scoring rubric that is generated with student input. Qualities of effective assessment include involving students in developing assessment standards and criteria. This could address student "buy in" and help students to value the purpose behind using a rubric. Perhaps the increased time spent on rubric development would also help students understand how to apply the scoring rubric.

The assessment task used in this research was a traditional "paper and pencil" examination, and the scoring rubric was used to evaluate the open-ended items on the examinations and quizzes. An assessment task in the "performance task" category, which is more open ended in nature, such as a final group presentation for a simulation project, may be better aligned with the scoring design of the rubric than constructed response items on an examination or quiz.

CONCLUSION

There is indication that students were trying to use the rubric to deepen their understanding of accounting concepts. Unfortunately, the results showed that students' performance in the treatment group decreased relative to the performance of students in the control group immediately after being exposed to the scoring rubric. After the second exam, students in the treatment group did not see immediate success from using the scoring rubric. The emphasis on using the rubric seemed to diminish, and students' relative scores neither improved nor worsened compared to the control group.

The initial design of the rubric may have misled students to believe that the rubric was the solution to understanding the accounting concepts rather than a tool to facilitate learning. They may have been disappointed with the results of the second exam after trying to use the rubric; therefore, some students may have reduced their emphasis on using it. The scoring rubric used was a "generic" rubric intended to highlight particular criteria that would enable students to understand how to critically approach answering open ended accounting problems, not the "answers" to the problems. Indications are that there was a design flaw in the study. It is our belief that students were not clear on the purpose and utilization of the scoring rubric and were unclear about the critical thinking vocabulary. In the present study, we used critical thinking terminology such as clarity, relevance, precision, accuracy, and breadth. Our next step is to redesign the development of the rubric, which we hope will lead to a scoring rubric that students utilize more effectively. One of our objectives in the next study is to increase ownership of the rubric by having students actively participate in developing the rubric as a class. These revisions should reduce the confusion surrounding the use of the rubric. To that end, our future research will explore those possibilities.

REFERENCES

Ammons, J. and Mills, S. (2005) Course-embedded assessments for evaluating cross-functional integration and improving the teaching-learning process. Issues in Accounting Education, 20(1) February, 1-20.

Arter, J. and McTighe, J. (2001) Scoring rubrics in the classroom: Using performance criteria for assessing and improving student performance. Thousand Oaks, California: Corwin Press.

Catanach, A., Croll, D. and Grinaker, R., (2000) Teaching intermediate financial accounting using a business activity model. Issues in Accounting Education, 15(4) November, 583-603.

Facione, P.A. (1990) Critical thinking: A statement of expert consensus for purposes of educational assessment and instruction. The Delphi Report. Retrieved August 9, 2005, from http://www.insightassessment.com/pdf_files/DEXadobe.PDF

Gregory, K., Cameron, C. and Davies, A. (1997) Setting and using criteria. British Columbia: Connections Publishing.

Kealey, B., Holland, J. and Watson, M. (2005) Preliminary evidence on the association between critical thinking and performance in principles of accounting. Issues in Accounting Education, 20(1) February, 33-50.

Moskal, B. (2003) Developing classroom performance assessments and scoring rubrics. ERIC Document 481715.

Paul, R. and Elder, L. (2001) The miniature guide to critical thinking: concepts and tools. The Foundation for Critical Thinking.

Paul, R. and Elder, L. (2001) Critical thinking: Tools for taking charge of your learning and your life. New Jersey: Prentice Hall.

Popham, W. J. (2002) Classroom assessment: What teachers need to know (3rd ed.). Boston: Allyn & Bacon.

Springer, C. and Borthick, A., (2004) Business simulation to stage critical thinking in introductory accounting. Issues in Accounting Education, 19(3) August, 277-303.

Wiggins, G. (1998) Educative assessment: Designing assessments to inform and improve student performance. San Francisco: Jossey-Bass.

Jeffrey Decker, University of Illinois at Springfield
Michele Ebersole, University of Hawaii at Hilo
Table I: Correlation between exam scores (Treatment Group)

                                   Final exam   Change in score   Change in score
                                                Exam I to II      Exam II to III

Final exam                          1.00000      -0.18701          -0.16817
                                                  0.4047            0.4544

Change in score, Exam I to II      -0.18701       1.00000           0.49784
                                     0.4047                         0.0184

Change in score, Exam II to III    -0.16817       0.49784           1.00000
                                     0.4544       0.0184

N = 22

The number underneath each correlation is the probability under the null hypothesis.

Table II: Correlation between exam scores (Control Group)

                                   Final exam   Change in score   Change in score
                                                Exam I to II      Exam II to III

Final exam                          1.00000       0.35015           0.31110
                                                  0.0363            0.0648

Change in score, Exam I to II       0.35015       1.00000           0.59294
                                     0.0363                         0.0001

Change in score, Exam II to III     0.31110       0.59294           1.00000
                                     0.0648       0.0001

N = 36

The number underneath each correlation is the probability under the null hypothesis.

Table III: Individual t-tests between control and treatment groups for changes in exam scores

Variable    DF      t-value    Pr > |t|

Change1     40.9     1.92      0.0622
Change2     36.5    -0.23      0.8196
Change3     42.7    -0.14      0.8878

Satterthwaite unequal variance method reported.

Figure 1. Accounting Rubric

Problem solving descriptors at score points 0, 5, and 10 for each criterion:

1) Clarity (Do you understand the problem? Can you start the set-up?)
   0: Main point was missed.
   5: Main ideas restated but missing one or more elements.
   10: Example taken from illustration which clearly identified the critical elements.

2) Relevance (Are you able to identify relevant aspects of the problem? Can you set up the problem sequentially?)
   0: Irrelevant information used to determine solution.
   5: Steps shown in a sequential manner.
   10: All relevant information presented; irrelevant information disregarded.

3) Precision (Did you label it precisely? Is the set-up easy to follow?)
   0: Labels not shown, vague labels.
   5: Some labeling shown, but gaps noticeable.
   10: Correct labels used (connected solution to problem).

4) Accuracy (Is the answer correct?)
   0: Completely inaccurate answer given.
   5: Answer inaccurate due to minor miscalculation.
   10: Completely accurate answer given.

5) Breadth (Is there other information you should consider?)
   0: No attempt made.
   5: Some additional information included; attempted to elaborate but used information inappropriately; different situations used but inaccurately applied.
   10: Additional information included and used to solve the problem; problem elaborated by explaining the next step; problem applied in different situations.