The use of scoring rubrics in management accounting
The first two courses in accounting, principles I and II, elicit fear and misunderstanding from most business students. Reinforcing the importance of accounting as a foundational building block in business education is critical to the students' success in later business courses. Yet, a large number of students who exit the accounting principles courses are not trained in using accounting for business decisions. In this study the treatment was a rubric assignment in one section of an accounting principles II course. Another section was maintained under the lecture, homework and exam format. Results indicated that students using scoring rubrics in the course initially struggled with incorporating the method into their learning process. Even after students were familiar with the rubric process, they did not show improvement over the control group. Although initial findings were not significant, issues discovered in the current study will be used to refine future research.
The recent accounting scandals ranging from Enron to WorldCom that have rocked the business world, as reported in the Wall Street Journal, Business Week and other media outlets, emphasize the need for change in accounting education. All business students now need to be able to accurately assess the financial statements and accounting records in business organizations. The conventional wisdom that accounting skills should be developed only by those intending to be accountants has proven to be a costly mistake. All managers now have the responsibility to be able to identify accounting inaccuracies within their own organizations. The reaction from governmental bodies has centered on increasing the validity of publicly issued accounting information, as the provisions of the Sarbanes-Oxley Act of 2002 have partially been intended to do.
One of the main goals behind recent changes in accounting has been to make non-accountants (top management) responsible for publicly released accounting information. This outcome has led to other non-accountants being forced to reevaluate their accounting skills. Business Schools have responded by reemphasizing the principles of accounting courses and developing courses in ethics and corporate responsibility. Focusing on changing course content and adding new courses, however, does not address the fundamental problem of poor performance by students in the initial principles of accounting courses.
There are several possible reasons for the overall poor performance in the two initial accounting courses: 1) U.S. GAAP (Generally Accepted Accounting Principles), while highly developed, is not always intuitive. 2) More students who are non-accounting majors enroll in the principles of accounting courses; these students are typically not interested in or motivated to perform well. 3) Fear of accounting, related to perceptions of difficulties in learning accounting, exists.
All three reasons indicate the need for developing more efficient methods of delivering accounting knowledge to business students. If students understand what they are supposed to learn from a course, and have guidelines on how they will be evaluated, then even students who are not accounting majors will be able to understand the basics of GAAP and other accounting methods. Clearly stated guidelines can minimize or eliminate students' fear of accounting.
One of the tools available to enhance student learning in accounting courses is scoring rubrics. Arter and McTighe (2001) define a rubric as "scoring tools containing criteria and a performance scale that allows us to define and describe the most important components that comprise complex performances and products" (p. 8). Criteria are "standards by which something can be judged or valued" (Gregory, Cameron, and Davis, p. 7). By specifying the particular qualities or processes that must be exhibited, an instructor provides students with a clear description of and expectations for performance. The rubric clearly highlights the important components that comprise a particular problem or performance.
The purpose of this study is to introduce students enrolled in a principles of accounting class to the concept of interpreting accounting information and utilizing the interpretation to enhance students' decision making capabilities through the use of the scoring rubric. This paper extends the literature in accounting education by showing the effect of a scoring rubric on students' performance on examinations in an introduction to managerial accounting course.
REVIEW OF LITERATURE
Accounting education has historically been considered a necessary, but misunderstood, area in Business Schools. Accounting knowledge is recognized as an essential part of the foundation of all business education programs. In reality, many students consider accounting coursework nothing more than a hurdle or impediment to their immediate goal of graduating or even surviving the current semester. The accounting education research on how to make more students successful in achieving the goals of understanding accounting concepts and methods is extensive.
Catanach, Croll and Grinaker (2000) found evidence that by introducing a creative approach to teaching in intermediate financial accounting courses, students learned accounting concepts in a much more detailed and applicable way than in courses relying on traditional instructional methods. This creative approach, called the "Business Activity Model" (BAM), focused on developing accounting students' critical thinking, communication, and research skills. The AICPA (American Institute of Certified Public Accountants) has identified these three skills as important in understanding and delivering accounting information. Connecting accounting concepts to "real world" issues drove students' desire to understand the issues beyond what was necessary for passing the course.
Springer and Borthick (2004) also introduced real world issues into the accounting classroom with defined objectives. In several introductory accounting courses, students were given a business simulation consisting of eight different fundamental accounting concepts. By solving problems collaboratively in each of eight different simulations, students developed critical thinking skills specifically focused on accounting issues. Their ability to work together in groups and produce required written summaries influenced their accounting learning experience positively.
The use of real world exercises is powerful and motivates students to explore accounting issues. A key element to make real world exercises relevant for students is to make sure they have the basic decision making and critical thinking skills necessary to comprehensively examine accounting problems as presented in case scenarios.
Ammon and Mills (2005) move the literature in this area forward with their article on course-embedded assessments. They found that by developing decision scenarios for accounting students that required input from marketing, operations, etc., students thought "outside" the accounting box and connected the interrelationships between functional business areas. Their introduction of a scoring rubric gave students a tool for identifying the performance criteria on which they would be evaluated.
The Ammon and Mills paper introduces the rubric into the accounting education area and helps to establish the interconnectivity of improved accounting education and rubric development. Kealey, Holland and Watson (2005) provided further evidence of a distinct connection between students possessing general critical thinking skills and success in accounting. Students lacking the ability to think "critically" are at much higher risk of performing poorly in the first accounting course than students who enter the course with elemental skills in critical thinking. While other factors are involved, their paper does highlight the developing theory of student preparedness as a precursor to success in accounting.
Using various techniques to stimulate critical thinking responses continues to be a goal within the university system and, more recently, based on changes in accounting practices, in accounting courses offered through Schools of Business. Critical thinking, however, is difficult to define and much confusion surrounds the teaching of critical thinking skills. In 1987 a panel of experts gathered to generate a consensus statement regarding critical thinking and the ideal critical thinker. The following statement comes from the Delphi Report.
... The ideal critical thinker is habitually inquisitive, well-informed, trustful of reason, open-minded, flexible, fair minded in evaluation, honest in facing personal biases, prudent in making judgments, willing to reconsider, clear about issues, orderly in complex matters, diligent in seeking relevant information, reasonable in selection of criteria, focused in inquiry, and persistent in seeking results which are as precise as the subject and the circumstances of inquiry permit ... (Facione, 1990, p. 2)
For business educators, the goal is to work towards this ideal standard by establishing instructional practices that cultivate good critical thinking. Business educators have been attracted to critical thinking methods and approaches which produce employees who exemplify such dispositions and uphold these ideals. Paul and Elder (2001) profess that students need to learn to use critical thinking strategies which help them effectively think through complex problems encountered on the job and in daily life. This is done by identifying the logic of each task, which includes the following elements of thought: 1) Identify goals and purposes; 2) Gather relevant information; 3) Formulate questions clearly and precisely; 4) Determine (and evaluate) assumptions; 5) Think through the implications of decisions; 6) Make logical and accurate inferences and interpretations; 7) Articulate clearly the concepts or ideas that are guiding their thinking; and 8) Consider alternate ways of looking at situations. The scoring rubric used in this research was developed using some of Paul and Elder's (2001) Universal Intellectual Standards: clarity, accuracy, precision, relevance, depth, breadth, logic, significance, and fairness.
Traditional education methods dominate business education courses. Teaching tends to concentrate on presentational methods such as lecture. Students absorb information through listening to presentations made in the classroom and are expected to read the textbook and complete exercises. After several weeks of instruction, students are assessed on their knowledge of the content through a traditional test which measures their knowledge of factual information and basic concepts through multiple choice, true/false, and fill-in-the-blank items. These types of questions are called "selected response" questions. They are easy to score because there is a right and a wrong answer.
Students also need to indicate that they understand and can apply their learning. "Constructed response" assessments include essays and performance assessments requiring students to construct a product or perform a demonstration to show what they understand (Arter and McTighe, 2001). These constructed response measurement procedures require students to generate rather than select responses (Popham, 2002). Typically, in traditional classrooms, application of learning is assessed using essay questions or problem solving questions on an examination.
The difficulty in evaluating constructed responses is that sometimes the criteria used for evaluation are unclear to students. Students are either left to their own devices to figure out how they will be judged or students must wait until the test is returned. Even after the test is returned, the evaluation criteria are sometimes unclear. Students need to understand the criteria by which their work will be judged. If students know the criteria in advance, they have clear targets and clear goals which can improve their work and enhance their learning (Arter and McTighe, 2001).
Current pedagogical scoring tools which include criteria for determining the quality of student performance are called scoring rubrics, or simply "rubrics." According to Wiggins (1998), rubrics tell potential performers and judges which elements of performance matter most and how the work to be judged will be distinguished in terms of relative quality. Rubrics typically contain a scale of possible points and provide descriptors for each level of performance. These descriptors contain criteria which describe conditions that any performance must meet to be successful, and they define what meeting the task requirements entails (Wiggins, 1998).
This research is intended to assess whether scoring rubrics used in a management accounting course improve student performance. In this research the following hypothesis was proposed:
H1: Students who receive the scoring rubric will perform better on subsequent exams than students who do not receive the scoring rubric.
METHOD AND DESIGN
Participants in the study were 60 students in two sections of the Introductory Managerial Accounting course offered during the spring semester at a small public university.
Students were traditional in nature, representing a range of academic abilities and an ethnically diverse population. All students in the College of Business were required to take this course. Several other majors in the university, such as Agriculture, also required this introductory managerial accounting course. All students had taken an introductory financial accounting course prior to enrolling in the course.
Both classes met on Tuesday and Thursday for one hour and 15 minutes. The control group met at 11:00 a.m. and the treatment group met at 2:00 p.m. Course material followed typical AACSB (Association for the Advancement of Collegiate Schools of Business) guidelines for content in an introductory managerial accounting course.
The same instructor taught both sections. Courses were mainly lecture format with regular break out sessions. During break out sessions students worked in groups of 3-5 on problem solving exercises. An introductory managerial accounting textbook was utilized as the primary reading material.
Students were evaluated on four (4) examination or "exam" scores, quizzes, homework, and participation. Exams represented the material covered in class and in the textbook and were a combination of multiple choice and problem solving exercises. Quizzes were essay in nature, designed to elicit critical thinking skills. Unannounced quizzes were given randomly throughout the semester involving hypothetical scenarios.
The two sections were divided with 38 students in the control group and 22 students in the treatment group. Random assignment determined treatment and control group sections.
For the first three weeks of the semester, both treatment and control groups received the same instruction and assignments from the instructor. At the end of the first three weeks, the instructor administered identical exams to both groups to establish a baseline. Exam one was used as the baseline for comparison. Following exam one, the rubric was introduced to the treatment group.
A copy of the rubric was distributed to individual students and the instructor had a copy of the rubric on the overhead projector. The purpose for the rubric was explained along with descriptions of the criteria. Criteria included: clarity, relevance, precision, and accuracy. The scoring scale ranged from 0-10 points for each criterion. Figure I contains a copy of the accounting rubric. Following exam one, the instructor took a sample of one of the problems and reviewed the problem using a "think aloud" to model how to utilize the rubric criteria. The purpose of this demonstration was to encourage and teach students how to use the rubric to develop critical thinking skills.
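As a rough sketch of the instrument's shape (not the actual rubric used in the study), the structure described above, four criteria each scored 0-10, can be represented as a simple data structure with a totaling function. The criterion names come from the paper; the descriptor wording and the function are invented for illustration:

```python
# Illustrative sketch of a four-criterion scoring rubric (0-10 points each).
# Criterion names match the study; descriptor text is a placeholder, not the
# wording of the actual instrument.
RUBRIC = {
    "clarity":   "The response is clearly stated and easy to follow.",
    "relevance": "The response addresses the question that was asked.",
    "precision": "Figures and terms are exact rather than approximate.",
    "accuracy":  "Computations and accounting treatments are correct.",
}

MAX_POINTS = 10  # per criterion, as described in the study


def score_response(points: dict) -> int:
    """Sum criterion scores, validating each is on the 0-10 scale."""
    total = 0
    for criterion in RUBRIC:
        p = points[criterion]
        if not 0 <= p <= MAX_POINTS:
            raise ValueError(f"{criterion} score {p} outside 0-{MAX_POINTS}")
        total += p
    return total


# Example: a response judged strong on clarity and accuracy.
print(score_response({"clarity": 9, "relevance": 7, "precision": 6, "accuracy": 8}))  # 30
```

A dict keyed by criterion keeps the scale and descriptors in one place, which also makes it easy to print the matrix for students the way the instructor did on the overhead projector.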
To reinforce the elements on the rubric, the instructor reviewed criteria on the rubric relative to the homework assignment. This occurred six (6) times during the semester. The instructor would put up the rubric matrix and review the criteria with the students. The second time the rubric criteria were mentioned, a model example was reviewed and scored for the students. The fourth time, a top-scoring paper from a student in the control group was used in the treatment class to go over the rubric grading criteria. It took approximately 5-10 minutes to review the rubric criteria and approximately 15 minutes to review the criteria with an actual scoring example.
The control group was not introduced to the rubric and continued to work in the same manner as both groups during the first three weeks in the course. All other instructional methods remained the same for both groups. At the end of the semester the treatment group participated in a qualitative survey. Sample survey questions were: Have you ever used a rubric before? Did you use the rubric in this class? What impact do you feel the rubric had on your learning in this class? In your opinion, what were some of the benefits and drawbacks of using the rubric?
The dependent variables were the students' percentage test and quiz scores. Exams were a combination of multiple choice and problems/essays. The independent variable in this research was the scoring rubric. The control group did not receive instruction on the scoring rubric. Students in the treatment group were shown the scoring rubric prior to the second test.
SCORING RUBRIC DEVELOPMENT
The first step in developing the rubric was to identify questions on the chapter tests which could be assessed using a scoring rubric. The constructed response items, or open-ended problem solving essay questions, were selected. Based on previous experience with problem solving questions on tests, the features of the quality performance criteria were identified. Specific language for each criterion was developed using Paul and Elder's Universal Intellectual Standards (2001). These intellectual standards check the quality of reasoning about a problem, issue or situation. A few samples of student work were used to refine the scoring rubric. Figure I shows the rubric used with the treatment group.
STATISTICAL ANALYSIS AND RESULTS
Correlation analysis and a regression model were developed based on the previously mentioned hypothesis. Changes in students' exam scores were the main variable analyzed. The regression model looked at students' final exam score as a function of their change in performance from exam one through exam three. Tables I and II show the correlations between the final exam score and changes in exam score from the first to second exam and the second to third exam for the control and treatment group.
For the control group, both changes in exam scores are correlated with the final exam score and with each other at the .10 to .001 levels (Table I). For the treatment group, the only significant finding was the relationship between the change in exam score from exam I to exam II and the change in exam score from exam II to exam III. This was significant at the .05 level.
The regression results for the model "Final Exam Score = Intercept + change1 + change2 + error" were estimated for the control group and the treatment group respectively. The control group regression model F-value of 2.67 is significant at the .10 level; i.e., the larger the improvement from exam I to exam II and from exam II to exam III, the higher the final exam score. The t-values for the individual parameters, however, do not show a significant relationship between either independent variable and the final exam score (change1 t-value 1.27, change2 t-value 0.80). The results for the treatment group's regression model show no significant relationship between the dependent variable and the independent variables, either as a group or individually (F-value 0.42, change1 t-value -0.53 and change2 t-value -0.39).
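The model above is ordinary least squares with two predictors. As a sketch of the fit itself (coefficients only, not the F- and t-statistics reported in the paper), here is a pure-Python solve of the normal equations on invented data:

```python
# OLS fit of Final = b0 + b1*change1 + b2*change2 via the normal equations,
# solved with Gaussian elimination. The data are invented for illustration;
# they are not the study's observations and imply nothing about its results.

def ols(y, x1, x2):
    """Return (b0, b1, b2) minimizing the sum of squared residuals."""
    rows = [[1.0, a, b] for a, b in zip(x1, x2)]          # design matrix X
    # Build X'X (3x3) and X'y (3x1).
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    xty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(3)]
    # Gaussian elimination with partial pivoting on the augmented matrix.
    m = [xtx[i] + [xty[i]] for i in range(3)]
    for col in range(3):
        pivot = max(range(col, 3), key=lambda k: abs(m[k][col]))
        m[col], m[pivot] = m[pivot], m[col]
        for k in range(col + 1, 3):
            f = m[k][col] / m[col][col]
            for c in range(col, 4):
                m[k][c] -= f * m[col][c]
    beta = [0.0] * 3
    for i in range(2, -1, -1):
        beta[i] = (m[i][3] - sum(m[i][j] * beta[j] for j in range(i + 1, 3))) / m[i][i]
    return beta


change1 = [5, -3, 8, 2, -1, 6]       # exam I -> II (hypothetical)
change2 = [1, 4, -2, 3, 0, 2]        # exam II -> III (hypothetical)
final = [78, 65, 88, 74, 70, 85]
b0, b1, b2 = ols(final, change1, change2)
print(round(b0, 2), round(b1, 2), round(b2, 2))
```

In practice a statistics package would report the F- and t-values alongside the coefficients; the point here is only that the model form maps directly onto a two-predictor least-squares fit.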
Table I shows the average change in exam grade from the first exam to the second, from the second to the third, and from the third to the final. The average change difference between treatment and control group is what change1 (exam I to exam II), change2 (exam II to exam III) and change3 (exam III to final) show. The treatment group performed worse as a group in their change in exam score (at the .10 level) from exam I to exam II. After that, however, no change difference was noted.
Further analysis in Table III shows individual t-tests between the control group and treatment group changes in exam scores from exam I to II, II to III, and III to IV (final exam score). The change in score from exam I to exam II is significantly higher for the control group than the treatment group, at the .10 level. The changes in score from exam II to exam III and from exam III to IV, however, show no significant difference between the control group and the treatment group. Further discussion on the implications of these findings follows.
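The group comparisons above are independent-samples t-tests on score changes. The paper does not say whether the pooled-variance or Welch form was used; a sketch of the Welch statistic (which does not assume equal group variances), again on invented numbers:

```python
import statistics


def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = statistics.variance(a), statistics.variance(b)
    na, nb = len(a), len(b)
    return (statistics.mean(a) - statistics.mean(b)) / ((va / na + vb / nb) ** 0.5)


# Hypothetical exam-score changes (percentage points), illustration only.
control = [5, 2, 7, 4, 6, 3, 8, 5]
treatment = [1, -2, 3, 0, 2, -1]

t = welch_t(control, treatment)
print(round(t, 2))
```

A positive t here means the control group's average change exceeds the treatment group's, matching the direction of the exam I to exam II finding; converting t to a p-value requires the Welch-Satterthwaite degrees of freedom, which a statistics package would supply.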
DISCUSSION AND LIMITATIONS
One of the limitations of this research was the small sample size. The treatment group had only 22 participants while the control group had 38. Ideally, the treatment group would have had more participants.
Another limitation is that the treatment group entered the research study with a higher degree of knowledge in management accounting. As evidenced by the exam I scores, students in the treatment group possessed greater competency in the subject matter than the control group.
The reliability of the scoring rubric may have been influenced by its design. Typical scoring ranges from 3 to 7 score points; the number of score points for a rubric depends on the purpose of the rubric and the nature of what is to be assessed. The score point range used in this research was 10 points in order to facilitate grading on the exam and calculating final grades for the semester. It may be that so many score points made it difficult for students to distinguish the difference between score points.
When using a scoring rubric, evaluative judgments are made; hence, student work samples and anchor papers (what a score point "2" looks like) are needed to increase consistency. Student work samples were used once, along with a model work example; however, anchor papers may clearly show the different levels of quality on the scoring rubric. In the qualitative survey, students reported that the rubric "was too cluttered ... make it a checklist." Other comments were that [the rubric] was "too outlined" and "Not easily accessible, would be easier to remember if it was kept in a condensed version." Perhaps two independent raters should have been used to acquire consistent scores, thus increasing reliability.
Initially students were puzzled by the rubric. They were unaware of the purpose for using a scoring rubric and did not understand how to use it. One comment from a student was "make it integral from the outset, not when we mess up." It was apparent that students were apprehensive about the usefulness of the scoring rubric, and there was some uncertainty about how to use the scoring rubric effectively in a managerial accounting setting. Data collected in the qualitative survey indicated that students varied in their experience with rubrics and their perceived benefit of the scoring rubrics.
Students in the treatment group completed a survey about their experiences with scoring rubrics and how they used the rubric in class. Of the students who completed the questionnaire, 62.5% reported that they had never used a rubric prior to the research study. This is quite a substantial percentage of the students in the class, and it shows that, for the most part, students were unfamiliar with the purpose of a scoring rubric and could not build on prior knowledge to apply it in the management accounting course. The survey also indicates that 50% of the students reported that they used the rubric in the management accounting class. If only 50% of the students actually used the scoring rubric, then the independent variable, the scoring rubric, may not have been very effective in accounting for the differences between the control and treatment groups.
Of the 37.5% who did have experience, only 12.5% reported using the scoring rubric to assist them. Students' prior experiences with scoring rubrics may have helped or hindered their perceived benefit of the scoring rubric and affected their use of scoring rubrics in the study. One student reported not using the scoring rubric because "I think the concept of the rubric was good, but it's kind of hard to study it to actually learn how to implement it." Some of the perceived benefits were: "it helped me to understand ... it helps you to organize everything," [it helped] "answer all problems as completely as possible," and "written clearly."
Prior research indicates that while rubrics are necessary, they are insufficient for good assessment and feedback. "To know what the rubric's language really means, both the student and judge need to see examples of work considered persuasive or organized" (Wiggins, 1998, p. 158). Students need to know the purpose for using the rubric and how to use it, as well as see its relevance. As part of instruction, students need to be made aware of what it means to meet these criteria and the purpose for using the rubric.
Another consideration which may have affected student use of the rubric was difficulty with the critical thinking terminology used to define the criteria in the rubric. According to Moskal (2003), "the criteria set forth within a scoring rubric should be clearly aligned with the requirements of the task and the stated goals or objectives" (p. 2). The critical thinking terms used as the criteria for evaluation were: clarity, relevance, precision and accuracy. These specific definitions may have posed a problem and caused confusion for students. Perhaps, if students better understood or used the critical thinking terminology as part of everyday instruction, they could have more effectively applied the knowledge on their examinations.
Despite the limitations mentioned above, 54% of the students surveyed reported that using the scoring rubric in the management accounting course had a positive impact on their learning; 18% reported little impact, and 27% said the rubric had no impact on their learning in the course.
Students seemed to be confused by the scoring rubric and did not have "buy-in" or appreciate its benefits. Further research may identify the benefits of a scoring rubric generated with student input. Effective assessment involves students in developing assessment standards and criteria; doing so could address student "buy-in" and help students value the purpose behind using a rubric. The increased time spent on rubric development might also help students understand how to apply the scoring rubric.
The assessment task used in this research was a traditional "paper and pencil" examination, and the scoring rubric was used to evaluate the open-ended items on the examinations and quizzes. A more open-ended "performance task," such as a final group presentation for a simulation project, may be better aligned with the design of the rubric than a constructed-response item on an examination or quiz.
There is an indication that students were trying to use the rubric to deepen their understanding of accounting concepts. Unfortunately, the results showed that performance in the treatment group decreased relative to the control group immediately after students were exposed to the scoring rubric. After the second exam, students in the treatment group had not seen immediate success from using the scoring rubric. The emphasis on using the rubric seemed to diminish, and students' relative scores neither improved nor worsened compared to the control group.
The initial design of the rubric may have misled students into believing that the rubric was the solution to understanding the accounting concepts rather than a tool to facilitate learning. They may have been disappointed with the results of the second exam after trying to use the rubric; consequently, some students may have reduced their emphasis on using it. The scoring rubric was a "generic" rubric that highlighted particular criteria intended to help students critically approach open-ended accounting problems, not the "answers" to the problems. Indications are that there was a design flaw in the study: it is our belief that students were not clear on the purpose and use of the scoring rubric and were unclear about the critical thinking vocabulary (in the present study, terms such as clarity, relevance, precision, accuracy, and breadth). Our next step is to redesign the development of the rubric, which we hope will lead to more effective use of the scoring rubric by students. One of our objectives in the next study is to increase ownership of the rubric by having students actively participate in developing it as a class. These revisions should reduce the confusion surrounding the use of the rubric, and our future research will explore those possibilities.
Ammons, J. and Mills, S. (2005) Course-embedded assessments for evaluating cross-functional integration and improving the teaching-learning process. Issues in Accounting Education, 20(1) February, 1-20.
Arter, J. and McTighe, J. (2001) Scoring rubrics in the classroom: Using performance criteria for assessing and improving student performance. Thousand Oaks, California: Corwin Press.
Catanach, A., Croll, D. and Grinaker, R., (2000) Teaching intermediate financial accounting using a business activity model. Issues in Accounting Education, 15(4) November, 583-603.
Facione, P.A. (1990) Critical thinking: A statement of expert consensus for purposes of educational assessment and instruction. The Delphi Report. Retrieved August 9, 2005, from http://www.insightassessment.com/pdf_files/DEXadobe.PDF
Gregory, K., Cameron, C. and Davies, A. (1997) Setting and using criteria. British Columbia: Connections Publishing.
Kealey, B., Holland, J. and Watson, M. (2005) Preliminary evidence on the association between critical thinking and performance in principles of accounting. Issues in Accounting Education, 20(1) February, 33-50.
Moskal, B. (2003) Developing classroom performance assessments and scoring rubrics. ERIC Document 481715.
Paul, R. and Elder, L. (2001) The miniature guide to critical thinking: concepts and tools. The Foundation for Critical Thinking.
Paul, R. and Elder, L. (2001) Critical thinking: Tools for taking charge of your learning and your life. New Jersey: Prentice Hall.
Popham, W. J. (2002) Classroom assessment: What teachers need to know. 3rd ed. Boston: Allyn & Bacon.
Springer, C. and Borthick, A., (2004) Business simulation to stage critical thinking in introductory accounting. Issues in Accounting Education, 19(3) August, 277-303.
Wiggins, G. (1998) Educative assessment: Designing assessments to inform and improve student performance. San Francisco: Jossey-Bass.
Jeffrey Decker, University of Illinois at Springfield
Michele Ebersole, University of Hawaii at Hilo
Table I: Correlation between exam scores (Treatment Group), N=22
(The number beneath each correlation is the probability under the null hypothesis.)

                             Final exam   Change in score   Change in score
                                          Exam I to II      Exam II to III
Final exam                    1.00000       -0.18701          -0.16817
                                             0.4047            0.4544
Change in score, Exam I-II   -0.18701        1.00000           0.49784
                              0.4047                           0.0184
Change in score, Exam II-III -0.16817        0.49784           1.00000
                              0.4544         0.0184

Table II: Correlation between exam scores (Control Group), N=36
(The number beneath each correlation is the probability under the null hypothesis.)

                             Final exam   Change in score   Change in score
                                          Exam I to II      Exam II to III
Final exam                    1.00000        0.35015           0.31110
                                             0.0363            0.0648
Change in score, Exam I-II    0.35015        1.00000           0.59294
                              0.0363                           0.0001
Change in score, Exam II-III  0.31110        0.59294           1.00000
                              0.0648         0.0001

Table III: Individual t-tests between control and treatment groups for changes
in exam scores (Satterthwaite unequal variance method reported)

Variable    DF      t-value    Pr > |t|
Change1     40.9     1.92       0.0622
Change2     36.5    -0.23       0.8196
Change3     42.7    -0.14       0.8878

Figure 1. Accounting Rubric (Problem Solving), with descriptors for scores of 0, 5, and 10

1) Clarity (Do you understand the problem? Can you start the set-up?)
   0:  Main point was missed.
   5:  Main ideas restated but missing one or more elements.
   10: Example taken from illustration which clearly identified the critical elements.

2) Relevance (Are you able to identify relevant aspects of the problem? Can you set up the problem sequentially?)
   0:  Irrelevant information used to determine solution.
   5:  Steps shown in a sequential manner.
   10: All relevant information presented; irrelevant information disregarded.

3) Precision (Did you label it precisely? Is the set-up easy to follow?)
   0:  Labels not shown, vague labels.
   5:  Some labeling shown, but gaps noticeable.
   10: Correct labels used (connected solution to problem).

4) Accuracy (Is the answer correct?)
   0:  Completely inaccurate answer given.
   5:  Answer inaccurate due to minor miscalculation.
   10: Completely accurate answer given.

5) Breadth (Is there other information you should consider?)
   0:  No attempt made.
   5:  Some additional information included; attempted to elaborate but used information inappropriately. Different situations used but inaccurately applied.
   10: Additional information included and used to solve the problem; problem elaborated by explaining the next step; problem applied in different situations.
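Table III reports t-tests computed with the Satterthwaite unequal variance method (Welch's t-test). As a minimal sketch of how such a statistic is obtained, the t-value and Satterthwaite degrees of freedom can be computed as below; the sample data are invented for illustration and are not the study's exam scores.

```python
import statistics

def welch_t(sample_a, sample_b):
    """Welch's t statistic and Satterthwaite degrees of freedom
    (unequal variances assumed, as in Table III)."""
    na, nb = len(sample_a), len(sample_b)
    # squared standard errors of each sample mean
    se2_a = statistics.variance(sample_a) / na
    se2_b = statistics.variance(sample_b) / nb
    t = (statistics.mean(sample_a) - statistics.mean(sample_b)) / (se2_a + se2_b) ** 0.5
    # Satterthwaite approximation to the degrees of freedom
    df = (se2_a + se2_b) ** 2 / (se2_a ** 2 / (na - 1) + se2_b ** 2 / (nb - 1))
    return t, df

# hypothetical exam-score changes for two groups (not the study's data)
control = [4.0, -2.0, 3.0, 5.0, 0.0, 1.0, 6.0, 2.0]
treatment = [-1.0, 2.0, 0.0, -3.0, 1.0, 4.0, -2.0, 0.0]
t, df = welch_t(control, treatment)
```

The p-value in the Pr > |t| column would then come from the t distribution with the (generally non-integer) Satterthwaite degrees of freedom.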