
Using project scoring rubrics to assess student learning in an information systems program.



1. INTRODUCTION

Assessment as a term refers to the processes used to determine an individual's mastery of complex activities, generally through observed performance (Ewell, 2002:9). The topic of assessment is gaining importance in the field of education (see Banta and Associates, 2002; Heywood, 2000; and others) and in the field of Information Systems education. Insights about the overall process of establishing program assessment in an IS program may be found in Petkova and Jarmoszko (2004), Stemler and Chamblin (2006), Aasheim et al. (2006) and White and McCarthy (2007). Assessment activities in the IS discipline have been boosted by the work of the Center for Computing Education Research (see McKell et al., 2006).

An overview of academic program assessment methods is presented in Palomba and Banta (1999). They group program assessment approaches into the categories of direct and indirect methods. Direct assessment methods include:

* Exams, with multiple choice questions, essays, problem solving using local or national instruments

* Performance measures (demonstrating student competence in one or more skills), including oral presentations, projects, demonstrations, case studies, simulations and portfolios

* Juried activities with outside panels rating student work

* Internships, national licensure or professional exams.

Indirect methods include:

* Questionnaires designed to provide proxy information about student learning

* Interviews

* Focus groups.

Selection of assessment methods across an academic program is a complex process and involves multiple criteria reflecting the goals of the assessment exercise and the existing constraints. Possible selection criteria are the ability of these methods to address the necessary assessment questions and the ability to provide useful information that indicates whether students are learning and developing in desired ways. Other relevant selection criteria are: reliability, validity, timeliness, cost, motivation of the students to participate and ease of understanding and interpretation (Banta and Associates, 2002; Stassen et al., 2001).

Petkova et al. (2006) discuss the implementation of the assessment process in an IS program including these activities: curriculum mapping and syllabus analysis, course-embedded assessment, portfolio assessment and performance appraisal. It is evident that these activities relate to methods which can be classified as direct assessment methods, which are usually considered more objective and preferred to indirect methods. Course level assessment in an IS program is often left to the preferences of the individual professor and typically includes quizzes, home assignments, exams and a team project. At the program level, several assessment approaches are applicable including:

1. A student survey of IS knowledge and expectations or a direct entry-level test at the start of the student's studies in the first specialized courses of the program.

2. A senior survey of student experiences at the end of the capstone course.

3. Standardized exams.

4. Longitudinal assessment of student learning in the core IS courses.

5. Student web portfolios.

The development of instruments for the first two program assessment approaches above may be based on published work such as Kim and Pick (2000). Standardized exams are discussed in Reynolds et al. (2004) and Landry et al. (2006), while their use to support program assessment is demonstrated in White and McCarthy (2007). Longitudinal assessment studies in IS are rare, with few exceptions like the one conducted by Williams and Price (2000). Portfolios are widely used as an assessment method in education, but there are very few reports on their usage in computing programs, such as Higgs and Sabin (2005).

Projects as artifacts demonstrating student performance play an important role in IS program assessment methods. They correspond well to the practical orientation of an IS program as it prepares graduates for industry. Projects are often used for creating student portfolios; they can be included in single IS courses or they may continue through several courses over two or three years, as in Jones and McMaster (2004). Sometimes a project may be the single major assessment outcome of a course, as discussed in Kurzel and Rath (2007). Hence the importance of having rubrics for their assessment that are derived from the overall IS program goals and have a standardized structure across courses (on the development of such rubrics in an IS program see Petkov and Petkova, 2006).

Since project artifacts provide a direct measure of student learning, they are preferred to indirect methods for program assessment (see Palomba and Banta, 1999), especially in a professional field like Information Systems. Measurement and comparison of student performance in different courses through projects are unresolved issues in the fourth type of assessment method listed above, involving longitudinal studies, as well as in the last one, portfolios.

The relevance and importance of this research stems from the fact that using project rubrics for assessment of student learning across an academic program is linked to several as yet unresolved challenges for IS educators, such as how to:

* measure student performance in a uniform and objective way and reach consensus among professors;

* monitor students' performance in some areas that may be better assessed only after allowing the maturation of students' understanding of certain principles (this is especially relevant to techniques which are applicable to several interrelated courses; for example, consider Systems Analysis and Database Design or Systems Analysis and Project Management in an Information Systems (IS) program);

* demonstrate explicitly and in a comparable format the level of skills achieved by the student majority at different stages of the academic program through projects captured in an electronic portfolio.

Addressing these challenges would help make better use of assessment results in subsequent actions to improve teaching and student learning.

The following discussion will concentrate on the use of scoring rubrics to assess student performance in projects in courses at various stages of an academic program. This paper extends the work of Petkov and Petkova (2006) and shows how rubrics for project assessment can be used in measuring student performance in courses of an IS program. The goal of the paper is to present a methodology for using standardized rubrics for measuring student achievement in interrelated courses in an academic program. To the best of the authors' knowledge this has not been reported before in the literature. The approach is demonstrated on a pilot implementation involving two interrelated courses in an IS program, Systems Analysis and Database Design. This research follows a conceptual design approach motivated by the discussion in Hevner et al. (2004) and Boland (2002).

The paper continues with a discussion on the use of project rubrics for assessment of student learning, followed by a brief summary of the authors' previous work on the formulation of a standard set of criteria for assessment of projects across an academic program. Then a methodology for using project rubrics in interrelated courses is proposed and illustrated on a pilot implementation, which is followed by the conclusion.

2. SOME ISSUES IN THE USE OF PROJECT RUBRICS FOR ASSESSMENT OF STUDENT LEARNING

According to Heywood (2000:329), during an educational project a student is asked to plan, specify, make, test and evaluate an artifact or an idea. Past research on the topic of project work is reviewed in Brown et al. (1997:121-122). An instructor may choose for assessment from a variety of outcomes of project work:

* artifact created during the project;

* project report;

* poster presentation/exhibition of the project;

* project presentation;

* log book for the project.

The selection of a particular set of methods will depend on the nature of the project. Thus, in an introductory course on IS fundamentals it is usually the report that is assessed, while in a systems analysis or a database class it is usually the design that is assessed, as the design documents the major learning outcomes for that course (Petkov and Petkova, 2006).

Jones and McMaster (2004) point out that they have used predominantly process-oriented measures to assess student projects instead of the methods for assessing projects listed above. They assess student performance using a set of forms provided by the student team and also by the client for a particular project. Examples of assessment forms/devices include project reports, peer assessment reports and client feedback. Jones and McMaster (2004) justify their use of process measures by the diverse nature of projects (some being consulting projects and others leading to development of a particular product). The authors agree that process measures are applicable but believe that their methodology allows using both process measures and the direct methods listed by Brown et al. (1997) for assessing diverse projects.

Rubrics tell potential performers what elements of performance matter most and how the work to be judged will be distinguished in terms of relative quality (Wiggins, 1998:153). Scoring rubrics are descriptive scoring schemes that are developed by teachers or other evaluators to guide the analysis of products or processes of students' efforts (Moskal and Leydens, 2002).

Wiggins (1998) emphasizes the importance of the criteria/dimensions used to describe the traits central to a successful task performance. Two types of rubrics can be considered in describing these dimensions: holistic and analytic-trait (Wiggins, 1998; Mertler, 2001). An analytic-trait rubric isolates each major trait into a separate rubric with its own criteria, while a holistic rubric yields a single score based on overall impression (Wiggins, 1998:164). The richness of information provided by analytic rubrics is the reason the authors believe they are more appropriate for a multifaceted assessment of student achievement in an IS project.

According to Brualdi (2002:65), it is essential to define clearly the purpose of assessment. Questions that can be used to define the purpose of assessment are: What am I trying to assess? What should the students know? What is the level? What type of knowledge?

The above questions are related also to the role of the course in the particular IS program according to Bloom's taxonomy of student learning outcomes (Bloom, 1956; Gorgone et al., 2002). Hence it is essential that any IS program first defines the skill sets resulting as learning outcomes linked to the goals of the program (see Petkova et al., 2006; Aasheim et al., 2007; and White and McCarthy, 2007).

The answers to the questions listed by Brualdi (2002) serve as background information to the next step: defining the criteria for assessing projects. Without claiming that every student performance needs to be assessed against all five types, Wiggins (1998:168) suggests five categories of criteria to be used in rubrics, relating to the impact, the craftsmanship, the methods, the content and the sophistication of the performance. Further criteria are proposed in Brown et al. (1997).

Another important issue is the distinction between rubrics assessing generic skills or specific subject matter understanding. According to Wiggins (1998:176), reliability is no doubt served by using a rubric that is unique to a task and to the samples of performance that relate to that task.

The authors use different criteria in an analytic rubric for the assessment of generic skills (such as presentation abilities) and for the assessment of subject-specific issues such as technical skills (e.g. see Appendix 1).

Very useful guidelines for designing rubrics can be found in Mertler (2001). The same author provides an example of an analytic rubric, where for every chosen evaluation criterion the same set of four possible levels of student achievement is suggested: beginning, developing, accomplished and exemplary. The authors have followed a similar distinction among the levels of student performance in the work reported here. While in some instances others have proposed single-criterion holistic rubrics, the authors believe that analytic rubrics are more appropriate, as they provide rich information on students' achievements.
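To make the use of such levels concrete, the sketch below (a minimal illustration of our own; neither the numeric 1-4 scale nor the function name is prescribed by Mertler or by the rubrics in the appendices) shows how the four achievement levels could be encoded as numeric ratings so that scores on individual sub-criteria can later be aggregated:

    # Hypothetical mapping of the four achievement levels to numeric ratings.
    # The 1-4 scale is an assumption made only for this illustration.
    ACHIEVEMENT_LEVELS = {
        "beginning": 1,
        "developing": 2,
        "accomplished": 3,
        "exemplary": 4,
    }

    def rating_for(level: str) -> int:
        # Translate an assessor's qualitative judgment into a numeric rating.
        return ACHIEVEMENT_LEVELS[level]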

An important factor in assessing project outcomes is to separate the contribution of the students from that of the supervisor. The latter is unavoidable in the iterative process of refinement of the project deliverables. Brown et al. (1997) quote an earlier suggestion by Black (1975) for minimizing this problem, according to which weights are given to the grades at different stages of the project, as shown in Table 1:

In the authors' opinion, the strong emphasis on evaluation at the implementation stage can be a source of subjective judgment by the professor. It would be more suitable to combine the implementation stage grade with the log book and give the combination a reduced weight of 20% to minimize the possible effect of subjective errors. The authors suggest that the final report be allocated a weight of 30%. The reason for this suggestion is that students will have insufficient motivation for improving the final product if its weight is only 15%.

Whether to assess a project against absolute or developmental standards is another important issue, raised originally in Wiggins (1998). Absolute standards relate to excellent performance accepted within a particular field, while developmental standards allow performance levels that are lower to be judged as "acceptable". The authors find this distinction very useful in understanding how similar rubrics might be applied for assessing projects in different courses at different levels of the curriculum. Developmental standards might be used in introductory and junior-level courses, while absolute standards should be pursued at the level of capstone courses.

The above discussion does not aim to be exhaustive on all issues related to the use of projects for assessment purposes; further details may be found in Brown et al. (1997), Wiggins (1998) and other sources. The next section deals with an issue that is a precondition for the development of the methodology discussed in this paper.

3. ON THE ROLE OF A STANDARD SET OF ASSESSMENT CRITERIA FOR PROJECTS ACROSS AN ACADEMIC PROGRAM

The work on using project assessment rubrics in different courses can be framed within the general assessment process in an IS program (see Petkova et al., 2006) and that process will not be discussed here. Having a standard set of criteria for assessment of projects in different subjects is useful in order to conduct a longitudinal study investigating how student learning develops over the course of the program along each dimension. Petkov and Petkova (2006) have developed a method for deriving uniform project rubrics in different subjects of a program that are aligned with the program goals and are derived from the existing literature on using projects in education. They have suggested a standardized structure for project rubrics for any course within an IS program, along with the use of four criteria in assessing a project. These dimensions are similar to those in the ACM/AIS/AITP curricular recommendations for IS programs (Gorgone et al., 2002). Table 2 shows how they correspond to the general assessment criteria of IS projects derived from the existing literature on the use of projects in education. As is evident from Table 2, the last two criteria are generic while the first two are specific to a particular course.

Following the procedure described by Petkov and Petkova (2006), criteria can be formulated for any course in an IS program (e.g. for Systems Analysis see Table 2). An example of a rubric usable in a Systems Analysis and Design course is provided in Appendix 1. In a similar way one may define the criteria for a course in Database Design. The criteria for both these courses are shown in Table 3. As is evident from Table 3, there are no differences between the two rubrics in the third and fourth criteria due to their generic nature. However, the first two criteria are different in order to reflect the nature of the material covered in the particular course and the learning outcomes associated with the relevant technical and problem solving skills.

While assessing a project on the third criterion, evidence may be found in the project recommendations and in the considerations of the resources associated with them. Other evidence may be found in project logbooks, team member reports and other process-oriented ways of assessing the project. The fourth criterion is associated in some projects with conducting presentations, while in other projects completed in courses such as Systems Analysis or Database Design, presentations may be replaced with project walkthroughs.

Having a uniform structure for the criteria and sub-criteria of the rubrics (see Table 3) allows the measurement of students' progress through their studies in interrelated courses within a program.

Thus, for Systems Analysis and Design, the first two criteria would be defined in a way that fits the nature of the material covered in that course and the learning outcomes associated with these criteria:

* Ability to define user requirements of an information system and to design a system applying relevant techniques including UML (Unified Modeling Language).

* Ability to apply feasibility analysis, requirements analysis and a design process model in practice.

The rubrics for Systems Analysis and for Database Design discussed here were introduced in the fall of 2005 at University A. A similar type of rubric for the Database Design course was introduced at University B, and the authors are expanding the use of such rubrics in other courses as well. In the next section, the methodology for deriving and using project rubrics in interrelated courses is documented and a brief account of its application is provided.

4. A METHODOLOGY FOR DERIVING AND USING PROJECT RUBRICS IN INTERRELATED COURSES AND AN EXAMPLE OF ITS APPLICATION

The adherence to the same number of criteria and sub-criteria organized in a uniform way is a precondition for comparison of student performance in different courses. The use of standardized rubrics allows deriving measures for improvement of student learning, for reaching a balance of emphasis among the four types of outcomes at the various levels of the IS program, and for curriculum improvement.

The use of standardized rubrics in different courses for obtaining evidence about student performance is justified by a principle related to the "absolute comparison mode" in a multicriteria decision making approach called the Analytic Hierarchy Process (AHP) (Saaty, 1990): a particular project is not judged with respect to another similar project but instead is assessed with respect to the ideal level of achievement on a given criterion for a particular course. The absolute comparison mode allows the assessors to draw conclusions about whether students in a particular course have scored better or worse than those in another course with respect to the same criterion. The use of standardized rubrics in different courses, with the same number of criteria and sub-criteria as suggested in Petkov and Petkova (2006), allows a uniform way of evaluating projects across particular subjects in an IS program, as is shown through the methodology presented in this paper. This is not only needed for comparison of student achievement in different courses across a program; it is also a necessary component for the successful implementation of student portfolios, and it may allow tracking the evolution of student performance from one course to another or over a number of years. The steps in the methodology for deriving and using project rubrics in interrelated courses are given below.

Steps in the Methodology:

1. Identify how the learning outcomes for each course relate to the program's academic goals.

2. Define a uniform set of criteria for assessment of student projects in selected courses of the program.

3. Customize the specific criteria and sub-criteria that reflect the nature of a particular course, while keeping the number and nature of sub-criteria the same across courses. The generic assessment criteria are essentially the same in every course.

4. Define appropriate degrees of student performance for each criterion in the rubric.

5. Communicate the rubric to the students at the start of the project.

6. Use the rubric for rating the achievement of each team on every criterion evidenced through the completed project artifacts.

7. Calculate the average rating on each sub-criterion across all student teams and then sum the average ratings for all sub-criteria of a given criterion (a computational sketch of steps 7 and 8 follows this list).

8. Use the total rating for comparison of each team's performance in a course and of student teams in different courses across the program and apply the results for improvement of student learning and teaching practices.
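The computational part of the methodology (steps 7 and 8) can be sketched as follows. This is a minimal illustration in Python; the team names, sub-criterion labels and ratings are hypothetical and serve only to show the aggregation, they are not taken from the courses discussed later.

    # Minimal sketch of steps 7 and 8 with hypothetical ratings on a 1-4 scale.
    # ratings[team][(criterion, sub_criterion)] holds one team's score.
    ratings = {
        "Team A": {("1. Technical proficiency", "1.1"): 3,
                   ("1. Technical proficiency", "1.2"): 4,
                   ("4. Presentation", "4.1"): 4,
                   ("4. Presentation", "4.2"): 3},
        "Team B": {("1. Technical proficiency", "1.1"): 2,
                   ("1. Technical proficiency", "1.2"): 3,
                   ("4. Presentation", "4.1"): 3,
                   ("4. Presentation", "4.2"): 3},
    }

    def average_per_sub_criterion(ratings):
        # Step 7a: average the rating on every sub-criterion over all teams.
        totals, counts = {}, {}
        for team_ratings in ratings.values():
            for key, value in team_ratings.items():
                totals[key] = totals.get(key, 0) + value
                counts[key] = counts.get(key, 0) + 1
        return {key: totals[key] / counts[key] for key in totals}

    def total_per_criterion(averages):
        # Step 7b: sum the sub-criterion averages within each criterion group.
        group_totals = {}
        for (criterion, _sub), avg in averages.items():
            group_totals[criterion] = group_totals.get(criterion, 0) + avg
        return group_totals

    # Step 8: these criterion totals can be compared across teams and across
    # courses whose rubrics share the same structure.
    print(total_per_criterion(average_per_sub_criterion(ratings)))

Because the rubric structure is kept identical across courses, the criterion totals computed this way can be placed side by side for, say, the Systems Analysis and the Database Design classes, which is the comparison reported later in this section.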

It is essential that the project rubrics across interrelated courses have a similar structure and that criteria of similar nature are ordered and grouped in the same way. This will allow the comparison of results across the courses. It is necessary to underline that student performance needs to be measured on every indicator with respect to the ideal for a particular course criterion, since this condition justifies the comparison of achievement in different courses, as pointed out earlier. The assessment criteria need to be independent of each other. The mathematical foundations of the approach are based on the Simple Multiattribute Rating Technique (SMART), described in von Winterfeldt and Edwards (1986), which uses direct measurements as ratings in a way that is similar to the "absolute comparison mode" in AHP (see Saaty, 1990).

It is also possible to assign weights to the criteria and sub-criteria if their importance is considered different within a particular academic program. Then, before step seven in the above methodology, one may include the calculation of the project rating on a sub-criterion as the multiplication of the weight of the corresponding sub-criterion and the rating on it. However, in the illustrative example (see Appendices 2 and 3) the weight of all criteria is the same, equal to 1.
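Stated as a formula (our notation; the paper describes this rule in prose), the SMART-style overall rating of a project is

    R_project = \sum_{i} \sum_{j} w_{ij} r_{ij}

where r_{ij} is the project's rating on sub-criterion j of criterion i and w_{ij} is the weight of that sub-criterion. With all weights equal to 1, as in the illustrative example of Appendices 2 and 3, the overall rating reduces to the plain sum of the ratings, and the course-level measures reported in the last two columns of the appendices are the averages of the r_{ij} over all teams, summed within each criterion group.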

The rubrics have been used by the authors since 2006. Assessments of projects from a fall 2006 class in Systems Analysis and Design and a spring 2007 class in Database Design at University A were used for the purposes of demonstrating the application of the proposed methodology (see Appendices 2 and 3). The suggested approach allows the flexibility of reflecting the specific features of a course through the specific sub-criteria, as is shown in Table 2.

Following Wiggins (1998), the authors applied developmental standards during the assessment of the student results in both courses. Due to the small number of projects in each class, these results are only for illustrative purposes and cannot lead to statistical generalizations about the students' performance.

Each sub-criterion was considered equally important. The average rating for all projects in a course on each sub-criterion and their totals are shown in the last two columns in Appendices 2 and 3. These measures allow comparisons between student learning in different courses provided that the nature of the group criteria in the rubrics for each course is similar and they have a similar number of coherent sub-criteria in a given group.

In three of the four groups of rubric criteria, the Database course group did not perform as well as the Systems Analysis group. This may be due to the fact that three of the top students in the Systems Analysis course (or 20% of that class) did not proceed immediately that year to the Database course, as they are part-time students. Another possible factor is that only eight out of the ten students in Database Design had taken the prerequisite course in Systems Analysis, while the remaining two were admitted to the course for contingency reasons. The negative differences between the average results for all groups in the courses, however, are relatively small. They were mostly related to the criteria requiring technical proficiency in the techniques taught in a particular course (group 1 in the rubric criteria), organizational, interpersonal and time management skills during the execution of the project (group 3 in the rubric criteria) and the students' communication skills (group 4 in the rubric criteria).

On two occasions concerning sub-criteria associated with problem solving skills (the second rubric criterion in Table 3), the Database Design class performed better. Those were related to the understanding of the requirements assumptions/relevant data and also to the understanding of the UML process. These improvements show a certain maturation of problem solving skills in the Database Design class compared to the Systems Analysis class, which is a positive outcome. Since the two courses are closely interrelated, it is indeed expected that the understanding of requirements analysis and UML will be better in the Database Design course, which is taught after Systems Analysis.

5. CONCLUDING REMARKS AND POSSIBLE FUTURE WORK

This paper provided an account of a methodology for using project rubrics for assessment of student achievement in different courses of an IS program and showed how it was applied in practice. The application of the proposed approach provides insights into several significant questions which can guide further efforts to improve teaching practices:

* Are we achieving improved levels of proficiency in a subsequent course on a particular criterion?

* What skills that the students exhibit at a particular stage of their degree require further attention in a subsequent course?

* Can we identify substantial negative deviations in student achievement along any of the four general criteria within a course that need corrective action?

The use of standardized rubrics allows a uniform way of evaluating projects across different courses in an IS program. It is essential also for instructors to apply the same approach for assessing projects in courses with more than one section. It is important that faculty realize the need for having a common approach.

A limitation of our illustration of applying the rubrics in two courses was the small number of student projects due to the small size of the IS program at University A. Another potential limitation of the approach is that student populations in an academic program are usually not homogeneous as students progress in their degree studies at universities that do not have established learning communities of cohorts.

There are some unresolved theoretical and practical problems in using rubrics in assessment in general. The authors agree with Mertler (2001), who points out that a potentially frustrating aspect of scoring student work with rubrics is the issue of converting them to "grades." Other problems include the open question as to how precise an analytic rubric can be in comparison to a holistic one. A further open issue for research is the efficiency in using various types of rubrics.

The next steps in the authors' work on rubric design and implementation in various IS courses are to:

* Continue gathering data on student learning in other interrelated courses through rubrics in IS programs at Universities A and B.

* Define benchmarks indicating the desired level of student learning for each criterion within each course in the program or at least in several courses.

* Explore the role of learning communities in promoting better student learning outcomes evidenced through projects assessed with similar rubrics.

* Expand the research on assessing student learning from projects in interrelated IS courses to overall longitudinal IS program assessment and the use of e-portfolios.

The research reported in this paper shows that the proposed methodology is applicable for assessment of student achievement in projects in interrelated courses or at particular key points of an IS program, which is useful for programs that are looking to quantify their assessment work based on rubrics.

APPENDIX 1.

RUBRICS FOR PROJECT ASSESSMENT IN SYSTEMS ANALYSIS AND DESIGN

The criteria that will be used in the course Systems Analysis and Design need to reflect the four general criteria for evaluation of projects by measuring the learning outcomes covered in the course and the project-specific goals:

1. Ability to define user requirements of an information system and to design a system in the Unified Modeling Language(UML).

2. Ability to apply techniques for feasibility analysis, requirements analysis and UML modeling in practice.

3. Ability to present the findings of the project within the report, including time management issues.

4. Ability to provide a convincing presentation.

Hence the following rubrics were defined for the evaluation of the project report:

N.B. Each sub-criterion was considered equally important. The columns on the right side contain the assessment evaluations of each project on every sub-criterion. Following the Simple Multi Attribute Rating Technique (SMART), the overall rating for a project would be obtained by adding all ratings in a column. If the weights of the sub-criteria were different, then the overall rating would be the sum of the multiplications of every rating by the weight of the corresponding sub-criterion.

We are calculating here the average rating for each sub-criterion and also the total of the average ratings for sub-criteria within each group, shown in the last two columns. These are useful measures allowing comparisons between student learning in different courses provided that the nature of the group criteria in the rubrics for each course is similar and they have a similar number of coherent sub-criteria in a given group.

6. REFERENCES

Aasheim, C., J.A. Gowan and H. Reichgelt (2007), "Establishing an Assessment Process for a Computing Program," Information Systems Education Journal (ISEDJ), 5(1).

Banta, T.W. and Associates (2002). Building a Scholarship of Assessment, Jossey-Bass Publ.

Black, J. (1975), "Allocation and assessment of project work in the first year of the engineering degree course at the University of Bath," Assessment in Higher Education, 1.

Bloom, B.S. (1956), Taxonomy of Educational Objectives: The Classification of Educational Goals: Handbook I, Cognitive Domain. New York; Toronto: Longmans, Green.

Brown, G., J. Bull and M. Pendlebury (1997), Assessing Student Learning in Higher Education, Routledge, London and New York.

Brualdi, A. (2002), "Implementing Performance Assessment in the Classroom," in Boston, C. (ed.), Understanding scoring rubrics: a guide to teachers, pp. 1-4, Clearinghouse on Education and Assessment, University of Maryland.
Boland, R. (2002), "Design in the Punctuation of Management Action," paper presented at the Workshop on Managing as Designing at CWRU, available from http://design.cwru.edu/2002workshop/Positions/boland.doc.

Ewell, P.T. (2002), "An Emerging Scholarship: A Brief History of Assessment," in T.W. Banta (ed.), Building a Scholarship of Assessment. Jossey-Bass Publ.

Gorgone, J.T., G.B. Davis, J.S. Valacich, H. Topi, D.L. Feinstein and H.E. Longenecker (2002). "IS 2002: Model Curriculum and Guidelines for Undergraduate Degree Programs in Information Systems," available from http://www.acm.org/education/is2002.pdf.

Heywood, J. (2000). Assessment in Higher Education: Student Learning, Teaching, Programmes and Institutions, Jessica Kingsley Publishers.

Hevner, A.R., March, S.T., Park, J. and Ram, S. (2004), "Design Science in Information Systems Research," MIS Quarterly, 28(1), 75-105.

Higgs, B. and Sabin, M. (2005), "Towards Using Online Portfolios in Computing Courses," Proceedings SIGITE 2005, 323-328.

Kurzel, F. and Rath, M. (2007). "Project based learning and learning environments," Issues in Informing Science and Information Technology (IISIT), 4, 503-510.

Landry, J., Pardue, H., Longenecker, H., Reynolds, J., McKell, L. and White, B. (2006), "Using the IS Model Curriculum and CCER Exit Assessment Tools For Course-Level Assessment," Information Systems Education Journal (ISEDJ), 4(73).

McKell, L., J. Reynolds, H. Longenecker, J. Landry and H. Pardue (2006), "The Center for Computing Education Research (CCER): A Nexus for IS Institutional and Individual Assessment," Information Systems Education Journal (ISEDJ), 4(69).

Mertler, C.A. (2001). "Designing scoring rubrics for your classroom," Journal of Practical Assessment, Research and Evaluation, 7(25).

Moskal, B.M. and Leydens, J.A. (2002), "Scoring Rubric Development: Validity and Reliability," in Boston, C. (ed.), Understanding scoring rubrics: a guide to teachers, pp. 25-33, Clearinghouse on Education and Assessment, University of Maryland.

Palomba, C. and Banta, T. (1999). Assessment Essentials, Jossey-Bass, San Francisco.

Petkov, D. and Petkova, O. (2006), "Development of Scoring Rubrics for Projects as an Assessment Tool across an IS Program," Issues in Informing Science and Information Technology (IISIT), 3, 499-509.

Petkova, O. and Jarmoszko, T. (2004), "Assessment loop for the MIS program at Central Connecticut State University: a practice of learning, reflection and sharing," ISECON 2004 Proceedings. Appeared later in ISEDJ in 2006.

Petkova, O., Jarmoszko, A.T. and D'Onofrio, M.J. (2006). "Management Information Systems (MIS) program assessment: Toward establishing a foundation," Journal of Informatics Education Research (JIER), 8(2), Spring.

Pick, J.B. and Kim, J. (2000). "Program assessment in an undergraduate information systems program: Prospects for curricular and programmatic enhancement," Proceedings of the 15th Annual Conference of the IAIM, Brisbane, Australia, 2000.

Reynolds, J.H., H.E. Longenecker Jr., J.P. Landry, J.H. Pardue and B. Applegate (2004). "Information Systems National Assessment Update: The Results of a Beta Test of a New Information Systems Exit Exam Based on the IS 2002 Model Curriculum," Information Systems Education Journal, 2(24).

Saaty, T. (1990), Multicriteria Decision Making--The Analytic Hierarchy Process, 2nd ed., RWS Publications, Pittsburgh.

Stassen, L.A.M., K. Doherty and M. Poe (2001). Program-based Review and Assessment: Tools and Techniques for Program Improvement, Office of Academic Planning and Assessment, University of Massachusetts, Amherst.

Stemler, L. and C. Chamblin (2006), "The Role of Assessment in Accreditation: A Case Study for an IS Department," Information Systems Education Journal (ISEDJ), 4(39).

White, B. and McCarthy, R. (2007). "The Development of a Comprehensive Assessment Plan: One Campus' Experience," Proceedings ISECON 2007.

Wiggins, G. (1998). Educative Assessment: Designing Assessments to Inform and Improve Student Performance, Jossey-Bass Publ. Co.

Williams, S.R. and B.A. Price (2000), "Strengths and Weaknesses of an Information Systems Program: A Longitudinal Assessment of Student Perceptions," Proceedings of the 15th Annual IAIM Conference, Brisbane, Australia.

Winterfeldt, D. von and Edwards, W. (1986). Decision Analysis and Behavioral Research, Cambridge University Press, Cambridge, UK.

Doncho Petkov is a Full Professor and Coordinator of the BIS program at Eastern Connecticut State University, USA. He is a deputy editor (USA) for Systems Research and Behavioral Science and Senior Area Editor (Software Engineering) of IJITSA. His publications have appeared in the Journal of Systems and Software, Decision Support Systems, Telecommunications Policy, IRMJ, International Journal on Technology Management, IJITSA, JITTA, JISE, JITCA, South African Computing Journal and elsewhere.

Olga Petkova is a Full Professor in MIS at Central Connecticut State University, USA. Previously she has taught at several universities in South Africa and Zimbabwe and worked at the Bulgarian Academy of Sciences in Sofia. Her publications are in software development productivity, systems thinking and Information Systems education. They have appeared in Decision Support Systems, Journal of Information Technology Theory and Applications (JITTA), Journal of Informatics Education Research, JISE, JITCA and elsewhere.

Marianne D'Onofrio is a Full Professor and Chair in MIS in the School of Business at Central Connecticut State University, USA. She earned her Ph.D. from The Ohio State University. Prior to CCSU, Dr. D'Onofrio was a professor at Utah State University, University of Wisconsin-Madison, and Indiana University. She is on the boards of several academic journals. Her research and teaching interests are in Group Support Systems and Decision Support.

Andrzej Tomasz Jarmoszko is Associate Professor in MIS at Central Connecticut State University, USA. His primary teaching areas are systems analysis and design and data communications and networking. Prior to joining CCSU, Dr. Jarmoszko was Manager for Strategic Planning at a major mobile communications company in Central Europe. His current research interests include information systems curriculum, mobile information systems, aligning knowledge management with the strategy process and strategic management in the communications industry.

Doncho Petkov

Eastern Connecticut State University

Willimantic, CT 06226, USA

petkovd@easternct.edu

Olga Petkova

Marianne D'Onofrio

A.T. Jarmoszko

Central Connecticut State University

New Britain, CT 06050, USA

petkovao@ccsu.edu donofrio@ccsu.edu jarmoszkoa@ccsu.edu
Table 1: Weight allocation of deliverables in a project as
percentage of the total grade (based partly on Black, 1975)

                        Weight by
Deliverable            Black (1975)     Our weight

Implementation stage       30%        Combine 1 and 2
Log book                    5%              20%
Draft report               50%              50%
Final report               15%              30%

Table 2: General criteria for assessment of IS projects and the
corresponding criteria derived from the literature

General criteria for assessment of
IS projects (derived from the
IS2002 standard, which is usually
used also to guide the goals of a          Derived criteria
particular IS program)                     from the literature

Technical level of proficiency             Craftsmanship is
demonstrated through application of        the term used by
the technical knowledge associated         Wiggins (1998).
with the course.

Problem solving skills and ability to      Method used in
organize information, ability to           project, content
compare a problem situation against        (Wiggins, 1998).
best business practices or to select and
justify the best alternative solution.

Organizational, interpersonal and time     Impact (Wiggins,
management skills demonstrated in          1998), Project
the execution of the project and its       management skills
recommendations                            (Brown, 1997).

Communication skills, demonstrated         Sophistication of
through the organization of the            performance
project and its presentation               (Wiggins, 1998).

Table 3. An example of how the general project assessment criteria
(developed as a synthesis of the IS program learning goals and the
published research on project evaluation) can be transformed into a
uniform set of criteria in two IS subjects: a course on Systems
Analysis and Design and a course on Database design

General Project Assessment              Systems Analysis and Design
Criteria

Criteria                                Criteria and sub-criteria

1. Technical level of proficiency       1. Ability to define user
demonstrated through application        requirements of an information
of the technical knowledge              system and to design a system
associated with the subject.
                                        1.1. Correct application of
                                        analysis and design principles
                                        and techniques including UML

                                        1.2. Appropriate requirements
                                        gathering

                                        1.3. Is the final product
                                        relevant for a practical
                                        implementation of the
                                        information system

2.Problem solving methodological        2. Ability to apply feasibility
skills and ability to organize          analysis, requirements
information, ability to compare a       analysis and a design process
problem situation against best          model in practice:
business practices or to select and
justify the best alternative solution   2.1. How are requirements
                                        assumptions relevant

                                        2.2. Is there evidence of
                                        application of the analysis
                                        and design principles

                                        2.3. Is there evidence of
                                        applying correctly the system
                                        life cycle model

3. Organizational, interpersonal        3. Execution and
and time management skills              Recommendations of the
demonstrated in the execution of        project
the project and its
recommendations                         3.1. Have the main points to
                                        emerge from the project been
                                        picked up for discussion in
                                        the documentation?

                                        3.2. Is there a consideration
                                        of the resources needed for the
                                        suggested system and the
                                        schedule?

                                        3.3. Was the project developed
                                        within the time allocated for
                                        the analysis and design phases?

4. Communication skills,                4. Presentation
demonstrated through the
organization of the project and its     4.1. Clarity of explanation and
presentation                            conclusions

                                        4.2. Visual impact of the
                                        project walk-through

                                        4.3. Use of audio visual
                                        aids, body language

                                        4.4. Response to questions

General Project Assessment              Database design
Criteria

Criteria                                Criteria and sub-criteria

1. Technical level of proficiency       1. Ability to define user
demonstrated through application        requirements of a data model
of the technical knowledge              and transform them into
associated with the subject.            logical and physical design

                                        1.1. Correct application of
                                        database design principles and
                                        UML techniques

                                        1.2. Appropriate data
                                        collection

                                        1.3. Is the final product
                                        relevant for a practical
                                        implementation of the database

2. Problem solving/methodological       2. Apply suitable data,
skills and ability to organize          database administration and
information, ability to compare a       UML process knowledge
problem situation against best
business practices or to select and     2.1. How relevant is the
justify the best alternative solution   sample data?

                                        2.2. Is there evidence of
                                        application of database
                                        administration principles

                                        2.3. Is there a consideration
                                        of UML process knowledge

3. Organizational, interpersonal        3. Execution and
and time management skills              Recommendations of the project
demonstrated in the execution of
the project and its                     3.1. Have the main points to
recommendations                         emerge from the project been
                                        picked up for discussion in the
                                        documentation?

                                        3.2. Is there a consideration
                                        of the resources needed for the
                                        suggested system and the
                                        schedule?

                                        3.3. Was the project developed
                                        within the time allocated for
                                        the analysis and design phases?

4. Communication skills,                4. Presentation
demonstrated through the
organization of the project and its     4.1. Clarity of explanation and
presentation                            conclusions

                                        4.2. Visual impact of the
                                        project walk-through

                                        4.3. Use of audio visual aids,
                                        body language

                                        4.4. Response to questions

                                            Definition of rubrics and
                                                    scale (1-4)

                                             Beginning     Developing
Criteria                                         1             2

1. Ability to define user requirements
of an information system and to design a
system

1.1. Correct application of analysis and   Inappropriate   Partial
design principles and techniques
including UML

1.2. Appropriate requirements gathering    No evidence     Secondary

1.3. Is the final product relevant for a   No evidence     Occasional
practical implementation of the
information system

2. Ability to apply feasibility
analysis, requirements analysis and a
design process model in practice:

2.1. How are requirements assumptions      Initial         Developing
relevant

2.2. Is there evidence of application      No appraisal    Occasional
of the analysis and design principles

2.3. Is there evidence of correctly        No attempt      Somewhat
applying the system life cycle model
and the UML process model?

3. Project execution and findings

3.1. Have the main points to emerge        No evidence     Occasional
from the project been picked up for
discussion?

3.2. Is there a consideration of the       No appraisal    Occasional
resources needed for the suggested
system and the schedule?

3.3. Was the project developed within      No              Mostly on
the time allocated for the analysis                        time
and design phases?

4. Presentation
4.1. Clarity of explanation and            Lacking         Developing
conclusion

4.2. Impact of the presentation/project    No              Only text
walk-through

4.3. Use of audio visual aids, body        Poor            Developing
language

4.4. Response to questions                 Poor            Developing

                                           Definition of rubrics and
                                                  scale (1-4)

                                           Accomplished    Exemplary
Criteria                                         3             4

1. Ability to define user requirements
of an information system and to design a
system

1.1. Correct application of analysis and   Well-defined    Results
design principles and techniques                           analyzed
including UML

1.2. Appropriate requirements gathering    Interviews      Integrated
                                                           sources

1.3. Is the final product relevant for a   Good evidence   Evidence and
practical implementation of the                            good
information system                                         analysis

2. Ability to apply feasibility
analysis, requirements analysis and a
design process model in practice:

2.1. How are requirements assumptions      Very good       Very
relevant                                                   well
                                                           justified

2.2. Is there evidence of application      Attempted       Critical
of the analysis and design principles      minor errors    appraisal
                                                           no errors

2.3. Is there evidence of correctly        Attempted       Well defined
applying the system life cycle model
and the UML process model?

3. Project execution and findings

3.1. Have the main points to emerge        Good evidence   Evidence and
from the project been picked up for
discussion?

3.2. Is there a consideration of the       Attempted       Well defined,
resources needed for the suggested         minor errors    no errors
system and the schedule?

3.3. Was the project developed within      On time         On time and
the time allocated for the analysis                        with no
and design phases?                                         errors

4. Presentation
4.1. Clarity of explanation and            Very good       Excellent
conclusion

4.2. Impact of the presentation/project    PPTS with       Well
walk-through                               color           designed

4.3. Use of audio visual aids, body        Very good       Excellent
language

4.4. Response to questions                 Very good       Excellent
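For readers who wish to automate record keeping for this instrument, the following minimal Python sketch (our illustration, not part of the original rubric) shows one way to encode the uniform criteria, their sub-criteria and the common 1-4 achievement scale as a simple data structure.

# Minimal sketch (illustrative assumption): the uniform rubric encoded as a
# data structure, with every sub-criterion scored on the shared 1-4 scale.

ACHIEVEMENT_LEVELS = {1: "Beginning", 2: "Developing",
                      3: "Accomplished", 4: "Exemplary"}

RUBRIC = {
    "1. Requirements definition and system design": [
        "1.1 Correct application of analysis and design principles (incl. UML)",
        "1.2 Appropriate requirements gathering",
        "1.3 Relevance of the final product for practical implementation",
    ],
    "2. Feasibility, requirements analysis and design process model": [
        "2.1 Relevance of the requirements assumptions",
        "2.2 Evidence of applying the analysis and design principles",
        "2.3 Correct application of the system life cycle / UML process model",
    ],
    "3. Project execution and findings": [
        "3.1 Main points picked up for discussion",
        "3.2 Consideration of resources and schedule",
        "3.3 Completion within the allocated time",
    ],
    "4. Presentation": [
        "4.1 Clarity of explanation and conclusions",
        "4.2 Impact of the presentation/project walk-through",
        "4.3 Use of audio-visual aids, body language",
        "4.4 Response to questions",
    ],
}

def label(score):
    """Translate a 1-4 rubric score into its achievement label."""
    return ACHIEVEMENT_LEVELS[score]

print(label(3))  # "Accomplished"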

APPENDIX 2.
ASSESSMENT RESULTS FOR THE PROJECTS IN A SYSTEMS ANALYSIS AND DESIGN
CLASS (FALL 2006)

Criteria                                 Proj1   Proj2   Proj3   Proj4

1. Ability to define user requirements
of an information system and to design
a system

1.1. Correct application of analysis       3       3       4       2
and design principles and techniques
including UML

1.2. Appropriate requirements              3       4       4       3
gathering

1.3. Is the final product relevant for     2       3       4       2
a practical implementation of the
information system

2. Ability to apply feasibility
analysis, requirements analysis and
a design process model in practice

2.1. How are requirements assumptions      3       3       3       3
relevant

2.2. Is there evidence of application      3       3       4       3
of the covered analysis and
design principles

2.3. Is there evidence of correct          2       3       4       2
application of the systems life
cycle model and the UML process
model

3. Project execution and findings

3.1. Have the main points to emerge        3       3       3       3
from the project been picked up for
discussion?

3.2. Is there a consideration of the       3       2       4       2
resources needed for the suggested
system and the schedule?

3.3. Was the project developed             3       3       3       2
within the time allocated for the
analysis and design phases?

4. Presentation

4.1. Clarity of explanation and            3       3       4       3
conclusion

4.2. Impact of the presentation/           3       4       4       2
project walk-through

4.3. Use of audio visual aids, body        3       3       3       3
language

4.4. Response to questions                 3       4       4       3

Overall rating for the project:           37      41      48      33

                                                Criteria
Criteria                                 AVG     Totals

1. Ability to define user requirements
of an information system and to design
a system

1.1. Correct application of analysis     3.00     9.25
and design principles and techniques
including UML

1.2. Appropriate requirements            3.50
gathering

1.3. Is the final product relevant for   2.75
a practical implementation of the
information system

2. Ability to apply feasibility                   9.00
analysis, requirements analysis and
a design process model in practice

2.1. How are requirements assumptions    3.00
relevant

2.2. Is there evidence of application    3.25
of the covered analysis and
design principles

2.3. Is there evidence of correct        2.75
application of the systems life
cycle model and the UML process
model

3. Project execution and findings                 8.50

3.1. Have the main points to emerge      3.00
from the project been picked up for
discussion?

3.2. Is there a consideration of the     2.75
resources needed for the suggested
system and the schedule?

3.3. Was the project developed           2.75
within the time allocated for the
analysis and design phases?

4. Presentation                                  13.00

4.1. Clarity of explanation and          3.25
conclusion

4.2. Impact of the presentation/         3.25
project walk-through

4.3. Use of audio visual aids, body      3.00
language

4.4. Response to questions               3.50

Overall rating for the project:

DEFINITIONS OF ACHIEVEMENT: Beginning = 1, Developing = 2, Accomplished = 3, Exemplary = 4
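The arithmetic behind the AVG column, the criteria totals and the overall ratings above can be reproduced with a few lines of Python; the sketch below is our illustration of that aggregation, using the sub-criterion scores reported in Appendix 2, and is not code from the original study.

# Minimal sketch (illustrative): aggregation of the Appendix 2 rubric scores.
scores = {  # sub-criterion -> scores for Proj1..Proj4 on the 1-4 scale
    "1.1": [3, 3, 4, 2], "1.2": [3, 4, 4, 3], "1.3": [2, 3, 4, 2],
    "2.1": [3, 3, 3, 3], "2.2": [3, 3, 4, 3], "2.3": [2, 3, 4, 2],
    "3.1": [3, 3, 3, 3], "3.2": [3, 2, 4, 2], "3.3": [3, 3, 3, 2],
    "4.1": [3, 3, 4, 3], "4.2": [3, 4, 4, 2], "4.3": [3, 3, 3, 3],
    "4.4": [3, 4, 4, 3],
}

# AVG column: mean of each sub-criterion across the four projects.
avg = {sub: sum(vals) / len(vals) for sub, vals in scores.items()}

# Criteria totals: sum of the sub-criterion averages within each main criterion.
totals = {}
for sub, a in avg.items():
    crit = sub.split(".")[0]
    totals[crit] = totals.get(crit, 0.0) + a

# Overall rating per project: sum of its thirteen sub-criterion scores.
overall = [sum(col) for col in zip(*scores.values())]

print(avg["1.2"], totals["4"], overall)  # 3.5 13.0 [37, 41, 48, 33]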

APPENDIX 3. RESULTS FOR THE PROJECTS IN A DATABASE DESIGN CLASS
(SPRING 2007)

Criteria                                        Proj1   Proj2   Proj3

1. Ability to define user requirements of a
data model and transform them into logical
and physical design.

1.1. Correct application of design                3       3       3
principles.

1.2. Appropriate data collection                  3       3       3

1.3. Is the final product relevant for a          3       2       3
practical implementation of the database

2. Apply suitable data, database
administration and security
principles:

2.1. How is the sample data relevant              4       3       3

2.2. Is there evidence of application of          3       3       3
database administration principles

2.3. Is there a consideration of UML              3       3       3
knowledge

3. Project execution and findings

3.1. Have the main points to emerge from          3       3       3
the project been picked up for discussion?

3.2. Is there a consideration of the resources    3       2       3
needed for the suggested database and the
schedule?

3.3. Was the project developed within the time    3       3       2
allocated for the phases?

4. Presentation

4.1. Clarity of explanation and conclusions       4       3       3

4.2. Impact of the presentation/project           3       3       3
walk-through

4.3. Use of audio visual aids, body language      3       3       3

4.4. Response to questions                        3       3       2

Overall rating for the project:                  41      37      37

                                                       Criteria
Criteria                                        AVG     totals

1. Ability to define user requirements of a               8.67
data model and transform them into logical
and physical design.

1.1. Correct application of design              3.00
principles.

1.2. Appropriate data collection                3.00

1.3. Is the final product relevant for a        2.67
practical implementation of the database

2. Apply suitable data, database                         9.33
administration and security
principles:

2.1. How is the sample data relevant            3.33

2.2. Is there evidence of application of        3.00
database administration principles

2.3. Is there a consideration of UML            3.00
knowledge

3. Project execution and findings                        8.34

3.1. Have the main points to emerge from        3.00
the project been picked up for discussion?

3.2. Is there a consideration of the resources  2.67
needed for the suggested database and the
schedule?

3.3. Was the project developed within the time  2.67
allocated for the phases?

4. Presentation                                         12.00

4.1. Clarity of explanation and conclusions     3.33

4.2. Impact of the presentation/project         3.00
walk-through

4.3. Use of audio visual aids, body language    3.00

4.4. Response to questions                      2.67

Overall rating for the project:

DEFINITIONS OF ACHIEVEMENT: Beginning = 1, Developing = 2, Accomplished = 3, Exemplary = 4

COPYRIGHT 2008 Journal of Information Systems Education
No portion of this article can be reproduced without the express written permission from the copyright holder.
Author: Petkov, Doncho; Petkova, Olga; D'Onofrio, Marianne; Jarmoszko, A.T.
Publication: Journal of Information Systems Education
Date: Jun 22, 2008