
Standardized testing for outcomes assessment: Reanalysis of the Major Field Test for the MBA (MFT-MBA), with corrections and clarifications (rejoinder to R. Wright, "Standardized testing for outcome assessment: Analysis of the Educational Testing Service MBA tests").

Wright (2010) questioned the reliability and validity of the Major Field Test for the MBA (MFT-MBA) and made a series of claims against the use of the test as an outcomes assessment for MBA programs. These claims, including an incorrect interpretation of schools' mean percent correct scores (also called the assessment indicators, or AIs), are summarized and corrected in this paper. The paper concludes that the MFT-MBA is a reliable and valid tool that can be used as an outcomes measure by MBA programs and accreditation organizations.

**********

The Major Field Test for the MBA (MFT-MBA) assesses the mastery of concepts, principles, and knowledge of MBA students nearing completion of their studies. The test is developed by a panel of subject matter experts, including a group of MBA faculty members (ETS, 2010a). It includes 124 items covering five subject areas and skills that are common to most MBA programs. Schools can add an optional section of 50 locally authored items and administer it together with the standard MFT-MBA.

The MFT-MBA reports several types of scores for individuals and institutions. For an individual, a scale score is reported. For an institution, the mean scale score of the group is reported, along with scores on the Assessment Indicators (AIs). The AIs measure the performance of the group on questions in five content areas and are expressed as the mean percent correct score across all questions in a content area. In addition, the MFT-MBA provides percentile ranks for these individual and institutional scores, based on comparative data from all participating MBA programs. An optional item information report can also be obtained to evaluate item-level performance for a program or cohort. ETS strongly recommends that these scores and comparative information be used in conjunction with other information when making decisions about programs or individuals, and cautions test users against using a cut score or percentile on the MFT-MBA as a condition for a student's graduation (ETS, 2010b).
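To make the relationship between these score types concrete, here is a minimal sketch in Python. The response matrix, the number of items, and the comparative group of schools are all hypothetical assumptions chosen for illustration; the actual ETS scaling and comparative data are not reproduced here.

```python
import numpy as np

# Hypothetical data: rows are the students in one school, columns are
# the items in one content area (1 = correct, 0 = incorrect). These
# numbers are illustrative only, not actual MFT-MBA data or scaling.
rng = np.random.default_rng(0)
responses = rng.binomial(1, 0.65, size=(25, 24))  # 25 students, 24 items

# Assessment indicator (AI): the school's mean percent correct across
# all items in the content area -- a group-level score.
ai = responses.mean() * 100
print(f"AI (mean percent correct): {ai:.1f}")

# Percentile rank of this school's AI within a comparative group of
# schools (here, simulated AIs for 200 other programs).
other_ais = rng.normal(55, 7, size=200)
percentile_rank = (other_ais < ai).mean() * 100
print(f"Percentile rank among schools: {percentile_rank:.0f}")
```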

In his paper, "Standardized testing for outcome assessment: Analysis of the educational testing systems MBA tests" (i), Wright (2010) evaluated the MFT-MBA using information from the ETS/MFT website and concluded that the ETS outcomes assessment methodology may not be optimal. He made a series of incorrect claims about the MFT-MBA, which are summarized, clarified, and corrected in the following section.

Wright (2010) claimed, "Faculty members are not involved in the construction of these tests" (pp. 144-145). The fact is that the MFT-MBA is developed by content experts in the field, including a committee of current business faculty members who determined the test specifications, test questions, and types of scores to be reported (ETS, 2010b). The test is based on a core curriculum identified in a national survey of MBA programs, and the program strictly adheres to the industry standards for educational testing (e.g., AERA, APA, & NCME, 1999; ETS, 2002).

Wright believed that the test was inappropriately difficult. Using the 95th percentile of AI-1 (Marketing) scores as an example, he explained that although students take the MFT-MBA before completing their MBA programs, only 5% of them could answer 69% or more of the questions correctly. He then claimed that the test cannot be a valid measure of the business knowledge and skills that MBA programs intend to teach.

However, the test is not as difficult as Wright claimed. Wright interpreted the mean percent correct scores of an institutional assessment indicator as if they were percentiles of individual scores. As mentioned earlier, the AI is a group-level mean score on a specific content area and can be used to compare schools, but not individual students. In Wright's example, a school with an AI-1 (Marketing) score of 69% (corresponding to a percentile rank of 95) means only that students in this school answered 69% of the questions correctly, on average, and that this school ranked higher than 95% of the schools in the comparative data group. This statistic does not imply that only 5% of students in this school could answer 69% or more of the questions correctly (1), nor does it mean that only 5% of all test-takers across schools could do so. The appropriate statistic for Wright's argument would be the percent correct score corresponding to the 95th percentile of all test-takers.

The reliability of the content area scores, however, supports reporting them only as group-level scores (the assessment indicators), not as scores for individual test-takers. Even if individual content area scores were reliable enough to report, their 95th percentile would typically be greater than the 95th percentile of the corresponding group-level assessment indicator. Averaging over the students in a school shrinks the spread of the school means relative to the spread of individual scores, so the upper tail of the individual score distribution extends beyond that of the assessment indicators; the size of the difference depends on the variability of individual scores within each school and on the schools' sample sizes.
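The point can be illustrated with a small simulation. In the sketch below, every number (the number of schools, the students per school, and the between- and within-school standard deviations) is an assumption chosen for illustration, not an MFT-MBA parameter; the qualitative result, that the 95th percentile of individual scores exceeds the 95th percentile of the school means, holds under any reasonable choice.

```python
import numpy as np

# Illustrative simulation, not actual MFT-MBA data: percent-correct
# scores for 200 schools of 25 students each. School means vary with
# SD 5; students vary around their school mean with SD 12.
rng = np.random.default_rng(1)
n_schools, n_students = 200, 25
school_means_true = rng.normal(60, 5, size=n_schools)
scores = rng.normal(school_means_true[:, None], 12,
                    size=(n_schools, n_students)).clip(0, 100)

# The assessment indicators: each school's observed mean score.
ais = scores.mean(axis=1)

# Averaging over 25 students shrinks the spread of the school means,
# so the upper tail of individual scores extends past that of the AIs.
print("95th percentile of school means (AIs): %.1f" % np.percentile(ais, 95))
print("95th percentile of individual scores:  %.1f" % np.percentile(scores, 95))
```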

In addition, Wright believed that 36 minutes for each of the five subject areas--a total of 180 minutes--is not enough time to assess a student's knowledge. The fact is that the MFT-MBA is not intended to be a measure of individual student performance. Its primary purpose is to provide information about groups of students, and even this information should be, at most, a small part of all the information considered in making decisions about a program (ETS, 2010a).

Wright is correct to caution against the use of MFT-MBA test scores in decisions related to faculty members. However, such considerations apply to any test, regardless of its nature, subject, or owner. In fact, ETS clearly recommends that MFT-MBA scores be used primarily at the group level (e.g., school, program, or classroom). Any use of the test scores for individual students or faculty members requires further investigation to ensure the validity of that score use.

Wright used the term reliability in a way that is inconsistent with its commonly accepted meaning in the field of educational testing (see Haertel, 2006). He claimed that the MFT-MBA lacked reliability based on his misinterpretation of the percentiles of school means, which are not directly related to test score reliability. The fact is that the MFT-MBA total score has a high reliability coefficient (Cronbach's alpha = .90; ETS, 2010c).
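For readers unfamiliar with the statistic, Cronbach's alpha is computed from the item variances and the variance of the total score. The sketch below applies the standard formula to simulated scored responses; the data are entirely hypothetical, and the .90 figure cited above comes from ETS (2010c), not from this code.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a scored-response matrix (rows = examinees,
    columns = items): alpha = k/(k-1) * (1 - sum(item variances) /
    variance(total scores))."""
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - sum_item_var / total_var)

# Hypothetical 0/1 responses from 500 examinees on 124 items, generated
# from a simple one-parameter logistic model purely for illustration.
rng = np.random.default_rng(2)
ability = rng.normal(0, 1, size=500)
difficulty = rng.normal(0, 1, size=124)
p_correct = 1 / (1 + np.exp(-(ability[:, None] - difficulty[None, :])))
items = rng.binomial(1, p_correct)
print(f"alpha = {cronbach_alpha(items):.2f}")
```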

Wright questioned the use of standardized tests like the MFT-MBA in schools because he believed that the MFT does not provide instructors with information about students' performance on individual items and thus does not help improve teaching. Contrary to this claim, the MFT-MBA provides users with an item information report of group performance on each individual question, as an optional paid service. The item information report helps faculty members understand the strengths and weaknesses of student groups and can facilitate teaching by identifying areas that need improvement.

Finally, Wright claimed that the adoption of the MFT-MBA at a school implies a distrust of its faculty members, since the test imposes questions that the faculty members are not aware of and cannot prepare for. He suggested that schools should develop their own tests, tailored specifically to their own curriculum, including case analysis as well as questions testing fundamental concepts. Locally developed tests can indeed provide information that cannot be obtained from standardized tests, just as standardized tests can provide information that cannot be obtained from locally developed tests. The MFT-MBA, as a standardized test, provides objective evidence of MBA students' learning outcomes and enables comparison across schools and programs on a measure of content areas determined by a committee of MBA faculty members representing a broad selection of schools in the field. Locally developed tests lack such comparability. In recognition of the uniqueness of each MBA program, the MFT-MBA offers each program the option of adding a locally developed section to the standard test (ETS, 2010d).

In summary, Wright's (2010) claims against the use of MFT-MBA as an outcomes assessment for MBA programs were based on his misinterpretation of the information presented on the ETS/MFT website and some misunderstandings of certain concepts commonly accepted in the field of educational testing. Contrary to Wright's claims, the MFT-MBA is a reliable and valid measure that can be (and is being) widely used as an outcomes measure for MBA programs in the United States.

(i) The title of the original article, "Standardized testing for outcome assessment: Analysis of the educational testing systems MBA tests," uses "Educational Testing Systems," which should be replaced with the correct name for ETS (Educational Testing Service), as in the title of this rejoinder. However, readers seeking the original article online or through a reference library may need to use the incorrect title.

(ETS is a registered trademark of Educational Testing Service.)

References

AERA, APA, & NCME. (1999). Standards for educational and psychological testing. Washington, DC: AERA.

ETS. (2002). ETS standards for quality and fairness. Princeton, NJ: ETS.

ETS. (2010a). How can you take your MBA program to the next level? Assess your students with ETS's Major Field Test. Retrieved from http://www.ets.org/Media/Tests/MFT/pdf/mft_testdesc_mba_4bmf.pdf

ETS. (2010b). Score usage. Retrieved from http://www.ets.org/mft/scores/usage/

ETS. (2010c). Reliability coefficients and standard errors of measurement (SEM). Retrieved from http://www.ets.org/Media/Tests/MFT/pdf/mft_reliability_sem.pdf

ETS. (2010d). Major Field Tests. Retrieved from http://www.ets.org/mft/about/content

Haertel, E. H. (2006). Reliability. In R. L. Brennan (Ed.), Educational measurement (4th ed., pp. 65-110). Westport, CT: American Council on Education/Praeger.

Wright, R. (2010). Standardized testing for outcome assessment: Analysis of the educational testing systems MBA tests. College Student Journal, 44(1), 143-147.

GUANGMING LING

ETS, Princeton, NJ

(1) Because these scores are means, some students in this school may have had a percent-correct score much higher than 69; it is unlikely that everyone in the school had the same score.