Using Bilingual Students to Link and Evaluate Different Language Versions of an Exam.
Full text: http://www.eric.ed.gov/ERICWebPortal/contentdelivery/servlet/ERICServlet?accno=ED503879
Many researchers, as well as the International Test Commission Guidelines (Hambleton, 2005), caution against treating scores from different language versions of a test as equivalent without empirical research to verify that equivalence. In this study, we evaluated the equivalence of the English and Malay versions of a 9th-grade mathematics test administered in Malaysia. All analyses were conducted on data from a large sample of English-Malay bilingual students who took both versions of the exam. First, we conducted two equating analyses: one based on classical test theory and another based on item response theory (IRT). Next, differential item functioning (DIF) analyses were performed to determine whether any items functioned differentially across their English and Malay versions. The DIF analyses flagged seven items for statistically significant DIF, but only one had a non-negligible effect size. We then repeated the equating analysis with the DIF items removed. The equating results suggested an adjustment of 1 or 2 points, depending on the mathematics achievement level. These results indicate that bilingual examinees can be useful for evaluating different language versions of a test and for adjusting for differences in difficulty across test forms due to translation. (Contains 5 tables and 2 figures.)
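The abstract's classical-test-theory equating step can be illustrated with a minimal sketch of mean-sigma linear equating, a standard way to place scores from one test form on the scale of another. All scores and variable names below are invented for illustration; the study's actual data, sample size, and equating method details are not reproduced here.

```python
# Hypothetical sketch of classical linear (mean-sigma) equating between an
# English form (X) and a Malay form (Y) taken by the same bilingual examinees.
# The score lists below are invented; they are NOT data from the study.
import statistics


def linear_equate(x_scores, y_scores):
    """Return a function mapping a raw score on form X to the scale of
    form Y via mean-sigma equating: y = mu_y + (sd_y / sd_x) * (x - mu_x)."""
    mu_x, mu_y = statistics.mean(x_scores), statistics.mean(y_scores)
    sd_x, sd_y = statistics.stdev(x_scores), statistics.stdev(y_scores)
    slope = sd_y / sd_x
    return lambda x: mu_y + slope * (x - mu_x)


# Invented paired scores for the same examinees on both language versions.
english = [32, 28, 40, 35, 30, 38, 26, 33]
malay = [34, 30, 41, 37, 33, 40, 29, 35]

to_malay_scale = linear_equate(english, malay)
# Equated-score shift at a raw English score of 30; a positive value would
# mean the Malay form yields higher scores at that achievement level.
adjustment = to_malay_scale(30) - 30
```

Because the equating line passes through the two form means, the implied adjustment varies with the raw score, which parallels the abstract's finding that the suggested adjustment (1 or 2 points) depended on the achievement level.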
Author: Ong, Saw Lan; Sireci, Stephen G.
Date: Nov 30, 2008