
Effect size and rehabilitation research.

Research in rehabilitation has long relied on the test of statistical significance as the primary means of evaluating the meaningfulness of empirical results. The concept of statistical significance is deeply rooted in the null hypothesis significance testing model and perpetuates the myth that no meaningful conclusion can be drawn without a significant test result. Under this model, most tests of statistical significance are a product of the size of the effect and the size of the sample (significance = effect size x sample size) (Rosenthal & Rosnow, 1991).

Reliance on the null hypothesis significance testing model has had significant implications for rehabilitation research and the Journal, because most rehabilitation researchers work with samples that fall at one of two extremes. At one extreme are samples drawn from large data sets such as the RSA-911 database. With these samples, researchers often find statistically significant results even when the differences are of little practical significance; if only statistical significance is reported, the findings may be misinterpreted. At the other extreme are the small samples commonly drawn from surveys or agency participants. With these samples, researchers very often find nonsignificant results even when the differences are of practical significance to the rehabilitation practitioner. Again, reporting only statistical significance may lead to an inappropriate interpretation of the findings.
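To make the point concrete, the brief sketch below (our own illustration, not drawn from any cited study; it assumes Python with the numpy and scipy libraries) simulates the two extremes described above: a trivially small difference that reaches statistical significance in a very large sample, and a practically meaningful difference that will often fail to reach significance in a small one.

    # Illustration of significance = effect size x sample size:
    # the same alpha level treats the two extremes very differently.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def cohens_d(a, b):
        # Standardized mean difference using the pooled standard deviation.
        pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
        return (a.mean() - b.mean()) / pooled_sd

    # Large-sample extreme (e.g., an RSA-911-sized data set): a tiny true
    # difference (d of roughly 0.05) will almost always yield p < .05.
    big_a = rng.normal(0.00, 1.0, 20000)
    big_b = rng.normal(0.05, 1.0, 20000)
    t_big, p_big = stats.ttest_ind(big_b, big_a)

    # Small-sample extreme (e.g., an agency survey): a moderate true
    # difference (d of roughly 0.5) will often fail to reach p < .05.
    small_a = rng.normal(0.0, 1.0, 15)
    small_b = rng.normal(0.5, 1.0, 15)
    t_small, p_small = stats.ttest_ind(small_b, small_a)

    print(f"Large N: d = {cohens_d(big_b, big_a):.3f}, p = {p_big:.4f}")
    print(f"Small N: d = {cohens_d(small_b, small_a):.3f}, p = {p_small:.4f}")

In both cases the p value alone is a poor guide to practical importance; the standardized difference carries that information.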

To address the problems associated with the null hypothesis significance testing model, psychology and related fields have begun placing more emphasis on practical significance, the degree of relationship between variables, or the magnitude of the effect, rather than on tests of statistical significance alone. For example, the fifth edition of the Publication Manual of the American Psychological Association (APA, 2001) emphasizes the need to include some index of effect size or strength of relationship in psychological research, stating that the failure to report such indicators is a "defect" (p. 5). In addition, 23 professional research journals have underscored the importance of effect sizes by requiring authors to report indicators of effect in manuscripts submitted for publication (Harris, 2003; Snyder, 2000; Trusty, Thompson, & Petrocelli, 2004).

In simple terms, effect size refers to any statistic that describes the degree of difference or relationship between the variables of interest. According to Vacha-Haase and Thompson (2004), three major types of effect size indicators can be utilized in psychological research: standardized differences, variance-accounted-for effects, and corrected effect sizes. As editors of the Journal, we feel that these indicators of effect size have direct application to rehabilitation research, and we will therefore require that indicators of effect be incorporated into empirically based research articles submitted for publication to the Journal. Reporting effect sizes will allow readers and rehabilitation researchers to make better use of the research published in the Journal, whether to inform programmatic decisions or to guide further research in rehabilitation.
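As a rough illustration of these three classes (a sketch of our own, using common textbook formulas rather than any procedure prescribed by Vacha-Haase and Thompson, and assuming Python with numpy), the following computes one indicator from each class for a hypothetical two-group comparison:

    import numpy as np

    # Hypothetical scores for two small groups (illustrative data only).
    a = np.array([12., 15., 14., 10., 13., 16., 11., 14.])
    b = np.array([17., 19., 15., 18., 20., 16., 18., 17.])

    # 1) Standardized difference: Cohen's d with a pooled standard deviation.
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    d = (b.mean() - a.mean()) / pooled_sd

    # 2) Variance accounted for: eta-squared from a two-group (one-way) ANOVA.
    scores = np.concatenate([a, b])
    grand = scores.mean()
    ss_between = len(a) * (a.mean() - grand) ** 2 + len(b) * (b.mean() - grand) ** 2
    ss_total = ((scores - grand) ** 2).sum()
    eta_sq = ss_between / ss_total

    # 3) Corrected effect size: omega-squared, which adjusts eta-squared for
    #    the positive bias it shows in small samples.
    ms_within = (ss_total - ss_between) / (len(scores) - 2)
    omega_sq = (ss_between - 1 * ms_within) / (ss_total + ms_within)

    print(f"d = {d:.2f}, eta^2 = {eta_sq:.2f}, omega^2 = {omega_sq:.2f}")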

Because we will now require that indicators of effect be included in empirical research submitted to the Journal, we offer the following recommendations:

1) It will be important for researchers seeking publication in the Journal to become familiar with indicators of effect size as they apply to rehabilitation research. Numerous articles and statistical textbooks provide clear, accessible explanations of indicators of effect, and many of them are included in the reference list of this editorial. For starters, the editors highly recommend the article by Vacha-Haase and Thompson (2004) on how to estimate and interpret various indicators of effect. It provides a very good discussion of the three classes of effect size, strategies for obtaining effect size indicators using SPSS, and guidelines for reporting effect size in research results.

2) Read research articles that utilize indicators of effect in their analysis and discussion. Reading these articles should provide authors with models of how effect sizes have been incorporated into research design and analysis.

3) Seek statistical consultation if you have questions or concerns about how to compute effect size or which effect size indicators are appropriate for your research design and analysis. Remember, in most cases several indicators of effect may be appropriate for a given study (a brief computational sketch follows this list).

4) Finally, buy, read, and keep handy a good basic research design text that you can refer to while conducting your study. While there are several good options, we would recommend the 1991 text by Rosenthal and Rosnow.
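As a small illustration of recommendation 3 (our own sketch, assuming Python; the helper name effect_from_t is hypothetical), the conversion below recovers effect size indicators from a reported independent-groups t test, a common need when only t and its degrees of freedom are published:

    import math

    def effect_from_t(t: float, df: int) -> dict:
        # Correlation-type effect: r = sqrt(t^2 / (t^2 + df)), a standard
        # textbook conversion (see, e.g., Rosenthal & Rosnow, 1991).
        r = math.sqrt(t ** 2 / (t ** 2 + df))
        # Approximate Cohen's d for two equal-sized groups: d = 2t / sqrt(df).
        d = 2 * t / math.sqrt(df)
        return {"r": round(r, 3), "r_squared": round(r ** 2, 3), "d": round(d, 3)}

    # Example: t(28) = 2.10 in a small two-group study; the p value barely
    # clears .05, yet the standardized difference is moderate to large.
    print(effect_from_t(2.10, 28))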

We would like to thank the following consultant reviewers for their service to the Journal and their assistance with the editorial process this past year.

Pamela A. Cogdal

The University of Memphis

Robin A. Cook

Wichita State University

Chandra Donnell

The University of Memphis

Mary Lou Duffy

Florida Atlantic University

Yolanda V. Edwards

University of Maryland

Michael Frain

Florida Atlantic University

Michael Millington

Rehabilitation Consultant, New Orleans, LA

Andrew Phemister

Minnesota State University

Dion F. Porter

Jackson State University

David F. Roberts

The University of Memphis

Jamie F. Satcher

The University of Alabama

Keith Storey

Chapman University

Hector WH Tsang

The Hong Kong Polytechnic University

John Wadsworth

The University of Iowa

Selected References

American Psychological Association. (2001). Publication manual of the American Psychological Association (5th ed.). Washington, D.C.: Author.

Carver, R. (1993). The case against statistical significance testing, revisited. Journal of Experimental Education, 61, 287-292.

Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Erlbaum.

Cohen, J. (1994). The earth is round (p < .05). American Psychologist, 49, 997-1003.

Harris, K. (2003). Journal of Educational Psychology: Instructions for authors. Journal of Educational Psychology, 95, 201.

Huberty, C. J. (2002). A history of effect size indices. Educational and Psychological Measurement, 62, 227-240.

Kirk, R.E. (1996). Practical significance: A concept whose time has come. Educational and Psychological Measurement, 56, 746-759.

Kline, R. (2004). Beyond significance testing: Reforming data analysis methods in behavioral research. Washington, DC: American Psychological Association.

Kosciulek, J. F., & Szymanski, E. M. (1993). Statistical power analysis of rehabilitation counseling research. Rehabilitation Counseling Bulletin, 36, 212-219.

Lipsey, M.W. (1990). Design sensitivity: Statistical power for experimental research. Newbury Park, CA: Sage.

Rosenthal, R. (1994). Parametric measures of effect size. In H. Cooper & L.V. Hedges (Eds.), The handbook of research synthesis (pp. 231-244). New York: Russell Sage Foundation.

Rosenthal, R., Rosnow, R. L., & Rubin, D. B. (2000). Contrasts and effect sizes in behavioral research: A correlational approach. New York: Cambridge University Press.

Rosenthal, R., & Rosnow, R. (1991). Essentials of behavioral research: Methods and data analysis (2nd ed.). Boston: McGraw-Hill.

Snyder, P. (2000). Guidelines for reporting results of group quantitative investigations. Journal of Early Intervention, 23, 145-150.

Szymanski, E. M., & Parker, R. M. (1992). Low statistical power: A blight on research. Rehabilitation Counseling Bulletin, 36, 2-5.

Thompson, B. (2002). "Statistical," "practical," and "clinical": How many kinds of significance do counselors need to consider? Journal of Counseling and Development, 80, 64-71.

Trusty, J., Thompson, B., & Petrocelli, J.V. (2004). Practical guide to implementing the requirement of reporting effect size in quantitative research in the Journal of Counseling & Development. Journal of Counseling & Development, 82, 107-110.

Vacha-Haase, T., Nilsson, J. E., Reetz, D. R., Lance, T. S., & Thompson, B. (2000). Reporting practices and APA editorial policies regarding statistical significance and effect size. Theory & Psychology, 10, 413-425.

Vacha-Haase, T., & Thompson, B. (2004). How to estimate and interpret various effect sizes. Journal of Counseling Psychology, 51, 473-481.
