
Training in Personnel Selection Assessment: Survey of Graduate I/O Programs.

We analyzed data from 75 master's and doctoral programs in Industrial/Organizational (I/O) psychology to examine the emphasis on specific assessment techniques in graduate personnel courses in the United States and Canada. The findings indicated that most instructors do not focus on in-depth teaching of specific psychological tests within such courses. Indeed, the techniques that received the most emphasis were assessment centers and honesty tests, followed by an assortment of personality, aptitude, and vocational measures. The discussion centers on the implications for graduate training for I/O students.

Over the past 50 years, a number of surveys on professional training in psychodiagnostic testing have appeared in the literature (Piotrowski & Zalewski, 1993). However, researchers have not investigated the degree of training emphasis on specific testing measures and techniques in the area of personnel selection. A comprehensive review of the literature identified only one study that reported on the implications for assessment training in the I/O area (Piotrowski & Keller, 1992).

Information on this topic would be beneficial given recent attempts by professionals in I/O psychology to identify critical issues for work roles and training needs of graduate students in the area. These issues include a discussion of the relevance of licensure, the identification of competencies needed by I/O graduates, and the establishment of standards for graduate I/O training (e.g., Dale, 1988; Lowe, 1990, 1993; ODEER, 1991; Shippmann, Hawthorne, & Schmitt, 1992).

Finally, training in assessment skills has been recognized as a critical component of graduate courses in personnel selection and industrial psychology (Society for Industrial and Organizational Psychology, 1994). Interestingly, recent graduates of an I/O program rated assessment training as a highly valued instructional component (Erffmeyer & Mendel, 1990). However, data are still lacking on the level and extent of training in assessment techniques in graduate-level I/O programs. Therefore, the purpose of the present study was to determine the relative emphasis given to specific tests/instruments covered in personnel selection courses in the United States and Canada.


Method

We surveyed I/O programs (MA and PhD) from schools listed in the Society for Industrial and Organizational Psychology (1995) directory of graduate training programs. The two-page form asked respondents to rate their degree of coverage of 36 specific published tests, as well as three broad assessment techniques (i.e., assessment center exercises, honesty tests, leadership scales). The meaning of "extent of coverage" was left to the instructor's interpretation and could include instruction in test administration procedures, psychometrics, scoring, interpretation, and/or hands-on experience. Ratings were made on a 5-point checklist scale ranging from 1 (none) to 5 (extensive). The selection of tests/techniques was based on their coverage in major personnel selection and psychological assessment texts and in personality assessment books (e.g., Lanyon & Goldstein, 1997; Miller, 1976). Multiple surveys were mailed to each program in the event that personnel selection courses were taught by two or more instructors within the department. The number of surveys mailed was based on the size of the I/O faculty in each program, with larger programs receiving more copies than smaller ones. Consequently, an average of three surveys was mailed to each of 100 graduate I/O programs during the summer of 1997. A small follow-up mailing to nonresponding schools was conducted in early fall of the same year. Only instructors who taught graduate-level courses in personnel selection or industrial psychology were asked to participate.

A total of 85 completed surveys were returned (88% of which were usable), with 50 of these from separate programs. Data from the usable surveys (N = 75) indicated that 24 of the respondents taught in master's programs, 49 taught in doctoral programs, and two did not indicate their program level. Respondents had taught I/O courses for an average of 11.2 years (SD = 8.3), and the mean number of years each had taught a graduate personnel course was 9.9 (SD = 9.5).

Results and Discussion

A one-way MANOVA indicated no significant difference between ratings of test coverage across levels (MA, PhD) of graduate program, F(1, 39) = 1.31, p > .05. Consequently, the data were combined for subsequent analyses. Table 1 presents the rank order and frequency distribution of the "Top 10" instruments or techniques (out of a total of 39 included on the survey) as rated by our academic sample. Interestingly, numerous well-established tests are not being covered (see category "None") in personnel selection courses. Nearly half of the sample omitted coverage of the California Psychological Inventory (CPI), the Hogan Personality Inventory, the Strong Interest Inventory, and leadership scales. Only the broad categories of assessment center exercises and honesty/integrity tests received "considerable" emphasis.
Table 1
Rank-order of measures used in graduate personnel selection courses

                                         Scale ratings
Technique                       Mean  None  Some  Moderate  Considerable  Extensive  Weighted
                                       (1)   (2)       (3)           (4)        (5)     score

Assessment Center Exercises      3.3     4    10        28            19         12       244
Honesty Tests                    3.0     3    24        22            21          4       221
NEO Personality Inventory        2.2    23    29        13             7          3       163
General Aptitude Test Battery    2.1    18    40        11             5          1       156
Leadership Scales                2.0    34    13        14             8          2       134
Personnel Test                   2.1    21    35        13             5          1       142
Strong Interest Inventory        1.8    38    17        12             5          1       129
Hogan Personality Inventory      1.8    33    25        10             6          0       137
MMPI/MMPI-2                      1.8    27    33        12             1          0       133
CPI                              1.7    37    23         9             4          0       126

Note: The weighted score is the sum of the 5-point scale values (i.e., 1, 2, 3, 4, 5) multiplied by the frequency of use for each test/technique.
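The weighted score described in the table note is a frequency-weighted sum, and each reported mean follows from dividing that sum by the number of respondents rating the technique. As an illustration, a short Python sketch (the function and variable names are ours, not from the original study) reproduces the Table 1 values for the two top-ranked techniques:

```python
# Weighted score = sum over scale points of (scale value x frequency),
# as described in the Table 1 note. Frequencies below are copied from
# Table 1 for the two highest-ranked techniques.

def weighted_score(freqs):
    """freqs: frequencies for scale values 1 (none) through 5 (extensive)."""
    return sum(value * count for value, count in zip(range(1, 6), freqs))

assessment_centers = [4, 10, 28, 19, 12]   # None .. Extensive
honesty_tests = [3, 24, 22, 21, 4]

print(weighted_score(assessment_centers))  # 244, as in Table 1
print(weighted_score(honesty_tests))       # 221, as in Table 1

# The reported mean is the weighted score divided by the number of raters:
mean_ac = weighted_score(assessment_centers) / sum(assessment_centers)
print(round(mean_ac, 1))                   # 3.3, as in Table 1
```

The same computation applied to the remaining rows recovers most, though not all, of the printed weighted scores, which is a useful check when transcribing the table.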

These findings suggest that I/O graduate faculty are not emphasizing specific assessment techniques in an in-depth manner during formal coursework. Perhaps academicians believe that it is more prudent to provide a general review of assessment approaches complemented by extensive coverage of a limited number of specific instruments. Approximately 23% (N = 20) of the respondents commented on why they did not focus their instruction on specific tests. Eight of these 20 instructors (40%) indicated that they covered broad categories, methods, or types of tests rather than specific tests. Five of the respondents' comments (25%) reflected a focus on test development or psychometrics, and four instructors (20%) mentioned that specific tests were taught in other courses within their department (e.g., Psychological Assessment, Personality Testing). Regarding the latter, enrollment in such courses at the graduate level may be limited to students in clinical/counseling psychology, particularly within doctoral programs. It is also possible that instructors emphasize the measures and techniques in which they feel most confident or with which they have had the most applied experience. Admittedly, a host of instruments and techniques are applicable in the I/O field, and in-depth study of only a few is realistic.

Acknowledging the limitations of graduate preparation, at least in terms of formal coursework, what might aspiring I/O students do to buttress their competency in assessment skills? Such training may be best accomplished outside didactic coursework (e.g., through supervised field projects, practicum and internship experiences, and thesis/dissertation research). That is, extensive knowledge of, and/or hands-on experience with, selected I/O assessment instruments within the context of an organization or research project may be a viable avenue for graduate training in assessment. Such an approach may foster not only competency but also professional confidence. However, not all graduate programs require, or even encourage, practicum or internship experience. The Society for Industrial and Organizational Psychology (1995) listing of graduate I/O programs indicates that only 22% of doctoral I/O programs and 25% of master's I/O programs require internships as part of their graduate training. Practicum experiences were required by only 20% of I/O doctoral programs, compared with 43% of MA I/O programs. At the very least, students may need to accept the reality that training in specific assessment techniques may have to be acquired through independent self-study.

Although the findings of the present research can be viewed with some concern, some caveats are in order. For instance, given the weak psychometric support for personality and aptitude tests in personnel selection, instructors may be wise to cover such tests in a global manner. Also, faculty may be opting to use instructional time to focus on other assessment techniques that are well accepted in the I/O literature (e.g., work samples, structured interviews, biodata). In a related vein, I/O instructors may simply be following the manner in which tests are presented in their textbooks. That is, the most popular personnel selection textbooks used by our sample (see Piotrowski & Vodanovich, 1998) offer relatively little coverage of individual published tests or techniques as selection devices.

In conclusion, these findings suggest that I/O graduates enter the professional ranks with relatively little exposure to specific tests and assessment techniques, which in our opinion indicates an instructional shortcoming. Many well-respected and widely used I/O and personnel selection textbooks (e.g., Cascio, 1991; Gatewood & Feild, 1998; Miller, 1976; Muchinsky, 1998) note that many of the measures (e.g., Miner Sentence Completion Scale, CPI, Myers-Briggs Type Indicator) and types of tests (e.g., cognitive ability, personality, vocational) included in our survey are commonly employed for selection purposes in industry. Also, several reviews have commented on the growing adoption of personality inventories in organizational contexts, particularly given the relative lack of adverse impact and the demonstrated predictive validity of these measures (Dunnette, 1998; Hough, Eaton, Dunnette, Kamp, & McCloy, 1990). Consequently, our position is that I/O graduate students need to be trained, beyond a cursory fashion, on such tests. In-depth coverage of these tests would likely enhance graduate training standards by addressing important competencies required of I/O graduates. This may be most pertinent for students who wish to pursue a career as a practitioner in the field of personnel selection.


References

Cascio, W. F. (1991). Applied psychology in personnel management (4th ed.). Englewood Cliffs, NJ: Prentice Hall.

Dale, R. H. I. (1988). State psychological associations, licensing criteria, and the "master's issue." Professional Psychology: Research and Practice, 19, 589-593.

Dunnette, M. D. (1998). Emerging trends and vexing issues in industrial and organizational psychology. Applied Psychology: An International Review, 47, 129-153.

Erffmeyer, E. S., & Mendel, R. M. (1990). Master's level training in industrial/organizational psychology: A case study of the perceived relevance of graduate training. Professional Psychology: Research and Practice, 21, 405-408.

Gatewood, R. D., & Feild, H. S. (1998). Human resource selection (4th ed.). Fort Worth, TX: Dryden Press.

Hough, L. M., Eaton, N. K., Dunnette, M. D., Kamp, J. D., & McCloy, R. A. (1990). Criterion-related validities of personality constructs and the effect of response distortion on those validities. Journal of Applied Psychology, 75, 581-595.

Lanyon, R. I., & Goldstein, L. D. (1997). Personality assessment (3rd ed.). New York: Wiley.

Lowe, R. H. (Ed.). (1990). Proceedings of the national conference on applied master's training in psychology. (Available from Richard D. Tucker, Department of Psychology, University of Central Florida, Orlando, FL 32816.)

Lowe, R. H. (1993). Master's programs in industrial/organizational psychology: Current status and a call for action. Professional Psychology: Research and Practice, 24, 27-34.

Miller, K. M. (1976). Psychological testing in personnel assessment. Surrey: Biddles, Ltd.

Muchinsky, P. M. (1998). Psychology applied to work (4th ed.). Pacific Grove, CA: Brooks/ Cole.

ODEER, American Psychological Association. (1991). Levels of education and training in psychology: "Psychology's second century self-study." Washington, DC: Author.

Piotrowski, C., & Keller, J. W. (1992). Psychological testing in applied settings: A literature review from 1982-1992. Journal of Training & Practice in Professional Psychology, 6(2), 74-82.

Piotrowski, C., & Vodanovich, S. J. (1998). Textbook preference in personnel selection coursework. Journal of Instructional Psychology, 25, 209-210.

Piotrowski, C., & Zalewski, C. (1993). Training in psychodiagnostic testing in APA-approved PsyD and PhD clinical psychology programs. Journal of Personality Assessment, 61, 394-405.

Shippmann, J. S., Hawthorne, S. L., & Schmitt, S. D. (1992). Work roles and training needs for the practice of industrial-organizational psychology at the master's and PhD level. Journal of Business and Psychology, 6, 311-331.

Society for Industrial and Organizational Psychology, Inc. (1994). Guidelines for education and training at the master's level in industrial/organizational psychology. Arlington Heights, IL: Author.

Society for Industrial and Organizational Psychology, Inc. (1995). Graduate training programs in industrial/organizational psychology and related fields. Bowling Green, OH: Author.

Stephen J. Vodanovich and Chris Piotrowski, Department of Psychology, University of West Florida.

Correspondence concerning this article should be addressed to Stephen J. Vodanovich, Department of Psychology, University of West Florida, 11000 University Parkway, Pensacola, FL 32514.

A portion of this research was presented at the annual meeting of the Southeastern Psychological Association, Mobile, AL (March, 1998).
COPYRIGHT 1999 George Uhlig Publisher

Author: Piotrowski, Chris
Publication: Journal of Instructional Psychology
Date: Sep 1, 1999