
Evolving personnel selection practices in New Zealand organisations and recruitment firms.

Interviews were conducted with representatives from 100 randomly selected New Zealand organisations with 200 or more employees, and 30 recruitment consultancies, in order to determine current personnel selection practices, how those practices have changed over recent years, and respondents' beliefs about the predictive validity of alternative selection procedures. In comparison to a parallel survey conducted in the early 1990s (Taylor, Mills & O'Driscoll, 1993), results demonstrate a substantial increase in the use of psychological tests, greater standardization/structure of selection procedures, and linking of selection methods to pre-determined job competencies. Areas where practice-research gaps in personnel selection continue are discussed in terms of both how practitioners could better apply research findings and how research could better respond to practitioner needs.


One of the fundamental applications of organisational psychology, both in New Zealand and overseas, has been the development, implementation, and evaluation of methods for staff selection. While this aspect of organisational psychology has been a rich area of research for over 70 years, there has been a proliferation of personnel selection theory and research over the past 15 years.

The extent to which selection research is able to improve organisations' ability to match people and jobs, and ultimately to improve staff members' productivity, tenure and job satisfaction, depends on whether findings are adopted by practitioners. The identification of research-practice gaps is important to determine both areas of research which require further dissemination to practitioners, and areas of practice which can be improved through the application of research.

This paper reports on two surveys of selection practices in New Zealand, one of relatively large organisations (employing 200 or more staff) and the other of recruitment consultancies. These surveys follow up on two parallel surveys conducted in 1991 and reported in the New Zealand Journal of Psychology (Taylor, Mills & O'Driscoll, 1993), with the aim of identifying changes that may have occurred in organisations' and consultancies' selection practices in the intervening years.

Recent Influences on Selection Practices in New Zealand

Several factors are likely to have led to changes in staff selection practices in New Zealand over the past decade. These include: (1) a proliferation of selection research; (2) improved dissemination of research findings to recruitment practitioners; (3) greater accessibility of occupational tests; and (4) changes in employment-related legislation. We provide a brief review of these influences below.

Proliferation of selection research. Starting in the mid-1980s, there has been a resurgence of interest in selection research. Major organisational psychology journals, such as the Journal of Applied Psychology, Personnel Psychology, and the Journal of Occupational and Organisational Psychology, regularly publish research on personnel selection methods, and even journals with broader scopes than organisational psychology (e.g., Psychological Bulletin, Human Performance, Journal of Organizational Behavior) have recently included articles reporting selection research.

Much of the recent staff selection research is applicable to recruitment practice. For example, personality research has recently focused on particular job-relevant personality traits, primarily the Five-Factor model, though also other job-oriented traits, such as integrity, proactivity and customer service orientation. Large-scale meta-analyses of the validity of personality tests in occupational settings (e.g., Barrick & Mount, 1991; Tett, Jackson & Rothstein, 1991) have now demonstrated that measures of particular job-related personality constructs are able to predict important aspects of job performance, and in the case of conscientiousness and integrity, personality can make incremental contributions to validity beyond cognitive ability test scores (Schmidt & Hunter, 1998). Furthermore, recent research has suggested that response distortion among applicants when completing personality tests (i.e., 'faking good') may not be as grave a concern as was previously believed (Hough, Eaton, Dunnette, Kamp & McCloy, 1990), although this finding continues to be investigated and debated. Together, this recent body of research on occupationally-relevant personality testing would suggest that, provided practitioners have learned of recent selection research: (1) personality testing is now likely to play a more significant role in personnel selection than it did a decade ago; and (2) there should now be greater use of occupationally-oriented personality traits, particularly those which have been identified in research as having the most predictive validity.

Similarly, recent research on employment interviews has not only confirmed the greater reliability (Conway, Jako, & Goodman, 1995), predictive validity (McDaniel, Whetzel, Schmidt, & Maurer, 1994) and fairness (Huffcutt & Roth, 1998) of structured over unstructured interviews, but it has also begun to clarify the specific aspects of interview structure that are most salient for improved validity. For example, it now appears that highly structured interviews, in which candidates are always asked identical interview questions without follow-up probes, lead to no higher validity than moderately structured interviews, which allow for tailoring of interview questions (Huffcutt & Arthur, 1994). Practitioners who have followed employment interview research might be expected to use structured employment interviews which exploit those particular aspects of interview structure.

Improved dissemination of research findings. New research evidence can only influence practice if professionals in a field learn of that research. Seven years ago, few recruitment professionals in New Zealand knew of recent research findings (Taylor et al., 1993), though we believe that this situation is likely to have changed because of three newly-developed avenues for selection research to inform practice.

First, there are now two magazines in New Zealand aimed at recruitment professionals, Human Resources (formerly the Institute of Personnel Management News) and Employment Today, which include occasional articles disseminating research findings. Second, the two major professional organisations in New Zealand for recruitment practitioners, the Industrial/Organisational Psychology Division of the New Zealand Psychological Society and the Human Resource Institute of New Zealand, have both sponsored activities designed to improve the dissemination of information within the profession. These include national workshops on particular selection techniques, local branch discussion groups, and electronic discussion forums which include both academics and practitioners.

Finally, academic training programmes for industrial and organisational (I/O) psychologists have expanded considerably in recent years. While there were only two or three identifiable programmes of study in I/O psychology seven years ago, organisational psychology courses are now taught on five university campuses. Thus recruitment professionals in New Zealand are now more likely than ever before to have received tertiary and graduate qualifications which have included studies in I/O psychology. These graduates are not only knowledgeable about the selection research they have studied, but will hopefully also have been indoctrinated into a culture of keeping abreast of research relevant to their profession.

Greater accessibility of occupational tests. Worldwide, there has been a proliferation of psychological tests, simulations and other instruments specifically designed for selecting staff within organisations, and many of these are available locally through the New Zealand Council of Educational Research. Other psychological test distributors have emerged recently in New Zealand, most notably Saville & Holdsworth Ltd, which has become one of the largest developers and distributors of psychological instruments for organisational settings. With this proliferation of occupational tests available in New Zealand, practitioners might be expected to employ more tests in selection than they did a decade ago.

Changes in employment-related legislation. Finally, a series of employment-related laws were introduced in the early 1990s, and one in particular, the New Zealand Human Rights Act 1993, can be expected to have influenced selection practices by strengthening requirements on employers to demonstrate that their selection practices do not discriminate on the basis of gender, race, age, and so on. As a result of such legislation, organisations might now be expected to use selection methods in ways that minimize the potential for discrimination, such as through greater use of job analysis and more structured and well-documented selection procedures.

Previous Surveys, and Aims of the Present Studies

Several other surveys of selection practices in New Zealand have been published in recent years, most notably a survey of recruitment consulting firms' selection practices by Harris, Toulson and Livingston (1996), a survey of a broad range of human resource practices (including employee selection) conducted by Johnson (2000), and a multi-national survey of organisations' selection practices across 20 countries by Ryan, McFarland, Baron and Page (1999). Harris et al.'s (1996) survey provides insight into the selection practices that consultancies have been using in relation to consultants' beliefs about, and research evidence concerning, the predictive validity of various selection methods. These authors concluded that New Zealand practitioners use rather low-validity selection methods (e.g., interviews that are not 'situational' in format, reference checks and work experience); however, many of the crucial validity estimates used in their study were based on older research which has more recently been updated (e.g., reference checks, interviews, cognitive ability tests). Had they used more recent validity estimates for selection methods, their conclusions would have been substantially different.

Johnson's (2000) survey provides data on a variety of human resource practices across a large sample of organisations in New Zealand, although coverage of personnel selection practices was necessarily limited in scope, given the survey's broad aims. Similar to conclusions drawn by Harris et al. (1996), Johnson (2000) criticized the frequent use of application forms, individual interviews and references in light of these being the "least valid predictors". While we do not argue with Johnson's survey findings, we fail to see research support for his claim that these are the least valid predictors of job performance. We are unaware of studies that have estimated the validity of (unweighted) application forms; research comparing the validities of individual with panel employment interviews consistently finds few differences (see, for example, McDaniel et al., 1994; Huffcutt & Woehr, 1999); and the most recent meta-analysis estimating the validity of references (Hunter & Hunter, 1984) suggests a population validity coefficient of .26, which, while not high, is certainly not among the "least valid predictors".

Ryan et al.'s (1999) survey focused on differences in methods used across countries, and the extent to which cultural dimensions of countries can explain these differences. While results generally failed to support their hypotheses concerning relationships between cultural dimensions and selection practices, they did find that a modest portion of the variance in staffing practices can be explained by national differences.

Our aims for the present survey were different from these previous surveys in two ways. First, we were interested in identifying changes in New Zealand selection practices over time, and so we were careful to replicate, where possible, the methodology employed in the Taylor et al. (1993) survey. Second, we were also interested in probing recruitment professionals about how they used particular selection methods. Recent selection research has provided insights into how particular selection methods can be used to enhance their psychometric properties, and so determining the extent to which selection research is influencing practice requires knowing how, specifically, selection methods are being employed.

Method

Samples

Organisational sample. A random sample of New Zealand organisations with 200 or more employees was generated from the Kompass (Profile Publishing, 1997) database, which includes all New Zealand organisations. In order to maximize representativeness, the sample was stratified by number of employees, industry, and public/private status. Stratification was achieved by breaking the population of New Zealand organisations with 200 or more employees into cells based on (1) three levels of number of employees (200-499, 500-999, 1,000 or more); (2) seven types of industries, as used in the Kompass database (manufacturing, commercial, transport and communication services, trading, production, building and public works, hotel); and (3) public versus private status. Sampling was then performed within each of the 42 (3 x 7 x 2) cells in order to ensure that each cell was represented in the sample at the same proportion as in the total population of organisations.
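The proportional allocation described above can be illustrated with a brief sketch in Python. The DataFrame and its column names (size_band, industry, sector) are hypothetical stand-ins for the stratification variables, not the actual structure of the Kompass database.

  # Minimal sketch of proportional stratified sampling across the 42 cells.
  # `population` and its column names are hypothetical stand-ins.
  import pandas as pd

  def stratified_sample(population: pd.DataFrame, n_total: int, seed: int = 1) -> pd.DataFrame:
      samples = []
      for _, cell in population.groupby(["size_band", "industry", "sector"]):
          # Each cell receives a share of the sample proportional to its share
          # of the population, rounded to a whole number of organisations.
          n_cell = round(n_total * len(cell) / len(population))
          if n_cell > 0:
              samples.append(cell.sample(n=min(n_cell, len(cell)), random_state=seed))
      return pd.concat(samples)

  # e.g., sample = stratified_sample(population, n_total=100)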

In order to achieve a useable sample of 100 organisations, sampling was done in an iterative process in which re-sampling was performed as necessary to replace organisations which refused to participate or repeatedly failed to return telephone calls. A total of 152 organisations were contacted in order to obtain a usable sample of 100, yielding a 66% participation rate. Of the 52 organisations contacted but not included, most (32) refused to participate, saying that they were too busy or uninterested. The remaining 20 were ineligible to participate, either because an individual knowledgeable about the firm's selection practices was unidentifiable/unavailable, or because the organisation had changed in such a way that it no longer met sample selection criteria (e.g., dropped below 200 employees; merged with another firm).

Recruitment firm sample. In order to be consistent with the sampling procedure used in the earlier comparison survey (Taylor et al., 1993), and to minimize travel/toll call costs, recruitment firms were randomly selected from two urban regions: Auckland and Hamilton. These cities represent two of approximately six predominant business regions in New Zealand, and selection practices of recruitment firms located in these two cities were believed to be representative of recruitment firm practices elsewhere in New Zealand. In many cases, recruitment firms contacted in these cities were regional offices of multi-location recruitment consultancies.

The population of recruitment firms in these locations was identified using the relevant telephone directories, under listings for "personnel consultants" and "management consultants". Firms were contacted, in a randomised order, until the desired sample of 20 Auckland and 10 Hamilton firms, both eligible and willing to participate, was obtained. In total, 73 firms were contacted. Of the 43 firms contacted but not included in this survey, 38 were not eligible to participate because they did not perform managerial recruitment, and the remaining five declined to participate. Thus the participation rate for eligible recruitment firms was 86%. These firms reported filling between two and 500 positions, on average, per year.

Procedure

Organisational sample. Organisations were contacted by telephone through the corporate receptionist, who was asked to identify the person responsible for personnel selection in the organisation. In most, but not all, cases, interviewees were human resource staff. Once agreement to participate was obtained, the telephone interview was conducted immediately, or an appointment was made to conduct the interview at another time. Interviews ranged from 15 to 45 minutes in length, averaging 25 minutes.

Recruitment firm sample. Firms were asked to identify recruitment consultants/managers who were knowledgeable about the selection practices used by the firm, and once eligibility and willingness to participate were established, interviews were scheduled. All but two of the interviews with the 30 respondents were conducted in person; the remaining two were conducted by telephone.

Survey Questions

Interview questions asked of the organisation and recruitment firm samples were similar, and generally consistent with questions asked in the earlier comparison survey (Taylor et al., 1993). As in the previous survey, respondents were asked: (1) if, and how, job requirements are established (i.e., job analysis) prior to selection; (2) the extent to which various selection methods are used; and (3) their views about the predictive validity of various selection methods.

Questions in two additional areas were covered in the present survey: (1) how, specifically, selection methods are implemented; and (2) what recent changes, if any, have been made to selection procedures. Details about how selection methods are used were gathered because, for some selection methods, predictive validity has been found to be influenced by how the method is used. For example, the validity of employment interviews has been found to be a function of levels of interview structure (Huffcutt & Arthur, 1994), and personality test validity has been found to be highest when particular personality traits have been identified through job analysis (Tett et al., 1991).

Where possible, survey questions were structured to follow an open-ended format, out of concern that closed-ended questions might lead respondents to provide socially desirable answers. For example, asking respondents whether their organisation performed job analyses prior to each staff selection might have led respondents to answer in the affirmative, even if they rarely did so. Consequently, respondents were asked, instead, to describe how they typically determine job requirements.

Finally, questions asked of organisational respondents concerned staff selection practices used for filling managerial and non-managerial jobs separately, since their procedures often differ. Recruitment firm respondents were asked only about their procedures for managerial jobs because they rarely recruit staff for non-managerial positions.

Results

Interview results are summarized below, organised in sections covering: (1) how job requirements are determined; (2) prevalence of selection method use; (3) how particular selection methods are used; (4) practitioners' perceptions about the validity of alternative selection methods; and (5) recent changes in selection practices.

Determining Job Requirements

Most organisations reported using relatively informal approaches for determining job requirements, rather than job analysis methods. Slightly over half the sample (54/100) reported that their organisations determined job requirements by looking at existing job descriptions, and for approximately half of these organisations (25/54), respondents did not know how job descriptions had been developed.

The most commonly-reported method of determining job requirements was to infer job requirements from job tasks listed in job descriptions. Job analysis methods, such as questionnaires, the critical incidents technique and the repertory grid technique, were reportedly used by only a few of the 100 organisations surveyed.

Consistent with the findings in 1991, job requirements were determined primarily by immediate managers and human resources/personnel staff. Approximately one-quarter of respondents indicated that job incumbents or other managers are often involved in determining this information.

Ninety-five percent of organisations commented that organisational fit plays a part in determining job requirements. When asked what proportion of job requirements focused on fit with the organisation (versus fit with the specific job), respondents reported, on average, that 37% of overall job requirements were focused on organisational fit.

Recruitment consulting firms. Virtually all (28/30) recruitment firm respondents reported that they collected person specification and job description information from their clients. Sources of this information were normally a senior manager or managing director (33%), or job incumbents and superiors (30%).

Use of Selection Methods

Numbers and percentages of large organisations and recruitment consultancies using various personnel selection methods are presented in Table 1. As indicated in this table, virtually all large organisations and recruitment firms typically use employment interviews, personal history information (primarily obtained through curricula vitae), and reference checks for both managerial and non-managerial appointments. While over two-thirds of recruitment consulting firms now regularly use occupational tests, particularly personality tests, slightly less than half of large organisations are using such tests for managerial positions, and even fewer organisations use psychological tests for non-managerial positions. Similarly, assessment centres/work sample tests were more likely to be used by recruitment consultancies than by large organisations (37% of recruitment firms, but only 10-14% of organisations).

Comparison between industries in the organisational sample revealed that occupational tests are being used widely for management selection amongst production firms, and less within other industries. For non-management positions, major differences included very little use of cognitive ability tests amongst building and public works, trading, hotel, and commercial industries; use of assessment centres by many building and public works organisations but few organisations in other industries; and more frequent use of some less common selection tools by manufacturing and transport and communication firms than by firms in other industries. The size of the organisation appeared to make little difference to the types of selection tools used. Similarly, few differences were found between private and public organisations, except that public organisations were more likely to use assessment centres for non-management positions, and occupational testing was more prevalent for managerial appointments within private organisations.

How Selection Methods are Used

Respondents were asked to provide information concerning how they implemented various selection tools as a means of determining whether recent advances in selection research had been adopted.

Interviews. Interview validity, reliability, and fairness have all been linked to the degree of interview structure (Conway et al., 1995; Huffcutt & Roth, 1998; McDaniel et al., 1994). We were reluctant to simply ask respondents whether their organisations conducted "structured" employment interviews because the term may have different meanings, and because of the social desirability of affirmative responses. Instead, we asked about how interviews are conducted in terms of various facets of interview structure, such as how interview questions are determined, how they are asked, and how candidates' responses are judged. We believed that these more specific questions would yield a more accurate picture of the degree to which New Zealand organisations and recruitment consulting firms have adopted structured interviewing approaches.

Participants were asked how they determined their interview questions. Seventy-one percent of organisations and 40% of recruitment firms said that their organisations derived predetermined interview questions from person specification information. Fourteen percent of organisational respondents said that they typically do not plan interview questions beforehand and, instead, ad-lib questions during the interview. Forty-three organisational respondents indicated that they predominantly ask interview questions of a particular format, and most of these (35) indicated that they used behaviourally-based questions (e.g., "Can you think of a time when ... what did you do?"), while the remaining eight used primarily situational/hypothetical questions (e.g., "Assume that you were faced with the following situation ... what would you do?"). Similarly, 43% of recruitment firm respondents (13/30) said that they regularly use behaviourally-based interview questions.

Most of the organisational and recruitment firm respondents indicated that they asked at least some generic interview questions that were not job-specific (70/100 organisational respondents; 23/30 recruitment firm respondents). The most commonly asked generic interview questions reported by organisational respondents included asking about candidates' strengths and weaknesses (27/100 regularly ask this question); asking candidates to describe their experience and skills (21/100); asking about leadership/teamwork ability (18/100); and asking why the candidate applied for this particular job/organisation (15/100).

Another aspect of the amount of structure in employment interviews involves the degree of freedom interviewers are given in how they ask questions, ranging from no freedom at all (i.e., interviewers must ask all questions as planned, without the use of follow-up probes) to complete freedom (no instructions at all). When asked how much freedom interviewers are given in how they ask interview questions, 28/100 organisational respondents indicated that interviewers are given complete freedom in how they ask questions, and 22/100 instruct interviewers to ask the same questions in the same order for all applicants. The remaining 50 reported a semi-structured approach to asking questions, e.g., same core questions get asked of all applicants, but interviewers are free to vary order of questions or to ask follow-up questions.

Another aspect of interview structure concerns how interviewers make judgements about applicants, such as whether, and how, applicant rating scales are used. Just over half of the organisational respondents (52%) indicated that applicants' responses in interviews are rated on either dimensions/competencies, or on each interview question. Predominantly these organisations use rating scales with generic scale anchors (e.g., "very effective", "somewhat effective"), rather than behavioural anchors tailored to each competency or question. Less use of rating scales was found in recruitment firms: While 50% indicated that candidates are judged against selection criteria, only 10% mentioned that candidates are actually rated. A substantial proportion of recruitment firms (23%) reported that candidates are assessed on intuition, feelings and experience.

Personal history information. As indicated in Table 1, respondents from virtually all organisations and recruitment firms reported using curricula vitae (CVs) in obtaining personal history information from applicants. We were interested in determining the extent to which CVs were evaluated systematically, such as by rating applicant CVs on job-related competencies, versus "holistically". When asked how CVs were judged, 83% of organisational respondents indicated that no formal method of comparing/rating applicants against job competencies is used, with 16% using a competency-rating system. Just over half of the recruitment firm respondents (16/30), on the other hand, reported that their firms assess applicants' CVs against selection criteria established during job analysis.

Of the 46 organisations which reported using application forms, almost all (42) used a single form for all jobs. While many of these organisations reported placing differential weights on various aspects/items of their application forms, only one of the 46 reported using a systematic weighted application form method. Only three recruitment firms reported using application forms at all, and so no follow-up analyses were possible on how recruitment firms use application forms.

Reference checks. Virtually all organisational and recruitment firm respondents said that their organisations regularly check applicants' references, and all of these reported using telephone reference checks. Many of the organisational respondents indicated that their organisations also use written reference information, such as reading letters of recommendation (61%), or asking referees to complete standard referee forms (9%). Only nine percent of organisations reported using structured means of assessing reference check information, such as rating competencies individually.

Cognitive ability tests. Cognitive ability tests were used by over half (54%) of organisations, particularly for managerial selection (47%). The ability tests most frequently used by these organisations were Saville & Holdsworth Limited's numerical tests, used by 30% of the organisations, and Saville & Holdsworth Limited's verbal tests, used by 26% of organisations. A host of other tests were each used by four or fewer organisations, such as the Watson-Glaser Critical Thinking Appraisal, the Wonderlic Personnel Test and Raven's Progressive Matrices. In 16% of cases respondents did not know the names of the tests used on applicants, often because this function was outsourced to consultants. Of the organisations using ability tests, most (34/54) use cut-off scores, seven use a "top-down" approach, and the remaining 20 judge ability test scores subjectively, in relation to norms.
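The difference between the cut-off and top-down decision rules mentioned above can be shown with a small illustrative sketch; the applicant labels and scores below are invented for illustration, not survey data.

  # Illustrative contrast of cut-off versus top-down use of ability test scores.
  scores = {"Applicant A": 34, "Applicant B": 41, "Applicant C": 28, "Applicant D": 38}

  # Cut-off rule: everyone at or above a pre-set score stays in contention.
  cut_off = 30
  passes_cut_off = [name for name, s in scores.items() if s >= cut_off]

  # Top-down rule: the highest scorers are appointed until vacancies are filled.
  n_vacancies = 2
  top_down = sorted(scores, key=scores.get, reverse=True)[:n_vacancies]

  print(passes_cut_off)  # ['Applicant A', 'Applicant B', 'Applicant D']
  print(top_down)        # ['Applicant B', 'Applicant D']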

Among recruitment consulting firms, the use of ability tests was found to be more prevalent, with 64% of consultants reporting that they use cognitive ability tests. Again, the Saville & Holdsworth Limited verbal and numerical ability tests were by far the most popular, used by 11 of the 30 consultants, with three or fewer firms using any specific alternative test.

Personality tests. Forty-five organisations indicated that they use personality questionnaires as part of their selection processes for management positions, non-management positions, or both. A variety of tests were reportedly used, the most prevalent being Saville & Holdsworth's Occupational Personality Questionnaire (OPQ). Others, each used by five or fewer organisations, included the Myers-Briggs Type Indicator, in-house developed tests, the 16 Personality Factor Questionnaire (16PF), the California Psychological Inventory, the Adult Personality Inventory and the Omnia Environment Compatibility Rating. Eleven organisations were unaware of which personality questionnaires were administered, ordinarily because administration of these tests was carried out by an external consultant.

Only two of the 45 organisations using personality tests reported that they focused on specific, job-relevant competencies in assessing candidates' personality test scores. The majority (26 of the 45 organisations using personality tests) said they view personality test results as a whole (i.e., looking at each candidate's entire personality profile), judging candidates' suitability subjectively.

The vast majority of recruitment firms (89%) said that they regularly use personality tests, most (13/25) using Saville & Holdsworth Limited's Occupational Personality Questionnaire, with others using tests such as the California Psychological Inventory, Preview, the Myers-Briggs Type Indicator, the Omnia Profile, and the 16 Personality Factor (16PF) Questionnaire.

Assessment centres. We asked both organisational and recruitment firm respondents who reported that their organisation used assessment centres what exercises their assessment centres included. Responses indicated that a variety of exercises are used, with virtually no consistent patterns. Eleven of the 16 organisations using assessment centres (69%) reported inclusion of at least one work sample exercise, while five included leaderless group discussions, five role-plays, four in-basket exercises and four presentations. All eight of the recruitment firms who reported using assessment centres include an in-basket (i.e., in-tray) exercise, while five reported using leaderless group discussions, five team-building exercises, three presentations, and two role-plays. In most (12/16) organisations in which assessment centres have been used, exercises were developed in-house.

Principles published by the Task Force on Assessment Center Guidelines (Joiner, 2000) led us to ask organisational respondents using assessment centres a few additional questions about how these are implemented: (1) whether assessment centres have been designed based on job analysis; (2) how assessors judge/rate candidates; and (3) what training is provided to assessors. Most (11/16) organisations using assessment centres indicated that they had been designed based on either a job analysis or at least an understanding of competencies or tasks related to target jobs. Only half (8/16), however, have assessors judge candidates using ratings of competencies, and fewer (7/16) said that they provide assessors with training. Those which do provide training reported an average training period of one day.

Perceived Validity of Selection Methods

Organisational and recruitment firm respondents were asked how valid they believed each selection method was in predicting future job performance, in comparison to alternative methods, using a three-point scale (comparatively low, medium or high validity). Table 2 presents a rank-ordering of selection methods in each sample, based on the relative proportions of responses in each of the three response categories for each selection method. Substantial agreement exists between the two samples, as indicated in the similarities of rankings.
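As a rough illustration of how such a ranking might be computed, the sketch below codes the low/medium/high responses as 1 to 3, averages them for each method, and ranks methods by mean rating; the response counts used here are invented, not the survey data, and the actual aggregation used may have differed.

  # Hypothetical example: rank selection methods by mean perceived validity,
  # with low/medium/high responses coded 1/2/3. Counts are invented.
  responses = {
      "Assessment centres":   {"low": 2, "medium": 8, "high": 20},
      "Selection interviews": {"low": 3, "medium": 12, "high": 15},
      "Reference checks":     {"low": 6, "medium": 14, "high": 10},
  }

  def mean_rating(counts: dict) -> float:
      coded = {"low": 1, "medium": 2, "high": 3}
      return sum(coded[k] * n for k, n in counts.items()) / sum(counts.values())

  ranking = sorted(responses, key=lambda m: mean_rating(responses[m]), reverse=True)
  for rank, method in enumerate(ranking, start=1):
      print(rank, method, round(mean_rating(responses[method]), 2))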

Interestingly, perceptions about the relative validity of cognitive ability and personality tests differed between these two samples. Organisational respondents generally perceived cognitive ability tests to be more valid than personality tests. While recruitment firm respondents were initially asked only about "psychological testing" as a whole, they were then asked a follow-up question concerning the relative validity of cognitive ability and personality tests. Most recruitment firm respondents believed either that these two test types have about equal predictive validity (11/30 respondents) or that personality tests have superior validity (7/30), while only three recruitment firm respondents indicated that cognitive ability tests have higher validity. Five of the remaining nine recruitment firm respondents reported that they did not know the relative validity of these two major types of psychological tests used in personnel selection, and four did not respond to the question.

Organisational respondents who indicated that they believed a particular selection method they do not use (e.g., assessment centres, psychological tests) has high predictive validity were asked why they do not use it. The most commonly cited reasons were the cost and time required.

Recent Changes in Selection Practices

Respondents were asked what, if any, changes to staff selection practices have been made since 1991, when survey data were collected for the Taylor et al. (1993) study, and these results are presented in Tables 3 (organisations) and 4 (recruitment firms). Seventy-three of the 100 organisational respondents indicated that at least some changes had taken place. The most frequently reported change, made by 42 organisations, was that the selection process had been standardised across the organisation. Other changes included introducing rating of applicant information, competency-based selection, inclusion of cognitive ability and personality tests, assessment centres, increasing structure of selection methods, tying competencies to organisational core competencies and culture, and using consultants. Commonly cited reasons for making these changes to selection practices were to improve validity, objectivity, and fairness; and to avoid potential litigation.

Recruitment firm responses (Table 4) were broadly similar to those made by organisational respondents, with the most frequently reported changes including the introduction of psychological testing and increased structure in the selection process as a whole, and more specifically in interviews and reference checks. Many of these respondents identified the Human Rights Act and the Privacy Act as having prompted changes to their selection practices.

A number of organisations and recruitment firms indicated that they were anticipating further changes to their selection practices. For example, 12 organisations were considering introducing cognitive ability tests, 10 personality questionnaires, 7 structured interviews and 5 assessment centres. A similar pattern was found for recruitment firms, where 5 were considering alternative psychological tests, 2 were considering assessment centres, and 1 was considering introducing behavioural interview techniques.

Discussion

A comparison of the present survey results with those reported by Taylor et al. (1993) suggests shifts in practice over the past decade in several areas. Most striking is the marked increase in organisations' use of both cognitive ability and personality tests, and recruitment firms' increased use of personality tests. The proportions of organisations using cognitive ability tests for selecting both managerial and non-managerial staff have more than doubled, from 20% (managerial) and 15% (non-managerial) a decade ago to 47% and 31%, respectively, at present. A less dramatic increase occurred in recruitment firms' use of personality tests, from 67% to 89% at present.

New Zealand organisations are now more likely to use psychological tests in personnel selection than in most other countries. In a survey comparing staff selection practices across 18 countries, Ryan et al. (1999) found use of cognitive ability tests to be more prevalent in New Zealand than in just three other countries (Belgium, the Netherlands, and Spain), and use of personality tests in selection more prevalent in only four other countries (Belgium, South Africa, Spain, and Sweden).

The recent increase in use of psychological tests may have resulted, in part, from favourable research reviews published by academics. For example, Fisher and Boyle (1997) published quite an extensive and very positive review of recent occupational personality test research in the Asia Pacific Journal of Human Resources, a journal distributed widely to members of the Human Resource Institute of New Zealand.

A more likely influence on New Zealand organisations' adoption of psychological testing, however, is the strong presence over the past decade of test-selling firms such as Saville & Holdsworth Ltd, and organisations picking up on practices used by other organisations. Johns (1993) has suggested that, while academic psychologists frame decisions about psychology-based personnel practices from a technical perspective, practising managers are more likely to see personnel practices as matters of administrative style, and often choose practices based on imitation of other organisations. Similarly, Terpstra and Rozell (1997) have suggested that practitioners probably decide on staff selection practices based on their perceived credibility, relevance and practical usefulness, rather than on empirical findings.

Another noteworthy change reported by many organisational and recruitment firm respondents was that they have made their selection processes more standardised and based on job-related competencies. Similarly, there was a slight increase in organisations' use of rating scales in interviews (one component of structured employment interviews), from approximately one-third of organisations in the early 1990s to about half at present; and more recruitment firms appear to be basing interview questions on job-related selection criteria (from 10% in the early 1990s to about half at present). These changes appear to be bringing practice closer to what selection researchers generally advocate, particularly if competencies are identified using sound job analysis techniques.

Finally, changes in practitioners' perceptions about the predictive validity of some selection methods are apparent when comparing the present findings to those reported in Taylor et al. (1993). Both organisational and recruitment firm respondents viewed assessment centres as the most valid means of selecting staff in the present surveys, whereas a decade ago these were seen by organisational respondents as less valid than interviews, personal history information, reference checks, and cognitive ability tests.

The other change in perception about validity concerns recruitment firm respondents and employment interviews. While interviews were seen in the early 1990s as quite valid indicators of performance by most organisational respondents, they were evaluated by most recruitment firm respondents as having relatively poor validity. This latter perception appears to have changed, however, with most recruitment firm respondents now indicating that, next to assessment centres, employment interviews have high validity.

Closing the Remaining Research-Practice Gaps in Personnel Selection

A comparison of the results of the present survey with those of the early 1990s suggests that the gap between personnel selection research and current practice in New Zealand has, in some areas, narrowed. While it is encouraging to see greater use of occupationally-oriented psychological testing, greater structure to selection processes, and use of job-related competencies in the selection process, areas remain where practice diverges from what research would suggest to be sound practice. We highlight these below.

Job analysis. As found a decade ago, most organisations seem to continue using rather informal means of determining job requirements, rather than using job analysis methods developed in the field of organisational psychology. Of particular concern here is the fact that studies which have established predictive validities for a variety of selection methods (e.g., structured interviews, personality tests, assessment centres, weighted application blanks) have all been based on comprehensive job analyses, and so use of any selection method without a thorough job analysis would lead to suspect predictive validity.

Furthermore, New Zealand's Human Rights Act (1993) implicitly requires that personnel selection practices which may adversely affect one or more of the groups protected by that legislation must be defended as clearly job-related. A thorough job analysis may be required to provide such a defence (Thompson & Thompson, 1982).

Even for very small organisations in New Zealand with limited budgets for job analyses, resources have become increasingly available. With the recent proliferation of graduate training programmes in organisational psychology through the university system, increasing numbers of graduate students are now looking for applied projects, such as job analyses, to perform in conjunction with their studies. The US Department of Labor's Occupational Information Network (O*NET) database of job information has recently become available on line (http://www.onetcenter.org/) for organisational psychologists and human resource practitioners to use or download, at no cost, providing a comprehensive source of job information for a wide range of jobs.

Interviews. We found evidence in the present surveys of many organisations using some aspects of structured employment interviews, such as behavioural and situational interview questions. Other aspects, however, were reportedly used infrequently. For example, many organisational and recruitment firm respondents reported that they use 'generic' interview questions (i.e., the same questions, regardless of position). Few respondents indicated that they use rating scales in judging candidates. Even when rating scales were used, they were typically dimensional (i.e., candidates are rated on job-related dimensions/competencies) with generic scale anchors (e.g., "somewhat effective"). Recent research has suggested that even higher interviewer reliability and predictive validities can be gained from structured employment interviews when question-specific rating scales are used (Taylor & Small, in press).

Psychological tests. While the increased use of occupationally-oriented tests is encouraging, we were disappointed to learn that most personality test users assess candidates' entire personality profile, rather than focusing on job-related personality traits. A major meta-analysis of personality test validities (Tett, Jackson & Rothstein, 1991) found that validities were substantially higher when the assessment focused on specific personality traits identified through job analysis.

A second noteworthy research-practice gap concerns recruitment consultants' beliefs about the relative validity of tests of personality versus cognitive ability. Two-thirds of the recruitment firm sample thought that personality tests were more valid than, or equally as valid as, cognitive ability tests. This finding is surprising, given the vast amount of validation research on these two types of occupational assessment, and the consistent finding that cognitive ability tests are among the best predictors of job performance across a wide variety of job types (see Schmidt & Hunter, 1998 for a recent review). While occupationally-oriented personality variables have shown promise in recent years (Fisher & Boyle, 1997), validities have rarely approached levels found for cognitive ability tests.

Assessment centres/work sample tests. About half of the organisations using assessment centres or work sample tests reported that they rated candidates on dimensions/competencies, and that they actually trained assessors. Even the organisations which indicated that they trained assessors provided only one day of training, on average. These practices are of some concern in light of the more rigorous standards applied in studies which have established the validity of assessment centres, as well as the guidelines established by an international taskforce (Joiner, 2000).

Interestingly, both organisational and recruitment firm respondents considered assessment centres to be the most valid predictor of job performance, suggesting that it is the high cost of implementation, rather than doubts about validity, that limits their wider use. In fact, the validity of assessment centres for predicting supervisory ratings of job performance has been found to be only moderately high (corrected mean validity of .36; Gaugler, Rosenthal, Thornton & Bentson, 1987).

Further Research Needed to Aid Local Selection Practices

Based on personnel selection practices identified in the present survey, areas where further research would be helpful can be identified. With the majority of organisations and recruitment firms using informal methods of establishing job requirements, future research could aim at determining what, if any, decays in validity occur as selection methods are implemented with less rigour. For example, virtually all studies on which estimates of the validity of structured employment interviews have been based included a critical incident job analysis as a means of specifying performance dimensions and developing interview questions. Few organisations in our sample reported using critical incident job analysis, and so the validity of using structured employment interviews, as commonly practiced in New Zealand, remains uncertain.

Similarly, few organisations reported using weighted application blanks (WABs) or biodata, methods of collecting personal history information with relatively high validity. Instead, virtually all organisations surveyed use either CVs or unweighted application forms. Both WABs and biodata have substantial development costs and sample size requirements, beyond what is practical for all but the very largest organisations in New Zealand. Surprisingly little research has been conducted on structured ways of evaluating CVs, on developing application forms that do not require large criterion samples, and on developing generalizable/transportable WAB items available to organisations through the public domain, all of which would be more suitable for small to mid-size organisations with limited staff selection budgets.

Another area requiring further research is the establishment of incremental validities when multiple selection methods are used. Many organisations and recruitment firms reported using combinations of interviews, personal history information, reference checks and psychological tests in selecting staff, which presumes that each selection method adds incremental information beyond the other methods in the selection process. To date, research has established the incremental validity of some personality traits over cognitive ability (Schmidt & Hunter, 1998), but little is known about the extent to which personality traits, structured employment interviews and reference checks make unique contributions of information beyond each other.
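Incremental validity is conventionally estimated as the gain in variance explained (delta R squared) when a further predictor is added to a regression that already contains the first. The sketch below illustrates the calculation with simulated data; the effect sizes are arbitrary and not drawn from the studies cited above.

  # Sketch of incremental validity: gain in R^2 when an interview score is
  # added to a model already containing cognitive ability. Data are simulated.
  import numpy as np
  from sklearn.linear_model import LinearRegression

  rng = np.random.default_rng(0)
  n = 300
  ability = rng.normal(size=n)
  interview = 0.4 * ability + rng.normal(size=n)              # correlated predictors
  performance = 0.5 * ability + 0.2 * interview + rng.normal(size=n)

  X1 = ability.reshape(-1, 1)
  X2 = np.column_stack([ability, interview])
  r2_ability = LinearRegression().fit(X1, performance).score(X1, performance)
  r2_both = LinearRegression().fit(X2, performance).score(X2, performance)

  print(f"Incremental validity (delta R^2) of the interview: {r2_both - r2_ability:.3f}")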

Finally, research is needed on ethnic group differences within New Zealand on commonly used selection methods. With both organisations' and the Human Rights Commission's interest in ensuring that selection methods do not inadvertently disadvantage particular ethnic groups, such information is critical to collect and publish. In cases where meaningful group differences between ethnic groups are found for particular selection methods, job performance scores also need to be assessed to test for the presence of test bias (i.e., differential prediction/validity).
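One common way to test for differential prediction is moderated regression: job performance is regressed on the predictor, group membership, and their interaction, with a significant interaction indicating that the predictor-criterion slope differs across groups. The sketch below uses simulated data and hypothetical group labels purely to illustrate the analysis.

  # Sketch of a differential-prediction (test bias) check via moderated
  # regression. Data and group labels are simulated for illustration only.
  import numpy as np
  import pandas as pd
  import statsmodels.formula.api as smf

  rng = np.random.default_rng(0)
  n = 400
  df = pd.DataFrame({
      "group": rng.choice(["A", "B"], size=n),   # hypothetical group labels
      "score": rng.normal(size=n),               # selection-method score
  })
  df["performance"] = 0.4 * df["score"] + rng.normal(size=n)

  # Intercept differences appear in the C(group) term; slope differences
  # appear in the score:C(group) interaction.
  model = smf.ols("performance ~ score * C(group)", data=df).fit()
  print(model.summary())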

To date, little information exists about ethnic group differences on commonly used selection methods. Gibb and Taylor (in press) found no appreciable ethnic group differences in structured interview scores in a sample of European/Pakeha, Maori and Pacific Island social workers within a large social service agency in New Zealand. Similar studies on cognitive ability and personality tests would be helpful, particularly given the sizeable ethnic group differences found overseas for cognitive ability tests.

Conclusions

While there continue to be areas in which research and practice in personnel selection are quite far apart, we are less pessimistic about the research-practice gap than other researchers who have reported on surveys of selection practices in New Zealand (Harris et al., 1996; Johnson, 2000) and Australia (Di Milia & Smith, 1997). We attribute this difference in conclusion to researchers' understanding of the recent personnel selection literature and how they have asked practitioners about their staff selection practices.

Di Milia and Smith (1997), Harris et al. (1996), and Johnson (2000) all bemoaned the fact that practitioners continue to use employment interviews, reference checks and application forms in view of their (supposed) low predictive validity. Validation research has demonstrated, however, that all three of these selection methods can have varying degrees of predictive validity, depending on how they are implemented (for interviews, see McDaniel et al., 1994; for reference checks, see Pajo, 1996). Thus survey respondents need to be asked how they have implemented such selection methods in order to determine the likely validity of their approaches in practice.

In order to continue closing the research-practice gap in personnel selection, researchers and practitioners will need to continue learning about each other's work. As a means of ensuring that practitioners are familiar with staff selection research, Terpstra and Rozell (1997) have suggested that organisations should hire human resource staff with advanced degrees in either organisational psychology or human resource management. With the recent growth in graduate training programmes in organisational psychology in New Zealand, human resource departments and consultancies are likely to be staffed in the future by growing numbers of organisational psychology graduates.
Table 1. Use of Personnel Selection Methods Among Large
Organisations and Recruitment Consulting Firms

                                   Large organisations          Recruitment firms
                                                                  (Management
                                  Non-management   Management   positions only)
Selection method                    positions      positions
                                    (N=100)         (N=96)          (N=30)

Interviews                         100 (100%)      96 (100%)       30 (100%)

Personal history information
  Curricula vitae                   92 (92%)       89 (93%)        29 (97%)
  Application forms                 45 (45%)       30 (31%)         3 (10%)
  Reference checks                  98 (98%)       93 (97%)        30 (100%)

Occupational tests
  Cognitive ability tests           31 (31%)       45 (47%)        18 (64%)
  Non-cognitive ability tests        8 (8%)         6 (6%)          1 (3%)
  Personality tests                 24 (24%)       44 (46%)        25 (89%)

Assessment centres/work sample      14 (14%)       10 (10%)        11 (37%)
  tests

Other selection methods *           24 (24%)       24 (25%)         1 (3%)

* Includes medical and drug tests, value orientation test, internal
performance information, audio checks, written interview schedules,
performance reviews, sample of past performance, credit checks.
Table 2. Rank Order of Selection Methods based on (Averaged)
Respondents' Perceptions of the Validity of Each Method

Organisational Respondents          Recruitment Firm Respondents
(N = 100)                           (N = 30)

1.   Assessment centres             1.   Assessment centres
2.   Selection interviews           2.   Selection interviews
3.   Cognitive ability tests        3.   Reference checks
4.   Reference checks               4.   Psychological testing
5.   Personal history information   5.   Personal history information
6.   Personality tests
7.   Letters of recommendation

Selection methods are ranked such that "1" indicates the selection
method ranked as having the highest perceived validity. Questions
differed slightly across the two samples: recruitment firms were
not asked about letters of recommendation, and were asked about the
validity of psychological tests as a whole, rather than personality
and cognitive ability tests separately.
Table 3. Changes Reported in Organisations' Staff
Selection Practices in Past Seven Years

Changes in Selection Practices                  # of
                                            Organisations

Have standardised the selection process          42
  across the organisation
Applicant information is now rated               23
Have introduced a competency-based               21
  selection process
Have introduced cognitive ability tests          16
Have introduced personality tests                13
Have introduced assessment centres                8
Have increased the degree of structure in         6
  selection methods
Have incorporated organisation's core             5
  competencies/culture into selection
Other changes (reported by fewer than            24
  5 organisations)
Organisation reported having made no             27
  significant changes

Some organisations responded with more than one change, and
so total > 100
Table 4. Changes Reported in Recruitment Firms' Staff
Selection Practices in Past Seven Years

Changes in Selection Practices                          # of
                                                     Recruitment
                                                        Firms

Increased use of psychological testing                   12
Selection process more focused and systematic             8
More structured and thorough reference checking           5
Introduced (structured) behavioural/competency-           5
  based interviewing
Other changes (reported by fewer than 5 recruitment      12
  firms)

Some recruitment firms responded with more than one
change, and so total > 30


Acknowledgement

We are grateful to anonymous reviewers for helpful suggestions on this paper. This paper is based, in part, on data collected for Yvonne Keelty's Masters thesis and Bridget McDonnell's Masters dissertation.

References

Barrick, M.R., & Mount, M.K. (1991). The big five personality dimensions and job performance: A meta-analysis. Personnel Psychology, 44, 1-26.

Conway, J.M., Jako, R.A., & Goodman, D.F. (1995). A meta-analysis of interrater and internal consistency reliability of selection interviews. Journal of Applied Psychology, 80, 565-579.

Di Milia, L., & Smith, P. (1997). Australian management selection practices: Why does the interview remain popular? Asia Pacific Journal of Human Resources, 35(3), 90-103.

Fisher, C.D., & Boyle, G.J. (1997). Personality and employee selection: Credibility regained. Asia Pacific Journal of Human Resources, 35(2), 26-40.

Gaugler, B.B., Rosenthal, D.B., Thornton, G.C., III, & Bentson, C. (1987). Meta-analysis of assessment center validity. Journal of Applied Psychology, 72, 493-511.

Gibb, J.L., & Taylor, P.J. (in press). Past experience versus situational employment interview questions in a New Zealand social service agency. Asia Pacific Journal of Human Resources.

Harris, N.J., Toulson, P.K., & Livingston, E.M. (1996). New Zealand personnel consultants and the selection process. Asia Pacific Journal of Human Resources, 34(2), 71-87.

Hough, L.M., Eaton, N.K., Dunnette, M.D., Kamp, J.D., & McCloy, R.A. (1990). Criterion-related validities of personality constructs and the effect of response distortion on those validities. Journal of Applied Psychology, 75, 581-595.

Huffcutt, A.I., & Arthur, W., Jr. (1994). Hunter and Hunter (1984) revisited: Interview validity for entry-level jobs. Journal of Applied Psychology, 79, 184-190.

Huffcutt, A.I., & Roth, P.L. (1998). Racial group differences in employment interview evaluations. Journal of Applied Psychology, 83, 179-189.

Huffcutt, A.I., & Woehr, D.J. (1999). Further analysis of employment interview validity: A quantitative evaluation of interviewer-related structuring methods. Journal of Organizational Behavior, 20, 549-560.

Hunter, J., & Hunter, R. (1984). The validity and utility of alternative predictors of job performance. Psychological Bulletin, 96, 72-98.

Johns, G. (1993). Constraints on the adoption of psychology-based personnel practices: Lessons from organizational innovation. Personnel Psychology, 46, 569-592.

Johnson, E.K. (2000). The practice of human resource management in New Zealand: Strategic and best practice? Asia-Pacific Journal of Human Resources, 38(2), 69-83.

Joiner, D.A. (2000). Guidelines and ethical considerations for assessment center operations: Taskforce on Assessment Center Guidelines. Public Personnel Management, 18, 457-470.

McDaniel, M.A., Whetzel, D.L., Schmidt, F.L., & Maurer, S. (1994). The validity of employment interviews: A comprehensive review and meta-analysis. Journal of Applied Psychology, 79, 599-616.

Pajo, K. (1996). Reference reports: A meta-analytic review of predictive validity and an experimental study of rating accuracy. Unpublished doctoral thesis, Massey University, New Zealand.

Profile Publishing (1997). Kompass--The Authority on New Zealand Business [electronic database]. New Zealand: Profile Publishing.

Ryan, A.M., McFarland, L., Baron, H., & Page, R. (1999). An international look at selection practices: Nation and culture as explanations for variability in practice. Personnel Psychology, 52, 359-392.

Schmidt, F.L., & Hunter, J.E. (1998). The validity of selection methods in personnel psychology: Practical and theoretical implications of 85 years of research findings. Psychological Bulletin, 125, 262-274.

Taylor, P.J., Mills, A., & O'Driscoll, M.P. (1993). Personnel selection methods used by New Zealand organisations and personnel consulting firms. New Zealand Journal of Psychology, 22, 19-31.

Taylor, P.J., & Small, B. (in press). Asking applicants what they would do versus what they did do: A meta-analytic comparison of situational and past behaviour employment interview questions. Journal of Occupational and Organizational Psychology.

Terpstra, D.E., & Rozell, E.J. (1997). Why some potentially effective staffing practices are seldom used. Public Personnel Management, 26, 483-495.

Tett, R.P., Jackson, D.N., & Rothstein, M. (1991). Personality measures as predictors of job performance: A meta-analytic review. Personnel Psychology, 44, 703-742.
Address for correspondence:
Paul Taylor, PhD
Department of Psychology
Chinese University of Hong Kong
Shatin, New Territories
Hong Kong

Fax: (852) 2603-5019
Email: ptaylor@psy.cuhk.edu.hk
Paul Taylor
The Chinese University of Hong Kong and the University of Waikato

Yvonne Keelty
PricewaterhouseCoopers Ltd, Christchurch

Bridget McDonnell
Greens Industries Ltd, Hamilton
