Development of an attitudes and intentions scale for construction skills training programs.

Attrition is a major problem in construction skills training programs for young adults, with dropout rates ranging between 45% and 65%. Training resources could be used more efficiently if trainees' completion rates were increased. Understanding and measuring the psychological characteristics contributing to attrition allows training practitioners to implement interventions to reduce dropout. This article reports the development of an instrument designed to measure constructs that inform participants' performance and success within the construction training domain. Instrument reliability and validity as well as areas of further research are discussed.

The financial crisis of 2007-2009 (the Great Recession) swelled the ranks of the unemployed by some 30 million people worldwide (Wanberg, 2012). Providing adequate jobs to support and sustain economic growth is a major challenge worldwide (International Labour Organization, 2012). According to the U.S. Department of Labor (USDOL) Employment and Training Administration (2007), "as the demand for workers with specialized skills and training grows, some economists fear that America is facing a 'skills gap,' a situation in which the demand by employers for skilled workers outpaces supply" (p. 23). Construction is an industry sector facing a shortage of skilled workers (Whyte, Elvin, Piper, Slaughter, & Stilley, 2013). In recent years, many jobs have been added to the labor market because of technical skills training (Hemphill & Perry, 2012). However, high attrition is a major problem in training programs (Sabates, 2008).

Workforce development and training consists of publicly and privately funded programs serving participants of widely varying skill levels. Although training organizations aim to facilitate participants' employment or reemployment through skills enhancement, the diverse nature of programs and trainees produces varying levels of success (Weigensberg et al., 2012). A dropout rate of 51.8% was reported for construction apprenticeship programs registered with the USDOL Bureau of Apprenticeship and Training (Bilginsoy, 2003). Although the length of apprenticeship programs (up to 27 months in Bilginsoy's, 2003, study) may partially explain dropout statistics, high attrition rates also plague shorter duration programs. The USDOL Office of Inspector General's (2011) audit of the Employment and Training Administration's YouthBuild program indicated an attrition rate of 47.9%, and the Texas Job Corps programs reported dropout rates exceeding 65% (U.S. Government Accountability Office, 2011). A national survey of more than 340,000 Job Corps participants ages 16 to 24 years revealed a 35% dropout rate within 90 days of commencing training (Ginsburg et al., 2000).

According to Mathieu, Tannenbaum, and Salas (1992), "to maximize the benefits of training, researchers and practitioners must know more than whether [training] worked. Many authors have called for greater attention to why training worked" (p. 828). Empirical investigations of training effectiveness are not new, and occupational training researchers have used various approaches to correlate individual and situational variables with training outcomes. The use of pretraining measures to predict individual behavior has received significant attention; however, many of these measures were general rather than domain specific. The predictive value of instruments is enhanced when items are tailored to a specific domain.

Noe and Schmitt (1986) posited that "determining the specific individual characteristics that influence the effectiveness of training is of utmost importance in order to understand how to increase the likelihood that behavior change and performance improvement will result from participation in training programs" (p. 498). Ginsburg et al. (2000) and Weigensberg et al. (2012) concluded that trainees' commitment, attitude, motivation, and confidence were paramount in the decision to complete or drop out of training. Other researchers (Colquitt, LePine, & Noe, 2000; Martocchio & Judge, 1997; Noe, 1986; Noe & Schmitt, 1986; Noe & Wilk, 1993) have predicted occupational training outcomes (e.g., skills transfer) using locus of control (LOC) and training-domain-level self-efficacy and motivation. For a review of self-efficacy within social cognitive theory, see Bandura (1977, 1997), Colquitt et al. (2000), and Pajares (2002). For a review of LOC, see Rotter (1966) and Spector (1988). Finally, Spector (2011) found that behavior in vocational settings could be predicted using the constructs (intention, attitude, subjective norm, and perceived behavioral control) embedded in Ajzen's (1991) theory of planned behavior.

To address high attrition rates plaguing construction-related skills training programs, researchers should empirically investigate trainees' characteristics to elucidate their role in program performance and completion. Therefore, the purpose of this study was to develop a survey instrument specific to the construction training domain to assess the appropriate constructs among training participants. The development of the survey hinged on a comprehensive review of the literature to identify psychological constructs useful in understanding and predicting human behavior in occupational training. The constructs emerging from this review were training-specific self-efficacy and motivation; work LOC; participants' attitudes toward training; and participants' intention, norm, and perceived behavioral control regarding attendance in an educational setting. In this article, we document the development and testing of the survey instrument to evaluate program effectiveness and reduce trainee attrition. We also discuss the validation and refinement of the instrument and its use by construction training organizations.

METHOD

Figure 1 provides the conceptual framework and the steps we followed during the instrument's development. In this section, we discuss the following development procedures: instrument selection and adaptation, participants and data collection, Phase 1 survey item reduction, exploratory factor analysis (EFA), emergent factor identification and internal consistency reliability, Phase 2 administration, and validity testing.

Instrument Selection and Adaptation

Instrument development began with the identification of appropriate constructs from previous occupational and educational research. We adapted constructs informing training behaviors and outcomes from existing instruments for use within the domain of construction training. A review of the vocational training literature revealed two critically relevant instruments: the Attitudes Toward Training Utility scale (ATTU; Ford & Noe, 1987; Noe & Schmitt, 1986; Noe & Wilk, 1993) and the Training Attitudes Inventory (TAI; R. A. Noe, personal communication, June 15, 2011). The pertinent constructs of the ATTU and the TAI, namely, motivation to train, attitudes toward education, attitudes toward training utility, and training self-efficacy, were adapted to the construction training domain. Spector's (1988) Work Locus of Control Scale (WLCS) was designed to measure perceived LOC within vocational settings. We adapted the WLCS items to assess LOC toward training participation and outcomes. Finally, Ajzen's (n.d.) Class Attendance Opinion Survey (CAOS) was designed to predict students' attendance behavior through measures of perceived behavioral control, attitudes, subjective norms, and intentions. We adapted the CAOS items to assess class attendance in construction skills training. The response format, domain of measurement, source, and internal consistency reliability of the existing constructs are provided in Table 1.

The adaptation process resulted in a Phase 1 instrument containing 98 items distributed across eight subscales. Subscales were intended to measure respondents' perceptions of (a) construction training self-efficacy (CTSE; 20 items; e.g., "When faced with an unfamiliar problem during construction training, I expect to be able to solve it"), (b) construction training motivation (CTM; 19 items; e.g., "If I'm involved in construction training sessions and I can't understand something I get so frustrated that I stop trying to learn"), (c) training locus of control (TLOC; 17 items; e.g., "Most people are capable of successfully completing training if they make the effort"), (d) attitudes toward construction training (ATCT; 14 items [combined items from the ATTU and TAI scales]; e.g., "I think the best way to learn construction skills is by trial and error on the job, as opposed to attending a construction-related training program"), (e) perceived behavioral control in training (PBCT; eight items; e.g., "For me to attend the meetings of a construction training program on a regular basis is [extremely difficult–extremely easy]"), (f) construction training value attitudes (CTVA; six items; e.g., "For me to successfully complete construction training is [extremely valuable–extremely worthless]"), (g) construction training norms (CTN; eight items; e.g., "Most people who are important to me think that [I should–I should not] attend construction training"), and (h) construction training intentions (CTI; six items; e.g., "I will attend the meetings of this construction training program on a regular basis [extremely likely–extremely unlikely]"). All response formats were converted to either a 5-point Likert (1 = strongly disagree, 5 = strongly agree) or bipolar adjective (e.g., 1 = extremely difficult, 5 = extremely easy) scale in the instrument. Survey items were randomly distributed throughout the instrument to prevent item responses from influencing one another (Amedeo, Golledge, & Stimson, 2009).

Higher subscale scores indicate higher perceived self-efficacy toward construction training (CTSE), greater motivation toward construction training (CTM), a higher external LOC orientation (TLOC), and a more favorable attitude toward construction training (ATCT). For the subscales derived from the theory of planned behavior, higher scores indicate a respondent's positive expectations to regularly attend and/or successfully complete construction training (PBCT), a perception that construction training is viewed in a positive way by others (CTN), a positive perception regarding training usefulness (CTVA), and the likelihood that a respondent will expend effort to attend and complete construction-related training (CTI).

Participants and Data Collection

The instrument was administered in two phases to students at a large western university (University 1, N = 418) and a small midwestern university (University 2, N = 127). Data were collected in person via a hard-copy survey administered by research assistants at the beginning or end of a class session. Participants were enrolled in bachelor's degree programs designed to prepare students for management careers in the construction industry. For a full description of construction management education, see American Council for Construction Education (2013). The sample consisted of 1st-, 2nd-, 3rd-, and 4th-year undergraduate students and graduate students enrolled in 100- through 400-level construction management courses. Enrollment was controlled by prerequisite or corequisite courses to minimize the risk of the same students completing the instrument multiple times.

The total potential sample (N = 545) was divided into two groups, creating separate and distinct samples for each phase of the instrument's development. The Phase 1 instrument was administered to 333 potential participants across all courses surveyed at University 1 and University 2. For Phase 1, 247 students returned usable surveys for a response rate of 74.2%. The Phase 2 validated instrument was administered to 212 potential participants at University 1. For Phase 2, 174 usable surveys were returned for a response rate of 82.1%.

Of the Phase 1 respondents (N = 247), 200 (81.0%) were male and 41 (16.6%) were female, with six (2.4%) students choosing not to report their gender. Most of the respondents were majoring in construction management (72.9%, n = 180) and reported being between the ages of 18 and 24 years (86.2%, n = 213). With regard to academic status, 57 (23.1%) were 1st-year students, 50 (20.2%) were 2nd-year students, 61 (24.7%) were 3rd-year students, 73 (29.6%) were 4th-year students, and two (0.8%) were graduate students; four (1.6%) participants chose not to report their year in school. Of the Phase 2 respondents (N = 174), 146 (83.9%) were male and 27 (15.5%) were female, with one (0.6%) student choosing not to report his or her gender. The majority of the respondents were majoring in construction management (87.9%, n = 153) and reported being between the ages of 18 and 24 years (78.2%, n = 136). With regard to academic status, 33 (19.0%) were 1st-year students, 33 (19.0%) were 2nd-year students, 38 (21.8%) were 3rd-year students, 63 (36.2%) were 4th-year students, and six (3.4%) were graduate students; one (0.6%) participant chose not to report his or her year in school.

Phase 1 Survey Item Reduction

Phase 1 data comprised 247 responses to the 98-item instrument. Item reduction initially involved identifying clusters within the instrument's eight subscales (i.e., CTSE, CTM, ATCT, CTVA, TLOC, PBCT, CTI, and CTN; Pett, Lackey, & Sullivan, 2003). For each construct subscale, we developed and analyzed a two-tailed Pearson correlation matrix with p set at .01 (i.e., a separate correlation matrix was produced for the 20-item CTSE subscale, the 17-item TLOC subscale, etc.). Subscale items with correlations of r ≤ .40 are unlikely to share common variance with other items in the same construct and may load on different factors (Pett et al., 2003). However, Field (2009) recommended a lower minimum interitem correlation (r ≥ .30) for the initial investigation of psychological constructs. Although we used the more conservative recommendation (r ≥ .40) as the initial threshold, we applied the r ≥ .30 criterion to subscales with fewer items so that all of the subscales would be adequately represented in the EFA. Next, we investigated items with high interitem correlations (r ≥ .80). Although high correlations are expected between survey items relating to the same construct (Field, 2009), items that are too highly correlated (r ≥ .80) introduce multicollinearity and redundancy (Field, 2009; Pett et al., 2003). Therefore, for highly correlated subscale items that were determined to measure the same construct, we retained only one item.
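As an illustration, the interitem correlation screen described above can be sketched in Python. The data here are simulated (247 respondents answering a hypothetical five-item subscale); the .40 and .80 thresholds are those cited from Pett et al. (2003) and Field (2009).

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated responses: 247 respondents x 5 items on a 1-5 Likert scale.
items = rng.integers(1, 6, size=(247, 5)).astype(float)

r = np.corrcoef(items, rowvar=False)          # item-by-item Pearson matrix

def screen_items(r, low=0.40, high=0.80):
    """Flag items whose largest off-diagonal correlation falls below `low`
    (too little shared variance) and item pairs above `high` (redundant)."""
    k = r.shape[0]
    off = r - np.eye(k)                       # zero out the diagonal
    weak = [i for i in range(k) if np.abs(off[i]).max() < low]
    redundant = [(i, j) for i in range(k) for j in range(i + 1, k)
                 if abs(r[i, j]) >= high]
    return weak, redundant

weak, redundant = screen_items(r)
```

With purely random responses, as here, every item falls below the .40 threshold and no pair exceeds .80; real subscale data would be screened the same way, one correlation matrix per subscale.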

Reduction then involved calculating factor loadings for the subscale items that demonstrated acceptable interitem correlations (.30 ≤ r ≤ .80). We used a loading of .40 or greater on a single factor as the criterion for subscale unidimensionality (Field, 2009). Survey items loading at .40 or greater on multiple factors were considered for removal.

Overall, we removed 38 items through subscale analysis, leaving 60 items for the EFA. Internal consistency reliability coefficients (Cronbach's alphas) were calculated for each subscale. High Cronbach's alphas provide evidence of internal consistency reliability in the adapted constructs (Pett et al., 2003). Alpha levels of .70 to .80 are considered acceptable (Field, 2009). The Cronbach's alphas for the current subscales ranged from .77 (PBCT) to .95 (CTSE), with a Cronbach's alpha of .87 for the 60-item instrument. Table 2 provides the important pre- and post-reduction parameters for the subscale analysis process.
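The Cronbach's alpha statistic reported above can be computed directly from a respondents-by-items matrix. The following is an illustrative implementation with hypothetical data, not the study's dataset:

```python
import numpy as np

def cronbach_alpha(items):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Perfectly parallel items yield alpha = 1.0; here, three duplicate items.
base = np.array([1, 2, 3, 4, 5, 4, 3, 2, 1, 5], dtype=float)
demo = np.column_stack([base, base, base])
```

In practice, alpha would be computed once per subscale (e.g., the 20-item CTSE block) and once for the full instrument.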

EFA

According to Pett et al. (2003), the decision as to the number of factors to be retained [in an instrument] should be based on an artful combination of the outcomes obtained from statistical indicators, the factors' theoretical coherence, a desire for simplicity, and the original goals of the factor analysis project. (p. 167)

Because the instrument was adapted from existing constructs, we evaluated the emergent factor structure using an EFA. We performed the Kaiser-Meyer-Olkin (KMO) test to ensure an adequate sample size (N = 247) for the EFA and attained a KMO of .94 for the 60 retained items. A KMO statistic greater than .90 is considered "superb" (Field, 2009, p. 647) evidence for sample adequacy.
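The KMO statistic compares the sizes of the observed correlations with those of the partial correlations (obtained from the inverse of the correlation matrix). A minimal sketch, using simulated one-factor data rather than the study's responses, is shown below; dedicated packages (e.g., factor_analyzer) provide equivalent functionality.

```python
import numpy as np

def kmo(data):
    """Kaiser-Meyer-Olkin measure of sampling adequacy for an EFA dataset."""
    r = np.corrcoef(data, rowvar=False)
    inv = np.linalg.inv(r)
    d = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / d                         # partial correlation matrix
    mask = ~np.eye(r.shape[0], dtype=bool)     # off-diagonal entries only
    r2 = (r[mask] ** 2).sum()
    p2 = (partial[mask] ** 2).sum()
    return r2 / (r2 + p2)

# Hypothetical data: 247 respondents, 6 items driven by one common factor.
rng = np.random.default_rng(1)
common = rng.normal(size=(247, 1))
data = common + 0.5 * rng.normal(size=(247, 6))
adequacy = kmo(data)
```

Because the simulated items share a strong common factor, the resulting KMO lands in the "superb" (> .90) range described by Field (2009); weakly related items would drive it toward zero.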

Pett et al. (2003) stated that an unrotated factor solution rarely provides meaningful and understandable item clusters and often indicates that a general factor may be a statistical artifact. Hence, the initial step in factor identification was the development of a varimax-rotated factor solution. Using a total eigenvalue of 1.00 or greater as the retention threshold, we found that 10 factors emerged in the initial 60-item rotated factor solution (Pett et al., 2003). Seventeen items loaded at .40 or higher on two factors. We removed 11 of the 17 items that loaded above .50 on multiple factors. The remaining six items loaded on two factors with loadings that differed by more than .10. We initially retained these six items in the factor with the higher loading for investigation of their theoretical fit within the emergent factors. Overall, we removed 11 items in the initial EFA, leaving 49 survey items.

We performed a second EFA on the remaining 49 items and identified seven factors with eigenvalues of 1.00 or greater. However, no items loaded on Factors 6 or 7 at .40 or higher, thus yielding a five-factor model. Seven of the 49 items loaded at .40 or higher on two factors. Three of the seven items loaded between .57 and .63 on Factor 5. Because scrutiny of the wording of these items indicated no coherent theoretical framework, we removed these three items. All four of the remaining dual-loading items originated from the CTSE subscale and loaded more strongly on Factor 2. After examining the wording of these items, we determined that they were congruent with the theoretical framework of self-efficacy within construction training (e.g., "My past experience in accomplishments increases my confidence that I will be able to complete a construction training program") and therefore retained these items in Factor 2. Overall, we removed three items in the second EFA, leaving 46 survey items.

In the final EFA, we calculated factor loadings for the remaining 46 items to confirm the factor structure. The factor matrix indicated that Item 8 of the CTN subscale loaded on Factors 1 and 5 with loadings of .68 and .53, respectively. Consequently, we removed this item, reducing the total eigenvalue of Factor 5 below the 1.00 threshold and yielding a 45-item instrument containing four emergent factors.
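The factor-retention and dual-loading screens used in these EFA rounds can be sketched as follows. This simplified version extracts principal-axis loadings from the correlation matrix and omits the varimax rotation step for brevity; the data are hypothetical (two latent factors, three items apiece).

```python
import numpy as np

def retained_loadings(data, eig_threshold=1.0):
    """Item loadings for factors whose eigenvalues meet the retention threshold."""
    r = np.corrcoef(data, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(r)
    order = np.argsort(eigvals)[::-1]              # largest eigenvalue first
    keep = eigvals[order] >= eig_threshold
    # Loadings: eigenvector entries scaled by the square root of the eigenvalue.
    return eigvecs[:, order][:, keep] * np.sqrt(eigvals[order][keep])

def dual_loaders(loadings, cutoff=0.40):
    """Indices of items loading at or above `cutoff` on more than one factor."""
    hits = (np.abs(loadings) >= cutoff).sum(axis=1)
    return np.where(hits > 1)[0].tolist()

# Hypothetical data: two latent factors, three clean indicators each.
rng = np.random.default_rng(2)
f = rng.normal(size=(500, 2))
items = np.repeat(f, 3, axis=1)                    # cols 0-2 <- f1, 3-5 <- f2
items[:, :3] += 0.3 * rng.normal(size=(500, 3))
items[:, 3:] += 1.0 * rng.normal(size=(500, 3))
loadings = retained_loadings(items)
```

With this clean structure, exactly two factors clear the eigenvalue-1.00 threshold and no item cross-loads; in the study's data, the same screen flagged the dual-loading items discussed above for removal or theoretical review.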

Emergent Factor Identification and Internal Consistency Reliability

The rotated factor matrix contained the loadings for each survey item within the four emergent factors. A four-factor solution was confirmed in the scree plot. Factor 1 contained 14 items with loadings ranging between .69 and .82. All of the items within Factor 1 were adapted from Ajzen's (n.d.) theory of planned behavior questionnaire. Regarding internal consistency, Factor 1, hereinafter Planned Training Behavior (PTB), achieved a Cronbach's alpha of .94. Factor 2 contained 14 items with loadings ranging between .56 and .81. All of the items in Factor 2 were retained from the adapted CTSE subscale. Factor 2, hereinafter CTSE, achieved a Cronbach's alpha of .94. Factor 3 contained 10 items (four items from the ATCT subscale and six items from the CTM subscale) with loadings ranging between .58 and .77. Factor 3, hereinafter Training Motivation Attitudes (TMA), achieved a Cronbach's alpha of .94. Inspection of item-level reliabilities indicated that the Cronbach's alpha of TMA would be higher if we removed one of the ATCT subscale items. Removal of that item yielded a nine-item TMA subscale and a 44-item instrument. Factor 4, hereinafter TLOC, contained seven items from the adapted TLOC subscale, with loadings ranging between .60 and .74 and a Cronbach's alpha of .83. The 44-item instrument, hereinafter called the Construction Training Attitudes and Intentions Scale (CTAIS), achieved an internal consistency (Cronbach's alpha) of .93. The four factors explained 60.51% (PTB, 35.66%; CTSE, 14.12%; TMA, 6.52%; TLOC, 4.21%) of the variance in participants' responses.

Phase 2 Administration

The CTAIS was administered to a separate and distinct group of participants in Phase 2, and internal consistency reliabilities were calculated using the Phase 2 data (N = 174). The CTAIS achieved a Cronbach's alpha of .90, and the 14-item CTSE, 14-item PTB, nine-item TMA, and seven-item TLOC subscales achieved Cronbach's alphas of .95, .91, .93, and .83, respectively.

Validity Testing

According to Patten (2007), construct validity is the combination of judgmental and empirical validity where the "researchers hypothesize about how the construct that the instrument is designed to measure should affect or relate to other variables" (p. 69). In the current study, face validity was confirmed through the emergent factor structure resulting from the EFA. To determine construct validity of the CTAIS, we developed hypotheses regarding the relationships between construct subscales on the basis of our review of the literature. This approach to construct validity was appropriate because previous occupational training research has reported associations among the constructs adapted for use in the CTAIS.

Face validity. Evidence for face validity was gained when the emergent factor structure from the EFA mirrored the factor structure of the original, previously validated constructs. Because the initial constructs had been validated through previous research, we expected that the adapted TLOC subscale items would load on the same factor. In addition, we anticipated that the adapted CTSE subscale items would load on a separate and distinct construct from CTM (and other adapted constructs) as evident in previous research (Colquitt et al., 2000). As noted, the retained CTAIS items loaded in an emergent factor structure that mirrored the factor structure of the original constructs after being adapted to the construction training domain. Therefore, the emergent factor structure in and of itself provided initial evidence of face validity.

Convergent construct validity and hypothesis testing. We developed associational research questions and hypotheses on the basis of our review of the literature and tested these using both Phase 1 and Phase 2 data. Findings in agreement with the hypotheses provided support, albeit indirectly, for the convergent construct validity of the CTAIS (Patten, 2007). The results of the emergent factor correlations mirrored those found in the literature. For example, there was a significant inverse correlation between CTSE and TLOC that mirrored the relationship found between self-efficacy and work LOC in occupational research (Chiaburu & Marinova, 2005; Colquitt et al., 2000; Phillips & Gully, 1997). Positive correlations between TMA and CTSE (Chiaburu & Marinova, 2005; Colquitt et al., 2000; Phillips & Gully, 1997) and negative correlations between TMA and TLOC (Ng, Sorensen, & Eby, 2006) were also supported in the literature. Correlations observed between the emergent factors using Phase 1 and Phase 2 data are presented in Table 3.
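A minimal sketch of this kind of convergent-validity check follows, correlating simulated subscale totals that are constructed to covary inversely (all names and data here are hypothetical, not the study's scores):

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
n = 174                                       # mirroring the Phase 2 sample size
latent = rng.normal(size=n)
ctse_total = latent + 0.5 * rng.normal(size=n)    # simulated CTSE subscale totals
tloc_total = -latent + 0.5 * rng.normal(size=n)   # built to run inverse to CTSE

r, p = pearsonr(ctse_total, tloc_total)       # Pearson r with two-tailed p value
```

A significant negative r here would be read as evidence consistent with the hypothesized CTSE-TLOC relationship, paralleling the self-efficacy and work LOC findings cited above.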

DISCUSSION

This article documents the development and testing of the CTAIS and its internal consistency reliability and validity. The study sample was composed of undergraduate and graduate students (N = 421) enrolled in construction management classes at two universities. The factors (i.e., CTSE, PTB, TMA, and TLOC) included in the instrument are domain specific and can be applied to any construction training population. However, our study's results likely reflect the nature of the sample. Study participants had already made the decision to begin construction training by enrolling in construction management classes. In addition, students in higher level classes possess construction knowledge and potentially have a history of successful construction training performance (e.g., a track record of successful course completion). In contrast, the target population may possess little knowledge of construction practices and have no previous success in training or construction experiences. Previous construction-related successes and knowledge will influence responses on the CTAIS, and the nature of this influence can be understood only by administering the survey to unemployed trainees with little or no construction experience.

The structure of and correlations between emergent factors were found to be congruent with previous research, thus providing support for face and convergent construct validity. However, the design of the study and the items included in the CTAIS did not allow for empirical comparison of participants' responses with established psychological metrics. Therefore, we were not able to investigate criterion-related validity during this initial step of instrument validation. Administration of the CTAIS in conjunction with established and previously validated construct measures is the next logical step in the validation of the CTAIS. Validation, rarely accomplished through a single study (Beattie, Pinto, Nelson, & Nelson, 2002; Norbeck, 1985; Yang, 2003), should be an ongoing process in instrument development.

Weigensberg et al. (2012) reported that training providers desired a more complete picture of participants' individual characteristics (beyond gaining employment) to gauge their training success. Although training assessments exist that measure participants' self-efficacy, motivation, and other characteristics pretraining, the metrics that are used often assess the constructs at a general level, and comparisons between the characteristics and training outcomes are not reported (Weigensberg et al., 2012). According to Pajares (1996), general psychological constructs, such as self-efficacy, are unreliable predictors of behavior or expectations because they lack domain or task specificity.

The CTAIS provides practitioners with a measure of CTSE, PTB, TMA, and TLOC at the construction training domain level. Increasing the specificity of a measure should increase its predictive power (Bandura, 1997; Schwarzer, Bassler, Kwiatek, Schroder, & Zhang, 1997). The instrument has several potential uses. It may be used as (a) a pretraining measure of participants' characteristics to align trainees' needs with training providers' services; (b) a tool to assess participants' characteristics at multiple points during training interventions; (c) a means to determine the effectiveness of pre- and posttraining interventions; and (d) a metric by which to investigate differences in CTSE, PTB, TMA, and TLOC by participant attribute variables to better align training providers' services with the target population's needs.

The study was not structured to allow comparison of the CTAIS with established measures of training performance; therefore, the predictive value of the CTAIS was not explored. Pretraining instrument administration and comparison of CTAIS scores among trainees who complete or fail to complete training would illuminate its value as a predictor of attrition. Posttraining survey administration in conjunction with an established construction-related certification test can be used to evaluate training performance and provide support for criterion-related validity. In this case, the certification test must be directly aligned with the content focus of the training program. Otherwise, the usefulness of the survey as a predictive tool is compromised. Targeted programs for further administration and validation of the CTAIS include the USDOL's YouthBuild and Training Adjustment Assistance programs and construction craft training programs administered by organizations (e.g., National Center for Construction Education and Research) in the private sector.

CONCLUSION

The instrument developed and validated through this study allows training organizations to quantitatively measure and evaluate construction-domain-level characteristics shown in research to predict performance in work settings and attendance in educational settings. Identification of participants' characteristics that contribute to performance and attrition in construction training can assist training organizations in programmatic decision making. Preprogram assessment of trainees allows practitioners to make informed decisions at the individual level about appropriate interventions to increase the likelihood of training success. The CTAIS, when administered pre- and posttraining, provides trainers with a measure of individual characteristics that correlate with training successes. High levels of self-efficacy and motivation are predictive of persistence in job search activities and on-the-job performance. Therefore, higher posttraining CTSE and TMA subscale scores are indicators of training program effectiveness.

Further research is required to determine the predictive value of the CTAIS; however, it can be used to gauge trainees' perceptions of PTB, CTSE, TMA, and TLOC in its current state. Pretraining identification of participants' characteristics allows practitioners to better serve trainees and more effectively allocate scarce training resources. Eden and Aviram (1993) found that an individual's confidence level can be elevated through self-efficacy-boosting interventions, which presumably increases a participant's likelihood of completing training. On the other hand, trainees with high levels of CTSE may not require such interventions, and these training resources (e.g., staff, funding) could be directed elsewhere to maximize training effectiveness. As greater investments are made in training for workforce development, it is imperative that training providers customize programs to meet the needs of the target trainee population.

Received 04/02/14

Revised 05/14/14

Accepted 05/15/14

DOI: 10.1002/joec.12011

REFERENCES

Ajzen, I. (1991). The theory of planned behavior. Organizational Behavior and Human Decision Processes, 50, 179-211. doi:10.1016/0749-5978(91)90020-T

Ajzen, I. (n.d.). Sample TPB questionnaire. Retrieved from University of Massachusetts Amherst website: http://people.umass.edu/aizen/pdf/tpb.questionnaire.pdf

Amedeo, D., Golledge, R. G., & Stimson, R. J. (2009). Person-environment-behavior research: Investigating activities and experiences in spaces and environments. New York, NY: Guilford Press.

American Council for Construction Education. (2013). Standards and criteria for accreditation of postsecondary construction education degree programs (Document 103). Retrieved from http://www.acce-hq.org/images/uploads/Document_103_0531132.pdf

Bandura, A. (1977). Self-efficacy: Toward a unifying theory of behavioral change. Psychological Review, 84, 191-215. doi:10.1037/0033-295X.84.2.191

Bandura, A. (1997). Self-efficacy: The exercise of control. New York, NY: Freeman.

Beattie, P. F., Pinto, M. B., Nelson, M. K., & Nelson, R. (2002). Patient Satisfaction With Outpatient Physical Therapy: Instrument validation. Physical Therapy, 82, 557-565.

Bilginsoy, C. (2003). The hazards of training: Attrition and retention in construction industry apprenticeship programs. Industrial and Labor Relations Review, 57, 54-67.

Chiaburu, D. S., & Marinova, S. V. (2005). What predicts skill transfer? An exploratory study of goal orientation, training self-efficacy and organizational supports. International Journal of Training and Development, 9, 110-123. doi:10.1111/j.1468-2419.2005.00225.x

Colquitt, J. A., LePine, J. A., & Noe, R. A. (2000). Toward an integrative theory of training motivation: A meta-analytic path analysis of 20 years of research. Journal of Applied Psychology, 85, 678-707. doi:10.1037/0021-9010.85.5.678

Eden, D., & Aviram, A. (1993). Self-efficacy training to speed reemployment: Helping people to help themselves. Journal of Applied Psychology, 78, 352-360. doi:10.1037/0021-9010.78.3.352

Field, A. (2009). Discovering statistics using SPSS (3rd ed.). London, England: Sage.

Ford, J. K., & Noe, R. (1987). Self-assessed training needs: The effects of attitudes toward training, managerial level, and function. Personnel Psychology, 40, 39-53. doi:10.1111/j.1744-6570.1987.tb02376.x

Ginsburg, K. R., Forke, C. M., Kinsman, S. B., Fleegler, E., Grimes, E. K., Rosenbloom, T., ... Gibbs, K. P. (2000). Retention in the United States Job Corps: Analysis and recommendations. Retrieved from U.S. Department of Labor website: http://wdr.doleta.gov/opr/fulltext/00-jobcorps.pdf

Hemphill, T. A., & Perry, M. J. (2012, February 27). U.S. manufacturing and the skills crisis. The Wall Street Journal. Available from http://www.wsj.com/

International Labour Organization. (2012). Global employment trends 2012. Retrieved from http://www.ilo.org/wcmsp5/groups/public/@dgreports/@dcomm/@publ/documents/publication/wcms_171571.pdf

Martocchio, J. J., & Judge, T. A. (1997). Relationship between conscientiousness and learning in employee training: Mediating influences of self-deception and self-efficacy. Journal of Applied Psychology, 82, 764-773. doi:10.1037/0021-9010.82.5.764

Mathieu, J. E., Tannenbaum, S. I., & Salas, E. (1992). Influences of individual and situational characteristics on measures of training effectiveness. Academy of Management Journal, 35, 828-847. doi:10.2307/256317

Ng, T. W. H., Sorensen, K. L., & Eby, L. T. (2006). Locus of control at work: A meta-analysis. Journal of Organizational Behavior, 27, 1057-1087. doi:10.1002/job.416

Noe, R. A. (1986). Trainees' attributes and attitudes: Neglected influences on training effectiveness. Academy of Management Review, 11, 736-749. doi:10.2307/258393

Noe, R. A., & Schmitt, N. (1986). The influence of trainee attitudes on training effectiveness: Test of a model. Personnel Psychology, 39, 497-523. doi:10.1111/j.1744-6570.1986.tb00950.x

Noe, R. A., & Wilk, S. L. (1993). Investigation of the factors that influence employees' participation in development activities. Journal of Applied Psychology, 78, 291-302. doi:10.1037/0021-9010.78.2.291

Norbeck, J. S. (1985). What constitutes a publishable report of instrument development? Nursing Research, 34, 380-382.

Pajares, F. (1996). Self-efficacy beliefs in academic settings. Review of Educational Research, 66, 543-578. doi:10.3102/00346543066004543

Pajares, F. (2002). Overview of social cognitive theory and of self-efficacy. Retrieved from University of Kentucky website: http://www.uky.edu/~eushe2/Pajares/eff.html

Patten, M. L. (2007). Understanding research methods: An overview of the essentials (6th ed.). Glendale, CA: Pyrczak.

Pett, M. A., Lackey, N. R., & Sullivan, J. J. (2003). Making sense of factor analysis: The use of factor analysis for instrument development in health care research. Thousand Oaks, CA: Sage.

Phillips, J. M., & Gully, S. M. (1997). Role of goal orientation, ability, need for achievement, and locus of control in the self-efficacy and goal-setting process. Journal of Applied Psychology, 82, 792-802. doi:10.1037/0021-9010.82.5.792

Rotter, J. B. (1966). Generalized expectancies for internal versus external control of reinforcement. Psychological Monographs: General and Applied, 80, 1-28. doi:10.1037/h0092976

Sabates, R. (2008). The impact of lifelong learning on poverty reduction (IFLL Public Value Paper 1). Retrieved from National Institute of Adult Continuing Education website: http://www.niace.org.uk/lifelonglearninginquiry/docs/public-value-paper-1.pdf

Schwarzer, R., Bassler, J., Kwiatek, P., Schroder, K., & Zhang, J. X. (1997). The assessment of optimistic self-beliefs: Comparison of the German, Spanish, and Chinese versions of the General Self-Efficacy Scale. Applied Psychology, 46, 69-88. doi:10.1111/j.1464-0597.1997.tb01096.x

Spector, P. E. (1988). Development of the Work Locus of Control Scale. Journal of Occupational Psychology, 61, 335-340. doi:10.1111/j.2044-8325.1988.tb00470.x

Spector, P. E. (2011). The relationship of personality to counterproductive work behavior (CWB): An integration of perspectives. Human Resource Management Review, 21, 342-352. doi:10.1016/j.hrmr.2010.10.002

U.S. Department of Labor, Employment and Training Administration. (2007). U.S. Department of Labor, Employment and Training Administration's five-year pilot, demonstration and evaluation strategic plan for 2007-2012 (Series No. ETAOP 2007-04). Retrieved from http://wdr.doleta.gov/research/FullText_Documents/ETA%20Five-Year%20Research%20Plan.pdf

U.S. Department of Labor, Office of Inspector General. (2011). Recovery Act: ETA needs to strengthen management controls to meet YouthBuild program objectives (Report No. 18-11-001-03-001). Retrieved from http://www.oig.dol.gov/public/reports/oa/2011/18-11-001-03-001.pdf

U.S. Government Accountability Office. (2011). Employment and training programs: Opportunities exist for improving efficiency (Report No. GAO-11-506T). Retrieved from http://www.gao.gov/assets/130/125957.pdf

Wanberg, C. R. (2012). The individual experience of unemployment. Annual Review of Psychology, 63, 369-396. doi:10.1146/annurev-psych-120710-100500

Weigensberg, E., Schlecht, C., Laken, F., Goerge, R., Stagner, M., Ballard, P., & DeCoursey, J. (2012). Inside the black box: What makes workforce development programs successful? Retrieved from Chapin Hall at the University of Chicago website: http://www.chapinhall.org/sites/default/files/Inside%20the%20Black%20Box_04_23_12_0.pdf

Whyte, D., Elvin, L., Piper, B., Slaughter, J. G., & Stilley, M. (2013). Craft workforce development 2013 and beyond: A case for greater stakeholder commitment. Retrieved from National Center for Construction Education and Research website: http://www.nccer.org/uploads/filelibrary/craft_wfd_2013_and_beyond.pdf

Yang, B. (2003). Identifying valid and reliable measures for dimensions of a learning culture. Advances in Developing Human Resources, 5, 152-162. doi:10.1177/1523422303005002003

Jonathan Elliott, Department of Construction Management, Colorado State University; Carla Lopez del Puerto, Department of Civil Engineering and Surveying, University of Puerto Rico at Mayaguez. Correspondence concerning this article should be addressed to Jonathan Elliott, Department of Construction Management, Colorado State University, 1584 Campus Delivery, Fort Collins, CO 80523-1584 (e-mail: jon.elliott@colostate.edu).

TABLE 1

Description of Existing Constructs Prior to Adaptation

Existing Construct and                 Domain of
Response Format                       Measurement         Source           [alpha] (a)

Motivation to train
  Likert scale (b), 5-point             Training      Noe (2011) (c)          .80
Attitudes toward education
  Likert scale (b), 5-point             Academic      Noe (2011) (c)           --
Attitudes toward training utility
  Likert scale (b), 7-point             Training     Ford & Noe (1987)         --
Training self-efficacy
  Likert scale (b), 5-point             Training     Noe & Wilk (1993)        .79
Work locus of control
  Likert scale (b), 5-point           Occupational    Spector (1988)        .75-.85
Perceived behavioral control
  Bipolar adjective scale, 7-point      Academic       Ajzen (n.d.)         .61-.90
Attitudes
  Bipolar adjective scale, 7-point      Academic       Ajzen (n.d.)            --
Subjective norms
  Bipolar adjective scale, 7-point      Academic       Ajzen (n.d.)            --
Intentions
  Bipolar adjective scale, 7-point      Academic       Ajzen (n.d.)            --

Note. Dash indicates that the internal consistency reliability was
not reported.

(a) Internal consistency reliability observed in previous research.

(b) Scale anchors: strongly disagree-strongly agree. (c) R. A. Noe
(personal communication, June 15, 2011).

TABLE 2

Phase 1 Item Reduction: Subscale Internal Consistency
Reliabilities, Interitem Correlations, and Eigenvalues

             Initial
            Instrument        Post-Phase 1 Item Reduction

Subscale   n    [alpha]   n    [alpha]      r      Eigenvalue

CTSE       20     .95     18     .95     .36-.71    .64-.83
CTM        19     .83      9     .94     .43-.69    .56-.71
CTI         6     .91      6     .91     .46-.74    .77-.84
ATCT       14     .61      6     .87     .41-.74    .71-.86
CTVA        6     .84      4     .84     .47-.67    .64-.83
TLOC       17     .62      7     .83     .36-.51    .71-.77
CTN         8     .86      5     .82     .34-.69    .69-.83
PBCT        8     .28      5     .77     .30-.65    .64-.78

Total      98     --      60   .87 (a)

Note. CTSE = Construction Training Self-Efficacy; CTM = Construction
Training Motivation; CTI = Construction Training Intentions; ATCT =
Attitudes Toward Construction Training; CTVA = Construction Training
Value Attitudes; TLOC = Training Locus of Control; CTN = Construction
Training Norms; PBCT = Perceived Behavioral Control in Training.

(a) Mean alpha coefficient.
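The [alpha] values reported in Tables 1 and 2 are internal consistency (Cronbach's alpha) coefficients. As a minimal illustrative sketch only (the response data below are hypothetical and not drawn from the study), alpha for a subscale can be computed from a respondents-by-items matrix of Likert scores:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items in the subscale
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of respondents' summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses: 4 respondents x 3 items
scores = np.array([[4, 5, 4],
                   [3, 3, 4],
                   [5, 5, 5],
                   [2, 3, 2]])
alpha = cronbach_alpha(scores)  # alpha is approximately .94
```

Values at or above .70, the conventional threshold, are typically taken to indicate acceptable internal consistency for research use.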

TABLE 3

Means, Standard Deviations, and Correlations for
Phase 1 and Phase 2 Emergent Factors

Factor                                     N     M      SD      1         2         3      4

Phase 1

1. Planned Training Behavior              247   4.19   0.71     --
2. Construction Training Self-Efficacy    247   4.05   0.61   .28 **     --
3. Training Motivation Attitudes          247   4.12   0.68   .51 **   .71 **      --
4. Training Locus of Control              247   2.13   0.63   -.22 **  -.37 **   -.39 **   --

Phase 2

1. Planned Training Behavior              170   4.47   0.51     --
2. Construction Training Self-Efficacy    172   4.31   0.63   .28 **     --
3. Training Motivation Attitudes          174   4.35   0.65   .38 **   .85 **      --
4. Training Locus of Control              174   1.99   0.67   -.17 *   -.55 **   -.52 **   --

* p < .05, two-tailed. ** p < .01, two-tailed.
COPYRIGHT 2015 American Counseling Association

Author: Elliott, Jonathan; del Puerto, Carla Lopez
Publication: Journal of Employment Counseling
Article Type: Report
Date: Sep 1, 2015
