
Assessing student perception of practice evaluation knowledge in introductory research methods.

EVIDENCE-BASED PRACTICE (EBP) is becoming a key component of social work standards for education (Briggs & Rzepnicki, 2004; Howard, McMillan, & Pollio, 2003; Pollio, 2006; Zlotnick, 2004). Cournoyer (2004) defines evidence-based social work as the "mindful and systematic identification, analysis, evaluation and synthesis of evidence of practice effectiveness, as a primary part of an integrative and collaborative process concerning the selection of application of service to members of target client groups" (p. 4). Others draw on definitions based in evidence-based medicine that focus on the use of best evidence to make conscientious decisions about patient care (Straus, Richardson, Glasziou, & Haynes, 2005). There are a variety of ways in which social workers evaluate their practice. Most commonly, a single-subject design is employed where a target behavior of an individual, or a small number of individuals, is established and observed over time (Creswell, 2009). In addition, two or more groups can be compared in a between-subjects design or a factorial design. More complex methods of practice evaluation involve the use of random assignment and control groups, as in the pretest-posttest control-group design (Creswell, 2009). Social work educators and practitioners are making a concerted effort to apply these new standards (Frost, 2002) and are moving toward systematic application of current evidence (Rosen, 2003; Whittaker et al., 2006). Furthermore, social work literature is calling for exploration of curriculum reform in order to place a greater emphasis on EBP throughout undergraduate and graduate programs (Howard, Allen-Meares, & Ruffolo, 2007; Soydan, 2008).
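
To make the most common of these designs concrete, the short sketch below illustrates a single-subject (AB) evaluation in Python: a client's target behavior is measured during a baseline phase and again during an intervention phase, and the phase means are compared. This is only an illustrative sketch with fabricated numbers, not an analysis drawn from the article.

    # Illustrative sketch of a single-subject (AB) design: compare the mean level
    # of a target behavior at baseline (phase A) with the intervention phase (B).
    # All values are fabricated for illustration.
    baseline = [7, 8, 6, 7, 9, 8]        # e.g., weekly counts of the target behavior
    intervention = [6, 5, 4, 4, 3, 3]

    mean_a = sum(baseline) / len(baseline)
    mean_b = sum(intervention) / len(intervention)
    print(f"Baseline mean = {mean_a:.1f}, intervention mean = {mean_b:.1f}, "
          f"change = {mean_b - mean_a:+.1f}")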

Implementation models for EBP discuss specific steps, one of which includes evaluating the application of an intervention, otherwise termed practice evaluation (Cournoyer, 2004; Gibbs, 2003). Practice evaluation, also discussed as empirically based practice, has a long history of importance within social work and is a key component of EBP. Beginning in the second half of the 20th century, social work began the call for development of its own scientific knowledge base and for the evaluation of practice. This was largely the result of the field's acknowledgement that practitioners were not using the best evidence available to guide their practice decisions. And yet, practitioners often argued that a scientific orientation was disempowering and too mechanical. Unfortunately, this alienation between researchers and practitioners persists and has made the integration of research methods and findings into social work practice quite difficult (Rosen, 2003). This certainly poses problems for newly graduated social work students who find that community agencies embracing EBP anticipate hiring graduates who are equipped with evaluation skills and prepared to lead the way in implementation (Edmond, Megivern, Williams, Rochman, & Howard, 2006). In response, Drake, Jonson-Reid, Hovmand, and Zayas (2007) discuss the process of infusing EBP throughout an MSW curriculum, and the role of research courses in providing the skills necessary for evaluation of current evidence, as well as generation of new evidence.

A growing body of literature discusses techniques for improving undergraduate research education. A variety of approaches have been described, including participatory or service learning (Knee, 2002) and community research (Anderson, 2002). Even with these approaches, faculty members are still faced with the question of which methods are best for evaluating student knowledge retention and learning. The most common methods of evaluating learning are embedded assignments, class participation, grades, and course evaluations. Although these evaluation methods are useful, few standardized methods are available to evaluate whether there has been a change in student knowledge over time. Holden, Barker, Meenaghan, and Rosenberg (1999) created the Research Self-Efficacy Scale (RSES) to assess students' confidence in their ability to complete general research activities. This scale was the first of its type to standardize the measurement of student competence in general research activities; however, it does not focus specifically on practice evaluation. The Evaluation Self-Efficacy Scale (ESES), developed during the same period as the Practice Evaluation Knowledge Scale (PEKS), has been explored as a method to assess student competence at the graduate level (Holden, Barker, Rosenberg, & Onghena, 2008).

In spite of the current move toward EBP as a driving force in undergraduate social work education, little has been done to systematically evaluate whether students are, in fact, gaining knowledge in key components of the EBP approach, such as in practice evaluation skills. Although an elegant approach would be to ascertain whether the EBP model leads to gains relative to other approaches, simply examining whether students acquired and maintained practice evaluation skills would represent a minimal first step to understanding the impact of the social work educational process.

Thus, the purpose of this study was to assess students' acquisition and retention of practice evaluation skills after completion of an undergraduate research methods class. PEKS, previously used to explore practice evaluation activities among practitioners (Baker, Stephens, & Hitchcock, 2010), was used to systematically assess whether students gain usable evaluation skills after completing the class. It was hypothesized that there would be a significant gain in PEKS scores following course completion. Specifically, we hypothesized that (1) students would report increased knowledge of evaluation skills, and (2) students' knowledge of evaluation skills would be maintained 1 year later.

Methods

Participants

A convenience sample of two semesters of undergraduate social work students enrolled in the Introduction to Social Work Research Methods course was selected. Students were provided with an informed consent document and received an explanation of the research project. This project received approval from the Institutional Review Board at the University of Alabama at Birmingham. Students who agreed to participate in the study completed pretest (T1) measures at the start of the course and posttest (T2) measures at its completion. Students were then asked to complete a follow-up measure (T3) during their field practicum, approximately 2 semesters after completion of the research methods class. Students enrolled in or preparing for field practicum received the follow-up measure through the field coordinator or received an e-mailed copy of the measure from the investigator. Students not completing the surveys were e-mailed two reminders. In addition, three students could not be contacted through the social work program at the time of follow-up and therefore did not complete the final measure. In total, 34 students completed the pretest (T1) and posttest (T2) measures, and 25 students (74%) completed the follow-up measure (T3). Students were 94% female (n=32) and 6% male (n=2), and all were in their junior year of undergraduate study.

Research Methods Course

The research methods course was taught using Research Methods in Social Work (Royse, 2007), a generalist perspective text. Course content included instruction in formulating research questions, research ethics and design, sample selection, quantitative versus qualitative methods and analysis, instrument development, and survey research. Students were required to complete a single-system design project as part of the course requirements. In addition, both semesters of the research methods course were taught by the same faculty member to ensure consistency of teaching methods. As social work shifts to the more scientifically oriented focus of EBP, several suggestions for education have been made. First and foremost, students should be taught specific methods of EBP so that the outcomes of services can be evaluated. Howard and colleagues (2007) further suggest that these evaluation skills should be demonstrated in courses such as the field placement before students graduate. And yet, before such skills can be demonstrated, they first must be learned. As a result, the authors consider an introductory research methods course to be the primary way in which students learn about practice evaluation, which provides the rationale for choosing such a course as the focus of this study.

Outcome Measure

Student knowledge was measured with PEKS (see Figure 1). PEKS is an eight-item instrument developed to measure social work practitioners' beliefs about their knowledge of practice evaluation competencies. Respondents rate the degree to which they agree with each item on a 5-point Likert scale, with 1 indicating strongly disagree and 5 indicating strongly agree. Higher scores indicate a greater confidence level with each competency. The most recent preliminary study of PEKS examined 170 completed scales from a random sample of Alabama social workers. PEKS demonstrated excellent internal consistency ([alpha]=.925) and validity (Baker & Ritchey, 2009). Criterion validity was examined by exploring relationships between scale scores and items related to practitioner setting, position, and educational level. T-test results indicated no significant differences, which were most likely related to the low power of the sample (Baker & Ritchey, 2009). PEKS continues to be strengthened through ongoing psychometric testing, and the sample of the current study yielded similar internal consistency results ([alpha]=.949).
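
For readers unfamiliar with how internal consistency is calculated, the sketch below shows the standard Cronbach's alpha computation for an eight-item scale such as PEKS. The function and the sample responses are illustrative assumptions, not the study's data or code; the study's analyses were run in SPSS.

    # Minimal sketch: Cronbach's alpha for an eight-item, 5-point scale.
    # The response matrix below is fabricated for illustration only.
    import numpy as np

    def cronbach_alpha(item_scores: np.ndarray) -> float:
        """item_scores: rows = respondents, columns = scale items."""
        k = item_scores.shape[1]                         # number of items (8 for PEKS)
        item_vars = item_scores.var(axis=0, ddof=1)      # variance of each item
        total_var = item_scores.sum(axis=1).var(ddof=1)  # variance of the summed scores
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    responses = np.array([
        [2, 2, 1, 3, 3, 2, 2, 3],
        [4, 4, 3, 4, 4, 4, 3, 4],
        [3, 3, 3, 4, 4, 3, 3, 4],
        [1, 2, 2, 2, 3, 2, 2, 2],
        [5, 4, 4, 4, 5, 4, 4, 4],
    ])
    print(f"alpha = {cronbach_alpha(responses):.3f}")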

A paired samples t-test was computed between T1 and T2 to test Hypothesis 1 and again between T1 and T3 to test Hypothesis 2. The paired samples t-test statistic was calculated with a 95% confidence interval using the Statistical Package for the Social Sciences (SPSS) version 16.0. Effect size was interpreted using the standards set forth by Cohen (1988) for small, medium, and large effect sizes. It was anticipated that T2 scores higher than T1 scores would indicate acquisition of knowledge and that T3 scores higher than T1 scores would indicate retention of knowledge.
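
As an illustration of this analysis, the sketch below runs a paired-samples t-test and computes Cohen's d for a single PEKS item. The scores are fabricated, the code is Python rather than the SPSS used in the study, and the d formula shown (mean difference divided by the standard deviation of the difference scores) is one common convention; the article does not report which formula was used, so results computed this way will not necessarily match Tables 1 and 2.

    # Hedged sketch: paired-samples t-test (T1 vs. T2) with a Cohen's d effect size.
    # Scores are fabricated for one PEKS item; this is not the study's data or code.
    import numpy as np
    from scipy import stats

    def paired_t_with_d(t1: np.ndarray, t2: np.ndarray):
        t_stat, p_value = stats.ttest_rel(t1, t2)   # paired-samples t-test
        diff = t2 - t1
        d = diff.mean() / diff.std(ddof=1)          # d from difference scores (one convention)
        return t_stat, p_value, d

    pretest = np.array([2, 1, 3, 2, 2, 1, 3, 2, 2, 1])
    posttest = np.array([4, 3, 4, 3, 4, 3, 4, 4, 3, 3])
    t_stat, p_value, d = paired_t_with_d(pretest, posttest)
    print(f"t({len(pretest) - 1}) = {t_stat:.3f}, p = {p_value:.3f}, d = {d:.2f}")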

Results

The first hypothesis explored whether there would be a statistically significant difference between pretest (T1) and posttest (T2) scores among students enrolled in the research course. Statistically significant differences were found for all eight items: "I have been adequately trained to conduct practice evaluation," t(33)=-9.354, p<.001, d=.391, small to medium effect size; "I am comfortable with my knowledge of evaluation designs," t(33)=-8.186, p<.001, d=.214, small effect size; "If I had to design an evaluation plan I would know where to begin," t(33)=-11.551, p<.001, d=.382, small to medium effect size; "I am able to identify an evaluation outcome," t(33)=-7.895, p<.001, d=.317, small to medium effect size; "I am familiar with issues of reliability and validity," t(33)=-7.215, p<.001, d=.226, small effect size; "I am able to locate measures and scales to assist in evaluation," t(33)=-8.155, p<.001, d=.366, small to medium effect size; "I am comfortable with data analysis techniques," t(33)=-7.799, p<.001, d=.395, small to medium effect size; and "The statistics I am required to keep are useful for evaluating outcomes," t(33)=-6.606, p<.001, d=.446, small to medium effect size. Posttest scores were higher than pretest scores for all items (see Table 1). The second hypothesis explored whether students' scores remained stable over the course of 2 semesters. Similar to Hypothesis 1, follow-up scores (T3) were higher than pretest scores (T1), and statistically significant differences were found for all eight items (see Table 2).

Additional statistics were computed to examine the comparison between T2 and T3. Because these two measures were separated by 2 full semesters, a loss of specific course-related knowledge was possible. Paired samples t-tests were computed with a 95% confidence interval for posttest scores (T2) and follow-up scores (T3) to explore whether there was a change in scores over time. There were no significant differences for six of the eight items on the measure. There were, however, significant differences for two of the items: "I am comfortable with data analysis techniques," t(24)=-2.914, p<.01, d=.26, and "The statistics I am required to keep are useful in evaluating outcomes," t(24)=-2.089, p<.05, d=.15. For both of these items, scores were higher at follow-up than at posttest (see Table 2). These differences may reflect the fact that students complete the statistics course after completing research methods, predictably increasing their comfort level with the use of statistics and data analysis.

Discussion and Applications to Practice

The findings from this study present reassuring evidence that students report gains in practice evaluation skills after completing a research methods course and that these gains are maintained over time. As an initial test of the basic assumption that students are gaining evaluation knowledge, these results are extremely positive. The findings clearly support both hypotheses. Students reported significant perceived gains across the variety of concepts measured by PEKS (Hypothesis 1); further, these gains were maintained across time (Hypothesis 2) with reasonable effect sizes.

And yet this study is not without several significant limitations, which will be discussed in rough order of concern. The primary limitation is that the scale tested perception of knowledge gained, rather than actual knowledge. In other words, without actual demonstrations of newly learned skills, students simply provided a self-report of their comfort level, which does not guarantee knowledge acquisition. Future studies should strive to make tests of knowledge acquisition more concrete by requiring that students actually demonstrate their newly learned practice evaluation skills.

Additional limitations include the lack of a comparison group and the use of a convenience sample. Similarly, limited attention was paid to factors that affected the size and type of gains. Although these limitations should be addressed in future research, they have potentially less impact on the generalizability and meaning of the present findings than might initially be expected. In terms of comparison conditions, there are no obvious ones available. As a social work investigation, comparisons outside of the field are not useful. In addition, because all students enrolled in accredited BSW programs are required to take a research methods course, no population exists at a similar stage of education that has not taken such a course.

Similarly, the standardization required by accreditation means that the curriculum has to include certain elements, making the course itself somewhat generalizable. Given these limitations, future research could focus on whether the comfort with knowledge gained and research skills is maintained over time.

An additional concern in this study is sample attrition across time. A significant proportion of the sample was not available for retesting at follow-up. Although it is impossible to examine this fully in the current data, it is possible that selection bias affected the significance levels of the T1 to T3 tests. However, the relatively small difference in pretest scores between the entire sample (N=34) and those included in the paired-sample follow-up (n=25) suggests that those not included in the follow-up answered pretest questions similarly enough to those available for follow-up that this limitation, although worth noting, may be of limited importance. Nonetheless, future research should incorporate additional efforts to minimize attrition, which is of particular importance if previous recommendations for long-term follow-ups are followed.
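
A more direct version of this attrition check, sketched below with fabricated scores, would compare the pretest totals of students who completed the follow-up with those who did not, using an independent-samples (Welch's) t-test; the article does not report running this test, so the sketch is only a suggestion for how future work might examine selection bias.

    # Hypothetical attrition check: compare pretest (T1) totals for follow-up
    # completers vs. non-completers. All scores are fabricated.
    import numpy as np
    from scipy import stats

    completers_t1 = np.array([16, 18, 15, 20, 17, 19, 16, 18])
    noncompleters_t1 = np.array([17, 15, 19, 16, 18])

    t_stat, p_value = stats.ttest_ind(completers_t1, noncompleters_t1, equal_var=False)
    print(f"Welch t = {t_stat:.2f}, p = {p_value:.3f}")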

A related limitation of this study is the potential coercion and/or bias that may have occurred. Coercion certainly poses limitations when using one's own students as research subjects. Similarly, self-report scales can be inaccurate because respondents often distort their responses in a deliberate attempt to create either a falsely favorable or unfavorable impression (Piedmont, McCrae, Riemann, & Angleitner, 2000). In the present study, interactions between the researchers (who are also professors) and the students may have resulted in biased responses (Vogt, 2005). In other words, given that students were aware of the purpose of the scale, a student participating in this study as a research subject may have wanted to demonstrate knowledge gains in research methods because this is what was expected of him or her. Future research involving skill demonstrations and continuing after students graduate from the social work program would partially address this limitation. Additionally, sampling multiple classrooms in which students are not exposed to unintentional coercion would be a logical next step.

Another significant limitation of the current study is its direct connection to education based in EBP. The test for gains in evaluation knowledge does not indicate that an EBP approach actually improves knowledge; rather, it indicates only that students appear to have gained knowledge as a result of general social work education. This relates to the issue of infusion raised by Drake and colleagues (2007), who suggested that students may gain evaluation skills through mere participation in current undergraduate educational efforts, rather than through completion of a particular research methods course. Ideally, undergraduate social work students are continuously exposed to elements of research methods regardless of the courses in which they are enrolled. Elements of research methods are infused throughout the entire curriculum instead of being course-specific. As a result of this infusion, the present study's comparison of T3 scores to T1 scores may be a poor predictor of knowledge retention. Higher T3 scores may simply be reflecting the impact of infusion rather than the impact of a particular research methods course. Despite this uncertainty, the findings suggest that, at least in terms of practice evaluation skills, undergraduate social work students in field settings do have some of the knowledge of EBP emphasized by the field (Edmond et al., 2006). Future research might best address this potential limitation by sampling across institutions with different approaches to infusion of evidence-based education models, then examining the impact of the differing approaches on long-term outcomes.

Highlighting a gaping hole in the current movement toward establishing EBP in social work education is perhaps the most important contribution of the current research. The call to propel social work education toward an evidence-based focus is not based on any evidence that this type of education actually improves students' knowledge or practitioners' skills. Although detailing the forces that have led to the adoption of EBP as an educational paradigm is beyond the scope of this paper, the lack of actual evidence supporting the EBP educational process demands inquiry into its potential impact on subsequent education and practice outcomes. Relatively simple studies, such as the current one, represent only the barest beginnings toward answering key questions around the impact of EBP on the field of social work education and practice.

References

Anderson, S. G. (2002). Engaging students in community-based research: A model for teaching social work research. Journal of Community Practice, 10, 71-87.

Baker, L. R., & Ritchey, F. (2009). Assessing practitioner's knowledge of practice evaluation: Initial psychometrics of the Practice Evaluation Knowledge Scale. Journal of Evidence-Based Social Work, 6, 376-389.

Baker, L. R., Stephens, F., & Hitchcock, L. (2010). Social work practitioners and practice evaluation: How are we doing? Journal of Human Behavior in the Social Environment, 20(8), 963-973.

Briggs, H. E., & Rzepnicki, T. L. (2004). Using evidence in social work practice: Behavioral perspectives. Chicago, IL: Lyceum Books.

Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum.

Cournoyer, B. R. (2004). The evidence-based social work skills book. Boston, MA: Pearson Education.

Creswell, J. W. (2009). Research design: Qualitative, quantitative, and mixed methods approaches (3rd ed.). Thousand Oaks, CA: Sage.

Drake, B., Jonson-Reid, M., Hovmand, P., & Zayas, L. (2007). Adopting and teaching evidence-based practice in master's level social work programs. Journal of Social Work Education, 43, 431-446.

Edmond, T., Megivern, D., Williams, C., Rochman, E., & Howard, M. (2006). Integrating evidence-based practice and social work field education. Journal of Social Work Education, 42, 377-396.

Frost, N. (2002). A problematic relationship? Evidence and practice in the workplace. Social Work and Social Sciences Research, 10, 38-50.

Gibbs, L. E. (2003). Evidence-based practice for the helping professions. Pacific Grove, CA: Brooks/Cole-Thompson Learning.

Holden, G., Barker, K., Meenaghan, T., & Rosenberg, G. (1999). Research self-efficacy: A new possibility for educational outcomes assessment. Journal of Social Work Education, 35, 463-476.

Holden, G., Barker, K., Rosenberg, G., & Onghena, P. (2008). The Evaluation Self-Efficacy Scale for assessing progress toward CSWE accreditation-related objectives: A replication. Research on Social Work Practice, 18, 42-46.

Howard, M. O., Allen-Meares, P., & Ruffolo, M. (2007). Teaching evidence-based practice: Strategic and pedagogical recommendations for schools of social work. Research on Social Work Practice, 17, 561-568.

Howard, M. O., McMillan, C., & Pollio, D. E. (2003). Teaching evidence-based practice: Toward a new paradigm for social work education. Research on Social Work Practice, 13, 234-259.

Knee, R. T. (2002). Can service learning enhance student understanding of social work research? Journal of Teaching in Social Work, 22, 213-225.

Piedmont, R. L., McCrae, R. R., Riemann, R., & Angleitner, A. (2000). On the invalidity of validity scales: Evidence from self-reports and observer ratings in volunteer samples. Journal of Personality and Social Psychology, 78, 582-593.

Pollio, D. E. (2006). The art of evidence-based practice. Research on Social Work Practice, 16, 224-232.

Rosen, A. (2003). Evidence-based social work practice: Challenges and promise. Social Work Research, 27, 197-208.

Royse, D. (2007). Research methods in social work (5th ed.). Belmont, CA: Brooks/Cole.

Soydan, H. (2008). Applying randomized controlled trials and systematic reviews in social work research: Reviews. Research on Social Work Practice, 18, 311-318.

Straus, S. E., Richardson, W. S., Glasziou, P., & Haynes, R. B. (2005). Evidence-based medicine: How to practice and teach EBM (3rd ed.). Edinburgh, UK: Churchill Livingstone.

Vogt, P. W. (2005). Dictionary of statistics and methodology: A nontechnical guide for the social sciences (3rd ed.). Thousand Oaks, CA: Sage.

Whittaker, J. K., Greene, K., Schubert, D., Blum, R., Cheng, K., Blum, K., ... Savas, S. A. (2006). Integrating evidence-based practice in the child mental health agency: A template for clinical and organization change. American Journal of Orthopsychiatry, 76, 194-201.

Zlotnick, J. L. (2004). Evidence-based practices in health care: Social work possibilities. Health and Social Work, 29, 259-261.

Accepted: 11/10

Lisa R. Baker is associate professor at the University of Alabama at Birmingham. David E. Pollio is professor and Ashley Hudson is a graduate student at the University of Alabama.

Address correspondence to Lisa R. Baker, University of Alabama at Birmingham, Department of Social Work, 1530 3rd Avenue South, Birmingham, AL 35294; e-mail: lrbaker@uab.edu.

Lisa R. Baker

University of Alabama at Birmingham

David E. Pollio

University of Alabama

Ashley Hudson

University of Alabama

DOI: 10.5175/JSWE.2011.200900127
TABLE 1. Comparison of Student Response by Item, Pretest,
Posttest, and Follow-Up

           Pretest        Posttest      H1 Sig.    Pretest        Follow-up      H2 Sig.
           (n=34)         (n=34)        level      (n=25)         (n=25)         level
PEKS
Item      M      SD      M      SD                M      SD      M      SD

1        2.03   1.11    3.76   .781    .000 *    2.12   1.130   3.92   .909    .000 *
2        1.97   1.00    3.56   .786    .000 *    2.00   1.00    3.68   .900    .000 *
3        1.68    .843   3.59   .892    .000 *    1.72    .843   3.60   .866    .000 *
4        2.26   1.11    3.76   .699    .000 *    2.08   1.08    4.00   .764    .000 *
5        2.65   1.18    4.15   .657    .000 *    2.44   1.12    4.08   .862    .000 *
6        2.15   1.11    3.79   .978    .000 *    2.04   1.02    4.00   .816    .000 *
7        2.06   1.07    3.41   .657    .000 *    1.88    .881   3.96   .841    .000 *
8        2.38   1.21    3.68   .912    .000 *    2.32   1.18    4.04   .735    .000 *

Note. PEKS=Practice Evaluation Knowledge Scale.

* p<.001.

TABLE 2. Comparison of Student Pretest and Follow-Up Response by Item

                Pretest      Follow-up
                 (n=25)       (n=25)
                                                    Sig.    Effect
PEKS Item     M      SD      M     SD    t-score    level    size

1           2.12   1.130   3.92   .909    -7.348   .000 *   .294
2            2.0   1.00    3.68   .900    -6.913   .000 *   .185
3           1.72    .843   3.60   .866    -7.824   .000 *   .011
4           2.08   1.08    4.00   .764    -9.252   .000 *   .405
5           2.44   1.12    4.08   .862    -6.216   .000 *   .135
6           2.04   1.02    4.00   .816    -8.363   .000 *   .200
7           1.88    .881   3.96   .841   -10.023   .000 *   .274
8           2.32   1.18    4.04   .735    -6.422   .000 *   .081

Note. PEKS=Practice Evaluation Knowledge Scale.

* p<.001.

FIGURE 1. Practice Evaluation Knowledge Scale

Practitioner Evaluation Knowledge Scale (PEKS) (Student Version)

Practice evaluation is a process in which a practitioner applies systematic measurement of client goals and progress in order to assess treatment or intervention effectiveness. Please rank your beliefs about practice evaluation using the following scale.

Strongly disagree  1  2  3  4  5  Strongly agree

1. I have been adequately trained to conduct practice evaluation.           1  2  3  4  5
2. I am comfortable with my knowledge of evaluation designs.                1  2  3  4  5
3. If I had to design an evaluation plan I would know where to begin.       1  2  3  4  5
4. I am able to identify an evaluation outcome.                             1  2  3  4  5
5. I am familiar with issues of reliability and validity.                   1  2  3  4  5
6. I am able to locate measures and scales to assist in evaluation.         1  2  3  4  5
7. I am comfortable with data analysis techniques.                          1  2  3  4  5
8. The statistics I am required to keep are useful in evaluating outcomes.  1  2  3  4  5