
Utilization and Perceived Utility of Institutional Administrative and Student Affairs Assessment Resources

Regional accreditation is the mechanism through which many institutions account for the quality of the education provided to their students as well as the quality of the environment within which this education is provided. With the growing move toward accountability in higher education (Martin, Goulet, Martin, & Owens, 2015), institutions have found themselves facing more rigorous assessment demands from their regional accreditors (Eaton, 2013). Without regional accreditation, institutions are unable to offer federal financial aid, the primary funding source for many students. Nor can their students graduate with degrees from programs holding disciplinary accreditation, which is a must for many employers. Given the trend toward increased accountability both during and after students' time on campus, investigating the quality of a comprehensive institutional assessment process is vital to both student and institutional success. This challenge can only be met by institutions being actively and effectively engaged in the assessment process.

Considering the breadth of assessment being conducted across institutions, effectively promoting and sustaining institutional assessment processes can be overwhelming for those officially charged with the tasks. The number of faculty and staff in need of training and support in this critical institutional function is often disproportionately large compared to the number of assessment professionals available. In response, many assessment offices have implemented assessment teams to assist both faculty and administrative and student affairs units across campuses in promoting and sustaining effective assessment processes (Fishman, 2017; Krzykowski & Kinser, 2014; Slager & Oaks, 2013).

This study sought to better understand participants' perceptions of their own knowledge of and confidence in the assessment process. Specifically, this study examined how those perceptions are impacted by the peer review process facilitated by an Institutional Effectiveness (IE) Review Team and by other specific resources supported by an Office of Institutional Effectiveness (OIE), such as consultation and website materials. The research questions that guided this study were as follows: (a) What are the perceived strengths and weaknesses of the resources in place to develop knowledge of and confidence in the assessment process? (b) How does perceived utility differ among divisions of the institution? (c) How do participants perceive their own knowledge of and confidence in the assessment process? and (d) What is the relationship between knowledge of and confidence in the assessment process and the utility of specific resources in place?

Because research in the field of assessment has lacked data-driven processes for evaluating the effectiveness of institutional assessment practices, particularly those related to administrative and student affairs units, a gap in the literature exists, and further research was therefore warranted.

Background

Common resources used to promote effective institutional assessment processes include rubrics and peer review and feedback (Fulcher, Coleman, & Sundre, 2016; Jonsson, 2013; Kahlon, Delgado-Angulo, & Bernabe, 2015; Panadero & Romero, 2014). Assessment teams often apply institutional rubrics to annual assessment reports to supplement quantitative evaluation with qualitative feedback. Apart from this annual process, assessment offices may provide additional resources, such as consultation opportunities or website materials. However, any relationship between these resources, assessment teams, and successful assessment processes "is only speculative until systematically evaluated" (Fulcher & Bashkov, 2012, p. 7). Assessment offices and review teams devote significant effort to applying rubrics, providing feedback, and developing support materials. The impact of these efforts is difficult to gauge, but programmatic evaluation allows institutions to examine the impact of a multitude of practices and determine whether they have the most appropriate resources in place to positively affect assessment processes across campus (Fink, 2013). Any programmatic assessment process "should continue to undergo evaluation where it can be modified to ensure that every element contributes to the program's outcomes" (Shutt, Garrett, Lynch, & Dean, 2012, p. 78).

This focus on specific assessment resources is important because often institutions focus their assessment on participant satisfaction instead of the impact of specific resources on assessment outcomes (Chalmers & Gardiner, 2015). For example, Meyer and Murrell (2014) examined how a variety of institutions evaluated their faculty development programs in online learning and found that 95% of responding institutions focused outcome measures on faculty satisfaction with the training, and 90% focused outcome measures on faculty perception of the usefulness of the training. A more effective approach may be to collect data addressing the frequency with which participants consult specific resources provided and apply the skills learned, as well as their reasons for not using the specific resources provided or applying the skills taught (Yarber et al., 2015). Collecting data specific to the utility and application of specific resources could allow program developers to address more systematically any weaknesses or shortcomings participants reveal.

Methods

Research Design

The purpose of this nonexperimental quantitative study was two-fold. First, the researchers sought to better understand participants' perceptions of their own knowledge of and confidence in the assessment process. Second, this study identified perceived strengths and weaknesses of existing resources to determine their utility. In doing so, the researchers intended to go beyond anecdotal findings and examine a model being implemented at one large public, southeastern university. Specifically, this study examined how participants' perceptions are impacted by the peer review process facilitated by the IE Review Team and by other specific resources supported by the OIE. This study examined the "process of interaction" between IE Review Team members and administrative and student affairs units, relying on the participants' views of the process to construct a clearer picture of perceived strengths and weaknesses of the resources in place (Creswell, 2014, p. 8).

Participants

Researchers used saturation sampling to survey all administrative and student affairs unit administrators, assessment coordinators, and staff who were responsible for or had contributed to the preparation of their units' annual assessment reports or plans during any of the six previous assessment cycles. The total study population was 85; of those surveyed, 61 provided data, yielding a response rate of 72%.

Data Collection

This study relied on data collected by the OIE through an anonymous electronic survey adapted, with permission from the original authors, to reflect the resources specific to the research university (Rodgers, Grays, Fulcher, & Jurich, 2013). The complete survey instrument is included in the Appendix. Administered at the conclusion of a yearly assessment cycle, the survey addressed two main areas: Use of Assessment Resources and Assessment Environment. Each item in the Use of Assessment Resources section described a unique resource available to administrative and student affairs units, such as face-to-face feedback from an IE Review Team member or general information on the OIE website. Likert-scaled responses included: I did not know about this resource; I knew about this resource but did not use it; This resource was not at all helpful; This resource was a little helpful; This resource was quite helpful; and This resource was very helpful. Each item in the Assessment Environment section addressed participants' confidence in their understanding of good assessment processes, their ability to conduct assessment activities, and their ability to successfully report assessment activities. Likert-scaled responses for all questions included: Very untrue, Somewhat untrue, Neither true nor untrue, Somewhat true, and Very true.

Creswell (2014) stated that "[when] one modifies an instrument ... the original validity and reliability may not hold for the new instrument, and it becomes important to reestablish validity and reliability during data analysis" (p. 160). To establish validity and reliability, the OIE pilot tested the survey with the Associate Vice President for Institutional Effectiveness, and all seven members of the IE Review Team provided feedback regarding item clarity and the arrangement of scale items. Gay, Airasian, and Mills (2009) stated that "if numbers are used to represent the response choices," as with the series of Likert-scaled items that make up the research instrument for this study, "analysis for internal consistency can be accomplished using Cronbach's alpha" (p. 161). Reliability of the instrument was assessed using Cronbach's alpha; results showed moderate reliability for the utility of individual practices (α = .64) and high reliability for knowledge of and confidence in assessment (α = .92).
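As a rough illustration of the internal-consistency check described above, the sketch below computes Cronbach's alpha from a set of Likert-coded items; the column names and responses are hypothetical, not the study's data.

```python
# Minimal sketch of an internal-consistency check with Cronbach's alpha.
# Column names and sample responses are hypothetical, not the study's data.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical Likert-coded responses (1-5) to the three Assessment Environment items.
responses = pd.DataFrame({
    "understand_practice": [4, 5, 3, 4, 5, 2],
    "can_conduct":         [4, 5, 3, 4, 4, 2],
    "can_report":          [5, 5, 3, 4, 4, 1],
})
print(f"alpha = {cronbach_alpha(responses):.2f}")
```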

Data Analysis

Researchers used descriptive statistical measures to evaluate perceived knowledge of and confidence in the assessment process and utility of specific resources. Mean scores were calculated both in the aggregate and by division to determine any variance in utility amongst the divisions represented. These data addressing the first three research questions provided the OIE with a better understanding of participants' knowledge of and confidence in the assessment process, as well as the perceived strengths and weaknesses of the resources the OIE supports.

Treating the perceived utility of each specific resource as an independent variable, the researchers applied regression and correlation methods to determine whether relationships existed between each independent variable and a constructed dependent variable, the knowledge of and confidence in assessment composite score (KCC score). Researchers constructed individual KCC scores by averaging each participant's responses to the three questions in the Assessment Environment section of the survey. Regression coefficients provided a means of estimating the extent to which one variable impacted another, while correlation coefficients provided a way to assess the accuracy of those estimates (de Vaus, 2014). This provided an appropriate means of examining the relationships between specific resources supported, such as face-to-face feedback and written feedback, and knowledge of and confidence in the assessment process. Correlation matrices were compiled to display and review the results of these analyses.
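As a minimal sketch of the analysis just described, assuming hypothetical column names and data, the KCC composite can be computed as the mean of the three Assessment Environment items and then correlated with a resource's utility ratings:

```python
# Sketch of constructing the KCC composite and correlating it with one resource's
# utility ratings. Column names and data are illustrative, not the study's data.
import pandas as pd
from scipy.stats import pearsonr

survey = pd.DataFrame({
    # Assessment Environment items (1 = Very untrue ... 5 = Very true)
    "understand_practice": [4, 5, 3, 4, 5],
    "can_conduct":         [4, 5, 3, 4, 4],
    "can_report":          [5, 5, 3, 4, 4],
    # Utility rating for one resource (1 = did not know ... 6 = very helpful)
    "electronic_feedback": [5, 6, 3, 5, 4],
})

# KCC score = mean of the three Assessment Environment responses per participant.
survey["kcc"] = survey[["understand_practice", "can_conduct", "can_report"]].mean(axis=1)

r, p = pearsonr(survey["kcc"], survey["electronic_feedback"])
print(f"r = {r:.2f}, p = {p:.3f}")
```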

Results and Discussion

Findings are presented in two primary categories. The first category addresses perceived utility of specific resources and participants' perception of their knowledge of and confidence in the assessment process. The second category addresses the relationship between perceived utility of specific resources and participants' perception of their knowledge of and confidence in the assessment process.

Individual Practices and Processes

Participants rated the utility of each specific resource using a six-point Likert scale, with 1 indicating I did not know about this resource, 2 indicating I knew about this resource but did not use it, and 3 through 6 indicating levels of utility, ranging from This resource was not at all helpful (3) to This resource was very helpful (6). Individual items addressed the utility of general information about assessment from OIE's website (OIE Website); general information about assessment from sources other than the OIE website, such as assessment books or conference workshops (External Resources); face-to-face feedback from IE Review Team members during the annual review (F2F); electronic feedback from OIE and IE Review Team members outside the annual review (Electronic Feedback); consultation with IE Review Team members outside the annual review (RT Off Cycle); consultation with OIE staff outside the annual review (OIE Off Cycle); the Administrative, Academic, and Student Support Services Rubric (OIE Rubric); and the rubric and example specific to each division (Divisional Example). Table 1 presents descriptive statistics for the specific resources, and Table 2 presents inter-item correlations.
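For clarity, the six-point coding described above can be expressed as a simple lookup from verbatim response label to numeric code; the dictionary below merely restates the scale and is not part of the study's instrument or analysis code.

```python
# Illustrative mapping of the six-point utility scale to numeric codes.
UTILITY_CODES = {
    "I did not know about this resource": 1,
    "I knew about this resource but did not use it": 2,
    "This resource was not at all helpful": 3,
    "This resource was a little helpful": 4,
    "This resource was quite helpful": 5,
    "This resource was very helpful": 6,
}

def code_response(label: str) -> int:
    """Convert a verbatim response label to its numeric code."""
    return UTILITY_CODES[label]

print(code_response("This resource was quite helpful"))  # prints 5
```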

In the aggregate, participants reported the least useful resources to be the OIE Website and the External Resources that participants seek or experience outside their interaction with the OIE, with means of 3.21 and 3.00 respectively, indicating these individual practices were not helpful. The highest means were reported for F2F and Electronic Feedback, at 5.11 and 4.92 respectively, indicating these specific resources were helpful. Regarding correlations between resources, statistically significant correlations were found most notably between resources of similar format. For example, relatively static sources of information (i.e., OIE Website and External Resources) showed a mild, statistically significant correlation of .47. Similarly, static templates or examples (i.e., OIE Rubric and Divisional Example) demonstrated a high, statistically significant correlation of .82. Perhaps not surprisingly, resources incorporating some form of dynamic, personalized interaction (i.e., F2F, Electronic Feedback, RT Off Cycle, and OIE Off Cycle) produced multiple statistically significant correlations (see Table 2).

Research question two examined the variation in utility of specific resources among the different divisions represented. Participants perceived F2F and Electronic Feedback to have the most utility in three of the five divisions represented: Vice President--Academic Affairs (VPAA), President, and Vice President--Student Affairs and Enrollment Management (VPSAEM). The divisions of Vice President--Business and Finance (VPBF) and Chief Information Officer/Information Technology (CIOIT) rated OIE Off Cycle as the most useful resource, followed by F2F.

Knowledge of and Confidence in Assessment

Research question three addressed participants' perceptions of their own knowledge of the assessment process and their confidence in applying that knowledge. Participants responded to a series of Likert-scaled questions focusing on Assessment Environment, with responses ranging from Very untrue (1) to Very true (5). Items addressing knowledge of and confidence in assessment were: 1) I have a solid understanding of what constitutes good assessment practice; 2) I am confident I can successfully conduct assessment activities in my unit; and 3) I am confident I can successfully report assessment activities in my unit (see Table 3).

In all three cases, mean scores were slightly higher than 4.00, indicating that, in the aggregate, participants felt it was at least Somewhat true that they understand what constitutes good assessment processes, that they can conduct assessment, and that they can report their assessment activities. As with the utility of individual practices, however, there was variation when results were viewed by division. Participants from the divisions of VPBF and CIOIT reported comparatively less confidence in all three areas. Emil and Cress (2014) noted that perceived skill can affect engagement. In the aggregate, then, this common barrier to engagement in assessment may not apply in this case; however, if the results are in fact a true reflection of participants' perceptions of their knowledge and confidence, some divisions may be more likely to engage than others.

Correlational Analyses

After review of the descriptive statistics for each item, correlational analyses were used to investigate the relationship between knowledge of and confidence in the assessment process and the utility of the specific resources in place. To facilitate these analyses, the KCC score for each participant was derived from the participant's responses to the three Assessment Environment items listed above. All three items used in the composition of the KCC score demonstrated high, statistically significant correlations, suggesting concurrent validity (see Table 4).
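A brief sketch of the inter-item correlation check described above, using pandas to produce a pairwise Pearson correlation matrix for the three items; the data are illustrative only.

```python
# Pairwise Pearson correlations among the three Assessment Environment items.
# Data are illustrative, not the study's data.
import pandas as pd

items = pd.DataFrame({
    "understand_practice": [4, 5, 3, 4, 5, 2],
    "can_conduct":         [4, 5, 3, 4, 4, 2],
    "can_report":          [5, 5, 3, 4, 4, 1],
})
print(items.corr(method="pearson").round(2))  # inter-item correlation matrix
```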

Correlations between the KCC score, individual practices, and number of assessment cycles in which participants have engaged were then reviewed (see Table 5).

Participants' KCC Scores and Utility of Specific Resources

As shown in Table 5 below, of the eight specific resources identified for this study, only two were shown to have statistically significant relationships with participants' KCC scores. Using Pearson's correlation, both Electronic Feedback and resources on the OIE Website demonstrated statistically significant positive relationships with participants' KCC scores at the p < 0.05 level.

Before conducting regression analyses, the researchers conducted a second set of descriptive and correlational analyses, excluding all responses of (1) I did not know about this resource and (2) I knew about this resource but did not use it from the Use of Assessment Resources section of the survey instrument. This manipulation of the data permitted analyses of the perceived utility of each specific resource as reported only by participants who actually used each resource. Descriptive statistics are presented in Table 6 below; the sample size varies according to the number of participants who used each resource.
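The filtering step described above can be sketched as follows, assuming hypothetical column names; responses coded 1 or 2 are set to missing so that each resource's descriptive statistics reflect only its actual users.

```python
# Sketch of the "users only" filtering: codes 1 (did not know) and 2 (knew but
# did not use) are excluded before re-running descriptive and correlational
# analyses. Column names and data are illustrative.
import pandas as pd

survey = pd.DataFrame({
    "oie_website":         [1, 2, 4, 5, 6],
    "electronic_feedback": [5, 6, 3, 5, 4],
})

resource_columns = ["oie_website", "electronic_feedback"]
users_only = survey[resource_columns].where(survey[resource_columns] >= 3)

# n varies by resource because only actual users of each resource remain.
print(users_only.count())   # per-resource sample size
print(users_only.mean())    # per-resource mean utility among users
```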

In the aggregate, participants who had used the specific resources the OIE supports reported the least useful resources to be the OIE Rubric and the OIE Website, with means of 4.48 and 4.60 respectively. The highest means were reported for OIE Off Cycle and F2F, at 5.18 and 5.17 respectively. These targeted opportunities for interaction between assessment coordinators and the IE Review Team and OIE staff encourage the reflection and engagement in the assessment process indicated in the literature (Gebelica, Van den Bossche, De Maeyer, Segers, & Gijselaers, 2014). Both are needed for participants to see the benefit of assessment beyond external factors and to develop confidence and skill in the process (Emil & Cress, 2014).

Statistically significant correlations with KCC scores differed when analyses were based on the entire study sample versus only those participants who had actively used a particular resource. In the aggregate, only Electronic Feedback and the OIE Website demonstrated statistical significance. When participants who had not used specific resources were removed from the correlation, Electronic Feedback continued to produce statistical significance, but the OIE Website did not. Instead, four additional individual resources, F2F, RT Off Cycle, OIE Off Cycle, and the OIE Rubric, demonstrated statistically significant relationships with KCC scores. The reported utility of the institutional rubric corroborates the work of Panadero and Romero (2014): it is helpful for participants to have an idea of what their final products should look like, and the OIE Rubric provides that guidance. Overall, however, the opportunities for personal or electronic interaction continued to have the most perceived utility. These findings are similar to those of Rodgers et al. (2013), which also supported consultation with assessment professionals and the use of feedback, and Kahlon et al. (2015), which promoted formative feedback, particularly in a face-to-face setting.

Further analysis was conducted to explore the relationships between participants' KCC scores and the specific resources when considering only those participants who had used them. As shown in Table 7 below, of the eight specific resources identified for this study, five were shown to have statistically significant relationships with participants' KCC scores, as opposed to two when considering all participants. Using Pearson's correlation, F2F, Electronic Feedback, RT Off Cycle, OIE Off Cycle, and the OIE Rubric demonstrated statistically significant relationships with KCC scores at the p < 0.01 level, as depicted in Table 7.

Regression Analyses

Finally, while the correlational analyses indicated significant relationships between participants' KCC scores and five of the specific resources, the researchers were interested in the variance in participants' KCC scores accounted for by specific resources. Hierarchical regression was applied, with model steps composed from the results of the correlational analyses and the researchers' discretion. Specifically, RT Off Cycle and OIE Off Cycle served as step one of the model, and F2F, Electronic Feedback, and the OIE Rubric were entered as step two (see Table 8).

Both steps of the model were found to be statistically significant at the p < .05 level, predicting variance within participants' KCC score. Step one of the model accounted for approximately 31% of the variance, with step two adding a slight increase of approximately 8%. Review of histograms suggested normal distribution of residuals; however, collinearity statistics (i.e., tolerance and VIF) suggested caution (Field, 2018).
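A hedged sketch of the two-step hierarchical regression and collinearity check described above, using statsmodels; the predictor names mirror the study's labels, but the data and results are illustrative rather than a reproduction of the authors' analysis.

```python
# Sketch of a two-step hierarchical regression predicting KCC scores, with a
# VIF-based collinearity check (tolerance = 1 / VIF). Data are illustrative.
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.DataFrame({
    "rt_off_cycle":        [4, 6, 3, 5, 6, 2, 5, 4],
    "oie_off_cycle":       [5, 6, 3, 5, 6, 2, 6, 4],
    "f2f":                 [5, 6, 4, 5, 6, 3, 5, 4],
    "electronic_feedback": [5, 6, 3, 5, 5, 3, 6, 4],
    "oie_rubric":          [4, 5, 3, 4, 5, 3, 4, 4],
    "kcc":                 [4.3, 4.7, 3.0, 4.0, 4.7, 2.3, 4.3, 3.7],
})

step1_vars = ["rt_off_cycle", "oie_off_cycle"]
step2_vars = step1_vars + ["f2f", "electronic_feedback", "oie_rubric"]

step1 = sm.OLS(df["kcc"], sm.add_constant(df[step1_vars])).fit()
step2 = sm.OLS(df["kcc"], sm.add_constant(df[step2_vars])).fit()
print(f"Step 1 R^2 = {step1.rsquared:.2f}, Step 2 R^2 = {step2.rsquared:.2f}, "
      f"delta R^2 = {step2.rsquared - step1.rsquared:.2f}")

# Collinearity check: VIF for each step-2 predictor.
X = sm.add_constant(df[step2_vars])
for i, name in enumerate(X.columns):
    if name != "const":
        print(name, round(variance_inflation_factor(X.values, i), 2))
```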

Implementing successful institutional assessment processes is important both in terms of external accountability and internal success. The findings from this study indicate that participants value the opportunities the OIE provides for indirect and direct interaction with members of the OIE staff and the IE Review Team. Although the existing literature regarding the benefits of peer review focuses largely on academic assessment (Jonsson, 2013; Kahlon et al., 2015), the premise is very much the same. Like students, participants in this study appreciated both the face-to-face and the electronic feedback provided during the institution's annual review process.

The majority of the OIE's and the IE Review Team's contact with administrative and student affairs units each year is focused on preparing annual assessment plans and reports. IE Review Team members review both documents and provide feedback to those responsible for report preparation. Written feedback is first shared electronically and is then shared during an annual face-to-face review process during which those who contribute to these documents and those who review them discuss opportunities to improve final reports and develop assessment plans for the coming year. IE Review Team members assist responsible administrators and staff in identifying positive attributes, as well as addressing weaknesses. Gebelica et al. (2014) found support for "accurate and timely feedback" in encouraging "active engagement" and "reflective interactions" (p. 93), which is consistent with the findings of this study. This face-to-face review process provides units with dedicated time to work with IE Review Team members to think critically about the objectives they were trying to accomplish, to determine how effective their strategies were in accomplishing those objectives, and to identify what they may need to do differently going forward. These established feedback processes have demonstrated value to participants and, if maintained, should continue to promote productive engagement in the institution's assessment processes.

Although the aggregate mean scores for consultation with OIE staff or IE Review Team members varied slightly when considering all participants versus only those participants who used these specific resources, consultations outside the annual review process were still perceived to be among the top four most useful resources and further corroborated the benefits of peer feedback (Nicol, Thomson, & Breslin, 2014). Of the participants, 33% were either unaware they had the option of consulting with an OIE staff member outside the annual review process or chose not to pursue it. Furthermore, of these, 28% were unaware of the same option for consulting with a member of the IE Review Team. However, for both resources, when considering only those participants who had used them, the most common response was This resource was very helpful (6). Additionally, results from the regression analyses suggest that off-cycle consultation with OIE staff serves as a greater predictor of participant confidence in assessment (i.e., KCC score) than even more dynamic (e.g., face-to-face) forms of interaction during recognized assessment periods. Given these findings, the OIE may benefit from better publicizing such options moving forward (Hahn & Lester, 2012).

Both the OIE Rubric and the Divisional Example present additional publicity possibilities for the OIE. Panadero and Romero (2014) concluded that rubrics, when "well-designed ... can have a positive impact on performance" (p. 142). As with the opportunities for consultation outside the annual review cycle, 28% of participants were either unaware of the OIE Rubric used to evaluate the quality of completed assessment reports or chose not to consult it, and 26% were either unaware of or chose not to consult the Divisional Example designed as an example of strong assessment reporting for each division. For those using these specific resources, mean scores in the aggregate showed that each was almost squarely between a little helpful (4) and quite helpful (5). Results by division show that only the VPSAEM participants felt the Divisional Example was at least quite helpful (5), while they rated the OIE Rubric as only a little helpful (4); for all other divisions, reported means for both the OIE Rubric and the Divisional Example were also only a little helpful (4). This suggests the OIE may have opportunities for improvement in both of these specific resources.

Finally, although the OIE Website and External Resources were perceived to be at least a little helpful (4) by those who used them, the aggregate results highlight that additional publicity efforts may be in order. Forty-three percent of participants were either unaware of materials posted on the OIE website or chose not to use them, and 46% were either unaware that External Resources were available or chose not to use them. While the OIE cannot control the utilization of specific resources, it can take steps to be certain that those resources it does provide via its website are helpful to those who seek them. It may therefore be beneficial for the OIE to examine more closely whether resources are recognized but not used or are truly not recognized as available options.

Limitations, Delimitations, and Assumptions

The immediate results of this study are limited to one university, but they can extend the body of literature relative to administrative and student affairs assessment in higher education. Existing literature often fails to go beyond anecdotal evidence to concrete quantitative data, and this study provided quantitative data indicating which specific assessment resources were perceived to be more helpful than others. With regard to the regression analyses specifically, tolerance statistics suggested that results be interpreted with caution due to multicollinearity concerns. Furthermore, because data were collected to study the impact of administrative and student affairs assessment processes at one large, public, southeastern university, generalizability is limited; however, the results should still be of use to assessment practitioners beyond the study setting.

Implications for Practice

Findings from this study are the first step in conducting ongoing programmatic assessment of the effectiveness of administrative and student affairs assessment practices at one large, public, southeastern university. The data collected provide baseline assessment data regarding the perceived strengths and weaknesses of specific resources supported by OIE assessment teams. Additional data provided new insight into participants' perceptions of their own knowledge of and skill in applying assessment processes.

In expanding the assessment process, it is vital to recruit professionals who have demonstrated some skill in applying effective assessment processes. Data from this study suggest that, in the aggregate, participants felt it was at least somewhat true that they are able to do so. The OIE and the assessment team may consider revising this section of the survey instrument to better identify those individuals who may be best suited to coach others in conducting and reporting assessment activities. It is possible, for example, that participants feel reasonably certain they can perform these activities themselves but are far less certain they could assist others in doing so. As the OIE and the assessment team consider revising individual practices and processes, it could be helpful to collect qualitative information from participants regarding ways to improve the utility of each.

Conclusion

The OIE has established and developed assessment resources over time, but their impact has not been routinely and formally investigated. Although this study was limited to a single office working with a specific population of administrative and student affairs assessment coordinators, administrators, and staff, the findings corroborate the positive impact of rubrics and of peer review and feedback, providing the OIE with a basis for continuing to support many of its existing resources.

This study was intended to address questions about the effectiveness of the resources in place to support institutional administrative and student affairs assessment units and to help ensure that all resources contribute to an effective assessment process; the researchers believe the findings support these efforts. It is important to "ask the tough questions and to get the news that something is not working (or working as assumed) and should therefore be revised or eliminated" (Meyer & Murrell, 2014, p. 4). This study, which may serve as a model for other institutions that support similar resources, provided baseline data for assessment teams to begin a decision-making process and to determine, based on the evidence collected, which resources should be continued or modified to attain the most beneficial assessment outcomes.

Appendix

Assessment Resources and Environment Survey Instrument *

* Adapted, with permission, from Rodgers, M., Grays, M., Fulcher, K., & Jurich, D. (2013).

Thinking about the assessment resources provided on campus, please choose the phrase that best describes your perception of the usefulness of each resource.

Response options: This resource was very helpful; This resource was quite helpful; This resource was a little helpful; This resource was not at all helpful; I knew about this resource but did not use it; I did not know about this resource.

    General information about assessment from OIE's website
    General information about assessment from sources other than the OIE website, such as assessment books or conference workshops
    Face-to-Face feedback from IE Review Team Member (during annual review)
    Electronic feedback from OIE and IE Review Team Member (during annual review)
    Consultation with IE Review Team Member (outside annual review sessions)
    Consultation with OIE staff (outside annual review sessions)
    Administrative, Academic, and Student Support Services Rubric
    Rubric and example specific to my division (e.g., VPBF, VPSAEM, etc.)

Thinking about the assessment environment in your particular division (for example, Business and Finance or Academic Affairs), how would you respond to each statement?

Response options: Very true; Somewhat true; Neither true nor untrue; Somewhat untrue; Very untrue.

    I have a solid understanding of what constitutes good assessment practice.
    I am confident I can successfully conduct assessment activities in my unit.
    I am confident I can successfully report assessment activities in my unit.

Number of assessment cycles in which you have participated: 1, 2, 3, 4, 5 or more

Your reporting division: President, VPAA, VPSAEM, VPBF, CIOIT



References

Chalmers, D., & Gardiner, D. (2015). An evaluation framework for identifying the effectiveness and impact of academic teacher development programmes. Studies in Educational Evaluation, 46 (Evaluating Faculty Development), 81-91.

Creswell, J.W. (2014). Research design: Qualitative, quantitative, and mixed methods approaches. Thousand Oaks, CA: Sage Publications.

de Vaus, D. (2014). Surveys in social research. 6th ed. New York: Routledge/Taylor & Francis Group.

Eaton, J. S. (2013). Accreditation and the next reauthorization of the Higher Education Act. Inside Accreditation, 9(3).

Emil, S., & Cress, C. (2014). Faculty perspectives on programme curricular assessment: individual and institutional characteristics that influence participation engagement. Assessment & Evaluation in Higher Education, 39(5), 531.

Field, A. (2018). Discovering statistics using IBM SPSS statistics (5th ed.). Thousand Oaks, CA: Sage Publications.

Fink, L. D. (2013). Innovative ways of assessing faculty development. New Directions for Teaching & Learning, 2013(133), 47-59.

Fishman, S. M. (2017). Utilizing an assessment planning model: Revision through reflection. American Association of University Administrators, 32(1), 186-194.

Fulcher, K. H., Coleman, C. M., & Sundre, D. L. (2016). Twelve tips: Building high-quality assessment through peer review. Assessment Update, 28(4), 1.

Fulcher, K. H., & Bashkov, B. M. (2012). Do we practice what we preach? The accountability of an assessment office. Assessment Update, 24(6), 5.

Gay, L. R., Airasian, P. W., & Mills, G. E. (2009). Educational research: Competencies for analysis and applications. Upper Saddle River, N.J.: Merrill/Pearson.

Gebelica, C., Van den Bossche, P., De Maeyer, S., Segers, M., & Gijselaers, W. (2014). The effect of team feedback and guided reflexivity on team performance change. Learning and Instruction, 34, 86-96.

Hahn, T. B., & Lester, J. (2012). Faculty needs and preferences for professional development. Journal of Education for Library and Information Science, (2), 82.

Jonsson, A. (2013). Facilitating productive use of feedback in higher education. Active Learning in Higher Education, 14(1), 63-76.

Kahlon, J., Delgado-Angulo, E. K., & Bernabe, E. (2015). Graduates' satisfaction with and attitudes towards a master programme in dental public health. BMC Medical Education, 15(1), 61.

Krzykowski, L., & Kinser, K. (2014). Transparency in student learning assessment: Can accreditation standards make a difference? Change, 46(3), 67-73.

Martin, I. H., Goulet, L., Martin, J. K., & Owens, J. (2015). The use of a formative assessment in progressive leader development. Journal of Leadership Education, 14(4).

Meyer, K. A., & Murrell, V. S. (2014). A national survey of faculty development evaluation outcome measures and procedures. Online Learning, 18(3).

Nicol, D., Thomson, A., & Breslin, C. (2014). Rethinking feedback practices in higher education: A peer review perspective. Assessment & Evaluation in Higher Education, 39(1), 102-122.

Panadero, E., & Romero, M. (2014). To rubric or not to rubric? The effects of self-assessment on self-regulation, performance and self-efficacy. Assessment in Education: Principles, Policy & Practice, 21(2), 133-148.

Rodgers, M., Grays, M., Fulcher, K., & Jurich, D. (2013). Improving academic program assessment: A mixed methods study. Innovative Higher Education, 38(5), 383-395.

Shutt, M. D., Garrett, J. M., Lynch, J. W., & Dean, L. A. (2012). An assessment model as best practice in student affairs. Journal of Student Affairs Research and Practice, 49(1), 65-82.

Slager, E. M., & Oaks, D. J. (2013). A coaching model for student affairs assessment. About Campus, 18(3), 25-29.

Yarber, L., Brownson, C. A., Jacob, R. R., Baker, E. A., Jones, E., Baumann, C., & Brownson, R. C. (2015). Evaluating a train-the-trainer approach for improving capacity for evidence-based decision making in public health. BMC Health Services Research, 15(547).

AUTHORS

Cynthia D. Groover, Ed.D.

Georgia Southern University

Juliann Sergi McBrayer, Ed.D.

Georgia Southern University

Richard Cleveland, Ph.D.

Georgia Southern University

Amy Jo Riggs, Ph.D.

Georgia Southern University

CORRESPONDENCE

Email cgroover@georgiasouthern.edu.
Table 1

Descriptive Statistics for Utility of Specific Resources

                OIE        External
                Website    Resources       F2F

Mean             3.21         3.00        5.11
Median           4.00         3.00        5.00
Mode             1.00         1.00        6.00
Std. Dev.        1.77         1.81        0.92
Variance         3.14         3.27        0.84
Skewness        -0.02         0.07       -0.91
Kurtosis        -1.52        -1.66        0.78
Range            4.00         5.00        5.00

                Electronic    RT Off    OIE Off
                Feedback      Cycle      Cycle

Mean              4.92          4.05      4.21
Median            5.00          4.00      5.00
Mode              5.00          6.00      6.00
Std. Dev.         1.01          1.72      1.77
Variance          1.01          2.95      3.14
Skewness         -1.16         -0.43     -0.54
Kurtosis          2.45         -1.23     -1.20
Range             4.00          5.00      5.00

                OIE       Divisional
                Rubric    Example

Mean             3.54       3.70
Median           4.00       4.00
Mode             4.00       5.00
Std. Dev.        1.76       1.80
Variance         3.09       3.25
Skewness        -0.27      -0.39
Kurtosis        -1.19      -1.24
Range            5.00       5.00

Note. n = 61

Table 2

Inter-Item Correlations for Specific Resources

                        OIE        External
                        Website    Resources     F2F

External Resources      0.47 *                  0.40 *
F2F                     0.45 **     0.40 *
Electronic Feedback     0.45 **     0.18        0.71 **
RT Off Cycle            0.48 *      0.41        0.86 **
OIE Off Cycle           0.57 **     0.57 **     0.90 **
OIE Rubric              0.24        0.21        0.26
Divisional Example      0.08       -0.03        0.40 **

                        Electronic    RT Off
                        Feedback      Cycle

External Resources      0.18          0.41
F2F                     0.71 **       0.86 **
Electronic Feedback                   0.68 **
RT Off Cycle            0.68 **
OIE Off Cycle           0.71 **       0.87 **
OIE Rubric              0.48 **       0.28
Divisional Example      0.47 **       0.27

                        OIE Off      OIE
                        Cycle        Rubric

External Resources      0.57 **      0.21
F2F                     0.90 **      0.26
Electronic Feedback     0.71 **      0.48 **
RT Off Cycle            0.87 **      0.28
OIE Off Cycle                        0.50 **
OIE Rubric              0.50 **
Divisional Example      0.46 *       0.82 **

Note. n = 61. ** Denotes significant at the p<0.01 level;
* denotes significant at p<0.05.

Table 3

Mean Scores, Knowledge of and Confidence in
Assessment

               Q1       Q2       Q3

Mean          4.15     4.18     4.15
Median        4.11     4.11     4.11
Mode          4.11     4.11     4.11
Std. Dev.     1.85     1.97     1.88
Variance      1.71     1.94     1.78
Skewness     -1.78    -1.18    -1.85
Kurtosis      1.31     1.14     1.26
Range         3.11     4.11     3.11

Note. n = 61.

Table 4

Correlational Relationships between Variables
Contributing to KCC

            Practice    Conduct    Report

Practice                 0.79 *     0.71 *
Conduct      0.79 *                 0.87 *
Report       0.71 *      0.87 *

Note. n = 61. * Denotes significant at the p<0.05 level.

Table 5

Correlational Relationships between Participant KCC Scores
and Utility of Specific Resources

                 OIE        External                   Electronic
       Cycles    Website    Resources    F2F           Feedback

KCC    0.11      0.38 *     0.22         0.25          0.32 *

       RT Off    OIE Off    OIE          Divisional
       Cycles    Cycles     Rubric       Example

KCC    0.04      0.18       0.24         0.17

Note. n = 61. * Denotes significant at the p<0.05 level.

Table 6

Descriptive Statistics for Utility of Specific Resources
Manipulated

                  OIE        External
                  Website    Resources    F2F

N                  35         33           60
Mean               4.60       4.55         5.17
Median             5.00       5.00         5.00
Mode               5.00       5.00         6.00
Std. Deviation     0.85       0.79         0.83
Variance           0.72       0.63         0.68
Skewness          -0.03      -0.16        -0.51
Kurtosis          -0.50      -0.25        -0.82
Range              3.00       3.00         3.00

                  Electronic    RT Off    OIE Off
                  Feedback      Cycle     Cycle

N                  60            43        44
Mean               4.98          5.02      5.18
Median             5.00          5.00      5.00
Mode               5.00          6.00      6.00
Std. Deviation     0.87          0.91      0.92
Variance           0.76          0.83      0.85
Skewness          -0.44         -0.44     -0.75
Kurtosis          -0.58         -0.85     -0.55
Range              3.00          3.00      3.00

                  OIE       Divisional
                  Rubric    Example

N                  44        45
Mean               4.48      4.62
Median             4.00      5.00
Mode               4.00      5.00
Std. Deviation     1.02      1.05
Variance           1.05      1.10
Skewness           0.13     -0.28
Kurtosis          -1.67     -1.08
Range              3.00      3.00

Note. n varies from 33 to 60.

Table 7

Correlation between Participant KCC Score and Utility of
Specific Resources Manipulated

       OIE        External                Electronic
       Website    Resources    F2F        Feedback

KCC    0.33       0.29         0.35 **    0.34 **

       RT Off     OIE Off      OIE        Divisional
       Cycle      Cycle        Rubric     Example

KCC    0.54 **    0.55 **      0.42 **    0.14

Note. ** Denotes significance at the p < 0.01 level (2-tailed)

Table 8

Linear Regression Model Summary

                         R    Adjusted R²    SE       b        β

Step 1                 0.60       0.31      0.71
  Constant                                          1.236
  RT Off                                           -0.2      -0.02
  OIE Off                                           0.58      0.62
Step 2                 0.67       0.32      0.71
  Constant                                          0.56
  RT Off                                           -0.46     -0.44
  OIE Off                                           0.60      0.65
  F2F                                               0.12      0.01
  Electronic Feedback                               0.39      0.37
  OIE Rubric                                        0.17      0.20