
Program evaluation: a review of impact, method and emerging trends for music education.

Music making is the synthesis of many highly developed motor, cognitive and affective skills that culminate in a form of human art and capacity that is expressive and diverse. Our communal desire to organize those important teachings has given shape to the modern courses of instruction and programs that constitute the music education seen in private studios, classrooms and lecture halls. The successes and shortcomings of these programs have a great deal to do with the implementation, analysis and revision offered through systematized program evaluations. The following work aims to collect and condense materials about program evaluation within the field of music education, to distill their findings and to pose several questions as to how this information is relevant for future endeavours.

Let us begin with a brief overview of what exactly constitutes a program evaluation and what differentiates it from other forms of assessment. Evaluations seek to provide information to those who hold a stake in what is being evaluated (Mark, Henry & Julnes, 2000). Program evaluation involves the collection and analysis of data pertaining to a particular program or course, the results of which will in turn be used by the stakeholders to make improvements to future delivery of the program. Program evaluations can serve in a formative or summative manner, can be administered and analyzed by internal or external bodies, and can be conducted under a variety of circumstances with varied goals. Fitzpatrick, Sanders & Worthen (2011) observe that evaluations "improve our ways of thinking and therefore, our ways of developing, implementing and changing programs and policies" (p. 33). Evaluations are therefore framed not only through the discovery of tangible traits but through the methodological distance that such a process offers.

Many of the issues within music education are shared amongst many arts programs, but as music education is forced to meet the challenges of diversified classes, limited resources and changing public interest in music, it is important that sound program evaluation initiatives are in place to provide the information required to meet these challenges. For this same reason, it is imperative that there be a critical review of the evaluative processes that exist within the field of music education. This article aims to review (1) the role that program evaluation plays in critically forming and shaping music programs, (2) how external political and philosophical factors impact the program evaluation process for music programs, and (3) the approaches and tools used in the program evaluation literature for music education, and how these tools and methods have yielded their desired outcomes.

Program Evaluation as a Guiding Force--The role that program evaluation plays in critically forming and shaping music programs

One of the crucial roles of program evaluation and systematic analysis of courses is to provide data that can be utilized in the shaping and reforming of current and future courses of instruction. Cronbach (as cited in Colwell, 2006) proposes that the role of the program evaluation is to act as the arbiter for positive change within the classroom and for the training of future musicians. This sentiment is shared by Colwell (2006), who argues that program evaluation has the power to alter the consciousness and awareness of music as it connects to the formative and summative goals of the curriculum in question (p. 226). This position is supported by Boyle (1992), who adds that program evaluation provides a "sociopolitical" (p. 66) means by which to disseminate the values of the program in question and helps to strengthen the role of music within that learning community. To this end, program evaluation functions in a dual role: as the process that yields valuable and insightful data for the efficient delivery of a program of instruction, and as the process that helps to better situate the music program within the greater educational context in which it exists. Colwell and Cronbach support the understanding that evaluation is a critical part of the process necessary to deliver a transformative and genuine musical experience to students who are part of the program. At the same time, evaluations add to the music program by prompting self-reflection, forcing the stakeholders to examine their relationship and merit in relation to the other activities, programs and individuals around them.

Grimmett et al. (2010) report on their program evaluation of a suburban music program pilot. The research team and evaluators found several beneficial and detrimental factors as a result of the pilot and the program evaluation that ensued. One benefit of the evaluative process was affirmative data regarding the program's ability to provide effective instruction of music literacy skills (p. 59) and an awareness that homeroom teachers were able to effectively carry out the sustainment activities initiated by the music specialist. Conversely, Grimmett et al. (2010) discovered that the presence of the evaluative team caused the loss of extra-curricular music ensembles, and its performance analysis of teachers resulted in a precarious standoff between the evaluators and staff (p. 59). This scenario details the dual role that program evaluations can have on participants. On one hand, such evaluations can bring about heightened awareness of student involvement and success rates, resulting in increased funding for programs and a deeper understanding of how specialist and generalist teachers can better communicate and work in concert to monitor and sustain student learning. On the other hand, program evaluations can, in some circumstances, introduce unforeseen tensions and animosities within the personnel, curriculum and context of the classroom, even as they uncover and disclose unintended factors that constitute the success of a program. Access to such evaluative data can not only disclose information about the intended evaluation goal but also uncover data that will aid in shaping budgets, faculty training and student involvement initiatives.

The practice of program evaluation is also imperative in shaping the relationship between larger stakeholder units within the music education model. The interaction between colleagues (music specialists, generalists and administration) is an ongoing and developing relationship that is formed out of professional guidelines, regulations and performance reviews, but it can also be informed by the information delivered through the program evaluation. In their final report on eight Queensland middle-school programs, Hewton, Byrne & Rohde (1985) concluded that the lack of involvement from music specialists in the planning of curriculum and scheduling was impacting the quality of the music programs to the point where music's value was questioned by generalist teachers and administrators (p. 47). The evaluators suggest that in order for the music programs to grow within the district, music must be incorporated into the broader curricular planning process so that the programs can run at greater capacity and efficiency. In a scenario like this, the evaluative data suggested a deficiency in planning and teacher engagement that resulted in underperformance. This indicates that program evaluation not only provides input into student learning but also provides the insight necessary to alter the curriculum and how staff are deployed, in order to positively impact program success goals and benchmarks.

The goal of program evaluation is to provide the stakeholder group with the information necessary to adjust the planning of curriculum, not exclusively to provide the guidelines by which to make adjustments. Understanding the relationship and impact of program evaluation is a complex process that is too often over-simplified in favour of easily distinguishable, causal relationships. We are likely to see increased integration of program evaluation in both the formative and summative assessment processes of music education programs at all levels of the schooling system. If researchers and scholars are correct, the use of music program evaluations as formative tools by boards of education as well as localized practitioners will help guide the implementation of curriculum and allot the appropriate place for evaluations to act in a formalized capacity in shaping the design of music programs globally.

External Issues to Program Evaluations in Music--Political and philosophical factors that impact the program evaluation process

Program evaluation exists within a world of complex and interconnected ideas, principles and realities that an evaluator must have knowledge of in order to conduct the evaluation and produce meaningful results. Program evaluation in music is a growing field that is moving forward with new developments spurred on by a variety of factors. One of the most important external factors impacting program evaluation in music is the lack of published materials on how to conduct and review evaluations. Much of the body of literature on music-based program evaluations consists of unpublished dissertations that are used to satisfy thesis and dissertation requirements (Ferguson, 2007, p. 12) and are perceived as less 'useful' in their contribution to the academic repository of knowledge for music educators. Ferguson discusses two concerns. Firstly, the field of program evaluation is attempting to reconcile its own understanding of the value of program evaluations, and the literature that spawns from them, with the political necessity to contribute to other segments of the music education field (lesson planning, curriculum planning, repertoire studies). Secondly, publishing such articles and evaluation findings may create "negative results" (Ferguson, 2007, p. 12) with external, administrative and managerial powers. Whatever the interpretive outcome of the data collected through an evaluation, this valuable information is imperative in order to inform stakeholders about the viability, security and longevity of their program for end-users.

The final consideration to be drawn upon is how philosophical imperatives affect program evaluations within public music education. Music education continues to be shaped in practice by philosophical works that purport to create value and 'meaning' through the application of prescribed programs, ideals, and evaluative techniques. Colwell (1973) understood the presence of a philosophy, no matter how simple and rudimentary, as being critical to outlining the goals that the program wishes to accomplish (p. 136). To this extent, the presence of an individual program philosophy (as stipulated by a school, board or ministry) informs the parameters by which evaluators will hone and establish the success criteria from which to evaluate. Whatever position a program takes, it is important to realize that such program statements not only inform the course of instruction and individual assessment, but also frame how the evaluator will utilize that information to determine an appropriate evaluation model (or models) based upon the contextual backdrop. As the literature on the topic is sparse, it is unclear to what extent positions such as the 'aesthetic' (Reimer, 2003) and 'praxial' (Elliott, 1995) philosophies alter the methods chosen or the outcomes, yet it is reasonable to expect that as evaluations become more plentiful and detailed, the music education community will be better able to establish criteria and choose mixed methods to collect data and make judgments on such programs.

The analysis of external factors and their role in program evaluations is at this point by no means conclusive. For now, it is important to understand the contributing factors that inform the process of program evaluations and some of the dominant philosophies and topics that affect music programs; yet until a methodology is developed through which evaluators can analyze the plethora of items that might alter a music program evaluation, making a definitive statement is impossible. Understanding the factors that affect music program evaluations might benefit from the largely untapped process of meta-evaluation (Stufflebeam & Welch, 1986, p. 166), in which an evaluation team has the time and ability to analyze the surrounding factors that could have affected the outcome of their project. Meta-evaluation can be particularly beneficial when incorporating course instructors and directors into the process, not only requiring them to examine the evaluation findings but also allowing the community of practice to make critical judgments about individual performance. Future study of the external factors surrounding the evaluated program will better inform evaluators about how these forces could alter methodological decisions, stakeholder response and outcomes.

Procedures and Results of Music Program Evaluations--Approaches and tools used in program evaluation literature

We turn our attention now to a review of a variety of program evaluations and their reports, in an attempt to analyze their key methodological techniques and to assess their effectiveness in accomplishing the given tasks.

The program evaluation conducted by Hewton, Byrne & Rohde (1985), titled Primary Music Evaluation Report 3: Profiles of Eight School Music Programs, was initiated by the Department of Education, Queensland, Australia. The research team was tasked to evaluate the effectiveness of the current music curriculum implementation in eight primary schools as well as to evaluate the potential for implementation of a Kodály-based music program. The evaluation utilized a series of semi-structured interviews, questionnaires, pupil tests and researcher observations. Data collection followed a mixed-methods approach that involved pre-testing and post-testing of student competency in music before and at the end of the observation period, as well as qualitative data regarding pupil engagement with the curriculum and subject matter and pupil interest in sustainment studies beyond the music classroom. In all eight individual case studies, there is a consistent approach to techniques that seek to explore primarily the causal relationships between instruction and student performance within the current curriculum.

The report published by the researchers indicated that there was indeed a relationship between student performance and music teacher involvement within the program. Hewton, Byrne & Rohde (1985) go so far as to suggest that student performance was dependent on the continued presence of a music specialist (p. 47). The evaluation was useful in providing data and producing suggestions for how teacher involvement can be enhanced to improve student learning, but there are very few summative statements as to how there is an explicit, causal relationship between students, their performance and the mandated state curriculum. It is fair to argue that the program evaluation did accomplish its objective to evaluate student performance within the current curriculum, choosing to understand the causal effect that the music teacher has on imparting that learning within the classroom. To this extent, Hewton, Byrne & Rohde's (1985) data suggested that teachers did not often use the stipulated curriculum guides when initiating their programs, instead relying on alternative media and sources when teaching (p. 48). This may explain the research team's rationale for shifting their attention to teacher engagement over student engagement. Focusing on process is imperative as long as one does not jeopardize final outcomes. Although this evaluation cannot be seen as a failure in accomplishing its intended goal, it only began to answer and diagnose the primary evaluative question, and the report offered no findings regarding the implementation of a Kodály-based pedagogical model. In summary, the model employed by the researchers was effective and well-orchestrated, yet the shift in focus of the research question coloured the outcomes of the evaluation. As such, it is a reminder of the flexibility that an evaluator must have in the field when faced with a vague evaluative question and a perplexing scenario.

Hobson & Burkhardt (2012) were tasked with evaluating an early music education program to be delivered to pre-school students in an undisclosed city. The evaluators were tasked with producing data to support future program implementations as well as suggesting possible program-specific outcomes. Data collection was to occur continually over a period of two years, and was done primarily through field notes and interviews with stakeholders, as well as through a controversial data collection tool provided by the program implementer. The tool was a point of contention, as the evaluators were obligated to use an instrument that collected data inapplicable to the scenario in question and that nevertheless required the team to input data of questionable validity (Hobson & Burkhardt, 2012, p. 11). The design of the evaluation utilized a P-PE (Practical-Participatory Evaluation) model aimed at linking parents, as the primary stakeholders, to the evaluation team so that information and data could be shared between both sides. This approach, combined with random controlled tests in the form of monthly random phone calls, constituted the degree of contact that evaluators had with their primary stakeholders. The methods indicated by the report focus heavily on qualitative data collection, which helps explain why the evaluators felt inadequate given the time and data collection constraints.

The evaluation yielded a set of interesting results. The final report suggested that the orientation and guiding research question shifted through the course of data collection and observation as the team was forced to deal with constraints on time, data, politics and budget (Hobson & Burkhardt, 2012, pp. 12-13). As a result, the evaluation shifted to focus on how parents and key administrative individuals involved with the program could best help guarantee student success, as well as how administration could reconcile its inadequacies for future implementations of the program. Resolving the political tensions between primary and client stakeholders became the primary product of this evaluation, which suggested methods by which future engagement of parents within the program, together with administrative oversight, could increase student success and promote a more productive environment. The findings demonstrate the shifting focus of the participatory evaluation approach, as indicated by the alterations to the research questions. Utilizing this method was successful for the evaluators in addressing their primary research question by looking at the ways that parents can be part of the success of the program in which their young children take part. The evaluators conclude that by lessening the time burden on parents and encouraging a shift towards a practical participation approach, better data and understandings can be drawn to further the original mandate. This evaluation is also beneficial to the greater literature of program evaluation in music, as it suggests ways in which evaluators might move from the use of a CIPP (Context, Input, Process, Product) model to a P-PE model by encouraging discussion with other evaluators as to how one situates stakeholders in relationship to their desired outcomes. It is critical to note the evaluators' decision to move the focus of the evaluation as the situation in the field dictated. Although uncommon in practice, such situations will arise, and evaluators will have to make critical decisions in order to balance professional objectivity with stakeholder interests.

Grimmett et al. (2010) conducted a program evaluation to monitor and evaluate the success of a comprehensive music program with students of a variety of academic and developmental needs, in order to understand whether the program offered noticeable benefits to the student population. The program in question was monitored for a total of three years, during which data was collected. The team utilized a CIPP model to better understand "how and why these achievements occurred" (Grimmett et al., 2010, p. 55). The primary means of data collection included interviews, group panels and short questionnaires with pupils and teachers involved in the program. Contextual data was collected in order to understand the background and history of the program, staffing concerns and information from stakeholders involved in the current program. Input data was collected and scrutinized to understand the program's relationship to mandated guidelines of achievement. Collecting process and product data involved an extended period of observation and interviewing within the program community. The evaluators did not focus on collecting quantitative, measurable results from the program, focusing instead on how students, parents and the surrounding community were being impacted by this school music program. An eclectic, mixed methodology was not utilized in this circumstance, which does not appear to have hindered the objectivity or outcomes of the evaluation.

The final report released by Grimmett et al. (2010) concluded that there were indeed benefits to students from being involved in such a music program, despite its limitations when it came to the implementation of "high quality music" (p. 56). The results that came from analysis of the program were beneficial for administrative support, but the most significant outcomes came from the impact that such a program had on the community around the participant schools. The evaluation brought into question issues of student engagement with the curriculum, the necessity of such a program to prescribe a certain music genre (Western art music) as the sole source of musical literature, as well as administrative oversight in correct staffing to implement the running of such a program. In summary, the program did meet some of its requirements from the onset, yet there were alterations and compromises that had to be endured in order to collect valid data and make it relevant to its audience. The use of a CIPP model was useful in this circumstance, and as a result Grimmett et al. (2010) conclude that more effective use of checklists and structured data collection, on the part of the evaluative criteria they were instructed to use, might have allowed for easier compartmentalization of data for analysis and extrapolation. In all, this evaluation utilized its assigned approach model effectively to address its criteria and conduct an informative study. Contextual and product data were of primary concern for the evaluators, based upon the stakeholder concern to discover how the program interacted with its surrounding community. Given the stakeholder desires and circumstances, it may have been more appropriate for the evaluators to employ a Utilization-Focused Evaluation (UFE) in order to involve their stakeholders more wholly within the evaluative process, as well as to focus their data collection and analysis on the community and product instead of allocating resources and time to explore variables which may not have enhanced the evaluation's outcomes.

Through the analysis of these program evaluations we can draw out several themes that run throughout these exemplars. Firstly, the use of evaluative approaches and methods varies within music program evaluations. Ferguson (2007) suggests that evaluative procedures must align closely with the outcomes they wish to accomplish, while absorbing model tools so as to be as broad as possible in their ability to collect data. As seen in Hobson & Burkhardt (2012), the use of a singular, pre-prescribed method of data collection was met with resistance and ultimately narrowed the scope of data collection and the potential criteria that could have been employed to make for a more effective evaluation. Evaluations must avoid the tendency to rely too heavily upon one methodology or approach, and evaluators must have the expertise to provide professional guidance when necessary to guarantee objectivity. Secondly, an evaluation's effectiveness relies heavily upon the involvement of administrative bodies and their ability to clearly outline and give parametric guidance to evaluators. The challenges posed by unclear guidelines and pre-imposed models have forced evaluators to interpret the wishes of client stakeholders, leaving their potential accomplishments only partly realized. Humphreys & Moon (2010) comment on this necessity for clear guidelines when highlighting Thomas' unwillingness to establish measurable outcomes within the Manhattanville Music Curriculum Program (p. 87). Although it is not possible at this point to draw definitive guidelines for selecting 'the perfect' approaches, criteria and guidelines by which to conduct any given evaluation, music educators can see that the success of evaluations and their techniques is invariably dependent on the evaluators' ability to hybridize and adapt their methodologies to their given contextual constraints.

For future practice, it would be helpful to compile and analyze a broader cross-section of music program evaluations conducted within North America, and to compare and contrast these findings and techniques with those of evaluations conducted in continental Europe and Asia, in order to understand how evaluators from other parts of the globe approach and frame their own evaluations given the contextual factors indigenous to those environments.

Conclusions--The Future of Program Evaluation in Music Education

Indeed, the value of program evaluation is undeniable. Whether present in education, enterprise, public service or the armed forces, program evaluation is an effective and essential component of ensuring a course's success and student learning. Music programs, like many arts programs, have just begun to embrace the practice of program evaluation for the multitude of functions that it can serve the educator, administration, parents and students' learning. Program evaluations have rendered valuable data, resulting in the ability to bring about socio-political awareness of programs within greater bodies of learning, insight into teacher-student relationships, and an awareness of the enhanced performance of music programs when they are incorporated into the decision-making process of curriculum design.

The role that external political, administrative and philosophical factors play in program evaluations is an area that will become better understood as new program evaluation literature continues to be released and critically analyzed. Realizing the impact that external factors can have on the success of a program will allow evaluators as well as practitioners to better understand not only the products of those interactions but also how they might utilize those interactions to better understand and support their own programs.

The repository of program evaluation literature for music programs is indeed growing, yet more of it needs to be published and scrutinized to unlock the knowledge contained within it. There is a host of successful methods that have been used to collect and extrapolate knowledge, but that success must be harnessed through mixed methods and the ability to adapt to the given environment.

The questions posed in the preceding article are what I interpret as the cornerstone questions in redefining the place and value of program evaluation for the music teacher. As a music teacher and professional educator, I hope that the content of this article, although effective in its own right, is but a starting point from which music educators learn to understand, question and conduct their own program evaluations, not only to aid student success but to contribute to the awareness and presence of program evaluation within the broader educational system. The value of program evaluation must also be understood not only through the data and published outcomes but also through the reflective process that it brings about in those who participate in it. Evaluations enable participants to reflect on their own methods and practices, and provide a place in which we can learn to make constructive critiques aimed at directly improving the quality of experience that we offer to others and demand of ourselves.

Lastly, I would suggest supporting the growth of internal evaluations conducted by instructors acting as both evaluator and stakeholder. The identification and effective use of stakeholders within the evaluation process will not only provide subjective data but also help form the parameters and context of the evaluation. Effective utilization of the stakeholders, and of the matrices of experiences and perspectives that they bring into the evaluation, will not only enrich the methodologies employed but also help to deconstruct the complex philosophical perspectives that exist within music education programs. The literature and opinions expressed thus far in the field have largely been those of expert, external evaluators who have utilized teachers in a largely passive manner, engaging them only for the purposes of data collection and contextualization. For program evaluation to become richer, more accessible and more supportive of student learning in the music classroom, it is imperative that teachers be afforded the skills, resources and support to conduct evaluations and interpret their own findings to help their practice, and in doing so contribute to the academic literature in the field. Further study of pre-service teacher education programs may be needed to better understand the impact and gains of equipping teacher-practitioners with the knowledge base and skills to conduct program evaluations of classroom practices.

I hope this article will inspire and guide teachers and academics to work together as collaborators to grow in their understanding of program evaluation for the benefit of all involved in music education. It is my wish that this work will provoke discussion about how program evaluation can become a pivotal tool within existing music education programs, no matter what level one practices at.

References

Boyle, J. D. (1992). Program evaluation for secondary music programs. NASSP Bulletin, 76(544), pp. 63-68.

Colwell, R. J. (2006). Assessment's potential in music education. In R. J. Colwell (Ed.), The MENC handbook of research methodologies (pp. 199-269). New York: Oxford University Press.

Colwell, R. J. (1973). The evaluation of music teaching and learning. In R. Sindel (Ed.), Building instructional programs in music education. Englewood Cliffs: Prentice-Hall.

Elliott, D. (1995). Music matters. New York: Oxford University Press.

Ferguson, D. (2007). Program evaluations in music education. Applications of Research in Music Education, 25(2), pp. 4-15.

Fitzpatrick, J., Sanders, J., & Worthen, B. (2011). Program evaluation: Alternative approaches and practical guidelines (4th ed.). Upper Saddle River: Pearson Education.

Grimmett, H., Rickard, N., Gill, A., & Murphy, F. (2010). The perilous path from proposal to practice: A qualitative program evaluation of a regional music program. Australian Journal of Music Education, (2), pp. 52-65.

Hacker, W., Hewton, J., Byrne, M., Rohde, A., & Tainton, B. (1985). Primary music evaluation report 2: The pilot music program in three Brisbane state primary schools. Queensland: Department of Education Research Services.

Hewton, J., Byrne, M., & Rohde, A. (1985). Primary music evaluation report 3: Profiles of eight school music programs. Queensland: Department of Education Research Services.

Hobson, K. A., & Burkhardt, J. T. (2012). A lesson in carefully managing resources: A case study from an evaluation of a music education program. Journal of MultiDisciplinary Evaluation, 8(19), pp. 8-14.

Humphreys, J. T., & Moon, K.-S. (2010). The Manhattanville Music Curriculum Program: 1966-1970. Journal of Historical Research in Music Education, 31(2), pp. 75-99.

Mark, M., Henry, G., & Julnes, G. (2000). Toward an integrative framework for evaluation practice. American Journal of Evaluation, 20, pp. 177-198.

Reimer, B. (2003). A philosophy of music education (3rd ed.). Upper Saddle River: Prentice Hall.

Stufflebeam, D., & Welch, W. (1986). Review of research on program evaluation in United States school districts. Educational Administration Quarterly, 22(3), pp. 150-170.

Matthew Moreno is a graduate of York University's concurrent music education program where he studied with Michael Marcuzzi and William Thomas. He is currently a graduate student at the University of Toronto/OISE in the department of Curriculum, Teaching and Learning. His research interests include curriculum design systems, aesthetic culture and arts education. He remains an active performer and studio teacher in the GTA.
