
Beyond simple participation: providing teachers with a reliable informal assessment tool for student engagement.

Assessment of 'True Engagement'

The importance of student engagement has been well documented for learners of all ages: higher engagement levels are associated with several positive educational outcomes, including greater retention of students' interest in school (Yair, 2000) and higher academic achievement (Joselowsky, 2007). This connection suggests that teachers should monitor and assess engagement so they can gauge their ability to deliver engaging instruction. Assessing "true" engagement has been problematic for teachers because they may rely on anecdotal methods, such as gauging engagement by students' participation or assuming engagement when students are well behaved. That is, if students raise their hands, answer questions and behave well, teachers may consider them engaged. This assumption may not be correct. Wasserstein (1995) states that "true engagement" extends beyond students keeping busy and includes psychological aspects such as self-motivation, which comes from a desire to understand something interesting or from learning in order to achieve personal goals.

This article describes a research team's further development of a simple, easy-to-use, theoretically based checklist that students complete to rate their degree of engagement across various teaching deliveries and content areas (Johnston et al., 2015). The checklist is intended as an informal assessment, generalizable to students of all grade levels, that includes the self-motivational aspects of engagement. It allows teachers to distinguish students who are truly engaged in their learning from students who are simply participating and behaving. The measurement of true engagement also matters because teachers can aggregate the results of engagement assessment to make data-driven instructional decisions that move instruction toward more engaging classroom deliveries and environments. What better way to know how to teach than finding out which methods are most engaging to students? Teachers can easily calculate the percentage of students at each engagement level and compare percentages between lesson types and content areas. For example, do a higher percentage of students indicate "true" engagement during collaborative work or during Socratic discourse? How do engagement rates vary between math classes? How do levels of engagement differ between low- and high-achieving students? If classroom data suggest that students in a seventh-grade science class are more engaged during interactive lecture than during group work, the teacher can deliver more interactive lectures, moving the class toward a more engaged learning environment.
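As an illustration, the percentage tally described above takes only a few lines of code. The level labels follow Schlechty's five categories, but the two class results below are entirely hypothetical, not data from the study.

```python
from collections import Counter

def engagement_percentages(ratings):
    """Return the percentage of students at each engagement level.

    `ratings` is a list of level labels, one per student, collected
    from the checklist after a lesson.
    """
    counts = Counter(ratings)
    total = len(ratings)
    return {level: round(100 * n / total, 1) for level, n in counts.items()}

# Hypothetical checklist results from two lesson types (25 students each)
lecture = (["True Engagement"] * 14 + ["Strategic Compliance"] * 8
           + ["Retreatism"] * 3)
group_work = (["True Engagement"] * 9 + ["Ritual Compliance"] * 12
              + ["Rebellion"] * 4)

print(engagement_percentages(lecture))     # lecture: 56% truly engaged
print(engagement_percentages(group_work))  # group work: 36% truly engaged
```

A teacher comparing these two (hypothetical) distributions would see a higher share of true engagement under interactive lecture, the kind of contrast the article proposes using for instructional decisions.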

Engagement Theory into Assessment Checklist

The research team first created a checklist for students to complete after varying lessons. The checklist was based on the work of Schlechty (2011), who explains that students who are highly engaged are focused on and invested in what they are doing; they learn with the sense that the topic has purpose and value. He operationalized the construct of engagement by delineating five levels of student engagement: True Engagement, Strategic Compliance, Ritual Compliance, Retreatism and Rebellion (Table 1 summarizes each level).

These levels provided the basis for the five checklist items that measure student engagement. Grounding the items in well-established engagement literature that carefully delineates the construct provides evidence that the checklist assesses the intended construct. That is, the theoretical base of the checklist helps assure that teachers are actually assessing engagement and not another factor such as "participation" or "behavior".

Five rating statements were created, one reflecting each of the five levels of engagement (Johnston et al., 2015). Additionally, content experts were independently asked to rate the alignment of each checklist item with the level of engagement it was intended to represent, as suggested by Standard 1.7 (AERA, APA & NCME, 2008). These statements form the checklist teachers give to students to assess their level of engagement; the wording may be adapted for students of different ages (Table 2).

Reliability of the Checklist

The checklist serves as a formative/informal assessment and is not subject to the same requirements for validity evidence and reliability estimation as high-stakes tests. However, owing to growing collective interest in the checklist, the research team chose to estimate the stability of the student ratings across occasions. That is, to what degree is a student's rating of their engagement with a lesson the same upon repeated occasions?



The research team applied the assessment checklist during two week-long professional development workshops for high school math and science teachers in the state of Florida. The workshops were developed and run by the Science-Math-Masters (SM2) grant team with the primary objective of providing engaging curriculum and model lessons aligned with current state standards for high school Biology and Geometry courses. Participants were middle and high school science and mathematics teachers from 32 counties throughout Florida, the majority of whom were responsible for teaching either Geometry or Biology at the high school level. A total of 49 Geometry and 53 Biology participants attended workshops during week one, while 31 Geometry and 40 Biology participants attended during week two. Each week, two groups each of Biology and Geometry teachers attended workshops and were introduced to novel, interactive model lessons tied to challenging state standards.

There were 173 participants in total. The 102 participants from week one were assessed after two model lessons, while the 71 participants from week two were assessed after a single model lesson. Participation in the exercise was voluntary, and some participants failed to provide their identifier on the assessment checklist, making it necessary to discard those data. The resulting number of valid test-retest data points was 224 (N = 224). All participants were adults employed as educators in the state of Florida.

Data Collection and Assessment

During the first week of the workshops, all participants were asked to complete the engagement checklist for each of two model lessons. Since these lessons differed for the Geometry and Biology groups, a total of four different model lessons were assessed. Immediately following a model lesson on day 1 of the workshop, teachers completed the checklist using notecards printed with the five statements. To obtain retest data, new notecards with the engagement statements were provided for teachers to complete before activities commenced on day 2. Teachers used randomly assigned codes for identification. This process was repeated for a model lesson on day 2: the checklist was distributed and completed immediately following the second model lesson and again at the beginning of day 3's activities. To ensure a suitable sample size for statistical analysis, the procedure was repeated during week two for the day 1 model lesson only.


Reliability was estimated using the test-retest method. During the described workshops, the initial assessment (test) was administered immediately following the conclusion of a model lesson. The second assessment (retest) was administered 18 hours after the initial assessment, but before any new lessons had begun.

Two correlation coefficients are in common use: the Spearman rank-order correlation coefficient for nonparametric data and the Pearson product-moment correlation coefficient for parametric data. Because the checklist yields ordinal ratings, the research team applied the Spearman rank-order correlation coefficient. For our data, the Pearson product-moment correlation coefficient computed on the ranked values was equivalent to the Spearman coefficient.

Our statistical analysis of the checklist's test-retest reliability began by establishing the null hypothesis, H_0, which asserts that the results from the initial assessment (Test 1) and the second assessment (Test 2/retest) are independent, and the alternative hypothesis, H_1, which asserts that the results from the two assessments are positively associated.

To compute the Spearman rank-order correlation coefficient, the initial assessment data and the second assessment data were separately ranked from greatest to least. The test statistic r_s was calculated using the form that accounts for tied results, which occurred in many of our observations. The comparison of each participant's initial engagement response with their second response yielded r_s ≈ 0.5787. This result was verified with the Excel functions RANK.AVG and CORREL.
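The calculation just described (average ranks for tied values, then a product-moment correlation on the ranks, mirroring the RANK.AVG-plus-CORREL verification) can be sketched in a few stdlib-only Python functions. The test/retest ratings at the bottom are hypothetical, not the study's data.

```python
def average_ranks(xs):
    """Rank values from greatest to least, averaging the ranks of
    tied values (the behaviour of Excel's RANK.AVG)."""
    order = sorted(xs, reverse=True)
    positions = {}
    for i, v in enumerate(order, start=1):
        positions.setdefault(v, []).append(i)
    return [sum(positions[v]) / len(positions[v]) for v in xs]

def pearson(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def spearman(xs, ys):
    """Spearman rank-order correlation with tie correction:
    the Pearson correlation applied to the average ranks."""
    return pearson(average_ranks(xs), average_ranks(ys))

# Hypothetical test/retest levels (1 = True Engagement ... 5 = Rebellion)
test = [1, 2, 1, 3, 2, 1, 4, 2]
retest = [1, 1, 2, 3, 2, 2, 4, 1]
print(round(spearman(test, retest), 4))
```

Because the five-level checklist produces many tied ratings, the tie-corrected form (ranks averaged within ties, then correlated) is the appropriate one, and it is exactly what CORREL applied to RANK.AVG output computes.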

For large n, a normal approximation to the test statistic is recommended. Fahoome (2002) gives the lower limit for using this approximation at the 0.01 significance level as n = 40, which is well below our n of 224.

The normal approximation is computed using the formula (Fahoome, 2002):

r_s* = (n - 1)^(1/2) · r_s

Since n = 224, r_s* = √223 · r_s ≈ 8.6420, which serves as a z-value. Since 8.6420 ≥ z_0.01 = 2.33, H_0 was rejected and the research team concluded that the results from the initial and second assessments have a positive association at the 0.01 significance level.
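The arithmetic above can be reproduced directly. This sketch uses the reported n = 224 and the rounded r_s = 0.5787, so the computed z agrees with the article's 8.6420 up to the rounding of r_s.

```python
import math

# Normal approximation to the Spearman statistic (Fahoome, 2002):
# r_s* = sqrt(n - 1) * r_s, compared against the one-tailed z critical value.
n = 224
r_s = 0.5787          # rounded value reported in the article
z = math.sqrt(n - 1) * r_s
z_crit = 2.33         # one-tailed critical value at the 0.01 level

print(round(z, 2))    # approximately 8.64
print(z >= z_crit)    # True: reject H0, positive association
```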


The research team presents a theoretically grounded, reliability-tested assessment checklist that can be easily employed by educators at all levels of instruction. The checklist consists of five statements, one for each engagement level, and can be completed quickly at the end of an assignment, lesson or activity. Reliability was estimated with a group of middle and high school science and math educators during professional development workshops, using four different model lessons and a sample size over 200. A test-retest method was employed: the same checklist was completed at the end of a lesson and again approximately 18 hours later, prior to the start of the next day's activities. Statistical analysis of the resulting data indicates a significant positive correlation between the initial and follow-up assessments, supporting the reliability of the assessment checklist.

Rebecca J. Waggett *

Patricia Johnston

Leslie B. Jones

The University of Tampa


American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (2008). Standards for Educational and Psychological Testing. Washington, DC: American Educational Research Association.

Fahoome, G. F. (2002). Twenty nonparametric statistics and their large sample approximations. Journal of Modern Applied Statistical Methods, 1(2), 248-268.

Johnston, P., Jones, L., Beaudoin, C., & Waggett, R. (2015). Participation does not equal engagement: Quick and easy assessment of K-Adult student engagement for beginning teachers. Kappa Delta Pi Record, 51, 42-45.

Joselowsky, F. (2007). Youth engagement, high school reform, and improved learning outcomes: Building systemic approaches for youth engagement. NASSP Bulletin, 91(3), 257-276.

Schlechty, P. (2011). Engaging Students. New York, New York: Jossey-Bass.

Wasserstein, P. (1995). What middle schoolers say about their schoolwork. Educational Leadership, 53, 41-43.

Yair, G. (2000, October). Educational battlefields in America: The tug-of-war over students' engagement with instruction. Sociology of Education, 73(4), 247-269.
Table 1 Summary of Schlechty's (2011) Five Levels of Engagement

Level    Title             Description

1        True Engagement   Student sees the activity as personally
                           meaningful, worthwhile and is trying
                           to "get it right".

2        Strategic         The reasons for doing the work are not
         Compliance        the true nature of the task; rather,
                           grades, rank, acceptance, and approval
                           provide the motivation for completing
                           the assigned tasks.

3        Ritual            Student seeks to avoid confrontation or
         Compliance        disapproval and is focused on doing
                           the minimum required.

4        Retreatism        Student is disengaged from current
                           goals and is thinking about
                           other things.

5        Rebellion         Student is completely
                           disengaged and acting out.

Table 2 Five Rating Statements Used for the Engagement Checklist

Statement                             Corresponding Engagement
                                      Level (after Schlechty, 2011)

I saw this assignment as              True Engagement
meaningful and believe something
of worth may be accomplished by
doing this task.

I saw this task as something I did    Strategic Compliance
because it was expected of me to do.

I saw this task as having little      Ritual Compliance
to no meaning to me and completed
it to avoid getting in trouble.

I saw this task as having no          Retreatism
meaning to me and was thinking
about other things while
completing the task.

I saw this assignment as having no    Rebellion
relevance and misbehaved while
completing the assignment.
COPYRIGHT 2017 Project Innovation (Alabama)