
Categorizing College Students Based on Their Perceptions of Civic Engagement Activities: A Latent Class Analysis Using the Social Agency Scale.

The State Council for Higher Education in Virginia (SCHEV) recently added civic engagement (CE) as a core competency, which is an area of knowledge and/or skills considered essential to the success of all undergraduates regardless of their discipline or institution (State Council for Higher Education in Virginia, 2017). CE now shares the same status as critical thinking, written communication, and quantitative reasoning in being one of the required areas for assessment by all Virginia institutions. SCHEV's move to elevate the status of CE corresponds with recent calls to reinvigorate higher education's civic mission across the nation. For instance, arguments for a renewed focus on CE were made in "A Crucible Moment," a 2012 report commissioned by the U.S. Department of Education (National Task Force on Civic Learning and Democratic Engagement, 2012).

Defining CE

Given the attention institutions are encouraged to devote to this competency, it is important to provide a definition. A popular definition is provided by Ehrlich (2000):
   Civic engagement means working to make a difference in the civic
   life of our communities and developing the combination of
   knowledge, skills, values, and motivation to make that difference.
   It means promoting the quality of life in a community, through both
   political and nonpolitical processes. (p. vi)


A notable feature of this definition is the inclusion of both political and nonpolitical processes. These two types of processes align with two areas from which much of our understanding of CE is derived: community service-learning, which is largely nonpolitical in nature, and political engagement (Finley, 2011; Reason & Hemer, 2015). Community service-learning programs are often characterized by the pairing of learning with community service, providing experiential learning for the student while at the same time addressing a community need. In contrast, political engagement programs emphasize the systems, policies, and societal structures that contribute to the community need. To clarify the distinction (1), consider a student in a leadership class who works with an area food bank to organize a food drive. This is an example of non-political community service or non-political civic engagement (NPCE). If instead the student investigates and takes action to affect the systems, policies, and structures that contribute to or cause people in the community to go hungry in the first place, the activity is an example of political civic engagement (PCE). If the student organizes the food drive and also investigates and takes action to affect the causes of hunger, the CE activity has both political and non-political elements and is best classified as PCE.

Recognizing that CE activities can be classified as NPCE, PCE, or both leads to the question of what kinds of activities should be promoted at an institution. One factor to consider when answering this question is the kind of training students need, which can be understood through assessment. If assessment reveals that students are well-prepared for one kind of CE but not the other, a university might decide to devote more resources to the area in need of development.

Assessing Social Agency: Different Approaches to Summarizing and Presenting Results

A comprehensive CE assessment approach would address a wide array of knowledge, skills, values, attitudes, and behaviors. In this paper, we focus on only one aspect of the value component, which is social agency, described by Eagan et al. (2017) as "the extent to which students value political and social involvement as a personal goal" (p. 56). A popular approach to the assessment of social agency includes a collection of items that have been used for over 40 years by the Higher Education Research Institute (HERI) in the CIRP surveys. Various civic activities are presented to students (e.g., helping others who are in difficulty, influencing the political structure), who rate the importance of each activity to them personally. The same or similar items appear on the civic action subscale of the Civic Attitudes and Skills Questionnaire (Moely, Mercer, Ilustre, Miron, & McFarland, 2002) and the Political and Social Involvement scale (Center of Inquiry in the Liberal Arts, 2013), which is used in the Wabash National Study, a longitudinal study of college student learning and developmental outcomes.

An important consideration when using the social agency scale to inform the development or effectiveness of programming is how to summarize and present the results. There are three possibilities. The first approach is to summarize and present the results for each item. That is, the frequencies of responses for each item are calculated and compared across items. To illustrate, Figure 1 provides results from the administration of the American Freshman Survey to entering college students at four-year U.S. colleges and universities in 2015 and 2016 (Eagan et al., 2015; Eagan et al., 2017). This presentation of results is useful for conveying the typical response to each item, with results indicating the majority of students believe it is important to help others and far fewer believe it is important to influence the political structure.

A second approach to presenting the results is to compute a single score from the items, either by summing the item responses or using item response theory to estimate a theta value for each student. Several researchers have used a single score for the items in their studies (e.g., O'Neill, 2012; Pascarella, Ethington, & Smart, 1988; Rhee & Dey, 1996). Although there is some support for the unidimensionality of the items (Lott & Eagan, 2011), a single score is not useful if the purpose in using the scale is to understand the types of CE activities students deem important. For instance, if the items in Figure 1 were summed to produce a single score and a student received a score of five, we would know the student considers five of the seven activities important but we would not know which activities they consider important.

A third approach to presenting the results involves classifying students into classes based on their patterns of responses to the items. The term "classes" instead of "groups" is used with classification techniques to distinguish categorizations of persons created by the analysis (classes) from existing categorizations of persons (groups). Using classification techniques such as cluster analysis or latent class analysis (LCA), the number and nature of different classes with different profiles of responses across items can be captured. For instance, use of these techniques might indicate there is a class of students who value all activities and another class that favors only nonpolitical activities. Using a classification technique with the social agency items is more useful than a single score because it conveys the types of activities different classes of students deem important. Classification techniques also have an advantage over the overall results provided for each item (as in Figure 1) in being better able to capture the variability among students in their civic preferences as well as the covariability among item responses. Thus, applying classification techniques to students' responses can reveal an abundance of new information about students and their perceptions of CE activities that is unobtainable when the responses are summarized using the previous two approaches.

To date, classification techniques have not been used with these items exclusively (2), but these techniques have been used with other CE measures to classify people into different categories based on their CE preferences or behaviors (see Table 1). The studies in Table 1 differ greatly from one another in the variables and analyses used to classify individuals and in the individuals classified. Despite these differences, some common classes were identified. Almost all studies found a small-to-medium-sized class of what Weerts, Cabrera, and Meijas (2014) call super engagers, or individuals who prefer or engage in both NPCE and PCE activities (Lopez et al., 2006; Moely et al., 2008). All studies also identified a class of nonengagers, or individuals who do not prefer or engage in either NPCE or PCE activities (Lopez et al., 2006; Moely et al., 2008; Torney-Purta, 2009; Weerts et al., 2014). Some studies also found a relatively small class of political engagers who preferred or engaged in PCE activities over NPCE activities (Lopez et al., 2006; Moely et al., 2008). Non-political engagers, or those who prefer NPCE activities over PCE activities, were also identified as a small-to-medium-sized class by some studies (Lopez et al., 2006; Moely et al., 2008; Weerts et al., 2014).

Purpose of the Study

To date, classification techniques have not been used to categorize college students according to the importance they assign to various CE activities. Because the kinds of CE activities students value may be more informative than the number of CE activities they value, the present study performs LCA using the social agency items in Figure 1 to classify students into classes according to the kind of activities they deem important. Understanding what kinds of classes exist is useful for two primary reasons. First, the results can be informative to the development of CE initiatives on campus. For instance, if a large class of non-political engagers is identified, a campus might decide to place more emphasis on helping students connect politics with their NPCE experiences or create and promote PCE initiatives. Second, the results are also useful for assessment purposes. For example, if action is taken on a campus to promote PCE activities, the percentage of students in classes that value both NPCE and PCE can be compared before and after the promotion. The membership of the same student in various classes can also be tracked over time. For instance, it would be favorable to find that a student who started college as a non-engager transitioned during their academic career to a class that values one or both types of CE.

Given the potential utility of a social agency typology, we conducted LCA on the social agency items to address the following research questions:

1. In how many different ways might students be categorized with respect to their CE preferences? In other words, how many different classes exist?

2. What is the nature of the classes? How might the classes be characterized with respect to the importance they assign to various CE activities?

3. What percentage of students belong to each class and how accurately can students be classified?

4. In what other ways do the classes differ?

The first three research questions were pursued to describe the number and nature of social agency classes at our university. Based on the results of other classification studies in the CE literature, we anticipated we might find one or more of the following classes: super engagers, who value both NPCE and PCE activities; non-engagers, who find little value in CE activities; non-political engagers, who value NPCE activities more than PCE activities; and political engagers, who value PCE activities more than NPCE activities.

The purpose in pursuing the last research question was to provide validity evidence for our LCA solution. Validity evidence for our typology can be obtained by considering how classes differ on variables beyond those used in their classification (i.e., auxiliary variables). Auxiliary variables had to be chosen from those collected at the same time as the social agency items because the data in this study were not collected specifically for this research. Of these variables, we selected those used often in CE research: gender, race, and student academic classification (e.g., freshman, sophomore, junior, senior). We also used cohort (i.e., academic year of the response) as an auxiliary variable since data from multiple cohorts were used in our study.

Prior research and knowledge of our campus' practices informed the hypotheses guiding our validity analyses. For instance, because O'Neill (2012) and Lott and Eagan (2011) found that seniors assigned higher levels of importance to social agency items than incoming students, we hypothesized that classes characterized by endorsement of more activities would consist of more upperclassmen. We also hypothesized that students in more recent cohorts would be more heavily represented in classes where more activities were valued because of our campus' recent heightened emphasis on CE. Because prior classification studies found more females in classes preferring NPCE over PCE (Lopez et al., 2006; Moely et al., 2008), we anticipated the same gender discrepancy in our own study if such a class emerged. We also anticipated more males in classes preferring PCE over NPCE based on findings from other classification studies (Brunton-Smith, 2011; Lopez et al., 2006). Findings regarding racial differences in class membership were mixed across studies. Support for the hypothesis that a larger number of minorities would be found in classes that value both NPCE and PCE or PCE over NPCE is based on Moely et al. (2008), who found non-Whites more likely to be in the class endorsing both types of engagement, and Lopez et al. (2006) and Eagan et al. (2015), who both found that minorities value political involvement more than Whites. In summary, to provide supportive validity evidence for our LCA solution we expected the following hypotheses to be supported:

1. More upperclassmen and students from recent cohorts represented in classes valuing a larger number of civic activities

2. If such classes emerge, more females in classes valuing NPCE over PCE and more males in classes valuing PCE over NPCE

3. A larger percentage of minorities in classes where PCE activities are valued

Methods

The social agency items and auxiliary variables were all collected as part of an annual survey at our university for institutional research and assessment purposes. In the following sections we first describe the general procedures for the survey, participants, and variables used in our study. Then, we describe the details of the LCA and validity analyses.

Procedure

The survey is administered to a sample of students during the middle of the fall semester each year. A sample of roughly 30% of the 20,000 undergraduate student body is selected, resulting in an overall sample of 6,000 undergraduate students. Because the survey is administered via paper and pencil, only on-campus course sections are selected, resulting in a possible population of 19,000 students. A random sample of on-campus undergraduate course sections is compiled and then manually adjusted to ensure that the sample is representative of the university population concerning important demographic features such as gender, race, and student academic classification (e.g., freshman, sophomore, junior, senior). To maximize the number of survey items while also minimizing survey fatigue, five different versions of the survey are used. All students answer a common set of demographic questions followed by one of five sets of items. The different versions of the survey are distributed randomly throughout each sampled course section such that all versions might be answered by different students in a single section. The items used for this research all came from one version of the survey.

Participants

Data collected in three different years were combined to create the data set used in the analyses (3). The final sample consisted of 2,591 students with 27%, 47%, and 27% from the 2013/2014, 2015/2016, and 2016/2017 administrations (4), respectively. The distribution of gender and race aligns with the overall distribution at our university, with 62% of the sample identifying as female and 81% of the sample identifying as White. Students were fairly evenly distributed across credit-hour categories, with 20% having completed fewer than 28 credit hours (freshmen), 25% having completed between 28 and 59 credit hours (sophomores), 27% having completed between 60 and 89 credit hours (juniors), and 29% having completed more than 89 credit hours (seniors).

Variables

Latent class analysis variables. The CIRP social agency items (5) were used to classify students into categories using LCA. Students originally responded to these items using a four-point Likert scale (1 = Essential; 2 = Very Important; 3 = Somewhat Important; 4 = Not Important). Due to the skewed distributions of responses, with most reporting either Essential (1) or Very Important (2), we decided to collapse the four response categories into two response categories to avoid estimation issues and simplify the interpretations of the results. Thus, the two response categories included in our analyses were Important (1), which included Essential and Very Important, and Not Important (0), which included Somewhat Important and Not Important. The same approach to collapsing response categories is used in the reporting of the results for these items by CIRP (Eagan et al., 2015; Eagan et al., 2017).
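
As an illustration of this recoding, the short sketch below (in Python, not part of the original analyses) collapses a set of hypothetical 4-point responses into the two categories described above.

    import numpy as np

    # Hypothetical responses to one item on the original CIRP scale:
    # 1 = Essential, 2 = Very Important, 3 = Somewhat Important, 4 = Not Important
    original = np.array([1, 2, 3, 4, 2, 3])

    # Collapse: Essential/Very Important -> Important (1),
    # Somewhat Important/Not Important -> Not Important (0)
    dichotomized = np.where(original <= 2, 1, 0)
    print(dichotomized)  # [1 1 0 0 1 0]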

Auxiliary variables. Once the final LCA solution was obtained (i.e., the best fitting LCA was determined), we conducted validity analyses to ascertain whether the resulting categorizations of students aligned with prior research. As mentioned above, we used gender (female; male), race (White; non-White), student academic classification (freshman; sophomore; junior; senior), and cohort (2013/2014; 2015/2016; 2016/2017) as auxiliary variables.

Data Analysis

Latent class analysis. We conducted a series of LCAs on the social agency items to explore if different types (classes) of students exist who differ in how much they value involvement in various civic activities. We initially fit a one-class model to the data and in subsequent analyses we increased the number of classes (C) by one. We followed this model-building procedure until estimation issues were encountered. The equation for the general C-class LCA model with binary indicators is presented below, where j refers to item j, with j = 1 to J items, and c refers to a specific class, with c = 1 to C classes:

P(x_j = 1) = \sum_{c=1}^{C} \rho_c P(x_j = 1 | c)

The general C-class LCA equation specifies the marginal probability of endorsing Important on item j, P(x_j = 1), as equal to the weighted sum of the conditional probability of endorsing Important on item j in each class, P(x_j = 1 | c). The weights, \rho_c, represent the proportion of students in each class c. The number of estimated parameters in the general C-class LCA model depends on the number of items (J) and classes (C). For example, in a 2-class LCA model with seven dichotomous items, a total of 15 parameters are estimated: one class weight (6) and 14 conditional probabilities (7 items x 2 classes).
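
To make the parameter count and the marginal probability concrete, the following sketch (Python, with hypothetical class weights and conditional probabilities, not values from our data) reproduces the arithmetic for the 2-class, seven-item example.

    import numpy as np

    J, C = 7, 2                          # items and classes
    n_params = (C - 1) + C * J           # class weights + conditional probabilities
    print(n_params)                      # 15

    rho = np.array([0.6, 0.4])           # hypothetical class proportions (sum to 1)
    p_item = np.array([0.9, 0.3])        # hypothetical P(item j = Important | class c)
    marginal = np.sum(rho * p_item)      # P(item j = Important) = 0.6*0.9 + 0.4*0.3 = 0.66
    print(marginal)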

We estimated all LCA models using full information maximum likelihood (FIML) estimation via the Expectation Maximization (EM) algorithm in Mplus version 7.3 (Muthen & Muthen, 1998-2012). A common concern when estimating LCA models is converging on a local maximum. To avoid this issue, Mplus implements a two-stage estimation procedure in which multiple sets of random start values are first generated and optimized for up to 10 iterations (initial stage). Then, the best sets of random start values (i.e., the ones with the highest likelihood of producing the data) are used as starting values in the subsequent step and optimized to completion (final stage). We specified 1,000 initial-stage random starts and 500 final-stage optimizations for our study. Thus, for each LCA model, Mplus generated 1,000 sets of random start values and optimized them for 10 iterations. Then, Mplus used the best 500 sets of random start values as starting values in the subsequent step and optimized them to completion to obtain the final model solution.
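
The sketch below illustrates, in Python rather than Mplus, the kind of EM estimation with random restarts described above for binary-item LCA. It is a minimal illustration only: it assumes a complete-data matrix X of 0/1 responses (the actual analyses used FIML in Mplus to handle missing data) and uses far fewer random starts than the 1,000/500 specification we used.

    import numpy as np

    def fit_lca(X, n_classes, n_starts=50, n_iter=500, tol=1e-6, seed=0):
        """Fit a C-class LCA to binary items X (n_students x n_items) via EM."""
        rng = np.random.default_rng(seed)
        n, J = X.shape
        best_ll, best_rho, best_p = -np.inf, None, None
        for _ in range(n_starts):                          # random restarts
            rho = rng.dirichlet(np.ones(n_classes))        # class proportions
            p = rng.uniform(0.2, 0.8, (n_classes, J))      # P(item = 1 | class)
            last_ll = -np.inf
            for _ in range(n_iter):
                # E-step: posterior probability of each class for each student
                log_joint = X @ np.log(p).T + (1 - X) @ np.log(1 - p).T + np.log(rho)
                log_lik_i = np.logaddexp.reduce(log_joint, axis=1)
                post = np.exp(log_joint - log_lik_i[:, None])
                total_ll = log_lik_i.sum()
                # M-step: update class proportions and conditional probabilities
                rho = post.mean(axis=0)
                p = np.clip((post.T @ X) / post.sum(axis=0)[:, None], 1e-6, 1 - 1e-6)
                if total_ll - last_ll < tol:
                    break
                last_ll = total_ll
            if total_ll > best_ll:                         # keep the best restart
                best_ll, best_rho, best_p = total_ll, rho, p
        return best_ll, best_rho, best_p

    # Example with simulated two-class data (not our survey data)
    rng = np.random.default_rng(1)
    X = (rng.random((500, 7)) < np.where(rng.random(500)[:, None] < 0.5, 0.8, 0.3)).astype(float)
    ll, weights, cond_probs = fit_lca(X, n_classes=2)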

Model fit. We examined model-data fit via the log-likelihood (LL), Bayesian information criterion (BIC; Schwarz, 1978), and sample-size adjusted BIC (SSABIC; Sclove, 1987). The LL for each model represents the likelihood of the data given the estimated model parameters. LL values closer to zero indicate a higher likelihood of the data and thus better model-data fit. Because LL values will always be closer to zero for more complex models (e.g., models with more classes), we also examined model-data fit via two information criteria: the BIC and SSABIC. The BIC and SSABIC penalize the LL for model complexity in different ways, with smaller values indicating better model-data fit. The BIC and SSABIC have been shown to perform well in simulation studies (Henson, Reise, & Kim, 2007; Tofighi & Enders, 2008). We championed the model with the lowest BIC and SSABIC values as the best-fitting model in our study.
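
For reference, the BIC and SSABIC can be computed from the LL as sketched below (Python; the LL and parameter count shown are hypothetical, not values from Table 3). The SSABIC replaces the sample size n with the Sclove (1987) adjustment (n + 2)/24.

    import numpy as np

    def bic(LL, n_params, n):
        return -2 * LL + n_params * np.log(n)

    def ssabic(LL, n_params, n):
        return -2 * LL + n_params * np.log((n + 2) / 24)   # sample-size adjusted BIC

    # Hypothetical 4-class model with 7 binary items: (4 - 1) + 4 * 7 = 31 parameters
    LL, n_params, n = -9500.0, 31, 2591
    print(bic(LL, n_params, n), ssabic(LL, n_params, n))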

Model comparison. We compared models differing in the number of classes using the Lo-Mendell-Rubin likelihood ratio test (LMRT; Lo, Mendell, & Rubin, 2001), bootstrap likelihood ratio test (BLRT; McLachlan & Peel, 2000), and the approximate Bayes factor (BF). The LMRT and BLRT compare a C-class model to a C-1 class model. A significant LMRT or BLRT would indicate that the model with C classes fits the data significantly better than the model with C-1 classes. The approximate BF compares the BIC values between two models (BF_{1,2}),

BF_{1,2} = \exp[(-0.5 \cdot BIC_1) - (-0.5 \cdot BIC_2)]

where BIC_1 and BIC_2 represent the BIC values associated with model one and model two (e.g., one-class model and two-class model). A BF value greater than one would imply that model one is more strongly supported by the data than model two (Wasserman, 2000).
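
A minimal computation of the approximate BF from two BIC values, following the formula above (the BIC values here are hypothetical):

    import math

    def approx_bf(bic_1, bic_2):
        # BF comparing model one to model two from their BIC values
        return math.exp((-0.5 * bic_1) - (-0.5 * bic_2))

    print(approx_bf(19000.0, 19010.0))  # about 148; > 1, so model one is favored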

Validity analysis. It is important for researchers to validate the identified classes because classes that emerge in LCA may be an artifact of the data and not true qualitatively different groups of students. A variety of methods have been developed to obtain validity evidence in LCA. One simple method is to modally assign students to classes based on their highest posterior probability and use the new class membership variable in subsequent traditional analyses (e.g., ANOVA, regression). To clarify, consider a 2-class model. Each individual in a 2-class model has two posterior probabilities: one conveying their probability of membership in Class 1 and another conveying their probability of membership in Class 2. Thus, a fictitious individual might have posterior probabilities of .85 and .15 for Classes 1 and 2, respectively. The new class membership variable captures the class for which the posterior probability is the highest, which would be Class 1 for our fictitious individual. Once the new class membership variable is created traditional analyses can be used to relate it to other variables. This method, however, assumes perfect classification accuracy (i.e., all posterior probabilities are one or zero). For this reason, other methods that account for classification accuracy have been developed (e.g., 3-step method, Lanza, and BCH). The choice among the latter methods is dependent on whether (a) the auxiliary variables are treated as predictors or outcomes of class membership and (b) the auxiliary variables are continuous or categorical. In our study we treated gender, race, student academic classification and cohort as categorical predictors of class membership. Given these criteria, we chose to use the 3-step method (Asparouhov & Muthen, 2014; Vermunt, 2010) to conduct our validity analyses, running the analysis separately for each auxiliary variable. In the 3-step method, multinomial regression is used to regress the new class membership variable on auxiliary variable(s) while taking into account the classification accuracy of the model.
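
The simple modal-assignment method described above can be sketched as follows (Python, with fictitious posterior probabilities); the 3-step method we actually used goes further by correcting the subsequent multinomial regression for classification error.

    import numpy as np

    # Each row holds a student's posterior probabilities of membership in Classes 1 and 2
    posteriors = np.array([[0.85, 0.15],    # most likely Class 1
                           [0.40, 0.60]])   # most likely Class 2

    modal_class = posteriors.argmax(axis=1) + 1   # 1-based class labels
    print(modal_class)  # [1 2]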

Results

Descriptive Statistics

The percentage of students considering each CE activity important is reported in Table 2. Compared to the percentages based on the dichotomized responses obtained by Eagan et al. (2015) and Eagan et al. (2017) from entering college students shown in Figure 1, a larger percentage of our students perceived the CE activities as being important (see Table 2). Note, however, that Eagan et al. (2015) and Eagan et al. (2017) surveyed only entering college students whereas our sample consisted of a wide range of students at our university. Thus, this may be one reason for the discrepancy. Despite this, the trend of responses was similar. The majority of our students believe it is important to help others (88%) and far fewer believe it is important to influence the political structure (42%).

Latent Class Analysis

We estimated a total of five LCA models. When estimating the 5-class model, we encountered estimation issues. Specifically, the 5-class solution had estimated conditional probabilities that were at the boundary of the parameter space (0 or 1.0). Because such solutions are typically deemed untrustworthy (Geiser, 2013), we chose not to interpret the results from the 5-class model and considered only the results from the remaining models.

Model fit. The fit indices for the models are presented in Table 3. The 4-class model, overall, provided better fit to the data compared to the other three models. The BIC and SSABIC fit indices were lowest for the 4-class model. The LMRT and BLRT were both statistically significant, which indicated the 4-class model fit significantly better than the 3-class model. Lastly, the BF was greater than 10, which suggested the 4-class model is more strongly supported by the data than the 3-class model. The entropy statistic for the 4-class model is .66, which indicates only moderate certainty about classifying individual students into classes.
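
The entropy statistic reported above is the relative entropy commonly computed from the matrix of posterior probabilities; a value near 1 indicates students are classified with little uncertainty. A sketch of the calculation (Python, with fictitious posteriors for a 4-class model) follows.

    import numpy as np

    def relative_entropy(post):
        """Relative entropy: 1 - sum(-p*log p) / (n * log C); closer to 1 = clearer classification."""
        n, C = post.shape
        p = np.clip(post, 1e-12, 1)          # avoid log(0)
        return 1 - (-(p * np.log(p)).sum()) / (n * np.log(C))

    post = np.array([[0.70, 0.10, 0.10, 0.10],
                     [0.40, 0.30, 0.20, 0.10]])
    print(relative_entropy(post))            # fictitious example, not our .66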

Four-Class Solution. Figure 2 illustrates the probability of considering each CE activity as important based on the 4-class model. The four classes found in our study closely align with those identified by previous researchers. Class 1, which contained 27% of students, was characterized by high probabilities of considering all CE activities as important. Students in this class resemble individuals previously identified as super engagers (Lopez et al., 2006; Moely et al., 2008; Weerts et al., 2014). Class 2, which contained 16% of students, was characterized by high probabilities of considering PCE activities as important and low to moderate probabilities of considering NPCE activities as important. Students in this class resemble individuals previously identified as political engagers (Lopez et al., 2006; Moely et al., 2008). Class 3, which contained 36% of students, was characterized by low probabilities of considering PCE activities as important and high probabilities of considering NPCE activities as important. Students in this class resemble individuals previously identified as non-political engagers (Lopez et al., 2006; Moely et al., 2008; Weerts et al., 2014). Lastly, Class 4, which contained 20% of students, was characterized by low to moderate probabilities of considering all CE activities as important. Students in this class resemble individuals previously identified as non-engagers (Lopez et al., 2006; Moely et al., 2008; Weerts et al., 2014).

Validity Evidence

The validity results are presented in Table 4, which contains the parameter estimates of the multinomial logistic regression models used in the 3-step method for each auxiliary variable (gender, cohort, race, and student academic classification). To aid in the interpretation of the significant results (7), the estimates were used to obtain the predicted probabilities of class membership, also shown in Table 4 along with a detailed interpretation of the findings. Statistically significant differences in class membership that aligned with our hypotheses were found for gender, race, and cohort but not for student academic classification. As hypothesized, there were significant differences among classes in gender composition, with females more likely to be classified as non-political engagers and males more equally dispersed across classes, including the political engagers class.

The distribution of class membership also differed across race. Although we hypothesized that minorities would have a stronger representation in classes favoring PCE activities, our results indicate that minorities have a stronger representation in the super engager class favoring both NPCE and PCE activities. Our hypothesis regarding class differences in cohort membership was also supported, with members of the most recent cohort more likely to be classified as super engagers than members in earlier cohorts. The remaining hypothesis was not supported. Latent classes did not significantly differ from one another in student academic classification (e.g., freshman, sophomore, junior, senior).

Discussion

Although there may be disagreement on the precise definition of CE, researchers agree that the construct is multidimensional and is characterized by a wide array of knowledge, skills, attitudes, values, and behaviors. One facet of CE that is commonly assessed is social agency, or the extent to which one considers involvement in civic or political activities as a personal goal. For decades, CIRP surveys have included social agency items, with the same or similar items appearing on other scales. Given the popularity of these items and the potential for their results to inform CE programming and assessment, this study utilized a classification technique to explore if the results could be summarized and presented in a manner more informative than use of a single score or descriptive statistics based on the individual item scores. LCA was used with the responses from students at our university to identify four classes of students who differed in the kinds of CE activities they valued. In the sections below we consider the results of validity analyses (which were mainly supportive of the 4-class solution), the implications of our results for CE programming, limitations of our study, directions for future research, and implications of our results for broad definitions of CE.

Validity Results

Our validity hypotheses were supported for three of the four variables. Classes differed as hypothesized based on gender, race, and cohort membership. Although we suspect the increase in CE programming at our university might explain why members of more recent cohorts were likely to be classified as super engagers, our study does not allow for the exploration of whether the increase in the number of activities valued in recent years is a function of CE programming at our university or other factors (e.g., the 2016 general election). Although we hypothesized that more upperclassmen would be in classes where a larger number of CE activities is valued (Class 1), our validity results did not support this hypothesis. Therefore, it is reasonable to question both the meaningfulness of our 4-class solution and our hypothesis. For instance, we based our hypothesis about student academic classification on two studies (Lott & Eagan, 2011; O'Neill, 2012) indicating that seniors assigned higher levels of importance to social agency items than incoming students. However, other research studies did not find student academic classification differences among college students when grouped according to their CE activity preferences (Moely et al., 2008). More research is certainly needed to explore these competing explanations. In the meantime, our results offer a first step in understanding the validity of the 4-class solution on which future research can build.

Implication of Results

Although more research is needed to support the 4-class solution, the validity evidence was mainly supportive; importantly, the nature and number of classes aligned with classes found in other CE classification studies. For these reasons, we proceed below in considering the results and their implications for CE programming.

First, we found it encouraging that only 1/5 of the student population in this study was classified as non-engagers (Class 4) and that more recent cohorts had a smaller probability of membership in this class. Of course, the presence of any non-engagers is not ideal. Therefore, an important next step is to consider the characteristics of students in this class. For instance, if particular majors are heavily represented in this class, CE programming might be targeted to such majors. We also found it encouraging that although the probabilities in Figure 2 are low for most activities for non-engagers, the probability is equal to .63 for the item "helping others who are in difficulty." Thus, perhaps an important way to increase the value these students place on CE activities is to convey to them how such activities help others who are in difficulty.

Second, we were encouraged to find nearly 1/3 of students in the super engagers class (Class 1) and a higher probability of membership in this class for more recent cohorts. This class is the most ideal class because all kinds of CE activities--political, environmental, community-oriented--are considered important. Because this class is ideal, it is important to consider how the political engagers (Class 2) and non-political engagers (Class 3) differ from super engagers. The political engagers are similar to the super engagers in having high probabilities on the items with the exception of low probabilities on two items: one asking about participation in community action programs and another asking about involvement in environmental programs. To promote the transition of political engagers to super engagers, programming would need to increase the value these students place on environmental stewardship activities (which may not be seen by some as relevant to CE) and participation in community action programs. With respect to the latter, it is possible that some students, including those in the political engagers class, have a low endorsement of this item (8) because they do not understand what is meant by "community action programs". We personally consider this description vague and suspect that is why it does not appear on the Political and Social Involvement scale (Center of Inquiry in the Liberal Arts, 2013).

When considering how non-political engagers compare to the super engagers, the largest differences are in the importance placed on political activities, with non-political engagers unlikely to consider these activities important. It is encouraging that non-political engagers value many activities, but the low endorsement of PCE activities is troubling, particularly given the size of this class. Universities can help non-political engagers transition to super engagers by providing and promoting PCE programming and helping students consider their NPCE activities through a political lens.

Limitations of Study & Directions for Future Research

In the above section we considered different actions that might be taken to help develop students in various classes. Although it is tempting to classify the individual students in our sample into the four classes so that we might be better able to direct them to suitable CE programs on campus, the moderate classification accuracy of our model prohibits us from doing so. To clarify, it is important to understand how individual students would be assigned to classes. Assignment of individuals to classes involves the use of the posterior probabilities of class membership for each student, which here would be four values capturing the probability of the student's membership in each of the four classes. In an ideal situation, the probability would be one for a single class and zero for the remaining classes. As indicated by our entropy value of .66, the classification accuracy of our model is not perfect, so use of the posterior probabilities to assign individuals to classes is not straightforward. Although a less than perfect entropy value does not affect our use of the LCA results to understand the number and nature of latent classes, it does affect our use of the results to classify individual students. Thus, the moderate entropy value does not discount our results; it simply cautions against using the results to classify individual students. To use LCA with these items in this population to classify individuals, steps would need to be taken to increase its classification accuracy. This can be accomplished by using more items or better quality items (i.e., those useful for discriminating among classes) or by including predictors of latent class membership in the analysis.
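
To illustrate why a moderate entropy value makes individual assignment risky, compare the two fictitious students below (Python): both would be modally assigned to Class 1, but the second assignment is far less certain.

    import numpy as np

    posteriors = np.array([[0.97, 0.01, 0.01, 0.01],   # assignment to Class 1 is nearly certain
                           [0.45, 0.30, 0.15, 0.10]])  # Class 1 is most likely, but far from certain

    print(posteriors.argmax(axis=1) + 1)   # [1 1]
    print(posteriors.max(axis=1))          # [0.97 0.45] -- confidence in each modal assignment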

One of the largest limitations in our study is the sample, which includes students at only one university. Exploring the extent to which the results replicate across institutions is needed, with the CIRP surveys or the Wabash National Study being ideal data sources for such an investigation. The variables included in our analyses were also not ideal. Because the data were collected for another purpose, we were limited in what auxiliary variables could be used and based our hypotheses on research that sometimes was not strongly aligned with the present research. Future research should consider other auxiliary variables, such as student's major or their actual civic engagement behaviors, that may yield stronger hypotheses with respect to class differences.

Another suggestion for future research is to consider the extent to which socially desirable response behavior (Spector, 2004) is influencing the results. Although our validity results suggest that most super engagers are students who value multiple civic engagement activities, it is possible that this class is also capturing students who are prone to socially desirable response behavior. Exploring the extent to which members in this class are prone to such behavior is warranted. If socially desirable responding is considered an issue, the use of different item types less susceptible to socially desirable responses (e.g., forced-choice items) should be pursued (Christiansen, Burns, & Montgomery, 2005).

We also have concerns about the social agency items used to classify students in the present study. Having students verbalize their thoughts while reading and responding to items would be useful to ensure that respondents understand the items and are interpreting them in the same way because the language used in some of the items is vague. The results of Sequiera, Holzman, Horst, and Ghant (2017) underscore the need to ensure respondents understand the terms used in CE assessments. When Sequiera et al. (2017) asked college students to describe the ways in which their community service experience related to a current social justice issue, several students reported that they did not know what was meant by "social justice." A study examining respondents' understanding of items is worth pursuing if the items continue to be used. But should these items continue to be used? Is this list of activities current and comprehensive if we are trying to capture the kind of civic and political activities students value? We believe these are important questions to address before moving forward in this line of research.

Other limitations in our study are more methodological. We recognize that we engaged in the frowned-upon practice of dichotomizing variables, which results in a loss of information (MacCallum, Zhang, Preacher, & Rucker, 2002). We did try LCA using responses on their original 4-point scale but quickly encountered computational issues. Researchers with larger data sets from multiple institutions may not encounter these issues and are encouraged to explore LCAs with the original responses if possible and dichotomized responses if not.

Our final limitation has to do with the narrow aspect of CE assessed by the social agency items included in our study. Our study only provided information on how different classes of students valued different kinds of civic activities; it did not characterize student differences with respect to the many other aspects of CE (e.g., knowledge, skills, motivations, attitudes, behaviors) that exist. To do so, a measure addressing multiple facets is needed, with the Civic Competency and Engagement assessment (Torney-Purta, Cabrera, Roohr, Liu, & Rios, 2015) being a promising assessment for such research.

Implications of a Broad Definition of CE

In the beginning of this paper we provided a commonly used definition of CE that encompasses both political and non-political processes. Advantages to adopting a broad definition of CE are its inclusiveness, allowing many activities to be subsumed under a single heading, and its flexibility, allowing universities to focus on those aspects of CE that best align with their unique strengths. There are disadvantages, however, to including NPCE and PCE within the larger umbrella of CE. One potential disadvantage is the risk of PCE getting lost within the broader CE initiative. As highlighted by the results of this study and several others, many students value NPCE activities over PCE activities. Use of a broad definition therefore runs the risk of PCE, which needs to be emphasized on campuses, not receiving enough attention if it is subsumed under the larger CE umbrella. PCE initiatives on campus should be highlighted, and the link between NPCE and politics made explicit, in order to increase students' political involvement and help them see political action as an avenue for helping others.

References

Asparouhov, T., & Muthen, B. (2014). Auxiliary variables in mixture modeling: Three-step approaches using Mplus. Structural Equation Modeling: A Multidisciplinary Journal, 21, 329-341. doi: 10.1080/10705511.2014.915181

Brunton-Smith, I. (2011). Modelling existing survey data: Full technical report of PIDOP work package 5. Department of Sociology, University of Surrey.

Center of Inquiry in the Liberal Arts. (2013). Wabash national study 2006-2012: Outcomes and experiences measures.

Christiansen, N. D., Burns, G. N., & Montgomery, G. E. (2005). Reconsidering forced-choice item formats for applicant personality assessment. Human Performance, 18(3), 267-307. doi: 10.1207/s15327043hup1803_4

Geiser, C. (2013). Data analysis with Mplus. New York, NY: Guilford Press.

Eagan, M. K., Stolzenberg, E. B., Bates, A. K., Aragon, M. C., Suchard, M. R., & Rios-Aguilar, C. (2015). The American freshman: National norms fall 2015. Los Angeles: Higher Education Research Institute, UCLA.

Eagan, M. K., Stolzenberg, E. B., Zimmerman, H. B., Aragon, M. C., Whang Sayson, H., & Rios-Aguilar, C. (2017). The American freshman: National norms fall 2016. Los Angeles: Higher Education Research Institute, UCLA.

Enders, C. K. (2010). Applied missing data analysis. New York, NY: Guilford Press.

Ehrlich, T. (2000). Civic responsibility and higher education. Phoenix, AZ: Oryx Press.

Finley, A. (2011). Civic learning and democratic engagement: A review of the literature on civic engagement in postsecondary education. Washington, DC: Association of American Colleges and Universities.

Henson, J. M., Reise, S. P., & Kim, K. H. (2007). Detecting mixtures from structural model differences using latent variable mixture modeling: A comparison of relative model fit statistics. Structural Equation Modeling: A Multidisciplinary Journal, 14(2), 202-226. doi: 10.1080/10705510709336744

Lo, Y., Mendell, N. R., & Rubin, D. B. (2001). Testing the number of components in a normal mixture. Biometrika, 88(3), 767-778.

Lopez, M. H., Levine, P., Both, D., Kiesa, A., Kirby, E., & Marcelo, K. (2006). The 2006 civic and political health of a nation: A detailed look at how youth participate in politics and communities. College Park, MD: Center for Information and Research on Civic Learning and Engagement.

Lott, J. L., III, & Eagan, M. K., Jr. (2011). Assessing the psychometric properties of civic values. Journal of Student Affairs Research and Practice, 48(3), 333-327. doi: 10.2202/1949-6605.6288

MacCallum, R. C., Zhang, S., Preacher, K. J., & Rucker, D. D. (2002). On the practice of dichotomization of quantitative variables. Psychological Methods, 7(1), 19-40. doi: 10.1037/1082-989X.7.1.19

McLachlan, G. J., & Peel, D. (2000). Finite mixture models. New York, NY: Wiley.

Moely, B. E., Furco, A., & Reed, J. (2008). Charity and social change: The impact of individual preferences on service-learning outcomes. Michigan Journal of Community Service Learning, 15, 37-48.

Moely, B. E., Mercer, S. H., Ilustre, V., Miron, D., & McFarland, M. (2002). Psychometric properties and correlates of the Civic Attitudes and Skills Questionnaire (CASQ): A measure of students' attitudes related to service-learning. Michigan Journal of Community Service Learning, 8(2), 15-26.

Muthen, L. K., & Muthen, B. O. (1998-2012). Mplus user's guide (7th ed.). Los Angeles, CA: Muthen & Muthen.

National Task Force on Civic Learning and Democratic Engagement (2012). A crucible moment: College learning and democracy's future. Washington, DC: Association of American Colleges and Universities.

O'Neill, N. (2012). Promising practices for personal and social responsibility: Findings from a national research collaborative. Washington, DC: Association of American Colleges and Universities.

Pascarella, E. T., Ethington, C. A., & Smart, J. C. (1988). The influence of college on humanitarian/civic involvement values. The Journal of Higher Education, 59(4), 412-427. doi: 10.1080/00221546.1988.1178

Reason, R. D., & Hemer, K. M. (2015). Civic learning and engagement: A review of the literature on civic learning, assessment and instruments.

Rhee, B. S., & Dey, E. L. (1996, October). Collegiate influences on the civic values of students. Paper presented at the Annual Meeting of the Association of Study of Higher Education.

Rios-Aguilar, C. & Mars, M. M. (2011). Integration or fragmentation? College student citizenship in the global society. Education, Knowledge, & Economy, 5 (1-2), 29-44.

Sequiera, S. N., Holzman, M. A., Horst, S. J., & Ghant, W. A. (2017). Developing college students' civic-mindedness through service-learning experiences: A mixed-methods study. The Journal of Student Affairs Inquiry, 2(1), 1-32.

State Council for Higher Education in Virginia (2017). Policy on student learning assessment and quality in undergraduate education.

Sclove, S. L. (1987). Application of model-selection criteria to some problems in multivariate analysis. Psychometrika, 52(3), 333-343. doi: 10.1007/BF0229436

Schwarz, G. (1978). Estimating the dimension of a model. Annals of Statistics, 6(2), 461-464.

Spector, P. E. (2004). Social desirability bias. In M. S. Lewis-Beck, A. Bryman, & T. Futing Liao (Eds.), The SAGE encyclopedia of social science research methods. Thousand Oaks, CA: SAGE Publications Ltd. doi: 10.4135/9781412950589.n932

Tofighi, D., & Enders, C. K. (2008). Identifying the correct number of classes in growth mixture models. In G. R. Hancock & K. M. Samuelsen (Eds.), Advances in latent variable mixture models (pp. 317-341). Greenwich, CT: Information Age Publishing, Inc.

Torney-Purta, J. V. (2009). International psychological research that matters for policy and practice. American Psychologist, 64(8), 825-837. doi: 10.1037/0003-066X.64.8.825

Torney-Purta, J., Cabrera, J. C., Roohr, K. C., Liu, O. L., & Rios, J. A. (2015). Assessing civic competency and engagement in higher education: Research background, frameworks, and directions for next-generation assessment (Research Report No. RR-15-34). Princeton, NJ: Educational Testing Service.

Vermunt, J. K. (2010). Latent class modeling with covariates: Two improved three-step approaches. Political Analysis, 18(4), 450-469. doi: 10.1093/pan/mpq025

Wasserman, L. (2000). Bayesian model selection and model averaging. Journal of Mathematical Psychology, 44, 92-107. doi: 10.1006/jmps.1999.1278

Weerts, D. J., Cabrera, A. F., & Meijas, P. P. (2014). Uncovering categories of civically engaged college students: A latent class analysis. The Review of Higher Education, 37(2), 141-168. doi: 10.1353/rhe.2014.0008

Westheimer, J., & Kahne, J. (2004). What kind of citizen? Politics of educating for citizenship. American Educational Research Journal, 41, 237-269. doi: 10.3102/00028312041002237

AUTHORS

Dena A. Pastor, Ph.D.

James Madison University

Thai Q. Ong, M.A.

James Madison University

Christopher D. Orem, Ph.D.

James Madison University

CORRESPONDENCE

Email pastorda@jmu.edu

(1) Example adapted from Westheimer and Kahne (2004). However, they used this example to make the distinction between participatory citizens and justice-oriented citizens, not between NPCE and PCE.

(2) Rios-Aguilar and Mars (2011) used the social agency items in a classification study employing cluster analysis with data from CIRP's 2005 Continuing Senior Survey. The social agency items were separated into two different subscales (i.e., Community Action and Political Action) along with other items. These subscales were used along with six other subscales and demographic variables to classify students into classes. Because demographic variables were used to create classes and more importantly, because the resulting classes only differed meaningfully in their demographics, the results are not included in Table 1.

(3) Because data collected across different years were combined, it is possible for a single student to be represented multiple times in our final data set. For example, if a student were randomly selected to complete the survey in both 2013/2014 and 2015/2016 they would be represented twice in the data. Because no identifying information was collected from students we cannot ascertain the extent to which this occurred, although we suspect it is rare. To clarify, consider a student attending the university during all three years of data collection, where the probability of being selected for the survey is .30 (because we are obtaining a random sample of 30% of the student population). The probability of this student being randomly selected to complete the survey twice is .09 (.30^2) and three times is .03 (.30^3). Therefore, it is possible but unlikely for the same student to be surveyed multiple times. Given the infrequency with which this is likely occurring, the impact on our results is suspected to be negligible.

(4) For reasons unrelated to this research, the survey was not administered in 2014/15.

(5) Items are from the 2017 Cooperative Institutional Research Program (CIRP) Freshman Survey (Eagan et al., 2017). These items were used with permission from the Higher Education Research Institute.

(6) Only C-1 weights are estimated because the weights, \rho_c, are constrained to be positive and to sum to one across classes.

(7) In addition to the information in Table 4 we also considered the multinomial logistic regression results using each class as the baseline category in the model. Table 4 provides the results using Class 1 as the baseline category; the results using every other class as the baseline category are provided in the Mplus output and available to readers upon request.

(8) Interestingly, this item also has the largest amount of missing data (see Table 2). It is possible that students did not respond to this item because they did not understand it.
Table 1
Summary of Previous CE Classification Studies

Study                 Sample                 Indicators

Lopez et al. (2006)   1,700 young adults,    19 questions on the
                      ages 15-25             Civic and Political
                                             Health of a Nation
                                             Survey about
                                             participation in
                                             NPCE and PCE
                                             activities

Moely, Furco, &       2,000+ college         Questions about
Reed (2008)           students enrolled in   preference of
                      service learning       engagement in
                      courses across         service learning
                      various institutions   activities aligned
                                             with the charity
                                             paradigm (similar to
                                             NPCE) and social
                                             change paradigm
                                             (similar to PCE)

Weerts, Cabrera,      1,876 recent           Items on the ACT
& Meijas (2014)       graduates from         Alumni Outcomes
                      bachelor degree        Survey asking about
                      programs between       level of involvement
                      1999 and 2003          in various kinds of
                                             organizations (e.g.,
                                             environmental,
                                             political, social)

Brunton-Smith (2011)  Survey data            Variables capturing
                      collected from         participation
                      adults in several      in different kinds
                      countries in the       of civic
                      European Union         activities: voting
                                             in the national
                                             election,
                                             conventional
                                             political
                                             participation
                                             beyond voting
                                             (e.g., campaigning
                                             or donating
                                             money), nonconventional
                                             political
                                             participation
                                             (e.g., boycotting,
                                             signing a
                                             petition,
                                             protesting), and
                                             involvement
                                             in nonpolitical
                                             organizations

TorneyPurta (2009)    30,000 14-year olds    12 social and
                      in 10 European         political
                      countries during       attitudinal
                      1999                   scales administered
                                             as part of the
                                             Civics
                                             Education Study by
                                             the Institute of
                                             Educational Sciences

Study                  Classification          Findings
                       Technique

Lopez et al. (2006)    Classified by number    4 classes: electoral specialists participated in
                       and type of activity    at least two PCE activities (17%); civic
                                               specialists participated in at least two NPCE
                                               activities (12%); the disengaged did not meet
                                               the criteria for either class (58%); and dual
                                               activists met the criteria for both classes (13%)

Moely, Furco, &        Median split            4 classes: the social change preference class
Reed (2008)                                    (16%) preferred only social change paradigm
                                               activities; the charity preference class (20%)
                                               preferred only charity paradigm activities; the
                                               low value undifferentiated preference class (29%)
                                               did not prefer either kind of activity; and the
                                               high value undifferentiated class (35%) preferred
                                               activities in both paradigms

Weerts, Cabrera,       LCA                     4 classes: apolitical engagers (39%) were
& Meijas (2014)                                characterized by involvement in professional,
                                               service, social, and community organizations but
                                               low involvement in political or environmental
                                               groups; social-cultural engagers (6%) were
                                               characterized by a high involvement in social and
                                               cultural organizations; non-engagers (25%) were
                                               characterized by low involvement in all
                                               organizations; and super engagers (30%) were
                                               characterized by high involvement in all
                                               organizations

Brunton-Smith (2011)   LCA                     4 classes: the voters only class (41%) voted but
                                               were not involved in other ways; the
                                               nonconventional participation class (9%)
                                               participated in politics in nonconventional ways
                                               and in nonpolitical organizations; the not
                                               politically active class (13%) were not involved;
                                               and the highly politically active class (38%)
                                               were involved in all areas

Torney-Purta (2009)    Cluster analysis        5 classes: the social justice class (17%),
                                               characterized by "I believe in rights for
                                               everyone but do not feel obligated to do much
                                               about it" (p. 829); the conventionally political
                                               class (33%), characterized by "I believe in my
                                               country and will support the status quo with
                                               positive political and civic actions that are
                                               expected of me" (p. 829); the indifferent class
                                               (9%) and disaffected class (35%), both
                                               characterized by "I have better ways to spend my
                                               time than thinking about being active in
                                               politics, but I won't do anything rash" (p. 830),
                                               with the indifferent class having more negative
                                               beliefs about minorities' rights and norms of
                                               citizenship; and the alienated class (7%),
                                               characterized by "I'm angry about the immigrants
                                               and minority groups in my country, and I don't
                                               trust the government; I have the right to do what
                                               I want" (p. 830)

Table 2
Percentages of Students Considering Activity
as "Essential" or "Very Important"

Item                                              N      %

1. Helping others who are in difficulty           2586   88
2. Influencing social values                      2586   73
3. Helping to promote racial understanding        2585   54
4. Participating in a community action program    2570   70
5. Becoming involved in programs to clean up      2583   55
  the environment
6. Keeping up to date with political affairs      2584   57
7. Influencing political structure                2586   42

Note. The sample sizes reported in this table are slightly lower than
the final sample size of 2,591 because of missing data. All 2,591
cases were used in the LCA, even those with missing data on one or
more items. The LCA estimation procedure, full information maximum
likelihood (FIML), accommodates missing data by estimating parameters
using all available data. Although this method makes certain
assumptions about the missing data mechanism, these assumptions are
easier to satisfy than the assumptions made by more traditional
missing data techniques (e.g., listwise or pairwise deletion). For
further information see Enders (2010).
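
To illustrate how the per-item Ns in Table 2 arise from available-case reporting (while the LCA itself retains all 2,591 cases through FIML), the following minimal sketch computes item-level Ns and percentages on only the non-missing responses. The data frame and column names are hypothetical, and a 4-point importance scale coding is assumed for illustration; neither is taken from the article.

    import pandas as pd

    # Hypothetical responses on a 4-point importance scale (4 = essential,
    # 3 = very important); None marks a missing response.
    items = pd.DataFrame({
        "helping_others": [4, 3, None, 4, 2],
        "influencing_social_values": [3, None, None, 2, 4],
    })

    # Per-item N and percent rating the activity "essential" or "very important,"
    # computed on available cases only -- which is why the Ns in Table 2 differ by item.
    summary = pd.DataFrame({
        "N": items.notna().sum(),
        "% essential or very important": (items >= 3).sum() / items.notna().sum() * 100,
    })
    print(summary)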

Table 3
Fit Indices and Entropy for the 1-Class, 2-Class, 3-Class, and 4-Class Models

# of        # of       LL        BIC     SSABIC   Entropy   LMRT p   BLRT p   BF (a)
classes     paras.

1-class       7      -11124     22303    22281      1.00      --       --       --
2-class      15      -10026     20170    20122       .69    < .01    < .01     > 10
3-class      23       -9839     19787    19787       .69    < .01    < .01     > 10
4-class      31       -9733     19710    19611       .66    < .01    < .01     > 10

Note. # of classes = number of classes; # of paras. = number of parameters estimated;
LL = log-likelihood; BIC = Bayesian information criterion; SSABIC = sample size adjusted
Bayesian information criterion; LMRT p = Lo-Mendell-Rubin likelihood ratio p-value;
BLRT p = bootstrap likelihood ratio p-value; BF = Bayes factor.

(a) The Bayes factor compared the C-class model to the (C-1)-class model.
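
As a check on the reported indices, the sketch below recomputes BIC and SSABIC from the log-likelihoods and parameter counts in Table 3 (with N = 2,591 from the Table 2 note) and approximates the Bayes factor for each C-class versus (C-1)-class comparison from the difference in adjacent BIC values (the Schwarz approximation). It is a minimal illustration of the formulas, not the software output used in the study.

    import math

    N = 2591                  # analysis sample size (see Table 2 note)
    models = {                # number of classes: (log-likelihood, number of parameters)
        1: (-11124, 7),
        2: (-10026, 15),
        3: (-9839, 23),
        4: (-9733, 31),
    }

    def bic(ll, p, n):
        """Bayesian information criterion: -2*LL + p*ln(n)."""
        return -2 * ll + p * math.log(n)

    def ssabic(ll, p, n):
        """Sample-size-adjusted BIC: BIC with n replaced by (n + 2) / 24."""
        return -2 * ll + p * math.log((n + 2) / 24)

    prev_bic = None
    for k, (ll, p) in models.items():
        b, s = bic(ll, p, N), ssabic(ll, p, N)
        if prev_bic is None:
            bf = "--"
        else:
            # Schwarz approximation: ln(BF) favoring the C-class over the (C-1)-class model
            log_bf = (prev_bic - b) / 2
            bf = "> 10" if log_bf > math.log(10) else f"{math.exp(log_bf):.2f}"
        print(f"{k}-class: BIC = {b:.0f}, SSABIC = {s:.0f}, BF = {bf}")
        prev_bic = b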

Table 4
Validity Results

Multinomial Logistic Regression Parameter Estimates and Standard Errors from the 3-Step Method

Auxiliary         Parameter    Class 2/Class 1     Class 3/Class 1     Class 4/Class 1
Variable                        Value      SE       Value      SE       Value      SE

Gender            Intercept    -0.788#   0.130      0.443#   0.076     -0.441#   0.092
                  Gender        0.600    0.180     -0.510#   0.140      0.355#   0.137
Race              Intercept    -0.421#   0.098      0.347#   0.071     -0.209#   0.008
                  Race         -0.442    0.233     -0.324#   0.157     -0.419#   0.175
Student           Intercept    -0.447#   0.194      0.272    0.143     -0.304#   0.154
Academic          Sophomore     0.081    0.255      0.042    0.190     -0.247    0.216
Classification    Junior       -0.190    0.265     -0.076    0.189      0.128    0.198
                  Senior       -0.119    0.259      0.031    0.186      0.113    0.197
Cohort            Intercept     0.011    0.204      0.870#   0.150      0.835#   0.137
                  2015/2016    -0.290    0.238     -0.404#   0.176     -1.301#   0.178
                  2016/2017    -1.210#   0.269     -1.261#   0.191     -1.950#   0.097

Predicted Probabilities of Class Membership Conditional on Auxiliary Variable

Auxiliary         Category     Class 1       Class 2       Class 3           Class 4
Variable                       (super        (political    (non-political    (non-
                               engagers)     engagers)     engagers)         engagers)

Gender            Female         0.27          0.12          0.43              0.18
                  Male           0.27          0.22          0.25              0.25
Race              White          0.26          0.17          0.36              0.21
                  Non-White      0.34          0.14          0.34              0.18
Student           Freshman        ---           ---           ---               ---
Academic          Sophomore       ---           ---           ---               ---
Classification    Junior          ---           ---           ---               ---
                  Senior          ---           ---           ---               ---
Cohort            2013/2014      0.15          0.15          0.36              0.34
                  2015/2016      0.25          0.19          0.40              0.16
                  2016/2017      0.43          0.13          0.29              0.14

Note. Class 1 served as the baseline category in all models. Each predictor was represented by
one or more dummy coded variables in the model, with females, whites, freshmen, and the
2013/2014 cohort serving as the reference categories in the models including gender, race,
student academic classification, and cohort, respectively. Coefficients significant at p < .05
are indicated with #.

Interpretation. When considering the classes two at a time (e.g., Class 2 versus Class 1),
gender was a statistically significant predictor in the vast majority of comparisons. The
largest gender discrepancies indicated that females were more likely to be classified as
non-political engagers than as political engagers or non-engagers, whereas males were about
equally likely to be classified in these three groups. Race was a statistically significant
predictor of class membership in only some comparisons. The contrast between Class 1 (super
engagers) and Class 4 (non-engagers) was significant, with whites only slightly more likely
to be classified as super engagers than as non-engagers and non-whites far more likely to be
classified as super engagers. The contrast between Class 1 (super engagers) and Class 3
(non-political engagers) was also significant, with whites more likely to be classified as
non-political engagers than as super engagers and non-whites about equally likely to be
classified in these two groups. Student Academic Classification was not a statistically
significant predictor of class membership; that is, the probability of class membership was
the same across academic classification levels. Because of this lack of statistical
significance, predicted probabilities are not reported for this variable. Cohort was a
statistically significant predictor of class membership in the vast majority of comparisons.
Students in the 2016/2017 cohort were more likely to belong to the super engagers class than
to any other class. The same was not true of the 2013/2014 cohort, whose members were more
likely to be in Class 3 (non-political engagers) or Class 4 (non-engagers) than in Class 1
(super engagers) and about equally likely to be in Class 2 (political engagers) and Class 1
(super engagers).

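To show how the predicted probabilities in Table 4 follow from the multinomial logistic
regression estimates, the sketch below applies the baseline-category (softmax) formula to the
gender model's intercepts and dummy-variable coefficients, with Class 1 as the baseline and
female as the reference category. It is a minimal illustration of the calculation, not the
3-step estimation itself; rounding the output to two decimals recovers the gender rows of the
predicted-probability panel.

    import math

    # Gender-model estimates from Table 4: (intercept, male dummy coefficient)
    # for each class relative to the Class 1 (super engagers) baseline.
    coefs = {
        "Class 2 (political engagers)":     (-0.788,  0.600),
        "Class 3 (non-political engagers)": ( 0.443, -0.510),
        "Class 4 (non-engagers)":           (-0.441,  0.355),
    }

    def class_probabilities(male):
        """Predicted class-membership probabilities for male = 0 (female) or 1 (male)."""
        logits = {"Class 1 (super engagers)": 0.0}   # baseline logit fixed at 0
        logits.update({c: b0 + b1 * male for c, (b0, b1) in coefs.items()})
        denom = sum(math.exp(v) for v in logits.values())
        return {c: math.exp(v) / denom for c, v in logits.items()}

    for label, male in (("Female", 0), ("Male", 1)):
        probs = {c: round(p, 2) for c, p in class_probabilities(male).items()}
        print(label, probs)   # matches the gender rows of the predicted-probability panel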

Figure 1

Percentages of Incoming Students Perceiving Activity as Essential or Very Important in 2015 or 2016

                                                              2015   2016

Helping others who are in difficulty                           75%    78%
Influencing social values                                      44%    49%
Helping to promote racial understanding                        41%    47%
Keeping up to date with political affairs                      40%    46%
Participating in a community action program                    31%    36%
Becoming involved in programs to clean up the environment      29%    34%
Influencing political structure                                22%    27%

Figure 2

Class 1 super engagers           27%
Class 2 political engagers       16%
Class 3 non-political engagers   36%
Class 4 non-engagers             20%

Note: Table made from line graph.
Author: Pastor, Dena A.; Ong, Thai Q.; Orem, Christopher D.
Publication: Research & Practice in Assessment
Date: June 22, 2018