The missing element: examining the loyalty-competence nexus in presidential appointments.
Reflecting this emphasis on loyalty in contrast to competence, Moe (1985a) recommends that presidents seek "responsive competence" rather than "neutral competence" (Heclo 1975), while Moe (1982; 1985b) and Wood and Waterman (1994) find that the appointment of loyalists to a number of federal regulatory agencies altered agency outputs in the president's preferred policy direction. Other scholars, journalists, and political pundits also assume that the appointment of loyalists advances presidential policy and argue that presidents have consistently promoted loyalty over competence when making appointments (e.g., Baker 2014; Moynihan and Roberts 2010). More generally, Aberbach and Rockman (2000) note that appointing loyalists to key administration positions is but one aspect of a broader trend toward the politicization of the bureaucracy involving a variety of other techniques promoting presidential influence (e.g., control of budgets, administrative reform; see also Waterman 1989; Durant 1992; Durant and Warber 2001). Yet despite all of this impressive work, there is a theoretically important missing element in these studies. Empirically speaking, what distinguishes a loyal appointee from a competent one? Or, to phrase the question differently, what distinct--and observable--characteristics comprise loyalty and competence? This is an important question for scholars, as some appointee background characteristics, particularly prior experience in Washington, have been cited as evidence of both loyalty and competence (see Nathan 1975; Hess 1976). It is also important for presidents and their personnel teams, as they seek to first evaluate and then select appointees from the hundreds of thousands of available applicants.
To address this missing element and better differentiate loyalty from competence, we analyze a unique data set of 3,366 resumes of individuals appointed by Presidents George W. Bush and Barack Obama across 51 different federal institutions, including departments, commissions, and government corporations. The resumes describe the background characteristics of each appointee, from their education and training to their work and political experience. We also examine four types of appointments: presidentially appointed, Senate confirmed (PAS), the focus of most previous studies of presidential appointments, including the work of Krause and O'Connell (2011), (1) and three sets of appointments that do not require Senate confirmation: (2) Schedule C (SC); (3) Senior Executive Service (SES); and (4) most Executive Office of the President appointments (PA, presidentially appointed but not requiring Senate confirmation). Our analysis of these generalizable and comprehensive data leads to more precise definitions of loyalty and competence.
A number of studies, many of them principal-agent models, conclude that presidents can successfully employ their appointment power to effectively control the bureaucracy (see Moe 1982; 1985a; 1985b; Stewart and Cromartie 1982; Menzel 1983; Wood 1990; Wood and Waterman 1991; 1993; 1994; Wood and Anderson 1993). Moe's work specifically posits that appointments are a vital technique for influencing bureaucratic outcomes and that presidents do so by valuing appointees demonstrating "responsive competence" over "neutral competence." Moe contends that appointees possessing "responsive competence" and placed into key administrative positions can implement policy change favorable to the president, while Heclo's (1975; 1977) recommendation for "neutral competence" merely prescribes that bureaucrats should be both policy neutral and expert in administrative management.
A second literature also examines the loyalty-competence nexus. These works examine whether presidents appoint individuals on the basis of loyalty or competence (Edwards 2001; Weko 1995; Moynihan and Roberts 2010). Presidential scholars describe how presidents "evaluate potential candidates on factors such as loyalty, competence, acceptability to key legislators and committees, demographic characteristics, political connections, and work for the campaign or party" (Lewis and Waterman 2013: 38; see also Mackenzie 1981; Pfiffner 1996; Weko 1995). Empirical studies emphasize the dimensions of loyalty and competence, or "compliance" contrasted with "expertise" (Krause and O'Connell 2011). Scholars also posit that presidents want to make patronage appointments as rewards for support or for coalition building with other key political principals (Hollibaugh, Horton, and Lewis 2014; Lewis and Waterman 2013; Mackenzie 1981; Patterson 2008; Patterson and Pfiffner 2001).
Other recent work explains how presidents could match certain appointee background characteristics with specific positions (Hollibaugh, Horton, and Lewis 2014; Krause and O'Connell 2011; Lewis and Waterman 2013; Lewis 2011) or with specific agencies (Parsneau 2013). Most of this work (e.g., Krause and O'Connell 2011; Parsneau 2013) is confined to PAS appointees, for reasons of both prominence and limited data on lower-level appointees. But, as Lewis and Waterman (2013) argue, these seemingly insignificant and "invisible" appointees deserve more attention because they are of increasing importance for presidents pursuing political control of the bureaucracy. (2)
Along with this literature on appointee characteristics, many scholars focus on prescribing the characteristics that should be of most value to presidents. For example, Hess (1976) recommends that presidents appoint individuals with prior federal governmental experience to the cabinet as a means of promoting competence over loyalty, while Nathan (1975) urges that presidents reward loyalty. In contrast, Waterman (1989) warns that a reliance on loyalty alone can undermine presidential influence in the long run. The missing element in most of these studies--descriptive, empirical, or prescriptive--is that they do not operationalize their basic concepts, leaving unexplained which qualities constitute loyalty or competence.
To illustrate this point, while Hess (1976) argues that prior Washington experience is an important characteristic related to competence, Nathan (1975) contends that presidents are best served by promoting loyalists from within. Nathan (1975) then argues that prior Washington experience is a key testing ground for determining appointee loyalty. Consequently, prior Washington experience appears to cut both ways, offering presidents multiple cues regarding both the loyalty and competence of individuals. As this example illustrates, for us to better understand how loyalty translates into increased political control of the bureaucracy, we need to empirically identify the basic characteristics of both loyalty and competence.
To do so we examine a number of background characteristics of presidential appointees. Scholars long have collected data on the background characteristics of executive personnel (Herring 1936; Macmahon and Millett 1939; Stanley, Mann, and Doig 1967; Cohen 1988; Nixon 2004), while others have used personal interviews and surveys to measure appointee attributes (Aberbach and Rockman 1976; Aberbach, Putnam, and Rockman 1981; Fisher 1987; Maranto 1993; Maranto and Hult 2004; Michaels 1997). (3) These studies suggest some possible characteristics that may be related to loyalty, such as past work in a presidential administration or for a political campaign. They also suggest that competence may be related to educational attainment and past task experience in a particular field. But with the exception of Krause and O'Connell (2011) and Lewis and Waterman (2013), there has been little systematic attempt to empirically identify the characteristics of loyalty and competence; and, though extensive, Krause and O'Connell's (2011) work focuses solely on PAS appointees and does not deal with SC, SES, and PA appointees. (4)
While PAS appointments--presidentially appointed and Senate confirmed--are most certainly important, recent research suggests that presidents use these other types of political appointments to control bureaucracies (see Lewis and Waterman 2013). As Light (1995) argues, an examination of SES and SC is warranted because presidents are more likely to use these types of appointments for political purposes, such as getting loyalists into specific agencies or rewarding party and campaign workers with patronage positions (Hollibaugh, Horton, and Lewis 2014). Lewis (2008, 97) notes, "Focusing on PAS positions also ignores the broader universe of appointed positions, which is where politicization usually occurs."
The broad scope of our research, with its focus on four types of appointees, is important because, as David Lewis (2008, 22-24) notes, there were 1,137 PAS positions in the executive branch in 2004, with approximately 945 of these in policy-making positions. He also identifies 6,811 SES officials, with 674 of these presidential appointments. In all, he identifies 1,596 individuals with SC appointments. (5) Patterson (2008, 93-94) finds a similar distribution of appointments. Consequently, if we examine only PAS appointments, we overlook many of the appointments that presidents make and much of what presidents do when they adopt a particular appointment strategy. Individuals in PAS positions, which require Senate confirmation, also are likely to be less clear-cut loyalists or patronage picks than lower-level appointees. (6) An examination of a broader range of appointments is therefore in order.
Our main research objective is to identify the characteristics of loyalty and competence across two presidencies--George W. Bush and Barack Obama. To empirically identify what is meant by these constructs we require individual-level data that allow us to examine the background characteristics and training of each Bush and Obama appointee. To do so we employ 3,366 resumes, acquired through a series of Freedom of Information Act (FOIA) requests across 51 different federal bureaucracies. Resumes provide a valuable source of information about the background and training of presidential appointees including data on the previous training, work experience (both inside and outside of government), and the political activities of each individual appointee. They allow us to directly and empirically measure each individual appointee's experience on various measures that the literature presumes to constitute either loyalty or competence.
The resumes describe the appointees' experience prior to their work in the administration (7) and include measures of education level (0-4); subject or policy area expertise from graduate-level education or prior professional experience (0, 1); task experience for a specific role, for example, press relations, budgeting, procurement, and so on (0, 1); previous federal agency work (0, 1); an appointment in a previous administration (0, 1); work for the Bush/Cheney or Obama/Biden campaigns (0, 1), transition teams (0, 1), or inauguration committees (0, 1); prior experience working for the Republican or Democratic Party at the local, state, or national level (0, 1); the number of previous campaigns (0-25); and whether the appointee's previous job immediately before the appointment was political in nature (0, 1). Another measure indicates whether the appointee worked in the White House of the Bush or Obama administrations prior to appointment to an agency (0, 1). (8) Resumes on which the most relevant information, such as education or prior work experience, had been redacted were excluded. (9)
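The coding scheme just described can be sketched as a simple record structure. The sketch below is purely illustrative: the field names and validation rules are our own hypothetical labels, not the authors' actual variable names or coding software.

```python
# Illustrative coding of a single resume under the scheme described above.
# All field names are hypothetical labels, not the authors' actual codebook.

def code_resume(education_level, num_campaigns, **binary_flags):
    """Build a coded record; binary characteristics default to 0 (absent)."""
    flags = {
        "subject_expertise": 0,      # graduate-level or professional policy expertise
        "task_experience": 0,        # e.g., press relations, budgeting, procurement
        "prior_federal_agency": 0,   # previous federal agency work
        "prior_administration": 0,   # appointment in a previous administration
        "campaign_work": 0,          # Bush/Cheney or Obama/Biden campaign
        "transition_team": 0,
        "inauguration_committee": 0,
        "party_work": 0,             # local, state, or national party experience
        "last_job_political": 0,     # previous job was political in nature
        "white_house_prior": 0,      # worked in the White House before the agency post
    }
    for key, value in binary_flags.items():
        if key not in flags:
            raise KeyError(f"unknown characteristic: {key}")
        if value not in (0, 1):
            raise ValueError(f"{key} must be coded 0 or 1")
        flags[key] = value
    assert 0 <= education_level <= 4, "education is coded on a 0-4 scale"
    assert 0 <= num_campaigns <= 25, "number of previous campaigns is coded 0-25"
    return {"education_level": education_level,
            "num_campaigns": num_campaigns, **flags}

# Example: an appointee with a graduate degree, two prior campaigns,
# campaign work, and task experience.
record = code_resume(education_level=3, num_campaigns=2,
                     campaign_work=1, task_experience=1)
```

This layout makes explicit that each resume reduces to two bounded counts plus ten binary indicators, which is the structure the clustering and measurement models below operate on.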
Importantly, the resumes were submitted through a centralized computer system and were evaluated by personnel working in the White House for the president. For the Bush administration, "all resumes and applications had to be sent via the Internet, electronically instead of through the mail by the bushel. In August, three months before the election, the team set up a website and developed the software ... Some 90,000 applications arrived within a few weeks" (Patterson 2008, 97). The resumes and the candidates for all appointment positions were then evaluated through the White House Office of Presidential Personnel, thus ensuring that all appointments at different levels served the president's purposes (Patterson 2008). Under the Obama administration, prospective appointees could submit resumes through the Change.gov website prior to inauguration and then directly through the White House website. The centralization of the appointment process through a single White House office is important, for it provides the institutional capability for a presidential administration to identify and evaluate the loyalty or competence of a wide range of potential job candidates across a wide range of different federal bureaucracies. If appointments were handled in a nonsystematic manner in a variety of different institutional settings, then we would expect wide variations in a president's appointment strategy. Centralization means that presidents who desire loyalty (or, alternatively, competence) have a greater capacity to appoint loyalists at different levels throughout the bureaucracy. For this reason we can compare the resumes of individuals across different federal agencies.
There are some limitations to our data. We would have preferred to compare those who applied and did not get jobs with those who did. Unfortunately, the FOIA does not allow us to access the resumes of those who were not appointed. In addition, because we observe relatively few appointees within particular agencies, we remain skeptical of the representativeness of our data at the agency level, at least for certain agencies. Still, our data consist of 3,366 resumes, a large enough number to allow us to delve deep into the bowels of the bureaucracy. (10)
Table 1 provides a breakdown of the 51 agencies included in our analysis by president, as well as the number of resumes obtained from each agency. The agencies include 14 cabinet departments, 8 commissions, 3 government corporations, and various other executive branch units. Though not a random sample of agencies, as our data come from those agencies that responded to our FOIA requests, the agencies in our data set perform a wide variety of different tasks--including national defense, homeland security, economic issues, health care, transportation, farm credits, space exploration, and postal regulation--representing a wide swath of responsibilities, functions, and organizational types across the executive branch. With these individual-level data, our task here is to operationalize the concepts of loyalty and competence and then to relate them to the basic assumptions of the principal-agent model.
Loyalty versus Competence
Because any new president has thousands of individual appointments to make and the political and policy consequences can be profound, determining what characteristics might indicate loyalty or competence is crucial. As Patterson (2008, 97) writes, "To the Bush personnel team, the first question was not who, but what. 'What do we want the person in the job to accomplish in the next two, three, or four years that we will be here?'" For instance, when considering potential appointees to help rein in a recalcitrant agency, presidents will likely prefer a loyal candidate who will support the president's goals. Without proper cues as to the loyalty of an individual, however, this process would be merely a matter of guesswork. Presidents established a centralized personnel process to reduce this guesswork and provide better information on appointee candidates. According to Clay Johnson, George W. Bush's first director of presidential personnel, "[t]his [process] is not a beauty contest. The goal is to pick the person who has the greatest chance of accomplishing what the principal wants done ... After the strongest candidate(s) has been identified, assess the political wisdom of the selection, and adjust accordingly" (quoted in Lewis 2008, 27).
So what specifically did the Bush administration want from its appointees? Joshua Bolten, Chief of Staff under George W. Bush, described the importance of both loyalty and competence:
It wasn't just the experience--the resumes of those individuals. I think it was also the fact that the team that participated in the president's campaign was in many respects transplanted into comparable government roles. The president very clearly said to me when I first arrived in March of 1999 as a policy director at his campaign--almost two years before the election; "I want to campaign the way I intend to govern." So, in structuring the work of the [White House] policy staff, I tried to do it in a way that would literally make it possible to say, "Okay, tomorrow we're no longer campaigning: we're actually governing." (quoted in Patterson 2008, 43-44)
As journalist Peter Baker (2014, 86) writes in his discussion of George W. Bush's appointment strategy, "Perhaps the most important lesson was the insistence on complete loyalty ... " George W. Bush's emphasis on loyalty was not unique. The personnel director to George H. W. Bush, Chase Untermeyer, noted that a cabinet secretary asked, "Do you mean to tell me that just because some people worked in a Bush campaign that I have to hire them in my department?" The cabinet secretary was told, "Had it not been for these people, and a lot of other people, George [H. W.] Bush would not have been elected president and you would not be the secretary of this department. That's the only way it can be." (quoted in Patterson 2008, 100).
There is much anecdotal evidence to support the idea that presidents promote loyalty over competence. Yet whether they do so or not is still an empirical question that can only be addressed if we have reliable measures reflecting the background characteristics of presidential appointees. By examining the resumes of the various appointees we can gauge not only the qualifications of each individual appointee, but more importantly whether various background characteristics are related to each other in a systematic fashion. This in turn helps us to understand what the George W. Bush and Barack Obama White Houses wanted from their appointees.
What then do we mean by loyalty and competence? Presidents would certainly have incentives to reward both characteristics, because loyalty brings them fealty to their policy objectives and competence brings them the ability to achieve their policy goals. As a result, there is no a priori reason to believe that presidents necessarily select one criterion over the other unless there is a shortage of either loyal or competent individuals to fill a particular position. Consequently, we treat as empirical questions some of the basic assumptions of the past literature. Table 2 provides a breakdown of the various background characteristics individuals identified as evidence of their qualifications for a presidential appointment. We also provide a breakdown by president.
The background characteristic most often identified by applicants was "prior task experience" (68.81 percent), followed by "their last job was of a political nature" (56.33), "subject area expertise" (41.24), "worked for a campaign" (29.95), and "worked for a member of Congress" (29.00). (11) In previous work, Lewis and Waterman (2013) identify task and subject area expertise as evidence of competence, and political experience in the last job and working for a campaign or a member of Congress as evidence of loyalty. Consequently, the frequencies demonstrate that we have a substantial number of responses in categories previously classified as either competence or loyalty. A large number of individuals also identified a variety of other background characteristics, the least common being "held prior elected office" (1.72 percent), "worked on the transition" (5.11), and "worked in the White House" (5.82). While these numbers are small, they are not inconsequential.
Table 2 provides the first evidence that there are differences in the background characteristics of the Bush and Obama appointees. Obama was more likely to appoint individuals with prior executive branch experience. There are also statistically significant differences between the two presidents on characteristics such as previous agency experience, subject area expertise, task area expertise, worked on a campaign, worked on the transition team, and worked for the party. The percentages represent the share of overall appointees who possess a particular background characteristic. The largest differences in these percentages are for worked for the party (favoring Bush) and for subject area expertise, worked on a campaign, and task area experience (all favoring Obama).
What then are the underlying relationships among the 17 background characteristics? In identifying possible measures of loyalty and competence (as well as patronage effects), we relied on a wide range of existing literature positing these characteristics, including work by Nathan (1975), Waterman (1989), Pfiffner (1977), Cohen (1988), Edwards (2001), Moe (1985a), Moynihan and Roberts (2010), Lewis (2008), Lewis and Waterman (2013), and Waterman, Bretting, and Stewart (Forthcoming). As noted, Krause and O'Connell (2011) provide empirical evidence on this point. Such works provide a theoretical basis for our own.
In Figure 1, we present the results of the variable clustering, using Variable Cluster Analysis. (12) This approach arranges sets of variables into homogeneous clusters, which allows us to obtain meaningful information about the structure of a large data set as well as the underlying relationships between variables. We employ both a theoretical and an inductive approach because there are disagreements in the literature as to whether some background characteristics, such as prior Washington experience, are evidence of loyalty or competence. Additionally, personnel management officials and presidents likely have certain qualities or characteristics in mind when they select individuals for appointive office. We know only that the Bush administration favored loyalty; we do not know what specific characteristics its personnel management officials had in mind when they evaluated and recommended appointees for office. Combining a theoretical and an inductive approach is therefore appropriate because it helps us to identify how specific background characteristics cluster together. (13)
Results provided in the cluster analysis dendrogram indicate the existence of two main structures in the appointee characteristic data set, with 7 variables representing the left main cluster and 10 variables constituting the right main cluster. (14) We did examine the possibility of three main structures, with patronage as the third cluster, but found no support for a separate patronage cluster. Unfortunately, we do not have data on individual campaign contributions, which would be a better patronage measure. To interpret this figure, the lowest part of the tree-like structure is a leaf; each leaf represents one of the variables used in the analysis. As we move further up the tree, some of the leaves begin to fuse into branches, indicating that the variables (as represented by the leaves) are similar to each other. The lower in the tree (the earlier) these fusions occur, the greater the similarity among the groups of variables in the leaves. In short, the vertical height of the tree indicates the similarities (or differences) among variables. (15)
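The variable-clustering idea can be illustrated with a small sketch, here using scipy's hierarchical clustering on synthetic binary indicators rather than the authors' actual data or software. Variables are clustered on a correlation-based distance, so indicators that tend to co-occur fuse low in the dendrogram:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
n = 1000

# Synthetic binary indicators: two latent traits drive two blocks of
# variables, loosely mimicking loyalty-type and competence-type items.
loyal = rng.random(n) < 0.5
comp = rng.random(n) < 0.5
X = np.column_stack([
    (loyal & (rng.random(n) < 0.9)),   # e.g., campaign work
    (loyal & (rng.random(n) < 0.85)),  # e.g., party work
    (loyal & (rng.random(n) < 0.8)),   # e.g., transition team
    (comp & (rng.random(n) < 0.9)),    # e.g., subject expertise
    (comp & (rng.random(n) < 0.85)),   # e.g., agency experience
]).astype(float)

# Distance between VARIABLES (not observations): 1 - |correlation|,
# so highly correlated indicators are "close" and fuse early in the tree.
corr = np.corrcoef(X, rowvar=False)
dist = 1 - np.abs(corr)

# linkage() expects a condensed (upper-triangular) distance vector.
iu = np.triu_indices_from(dist, k=1)
Z = linkage(dist[iu], method="average")

# Cutting the tree into two clusters recovers the two blocks of variables.
labels = fcluster(Z, t=2, criterion="maxclust")
```

With real resume data, cutting the tree into two clusters would correspond to the loyalty and competence structures just described; the specific distance metric and linkage method here are our own assumptions, not necessarily those behind Figure 1.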
Interestingly, the variable partition within each main structure mostly accords with what theoretically one might intuit as denoting loyalty and competence that is based on the findings from past literature. For instance, the seven characteristic variables in the left main cluster are consistent with prior research on appointee loyalty. In addition, the variable clustering also reveals the possibility of sublevel clusters within the main loyalty and competence factors, (16) though here we focus on the characteristics in the main categories: loyalty and competence.
Of the various background characteristics, the results for two are most interesting. Previous work in the executive branch, which as we noted has been interpreted by scholars as evidence of both loyalty and competence, falls on the competence dimension. Previous elected office also loads with competence. This is somewhat surprising because elected officials presumably come to office with a measure of loyalty to the president or at least to the president's political party. Yet they also have practical experience in government, and because the category includes governors, many also have prior executive experience. Politically, appointing them likely reflects patronage considerations. Policy-wise, these are officials who have existing connections to important players in different issue networks. Hence, they come with experiences that allow them to directly interact with important policy actors, such as a big-city mayor appointed as Secretary of Housing and Urban Development.
Bayesian Structural Equation Modeling (BSEM)
The cluster analysis is but a first cut at the data. To analyze appointee ratings pertaining to measures of loyalty and competence, we also utilize a BSEM framework. The BSEM approach offers a number of advantages over the cluster modeling. First, a Bayesian approach to structural equation modeling (SEM) allows for greater flexibility in modeling complex data structures along with the incorporation of prior knowledge. (17) Second, by allowing each observed item to have its own unique variance, not only do we isolate what the observed items have in common, we also can assess the underlying relationships between the unique variances of each individual observed item. Third, by isolating the shared variance of the items from the unique variances of each individual item, we can better account for measurement error in the data. This is particularly important if the latent variable is used in subsequent analyses.
More specifically, we utilize the measurement model component of BSEM to model appointee loyalty and competence. This method is a generalization of the confirmatory factor models widely used in political science. Let $y_i$ be a $p \times 1$ observed random vector. The measurement model is defined as:
$$y_i = \Lambda \omega_i + \epsilon_i$$
where $\Lambda$ is a $p \times q$ factor loading matrix, $\omega_i$ is a $q \times 1$ vector of factor scores, and $\epsilon_i$ is a $p \times 1$ vector of error terms independent of $\omega_i$. $\epsilon_i$ follows a normal distribution with mean $0$ and covariance $\Psi_\epsilon$, a diagonal covariance matrix of measurement errors; $\omega_i$ follows a normal distribution with mean $0$ and covariance $\Phi$, a positive definite covariance matrix of latent variables.
Let $Y = (y_1, \ldots, y_n)$ be the observed data matrix, $\Omega = (\omega_1, \ldots, \omega_n)$ the matrix of latent factor scores, and $\theta$ the structural parameter vector containing the unknown elements of $\Lambda$, $\Phi$, and $\Psi_\epsilon$ in the model. We specify a binomial distribution for the appointee characteristics data, estimating the probability that an appointee holds a particular characteristic given their value on the latent variable. To identify the model, we set the loading of one indicator in each latent construct to 1. (18) We estimate the models using R and JAGS, with diffuse priors on all free parameters. (19)
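To make the measurement model concrete, the following numpy sketch simulates its generative process for binary items. The parameter values, the two-block loading pattern, and the logistic link are our illustrative assumptions; the authors estimate the model in R and JAGS rather than Python.

```python
import numpy as np

rng = np.random.default_rng(42)
n, p, q = 500, 6, 2  # appointees, observed items, latent factors

# Factor loading matrix Lambda (p x q): items 0-2 load on a "loyalty"
# factor, items 3-5 on a "competence" factor. One loading per factor is
# fixed to 1, mirroring the identification constraint in the text.
Lam = np.array([
    [1.0, 0.0],   # e.g., campaign work (scaling indicator for loyalty)
    [0.8, 0.0],
    [1.2, 0.0],
    [0.0, 1.0],   # e.g., subject expertise (scaling indicator for competence)
    [0.0, 0.9],
    [0.0, 1.1],
])

# Latent factor scores omega_i ~ N(0, Phi), with correlated factors.
Phi = np.array([[1.0, 0.3],
                [0.3, 1.0]])
omega = rng.multivariate_normal(np.zeros(q), Phi, size=n)  # n x q

# Bernoulli items: P(y_ij = 1) = inverse-logit of the linear predictor.
eta = omega @ Lam.T                    # n x p linear predictors
prob = 1.0 / (1.0 + np.exp(-eta))      # assumed logistic link
Y = (rng.random((n, p)) < prob).astype(int)
```

In estimation the direction runs the other way: Y is observed, and diffuse priors on the free elements of the loading and covariance matrices yield posterior draws of the latent loyalty and competence scores.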
Table 3 shows the results of Model 1, a two-factor Bayesian confirmatory model. In each of the two factors, one characteristic variable is set as the scaling variable to help identify the model (the coefficient fixed to 1). In the loyalty factor, the coefficient for having worked for the presidential campaign is defined as fixed. The coefficient for having subject area expertise is set as fixed in the competence dimension. The reported coefficients are posterior medians and the stars indicate that the 95% Bayesian credible interval for that parameter does not include zero.
As the loyalty and competence literature suggests, there are two main types of background characteristics. Furthermore, the qualities associated with each type are for the most part consistent with past theoretical and normative literature identifying the key characteristics of loyalty and competence. On what we will therefore call the loyalty factor, all variables load positively on the underlying latent construct, with the exception of the indicator that the appointee last held a political job. The positive coefficients associated with the remaining five observed binary indicators of loyalty suggest that higher levels of the latent variable--Loyalty--translate into higher probabilities that the appointee possesses that characteristic. As an example, the more loyal an appointee is, the more likely it is that s/he will have once worked for the political party, a characteristic that George W. Bush (see Table 2) relied on extensively when making presidential appointments. In short, Model 1 suggests that when presidents seek loyalist candidates for executive appointments, they will do well to prioritize those who previously worked for the party, in the White House, on the president's transition team, on the inauguration team, or for a member of Congress.
For the competence factor, the 95% Bayesian credible interval for all variables excludes zero, indicating that all variables correspond with the underlying construct. One surprising result, however, is that having task experience loads negatively on the factor, which suggests that higher levels of competence are associated with a lower probability that the appointee has task expertise, that is, previous experience performing tasks similar to those they perform as executive appointees. Almost 69% of the appointee resumes identified task experience as a background characteristic. (21) Hence, it is likely that a substantial number of individuals with a variety of loyalty characteristics also have task experience. (22) In sum, the results from Table 3 indicate that while most variables associate neatly with either loyalty or competence, there are exceptions. Because task experience has associations with loyalty, presidential personnel staffers also may use it as a means of evaluating the fealty of a prospective appointee to the president's policy position.
To explore the possible relation between having task experience and the identified loyalty variables, we ran a series of tests. The results are mixed. Of the seven characteristics that indicate appointee loyalty, only three--transition team experience, prior political job, and congressional experience--are statistically related to task experience. (23) In addition to bivariate measures of association, we assess the relationship between task experience and the loyalty variables using a logistic regression model to account for the possibility that the relationship among these variables is more complex than a simple bivariate analysis can capture. (24) Results here indicate that while five variables are statistically related to having task experience, the directions of the relationships differ. Working on the president's campaign and working for the political party are both associated with lower probabilities of having task experience, all else being equal. In contrast, transition team experience and experience working in Congress are both related to higher probabilities of having task experience, all else being equal.
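The logic of this multivariate check can be sketched with synthetic data. The effect sizes below are invented for illustration, and scikit-learn's logistic regression stands in for whatever specification the authors actually used; the point is only to show how a multivariate model can recover opposite-signed relationships among loyalty indicators and task experience.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 2000

# Synthetic binary loyalty indicators (base rates are illustrative).
campaign = (rng.random(n) < 0.30).astype(int)
party = (rng.random(n) < 0.25).astype(int)
transition = (rng.random(n) < 0.05).astype(int)
congress = (rng.random(n) < 0.30).astype(int)

# Construct task experience so that campaign/party work LOWERS its
# probability while transition and congressional experience RAISE it,
# echoing the mixed directions reported in the text.
logit = 0.8 - 1.0 * campaign - 0.8 * party + 1.2 * transition + 0.9 * congress
prob = 1 / (1 + np.exp(-logit))
task = (rng.random(n) < prob).astype(int)

# Multivariate model: all loyalty indicators entered jointly.
X = np.column_stack([campaign, party, transition, congress])
model = LogisticRegression().fit(X, task)
coefs = dict(zip(["campaign", "party", "transition", "congress"],
                 model.coef_[0]))
# With this data-generating process, the campaign and party coefficients
# come out negative while transition and congress come out positive.
```

A bivariate cross-tabulation of, say, transition work against task experience can miss or distort these relationships when the indicators are themselves correlated, which is why the multivariate check matters.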
Examining the Sub-dimensions of Loyalty and Competence
While we find evidence supporting what presidential scholars refer to as loyalty and competence, our models also suggest that a simple dichotomy may be too blunt. After all, when examining a resume a personnel manager is likely to pay more attention to certain measures of loyalty or competence than to others. If so, which measures provide the strongest evidence that a particular individual is either loyal or competent? Prior experience such as subject area expertise may provide some guidance for an administrator looking for competence, but it may do the same for an administrator seeking loyalty. Additionally, what is a more reliable measure of loyalty: campaign experience or working for a member of Congress? And when we speak of loyalty and competence, is there but one type of each, or are they multidimensional concepts? If the latter, a personnel manager may be interested in one type of competence rather than another. One way to answer these questions is to examine the resumes of presidential appointees to determine whether identifiable patterns exist within the data. We do so first for competence.
Figure 2 presents the dendrogram for our various measures of competence, as derived from our earlier analysis. (25) We find three sub-clusters of variables within the competence dimension. The first cluster consists of people who have previous experience in the agency to which they have been appointed, previously served as an executive appointee, or once worked in the federal bureaucracy. All three of these variables relate to federal government experience. The second sub-cluster includes appointees who previously held a public office, have public management experience, or have work experience at the state level; it represents previous experience holding some type of public office. The third cluster involves people with private management experience, experience working for a nonprofit, task-related expertise, and subject-related expertise; we label this cluster policy-related expertise.
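The intuition behind clustering resume variables can be shown in miniature. The sketch below is an assumption-laden stand-in, not a reproduction of our dendrogram: it uses synthetic binary indicators with hypothetical names and a simple matching distance, and checks only which pair of variables an agglomerative procedure would merge first.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
n = 1000

# Hypothetical 0/1 resume indicators: two are engineered to co-occur 90% of the time.
agency_exp = rng.integers(0, 2, n)
fed_bureaucracy = np.where(rng.random(n) < 0.9, agency_exp, 1 - agency_exp)
private_mgmt = rng.integers(0, 2, n)

indicators = {
    "agency_exp": agency_exp,
    "fed_bureaucracy": fed_bureaucracy,
    "private_mgmt": private_mgmt,
}

# Simple matching distance: share of appointees on which two indicators disagree.
def dist(a, b):
    return float(np.mean(a != b))

# The first agglomerative merge joins the closest pair of variables,
# which is where a dendrogram's lowest branch forms.
closest = min(combinations(indicators, 2),
              key=lambda pair: dist(indicators[pair[0]], indicators[pair[1]]))
print(closest)
```

On these synthetic data the two engineered federal-experience indicators merge first, analogous to the way related background characteristics join low in Figure 2.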
We did not include a measure of education in the model presented here because almost all of our appointees hold an advanced degree; when we ran that model separately, however, education was related to policy-related expertise. Of the three clusters, public office and policy-related expertise are the most closely related, as evidenced by their connections in the dendrogram.
In sum, when a personnel manager is examining a resume, there are specific measures that best relate to federal experience or public office, while others provide information on an individual's policy-related experience. What is particularly interesting is that these variables cluster in an intuitive manner. There are clear distinctions in our results between private and public work experience, for example. How then do the various measures of loyalty cluster? We turn to an examination of that in Figure 3.
For loyalty there are two identifiable sub-clusters. The first dimension includes appointees who worked in Congress, usually for a specific member of Congress, or whose last job was in politics. The second dimension includes individuals who worked on a campaign, worked for the party, worked on the inauguration team, served on the transition team, or worked in the White House. The first sub-cluster represents what we call outsider loyalty, while the second represents personal loyalty to the president. This distinction is important because presidents may be more likely to appoint individuals who exhibit personal loyalty to more sensitive or important political positions within the bureaucracy, a point that we will address in our continuing research. Here, however, we ask a more fundamental question. Which measures are more strongly associated with the sub-clusters or sub-dimensions of loyalty and competence? In other words, which of the five factors of personal loyalty are most important? We cannot answer that question with the dendrograms. SEM, however, can provide insights into this question. When we look at the SEM models for each sub-cluster, what do they tell us about the importance of each variable?
Table 4 presents the results of the sublevel analyses on the competence factor. We estimate three separate models. The first column, Model 2, is a single-factor Bayesian confirmatory model of appointee competence; the scaling variable is subject area expertise. The second column of Table 4 presents Model 3, a two-factor model with subject area expertise and prior executive branch experience as the respective fixed variables. The third column, Model 4, is the most complex model of appointee competence. The fixed variables for the three underlying latent competence sub-dimensions are: (1) subject area expertise, (2) public management experience, and (3) prior executive branch experience.
With the exception of task experience, all other variables load positively on the competence factor(s), suggesting that higher levels of competence are associated with a higher probability that the appointee will possess a given characteristic. For example, higher levels of competence indicate a higher probability that the appointee will have some sort of nonprofit, public, or private experience. Again, the most interesting result is the negative posterior median for task experience: the more competent an appointee is, the less likely it is that he or she will have task experience.
To compare models, we provide two separate measures of model fit. Deviance compares the fit of the model to the original data; lower deviance indicates better model fit. The Deviance Information Criterion (DIC) adds a penalty to deviance that adjusts for model complexity. (26) As with deviance, lower values of the DIC indicate the better performing model. By both the deviance measure and the DIC, the three-factor Bayesian confirmatory model (Model 4) is the best fitting model. The deviance is 223 lower for Model 4 (20,074.78) than for the next best model, Model 3 (20,297.9). Similarly, the DIC indicates that the most complex model (Model 4) is the best model of appointee competence: even after adjusting for greater model complexity, Model 4 still outperforms the less complex Models 2 and 3. The DIC for the three-factor solution is 25,840; the DIC values for Models 2 and 3 are 25,962 and 26,025, respectively. The resulting differences of 122 and 185 represent nontrivial reductions in DIC. Unequivocally, then, the three-factor Bayesian confirmatory solution (Model 4) is the best model of appointee competence. In simpler terms, this confirms that we should consider three types of competence: federal government experience, public office, and policy-related experience.
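The model comparison arithmetic above is easy to verify directly from the reported figures. The short script below recomputes the deviance gap between Models 3 and 4 and the DIC gaps between Model 4 and the simpler models; it uses only the values stated in the text (Model 2's deviance is not reported, so it is omitted).

```python
# Deviance and DIC values as reported for the competence models in Table 4.
models = {
    "Model 2 (one-factor)":   {"dic": 25_962},
    "Model 3 (two-factor)":   {"deviance": 20_297.9, "dic": 26_025},
    "Model 4 (three-factor)": {"deviance": 20_074.78, "dic": 25_840},
}

# Lower DIC is better; find the best-fitting model by that criterion.
best = min(models, key=lambda m: models[m]["dic"])

# Deviance gap between the two- and three-factor solutions.
dev_gap = (models["Model 3 (two-factor)"]["deviance"]
           - models["Model 4 (three-factor)"]["deviance"])

# DIC reductions achieved by the best model over each alternative.
dic_gaps = {m: models[m]["dic"] - models[best]["dic"]
            for m in models if m != best}

print(best, round(dev_gap), dic_gaps)
```

Running this reproduces the comparisons in the text: Model 4 has the lowest DIC, the deviance gap rounds to 223, and the DIC reductions are 122 and 185.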
As important as identifying the best model of appointee competence is the relative importance of each variable in defining competence. Put differently, which variables best discriminate between candidates for appointment? One metric for assessing this question is the value of the coefficients in the respective models. (27) Generally speaking, if presidents emphasize competence, they should seek out a candidate who has previously held elected office (coefficient = 13.01). This may seem surprising at first, but because governors and mayors have prior executive experience, and members of Congress likely have expertise in a particular policy area, the finding is not entirely counterintuitive. The second defining characteristic of competence is prior experience in the agency to which the candidate will be appointed (coefficient = 6.84). While previously held elected office and prior agency experience are the two strongest indicators of competence in Model 2, we must also consider the relatively short supply of such candidates: of the 3,366 Bush and Obama appointees in our data set, only 1.7% previously held elected office and only 9.7% previously worked in the agency to which they were appointed.
Further, the results in Model 4 suggest that the story of the relative importance of the competence variables is more nuanced. As noted earlier, prior elected office is indicative of competence related to public office, while prior agency experience signifies federal government experience. Lastly, the results in Model 4 suggest that the best indicator of policy-related expertise is nonprofit management experience (coefficient = 5.86).
Table 5 presents the results of the sublevel analyses on the loyalty factor. The principal question here is whether we should model the loyalty dimension as a single factor or as the two smaller factors identified in Figure 3. Accordingly, we estimate two separate Bayesian confirmatory factor models on the variables associated with the loyalty dimension. In the first column, Model 5 is a one-factor Bayesian confirmatory model with worked for the campaign as the fixed variable. In the second column (Model 6), we estimate a two-factor solution, with worked for the campaign and last job political as the fixed variables for their respective sub-dimensions.
For both the one-factor and the two-factor solutions, all free parameters load positively on the loyalty (sub)dimension. Again, we use the deviance and the DIC to compare models; lower values of each indicate the better model. Here the two measures offer mixed results concerning which is the best fitting model. By the deviance measure, the two-factor solution--that is, the more complex model (Model 6)--outperforms the one-factor model of the loyalty dimension in terms of pure model fit, suggesting both loyalty and patronage effects. The deviance for the one-factor Bayesian confirmatory model of appointee loyalty is 18,773; the deviance for the two-factor solution is 14,458.
While the deviance measure indicates that the more complex model (Model 6) fits the data better, the DIC suggests the opposite. The DIC, which penalizes model complexity, indicates that the one-factor model of appointee loyalty is the preferred model. (28) The DIC for the one-factor appointee loyalty model is 21,960, while the DIC for the two-factor model is 28,919. These results suggest that loyalty may be a more cohesive characteristic than competence. Lastly, note that the 95% credible interval for having worked on the president's inauguration team no longer encompasses zero. In the initial model of loyalty and competence presented in Table 1, this characteristic was not associated with appointee loyalty; here, higher levels of loyalty are associated with a higher probability that the appointee once worked on the president's inauguration team.
Which variables best define loyalty? The three strongest indicators of loyalty in general are worked in the White House, worked on the inauguration team, and worked on the transition team (Model 5). (29) That is, the more loyal appointees previously worked in the White House, worked on the president's inauguration team, and/or worked on the president's transition team.
What if presidents employ a finer-grained assessment of loyalty, distinguishing what we call outsider loyalty from personal loyalty? Results in Model 6 indicate that the best identifier of personal loyalty is previously working on the president's inauguration team (coefficient = 5.39), followed closely by White House experience (coefficient = 5.25). In comparison, the best indicator of outsider loyalty is previous experience working for a member of Congress (coefficient = 88.75).
Theoretically and empirically, identifying that presidents likely distinguish personal loyalty from outsider loyalty is important because it allows a finer and more nuanced explanation of presidential control of the bureaucracy. The results in Table 5 suggest not only that there is more than one component of the loyalty dimension but also that presidents can strategically place more loyal appointees in various parts of the federal bureaucracy by targeting specific appointee characteristics. Interestingly, the characteristics most commonly cited in studies of patronage fall on the personal loyalty dimension, suggesting that patronage is closer in relationship to loyalty than to competence.
Reassessing Appointee Loyalty and Competence
Given the results above, which show that competence contains three sublevel components and loyalty one or two, a natural question is how the results of the revised models differ from those presented in Table 3. In this section, we reexamine appointee loyalty and competence by fitting models of increasing complexity.
Table 6 reports the results of three additional Bayesian confirmatory factor models. Model 7 is the same model reported in Table 3, shown here for comparison purposes. Model 8 is a three-factor Bayesian confirmatory model of appointee loyalty and competence: loyalty is defined as a single factor, while competence is split into two separate factors. The fixed variables are: (1) worked on the president's campaign, (2) subject area expertise, and (3) prior executive branch experience. Model 9 is a four-factor solution: loyalty again is a single factor, and competence is defined as three factors. The scaling variables are: (1) worked for the president's campaign, (2) subject area expertise, (3) public management experience, and (4) prior executive branch experience. Model 10 is the most complex, a five-factor model with loyalty broken into two factors and competence into three. The fixed variables for Model 10 are: (1) worked on the president's campaign, (2) last job was political, (3) subject area expertise, (4) public management experience, and (5) prior executive branch experience.
Comparing the four models by the deviance measure and the DIC, the results are mixed with regard to which is best. By the deviance measure, the most complex model (Model 10) attains the lowest deviance (36,679.6), compared with 39,969.3 for Model 9. Using the DIC, however, the two best performing models are Model 8 (the three-factor solution) and Model 9 (the four-factor solution). Numerically, the value of the DIC for Model 8 is 49,672, a 1,096 reduction in DIC compared to Model 9 (DIC = 50,768).
Regarding the factor loadings, two results stand out. The first is the robustness of the negative coefficient for task experience in the competence dimension. A negative coefficient indicates that the more competent an appointee is, the less likely it is that he or she will have prior task experience at the time of appointment. While this may seem counterintuitive, the finding is robust across seven different model specifications. (30) It may reflect the paradox we introduced earlier: presidents seeking competence and presidents seeking loyalty may both search for individuals with prior task experience, but for different reasons. For a loyalty-seeking president, past performance on the job may be a solid indicator of an appointee's liberal or conservative credentials, while for a competence-seeking president it may reflect the individual's particular skills.
The second surprising finding from the models presented in Table 6 is the negative coefficient for having worked for a member of Congress in Model 10. Across the six different model specifications, (31) only in Model 10 is the coefficient negative, indicating that higher levels of loyalty are associated with a lower probability that the appointee will have worked in Congress. Most likely, this is due to model specification, given that no other models return similar results. (32)
Comparing Bush and Obama Appointees
With a well-specified model of appointee loyalty and competence, what can we say about presidential appointments in general? Further, are there notable differences in the types of personnel that Bush and Obama appointed to the federal bureaucracy? In this section, we provide an overview of appointee loyalty and competence, using the specification in Model 9. (33)
Figure 4 presents histograms with kernel density overlays for loyalty (Figure 4a), federal government experience (Figure 4b), policy-related expertise (Figure 4c), and public office experience (Figure 4d). Using BSEM, we predict each appointee's location on each of the four dimensions. We then normalize each dimension to have a mean of zero and a standard deviation of one. Figures 4a--4d show the distributions of all 3,366 Bush and Obama appointees across 51 federal agencies. Larger values on the x axis indicate higher levels of loyalty (competence).
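The normalization step described here is a standard z-score transformation applied to each predicted dimension. The sketch below applies it to arbitrary placeholder scores (a skewed draw of 3,366 values) rather than the actual BSEM predictions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Placeholder latent scores, one per appointee; the real values
# come from the BSEM predictions, not from this synthetic draw.
scores = rng.gamma(2.0, 1.5, size=3366)

# Standardize: subtract the mean, divide by the standard deviation.
z = (scores - scores.mean()) / scores.std()

print(round(z.mean(), 10), round(z.std(), 10))
```

After the transformation each dimension has mean zero and standard deviation one, so appointees' positions are directly comparable across the four dimensions in Figure 4.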
Figure 4a presents the distribution of Bush and Obama appointees on the loyalty dimension. As indicated, most appointees fall around zero on the normalized loyalty dimension. There is only a small group of appointees that the model predicted as the most loyal (predicted loyalty value > 1.0). Figure 4b shows the distributions of Bush and Obama appointees on the federal government experience dimension. While there is a small group of appointees that rank low on this dimension (predicted value < 0), there are multiple, smaller groups of both Bush and Obama appointees that score high on federal experience. One interesting thing to note in Figure 4b is that there is not a singular cluster of appointees that score high on competence relating to federal government experience. While testing this is beyond the scope of this article, one possibility is that because we have such a diverse data set of appointment types ranging from PAS, to SES, to SC, to PA, there is a considerable share of appointees with extensive experience inside the Beltway.
Figures 4c--4d provide the distributions of the competence dimensions relating to policy area expertise and public office experience, respectively. Note that both graphs show a bimodal distribution: for both policy-related expertise and public office experience, there are two distinct groups of appointees, one scoring low on the predicted dimension and the other high. These results indicate that presidents do not reward any one type of loyalty or competence.
With these results, the next question is whether there is a distinguishable difference between Bush and Obama appointees. Figure 5 presents a series of graphs comparing the overall differences between Bush and Obama appointees across the four separate dimensions. First note the striking similarities between Bush and Obama appointees across each of the four dimensions. With the exception of a couple of minor details, there are no notable differences in the distributions between the Bush and Obama appointees.
However, while there is not a discernible difference in the distributions across the four dimensions, Obama's appointees consistently rank higher on each of the four dimensions, compared to Bush's appointees. This means that Obama consistently paid more attention to all three competence sub-dimensions and was more likely to use an array of loyalty measures in his appointment approach. In contrast, Bush was more likely to target specific characteristics of loyalty than Obama (e.g., appointing individuals who previously worked for the Republican Party).
In sum, by operationally identifying the dimensions and characteristics of loyalty and competence, we are able to show which background characteristics presidents actually rewarded, as well as the factors each president most relied on in making their appointments. In our continuing work we will further develop this idea by examining variations in the four different types of presidential appointments, as well as how these appointments impact presidential influence.
Many principal-agent studies assume that political control of the bureaucracy is facilitated by the presidential appointment of loyal individuals who reflect what Terry Moe (1985a) refers to as "responsive competence." Yet, with the exception of a few studies (e.g., Krause and O'Connell 2011; Lewis and Waterman 2013), scholars have not operationally defined the key concepts of loyalty and competence or examined appointments (SC, SES, and PA) below the PAS level. Using a comprehensive data set of 3,366 resumes of presidential appointees in 51 different federal agencies, we not only provide an operational definition of loyalty and competence, we also demonstrate that there are three distinct sub-dimensions of competence (public office experience, federal government experience, and policy-related expertise) and, less cohesively, two sub-dimensions of loyalty (personal and outsider loyalty). Consequently, when we speak of loyalty versus competence, we can more specifically describe five sub-dimensions of appointee characteristics potentially of interest to presidents and important for presidential influence over the bureaucracy.
These findings are important for a variety of reasons. First and most fundamentally, they allow us to discuss loyalty and competence with greater specificity: we can now examine different types of each. This matters not only for scholars interested in how and why presidents employ their appointment power to influence the bureaucracy but also for policy makers. Specifically, individual presidents may stress different types of loyalty in their appointment strategies. Even if all presidents have incentives to appoint loyalists, they may use different loyalty strategies to achieve that goal. The same is true of competence, with some presidents more likely to reward policy-related experience and others federal government experience.
For scholars, this means that we can examine whether presidents reward different types of loyalty and competence in different agencies, or use different types of appointments to promote different policy goals, such as rewarding competence in PAS appointments and using SES appointments to promote loyalty. And by developing more nuanced measures of loyalty and competence we can reexamine existing approaches to the broader study of presidential influence, to determine whether presidents rely on persuasion or on less subtle forms of influence.
Theoretically, our most important contribution is to supply the missing element in principal-agent models by identifying the background characteristics and sub-dimensions of loyalty and competence. This allows us to examine which types of presidential agents are more likely to work with the bureaucracy and which are most likely to effect changes in it. The sub-dimensions of loyalty and competence provide a means of differentiating the potential influence of various institutional actors, and the focus on appointment type allows us to determine more specifically how presidents design strategies for administrative reform. In sum, our empirical results open up a wide array of new research opportunities for scholars of the presidency and the bureaucracy.
Aberbach, Joel D., Robert D. Putnam, and Bert A. Rockman. 1981. Bureaucrats and Politicians in Western Democracies. Cambridge, MA: Harvard University Press.
Aberbach, Joel D., and Bert A. Rockman. 1976. "Clashing Beliefs Within the Executive Branch: The Nixon Administration Bureaucracy." American Political Science Review 70: 456-68.
--. 2000. In the Web of Politics: Three Decades of the U.S. Federal Executive. Washington, DC: Brookings Institution Press.
Acock, Alan. 2013. Discovering Structural Equation Modeling Using Stata. College Station, TX: Stata Press.
Baker, Peter. 2014. Days of Fire: Bush and Cheney in the White House. New York: Anchor.
Bollen, Kenneth A. 1989. Structural Equations with Latent Variables. New York: Wiley.
Brown, Timothy A. 2006. Confirmatory Factor Analysis for Applied Research. New York: Guilford.
Chavent, Marie, Vanessa Kuentz, Benoit Liquet, and Jerome Saracco. 2012. "ClustOfVar: An R Package for the Clustering of Variables." Journal of Statistical Software 50 (September): 1-16.
Cohen, Jeffrey E. 1988. The Politics of U.S. Cabinet: Representation in the Executive Branch, 1789-1984. Pittsburgh, PA: University of Pittsburgh Press.
Durant, Robert F. 1992. The Administrative Presidency: Public Lands, the BLM, and the Reagan Revolution. New York: State University of New York Press.
Durant, Robert F., and Adam L. Warber. 2001. "Networking in the Shadow of Hierarchy: Public Policy, the Administrative Presidency, and the Neoadministrative State." Presidential Studies Quarterly 31 (2): 221-44.
Edwards, George C. 2001. "Why Not the Best? The Loyalty-Competence Trade-Off in Presidential Appointments." Brookings Review 19 (2): 12-16.
Fisher, Linda L. 1987. "Fifty Years of Presidential Appointments." In The In-and-Outers: Presidential Appointments and the Transient Government, ed. G. Calvin Mackenzie. Baltimore: Johns Hopkins University Press, 1-29.
James, Gareth, Daniela Witten, Trevor Hastie, and Robert Tibshirani. 2013. An Introduction to Statistical Learning: With Applications in R. New York: Springer.
Gelman, Andrew, Jessica Hwang, and Aki Vehtari. 2014. "Understanding Predictive Information Criteria for Bayesian Models." Statistics and Computing 24: 997-1016.
Hastie, Trevor, Robert Tibshirani, and Jerome Friedman. 2009. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. 2nd ed. New York: Springer.
Hayduk, Leslie A. 1988. Structural Equation Modeling with LISREL. Baltimore: Johns Hopkins University Press.
Heclo, Hugh. 1975. "OMB and the Presidency--The Problems of Neutral Competence." The Public Interest 38: 80-98.
--. 1977. A Government of Strangers: Executive Politics in Washington. Washington, DC: Brookings Institution Press.
Herring, Pendleton. 1936. Federal Commissioners: A Study of Their Careers and Qualifications. Cambridge, MA: Harvard University Press.
Hess, Stephen. 1976. Organizing the Presidency. Washington, DC: Brookings Institution Press.
Hollibaugh, Gary E., Gabe Horton, and David Lewis. 2014. "Presidents and Patronage." American Journal of Political Science 58 (4): 1024-42.
Krause, George A., and Anne Joseph O'Connell. 2011. "Compliance, Competence, and Bureaucratic Leadership in U.S. Federal Government Agencies: A Bayesian Generalized Latent Trait Analysis." Working paper.
Lee, Sik-Yum. 2007. Structural Equation Modeling: A Bayesian Approach. Chichester, UK: Wiley.
Lewis, David E. 2008. The Politics of Presidential Appointments: Political Control and Bureaucratic Performance. Princeton, NJ: Princeton University Press.
--. 2011. "Presidential Appointments and Personnel." Annual Review of Political Science 14 (2): 47-66.
Lewis, David E., and Richard W. Waterman. 2013. "The Invisible Presidential Appointments: An Examination of Appointments to the Department of Labor, 2001-11." Presidential Studies Quarterly 43 (1): 35-57.
Light, Paul C. 1995. Thickening Government: Federal Hierarchy and the Diffusion of Accountability. Washington, DC: Brookings Institution Press.
Mackenzie, G. Calvin. 1981. The Politics of Presidential Appointments. New York: Free Press.
Macmahon, Arthur W., and John D. Millett. 1939. Federal Administrators. New York: Columbia University Press.
Maranto, Robert. 1993. "Still Clashing after All These Years: Ideological Conflict in the Reagan Executive." American Journal of Political Science 37 (3): 681-98.
Maranto, Robert, and Karen M. Hult. 2004. "Right Turn? Political Ideology in the Higher Civil Service, 1987-1994." American Review of Public Administration 34 (2): 199-222.
Menzel, Donald C. 1983. "Redirecting the Implementation of a Law: The Reagan Administration and Coal Surface Mining Regulation." Public Administration Review 43: 411-20.
Michaels, Judith E. 1997. The President's Call: Executive Leadership from FDR to George Bush. Pittsburgh, PA: University of Pittsburgh Press.
Moe, Terry M. 1982. "Regulatory Performance and Presidential Administration." American Journal of Political Science 26: 197-225.
--. 1985a. "The Politicized Presidency." In New Directions in American Politics, ed. John E. Chubb and Paul E. Peterson. Washington, DC: Brookings Institution Press, 235-71.
--. 1985b. "Control and Feedback in Economic Regulations: The Case of the NLRB." American Political Science Review 79: 1094-116.
Moynihan, Donald, and Alasdair S. Roberts. 2010. "The Triumph of Loyalty over Competence: The Bush Administration and the Exhaustion of the Politicized Presidency." Public Administration Review 70 (4): 572-81.
Mufson, Steven. 2015. "Treasury Nominee Antonio Weiss Withdraws from Consideration." Washington Post, January 12. http://www.washingtonpost.com/business/economy/treasury-nominee-antonio-weiss-withdraws-from-consideration/2015/01/12/8de3bd44-9a7f-11e4-96cc-e858eba91ced_story.html (accessed January 12, 2015).
Muthen, Bengt, and Tihomir Asparouhov. 2012. "Bayesian Structural Equation Modeling: A More Flexible Representation of Substantive Theory." Psychological Methods 17 (3): 313-35.
Nathan, Richard P. 1975. The Plot That Failed: Nixon and the Administrative Presidency. New York: Wiley.
Nathan, Richard P. 1983. The Administrative Presidency. New York: Wiley.
Nixon, David C. 2004. "Separation of Powers and Appointee Ideology." Journal of Law, Economics, and Organization 20 (2): 438-57.
Parsneau, Kevin. 2013. "Politicizing Priority Departments: Presidential Priorities and Subcabinet Experience and Loyalty." American Politics Research 41 (3): 443-70.
Patterson, Bradley H. 2008. To Serve the President: Continuity and Innovation in the White House Staff. Washington, DC: Brookings Institution Press.
Patterson, Bradley, and James Pfiffner. 2001. "The White House Office of Presidential Personnel." Presidential Studies Quarterly 31 (3): 415-38.
Pfiffner, James P. 1996. The Strategic Presidency: Hitting the Ground Running. Lawrence: University Press of Kansas.
--. 1997. "The National Performance Review in Perspective." International Journal of Public Administration 20 (1): 41-70.
Plummer, Martyn. 2008. "Penalized Loss Functions for Bayesian Model Comparison." Biostatistics 9 (3): 523-39.
Rivers, Douglas, Vicki Pineau, and Daniel Slotwiner. 2003. "Combining Random and NonRandom Samples." Proceedings of the American Statistical Association: 1-14.
Song, Xin-Yuan, and Sik-Yum Lee. 2012. Basic and Advanced Bayesian Structural Equation Modeling: With Applications in the Medical and Behavioral Sciences. Chichester, UK: Wiley.
Spiegelhalter, David J., Nicola G. Best, and Bradley P. Carlin. 1998. "Bayesian Deviance, the Effective Number of Parameters, and the Comparison of Arbitrarily Complex Models." Working paper.
Spiegelhalter, David J., Nicola G. Best, Bradley P. Carlin, and Angelika van der Linde. 2002. "Bayesian Measures of Model Complexity and Fit." Journal of the Royal Statistical Society: Series B 64 (4): 583-639.
Stanley, David T., Dean E. Mann, and Jameson W. Doig. 1967. Men Who Govern: A Biographical Profile of Federal Political Executives. Washington, DC: Brookings Institution Press.
Stewart, Joseph, Jr., and Jane S. Cromartie. 1982. "Partisan Presidential Change and Regulatory Policy: The Case of the FTC and Deceptive Practices Enforcement, 1938-1974." Presidential Studies Quarterly 12: 568-73.
Waterman, Richard W. 1989. Presidential Influence and the Administrative State. Knoxville, TN: University of Tennessee Press.
Waterman, Richard W., John Bretting, and Joseph Stewart. Forthcoming. "The Politics of U.S. Ambassadorial Appointments: From the Court of St. James to Burkina Faso." Social Science Quarterly.
Weko, Thomas J. 1995. The Politicizing Presidency: The White House Personnel Office, 1948-1994. Lawrence: University Press of Kansas.
Wood, B. Dan. 1990. "Does Politics Make a Difference at the EEOC?" American Journal of Political Science 34: 503-30.
Wood, B. Dan, and James Anderson. 1993. "The Politics of U.S. Antitrust Regulation." American Journal of Political Science 37: 1-39.
Wood, B. Dan, and Richard W. Waterman. 1991. "The Dynamics of Political Control of the Bureaucracy." American Political Science Review 85 (3): 801-28.
--. 1993. "The Dynamics of Political-Bureaucratic Adaptation." American Journal of Political Science 37 (2): 497-528.
--. 1994. Bureaucratic Dynamics: The Role of Bureaucracy in a Democracy. Transforming American Politics. Boulder, CO: Westview Press.
Additional supporting information may be found in the online version of this article:
Appendix A: Table A1. Assessing Task Experience and Characteristics of Appointee Loyalty.
Appendix B: Assessing Variable Associations Using Multiple Correspondence Analysis.
Figure B1: MCA Results--Loyalty Dimension Only.
Figure B2: MCA Results--Competence Dimension Only.
Figure B3: MCA Results--All Variables.
Appendix C: Model Diagnostics.
Table C1: Model Set-Up.
Appendix D: Examining Appointee Characteristics Using Multidimensional Item Response Theory.
Figure D1: Comparing Predictions of Latent Variable (CFA vs. IRT).
Table D1: Classification and Comparison of CFA and IRT (5 Groups).
Table D2: Assessing Loyalty Sub-dimensions (MIRT).
Appendix E: Example Resumes.
(1.) While Krause and O'Connell (2011) only examine PAS appointments, a major advantage of their work over ours is that they examine presidential appointments over a much longer time frame. Because of the limitations of the FOIA process, we were only able to acquire data on appointees from the administrations of Presidents George W. Bush and Barack Obama.
(2.) An important recent example is Antonio Weiss. Nominated by President Obama to be undersecretary for domestic finance in the Treasury Department, Weiss withdrew his name from consideration after criticism from Senator Elizabeth Warren over his Wall Street background. Weiss has since taken an SC or SES position as an adviser to the Treasury Secretary (Mufson 2015).
(3.) One of the most impressive is a survey by Mackenzie and Light of PAS appointees who served from November 1964 through December 1984 (ICPSR Study Number 8458, Spring 1987). To protect the identity of each appointee, however, their data set is split, separating the background information from the identity of the appointee.
(4.) SC positions are not established by statute like PAS positions; instead, department and agency heads establish the SC positions subject to certification by the Office of Personnel Management (OPM) that the positions are of a "policy-making" or "confidential" nature. Once the appointee leaves the position, the authority for the position is revoked by OPM and the position no longer exists (http://archive.opm.gov/Strategic_Management_of_Human_Capital/fhfrc/FLX05020.asp#itemA3). Though the department head technically makes Schedule C appointments after OPM certification, in practice the OPP vets and approves such appointments. There are several ways of appointing non-careerists to SES positions; the number of non-careerists in SES positions is, however, limited both within each agency and relative to the overall number of SES positions across the federal executive establishment (http://www.opm.gov/policy-data-oversight/senior-executive-service/reference-materials/guidesesservices.pdf).
(5.) The latest figures from the 2012 United States Government Policy and Supporting Positions (commonly known as the "Plum Book") indicate that there are now 1,217 PAS appointees, 3,821 SES officials, and 1,392 Schedule C positions, as well as approximately 1,600 appointees in other appointment type positions. While we can also use the FedScope database to update counts of executive appointments by type, FedScope does not include positions in several key agencies, including the CIA, the DIA, and foreign service personnel (many of whom fill appointed positions in the State Department, and not just ambassadorships).
(6.) David Lewis (2008, 2) defines "politicized agencies ... as those that have the largest percentage and deepest penetration of appointees."
(7.) Thus an appointee, upon getting a second appointment in the same administration, would not be coded as having federal or agency experience or expertise gained during the initial appointment in the administration.
(8.) The categories and coding are similar to the coding rules used by Lewis and Waterman (2013).
(9.) Most agencies did not redact much, if any, information; others (e.g., the Department of the Navy), however, systematically redacted almost all substantive content while leaving only category headings (Education, Work Experience, Honors/Awards, etc.). Resumes that had one or two areas blank but included most of the relevant information were included and redacted categories were coded as missing rather than 0.
(10.) To better illustrate our data, we provide three example resumes in Appendix E, along with a brief discussion of how they would be coded. All appendices are online, located at http://yuouyang.weebly.com/uploads/8/4/5/5/8455800/missingelement-appendices.pdf.
(11.) Most of the individuals in this category actually worked for at least one representative and/or senator, though a few worked for a congressional committee or agency.
(12.) Typically, the researcher performs cluster analysis on the observations themselves, where the goal is to find clusters (or groups) of similar observations. Our approach focuses instead on the variables: the aim is to find partitions within the larger data set such that the variables within a single cluster are strongly related to each other. We use the variable clustering algorithms from the R package ClustOfVar (Chavent et al. 2012), which is designed specifically for clustering quantitative variables, qualitative variables, or a mixture of the two. This is well suited to our data, which consist of binary indicators.
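The idea in note 12, clustering the variables rather than the observations, can be illustrated outside of R. The sketch below is not the authors' ClustOfVar code; it is a simplified stand-in that hierarchically clusters binary indicator columns using a correlation-based distance, with a small synthetic data set in place of the resume indicators.

```python
# Illustrative sketch (not the authors' ClustOfVar code): hierarchical
# clustering of binary indicator VARIABLES rather than observations.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)

# Toy "resume" data: 200 appointees x 4 binary traits. The first two
# columns are correlated with each other (loyalty-like indicators),
# as are the last two (competence-like indicators).
base1 = rng.integers(0, 2, 200)
base2 = rng.integers(0, 2, 200)
X = np.column_stack([
    base1,
    np.where(rng.random(200) < 0.8, base1, 1 - base1),  # noisy copy of base1
    base2,
    np.where(rng.random(200) < 0.8, base2, 1 - base2),  # noisy copy of base2
])

# Cluster the variables: transpose so each row is a variable, then use a
# correlation-based distance so strongly associated indicators end up
# in the same cluster.
D = pdist(X.T, metric="correlation")
Z = linkage(D, method="average")
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)  # the two correlated pairs fall into separate clusters
```

The resulting linkage matrix `Z` is exactly what a dendrogram (see note 14) visualizes; `scipy.cluster.hierarchy.dendrogram(Z)` would plot it.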
(13.) In addition to the results of the clustering of appointee characteristics variables presented here, we also conducted extensive robustness checks of our clustering results by applying a different approach to the data, multiple correspondence analysis (MCA). Results for variable associations using MCA are similar to our results here. For full MCA results, see Appendix B.
(14.) A dendrogram is a tree-like visual representation of data, typically used to graphically summarize the results of a cluster analysis.
(15.) It is important to note that we cannot draw conclusions based on their proximity on the horizontal axis. For example, based on their proximity on the horizontal axis, one may conclude that serving on the president's inauguration team and having previous agency experience are similar. However, this is incorrect. The organization in the tree structure suggests that these two variables, in fact, belong to two very distinct clusters (Gareth et al. 2013, 397).
(16.) We examine these possible sublevel data structures in later analyses.
(17.) For greater detail on BSEM, see Lee (2007), Muthen and Asparouhov (2012), and Song and Lee (2012).
(18.) There are two standard methods for identification in structural equation models. First, one can fix the unstandardized factor loading of an indicator to some known value, usually 1; this is the reference variable approach. Alternatively, one can fix the factor variance to 1.0. Fixing the variance also standardizes the factor and allows all loadings to be estimated from the data. The unit variance identification is the more common approach in political science (see, e.g., Rivers 2003). We elected to utilize the reference variable approach for several reasons. First, identifying a BSEM by setting an appropriate element of the loading matrix to one is the norm (Lee 2007). Second, the unit variance identification approach is used less often than the reference variable approach in the structural equation modeling literature. Moreover, as Brown (2006) notes, the reference variable approach is more useful when (1) testing for measurement invariance across groups and (2) evaluating scale reliability. Because we intend to undertake such endeavors in future projects, the reference variable approach allows us to better compare results as we examine differences (1) across executive agencies and (2) across additional presidential administrations. Last, numerous experienced SEM modelers recommend the reference variable method. For instance, Hayduk (1988) suggests that one should always fix one factor loading for each concept. In addition to ensuring that the variances and covariances of the factors are identified, fixing one indicator also allows us to apply Bollen's (1989) rules to test the identification of the structural model in subsequent analyses.
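The two identification strategies described in note 18 can be stated compactly. A sketch of the measurement equations, with \(\lambda_j\) the loadings, \(\eta\) the latent factor, and \(y_j\) the observed indicators:

```latex
% Reference variable approach: fix the first loading to 1, leave Var(eta) free.
y_{1} = 1 \cdot \eta + \epsilon_{1}, \qquad
y_{j} = \lambda_{j}\,\eta + \epsilon_{j} \quad (j = 2, \dots, p), \qquad
\operatorname{Var}(\eta) \text{ free.}

% Unit variance approach: standardize the factor, estimate all loadings.
y_{j} = \lambda_{j}\,\eta + \epsilon_{j} \quad (j = 1, \dots, p), \qquad
\operatorname{Var}(\eta) = 1.
```

Either constraint pins down the otherwise arbitrary scale of \(\eta\); the "1.00 (Fixed)" entries in Tables 3-6 correspond to the first strategy.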
(19.) For additional details of the Bayesian models, including burn-in, sampling, and thinning rate, see Appendix C.
(20.) To compare our results to those using Item Response Theory Models, a more common approach in political science to examine latent constructs on binary indicators, see Appendix D.
(21.) See Table 2.
(22.) We test the possible relations between having task experience and the set of loyalty variables. See Appendix A for results.
(23.) The p values of the chi-square tests are .03, < .01, and < .01, respectively (see Table 2 and Appendix A). Substantively, the relationships between these three loyalty variables and having task experience range from weak to moderate: the Cramer's V measures of association are .04, .09, and .19 for transition team experience, last job political, and congressional experience, respectively.
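The chi-square and Cramer's V calculations referenced in note 23 can be sketched as follows. The 2x2 counts below are hypothetical, not drawn from the appointee data.

```python
# Hedged illustration of the association tests in note 23: a chi-square test
# and Cramer's V for two binary traits (hypothetical 2x2 counts).
import numpy as np
from scipy.stats import chi2_contingency

# Rows: has task experience (no / yes); columns: worked in Congress (no / yes).
table = np.array([[700, 350],
                  [1340, 976]])

chi2, p, dof, expected = chi2_contingency(table)

# Cramer's V = sqrt(chi2 / (n * (min(rows, cols) - 1))); for a 2x2 table
# the (min(rows, cols) - 1) term is 1.
n = table.sum()
v = np.sqrt(chi2 / (n * (min(table.shape) - 1)))
print(round(p, 4), round(v, 3))
```

Note that `chi2_contingency` applies the Yates continuity correction by default for 2x2 tables; Cramer's V values near .04-.19, as in note 23, indicate weak-to-moderate association.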
(24.) Examining the relationship between task experience and the set of loyalty variables in greater detail is important because influences from other variables can sometimes mask the true relation between two specific variables. As an example, whereas we find no relation between task experience and working on the president's campaign in the bivariate setting, we find that these two variables are in fact related using logistic regression. In short, the true relation between task experience and campaign experience is being masked in the bivariate setting. By controlling for the influences of the other variables, we are able to uncover that having campaign work experience is negatively associated with having task experience. For details, see Appendix A.
(25.) A careful reader will notice that the dendrogram presented in Figure 2 looks similar to the competence component of the full variable cluster analysis presented in Figure 1. That the results are similar underscores the robustness of the variable clustering. As Hastie, Tibshirani, and Friedman (2009, 521) note, minor changes in the data can lead to different representations of the dendrograms. Given this, the fact that the dendrogram for the set of competence variables is similar to that above, despite excluding all of the variables denoting loyalty, is striking.
(26.) By the formula common in most Bayesian textbooks, the DIC is calculated as the sum of the deviance measure and a measure of model complexity, typically one-half the variance of the deviance measure. While the deviance and the DIC are common measures of model fit in Bayesian analysis, debate continues regarding the best model fit statistic for Bayesian research. On the uncertainty, validity, and limitations of the DIC, see Plummer (2008) and Gelman, Hwang, and Vehtari (2014).
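The DIC formula described in note 26 can be sketched numerically from posterior deviance draws; the draws below are synthetic stand-ins, not the paper's MCMC output.

```python
# Sketch of the DIC calculation described in note 26:
#   DIC = mean posterior deviance + pD,  with  pD = var(deviance) / 2
# (the half-variance complexity measure). The deviance draws are synthetic,
# loosely scaled to Model 1 in Table 3.
import numpy as np

rng = np.random.default_rng(42)
deviance_draws = rng.normal(loc=42430.0, scale=30.0, size=5000)

d_bar = deviance_draws.mean()            # posterior mean deviance
p_d = deviance_draws.var(ddof=1) / 2.0   # effective number of parameters
dic = d_bar + p_d
print(round(dic, 1))
```

Lower DIC indicates a better trade-off between fit (the deviance) and complexity (pD), which is how Tables 3-6 compare the competing factor solutions.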
(27.) As Brown (2006) notes, factors loadings in a confirmatory factor model are analogous to the item discrimination parameter in an item response theory model. As such "... items with relatively high ... parameter values are more strongly related to the latent variable" (Brown 2006, 398).
(28.) It is interesting to note that while most researchers rely on the DIC to arrive at the better fitting model, the original authors of the DIC assert that one should rely on it only for generating a set of alternative models, not as a definitive indicator of the best model. As Spiegelhalter, Best, and Carlin (1998, 3) note, "we do not recommend that DIC be used as a strict criterion for model choice.... We rather view DIC as a method for screening alternative formulations in order to produce a list of candidate models for further considerations" (emphasis in original; see also Spiegelhalter et al. 2002). We follow this advice and view both Model 5 and Model 6 as candidate models of appointee loyalty.
(29.) As noted earlier, these are relatively rare traits. Only 5.8%, 6.1%, and 5.1% of the 3,366 appointees had White House, inauguration team, and transition team experience, respectively.
(30.) This includes the three models in Table 4 and the four models presented here in Table 6.
(31.) This includes the two models in Table 5 and the four models presented here in Table 6.
(32.) This possibility is even more likely considering that the results in Table 3 offer mixed evidence regarding whether loyalty is best fitted as a single- or a two-factor solution.
(33.) Model 9 specifies a one-factor solution for the loyalty variables and a three-factor solution for the competence variables. We utilize Model 9 as our default specification because model comparison statistics indicate that it is the most appropriate: Model 9 (the four-factor solution) is preferable to Model 10 (the five-factor solution).
Yu Ouyang is an assistant professor of political science at Purdue University Northwest. His research focuses on the unilateral presidency and quantitative methods. Evan T. Haglund is an assistant professor of public policy at the U.S. Coast Guard Academy. His research focuses on presidential appointees and executive branch performance. Richard W. Waterman is a professor of political science at the University of Kentucky. He coauthored Bureaucratic Dynamics and Bureaucrats, Politics and the Environment. He has written extensively on bureaucratic politics and the presidency.
Caption: FIGURE 1. Variable Cluster Analysis--Loyalty and Competence Note: This figure presents the results of the variable cluster analysis. Results indicate two main clusters, with possible sub-clusters within each.
Caption: FIGURE 2. Variable Cluster Analysis: Competence Sub-dimensions Note: This figure presents the results of the variable cluster analysis performed on only variables associated with the competence dimension. Results indicate three possible sub-clusters within the main competence dimension.
Caption: FIGURE 3. Variable Cluster Analysis: Loyalty Sub-dimensions Note: This figure presents the results of the variable cluster analysis performed on only variables associated with the loyalty dimension. Results indicate two possible sub-clusters within the main loyalty dimension.
Caption: FIGURE 4. Overall Appointee Loyalty and Competence Note: This graph shows the histograms and kernel density overlays of the loyalty (Figure 4a), federal government experience (Figure 4b), policy-related expertise (Figure 4c), and public office expertise (Figure 4d) dimensions. All values are predicted following the Bayesian SEM.
Caption: FIGURE 5. Comparing Bush and Obama Appointees Note: This graph presents the distributions and kernel density overlays, comparing Bush and Obama appointees.
TABLE 1 Agencies and Resumes

Agency | N | Percentage | G. W. Bush n (%) | Obama n (%)
African Development Foundation | 7 | 0.21% | 3 (42.86%) | 4 (57.14%)
Central Intelligence Agency | 6 | 0.18% | 5 (83.33%) | 1 (16.67%)
Commodity Futures Trading Commission | 22 | 0.65% | 17 (77.27%) | 5 (22.73%)
Corporation for National and Community Services | 3 | 0.09% | 3 (100%) | 0 (0%)
Defense Nuclear Facilities Safety Board | 4 | 0.12% | 4 (100%) | 0 (0%)
Department of Agriculture | 5 | 0.15% | 4 (80%) | 1 (20%)
Department of Commerce | 8 | 0.24% | 8 (100%) | 0 (0%)
Department of Defense | 205 | 6.09% | 53 (25.85%) | 152 (74.15%)
Department of Education | 303 | 9.00% | 182 (60.07%) | 121 (39.93%)
Department of Energy | 757 | 22.49% | 491 (64.86%) | 266 (35.14%)
Department of Health and Human Services | 127 | 3.77% | 2 (1.57%) | 125 (98.43%)
Department of Homeland Security | 56 | 1.66% | 15 (26.79%) | 41 (73.21%)
Department of Housing and Urban Development | 3 | 0.09% | 2 (66.67%) | 1 (33.33%)
Department of Interior | 308 | 9.15% | 171 (55.52%) | 137 (44.48%)
Department of Justice | 74 | 2.20% | 7 (9.46%) | 67 (90.54%)
Department of Labor | 286 | 8.50% | 205 (71.68%) | 81 (29.32%)
Department of State | 4 | 0.12% | 4 (100%) | 0 (0%)
Department of Transportation | 230 | 6.83% | 124 (53.91%) | 106 (46.09%)
Department of Treasury | 128 | 3.80% | 21 (16.41%) | 107 (83.59%)
Environmental Protection Agency | 6 | 0.18% | 5 (83.33%) | 1 (16.67%)
Equal Employment Opportunity Commission | 3 | 0.09% | 3 (100%) | 0 (0%)
Executive Office of the President | 226 | 6.71% | 169 (74.78%) | 57 (25.32%)
Export-Import Bank of the United States | 11 | 0.33% | 2 (18.18%) | 9 (81.82%)
Farm Credit Administration | 8 | 0.24% | 7 (87.50%) | 1 (12.50%)
Federal Aviation Administration | 1 | 0.03% | 1 (100%) | 0 (0%)
Federal Communications Commission | 46 | 1.37% | 29 (63.04%) | 17 (36.96%)
Federal Election Commission | 10 | 0.30% | 10 (100%) | 0 (0%)
Federal Labor Relations Authority | 1 | 0.03% | 1 (100%) | 0 (0%)
Federal Mediation and Conciliation Service | 2 | 0.06% | 1 (50%) | 1 (50%)
Federal Reserve System | 3 | 0.09% | 2 (66.67%) | 1 (33.33%)
General Services Administration | 178 | 5.29% | 117 (65.73%) | 61 (34.27%)
Millennium Challenge Corporation | 1 | 0.03% | 0 (0%) | 1 (100%)
National Aeronautics and Space Administration | 57 | 1.69% | 50 (87.72%) | 7 (12.28%)
National Credit Union Association | 8 | 0.24% | 4 (50%) | 4 (50%)
National Endowment for the Arts | 5 | 0.15% | 0 (0%) | 5 (100%)
National Endowment for the Humanities | 38 | 1.13% | 27 (71.05%) | 11 (28.95%)
National Labor Relations Board | 4 | 0.12% | 0 (0%) | 4 (100%)
National Mediation Board | 3 | 0.09% | 2 (66.67%) | 1 (33.33%)
Nuclear Regulatory Commission | 5 | 0.15% | 5 (100%) | 0 (0%)
Office of the Director of National Intelligence | 1 | 0.03% | 1 (100%) | 0 (0%)
Overseas Private Investment Corporation | 44 | 1.31% | 33 (75%) | 11 (25%)
Peace Corps | 38 | 1.13% | 6 (15.79%) | 32 (84.21%)
Pension Benefit Guaranty Corporation | 9 | 0.27% | 5 (55.56%) | 4 (44.44%)
Postal Regulatory Commission | 17 | 0.51% | 11 (64.71%) | 6 (35.29%)
Securities and Exchange Commission | 7 | 0.21% | 6 (85.71%) | 1 (14.29%)
Small Business Administration | 46 | 1.37% | 5 (10.87%) | 41 (89.13%)
Social Security Administration | 1 | 0.03% | 0 (0%) | 1 (100%)
U.S. Agency for International Development | 2 | 0.06% | 2 (100%) | 0 (0%)
U.S. International Trade Commission | 25 | 0.74% | 23 (92%) | 2 (8%)
U.S. Office of Government Ethics | 1 | 0.03% | 1 (100%) | 0 (0%)
U.S. Office of Personnel Management | 2 | 0.06% | 2 (100%) | 0 (0%)
Unknown Agency Affiliation | 21 | 0.62% | 10 (47.62%) | 11 (52.38%)
Total | 3,366 | 100.00% | 1,872 (55.29%) | 1,505 (44.71%)

TABLE 2 Background Characteristics from Resumes

Characteristic | Bush | Obama | Total
Executive Branch * | 384 (20.63) | 429 (28.50) | 813 (24.15)
Previous Agency * | 112 (6.02) | 214 (14.22) | 326 (9.69)
Subject Area * | 643 (34.55) | 745 (49.50) | 1,388 (41.24)
Task Area * | 1,196 (64.27) | 1,120 (74.42) | 2,316 (68.81)
White House * | 126 (6.77) | 70 (4.65) | 196 (5.82)
Congress * | 575 (30.90) | 401 (26.64) | 976 (29.00)
Public Management * | 186 (9.99) | 195 (12.96) | 381 (11.32)
State-Level Experience * | 299 (16.07) | 191 (12.69) | 490 (14.56)
Private Management | 192 (10.32) | 169 (11.23) | 361 (10.72)
Non-Profit Management * | 98 (5.27) | 173 (11.50) | 271 (8.05)
Was Prior Appointee * | 178 (9.56) | 265 (17.61) | 443 (13.16)
Last Job Political | 1,038 (55.78) | 858 (57.01) | 1,896 (56.33)
Worked on Campaigns * | 461 (24.77) | 547 (36.35) | 1,008 (29.95)
Worked on Transition Team * | 41 (2.20) | 131 (8.70) | 172 (5.11)
Worked on Inauguration Team | 104 (5.59) | 100 (6.64) | 204 (6.06)
Worked for Party * | 583 (31.33) | 206 (13.69) | 789 (23.44)
Held Prior Elected Office | 37 (1.99) | 21 (1.40) | 56 (1.72)

Note: This table provides a breakdown of appointees by background characteristic. Cell entries are the number of appointees possessing each characteristic, with the percentage of appointees holding that trait in parentheses.
The background trait variable is starred if there is a statistically significant difference between the number of Obama appointees holding that trait and the number of Bush appointees.

TABLE 3 Assessing Loyalty and Competence, Model 1

Variable | 2 Factor (1)
Loyalty
Worked on Campaign | 1.00 (Fixed)
Worked on Inauguration Team | 4.97* (4.32, 5.75)
Worked for Party | 1.30* (1.15, 1.46)
Worked in the White House | 5.40* (4.63, 6.33)
Worked on Transition Team | 5.31* (4.58, 6.19)
Last Job was Political | 0.05 (-0.01, 0.11)
Worked in Congress | 0.77* (0.67, 0.89)
Competence
Has Subject Area Expertise | 1.00 (Fixed)
Has Task Experience | -0.50* (-0.64, -0.38)
Has Non-Profit Management Experience | 5.87* (5.11, 6.83)
Has Private Management Experience | 4.76* (4.15, 5.53)
Has Public Management Experience | 5.55* (4.83, 6.41)
Has State-Level Work Experience | 3.54* (3.08, 4.08)
Previously Held Elected Office | 23.15* (16.63, 46.91)
Has Prior Executive Branch Experience | 3.04* (2.65, 3.51)
Has Prior Agency Experience | 6.99* (6.04, 8.13)
Was Previously an Appointee | 5.53* (4.81, 6.40)
N | 3366
Deviance | 42429.96*
Deviance Information Criterion (DIC) | 50938.10

Note: Two-factor Bayesian confirmatory model estimated. Worked on campaigns and subject area expertise are fixed for identification. Coefficients are posterior medians; stars indicate that the 95% credible interval (in parentheses) does not include zero.
TABLE 4 Assessing Competence Subdimensions

Variable | 1 Factor (2) | 2 Factors (3) | 3 Factors (4)
Has Subject Area Expertise | 1.00 (Fixed) | 1.00 (Fixed) | 1.00 (Fixed)
Has Task Experience | -0.37* (-0.45, -0.29) | -0.48* (-0.60, -0.37) | -0.43* (-0.53, -0.33)
Has Non-Profit Management Experience | 5.14* (4.64, 5.67) | 6.05* (5.26, 6.96) | 5.86* (5.10, 6.79)
Has Private Management Experience | 4.08* (3.72, 4.47) | 4.87* (4.27, 5.58) | 4.56* (4.00, 5.19)
Has Public Management Experience | 5.08* (4.60, 5.59) | 5.51* (4.82, 6.33) | 1.00 (Fixed)
Has State-Level Work Experience | 2.87* (2.62, 3.13) | 3.58* (3.15, 4.12) | 0.67* (0.60, 0.75)
Previously Held Elected Office | 13.01* (10.62, 16.92) | 109.94* (42.07, 249.54) | 73.26* (13.29, 206.14)
Has Prior Executive Branch Experience | 2.74* (2.52, 2.97) | 1.00 (Fixed) | 1.00 (Fixed)
Has Prior Agency Experience | 6.84* (6.10, 7.70) | 2.87* (2.16, 4.37) | 2.89* (2.19, 4.30)
Was Previously an Appointee | 5.20* (4.70, 5.77) | 2.16* (1.81, 2.67) | 2.16* (1.81, 2.68)
N | 3366 | 3366 | 3366
Deviance | 22652.34* | 20297.90* | 20074.78*
Deviance Information Criterion (DIC) | 25962.00 | 26024.90 | 25839.70

Note: This table presents the results of the sublevel analyses on the competence factor. Variable association with subclusters within the competence dimension follows earlier results from Figure 2.
TABLE 5 Assessing Loyalty Subdimensions

Variable | 1 Factor (5) | 2 Factors (6)
Worked on Campaign | 1.00 (Fixed) | 1.00 (Fixed)
Worked on Inauguration Team | 7.81* (6.75, 9.25) | 5.39* (4.57, 6.54)
Worked for Party | 1.86* (1.70, 2.03) | 1.22* (1.08, 1.36)
Worked in the White House | 8.38* (7.15, 10.16) | 5.25* (4.41, 6.41)
Worked on Transition Team | 6.75* (5.94, 7.75) | 4.40* (3.78, 5.16)
Last Job was Political | 0.19* (0.11, 0.27) | 1.00 (Fixed)
Worked in Congress | 1.05* (0.94, 1.16) | 88.75* (21.52, 235.25)
N | 3366 | 3366
Deviance | 18773.44* | 14457.91*
Deviance Information Criterion (DIC) | 21959.90 | 28918.90

Note: This table presents the results of the sublevel analyses on the loyalty factor. Variable association with subclusters within the loyalty dimension follows earlier results from Figure 3.

TABLE 6 Appointee Loyalty and Competence Revisited

Variable | 2 Factor (7) | 3 Factors (8) | 4 Factors (9) | 5 Factors (10)
Loyalty
Worked on Campaign | 1.00 (Fixed) | 1.00 (Fixed) | 1.00 (Fixed) | 1.00 (Fixed)
Worked on Inauguration Team | 4.97* (4.32, 5.75) | 4.99* (4.36, 5.82) | 4.96* (4.34, 5.72) | 5.05* (4.41, 5.83)
Worked for Party | 1.30* (1.15, 1.46) | 1.31* (1.17, 1.50) | 1.31* (1.16, 1.47) | 1.29* (1.15, 1.47)
Worked in the White House | 5.40* (4.63, 6.33) | 5.38* (4.66, 6.34) | 5.34* (4.60, 6.20) | 5.37* (4.58, 6.30)
Worked on Transition Team | 5.31* (4.58, 6.19) | 5.19* (4.49, 6.08) | 5.12* (4.44, 5.94) | 5.18* (4.49, 6.02)
Last Job was Political | 0.05 (-0.01, 0.11) | 0.05 (-0.01, 0.12) | 0.06 (-0.00, 0.12) | 1.00 (Fixed)
Worked in Congress | 0.77* (0.67, 0.89) | 0.77* (0.67, 0.89) | 0.77* (0.67, 0.88) | -3.47* (-4.98, -2.58)
Competence
Has Subject Area Expertise | 1.00 (Fixed) | 1.00 (Fixed) | 1.00 (Fixed) | 1.00 (Fixed)
Has Task Experience | -0.50* (-0.64, -0.38) | -0.59* (-0.72, -0.46) | -0.49* (-0.62, -0.38) | -0.52* (-0.64, -0.40)
Has Non-Profit Management Experience | 5.87* (5.11, 6.83) | 5.96* (5.24, 6.82) | 5.85* (5.08, 6.81) | 5.78* (5.08, 6.65)
Has Private Management Experience | 4.76* (4.15, 5.53) | 4.91* (4.32, 5.56) | 4.65* (4.07, 5.36) | 4.65* (4.09, 5.29)
Has Public Management Experience | 5.55* (4.83, 6.41) | 5.38* (4.73, 6.09) | 1.00 (Fixed) | 1.00 (Fixed)
Has State-Level Work Experience | 3.54* (3.08, 4.08) | 3.81* (3.38, 4.35) | 0.73* (0.66, 0.81) | 0.73* (0.66, 0.81)
Previously Held Elected Office | 23.15* (16.63, 46.91) | 105.40* (38.67, 241.86) | 77.03* (13.38, 228.66) | 82.11* (14.73, 230.93)
Has Prior Executive Branch Experience | 3.04* (2.65, 3.51) | 1.00 (Fixed) | 1.00 (Fixed) | 1.00 (Fixed)
Has Prior Agency Experience | 6.99* (6.04, 8.13) | 2.61* (2.03, 3.67) | 2.54* (1.98, 3.51) | 2.60* (2.06, 3.57)
Was Previously an Appointee | 5.53* (4.81, 6.40) | 2.23* (1.83, 2.75) | 2.27* (1.89, 2.85) | 2.23* (1.86, 2.78)
N | 3366 | 3366 | 3366 | 3366
Deviance | 42429.96* | 40213.47* | 39969.26* | 39979.63*
Deviance Information Criterion (DIC) | 50938.10 | 49672.40 | 50768.00 | 51066.99

Note: This table presents the results of four separate Bayesian confirmatory factor models, with increasing model complexity.
Authors: Yu Ouyang, Evan T. Haglund, Richard W. Waterman
Publication: Presidential Studies Quarterly
Date: March 1, 2017