Quality Assurance in Other Sectors: Lessons for Higher Education Reformers

Executive Summary

In response to growing concerns about the US higher education system, policymakers have launched a range of efforts to improve the system's quality. But this is easier said than done. The system is populated with a diverse array of programs offered through a mix of public, nonprofit, and for-profit providers. Furthermore, the outcomes that students and the public care about are frequently difficult to measure and are integrally tied to the characteristics and behavior of students themselves. All these factors confound efforts to improve quality.

In reality, however, numerous sectors suffer from these challenges in one way or another. Policymakers should, therefore, look to learn from efforts to ensure quality, accountability, and consumer protection in these other sectors. In that spirit, this paper examines four sectors that face many of these same challenges: health care (with a focus on transparency efforts), workforce development (specifically, the system's long-standing emphasis on outcome measurement and accountability), charter schools (a model of deregulation and delegated oversight), and housing finance (an example of risk sharing).

First, in the health care sector, scholars have been conducting research on the efficacy of transparency efforts--typically referred to as report cards--as well as any unintended consequences that might arise as a result of them. These efforts in health care are similar to efforts such as the College Scorecard that are taking place within the higher education system.

The data are mixed on how report cards affect consumer behavior in health care. But regardless of the impact on student behavior, research in the health care sector suggests that increased transparency would change the behavior of institutions themselves. This could simply reflect schools' anticipation of how students might respond or could reflect other concerns, such as those related to an institution's reputation among its peers. Note, however, that as part of this response, institutions would likely take steps to change the types of students they are willing to serve. To the degree that policymakers are concerned about this, they should take steps to include on report cards risk-adjusted measures or, better yet, measures broken out by specific subpopulations.

Second, the federal government has for decades held service providers in the workforce development system accountable for educational and employment outcomes, making it a helpful example of performance-based accountability. Researchers have found evidence suggesting that providers in the workforce development system engaged in "cream skimming"--that is, choosing those participants who are most likely to be successful over ones who are harder to serve but might benefit more--and gamed outcome measures to enhance their performance. Therefore, higher education policymakers must recognize that any performance-based accountability system can create incentives for providers to change who they serve.

Policymakers must also take care to invest in data that are easily validated and in measures that are clearly defined and not easily gamed. At the federal level, repealing the unit record ban--which prevents the Department of Education from collecting information on student enrollment--could enable the federal government to do most of the legwork around collecting and publishing a number of relevant outcomes in a way that avoids these challenges.

Third, the burgeoning charter school sector provides a good example of delegated oversight that higher education can learn from. A growing body of research on effective charter authorizing shows that organizations that see authorizing and accountability as part of their core purpose tend to be the most effective. Authorizer independence from the entities being regulated, and from politics, is also essential, as is creating some kind of accountability mechanism for authorizers on the basis of the performance of their school portfolios.

The most fundamental lesson that emerges from the charter sector is that building a parallel path for market entry can fundamentally change the supply side of a quasi-market such as higher education. Charter schools did not emerge from a complete overhaul of public schooling. Instead, they emerged because policymakers created space for new schools whose leaders were willing to be held accountable for student outcomes. Likewise, in higher education, reforming the accreditation process directly will be a long and difficult road. But that should not prevent policymakers from creating space for promising organizations that are willing to be held accountable for their student outcomes.

Finally, recent financial reform legislation known as the Dodd-Frank Act imposes requirements on mortgage lenders similar to proposals to impose risk sharing on higher education institutions. Research suggests that mortgage portfolios where originators did retain some "skin in the game" outperformed those where the originator had no risk retention. The degree of risk retention need not be large.

Higher education policymakers should also consider whether institutions might have an ability to simply raise tuition to effectively "price in" the risk they are obligated to take under any "skin in the game" proposal. Although an institution choosing this route might deter some students from applying and would increase the risk of repayment problems among its graduates, the institution might gain more in revenue than it loses in additional fines or lost enrollment. It may therefore be in the institution's interest to raise its price. To the degree that this is the case, policymakers should implement a risk-sharing scheme in conjunction with other proposals (such as greater transparency) that help strengthen the forces of market discipline.

Quality Assurance in Other Sectors: Lessons for Higher Education Reformers

This paper is the third in a series examining higher education quality assurance from a number of perspectives.

For decades, federal higher education policy has worked to ensure that no qualified students would be denied access to college because they lacked the financial means. This focus on access has raised college enrollment rates, but attainment rates have grown more slowly. Over time, observers have come to question the quality of many of the institutions and programs that students use federal aid money to access. After all, on a range of measures--graduation rates and student loan default rates, for example--the system as a whole is not performing nearly as well as it should.

In light of these concerns, policymakers have launched a range of efforts to ensure that students pursue programs that will serve them well. These efforts include initiatives such as the College Scorecard to promote greater transparency around college costs and student outcomes; outcomes-based accountability policies such as the "gainful employment" regulation and President Obama's proposed rating system; and proposals to give colleges and universities "skin in the game" when students default on their loans. (1) In addition, policymakers have started to focus on breaking down supply-side barriers--such as those created by the accreditation system--to better allow for new educational models that might shake up the status quo. (2)

Nevertheless, improving both the demand and supply sides of higher education is no easy task. The system contains a diverse array of programs--from traditional two- and four-year degree programs to vocational and technical programs--with a range of public, nonprofit, and for-profit providers. In addition, many policymakers and researchers would argue that education provides public and private benefits, meaning that both the student and society have a vested interest in particular outcomes--and not necessarily the same ones--from an educational experience. Furthermore, almost all outcomes of interest are difficult to measure and typically play out over long time horizons. Last, unlike most consumer products, higher education outcomes are integrally tied to the characteristics and behavior of students themselves, meaning that institutions are not entirely in control of their outcomes and can change their performance by adjusting the composition of who they serve.

Fortunately, these challenges are not unique to higher education. In reality, numerous other sectors confront these challenges in one way or another. Given these similarities, policymakers should look to learn from efforts to ensure quality, accountability, and consumer protection in these other sectors.

In that spirit, this paper examines four sectors that face many of these same quality assurance and consumer protection challenges. In the interests of brevity, the paper does not try to tackle each sector comprehensively, but instead identifies a particular issue in each sector that parallels contemporary debates in higher education policy. The sectors are health care (with a focus on transparency efforts), workforce development (specifically, the system's long-standing emphasis on outcome measurement and accountability), charter schools (a model of deregulation and delegated oversight), and housing finance (an example of risk sharing). Although the products or services at the heart of these sectors are fundamentally different from one another, many of the core issues confronting policymakers and consumers are the same. What can we learn from them?

Health Care

Higher education and health care markets have a lot in common. Both sectors operate as quasi-markets in that they not only provide consumers with the power to choose from among a variety of providers offering a wide range of services, but also involve significant government intervention. In addition, in both markets, many consumers use funds from a third-party payer to help them purchase those services--direct subsidies or loans in the context of higher education, health insurance in the case of health care.

Even more fundamentally, both health care and education are difficult products to value. That is, unlike the purchase of basic consumer products, where individuals can readily assess the quality of what they are buying, it is difficult to evaluate in advance the quality of an educational program or the services provided by a physician or hospital. In both cases, therefore, knowing how students or patients previously served by a provider have fared can provide insight into the quality of service being offered by that provider.

In higher education, this reality has led to large-scale efforts to provide more transparency regarding student outcomes for each educational institution. For example, the National Center for Education Statistics runs the College Navigator website, which offers students and their parents a wide range of statistics on higher education institutions, including graduation rates, student loan default rates, and other information. (3) Efforts such as the College Scorecard, as well as a number of private rankings and scorecards, attempt to distill this information to make it more accessible to consumers. (4)

In a similar vein, health care policymakers and researchers have been working to develop tools and processes that provide greater transparency around outcomes for health plans and providers. For example, using a standardized survey tool known as the Consumer Assessment of Healthcare Providers and Systems, developed by the Department of Health and Human Services, a wide range of organizations have conducted more than five million surveys of consumers since 1998 to assess levels of satisfaction with different health plans across the country. (5) Likewise, a consortium of physicians, hospitals, and insurers in Wisconsin formed the Wisconsin Health Reports website to provide quality information to consumers about doctors and hospitals in the state. (6)

In both higher education and health care, however, efforts to promote transparency face significant questions about their efficacy and potential for unintended consequences. What lessons can higher education take from the health care sector's multidecade experience in trying to promote transparency?

Lesson 1: Consumer perceptions of quality matter. Existing research suggests that consumers often believe that quality is uniformly high across the health care system, a perception that is not borne out in patient outcomes data. For example, a 1996 survey of consumers conducted by the Kaiser Family Foundation and the Agency for Healthcare Research and Quality found that roughly half believed there were little to no differences in quality across primary care physicians, specialists, and hospitals in their local area. (7) A similar survey conducted in 2005 in Wisconsin found that between 40 and 60 percent of consumers believed there were no differences across local hospitals with respect to key aspects of care quality. (8) To the degree that consumers do not recognize variations in quality, they will not see any utility in examining health care report cards. We should not be surprised, then, that in a survey of health care professionals, nearly three-quarters of respondents argued that consumers' lack of awareness of the variation in quality across providers is a major barrier to the effectiveness of transparency initiatives. (9)

Transparent information about costs and quality can change consumer beliefs about which organizations provide the most valuable product, however. When consumers in the 2005 Wisconsin survey were shown information about hospital outcomes, the fraction that believed that quality varies across local hospitals increased by roughly 20 percentage points. (10) A 2012 study showed that when health care consumers are shown quality and cost information together, they are one-third to one-half as likely to view the highest-cost providers as high quality. (11)

Lesson 2: Consumers say they want information about cost and quality; however, they are frequently unaware of what is already out there. Numerous surveys and focus groups have found that consumers value health care quality and want more information about it to help them choose a provider or health plan. (12) One study used 16 focus groups to assess consumer interest in various indicators of hospital quality. The researchers found a high level of interest in such information across all groups, including participants in several groups asking where they could get the information "right now." (13)

Nevertheless, a sizable literature has shown that only a minority of consumers are aware of various sources of information about the quality of health plans or providers. For example, a 2001 survey of California residents found that less than 40 percent reported seeing any information comparing the performance of health plans, such as measures of customer satisfaction. (14) Also, according to a 2006 survey, 12 percent of Americans claimed to have seen information about the quality of doctors in the past year, and 24 percent said the same about hospitals. (15) Researchers have found that disseminating data through community organizations that consumers trust and interact with, such as the American Association of Retired Persons, can increase consumers' awareness and use of the information. (16)

Lesson 3: Evidence regarding consumers' use of information is mixed. Researchers have examined the degree to which information about costs and quality actually changes consumers' behavior and found mixed results. On the one hand, according to a 2006 survey, only 20 percent of consumers who recalled seeing health care quality information in the previous year said that information influenced their health care choices. (17) Another study found that awareness of information about plan performance, including results from customer satisfaction surveys as well as other measures of quality, did not impact the degree to which consumers in the study chose to switch health plans. (18) Other studies have failed to find a relationship between the availability of information and consumer behavior. (19)

On the other hand, various studies have found that information about quality does alter the choices consumers make. For example, in 2002 Wedig and Tai-Seale found that health plan report cards distributed to federal employees had a large positive effect on plan choice, with a one-standard-deviation increase in a plan's quality of care increasing the chances that it would be chosen by more than half. (20) Also in 2002, Beaulieu found that Harvard employees were roughly 10 percent more likely to choose a health plan for every unit increase in that plan's quality rating on a scorecard provided to help the selection process. (21) Other studies have found similar evidence. (22)

There may be a variety of reasons why information effects vary significantly across studies. First, the process through which information is made available to consumers can significantly impact the degree to which it is used. For example, in 2011, Kling and colleagues found that providing targeted plan information to Medicare Part D beneficiaries--rather than relying on them to seek it out--increased the odds that a beneficiary would switch to a lower-cost plan by 11 percentage points. (23) In addition, some consumers may not respond to information about quality simply because they lack choices, possibly because of a dearth of providers in their geographic area, a physician who refers to only certain specialists, or other reasons. Finally, how information is presented on a report card can significantly impact its use by consumers. (24)

Consumers are not the only actors who benefit from increased transparency. Third parties, particularly health plans, can also make effective use of quality and cost data on consumers' behalf. Health plans, for example, can use such information to help inform the development of plan networks and pay-for-performance arrangements. (25)

Lesson 4: Evidence suggests report cards do alter provider behavior. Some researchers have bypassed the question of how report cards impact consumer behavior and have more directly measured their effects on providers. In a 2009 study, researchers examined the response of hospitals as they received report card results about their quality of care. Although the response was small on average, the authors found that most hospitals implemented specific quality improvement processes in response to measures on the report card that were most relevant to them. (26) Another study tried to assess the degree to which cardiac surgeons respond to health care report cards because they are concerned about losing market share or about such intrinsic factors as a diminished reputation among their peers. The study found that the decline in patient mortality rate as a result of intrinsic motivations was three times as large as the decline resulting from concerns about the potential for lost market share. The results suggest that providers may respond as much out of concern for their reputation within their profession or other intrinsic factors as they do out of concerns related to how report cards might change consumer behavior. (27)

Not all provider responses to report cards are necessarily positive. A 2005 study by Werner and colleagues found that the introduction of report cards that measured the outcomes from cardiac surgery decreased the incidence of such surgeries among black and Hispanic patients relative to white patients, presumably because these patients were less likely to have successful outcomes from the surgery. (28) Although certainly not perfect, risk-adjusting outcome measures can help to limit these effects; in the example given here, the report cards did use risk-adjusted data, but the measures did not account for the race of patients. In addition, focus groups have shown that simply acknowledging that outcomes data are affected by multiple factors outside the provider's control can increase the credibility of the report in the minds of consumers. (29)

What do these lessons imply about efforts to boost transparency in higher education? First and foremost, the experience in health care highlights the importance of simply making consumers aware of variations in quality among different educational providers. Evidence from health care suggests that juxtaposing cost and quality information can change consumer perceptions about which providers might offer the most educational value.

Second, although making information available through various report cards certainly helps to create additional market transparency, experience in the health care sector shows that a sizable fraction of consumers is still likely to be unaware of such efforts. Therefore, disseminating information about institutions and programs through trusted brands such as the College Board and trusted resources such as guidance counselors could be very helpful.

Third, in terms of how data might impact consumer behavior in higher education, the studies in the health care sector show mixed results, as discussed above. Nevertheless, how the data are presented can clearly make a big difference. There is likely value in giving a wide range of organizations access to outcomes data so they can experiment with a variety of report card formats and learn which formats work best. Also, third parties, such as guidance counselors, college-access nonprofits, and private lenders, would likely be able to use such data effectively to help consumers navigate to quality programs and institutions. Private lenders, in particular, could base how much they are willing to lend on these data, providing an important consumer protection as well as a check on tuition growth.

Fourth, the studies in the health care sector suggest that schools will likely respond to transparency initiatives regardless of how much those initiatives actually change student behavior. This could simply reflect schools' anticipation of how students may respond or could reflect other concerns, such as those related to an institution's reputation among its peers. Note, however, that as part of this response, institutions would likely take steps to change the types of students they are willing to serve. To the degree that policymakers are concerned about this, they should take steps to include on report cards risk-adjusted measures or, better yet, measures broken out by specific subpopulations.

Last, just as some health care consumers pay little attention to report cards because they have few options to choose from, for some students, increased market transparency will do little good because they have limited educational options in their geographic area. (30) This does not mean that policymakers should not take steps to boost market transparency but that they must also take steps to ensure that all students have access to a sufficient set of high-quality educational options.

Workforce Development

While transparency around costs and outcomes can help consumers navigate to worthwhile programs--thereby strengthening the forces of market discipline--the difficulty that consumers have in evaluating the quality of health care and education a priori means market-based accountability alone will likely not solve the quality assurance problem entirely. In light of this situation, many reformers have argued that the federal government does have some role in holding institutions directly accountable for performance. But before policymakers take this as a cue to construct an elaborate and complicated accountability system for higher education, it is worth asking what we can learn from other policy areas where states or the federal government have attempted to hold organizations, public or private, accountable for human capital development.

One obvious candidate is the workforce development system. Although this system is focused largely on preparing individuals for employment, it can still be a helpful reference for discussions of higher education accountability. After all, career and technical education represents a significant fraction of postsecondary enrollment. In fact, one study estimates that almost one-third of Pell Grant dollars are going to students pursuing career education. (31) In addition, the federal government has, for decades, worked to hold service providers in the workforce development system accountable for educational and employment outcomes, making it a helpful example of performance-based accountability.

Performance-based accountability was first introduced to the workforce development system through the Job Training Partnership Act (JTPA) passed in 1982. Under JTPA, the federal government would provide grant funds to states, which would then disburse them to local service delivery areas (SDAs). SDAs were responsible for providing training and job search assistance to individuals struggling to find employment. As part of this arrangement, JTPA established performance benchmarks for states and SDAs on a number of specific outcome measures, including employment, retention, earnings, and the acquisition of skills. SDAs that outperformed those benchmarks were eligible to receive a bonus of up to 20 to 30 percent of their annual budget. (32) But while each state and SDA had specific outcome benchmarks to aim for, the process for setting those benchmarks was not blind to the types of workers each entity was serving. Policymakers did not want to give SDAs an incentive to serve only those workers who were the most likely to be successful, behavior that would have left many needy individuals who might benefit from services out of luck. Therefore, in setting benchmarks, JTPA adjusted the local standards to reflect the characteristics of the population served by an SDA as well as current economic circumstances. (33)
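To make the adjustment concept concrete, here is a minimal sketch of how such a benchmark calculation could work. It is purely illustrative: the factors, weights, and figures are hypothetical stand-ins, not the Department of Labor's actual adjustment model.

```python
# Illustrative sketch of a risk-adjusted performance benchmark in the
# spirit of JTPA's adjusted local standards. All factors, weights, and
# figures below are hypothetical.

BASELINE_PLACEMENT_RATE = 0.60  # expected job-placement rate for an average SDA

# Hypothetical weights: how each characteristic shifts the expected rate.
ADJUSTMENTS = {
    "share_long_term_unemployed": -0.15,   # harder-to-serve caseload lowers the target
    "share_no_high_school_diploma": -0.10,
    "local_unemployment_rate": -0.50,      # weaker labor market lowers the target
}

def adjusted_benchmark(sda_profile: dict, national_avg: dict) -> float:
    """Shift the national baseline by how an SDA's caseload and local
    economy differ from the national average."""
    target = BASELINE_PLACEMENT_RATE
    for factor, weight in ADJUSTMENTS.items():
        target += weight * (sda_profile[factor] - national_avg[factor])
    return target

national_avg = {
    "share_long_term_unemployed": 0.30,
    "share_no_high_school_diploma": 0.25,
    "local_unemployment_rate": 0.06,
}
# An SDA serving a harder-to-employ population in a weaker local economy:
sda = {
    "share_long_term_unemployed": 0.50,
    "share_no_high_school_diploma": 0.40,
    "local_unemployment_rate": 0.09,
}

print(f"Adjusted target: {adjusted_benchmark(sda, national_avg):.1%}")
# Prints "Adjusted target: 54.0%": the target falls below the 60 percent
# baseline, so the SDA is not penalized simply for enrolling participants
# who are harder to place.
```

The point of the adjustment is the one JTPA's designers intended: an SDA that takes on a harder-to-serve caseload is measured against a correspondingly lower bar, softening the incentive to turn such participants away.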

In 1998, JTPA was reauthorized as the Workforce Investment Act (WIA). WIA expanded the types of outcome measures included--adding a customer satisfaction measure, for instance--and required that states set up administrative systems to capture more accurate outcomes data (rather than relying on follow-ups with participants). Also, instead of adjusting performance levels to reflect participant characteristics, WIA required that benchmarks be determined through a negotiation process informed by historical data about performance, the characteristics of individuals who had received services in a given region, and the types of services offered in a particular area or state. Finally, in addition to being eligible to receive bonuses for good performance (as was the case under JTPA), under WIA, states could lose up to 5 percent of their WIA grant for failing to meet their target outcomes. (34)

Researchers have looked closely at the performance management system imposed by both JTPA and WIA. (35) Although this list is not exhaustive, a number of lessons that emerge from this literature could be useful to policymakers and researchers in the higher education debate.

Lesson 1: Performance-based accountability can create incentives for providers to change who they serve. In establishing a performance management system based on outcomes, policymakers must be careful to ensure that they are not encouraging providers to simply "cream skim," or choose those participants who are most likely to be successful over ones who are harder to serve but might benefit more. JTPA adjusted performance targets based on population characteristics to help address this issue. However, even with adjustments, some studies found evidence of cream-skimming behavior, likely because frontline staff have better information about an applicant's chances of success than the adjustment models could capture. (36) More recent studies suggest that cream skimming became an even larger issue under WIA. That is because WIA's negotiated performance benchmarks, while informed by the types of participants a provider had served historically, did not adjust for the mix of participants a provider had actually served that year. (37) For example, a 2002 Department of Labor analysis found that the number of participants who made it to the enrollment stage under WIA was far lower than under JTPA, suggesting that local providers were hesitant to enroll participants who might jeopardize their performance rating. (38)

Lesson 2: A failure to clearly define measures can encourage providers to game the system. Researchers examining JTPA found evidence that local providers took elaborate steps to game the performance system. For example, to be counted in the performance management system under JTPA, a participant had to be formally enrolled in the program's services. Because of budget constraints, not all participants were ultimately enrolled, and program staff had significant flexibility in terms of who would be enrolled and the timing of that enrollment. The combination of high-stakes performance measurement and discretion over who was enrolled created perverse incentives. For instance, staff providing job search assistance had an incentive to delay enrollment as long as possible until an applicant showed signs that he or she would be able to find a job. (39) Similarly, some providers took advantage of unclear rules regarding when a participant must be considered to have exited the system to improve their performance outcomes. For example, one study found that local providers often manipulated the timing of case closures to maximize the performance bonuses they would receive. Because providers were assessed against a performance standard at the end of each fiscal year, they would close significant numbers of poor-performing cases in years when they were already comfortably above their performance goal but would hold open such cases in bad years when closures might jeopardize their achievement of the performance standard. (40)

Lesson 3: Data can be hard to come by, but investing in data collection and infrastructure can significantly increase outcomes transparency while minimizing costs for providers. Under WIA, participants who qualify for training assistance can receive an individual training account with funding that can be spent on any approved provider. WIA requires that states maintain an "eligible training providers list" that ensures that individual training account funds are only spent on providers meeting a minimum level of performance. The law specifically requires that states set standards on the basis of at least six outcome measures: program completion rate; employment rate; retention rate; initial earnings; earnings after six months; and the rate of licensure, certification, or attainment of academic degree. (41) In addition--and equally important--WIA required that states gather employment and earnings data from the unemployment insurance system rather than relying on self-reported data from the providers themselves.

Many states struggled with the implementation of these requirements and had to obtain waivers from the Department of Labor. However, several states, including Florida, Washington, New Jersey, and Texas, have developed successful data collection platforms that allow for reliable outcomes reporting across all training providers. In other words, investing in the infrastructure needed to collect good data can enable a transparent marketplace that informs consumers and policymakers alike about the performance of different training providers. (42)

Lesson 4: It is important to invest in data validation. While building the platforms that enable accurate data collection is essential, policymakers should not overlook the need to develop mechanisms to ensure the data are valid. This is particularly important within decentralized systems such as WIA, where local providers are responsible for some of the data collection but may have an incentive to manipulate their data. In 2002, the Employment and Training Administration at the Department of Labor implemented a standardized data validation process for WIA. This process included the development of software that enabled states to submit data efficiently to the federal government, perform checks of the data's integrity, and assess the accuracy of performance calculations. The software also allowed state staff to randomly sample records to ensure they matched the submitted data set. (43) Performance data serve as the foundation of any assessment and accountability system; these processes help ensure that policymakers, agency staff, and the organizations held accountable can trust WIA's performance measures.
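As a rough sketch of what such validation involves, the example below recomputes one reported outcome measure from the underlying records and flags a random sample for manual review against source documents. The record format, tolerance, and sample size are hypothetical; the actual WIA validation software and file formats differed.

```python
import random

def recompute_employment_rate(records: list) -> float:
    """Recompute the entered-employment rate directly from exit records
    rather than trusting the provider's self-reported aggregate."""
    exiters = [r for r in records if r["exited"]]
    employed = [r for r in exiters if r["employed_after_exit"]]
    return len(employed) / len(exiters) if exiters else 0.0

def validate_submission(records, reported_rate, sample_size=25, tolerance=0.005):
    # Check 1: does the reported aggregate match the underlying records?
    recomputed = recompute_employment_rate(records)
    aggregate_ok = abs(recomputed - reported_rate) <= tolerance

    # Check 2: draw a random sample of records to verify against source
    # documents (flagged here; in practice staff review each one).
    audit_sample = random.sample(records, min(sample_size, len(records)))
    return aggregate_ok, recomputed, audit_sample

# Hypothetical submission from one provider: 300 exiters, two-thirds employed.
records = [{"exited": True, "employed_after_exit": i % 3 != 0} for i in range(300)]
ok, recomputed, audit_sample = validate_submission(records, reported_rate=0.667)
print(f"Aggregate check passed: {ok}; recomputed rate: {recomputed:.3f}; "
      f"{len(audit_sample)} records flagged for manual review")
```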

What do these findings mean for higher education? Researchers have found evidence suggesting that providers in the workforce development system engaged in "cream skimming" of participants as well as accounting manipulations to enhance their performance. Therefore, higher education policymakers must recognize that any performance-based accountability system can create incentives for providers to change who they serve. In part that would be by design; currently, far too many colleges are rewarded for filling their seats regardless of their students' potential for success. But policymakers will need to consider risk adjustment or a bonus for institutions that graduate Pell recipients if their goal is to mitigate such behavior.

Policymakers must also take care to invest in data that are easily validated and to select measures that are clearly defined and not easily gamed. States such as Florida, Washington, New Jersey, and Texas have shown that investing in platforms that streamline the process of collecting outcomes data can both increase their accuracy and significantly ease any collection burden on educational providers themselves. At the federal level, repealing the unit record ban--which prevents the Department of Education from collecting information on student enrollment--could enable the federal government to do most of the legwork around collecting and publishing a number of relevant outcomes in a way that avoids these challenges.

Charter Schools

Higher education accreditation began in the 19th century as a voluntary form of institutional peer review. However, as Congress expanded federal aid programs--first through the GI Bill in the 1940s and 1950s and subsequently through the expansion of federal loan and grant programs in the 1960s and 1970s--policymakers needed a way to ensure that federal investments in higher education were well spent. At the same time, there was a strong reluctance to have the federal government play a direct role in evaluating academic quality. As a result, Congress made accreditation--formerly a voluntary form of self-regulation--the primary "gatekeeper" to federal financial aid dollars.

Decades later, policymakers and researchers have come to question whether accreditors are well suited to the task they were assigned. As creatures of the institutions they oversee, accreditors have naturally tended to evaluate institutions on the basis of a variety of input measures that reflect a traditional college campus--faculty credentials, libraries, facilities, and so on--rather than on whether an institution's students learn anything. This has the unfortunate effect of blocking new and potentially innovative providers from the market and protecting existing, poor-performing colleges from sanctions.

As a result, a variety of reform proposals have been put forward, ranging from attempts to improve the accreditation process--by requiring a stronger emphasis on outcomes, for example--to bringing in new types of accreditors that might be more effective at oversight. In light of these proposals, it is helpful to ask whether this type of delegated oversight has been employed in other sectors and whether we can learn anything as a result.

In fact, the elementary and secondary education system in the United States has quite a bit of experience with delegated oversight through its burgeoning charter school sector. Charter schools began in 1991 under a straightforward compact: they would be given increased autonomy in exchange for increased outcomes-based accountability. To gain and maintain access to the market, charter schools must be authorized by an organization that is responsible for ensuring that the schools meet the goals specified in their charters.

In the early years of the charter school movement, proponents focused on expanding the number of charter schools. However, as the charter movement has grown--charters now serve 2.2 million students in more than 6,000 schools in almost every state in the country (44)--researchers have increasingly confronted the challenge of ensuring charter quality. And although a number of factors impact a school's quality, the effectiveness of charter authorizers themselves--the entities responsible for enforcing the accountability side of the horse trade--varies widely across states, contributing to variations in charter performance. (45)

The Center for Education Reform, which looked at data on charter closures going back to 1993, found that charter schools face roughly a 15 percent closure rate. (46) This rate reflects a much greater willingness on the part of authorizers to hold schools accountable relative to higher education accreditors, who, the Government Accountability Office found, terminated accreditation for about 1 percent of accredited schools over the four-and-a-half-year period from October 2009 through March 2014. (47) In addition, a study by Hassel and Batdorff examined a sample of charter authorizers' decisions regarding the revocation of existing charters or the denial of new ones and judged those decisions to be well founded. (48) On the whole, therefore, while there is significant room for improvement in charter authorizing, there is a lot to learn from the most effective authorizers.

A growing body of research examines the factors that contribute to effective charter authorizing. What have researchers learned?

Lesson 1: Organizations that see authorizing and accountability as part of their core purpose are more effective. The most effective charter authorizers have tended to be those organizations that see authorizing (and accountability) as a central part of their mission and therefore devote the time, attention, staff, and resources the process deserves. (49) Authorizing is a complex, challenging process in its own right, involving both rigorous assessments of data as well as ongoing interactions with the schools in an authorizer's portfolio. Most important, effective authorizing requires a commitment to holding schools accountable for their student outcomes rather than simply falling into a compliance-driven mentality that counts inputs and monitors processes. (50)

In many cases, however, charter authorizing has simply fallen into the laps of local school districts that see it as secondary to their primary job (running public schools) or that view charter schools as unwelcome competition. (51) As a result, although school districts make up more than 90 percent of all authorizers, roughly 90 percent of district authorizers oversee five or fewer schools. (52) In these cases, the authorizing process is often at best neglected and understaffed, leaving a critical gap in the oversight of the schools in a district's portfolio. At worst, hostile school districts make it difficult for charter schools to win approval to operate, creating a significant barrier to entry. When the state of Minnesota recently required that all authorizers affirmatively request to take on that role--including meeting certain minimum standards in terms of what is expected of an authorizer--60 percent of the state's authorizers simply did not bother to apply for recertification. (53)

Lesson 2: Authorizers must be independent from both the entities they oversee and the political process. Effective charter authorizers must be receptive to new institutions (with potentially innovative models) and be willing to exercise effective oversight of the schools in their portfolio. Therefore, each authorizer needs to have a minimum level of independence from the entities it oversees. School districts, which make up most of the authorizers, may be tempted to view charter schools as competitors rather than as peers. As a result, they might be less willing to authorize charter schools that are promising because these schools are likely to draw students from traditional public schools. At the same time, they might be more willing to burden existing charters with unnecessary regulations or hostile oversight that is unrelated to whether those schools are serving students well. (54)

Effective authorizing also requires that leaders have the ability and the will to shut down poorly performing schools. Closing a school almost always generates opposition from the institution itself and the community it serves. In certain circumstances, parents might not be fully aware of a school's poor performance. In other cases, parents will have a sense of a school's shortcomings but may still see the school as their best option if the quality of other schools in their area is even worse. Because authorizers will always face an uphill battle in trying to close poorly run schools, authorizers need to have a degree of independence from the political process. (55)

Lesson 3: Oversight should be more than a binary approval process. The most effective authorizers do not rely on a simple binary approval process. Instead, they have created processes to work with schools that are failing so as to make it clear well in advance that they need to improve. Authorizers in Washington, DC; Oakland, California; and Hartford, Connecticut, for example, present performance information in tiers so that schools and the public know which schools need to improve to avoid facing closure. In addition, in cases where a school does not improve sufficiently and needs to be closed, effective authorizers take steps to try to identify other charter school operators who might be able to take the place of the failing school, thus providing a potential transition plan to help students impacted by the closure. (56) By taking these steps, effective authorizers lay the groundwork for making the difficult decision of closing a school if and when it becomes necessary.

Lesson 4: Overseeing authorizers is critical. States are beginning to look more closely at how they might implement oversight policies for authorizers on the basis of the performance of their portfolios of schools. For example, the state of Ohio initially allowed a wide array of authorizers but exercised little oversight of how well they performed. However, after a significant increase occurred in the number of failing charter schools, Ohio implemented a performance floor to close the worst-performing charter schools automatically. The state also prohibited the 20 percent of charter authorizers scoring the worst on the state's charter authorizer performance index from authorizing new schools until their performance improved. (57)

Although the research on effective authorizing emphasizes the importance of state policies that hold authorizers accountable, little consensus exists about what these policies should look like. The most basic step, though, is simply transparency. Colorado, for example, requires that its independent chartering board as well as all school districts publish an annual report showing the performance of all the schools in their respective portfolios. (58) States could also take additional steps, such as setting a performance floor for authorizers or using a more subjective review process based on a number of school performance metrics. (59) In Indiana, for example, the State Board of Education can suspend an authorizer's ability to open new schools if it has intervened to close or transfer more than 25 percent of the schools in that authorizer's portfolio. (60)

The key lesson that emerges from charter schooling is that building a parallel path for market entry can fundamentally change the supply side of a quasi-market such as higher education. Charter schools did not emerge from a complete overhaul of public schooling. Instead, they emerged because policymakers created space for new schools whose leaders were willing to be held accountable for student outcomes.

Likewise, in higher education, reforming the accreditation process directly will be a long and difficult road. But that should not prevent policymakers from creating space for promising organizations that are willing to be held accountable for their student outcomes. This might mean allowing for new authorizers, independent from traditional higher education, that agree to assess institutions largely on the basis of outcomes rather than a list of inputs that may not reflect advances in technology and educational delivery.

The most effective charter school authorizers could serve as a model of what new authorizers in higher education should look like. Rather than assigning the task of self-regulation to existing players in the industry--as our reliance on accreditation currently does--policymakers should look to organizations that are independent from politics and the entities being regulated, such as groups of employers. Moreover, those entities should see their mission as holding schools accountable for performance while simultaneously giving those schools the flexibility to be innovative and successful.

Finally, the research on charter school authorizing emphasizes the importance of creating some kind of accountability mechanisms for authorizers on the basis of the performance of their school portfolios. This task is not easy, but it is likely essential to ensuring that the authorizers do their part in rigorously overseeing the schools under their purview. At a minimum, simply creating more transparency around the performance of each authorizer's portfolio should help to reveal which are overseeing their schools effectively.

Housing Finance

In light of the high rates of delinquency and default on federal student loans, higher education policymakers and researchers have increasingly called for colleges to have some "skin in the game." (61) Otherwise known as risk sharing, such a policy would put institutions on the hook for a portion of the federal student loan dollars that their students were unable to repay.

The federal student loan program has almost no underwriting, meaning that students can get loans for any eligible program up to certain limits. From an institution's perspective, the easy availability of credit ensures that students and their parents almost always have the liquidity necessary to pay tuition, and the downside risk of poor investment decisions is borne only by families and taxpayers. The goal of a risk-sharing policy is to give institutions that benefit directly from students' access to credit a stake in those students' success.

The United States faced a similar, though not identical, dilemma in the housing market prior to the collapse of that market in 2007-08. In traditional mortgage markets, banks would make loans to consumers for home purchases and retain the mortgage on their books, bearing the risk of default directly. However, with the advent of securitization--a process whereby mortgages and other financial assets are packaged together as securities to be sold to investors--mortgage originators would issue loans for the purposes of selling them to third parties. Thus, mortgage originators were not retaining any of the credit risk in the transaction. (62)

This process would not necessarily be a problem if the other parties involved were in a position to perform the due diligence necessary to ensure that the mortgage was sound. However, research has identified a number of ways in which parties further along the securitization chain may struggle to monitor the quality of assets being securitized. For example, an investor purchasing a securitized mortgage is one or more steps removed from the entity that originated that mortgage. As a result, he or she typically has far less information about the borrower's ability to repay the debt than the originator does. In these circumstances, originators would be in a position to securitize mortgages that are not as sound as investors may think. (63)

In both contexts, institutions--schools in higher education and mortgage originators in housing finance--may face incentives to originate loans knowing they will suffer virtually none of the downside if the loan performs poorly. In turn, this can create incentive to originate as many loans as possible, even if doing so requires lowering credit standards. In the context of housing, this led to increased lending to low-income borrowers and a relaxation of requirements around income and employment documentation. In the context of higher education, it means an expanded willingness to enroll students who may have poor prospects of being successful in a given program.

After the collapse of the financial sector in 2007-08, Congress enacted the Dodd-Frank Wall Street Reform and Consumer Protection Act, otherwise known as Dodd-Frank. (64) Among a wide array of changes, the legislation included a new requirement that credit originators, as well as institutions securitizing that credit, retain some portion of the risk associated with the loans making up those securities. The intention was to create stronger incentives for originators and securitizers to not exploit their information advantage over the investors to whom they are selling these loans.

Although the risk retention provisions have only recently been finalized, (65) a number of lessons can be drawn from housing finance reform, including the research that shows why such requirements might be necessary. What are those lessons?

Lesson 1: Mortgages where the originator lacked "skin in the game" performed relatively worse during the crisis. A number of studies have examined the relative performance of different portfolios of assets, some of which included a risk retention component for the originator and others of which did not. For example, in 2010, Keys and colleagues found that loans that were more likely to be securitized were 20 percent more likely to default relative to loans that were likely to be held by the originator. (66) In addition, a study by Demiroglu and James in 2009 examined the losses among mortgages that varied in the extent to which originators continued to bear a portion of the risk and found that loans where the originator had skin in the game had losses that were less than half the rate of the other mortgages. (67)

Lesson 2: The degree of risk retention need not be as large as one might think to affect originator behavior. In the context of the risk-sharing discussion among higher education policymakers and researchers, some people have recommended that institutions bear a significant fraction of the risk of student defaults, sometimes as high as 20 percent. (68) However, the credit retention requirements included in the Dodd-Frank legislation require only a 5 percent risk retention rate. (69) In addition, in the Demiroglu and James study, the degree of risk retention was 3 percent or less. (70) This evidence suggests that, in the higher education context, the degree of risk retention does not necessarily have to be high to significantly change the behavior of institutions.

Lesson 3: Beware of institutions raising prices to offset the costs of bearing risk. As part of the discussion surrounding the regulations to implement Dodd-Frank, some policymakers expressed concerns that the sponsors of securities might be able to sell specific tranches to investors at prices that more than make up for the costs of bearing risk under the risk retention requirements. (71) Thus, sponsors would face few incentives to change their behavior in terms of the quality of the assets they securitize.

Lesson 4: Potentially adjust risk-retention requirements on the basis of performance. Dodd-Frank allowed for a relaxation of risk-retention requirements for residential mortgages with characteristics that, on the basis of historical evidence, make them less likely to default. For example, mortgages that fell within specific levels for total debt-to-income ratio or loan-to-value ratio could be exempted from such requirements. (72) If policymakers created a risk-sharing regime for higher education institutions, they could consider adjusting the risk-retention requirements for each institution on the basis of the performance of that institution's past cohorts. For example, an institution whose graduates had been highly successful at repaying their loans might have to retain less risk than a lower-performing institution. This policy might reward institutions that do an effective job of minimizing loan repayment problems among their graduates. In doing so, however, policymakers should consider whether such a policy removes pressure from those institutions to continue serving students well into the future.

The research on housing finance shows the potential pitfalls of systems where particular institutions have strong incentives to originate loans because they bear none of the risk of default. Higher education institutions face a similar dynamic, giving them strong incentives to originate as many loans as possible regardless of the qualifications of the borrowers.

As a result, the idea of giving institutions "skin in the game" with respect to student loans is gaining more traction across the political spectrum. The research on risk retention in mortgage markets shows that it can significantly impact the behavior of institutions, even at relatively low levels of risk sharing. This finding provides some indication that risk-sharing proposals in higher education would likely have a meaningful impact on institutional behavior.

Higher education policymakers should consider, however, whether institutions might also have an ability to simply raise tuition to effectively "price in" the risk they are obligated to take under any "skin in the game" proposal. Although an institution choosing this route might deter some students from applying and would increase the risk of repayment problems among its graduates, the institution might gain more in revenue than it loses in additional fines or lost enrollment, and thus it may be in its interest to increase its price. To the degree that this is the case, policymakers should implement a risk-sharing scheme in conjunction with other proposals (such as greater transparency) that help strengthen the forces of market discipline.
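A back-of-the-envelope calculation illustrates how this arithmetic could favor a price increase. Every figure below is hypothetical and chosen only for illustration; the 5 percent risk-share rate simply mirrors the Dodd-Frank retention level discussed above.

```python
# Hypothetical "price in the risk" arithmetic for a 1,000-student institution.

students = 1_000
tuition_increase = 500                     # dollars per student per year
extra_revenue = students * tuition_increase                 # $500,000

avg_loan_balance = 10_000                  # federal borrowing per student
default_rate = 0.15                        # share of borrowers who default
risk_share = 0.05                          # institution repays 5% of defaulted dollars
risk_payment = risk_share * (students * default_rate * avg_loan_balance)  # $75,000

deterred_students = 20                     # applicants lost to the higher price
net_tuition_per_student = 10_000
lost_revenue = deterred_students * net_tuition_per_student  # $200,000

net_gain = extra_revenue - risk_payment - lost_revenue
print(f"Net gain from raising price: ${net_gain:,.0f}")     # $225,000
```

Under these assumed numbers, the institution clears $225,000 even after paying its risk-sharing fines and losing some deterred applicants--which is why complementary forms of market discipline matter.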

Conclusion

As policymakers and researchers try to address the deep-seated cost and quality problems in American higher education, there is a tendency to think that these challenges are new and unique to higher education. But this is simply not true. Policymakers should recognize that they are not the first to confront many of these problems; indeed, their counterparts in many other sectors have been wrestling with these tensions for decades. By broadening our lens, higher education policymakers and researchers can improve higher education and ensure that we do not repeat some of the same mistakes that have plagued promising reform efforts in other sectors.

Notes

(1.) For information on the College Scorecard, see White House, "College Scorecard," www.whitehouse.gov/issues/education/higher-education/college-score-card; for information on the "gainful employment" regulation, see US Department of Education, "Gainful Employment," October 30, 2014, www.ed.gov/category/keyword/gainful-employment; for information on President Obama's proposed rating system, see US Department of Education, "College Ratings and Paying for Performance," www.ed.gov/college-affordability/college-ratings-and-paying-performance; for an example of "skin in the game" proposals, see legislation offered by a group of Senate Democrats described in Michael Stratford, "'Skin in the Game' on Loans," Inside Higher Ed, December 20, 2013, www.insidehighered.com/news/2013/12/20/senate-democrats-launch-new-push-student-loan-debt-college-accountability; for Rep. Paul Ryan's discussion of the idea, see Paul Ryan and House Budget Committee Majority Staff, "Expanding Opportunity in America" (discussion draft, House Budget Committee, Washington, DC, 2014), http://budget.house.gov/uploadedfiles/expanding_opportunity_in_america.pdf.

(2.) For one example, see Mike Lee, "Lee Introduces Bill to Expand Higher Education Opportunities," January 9, 2014, www.lee.senate.gov/public/index.cfm/higher-education-reform-and-college-opportunity.

(3.) National Center for Education Statistics, "College Navigator," http://nces.ed.gov/collegenavigator/.

(4.) See White House, "College Scorecard." For other examples, see University and College Accountability Network, "Find the College That's Right for You," www.ucan-network.org; "Best Colleges," US News and World Report, http://colleges.usnews.rankingsandreviews.com/best-colleges; "College Guide," Washington Monthly, www.washingtonmonthly.com/college_guide/.

(5.) US Department of Health and Human Services, Agency for Healthcare Research and Quality, "CAHPS: Assessing Health Care Quality from the Patient's Perspective," https://cahps.ahrq.gov/about-cahps/cahps-program/cahps_brief.html.

(6.) Wisconsin Health Reports, "Wisconsin Health Reports," 2014, www.wisconsinhealthreports.org/.

(7.) Kaiser Family Foundation, Agency for Health Care Policy and Research, and Princeton Survey Research Associates, Americans as Health Care Consumers: The Role of Quality Information (Princeton, NJ: Princeton Survey Research Associates, 1996), www.ahrq.gov/professionals/quality-patient-safety/quality-resources/tools/kff/kaisqual.pdf.

(8.) Judith H. Hibbard, Jean Stockard, and Martin Tusler, "It Isn't Just about Choice: The Potential of a Public Performance Report to Affect the Public Image of Hospitals," Medical Care Research and Review 62, no. 3 (2005): 358-71.

(9.) Anna D. Sinaiko, Diana Eastman, and Meredith B. Rosenthal, "How Report Cards on Physicians, Physician Groups, and Hospitals Can Have Greater Impact on Consumer Choices," Health Affairs 31, no. 3 (2012): 602-11.

(10.) Hibbard, Stockard, and Tusler, "It Isn't Just about Choice," 358-71.

(11.) Judith H. Hibbard et al., "An Experiment Shows That a Well-Designed Report on Costs and Quality Can Help Consumers Choose High-Value Health Care," Health Affairs, 31, no. 3 (2012): 560-68.

(12.) Katherine M. Harris and Melinda Beeuwkes Buntin, Choosing a Health Care Provider: The Role of Quality Information (Princeton, NJ: Robert Wood Johnson Foundation, May 2008), www.cahpf.org/GoDocUserFiles/530.Choosing%20a%20healthcare%20provider.pdf.

(13.) Shoshanna Sofaer et al., "What Do Consumers Want to Know about the Quality of Care in Hospitals?" Health Services Research 40, no. 6, part 2 (2005): 2018-36.

(14.) RAND Health, Consumers and Health Care Quality Information: Need, Availability and Utility (Oakland, CA: California HealthCare Foundation, October 2001), www.chcf.org/~/media/MEDIA%20LIBRARY%20Files/PDF/C/PDF%20ConsumersAndHealthCareQualityInformation.pdf.

(15.) Harris and Buntin, Choosing a Health Care Provider.

(16.) Katherine E. Klem, Bringing Data to Life: Issues in Empowering Consumers to Choose Higher Value Health Care (Denver, CO: Center for Improving Value in Health Care, August 2012), www.civhc.org/getmedia/ab72a91f-70fd-4c62-b671-5f7dff472441/CIVHC-Consumer-Reporting-White-Paper-final.pdf.aspx/.

(17.) Kaiser Family Foundation, "Consumers' Views on Patient Safety and Quality Information," September 1, 2006, http://kff.org/other/poll-finding/consumers-views-of-patient-safety-and-quality/.

(18.) Jean M. Abraham et al., "The Effect of Quality Information on Consumer Health Plan Switching: Evidence from the Buyers Health Care Action Group," Journal of Health Economics 25, no. 4 (2006): 762-81.

(19.) For a number of examples, see Marjan Faber et al., "Public Reporting in Health Care: How Do Consumers Use Quality-of-Care Information," Medical Care 47, no. 1 (January 2009): 1-8.

(20.) Gerald J. Wedig and Ming Tai-Seale, "The Effect of Report Cards on Consumer Choice in the Health Insurance Market," Journal of Health Economics 21, no. 6 (2002): 1031-48.

(21.) Nancy Dean Beaulieu, "Quality Information and Consumer Health Plan Choices," Journal of Health Economics 21, no. 1 (2002): 43-63.

(22.) See examples in David Dranove, "Health Care Markets, Regulators, and Certifiers," in Handbook of Health Economics, vol. 2, ed. Mark V. Pauly, Thomas G. McGuire, and Pedro Pita Barros (Waltham, MA: Elsevier, 2012), 640-90.

(23.) Jeffrey R. Kling et al., "Comparison Friction: Experimental Evidence from Medicare Drug Plans" (working paper no. 17410, National Bureau of Economic Research, Cambridge, MA, September 2011).

(24.) Klem, Bringing Data to Life.

(25.) Tim Lake, Chris Kvam, and Marsha Gold, Literature Review: Using Quality Information for Health Care Decisions and Quality Improvement (Cambridge, MA: Mathematica Policy Research, Inc., May 6, 2005), https://cahps.ahrq.gov/about-cahps/cahps-program/qualityinfo.pdf.

(26.) Jack V. Tu et al., "Effectiveness of Public Report Cards for Improving the Quality of Cardiac Care: The EFFECT Study: A Randomized Trial," Journal of the American Medical Association 302, no. 21 (2009): 2330-37.

(27.) See Jonathan T. Kolstad, "Information and Quality When Motivation Is Intrinsic: Evidence from Surgeon Report Cards" (working paper no. 18804, National Bureau of Economic Research, February 2013); also see the analysis in Dranove, "Health Care Markets."

(28.) Rachel M. Werner, David A. Asch, and Daniel Polsky, "Racial Profiling: The Unintended Consequences of Coronary Artery Bypass Graft Report Cards," Circulation 111 (2005): 1257-63.

(29.) Klem, Bringing Data to Life.

(30.) Nicholas W. Hillman, "Differential Impacts of College Ratings: The Case of Education Deserts" (working paper, 2014 Civil Rights Project Research and Policy Briefing, Washington, DC, August 27, 2014), https://news.education.wisc.edu/docs/WebDispenser/news-connections-pdf/crp---hillman-draft.pdf?sfvrsn=6.

(31.) Mary Alice McCarthy, Beyond the Skills Gap: Making Education Work for Students, Employers and Communities (Washington, DC: New America Foundation, October 2014), www.edcentral.org/wp-content/uploads/2014/10/20141013_BeyondTheSkillsGap.pdf.

(32.) Sheila Nataraj Kirby, "The Job Training Partnership Act and the Workforce Investment Act," in Organizational Improvement and Accountability: Lessons for Education from Other Sectors, ed. Brian Stecher and Sheila Nataraj Kirby (Santa Monica, CA: RAND Corporation, 2004), www.rand.org/content/dam/rand/pubs/monographs/2004/RAND_MG136.pdf.

(33.) Pascal Courty and Gerald Marschke, "The JTPA Incentive System: Implementing Performance Measurement and Funding," in The Performance of Performance Standards, ed. James J. Heckman et al. (Kalamazoo, MI: W. E. Upjohn Institute for Employment Research, 2011), 79-82.

(34.) Kirby, "Job Training Partnership Act."

(35.) In 2014, Congress passed the Workforce Innovation and Opportunity Act (WIOA), legislation to reauthorize WIA. At the time of this writing, however, the reforms in WIOA were still being implemented.

(36.) James J. Heckman and Jeffrey Smith, "Do the Determinants of Program Participation Data Provide Evidence of Cream Skimming?," in Performance of Performance Standards, 197.

(37.) The only exception is that adjustments could be considered under an appeals process.

(38.) See the description of this internal Department of Labor study in Burt S. Barnow, "Lessons from the WIA Performance Measures," in Workforce Investment Act, 81-111.

(39.) Heckman and Smith, "Do the Determinants of Program Participation Data Provide Evidence of Cream Skimming?"

(40.) Pascal Courty and Gerald Marschke, "An Empirical Investigation of Gaming Responses to Explicit Performance Incentives," Journal of Labor Economics 22, no. 1 (January 2004): 23-56.

(41.) Van Horn and Fichtner, "Eligible Training Provider Lists and Consumer Report Cards."

(42.) Ibid.

(43.) William S. Borden, "The Challenges of Measuring Performance," in Workforce Investment Act, 177-208.

(44.) National Alliance for Public Charter Schools, "The Public Charter Schools Dashboard: A Comprehensive Data Resource from the National Alliance for Public Charter Schools," http://dashboard.publiccharters.org/dashboard/schools/year/2014.

(45.) David Osborne, Improving Charter School Accountability: The Challenge of Closing Failing Schools (Washington, DC: Progressive Policy Institute, June 26, 2012), www.progressivepolicy.org/publications/policy-report/improving-charter-school-accountability-the-challenge-of-closing-failing-schools/.

(46.) Alison Consoletti, The State of Charter Schools: What We Know--and What We Do Not--About Performance and Accountability (Bethesda, MD: Center for Education Reform, December 2011), www.edreform.com/wp-content/uploads/2011/12/StateOfCharterSchools_CER_Dec2011-Web-1.pdf.

(47.) Government Accountability Office, Higher Education: Education Should Strengthen Oversight of Schools and Accreditors (Washington, DC: Author, December 2014), www.gao.gov/assets/670/667690.pdf.

(48.) Bryan C. Hassel and Meagan Batdorff, High-Stakes: Findings from a National Study of Life-or-Death Decisions by Charter School Authorizers (Washington, DC: Brookings Institution, February 2004), www.brookings.edu/gs/brown/hassel0204.pdf.

(49.) Louann Bierlein Palmer, "The Potential 'Alternative' Charter School Authorizers," Phi Delta Kappan 89, no. 4 (December 2007): 304-9, www.pdkmembers.org/members_online/publications/Archive/pdf/k0712pal.pdf.

(50.) The Principles and Standards for Quality Charter School Authorizing states that "a quality authorizer engages in responsible oversight of charter schools by ensuring that schools have both the autonomy to which they are entitled and the public accountability for which they are responsible." See National Association of Charter School Authorizers, Principles and Standards for Quality Charter School Authorizing (Chicago: Author, 2012), www.qualitycharters.org/assets/files/images/stories/publications/Principles.Standards.2012_pub.pdf.

(51.) Palmer, "Potential 'Alternative' Charter School Authorizers," 304-9; Robin Lake, Holding Charter Authorizers Accountable: Why It Is Important and How It Might Be Done (Seattle, WA: Center for Reinventing Public Education, February 2006), www.crpe.org/publications/holding-charter-authorizers-accountable-why-it-important-and-how-it-mightbe-done.

(52.) National Association of Charter School Authorizers, The State of Charter School Authorizing, 2013 (Chicago: Author, 2012), www.pageturnpro.com/National-Association-of-Charter-School-Authorizers/58053-The-State-of-Charter-School-Authorizing-2013/index.html#1.

(53.) Osborne, Improving Charter School Accountability.

(54.) Lake, Holding Charter Authorizers Accountable.

(55.) Osborne, Improving Charter School Accountability.

(56.) Ibid.

(57.) Ibid.

(58.) National Association of Charter School Authorizers, On the Road to Better Accountability: An Analysis of State Charter School Policies (Chicago: Author, 2014), www.qualitycharters.org/assets/files/Documents/Policy/NACSAstateanalysisFNL.pdf.

(59.) Osborne, Improving Charter School Accountability; Lake, Holding Charter Authorizers Accountable.

(60.) National Alliance for Public Charter Schools, "Measuring Up: Indiana," www.publiccharters.org/get-the-facts/law-database/states/indiana/.

(61.) Stratford, "'Skin in the Game' on Loans"; Ryan and House Budget Committee Majority Staff, "Expanding Opportunity in America"; Andrew P. Kelly and Kevin James, Untapped Potential: Making the Higher Education Market Work for Students and Taxpayers (Washington, DC: AEI, October 2014), www.aei.org/wp-content/uploads/2014/10/Untapped-Potential-corr.pdf.

(62.) Timothy F. Geithner, Macroeconomic Effects of Risk Retention Requirements (Washington, DC: Financial Stability Oversight Council, January 2011), www.treasury.gov/initiatives/wsr/Documents/Section%20946%20Risk%20Retention%20Study%20%20(FINAL).pdf.

(63.) Adam B. Ashcraft and Til Schuermann, "Understanding the Securitization of Subprime Mortgage Credit," Foundations and Trends in Finance 2, no. 3 (2008): 191-309.

(64.) Dodd-Frank Wall Street Reform and Consumer Protection Act, Public Law 111-203, US Statutes at Large 124 (2010): 1376.

(65.) The regulations were finalized on October 22, 2014. For more information, see Board of Governors of the Federal Reserve, "Six Federal Agencies Jointly Approve Final Risk Retention Rule," October 22, 2014, www.federalreserve.gov/newsevents/press/bcreg/20141022a.htm.

(66.) Benjamin J. Keys et al., "Did Securitization Lead to Lax Screening? Evidence from Subprime Loans," Quarterly Journal of Economics 125, no. 1 (2010): 307-62.

(67.) Cem Demiroglu and Christopher M. James, "Works of Friction? Originator-Sponsor Affiliation and Losses on Mortgage-Backed Securities" (working paper, University of Florida, Gainesville, 2009).

(68.) For example, see Protect Student Borrowers Act of 2013, S 1873, 113th Cong., 1st sess., Congressional Record 159, no. 181, daily ed. (December 19, 2013): S9054.

(69.) This partly reflects policymakers' concern that a large risk-retention requirement could, by constraining the process of securitization, limit consumers' access to credit.

(70.) Demiroglu and James, "Works of Friction?"

(71.) Subcommittee on Capital Markets and Government Sponsored Entities, House Committee on Financial Services, Testimony of Julie Williams, First Senior Deputy Comptroller and Chief Counsel, Office of the Comptroller of the Currency, 112th Cong., 1st sess., 2011, www.occ.gov/news-issuances/congressional-testimony/2011/pub-test-2011-50-written.pdf.

(72.) Geithner, Macroeconomic Effects of Risk Retention Requirements.

Kevin J. James (kevin.james@aei.org) is a research fellow in AEI's Center on Higher Education Reform.

Previous Papers in This Series

* Untapped Potential: Making the Higher Education Market Work for Students and Taxpayers, Andrew P. Kelly and Kevin James

* Launching New Institutions: Solving the Chicken-or-Egg Problem in American Higher Education, Sylvia Manning