What's wrong with testing? Except for suppressing the talented, protecting incompetents, and making us less productive-not much.
The term "meritocracy" was popularized 30 years ago by the English sociologist Michael Young, who introduced it in his short satire, The Rise of the Meritocracy. Taken literally, meritocracy means "rule by the meritorious," and such a system is what America, among other societies, has always dreamed of attaining. Many people assume, even without thinking, that the current system of school tracking, tests, and professional organizations is about as efficient a meritocracy as we're likely to devise. But is the connection between intelligence and success really so necessary and natural? It is not, as an examination of the meritocracy's premises will show.
The starting point for today's meritocracy, of course, is the idea that intelligence exists and can be measured, like weight or strength or fluency in French. The most obvious difference between intelligence and these other traits is that all the others are presumably changeable. If someone weighs too much, he can go on a diet; if he's weak, he can lift weights; if he wants to learn French, he can take a course. But in principle he can't change his intelligence. There is another important difference between intelligence and other traits. Height and weight and speed and strength and even conversational fluency are real things; there's no doubt about what's being measured. Intelligence is a much murkier concept. Some people are generally smarter than others, and some are obviously talented in specific ways; they're chess masters, math prodigies. But can the factors that make one person seem quicker than another be measured precisely, like height and weight?
Think for a moment about the difference between measuring intelligence and measuring anything else. We know that some natural traits are distributed according to what the statisticians call a "normal distribution," better known as a bell curve. Height is the classic example. If you randomly chose a thousand American men and measured them, you'd find that most would be slightly over or under six feet, smaller numbers would be four inches taller or shorter, and only a few would be at the top and bottom of the scale. Many other natural characteristics-the number of hairs on a person's head, the size of fish in a lake-follow a normal distribution. But some other, equally natural features don't. Hair color among Japanese citizens does not have a normal distribution: almost everyone's is black. The ability to walk is another example. It is not "normally" distributed, since the great majority of people can walk without difficulty, and a minority-those too old, too young, or too sick-cannot.
There is, then, nothing in nature that dictates that intelligence be distributed along a bell curve, with the normal proportions of geniuses and morons and people with average IQs. So how can we be sure that intelligence really is distributed that way? In fact, we can't; no one is sure just how it is distributed. The bell curve was invented for analytic convenience, not because anyone believed that it resembled the real, underlying pattern of intelligence. The early test makers simply assumed a normal distribution: if students' scores on a set of questions did not follow it, the questions themselves were judged to be bad. Good questions were those that yielded a bell-shaped curve.
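The mechanics behind this point can be shown with a toy simulation. The test and all its parameters below are invented for illustration: every examinee is given the identical chance of answering each item correctly, so there is no underlying "trait" distribution at all. Summing many roughly independent items still produces an approximately bell-shaped total (the central limit theorem), which is why a bell curve of scores by itself says little about how intelligence is "really" distributed.

```python
import random
import statistics

random.seed(42)  # deterministic for reproducibility

# Hypothetical sketch: a 100-item test where each examinee answers
# each item correctly with probability 0.6. Every examinee is
# identical by construction; only chance separates their scores.
def total_score(p_correct=0.6, n_items=100):
    return sum(random.random() < p_correct for _ in range(n_items))

scores = [total_score() for _ in range(10_000)]
mean = statistics.mean(scores)
sd = statistics.pstdev(scores)

# For a normal distribution, roughly 68% of values fall within one
# standard deviation of the mean.
within_one_sd = sum(abs(s - mean) <= sd for s in scores) / len(scores)
print(f"mean={mean:.1f} sd={sd:.1f} within one sd={within_one_sd:.0%}")
```

With enough items, chance alone produces the bell shape, so score distributions tell us more about test construction than about nature.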
IQ scores now fall into a bell curve mainly because that is where the original English and American psychometricians thought they should fall. But suppose they'd started out with different preconceptions. Suppose they believed that "intelligence" was something like "health": some people were weak and some were strong, but most people were "healthy enough." Their bodies may have been shaped differently, but one type couldn't be called healthier than another. In that case, the distribution of IQ scores would have been very different. Indeed, it would look more like the distribution in Japan, where the prevailing idea is that intelligence (among Japanese) is like health. Most people are thought to have "enough."
This brings us to the second question. Whatever intelligence may be and however it may be distributed, is it really the main factor in determining how far people can go in life? Unless IQ is an important limit, the entire tracking system makes no sense. Why start channeling people early if most of them really can handle most jobs? Why not let them end up where they will, by trial and error, or encourage them to keep starting over? Why hive people off to trade school if, given a later chance, they could become scientists, doctors, or inventors?
As it happens, there have been some studies designed to test precisely this hypothesis: that only a small fraction of the public is intelligent enough to do complicated professional work. If the hypothesis were true, you would expect to see a correlation between IQ scores and positions on the job ladder. The greatest variety of IQ scores would be at the bottom of the ladder, because people in society's bad jobs would be (a) those people who weren't smart enough to do anything else and (b) those people who were smart enough but for one reason or another-weak ambition, negligent parents, sickness, alcoholism, character defects, simple bad luck-never fulfilled their potential. At the top of the ladder, there would not be much variety-there couldn't be, since only those with high IQs could handle professional or managerial work.
In I.Q. in the Meritocracy, R.J. Herrnstein discussed one important study that confirmed this expectation. He compared the intelligence test scores given to tens of thousands of recruits during World War II with the jobs they'd held before induction. As he had predicted, there was much variety at the bottom and less among those in good white-collar jobs.
But Herrnstein's subjects were young, just starting out in life. When Michael Olneck, of the University of Wisconsin, and James Crouse, of the University of Delaware, worked from data that followed men later into their careers, they found just the opposite. Their principal source of information was the "Kalamazoo brothers" study, one of sociology's longest-running and most thorough surveys, which followed thousands of boys from their childhood in Kalamazoo well into adulthood. Because the study lasted so long, early guesses about the boys' potential could be matched against the way their careers actually turned out.
When Crouse and Olneck compared men's first jobs with their test scores, they found a pattern like Herrnstein's. But the longer they followed subjects, the more the pattern changed. Of the Kalamazoo brothers who ended up as professionals, 10 percent had been considered "high-grade morons" as boys. Their childhood IQs were below 85, putting them in the bottom sixth of the population. One third of all the adult professionals, and 42 percent of the managers, had childhood IQs below 100. On average, managers were smarter than normal, but many managers were dumb. The greatest diversity of IQ scores was not among unskilled laborers, as Herrnstein had predicted, but among those in professional jobs. "While men with quite high test scores would rarely be found in undesirable jobs, men with low scores are represented in desirable jobs in fair numbers," Olneck and Crouse said. Men in the highest category, "professional, technical, and kindred," had a mean IQ of 107.5, but the lowest score in this category was 62, often classified as "imbecile." The standard deviation of IQ scores was greatest at the highest levels; the most homogeneous groups were the ones with the lowest scores, laborers and farmers.
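The point about standard deviations can be made concrete with a toy calculation. The score lists below are invented for illustration, not data from the Kalamazoo survey; only the professionals' mean is chosen to match the reported 107.5, and the list includes the reported low score of 62.

```python
import statistics

# Invented IQ scores for illustration only - not Kalamazoo data.
# Professionals span a wide range (including one score of 62),
# while laborers cluster tightly around their mean.
professionals = [62, 85, 95, 100, 108, 115, 122, 123, 130, 135]
laborers = [88, 90, 92, 94, 95, 96, 97, 98, 99, 100]

for name, group in [("professionals", professionals),
                    ("laborers", laborers)]:
    print(name,
          round(statistics.mean(group), 1),   # central tendency
          round(statistics.pstdev(group), 1)) # spread of scores
```

The professionals here have the higher mean but a far larger spread, which is the shape Olneck and Crouse reported: low scorers turn up in desirable jobs in fair numbers, while the lowest-scoring occupational groups are the most homogeneous.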
"Rather than high cognitive ability being essential for successful performance in desirable jobs, it appears that the capacity to succeed in such jobs is rather widespread, and is not confined to men who score well on tests," Olneck and Crouse concluded.
Here is an implication worth chewing on. The ultimate justification for a system with early tracking is that we're chronically short on intellectual talent. Pretending that a low-IQ student can ever do a professional job, runs this argument, is like telling people that they can eat sand. They can't, and it's both cruel and inefficient to mislead them. Yet Olneck and Crouse's findings indicated that the basic reasoning was wrong. Many men who'd been classified as "subnormal" and "morons" did fine in jobs demanding high skill, once they got the job. Perhaps they had managed to raise their previously low IQs, or perhaps what the tests measured was not really significant. In either case, the "limits" imposed by low IQ did not seem to be natural.

Hobo jungles
There is an even more familiar (and more emotionally charged) example of the same principle: the story of the G.I. Bill.
As World War II ground toward its close, the concept of the G.I. Bill was taking shape. The government decided it could reward the returning veterans, and help the economy digest hundreds of thousands of demobilized men, by offering a free or subsidized college education to every G.I.
The most prestigious members of the educational hierarchy thought this was a preposterous idea. Robert Hutchins, of the University of Chicago, warned in 1944 that when the G.I.s came home, "colleges and universities will find themselves converted into educational hobo jungles." In the same generous spirit, James B. Conant, the president of Harvard, said in 1945 that the bill was "distressing," because it did not "distinguish between those who can profit most by advanced education and those who cannot." The bill was clearly a scheme to push people beyond what their intelligence permitted.
In one way-but not quite the way they intended-people like Conant and Hutchins were right. By promoting the idea that everybody should go to college, and that any trade worth learning (accounting, physical education, journalism) should be learned in a university, the G.I. Bill magnified the importance of academic credentials and diminished the academic role of the university.
But Conant and Hutchins could not have been more wrong about the G.I.s. When the 2.3 million veterans enrolled, they turned out to be phenomenally successful. Older, less flighty, more seriously motivated than ordinary students, the early G.I. Bill scholars became the most successful group of students American universities had ever seen. By 1947, The New York Times was reporting that "the G.I.'s are hogging the honor rolls and the dean's lists." Newsweek reported that not one of Columbia University's 7,826 veterans "was in serious scholastic difficulty at the last marking period." Fortune said that the class of 1949, 70 percent of whom were veterans, was "the best. . .the most mature. . .the most responsible. . .the most self-disciplined" in history.
The G.I.s' success might seem surprising to anyone who took the shortage-of-talent hypothesis seriously. But it was wholly consistent with several other major trends in American and world history. During the two and a half centuries before the G.I.s went to college, the Western world had accomplished its Industrial Revolution. At the beginning of that process, most people lived on the farm. IQ tests almost always show that rural children are less intelligent than urban ones; the scores typically go up when the families move to town. If widespread IQ testing had been available in the early 1700s, it would surely have shown that there weren't enough intelligent people available to do advanced industrial work. Yet somehow people adjusted. Since 1900, the proportion of "intellectually demanding" professional and managerial jobs in the United States has quadrupled, and yet Americans rose to the task. Somehow children raised in the Sicilian village or on the Indiana farm managed to become engineers and schoolteachers and pharmacists, once the changing job market let them do something besides slop the hogs.
It is theoretically possible that now, in the late twentieth century, we've finally reached the precise equilibrium point. Perhaps we have exactly as many smart people as we have difficult jobs. If this is so, then it makes sense to channel people as early as possible. Why fool them with unrealistic expectations? Why let talent dribble away?
This is theoretically possible, but it's absurd. It's easy to tell in retrospect that there was a lot of unrecognized talent in the American hinterland of 1900; there is also talent in the ghettoes and steel towns and backwoods today. Japan's theory of intelligence is designed to make everyone rise to challenges. That country is often, and accurately, described as having the "best bottom 50 percent on earth." America's theory of intelligence writes off the supposedly untalented bottom ranks.
There is a third presumption shoring up the school-based meritocracy: that we cannot be sure of having competent engineers, teachers, air-traffic controllers, and lawyers unless we insist on their having gone to the best schools.
This idea sounds even more logical than the first two. But it is not any truer. To see what is wrong with it requires a look at the difference between "competence" and "ability."
In 1973, 10 years after he published The Achieving Society, David McClelland wrote a short article about precisely this distinction. "Testing for Competence Rather than 'Intelligence'" appeared in a professional journal, American Psychologist. McClelland, who had spent many years investigating why people "succeeded," asked in this article whether it made sense to devote so much effort-through testing, tracking, and requiring academic credentials-to predicting who would succeed.
It was certainly true, McClelland said, that IQ tests helped predict who would get good grades in school. This was not exactly a surprise, since grades and test scores measured similar skills. He said there was also a statistical correlation between doing well in school and succeeding later in life, since in twentieth-century life you had to get through school to enter a profession or get a managerial job. But did being good at school itself, apart from its value in getting a person a job, usefully predict how well he'd do the job? McClelland's answer was no.
McClelland said that most researchers had failed to find a connection between good grades and good professional performance, once they removed the effect of the credential itself. People who got good grades at one level of schooling usually did well in the next level too, but that was about as far as the correlation went. The most impressive part of the findings, McClelland said, was how hard they were for most academic officials to accept: "It seems. . .so self-evident to educators that those who do well in their classes must go on to do better in life that they systematically have disregarded evidence to the contrary that has been accumulating for some time."
McClelland added that in the 1950s, he chaired a committee of the Social Science Research Council, investigating the correlation between grades and occupational success. It found that college graduates predictably got better jobs than nongrads, but that academic differences among college graduates did not seem to matter very much. McClelland said that even he balked at this conclusion until he went back to study the postcollege careers of some of his own students. He dug out his grade books from Wesleyan University, where he had taught in the 1940s, and got the names of the eight best and eight worst students in his class. He traced their careers through the early 1960s, at which point he found no difference between the two groups. The better students had gotten into better graduate schools, but they had not necessarily proven more successful in their working lives.
Army Alpha and Russian Jews
McClelland and a number of other researchers who have been interested in why people succeed, not in how smart they are, have emphasized that ability and competence are two different attributes, not always related. Ability is theoretical potential; it's similar to some nightmare version of the Soviet Olympic-training system, where grade school children have their muscle fibers and body fat tested to see who is the most promising natural athlete. Competence is what you can actually do today; it's the idea behind the summer training camps held by National Football League teams, where the coaches see whether the promising rookies and the veteran stars are up to snuff this year.
The Soviet Olympic-training system may be fine for the Soviets, but the summer training camps are closer to the American ideal of how a merit system should work. Everyone has a chance; no one can coast for long; there's always next year. To reflect America's goal of openness and also to ensure skillful performance, an American merit system should pay more attention to competence than to ability. But McClelland and others have shown that the emphasis on professions, occupational licenses, and educational requirements rewards ability more and competence less.
The licensed professions focus attention on how a teacher or accountant or doctor trained for his job-did he go to the right school and get the right credential?-while simultaneously discouraging measurements of how well the people already in the profession do their work. Teachers' unions, for example, fight hard to keep "unqualified" people, those without education degrees, out of their profession and then fight equally hard against competence tests and outside assessments of how qualified today's teachers are. The IQ and tracking systems strive for more and more refined predictions of who will succeed but make no corresponding attempt to see who is in fact doing a good job.
There are two separate issues involved in this shift from competence to ability. One is what it does to the American idea of starting over and having second chances. The other is whether it's effective in its stated goal, guaranteeing skillful performance.
The effect on mobility is straightforward: the more we concentrate on ability, the harder it is for people to find new roles. It's all but impossible to think of starting over when the steel mill closes down if your big mistake was being in the slow reading group in fourth grade or choosing the wrong parents. Moreover, an emphasis on ability inevitably leads toward a hereditary class structure, since measured IQ is inherited about as consistently as money. The people with the most money and the highest social standing do best on the tests.
There are dozens of similar illustrations. When the Army Alpha tests were given during World War I, they showed that intelligence varied with both rank and race. Soldiers with Anglo-Saxon backgrounds had IQs higher than the average score for all whites; Irish Americans' scores were below that average; Greeks placed lower still; southern blacks, lowest of all. (Blacks from the northern states, who mainly lived in cities, had higher scores than newly arrived Italians, Poles, and Russian Jews.) The correlation between money and intelligence could, of course, merely indicate that America's meritocracy is functioning perfectly: the smartest people are getting the best jobs, earning the most money, and having the most talented children. But since this pattern showed up with the very first intelligence tests, which were given in the days of Jim Crow laws and blatant bias against immigrants and women, it is hard to believe that pure merit is the true explanation. Rather, the tests seem to measure something closely connected with social standing, so the correlation between IQ and money is a tautology; that is, it's two measures of the same thing.
Whatever the explanation for the link between money and intelligence may be, the effect is clear: an emphasis on so-called ability makes America rigid. People who start out on the bottom have inherently less chance to rise than those placing above them.
The second issue is whether this bias is necessary. Like the destructive side of capitalism, rigidity may simply be the price America must pay for its long-term growth. This, in fact, is exactly the argument that R.J. Herrnstein made in his book about the meritocracy.
But maybe this is not the whole story. Whenever scholars and investigators have looked closely at what people do in their jobs, they've found substantial differences between what it takes to get a job and what it takes to do it well. That is, they've found that the complicated and onerous effort to predict who will succeed need not be made at all.
Part of the problem here is that as licensing requirements have become more restrictive and been based even more on schooling, they haven't necessarily been tied to practical job skills. In California, contractors must pass a pencil-and-paper test before they can be licensed to go into business. According to one study, the major effect of this requirement was to spawn a cram school industry that taught people how to pass the test. "Most licensing exams involve written responses to questions and extensive recall of a wide range of facts that may have little or nothing to do with good practice," S. David Young wrote in The Rule of Experts. "For example, occupations such as plumbing and barbering rely on written exams devised by state licensing boards that test little more than the ability to memorize irrelevant facts. Another example is the California licensing examination for architects, in which candidates are expected to discuss the tomb of Queen Hatshepsut and the Temple of Apollo." Knowing about the tomb and temple would be a plus for anyone; the question is whether it serves anyone's interest, except that of the architects' guild, to keep people who don't know from entering the market.

Amateur hour
Moreover, once a person does get a license, he's practically immune from later scrutiny. Daniel Hogan, a lawyer and social psychologist at Harvard, pointed out that in 1972, only 0.1 percent of practicing lawyers were subject to some form of disciplinary action. Yet in another study, 30 percent of lawyers said they were aware of some legal or judicial misconduct. That is, something was seriously wrong with the standards of competence and honesty in the profession, but the elaborate system of licensing and credentials did very little to control it. In a typical recent year, only one American physician out of every 710 has his license revoked or suspended; most Americans who have been patients will find it difficult to believe that 709 out of 710 physicians are completely competent. Professionals must put in years of schooling and pass a test before entering the field, but they're usually never tested again. People who have passed the bar exam are licensed to do anything from drafting wills to arguing a case in court, although the skills involved are very different. A person who has not been to law school or passed the bar exam cannot do either, even though he may have exactly the right skills. ("60 Minutes" publicized the case of Rosemary Furman, of Florida, who drew up low-cost legal forms for poor people. No one contended that she was incompetent or had offered bad advice, but she was sentenced to jail for practicing law without a license.) Only those who have been to medical school can prescribe drugs or perform surgery, but psychiatrists, surgeons, and research specialists are legally free to do any of those things. A few professions have accepted "continuing education" requirements, which, once more, measure "input": the architect or lawyer shows that he has taken more courses, not necessarily that he's kept up his skills.
Those in the growing field of "competence" studies have developed a theory of how modern, complex society could operate without heavy licensing requirements. The consulting firm Richard Boyatzis heads, called McBer, was founded by David McClelland in 1963. It analyzes what people actually do in business jobs-not what their job descriptions say, but how they spend their time and which skills seem most crucial to their success. "I've come to see that whenever a group institutes a credentialing process, whether by licensing or insisting on advanced degrees, the espoused rhetoric is that it's enforcing standards of professionalism," he says. "This is true whether it's among accountants or plumbers or physicians. But the observed consequences always seem to be these two: the exclusion of certain groups, whether by intention or not, and the establishment of mediocre performance."
One of the most exhaustive studies of the difference between preparation and performance, or ability and competence, was undertaken by Daniel Hogan. In 1979 he published a fat, four-volume study called The Regulation of Psychotherapists. Its purpose was to compare preparation and performance as carefully and systematically as he could. First he examined the day-by-day workings of psychotherapists at every level, from the social worker to the licensed psychoanalyst and the psychiatrist with an M.D. He devoted hundreds of pages to an analysis and a description of the traits that distinguish a good psychotherapist from a bad one. In deciding which psychotherapists were most effective, he concentrated strictly on "output"-whether the patient got better-rather than on "input"-how much effort the therapist applied, how much he charged, or how long he'd spent in school.
Then, having considered what it took to do the job well, Hogan, in the second half of the volume, went through all the qualifications a therapist needed to get the job. There was not much overlap between the two lists. To get a license to practice, a psychotherapist had to do well at hard, scientific training, which in most cases was unimportant in doing the job well.
Hogan's book was filled with cases illustrating this point. In a study done in 1965, for example, five laymen, only one of whom had finished college, were given fewer than a hundred hours of training in therapy skills. Then they were put in charge of patients who had been hospitalized, on average, for more than 13 years without significant improvement. Under their "amateur" treatment, more than half of the patients got better.
"Competence" studies like Hogan's have turned up many other illustrations of the difference between ability and performance-between what it takes to get a job and what it takes to do it. Two other fields are worth discussing: schoolteaching and air-traffic control.
Secrets of St. Albans
Anyone who wants to teach in the public schools must first be licensed, which means getting a degree from a teacher's college. Sometimes the people who get these degrees are good teachers, but that seems to be largely coincidental. In 1967, Harold Howe II, who was then the U.S. commissioner of education, said that this focus on credentials and certificates was "a bit like saying that Socrates wasn't a good teacher because he had no credentials. . . .We have forgotten that Spinoza earned his living as a lens grinder and that Tom Edison quit school at nine." Howe described a woman in her twenties who lived for several years in Paris, worked for a French magazine, and taught French at a private school when she returned to the United States. But when she applied for a job as a public school teacher, she was turned down flat because she had not been to a teacher's college. Howe concluded, "I probably don't need to tell you, either, that a majority of states do not require language teachers to be able to speak the language they are to teach." Denny Harrah is a former professional football player for the Los Angeles Rams, chosen for the Pro Bowl six times. When he volunteered to help coach a high school football team in Charleston, West Virginia, he was turned down because he was not "certified." One of my children spent a year with an elementary school science teacher who had been shifted from teaching English. She was fully "qualified" to teach, since she had her credentials, but she knew less about science than most of the children did. During that year my son was taught, among other things, that the moon rotated on its axis, that you could see its back side from the earth, and that you could go blind from looking at a picture of a solar eclipse unless you protected yourself with smoked glass. My son would check each day's new information with a neighbor who had studied for a degree in astronomy. The neighbor, of course, was not qualified to teach.
Private schools are free to ignore education school degrees, and generally they do. I once observed classes at St. Albans, one of the most prestigious private schools in Washington, D.C. It was obvious that the students were learning more, or at least being offered more, than in typical public high schools. Part of the reason is that St. Albans is a very high-toned, expensive school that draws most of its students from professional-class families that stress education. But the school's headmaster, Mark Mullin, said that another factor was more important to the school's quality: "the freedom to hire the people we want. The freedom not to worry about certification, seniority, and all of that. I don't know how we'd do without it. That clearly is number one." Unlike the surrounding public schools, St. Albans could hire teachers who knew French and music, understood the rotation of the moon, and could also show that they had an instinct for teaching and dealing with children.
A few public schools have edged toward the same approach. The Houston school system has had to cope with large and rapid demographic changes, more dramatic than those in many other big-city systems. Only Los Angeles and Miami have had to absorb more immigrants in their schools. But unlike the performance levels in most other big-city systems, those in Houston schools went up in the mid 1980s. The superintendent of the Houston schools, a back-slapping character named Billy Reagan, said that his crucial advantage was his freedom, within the limits of a public school system, to let talented people teach even if they didn't have the right credentials.
An education school dean would probably argue that public schools should put more emphasis, not less, on preparation. If they wind up with science teachers who've seen the dark side of the moon, then the teacher's colleges should add more science courses to their curricula. This is the argument nearly every licensed profession has made when asking the government to stiffen entry requirements. (In the early 1970s, when I was working as an assistant in the Texas state senate, a group of auctioneers demanded that the state government license them. The public was being jeopardized by unqualified auctioneers, they said. Rather than letting just anybody conduct an auction, the state should make people pass written tests and spend a certain period in apprenticeship.) But there is little evidence that the regulations have done what they are supposed to do-that is, protect the public more thoroughly than a simple market test would have done. (In the case of teachers, a market test would mean letting principals and school districts hire people who know their subjects, as private schools do today.) As S. David Young concluded in his book about licensing, "Occupation regulation has served to limit consumer choice, raise consumer costs. . .deprive the poor of adequate services, and restrict job opportunities for minorities-all without a demonstrated improvement in quality or safety of the licensed activities."
No to Tokyo
If schoolteaching and psychotherapy seem too "soft" to provide a fair test of meritocratic standards, what about air-traffic control? In 1970, in The Great Training Robbery, Ivar Berg reported on a study conducted by the Federal Aviation Administration, which wanted to analyze what made the 507 top-ranking air-traffic controllers good at their jobs. The question was whether advanced educational requirements would produce more competent controllers; the answer was no. Berg said that the controller's job seemed to demand a high degree of academic preparation. It required an understanding of important mathematical and engineering principles, and it also drew on some of the personal qualities that higher education was supposed to foster: disciplined thinking, reliability, responsibility, and so on. Yet when the FAA studied the backgrounds of its best controllers, it found no correlation between academic training and professional skill. Half of the top-ranked controllers had never gone to college. Many of them had come directly to the FAA from high school or military service, and had then received rigorous technical training specifically related to their jobs.
When competence really matters-among air-traffic controllers, on the battlefield, in very competitive businesses, in high-powered prep schools-people soon find a way to look past academic preparation and find out who can really do the job.
There is one other argument for relying on "ability" and "preparation" to steer people toward jobs. They may not be perfect gauges of later competence, but how bad can they really be? Once, while interviewing officials at the Educational Testing Service headquarters near Princeton, I came to the end of a long discussion with an ETS test designer. Yes, he said, standardized tests measured largely, often laughably, arbitrary skills. Yes, they reflected the students' exposure to literate, upper-middle-class culture during their formative years. No, test performance didn't necessarily have much to do with useful job skills. And yes, children raised in families with the most money consistently did best on the tests, for reasons that seemed to reflect money itself as much as innate differences in talent. "But, in general, the kids who know these things know a lot else," he said. "A lot."
The idea that tests and school credentials are "close-enough" approximations of other, important skills might be satisfactory in Japan. There, everyone seems comfortable with the knowledge that the people-in practice, the men-who get into Tokyo University are the ones who will lead industry and government. The Tokyo University admissions test doesn't measure much that is directly useful to Toyota or the Ministry of Finance. The English-language portion of the test, for example, measures almost nothing that would be useful to people who intend to speak English. (In a sixth-year English class in a Tokyo high school, I once listened to a 30-minute lecture, in Japanese, about the supposed difference between "attain" and "attain to." Apparently it had been on an admissions test.) But the tests do measure effort, and for the Japanese that's close enough. Conceivably the same principle could apply here. Long years in school and good scores on tests may not be directly connected to professional competence, but indirectly they may lead the right people to the right jobs.
The main problem with this reasoning is that it ignores the tremendous damage caused by the emphasis on "ability." The more that formal schooling matters, the harder it is for Americans to move out of the social class where they were born. For American society, unlike Japan's, schooling and ability should be emphasized only if there's no other way to ensure competence. In fact, as the evidence shows, the educational merit system increases costs and decreases opportunity without noticeably raising the overall level of skill.
The world is full of "close-enough" judgments, which are also known as "prejudice." Statistically, the average black American man is more likely to be a criminal, a drug abuser, and a credit risk than the average white American. By that logic, an employer who refused to hire any blacks would be close enough. Such a policy would be close enough, but it would be ugly and unfair.
America cannot afford to erect barriers that are close enough. America was built by people who broke out of categories, defied probabilities, and changed their fate.
May 1, 1989