cross-posted from Dagblog
So it turns out that New York University has bought its president a summer home on Fire Island (h/t Tenured Radical). Or rather, a special foundation associated with New York University has loaned the university president, John Sexton, around a million dollars to buy a beach house, and there seems to be a real possibility that much of that million-dollar mortgage will eventually be forgiven, so that Sexton won't have to pay it back. NYU has also made similar vacation-home loans to other top administrators and VIP faculty, at least some of them on the same forgive-over-time plan. This represents a brave new financial frontier in higher education. No other university buys its executives second houses. This seems like an obvious story of an out-of-control administration. But more importantly, it's the story of a board of trustees failing to do its job.
A college or university's board of trustees oversees the long-term financial health of the institution. They approve the budget, keep tabs on how the school's endowment funds are being invested, hire and fire the president, monitor debt levels, and decide how much of the school's annual investment returns should be invested in running the school and how much should be reinvested in the markets. It's an absolutely necessary function.
Lately, there has been argument about how far boards' fiduciary authority extends, an argument that involves both genuine gray areas and examples of poorly-thought-out overreach. Because their central job is financial oversight, trustees tend to be business and legal types. NYU's Board is fairly typical in that (except for two emeritus NYU presidents whose appointments may be honorary) the Board includes no one with a background in education. Barry Diller and Maria Bartiromo aren't necessarily the people you want to ask about how to teach history or which physics researchers to hire. They have (and need) a totally different kind of expertise. In the traditional arrangement, the faculty takes the lead in questions of teaching and scholarship, the administration takes the lead in daily management, and the trustees take the lead in questions of overall financial management. Some boards do seem intent on micromanaging things better left to the other two spheres (as in the recent CUNY mess, or last summer's debacle at the University of Virginia), but no matter how far a particular board believes its brief extends, fiscal oversight is its original core mission. Providing fiscal oversight is the reason any board of trustees exists in the first place. And when a board neglects that duty, bad things happen.
When the trustees don't do their jobs properly, the university administration begins to overspend on things that administrators tend to value: more administrators, higher administrative pay, more spending on big-ticket athletics (which can run up staggering deficits), and higher construction debt. University presidents have strong personal incentives to spend lots of money on large, impressive new buildings; putting up a "signature building" is considered a major achievement, and can help the president move on to a better job. Of course, raising the money for fancy new buildings really is a significant achievement, and a sign of a president's overall fundraising skill. But some university presidents don't want to wait until they've raised the necessary funds before they put up their big new construction project, or they have construction ambitions that are beyond their fund-raising skills. If the trustees don't hold the reins properly, a president can go on a building spree with borrowed money, saddling the institution with tens of millions of dollars in debt that will eventually have to come from other revenue. A healthy board should keep that from happening.
(What faculty generally want, on the other hand, is more money spent on the annual operations of the education side, including on faculty compensation but also on more faculty, more financial aid, and so forth. A university tilted too far toward faculty interests spends too much of the annual return on its endowment on the yearly operating budget and doesn't reinvest enough of those funds for the future.)
NYU doesn't blow its money on a football or basketball team. (It's Division III, which means no athletic scholarships.) It has been doing a lot of new building since Sexton took over, and is planning much more. Exactly what that means depends on the details of the financing. If Sexton has gotten donors to fund most of that construction, he's been doing exactly what the trustees hired him to do. If a lot of that construction has been financed with debt, on the other hand, that would mean the trustees haven't done their duty to the school.
On the question of executive compensation, however, NYU's Board seems to have lost its senses completely. There has already been a long history of controversy over how much NYU was spending on top executives. Buying those executives second homes ends the debate by proving the critics right.
NYU's board chairman, Martin Lipton, has written a letter to the New York Times defending NYU's practices while missing the real point. Lipton argues that the University needs to pay what it takes to attract and keep top talent that other schools are eager to hire away. But it's clear that Lipton doesn't get it. No one objects to paying employees what it takes to keep them. The problem is that NYU is paying certain favored people more than they could conceivably get anywhere else. That is a failure of the Board's fiduciary duty.
To get a few things out of the way: many universities, especially those in expensive real estate markets such as New York City or the San Francisco Bay Area, give their faculty housing assistance. Even some schools in fairly inexpensive areas help their faculty buy homes: for example, chipping in if the faculty buy a house in the neighborhood around campus. (Fair disclosure: none of my employers have given me housing assistance of this kind. Would I take such assistance if it were offered to me? Probably.) But if you're trying to run a world-class college in Manhattan, helping your hires with Manhattan real estate is part of hiring them. That is not crazy.
Also, just about every college provides the college president with a house, whether that's a mansion on campus or another place that the school purchases for the new hire. The college president is the school's most important fundraiser, and she or he needs a big, impressive house to throw functions for guests and donors. The house is part of the job, and the school provides it. That's not crazy, either.
But buying Sexton a beach house is crazy, because it has no market justification; that is what makes it excessive compensation. No other school is going to steal John Sexton away by giving him a million-dollar beach house, because no other school buys anyone two houses. NYU is compensating Sexton in vast excess of his market value; it's like an NFL team giving the quarterback a private jet and saying they needed to do it to keep him. They don't need to give him a jet, because no one else would give him a jet. NYU doesn't need to give Sexton two houses to keep him, because no one else would ever give him more than one free house.
But it's even worse than that. In a market where no employer buys anyone more than one house, NYU has been buying more than one house for more than one person: not just Sexton but at least one executive VP, the dean of the law school, and several law and medical faculty. This suggests that the Board is totally out of its mind. It's giving an unheard-of six- or seven-figure perk, which no one else would give to any employee, to multiple employees. This is like an NFL team giving private jets to the backup quarterback, the wide receivers, and the punter. It doesn't make any sense. Most importantly, it doesn't make any business sense.
It's not even clear who the Trustees think will hire Sexton away from them. The actual market for his services at this point is surprisingly small. There aren't a lot of other universities bidding for him, because even at actual market rates only a few could afford him, and most of those likely aren't interested.
There aren't many schools that wouldn't represent a step down from NYU, either in pay or in the desirability of the job itself. In fact, most of the richer and more famous schools pay their presidents less. (Columbia, the Ivy uptown from NYU, does seem to pay its president more; call it the Manhattan premium.) Moreover, several of the filthy-rich schools that could conceivably afford to give Sexton a raise have long traditions of only hiring presidents with a degree from their own school, or prefer to promote from within, or both. Yale and Princeton hired new presidents in the last year; both hired their own provosts. If you take the already-small pool that could afford to give Sexton a raise and that would look like a step up from NYU, and then you subtract the Princetons, Yales, etc., how many places are left? NYU's Trustees are giving Sexton things that no employer gives, for fear that he will leave them for another employer who does not exist.
Business sense is what the Trustees are for. If they're not providing that, they're worse than useless to the university. But at NYU, business sense seems to have given way to business culture. Many of the Trustees themselves are part of a culture of excessive CEO compensation, and their sense of what is "normal" for executive leadership has trumped their ability (or their will) to make a hard-nosed evaluation of actual market prices. That lack of business judgement undermines NYU's financial strength, wasting funds that should have been invested in the institution itself. They are bidding against themselves to retain people who are in little danger of leaving, and they pay their outrageous auction prices with the University's money.
Friday, June 28, 2013
Wednesday, June 26, 2013
Red States and Blue States After DOMA
cross-posted from Dagblog
I'm delighted about the Supreme Court's decision striking down the Defense of Marriage Act in United States v. Windsor. It's a triumph for human dignity, and also a triumph for federalism. The federal government should not be in the business of restricting the rights that individual states extend to citizens. If thirteen states see fit to recognize same-sex marriage, Washington should not interfere.
One result, however, is that the divide between the red states and the blue states will be wider tomorrow. Millions of Americans will only be able to be married in some states and not others, and may only have their existing marriages recognized in certain states. If you drive from Boston to Chicago you'll be married as you drive through Massachusetts and New York, just dating as you go through Pennsylvania, Ohio, and Indiana, and only in a civil union when you reach Illinois. That's silly, but it has real personal consequences, and it will have real economic consequences for different states. The end of DOMA opens a whole new front of inter-state business competition, and it's the red states that are most likely to sing the blues.
The divide between the universal-marriage states and limited-marriage states is likely to have very serious effects on business's ability to attract and retain certain kinds of workers. Would you take a job if it meant that your marriage would not be legally recognized any more? And how would you turn down a job that meant having your marriage recognized, for the first time, in the place where you lived? But on the other hand, would you accept a transfer to the Detroit office if it meant giving up being married? We're not just talking about fringe benefits here.
This eventually means that some businesses are going to have good reasons to locate in the states where all their workers can get married if they want to; it's a lot easier. It's going to get harder to attract businesses from pro-marriage states to limited-marriage states. And it will be hardest of all in businesses that rely heavily on educated and generally mobile knowledge workers: exactly the workers who are most important in a post-industrial economy.
Smart employers are going to start thinking about prize workers they can poach from competitors in the wrong states. Smart politicians are going to start talking to businesses they want to lure across state lines. And people launching start-ups are going to find it a little bit easier to do so in Seattle and Boston than in Austin and Durham. That's just economic reality.
We'll see how long it takes for business-oriented politicians in some of those limited-marriage states to see the light on marriage equality. Longer than I'd like, of course. But probably not that long.
Monday, June 24, 2013
Hard Truths About College Admissions and Affirmative Action
cross-posted from Dagblog
Public debates over affirmative action in college admissions, such as all the hubbub about Fisher v. Texas (in which a not-so-qualified white student named Abigail Fisher sued to end affirmative action at the University of Texas, in a case that reached the Supreme Court), usually run into basic confusion about how college admission works in the first place. Opponents of affirmative action often call, loudly, for American colleges to "go back" to an admissions system that has never existed.
During oral arguments for Fisher v. Texas last fall, a commenter on a news post wrote: "Here's an idea. Why don't we just let in the students with the best grades and test scores?" The commenter intended this as irony, because he assumed that was the normal and natural way to manage selective college admissions. But the system he thinks of as "normal" has never been used in this country, and it will not be used if affirmative action is abolished.
(The ugly irony, of course, is that plaintiffs like Abigail Fisher who complain about "reverse racism" would lose out under a strict academic meritocracy, since their actual academic credentials would not have gotten them into the schools they're suing. Despite all the fantasies about obviously-qualified white students being passed over for hopelessly-unqualified black students, somehow anti-affirmative-action groups can only find weak borderline applicants when it's time to sue. And seriously, America's B- white students do not need everything to be about grades and scores.)
So let's review a few basic facts:
1. College Admissions Has Never Been Strictly About Academics
This is the hardest one for people to accept. But it is absolutely true. Competitive college admissions in the United States has never been about scholarly qualifications alone. There are countries, like France, where the competition for places at top schools is strictly academic. This country has never been one of them.
There are American colleges that don't have the luxury of being selective, and some of those schools do primarily rely on grades and test scores and sometimes use numerical cut-offs. But those are schools that are worried about finding enough students. They are basically taking every qualified person who applies. These schools use grades and scores to screen out applicants who seem likely to flunk. But these schools are irrelevant to the affirmative-action discussion, because students do not compete with one another to get in. You can't claim that you lost your place to a minority, or a legacy, or a football recruit, because there aren't a limited number of places. Letting one student in doesn't mean turning another student away.
(I want to be clear, by the way, that a college that isn't selective isn't necessarily bad, and that schools that are harder to get into aren't necessarily better. Some good schools have trouble filling their classes, and some selective schools are overrated. I'm just talking about the practical side of admissions.)
On the other hand, Harvard, which is not necessarily the best school in America but has long been the most selective, had for many decades (and may still have) a policy that capped the number of undergrads who got in on nothing but academics at ten percent of the incoming class. Ask your friends what percentage of Harvard kids they think got in just on their grades. Ten percent won't come up often.
The other ninety percent have good-enough grades, with the definition of "good enough" changing over time and depending on the strength of the applicant's other qualifications. They all have to be in the pool of students who probably won't flunk out, but the pool of applicants who could eventually graduate if they were let in is much, much larger than the number of students Harvard can or will actually take.
That other ninety percent are selected for things like athletic ability, family connections to the school, signs of "leadership," artistic or other extracurricular talents, regional origin (because the school wants "geographic diversity," meaning more kids from Idaho), and history of charity work. Some of the spots in the ninety percent have always been set aside for students from poor backgrounds. Fifty years ago, racial diversity got added to the list of possible qualifications.
Other selective schools work on basically the same model, with the specific percentages varying a little. (Some schools give more weight to legacies, or a little less to athletes, or set aside 15% instead of 10% for academic stars, but the general pattern holds.) Harvard's a convenient example because its fame means there has been more written and published about its practices, but all of Harvard's peers and rivals, and all of the schools struggling to get closer to that exclusive tier, operate in much the same way.
But these colleges operated this way for decades before affirmative action. Giving some consideration to minority applicants didn't change the fundamental procedure, and banning such consideration wouldn't either.
2. Selective Admissions Are Driven by the Schools' Self-Interest
Why do colleges give these preferences to athletes, legacies, class presidents, bassoon players, blood drive organizers, and kids from Wyoming or West Virginia? Because they believe that doing so is good for the school.
Selective colleges don't favor athletes in the admissions process just because they're trying to make money off football or basketball. Schools with no big-revenue sports programs still give athletes preference, and they give preference to athletes in non-revenue sports like track and lacrosse. The Harvard-Yale game isn't going to bring in any TV dollars, but both of those schools recruit football players. The exclusive private colleges don't admit athletes to make money off sports. They're actually willing to lose money on sports in order to recruit athletes.
Why would schools do this? Because they believe (or enough people within the institution believe) that athletic standouts are likely to excel in other fields after graduation. They view a B+ or A- student who is showing leadership, discipline, and drive on the playing field as more likely to succeed in life than another student with similar grades. So they look for rowers, fencers, and swimmers. And if you think it sounds crazy to look for the next Mark Zuckerberg on a high school fencing team, remember that the last Mark Zuckerberg was captain of his high school fencing team.
A college that's able to choose the students it accepts does its best to choose a group that will become successful alumni in as many different fields as possible. The graduates' success raises the school's reputation and strengthens its core pool of donors. So the best strategy is to choose students with a wide range of different strengths and advantages. You want brains. You want motivation and leadership. You want some socially adroit students who will go on to success in business or politics, so you take kids who were president of three different clubs in high school. You want a certain number of students who were born to money and privilege, because their money and privilege will make them influential later. You want students from all over the country because you want to have alumni all over the country. You want a few driven poor kids who will make good. You want a few artists, a few non-profit leaders, a few intellectuals. (Harvard's 10% rule suggests the percentage of academics and intellectuals they want to turn out.) This isn't so much "diversity" as diversification: admitted students are the schools' long-term investment portfolio, and like smart investors schools spread their bets around.
These are all value judgements, and they are inseparable from the institution's value system. The schools have long-standing ties to certain established elites; they have a long-standing commitment to giving a leg up to at least a few poor kids. But institutions' values only shape their sense of their own long-term interests. In the end, they make the admissions decisions that they believe will best serve the school itself.
Affirmative action programs are ultimately just another admissions bet, and many of them have paid off. Any country with distinct racial and ethnic groups will always produce a number of high-profile leaders drawn from those groups. Elite schools want those leaders to be their alumni. W.E.B. DuBois had a Harvard degree, but the leaders of the Civil Rights Movement generally had not been educated in the traditional elite schools. Whether the Ivy League's leaders welcomed the Civil Rights movement or feared it, none of them wanted to see a whole set of national leaders coming exclusively from Howard. They started making a point of admitting minorities, and eventually a Columbia/Harvard Law man was elected the first black president of the United States. Mission accomplished.
Elite colleges make sure to take kids from Utah or North Dakota because they want a chance to produce some of Utah and North Dakota's future leaders. They make sure to take some kids from the privileged classes because they want a chance to produce the privileged classes' future leaders. Now they also make sure to take some minority kids because they want a chance to produce some of those minority groups' future leaders. It isn't white guilt. It's time-honored strategy.
3. Academics Were Less Important to Colleges Before Affirmative Action
People talk about affirmative action as if it gutted the academic integrity of American colleges and lowered standards. But exactly the opposite is true for selective college admissions. Academic expectations for all applicants have been much higher in the affirmative action era than they had ever been before.
This is more coincidence than causation. Affirmative action did not make the standards higher. It just began around the same time that other major changes were taking place, including coeducation at formerly all-male schools and a major shift away from legacy admissions and toward greater emphasis on scores and grades.
Over the course of the 1960s, elite East Coast colleges seriously reduced their investment in the sons of the traditional East-Coast elite and began investing much more heavily in academically talented middle-class students from public schools. The reasons for this are complex. Part has to do with the richest schools becoming rich enough, for the first time, to become relatively independent of their traditional old-money clients. Another part has to do with universities' changing sense of where America's leadership class was going to come from in the future. Ivy League schools bet (correctly, as it turned out) that the old East Coast ruling class was going to keep a major seat at the table but would no longer dominate national institutions as it had before World War II. Schools adjusted their admissions strategies accordingly, reducing the number of legacy and prep-school kids so that they represented a large and still-important but not predominant share of each entering class. At the same time, schools took more upwardly-mobile public-school kids with impressive SATs. Some old-money alumni naturally howled bloody murder about reverse discrimination. But in the long run, it seems to have been the right move.
Naturally, it was the academically-weaker legacy kids who got left out of this process. The Gentleman's C is no longer a qualification if the school you're applying to is no longer primarily interested in recruiting gentlemen. This was a real and large academic change. One Princeton alumni group pamphlet from the 1950s makes it clear that every "Princeton son" who applies will automatically get in if he looks like he can make it through to graduation. (The pamphlet goes on to reassure worried parents that the lowest quartile of Princeton boys is absolutely stuffed with legacies, so don't worry that your precious child needs to be a genius.) That isn't even remotely the case any more. Legacies still have a very large statistical advantage in college admissions, but today they face an only slightly (if crucially) easier standard than non-legacies face. (Of course, when standards have risen so high, a slight relaxation matters enormously.) The days of you're-in-if-you-won't-flunk are long gone.
As the aristocrats with mediocre grades got replaced by studious middle-class kids, the overall academic qualifications for admitted students rose sharply. The academic qualifications of that "other" ninety percent admitted to Harvard have gone upward with them. "Good enough" grades for a lacrosse star applying to Harvard today are a lot higher than "good enough" grades for a lacrosse star in 1935 or 1955. Admitting women (and thereby doubling the pool of potential applicants) only made competition fiercer, and the excessive admissions arms race of the last thirty years, with selective admissions rates falling nearly every year and those at the most selective schools going from around 20% to somewhere in the single digits, has relentlessly kept defining "decent grades" upward. Every generation of Ivy Leaguers is full of people who will tell you, "Of course, I would never get in today."
Affirmative action didn't cause that upward spiral in academic expectations. But it hasn't been a noticeable drag on it. And more importantly, the definition of "good enough" grades for an affirmative action applicant is essentially pegged to the standard of "good enough grades" for other kinds of favored applicants. As "good enough" grades for athletes and legacies have gotten higher, so have "good enough" grades for recruited minorities.
Of course, "good enough" grades for black and Latino applicants will never be as low as "good enough" grades for legacies or athletes. Selective colleges' interest in recruiting lacrosse players and aristocrats has always been greater than their interest in recruiting disadvantaged minorities. Studies of admissions have repeatedly shown that the advantages for athletes and legacies is significantly higher than any advantage for underrepresented minorities. (It's worth noting that anti-affirmative-action plaintiff Abigail Fisher has no problem with legacy preferences. One of her complaints about how UT has done her wrong is that all of her family has gone to UT. Clearly, she thinks giving her an advantage because of that would be more than fair.)
At bottom, schools perceive recruiting athletes and legacies as better for the school's long-term advantage than recruiting minorities. They want to recruit future leaders from minority communities, but they want to recruit track stars and aristocrats even more. The average academic standards for affirmative-action recruits hover somewhere above the average academic standards set for big-time legacies and lacrosse heroes. As competitive pressure has raised the floor for the athletes and legacies, the expectations for recruited minorities have also increased. That is not the story anyone tells about affirmative action in America's colleges. But it is what's actually happening.
Sunday, June 16, 2013
The Other Thing College Is For (and Why It Matters)
cross-posted from Dagblog
If you ask anyone what colleges and universities are for, they'll give you more or less the same answer: to educate people. That's a good answer. It's the one I give myself. But it's only half the truth. Colleges and universities actually fulfill two separate roles. We all know about both of them. We only talk about one of them. And because of that, we misunderstand almost everything about how higher education works and how it might be improved.
Every individual college and university exists to educate: to teach people things they did not previously know. (People disagree over what the goal of education should be, but agree that education is the goal.) But colleges also confer social prestige on their graduates. Some confer a large amount, some a smaller amount, and some confer little or none. Taken together as a system, the American colleges and universities have both a teaching function and a sorting function.
I dislike the sorting function and would rather it not exist. But saying that a deeply embedded social practice ought not to exist doesn't get rid of it. Too many people are committed to it, and too many people believe in that social practice as a simple reflection of reality. American higher education labels students by the "quality" (meaning the selectivity and prestige) of the school they attended. People by and large take those labels as real indicators of students' intelligence, likely prospects, and so forth. This social labeling is part of America's class system.
We don't talk about this much, because most people don't like talking about the class system. It's difficult and embarrassing. (And before anyone finds this discussion of class upsetting or insulting, let me say two things. First, I'm not saying that some people should have more social prestige than others. I'm pointing out that in practice they do. I'm trying to describe how the system works, not endorsing it. Second, while talking to another person about their class position can be insulting and a way of putting them down, not talking about how the larger system works is a way of keeping the system in place and letting the people on top get away with things.) But even when we don't talk about class, and maybe especially when we don't talk about it, we experience it as a real force in our society. We all know that Princeton alumni enjoy advantages that alumni of poor public colleges do not get. We feel it.
Our public debates about higher education are confused because we don't talk openly or think clearly about the two different functions. It's easier to pretend that colleges are doing only a single thing. But that leads us into misunderstandings because we don't acknowledge how these institutions really operate or, worse still, talk about one function as if it were the other.
Some people talk about the "great education" a school provides when they really mean the social cachet it provides. Some perceive socially-disadvantaged schools as genuinely providing less learning, no matter the quality of actual education there; those schools are simply "bad schools." These people conflate education and social prestige without being aware that they're doing it, and have trouble perceiving educational quality separately from educational privilege. Other people sometimes deny that anything happens at universities but social differentiation, and are prone to claiming that no actual education happens at elite schools, and so on. This is also a radical misunderstanding, and saying that nobody at Yale learns anything that people don't learn in night school is such an overstatement that it undermines the speaker's point. If two schools have different budgets and different missions, there will be real differences in educational outcomes. No one who's worked in higher education can honestly deny that.
There are schools that are good at teaching students things. There are schools that are good at giving their students social credibility. There are schools that do both well, and schools that do neither. And some can only do one. There are schools that do an excellent job of educating their students but cannot give them any social prestige. There are also, I'm afraid, schools that provide mediocre or poor educations but do better at conferring social capital.
Why does this matter? Because those two basic functions, educating and conferring prestige, have distinct and opposed economic logics.
Education is an absolute and unlimited commodity. The more you produce, the better. Teaching people things is perhaps the ultimate non-zero-sum game.
Social prestige is a relative and inherently limited commodity. The value of selectivity and exclusivity lies in the fact that most people are excluded. In a perfect world, every student could be superbly educated, but in no possible world could every student be more prestigious than every other student. "Exclusivity for all" is a nonsense slogan. Conferring prestige on students is very much a zero-sum game, where one student can only gain from what some other student, somewhere, loses.
The prestige game matters because a school's prestige is its lifeline to more tangible resources, most importantly funding and potential students. A selective institution must maintain its perceived selectivity, its reputation for being hard to get into, for fear of losing access to the most desirable students. The difficulty of getting into a school is key to the social value of getting into it, which is why exclusive universities trumpet their obscenely low admissions rates every year. Selectivity turns out to be self-fulfilling; if you lose your reputation for selectivity, the students you want most will stop applying.
If your college or university dedicates itself strictly to education without any thought of conferring social prestige, both the school and its students will suffer for that choice. The students and the institution will be stigmatized by many people, and their actual educational achievement will often go unrecognized. Graduates will face disadvantages on the job market; the school will be starved of resources. This is the story of public higher education in America over the past thirty years.
If your university protects its students' best interests, and its own, by working to build its own prestige (and thereby the prestige it confers on its alumni), it must enter a ceaseless zero-sum competition with its peers and rivals. This competition may strike some observers as dysfunctional, but it is an entirely rational and inevitable response to the underlying system of rewards. Since prestige is always relative, any institution must constantly be trying to keep up with its peers and stay ahead of the schools behind it. Every administration tries to move up in the pecking order, which is probably the only viable strategy for not falling further down in that order. You have to run just to stand still.
Understanding the prestige competition between universities is crucial to understanding all the other intractable, poorly-explained questions in American higher education: how admissions work, why costs keep rising, why there seems to be such emphasis on research, what the current wave of enthusiasm for MOOCs is all about. Unless you separate out the questions of education and prestige, it's hard not to misunderstand all of them.
Many people, for example, talk about the "high-quality courses" from Harvard and Stanford (which have taken the lead in MOOC production) being turned into MOOCs. But that is not an evaluation of educational quality; it is an evaluation of institutional prestige. If you don't make any distinction between teaching and creating prestige, then it seems self-evident that a class from Harvard is better than a class from Underfunded State. If you think of teaching and conferring prestige as separate things, the MOOCs look like a pretty bad deal.
One of the MOOCs that gets the most press is a version of one of Harvard's most famous gut classes, a class universally known among undergraduates by a derogatory nickname. (I can't remember ever hearing a Harvard student call that course by its actual name. For practical purposes "Heroes for Zeroes" is its name.) I won't call it a bad course. (I've never taken it.) But there are plenty of classics professors at much less glorious places capable of teaching equally useful courses on the same material and making those courses more challenging for the students. Questions of prestige aside, taking that particular course at Harvard is not a better deal than taking an equivalent course at any number of less-glorious schools.
But the MOOC version of that notoriously-easy class is actually much easier than the Harvard class itself. At least students in the actual class write a few college papers, which get read and graded by teaching assistants who are studying for their own PhDs in classics (i.e. by smart people who can read classical Greek in the original). That's not possible for MOOCs, especially because of the Massive Open part. So someone taking the MOOC version just takes multiple-choice reading quizzes instead. And instead of a weekly face-to-face discussion session with one of those doctoral students (who actually knows what's going on in the material), there are lightly moderated online discussion boards. Now the educational product is very clearly inferior to taking a real class on the subject almost anywhere. Even a face-to-face version that's not quite as good as the face-to-face Harvard class is still much better than taking the Harvard MOOC. Talking about the MOOC as superior to a real course at Inglorious State is simply delusional. And replacing face-to-face classes at poor schools with MOOCs from rich, famous schools would be a rotten deal for students at the poor schools.
Does this mean that Harvard doesn't do as good a job educating its students as less prestigious schools do? No. It spends more resources on education than poorer schools can dream of spending, and that matters. But it is not going to spend the kind of money it spends on its own handpicked students on every random stranger who signs up for a MOOC. It could not and would not. Harvard will always save its high-cost, high-value educational products for its own students.
Cheerleaders for MOOCs talk about how they will make education more democratic, breaking down the exclusivity of the elite schools and making elite educations available for all. That has nothing to do with reality. The two most prominent MOOC providers, Harvard and Stanford, are currently wrestling for boasting rights over whose admissions rate is lower. (Stanford's rate finally fell below Harvard's by a tenth of one percent. They want everyone to know. Harvard wants everyone to forget.) Both universities are intent on turning away more applicants every year, and publicly boast about how many excellent students they have turned away. These are not schools committed to breaking down exclusivity. These are schools committed to being the most exclusive. Exclusivity is their business.
If you take a MOOC produced at Harvard or Stanford, you don't get the full educational value that the real Harvard or Stanford version of the class provides. But you get absolutely none of the prestige that Harvard and Stanford gives its students. Part of the cachet of going to those schools is getting into a school that turns down more than 94% of its applicants. MOOCs take everyone who logs on. Harvard and Stanford have enormous social value because they are clubs almost no one can get into. MOOCs are clubs that will take anyone as a member.
In fact, the point of a Harvard or Stanford MOOC is to remind you that you are NOT at Harvard and Stanford, that you are NOT one of the chosen few who gets to take the real class. They get to go to the selective school. You get to wish that you were one of them, with your nose pressed up against the monitor glass. The point is not for Harvard and Stanford to reduce the educational difference between the haves and the have-nots. The point is to increase the prestige difference between the haves and the have-nots. It isn't democratic. It isn't even very nice.
If you ask anyone what colleges and universities are for, they'll give you more or less the same answer: to educate people. That's a good answer. It's the one I give myself. But it's only half the truth. Colleges and universities actually fulfill two separate roles. We all know about both of them. We only talk about one of them. And because of that, we misunderstand almost everything about how higher education works and how it might be improved.
Every individual college and university exists to educate: to teach people things they did not previously know. (People disagree over what the goal of education should be, but agree that education is the goal.) But colleges also confer social prestige on their graduates. Some confer a large amount, some a smaller amount, and some confer little or none. Taken together as a system, American colleges and universities have both a teaching function and a sorting function.
I dislike the sorting function and would rather it not exist. But saying that a deeply embedded social practice ought not to exist doesn't get rid of it. Too many people are committed to it, and too many people believe in that social practice as a simple reflection of reality. American higher education labels students by the "quality" (meaning the selectivity and prestige) of the school they attended. People by and large take those labels as real indicators of students' intelligence, likely prospects, and so forth. This social labeling is part of America's class system.
We don't talk about this much, because most people don't like talking about the class system. It's difficult and embarrassing. (And before anyone finds this discussion of class upsetting or insulting, let me say two things. First, I'm not saying that some people should have more social prestige than others. I'm pointing out that in practice they do. I'm trying to describe how the system works, not endorsing it. Second, while talking to another person about their class position can be insulting and a way of putting them down, not talking about how the larger system works is a way of keeping the system in place and letting the people on top get away with things.) But even when we don't talk about class, and maybe especially when we don't talk about it, we experience it as a real force in our society. We all know that Princeton alumni enjoy advantages that alumni of poor public colleges do not get. We feel it.
Our public debates about higher education are confused because we don't talk openly or think clearly about the two different functions. It's easier to pretend that colleges are doing only a single thing. But that leads us into misunderstandings, because we either fail to acknowledge how our institutions really operate or, worse still, talk about one function as if it were the other.
Some people talk about the "great education" a school provides when they really mean the social cachet it provides. Some perceive socially-disadvantaged schools as genuinely providing less learning, no matter the quality of actual education there; those schools are simply "bad schools." These people conflate education and social prestige without being aware that they're doing it, and have trouble perceiving educational quality separately from educational privilege. Other people sometimes deny that anything happens at universities but social differentiation, and are prone to claiming that no actual education happens at elite schools, and so on. This is also a radical misunderstanding, and saying that nobody at Yale learns anything that people don't learn in night school is such an overstatement that it undermines the speaker's point. If two schools have different budgets and different missions, there will be real differences in educational outcomes. No one who's worked in higher education can honestly deny that.
There are schools that are good at teaching students things. There are schools that are good at giving their students social credibility. There are schools that do both well, and schools that do neither. And some can only do one. There are schools that do an excellent job of educating their students but cannot give them any social prestige. There are also, I'm afraid, schools that provide mediocre or poor educations but do better at conferring social capital.
Why does this matter? Because those two basic functions, educating and conferring prestige, have distinct and opposed economic logics.
Education is an absolute and unlimited commodity. The more you produce, the better. Teaching people things is perhaps the ultimate non-zero-sum game.
Social prestige is a relative and inherently limited commodity. The value of selectivity and exclusivity lies in the fact that most people are excluded. In a perfect world, every student could be superbly educated, but in no possible world could every student be more prestigious than every other student. "Exclusivity for all" is a nonsense slogan. Conferring prestige on students is very much a zero-sum game, where one student can only gain from what some other student, somewhere, loses.
The prestige game matters because a school's prestige is its lifeline to more tangible resources, most importantly funding and potential students. A selective institution must maintain its perceived selectivity, its reputation for being hard to get into, for fear of losing access to the most desirable students. The difficulty of getting into a school is key to the social value of getting into it, which is why exclusive universities trumpet their obscenely low admissions rates every year. Selectivity turns out to be self-fulfilling; if you lose your reputation for selectivity, the students you want most will stop applying.
If your college or university dedicates itself strictly to education without any thought of conferring social prestige, both the school and its students will suffer for that choice. The students and the institution will be stigmatized by many people, and their actual educational achievement will often go unrecognized. Graduates will face disadvantages on the job market; the school will be starved of resources. This is the story of public higher education in America over the past thirty years.
If your university protects its students' best interests, and its own, by working to build its own prestige (and thereby the prestige it confers on its alumni), it must enter a ceaseless zero-sum competition with its peers and rivals. This competition may strike some observers as dysfunctional, but it is an entirely rational and inevitable response to the underlying system of rewards. Since prestige is always relative, any institution must constantly be trying to keep up with its peers and stay ahead of the schools behind it. Every administration tries to move up in the pecking order, which is probably the only viable strategy for not falling further down in that order. You have to run just to stand still.
Understanding the prestige competition between universities is crucial to understanding all the other intractable, poorly-explained questions in American higher education: how admissions work, why costs keep rising, why there seems to be such emphasis on research, what the current rage for MOOCs is all about. Unless you separate the question of education from the question of prestige, it's hard not to misunderstand all of them.
Many people, for example, talk about the "high-quality courses" from Harvard and Stanford (which have taken the lead in MOOC production) being turned into MOOCs. But that is not an evaluation of educational quality; it is an evaluation of institutional prestige. If you don't make any distinction between teaching and creating prestige, then it seems self-evident that a class from Harvard is better than a class from Underfunded State. If you think of teaching and conferring prestige as separate things, the MOOCs look like a pretty bad deal.
One of the MOOCs that gets the most press is a version of one of Harvard's most famous gut classes, a class universally known among undergraduates by a derogatory nickname. (I can't remember ever hearing a Harvard student call that course by its actual name. For practical purposes "Heroes for Zeroes" is its name.) I won't call it a bad course. (I've never taken it.) But there are plenty of classics professors at much less glorious places capable of teaching equally useful courses on the same material and making those courses more challenging for the students. Questions of prestige aside, taking that particular course at Harvard is not a better deal than taking an equivalent course at any number of less-glorious schools.
But the MOOC version of that notoriously easy class is actually much easier than the Harvard class itself. At least students in the actual class write a few college papers, which get read and graded by teaching assistants studying for their own PhDs in classics (i.e. by smart people who can read classical Greek in the original). That's not possible for MOOCs, especially because of the Massive Open part. So someone taking the MOOC version just takes multiple-choice reading quizzes instead. And instead of a weekly face-to-face discussion session with one of those doctoral students (who actually knows what's going on in the material), there are lightly moderated online discussion boards. Now the educational product is very clearly inferior to taking a real class on the subject almost anywhere. Even a face-to-face version that's not quite as good as the face-to-face Harvard class is still much better than taking the Harvard MOOC. Talking about the MOOC as superior to a real course at Inglorious State is simply delusional. And replacing face-to-face classes at poor schools with MOOCs from rich, famous schools would be a rotten deal for students at the poor schools.
Does this mean that Harvard doesn't do as good a job educating its students as less prestigious schools do? No. It spends more resources on education than poorer schools can dream of spending, and that matters. But it is not going to spend the kind of money it spends on its own handpicked students on every random stranger who signs up for a MOOC. It could not and would not. Harvard will always save its high-cost, high-value educational products for its own students.
Cheerleaders for MOOCs talk about how they will make education more democratic, breaking down the exclusivity of the elite schools and making elite educations available for all. That has nothing to do with reality. The two most prominent MOOC providers, Harvard and Stanford, are currently wrestling for boasting rights over whose admissions rate is lower. (Stanford's rate finally fell below Harvard's by a tenth of one percent. Stanford wants everyone to know; Harvard wants everyone to forget.) Both universities are intent on turning away more applicants every year, and publicly boast about how many excellent students they have turned away. These are not schools committed to breaking down exclusivity. These are schools committed to being the most exclusive. Exclusivity is their business.
If you take a MOOC produced at Harvard or Stanford, you don't get the full educational value that the real Harvard or Stanford version of the class provides. But you get absolutely none of the prestige that Harvard and Stanford give their students. Part of the cachet of going to those schools is getting into a school that turns down more than 94% of its applicants. MOOCs take everyone who logs on. Harvard and Stanford have enormous social value because they are clubs almost no one can get into. MOOCs are clubs that will take anyone as a member.
In fact, the point of a Harvard or Stanford MOOC is to remind you that you are NOT at Harvard or Stanford, that you are NOT one of the chosen few who gets to take the real class. They get to go to the selective school. You get to wish that you were one of them, with your nose pressed up against the monitor glass. The point is not for Harvard and Stanford to reduce the educational difference between the haves and the have-nots. The point is to increase the prestige difference between the haves and the have-nots. It isn't democratic. It isn't even very nice.