December 31, 2003
finally, campaign coverage with substance
During a campaign, our job as citizens is to decide whom to vote for. Two questions are relevant: What do the candidates propose? And what kind of people are they? The job of the press is to help us answer these two questions. As James Madison wrote, the purpose of the press is "canvassing the merits and measures of public men."
Instead, we mainly see (even in the best newspapers, and even after 15 years of criticism) a steady stream of stories about campaign tactics, voters' opinions, the electoral process, and comments that candidates make about one another. I suppose politicians' tactics and remarks can shed some light on their personal "merits," but not much. (Even a great potential president could campaign badly or say something inappropriate on the campaign trail.) Worse than irrelevant, these stories are harmful, because they suggest that we should not vote for those candidates who currently appear to be doing badly in the horse race. This makes political news a self-fulfilling prophecy; it denies voters the power to choose for themselves. Witness, for example, Elizabeth Rosenthal's recent front-page story on Senator John Edwards, which is all about how poor his chances are. Rosenthal says nothing that helps us assess Edwards' "merits" or his "measures."
Today, at last, the Times runs an article succinctly comparing the economic plans of the nine Democratic presidential contenders.
I'm a policy wonk and a news junkie, but I still found this simple story uniquely illuminating. For instance, Sen. Lieberman would collect $135 billion less in taxes each year than Gov. Dean, and spend proportionately less. That's a $1,393 difference per US household per year--something to think about. The difference between Dean and Gephardt is more about spending priorities: Gephardt budgets more than twice as much for health care, but Dean offers more support to states. On trade, Gephardt is unique among the major candidates, for he alone would renegotiate NAFTA and the WTO agreements.
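As a quick sanity check (my sketch, not from the article), dividing the $135 billion annual gap by the quoted $1,393 per household recovers the household baseline that the comparison implies:

```python
# Sanity check on the Lieberman-vs-Dean comparison: how many US households
# does the $1,393-per-household figure assume? (Derived, not from the article.)
annual_gap = 135e9        # annual difference in taxes collected, per the article
per_household = 1393      # per-household difference quoted above, in dollars

implied_households = annual_gap / per_household
print(f"implied household count: {implied_households / 1e6:.1f} million")
```

That works out to roughly 97 million households, so the figure is in the right ballpark for the US around 2003.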
I like today's article, but there's so much more that could be written along the same lines. For instance, what would states likely do with the $100 billion/year that Howard Dean proposes to give them? (See my budget pie charts for part of the answer.) What would a typical family's tax bill look like under Lieberman's plan, versus Dean's or Gephardt's? How would one go about renegotiating NAFTA, and what would the Canadian and Mexican negotiating positions be? Addressing these vital questions is the first responsibility of our best newspapers. Instead, they all send their top reporters to Iowa and New Hampshire to record politicians' gaffes, count heads at campaign events, and describe the primary results before anyone votes.
December 30, 2003
varieties of fame
As I've remarked before, I'm interested in the desire for fame. It's the main selfish motivation of academics--and of people who create personal websites and blogs. Christians and ancient Stoics called the desire for fame a vice. Arguably, it is a virtue: specifically, a civic republican virtue that motivates and accompanies participation in public life. But I think its influence on academics is mostly corrupting. If it is a vice, then it's a worse moral danger for me personally than some others, such as greed for money and desire for power.
I hope to create an extended thread on the topic. For today, I'll just observe that many people want fame, but they want it in very different forms. So ask yourself:
1. Would you rather be known to millions of people at one instant because of a CNN broadcast, or to one hundred people during your lifetime, plus one hundred people in each generation after your death for the next 500 years?
2. Would you rather be known to half a billion people in India or China, or half the people you pass as you walk around in your own neighborhood?
3. Whom would you prefer to know about you--some of the world's most powerful people, some of the top experts in a difficult field, or some of the people who themselves have the biggest audiences?
4. Would you rather be known for your name, your ideas and actions, or your face? For instance, would you rather (a) have a daily byline in a major newspaper, or (b) see your own work described once in a news article, or (c) appear in secondary roles in various TV dramas?
5. Would you rather be known by a limited number of cognoscenti for your originality, or would you rather that millions of people associated you with an idea that you did not originate, although you have expressed it articulately?
December 29, 2003
Burke, Oakeshott, and Iraq
The invasion of Iraq is the most radical project undertaken by our government in generations. It involves the use of coercive state power to redesign a whole society, ostensibly in the name of liberty and political equality. This sounds like a highly "progressive" program. Thus Leftist critics of the occupation resort to charges of duplicity: the aims of the Bush administration, they say, are not what the President now publicly announces them to be. He is not after democratic reconstruction, but rather oil or military bases or avenging a Bush family quarrel. Whether these charges are valid will be clear only after several years, once we can observe the whole course and consequences of the occupation.
I find the conservative critique more interesting and perhaps more compelling. I've invoked Edmund Burke's name against the war, for that great conservative warned that it is always a mistake to try to change societies rapidly and wholesale, especially from afar and without due appreciation of local norms. Similarly, in Saturday's New York Times column, David Brooks conducts an imaginary dialogue with another major, dead English conservative, Michael Oakeshott. "Be aware of what you do not know," he imagines Oakeshott warning us. "Do not go charging off to remake a society when you do not understand its moral traditions, when you do not even understand yourself. Do not imagine that if you conquer a nation the results will be in any way predictable. Do not try to administer a country from behind a security bunker."
Brooks' first response is reasonable enough: conservatism is usually good policy, but not in places like Saddam's Iraq, where there was nothing worth conserving. Brooks' judgment on this point will prove correct if (but only if) our forces help to create an Iraq that is distinctly and lastingly better than the awful society they helped to destroy. Burke and Oakeshott would be skeptical, but they were wrong about other things.
Brooks' second reply invokes the American tradition of "modest" revolutions. The men who built our republic, he says, "didn't pretend to know what is the good life, only that people should be free to figure it out for themselves." Likewise, our forces had no "plan for postwar Iraq," but they were committed to creating a free society in which (presumably) the Iraqis will be able to decide their future for themselves. A revolution that expands liberty is not subject to standard conservative arguments against "social engineering."
This is not a foolish point, but it overlooks some important complications. First, conservatives in the Burke/Oakeshott tradition would claim that rapid liberalization (i.e., quickly freeing people to make their own choices) is itself a form of social engineering. Like any imposition of a new value, liberalization can feel like cultural imperialism, it can unravel an existing social fabric, and it can generate unintended consequences. For example, the Washington Post reported yesterday that a US plan to replace food handouts with cash has been abandoned. "It's a great idea that academics thought up, but it wasn't in tune with the political realities," according to a US official quoted in the Post. "We have to look at what we gain versus what we risk. Right now, we don't need to be adding any more challenges to those we already have." Presumably, there were powerful local interests profiting from those food rations, and alienating them would have put the occupation in extra jeopardy. This would come as no surprise to real conservatives, who (unlike libertarians) recognize that economics is always enmeshed with politics.
Second, freeing Iraqis to make their own choices is an ambiguous idea. It can mean freeing individual Iraqis to make choices for themselves in a marketplace. Or it can mean freeing Iraqis to make a joint decision by voting on their economic system. The first idea implies strong liberalization or marketization and constraints on the emergent Iraqi state; the second means allowing an Iraqi democracy to regulate markets if it so chooses. The second interpretation is presumably what Adel Abdel-Mahdi, a Shiite leader, means when he says, "The Americans ... need to let the Iraqi people decide the big issues." Indeed, the US Administrator of Iraq, Paul Bremer, who was formerly a proponent of rapid privatization, now says the scope of free markets is an issue "for a sovereign Iraqi government to address." In other words, he has interpreted "letting the Iraqis decide" as a commitment to democratic procedures, not markets. [All these quotes come from the Post.]
Perhaps Bremer is right, although there is also a case for using American power to overturn an incredibly corrupt and inefficient state-centered system, so that individual Iraqis can make free private decisions. In any case, true conservatives would view the imposition of either a market or a democracy as a perilous enterprise. Either way, the occupiers will make a decision that must rapidly and unpredictably change life in a far-away country that they little understand.
December 24, 2003
I'm on vacation in Georgia and don't anticipate blogging again until Dec. 29. Happy holidays!
December 23, 2003
During the administration of George W. Bush, the Federal government is likely to borrow approximately $642 billion (net, counting the surplus in 2001). That's $2,287 for every man, woman, and child in the nation, or almost $6,000 per average household--money that we and our children will have to repay with interest. Meanwhile, the latest Washington Post/ABC News poll shows the president on course for reelection. He has a nice strategy: borrow the equivalent of about $6,000 per household, spend the money on tax cuts, domestic programs, and a quick war against a tinpot dictator. Buy 8 percent GDP growth and a military victory in the year before you're up for reelection, and coast to another four years. Worry about the debt later.
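The per-person and per-household figures are straightforward division; here is a minimal sketch, assuming 2000 census baselines of roughly 281.4 million people and 105.5 million households (the post's own totals may rest on slightly different baselines):

```python
# Back-of-the-envelope check of the debt-per-person and per-household figures.
# Population and household counts are assumptions (2000 census, approximate).
total_borrowing = 642e9   # projected net federal borrowing, per the post
population = 281.4e6      # assumed US population
households = 105.5e6      # assumed US household count

per_person = total_borrowing / population
per_household = total_borrowing / households

print(f"per person:    ${per_person:,.0f}")     # close to the post's $2,287
print(f"per household: ${per_household:,.0f}")  # close to "almost $6,000"
```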
There are conservatives who believe that deficits are good, because they prevent the government from expanding social programs. They should be careful what they wish for. First of all, my simple analysis of current government spending shows that almost 75% of it goes to currently "untouchable" programs: Social Security, Medicare, the military, veterans benefits, mandatory retirement benefits for other federal employees, and interest. Social Security and interest payments are guaranteed to rise. When the fiscal crunch comes, it's unlikely that presidents will respond by deeply cutting the remaining portion of the federal budget, the domestic discretionary programs that conservatives detest. A lot of this spending is popular, since it includes education, scientific research, aid to states, and law enforcement. Instead of cutting, future administrations will have to increase taxes or borrow still more money.
Second, if Bush wins reelection with this strategy, it's hard to imagine the next Democratic president doing what Bill Clinton did, and painfully paying down the deficit he inherits. Instead, I suspect that fiscal discipline will be forgotten for a long time to come. Presidents of both parties will remember Bush's successful use of a borrow-and-spend reelection strategy: great for incumbents and potentially disastrous for the country.
December 21, 2003
finding an old essay
In between work on youth civic engagement, I'm writing a book about moral philosophy, using Dante as the main text. I recently remembered a relevant but unpublished article that I had written about 1991--when I was approximately 24 and finishing graduate school. Although I couldn't find an electronic copy of the essay, I did manage to dig up an old dot-matrix printout of it, with corrections pasted over the mistakes to save printer paper. I remembered nothing about the content, so reading it was like reading someone else's work, except that I happened to own the intellectual property rights. I'm not sure that I want to reuse any of it in my current work, because the argument is now rather unfamiliar to me, and I haven't decided what I think of it. Meanwhile, it occurred to me that I cannot do philosophical work that's much (or any?) better than that article today. This is disturbing, to say the least, because I don't think of myself as being much of a scholar ca. 1991. I certainly had difficulties getting things published in those days, and probably for good reason. Yet I have no confidence that my current book-in-progress is any better than that old article. At any rate, it starts with a good quote (from the preface to Dewey's Philosophy and Civilization, 1931):
philosophy, like politics, literature and the plastic arts, is itself a phenomenon of human culture. Its connection with social history, with civilization, is intrinsic. There is current among those who philosophize the conviction that, while past thinkers have reflected in their systems the conditions and perplexities of their own day, present-day philosophy in general, and one's own philosophy in particular, is emancipated from the influence of that complex of institutions which forms a culture. Bacon, Descartes, Kant each thought with fervor that he was founding philosophy anew because he was placing it securely upon an exclusive intellectual basis, exclusive, that is, of everything but intellect. The movement of time has revealed the illusion. ... Philosophers are part of history, caught in its movement; creators perhaps in some measure of its future, but also assuredly creatures of its past.
December 19, 2003
The US and Saddam's use of poison gas
The National Security Archive (a private group that sues to declassify government documents) released a set of very important materials today. This is the story they tell: in 1983 and 1984, Saddam Hussein's Iraq used chemical weapons against Iran and against Kurdish "insurgents" within Iraq. On March 5, 1984, the US acknowledged and publicly criticized these attacks. However, there followed a series of private meetings with Iraqi officials that had a "nudge-nudge, wink-wink" quality to them. At a meeting involving Secretary of State George Shultz, the Americans "clarified that our cw [chemical weapon] condemnation was made strictly out of our opposition to the use of lethal and incapacitating cw, wherever it occurs. They emphasized that our interests in (1) preventing an Iranian victory and (2) continuing to improve bilateral relations with Iraq, at a pace of Iraq's choosing, remain undiminished." (Emphasis added.) These are quotes from a briefing memo for Donald Rumsfeld, who was preparing to go to Iraq, where he presumably delivered a similar message while shaking Saddam's hand. On Nov. 26, 1984, the US and Iraq restored diplomatic relations. In 1988, Saddam used chemical weapons on a much larger scale against Kurdish villagers.
I recognize that the US had a legitimate interest in containing Iran. Furthermore, there is something to be said in defense of our system: despite its desire for good relations with Iraq, the US government had to acknowledge Saddam's use of poison gas publicly, thus embarrassing him before the world. On the other hand, the public denunciation had little force if very senior US officials also conveyed the message that our interest in good relations "remained undiminished." Thus the record should show that the US chose not to warn Iraq against using poison gas in 1984. The subsequent use of chemical weapons against Kurds constituted genocide, for which the United States must therefore bear some moral (if not legal) responsibility.
December 18, 2003
youth civic engagement
Today was a day for thinking about youth civic engagement from various angles. It started with a long conference call to go over the results of a new national youth poll that some partners and CIRCLE will release in January. Then some University of Maryland colleagues and I went to a high school in Hyattsville, MD to talk to the principal about three classes that we're organizing for his kids. They're all civics courses, in the broad sense. One will concern youth relations with the police. A second will continue oral history research that we have done in the past--the topic being the desegregation of the county schools. (This is the history website that our kids built last year.) And the third will involve mapping the food and exercise assets of the community. After almost two hours with the principal, I went to my office and worked on a meeting that we'll hold in January to discuss the latest research on how to mobilize young voters. And then I spent some time on the phone discussing the organization of the "Campaign for the Civic Mission of Schools," as we're now calling a coalition effort to implement the recommendations of the Civic Mission of Schools report. All this talk leaves me with no energy for a blog on any other topic, but it was a rewarding day.
December 17, 2003
philosophy & the young child
I love Gareth B. Mathews' Philosophy & the Young Child (1980). It's full of dialogues in which kids between the ages of 4 and 10 explore profound issues of metaphysics, epistemology, logic, and ethics with an adult who's genuinely interested in their perspective. They supply fresh vision and curiosity; the adult provides some useful vocabulary and provocative questions.
Mathews believes that it's hard to think straight about fundamental philosophical questions once you've been encumbered by a bunch of conventional theories--and once you've been told that most deep questions are really simple and obvious. For example, we're inclined to think that a kid is silly if she asks why she doesn't see double, since she has two eyes. Actually, this is not such an easy question to answer, but most of us are soon socialized to dismiss such matters as childish.
Mathews skewers the great developmental psychologists, especially Piaget, who assumed that children first express naive views and then develop correct adult positions. Mathews points out that many of the "primitive" statements quoted by Piaget are actually more philosophically defensible than the adult positions he espouses without thinking twice. For instance, Piaget asserts that small children confuse "the data of the external world and those of the internal. Reality is impregnated with self and thought is conceived as belonging to the category of physical matter." When you grow up, according to Piaget, you realize that there are two separate domains: thought and matter. But Mathews quotes his own teacher, W.V.O. Quine (often called the greatest American philosopher), who told him, "Let's face it, Mathews. It's one world and it's a physical world." This is exactly the position that Piaget calls "primitive" and expects kids to drop as they "develop."
Another treat in Mathews' book is his identification of a whole genre of children's literature: "philosophical whimsy." In some books that small children love, the plot is not driven forward by a practical problem or threat or a clash among characters. Rather, the protagonists face purely logical or epistemological puzzles. A simple example is Morris the Moose by B. Wiseman, in which Morris keeps trying to prove to other animals that they are moose like him. "My mother is a cow, so I'm a cow," says the cow. "You're a moose, so your mother was a moose," Morris replies. The whole book is about what makes a proof. This is a short and light-weight example, but the genre of philosophical whimsy also embraces Alice in Wonderland, Winnie the Pooh, and the Wizard of Oz.
December 16, 2003
the importance of teachers
Thanks to an excellent speech by Dan Fallon (a former colleague of mine, now at Carnegie Corporation), I understand education policy much better. Dan shows that in the 1960s, experts and policymakers were much influenced by James Coleman's massive studies, which were later confirmed by Christopher Jencks. Using the data they had, these scholars found that you could predict academic success very accurately by looking only at the home and the neighborhood from which a kid came. In other words, schools didn't matter; society did. Coleman in particular expressed caveats about this finding, but it was the simple summary of his work.
Then (as Fallon explains it) Southern governors in at least five states mandated tests as part of their efforts to improve education. These tests for the first time collected data on students, teachers, and schools. A Tennessee agricultural statistician named William Sanders realized that the data would allow him to find out whether it makes a difference which teacher you have. In short, the answer is yes. Numerous subsequent studies have confirmed that differences in teachers make a huge difference for kids. In particular, teachers' own education and their teaching styles are extremely important.
In the little part of the education world that I know something about--civics--exactly the same pattern occurred. In the 1960s, political scientists looked at available data and concluded that you could fully predict an adolescent's civic and political behavior and attitudes based on his or her family and community; schools made no difference. As a result, scholarship on civic education virtually ceased. Then the National Assessment of Educational Progress (NAEP) civics exam collected data on civic knowledge. Lo and behold, it turned out to matter whether and how young people are taught civics.
December 15, 2003
Saddam and the US horserace
Fred Barnes writes that it would be "crass" to "assess the politics of the capture of Saddam Hussein." (He proceeds to do so anyway.) Meanwhile, The New York Times webpage ran a story yesterday that began: "How big a political lift will President Bush derive from the capture of Saddam Hussein? Very big indeed, said several political scientists, who used words like 'huge,' 'enormous' and 'profound.' ... 'My first reaction was, you might as well call off the election,' said Prof. Allan J. Lichtman, a historian at American University." By this morning, this story had disappeared from the Times, although it's still available via the International Herald Tribune. (The story that actually appeared in today's Times is more nuanced, less prominent, and focuses mainly on the dangers for Howard Dean.)
I admit that one of my first thoughts upon hearing about Saddam's capture was: How does this affect the election? But I felt guilty about having that thought. On reflection, a number of (not entirely consistent) ideas came to mind:
1. It would be completely crass for a candidate to talk about the effects of a major military event on his or her own political prospects--that would be putting concern for self over country. If a candidate has such thoughts, he should wish that he didn't and not utter them aloud. Thus we rightly expect Democratic candidates to praise the capture of Saddam and keep any regrets strictly to themselves. Likewise, it was obnoxious for Tom DeLay to disparage the success of the Kosovo operation under President Clinton. ("For us to call this a victory and to commend the President of the United States as the Commander in Chief showing great leadership in Operation Allied Force is a farce," DeLay said on the House floor [Cong. Rec., 1999, p. H5210].)
2. If you're a citizen and not a candidate, it is reasonable to wonder how a major event will affect the next election. After all, you may think that it is very important for the challenger to win, and thus your delight at the success of the American military may be tempered by regret at the advantage given to the despised incumbent. You may reasonably weigh the benefits of any victory against the damage that the president will do if he's re-elected. Republicans thought that Bill Clinton was harming America; thus they were entitled to think that any victory achieved under his Administration was partly a bad thing. The same applies to Democrats under Bush. However, patriotism requires that you not overrate the importance of your favorite party's winning. A great achievement by the current administration may be more important than the result of the next election. Indeed, this is why we don't want politicians to consider the effects of major events on their own election: we assume that they will overestimate their own significance.
3. It is very hard to predict the effects of current events on future elections. For example, I can imagine a story appearing one month from now that begins, "The Bush Administration is no longer delighted about Saddam Hussein's capture. With Saddam out of the way and the violence continuing, it has become increasingly clear that the unrest in Iraq resulted from an extremely difficult underlying situation for which Mr. Bush was unprepared. ..." Or I can imagine that the capture and trial of Saddam will make a huge psychological difference and aid the transition to democracy. Who knows? One of the problems with speculative political press coverage is its unreliability. (Maybe the Times pulled the story that first ran on their website because they decided that political scientist Allan J. Lichtman is no expert on the Middle East.)
4. Our job as citizens is to decide who would do the best job in the future. Whether an event will cause our fellow Americans to vote one way or the other should be irrelevant to that decision. Thus we shouldn't pay attention to "horse race" stories (ones that discuss the effects of current events on candidates). I realize that it's hard to resist an occasional look at such stories, but the less horse race news, the better. We want the press to tell us what happened today, and why--not what may happen as a result in November.
5. Horse race coverage of foreign affairs generally hurts challengers. Journalists often say, "This victory in Iraq helps Bush; this disaster helps Dean," and so on. It's easy to draw the lesson that Democrats want us to fail. Democratic candidates may insist that this is not the case, but their message is overwhelmed by news stories that award them black eyes every time things go well. The same would happen to Republicans under a Democratic administration.
December 12, 2003
Almost a month ago, I was at Wingspread, the retreat center near Racine, WI, to attend a meeting on national and community service programs. This was my second visit to Wingspread. The first was in 1988. Then I was one of two token college students at a meeting otherwise filled with foundation executives and college administrators, including the great Father Ted Hesburgh, President of Notre Dame. The subject was ... community and national service programs. The discussions of those days led more or less directly to George Bush's Points of Light Foundation and then Americorps under Bill Clinton. (I contributed nothing to that history, but I observed a piece of it.)
Two meetings on the same subject, 15 years apart. Since that first memorable experience at a non-partisan, non-profit, professional conference, I have attended similar gatherings on campaign-finance reform, public journalism, civic renewal, digital media policy, Internet research, civic education, service-learning, philanthropy and civil society, economic development and civil society, deliberative democracy, engaged universities, youth digital media work, values in higher education, social capital in Latin America, and many other subjects. There's a whole culture of these events: shared taxi rides from the airport; tables arranged around hollow squares; introductory sessions where you go around the room and everyone says where they're from; "break-out sessions" with "flip charts" and "reporting back" to the full group. The conversations tend to drift, especially once there's a long list of people waiting to speak. Speakers refer politely to previous comments ("Building on what John said, ..."). There are complaints, often highly justified, about the people not represented "in the room"--usually racial minorities, but also youth and (in the circles I travel) conservatives. Participants quickly begin to speak as "we" and to imply a hostile outside world, even though they are often not clear about what goals and values they share. There's always a point when everyone starts talking about "message" (i.e., the need to communicate some simple idea to the broad public).
In short, there are many frustrations. Yet I can't think of any better models, and civil society would be much weaker without these events. Thus I fully expect to be back at Wingspread in 2018, talking about ... community and national service.
December 11, 2003
the campaign finance decision
Justice Scalia's dissent in the recent landmark campaign finance case is written with characteristic brio and affords an opportunity to consider the deepest issues. Some of Scalia's points are rebuttals to arguments made during Congressional debate that I would not myself defend. For example, Members of Congress had claimed that there was too much overall spending on politics, and that "attack ads" were too negative. I agree with Scalia that the total quantity of spending is not too high, and that attacks on incumbents are desirable. But there is still a very good reason for campaign-finance reform: namely, to curb the disproportionate influence of organized interests. Against this position, Scalia makes five main points:
1. It is outrageous to restrict speech by limiting campaign donations that buy political advertising, when the courts are otherwise so protective of speech that they find rights to "virtual child pornography," "tobacco advertising," "dissemination of illegally intercepted communications," and "sexually explicit cable programming."
I would reply that campaign finance involves much more serious considerations on both sides than, say, sexually explicit cable shows. On one hand, there is a connection between spending and speech. Congress could limit spending in such a way as to prevent criticism of itself: a serious danger. On the other hand, massive, unregulated campaign contributions threaten to undermine a democratic system based on one-person, one-vote. Thus not all limits on contributions are constitutional, but reasonable limits will enhance core constitutional values. The same is not true of pornography or commercial advertising, where the tension is between free speech (a constitutional right) and decency.
2. Limits on campaign donations are patronizing. They assume that voters are easily manipulated, even when they have access to information about contributors. Scalia says: "The premise of the First Amendment is that the American people are neither sheep nor fools, and hence fully capable of considering both the substance of the speech presented to them and its proximate and ultimate source. If that premise is wrong, our democracy has a much greater problem to overcome than merely the influence of amassed wealth. Given the premises of democracy, there is no such thing as too much speech."
I'm not a sheep, but I do feel completely unable to use campaign finance disclosure information to guide my voting decisions. First of all, major candidates from both parties take money from long lists of donors, so I cannot cast my vote for someone who is "clean." Second, only a tiny proportion of congressional business is covered in the press; the rest is too routine and complex for anyone to follow. Yet it is precisely on the routine, nitty-gritty, economic issues that campaign donations have the most impact. Money doesn't so much determine votes as put some issues on the agenda and keep others off. As the majority concludes, "unlike straight cash-for-votes transactions, such corruption is neither easily detected nor practical to criminalize. The best means of prevention is to identify and to remove the temptation."
3. Campaign contributions have limited impact, at most. "Evil corporate (and private affluent) influences are well enough checked (so long as adequate campaign-expenditure disclosure rules exist) by the politician's fear of being portrayed as 'in the pocket' of so-called moneyed interests."
The most sophisticated version of this argument uses data on money and votes to assert that campaign donations cancel one another out. I tried to debunk that conclusion in an article and in my New Progressive Era book. There's a lot of empirical evidence that campaign money does indeed change the political agenda. I'm delighted that the majority accepted this argument, finding that "The evidence connects soft money to manipulations of the legislative calendar, leading to Congress' failure to enact, among other things, generic drug legislation, tort reform, and tobacco legislation. ... 'Donations from the tobacco industry to Republicans scuttled tobacco legislation, as contributions from the trial lawyers to Democrats stopped tort reform' [according to former Sen. Paul Simon]. To claim that such actions do not change legislative outcomes surely misunderstands the legislative process."
4. Money probably does buy access (rather than influence), but that's acceptable. "It cannot be denied ... that corporate (like noncorporate) allies will have greater access to the officeholder, and that he will tend to favor the same causes as those who support him (which is usually why they supported him). That is the nature of politics -- if not indeed human nature -- and how this can properly be considered 'corruption' (or 'the appearance of corruption') with regard to corporate allies and not with regard to other allies is beyond me."
Here I cannot help detecting class bias in Scalia's reasoning. It's not as if politicians will favor a random selection of Americans who just happen to support them. Those who get in the door--overwhelmingly--will represent investor and management interests. Politicians will also listen to some officials of working-class organizations, such as unions, but these lobbyists will be dramatically outnumbered, and they will be well-paid professionals who imperfectly represent their membership.
"Pluralists" in political science are those who believe that modern politics is and ought to be a competition among numerous interest groups, in which none has the power to prevail. Pluralists have often defended the campaign finance regime on the grounds that no single moneyed interest is dominant; there's cash available from all sides. But I agree with EE Schattschneider: "The flaw in the pluralist heaven is that the choir sings with a strong upper-class accent."
December 10, 2003
ideology and civics
I spoke yesterday at the Learn & Serve America conference, which convenes people who run federally funded community-service programs in schools. I talked about the Civic Mission of Schools report, which my organization and Carnegie Corporation of New York published earlier this year. One person in the audience said that he had read the first sentence to colleagues back at his home college, and they interpreted it as ridiculously and offensively conservative. Neither the questioner nor I had the report with us, so we argued about exactly what it says. In fact, it begins as follows:
"For more than 250 years, Americans have shared a vision of a democracy in which all citizens understand, appreciate, and engage actively in civic and political life. In recent decades, however, increasing numbers of Americans have disengaged from civic and political institutions such as voluntary associations, religious congregations, community-based organizations, and political and electoral activities such as voting and being informed about public issues."
I didn't write this language, but I like it and would resist seeing it as conservative. I do think that there has been a strong ideal of equality and democratic participation in America since its founding. (Reality has been a different matter, but ideals are important.) Moreover, the last few decades have witnessed substantial and troubling declines, especially a one-third drop in youth voting and a four-fifths drop in young people's expressed interest in news. Incidentally, these trends are of greater concern to liberals than to conservatives, because they result in a smaller and older electorate. What's more, one reason for these trends is the demise of traditional mobilizing institutions, especially unions. If there's nostalgia in the report, it's for the activist 1960s, not for 1950 or 1850.
December 9, 2003
the real origins of the Internet
There's a standard version of the history of the Internet that traces it back to ARPA (the Advanced Research Projects Agency) in the 1960s. ARPA developed a way for computers to exchange information in small packets, so that two computers would not need to open a permanent and exclusive channel (such as a standard phone connection) in order to remain constantly in touch. Instead, they would send messages in small chunks that could be routed through whatever computers happened to be online until they reached their destination. ARPA was a military outfit (it soon became DARPA; the "D" stands for "Defense"), and its motive was to create a new communications network that could withstand massive disruption during wartime.
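The core idea is worth pausing over: a message is cut into small, numbered chunks that travel independently and may arrive out of order, then get reassembled at the destination. Here is a toy sketch of that idea in Python (purely illustrative; the names `packetize`, `route`, and `reassemble` are my own inventions, not the actual ARPANET protocol):

```python
import random

def packetize(message, size=4):
    """Split a message into numbered packets of at most `size` characters."""
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def route(packets):
    """Deliver packets independently; arrival order is not guaranteed."""
    arrived = packets[:]
    random.shuffle(arrived)  # each packet may take a different path
    return arrived

def reassemble(arrived):
    """Rebuild the original message using the sequence numbers."""
    return "".join(chunk for _, chunk in sorted(arrived))

msg = "no dedicated circuit needed"
assert reassemble(route(packetize(msg))) == msg
```

The point of the design is the last line: because each chunk carries its own sequence number, no permanent, exclusive channel between sender and receiver is ever required.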
The DARPA system improved, and similar processes developed separately in the academic world. Under the auspices of the National Science Foundation, these networks were brought together (starting with a process for sharing email). After a while, the NSF named this network of networks the Internet, and so we entered the current era.
This is all true and important, but it's like explaining the origins of a human being by listing all of her direct ancestors who happen to share her last name: her father, her paternal grandfather, and so on back. Lots of other ancestors have also contributed their genes and nurture, although their names are harder to retrieve. Similarly, if you look around today's World Wide Web, you'll see numerous important features that did not arise from ARPA, DARPA, or the NSF.
Two quick examples: 1) I use online library catalogues all the time, and they are a significant part of the Internet. Their genealogy begins with handwritten book catalogues (which, for all I know, are as old as ancient Alexandria), and then moves to computerized databases in major research libraries, which became accessible via modem at least 25 years ago, which then became accessible by telnet, and which are now usually searchable through a Web browser.
2) Elaborate multiplayer games like MUDs and MOOs are another part of today's net. I think their ancestors include: wargames with lead figures in the era of H.G. Wells; role-playing games after World War II; role-playing games played by correspondence, which arose roughly at the same time as computerized single-player wargames; networked computerized wargames; and finally multiplayer games on the Web.
There is also a story that starts with the first mechanical computers and concludes with modern handheld devices for browsing the Web (this is the hardware side of the Internet's genealogy). And there's a history that starts from the earliest operating systems and concludes with Windows and Linux, by way of Xerox and Bell Telephone.
For ideological reasons, I like the story that starts with ARPA. That was a federal agency that used taxpayer money, which suggests that the Internet belongs to us (collectively). However, it's simply untrue that the Federal Government created the Internet, when so much of its value arose from other sources. The 'net is ours in a different way, more as a folk culture belongs to the group that gradually built it over the generations.
December 8, 2003
visual aspects of music
Hearing live chamber music one night last week, I thought about the visual dimension of music, which we miss when we listen to recordings. Musicians often show a lot of expression on their faces, and they exchange meaningful looks that are interesting to interpret. In a string quartet, they all hunch over when they're playing fast and intensely, and then sit back during lulls. I also like the general sight of their gleaming wooden instruments and slender bows, vibrating like insect wings.
I suspect that composers often think about visual issues when they write. For example, why give a theme to the first violin and the accompaniment to the second, and then switch their roles after a few bars? On a recording, it would sound the same if the first violin repeated the melody. But it's visually interesting to see a motif passed around a semicircle of musicians, or bounced back and forth.
One of the pieces I heard last week was Tchaikovsky's sextet for strings, "Souvenir de Florence," which I happen to know quite well from a CD. According to the program notes, the composer told his brother, "I definitely do not want to write just any old tune and then arrange it for six instruments, I want a sextet--that is, six independent voices, so that it can never be anything but a sextet." By listening to a recording, someone with a reasonably experienced ear could tell that there are two violins, two violas, and two cellos playing; and the piece would sound different with a different ensemble. However, you would need a fabulous ear to tell that the first viola has a consistently different role from the second. This is much clearer when you can see that the first viola is sitting over there, and she's the young Japanese-American with a somewhat worried expression who re-tuned after the first movement; whereas the second viola is an older Jewish gentleman with a serene expression. One might object that these assignments were no part of Tchaikovsky's plan. But he did expect us to be able to keep track of parts. Moreover, the combination of different musical roles, instruments, and players' faces creates an interesting aesthetic layer that is missing on a CD.
December 5, 2003
universal v. particular in ethics
In ethics, the words "universal," "general," and "particular" are used in three entirely different contexts. First, there is the issue of cultural difference. Some people say, "Morality is universal," meaning that the same rules or judgments ought to apply to members of any culture. Their opponents reply that at least some moral principles are particular to cultures (they only bind people who come from some backgrounds).
Meanwhile, some people say, "Obligations are universal," meaning that we have the same duties to all human beings. For instance, perhaps we are required to maximize everyone's happiness, to the best of our ability, not favoring some over others. Opponents of this kind of universalism reply that we have stronger obligations to particular people, such as our own children or compatriots. (See, for example, this good article by blogger and public intellectual Amitai Etzioni.)
Finally, some people say, "What is right to do in a particular case is shown by the correct application of a general or universal moral rule." Their opponents reply that we can and should decide what to do by looking carefully at all the features of each particular case. They agree that there is a right or wrong thing to do in each circumstance; but general rules and principles are unreliable guides to action. Any rule or principle that makes one situation good may make another one bad.
These three arguments are distinct analytically. If you take the "universalist" side in one debate, it does not follow that you must also take it in the others. One can, for example, believe that all people (regardless of culture) ought to be partial toward their own particular children. That view would combine two forms of universalism with one variety of particularism. Or one can believe that very abstract, general rules are never good guides to action, yet everyone from every culture should agree that this mother, in this particular set of circumstances, was right to feed her own child and to let a stranger go hungry. Or one can believe that we ought to treat everyone with precise equality, but only because we are members of a distinctly Western and modernist culture; there is an abstract rule of equal treatment that binds us but does not apply elsewhere.
I think that the only illogical combination is resistance to universal rules plus commitment to impartiality, because impartiality seems best construed as a rule that applies in all cases ("treat everyone alike"). Particularism is consistent with partiality, if partiality just means that sometimes it's OK to discriminate.
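Since the three debates are analytically independent, there are eight possible combinations of positions, and the claim above is that exactly one pairing is inconsistent. That can be checked by brute force (a small sketch; the encoding of the three debates as booleans is my own):

```python
from itertools import product

# Three independent debates, each with a "universalist" (True) and a
# "particularist" (False) side:
#   cultural  - do the same moral judgments bind every culture?
#   impartial - do we owe identical duties to all people?
#   rules     - is right action fixed by general rules (vs. case by case)?
def consistent(cultural, impartial, rules):
    # The one combination called illogical above: demanding strict
    # impartiality (itself a rule covering all cases) while denying
    # that general rules ever guide action.
    return not (impartial and not rules)

combos = list(product([True, False], repeat=3))
ok = [c for c in combos if consistent(*c)]
print(len(combos), len(ok))  # 8 combinations, 6 consistent
```

Of the eight combinations, the two ruled out are those that pair impartiality with the rejection of rules; everything else is a coherent (if not necessarily attractive) position.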
I suspect that there is a psychological tendency for some people to embrace universalism in all its forms, or else all forms of particularism; but there is no logical reason for this tendency. On the contrary, there may be some illogic involved. For example, some people favor partiality towards kin and countrymen, and they think that they can support this value by rejecting cultural universalism. That is a non sequitur, although probably a common one. (We see it in "Romantic" reactions against "Enlightenment" universalism.) Likewise, one might fear the nihilistic consequences of cultural relativism and therefore favor abstract, rule-based ethics, but this is another illogical move.
My own view, in a nutshell, combines cultural universalism (everyone should agree in their assessment of any particular case, if they understand it fully); openness to partiality (sometimes it is right to discriminate in favor of certain people with whom one has special relationships); and "particularism" about ethical judgments (we can and should judge cases by closely examining their details, not by applying rules).
December 4, 2003
I received news yesterday that we were awarded $106,230 by the National Geographic Foundation. The project involves working with high school students to make computerized maps of safe and walkable streets, parks, and nutritious food sources (stores and restaurants) in their city. Kids will walk around the community with Palm Pilots, entering data. We will also collect data on adolescents' "healthy living" behavior (nutritious eating and exercise). Using these data, we will be able to create maps showing safe streets and good food sources, for public display on our website (www.princegeorges.org). Such maps are an example of the kind of relatively sophisticated public good that I find most valuable in the Internet "commons." We will also try to generate a statistical model showing the impact of highly varied local geography on residents' health-related behavior. This model will address the question: "Can we reduce obesity through urban planning?" Finally, we will keep track of the students' civic engagement, on the theory that community-based research is a good form of civic education.
December 3, 2003
lessons from Houston
According to today's New York Times, the Houston school "miracle" was illusory. After the state imposed a strict regime of standards and high-stakes assessment, students in Houston dramatically improved their performance on a specific Texas test. However, scores on the national Stanford Achievement Test did not rise much (no faster than in other cities) and actually fell in 9th-11th grade. Moreover, gaps by race disappeared on the Texas exam but remained unchanged on the Stanford Achievement Test.
The argument for standards-and-accountability runs like this: Students' general aptitude can be measured with standardized tests. The higher their scores, the better prepared they are for college or work. Any responsible test should generate roughly the same results. Forcing students and schools to score well on tests will spur them to improve their aptitude. Instead, we find that raising the stakes can cause students to do much better on one test while not budging their results on another exam. In principle, the reason could be that either the Texas exam or the Stanford test was flawed. More likely, teachers learned to prepare their students for the particular instrument that would determine their fates: in this case, the state test. Scores on that instrument rose. But students' "aptitude" or general educational preparation (if such a thing can be measured at all) did not rise significantly.
This finding will not surprise the many critics of No Child Left Behind, the federal law passed in 2002 that is transforming American education. But for friends of NCLB, the Houston results should be deeply troubling.
December 2, 2003
Iraq and Al Qaeda
Al Qaeda hasn't attacked any US domestic targets since 2001. Maybe this is because Osama bin Laden is only interested in crimes worse than the ones he ordered on 9/11, and he's now planning something truly devastating. Or perhaps Al Qaeda has been temporarily battered and foiled, but will soon strike again.
On the other hand, could it be that the invasion of Iraq has made the US a less desirable target? A "Tom-Friedmanesque" argument would go like this: Osama bin Laden is only interested in overthrowing secular or corrupt governments in Moslem countries. He doesn't care about an infidel nation like the US. He does, however, regard America as a source of support for the regimes in Egypt and the Gulf. Furthermore, he used to think that we would be easy to scare. Thus he believed that he could move toward his goal by striking a blow against the United States, thereby causing us to disengage from the Middle East. This was supposed to be an easy step in his overall plan. Instead, 9/11 led to the occupation of two historically Moslem states: Iraq and Afghanistan. To be sure, these US adventures have created targets and opportunities for Al Qaeda. But they also pose serious risks for Islamic extremism. Thus it's no longer clear that attacking the US is a logical step on the way to bin Laden's goals. Instead, he is now ordering attacks aimed at destabilizing the regimes that he actually wants to overthrow, in Indonesia, Morocco, Saudi Arabia, and Turkey.
By itself, this argument (even if true) would not justify a war against Iraq, but it would weigh on the scales of judgment.
December 1, 2003
Ariel's song, from Shakespeare's last play (The Tempest, I.ii), seems a premonition of modernism. In traditional poetry, it's fairly obvious what is being described, represented, or signified. But it takes sophistication to notice the formal features of the poetry itself (such as meter, rhyme, and assonance) and any allusions to other literary works. In some modernist poetry, by contrast, what is described is unclear, or there may not be any literal referent at all, but the formal features of the writing immediately draw our attention. Thus modernist poetry can be more or less abstract in a way that recalls the modern visual arts.
Ariel's song is striking because the characters who hear it do not know if it means anything; they cannot see a speaker and may simply be hearing the wind. That it is poetry, however, becomes obvious from the alliteration, rhyme, and powerful rhythmic scheme:
Full fathom five thy father lies;
Of his bones are corals made;
Those are pearls that were his eyes;
Nothing of him that doth fade
But doth suffer a sea change
Into something rich and strange.
Sea nymphs hourly ring his knell:
Hark! Now I hear them--ding dong bell.
The form tells us this is poetry (and very beautiful and memorable), but is it about anything? It conjures up an image, but not one that necessarily connects to the rest of the play. Ferdinand thinks he finds a meaning in it: "The ditty does remember my drowned father." His interpretation may be right, but there is no apparent reason for a voice suddenly to describe his father dead beneath the sea. This is an experience, then, of formal beauty that may or may not have significance or explanation--and that seems characteristic of modernism.