February 27, 2005
civic education: the case for smaller schools
The nation's governors met this weekend to discuss high school reform. They identified real problems, including a high-school completion rate of only about 70% and a set of curricula and standards that obviously aren't working. But their conversation apparently focused on preparing students for work and college--not citizenship. They called for regular standardized testing rather than reform of schools themselves. I was hoping for more emphasis on school size, which is a signature issue of the Bill and Melinda Gates Foundation. Bill Gates himself addressed the governors and said:
The three R’s [rigor, relevance, and relationships] are almost always easier to promote in smaller high schools. The smaller size gives teachers and staff the chance to create an environment where students achieve at a higher level and rarely fall through the cracks. Students in smaller schools are more motivated, have higher attendance rates, feel safer, and graduate and attend college in higher numbers.
He was right, but the governors mainly focused their attention on standards and accountability.
The average size of American primary and secondary schools increased four-fold between 1940 and 1965, from 100 to more than 400 (see this pdf, p. 26). Toward the end of that period (1959), James Conant identified small high schools as the single biggest problem in American education. He argued that they were economically inefficient, unprofessional, and unable to provide a wide range of equipment and specialized teachers. In addition to these arguments, other factors probably contributed to massive school consolidation in that era, including a tendency to close down historically black schools under court desegregation orders (not to mention the desire to field better football teams).
The result was the creation of very large schools, especially high schools, in which students were seen as consumers who should be permitted to choose among a wide variety of offerings (curricular and extracurricular) provided by specialists. Students were presumed to have diverse interests and abilities. Thus it was right that some should choose student government and AP courses while others preferred "shop" and basketball.
If we hope to create effective, committed, and responsible citizens, huge schools have several marked disadvantages. Relatively few students--mostly ones who are already on a successful track--can possibly participate in the extracurricular activities, such as school government and scholastic journalism, that seem most likely to teach civic skills. Students in large schools tend to self-select into cliques and can avoid interacting with those different from themselves. Parents and other adults in the community have little impact on these large, bureaucratic institutions; so schools are rarely models of community problem-solving or active citizenship, nor can they create paths to participation in the broader world. We know that students who feel that they can have an impact on the governance of their own schools tend to be efficacious and interested in public affairs; but it is impossible for anyone to influence the overall atmosphere and structure of a huge school that is organized around private choice.
Finally, young people become victims of their own choices. You can pick up civic skills (as well as other ones) if you attend a school with a wide range of offerings and equipment and you elect to take the honors classes and work on the school newspaper. But those assets are of no use unless you have the confidence, motivation, network ties, and knowledge to use them. In a huge high school, there is little chance that any adult will try to steer a student who is on a mediocre track onto a more challenging one. Twenty years later, the student who chose easy courses and avoided clubs may still be paying a price, economically as well as socially and politically.
February 25, 2005
"trackback spam" (an ethical dilemma)
Blogs originally formed a "commons," even according to a narrow and technical definition of that term. They were always privately owned, of course. I'm the only person who can post here, because I pay the $9.95 monthly fee. However, the whole array of blogs, the "blogosphere," originally had an un-owned feel. That was because you could visit any site you liked, and any blogger could link to anyone else. The blogs with the most incoming links were the easiest to find through search engines. Therefore, prominence was difficult to buy; it resulted from others' "gifts" of links. Most blogs also permitted visitors to post their own ideas in the "comments" field, thus opening up space for free discussion. Finally, the clever "trackbacks" feature notified bloggers when their posts were discussed on other blogs. For example, when another site links to mine, it often sends a "trackback ping" to let me know; that site is then automatically listed here (under "links to this specific post") so that you can see who has written something in response to me.
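For the technically curious, the trackback mechanism described above is quite simple: per the Movable Type TrackBack specification, a "ping" is just an HTTP POST of a few form-encoded fields to the target post's trackback URL. Here is a minimal sketch in Python; the field names follow the spec, but the function names and the example URL are my own illustrations:

```python
# A sketch of a TrackBack "ping": an HTTP POST of form-encoded
# fields (title, url, excerpt, blog_name) to the target post's
# trackback URL. Function names here are illustrative.
from urllib.parse import urlencode
from urllib.request import Request, urlopen

def build_ping_payload(title, url, excerpt, blog_name):
    """Encode the standard TrackBack fields as a POST body."""
    return urlencode({
        "title": title,          # title of the linking post
        "url": url,              # permalink of the linking post
        "excerpt": excerpt,      # short excerpt of the linking post
        "blog_name": blog_name,  # name of the linking blog
    }).encode("utf-8")

def send_trackback_ping(trackback_url, **fields):
    """POST the ping; the target replies with a small XML document
    (<error>0</error> on success)."""
    req = Request(trackback_url, data=build_ping_payload(**fields))
    with urlopen(req) as resp:
        return resp.read()
```

The simplicity is exactly what the spammers exploit: nothing in the protocol verifies that the "linking post" actually exists or actually links to the target.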
In short, the network of interlinked blogs belonged to no one, it was unaffected by money, and it was open to newcomers. In all these respects, it was a commons.
All commons are fragile. One form of the "tragedy of the commons" is pollution.
The first pollution to hit blogs was simply the obnoxious comment--a price we always pay for liberty. Then came a more insidious problem, "comment spam." In order to increase their ranking with Google, pornographers and other bottom-feeding capitalists started placing comments on blogs that were really just links back to their own sites. They used software to place these links automatically. At the low point last summer, I occasionally received more than 100 comments per day that advertised various illegal pornography sites. I removed them quickly, but it was a big nuisance. Finally, by making it more difficult to post comments and by changing some technical aspects of this site, I reduced the problem to a manageable level. Since many other bloggers took similar steps around the same time (or stopped allowing comments altogether), "comment spam" became generally less efficient and profitable.
So the bad guys discovered "trackback spam." It's simple: they link to specific entries on blogs like mine, thereby getting themselves automatically listed on my site. Again, they use software that places the links automatically and rapidly. I now receive scores of incoming links every day, mostly from gambling sites. These trackbacks are very hard to remove using my blogging software (MovableType 2.64). There are many hundreds scattered through my archives. Although I remove offending trackbacks when I find them, I have left most of them alone because it's just too time-consuming to delete them. A link to a gambling site is not terribly offensive: not like a comment that actually describes some disreputable product.
However, Henry Farrell and Brad DeLong are now arguing that massive use of trackback spam could spoil the whole blog "commons." (By the way, I just sent them "trackback pings" by linking to them in the previous sentence; but my motives were pure.) If most links have nothing to do with the content or merit of the target site, then we will no longer be able to find popular blogs, or similar ones, by following links. Mechanisms like Technorati, which uses the link structure of the blogosphere to derive interesting information, can be badly damaged by trackback spam. You can even imagine popular sites selling links, which would make prominence a function of money.
As I've previously noted, the blogosphere never met ideals of equality or meritocracy. However, trackback spam will make things considerably worse--much as email spam has spoiled that medium. Bloggers could fight back by modifying software to prevent trackback spam and removing the spurious links. In my case, that would mean upgrading to more recent software and transferring all my archives--a pretty time-consuming process and one that I could easily screw up.
If we all took steps to block spam, it would go away. One should always do what one wants everyone else to do. That's the moral argument for taking the time to upgrade my site. The counter-argument is simply that I have other things to do with a whole day. ... Occasionally, when email spam, viruses, comment spam, and other nuisances really get me down, I wonder, "What's so great about the Internet?"
February 24, 2005
why Dante is "good to think with"
The Cambridge philosopher Miles Burnyeat says that Plato is “good to think with” (pdf, p. 20). I believe the same of Dante, which is why I chose to write a book about current moral issues by interpreting sections of the Divine Comedy. Like Plato’s dialogues, the Comedy is a concrete story in which abstract ideas appear as statements by embodied characters in specific historical circumstances, who attempt (to various degrees) to live by what they say. In both works, the question of irony arises. Plato is not Socrates, and Dante-the-poet is not Dante-the-pilgrim. It isn't clear what the author thinks of his main character's views.
It is not obvious why we should use old literary works to think about current moral issues, especially if the authors of those texts refused to say straightforwardly what they believed. However, the humanities are premised on the idea that we should “think with” novels, dialogues, and other narratives.
One explanation is that any text from the distant past provides an alternative perspective on the world. For instance, the Divine Comedy helps us to understand what it would be like to see everything (historical events, the behavior of animal species, even the movements of the stars) as if it had a moral purpose. But I must say that I do not find a morally teleological universe at all plausible; thus it may be interesting to understand Dante’s medieval teleology, but it is not life-altering. Perhaps it would be more challenging for a modern democrat to take seriously Dante’s celebrations of aristocratic and martial virtues.
However, Dante’s exotic perspective is not what I find most useful in him. The Divine Comedy is “good to think with” because it embodies several moral perspectives in vivid characters and situations—including the character of the author. Embodying moral values is how we must think if we want to make really serious ethical choices.
Philosophers often hope to be able to construct persuasive moral arguments that run inevitably from premises to conclusions. So, for example, Robert Nozick argues that if you value freedom, then you cannot favor schemes to guarantee particular distributions of wealth. Peter Singer argues that if you believe that we must minimize the quantity of suffering in the world, then you cannot permit vivisection. Judith Jarvis Thomson argues that if you believe that individuals may refuse to be involuntary life-support systems for other individuals, then you must permit abortion in cases of rape and incest.
Impressive as some of the arguments are, they have two major limitations. First, there is substantial and reasonable disagreement about the premises that generate the conclusions, and there may never be arguments strong enough to decide the premises. Second, there cannot be abstract arguments that address a wide (and crucial) range of questions involving our choice of a life or our valuation of characters and institutions. It is simply implausible that an argument, abstracted from context, could decide whether I should lead an active or a contemplative life, advise the powerful or seek power myself, pursue civic engagement or study mathematics, raise children or devote myself to work, or prefer the political economy of Norway to that of Hong Kong (or vice-versa). To grapple with such issues, we need detailed, “thick” descriptions that give us portraits of whole situations over time.
Thus, when I wanted to consider whether it was better to take moral guidance from stories or from philosophical principles, I found it most illuminating to “think with” a story—the Divine Comedy—in which that choice is a major theme, woven into the structure and not merely talked about. The tension between Dante’s love of human particularity and his commitment to abstract principles is embodied in the narrator’s ambivalence toward his main character; in the gradual but relentless movement from concrete and emotional narrative toward abstract speculation; and even in the metrical scheme, terza rima, which marries a metronomic regularity to great variety of rhythm and texture. Thus all aspects of literary criticism, including formal analysis, can help us to identify the values of Dante-pilgrim and of Dante-poet, and to decide whether we should agree with either of them.
February 23, 2005
reading and civics
It's hard to modify the current regime for elementary education in America, which revolves around annual high-stakes tests in a few subjects. However, without changing the fundamental structure now in place, we could infuse civic ideas and values into reading education. In general, there is a remarkable lack of nonfiction in early reading texts. According to studies summarized in this article, nonfiction represented just 12 percent of the texts included in five major “basal” reading series for first grade. "Furry-animal stories" dominate. A survey of 83 primary school teachers found that just 6 percent of the material discussed or used in their classrooms was factual.
However, students perform better on existing reading assessments if they have had practice reading in a variety of genres, including history, news, and science as well as fiction. Thus schools should incorporate more social studies into K-8 education as a strategy for complying with existing "No Child Left Behind" reading requirements. As a very important by-product of reading about George Washington, Rosa Parks, or Nelson Mandela, civic knowledge and skills should also increase.
February 22, 2005
who's more powerful, Bill Gates or Kim Jong-il?
This is a silly question, except that it can prompt some serious thinking about the nature of power, the state versus the market, and monopoly.
Let's say that "power" is the capacity to do things you want to do. Clearly, the Microsoft Chairman and "chief software architect" and the Chairman of North Korea's National Defense Commission (also known as the country's dictator) can each do things that the other cannot. Bill Gates can shape the daily experience of, I suppose, billions of human beings. He can influence the flow of global ideas and capital and the creation of new forms of culture. He can employ thousands, and fire any employees he chooses. He cannot, however, order people killed, turn Seoul into a "sea of fire," or sell nuclear bombs to al-Qaeda that might incinerate Manhattan. Nor can he order people to listen to his six operas, or have a conspicuously weird sex life. Kim Jong-il can do all of the latter things, but he cannot travel freely or have a frank conversation with anyone--or make the North Korean economy prosper while maintaining a totalitarian state.
As Hellmut Lotz (a Maryland graduate student) notes, a dictator cannot retire. Once he loses control of the military and police, he cannot guarantee his own security. He is inevitably a threat to the successor regime, which may decide to destroy him--regardless of any deal he may have struck before exiting power. Augusto Pinochet is just the latest example of an ex-dictator who has paid a serious price for relinquishing control. In contrast, Bill Gates can step down at any time and retain any amount of wealth and influence he likes.
In a perfectly competitive market, Gates' "power" would be very limited. He would have to produce software that people wanted to buy. He would be constantly maximizing his products' popularity, at peril of losing his position on top of a publicly-traded corporation. Any discretion that Gates does enjoy results from Microsoft's quasi-monopoly position. But his monopoly is insecure.
For his part, Kim Jong-il relies on all those guys with guns and bombs in the North Korean security apparatus. If they decide collectively to withdraw support, he's dead. Fortunately for him, soldiers in a totalitarian regime cannot safely communicate--unlike investors and consumers in a free market.
Bill Gates was part of the "personal computer revolution," which has certainly changed billions of people's lives. However, it is not easy to estimate his personal contribution to that revolution. On one theory, he was the very clever (and lucky) guy who capitalized on an inevitable technological development by getting there first. This is not to minimize his intelligence, but it does suggest that he wasn't very powerful. On the other hand, the discretionary decisions that Gates made early in Microsoft's history have had enormous and perhaps irreversible effects on the precise way that software works today.
It's easy to see that Microsoft has been on the side of history over the last 30 years, whereas North Korea is doomed. But that doesn't by itself show that Bill Gates is more powerful than Kim Jong-il. We don't say that a dinghy being swept to shore by a strong tide has more power than one battling to stay at sea.
February 21, 2005
the aesthetics of suburbs
This post was prompted by a family weekend in some beautiful parts of northern Virginia—which necessitated a lot of travel through ugly northern Virginia suburbs. I've always been a city person, prone to disparage suburban life on political, ethical, environmental, and aesthetic grounds. However, I recognize some important counter-arguments. Suburbs are becoming increasingly diverse and integrated. As they have grown, they have developed sophisticated cultural institutions. Suburban landscapes may be ugly, but our cities aren’t all gleaming Manhattans, either. For every Telegraph Hill or Michigan Avenue, there are hundreds of square blocks of slums and “brownfields” in our urban centers. Contrary to what I would have guessed, suburbanites (according to Robert Putnam) are slightly more involved in voluntary associations than urbanites are. In short, prejudice against suburbanites could be simple snobbery, closely related to the condescension that certain self-styled “intellectuals” have traditionally felt toward the bourgeoisie. After all, 150 million Americans have voted with their feet by moving to, or remaining in, the suburbs. Since they know their own circumstances best, their choice demands respect.
But the fear of being charged with snobbery should not prevent us from grappling with aesthetic problems. The beauty (or ugliness) of our environment is important. And I maintain that suburbs are ugly, especially if one considers their relative affluence. A retail strip in Bethesda, MD may be more attractive than many blighted streets in the Southeast quadrant of Washington, DC. But residents of Bethesda occupy homes with a median market value of $396,000 (in 2000). Their median household income was above $99,000 that year. Compared to urbanites of the same wealth, they live in ugly surroundings. Strip malls are uglier than shopping streets. Suburban office parks are uglier than contiguous office buildings. Clusters of wires are uglier than buried ones. On-ramps are uglier than urban intersections. Ranch houses tend to be uglier than row houses. I think huge lawns are uglier than front yards. Big revolving signs for fast food restaurants are certainly uglier than shop fronts.
It’s important to recognize that the suburban landscape is very recent. More than 100 million people had to be housed in new communities in the space of 50 years. Maybe these encampments will look better once we have settled in. Maybe we’ll figure out ways to improve the look of the familiar combination of a wide road, grassy strip, parking lot, and cement-block store. Higher density may improve aesthetics, simply because all that wasted—cleared but vacant—land in the suburbs is ugly.
But we’ll have to overcome another kind of problem, too: a dynamic that discourages investment in suburban public spaces. In cities, private owners have motives to invest in the public appearance of their property: a shop-window, a façade, or a lobby can be an efficient advertisement. Banks, hotels, and department stores are often major civic ornaments, constructed at private expense; but even traditional tenement houses had handsome cornices. The worst buildings in cities are often public ones: for example, housing “projects” constructed on the cheap by public authorities for the poorest residents.
In contrast, there is profound underinvestment in the outward appearance of suburban buildings. Even a fancy office park will often show basically blank walls to the outside. It is designed for the people in the offices. Visitors come by car and don’t need to be drawn in or impressed by the façade. There are expensive suburban restaurants with “designer” interiors that sit in completely undistinguished parking lots. The new arts center in Bethesda has received positive reviews for its glamorous architecture; but you can’t see the building from the nearby main road that carries probably 2,000 vehicles/hour at its peak. For all those “passers-by,” the architecture means nothing.
I suspect that this underinvestment is partly a function of low density. It usually doesn’t pay in a suburb to use architecture and landscaping as advertisements—especially when people zoom past in cars. Another cause of the problem may be rapid growth. If everything around your building looks ugly and temporary, then it doesn’t pay to try to improve the landscape by investing in the small piece of it that you own. Third, suburbs compete madly for new construction, and they can’t regulate aesthetics without simply losing development to the next jurisdiction. Finally, it may be individualism that explains both the migration to the “crabgrass frontier” and the failure, once there, to invest in public spaces.
February 18, 2005
service instead of politics? blame Clinton
When trying to explain trends in civic participation, we shouldn't overlook major political events and how they influence ideological groups.
When I was in college, in the late 1980s, I played a very small role in national discussions about how to increase opportunities for service. These discussions helped lay the groundwork for the Points of Light Foundation and then the Corporation for National and Community Service. Most of the young people in those discussions were left-liberals. For us, service seemed useful because it might sensitize people to problems like poverty and racism and lead to political action. However, service would be harmful, we thought, if it became an end in itself or a palliative. These were the explicit conclusions of a Wingspread retreat on service that I attended in 1988.
Thirteen years later, in 2001, Campus Compact brought a new group of college students to Wingspread to discuss civic engagement. These students said:
For the most part, we are frustrated with conventional politics, viewing it as inaccessible. [However,] while we are disillusioned with conventional politics (and therefore most forms of political activity), we are deeply involved in civic issues through non-traditional forms of engagement. We are neither apathetic nor disengaged. In fact, what many perceive as disengagement may actually be a conscious choice; for example, a few of us … actively avoided voting, not wanting to participate in what some of us view as a deeply flawed electoral process. … While we still hope to be able to participate in our political system effectively through traditional means, service is a viable and preferable (if not superior) alternative at this time.
I suspect that there was one major reason for the change in attitudes toward service among left-liberal youth: the Clinton Administration. In 1988, most young proponents of civic engagement, having grown up under Reagan, believed that a Democratic electoral victory was much more important than any form of direct service. In 2001, having experienced a Democratic presidency, idealistic young liberals were highly skeptical of government and politics as paths to social change. Note that a similar pattern of mobilization and disillusionment could easily affect conservative youth under different political circumstances.
The title of this post is basically facetious, since I think that the Clinton Administration was at least partly successful. But a Democratic president had much less impact than left-liberal college students would have hoped, ca. 1988. And disappointment can be very demobilizing.
February 17, 2005
the latest on our local work
For the last year, with generous support from the National Geographic Foundation, my colleagues and I have been working with high school kids to study the environmental causes of obesity in their community and display the results on public maps on the Prince George's Information Commons website. It has been a tortuous process, frequently derailed by changes in the school's administration and rules, flawed ideas and plans on my part, turnover among the University of Maryland team, attrition of students, and technical problems. In the latest phase, the kids have been trying to present their ideas in the form of audio segments, mixing voice and music. But the talented graduate student who was helping them had to quit this week for health reasons.
Despite all these problems, various groups of high school and college students with whom I have been working should have produced more than 30 separate research projects on various aspects of their community by the end of this summer. I am starting to envision the Commons website as a kind of magazine about Prince George's County, with "articles" in various formats (including audio and video) and lots of opportunities for readers to post comments. Blogging software like MovableType could underlie the whole site, although it wouldn't look or "feel" like a blog. After all, blogging software is essentially a database that displays selected entries on a website. So the Prince George's Information Commons could consist of a database of research products created by a wide range of students and adult volunteers. The homepage would present short summaries of some recent products, with links to the full results. Each summary could be accompanied by an enticing picture to draw visitors' interest.
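To make the "blogging software is essentially a database" point concrete, here is a minimal sketch: entries live in a table, and the homepage is just a query for the most recent ones. The schema and the sample entries below are illustrative inventions, not MovableType's actual schema or our actual projects:

```python
# Sketch: a "blog" is a table of entries, and the homepage is a
# query for the most recent ones. Schema and sample data are
# illustrative only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE entries (
    id INTEGER PRIMARY KEY,
    title TEXT, summary TEXT, body TEXT, posted TEXT)""")
conn.executemany(
    "INSERT INTO entries (title, summary, body, posted) VALUES (?, ?, ?, ?)",
    [("First project", "A short teaser for the homepage...", "...", "2005-02-10"),
     ("Second project", "Another short teaser...", "...", "2005-02-17")])

def homepage(conn, n=10):
    """Return (title, summary) pairs for the n most recent entries."""
    return conn.execute(
        "SELECT title, summary FROM entries ORDER BY posted DESC LIMIT ?",
        (n,)).fetchall()
```

Swap "blog post" for "student research product" and the same structure serves the Commons site: each record could carry audio, video, or maps, and the homepage template would render the summaries with links and pictures.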
Prince George's County is a large jurisdiction (pop. 838,000) without its own news media. It receives generally disparaging treatment from the Washington press corps, probably because it's the suburban county with the lowest income and the largest African American population (62.7%). I didn't get involved in these projects to try to create a news organ for the community, but that wouldn't be a bad thing.
February 16, 2005
the torture lawyers
My friend and former colleague David Luban has a very useful article in Slate explaining why the lawyers who advised the Bush administration to allow torture ("former White House counsel Alberto Gonzales, vice presidential counsel David Addington, Justice Department lawyers Jay Bybee and John Yoo, and Pentagon counsel William Haynes") violated legal ethics. Luban's analogy between those lawyers and Lynne Stewart--Sheik Omar Abdel Rahman's defense attorney, recently convicted of conspiring with him--is suggestive but not necessarily tight. The real payoff of Luban's piece is his explanation of the role of legal advisor versus that of advocate. The Bush lawyers acted as advocates when they were employed to give advice, and that violates legal ethics.
February 15, 2005
In my usual style, here is a very belated comment on two once-"hot" news stories: Larry Summers and Ward Churchill. For all their differences, these men are both university employees who got into trouble for their public speech. In both cases, "academic freedom" has been cited as a defense.
In my opinion, "academic freedom" is not an individual civil right that academics can wield in conflicts with their employers. Academics, like everyone else, have First Amendment rights, but those are rights against the state. The First Amendment does not require a university to pay us to say anything we like, nor must it grant us academic credit or preferment for our speech. Universities carefully and intensively regulate the speech of their students, professors, and administrators. You can't receive credit or a degree for your writing unless it fulfills a professor's assignment and meets all kinds of canonical standards of relevance and rigor. You can't get tenure unless your work is acceptable to the mainstream discipline in which you work. Even once you have tenure, you can't win grants, promotions, or opportunities to publish without subjecting your "speech" to peer review for content. Thus if academic freedom were a right of individuals, it would be a myth.
Academic freedom is not an individual civil right, but rather an institutional prerogative. When we support academic freedom, we mean that colleges and universities, scholarly associations, journals, and presses should be free to set their own standards for expression without (much) state interference. In other words, the ideal is autonomy for certain professional associations, not rights for their employees as individuals.
Tenure causes confusion: it makes us think that the central commitment of a university is to the individual autonomy of professors. But tenure only applies to senior faculty (not to students, junior faculty, or administrators). Moreover, it is part of a larger system. It is aimed against one problem--invidious political pressure on professors not to teach or publish unpopular ideas. That is a real threat, but universities also worry about "free speech" that is incompetent, undisciplined, or irrelevant. To address that problem, they put academics through a lengthy and grueling socialization process before they grant tenure. And even after tenure, they apply all kinds of pressure to make faculty express themselves in particular ways.
Which brings me to the two cases of recent weeks. I haven't made a study of Ward Churchill's writing, nor do I have time to do so. But there are tenured professors--possibly including Dr. Churchill--who are radical blowhards: offensive and totally lacking in rigor and discipline. Such people are one price we pay for the tenure system. (Some other costs are the burnouts and timeservers on our faculties.) If tenure makes sense, it's because the advantage of protecting trenchant, insightful radicals outweighs the cost of all those blowhards and timeservers. I don't know for sure that this price is worth paying--it probably is. In any case, we should evaluate tenure overall, and not let particular cases dominate our thinking. Thus Churchill may have to be allowed to speak offensively and foolishly in order to uphold an institutional rule that is valuable, overall.
As for Larry Summers: some have said that he "modeled" free speech by making a politically incorrect statement about women in science. I would reply that he modeled free speech but without rigor or discipline. Moreover, he is an administrator, and as such his primary duty is to shape and implement policies. Harvard ostensibly has a policy of attracting more female scientists. Summers' comment undermined Harvard's policy. As such, it was damaging. He was like a corporate executive who criticizes his company's product, or a U.S. ambassador who attacks American foreign policy in public. The First Amendment covers his speech, but that only means that he can't be prosecuted for it. He has no right to be paid for it. If Harvard chooses to retain him, which seems very likely, it will be because Summers' talents outweigh his mistakes. But his comment about women was a mistake, and "academic freedom" is no excuse.
[Added Feb. 17: It can be courageous and honorable for an employee to attack the policies of his or her organization, if the criticism is valid. However, such a critic must also be prepared to face the consequences. For instance, a US diplomat who criticizes American foreign policy may deserve public praise but ought to submit his or her resignation letter along with the critique. The same applies to a university president who undermines the institution's policies. But see Andrew Canter's challenging response in the comments.]
By the way, I don't have tenure and have never been on a tenure track. I'm fairly grateful not to have gone through the socialization process that tenure would have entailed.
February 14, 2005
how institutions socialize young people for citizenship
In 1928, Karl Mannheim argued that people tend to form stable civic identities in their late teens. As adolescents emerge from the relatively narrow horizons of their families and neighborhoods, they confront the broader world of governments, ideologies, parties, and nation-states. They must adopt some stance toward this world, whether it is passive acceptance, alienation, enthusiastic embrace, or personal obligation. After people form a stance, the effort required to change their minds is too costly unless major historical events intervene and require a reassessment. Given the relatively low salience of public life, inertia tends to dominate for the rest of our lives.
If Mannheim was even partly right, then it is important to ask how our institutions socialize young people for lifetimes of civic and political participation. The impact of these institutions is likely to change as their structure and behavior evolve. Thus a study of institutional change is crucial for our analysis of political development.
I suspect that the following are some of the most significant ways in which American institutions have changed their effects on political socialization over the past 25 years:
There is a problem with youth civic engagement, but it's important not to locate the problem inside young people's heads if the real cause of their alienation lies in institutions.
February 11, 2005
the Iraqi election, suicide bombing, & rational choice theory
An election is partly a public good, in the economist's precise sense. If representatives are selected peacefully and officials are made accountable to the majority, that is a good thing for most people. This good is indivisible. I cannot decide to forgo the benefits to me personally of democratic elections, nor sell my stake to someone else. Like national defense or the ozone layer, elections benefit all (or at least the whole majority faction), if they serve anyone.
There is a well-known problem with public goods. Whether a democratic election occurs depends on many people's behavior, yet each person benefits regardless of what he or she does. For example, you gain from our political system—whether or not you vote. Thus you may be tempted to free-ride and let others bear the burden of voting. Or you may feel that it's pointless to promote this public good, since others are unlikely to do their share.
The problem is not too acute when the only cost of an election is the time it takes to cast a vote. In the US, about half the eligible people choose to participate. But when voting is extremely dangerous, the cost becomes high indeed and we expect few people to turn out.
Fortunately, if many people publicly show that they are going to face the dangers of voting, then the calculation changes for other individuals. Everyone sees that collective action can work, that democracy can prevail. While the cost of their own participation remains high, the benefits also become tangible. Thus it was crucial that long lines formed early in Iraq and remained even when bombs went off.
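The dynamic in the last paragraph can be sketched as a simple threshold model (my illustration, not anything from the original post; all numbers are hypothetical): each citizen joins in only once enough other people are visibly voting to convince them that collective action will work, so a small bloc of unconditionally committed early voters can tip an entire population into turning out.

```python
# A minimal threshold-model sketch of turnout cascades (hypothetical numbers).
# Each citizen has a personal threshold: they vote once at least that many
# people are already visibly voting. Danger raises thresholds, but a visible
# committed bloc--like the long early lines in Iraq--can still tip a cascade.

def turnout(thresholds, committed):
    """Return the final number of voters, given each conditional citizen's
    threshold and a count of people who will vote unconditionally."""
    voting = committed
    while True:
        # Everyone whose threshold is met by the visible turnout joins in.
        new_total = committed + sum(1 for t in thresholds if t <= voting)
        if new_total == voting:  # no one else is persuaded; turnout is stable
            return voting
        voting = new_total

# 100 conditional citizens: person i votes once i others are visibly voting.
thresholds = list(range(1, 101))

print(turnout(thresholds, committed=0))  # 0: with no visible voters, nobody starts
print(turnout(thresholds, committed=1))  # 101: one committed voter tips everyone
```

The point of the sketch is that the committed few do not change anyone's abstract cost-benefit calculation; they change what everyone can *see*, and that visibility is what makes the collective action self-sustaining.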
Suicide bombing is a brilliant (although despicable) strategy for disrupting other people's collective action. It raises the "cost" of participation enormously by creating the distinct possibility that you'll die if you vote, join the civilian police force, or merely walk the streets. The best way to win a game of "chicken" is to remove your steering wheel and throw it out the window. Then it's clear that you're going to keep driving straight, and the other person must swerve aside. A suicide bomber is a person who has thrown his wheel out the window.
Nevertheless—and this is what I’m winding up to say—conspicuous collective action and solidarity can defeat suicide bombers, and that’s what seems to have happened in Iraq’s elections. I find this deeply moving. It's the best aspect of human behavior.
Elections have another side, of course. I have said that they are public goods because they produce better governments than other processes would. But elections are also competitive struggles to allocate scarce resources among parties with divergent interests. As soon as the collective ritual of voting ended in Iraq, the more mundane business of counting ballots began. As everyone knows, the detailed political situation that underlies the Iraqi vote is perilous: Shiites have too large a potential majority. They threaten to make the election a losing proposition (not a public good) for the Sunni minority. Creating public goods, a process well begun in the elections, will remain difficult for a long time to come.
February 9, 2005
Ruth Simmons is the President of Brown University. I had a chance to hear her speak and then joined her for a dinner yesterday. In the speech, she described her path from a small, East-Texas town where she was the twelfth child of sharecroppers to the presidency of an Ivy League university. I was particularly interested in her description of Houston in the 1950s. She said that a tight network of very talented Black teachers and ministers, plus a few lawyers and doctors, led the community and collaborated closely to help young people--they raised money for college tuition, picked out students' clothes, taught and counselled them. Because of segregation, these adults had no options other than to hold a few professional jobs within the Black community. This restriction was deeply unfair, but it meant that children who came from deep poverty had access to skilled and charismatic people in their own neighborhoods.
Dr. Simmons' parents were suspicious of schooling and constantly told her to take her nose out of books. She suggested three reasons for this attitude: her parents worked with their hands and distrusted "idleness"; they were afraid that Ruth might question racial injustice and be killed; and they worried that she might lose touch with (and respect for) her own family. A final reason could have been a kind of "bargain with reality." Since other opportunities had always been closed for African Americans, especially in rural East Texas, sharecroppers developed a pride in manual labor and a hostility to books to help validate their own lives. But Simmons was also exposed to community leaders for whom education was a route to freedom. Despite centuries of oppression, her teachers and other professionals prepared youth to become leaders in the hope that better opportunities would arise. I suspect that their hope was an essential precondition of the Civil Rights Movement.
the president's budget and civic education
The Bush Administration's budget proposal for education is available online. For those concerned about civic learning, here are two key points:
The budget is somewhat ambiguous about how to reform secondary education. On one hand, the title of the relevant subhead is "Finishing the Job: Bringing NCLB to High Schools," and money is earmarked for mandatory "testing in grades 9–11 in language arts and math." On the other hand, the following passage implies some flexibility:
This initiative provides $1.2 billion to help States implement a high school accountability framework and a wide range of effective interventions. In return for a commitment to improve academic achievement and graduation rates for secondary school students, States will receive the flexibility to choose which intervention strategies will be most effective in serving the needs of their at-risk high school students. Allowable activities would include vocational education programs, mentoring programs, and partnerships between high schools and colleges, among other approaches. A portion of the funding will be used for randomized trials and evaluations to identify the most effective intervention strategies to enable school administrators to make better choices on what educational strategies to adopt.
I read this as a negotiated statement. Those who simply want high-stakes testing to be expanded through the 12th grade probably have the upper hand, but they have made some room for people who see other ways to reform high schools.
[cross-posted from the CMS Community blog]
February 8, 2005
handbook of public deliberation
John Gastil and I are busy organizing the production of our co-edited volume, The Handbook of Public Deliberation: Strategies for Effective Civic Engagement in the 21st Century. Jossey Bass will publish it this summer. Of the 19 chapters, 16 describe very concrete and practical approaches to public deliberation; thus the book will offer a diverse menu of choices for civic groups, governments, school systems, and others to use. (The three remaining chapters are overviews of the field.) Since almost all chapters have been written by teams, usually composed of both scholars and practitioners, there are 44 authors in all. Coordinating everyone's participation has been quite a job for John and me. However, we'll reach a milestone tomorrow when we submit a fully edited and complete manuscript.
The cover design to the right is preliminary. We've asked for more people, more evident diversity, and less office ceiling. (Apparently, that's the publisher's office, and one arm belongs to our editor.) Still, I like the informality and zest of the basic design.
February 7, 2005
"every subject's soul is his own"
(Continuing Friday's theme. ...) There is no doubt, after Nuremberg, that soldiers must question the justification of their side's conduct during a conflict--and disobey any immoral orders. But should they worry about the purposes and legitimacy of the whole war? "Adam K. Anonymous" argued "no" on this blog. "In a democracy," he wrote, "the military is a tool, subjected to our elected representative[s], who should worry about the legitimacy of the war. The military, who don't represent the people, should not be in a position to make autonomous decisions about the legitimacy of the war." One could add that soldiers don't have all the information available to high elected officials, so they should simply follow orders about whether to wage war.
On the other hand, it might seem that soldiers in a democracy bear a particularly heavy responsibility for deciding whether to participate in a war. In a dictatorship, it's very hard to obtain information relevant to a moral assessment of your country's foreign policy. If you want to object, you may have no practical options; you certainly can't agitate publicly against the government. And passive resistance will probably just get you killed. All of these problems are less serious in a democracy, so perhaps the individual soldier must treat the decision to participate in a war--and thus to help kill other human beings--as a matter for personal judgment.
I'm not sure what to think, but I'm struck by the relevance of Henry V, act IV, scene 1.
King Harry is prowling through the English camp incognito on the night before Agincourt. His troops are weary and outnumbered five-to-one; they expect to die. He meets two disgruntled soldiers and defends the conduct of their leader (actually himself), ending: "methinks I could not die any where so contented as in the king's company; his cause being just and his quarrel honourable." The first soldier, Williams, replies: "That's more than we know." Williams implies that it's impossible for an ordinary "grunt" like him to assess the justice of the King's position in the war.
A second soldier, Bates, sees an advantage in their ignorance: they are absolved of moral responsibility: "Ay, or more than we should seek after; for we know enough, if we know we are the king's subjects: if his cause be wrong, our obedience to the king wipes the crime of it out of us." (Today, most of us do not see a monarchy as legitimate, but in the world of Henry V, the religious foundations of kingship work like democratic elections for us--they render Harry a legitimate ruler.)
Williams sees a corollary of Bates' point: if they are innocent because they follow the orders of a legitimate ruler who has access to information, then Harry is in moral peril: "But if the cause be not good, the king himself hath a heavy reckoning to make, when all those legs and arms and heads, chopped off in battle, shall join together at the latter day and cry all 'We died at such a place;' some swearing, some crying for a surgeon, some upon their wives left poor behind them, some upon the debts they owe, some upon their children rawly left. I am afeard there are few die well that die in a battle; for how can they charitably dispose of any thing, when blood is their argument? Now, if these men do not die well, it will be a black matter for the king that led them to it; whom to disobey were against all proportion of subjection."
The King understandably resists the implication that he is responsible for everything his men may do. Still concealing his identity, he says: "So, if a son that is by his father sent about merchandise do sinfully miscarry upon the sea, the imputation of his wickedness by your rule, should be imposed upon his father that sent him: or if a servant, under his master's command transporting a sum of money, be assailed by robbers and die in many irreconciled iniquities, you may call the business of the master the author of the servant's damnation: but this is not so: the king is not bound to answer the particular endings of his soldiers, the father of his son, nor the master of his servant; for they purpose not their death, when they purpose their services. Besides, there is no king, be his cause never so spotless, if it come to the arbitrement of swords, can try it out with all unspotted soldiers: some peradventure have on them the guilt of premeditated and contrived murder; some, of beguiling virgins with the broken seals of perjury; some, making the wars their bulwark, that have before gored the gentle bosom of peace with pillage and robbery. Now, if these men have defeated the law and outrun native punishment, though they can outstrip men, they have no wings to fly from God: war is his beadle, war is vengeance; so that here men are punished for before-breach of the king's laws in now the king's quarrel: where they feared the death, they have borne life away; and where they would be safe, they perish: then if they die unprovided, no more is the king guilty of their damnation than he was before guilty of those impieties for the which they are now visited. Every subject's duty is the king's; but every subject's soul is his own. 
Therefore should every soldier in the wars do as every sick man in his bed, wash every mote out of his conscience: and dying so, death is to him advantage; or not dying, the time was blessedly lost wherein such preparation was gained: and in him that escapes, it were not sin to think that, making God so free an offer, He let him outlive that day to see His greatness and to teach others how they should prepare."
To paraphrase: subjects must obey the king's decision to wage a war, at least after they have offered their services as soldiers. But their conduct in bello is their own moral responsibility. Left alone, Henry speaks a soliloquy about the lonely responsibilities of the king:
We must bear all. O hard condition,
Twin-born with greatness, subject to the breath
Of every fool, whose sense no more can feel
But his own wringing! What infinite heart's-ease
Must kings neglect, that private men enjoy!
Henry V is one of my least favorite plays of Shakespeare. It seems impossible to separate the perspective of the playwright from that of the king, who dominates the entire work with his particular vision. A monarchical ideology is built into the structure of the plot, and dissonant voices (such as those of Falstaff's old crew) are virtually suppressed. In contrast, Shakespeare usually displays "negative capability," or the capacity not to hold a doctrine of his own. He is "myriad-minded"--inhabiting the minds of all his thousands of characters. Given the overall shape of Henry V, it is tempting to assume that Harry wins the argument with Williams and Bates. However, I'd prefer to see act IV, scene 1 as a place where Shakespeare employs his usual "dialogic imagination." Harry has one perspective; Williams another; and it's up to us to decide what we must think.
February 4, 2005
just war theory
I've been thinking about just war theory, mainly because my colleagues and I discussed a good paper on that topic by Judy Lichtenberg today, but also because of Lt. Gen. James N. Mattis' recent comments ("Actually, it's a lot of fun to fight. You know, it's a hell of a hoot. It's fun to shoot some people. I'll be right upfront with you, I like brawling. ... You go into Afghanistan, you got guys who slap women around for five years because they didn't wear a veil. ...You know, guys like that ain't got no manhood left anyway. So it's a hell of a lot of fun to shoot them.")
Just war theory, with its roots in medieval Christian theology, traditionally separates jus ad bellum from jus in bello. The former deals with justifications for waging war; the latter, with acceptable behavior during a war. For instance, some would say that a just conflict is one waged in self-defense or one authorized by the Security Council to promote human rights. Meanwhile, just behavior during a war requires, for example, not deliberately harming civilians, protecting captives, and not taking hostages.
These two issues are separated so that even a nation that is waging a just war must restrain its conduct during the conflict; furthermore, even soldiers fighting in an unjust war must obey certain norms. Because of the distinction between the two sets of standards, Nazi officers could be prosecuted for violating the Geneva Conventions, but not for invading Poland. Likewise, high Allied officials could be held accountable (morally, if not legally) for decisions like the firebombing of Dresden, which was an immoral and unnecessary act in the middle of a just, defensive war.
However, separating jus ad bellum from jus in bello raises its own problems. First of all, the separation can excuse professional military people from worrying about the most important question, which is whether they are fighting a legitimate war in the first place. How the Wehrmacht honored the Geneva Convention was a lot less important than its invasion of the USSR, which led to the slaughter of more than 13 million uniformed Soviet soldiers. It seems strange to demand that a German officer risk his career and even his life in defense of the rules of war, but to excuse him from asking whether that war should be waged in the first place.
Second, a lot of the traditional components of jus in bello seem outmoded or indefensible. For instance, the distinction between combatants and non-combatants doesn't always make a moral difference. Soldiers can be completely innocent draftees (or volunteers in a just war), whereas civilians can be causally responsible for wicked conflicts. It is sometimes a moral mistake to say that you may kill people in uniform but not civilians.
Still, there is a chance that positive results come from having separate rules of justice ad bellum and in bello. On the one hand, political decision-makers (including citizens in democracies) should always carefully consider moral issues before they support military action--no matter how professional and ethical their army may be. Meanwhile, professional military people should have an ingrained sense of proper behavior in bello, regardless of the legitimacy of any overall conflict.
It's too much to ask a professional soldier, whether a draftee or a volunteer in the army of a legitimate state, to ask hard moral questions every time he sees an enemy in uniform. There just isn't time; obedience and instinct must take over. But it is good if the soldier's conscience is triggered when he sees a civilian or a prisoner. The distinction between combatants and non-combatants may be somewhat arbitrary; nevertheless, the triggering of a conscience can help prevent atrocities.
This is why we may accept that General Mattis is a good Marine and a useful guy to have on our side in a war, yet we don't want him to tell his men that killing Afghans is enjoyable. His bold and ruthless behavior on the battlefield is acceptable, maybe even admirable, assuming that the war itself is just. But his expressions of enthusiasm for killing members of an alien culture threaten to erode the scruples that should constrain all soldiers in bello.
February 3, 2005
(8:00 pm--I meant to add "come join us," because it's a call-in show, but now it's too late. However, you can listen via the WBUR website.)
(8:09 pm--For a parody of the kind of discussion I just participated in--in fact, for a parody of people like me--read the Onion's "Study: Watching Fewer than Four Hours of TV A Day Impairs Ability to Ridicule Pop Culture." Thanks to LibraryChronicles for the lead.)
February 2, 2005
on "constructivism" in education
"Constructivism" is one of the most influential words in the whole jargon of education--and a highly divisive one. It is a rallying-cry for many progressive educators and reformers, but an irritant to conservatives. Constructivists oppose the kind of scene in which a teacher stands before a disciplined class of children and endlessly tells them what is true. But they oppose that pedagogy for a variety of overlapping reasons, some of which I find more persuasive than others.
Creativity: Constructivists often see traditional pedagogy as excessively passive, because children are given everything ready-made in textbooks or by teachers. They want children to be creative, to generate their own works of art, narratives (including factual ones), rules and norms, clubs and other organizations, and social or service projects.
Child-centeredness: Constructivists often want educators to recognize the interests, goals, and "learning styles" of children at particular ages and in particular communities. Teachers are then supposed to tailor classroom experiences in order to capture kids' imaginations and interests. Education should "start where the kids are."
Pluralism: Constructivists emphasize that interests, values, and dispositions differ according to the culture, gender, and social class of students. Thus they oppose standardization, as epitomized by textbooks and "standardized" tests.
Experimentalism: Some constructivists want children to discover facts and methods through experimentation, not wait to be given answers. So, for example, it is better for students to re-discover an algorithm for solving a type of mathematical problem than simply to be taught how to solve it. According to constructivists, kids will remember and be able to apply the method better if they have "made" it themselves.
Holism: Constructivists oppose the separation of intellectual learning from social and emotional learning and ethical development. They see traditional pedagogy as narrow and dismissive of the "whole child."
Democracy: Many constructivists argue that democracy should not only be an outcome of education, but also an aspect of it. Students should share authority and responsibility in schools and classrooms (to various degrees) with adults.
Relativism/Skepticism: It is very common for constructivists to deny explicitly that there is any objective truth. They claim that people or cultures "construct" their own truths. Since many truths have been constructed, none is more objective or valid than the others.
I'd like to unpack educational "constructivism" into its components, because I admire some and quite strongly dislike others. For example, I'm in favor of creativity; this is a core value for me. However, I think it's an empirical question whether children use and remember knowledge best if they have re-discovered it for themselves. This may only be true of some knowledge and some children. Likewise, I think it's an empirical question whether democratically organized classrooms and schools produce the most competent and committed democratic citizens. They may, or they may not.
Relativism is my least favorite part of the constructivist package. Constructivists often deploy a relativist "epistemology" in the belief that it supports their practices. They favor creativity, democracy, experimentalism, holism, pluralism, and child-centeredness. They see "positivism" as the enemy of all these good things, and relativism as the one alternative to positivism that can support their pedagogy. The classic positivists believed that there were objective, verifiable, empirical (or "positive") facts, in contrast to theories, values, and metaphysical statements, which were merely subjective. In contrast, "constructivists hypothesize that it is the subject who actually invents reality and that knowledge is tied to an internal-subjective perspective where truth is replaced by ways of knowing."
But reality is obdurate. We can invent some things, but other things are real whether we like them or not. Although classical positivism is flawed, there are many ways to defend objectivity without being a positivist. No serious thinker has ever believed that the objective world is obvious, directly apprehended by reason, and uncontroversial. But denying it would be equally foolish. Thus I'm very unimpressed by assertions that "subjects invent reality."
Moreover, I think it's ethically bankrupt to pretend that people or groups can and should make up their own worlds. There are many white communities in which everyone would like to believe that chattel slavery was pleasant--or, at the very least, they would like to ignore it completely. The vicious wickedness of slavery is not part of their lifeworld. But it should be. If everyone "constructs" reality and individuals may decide what knowledge they want to create, then we have no right to challenge people to face uncomfortable realities.
In fact, relativism is bad for "constructivism," because two of constructivism's best components, experimentalism and democracy, require individuals to deal with a world outside themselves--a world not of their creation and not under their control.
February 1, 2005
students and the First Amendment
I’ve spent the last day and a half in the magnificent 23rd-floor offices of the First Amendment Center, which offer a panoramic view of the National Mall. We have been discussing a new Knight Foundation report on students and the free press. As you might expect, American adolescents poorly understand—and undervalue—the free speech and free press clauses of the First Amendment. For example, just over half (51%) agree that newspapers should be able to publish freely without government approval of each story. However, those students who have studied the Constitution and/or worked for school newspapers and other youth media are relatively likely to support freedom of the press.
This is an important study, especially for its details. (The executive summary—which describes adolescents’ general lack of knowledge and interest—will surprise no one.) However, some of the presenters, by decrying our clueless kids, simply reminded me why I prefer a different approach.
First of all, many of us have learned that education should be “asset-based.” Given any topic (knowledge of history or science, sexual behavior, concern for the environment, religious piety, voting, grammar) you can always show that average kids have too little knowledge and interest. After discovering such “deficits,” adults typically call for campaigns to raise students’ consciousness or change their behavior. These campaigns typically fail. But one can start in a different place, by recognizing that students are capable of tremendously creative and innovative and excellent work. Then the question becomes: How can we support and encourage such work? Incidentally, Knight has supported some of the best student journalism and student expression work in America, through the First Amendment Schools initiative, J-Ideas, and other initiatives.
Second, we should consider a genuine dialogue with students who have opted not to use traditional news media. I spend more than an hour of every day reading several newspapers in hard copy and online. Obviously, I think that at least some reporting is worthy of my time and attention. At the same time, I see very serious flaws in mainstream news media. The most prominent media (local television news, brief news updates on the radio, and chain newspapers) are particularly bad. So could it be that students are at least partly right to shun the press? Note that they could be partly right and partly wrong, in which case a dialogue might be productive for both sides.
Third, students must have a sense of political efficacy in order to take the news seriously. During the Freedom Forum event, ABC News’ Carole Simpson said that she is traveling around the country trying to persuade kids to pay attention to issues like Iraq and outsourcing. She tells them that these issues will affect them. But there is a missing piece in her argument. Why should you follow the news, even if it threatens to affect you personally, unless you feel you can do something about current events? For example, imagine that Iraq is going to turn into a quagmire, and today’s 16-year-olds will be drafted to fight over there. Even if this were true (which I doubt), a teenager still shouldn’t bother informing himself unless he thinks that he can help to solve the problem. Political powerlessness, or the feeling thereof, inevitably discourages people from consuming news.