May 6, 2011

how to save the Enlightenment Ideal

If there is such a thing as the "Enlightenment Ideal," it says that individuals should hold general, publicly articulable, and correct moral principles that, in turn, guide all their opinions, statements, and actions. That is a view that--with some variations--Kant, Madison, J.S. Mill, and many others of their era explicitly defended. None of those writers was naive about the impact of "prejudice [and] voluntary ignorance" (Mill), "accident and force" (Madison), or "laziness and cowardice" (Kant) on actual people's thought and behavior, but they presumed that ideals could have causal power, shaping actions. Reasons were supposed to be motives.

That assumption has seemed to recede into implausibility as evidence has accumulated about the scant impact of reasons or values on actions. It seems that people cannot articulate consistent moral reasons for their opinions. We choose our moral principles mainly to rationalize our decisions after we have made them.*

Scholars who reflect on this evidence seem either to dismiss the relevance of morality entirely or to defend a different model of the moral self. This alternative model presumes that our intuitive, non-articulable, not-fully-conscious, private reactions to situations can be valid, can affect our behavior, and can be improved by appropriate upbringings and institutions. The new model retains some Enlightenment optimism about the importance of morality and education, but at the cost of treating moral judgment as intuitive and non-discursive.

I would propose that we misinterpret the empirical findings and miss their normative implications if we rely on a dichotomy of conscious, logical, articulable reasons versus unconscious, emotional, private intuitions. There is more than one kind of valid, publicly articulable reason.

The Enlightenment thinkers cited above and their skeptical critics seem to share the view that a good moral reason must be highly general and abstract. They have in mind a kind of flow chart in which each of one's concrete choices, preferences, and actions should be implied by a more general principle, which should (in turn) flow from an even more general one, until we reach some kind of foundation. This is not only how Kant thinks about the Categorical Imperative and its implications, but also how J.S. Mill envisions the "fundamental principle of morality" (utilitarianism) and the "subordinate principles" that we need to "apply it." Consistency and completeness are hallmarks of a good overall moral structure.

But many people actually think in highly articulate, public, reflective ways about matters other than general principles and their implications. They think, argue, and publicly defend views about particular people, communities, situations, and places. They do not merely have intuitions about concrete things; they form reasonable moral opinions of them. But their opinions are not arranged in a hierarchical structure with general principles implying concrete results. Sometimes one concrete opinion implies another. Or a concrete opinion implies a general rule. That may not be post hoc rationalization but an example of learning from experience.

Moral thinking must be a network of implications that link various principles, judgments, commitments, and interests. We are responsible for forming moral networks out of good elements and for developing coherent (rather than scattered and miscellaneous) networks. But there is no reason to assume that the network should look like an organizational flowchart, with every concrete judgment able to report via a chain of command to more general principles.

I plan to support this argument by comparing two clear and reasonable moral thinkers, John Rawls and Robert Lowell. Both were lapsed Protestants who were educated in New England prep schools and drafted during World War II; both taught at Harvard, and they shared many political views. In his writing, Rawls endorsed and employed highly abstract moral principles. Lowell was equally precise and rigorous, but his moral thinking was a tight network of associations among concrete characters, events, and situations.

*One summary of the evidence, with an emphasis on sociology, is Stephen Vaisey, "Motivation and Justification: A Dual-Process Model of Culture in Action," American Journal of Sociology, vol. 114, no. 6 (May 2009), pp. 1675-1715.

permanent link | comments (0) | category: philosophy

April 29, 2011

the character of poets and of people generally

In Coming of Age as a Poet (Harvard, 2003), Helen Vendler interprets the earliest mature verse of four major poets: Milton, Keats, Eliot, and Plath. She argues that great poets reach maturity when they develop consistent diction and formal styles; favored physical and historical milieux; major symbolic referents; characters or types of characters whom they include in their verse; and some sort of (at least implicit) cosmology. They often retain these combinations to the ends of their careers.

Robert Lowell provides an example (mine, not Vendler's). From the 1940s until his death, his characteristic milieu is New England--specifically the coastal region from Boston to Nantucket--over the centuries from the Puritan settlement to the present. His diction mimics the diverse voices of that region's history, from Jonathan Edwards to Irish Catholics, but he brings them into harmony through his own regular rhymes and rhythms. His major symbolic references include gardens, graveyards, wars of aggression, the Book of Revelation, and the cruel ocean. He avoids presenting a literal cosmology, but he describes several worldviews in conflict. Sometimes, the physical and human worlds are cursed or damned and we are estranged from an angry, masculine God. Other times, the world is a garden: organic, fecund, and pervasively feminine. (See my reading of "At the Indian Killer's Grave" for detail.)

A combination of diction, favored characters, milieux, subjects of interest, value-judgments, and a cosmology could be called a "personality." I don't mean that it necessarily results from something internal to the author (a self, soul, or nature-plus-nurture). Personality could be a function of the author's immediate setting. For instance, if Robert Lowell had been forcibly moved from Massachusetts to Mumbai, his verse would have changed. Then again, we often choose our settings or choose not to change them.

A personality is not the same thing as a moral character. We say that people are good or virtuous if they do or say the right things. Their diction and favorite characters seem morally irrelevant. For example, regardless of who was a better poet, Lowell was a better man (in his writing) than T.S. Eliot was, because Eliot's verse propounded anti-Semitism and other forms of prejudice, whereas Lowell's is full of sympathy and love.

So we might say that moral character is a matter of holding the right general principles and then acting (which includes speaking and writing) consistently with those principles. Lowell's abstract, general values included pacifism, anti-racism, and some form of Catholic faith. Eliot's principles included reactionary Anglicanism and anti-Semitism--as well as more defensible views. The ethical question is: Whose abstract principles were right? That matter can be separated from the issue of aesthetic merit.

I resist this way of thinking about virtue because I believe it is a prejudice to presume that abstract and general ideas are foundational and that all concrete opinions, interests, and behaviors should follow from them. One kind of mind does treat general principles as primary and puts a heavy emphasis on being able to derive particular judgments from them. Consistency is a central concern (I am tempted to write, a hobgoblin) for this kind of mind. But others do not organize their thoughts that way, and I would defend their refusal to do so. Moral thinking is, instead, a network of implications that links various principles, judgments, commitments, and interests. There is no reason to assume that the network must look like an organizational flowchart, with every concrete judgment able to report via a chain of command to more general principles. The hierarchy can be flatter.

To return to Lowell, one way of interpreting his personality would be to try to force it into a structure that flows from the most abstract to the most concrete. Perhaps he believed that there is an omnipotent and good deity who founded the Catholic church when He gave the keys of heaven to Peter. Peter's successors have rightly propounded doctrines of grace and nature that are anathema to Puritans. Puritans massacred medieval Catholics and Native Americans who loved nature and peace. Therefore, Lowell despises Puritans and admires both medieval Catholics and Wampanoags. In his diction, he mocks Puritans and waxes mournful over their victims. His poetic style follows, via a long chain of entailments, from his metaphysics.

But I think not. It is not clear to me that Lowell, despite his conversion to Catholicism, believed in a literal deity. (Letter to Elizabeth Hardwick, April 7, 1959: "I feel very Montaigne-like about faith now. It's true as a possible vision such as War and Peace or Saint Antony--no more though.") The point is, literal monotheism did not have to be the basis or ground of all his other opinions, such as his love for and interest in Saint Bernard or his deep ambivalence toward Jonathan Edwards. Those opinions could come first and could reasonably persuade him to join the Catholic Church. By mimicking the diction of specific Puritans in poems like "Mr. Edwards and the Spider," Lowell could form and refine opinions of Puritanism that would then imply attitudes toward other issues, from industrial development to monasticism.

Poets are evidently unusual people, more self-conscious and aesthetically oriented than most of their peers, and more concerned with language and concrete details than some of us are. As a "sample" of human beings, poets would be biased.

But they are a useful sample because they leave evidence of their mental wrestling. Poetry is a relatively free medium; the author is not constrained by historical records, empirical data, or legal frameworks. Poets say what they want to say (although it need not be what they sincerely believe), and they say it with precision.

I think the testimony of poets at least suffices to show that some admirable people begin with concrete admirations and aversions, forms of speech, milieux and referents, and rely much less on abstract generalizations to reach their moral conclusions. Their personalities and their moral characters are one.

permanent link | comments (0) | category: philosophy

April 4, 2011

why political recommendations often disappoint: an argument for reflexive social science

In an essay entitled "Why Last Chapters Disappoint," David Greenberg lists American books about politics and culture that are famous for their provocative diagnoses of serious problems but that conclude with strangely weak recommendations. These include, in his opinion, Upton Sinclair's The Jungle (1906), Walter Lippmann's Public Opinion (1922), Daniel Boorstin's The Image (1961), Allan Bloom's The Closing of the American Mind (1987), Robert Shiller's Irrational Exuberance (2000), Eric Schlosser's Fast Food Nation (2001), and Al Gore's The Assault on Reason (2007). Greenberg asserts that practically every book in this list, "no matter how shrewd or rich its survey of the question at hand, finishes with an obligatory prescription that is utopian, banal, unhelpful or out of tune with the rest of the book." The partial exceptions are works like Schlosser's Fast Food Nation that provide fully satisfactory legislative agendas while acknowledging that the most important reforms have no chance of passing in Congress.

The gap between diagnosis and prescription is no accident. Many serious social problems could be solved if everyone chose to behave better: eating less fast food, investing more wisely, using less carbon, or studying the classics. But the readers of a given treatise are too few to make a difference, and even before they begin to read they are better motivated than the rest of the population. Therefore, books that conclude with personal exhortations seem inadequate.

Likewise, some serious social problems could be ameliorated by better legislation. But the readers of any given book are too few to apply sufficient political pressure to obtain the necessary laws. Therefore, books that end with legislative agendas disappoint just as badly.

The failure of books to change the world is not a problem that any single book can solve. But it is a problem that can be addressed, just as we address complex challenges of description, analysis, diagnosis, and interpretation that arise in the social sciences and humanities. Every work of empirical scholarship should contribute to a cumulative research enterprise and a robust debate. Every worthy political book should also contribute to our understanding of how ideas influence the world. That means asking questions such as: "Who will read this book, and what can they do?"

Who reads a book depends, in part, on the structure of the news media and the degree to which the public is already interested in the book’s topic. What readers can do depends, in part, on which organizations and networks are available for them to join and how responsive other institutions are to their groups.

These matters change over time. Consider, for example, a book that did affect democracy, John W. Gardner's In Common Cause: Citizen Action and How It Works (1972). After diagnosing America's social problems as the result of corrupt and undemocratic political processes and proposing a series of reforms, such as open-government laws and public financing for campaigns, Gardner encouraged his readers to join the organization Common Cause. He had founded this organization two years earlier by taking out advertisements in leading national newspapers, promising "to build a true citizens' lobby—a lobby concerned not with the advancement of special interests but with the well-being of the nation. … We want public officials to have literally millions of American citizens looking over their shoulders at every move they make." More than 100,000 readers quickly responded by joining Gardner's organization and sending money. Common Cause was soon involved in passing the Twenty-Sixth Amendment (which lowered the voting age to 18), the Federal Election Campaign Act, the Freedom of Information Act, and the Ethics in Government Act of 1978. The book In Common Cause was an early part of the organization's successful outreach efforts.

It helped that Gardner was personally famous and respected before he founded Common Cause. It also helped that a series of election-related scandals, culminating with Watergate, dominated the news between 1972 and 1976, making procedural reforms a high public priority. As a book, In Common Cause was well written, fact-based, and clear about which laws were needed.

But the broader context also helped. Watergate dominated the news because the news business was still monopolized by relatively few television networks, agenda-setting newspapers, and wire services whose professional reporters believed that a campaign-finance story involving the president was important. Everyone who followed the news at all had to follow the Watergate story, regardless of their ideological or partisan backgrounds. In contrast, in 2010, some Americans were appalled by the false but prevalent charge that President Obama's visit to Indonesia was costing taxpayers $200 million per day. Many other Americans had no idea that this accusation had even been made, so fractured was the news market.

John Gardner was able to reach a generation of joiners who were setting records for organizational membership.* Newspaper reading and joining groups were strongly correlated; and presumably people who read the news and joined groups also displayed relatively deep concern about public issues. Thus it was not surprising that more than 100,000 people should respond to Gardner's newspaper advertisements about national political reform by joining his new group. By the 2000s, the rate of newspaper reading had dropped by half, and the rate of group membership was also down significantly. The original membership of Common Cause aged and was never replaced in similar numbers after the 1970s. John Gardner's strategy fit his time but did not outlive him.

Any analysis of social issues should take account of contextual changes like these. Considering how one’s thought relates to the world means making one's scholarship "reflexive," in the particular sense advocated by the Danish political theorist Bent Flyvbjerg. He notes that modern writers frequently distinguish between rationality and power. "The [modern scholarly] ideal prescribes that first we must know about a problem, then we can decide about it. … Power is brought to bear on the problem only after we have made ourselves knowledgeable about it."** With this ideal in mind, authors write many chapters about social problems, followed by unsatisfactory codas about what should be done. As documents, their books evidently lack the capacity to improve the world. Their rationality is disconnected from power. And, in my experience, the more critical and radical the author is, the more disempowered he or she feels.

Truly "reflexive" writing and politics recognizes that even the facts used in the empirical or descriptive sections of any scholarly work come from institutions that have been shaped by power. For example, in my own writing, I frequently cite historical data about voting and volunteering in the United States. The federal government tracks both variables by fielding the Census Current Population Surveys and funding the American National Election Studies. Various influential individuals and groups have persuaded the government to measure these variables, for the same (somewhat diverse) reasons that they have pressed for changes in voting rules and investments in volunteer service. On the other hand, there are no reliable historical data on the prevalence of public engagement by government agencies. One cannot track the rate at which the police have consulted residents about crime-fighting strategies or the importance of parental voice in schools. That is because no influential groups and networks have successfully advocated for these variables to be measured. Thus the empirical basis of my work is affected by the main problem that I identify in my work: the lack of support for public engagement.

Reflexive scholarship also acknowledges that values motivate all empirical research. Our values--our beliefs about goals and principles--should be influenced and constrained by what we think can work in the world: "ought implies can." Wise advice comes not from philosophical principles alone, but also from reflection on salient trends in society and successful experiments in the real world. An experiment can be a strong argument for doing more of the same: sometimes, "can implies ought." If there were no recent successful experiments in civic engagement, my democratic values would be more modest and pessimistic. If recent experiments were more robust and radical than they are, I might adopt more ambitious positions. In short, my values rest on other people’s practical work, even as my goal is to support their work.

Finally, reflexive scholarship should address the question of what readers ought to do. A book is fully satisfactory only if it helps to persuade readers to do what it recommends and if their efforts actually improve the world. In that sense, the book offers a hypothesis that can be proved or disproved by its consequences. No author will be able to foresee clearly what readers will do, because they will contribute their own intelligence, and the situation will change. Nevertheless, the book and its readers can contribute to a cumulative intellectual enterprise that others will then take up and improve.


*In 1974, 80 percent of the "Greatest Generation" (people who had been born between 1925 and 1944) said that they were members of at least one club or organization. Among Baby Boomers at the same time, the rate of group membership was 66.8 percent. The Greatest Generation continued to belong at similar rates into the 1990s. The Boomers never caught up with them, their best year being 1994, when three quarters reported belonging to some kind of group. In 1974, 6.3 percent of the Greatest Generation said they were in political clubs. The Boomers have never reached that level: their highest rate of belonging to political clubs was 4.9 percent, in 1989. (General Social Survey data analyzed by me.)

**Bent Flyvbjerg, Making Social Science Matter (Cambridge: Cambridge University Press, 2001), p. 143.

permanent link | comments (0) | category: philosophy

February 14, 2011

a real alternative to ideal theory in political philosophy

In philosophy, "ideal theory" means arguments about what a true just society would be like. Sometimes, proponents of ideal theory assert that it is useful for guiding our actual political decisions, which should steer toward the ideal state. John Rawls revived ideal theory with his monumental A Theory of Justice (1971). His position was egalitarian/liberal, but Robert Nozick joined the fray with his libertarian Anarchy, State and Utopia (1974), and a huge literature followed.

Recently, various authors have been publishing critiques of ideal theory. I am, for example, reading Raymond Geuss' Philosophy and Real Politics (2008) right now. One of the most prominent critiques is by Amartya Sen in The Idea of Justice (2009). Sen argues that there is no way to settle reasonable disagreements about the ideal state. Knowing what is ideal is not necessary to make wise and ethical decisions. Even an ideally designed set of public institutions would not guarantee justice, because people must be given discretion to make private decisions, but those decisions can be deeply unjust. Finally, there is an alternative to the tradition of developing ideal social contracts, as Plato, More, Locke, Rousseau, Rawls, Nozick, and many others did. The alternative is to compare on moral grounds actually existing societies or realizable reforms, in order to recommend improvements, a strategy epitomized by Aristotle, Adam Smith, Benjamin Constant, Tocqueville, and Sen (among many others).

I am for this but would push the critique further than Sen does. The non-ideal political theories that he admires are still addressed to some kind of sovereign: a potential author of laws and policies in the real world, a "decider" (as George W. Bush used to call himself). In his various works, Sen addresses two kinds of audiences: the general public, understood as sovereign because we can vote, and various specific authorities, such as the managers of the World Bank. In his work aimed at general readers, he envisions a "global dialogue," rich with "active public agitation, news commentary, and open discussion," to which he contributes guiding principles and methods. In turn, that global dialogue will influence the actual decision-makers, whether they are voters and consumers in various countries or powerful leaders.

Unfortunately, no reader is really in the position of a sovereign. You and I can vote, but not for elaborate social strategies. We vote for names on a ballot, while hundreds of millions of other people also vote with different goals in mind. If I prefer the social welfare system of Canada to the US system, I cannot vote to switch. Nor can I persuade millions of Americans to share my preference, because I don't have the platform to reach them. Even legislators are not sovereigns, because there are many of them, and the legislature shares power with other branches and levels of government and with private institutions.

Thus "What is to be done?" is not a question that will yield practical guidance for individuals. It is a more relevant question for Sen than for me, because he has spent a long life in remarkably close interaction with famous and distinguished leaders from Bengal to California. (The "acknowledgments" section of The Idea of Justice is the longest I have ever seen and represents a Who's Who of public intellectuals.) But if Sen's full "theory of change" is to become internationally famous and then give advice to leaders, it will only work for a very few.

What then should we do (I who write these words and you who read them, along with anyone whom we can enlist for our causes)? That seems to be the pressing question, but not if the answer stops with changes in our personal behavior and immediate circumstances. National and global needs are too important for us only to "be the change" that we want in the world. We must also change the world. Our own actions (yours and mine) must be plausibly connected to grand changes in society and policy. Thinking about what we should do raises an entirely different set of questions, dilemmas, models, opportunities, and case studies than are familiar in modern philosophy.

permanent link | comments (0) | category: philosophy

January 21, 2011

artistic excellence as a function of historical time

The New York Times music critic Anthony Tommasini has compiled his top ten list of all-time greatest classical composers. As explanations for his choices, he offers judgments about the intrinsic excellence of these composers along with comments about their roles in the development of music over time.

These temporal or historical reasons prove important to Tommasini's overall judgments. For example, Beethoven's Fourth Piano Concerto, when played between works composed in the 20th century, "sound[s] like the most radical work in the program by far." Schubert's "Ninth paves the way for Bruckner and prefigures Mahler." Brahms, unfortunately, "sometimes become[s] entangled in an attempt to extend the Classical heritage while simultaneously taking progressive strides into new territory." Bach "was considered old-fashioned in his day. ... [He] was surely aware of the new trends. Yet he reacted by digging deeper into his way of doing things." Haydn would make the Top Ten list except that his "great legacy was carried out by his friend Mozart, his student Beethoven and the entire Classical movement."

It seems that originality counts: it's best to be ahead of one's time. On the other hand, if, like Haydn, you launch something that others soon take higher, you are not as great as those who follow you. Bach is the greatest of all because instead of moving forward, he "dug deeper." So originality is not the definition of greatness--it is an example of a temporal consideration that affects our aesthetic judgments.

One might think that these reasons are mistaken: timing is irrelevant to intrinsic excellence or "greatness." It doesn't matter when you make a work of art; what matters is how good it is. But I'm on Tommasini's side and would, like him, make aesthetic judgments influenced by when works were composed. Why?

For one thing, an important aspect of art (in general) is problem-solving. One achievement that gives aesthetic satisfaction is the solution of a difficult problem, whether it is representing a horse in motion or keeping the Kyrie section of a mass going for ten minutes without boring repetition. The problems that artists face derive from the past. Once they solve the problems of their time, repeating their success is no longer problem-solving. To be sure, one only appreciates art as problem-solving if one knows something about the history of the medium. That is why art history and music history enhance appreciation, although that is not their only purpose.

Besides, in certain artistic traditions, the artist is self-consciously part of the story of the art form. Success means taking the medium in a productive new direction. This is how traditions such as classical music, Old Master painting, Hollywood movies, and hip-hop have developed. It is not the theory of all art forms in all cultures. Sometimes, ancient, foundational works are seen as perfect exemplars; a new work is excellent to the extent that it resembles those original models.

The Quarrel of the Ancients and the Moderns was a debate about whether the European arts and sciences should be progressive traditions or should aim to replicate the greatness of their original Greco-Roman models. The Moderns ultimately won that debate, not only promoting innovation in their own time but also reinterpreting the past as a series of original achievements that we should value as contributions to the unfolding story of art. Since we are all Moderns now, we all think in roughly the way that Tommasini does, admiring Beethoven because his contemporaries thought his late works were incomprehensible.

Meanwhile, classical music and Old Master painting have become completed cultures for many people. Their excellence is established and belongs to the past. Beethoven was great because he was ahead of his time, but now the story to which he contributed is over. The Top Ten lists of classical music are closed. I am not sure this is true, but it seems a prevalent assumption. Maybe we are all Ancients now.

permanent link | comments (0) | category: fine arts , philosophy

January 10, 2011

upside-down Foucault

Hypothesis: every space where Michel Foucault discovered the operation of power is also a venue for creativity, collaboration, and a deepening of human subjectivity.

By way of background: I respect Foucault as one of the greatest thinkers of the 20th century. Although deeply influenced by other writers and activists, he made his own crucial discoveries. In particular, he found power operating in places where it had been largely overlooked, such as clinics, classrooms, and projects of social science. Further, he understood that power is not just a matter of A deliberately making B do what A wants. It rather shapes all of our desires, goals, and beliefs. Its influence on beliefs suggests that knowledge and power are inseparable, so that even our understanding of power is determined by power. Despite the skeptical implications of Foucault's epistemology, he struggled in an exemplary fashion to get the theory right, revising it constantly. He traveled a long intellectual road, directed by his own conscience and experience rather than any kind of careerism.

So it is as a kind of homage to Foucault that I suggest flipping his theory upside-down. Just as close, critical observation of people in routine settings can reveal the operations of power, so we can detect people developing, growing, reflecting, and collaborating voluntarily. To be sure, social contexts fall on a spectrum from dehumanizing to humanizing, with prisons at one end (not far from office cubicles), and artists' ateliers at the other. But it would be just as wrong to interpret a whole society as a prison as to view it all as a jazz band. And, I would hypothesize, even in the modern US prison system--swollen in numbers, starved of resources for education and culture, plagued by rape and abuse, and racially biased--one could find evidence of creativity as well as power.

permanent link | comments (0) | category: philosophy

December 10, 2010

the philosophical foundations of civic education

Ann Higgins-D’Alessandro and I have published an article under this title in Philosophy & Public Policy Quarterly. It is actually a version (with due permission) of a chapter we published in The Handbook of Research on Civic Engagement in Youth, edited by Lonnie Sherrod, Judith Torney-Purta, and Constance A. Flanagan (John Wiley & Sons, 2010). Here it is online.

We note that educating young people for citizenship is an intrinsically moral task. Even among reasonable people, moral views about citizenship, youth, and education differ. We describe conflicting utilitarian, liberal, communitarian, and civic republican conceptions and cite evaluations of actual civic education programs that seem to reflect those values.

permanent link | comments (0) | category: advocating civic education , philosophy

November 4, 2010

against a cerebral view of citizenship

For a faculty seminar tomorrow, a group of us are reading Aristotle's Politics, Book III, which is a classic and very enlightening discussion of citizenship. Aristotle holds that the city is composed of citizens: they are it. Citizenship is not defined as residence in a place, nor does it mean the same thing in all political systems. Rather, it is an office, a set of rights and responsibilities. Who has what kind of citizenship defines the constitution of the city.

According to Aristotle, the core office or function of a citizen is "deliberating and rendering decisions, whether on all matters or a few."* In a tyranny, the tyrant is the only one who judges. In such cases, the definition of a good man equals that of a good citizen, because the tyrant's citizenship consists of his ruling, and his ruling is good if he is good. Practical wisdom is the virtue we need in him, and it is the same kind of virtue that we need in dominant leaders of other entities, such as choruses and cavalry units. Aristotle seems unsure whether a good tyrant must first learn to be ruled, just as a competent cavalry officer first serves under another officer, or whether one can be born a leader.

In democracies, a large number of people deliberate and judge, but they do so periodically. Because they both rule and obey the rules, they must know how to do both. Rich men can make good citizens, because in regular life (outside of politics) they both rule and obey rules. But rich men do not need to know how to do servile or mechanical labor. They must know how to order other people to do those tasks. Workers who perform manual labor do not learn to rule; they do not have opportunities to develop practical wisdom, but instead become servile as a result of their work. Thus, says Aristotle, the best form of city does not allow its mechanics to be citizens.

Note the philosopher's strongly cognitive or cerebral definition: citizenship is about deliberating and judging. Citizenship is not about implementing or doing, although free citizens both deliberate and implement decisions.

But what if we started a different way, and said that "the city" (which is now likely a nation-state) is actually composed of its people as workers? It is what they do, make, and exchange. In creating and exchanging things, they make myriad decisions, both individually and collectively. Some have more scope for choice than others, but average workers make consequential decisions frequently.

If the city is a composite of people as workers, then everyone is a citizen, except perhaps those who are idle. It does not follow logically that all citizens must be able to deliberate and vote on governmental policies. Aristotle had defined citizens as legal decision-makers (jurors and legislators); I am resisting that assumption. Nevertheless, being a worker now seems to be an asset for citizens, not a liability. Only the idle do not learn both to rule and to be ruled.

Aristotle's definition of citizenship has been enormously influential, but it has often been criticized: by egalitarians who resist his exclusion of manual workers and slaves; by Marxists and others who argue that workers create wealth and should control it; and by opponents of his cerebral bias, like John Dewey. The critique that interests me most is the one that begins by noting the rich, creative, intellectually demanding aspects of work. That implies that working, rather than talking and thinking, may be the essence of citizenship. I draw on Simone Weil, Harry Boyte, and others for that view.

*Politics 1275b16, my translation.

permanent link | comments (0) | category: philosophy , populism

July 19, 2010

the visionary fire of Roberto Mangabeira Unger

We are deep into our annual Summer Institute of Civic Studies, with as much as six and a half hours of class and many hundreds of pages of reading each day. The most blogging I can manage will be less-than-daily notes about the texts we discuss. Today, one important text is Roberto Mangabeira Unger's False Necessity: Anti-Necessitarian Social Theory in the Service of Radical Democracy. (Unger is a Harvard law professor and a cabinet member in his home country of Brazil.)

Unger takes "to its ultimate conclusion" the thesis "that society is an artifact" (p. 2). All our institutions, mores, habits, and incentives are things that we imagine and make. We can change each of these things, "if not all at once, then piece by piece" (p. 4). When we observe that people are poisoning their environment or slaughtering each other--or are suffering from a loss of community and freedom--we should view the situation as our work and strive to change it. He "carries to extremes the idea that everything in society is politics, mere politics"--in the sense of collective action and creation (p. 1)

Unger is a radical leftist but a strong critic of Marxism. He views Marxism as one example of "deep-structure" theory. Any deep-structure theory identifies some "basic framework, structure, or context" beneath all our routine debates and conflicts. It treats each framework as "an indivisible and repeatable type of social organization." And then it explains changes from one framework to another in terms of "lawlike tendencies or deep-seated economic, organizational, and psychological constraints" (pp. 14-15). So--according to Marxists--all the politics that we observe today is a function of "capitalism"; capitalism is a unitary thing that can repeat or end; and the only way forward is from capitalism to a different deep structure, namely socialism.

Unger argues that this theory fails to acknowledge the virtually infinite forms of social organization that we can make (including, for instance, many definitions of private property and many combinations of property with other laws and institutions). It suggests that perhaps nothing can be done to alter the arc of history. The only possible strategy is to start a revolution to change the unitary underlying structure of the present society. But that solution is generally (perhaps always) impractical, so the leftist thinker or leader is reduced to denouncing capitalist inequality. "Preoccupied with the hierarchy-producing effects of inherited institutional arrangements, the leftist reaches for distant and vague solutions that cannot withstand the urgent pressures of statecraft and quickly give way to approaches betraying its initial aims" (p. 20).

Instead, writes Unger, the leftist should be constantly "inventing ever more ingenious institutional instruments." The clearest failure of actual Marxism was its refusal to experiment, which was legitimized by its deep-structure theory. (Once capitalism was banished, everything was supposed to be fixed). "The radical left has generally found in the assumptions of deep-structure social analysis an excuse for the poverty of its institutional ideas. With a few exceptions ... it has produced only one innovative institutional conception, the idea of a soviet or conciliar type of organization" (p. 24). In theory, a "soviet" was a system of direct democracy in each workplace or small geographical location. But, Unger writes, that was an unworkable and generally poor idea.

In contrast, Unger is a veritable volcano of innovative institutional conceptions. He wants a new branch of government devoted to constant reform that is empowered to seize other institutions but only for a short time; mandatory voting; automatic unionization combined with complete independence of unions from the state; neighborhood associations independent from local governments; a right to exit from public law completely and instead form private associations with rules that protect rights; a wealth tax; competitive social funds that allocate endowments originally funded by the state; and new baskets of property rights.

None of these proposals is presented as a solution. Together they are ways of creating "a framework that is permanently more hospitable to the reconstructive freedom of the people who work within its limits" (p. 34). The task is to "combine realism, practicality, and detail with visionary fire" (p. 14).

On deck: Madison, Hayek, and Burke--all defenders of tradition and enemies of the Ungerian project.

permanent link | comments (0) | category: philosophy

July 9, 2010

on hope as an intellectual virtue

My favorite empirical research programs try to help something good work in the world. For instance, scholars who study Positive Youth Development assess initiatives that give young people opportunities to contribute to their communities. Scholars of Common Pool Resources study how communities manage common property, such as fisheries and forests. Scholars of Deliberative Democracy investigate the impacts on citizens, communities, and policies when people talk in structured settings.

These are empirical research programs, committed to facts and truth. They do not seek to celebrate, but to critically evaluate, their research subjects. However, an obvious goal is to make the practical work succeed by identifying and demonstrating positive impacts and by helping to sort out the effective strategies from the ineffective ones. Underlying these intellectual efforts is some kind of hope that the practical programs, when done well, succeed.

As a philosopher, I am especially interested in that hope and why scholars have it. I like to ask what motivates these research projects. The motives are largely hidden, because positivist social science cannot handle value-commitments on the part of researchers; it treats them as biases to be minimized and disclosed only if they prove impossible to eliminate. Often the search for motives is critical and suspicious: one tries to show that a given research project is biased by some value-judgment, cultural assumption, or self-interest on the scholars' part. But I look for motives in an appreciative spirit, believing that an empirical research program in the social sciences can only be as good as its core values.

Note that it is not at all obvious why we should hope that Positive Youth Development, Common Property Resource Management, and Deliberative Democracy work. These are expensive and tricky strategies. For instance, the core empirical hypothesis of Positive Youth Development is that you will get better outcomes for youth if you help them contribute than if you use surveillance and remediation. But it would be cheaper and more reliable if we could cut crime with metal detectors in every school instead of elaborate service-learning programs. So why should we hope that Positive Youth Development is right?

Likewise, it would be easier to turn all resources into private or state property than to encourage communities to manage resources as common property. And it would be easier for professionals to make city plans and budgets than to turn those decisions over to citizens. So why do scholars evidently hope that good common property regimes produce more sustainable and efficient economic outcomes than expert management, and that deliberations generate more legitimate and fair policies than governments do?

I think part of the reason is simply that things are not going very well in the world, and scholars seek alternatives that may be uncontroversially better: more efficient or sustainable, less corrupt and wasteful. That's part of the reason, but it doesn't fully explain the focus of these research projects. If you're worried about violence in American high schools, you should look for something new that works. But why should that new approach include service and leadership programs, instead of better metal detectors and video cameras?

Ultimately, all three of my examples are anchored in commitments that I would describe as "Kantian." The individual is a sovereign moral agent and our responsibility to others is always to help develop their capacities for autonomy and voluntary cooperation. Real Kantianism is dismissive of utilitarian outcomes (such as efficient public services) and is willing to defend autonomy even if the consequences for health and welfare turn out to be bad. But real Kantianism just doesn't fly. It doesn't influence power and it doesn't satisfy most people's intuitions. So I think the research projects I have mentioned here are motivated by a kind of soft or strategic Kantianism. The best initiatives, on this view, are the ones that achieve efficient and reliable improvements in tangible human welfare by enhancing people's autonomy. Strategies like Positive Youth Development and common property regimes stand out as worthy of study because of their Kantian values. But they deserve critical scrutiny on utilitarian grounds. If they fail to deliver the promised practical outcomes, they should be improved before they are abandoned. The same attention should not be given to surveillance systems or top-down managerial structures. In theory, those solutions might work just as well, but helping them to succeed would not enhance autonomy.

I realize that it is a risky strategy in our culture for scholars to admit their core moral commitments. The smartest move is to pretend that a research program is simply scientific and all the outcomes of interest are utilitarian. But those assumptions have the disadvantage of being wrong. They distort research in various subtle but damaging ways. Even though it is idealistic, I think we should take on positivism directly and not accept the presumption that values are simply biases.

permanent link | comments (0) | category: philosophy

July 6, 2010

moral thinking is a network, not a foundation with a superstructure

When we talk together about public concerns, a whole range of phrases and concepts is likely to emerge. Imagine, for example, that the topic is a local public school: how it is doing and what should change. In talking about their own school, parents and educators may use abstract moral concepts, like fairness or freedom. They may use concepts that have clear moral significance but controversial application in the real world. For example, fairness is a good thing, by definition. It is not the only good thing, and it can conflict with other goods. But the bigger challenge is to decide which outcomes and policies actually are fair.

Other concepts are easy to recognize in the world but lack clear moral significance. We either bus students to school or we do not bus them, but whether busing is good is debatable. (In this respect, it is a very different kind of concept from fairness.) Still other concepts have great moral weight and importance, but their moral significance is unclear. You can't use the word love seriously without making some kind of morally important point. But you need not use that word positively: sometimes love is bad, and the same is true of free and achieve.

People string such concepts together in various ways. They may make associations or correlations ("The girls are doing better than the boys in reading"). They may make causal claims ("The math and reading tests are causing us to overlook the arts.") They may apply general concepts to particular cases. Often they will describe individual teachers, administrators, events, classes, and facilities with richly evaluative terms, such as beautiful or boring. Frequently, they will tell stories, connecting events, individuals, groups, concepts, and intentional actions over time.

All these ways of talking are legitimate in a democratic public discussion. But the heterogeneity of our talk seems problematic. So many different kinds of ideas are in play that it seems impossible to reach any principled or organized resolution. We talk for some arbitrary amount of time, and then a decision must be made by the pertinent authorities or by a popular vote. It is not clear, from the discussion that preceded it, whether the decision was correct.

It seems beneficial to organize and systematize public discussion, and several kinds of experts stand ready to help, from moral philosophers to empirical social scientists.

All of these forms of expert and disciplined guidance can be useful. But they often conflict, and so the very fact that they all help should tell us something. There is no methodology that can replace or discipline our public discussions or bring them to a close. This is because of the nature of moral reasoning itself.

Moral concepts are indispensable. We cannot replace them with empirical information. Even if smaller class sizes do produce better test scores, that does not tell us whether our tests measure valuable things, whether the cost of more teachers would be worth the benefits, or whether the state has a right to compel people to pay taxes for education.

But moral concepts are heterogeneous. Some have clear moral significance but controversial application in the world. (Fairness is always good, and murder is always bad.) Others have clear application but unpredictable moral significance. (Homicide is sometimes murder but sometimes it is justifiable.) Still others are morally important but are neither predictable nor easily identified. (Love is sometimes good and sometimes regrettable, and whether love exists in a particular situation can be hard to say.) A method that could bring public deliberation to closure would have to organize all these concepts so that the empirically clear ones were reliably connected to the morally clear ones.

That sometimes happens. For instance, waterboarding either happens or it does not happen. The Bush Administration's lawyers defined it in obsessive detail: "The detainee is lying on a gurney that is inclined at an angle of 10 to 15 degrees to the horizontal. ... A cloth is placed over the detainee's face and cold water is poured on the cloth from a height of approximately 6 to 18 inches …" Waterboarding is, in my considered opinion, an example of torture. Torture is legally defined as a felony, and the reason for that rule is a moral judgment that torture is always wrong (in contrast to punishment or interrogation, which may be right). Therefore, waterboarding is wrong. This argument may be controversial, but it is clear and it carries us all the way from the concrete reality of a scene in a CIA interrogation room to a compelling moral judgment and a demand for action. The various kinds of concepts are lined up so that moral, legal, and factual ideas fit together. There is room for debate: Is waterboarding torture? Who waterboarded whom? But the debate is easily organized and should be finite.

If all our moral thinking could work like that, we might be able to bring our discussions to a close by applying the right methods--usually a combination of moral philosophy plus empirical research. But much of our thinking cannot be so organized, because we confront moral concepts that lack consistent significance. They are either good or bad, depending on the circumstances. Nevertheless, they are morally indispensable; we cannot be good human beings and think without them. Love and freedom are two examples. To say that Romeo loves Juliet--or that Romeo is free to marry Juliet--is to say something important, but we cannot tell whether it is good or bad until we know a lot about the situation. There is no way to organize our thinking so that we can bypass these concepts with more reliable definitions and principles.

A structured moral mind might look like the blueprint of a house. At the bottom of the page would be broad, abstract, general principles: the foundation. An individual's blueprint might be built on one moral principle, such as "Do unto others as you would have them do unto you." Or it might start even lower, with a metaphysical premise, like "God exists and is good." At the top of the picture would be concrete actions, emotions, and judgments, like "I will support Principal Jones's position at the PTA meeting." In between would be ideas that combine moral principles and factual information, such as, "Every child deserves an equal education," or "Our third grade curriculum is too weak." The arrows of implication would always flow up, from the more general to the more specific.

I think most people's moral thinking is much more complex than this. Grand abstractions do influence concrete judgments, but the reverse happens as well. I may believe in mainstreaming special-needs children because of an abstract principle of justice, and that leads me to support Mrs. Jones at the PTA meeting. Or I may form an impression that Mrs. Jones is wise; she supports mainstreaming; and therefore I begin to construct a new theory of justice that justifies this policy. Or I may know an individual child whose welfare becomes an urgent matter for me; my views of Mrs. Jones, mainstreaming, and justice may all follow from that. For some people, abstract philosophical principles are lodestones. For others, concrete narratives have the same pervasive pull—for example, the Gospels, or one's own rags-to-riches story, or Pride and Prejudice.

We must avoid two pitfalls. One is the assumption that a general and abstract idea is always more important than a concrete and particular one. There is no good reason for that premise. The concept of a moral "foundation" is just a metaphor; morality is not really a house, and it does not have to stand on something broad to be solid. Yet we must equally avoid thinking that we just possess lots of unconnected opinions, none intrinsically more important than another. For example, the following thoughts may all be correct, but they are not alike: "It is good to be punctual"; "Genocide is evil"; and "Mrs. Jones is a good principal." Not only do these statements have different levels of importance, but they play different roles in our overall thinking.

I would propose switching from the metaphor of a foundation to the metaphor of a network. In any network, some of the nodes are tied to others, producing an overall web. If moral thinking is a network, the nodes are opinions or judgments, and the ties are implications or influences. For example, I may support mainstreaming because I hold a particular view of equity; then mainstreaming and equity are two nodes, and there is an arrow between them. I may also love a particular child, and that emotion is a node that connects to disability policy in schools. A strong network does not rest on a single node, like an army that is decapitated if its generalissimo is killed. Rather, a strong network is a tight web with many pathways, so that it is possible to move from one node to another by more than one route. Yet in real, functioning networks, all the nodes do not bear equal importance. On the contrary, it is common for the most important 20 percent to carry 80 percent of the traffic--whether the network happens to be the Internet, the neural structure of the brain, or the civil society of a town.
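
To make the contrast concrete, here is a toy sketch in Python (my illustration only; the node labels are hypothetical stand-ins for the examples above, and the code is a rendered metaphor, not a claim about how cognition works). A foundationalist blueprint is a tree: remove the root principle and every concrete judgment is stranded. A web with redundant ties behaves differently:

```python
from collections import deque

# Toy model of a moral network. Nodes are judgments or commitments; a tie
# from A to B means "A supports or implies B." All labels are hypothetical.
ties = {
    "principle of equity": ["mainstreaming", "the welfare state"],
    "love for one child": ["mainstreaming", "support Mrs. Jones"],
    "respect for Mrs. Jones": ["support Mrs. Jones", "the welfare state"],
    "mainstreaming": ["support Mrs. Jones"],
}

def supported_from(start, graph):
    """Return every node reachable from `start` by following ties."""
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in graph.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Delete the most abstract node; the concrete judgment is still supported,
# because more than one path leads into it.
web = {k: v for k, v in ties.items() if k != "principle of equity"}
assert "support Mrs. Jones" in supported_from("love for one child", web)
```

The assert succeeds for the same reason the paragraph gives: because more than one route leads into the concrete judgment, no single node, not even the most abstract principle, is load-bearing.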

I suspect that a healthy moral mind is similar. It has no single foundation, and it is not driven only by abstract principles. Concrete motives (like love or admiration for a particular individual) may loom large. Yet the whole structure is network-like, and it is possible for many kinds of nodes to influence many other kinds. My respect for Mrs. Jones may influence how I feel about the concept of the welfare state, and not just the reverse. I need many nodes and connections, each based on experience and reflection.

I do not mean to imply that a strong network map is a fully reliable sign of good moral thinking. A fascist might have an elaborate mental map composed of many different racial and national prejudices and hatreds, each supported by stories and examples, and each buttressing the others. That would be a more complex diagram than the ones possessed by mystics who prize purity and simplicity. Purity of Heart Is to Will One Thing, wrote Søren Kierkegaard, and the old Shaker hymn advises, "'Tis the gift to be simple, 'tis the gift to be free, 'tis the gift to come down where we ought to be." A righteous Shaker would do more good than a sophisticated fascist. But even if complexity is not a sufficient or reliable sign of goodness, a complex map is both natural and desirable. It reflects the real complexity of our moral world; it reduces the odds of becoming fanatical; it hems in self-interest; and it is resilient against radical doubt.

Four conclusions follow from this discussion.

permanent link | comments (0) | category: philosophy

April 15, 2010

what was Rawls doing?

John Rawls was the most influential recent academic political philosopher in the English-speaking world, or at least the most influential academic who defended liberal views. If you take him at face value, he is a very abstract kind of thinker. In fact, he says as much in section 3 of A Theory of Justice.

In a famous methodological move in section 3 of A Theory of Justice, he defines the "original position" as one in which persons are ignorant of all morally irrelevant facts, so that no one can "tailor principles to the circumstances of [his or her] own case." By making us ignorant of most empirical facts about ourselves, Rawls makes his theory seem more abstract than even Kant's.

As Rawls works out the actual framework of justice, it turns out that the government should do certain things and not others. Parties to the original contract would want there to be "roughly equal prospects of culture and achievement for everyone similarly motivated and endowed. The expectations of those with the same abilities and aspirations should not be affected by their social class." To achieve this outcome, the government should fund education and channel educational resources to the least advantaged. I presume it should also regulate employment contracts to prevent discrimination, thus enacting the principle of "careers open to talents." But the government should not be in charge of child-rearing, even though families affect people's capacities and motivations. ("Even the willingness to make an effort, to try, and so to be deserving in the ordinary sense is itself dependent upon happy family and social circumstances.") The state should compensate people from unhappy families, but should not take over the family's traditional function.

Why not? One answer might be that Rawls was insufficiently radical and consistent: he arbitrarily excluded the family from his program of reform because of prejudice. My view is different--more favorable to Rawls' conclusions but less supportive of his methods.

I don't believe that his reasoning was nearly as abstract as he claimed. Instead, I think he was a reader of newspapers and an observer of life in America, ca. 1945-1975. He observed that the actual government did a pretty good job of providing universal education but could still improve the equality of educational opportunity. The government policed employment contracts increasingly well to prevent racial and gender discrimination, albeit with room for improvement. But the government didn't do child-rearing well. (The foster care system was only an emergency response that, in any case, relied on private volunteers.) Rawls derived from the immediate past and present some principles for further reform.

That interpretation makes Rawls a good thinker, sensible and helpful, but not quite the kind of thinker he believed himself to be. In my view, he was less like Kant (elucidating the universal Kingdom of Ends from the perspective of pure reason) and more like Franklin Roosevelt defending the New Deal--or Lyndon Johnson the Great Society--in relatively general and idealistic terms. Or he was like John Dewey, critically observing reality from an immanent perspective. The reason this distinction matters is methodological. As we go forward from Rawls, I think we need more social experimentation and reflection on it, not better abstract reasoning about the social contract.

permanent link | comments (0) | category: philosophy

April 6, 2010

philosophers dispensing advice

Yesterday, for fun, I posted a clip of the philosopher Jonathan Dancy on the Late Late Show. His interview raises an interesting and serious question. Asked whether philosophers should dispense moral advice, Dancy says: No. I would agree with that, for reasons stated below. But Dancy goes further and suggests that philosophers shouldn't address substantive moral issues at all. He implies that people's ethical judgments are already in pretty good shape. A philosopher's job is to understand what kind of thing an ethical judgment is. In other words, moral philosophy is meta-ethics.

That is a controversial claim. John Rawls, Peter Singer, Robert Nozick, Judith Jarvis Thomson, and many other modern philosophers have advanced and defended challenging theses about morality. Since the great renaissance of ethics in the English-speaking world (1965-1975), its ambitions have diminished, I think, and a distinction has arisen between ethics (which is very "meta") and applied ethics (which is mostly about a given topic area, and not very philosophical). This split seems a harmful development, because the best moral philosophy is methodologically innovative and challenging and also addresses real issues.

Why shouldn't philosophers dispense advice? Because what one needs to advise people well is not only correct general views (which, in any case, many laypeople hold), but also good motivations, reliability and attention, fine interpretative skills, knowledge of the topic, judgment born of experience, and communication ability (meaning not only clarity but also tact). There is no reason to think that members of your local philosophy department are above average on all these dimensions.

But correct general views are valuable, and philosophers offer proposals that enrich other people's moral thinking. You wouldn't ask John Rawls to run a governmental program or even to advise on specific policies, but your thinking about policies may be better because you have read Rawls. It so happens that he held some interesting ideas about meta-ethics, but those were merely complementary to his core views, which were substantive.

I'm afraid I detect a general withdrawal from offering and defending moral positions in the academy. Humanists like to "problematize" instead of proposing answers. Social scientists are heavily positivist, regarding facts as given and values as arbitrary and subjective (and thus not part of their work). If moral philosophers come to regard the offering of moral positions as beyond their professional competence, there's virtually no one left to do it.

permanent link | comments (0) | category: philosophy

April 5, 2010

a philosopher hits the big time

I'm an adherent of a very small and obscure philosophical school called "particularism." (Of course, because I'm an academic, I have to have my own special flavor of it.) The best-known particularist is Jonathan Dancy, whom I met only once but who nicely reviewed a book manuscript of mine. And his work has had a big influence on me, even though I come at things from a different angle. Anyway, unbelievable as it may seem, here he is explaining particularism on Craig Ferguson's "Late Late Show" on CBS.

I've never seen his show, but this Ferguson guy strikes me as pretty smart. And Dancy does a credible job in a terrifying situation. It turns out he's the actress Claire Danes' father-in-law. That relationship--rather than the arguments in "Are Basic Moral Facts both Contingent and A Priori?" (2008)--may be the reason for Dancy's new TV career. Whatever the reason, long may it prosper.

permanent link | comments (0) | category: philosophy

March 25, 2010

state, market, and original sin

Imagine that the pure and original human condition is freedom from all political constraint; and when governments intervene, they introduce arbitrary and illegitimate power. Then the market is Eden and the government is original sin. In that case, anyone who deliberately increases the scope of government must be either a purposeful or a deluded friend of sin. Regardless of what the Congressional Budget Office or the American Medical Association may say about the new health care act, it can only be a snake in the garden. The difference between literally "taking over one sixth of the economy" by nationalizing health care and merely adding some new insurance regulations and subsidies (as Congress did this week) is immaterial, because sin is sin. On this view, the only important political distinction is between those who would protect freedom from the state and those who would use government for their ends. Communists, fascists, liberals, and moderate conservatives--despite what I see as profound differences--run together.

I am certainly not the first to note a similarity between this specific kind of libertarianism and religious thought. In 1922, Charles A. Beard argued:

About the middle of the nineteenth century, thinkers [in the field of Political Economy] were mainly concerned with formulating a mill owner's philosophy of society; and mill owners resented every form of state interference with their 'natural rights.' ... The state was regarded as a badge of original sin, not to be mentioned in economic circles. Of course, it was absurd for men to write of the production and distribution of wealth apart from the state which defines, upholds, taxes, and regulates property, the very basis of economic operations; but absurdity does not stay the hand of the apologist.

Beard wanted to rebut the idea that markets were primeval and natural by demonstrating that states originally created modern markets by seizing territory, chartering corporations, coining money, literally building physical exchanges, and so forth. But Beard's language suggests another point. The doctrine of laissez-faire echoes Christian principles, but almost precisely in reverse. (And to teach an inverted Christian doctrine would be blasphemous.) The conventional Christian view is that property was absent in Eden and among Jesus' apostles. Property entered because of sin; anointed or otherwise legitimate governments rightly restrain it with law.

I think Tom Paine represents an intermediate stage between the original doctrine (property is sin) and its laissez-faire inversion (property is pristine). In Common Sense, he writes:

[Natural] Society is produced by our wants, and government by our wickedness; the former promotes our happiness positively by uniting our affections, the latter negatively by restraining our vices. The one encourages intercourse, the other creates distinctions. The first is a patron, the last a punisher. Society in every state is a blessing, but government, even in its best state, is but a necessary evil . . . Government, like dress, is the badge of lost innocence.

This is not yet philosophical libertarianism, because Paine thinks that government, like dress, is a good idea under the circumstances. But it introduces the association of government with original sin.

Glenn Beck waded into the same territory when he denounced churches that embrace "social justice." His sense of sin was religious, I think, although his doctrine was the precise reverse of what all Christian denominations still officially hold. Jim Wallis has a nice rebuttal in the Huffington Post. If the official and traditional religious position still influences believers, then Beck has bitten off more than he can chew.

permanent link | comments (0) | category: philosophy

March 23, 2010

debating Bleak House

Steven Maloney has a thoughtful post about moral issues in Dickens' Bleak House. He cites two of my posts on the same subject, so this is a bit of a back-and-forth. I would summarize my thoughts about the novel as follows:

1. Mrs. Jellyby illustrates how an author's judgment of a character can be correct even though the same author's choice of that character is problematic. I find Mrs. Jellyby awful, as does Dickens. She is callously unconcerned about her own family because she is obsessed with an obviously foolish charitable scheme in Africa, a place of which she knows nothing. No doubt there were women like that in Dickens' day, when paths to national political and civic leadership were reserved for men. But bourgeois women were also struggling to play useful public roles despite a powerful cult of domesticity. Dorothea Brooke in Middlemarch--for example--is a great soul largely squelched by her narrow opportunities for improving the world. So it bothers me that Dickens would choose to portray a woman who should just stop worrying about society and serve her family better.

Steven makes a fair point that a whole range of characters populates Bleak House, and both the men and women exhibit various levels of social and domestic responsibility. The fact that Messrs. Skimpole and Carstone are as irresponsible as Mrs. Jellyby reduces the misogyny of the novel. Yet there is no female character with any capacity for social improvement--despite the terrible needs that Dickens portrays--and that seems a flaw.

The general category that interests me here encompasses fictional characters who have genuine virtues or vices, but whose description reinforces a harmful stereotype.

2. I think that Bleak House is a nationalistic novel, encouraging readers to broaden their sympathies to encompass all Englishmen (while stopping at the coasts of England). That's certainly not my favorite ethical stance, but it's better than a narrower frame or a vacuous and sentimental concern for human beings in general. Such nationalism is a form of solidarity, not just empathy. Building the nation-state as a community of mutual concern was an arduous task that could still fail today. Bleak House (and the liberalism it represents) improved the world.

Steven makes an important observation about Mr. Skimpole, who professes literally not to understand his social obligations. That creates an interesting problem for moral assessment. I think Steven is right that Skimpole is ultimately a charlatan and his kind of non-understanding is either inexcusable or spurious.

I've written much more about the ethical interpretation of literature in Reforming the Humanities: Literature and Ethics from Dante through Modern Times (Palgrave Macmillan, 2009).

permanent link | comments (0) | category: fine arts , philosophy

February 25, 2010

idea for a moral philosophy survey

I suspect that people make moral judgments based on a mix of principles, rules, virtues, moral exemplars, and stories. My own philosophical position is that these factors are on a single plane. Principles need not underlie stories, for example. There can be a web of influence or implication that connects all these different kinds of factors. It can be legitimate for a story to imply a principle, a principle to imply respect for an exemplar, the exemplar to suggest respect for a virtue, which implies a different principle. None is necessarily primary or foundational.

As an empirical matter, people differ (I assume) in how their moral thought is organized. If you envision each moral factor as a node, and each implication from one factor to another as a network tie, then we each have a moral network map in our mind. But for some, the map will look like an organizational chart, with a few very broad principles at the bottom, which imply narrower principles, which imply specific judgments. For others, a single story (like the Gospels, or one's own traumatic experience) lies at the center, and everything else radiates out. Some may have a random-looking network map, with lots of nodes and connections but no order. And some--whether by chance or not--will have what's called a "scale-free" network, in which 20% of the nodes are responsible for 80% of the ties. That kind of network is robust and coherent, but not ordered like a flow chart. The 20% of "power nodes" may be a mix of stories, exemplars, principles, and virtues.
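The contrast among these map shapes is easy to simulate. Here is a rough sketch (assuming the networkx library; the sizes and parameters are arbitrary) that builds an organizational chart, a random network, and a scale-free network, and then asks what share of the ties involve the best-connected fifth of the nodes:

    import networkx as nx

    def top_fifth_share(G):
        # Fraction of all edge endpoints that belong to the top 20% of
        # nodes, ranked by their number of links.
        degrees = sorted((d for _, d in G.degree), reverse=True)
        k = max(1, len(degrees) // 5)
        return sum(degrees[:k]) / sum(degrees)

    graphs = {
        "organizational chart": nx.balanced_tree(r=3, h=5),
        "random": nx.gnp_random_graph(500, 0.02, seed=1),
        "scale-free": nx.barabasi_albert_graph(500, 2, seed=1),
    }
    for name, G in graphs.items():
        print(f"{name}: top fifth of nodes touch {top_fifth_share(G):.0%} of ties")

The scale-free graph concentrates far more of its ties in its top fifth than the random one does, though the exact 80/20 split depends on the parameters.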

I would further hypothesize that people of similar cultures have similar moral network maps.

How to find out? I wonder if you could give people an online survey that led with a fairly realistic but fictional moral situation.* It would be something close to lived experience, not a scenario like a trolley problem that is contrived to bring abstract principles to the surface.

Respondents could then be asked:

1. What principles (if any) influence you when you think about what you should do?
2. Whom would you imitate (if anyone) when you're deciding what to do?
3. What virtues (if any) would you try to embody when you're deciding what to do?
4. What stories (if any) come to mind when you're deciding what to do?

All of a respondent's answers could then be displayed on a screen, randomly scattered across the plane. The respondent could be given a drawing tool and asked to draw arrows (one- or two-directional) between factors that seem to influence or support other ones. Those data would generate a moral network map for the individual, and we would see how much the structure of people's maps differs.
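As a sketch of the resulting data structure--every name, answer, and arrow below is hypothetical, and the networkx library is an assumption--each respondent's map could be stored as a small directed graph:

    import networkx as nx

    # One respondent's answers to the four questions, tagged by kind.
    factors = {
        "equal opportunity": "principle",
        "my grandmother": "exemplar",
        "patience": "virtue",
        "the Good Samaritan": "story",
    }

    # Arrows the respondent drew: (from, to) means that the first
    # factor influences or supports the second.
    arrows = [
        ("the Good Samaritan", "equal opportunity"),
        ("my grandmother", "patience"),
        ("patience", "equal opportunity"),
    ]

    M = nx.DiGraph()
    for factor, kind in factors.items():
        M.add_node(factor, kind=kind)
    M.add_edges_from(arrows)

    # Factors that many arrows point to are candidate "power nodes."
    print(sorted(M.in_degree, key=lambda nd: nd[1], reverse=True))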

*It would be very challenging to write a scenario that didn't bias responses toward one kind of moral factor. It would also be difficult to create a fictional scenario that had salience for many different people. But the general idea would be to create a nuanced, complex, realistic situation demanding a moral response. For me personally, the kind of fictional story that would resonate would be something like this: "Your child attends a local public school. She's doing well and learning some academic material in her classes, although not as much as she could. The school is racially and culturally diverse, and she benefits from learning about people who are demographically different. White, middle-class students perform better on standardized tests within this school than their peers who are children of color. The principal is caring and concerned with equity but does not seem to have a vision. The teacher is not especially nice but does seem effective at raising all children's test scores. Options for you include moving your kid to a different school, becoming more involved in the school's governance, or advocating for a policy change. What do you feel you should do?"

permanent link | comments (0) | category: philosophy

February 24, 2010

going deeper on gay marriage

At a meeting last week, we discussed whether gay marriage makes a good topic for discussion in a philosophy or civics course at the high school or college level. Some participants argued that there are no good secular, public reasons against gay marriage. Students (at any level) may have personal convictions against it, but they can only disclose those convictions (if they dare). They will not be able to make arguments relevant to fellow students who hold different convictions. All the neutral arguments favor gay marriage. And that makes it a poor choice for a discussion topic.

I'm not certain that's correct, but I do think that gay marriage is nested in broader issues that make better discussion topics. IF we should live in a liberal, democratic state that is neutral about religion, AND IF that state should give special legal recognition and benefits to "marriage," defined as a very specific contract between pairs of consenting adults, THEN that recognition and those benefits should be available to gay citizens as well as straight ones. That argument seems very straightforward to me and virtually impossible to refute on its own terms. But ...

Should we live in a liberal, democratic state that is neutral about religion? That's a good, complicated, heavily-discussed topic. It raises thorny cases. For example, Martin Luther King was a Christian minister and theologian who made brilliant, "faith-based" arguments against segregation. Those arguments influenced policymakers and voters in our liberal democracy. Was his influence appropriate? If so, why?

Second, should the state recognize and provide benefits for only certain kinds of contracts, defined as "marriages"? Today, in some states, gays may marry legally. But everyone who marries enters into a contract that has certain features. It is designed to be permanent, although there is an intentionally difficult escape hatch in the form of divorce. It combines in one package monogamous sexual intimacy, economic unity, parenting and adoption rights, cohabitation, tax benefits, inheritance, and other legal privileges. Clearly, these elements could be unpacked and offered a la carte.

In practice, marriages do differ. Some people who marry are never sexual partners nor plan to be. Some couples do not expect or value monogamy. Prenuptial agreements may override the principle of economic unity or common property. Yet it remains important that the state--and social custom--favors one model of marriage (even when gay marriage is permitted).

I think this second issue (standardized legal marriages versus a la carte contracts) is pretty interesting. If legal marriage became very flexible, it would be like forcing everyone to negotiate their own prenuptial agreements. I would personally hate that idea. It seems extremely stressful to have to invent one's own model of marriage as a couple and then write it all down in legal terms. I would much rather buy into an existing legal and social norm. But this seems like a worthy topic of discussion.

permanent link | comments (0) | category: philosophy

January 6, 2010

why I am not a libertarian

I have a lot of respect for the pragmatic kind of libertarianism that says: Market solutions might work better than government programs, and we should try them. For example, I think it's right to experiment with voucher systems as alternatives to government-run schools. This experiment will either work or not (under various circumstances), but it's worth trying.

A voucher system would not, however, bring about true philosophical libertarianism. The government would still collect mandatory taxes to fund education, and would still make certain educational experiences mandatory for every child. In fact, voucher systems are standard in some of the Western European countries that we call "socialist."

True philosophical libertarianism says: Government taxation and regulation are affronts to personal liberty. My life is mine, and no one, including a democratic state, may take goods from me or direct my actions without restricting my freedom. At most, minor restrictions on my liberty are acceptable for truly important reasons, but they are always regrettable.

That doctrine simply does not feel plausible to me, experientially. Imagine that all levels of government in the United States reduced their role to providing national defense and protecting us against crimes of violence and theft. Gone would be an interventionist foreign policy, criminalization of drugs and prostitution, and--more significantly--publicly funded schools, colleges, medical care, retirement benefits, and environmental protection. As a result, a family like mine could probably keep 95% of the money we now have to spend on taxes, paying only for a minimal national defense and some police and courts. We would have perhaps one third more disposable income,* although we would have to purchase schooling for our kids, a bigger retirement package, and more health insurance; and we would have to pay the private sector somehow for things like roads and airports.

I have my doubts that we would be better off in sheer economic terms. In any case, I am fairly sure that I would not have more freedom as a result of this change. And freedom (not economic efficiency or impact) is the core libertarian value.

I don't think one third more discretionary income would make me more free because I know plenty of people who already have that much income and they don't seem especially free. With an extra billion dollars, I could do qualitatively different things from what I can do now; but an amount under $100,000 would just mean more stuff. Meanwhile, when I consider the actual limits to my freedom, the main ones seem to fall into two categories. First, there is a lack of time to do what I want. I suppose not having to pay taxes would give me a bit more time because I could work fewer hours. But my work is a source of satisfaction to me (and is also somewhat competitive with others' work). I would be very unlikely to cut my hours if the opportunity arose, nor would doing so feel like an increase in my freedom. The way to get more time is to stop wasting it.

Second, I feel limited by various mental habits: too much concern with material things, too much fear of disease and death, too much embroilment in trivialities. I hardly think that being refunded all my taxes would help with those problems, especially if I then had to shop for schools, retirement packages, and insurance. That sounds like a perfect snare.

I have been talking about me and my family. Whatever the impact on us of a libertarian utopia, it would be worse for people poorer than us. Unless you take a very dim view of the quality of government services such as Medicaid and public schools, you should assume that low-to-moderate income citizens get more from the state than they could afford on the market. They would have reason to worry about whether they could afford basic services at all, and such insecurity would decrease their freedom as well as their welfare.

Overall, economic libertarianism seems to me a materialistic doctrine. (Civil libertarianism, which I endorse, is a different matter.) You risk being called elitist for saying that we are unfree because we have too much stuff and care too much about it. But it happens to be true.

*I don't know how much my family spends on total taxes (income, sales, property, local, state, federal, Social Security, etc.), but the Statistical Abstract of the United States says that 12% of all personal income goes to taxes, and I am presuming that we pay three times the average rate because we have higher income and live in Massachusetts.
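For what it's worth, the footnote's arithmetic can be made explicit. Here is a back-of-the-envelope sketch in Python (the 12% figure and the threefold multiplier are the only inputs taken from the text; everything else is illustrative):

    average_tax_share = 0.12   # share of all personal income paid in taxes
    family_multiplier = 3      # higher income plus Massachusetts
    effective_rate = average_tax_share * family_multiplier
    print(f"effective rate: {effective_rate:.0%}")    # 36%

    # Keeping 95% of those taxes returns roughly a third of gross
    # income--hence "one third more disposable income."
    refund = 0.95 * effective_rate
    print(f"refund: {refund:.0%} of gross income")    # ~34%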

permanent link | comments (0) | category: philosophy

January 5, 2010

Habermas illustrated by Twitter

The contemporary German philosopher Jürgen Habermas has introduced a set of three concepts that I find useful. They play out in the 140-character messages, "tweets," that populate Twitter. Here are Habermas' three concepts, with tweets as illustrations. (I found these examples within seconds as I wrote this blog post.)

Lifeworld is the background of ordinary life: mainly private, maybe somewhat limited or biased, but also authentic and essential to our satisfaction as human beings. When in the Lifeworld, we mostly communicate with people we know and who share our daily experience, so our communications tend to be cryptic to outsiders and certainly not persuasive to people unlike us. Real examples from Twitter: "y 21st bday with my beloved fam, bf and bff :)" ... "Getting blond highlights for new year." ... "Thanks! You too! I hope you get a chance to rest over the weekend before 'life' comes back at us."

The Public Sphere is the set of forums and institutions in which diverse people come together to talk about common concerns. It includes civic associations, editorial pages of newspapers, New England Town Meetings, and parts of the Internet. The logic of public discourse demands that one give general reasons and explanations for one's views--otherwise, they cannot be persuasive. Examples from Twitter: "Is it time to admit that the failures in our intelligence on terrorism are not systemic/technical but human/cultural?" ... "Clyburn Compares Health Care Battle To Struggle For Civil Rights" ... "Reports from Iran of security forces massing in squares as new footage of protests is posted." (Note that each of these tweets had an embedded link to some longer document.)

The "System" is composed of formal organizations such as governments, corporations, parties, unions, and courts. People in systems have official roles and must pursue pre-defined goals (albeit with ethical constraints on how they get there). For example, defense lawyers are supposed to defend their clients; corporate CEOs are supposed to maximize profit; comptrollers are supposed to reduce waste in their own organizations. You can see the "System" at work on Twitter if you follow Microsoft ("The Official Twitter of Microsoft Corporate Communications"), The White House, or NYTimes.

When well designed, Systems can be efficient, predictable, and fair. But they prevent participants from reasoning about what ought to be done, because officials have pre-defined goals. Thus it is dangerous for the System to "colonize" the public sphere and the Lifeworld. It is also dangerous for people to retreat entirely from the public sphere into the privacy of the Lifeworld. The Twitter Public Timeline shows this struggle playing out in real time.

permanent link | comments (0) | category: Internet and public issues , philosophy

September 23, 2009

Reforming the Humanities (coming soon)

My new book is in production and has a cover and an Amazon page. It's entitled Reforming the Humanities: Literature and Ethics from Dante through Modern Times. Two blurbs are on the back.

permanent link | comments (0) | category: philosophy

September 3, 2009

ethical reasoning as a scale-free network

All of us have many ethical thoughts--about this person, that activity, and also about general concepts like virtues and principles. Some of our ethical thoughts are linked to other ones. One entails another, or trumps it, or incorporates it. So you could make a diagram of my moral or ethical worldview that would consist of my thoughts and links among them.

What kind of network would it be? And what kind of network should it be? These are, respectively, an empirical/psychological question (the answer to which might differ for individuals) and a moral/philosophical question (which probably has one correct answer). By the way, instead of asking these questions about individuals, one could pose them for cultures or institutions.

Ethics might turn out to involve one of three kinds of networks:

1. An ordered hierarchy. This kind of network map would resemble the organizational flowchart of the US Army. At HQ would be a few very general, mutually consistent core principles, like Kant's Categorical Imperative or the utilitarian principle of the greatest good for the greatest number. Division commanders would be big principles like "no lying" or "spend government money to reduce suffering." The foot soldiers would be particular judgments. The chain of command would ideally be clear. Real people might have confused structures, but then we should try to rationalize them. The purpose of trolley problems, for example, is to identify the core principles of people's ethics so that inconsistencies can be reduced.

2. A random-looking network. In a truly random network, any node has an equal chance of being linked to any other. As in a bell curve, the node with the most links would not be that different from the mean node. Our ethical map would not be truly random, because there are reasons that one moral thought entails another. But the links among concepts and opinions might be distributed so that they were mathematically similar to those in a randomly-generated network.

I doubt that this is a good description of morality. David McNaughton and Piers Rawling are correct to say that some ethical concepts are "central." They are not just weightier than other concepts (as rape is weightier than jaywalking); they are also more central in the sense that they turn up more often and we rely on them more for judgments ("Unprincipled Ethics," in Hooker and Little, eds., Moral Particularism, p. 268).

3. A scale-free network: This is a mathematical phrase for a network in which just a few nodes have enormous numbers of links and basically hold the whole thing together. Scale-free networks have no "scale" because there is no typical number of links that could serve as the unit of a scale. Instead, the number of nodes with a given number of links falls off according to a "power law." From Wikipedia:

"An example power law graph, being used to demonstrate ranking of popularity. To the right is the long tail, to the left are the few that dominate (also known as the 80-20 rule)."

In the case of ethics, we might find that equality, freedom, self-improvement, and compassion were power hubs with enormous numbers of links. Gratitude, fidelity, and the like might appear in an important second tier. (I am drawing here on W.D. Ross's list of prima facie duties.) Not cutting ahead in line would be out on the "long tail" of the distribution, along with reading Tolstoy and smiling at bus drivers.

Empirically, I think we could find out whether people (some or all of them) had scale-free moral network maps in their heads. One method would be to obtain a lot of text in which they reasoned about ethical issues--say, interview transcripts. One would identify and code concepts and connections among them, justifying each addition to the map with a quote. Whether the network is scale-free then becomes a mathematical question.
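Here is a rough sketch of that last step, under stated assumptions: the concepts and links below are hypothetical stand-ins for hand-coded transcript data, and a crude log-log regression (using the numpy and networkx libraries) stands in for a proper power-law fit with goodness-of-fit tests:

    from collections import Counter

    import networkx as nx
    import numpy as np

    # Hypothetical coded links, each of which would be justified by a
    # quote from the transcripts.
    coded_links = [
        ("fairness", "honesty"), ("fairness", "no lying"),
        ("fairness", "equal pay"), ("honesty", "no lying"),
        ("compassion", "fairness"), ("compassion", "charity"),
    ]
    G = nx.Graph(coded_links)

    # Tally how many nodes have each number of links.
    counts = Counter(d for _, d in G.degree)
    degrees = np.array(sorted(counts))
    freqs = np.array([counts[d] for d in sorted(counts)])

    # On log-log axes, a power law is a straight line with negative slope.
    slope, _ = np.polyfit(np.log(degrees), np.log(freqs), 1)
    print(f"log-log slope: {slope:.2f}")

With a real corpus, one would also compare the power-law fit against alternatives (an exponential, say) before declaring the network scale-free.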

Philosophically, I like the idea of morality as a scale-free network. It means that some concepts are much more important than others, but everything needn't rest on a consistent and coherent foundation. The network can be strong even though it accommodates tensions. Further, since there is no foundation, doubting any one premise doesn't undermine morality as a whole. It just knocks out one hub and the traffic can be redirected. Finally, this metaphor helps us to think about differences in ethical thinking among individuals and among cultures. It's not that we have incommensurable perspectives, but that our network maps have (somewhat) different hubs. That suggests that dialog is possible even though disagreement should be expected (which sounds to me like the truth).

permanent link | comments (0) | category: philosophy

July 30, 2009

reforming the humanities

Last week, I submitted the copy-edited version of my next book for layout and production. It is entitled Reforming the Humanities: Literature and Ethics from Dante Through Modern Times, and it will be published by Palgrave Macmillan this year.

permanent link | comments (0) | category: philosophy

July 8, 2009

a tendency to generic thinking

When we try to think seriously about what should be done, we have a tendency or temptation to think in generic terms--about categories rather than cases.

I have a gut-level preference for particularism: the idea that, in each situation, general categories are "marinaded with others to give some holistic moral gestalt" (Simon Blackburn's phrase). That implies that applying general categories will distort one's judgment, which should rather be based on close attention to the case as a whole.

I will back off claims that I made early in my career that we should all be thorough-going particularists, concerned mainly with individual cases and reluctant to generalize at all. My view nowadays is that there are almost always several valid levels of analysis. You can think about choice in general, about choice in schooling, about charters as a form of choice, or about whether an individual school should become a charter. All are reasonable topics. But the links among them are complex and often loose. For instance, your views about "choice" (in general) may have very limited relevance to the question of whether your neighborhood school should become a charter. Maybe the key issue there is how best to retain a fine incumbent principal. Would she leave if the school turned into a charter? That might be a more important question than whether "choice" is good.

The tendency to generalize is enhanced by certain organizational imperatives. For instance, if you work for a national political party, you need to have generic policy ideas that reinforce even more generic ideological ideas. The situation is different if you are active in a PTA. Likewise, if you are paid to do professional policy research, you are likely to have more impact if your findings can generalize--even if your theory explains only a small proportion of the variance in the world--than if you concentrate on some idiosyncratic case. On the other hand, if you are paid to write nonfictional narratives (for instance, as a historian or reporter), you can focus on a particular case.

I'm inclined to think that we devote too much attention (research money, training efforts, press coverage) to generic thinking, and not enough to particular reasoning about complex situations and institutions in their immediate contexts. There is a populist undercurrent to my complaint, since generic reasoning seems to come with expertise and power, whereas lay citizens tend to think about concrete situations. But that's not always true. Martha Nussbaum once noted that folk morality is composed of general rules, which academic philosophers love to complicate. Some humanists and ethnographers are experts who think in concrete, particularistic terms. Nevertheless, I think we should do more to celebrate, support, and enhance laypeople's reasoning about particular situations as a counterweight to experts' thinking about generic issues.

permanent link | comments (1) | category: philosophy

June 15, 2009

ethics from nature (on Philip Selznick)

(En route to the Midwest for a service-learning meeting.) Here is a fairly comprehensive ethical position. It is my summary of Philip Selznick's The Moral Commonwealth, chapter 1, which is presented as an interpretation of Dewey's naturalistic ethics. I have not investigated whether Selznick gets Dewey right--that doesn't matter much, because Selznick is a major thinker himself. His position has just a few key ingredients:

1. "The first principle of a naturalist ethic is that genuine values emerge from experience; they are discovered, not imposed" (Selznick, p. 19). So we shouldn't expect to ground ethics in a truth that is outside of experience, as Kant advised.

2. Experience is the understanding of nature, broadly defined. Such experience has moral implications. There is "support in nature for secure and fruitful guides to moral reflection and prescription" (p. 27). Yet "humanity is in the business of amending nature, not following it blindly" (p. 18).

3. The study of nature that we need for ethics is more like "natural history" than "theoretical science." In other words, it looks for generalities and patterns, but it doesn't assume that true knowledge is highly abstract and universal. "For modern theoretical scientists, nature is not known directly and concretely but indirectly and selectively. Ideally embodied in mathematical propositions, nature becomes rarified and remote. In contrast, students of natural history--naturalists--are interested in the situated wholeness of objects and organisms. They perceive a world of glaciered canyons, burnt prairies, migrating geese." They exhibit "love for the world" (p. 26).

4. Certain facts about human beings (not to be sharply separated from other natural species) emerge from such empirical observation and are ethically important. For instance, human beings have a potential for growth or development in interaction with community, and such growth gives us well-being. "When interaction is free and undistorted--when it stimulates reflection and experiment--powers are enhanced, horizons expanded, connections deepened, meanings enriched. Growth depends on shared experience, which in turn requires genuine, open communication" (pp. 27-8).

Dewey/Selznick begin with observable facts about us as a natural species, identify growth as a "normative idea" (p. 28), and are soon on their way to strong ethical conclusions. For instance, Dewey claimed that democracy is the best system of government because it permits free collective learning; but a democracy is desirable to the extent that discussion and experimentation prevail (rather than the mere tabulation of votes).

This approach suggests that it's better to "benchmark" than to set ideals. That is, it's better to assess where we are as a species, or as a community, or as an individual, and then try to enhance the aspects that seem best, rather than decide what a good society or a good character should be like in principle. Dorf and Sabel have tried to work out a whole political theory based on this distinction. (Link opens a Word doc.)

I find Selznick's view attractive, but I have two major methodological concerns. First, I'm not sure that the selection of natural features is as straightforward as Selznick and Dewey presume. We are naturally capable of learning together in cooperative groups, thereby developing our own competence and enriching our experience. We are also capable of exploitation, cruelty, faction, brutality, and waste. These all seem equally "natural." I suspect the pragmatist's preference for "growth" is closer to a classical philosophical premise than to a naturalist observation. In fact, it sounds a lot like Kant's requirement that we develop ourselves and others.

We could read Dewey's conclusions as simply a contribution to public debate. He likes "growth"; others can discuss his preference. If we reach consensus within our community, we have all the ethical certainty we need. If we disagree, our task is to discuss.

That's all very well as long as we recognize that consensus is highly unlikely. (This is my second objection.) Imagine Dewey in a debate with an Iranian Ayatollah. The latter would reject Dewey's method, since revelation should trump experience; Dewey's understanding of natural history, since the world began with creation and will end apocalyptically; and Dewey's goals, since salvation after death is much more valuable than growth here on earth. No experience can directly settle this debate, because we only find out what happens after death after we die. And until the Mahdi actually returns, it's possible that he is waiting.

But here's an argument in favor of Dewey's method. The debate is not just about abstract principles and unfalsifiable predictions. It's also about how principles play out in real, evolving institutions. So we should compare not just the metaphysics of a Shiite Ayatollah and an American pragmatist, but also the institutions that each one endorses: contemporary Iran versus a Deweyan model, such as a laboratory school or a settlement house. It seems to me that contemporary Iran is not doing very well, and Dewey has a "naturalist" explanation of why not. The fundamental principles of the Iranian revolution are not in sync with nature. That's not going to persuade a diehard revolutionary, because he will expect everything to improve as soon as the Mahdi returns. But it is an observation that a devout Shiite can accept and use as an argument for reform. Thus there is a meaningful debate between reformers like Khatami and diehards like Ahmadinejad. If Khatami ultimately wins, score one for Dewey and Selznick, because Iran will have turned out to be governed by natural laws of growth and reflection.

permanent link | comments (0) | category: philosophy

May 7, 2009

two paths to abstraction

1. At first, artists depict the world as they think it actually is. They even show heaven and other eternal and transcendent scenes in terms of their own times, places, and styles. Then they realize that they have a manner, a method, and a style of representation; and many such styles are possible. They learn to imitate art from distant places and times, which requires a certain sympathy or compassion. Their ability to represent the world as depicted by others reduces their attachment to their own style, which begins to seem arbitrary. For example, it seems arbitrary that the center of a flat piece of art should always appear to recede into the distance, and that one side of each object should be visible. Why not show all the sides at once, as in cubism? Gradually, artists' enthusiasm for any form of representative art diminishes. One important option becomes renunciation, in the form of minimalism and abstraction. Showing the world in any style means embodiment; but the mind can transcend the body. True art then becomes not the naive representation of the world, nor a sentimental imitation of someone else's naive style, but just a field of color on a canvas. That seems the way to make the artist's arbitrary will and narrow prejudices disappear, and beauty appear.

2. The Buddha's "Karaniya Metta Sutta," translated by the Amaravati Sangha:

Even as a mother protects with her life
Her child, her only child,
So with a boundless heart
Should one cherish all living beings;
Radiating kindness over the entire world:
Spreading upwards to the skies,
And downwards to the depths;
Outwards and unbounded,
Free from drowsiness,
One should sustain this recollection.
This is said to be the sublime abiding.
By not holding to fixed views,
The pure-hearted one, having clarity of vision,
Being freed from all sense desires,
Is not born again into this world.

The image is Ad Reinhardt's "Abstract Painting" (1951-2). (Reinhardt, influenced by Zen through his friend Thomas Merton, sought to make each painting "a free, unmanipulated, unmanipulatable, useless, unmarketable, irreducible, unphotographable, unreproducible, inexplicable icon.")

permanent link | comments (0) | category: philosophy

May 1, 2009

what shape is a field of vision?

At an idle moment recently, I was wondering what shape my field of vision has. A quick Google search took me to Alexander Duane's and Ernst Fuchs's 1899 textbook of ophthalmology, which is online. I am sure there is much more recent work--both empirical and conceptual--but I didn't explore it. Instead, I began to speculate that this is a fairly complicated question.

My first responses were in terms of two-dimensional spaces--for instance, I thought that perhaps my field of vision was an oval with a perturbation around my nose. It's oval rather than round because I have two eyes, and each has a separate field like the one pictured here. Putting them together creates an oval. So if you wanted to represent what I can see, you would take a wide-angle photo from my vantage point and cut out a roughly oval shape.

But my retinas are three-dimensional, as is the world they see. So should we say that my field of vision is a section of an ovoid with some irregularities created by my nose, eyebrows, and hair? Even that answer seems oversimplified, since my eyeballs are capable of focusing at different depths (and even rolling around, although that might be forbidden in a test of one's field of vision); and the world itself is not pasted on the inside of an oval--it extends into the distance. If we said that the shape of my field of vision was roughly ovoid, how big would that ovoid be? The night sky that is sometimes part of it is awfully far away. And I haven't even mentioned that we see moving things and bright colors more easily than stable, dull things. By now, it's beginning to sound as if my field of vision has no shape. But surely that can't be right; my vision has limits and moves as I change my orientation. We've begun talking about the world, not what I see of it.

By the way, it is interesting how easily we accept a photograph as a representation of vision, even though it is flat and rectangular, whereas our field of vision is--at the very least--irregular and vaguely bordered.

Wittgenstein seems to want us to dispense with the goal of analogizing inner experience to something else, as if everyday experience required some explanation on terms other than its own.

For Wittgenstein, I take it, a field of vision has no shape, and we only feel that that's strange because we are in the grip of a model of vision as inner photography. It's actually something else entirely. And yet I keep returning to my initial thought that what I see is an oval with my nose intruding from the bottom.

permanent link | comments (0) | category: philosophy

March 12, 2009

a new book on the way

Palgrave Macmillan has offered me a contract to publish my "Dante book" (which needs an actual title--and I'm not sure what that should be). I have been working on the manuscript for 14 years, and it has gone through many profound structural changes as my thoughts have evolved and as I've assimilated useful criticism. It is great to think that the project will be done and between covers within months.

Here is the beginning of the introduction:

This is a book of humanistic scholarship: specifically, literary criticism and moral philosophy. Those are my roots, even though I spend almost all my time on quantitative social science or policy analysis. My day job is to study and promote "civic engagement" or "active citizenship"; and it has proved useful to study those topics empirically. (Hence CIRCLE.) I don't think either phrase appears in this book manuscript. But there is a deep connection in my mind, which I hope to make explicit in a later project.

The thesis of my "Dante book" is that an indispensable technique for moral judgment is the description of concrete, particular situations in narratives. I argue that no set of principles, no procedure, no algorithm for weighing values, and no empirical data could ever replace this process of description. It is an art and a skill; some people practice it better than others, and it can be taught. But it is not the special province of any credentialed experts, such as lawyers, economists, or moral philosophers. It cannot be replaced--even in a distant utopia--by rules or systems.

In my "Dante book," I draw some conclusions about the purposes and methods of the humanities. (In fact, it has been suggested that I entitle the volume, Dante's Moral Reasoning: Reforming the Humanities.) In my other work, I follow the implications beyond the academy into the domain of politics. We cannot tell what is right and good unless active, engaged citizens discuss concrete cases. They will only be motivated to discuss and to inform their conversations with experience if they have practical roles in self-government. That is the fundamental connection between my two main interests: moral judgment and civic engagement.

permanent link | comments (0) | category: philosophy

March 6, 2009

critical thinking about "critical thinking"

Here are three interestingly complementary comments. The first is from the moderate-conservative New York Times columnist David Brooks.

The second comment is from the influential Yale literary and queer theorist Michael Warner (hardly a moderate conservative, nor a pundit--although he might be a pandit). In a chapter entitled "Uncritical Reading," Warner writes that the standard justification of college-level English is to teach students to be critical readers, ones who aren't fooled by various forms of ideology, emotion, bias or writerly tradecraft.

Warner ends with a quote from the philosopher Bernard Williams (who, considering his politics as a British social democrat, makes a nice third leg of this stool).

Williams is skeptical about this ideal of separating the "criticizing self" from "everything that a person contingently is." To put the point in my terms (not his): We can criticize any value. We can always ask, Why? Why should people have freedom of speech? Because they have equal dignity. But why should they have equal dignity? When moral words and phrases have emotional appeal, we can learn to disassociate ourselves from the positive emotions by asking critical questions. That process, carried to its relentless conclusion, leaves nothing.

Thus a good life is not simply a critical one; it also requires appreciation of contingency and solidarity with others. In my opinion, it is right to appreciate the diverse values that people have inherited (for contingent reasons) and to feel solidarity with them despite these differences. In that case, critical thinking and critical reading are not satisfactory goals of education, at any level. Some critical independence is valuable, but there must also be a positive affective dimension.

A separate question is to what extent critical thinking really dominates at institutions like Harvard. My sense is that the faculty report that Brooks quotes is only part of the picture. Universities also powerfully teach respect or even reverence for various institutions and traditions. Indeed, they try to teach students to revere academia itself--not mainly as a venue for critical debate but as a social gatekeeper and arbiter of norms. The fact that "critical reading" takes place in the seminar room helps to justify the institution's major function, which is to bestow membership and recognition on some and not on others.

permanent link | comments (4) | category: academia , philosophy

February 24, 2009

the politics of negative capability

Zadie Smith's article "Speaking in Tongues" (The New York Review, Feb 26) combines several of the fixations of this blog--literature as an alternative to moral philosophy, deliberation, Shakespeare, and Barack Obama--and makes me think that my own most fundamental and pervasive commitment is "negative capability." That is Keats' phrase, quoted by Smith in her essay.

Other critics have noted Shakespeare's remarkable ability not to speak on his own behalf, from his own perspective, or in support of his own positions. Coleridge called this skill "myriad-mindedness," and Matthew Arnold said that Shakespeare was "free from our questions." Hazlitt said that the "striking peculiarity of [Shakespeare’s] mind was its generic quality, its power of communication with all other minds--so that it contained a universe of feeling within itself, and had no one peculiar bias, or exclusive excellence more than another. He was just like any other man, but that he was like all other men." Keats aspired to have the same "poetical Character" as Shakespeare. Borrowing closely from Hazlitt, Keats said that his own type of poetic imagination "has no self--it is every thing and nothing--It has no character. … It has as much delight in conceiving an Iago as an Imogen. What shocks the virtuous philosop[h]er, delights the camelion poet.” When we read philosophical prose, we encounter explicit opinions that reflect the author’s thinking. But, said Keats, although "it is a wretched thing to express … it is a very fact that not one word I ever utter can be taken for granted as an opinion growing out of my identical nature [i.e., my identity]."

In Shakespeare's case, it helps, of course, that he left no recorded statements about anything other than his own business arrangements: no letters like Keats' beautiful ones, no Nobel Prize speech to explain his views, no interviews with Charlie Rose. All we have is his representation of the speech of thousands of other people.

Stephen Greenblatt, in a book that Smith quotes, attributes Shakespeare's negative capability to his childhood during the wrenching English Reformation. Under Queen Mary, you could be burned for Protestantism. Under her half-sister Queen Elizabeth, you could have your viscera cut out and burned before your living eyes for Catholicism. It is likely that Shakespeare's father was both: he helped whitewash Catholic frescoes and yet kept Catholic texts hidden in his attic. This could have been simple subterfuge, but it's equally likely that he was torn and unsure. His "identical nature" was mixed. Greenblatt argues that Shakespeare learned to avoid taking any positions himself and instead created fictional worlds full of Iagos and Imogens and Falstaffs and Prince Harrys.

What does this have to do with Barack Obama? As far as I know, he is the first American president who can write convincing dialog (in Dreams from My Father). He understands and expresses other perspectives as well as his own. And he has wrestled all his life with a mixed identity.

Smith is a very acute reader of Obama.

The challenge for Obama is that he doesn't write fiction (although Smith remarks that he "displays an enviable facility for dialogue"), but instead holds political office. Generally, we want our politicians to say exactly what they think. To write lines for someone else to say, with which you do not agree, is an important example of "irony." We tend not to like ironic leaders. Socrates' "famous irony" was held against him at his trial. Achilles exclaims, "I hate like the gates of hell the man who says one thing with his tongue and another in his heart." That is a good description of any novelist--and also of Odysseus, Achilles' wily opposite, who dons costumes and feigns love. Generally, people with the personality of Odysseus, when they run for office, at least pretend to resemble the straightforward Achilles.

But what if you are not too sure that you are right (to paraphrase Learned Hand's definition of a liberal)? What if you see things from several perspectives, and--more importantly--love the fact that these many perspectives exist and interact? What if your fundamental cause is not the attainment of any single outcome but the vibrant juxtaposition of many voices, voices that also sound in your own mind?

In that case, you can be a citizen or a political leader whose fundamental commitments include freedom of expression, diversity, and dialogue or deliberation. Of course, these commitments won't tell you what to do about failing banks or Afghanistan. Negative capability isn't sufficient for politics. (Even Shakespeare must have made decisions and expressed strong personal opinions when he successfully managed his theatrical company.) But in our time, when the major ideologies are hollow, problems are complex, cultural conflict is omnipresent and dangerous, and relationships have fractured, a strong dose of non-cynical irony is just what we need.

permanent link | comments (0) | category: Barack Obama , Shakespeare & his world , deliberation , philosophy

February 23, 2009

consolation of mortality

I just finished Julian Barnes' Nothing to be Frightened Of, which is the memoir of a novelist who fears death. I read it because the quotations in reviews were very funny; because, as a fellow chronophobiac, I hoped that some wisdom and solace might be mixed in with the humor; and because I knew the author's brother Jonathan at Oxford around 1990 and wanted to understand more about this philosopher who "often wears a kind of eighteenth-century costume designed for him by his younger daughter: knee breeches, stockings, buckle shoes on the lower half; brocade waistcoat, stock, long hair tied in a bow on the upper." (This is Julian's description. I would add that the effect is less foppish than you'd think. The wearer resembles a plain-spun, serious Man of the Enlightenment much more than a dandy.)

Anyway, it's a good book and certainly amusing. But Barnes treats the most powerful consolation of mortality very subtly--if he recognizes it at all. I mean the consolation of the first person plural. I will die, but we will live on. We think in both the singular and the plural and probably began the latter first, when we stared at our parents. Language, thought, culture, desire--everything that matters is both individual and profoundly social.

"After I die, other people will go about their ordinary lives, laughing, singing, complaining about trifles, never mourning or even missing me." That is the solipsist's jealous lament. But the mood changes as soon as the grammar shifts. "Even though I must pass, our ordinary life will continue in all its richness and pleasure."

What we count as the "we" is flexible--it can range from a dyad of lovers to the whole human race. No such "we" is guaranteed immortality. It depresses Julian Barnes that humanity must someday vanish along with our solar system (and we may finish ourselves off a lot faster than that). But no large collectivity of human beings is doomed to a fixed life span. We can outlive you and me, and you and I can help to make that happen. This is a consolation available to all human beings, whatever they may believe about souls and afterlives. But it is not, I think, much of a comfort to Julian Barnes.

permanent link | comments (0) | category: philosophy

February 18, 2009

fundamental orientations to reform

(This is a rambling post written during a flight delay at Washington National. It lacks an engaging lead. In brief, I was thinking about various conservative objections to utopian reform and how social movements, such as the Civil Rights Movement, can address some of those objections.)

The French and Russian revolutions sought dramatically different objectives--the French Jacobins, for example, were fanatical proponents of private property--but they and their numerous imitators have been alike in one crucial way. Each wave of revolutionaries has considered certain principles to be universal and essential. They have observed a vast gap between social reality and their favored principles. They have been willing to seize the power of the state to close this gap. Even non-violent and non-revolutionary social reformers have often shared this orientation.

I see modern conservatism as a critique of such ambitions. Sometimes the critique is directed at the principles embodied in a specific revolution or reform movement. The validity of that critique depends on the principles in question. For example, the Soviet revolution and the New Deal had diametrically opposed ideas about individual liberty. One could consistently oppose one ideology and support the other.

Just as important is the conservative's skepticism about the very effort to bring social reality into harmony with abstract principles (any principles). Conservatives argue: Regardless of their initial motivations, reformers who gain plenipotentiary power inevitably turn corrupt. No central authority has enough information or insight to predict and plan a whole society. The Law of Unintended Consequences always applies. There are many valid principles in the world, and they trade off. The cost of shifting from one social state or path to another generally outweighs the gains. Traditions embody experience and negotiation and usually work better than any plan cooked up quickly by a few leaders.

These are points made variously by Edmund Burke, Joseph de Maistre, James Madison, Lord Acton, Friedrich von Hayek, Isaiah Berlin, Karl Popper, Daniel Patrick Moynihan, and James C. Scott, among others: a highly diverse group that includes writers generally known as "liberals." But I see their skepticism about radical reform as emblematic of conservative thought.

Two different conclusions can follow from their conservative premises. One is that the state is especially problematic. It monopolizes violence and imposes uniform plans on complex societies. Its power reduces individual liberty. Individuals plan better than the state because they know their own interests and situations, and they need only consider their own narrow spheres. They have limited scope for corruption and tyranny. Therefore the aggregate decisions of individuals are better than the centralized rule of a government. This is conservative libertarianism: the law-and-economics "classical liberalism" of Hayek, not the utopian libertarianism of Ayn Rand or Robert Nozick (as different as those authors were).

The alternative conclusion is that local traditions should generally be respected. Reform is sometimes possible, but it should be gradual, generally consensual, and modest. The odds are against any effort to overturn the status quo, imperfect as that may be. This is Burkean traditionalist conservatism. The Republican Party has very little interest in it today, but it motivates crunchy leftists who prize indigenous customs and cultures and oppose "neo-imperialism" (just as Burke opposed literal imperialism).

These two strands of conservative thought often come into conflict, because actually existing societies do not maximize individual liberty or minimize the role of the state (or of state-like actors, such as public schools, religious courts, clans, and bureaucratic corporations). Traditionalists and libertarians disagree forcefully about what to do about illiberal societies.

Take the case of Iraq under Saddam. The so-called neoconservatives (actually libertarians of a peculiar type) claimed that the main problem with Iraq was a tyrannical state, and the best solution was to invade, liberate, and then constrain the successor regime sharply. Private Iraqis should govern their own affairs under a liberal constitution. The Burkean response was that Iraq was a predominantly non-liberal society, deeply religious and patriarchal; therefore, a liberal constitution would be an alien, utopian imposition that would never work.

We can envision a kind of triangular argument among utopian revolutionaries, Burkean traditionalists, and libertarians--with strengths and weaknesses on all sides. But there is a fourth way. That is the deliberately self-limiting utopian social movement. The Gandhian struggle in India, the Civil Rights Movement in the United States, and the anti-Apartheid movement in South Africa shared the following features: (1) regular invocation of utopian principles, portrayed as moral absolutes and as pressing imperatives; (2) deep respect for local cultures, traditions, and faiths; (3) pluralism and coalition politics, rather than a centralized structure; and (4) strict, self-imposed limits.

The South African ANC had a military wing that aimed to capture the state, whereas Gandhi and the Civil Rights Movement were non-violent. But I would describe non-violence as simply an example of a self-limitation designed to prevent corruption and tyranny. It's a good strategy, because violence tends to spin out of control, to the detriment of the reformers themselves. But it isn't intrinsically or inevitably better than other strategies. The ANC managed to use violence but to restrain itself--as did the American revolutionaries of our founding era.

So now we see a four-way debate among utopian reformers, libertarians, traditionalists, and social-movement reformers. Social movements have answers to several of the chief arguments made by the other sides. They can address conservative worries about arrogance, corruption, and tyranny while also seeking to change the world in principled ways. The problem for social movements is institutionalization. Such movements tend to crest and then fall away, unlike the regimes that the other ideologies promote.

permanent link | comments (1) | category: philosophy

January 28, 2009

measuring what matters

(Washington, DC) I am here for a meeting of a federal committee--one of dozens--that helps to decide which statistics to gather from public school students. We are especially focused on socio-economic "background variables" that may influence kids' success in schools. What to measure often boils down to what correlates empirically with test scores or graduation rates. For instance, a combination of parents' income, education, and occupation can explain about 15%-20% of the variance in test scores. And so we measure these variables.
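To be concrete about what it means to "explain" 15%-20% of the variance: that is a claim about the R-squared of a regression of test scores on background variables. Here is a minimal sketch in Python with synthetic data--the coefficients and sample are invented for illustration, not drawn from any actual survey:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000

    # Hypothetical standardized background variables.
    income = rng.standard_normal(n)
    parent_ed = rng.standard_normal(n)
    occupation = rng.standard_normal(n)

    # Invented test scores: a weak dependence on background plus a large
    # unexplained component, tuned so R-squared lands near 0.15-0.20.
    score = (0.25 * income + 0.20 * parent_ed + 0.15 * occupation
             + 0.78 * rng.standard_normal(n))

    # Ordinary least squares with an intercept.
    X = np.column_stack([np.ones(n), income, parent_ed, occupation])
    beta, *_ = np.linalg.lstsq(X, score, rcond=None)
    resid = score - X @ beta
    r_squared = 1 - resid.var() / score.var()
    print(f"share of variance explained: {r_squared:.2f}")  # about 0.17

The remaining 80-odd percent of the variance is, by construction in this sketch and by observation in the real data, left unexplained by family background.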

But the mere fact of a correlation between A and B doesn't mean we should measure both. We could look for correlations between the length of students' noses and the weight of their earlobes. Instead, we look for covariance between parental income and the total number of questions a kid can answer correctly on a test that we write and make him take. Why? Because of moral commitments: beliefs about what inputs, outputs, and causal relationships matter ethically in education.

So it's worth getting back to fundamental principles. These would be mine:

First, the quality of schooling (education that the state provides) should be equal, or should actually be better for less advantaged kids. Quality does not mean effectiveness at raising test scores--it means what is actually good. That may include intrinsically valuable experiences, such as making and appreciating art. But quality probably includes effective practices that raise scores on meaningful, well-designed tests.

Second, it's good when outcomes are equal, but equality trades off against other values, such as freedom for children and parents, and cultural diversity. Also, a narrow focus on equality of outcomes almost inevitably leads to narrow definitions of success and can put excessive pressure on teachers and kids.

Third, individuals' aptitude probably varies (and the degree to which it varies is an empirical question), but every kid who is not performing very well could probably perform better if he or she got a better education. Thus differences in aptitude do not excuse failure to educate.

Fourth, out-of-school resources affect educational outcomes. These resources vary, and that is not fair. We should do something to equalize kids' chances. But resources fall into various categories that raise different moral questions:

1. Fungible resources, such as parents' income or wealth. We can compensate for these inequalities by, for instance, spending more on schools in poor communities. (We tend to do the opposite, but I am writing about principles, not reality.) Note, however, that family income alone explains a small amount of variance in test scores.

2. Attributes of parents that cannot be exchanged or bought, such as their knowledge, skills, abilities, social networks, and cultural capital (ability to function well in privileged settings such as universities and white-collar businesses). It is interesting, for example, that the number of books in a student's home is a consistent predictor of educational success. This is related to income, but it's not the same thing. You may be more educationally advantaged if your parents are poor graduate students with lots of books than if they are rich but vapid aristocrats, especially if your parents devote time to you. The challenge is that parental attributes cannot be changed without badly restricting freedom.

3. Prevalent attitudes, such as racial prejudice/white privilege, that may affect students' self-image; or values relevant to education, such as the belief in Amish communities that a basic education is sufficient. These attitudes vary in how morally acceptable they are. But they have in common the fact that the state cannot change them without becoming highly coercive.

In the end, I think we measure parental resources and their relationship to test scores because we think that (a) it's especially important to compensate for inequalities in cash, and (b) we presume that test scores measure educational success. Both presumptions are debatable, but I believe them enough that I'll keep attending meetings on how to measure them better.

permanent link | comments (0) | category: education policy , philosophy

January 12, 2009

should lying to the public be a crime?

This is an argument from my side of the aisle, so to speak, that really upsets me; Frank Rich made a version of it in his Dec. 13 column.

It is not against the law to lie to the public or to start a war on false pretenses. Because those acts are not illegal, Libby was not charged with them. He was not investigated for lying to the public; no evidence to that effect was ever put before a jury. No one examined him to see whether his assertions were (a) false and (b) knowingly so. He could not defend himself in court against an accusation of deliberately misleading the American people, because no such accusation was made. If, as Frank Rich apparently wishes, Libby was convicted because he lied to the public about a war, that was a flagrant violation of the rule of law, one of whose fundamental principles is nullum crimen et nulla poena sine lege ("no crime and no punishment without a law").

Having gotten that off my chest, I'd like to raise a more theoretical question: Would it make any sense to create a criminal law against lying to the public? The elements of this crime would have to include intent and serious consequences. In other words, it would be a defense to say that you didn't know your information was wrong; and it would be a defense to say that your lie was inconsequential. The law could govern any public utterance, or only certain contexts, such as formal speeches given by high officials. We already have perjury laws that apply to sworn testimony; these would be broadened. Another precedent is the Oregon law that says that candidates' personal statements in state voter guides must be true. Former Congressman Wes Cooley was convicted of falsely claiming that he had served in the Special Forces.

In favor of this reform: Lying is wrong. It can cause serious harm to other people. Lying by public officials can undermine the public's sovereignty by giving citizens false information to use in making judgments. Although it can be challenging to prove intent, that is certainly possible in some circumstances, as we know from perjury trials.

Against: There could be a chilling effect on free speech, because people who participate in heated debates do occasionally stray from the truth. It would be bad to suppress such debates altogether. Also, criminalizing lying would shift power from the legislative and executive branches to the judiciary, which might therefore become even more "political." The reform might reduce the public's sense that we are responsible for scrutinizing our government's statements and actions and punishing bad behavior at the ballot box.

Finally, it would distort the political debate if there were frequent, high-stakes battles over whether individuals had knowingly lied about specific facts. Often a specific prevarication is not nearly as important as someone's bad values and priorities. For instance, the Bush Administration very publicly and openly denigrated the importance of foreigners' human rights and chose an aggressive and bellicose strategy. These were not lies; they were public choices that unfortunately happened to be quite popular.

permanent link | comments (3) | category: philosophy

September 9, 2008

"love" as a family-resemblance word

This is one of several recent posts in which I struggle with definitions of the word "love" as a way of thinking about how we define moral concepts, generally. Here I borrow the idea of “family-resemblance” from the later Wittgenstein. Sometimes, we recognize that people belong to a family, not because they all have one feature in common, but because each individual looks like many of his or her relatives in many ways. Maybe eight out of twelve family members have similar noses; a different six out of the twelve have the same color hair; and yet another seven have the same chin. Then they all resemble each other, although there is no (non-trivial) common denominator. Wittgenstein argued that some--although not all--perfectly useful words are like this. They name sets of objects that resemble one another; but members of each set do not share any defining feature. Their resemblance is a statistical clustering, a greater-than-random tendency to share multiple traits.
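To make the arithmetic concrete, here is a toy model in Python--the trait assignments are my own invention, not Wittgenstein's--in which each of twelve "family members" resembles most of the others although no single feature runs through all twelve:

    # Invented trait assignments: 8 members share a nose, 6 share hair
    # color, 7 share a chin, and nobody is left without any family trait.
    traits = {
        "nose": {0, 1, 2, 3, 4, 5, 6, 7},
        "hair": {6, 7, 8, 9, 10, 11},
        "chin": {2, 3, 4, 8, 9, 10, 11},
    }
    members = range(12)
    features = {m: {t for t, who in traits.items() if m in who}
                for m in members}

    # No common denominator: no trait is shared by all twelve.
    print("shared by all:", set.intersection(*features.values()))  # set()

    # Yet each member shares at least one trait with most relatives.
    for m in members:
        kin = sum(1 for other in members
                  if other != m and features[m] & features[other])
        print(f"member {m:2d} resembles {kin} of 11 relatives")

In this toy family, every member resembles at least seven of the other eleven, while the intersection of everyone's features is empty: a statistical clustering with no defining feature.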

A good example is “curry,” which the dictionary defines as a dish flavored with several ground spices. The word “curry” thus describes innumerable individual cases, where each one resembles many of the rest, but there is no single ingredient or other characteristic that they all share. Nor is there a clear boundary between curry and other dishes. Is bouillabaisse a curry? Clearly not, although the dictionary’s definition applies to it. Indeed, any definition will prove inadequate, yet we can learn to recognize a curry and distinguish it from other kinds of food. If we want to teach someone how to use the word “curry,” we will serve several particular examples and also perhaps some dishes that are not curries. If the student draws the conclusion that a curry must always contain coriander, or must be soupy, or must be served over rice, then we can serve another curry that meets none of these criteria. Gradually, he will learn to use the word. Even sophisticates will debate about borderline cases, but that is the nature of such concepts. Their lack of definition does not make them useless.

It seems to me that “love” is also a family-resemblance word, because there is no common denominator to love for ice cream, love for a newborn baby, love of country, brotherly love for humanity, self-love, tough love, Platonic love, making love, amor fati, philately, etc. Some (but not all) of these forms of “love” involve a high regard for the object. Some (but not all) imply a commitment to care for the object. Some (but not all) signify an intense emotional state. Dictionaries cope by providing numerous definitions of love, thus suggesting that “love” means “lust” or “enthusiasm” or “adoration” or “agape” or “loyalty.” But “love” never quite means the same as any of these other words, because we faintly recognize all of its other meanings whenever it is used in a particular way. For instance, “love” is always different from “lust,” just because the former word can mean loyal adoration as well as sexual desire.

The experience of love is complex because one has usually loved before in several different ways and has seen, heard, or read many descriptions of other loves; and these past examples and descriptions become part of one's present experience. “Love” is a family-resemblance word that brings its family along when it visits.

When we read a literary work that vividly describes an example of love, it changes our experience of the concept. Any philosophical discussion of "love" must be a discussion of the experience; and therefore what we conclude philosophically must depend (in part) on how love has been portrayed for us in the arts. (Cf. Tzachi Zamir, Double Vision: Moral Philosophy and Shakespearean Drama, p. 127).

permanent link | comments (0) | category: philosophy

September 4, 2008

what would Kant say about Peggy Noonan?

Yesterday morning, the speechwriter and columnist Peggy Noonan published a piece in the Wall Street Journal arguing that Sarah Palin was a great choice for vice president: potentially a "transformative political presence." Later the same day, she was recorded saying that Palin was not the best qualified person and was chosen because of "political bullshit about narratives and youthfulness."

What's wrong with this? Perhaps it's evidence of a lie. In the morning, Noonan published a proposition about her own feelings toward Palin. In the afternoon, she asserted a different proposition about her own feelings. If the two claims were contradictory, then she lied unless she changed her mind. But I'm not sure they're flatly contradictory, since the original column was at least somewhat conflicted: Palin, she wrote, "is either going to be brilliant and groundbreaking, or will soon be the target of unattributed quotes by bitter staffers shifting blame in all the Making of the President 2008 books." I think that's compatible with saying that Palin was chosen for a foolish reason. Noonan could be hopeful about Palin, yet suspicious of the reasons she was chosen. In short, the case for a lie seems weak to me.

Instead of treating Noonan's private remarks as evidence of mendacity, we could accuse her of violating Kant's principle of publicity: "All actions relating to the right of other human beings are wrong if their maxim is incompatible with publicity." The idea is that one can test the rightness of an action by asking whether the actor's private reason for so acting could be made public. If you cannot disclose the reason you have done P, you should not do P. Peggy Noonan's private remarks suggest that she thought Palin was probably a bad choice. But she could not say that in the Wall Street Journal without hurting the Republican ticket and costing herself powerful friends. So she shouldn't have written her Wall Street Journal column, according to at least one interpretation of Kant.

The publicity principle can seem over-demanding. Does it mean that one cannot mutter something to one's spouse unless one would also announce it in an office meeting? The glare of publicity can expunge the safe shadows of a private or personal life. That thought gives me a little sympathy for public figures like Peggy Noonan who are caught on tape being frank with friends. (Jesse Jackson and many others have done the same.) But Kant offered his publicity principle in a book about politics (Perpetual Peace), and he qualified it by limiting it to "actions relating to the right of other human beings." In other words, it applies to willing participants in the world of power, law, and politics--not to private individuals. By writing a column in the Wall Street Journal, Noonan committed herself to a public role. The implied promise to her readers was that she was acting transparently and sincerely in that public arena. If her private remarks show otherwise, then she violated Kant's publicity principle.

permanent link | comments (4) | category: philosophy

August 24, 2008

the moral evaluation of literary characters

I'm on p. 521 of Dickens' Bleak House--hardly past half-way--but so far Mrs Jellyby is proving to be a bad person. Like many of my friends (like me, in fact) she spends most of her days reading and writing messages regarding what she calls a "public project"--in her case, the settlement of poor British families on the left bank of the River Niger at the ridiculously named location of Borrioboola-Gha. Meanwhile, her own small children are filthy, her clothes are disgraceful, her household is bankrupt, her neglected husband is (as we would say) clinically depressed, and she is casually cruel to her adolescent daughter Caddy. Caddy finds a man who pays some attention to her, but Mrs Jellyby is completely uninterested in the wedding and marriage.

Mrs Jellyby's friends dominate the wedding breakfast and are "all devoted to public projects only." They have no interest in Caddy or even in one another's social schemes; each is entirely self-centered.

Within the imaginary world of Bleak House, Mrs Jellyby is bad, and her moral flaws should provoke some reflection in the rest of us--especially those of us who spend too much time sending emails about distant projects. The evident alternative is Esther Summerson, a model housekeeper who cares lovingly for her friends and relatives and refuses to interfere with distant strangers' lives on the ground "that I was inexperienced in the art of adapting my mind to minds very differently situated ...; that I had much to learn, myself, before I could teach others ..."

Fair enough, but we could also ask why Dickens decided to depict Mrs Jellyby instead of a different kind of person, for instance, a man who was so consumed with social reform that he neglected his spouse, a woman who successfully balanced public and private responsibilities, or a woman, like Dorothea Brooke, who yearned for a public role but instead devoted her life to the private service of men. Both the intention and the likely consequence of Dickens' portrait are to suppress the public role of women.

The general point I'd like to propose is this: the moral assessment of literary characters (lately returned to respectability by theorists like Amanda Anderson) requires two stages of analysis. First one decides whether a character is good or bad--or partly both--within the world of a fiction. And then one asks whether the author was right to choose to create that character instead of others.

permanent link | comments (0) | category: none

August 18, 2008

broadening philosophy

Moral philosophy (or ethics) forms a diverse and eclectic field, about which few accurate generalizations can be made.* However, I think I detect a very widespread preference for concepts whose significance is always the same--either positive or negative--wherever they appear. In defining moral concepts, philosophers like to identify necessary and sufficient conditions, such that if something can be done, it will always be obligatory, praiseworthy, desirable, permissible, optional, regrettable, shameful, or forbidden to do it. These moral propositions may have to be considered along with other valid propositions that also apply in the same circumstances. For instance, honesty may be obligatory (or at least praiseworthy); yet tact is also desirable. Honesty and tact can conflict. Hardly anyone doubts that we face genuine moral conflicts and dilemmas. Yet the hope is to develop general moral propositions, built of clearly defined concepts, that are always valid, at least all else considered.

But what should we say about complex and ambiguous phenomena that have evolved over biological and historical time and that now shape our lives? I am thinking of concepts like love (recently discussed here), marriage, painting, the novel, lawyers, or voting. We can't use these words in a deontic logic made up of propositions like "P is necessary." They are sometimes good and sometimes not. We could try to divide them into subconcepts. For instance, love could be divided into agape, lust, and several other subspecies; painting can be categorized as representational, abstract, religious, etc. Once we have appropriate subconcepts, we can say that they have a particular moral status if (and only if) specified conditions apply.

The urge is to avoid weak modal verbs like "may" and "can" or other qualifiers like "sometimes" and "often." Love can be wonderful; it can also be a moral snare. Paintings sometimes invoke the sublime; sometimes they don't. Lawyers have legitimate and helpful roles in some cases and controversies, but not in others. A core philosophical instinct is to get rid of these qualifiers by using tighter definitions. For example, agape (properly defined) might turn out to be always good and never a snare. You always need and have a right to a lawyer when you are arraigned. All paintings by Giorgione or similar to Giorgione's are sublime. And so on.

My fear is that the pressure to avoid soft generalizations prevents us from saying anything useful about a wide range of social institutions, norms, and psychological states. They don't split up neatly into subcategories, because they didn't evolve or develop so neatly. They won't work in a deontic logic unless we allow ourselves soft modals like "may" and "can." And yet, outside of philosophy, much of the humanities involves moral evaluations of just such concepts. For example, a great nineteenth-century novel about marriage does not claim that marriage is always good or bad, or always good or bad under specified conditions. The novel evaluates one or two particular marriages and supports qualified conclusions: marriage (in general) can be a happy estate, but it also has dangers. It is wise, when contemplating a marriage, to consider how events may play out for both partners. "Marriage," of course, means marriage of a specific, culturally-defined type (monogamous, exogamous, heterosexual, voluntary, permanent, patriarchal, and so on). That institution will evolve subtly and may be altered suddenly by changes in laws and norms. The degree to which the implied advice of the novel generalizes is a subtle question which the novel itself may not address.

Much contemporary philosophy has a forensic feel. The goal is to work out definitions and rules that, like good laws, permit the permissible and forbid the evil. I do not doubt the value of forensic thinking--in law. I do doubt that it is adequate for moral thinking. It seems to me that the search for clearly defined and consistent concepts narrows philosophers' attention to discrete controversial actions (abortion, torture, killing one to save another) and discourages their consideration of complex social institutions. It also directs their energy to metaethics, where one can consider questions about moral propositions, rather than "applied" topics, which seem too messy and contingent.

*I am struggling a bit to test my claims about what is central and peripheral, given the enormous quantity of articles and books published every year. If you use the Philosopher's Index (a fairly comprehensive database) to search for words that have been chosen as "descriptors" for books and articles, you will find 2,131 entries on utilitarianism, 445 on Kantianism, and 541 on metaethics; but also 2,121 on love and 351 on marriage. Given what is typically taught in philosophy departments, I was surprised to find a moral topic (love) almost matching a philosophical approach (utilitarianism). Closer inspection reveals much diversity. There are articles in the Index on classical Indian philosophical writing, and articles on Victorian novels that seem more like literary criticism than philosophy. (The Index encompasses some interdisciplinary journals in the humanities.) There is much contemporary Catholic moral theory that seems to be in conversation mainly with itself. I will stick to my claims about what is most influential, highly valued, and canonical in the profession today, although I acknowledge that people with jobs as philosophers have written about practically everything and in practically all imaginable styles.

permanent link | comments (0) | category: philosophy

July 14, 2008

worrying about "love"

What is the meaning of a principle like "causing needless pain is bad" or "lying is wrong"? These principles are not always right--think about the pain of an athletic event or lying to the Gestapo. Various explanations have been proposed for the relationship between such principles and their exceptions. Maybe lying is wrong if certain conditions are met, and those conditions are common. Or maybe lying is really the union of two concepts--"mendacium" (culpable lies) and "falsiloquium" (blameless false speech), to use the medieval terms. Or maybe lying and pain-causing are always bad "pro tanto"--as far as that goes. They are always bad but their badness can be outweighed.

Mark Norris Lance and Maggie Little have another theory: "defeasible generalization."* The following are defeasible generalizations taken from science: Fish eggs turn into fish. A struck match lights. These assertions are certainly not always true. In fact, very few fish eggs actually turn into fish, and I rarely get a match going on the first try. Nevertheless, a fish egg turns into a fish unless something intervenes. Even though the probability of its reaching the fish stage is low, to do so is its nature. The privileged cases are the ones in which the egg turns into a fish and the struck match catches fire. All the other outcomes, even if they are more common, are deviant. To understand that something will normally or naturally turn into a fish is to realize that it is a fish egg.

Lance and Little make a close analogy to moral issues: "Many key moral concepts--indeed, the workhorses of moral theory--are the subjects of defeasible moral generalizations. ... Take the example of pain. We believe it is important to any adequate morality to recognize that defeasibly, pain is bad-making." In other words, it is correct that causing pain is bad, even though there are exceptions that may turn out to be common. "To understand pain's nature, then, is to understand not just that it is sometimes not-bad, but to understand that there is an explanatory asymmetry between cases in which it is bad and cases in which it is not: it is only because pain is paradigmatically bad-making that athletic challenges come to have the meaning they do, and hence provide a kind of rich backdrop against which instances of pain can emerge as not-bad-making, as not always and everywhere to-be-avoided." Moral discernment is grasping the difference between paradigm cases and aberrant ones. We learn this skill, but it is not just a matter of applying rules. It may not be codifiable.
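The structure is easy to render as a toy model (my own sketch in Python, not Lance and Little's formalism; the defeater list is invented). The default valence stands unless an exception is positively explained by a recognized defeater:

    # Pain counts as bad-making by default; an exception must be
    # explained by a recognized defeater, not merely observed to occur.
    DEFEATERS = {"athletic challenge", "consented medical treatment"}

    def pain_valence(context):
        explained = DEFEATERS & context
        if explained:
            return "not bad-making (defeated by: " + ", ".join(sorted(explained)) + ")"
        return "bad-making (the paradigm case)"

    print(pain_valence({"stubbed toe"}))         # bad-making (the paradigm case)
    print(pain_valence({"athletic challenge"}))  # not bad-making (...)

The asymmetry is visible even in so crude a sketch: the default does the explanatory work, and the exceptions are parasitic on it.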

This seems plausible to me. But I do not think that every moral issue works this way. Take the absolutely crucial concept of love. We might say, as a defeasible generalization, that love is good. We know that in some cases love is bad. Adultery, obsessive love, and lust are common examples (although each of these bad categories admits counter-examples that happen to be good). But maybe it is true to say that love is good just in the same way that it is true to say that fish eggs turn into fish. This principle (arguably) reveals an understanding of the concept of love even though many cases are exceptional.

Here is my worry. I do believe, as a statistical generalization, that most cases of love are good. However, I also believe that we have a tendency to overlook the bad side of love, especially if we are the subject or object of it. We have biases in favor of love that presumably arise from our biological desires for sex and companionship and from the legacy of a million stories, poems, paintings, movies, and songs in which the protagonists fall in love and are admired for it. So the principle that love is good, if treated as a defeasible generalization, a default position, or a rebuttable presumption, is likely to mislead.

And we have an alternative. That is to say that love is nearly always morally significant. It is rarely neutral. Yet you cannot know, without looking at the whole situation, whether love is a good or a bad thing. Given the important possibility that love may be bad, or that a good love may have some element or danger of bad love (or vice-versa), it is not right to make any presumption about its moral "valence" until you hear the whole story.

This is exactly the position that Jonathan Dancy calls "particularism" (and Anthony W. Price has called "variabilism"). Dancy says at times that it applies to every reason, principle, or value--none has a good or bad "valence" that we can know in advance. Whether anything is good depends on the context. I would argue that particularism or variabilism applies to love--but not to lying or causing pain. Still, this is only a minor setback for particularism, because love is a hugely important issue and is unlikely to be the only one that behaves this way. In fact, I suspect that most of Aristotle's list of virtues (courage, temperance, liberality, friendliness, patience, etc.) are like love. We can make the defeasible generalization that they are morally significant. That shows that we understand these concepts. But to say that they are good means jumping to conclusions, even if we insist that there are exceptions.

Incidentally, there are various alternatives to particularism about love that I have not addressed here. Most alternatives would involve categorizing types of love or explaining the general conditions under which love is good or bad. I think these are, at best, heuristics. Love is relatively unlikely to be good if Emma loves Rodolphe while Emma is married to Charles, for example. But there are plenty of real and fictional stories in which adulterous love is a good thing. The differences between good and bad love are unlikely to be codifiable, and the effort to divide "love" into its good and bad forms misses a basic fact about it. Love just is something that can be great, or can be awful, or can be both; and you have to be careful about it.

* See Mark Norris Lance and Maggie Little, “From Particularism to Defeasibility in Ethics," in Mark Norris Lance, Matjaž Potrč, and Vojko Strahovnik, eds., Challenging Moral Particularism (New York: Routledge, 2008), pp. 53-74. This chapter is very similar, but not identical, to Mark Norris Lance and Margaret Olivia Little, "Defending Moral Particularism," in James Dreier, ed., Contemporary Debates in Moral Theory (Oxford: Blackwell, 2006), pp. 305-321.

permanent link | comments (0) | category: philosophy

July 2, 2008

good lives

Friends returned recently from Alaska, where they had encountered people who prefer to live alone and "off the grid," with as little interaction with the United States as possible. I don't think this is a great form of life. I admire people who provide more service to humanity. Also, I'm not impressed by a way of life that must be denied to most other human beings (for we simply don't have enough space on the planet to allot each family many acres). It's possible that some day we'll all benefit from Alaskan survivalists--we may need their special knowledge. But that would make the case easy. Let's keep it hard by presuming that they will never do any practical good for anyone other than themselves.

This example is an opportunity to try to make sense of three premises:

1. Some ways of life are better than others.
2. It takes many types of lives (each with its own prime virtue) to make a livable world; and
3. It's a better world if it contains many different types of character and virtue, rather than a few.

I take 1 as pretty obvious. If you don't agree with me that Alaskan survivalists lead less meritorious lives than hospice workers, you must at least concede that hospice workers are better people than Storm Troopers. It might sound pretentious to assert that some lives are lived better than others. But the alternative is to deny that it makes any difference how we live, and that makes life a joke.

I think 2 is also pretty obvious. If we didn't have people who were committed to practical organizing work and productive labor, we'd starve. If there were no one who was concerned about security (and willing at least to threaten legitimate force on behalf of the community), we'd be in grave danger. Were it not for curious scientists, we would live shorter lives. But what follows from these examples? Not that several different kinds of lives are equally meritorious. Aristotle knew that it took many types of people, including manual laborers and soldiers, to sustain the polis. He nevertheless believed that the life of dispassionate inquiry was the single best life. He could hold these two positions together because he was no moral egalitarian. For him, it did not follow that if we need laborers and soldiers as well as philosophers, therefore all three are equally valuable. Moral egalitarianism is not self-evident or universal, although I certainly endorse it.

One can combine 1 and 2 by saying that there is a list of valuable ways of life, which includes all the necessary roles (e.g., producers, protectors, healers) plus some that have less practical advantages: for example, artists and abstract thinkers. This is a limited kind of pluralism. It supports moral distinctions but admits more than one type of goodness.

I'm inclined to go further and say that the world is better if it includes forms of life that are neither essential nor intrinsically meritorious. Our environment is simply more interesting if it contains Alaskan survivalists as well as productive farmers and cancer researchers. Thus I would propose that an individual who goes off the grid is probably not leading the best possible life for him; yet it is better that some people do this than that none do.

permanent link | comments (3) | category: philosophy

June 16, 2008

the ethics of liking a fictional character

(Waltham, Mass.) I have mentioned before that Middlemarch is my favorite book. Specifically, I am fond of Dorothea Brooke, its heroine. I like her; I want her to succeed and be happy. Allowing for the fact that she is a fictional character, I care about her.

Such feelings represent moral choices. Caring about someone is less important when that person happens to be fictional, but novels are at least good tests of judgment. Thus I am interested in whether I am right to care about the elder Miss Brooke. It seems to me that George Eliot was also especially fond of her heroine, and one could ask whether that was an ethical stance. Or, to put the question differently, was Eliot right to pull together a set of traits into one fictional person and describe that person in such a way as to make us like her?

The traits that seem especially problematic are Dorothea's beauty, her high birth, and her youth. She is a young woman from the very highest social stratum in the hierarchical community of Middlemarch, surpassed by no one in rank. She is consistently described as beautiful, not only by other characters, but also by the narrator. In fact, these are the very first lines of Chapter One:

Miss Brooke had that kind of beauty which seems to be thrown into relief by poor dress. Her hand and wrist were so finely formed that she could wear sleeves not less bare of style than those in which the Blessed Virgin appeared to Italian painters; and her profile as well as her stature and bearing seemed to gain the more dignity from her plain garments, which by the side of provincial fashion gave her the impressiveness of a fine quotation from the Bible,--or from one of our elder poets,--in a paragraph of to-day’s newspaper. She was usually spoken of as being remarkably clever, but with the addition that her sister Celia had more common-sense.

This introduction contains no physical detail, in contrast to the portrayals of other characters in the same novel, such as Rosamond and Ladislaw. The simple fact of Dorothea's beauty is not complicated by the mention of any particular form of beauty that a reader might happen not to like.

We have a tendency, I think, to want beautiful and high-born but lonely young ladies to live happily ever after. When we were young, we heard a lot of stories about princesses. We expect a princess to become happy by uniting with a young and attractive man; and whether that will happen to Dorothea is a suspenseful question in Middlemarch.

If we are prone to admire and like Dorothea because she is beautiful, Eliot complicates matters in three ways. First, she produces a second beautiful young woman in need of a husband, but this one is bad and thoroughly unlikable. (At least, it is very challenging to see things from Rosamond's perspective, as perhaps we should try to do.) Second, in Mary Garth, Eliot creates a deeply appealing young female character who, we are told, is simply plain. Third, Eliot makes Dorothea not only beautiful, but also "clever" and good.

Evidently, beauty does not guarantee goodness, nor vice-versa; yet several people in Middlemarch think that Dorothea's appearance and quality of voice manifest or reflect her inner character. This seems to be a kind of pathetic fallacy: people attribute virtues to her face, body, and voice as poets sometimes do to flowers or stars. But of course the characters who admire Dorothea's appearance as a manifestation of her soul may be right, within the world that Eliot has created in Middlemarch. Or perhaps character and appearance really are linked. Rosamond, for instance, could not be the same kind of person if she were less pretty.

I presume that it is right to like someone for being good, but it is not right to like someone because she is beautiful. One could raise questions about this general principle. Is someone's goodness really within his or her control? Perhaps we should pity (and care about) people like Rosamond who are not very virtuous. On the other hand, if we can admire beauty in nature and art, why not in human beings? And what about cleverness, which is not a moral quality but is certainly admired?

One interpretation of the novel is that Dorothea does not have a moral right to her inheritance or to her social status. These are arbitrary matters of good fortune, and she is wise to be critical of them. She does, however, according to the novel, deserve a happy marriage to a handsome man because she is both good and beautiful (and also passionate). The end of the novel feels happy to the extent that she gets the marriage she deserves. Does this make any sense as a moral doctrine? Is it an acceptable moral doctrine within a fictional world, but inapplicable to the real world?

Beautiful people tend to find other beautiful people, just as the rich tend to marry the rich and (nowadays) the clever marry the clever. Lucky people have assets in the market for partners. But is this something we should want to see? What if the plain but nice Mary Garth ended up with a broodingly handsome romantic outsider, and Dorothea married a nice young man from the neighborhood? Would that ending be wrong because beauty deserves beauty, or would it only be an aesthetic mistake (or a market failure)?

permanent link | comments (0) | category: none

June 5, 2008

teach philosophy of science in high school

I think controversies about whether to allow the teaching of "intelligent design" and whether teachers should present global warming as a fact are more complicated than is presumed by most scientific and liberal opinion. To announce that evolution is "science," while intelligent design is "religion," begs a lot of questions about what science is and how it should operate. To say that global warming is a "fact" implies a view about facts and what justifies them. Serious people hold relativist views, arguing that what we call science is a phenomenon of a particular culture. Others favor what used to be called "the strong programme in the sociology of science." That is the view that science is a social institution with its own power structure, and one can understand current scientific opinions by understanding the power behind them. I don't hold that view myself, but it's interesting that it originated on the left, and yet many people who hold it today are religious fundamentalists. And you can understand (without necessarily endorsing) their perspective when you consider that people who are anointed as "scientists" by older scientists get to control public funds, institutions, degrees, jobs, curricula, and policies in areas like health and the environment. These scientists are mostly very secular and declare that only secular beliefs qualify as science. There is a prima facie case here for skepticism, and it deserves a reasoned response.

Even among people who are strongly supportive of science (which includes most contemporary philosophers in the English-speaking world), there are live controversies about what constitutes scientific knowledge, whether and how a theory differs from other falsifiable assertions, how and why scientific theories change, how theories relate to data, etc. To tell students that evolution is a theory and that creationism isn't is dogmatism. It glosses over the debate about what a theory is.

There are also important questions that cross over from philosophy of science to political philosophy. Does a teacher have an individual right to teach creationism if he believes in it? Does he have an individual right to promote Darwinism even if local authorities don't want it taught? Should the Institute for Creation Research in Texas be allowed to issue graduate degrees? Does it have a right of association or expression that should permit this, or does the state have the right--or obligation--to license certain doctrines as scientific? Why?

I am one of the last people (I hope) to pile more tasks on our schools. In fact, I published an article arguing that we shouldn't ask schools to teach information literacy, even though it is important, because they simply have too much else to accomplish. (Instead, I argued, we need to make online information and search functions as reliable as possible). Yet I think philosophy of science is a real candidate for inclusion in the high school curriculum--or at least we ought to experiment to see if it can be taught well. I'd stake my case on two principles:

1. Making critical judgments about science as an institution is an essential task for citizens in a science-dominated society; and
2. Students are being required to study science (as defined by scientists), and taxpayers are being required to fund it. Fundamental liberal principles require that such requirements be openly debated.

permanent link | comments (1) | category: none

May 20, 2008

why join a cause?

I have been involved in a lot of causes--mostly rather modest or marginal affairs, but ones that have mattered to me: public journalism, campaign finance reform, deliberative democracy, civilian national service, civic education, media reform, and service-learning, among others. The standard way to evaluate such causes and decide whether to join the movements that support them is to ask about their goals and their prospects of success. To be fully rational, one compares the costs and benefits of each movement's objectives with those of other movements, adjusting for the probability and difficulty of success. A rationally altruistic person joins the movement that has the best chance of achieving the most public good, based on its "cause" and its strategies.

To use an overly-technical term, this is a "teleological" way of thinking. We evaluate each movement's telos, or fundamental and permanent purpose. Friedrich Nietzsche was a great critic of teleological thought. He saw it everywhere. In a monotheistic universe, everything seems to exist for a purpose that lies in its future but was already understood in the past. Nietzsche wished to raise deep doubts about such thinking:

the cause of the origin of a thing and its eventual utility, its actual employment and place in a system of purposes, lie worlds apart; whatever exists, having somehow come into being, is again and again reinterpreted to new ends, taken over, transformed, and redirected by some power superior to it; all events in the organic world are a subduing, a becoming master, and all subduing and becoming master involves a fresh interpretation, an adaptation through which any previous "meaning" and "purpose" are necessarily obscured or even obliterated. However well one has understood the utility of any physiological organ (or of a legal institution, a social custom, a political usage, a form in art or in a religious cult), this means nothing regarding its origin ... [On the Genealogy of Morals, Walter Kaufmann's translation.]

I think that Nietzsche exaggerated. In his zeal to say that purposes do not explain everything, he claimed that they explain nothing. In the human or social world, some things do come into being for explicit purposes and then continue to serve those very purposes for the rest of their histories. But to achieve that kind of fidelity to an original conception takes discipline, in all its forms: rules, accountability measures, procedures for expelling deviant members, frequent exhortations to recall the founding mission. The kinds of movements that attract me have no such discipline. Thus they wander from their founding "causes"--naturally and inevitably.

As a result, when I consider whether to participate, I am less interested in what distinctive promise or argument the movement makes. I am more interested in what potential it has, based on the people whom it has attracted, the way they work together, and their place in the broader society. I would not say, for example, that service-learning is a better cause or objective than other educational ideas, such as deliberation, or media-creation, or studying literature. I would say that the people who gather under the banner of "service-learning" are a good group--idealistic, committed, cohesive, but also diverse. Loyalty to such a movement seems to me a reasonable basis for continuing to participate.

permanent link | comments (0) | category: philosophy

April 28, 2008

three different ways of thinking about the value of nature

These are three conflicting or rival positions:

1. People value nature, and the best measure of how much they value it is how much they would be willing to pay for it. Actual market prices may not reflect real value because of various flaws in existing markets. For example, if you find an old forest that no one owns, chop it down, and burn the wood for fuel, all that activity counts as profit. You don't have to deduct the loss of an asset or the damage to the atmosphere. However, it would be possible to alter the actual price of forest wood by changing laws and accounting rules. Or at least we could accurately estimate what its price should be. The real value of nature is how much human beings would be willing to pay for it once we account for market failures.

2. Nature has value regardless of whether people are willing to pay for it. Perhaps nature's value arises because God made it, called it "good," and assigned it to us as His custodians. Or perhaps nature has value for reasons that are not theistic but do sound religious. Emerson:

The stars awaken a certain reverence, because though always present, they are inaccessible; but all natural objects make a kindred impression, when the mind is open to their influence. Nature never wears a mean appearance. ... The greatest delight which the fields and woods minister, is the suggestion of an occult relation between man and the vegetable. I am not alone and unacknowledged. They nod to me, and I to them.

Emerson's view is sharply different from #1 because he believes that his fellow men do not value nature as they should. "To speak truly, few adult persons can see nature. Most persons do not see the sun. At least they have a very superficial seeing. ..." Thus prices do not reflect nature's value.

If you're an economist or a scientist, you may not personally feel that God is present in nature or that nature is ineffably precious. Regardless, you can respect your fellow citizens who hold those feelings. One version of scientific positivism says that there are (a) testable facts about nature and (b) opinions about nature as a whole. The latter are respectable but not provable. They are manifestations of faith, neither vindicated nor invalidated by science. This sounds like the early Wittgenstein.

3. Nature has value irrespective of price: real value that may or may not be recognized by people at any given moment. But this value does not derive from a metaphysical premise about nature as a whole, e.g., that God made the world. We can make value judgments about particular parts of nature, not all of which have equal value. We can change other people's evaluations of nature by providing valid reasons.

Yosemite is more precious than your average valley. How do we substantiate such a claim? Not by citing a foundational, metaphysical belief, but by describing Yosemite itself. Careful, appreciative descriptions and explanations of natural objects are valid arguments for their value, just as excellent interpretations of Shakespeare's plays are valid arguments for the excellence of those works.

This view rejects a sharp distinction between facts and values. "Thick descriptions" are inextricably descriptive and evaluative. This view also rejects the metaphor of foundations, according to which a value-judgment must rest on some deeper and broader foundation of belief. Why should an argument about value be like the floor of a building, which is no good unless it sits on something else? It may be sufficient on its own. (This all sounds like the later Wittgenstein.)

This third position contrasts with Emerson's. He says:

Nature never wears a mean appearance. Neither does the wisest man extort her secret, and lose his curiosity by finding out all her perfection. Nature never became a toy to a wise spirit. The flowers, the animals, the mountains, reflected the wisdom of his best hour, as much as they had delighted the simplicity of his childhood.

This third view says, pace Emerson, that nature varies in quality. Tigers are more magnificent than roaches. A good way to make such distinctions is indeed to "extort [the] secrets" of nature. When we understand an organism better--including its functioning, its origins, and its place in the larger environment--we often appreciate it more, and rightly so. The degree to which our understanding increases our appreciation depends on the actual quality of the particular object under study.

permanent link | comments (1) | category: philosophy

April 22, 2008

against legalizing prostitution

The Eliot Spitzer fiasco generated some blog posts (which I neglected to bookmark) arguing that prostitution should be legal. The bloggers I read acknowledged that Governor Spitzer should be liable for breaking the law, but they argued that the law was wrong. Their premise was libertarian: private voluntary behavior should not be banned by the state. One can rebut that position without rejecting its libertarian premise, by noting that many or most prostitutes are actually coerced. In the real world, incest, rape, violence, and human trafficking seem to be inextricably linked to prostitution. But that fact will only convince libertarians if the link really is "inextricable." If some prostitution is voluntary, then it should be legal, according to libertarian reasoning.

Which I reject. Libertarians are right to prize human freedom and to protect a private realm against the state; but issues like prostitution show the limits of libertarian reasoning. We are deeply affected by the prevailing and official answers to these questions: What is appropriate sexual behavior? What can (and cannot) be bought and sold? Our own private, voluntary behavior takes on very different meanings and significance depending on how these questions are answered. Answers vary dramatically among cultures and over time. Deciding how to answer them is a core purpose of democracy.

This position can make liberals uncomfortable because of its implications for other issues, such as gay marriage. One of the leading arguments in favor is that adults should be allowed to do what they like, and the fact that two men or two women decide to marry doesn't affect heterosexuals. Actually, I think gay marriage does affect heterosexual marriage by subtly altering its social definition and purpose. I happen to think that the change is positive. It underlines the principle that marriage is a voluntary, permanent commitment (which is clearly appropriate for gays as well as for straight people). Other moral principles also favor gay marriage, including equal respect and, indeed, personal freedom. But for me, personal freedom does not trump all other considerations.

By the way, because prostitution seems to be so closely linked to incest, rape, and violent coercion, I think the best policy would be very strict penalties against soliciting. It is buying, rather than selling, sex that seems most morally odious.

permanent link | comments (1) | category: philosophy

April 4, 2008

philosophy of the middleground

1. Should the government require national service?

That's a question that modern political philosophers are primed and ready to address. It concerns the proper power of the state and the responsibilities of its citizens. Libertarians, communitarians, civic republicans, and others have fundamental principles that they can easily apply to this question. I call it a "background" issue because it deals with the fundamental rights and duties that define a whole society. It's like a question about whether everyone has a right to health care or free speech, or whether the government may levy taxes. These "background" issues are central to modern political theory.

2. Should I enlist in the military or join a civilian service program such as CityYear?

This is also a topic that political philosophers are equipped to address. It raises fundamental ethical questions about the use of force, membership in hierarchical organizations, duties to the community, and the shape of a good life. Pacifists, communitarians, various kinds of virtue-ethicists, pluralists, and others have fundamental principles that apply pretty directly to this question. I call it a "foreground" issue because it deals with a matter very close to the individual--a personal choice. It is like questions about whether to marry, have an abortion, or join a church. Such foreground issues are central to modern ethics.

3. What would a good service program be like and how could we make such a program come into being?

This is the kind of question that modern philosophers are not very good at addressing. One cannot easily answer it by applying the fundamental intuitions that drive mainstream theories of ethics and political theory. There isn't necessarily a libertarian or communitarian answer.

As a result, the question tends to be addressed in thoroughly empirical, administrative, or tactical ways. The empirical issue is what consequences result from various types of service programs. The administrative issue is what rules or processes increase the probability that the program will be run well. And the tactical issue is how one can build and sustain political support for the program.

All these questions have crucial moral dimensions. It's not enough to know whether a given program causes a particular outcome (such as higher incomes, or more civic duty). We must also decide whether those outcomes are good, whether they are distributed fairly, whether any harms to others are worthwhile, and what means of producing those outcomes are acceptable. Further, it's not enough to understand how to run or structure a good program. We must also decide what forms of governance or administration are ethical. (Mussolini made the trains run on time, but that was not an adequate defense of fascism.) Finally, it's not enough to know that a given argument or "message" would produce political support for a program. We must also decide which forms of argument are ethically acceptable.

Thus it's a shame that philosophers tend to cede the "middleground" to social scientists, administrators, and tacticians. As a result, no one raises the serious, complex moral issues that arise when one thinks about political tactics, the design of programs, and their administration. This is not only bad for policy and public discourse; it is also bad for philosophy. Theories are impoverished when they miss the middleground. For example, it would be a decisive argument against requiring national service if it were impossible to build and sustain a good service program. So any argument for national service that depends entirely on first principles is a lousy argument. It needs its middleground.

Some areas of philosophy have developed a middleground and thereby not only served public purposes but also enriched the discipline. Medical ethics is the best example. It is no longer restricted to matters of individual ethics (e.g., should a physician perform an abortion?) or matters of basic structure (e.g., is there a right to life?); it extends to matters of administration, politics, and program design. Medical ethicists work in hospitals, advise commissions, and review policies. Harry Brighouse has argued that the philosophy of education should follow the same model. I would generalize and say that across the whole range of policy and social questions, it is worth asking moral questions not only about basic rights and individual behavior, but also about institutional arrangements and political tactics.

permanent link | comments (0) | category: philosophy

March 27, 2008

happiness over the course of life

Imagine two people who experience exactly the same amounts of happiness over the course of their whole lives. A experiences most of his happy times near the beginning, whereas B starts off miserable but ends in happiness.* We are inclined to think that B is more fortunate, or better off, than A. If the story of A's life were written down, it would be tragic, whereas B's tale has a happy ending. But does B really have more welfare?

One view says no. The happiness of a life is just the happiness of all the times added up. Maybe we feel happier when we are on an upward trajectory, but that extra satisfaction should be factored into an accurate estimate of our happiness. If A and B really have identical total quantities of happiness over the courses of their lives, they are equally well off. Any aesthetic satisfaction that we obtain from the happy ending of B's life is no reason to declare him better off.
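For concreteness, a toy comparison (the numbers are invented): two lives with identical totals, one front-loaded and one back-loaded. On this first view, the comparison begins and ends with the sums.

    # Invented happiness scores for successive stages of two lives.
    life_A = [9, 8, 7, 5, 3, 2, 1]   # happy start, miserable end
    life_B = [1, 2, 3, 5, 7, 8, 9]   # miserable start, happy end

    # The additive view: welfare is just the total, so A and B are equal.
    assert sum(life_A) == sum(life_B) == 35

    # A trajectory-sensitive rival would also credit the slope; the additive
    # view replies that any satisfaction we take in an upward trajectory
    # should already be counted in the raw scores.
    def net_ascent(life):
        return sum(b - a for a, b in zip(life, life[1:]))

    print(net_ascent(life_A))  # -8
    print(net_ascent(life_B))  #  8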

Another view says that happiness is equally valuable at any time, but we wish devoutly that our own happiest times are still to come. That wish colors our estimation of other people's lives; but perhaps it shouldn't. Just because I want the end of my life to be (even) better than the beginning, it doesn't follow that B was better off than A. Once the ledgers are closed at death, it no longer matters how the happiness was distributed.

A third view says: even if the amount of happiness is the same at two times of life, somehow the quality of happiness is better if it comes later, because then it's more likely to be the outcome or satisfaction of one's plans and one's work. That is sometimes true, but it's not necessarily the case. One can be happy late in life because of sudden dumb luck. One can have early happiness as the well-deserved accomplishment of youthful efforts.

I incline to a fourth view. Happiness is not more valuable if it happens to come later. But a morally worthwhile life is one that develops, and one should take satisfaction in one's own development. Thus we think of the old person who has learned, grown, and become better--and who is satisfied with that achievement--as a moral paradigm. He or she happens to be happy, but what matters is that the happiness is justified. The child who is naively happy makes us glad but does not inspire our admiration. Thus our intuition that happiness is better late in life does not mean that it has a greater impact on welfare. Our intuition is a somewhat confused reflection of our admiration for a particular kind of mature satisfaction.

*This topic was raised by Connie Rosati in a fine paper she delivered at Maryland this week. These views are my own and I'm deliberately not summarizing her interesting thesis because I didn't seek permission.

permanent link | comments (0) | category: philosophy

March 25, 2008

the "general turn to ethics" in literary criticism

I need to revise my book manuscript about Dante, which is under consideration by a publishing house. In the book, I argue that interpreting literature has moral or ethical value. Literary critics, I claim, almost always take implicit positions about goodness or justice. They should make those positions explicit because explicit argumentation contributes more usefully to the public debate. Also, the need to state one's positions openly is a valuable discipline. (Some positions look untenable once they are boldly stated.)

I had taken the stance that contemporary literary theorists and academic critics were generally hostile to explicit ethical argument. My book was therefore very polemical and critical of the discipline. But I was out of date. In her brilliant and influential book The Way We Argue Now: A Study in the Cultures of Theory (Princeton, 2006), Amanda Anderson announces: "We must keep in mind that the question, How should I live? is the most basic one" (p. 112).

This bold premise associates her with what she rightly calls the "general turn to ethics" that's visible in her profession today (p. 6). This turn marks a departure from "theory," meaning literary or cultural theory as practiced in the humanities from the 1960s into the 1990s. "Theory" meant the use of (p. 4) "poststructuralism, postmodernism, deconstruction, psychoanalysis, Marxism, feminism, postcolonialism, and queer theory" in interpreting texts and discussing methods and goals within the humanities.

"Theory" tended to deprecate human agency. Poststructuralism "limit[ed] individual agency" by insisting that we could not overcome (or even understand) various features of our language, psychology, and culture. Multiculturalism added another argument against human agency by insisting "on the primacy of ascribed group identity." Anderson, in contrast, believes in human agency, in the specific sense that we can think morally about, and influence, the development of our own characters. We don’t just "don styles [of thinking and writing], … as evanescent and superficial as fashion" (p. 127). Instead, we are responsible for how we develop ourselves.

Focusing on character does not imply a faith in untrammeled free will or individualism. "Such an exercise can (and, in my view, ideally should) include a recognition of the historical conditions out of which beliefs and values emerge (psychological, social, and political) that can thwart, undermine, or delay the achievement of such virtues and goods" (p. 122).

Anderson takes the side of liberals, Enlightenment thinkers, and proponents of deliberation in the public sphere, theorists like Jurgen Habermas (p. 5). But she emphasizes that a rational, critical, analytical stance--sometimes seen as the liberal ideal--is just one kind of character. Like other character types or identities, it must be cultivated in oneself and in others before it can flourish. Thus a Kantian or Habermasian stance is not an abstract ideal, but a way of being in the world that requires education, institutional support, and "an ongoing process of self-cultivation" (p. 127). Like other character types, the critical rationalist and the civic deliberator must be assessed morally. The primary question is how one should live. Living as a critical rationalist is just one response, to be morally examined like the others (p. 112).

For all that they seem to reject deliberation about how to live, postmodernist theorists also have views about ethos (character). For example, Stanley Fish and Richard Rorty have presented the ironist as an ideal character type. “With varying degrees of explicitness and self-awareness, I argue, contemporary theories present themselves as ways of living, as practical philosophies with both individualist and collective aspirations” (p. 3). Most of The Way We Argue Now is devoted to close, often sympathetic, but also critical readings of theoretical texts. Anderson is very insightful about character, form, irony, ambiguity, and development in these works--elements that we usually associate with literature, not with literary theory. She defends several postmodernist and multicultural authors by showing that they embody moral stances or characters that have value. She is a pluralist, in contrast to a liberal or deliberative democrat who would see the only valuable theory as one that embodied the character traits of reasonableness or tolerance. She believes that the question, "How should I live?" opens a broad discussion in which the radical theoretical movements of the 1960s to 1990s have a place.

To investigate the link between each theory and the character of those who endorse and live by it would broaden the discussion beyond "identity politics, performativity, and confessionalism," which "have exercised a certain dominance" (p. 122). Identity politics reduces the choice to either the "espousal" or the "subversion of various ascriptive and power-laden identities (gender, race, ethnicity, class, sexuality); such enactments are imagined, moreover, as directly and predominantly political in meaning and consequence." There is more to be discussed than how we relate to ascribed identities in political contexts. "Ultimately, a whole range of possible dimensions of individuality and personality, temperament and character, is bracketed, as is the capacity to discuss what might count as intellectual or political virtue or, just as importantly, to ever distinguish between the two" (pp. 122-3).

permanent link | comments (0) | category: philosophy

March 17, 2008

science from left and right

On the left today, most people seem to think that science is trustworthy and deserves autonomy and influence. On this view, the Bush Administration must be a bunch of rubes, because it continually gets into struggles with scientists. Thus, for example, the first masthead editorial in today's New York Times is entitled "Science at Risk." The Times says:

As written in 1970, the [Clean Air Act] imposes one overriding obligation on the E.P.A. administrator: to establish air quality standards "requisite to protect the public health" with "an adequate margin of safety." Economic considerations--costs and benefits--can be taken into account in figuring out a reasonable timetable for achieving the standards. But only science can shape the standards themselves.

Congress wrote the law this way because it believed that air quality standards must be based on rigorous scientific study alone and that science would be the sure loser unless insulated from special interests.

But the definitions of "requisite to protect the public health" and an "adequate margin of safety" could never be scientific. These were always value-judgments--implicit decisions about how to balance mortality and morbidity versus employment and productivity. Costs always factored in, because the only level of emissions that would cause no harm to human health is zero. EPA has allowed enormous quantities of emissions into the air, surely because the agency balances moral goods against moral evils. What the Clean Air Act said was: professional scientists (not politicians or judges) shall estimate the costs of pollution. Since it is unseemly to talk about human deaths and sickness as "costs," scientists shall not use this word, nor set explicit dollar values on lives. Instead, they shall declare certain levels of safety to be "adequate," and present this as a scientific fact.
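To see how an "adequate margin of safety" smuggles in a dollar figure, consider a back-of-the-envelope sketch. Every number is invented; none of this comes from the EPA or the Clean Air Act.

    # Hypothetical: tightening a standard costs industry an extra $2 billion
    # and is projected to prevent 500 premature deaths. Accepting or rejecting
    # the tighter standard implicitly prices each statistical life, whether or
    # not anyone utters the word "cost."
    def implied_value_per_life(extra_compliance_cost, deaths_prevented):
        return extra_compliance_cost / deaths_prevented

    print(implied_value_per_life(2000000000, 500))  # 4000000.0 dollars per life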

I well remember when people on the left were the quickest to be skeptical of such claims. Science is frequently an ally of industry and the military. It is intellectually imperialistic, insensitive to cultural traditions. It is arrogant, substituting expertise for public judgment even when there are no legitimate expert answers to crucial questions. (For instance, What is the economic value of a life?). Science is a human institution, driven by moral and cultural norms, power, and status. It is not an alternative to politics.

So progressives used to say. Yet scientific consensus now seems to favor progressive views of key issues such as climate change. The conservative coalition encompasses critics of science, such as creationists. And, as Richard Lewontin wrote immediately before the 2004 election, "Most scientists are, at a minimum, liberals, although it is by no means obvious why this should be so. Despite the fact that all of the molecular biologists of my acquaintance are shareholders in or advisers to biotechnology firms, the chief political controversy in the scientific community seems to be whether it is wise to vote for Ralph Nader this time."

These are short-term political calculations that lead progressives to ally themselves with science and endorse its strongest claims to power. If we are going to defend science, we should do so on the basis of principle, not political calculation. I agree with the Times that the EPA should clamp down on air pollution. I disagree that this would represent a triumph of science over politics. It would be a moral and political victory--and that is all.

permanent link | comments (1) | category: philosophy

March 12, 2008

conservative relativism

Moral relativism is the idea that there isn't any objective or knowable right or wrong; there are only the opinions of individuals or cultures at particular times in history. Some famous conservatives have made their names by attacking moral relativism: Bill Bennett and Allan Bloom, for instance. Many of us also object to it from the left, since it undermines claims about social justice. But conservatives and liberals sometimes make moral-relativist arguments when it suits them.

Consider Justices Roberts and Thomas in the case of Parents Involved in Community Schools v. Seattle School District (2007). This is a racial segregation/integration case. Defendants want to use race as a factor in assigning kids to schools, for the purpose of increasing diversity or integration. They claim that this goal is benign, unlike segregationists' use of race, which was malicious. They ask the court to allow racially conscious policies that are well-intentioned, reasonably supported by evidence, and enacted through democratic procedures.

In response, Justice Roberts quotes Justice O'Connor from an earlier case: "The Court's emphasis on 'benign racial classifications' suggests confidence in its ability to distinguish good from harmful governmental uses of racial criteria. History should teach greater humility… . '[B]enign' carries with it no independent meaning, but reflects only acceptance of the current generation's conclusion that a politically acceptable burden, imposed on particular citizens on the basis of race, is reasonable." Justice Thomas likewise argues that allowing a school system to promote diversity through racial classification means acceding to "current societal practice and expectations." That was the approach, he argues, that led the majority in Plessy v. Ferguson to uphold Jim Crow laws, which were the fad of that time. "How does one tell when a racial classification is invidious? The segregationists in Brown argued that their racial classifications were benign, not invidious. ... It is the height of arrogance for Members of this Court to assert blindly that their motives are better than others."

These justices doubt that there is a knowable difference between benign and invidious uses of race. But surely there are moral differences between Seattle's integrationist policy of 2005 and the policy of Mississippi in 1940: differences of intent, principle, means, ends, expressive meaning, and consequences or outcomes. If we cannot tell the difference, we are moral idiots. There can be no progress, and there isn't any point in reasoning about moral issues.

To be sure, Seattle's policy is open to critique. The conservative justices quote some politically correct passages from the school district's website to good satirical effect, and the policy could also be attacked from the left. Whether Seattle should be able to decide on its use of race, or whether that should be decided by judges, is a good and difficult question. But it's almost nihilistic to assert that "benign" has "no independent meaning" and reflects only the opinions of the "current generation." That equates Seattle's policy with that of, say, George C. Wallace when he "barred the schoolhouse door."

permanent link | comments (0) | category: philosophy

January 18, 2008

on shared responsibility for private loss

(Syracuse, NY) Yesterday, I wrote a fairly frivolous post in response to Steven Landsburg's New York Times op-ed, because I found one of his analogies risible. But I suppose it's worth summarizing the standard serious, philosophical argument against his position (which is libertarian, in the tradition of Robert Nozick). Landsburg asks whether we should compensate workers who would be better off without particular free-trade agreements that have exposed them to competition and have thereby cost them their jobs.

One way to think about that is to ask what your moral instincts tell you in analogous situations. Suppose, after years of buying shampoo at your local pharmacy, you discover you can order the same shampoo for less money on the Web. Do you have an obligation to compensate your pharmacist? If you move to a cheaper apartment, should you compensate your landlord? When you eat at McDonald’s, should you compensate the owners of the diner next door? Public policy should not be designed to advance moral instincts that we all reject every day of our lives.

I need not compensate a pharmacist if I buy cheaper shampoo than she sells, because I have a right to my money, just as she has a right to her shampoo. We presume that the distribution of property and rights to me and to the pharmacist is just. We're then entitled to do what we want with what we privately own. But who says that the distribution of goods and rights on the planet as a whole is just? It arose partly from free exchanges and voluntary labor--and partly from armed conquest, chattel slavery, and enormous helpings of luck. For example, some people are born to 12-year-old mothers who are addicted to crack, while others are born to Harvard graduates.

Given the distribution of goods and rights that existed yesterday, if we let free trade play out, some will become much better off and some will become at least somewhat worse off as a result of voluntary exchanges. Landsburg treats the status quo as legitimate--or given--and will permit it to evolve only as a result of private choices (which depend on prior circumstances). However, the Constitution describes the United States as an association that promotes "the general Welfare." Within such an association, it is surely legitimate for people who are becoming worse off to state their own interests, and it is morally appropriate for others to do something to help. (How much they should do, and at what cost to themselves, is a subtler question.)

Of course, one can question the legitimacy of the American Republic. It is not really a voluntary association, because babies who are born here are not asked whether they want to join. And its borders are arbitrary. That said, one can also question the legitimacy of our system of international trade. It is based on currencies, corporations, and other artificial institutions.

The nub of the matter is whether you think that individuals may promote their own interests in the market, in the political arena, or both. If one presumes that the economic status quo is legitimate, then the market appears better, because it is driven by voluntary choice. But if one doubts the legitimacy of the current distribution of goods and rights, then politics becomes an attractive means to improve matters. Because almost all Americans believe in the right and duty of the government to promote the general welfare, even conservatives like "Mitt Romney and John McCain [battle] over what the government owes to workers who lose their jobs because of the foreign competition unleashed by free trade."

permanent link | comments (3) | category: philosophy

October 2, 2007

tightening the "nots"

For what it's worth, I have listed my fundamental commitments and beliefs here. I can also define my own position by saying what kind of a scholar/writer I am not:

Not a positivist, because I don't believe that one can isolate facts from values, nor that one can live a good life without reasoning explicitly about right and wrong.

Not a technocrat, because I don't believe that any kind of technical expertise is sufficient to address serious public problems.

Not a moral relativist, because the arguments for moral relativism are flawed, and the consequence of relativism is nihilism.

Not a post-modernist of the type influenced by Foucault (who is a major influence across the cultural disciplines), because I believe that deliberate human choices and actions matter and freedom is real.

Not a social constructivist, because I believe we are responsible for understanding the way the world actually works.

Not a utopian, because I believe that any persuasive theory of justice must incorporate a realistic path to reform. An ideal of justice that lacks a praxis is meaningless, or worse.

Not a utilitarian, because I don't believe that any social welfare function can define a good society.

Not a deontologist, because I doubt that any coherent list of principles can define a good society.

Not a pure pragmatist, because we need criteria for assessing whether a social process for defining and addressing problems is fair and good. Such criteria are extrinsic to the process itself.

Not a pluralist (in the political-science sense), because I believe there is a common good. But also not a deliberative democrat (in the Habermas version), because I believe that there are real conflicts of interest.

permanent link | comments (2) | category: philosophy

September 19, 2007

where morality comes from

Nicholas Wade's New York Times article, entitled "Is 'Do Unto Others' Written Into Our Genes?" started off badly enough that I had a hard time reading it. Stopping would have been a loss, because I appreciated the reference to YourMorals.org, where (after registering) one can take a nifty quiz.

Wade begins: "Where do moral rules come from? From reason, some philosophers say. From God, say believers. Seldom considered is a source now being advocated by some biologists, that of evolution."

First of all, the evolutionary basis of morality is not "seldom considered." It has been the topic of bestselling books and numerous articles. Even the student commencement speaker at the University of Maryland last year talked about it.

More importantly, Wade's comparison of philosophers and biologists is misleading. Biologists may be able to tell us where morals "come from," in one sense. As scientists, they try to explain the causes of phenomena, such as our beliefs and behaviors. We call some of our beliefs and behaviors "moral." Biology may be able to explain why we have these moral characteristics; and one place to look for biological causes is evolution.

But why are we entitled to call some of our beliefs and behaviors moral, and others--equally widespread, equally demanding--non-moral or even immoral? Why, for example, is nonviolence usually seen as moral, and violence as immoral? Both are natural; both evolved as human traits. Moreover, not all violence is immoral, at least not in my opinion. Not even all violence against members of one's own group is wrong.

Morality "comes from" reason, not in the sense that reason causes morality, but because we must reason in order to decide which of our traits and instincts are right and wrong, and under what circumstances. Evolutionary biology cannot help us to decide that. If biologists want to study the origins of morality, they must use a definition that comes from outside of biology. One approach is to use the definition held by average human beings in a particular population. But why call that definition "moral"? I would call it "conventional." Conventional opinion may, for example, abhor the alleged "pollution" caused by the mixing of races or castes. It is useful to study the reasons for such beliefs, but it is wrong to categorize them as moral.

Perhaps I wrote that last sentence because of my genes, my evolutionary origins, or what I ate for breakfast this morning. Whether it is true, however, depends on reason.

permanent link | comments (1) | category: philosophy

August 29, 2007

hypocrisy

If Senator Larry Craig opposed gay rights and said hostile things about gays while occasionally soliciting gay sex, he was hypocritical. Hypocrisy is one of the easiest faults to prove, but it is not one of the worst faults, especially in a leader.

Hypocrisy is easy to establish, once the facts are out, because it involves a contradiction between the person's statements and his actions. (Likewise, lies are evident when a person's statements contradict what he knows or believes.) You can have very few moral commitments and very little knowledge of issues, and yet detect other people's hypocrisy.

But what if Larry Craig were completely heterosexual and totally faithful to his wife, yet anti-gay? In my view, his position would then reflect injustice and intolerance. These are worse faults than hypocrisy; they have far more serious consequences. But many Americans are uncomfortable about charging anyone with injustice. That's because: (1) the charge is controversial, given that definitions of justice vary; (2) the accusation reflects deep moral commitments, which are incompatible with moral relativism or skepticism; and (3) the claim requires knowledge of issues and policies. The issue of gay rights happens to be relatively easy to understand, but I would argue that Senator Craig's votes on economic policy display equally serious injustice. To make that claim, I have to follow politics fairly closely and develop strong moral commitments.

Thus I think that Americans who are disconnected from politics and issues tend to jump on evidence of hypocrisy as if it were very momentous (and interesting) news, whereas far worse faults are ignored.

(It's not even crystal-clear that Larry Craig is a hypocrite, because one could oppose certain rights for gays and yet be gay or bisexual, without a contradiction. If Craig is a hypocrite, it's not because of his policy positions but because he falsely denies being gay himself--or so his accusers claim. I happen to feel considerable sympathy for a gay person who hides his orientation, given the general climate of intolerance and the tendency of police to entrap gay men. But hypocrisy, while not the worst moral fault, is wrong. The wrongness, it seems to me, lies in the failure to treat other people as responsible and rational agents who can make decisions on the basis of facts. Instead, the hypocrite feels it necessary to deceive in order to get the results he wants. This is manipulative; it is using someone else as a means to one's ends, not as an end in himself. But of course there are many forms of political manipulation that do not involve hypocrisy--for example, fear-mongering and exaggeration.)

permanent link | comments (5) | category: philosophy

July 18, 2007

stability of character

I think most people believe, as a matter of common sense, that individuals have stable characters. In fact, it turns out that the word "character" comes from a Greek noun for the stamp impressed on a coin. We think that adults have been "stamped" in some way, so that one person is brave but callous; another, sensitive but vain. We make fine discriminations of character and use them to predict behavior. We also see categories of people as stamped in particular ways. For instance, we may think that men and women have different characters, although that particular distinction is increasingly criticized--and for good reasons.

Experiments in social psychology, on the other hand, tend to show that most or all individuals will act the same way in specific contexts. Details of the situation matter more than differences among individuals. For instance, in a famous experiment, seminary students on their way to give a lecture on helping needy people are confronted with an actor who is slumped over and pretending to be in distress. Whether the students stop depends on how late they believe they are--a detail of the context. All the self-selection, ideology, training, and reflection that goes into seminary education seems outweighed by the precise situation that a human being confronts on his way to an appointment.

On a much broader scale, we are all against slavery and genocide today. But almost all White people condoned slavery in America ca. 1750, and almost all gentile Germans turned a blind eye to genocide ca. 1940. It seems safe to say that context made all the difference, not that our characters are fundamentally better than those of old. (For a good summary, see Marcia Homiak, "Moral Character," The Stanford Encyclopedia of Philosophy [Spring 2007 Edition], edited by Edward N. Zalta.)

My question is why the common sense or folk theory of character seems so attractive and is so widespread. If human behavior depends on the situation and is not much affected by individuals' durable personality traits, why do we all pay so much attention to character?

In fact, most people we know are rarely, if ever, confronted with new categories of challenging ethical situations. Neither the political regime nor one's social role changes often, at least in a country like the USA. An individual may repeatedly face the same type of situation, and these circumstances differ from person to person. Thus a big-city police officer in the US faces morally relevant situations of a certain type--different from those facing a suburban accountant. An American lives in a different kind of social/political context from an Iraqi. Individuals occupy several different social roles at once. But the roles themselves are pretty stable. They are, to varying degrees, the result of choices that we have made.

Thus what we take to be "character" may be repeated behavior resulting from repeated circumstances--which, in turn, arise because of the roles we occupy, which (to some degree) we choose. In that case, it is reasonable to expect people to act "in character," yet situations are what drive their behavior. By the way, this seems a generally Aristotelian account.

permanent link | comments (1) | category: philosophy

July 16, 2007

the purposes of political philosophy

(In Philadelphia for the National Conference on Volunteering and Service) Why would a person sit down at a desk to write general and abstract thoughts about politics? This is a significant question, because people who think hard about politics are likely to be interested in social change. Yet it is not obvious that writing abstract thoughts about politics can change anything.

One might write political theory in order to persuade someone with the power to act on one's recommendations: for instance, the sovereign. Machiavelli addressed his book The Prince "ad Magnificum Laurentium Medicem"--"to Lorenzo (the Magnificent) de' Medici"--a man who surely had the capacity to govern.

Today, political theorists still occasionally write papers for the World Bank or a national government, preserving the tradition of philosophy as advice to the ruler. Ronald Dworkin, Thomas Nagel, Robert Nozick, John Rawls, et al. sent a brief to the Supreme Court whose first section was headed, "Interest of the Amici Curiae." The authors explained their "interest" as follows: "Amici are six moral and political philosophers who differ on many issues of public morality and policy. They are united, however, in their conviction that respect for fundamental principles of liberty and justice, as well as for the American constitutional tradition, requires that the decisions of the Courts of Appeals be affirmed."

Unfortunately, one rarely finds a sovereign willing to act on morally demanding principles. And if one's principles happen to be republican, one may not wish to serve or help the sovereign at all. (It is a subtler question whether a powerful Supreme Court is compatible with republicanism.)

Rousseau, being a republican, thought that Machiavelli's advice to Lorenzo had to be ironic. Machiavelli's real audience was--or so Rousseau presumed--the Florentine people, who would realize that a prince, in order to be secure, must be ruthless and cruel. They would therefore rise up and overthrow Lorenzo, becoming what they should always have been: the sovereign. In this "theory of change," the philosopher addresses the sovereign as an apparently loyal courtier, but his real effect is to sow popular discontent and rebellion.

Whether or not Rousseau's reading of Machiavelli was correct, many philosophers have addressed themselves to the public as the sovereign. Rousseau himself dedicated his Discourse on Inequality "To the Republic of Geneva." He began: "Magnificent, very honorable, and sovereign sirs, convinced that it is only fitting for a virtuous citizen to give to his nation the honors that it can accept, for thirty years I have labored to make myself worthy to offer you a public homage. ..."

There is, I'm sure, some irony in Rousseau's dedication. He didn't expect the oligarchs of Geneva to whom he addressed his discourse to act in accord with his ideas. He understood that "la Republique" was not the same as the "souverains seigneurs" who might actually read his book.

Today, a dedication or appeal to the public would seem pretentious in a professional philosophy book--partly because it's clear that "the public" won't read such a work. John Rawls' A Theory of Justice is dedicated to his wife, a common (and most appropriate) opening. Still, I think we can assume that Rawls wanted to address the whole public indirectly. He believed that the public was sovereign. He knew, of course, that most citizens would not read his book, which was fairly hard going. Even if it had been an easier work, most people would not have been interested enough in abstract questions of politics to read any "theory of justice." But Rawls perhaps hoped to persuade some, who would persuade others--not necessarily using his own words or techniques, but somehow fortified by his arguments.

This is a third "theory of change" that may be implicit in most modern academic political theory. The idea is: We must first understand the truth. Since it is complex and elusive, we need a sophisticated, professional discussion that draws on welfare economics, the history of political thought, and other disciplines not easy for a layperson to penetrate. But the ultimate purpose of all this discussion is to diffuse true ideas into the public domain. We do that by lecturing to undergraduates, writing the occasional editorial, persuading political leaders, filing amici briefs, etc.

This theory is not foolish, but I don't believe in it. I doubt that a significant number of people will ever have the intellectual interests or motivations to act differently because they are exposed to philosophical arguments.

I further doubt that one can develop an intellectually adequate understanding of politics unless one thinks through a theory of change. It is easy, for example, to propose that the state should empower people by giving them various political rights. But what if saying that has no effect on actual states? What if saying it actually gives states ideas for propaganda? (Real governments have sometimes used political theory as the inspiration for entirely hypocritical rhetoric.) What if talking about the value of particular legal rights misdirects activists into seeking those rights on paper, when the best route to real freedom lies elsewhere? In my view, an argument for political proposition P is an invalid argument if making it actually causes not-P. And if you argue for P in such a way that you can never have any impact on P, I am unimpressed.

Finally, I doubt that philosophical arguments about politics are all that persuasive, except as distillations and clarifications of experience. Too much about politics is contingent on empirical facts to be settled by pure argumentation. (In this sense, political philosophy is profoundly different from logic.) Thus I read A Theory of Justice as an abstract and brilliant rendition of mid-20th-century liberalism. But the liberalism of the New Deal and the Great Society was not caused in the first place by political theory. It arose, instead, from practical experimentation and negotiation among social interests. Rawls' major insights derived from his vicarious experience with the New Deal and the Great Society--which makes one wonder how much efficacy his work could possibly have. It was interesting analysis, no doubt; but could it matter?

A fourth "theory of change" is implicit in a work like John Gaventa's Power and Powerlessness (1980). This book has no official dedication, but the preface ends, "Most of all, I am indebted in this study to the people of the Clear Fork Valley. Since that summer in 1971, they have continued to teach, in more ways than they know." It's not clear whether Gaventa expected the residents of an Appalachian valley to read his book, but he did move to the region to be a leader of the Highlander Folk School. Gaventa's theory was: Join a community or movement of people who are motivated and organized to act politically. Learn from them and also give them useful analysis and arguments. Either expect them to read your work directly, or use your academic work to develop your analysis and then share it with them in easier formats.

I am the opposite of a Marxist in most respects, but I think we have something to learn from Marxists on the question of "praxis": that is, how to make one's theory consequential. In his Theses on Feuerbach, Marx wrote, "Philosophers have hitherto only interpreted the world in various ways; the point is to change it." That seems right to me, not only because we have a moral or civic obligation to work for social change, but also because wisdom about politics comes from serious reflection on practical experience.

Thus I will end with one more quote from a preface--the 1872 preface of the German edition of the Communist Manifesto. Here we see Marx addressing an organized social movement: "The Communist League, an international association of workers, which could of course be only a secret one, under conditions obtaining at the time, commissioned us, the undersigned, at the Congress held in London in November 1847, to write for publication a detailed theoretical and practical programme for the Party. Such was the origin of the following Manifesto, the manuscript of which travelled to London to be printed a few weeks before the February Revolution."

Now that is political writing with a purpose.

permanent link | comments (0) | category: philosophy

June 14, 2007

Günter Grass’s memories

The June 4 New Yorker presents an excerpt from Günter Grass’s memoir, Peeling the Onion. For the first time, we get the novelist's own lengthy account of his experiences in the Waffen S.S., a story that he had suppressed for about 60 years. The New Yorker (or possibly Grass) chose an excerpt that is action-packed. There is not too much rumination about what the experience meant or why he failed to mention it during the decades when he bitterly denounced German hypocrisy about the Nazi past. Instead, the thrilling adventures of a young man at war make us highly sympathetic. We root for him to survive, notwithstanding the double-S on his collar. And as we read the exciting story (under the flip headline of "Personal History: How I Spent the War"), our eyes wander to amusing cartoons about midlife crises.

I would not be quick to condemn a 16-year-old for joining the S.S., although that was a much worse thing to do than joining a gang and selling drugs, for which we imprison 16-year-olds today. For me, the interesting moral question is what the famous and accomplished adult Günter Grass did with his memories.

So ... why run an excerpt that is mainly about his exciting adventures in the war? Why not write about the 60-year cover-up? Why introduce the memoir in English in a very lucrative venue, America's most popular literary magazine? Also, why write only from his personal perspective, saying almost nothing about the nature of the S.S. or its reputation among German civilians at the time?

Grass cannot recall precisely what the S.S. meant to him when he was assigned to it. But he thinks it had a "European aura to it," since it comprised "separate volunteer divisions of French and Walloon, Dutch and Belgian. ..." The von Frundsberg Division, to which he was assigned, was named after "someone who stood for freedom, liberation." And once Grass was in the S.S., where he was exposed to many months of training, "there was no mention of the war crimes that later came to light."

This paragraph continues: "But the ignorance I claim cannot blind me to the fact that I had been incorporated into a system that had planned, organized, and carried out the extermination of millions of people. Even if I could not be accused of active complicity, there remains to this day a residue that is all too commonly called joint responsibility. I will have to live with it for the rest of my life."

I do not know whether the factual claim here is credible. I must say I find it very surprising that in the course of a whole autumn and winter of S.S. training, there was "no mention" of war crimes. Maybe the details of the death camps were not discussed, but I am amazed that the S.S. trainers never talked in general terms about violence against Jewish, Gypsy, Slavic and other civilian populations. That was a different kind of "European aura": the attempted slaughter of several whole European peoples.

Regardless of what precisely Grass heard in his S.S. training, I find his reflection on "joint responsibility" troubling. He says he has no "active complicity," even though he had joined the S.S. when he could have found his way into the army. His involvement in the Holocaust is passive: "I was incorporated into a system. ..." As a result of this bad moral luck, he feels "joint responsibility"--a term that is "all too commonly" used. (Actually, I find this sentence hard to interpret and evasive. Is the term "joint responsibility" used when it does not apply? Does it apply in his case?) Finally, Grass emphasizes the distress that his passive complicity has always caused him and will continue to cause him for the rest of his life. There is no hint of an apology for the harm that his active decision to join the S.S. might have caused other people. And then the memoir proceeds to make him its hero--his survival a happy ending.

I would forgive Grass instantly if he took personal responsibility for what he did at age 16 and 17. I am not so sure I like how he is behaving at age 80.

permanent link | comments (2) | category: philosophy

May 31, 2007

a typology of democracy and citizenship

I've been in Chicago for an interesting research conference on civic participation. There was some discussion about how empirical research should relate to "normative" thinking, i.e., arguments about how citizens ought to act, or how institutions should treat citizens. One of my colleagues* suggested that it might be helpful to provide empirical researchers with a menu of reasonable normative ideals, each of which might support different policies and outcome measures.

I'd first note that many people care about politics because they have substantive goals: for instance, social justice, individual liberty, moral reform, or concern for nature. Thus we could begin by listing substantive political ideals. But that would produce a huge array, especially once we cross-referenced each substantive goal with various ideas about appropriate political behavior. (For instance, you can be an environmentalist who believes in public deliberation, an environmentalist revolutionary, or an environmentalist who thinks that consumers and conservationists should bargain with business interests.) Thus I'd begin by conceding that there will be debates about what makes a good (or better) society. Assuming that the people engaged in these debates want to handle their differences democratically, we can turn to various rival views of democracy:

1. Theories of democratic participation

a. Equal influence in an adversarial system: The main purpose of politics is to bend institutions to one's own purposes, nonviolently. As in the title of Harold Lasswell's 1936 book, politics is "Who Gets What, When, How." It is desirable that poor and marginalized people participate in politics effectively, because this is their way to counter massive inequality in the economy. Voting is a core measure of participation; votes should be numerous, and the poor should be at least as prone to vote as the rich. Other forms of political engagement are also aimed at the state or at major private institutions, e.g., persuading others to vote, protesting, and filing lawsuits. The value of a political act depends on its impact, which is empirically measurable. For example, a protest may affect the government more or less than a vote, depending on the circumstances.

b. Deliberation: The main purpose of politics is to exchange ideas and reasons so that opinions can become more fair and informed before people take action. A vote is not a good act unless it is well informed and reflects ethical judgment and learning. Participation in meetings is good, especially if the meetings include ideologically diverse people, operate according to fair rules and norms, and conclude with agreement. The use of high-quality news and opinion sources is another indicator of deliberation.

c. Public work: Citizens create public goods by working together--especially in civil society, but also in markets and within the government if these venues are reasonably fair. Public goods include cultural products, the creation of which is an essential democratic act. Relevant individual-level indicators include "working with others to address a community problem" (a standard survey question) or--specifically--participation in environmental restoration, educational projects, public art, etc. Perhaps the best indicators are not measures of individual behavior but rather assessments of "the commonwealth," which is the sum of public goods.

d. Civic republicanism: Political participation is an intrinsically dignified, rewarding, and honorable activity--superior, in particular, to consumerism. It is implausible that voting once a year could be dignified and rewarding; but deliberation or public work could be.

Civic participation is not only a means to change society; it is also part of the citizen's life. Thus we also need to consider:

2. Theories of the good life

a. Critical autonomy: The individual should be as free as possible from inherited biases and presumptions. We should hold our opinions and roles by choice and revise them according to evidence and alternative views. Not only should people choose their substantive political values, but they should decide, after due reflection, whether or not to engage politically.

b. Eudaimonism: A good life is a happy life, if happiness is properly understood. (And that's a matter of debate.) The happiness of all human beings should matter to each of us, which implies strong and universalistic moral obligations.

c. Communitarianism: We are born into communities that profoundly shape us. Although we should have some rights of voice within our communities and exit in cases of oppression, true autonomy is a chimera and membership is a necessary source of meaning. Participation in a community is essential, but what constitutes appropriate participation is at least somewhat relative to local norms.

d. Creativity: The good life involves some measure of innovation, expression, and the creation of things that have lasting value. Creative work can be collaborative, in which case it requires civic engagement.

These two lists could be combined to create an elaborate grid or taxonomy (which would become 3-D if we added substantive political goals). I'm struck that my second list, especially, looks rather idiosyncratic, even though my intention was merely to summarize prevailing, mainstream views. I'm not sure what that says about me or this subject.
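
Just to make the combinatorics concrete, here is a toy sketch in Python (purely illustrative; the short labels stand for the theories above) of how the two lists cross to form a grid, and how adding substantive political goals would supply the third dimension:

```python
from itertools import product

# Shorthand labels for the theories listed above.
participation = ["equal influence", "deliberation", "public work", "civic republicanism"]
good_life = ["critical autonomy", "eudaimonism", "communitarianism", "creativity"]
# An illustrative third axis of substantive political goals.
substantive = ["social justice", "individual liberty", "moral reform", "concern for nature"]

grid_2d = list(product(participation, good_life))               # 4 x 4 = 16 cells
grid_3d = list(product(participation, good_life, substantive))  # 4 x 4 x 4 = 64 cells

print(len(grid_2d), len(grid_3d))  # 16 64
```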

*I have a self-imposed policy against identifying other people who attend meetings with me.

permanent link | comments (0) | category: philosophy

May 23, 2007

philosophy and concrete moral issues

The Philosopher's Index (a database) turns up 25 articles that concern "trolley problems." That's actually fewer than I expected, given how frequently such problems seem to arise in conversation. Briefly, they involve situations in which an out-of-control trolley is barreling down the tracks toward potential victims, and you can affect its course by throwing a switch that sends it plowing into a smaller group of victims, or by pushing an innocent person in front of the trolley. Or you can refrain from interfering.

The purpose of such thought experiments is to use our intuitions as data and learn either: (a) what fundamental principles actually underlie our moral choices, perhaps as a result of natural selection, or (b) which moral theory would consistently and appropriately handle numerous important cases. In either case, the "trolley" story is supposed to serve as an example that brings basic issues to the fore for consideration. The assumption is that we have, or ought to have, a relatively small set of general principles that generate our actual decisions.

I do not think this approach is useless, but it doesn't interest me, for the following reason. When I consider morally troubling human interactions and choices, I imagine a community or an institution like a standard American public school. The issues that arise, divide, perplex, and worry us in such contexts usually look like this: Ms. X, a teacher, believes that Mr. Y, her colleague, is not dedicated or effective. How should she relate to him in staff meetings? Or, Ms. X thinks that Johnny is not a good student. Johnny is Latino, and Ms. X is worried about her own anti-Latino prejudices. Or, Ms. X assigns Charlotte's Web, a brilliant work of literature but one whose tragic ending upsets Alison. Should Alison's parents complain? Or, Mr. and Mrs. B believe that Ms. X is probably a better teacher than Mr. Y. Yet they cannot be sure. Should they try to get their little Johnny into Ms. X's class, even if that means insulting Mr. Y? Or should they allow Johnny to be assigned by the principal?

Possibly, philosophy has little value in guiding, or even analyzing, such choices. I would like to think that is wrong, and that philosophical analysis can be helpful. But it is very hard to see how trolley problems can get us closer to wise judgment about concrete cases.

permanent link | comments (2) | category: philosophy

March 29, 2007

what I believe

(In Albuquerque) For whatever it's worth, here are the most basic and central positions I hold these days. The links refer to longer blog posts on each idea:

Ethical particularism: The proper object of moral judgment is a whole situation, not an abstract noun. Some general concepts have deep moral significance, but their significance varies unpredictably depending on their interplay with other factors present in any given situation.

Historicism: Our values are deeply influenced by our collections of prior experiences, examples, and stories. Each person's collection is his or her "culture." But no two people have precisely the same background; one culture shades into another. A culture is not, therefore, a perspective (i.e., a single point from which to observe everything), nor a premise or set of premises from which our conclusions follow. There are no barriers among cultures, although there are differences.

Dialectic over entropy: Cultural interaction generally leads to convergence. Convergence is bad when it is automatic and the result is uniformity. It is good when it is deliberate and the result is greater complexity.

Narratives justify moral judgments: We make sense of situations by describing them in coherent, temporal terms--as stories. Narratives make up a large portion of what we call culture.

Populism: It is an appropriate general assumption--for both ethical and practical reasons--that all people can make valuable contributions to issues of moral significance that involve them. (Note that ethical particularism rebuts claims to special moral authority or expertise.)

Public deliberation: When judgments of situations and policies differ, the people who are affected ought to exchange ideas and stories under conditions of peace and reasonable equality, with the objective of consensus. This process can, however, be local and voluntary, not something that encompasses the whole polity.

Public work: Deliberation should be connected to action. Otherwise, it is not informed by experience, nor is it motivating. (Most people don't like merely to talk.)

Civic republicanism: Participation--the liberty of the ancients--is not only a means to an end; it is also intrinsically dignified.

Open-ended politics: We need a kind of political leadership and organizing that does not aim at specific policies or social outcomes, but rather increases the prevalence of deliberation and public work. Like other forms of politics, this variety needs strategies, messages, constituencies, and institutions.

The creative commons: Many indispensable public goods are not just given (like the sun or air) but are created by collective effort. Although there is a global creative commons, many public goods are local and have a local cultural character.

Developmentalism: Human beings pass through a life course, having different needs and assets at different points. Development is not a matter of passing automatically through stages; it requires opportunities. Active citizens are made, not born. They acquire culture and help make it.

Associations: Voluntary private associations create and preserve public goods, host deliberations, and recruit and teach the next generation.

Some of these ideas fit together very neatly, but there are tensions. For example, how can I be skeptical about judging abstract moral concepts and yet offer a positive judgment of "participation," which is surely an abstract idea? As a matter of fact, I don't think participation is always intrinsically good; I simply think that we tend to undervalue it or overlook its intrinsic merits. But how weakly can I make that claim without undermining it entirely?

permanent link | comments (1) | category: philosophy

March 22, 2007

consequentialists should want torture to "work"

I ended yesterday's post with the question, "if killing is worse than torturing, why should we ban the latter--especially if it proves an efficient means of preventing casualties?" I said "if" because this is a controversial empirical hypothesis. Human rights groups argue that torture does not work. It does not prevent terrorism or other grave evils, because those who are tortured can lie or can change their plans once they are captured. It generates false information that justifies even more torture without actually serving national security or any other acceptable end.

This sounds at least plausible. But it isn't impossible to imagine a situation in which a particular form of torture (duly limited and overseen) actually has beneficial net effects on human happiness. That is, the few people who suffer under torture may--in this hypothetical world--cough up enough true information that there is less terrorism, tyranny, or war. Their suffering is far outweighed by the increased security of numerous others.

What I find interesting is that I don't want this scenario to be empirically true. I believe in universal human rights, which rest on a sense of the dignity and intrinsic worth of all people. I also think that virtue excludes the use of torture, which is dishonorable. However, I am not so much of a "deontologist" that I'll stick to principles regardless of their consequences. I won't say "fiat lex, pereat mundus"--let the [moral] law prevail even if the world perishes. Instead, I hope that the effects of torture prove harmful, because then arguments about consequences will line up with arguments about principles and virtues and the case will be easy.

One could, however, be a consistent consequentialist and argue that we should institute torture (with appropriate safeguards and limits) if and only if its net effects are positive. If that is your view, you should actually hope that torture is highly effective. If any practice, P, has both costs and benefits, a consequentialist should want its benefits greatly to outweigh its costs and should then press to institutionalize P. A consequentialist should oppose torture if, as the human rights groups say, it doesn't work. But I see no consequentialist grounds for hoping that it doesn't work.

permanent link | comments (2) | category: philosophy

March 20, 2007

Wittgenstein in the kitchen

Wittgenstein used "game" as an example of a word that we can use effectively even though its instances are highly various. Some games are competitive, some are fun, and some have rules--but some have none of these features. Indeed, Wittgenstein thought that there was no defining feature of "games," but there were many individual games that were similar to many others. The word marked a cluster of cases that one could learn to "see" without being able to identify a common denominator. It might be right or wrong to call a given object a "game," but the test would not be whether the object met any particular criterion.

My favorite example of such words is not "game," but "curry"--a kind of hobson-jobsonism derived from a Tamil word meaning "sauce or relish for rice." But there are plenty of curries served without rice, and plenty of rice sauces that aren't curries. Webster's defines the English word "curry" as "a food, dish, or sauce in Indian cuisine seasoned with a mixture of pungent spices." But there are millions of curries that don't come from India, and some Indian curries are not particularly pungent.

Here are the ingredients for two curries, taken from cookbooks in our house. 1) Whole chicken, onions, blanched almonds, coriander seeds, cardamom pods, pepper, yogurt, salt. 2) Flank steak, peanut butter, coconut milk, basil leaves, fish sauce, sugar, cumin, white pepper, paprika, galanga root, kaffir lime leaves, coriander, peppercorns, lemon grass, garlic, shallots, salt, and shrimp paste. These recipes both contain coriander and salt, but it is not hard to find other curries without the coriander, and you can leave out the salt. It is hard to find any two curries that share absolutely no common ingredient. Yet the ingredients that any two share may not be found in a third.
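
The logic of that last observation can be displayed in a toy model--a Python sketch with invented ingredient sets (not the recipes above): every pair of "curries" shares something, yet no single ingredient runs through all of them.

```python
# Three invented ingredient sets, purely for illustration.
curry_a = {"coriander", "salt", "yogurt", "cardamom"}
curry_b = {"coriander", "coconut milk", "lemon grass", "fish sauce"}
curry_c = {"salt", "coconut milk", "peanut butter", "galangal"}

curries = [curry_a, curry_b, curry_c]

# Every pair of sets has a non-empty intersection ...
pairwise = all(x & y for x in curries for y in curries if x is not y)
# ... but nothing is common to all three: no defining ingredient.
common = set.intersection(*curries)

print(pairwise)  # True
print(common)    # set()
```

This is the same structure Wittgenstein ascribed to "games": overlapping similarities without a common denominator.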

If "curry" cannot be defined by its components, perhaps it refers to some cooking method? Many curries involve pastes or thick sauces composed of ground ingredients. But that's also a good description of romesco sauce from Catalonia, pesto from Italy, or chile con carne. No one would call a minestrone with pesto a curry. We could try to define "curry" by listing countries of origin. But there are dishes from India that aren't curries. "Country captain" is arguably a curry of English origin. And what about adobo from the Philippines or a lamb stew from Iran? Curries or not?

In short, you can teach or learn the correct meaning of "curry" (albeit with some controversial borderline cases), but you cannot define it in a sentence that will communicate its meaning. Learning requires experience. I believe the same is true of "love," "happiness," and "virtue"--but that's another story.

permanent link | comments (0) | category: philosophy

February 21, 2007

building alternative intellectual establishments

Think back to the year 1970. ...

  • Almost all university professors are men. They seem to be interested only in male historical figures and male issues. They select their own advanced students and colleagues and decide which manuscripts are published. They defend their profession as rigorous, objective, and politically neutral. Feminists respond by criticizing those claims; some also try to create a parallel set of academic institutions (women's studies departments, feminist journals) that can confer degrees and tenure and publish.
  • Certain academic disciplines, including law, history, and political science, are seen as predominantly liberal. They seem to support a liberal political establishment that has considerable power. For example, law professors are gatekeepers to the legal profession, which produces all judges. Professors in these fields choose their own successors and claim to be guardians of professionalism, expertise, independence, and ethics. Conservatives--disputing these claims--decide to build a parallel set of research institutions, including the right-wing think tanks and organizations like the Federalist Society (founded 1982).
  • The National Endowment for the Arts gives competitive grants to individual artists. NEA peer-review committees are composed of artists, critics, and curators. They are said to be insulated from politics and capable of choosing only the best works. The artists they support tend to come from the "Art World" to which they also belong: a constellation of galleries, art schools, small theaters, and magazines, many based in New York City. Most of the funded work is avant-garde. It is usually politically correct, aiming to "shake the bourgeoisie." Critics complain about some particularly controversial artists, and ultimately the individual grants program is canceled.
  • Almost all professional biologists are Darwinians. They assert the legitimacy of science; but their religious critics believe that they depend on false metaphysical assumptions. Biologists use peer review to select their students, to hire colleagues, to disburse research funds, and to choose articles for publication. Religious critics cannot get through this system, so they build a parallel one composed of the Institute for Creation Research, Students for Origins Research, and the like.
  • The most influential news organs in the country (some national newspapers and the nightly television news programs) claim neutrality, objectivity, accuracy, and comprehensiveness: in a phrase, "all the news that's fit to print." Critics from both the left and right detect all sorts of bias. They try (not for the first time in history) to construct alternative forms of media, including NPR (founded in 1970) and right-wing talk radio.
  • If you are influenced by Nietzsche's Genealogy of Morals and Foucault, you may see all knowledge as constructed by institutions to serve their own wills to power. Then you must view all of the efforts mentioned above with equanimity--or perhaps with satisfaction, since they have unmasked pretentious claims to Truth. If you believe in separate spheres of human excellence, then you may lament the way that various disciplines and fields have been enlisted for political organizing. You may concede that all thought has a political dimension, but you may be sorry that scholarly and artistic institutions have been used as strategic resources in battles between the organized left and right. (I owe this idea to Steven Teles.)

    I guess my own response is ad hoc and mixed. For example, I think that conservative ideas about law, history, and political science are interesting and challenging and should be represented in academia. I'm sorry that some legal conservatives have found their way to the Supreme Court, but the solution is to win the public debate about the meaning of the Constitution--not to wish that conservatives would go away. The Federalist Society provides liberals with a valuable intellectual challenge.

    I suspect that the NEA's peer-review committees of the 1970s and 1980s often identified the best artists: meaning those who were most innovative, sophisticated, and likely to figure in the history of art as it is written a century from now. (Although who can tell for sure?) But I'm not convinced that taxpayers' money should be devoted to the "best" artists. Other criteria, such as geographical dispersion, various sorts of diversity, and public involvement, should perhaps also count. If it's fair to say that the New York Art World disbursed public money to itself, that sounds like a special-interest takeover of a public agency.

    Finally, "creation science" and "intelligent design theory" strike me as both scientific and theological embarrassments, destined to disappear but not before they have done some damage. Nevertheless, the anti-Darwinian organizations reflect freedom of association and freedom of speech and must certainly be tolerated.

    (These ad hoc judgments are probably not consistent or coherent at a theoretical level.)

    permanent link | comments (0) | category: academia , philosophy

    January 11, 2007

    consequences of particularism

    I drafted a paper more than a year ago that drew some political implications out of a philosophical doctrine called "moral particularism" (click for pdf). I haven't had a chance to improve and expand that paper for publication. It actually covers a huge amount of ground very thinly (which makes it inappropriate, in its current form, for academic publication). Here are a few key ideas:

    Some concepts have the following features:

    1) They are morally important. When they show up as features of a situation, they usually ought to influence our moral judgment, albeit in conjunction with other features.
    2) These concepts lack consistent moral "valence." Depending on the situation, they can make it worse or better. There are no general rules that reliably tell us what their valence will be across all instances that fit a certain description. By way of analogy (which I owe to Simon Blackburn), we can't tell in advance--or by means of a principle or rule--whether a splash of red paint will make a painting better or worse. That is because the proper unit of aesthetic analysis is the whole painting, not an area within it. Nevertheless, a splash of red paint is important to the overall beauty of a painting. It might ruin a Vermeer but save a De Kooning. Likewise, we can't tell whether love makes a situation better or worse; but it usually matters.
    3) These concepts are indispensable. We cannot resolve moral questions appropriately by appealing only to concepts that avoid 1) and 2).

    I think these three features apply to all of the traditional virtues and vices: courage, pride, partiality, respect, and many more. As an example, consider love. Love is morally significant and can be either good or bad depending on the situation. The question is whether we can use a rule or principle to distinguish the good cases of love from the bad. Then we could replace the ambiguous word "love" with two words, one for the good form and the other for the bad. (Or there might turn out to be more than two subsets of love.)

    I don't have a proof that such an analysis must fail. But I doubt that it can succeed, because I suspect that we humans happen to have an emotion, love, that can be either good or bad, or can easily change from good to bad (or vice-versa), or can be good and bad at the same time in various complex ways. Even when love is good, it carries some problematic freight because of its potential to be bad. And when love is wrong, it nevertheless has some redeeming qualities because it is akin to love that is good. I think the instinct to divide it into eros and agape or other such subcategories is fundamentally mistaken.

    (That does not mean, however, that there is no difference between good and bad love. Moral judgment is necessary, but it has to be about situations, not about concepts in the abstract.)

    Numerous implications follow from this doctrine, but the one I want to mention here is political. For a particularist, there can be no technique for resolving moral questions that is analogous to the techniques of economics, engineering, or law. First, there cannot be a computational method (such as the one that utilitarianism promises), because that would presume that one consistent good, such as happiness, is the only concept that counts. The particularist replies that other concepts must also be considered, and they happen to be unpredictable. Second, the particularist doubts that we can develop a set of sharp and valid moral definitions or principles and then apply them to cases. Some moral concepts may be definable in ways that give them consistent moral valence, but others are not.

    Thus there is no expertise or procedure that will yield wise moral judgments. However, particularism is consistent with public deliberation. When people discuss what should be done, they apply rules and principles--sometimes validly and sometimes not. But they also tell stories so as to bring out the salient features of a situation and depict those features in a positive or a negative way. They place particular aspects of the situation in various contexts. And they bring out themes (not rules or principles but repeated motifs of moral significance). In making such arguments, they apply their distinct backgrounds and perspectives. This is the best form of moral reasoning, assuming that particularism is right.

    I do not assume that everyone has equally valid and useful points to contribute in deliberation. Yet we should allow everyone to participate because: (a) rules that exclude some and favor others tend to be biased--merely to serve special interests, and (b) many more people have valid moral insights than one might think. Thus particularism does not imply egalitarianism, but it counts in its favor.

    permanent link | comments (0) | category: philosophy

    October 30, 2006

    the origins of government

    Would this work as a definition of a government? "An institution designed to outlast individual human beings that operates within a fixed geographical territory; it has permanent fiscal accounts, offices with mutually consistent and complementary roles that are held temporarily by individuals, and real property. It has some authority over all the people and institutions within its territory (where 'authority' means the ability to make and enforce rules claimed to be legitimate)."

    If this definition works, then Florence had a government in 1300. Dante, for example, held various offices for his city, was paid for his work out of public accounts, made binding decisions while he was a city magistrate, and represented the government abroad. When he was exiled, he left the jurisdiction and employ of Florence; his office and legal power passed to another man.

    In Dante's time, England basically lacked a government. That is not to say that England was disorganized or backward. The English erected great cathedrals, castles, schools, and universities; their leading cities were international entrepôts; their knights were capable of ransacking France. Nor was England an individualistic and atomized society--on the contrary, people were bound to one another by obligations, often inherited and unshakable.

    But there was no English government. A baron was a personal vassal of the king, to whom he owed certain duties and from whom he could expect protection. Each baron had many vassals who owed him duties (as men personally obligated to other men). And each peasant was a vassal of a minor lord, entitled to certain birthrights, such as use of particular fields and woods, but obligated to work the land of his ancestral village and share the crop with his lord. The borders of the realm depended on what fiefs the monarch had inherited; thus the "national" territory might shift with each change of king.

    None of the offices of the realm, from monarch to peasant, was governmental in the modern sense. Take Justices of the Peace: they were the closest equivalents of modern police, but they were not paid, trained, or overseen. They were just vassals of the monarch who were morally obligated to preserve the King's Peace by sword or by persuasion. There was a public treasury, the Exchequer, but it had very minor importance. Even when Queen Elizabeth I ascended the throne in 1558, she was expected to pay for what we would call "government" (e.g., foreign embassies) out of her inherited wealth, rents on the extensive lands that she personally owned, plus some import duties. Her claims to sovereign power were controversial, and in any case, she lacked the personnel, the files, and the budget needed to "govern" in the modern sense.

    She did obtain an effective espionage service when Sir Francis Walsingham started paying for secret information out of his own pocket; Elizabeth then authorized him to supplement those payments from her treasury. Even so, the English secret service was really just a group of Sir Francis' servants and retainers, and he was a personal retainer of the Queen. When Walsingham died, so did the organization.

    In men like Walsingham, we see the origins of government. He was a professionally trained expert (a lawyer), not a nobleman with any hereditary powers. He held an appointed office, Mr. Secretary, which he was free to quit. He structured his civil service as a bureaucracy and tried to serve the permanent interests of England as a Protestant state, not merely those of his Queen. However, had Elizabeth married François, the Duke of Anjou and Alençon (as she threatened), then Walsingham would have faced a choice. This Puritan lawyer could have become a personal servant of a Catholic French nobleman, or he could have quit public life.

    The medieval case shows that we could have elaborate social structures without governments; that is a relevant conclusion at a time of globalization, when governments are losing authority over fixed territories. It is not clear, however, that we can have elaborate social structures and personal liberties without governments.

    permanent link | comments (3) | category: philosophy

    October 24, 2006

    smelling memories

    (On my way back to Chicago for another meeting.) Sit quietly, close your eyes, and recall the scent of a lemon ... soy sauce ... pepper ... gasoline ... a baked apple. Inhale through your nose as you remember these smells. I find this entertaining, and I can get quite precise about it. For example, I can choose whether to remember a bitter lemon smell (with some of the white pith), or the pure scent of the inside of the fruit.

    It appears that memories of smells decay more slowly than other sensory memories. This is a bit surprising, because "each olfactory neuron in the epithelium only survives for about 60 days, to be replaced by a new cell." Dr. Maturin in one of the Patrick O'Brian novels notices the power of smells to restore memories and hypothesizes that it's because we don't have many words for scents. He thinks that because we translate our visual and auditory experiences into language, we tend to forget them, whereas we retain our olfactory sensations in their raw form.

    When people (like O'Brian and Proust) write about memory and smell, they usually describe the power of real scents to evoke lost memories. The reverse is interesting, too: the power of deliberate recollection to conjure up imaginary smells.

    permanent link | comments (5) | category: philosophy

    September 26, 2006

    torture: against honor and liberty

    In the Hamdan decision, the Supreme Court said that torture was our responsibility. We couldn't allow the president to decide secretly whether and when to obey the Geneva Convention. There would have to be a public law, passed by our representatives, subject to our review at the next election.

    Alas, the Congress appears likely to pass legislation that will permit torture, buoyed by polls that suggest the American people prefer to sacrifice our ancient common law principles in favor of spurious security. Our national honor and liberty are at risk. Those are old-fashioned terms, more securely anchored in conservative than in progressive thought. Yet they are precisely the correct terms, as I shall argue here.

    Torture is dishonorable because of the perverted personal relationship that it creates between the torturer and the victim. That is why people of honor do not torture, and nations with honor do not condone it. As David Luban writes: "The torturer inflicts pain one-on-one, deliberately, up close and personal, in order to break the spirit of the victim--in other words, to tyrannize and dominate the victim. The relationship between them becomes a perverse parody of friendship and intimacy: intimacy transformed into its inverse image, where the torturer focuses on the victim's body with the intensity of a lover, except that every bit of that focus is bent to causing pain and tyrannizing the victim's spirit."

    Torture may not be the worst injustice. To bomb from 30,000 feet can be more unjust, because more may die. To imprison 5.6 million Americans may be more unjust, because one in 37 of us spends months or years in dangerous, demeaning, state-run facilities. But there is a difference between injustice and dishonor. Bombing people and locking them up are impersonal, institutional acts. Torture is as intimate as rape. It sullies in a way that injustice does not. That is why the House of Lords ruled in 2005: "The use of torture is dishonourable. It corrupts and degrades the state which uses it and the legal system which accepts it."

    Torture threatens liberty because it gives the state the power to generate testimony and evidence contrary to fact, contrary even to the will of the witness. It thus removes the last constraint against tyranny, which is truth. Torture has been forbidden in English common law since the Middle Ages, not because medievals were squeamish about cruelty--their punishments and executions were spectacularly cruel--but because a king who could use torture in investigations and interrogations could reach any conclusions he wanted.

    Torture is personal, yet torture is an institution. One cannot simply decide to torture in a one-off case, a hypothetical instance of a ticking time bomb. To be effective, torture requires training, equipment, expertise, and settings. The bureaucracy of torture then inevitably seeks to justify and sustain itself--if necessary, by using torture to generate evidence of its effectiveness. As Phronesisaical says, "Torture requires an institution of torture, which ... entails a broader torture program than the administration would have us believe." Again, the Lords were right:

    The lesson of history is that, when the law is not there to keep watch over it, the practice is always at risk of being resorted to in one form or another by the executive branch of government. The temptation to use it in times of emergency will be controlled by the law wherever the rule of law is allowed to operate. But where the rule of law is absent, or is reduced to a mere form of words to which those in authority pay no more than lip service, the temptation to use torture is unrestrained.

    permanent link | comments (0) | category: Iraq and democratic theory , philosophy

    September 25, 2006

    being Pope means never having to say you're sorry

    I have now read the full text of Pope Benedict's Sept. 12 lecture, a passage of which provoked global controversy and violence. I read it with an open mind and genuine interest, but it seems to me that the section on Islam is gratuitous and rather poorly argued.

    As the Pope said in his quasi-apology, he meant his discussion of Islam to be incidental to his main theme, which concerns the relationship between faith and reason in Christianity. This is the skeleton of his argument:

    The Greeks, being philosophical, decided that God could not (or would not) act "unreasonably": in other words, against logos. On this basis, Socrates and other sophisticated Greek thinkers rejected myth, which had described gods acting arbitrarily. Their equation of divinity with reason already influenced Jewish thought before Jesus' time. The Hebrew Bible evolved from mythical thinking toward an abstract, rational, omniscient deity (first evident in the words from the burning bush: "I am"). The association of reason with divinity was also essential in the Gospels, as shown by John's prologue: "In the beginning was ho logos."

    According to Benedict, the union of faith and reason naturally took place in Europe, where reason had been born, not in the irrational East: "Given this convergence, it is not surprising that Christianity, despite its origins and some significant developments in the East, finally took on its historically decisive character in Europe."

    However, faith and reason have come apart in Europe since the 16th century. First Protestants tried to strip the Bible of Greek metaphysics and treat it only as a sequence of literal events. Liberal theologians (including some Catholics) reinforced this tendency when they advocated a "return simply to the man Jesus and to his simple message, underneath the accretions of theology and indeed of hellenization."

    It is a mistake to drive philosophical reason out of religion, Benedict argues, because God is rational and can be understood by means of philosophy. It is also an error to imagine science without faith:

    [The] modern concept of reason is based, to put it briefly, on a synthesis between Platonism (Cartesianism) and empiricism, a synthesis confirmed by the success of technology. On the one hand it presupposes the mathematical structure of matter, its intrinsic rationality, which makes it possible to understand how matter works and use it efficiently: this basic premise is, so to speak, the Platonic element in the modern understanding of nature. On the other hand, there is nature's capacity to be exploited for our purposes, and here only the possibility of verification or falsification through experimentation can yield ultimate certainty.

    Because modern rationality assumes that nature has a mathematical character, science hints at transcendence. But because it views empirical verification as the criterion of rationality, it rules out the possibility of God. This is a contradictory position, Benedict thinks. He recommends that we "acknowledge unreservedly" the benefits of science, yet we must "[broaden] our concept of reason and its application" so that it can encompass faith. By reuniting faith and reason, the West will reopen a dialogue with "profoundly religious cultures," which cannot fathom "a reason which is deaf to the divine."

    All of the above seems fairly mainstream for a conservative Catholic theologian. But the Pope chooses to illustrate his argument with a digression about Islam. He says that for the Byzantine emperor Manuel II Paleologus, "spreading the faith through violence is something unreasonable. Violence is incompatible with the nature of God and the nature of the soul." This "statement is self evident" to "a Byzantine shaped by Greek philosophy." In contrast, for an "educated Persian" who debates Paleologus, "God is absolutely transcendent ..., not bound even by his own word."

    This is a very odd example to support Benedict's major point. Did Paleologus really emphasize that conversion by the sword was "unreasonable"--incompatible with logos--and thus alien to God? Or did he simply say that it was wrong? Did the Persian really reply that God was "absolutely transcendent," and therefore it was appropriate to convert people forcibly despite the dictates of reason? Or did the Persian agree with the Emperor about forcible conversion, citing Qur'an 2:256: "There shall be no compulsion in religion: the right way is now distinct from the wrong way."

    Benedict calls this passage from the Qur'an "one of the suras of the early period, when Mohammed was still powerless and under threat." Later, according to Benedict, Mohammed preached holy war. I am not competent to assess that interpretation of the Qur'an. But I would note a resemblance between Paleologus and the young Mohammed: both led groups who were very vulnerable to conquest. Indeed, Byzantium soon fell to a Moslem army (one that tolerated Christians and Jews). On the other hand, when Christians have been triumphant, they have not always been eager to argue that faith must be voluntary.

    David Cook writes, "Islam was not in fact 'spread by the sword'--conversion was not forced on the occupants of conquered territories--but the conquests created the necessary preconditions for the spread of Islam." One could write exactly the same thing about Christianity. For example, the Catholic Encyclopedia notes the advantages enjoyed by the first Franciscans in Mexico: "The fact that they had found the territory conquered, and the inhabitants pacified and submissive, had greatly aided the missionaries; they could, moreover, count on the support of the Government, and the new converts on its favour and protection."

    The Catholic Encyclopedia denies that Mexican natives were converted by force, but there were certainly wars declared for the purpose of converting countries to Christianity. As the Encyclopedia itself states: "The meaning of the word crusade has been extended to include all wars undertaken in pursuance of a vow, and directed against infidels, i.e. against Mohammedans, pagans, heretics, or those under the ban of excommunication. The wars waged by the Spaniards against the Moors constituted a continual crusade from the eleventh to the sixteenth century; in the north of Europe crusades were organized against the Prussians and Lithuanians; the extermination of the Albigensian heresy was due to a crusade, and, in the thirteenth century the popes preached crusades against John Lackland and Frederick II."

    Thus I can imagine the "educated Persian" (a patronizing description, by the way) arguing that mass conversions to Christianity have often followed conquest. He could have observed cases in which Moslems tolerated Jews and Christians and cited the Book of Revelation to illustrate Christian bloodthirstiness: "And out of his mouth goeth a sharp sword, that with it he should smite the nations: and he shall rule them with a rod of iron: and he treadeth the winepress of the fierceness and wrath of Almighty God."

    The Pope was widely criticized for his lecture. As we know, he issued a new statement:

    At this time, I wish also to add that I am deeply sorry for the reactions in some countries to a few passages of my address at the University of Regensburg, which were considered offensive to the sensibility of Muslims. These in fact were a quotation from a medieval text, which do not in any way express my personal thought.

    I by no means condone violent reactions to Pope Benedict's lecture. However, it strikes me that:

    1) The digression about Islam and violence was gratuitous in an essay supposedly about faith and reason;

    2) The Emperor Paleologus was obviously quoted to express Benedict's personal thoughts;

    3) The equation of Europe with reason (and the East with arbitrariness) is disturbing; and

    4) It shows bad faith to depict Islam as a religion spread by the sword without at least noting the advantages that Christianity has reaped from violence.

    permanent link | comments (0) | category: philosophy

    August 24, 2006

    how to respond to the terror risk

    A diverse range of people are arguing that we have overreacted to terror threats since 9/11. Their arguments include the following:

  • The statistical risk of being killed by a terrorist is very low. As John Mueller writes in a paper for the libertarian Cato Institute (pdf), "Even with the September 11 attacks included in the count, the number of Americans killed by international terrorism since the late 1960s (which is when the State Department began counting) is about the same as the number of Americans killed over the same period by lightning, accident-causing deer, or severe allergic reaction to peanuts."
  • Responses to terror, however, can be very costly. Consider the price and inconvenience of airport screening procedures. Or the deaths caused when people drive instead of flying because they are afraid of terror. Or public support for the Iraq war.
  • Acting terrified of terror encourages terrorists. It means that they can damage America simply by talking about plots. There is an emerging "we-are-not-afraid" movement that argues we ought to react to terrorist threats in a calm and unruffled manner.
  • The alleged British bombing plot probably shows a desire to blow up airplanes, but the conspirators may have been far from being able to pull off the terror of which they dreamed. (Phronesisaical has links.)
  • Fear of terror steers public resources to certain agencies and companies that have an incentive to stoke the fear further.
  • Irrational fear of terror distorts public opinion, to the advantage of incumbent politicians. Some see evidence of Machiavellian manipulation; but Mueller draws a more cautious conclusion: "There is no reason to suspect that President Bush's concern about terrorism is anything but genuine. However, his approval rating did receive the greatest boost for any president in history in September 2001, and it would be politically unnatural for him not to notice. ... This process is hardly new. The preoccupation of the media and of Jimmy Carter's presidency with the hostages taken by Iran in 1979 to the exclusion of almost everything else may look foolish in retrospect. ... But it doubtless appeared to be good politics at the time--Carter's dismal approval rating soared when the hostages were seized."

    I think these are good points, but there is another side to consider. It's unreasonable to adopt a strictly utilitarian calculus that treats all deaths as equally significant. Every human being counts the same, yet we are entitled to care especially about some tragic events. If deaths were fungible, then none would really matter; they would all be mere statistics.

    In particular, as a nation, we are entitled to care more about the 2,700 killed on 9/11 than about the roughly similar number of deaths to tonsil cancer in 2001. Pure utilitarianism would tell us that 9/11 happened in the past; thus it's irrational to do anything about it, other than to try to prevent a similar disaster in the future. And it's irrational to put resources into preventing a terrorist attack if we could prevent more deaths by putting the same money and energy into seat belts or cancer prevention. However, the attack on 9/11 was a story of hatred against the United States, premeditated murder, acute suffering, and heroic response. Unless we can pay special attention to moving stories, there is no reason to care about life itself.

    In my view, we can rationally respond to 9/11 by bringing the perpetrators to justice, even at substantial cost, and even if they pose no threat. That violates the utilitarian reasoning that underlies Mueller's argument. However, note that the Bush administration has not brought Bin Laden to justice. Also note that the 9/11 story may justify vengeance, but it does not justify excessive fear about similar attacks.

    Finally, we must think carefully about responsibility. On a pure utilitarian calculus, we might be better off with virtually no airport security. A tiny percentage of people would be killed by bombers, because there aren't very many terrorists with the will and the means to kill. By getting rid of airport screenings, we would save billions of dollars and vast amounts of time, and possibly even save lives by encouraging more people to fly instead of drive. But this reasoning doesn't work. If a government cancelled airport screening procedures, some people would die, and it would not be irrational to pin the responsibility for those deaths on the government.

    Thus no government can dismiss the terror threat, because people understandably hold the national security apparatus responsible for protecting them against terror. In contrast, protection against tonsil cancer is not seen as a state responsibility. I like the following passage by Senator McCain (quoted in Mueller), but I'm not sure that any administration could get away with using it as an anti-terror policy:

    Get on the damn elevator! Fly on the damn plane! Calculate the odds of being harmed by a terrorist! It's still about as likely as being swept out to sea by a tidal wave. Suck it up, for crying out loud. You're almost certainly going to be okay. And in the unlikely event you're not, do you really want to spend your last days cowering behind plastic sheets and duct tape? That's not a life worth living, is it?

    permanent link | comments (5) | category: philosophy

    August 16, 2006

    the difference between economics and psychology

    To tell the truth, I have never taken a single course in either economics or psychology. However, my professional interests have led me to read a fair amount in both disciplines and to talk to scholars of both persuasions. I think I have noticed a basic difference.

    Economists are interested in concrete actions: behaviors. They began by studying financial exchanges, but now they will investigate practically anything, including learning, war, marriage, and civic participation, as long as it involves observable or reportable acts. In contrast, psychologists (since the decline of behaviorism) are interested in mental states, many of which are not directly observable. You can't see what someone's identity or mood or capacity is, nor can you necessarily ask the person directly. Psychologists tend to measure these mental states by asking many questions or making many observations and creating statistically reliable "constructs." Thus they like to use scales and factor analysis. (See this apparently classic 1955 paper.) Economists are suspicious of such constructs because there is always an imperfect correlation between the construct and its directly measured components.
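
    To make the psychologists' procedure concrete, here is a minimal sketch (simulated data, not a real survey instrument; it assumes numpy and scikit-learn's FactorAnalysis) of extracting a single "construct" from several noisy items--and of the imperfect item-construct correlations that make economists suspicious:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_respondents, n_items = 500, 6

# A latent trait (say, "civic identity" -- a made-up example) that cannot be observed directly ...
latent = rng.normal(size=(n_respondents, 1))
# ... and six survey items, each an imperfect, noisy reflection of that trait.
loadings = rng.uniform(0.5, 1.0, size=(1, n_items))
items = latent @ loadings + 0.7 * rng.normal(size=(n_respondents, n_items))

# Extract a single-factor "construct" from the correlated items.
fa = FactorAnalysis(n_components=1)
construct = fa.fit_transform(items)[:, 0]

# The construct typically tracks the latent trait more closely than individual
# items do, yet still imperfectly -- roughly the economist's complaint.
print(round(abs(np.corrcoef(construct, latent[:, 0])[0, 1]), 2))
for j in range(n_items):
    print(j, round(abs(np.corrcoef(items[:, j], latent[:, 0])[0, 1]), 2))
```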

    I don't think you can tell the difference between the disciplines by asking what they study: psychologists explore human behavior in markets, and modern economists investigate practically everything. Instead, the divide is between a kind of empiricism or nominalism that distrusts general constructs, versus a kind of philosophical "realism" that takes unobserved mental states seriously.

    As for political science--with apologies to my many friends in that field, it isn't a discipline at all, but rather a topic area that uses methods from economics, psychology, philosophy, and narrative history.

    permanent link | comments (2) | category: philosophy

    March 23, 2006

    democracy as education, education for democracy

    I've been commissioned to write an article about John Dewey's 1927 book, The Public and its Problems, and what it implies for contemporary democratic practice. Given my own interests, I have focused on its implications for public deliberation and civic education. My whole first draft is pasted "below the fold" for anyone who's interested in Dewey or the philosophy of democratic education.

    For John Dewey, the link between democracy and learning was profound and reciprocal. Dewey defined "democracy" as any process by which a community collectively learns, and "education" as any process that enhances individuals' capacity to participate in a democracy. Although these definitions pose difficulties, they constitute an insightful and original theory that remains relevant 80 years after Dewey wrote The Public and its Problems. His theory is especially illuminating for those concerned about public deliberation and civic education.

    On a conventional definition, "democracy" is a system of government that honors equity and freedom. In a democracy--or so we are taught--every adult has one vote, and all may speak freely. For Dewey, however, such rules were merely tools that happened to be in current use. No institution (including free elections and civil rights) could claim "inherent sanctity." There were no general principles, no "antecedent universal propositions," that distinguished just institutions from unjust ones. The nature of the good society was "something to be critically and experimentally determined." [1927, p. 74]

    As described so far, Dewey's theory of democracy gives no guidance and makes no distinctions. If we reject all "antecedent universal propositions," then we cannot know that a system of free elections is better than a tyranny. However, Dewey had one profound commitment, to collective learning. Thus he valued the American constitutional system, not because all human beings were truly created equal, and not because elections would generate fair or efficient outcomes, but because democracy promoted discussion, and discussion was educative. "The strongest point to be made in behalf of even such rudimentary political forms as democracy has already attained, popular voting, majority rule and so on, is that to some extent they involve a consultation and discussion which uncover social needs and troubles." [1927, p. 206]

    If learning is our goal, then we could spend our time reading books or observing nature. However, the kind of learning that Dewey valued most was social and experiential. A democracy was a form of social organization in which people realized that they were interconnected and learned by working together. "Wherever there is conjoint activity whose consequences are appreciated as good by all singular persons who take part in it, and where the realization of the good is such as to effect an energetic desire and effort to sustain it in being just because it is a good shared by all, there is in so far a community. The clear consciousness of a communal life, in all its implications, constitutes the idea of democracy." [1927, p. 149]

    It might seem strange to evaluate societies and institutions largely as opportunities for collective education. But that approach emerged from Dewey's beliefs about the purpose of life itself. In Democracy and Education (1916), he argued that individual life had value as experience; and the richer the experience, the better. The value of a society was to permit individuals to share and enlarge their experiences by communicating. "The ulterior significance of every mode of human association," he wrote, is "the contribution which it makes to the improvement of the quality of experience." [1916, p. 12] It followed that a "democracy is more than a form of government; it is primarily a mode of associated living, of conjoint communal experience." [1916, p. 93]

    I think that Dewey's rejection of universal propositions in favor of continuous collective learning was problematic. As he noted, "every social institution is educative in effect." [1916, p. 12] However, not every educative institution is democratic. Consider science, which Dewey valued very highly. Science is a collective enterprise and an excellent means of learning. However, when it works as advertised, it is meritocratic, not democratic. If we equate democracy with collective learning, then we may weaken our commitment to equality and try to organize the government on the same principles as science (as Dewey recommended in Liberalism and Social Action, 1935), or we may try to democratize scientific research. Both reforms are mistakes, in my view.

    Or consider any society in which some oppress others and deprive them of rights. Such arrangements are consistent with "learning": the oppressors learn to dominate, and the oppressed learn to manage. Indeed, the two classes learn together, and they may learn continuously. I would deny that such a system is democratic, because it violates antecedent principles of equality. But Dewey's deep pragmatism prevented him from endorsing such external principles.

    In Democracy and Education, Dewey recognized that "in any social group whatever, even in a gang of thieves, we find some interest held in common, and we find a certain amount of interaction and cooperative intercourse with other groups. From these two traits we derive our standard. How numerous and varied are the interests which are consciously shared? How full and free is the interplay with other forms of association?" In a "criminal band," Dewey thought, the shared interests must be narrow ("reducible almost to a common interest in plunder") and the group must isolate itself from outsiders. [1916, p. 89]. In a good society, by contrast, everyone has everyone else's full range of interests at heart and there are dense networks connecting all sectors.

    This ideal seems more satisfactory than a simple commitment to "learning," but it relies on the kind of abstract moral principles that Dewey elsewhere rejects. For example, concern for the holistic wellbeing of all fellow human beings is a strong moral commitment, characteristic of Kantianism. It does not derive logically from the concept of communal learning, but is a separate principle. It is not clear to me how a Deweyan pragmatist can embrace it.

    Notwithstanding this qualification, there is much of value in Dewey's theory. For those who promote concrete experiments in public deliberation, a theory of democracy-as-learning is inspirational. It explains why adults should be, and are, motivated to gather and discuss public problems: discussion is virtually the purpose of human life. Dewey's theory also provides a response to those who say that small-scale public deliberation is "just talk," that it lacks sufficient impact on votes and policies. Dewey would reply that the heart of democracy is not an election or the passage of a law, but personal growth through communication. "There is no liberal expansion and confirmation of limited personal intellectual endowment which may not proceed from the flow of social intelligence when that circulates by word of mouth from one to another in the communication of the local community." [1927, p. 219]

    Dewey's endorsement of verbal communication does not mean, however, that speech should be disconnected from action. "Mind," he thought, "is not a name for something complete by itself; it is a name for a course of action in so far as that is intelligently directed." [1916, p. 139] Likewise, deliberation (which is thinking by groups) should be linked to concrete experimentation. Public deliberation is most satisfying and motivating--and most informed and disciplined--when the people who talk also act: when they argue from personal, practical experience and when their decisions have consequences for their individual and collective behavior.

    Dewey was a developmental thinker: he understood that human beings change over the course of their life cycles and that a society needs different contributions from each generation. For adults, learning must be collective and voluntary. Adults cannot be given reading assignments on government or public affairs. The forms of adult learning that most interested Dewey were face-to-face adult deliberations, membership in voluntary associations, and communication via the mass media (in his day, newspapers and radio).

    However, in a complex society, he thought, children have too much to learn in too short a time for them to be allowed simply to experience discussions and associations. For them, "the need of training is too evident; the pressure to accomplish a change in their attitude and habits is too urgent. ... Since our chief business with them is to enable them to share in a common life we cannot help considering whether or no we are forming the powers which will secure this ability." Thus the need for a "more formal kind of education": in other words, "direct tuition or schooling." [1916, p. 10] Note again that the purpose of education is to prepare students to "share in a common life" of continual learning.

    Contrary to what some critics of Dewey claim, he favored "direct tuition" as an efficient means of transmitting accumulated knowledge to children so that they could become competent citizens within a reasonable amount of time. However, he recognized that merely imparting information was not good pedagogy. "Formal instruction ... easily becomes remote and dead--abstract and bookish, to use the ordinary words of depreciation." [1916, p. 11] Besides, the most profound effects of education (for better or worse) came from the way schools operated as mini-societies, not from the formal curriculum. "The development within the young of the attitudes and dispositions necessary to the continuous and progressive life of a society cannot take place by direct conveyance of beliefs, emotions, and knowledge. It takes place through the intermediary of the environment." [1916, p. 26] In other words, what adults demonstrated by how they organized schools was more important than what they told their students in lectures and textbooks.

    Dewey argued that young people were more "plastic" than their elders, more susceptible to being deliberately educated. Recent research bears him out. There is ample evidence that civic experiences in adolescence have lasting effects. For example, in an ongoing longitudinal study of the high school class of 1965, Kent Jennings and his colleagues have found that participation in student government and other civic extracurricular activities has a positive effect on people's participation in civil society almost forty years later. More than a dozen longitudinal studies of adolescent participation in community service have found positive effects as much as ten years later. And Doug McAdam's rigorous study of the Freedom Summer voting-rights campaign shows that the activists' experience in Mississippi (admittedly, an intense one) permanently transformed them.

    In contrast, few studies of deliberately educative civic experiences find lasting effects on adult participants. We can explain the difference as follows. Young people must form some opinion about politics, social issues, and civil society when they first encounter those issues in adolescence. Their opinion may be the default one (disinterest) or it may be critical engagement, enthusiastic support, or some other response. Once they have formed a basic orientation, it would take effort and perhaps some psychological distress to change their minds. Therefore, most young adults settle into a pattern of behavior and attitudes in relation to politics that lasts for the rest of their lives, unless some major shock (such as a war or revolution) forces them to reconsider. In a country like the United States, when adults change their political identities, the change results from voluntary experiences, not from exhortations or any form of mandatory civic education.

    Adults are much less "plastic" than adolescents and less susceptible to deliberate civic education, but it would be immoral to write them off. It is crucial, however, to invest in the democratic education of young people, since they will be permanently shaped by the way they first experience politics, social issues, and civil society. Civic education, as Dewey recommended, must include not only formal instruction but also concrete experiences and the whole "environment" of schools. Indeed, "one of the weightiest problems with which the philosophy of education has to cope is the method of keeping a proper balance between the informal and the formal, the incidental and intentional, modes of education." [1916, p. 12]

    Dewey and some of his contemporaries tried to "reorganize" American education "so that learning takes place in connection with the intelligent carrying forward of purposeful activities." [1916, p. 144] Dewey called this reorganization "slow work," and it did encounter many frustrations. Nevertheless, he and his fellow educational Progressives achieved some striking reforms.

    First, to give students opportunities for purposeful civic activities, the Progressives founded student governments and school newspapers. Evaluations find that these activities have lasting positive effects on students' civic engagement, yet the percentage of American students who participate has declined by 50 percent since the 1960s, in large part because high schools have been consolidated. (Fewer schools mean fewer student governments and newspapers.)

    The Progressives also created the first courses on "civics" and "social studies." These subjects grew at the partial expense of history, which followers of Dewey saw (mistakenly, in my opinion) as an overly "academic" discipline. In 1915, the US Bureau of Education formally endorsed a movement for "community civics" that was by then quite widespread. Its aim was "to help the child know his community--not merely a lot about it, but the meaning of community life, what it does for him and how it does it, what the community has a right to expect from him, and how he may fulfill his obligations, meanwhile cultivating in him the essential qualities and habits of good citizenship."

    In 1928-9, according to federal statistics, more than half of all American ninth-graders took "civics." That percentage had fallen to 13.4 by the early 1970s. In 1948-9, 41.5 percent of American high school students took "problems of democracy," another Progressive innovation, which typically involved reading and debating stories from the daily newspaper. By the early 1970s, that percentage was down to 8.9.

    Nevertheless, the percentage of high school students who have taken any government course has been basically steady since 1915-1916. Although the historical data have gaps, it appears most likely that "civics" and "problems of democracy" have disappeared since 1970, while American history, world history, and American government have either stayed constant or grown. As Nathaniel Schwartz notes, the old civics and problems of democracy textbooks addressed their readers as "you" and advocated various forms of participation. Today's American government texts discuss the topics of first-year college political science: how a bill becomes a law, how interest groups form, how courts operate. Social studies arose during the Progressive Era, when philosophical pragmatists argued for a curriculum of practical relevance to democracy. Social studies and civics seem to be waning at a time when academic rigor is the first priority and high schools take their cues from colleges.

    Finally, Dewey and his allies were interested in the overall design of schools: their location, physical architecture, bureaucratic structure, and rules of admission and graduation. They sought to integrate schools into the broader community and to make them into democratic spaces in which young people and adults would practice citizenship by working together on common tasks.

    Today, however, many students attend large, incoherent, "shopping mall" high schools that offer long lists of courses and activities, as well as numerous cliques and social networks. Students who enter on a very good track or who have positive support from peers and family may make wise choices about their courses, friends, co-curricular activities, and next steps after graduation. They can obtain useful civic skills and habits by choosing demanding courses in history and social studies, by joining the student newspaper or serving in the community, and by interacting with administrators. However, relatively few students--usually those on a path to college--can fill these roles in a typical high school. Other students who are steered (or who steer themselves) into undemanding courses and away from student activities will pay a price for the rest of their lives. Serious and lasting consequences follow from choices made in early adolescence, often under severe constraints.

    Typical large high schools also tend to have frequent discipline problems, a general atmosphere of alienation, and internal segregation by race, class, and subculture. Often, they occupy suburban-style campuses, set far apart from the adult community of work, family, religion, and politics. Even worse, some of these huge schools occupy prison-like urban blocks, secured with gates and bars. Parents and other adults in the community have little impact on these big, bureaucratic institutions. Therefore, schools are rarely models of community participation, nor do they create paths for youth to participate in the broader world.

    Although large high schools offer opportunities for self-selected students to be active citizens--running for the student government, creating video broadcast programs, and engaging in community service--most of their fellow students have no interest in their work. Why pay attention to the student government, or watch a positive hip-hop video that your peers have produced, if you do not share a community with them? Commercial products are more impressive and entertaining.

    Since the 1960s, one of the most consistent findings in the research on civic development is the following: Students who feel that they and their peers can have an impact on the governance of their own schools tend to be confident in their ability to participate in their communities and interested in public affairs. However, it is impossible for anyone to influence the overall atmosphere and structure of a huge school that offers a wide but incoherent range of choices and views its student population merely as consumers. To make matters worse, school districts have been consolidated since Dewey's time, so that there are dramatically fewer opportunities for parents and other adults to govern their own public schools. According to data collected by Elinor Ostrom, the number of elected school board seats has shrunk by 86% since 1930, even as the population has more than doubled.

    Those with the most education (relative to their contemporaries) are by far the most likely to participate in democracy--which suggests that education prepares people for citizenship. During the course of the twentieth century, each generation of Americans attained, on average, a higher level of education than those before. Educational outcomes also became substantially more equal. When we put these facts together, we might assume that participation must have increased steadily during the 1900s. On the contrary, voting rates are considerably lower than they were a century ago; levels of political knowledge are flat; membership in most forms of civic association is down; and people are less likely to say that they can make a difference in their communities.

    Although many causes have been suggested for these declines, part of the problem is surely a decline in the quality of civic education. People are spending many more years in school, but getting less education for democracy. What we need is just what Dewey and his allies championed--not merely government classes (although they have positive effects and are in danger of being cut), but also community-service opportunities that are connected to the academic curriculum, student governments and student media work, and the restructuring of schools so that they become coherent communities reconnected to the adult world.

    --
    Sources

    Dewey, John, Democracy and Education, 1916 (Carbondale and Edwardsville: Southern Illinois University Press, 1985).

    ----------------, The Public and Its Problems (New York: Henry Holt, 1927).

    permanent link | comments (2) | category: philosophy

    January 16, 2006

    an exercise for Martin Luther King Day

    I find it useful to teach WALKER v. CITY OF BIRMINGHAM, 388 U.S. 307 (1967) as an example of legal and moral reasoning. This is the case that originated with the arrest of Martin Luther King and 52 others in Birmingham, AL, at Easter, 1963. It is a rich example for exploring the rule of law, civil disobedience, religion versus secular law, procedures versus justice, and even the way that our moral conclusions follow from how we choose to tell stories.

    By way of background:

    In 1963, the Southern Christian Leadership Conference (SCLC) hoped to generate massive protests in Birmingham before the end of the term of Eugene 'Bull' Connor, the violently racist Commissioner of Public Safety. As the protests began, Connor obtained a state-court injunction against the marchers. When the SCLC leaders received the injunction on April 11, they stated, "we cannot in good conscience obey" it. King called it a "pseudo" law which promotes "raw tyranny under the guise of maintaining law and order."

    At this point, the Direct Action campaign is in crisis: there have been only 150 arrests so far, and no more bail credit is available. On April 12 (Good Friday), Norman Amaker, an NAACP lawyer, says that the injunction is unconstitutional, but breaking it will result in jail time. King disappears from a tense conference, reappears in jeans. "I don't know what will happen ... But I have to make a faith act. ... If we obey this injunction, we are out of business." Leads 1,000 marchers; he and 52 are arrested. He is sent to solitary confinement. In NYC, Harry Belafonte raises $50,000 for bail. The New York Times and President Kennedy condemn marches as ill-timed.

    April 15 (the day after Easter): MLK is released from solitary confinement, still in jail. Writes "Letter from a Birmingham Jail."

    April 26: King is sentenced to five days with a warning not to protest. Sentence is held in abeyance.

    May 2: Children's march. King: "We subpoena the conscience of the nation to the judgment seat of morality."

    May 20: Supreme Court strikes down Birmingham's segregation ordinances. A deal is worked out.

    September: bomb kills four little girls at Birmingham's Sixteenth Street Baptist Church.

    SCLC appeals King's conviction for two reasons: to overturn the Birmingham parade ordinance, and to prevent future uses of injunctions against civil rights marchers. The case is [Wyatt Tee] Walker v. City of Birmingham. It is not decided until 1967 by the Supreme Court, which upholds King's arrest and imprisonment on basically procedural grounds:

    The text of the Supreme Court decision, written by Potter Stewart, appears below, interleaved with my commentary and questions.
    On Wednesday, April 10, 1963, officials of Birmingham, Alabama, filed a bill of complaint in a state circuit court asking for injunctive relief against 139 individuals and two organizations. With whom does the opinion begin? How are those people described? What do we usually think of when we hear "city officials"? How else could these particular men be described? (Hint: the Klan was powerfully influential in city government). How would the narrative read if it started with King and the other civil rights leaders?
    The bill and accompanying affidavits stated that during the preceding seven days:
      "[R]espondents [had] sponsored and/or participated in and/or conspired to commit and/or to encourage and/or to participate in certain movements, plans or projects commonly called `sit-in' demonstrations, `kneel-in' demonstrations, mass street parades, trespasses on private property after being warned to leave the premises by the owners of said property, congregating in mobs upon the public streets and other public places, unlawfully picketing private places of business in the City of Birmingham, Alabama; violation of numerous ordinances and statutes of the City of Birmingham and State of Alabama . . . ."
    It was alleged that this conduct was "calculated to provoke breaches of the peace," "threaten[ed] the safety, peace and tranquility of the City," and placed "an undue burden and strain upon the manpower of the Police Department."
    How are the petitioners described? Were the petitioners a "mob" -- or a group of citizens assembled to petition for the redress of their grievances? Is there a correct answer to this question?
    What is not said about them? What context is missing? What are their alleged actions? How else could the SCLC's actions be described?
    The bill stated that these infractions of the law were expected to continue and would "lead to further imminent danger to the lives, safety, peace, tranquility and general welfare of the people of the City of Birmingham," and that the "remedy by law [was] inadequate." Apart from unrest, what else might the city officials fear?
    The circuit judge granted a temporary injunction as prayed in the bill, enjoining the petitioners from, among other things, participating in or encouraging mass street parades or mass processions without a permit as required by a Birmingham ordinance. Is the ordinance constitutional? If not, why not? Why did Connor get an injunction instead of arresting people under the ordinance? Does the opinion explain his motivations? Would it read differently if it did?
    Five of the eight petitioners were served with copies of the writ early the next morning. Several hours later four of them held a press conference. There a statement was distributed, declaring their intention to disobey the injunction because it was "raw tyranny under the guise of maintaining law and order." At this press conference one of the petitioners stated: "That they had respect for the Federal Courts, or Federal Injunctions, but in the past the State Courts had favored local law enforcement, and if the police couldn't handle it, the mob would." That night a meeting took place at which one of the petitioners announced that "[i]njunction or no injunction we are going to march tomorrow." The next afternoon, Good Friday, a large crowd gathered in the vicinity of Sixteenth Street and Sixth Avenue North in Birmingham. A group of about 50 or 60 proceeded to parade along the sidewalk while a crowd of 1,000 to 1,500 onlookers stood by, "clapping, and hollering, and [w]hooping." Does the SCLC "respect" the state courts? Should it? Why are the SCLC's disrespectful words quoted here? (See footnote #3: petitioners "contend that the circuit court improperly relied on this incident in finding them guilty of contempt, claiming that they were engaged in constitutionally protected free speech. We find no indication that the court considered the incident for any purpose other than the legitimate one of establishing that the participating petitioners' subsequent violation of the injunction by parading without a permit was willful and deliberate." Why then quote them verbatim?)

    The crowd is described as "hollering and [w]hooping." How else could they be described? Who's being quoted here?
    Some of the crowd followed the marchers and spilled out into the street. At least three of the petitioners participated in this march. Meetings sponsored by some of the petitioners were held that night and the following night, where calls for volunteers to "walk" and go to jail were made. On Easter Sunday, April 14, a crowd of between 1,500 and 2,000 people congregated in the midafternoon in the vicinity of Seventh Avenue and Eleventh Street North in Birmingham. One of the petitioners was seen organizing members of the crowd in formation. A group of about 50, headed by three other petitioners, started down the sidewalk two abreast. At least one other petitioner was among the marchers. Some 300 or 400 people from among the onlookers followed in a crowd that occupied the entire width of the street and overflowed onto the sidewalks. Violence occurred. Members of the crowd threw rocks that injured a newspaperman and damaged a police motorcycle. What of factual significance is described here? Why say "Violence occurred"? (NB: Garrow mentions no violence; Branch says MLK was "suddenly seized without warning by police.") Were the city officials justified in their initial fears? (They feared violence; violence occurred.) Does this make the injunction valid?
    The next day the city officials who had requested the injunction applied to the state circuit court for an order to show cause why the petitioners should not be held in contempt for violating it. At the ensuing hearing the petitioners sought to attack the constitutionality of the injunction on the ground that it was vague and overbroad, and restrained free speech. They also sought to attack the Birmingham parade ordinance upon similar grounds, and upon the further ground that the ordinance had previously been administered in an arbitrary and discriminatory manner. The circuit judge refused to consider any of these contentions, pointing out that there had been neither a motion to dissolve the injunction, nor an effort to comply with it by applying for a permit from the city commission before engaging in the Good Friday and Easter Sunday parades. Why didn't the SCLC go back to Connor for a permit? How does the Court want the SCLC to treat Connor? Does Connor merit this?
    Consequently, the court held that the only issues before it were whether it had jurisdiction to issue the temporary injunction, and whether thereafter the petitioners had knowingly violated it. Upon these issues the court found against the petitioners, and imposed upon each of them a sentence of five days in jail and a $50 fine, in accord with an Alabama statute.
    ... The generality of the language contained in the Birmingham parade ordinance upon which the injunction was based would unquestionably raise substantial constitutional issues concerning some of its provisions. ... The petitioners, however, did not even attempt to apply to the Alabama courts for an authoritative construction of the ordinance. What is the Supreme Court's attitude toward the Alabama courts? Were those courts legitimate?
    ...The breadth and vagueness of the injunction itself would also unquestionably be subject to substantial constitutional question. But the way to raise that question was to apply to the Alabama courts to have the injunction modified or dissolved.

    ... The petitioners also claim that they were free to disobey the injunction because the parade ordinance on which it was based had been administered in the past in an arbitrary and discriminatory fashion. In support of this claim they sought to introduce evidence that, a few days before the injunction issued, requests for permits to picket had been made to a member of the city commission. One request had been rudely rebuffed, and this same official had later made clear that he was without power to grant the permit alone, since the issuance of such permits was the responsibility of the entire city commission. Petitioners raise the issue of past discrimination. What kind of discrimination would this have been? (racial) Has race been mentioned at all in the opinion? Why does Justice Stewart say "a member of the city commission" instead of "Connor"? (According to testimony by Lola Hendricks, this is what happened: "I asked Commissioner Connor for the permit, and asked if he could issue the permit, or other persons who would refer me to, persons who would issue a permit. He said, 'No, you will not get a permit in Birmingham, Alabama to picket. I will picket you over to the City Jail,' and he repeated that twice.") Why does Stewart say that Connor "made clear" his lack of authority to issue permits? (Connor actually did issue permits to other groups.) Why not use the words "asserted" or "claimed"?
    This case would arise in quite a different constitutional posture if the petitioners, before disobeying the injunction, had challenged it in the Alabama courts, and had been met with delay or frustration of their constitutional claims. But there is no showing that such would have been the fate of a timely motion to modify or dissolve the injunction. There was an interim of two days between the issuance of the injunction and the Good Friday march. The petitioners give absolutely no explanation of why they did not make some application to the state court during that period. What was the significance to the civil rights leaders of Easter? Why was it important for them to have innocent people jailed on Good Friday and released on Easter Sunday? How do this reasoning and motivation collide with those of the legal system?
    ... The rule of law that Alabama followed in this case reflects a belief that in the fair administration of justice no man can be judge in his own case, however exalted his station, however righteous his motives, and irrespective of his race, color, politics, or religion. This Court cannot hold that the petitioners were constitutionally free to ignore all the procedures of the law and carry their battle to the streets. One may sympathize with the petitioners' impatient commitment to their cause. But respect for judicial process is a small price to pay for the civilizing hand of law, which alone can give abiding meaning to constitutional freedom. The "civilizing hand of law." Does this value count against the marchers? Or against Connor? "... which alone can give abiding meaning to constitutional freedom." Alone? Contrast MLK, in Atlanta (1962): "legislation and court orders can only declare rights. They can never thoroughly deliver them. Only when people themselves begin to act are rights on paper given life blood."

     

    permanent link | comments (0) | category: philosophy

    January 10, 2006

    entropy and dialectic

    The world grows more alike. Global culture is more uniform today than at any time in the past. Ecosystems are more similar, thanks to human interventions and the mixing of species. Although there are countervailing trends toward diversity, the pressure for similarity is palpable and powerful.

    two explanations

    I think two theories help to explain this pressure. The first is entropy. In nature, when unlike things come into contact, they become more alike. Likewise, when cultures interact through trade or conquest, they come to share features.

    A natural system loses dynamism as entropy grows, to the point that a perfectly entropic universe would be a smooth and inert field of matter. If there were no differences, then time itself would end. Some of the anxiety about globalization derives from fear that cultural differences will disappear, and with them, human dynamism. Some of the impetus for environmentalism arises from fear that all ecosystems will become alike. (This is why biodiversity seems so precious and "invasive species" are such a concern.)

    Entropy is fundamentally mindless. It is "noise," the opposite of a meaningful "signal." In nature, only intelligence can reduce entropy, and only locally, by exporting disorder somewhere else. For example, by sorting objects into separate piles, a person can make a heap less entropic--at the cost of the energy spent sorting. In the domain of culture, human beings can use their intelligence to wall themselves off from contact with outsiders, but such barriers always ultimately weaken. The Second Law of Thermodynamics applies: the entropy of a closed system tends to increase. However, intelligent beings can also deliberately create new cultural forms in opposition to global averages. Even by the simple act of remembering the diversity of the past, we can make our own minds more complex.
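
    The sorting example can be made concrete in code. Here is a minimal sketch (the piles are my own illustration, and Shannon's information-theoretic entropy stands in, loosely, for the thermodynamic kind): a heap mixing two kinds of objects carries maximal entropy, sorted piles carry none, and only a deliberate act of sorting recovers the low-entropy state.

```python
import math
from collections import Counter

def shannon_entropy(items):
    """Shannon entropy (in bits) of the distribution of labels in `items`."""
    counts = Counter(items)
    total = len(items)
    return sum(-(c / total) * math.log2(c / total) for c in counts.values())

pile_a = ["a"] * 50                      # one kind of object, perfectly sorted
pile_b = ["b"] * 50
print(shannon_entropy(pile_a))           # 0.0: no disorder within the pile
print(shannon_entropy(pile_a + pile_b))  # 1.0: maximal for two equally common kinds

# An "intelligent" intervention: sort the mixed heap back into pure piles.
mixed = pile_a + pile_b
resorted = sorted(mixed)
print(shannon_entropy(resorted[:50]), shannon_entropy(resorted[50:]))  # 0.0 0.0
```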

    The second explanation is Hegelian. Contrary to popular belief, Hegel never said anything about a thesis meeting its opposite (the antithesis) and generating a synthesis. His model is much more plausible. It starts with consciousness: naive thinking and doing. In a world of diverse people and cultures, a conscious person or group will sooner or later encounter and recognize alternative values and ways of being. At that point self-consciousness arises. This is an uncomfortable feeling, full of tension and doubt; but it is also generative and dynamic, and it can lead to what Hegel calls reason. Hegelian reason is the deliberate and informed creation of values and beliefs, based on the available alternatives. Reason will again become self-consciousness whenever, having built a satisfactory solution, a person or a group realizes that there are other available solutions. That new stage of self-consciousness can again become reason. The whole cycle is "dialectic."

    Like the Second Law of Thermodynamics, Hegelian dialectic leads ultimately to universal sameness, but it is a sameness deliberately constructed by human beings through the application of intelligence and will. Barring a catastrophe, world culture should become more uniform but also more sophisticated, because it will encompass more history and more awareness of alternatives. It will not be a static state of sameness, but a dramatic narrative leading toward consensus, recorded in the minds of the human actors.

    Perhaps the most profound issue of our era is whether we will grow more alike through dialectic or through entropy. Since I am unable to think of any other way to explore this tension, I have made it the theme of a long narrative poem (only part of which is online so far).

    consumerism and creativity

    I suspect that entropy is connected to the problem of consumerism. Raw materials have been globally traded for a long time. However, the salient feature of "globalization" is the exchange of finished, consumer products. The volume of such trade has surely increased with deregulation and with new communications technology. As a result, people can choose from rapidly growing menus of cultural products. This choice increases as a result of market exchanges, but it is also something that we fight for--for instance, when people who favor "diversity" in education demand more choices in the curriculum, or when civil libertarians assert a right to purchase information from abroad.

    Everyone who can choose from a global list of finished cultural products becomes more like everyone else: a phenomenon that Russell Arben Fox insightfully describes. This is a passive, detached, inert sameness. The only way to prevent it is to block people from exercising consumer choice, which restricts their freedom--and never works for long.

    In contrast, when we make things, we put our own stamp on them. We thereby exercise Hegelian "reason." Unlike restrictions on trade and communication, policies that support the local creation of cultural products expand freedom. And even if everyone's creations turn out to be increasingly similar as history proceeds, at least the resulting sameness will be something that we human beings have made. Likewise, an environmentalism devoted to creativity (rather than preservation) would make the world less entropic even as we put a human stamp on nature.

    [This post is being discussed on the Philosophy News Service "community" page]

    permanent link | comments (2) | category: philosophy

    January 2, 2006

    why libertarians need a theory of political socialization

    The interesting libertarian David Friedman argues that the First Amendment bans public schools. This is a portion of his argument, which deserves to be read in full:

    The judge who recently held it unconstitutional for public schools to be required to teach the theory of intelligent design correctly argued that doing so would be to support a particular set of religious beliefs--those that reject evolution as an explanation for the apparent design of living creatures. His mistake was not carrying the argument far enough. A school that teaches that evolution is false is taking sides in a religious dispute--but so does a school that teaches that evolution is true.

    The problem is broader than evolution. In the process of educating children, one must take positions on what is true or false. Over a wide range of issues, such a claim is either the affirmation of a religious position or the denial of a religious position. Any decent scientific account of geology, paleontology, what we know about the distant past, is also a denial of the beliefs of (among others) fundamentalist Christians. To compel children to go to schools, paid for by taxes, in which they are taught that their religious beliefs are false, is not neutrality.

    [...]

    My conclusion is that the existence of public schools is inconsistent with the First Amendment. Their purpose is, or ought to be, to educate--and one cannot, in practice, educate without either supporting or denying a wide variety of religious claims.

    Friedman's logic applies even more generally: almost all actions by a government (e.g., speeches by elected leaders, the design of public buildings, interventions in the Middle East) may make statements--implied or explicit--for or against religious beliefs. For instance, maintaining an army violates Quaker and other pacifist beliefs, yet citizens are required to pay for the military. Jefferson once wrote, "to compel a man to furnish contributions of money for the propagation of opinions which he disbelieves and abhors, is sinful and tyrannical." Taken very literally, this is an argument not only against public schools, but against government itself.

    To me, that's a reductio ad absurdum. As a deliberative democrat, I believe that the public ought to be able to build and control public institutions without many limitations. That means that it should be constitutional for a community to teach "intelligent design." The First Amendment's ban on the "establishment of religion" should mean what it says: No established religion. In public debates about our schools, I will argue against Intelligent Design, which strikes me as intellectually embarrassing as well as possibly blasphemous. But if my side loses, I don't want the courts to bail us out by declaring ID unconstitutional. The public debate should simply continue.

    Having staked out this contrary position, let me try to say something quasi-constructive about libertarianism. Libertarians are leery of political power, because it can be used to restrict freedom. However, political power exists wherever there are millions of people with opinions. Constitutional limitations on the public's will are just pieces of paper unless the public wants to be limited.

    Therefore, libertarians must change majority opinion so that individual liberty becomes a higher moral priority than it is today. I can think of three strategies to attain that end:

    1. Libertarians can make arguments in favor of maximum liberty. Such arguments have been available for two centuries and may have enhanced popular support for civil liberties, yet most people have not been convinced that the economic role of the state should be minimized. Programs like Social Security and public education remain highly popular. A libertarian who believes (as I do not) that these programs violate liberty might consider the general limits of reasons and arguments. They must always butt up against interests, cultural norms, inherited values, experiences, and traditions--not to mention contrary arguments. Even in the long run, there is no guarantee that libertarian arguments will prevail (even if they are right).

    2. Libertarians might assume that people are being influenced against liberty by the state itself, especially through the institution of public education. Then their strategy would be to dismantle state schools (perhaps using vouchers) and rely on families and independent schools to raise children who value liberty above all else.

    I doubt that this approach would work. First of all, I'm not convinced that today's public schools socialize young people to favor the state. True, schools are authoritarian institutions, but that just makes many teenagers rebel. Schools also try to teach civil liberties and tolerance, which may be one reason that each generation comes of age more civil libertarian than its predecessors.

    Besides, I doubt that parents, left to their own devices, would pay to educate their own children to treasure liberty for all. First, developing such principles is not in kids' individual self-interest. Second, most parents want to limit, not expand, their kids' sense of individual freedom.

    We know that when adults organize neighborhood associations (largely unregulated corporations that meet market demand), they choose to impose all kinds of rules against the display of signs, against congregating on the streets, even against the private possession of pornography. Through their free choices, they socialize their kids to believe that freedom is dangerous and bad for property values. There is no reason to believe that private, voucher-supported schools would be different.

    3. The third option is to recognize that public schools are instruments for attaining public goods such as love of freedom. Today's schools probably increase students' support for civil liberties. They do not teach students to distrust the state and prefer the market. Therefore, libertarians would have to argue for some changes in curriculum and pedagogy. In doing so, they would address their fellow citizens with arguments about the public value of teaching respect for liberty. It's my sense that Americans might be responsive to such arguments.

    In making decisions about where and how to educate their own kids, most people seek to maximize their children's earning potential; however, in considering educational policies that will apply to everyone, they often favor more idealistic outcomes. For instance, in a 2004 poll, 71% of adults said that it was important to "prepare students to be competent and responsible citizens who participate in our democratic society" (pdf). Thus it's possible that Americans would support better education for liberty.

    To be sure, most people (including me) do not think that "competent and responsible" citizens are those who value liberty above all else. I, for one, want to see young citizens develop a concern for equality as well as freedom. Nevertheless, it seems possible that libertarians could prevail in arguments about the curriculum. If they can't persuade their fellow citizens that liberty should be taught in schools, then they certainly can't convince the majority to cut Social Security--which is against their immediate economic self-interest.

    permanent link | comments (2) | category: advocating civic education , philosophy

    December 14, 2005

    an aesthetic question

    Why does a distant mountain often look beautiful? It is a simple shape, maybe an inch high if you look at it next to your hand--not unlike a mound of grass-covered earth that's a few feet away, or even a pile of laundry. Yet a mountain is much more likely than those things to be beautiful.

    One answer: Human vision is not the perception of a flat field of shape and color, composed of little reflections on our retinas. It is a thoroughly interpretive act. We see the mountain differently from a pile of clothes because we know that the mountain is far away. The space between the viewer and the object is part of what we see. But why should we appreciate a large volume of empty space? Perhaps because we interact with it in our imaginations. We feel a potential to move freely through the space or to "conquer" the mountain by climbing it.

    Another answer: Human perception is thoroughly interpretive, and we have learned to value mountains. They are God's work; they are humbling creations of Nature; they are sublime. Supposedly, Petrarch was the first European since antiquity to appreciate outdoor views. Five hundred years later, we have absorbed positive evaluations of landscape. But that appreciation was absent in 12th-century Europe and might not exist in some current cultures. It might be possible for a culture to learn to love the sight of small mounds of earth.

    What about pictures of mountains? They are just flat fields of color. Perhaps we enjoy them because we are able to derive the same experiences from them that we take from real mountains.

    Also, we appreciate representation itself. A picture of some objects on a table can be as beautiful as a landscape painting of a huge mountain; but the mountain itself will be more beautiful than any set of plates and food. A picture of a mountain may be beautiful even if it is so stylized or abstract that we cannot imagine ourselves entering the space depicted in it. These examples show that it is often the feat of representation, rather than what is represented, that matters in art. In that way, the aesthetics of art and of nature seem fundamentally different.

    permanent link | comments (0) | category: philosophy

    November 17, 2005

    ethics of international intervention

    This afternoon, I will guest-teach a public policy seminar for a friend who's in Venezuela on a Fulbright. The topic of the day is international intervention. When is it appropriate (or obligatory) to impose sanctions or invade another country to promote human rights? Click below if you want to read my whole class plan.

    The reading for this seminar is:

  • Michael Doyle, "The New Interventionism," in Thomas Pogge, ed., Global Justice (Blackwell, 2001).

  • David Luban, "Intervention and Civilization: Some Unhappy Lessons of the Kosovo War," in Pablo de Greiff & Ciaran Cronin, eds., Global Justice and Transnational Politics: Essays on the Moral and Political Challenges of Globalization (MIT Press, 2002), pp. 79-115.

  • Mark Amstutz, International Ethics: Concepts, Theories, and Cases in Global Politics (Rowman and Littlefield, 1999), chapters 7 and 9.

    Decisions about whether to intervene involve three conflicting principles: human rights, national self-determination, and state sovereignty. My strategy is to consider each principle in turn, as if it were our only guide. We will then be able to see the pros and cons clearly.

    I. Human Rights

    Imagine a principle like: "Honor human rights in foreign policy." Or "Act in the interests of human rights." Or, "Act to maximize human rights." Or, "Human rights are the only important things in the world."

    Q. Why might one act according to these principles?

    A. ["A's" refer to answers that I hope to get from students]: Human beings have infinite intrinsic worth. States and other institutions have no intrinsic value.

    Q. What are the implications for country A if there are human rights abuses in country B? What if the human rights abuses are minor?

    Q. Would proportionality apply? (Proportionality is an idea from just war theory. It recognizes that the perpetrators of human rights abuses also have rights.) Just war theory also provides other criteria listed by Amstutz on p. 188, including: right intentions; limited objectives; doctrine of double-effect. What do we think of these criteria?

    Q. Would pragmatic considerations apply? How would one assess them?

    Q. What are problems with acting only according to this principle?

    A. #1: Cultural relativism

    Human rights are only beliefs of particular cultures, not to be imposed on others.

    Q. What does David Luban say about that?

  • all cultures agree about the "great evils" (intentional killing, intentional infliction of pain, starvation, etc.)

  • yet it is necessary sometimes to inflict the great evils deliberately (e.g., in punishment and war)

  • cultures differ in what they consider to be necessary inflictions of evil. For instance, Americans permit capital punishment; Europeans do not. There is no compelling philosophical argument in favor of any particular view. Relativism applies at this level.
  • Q. Is Luban right?

    A. #2: Neo-Imperialism

    Enforcement of human rights is likely to be conducted by "Western" countries. Why isn't this imperialism, as when British imperialists intervened in India to prevent suttee?

    A. #3: Rights imply duties. But human rights cannot imply duties across borders.

    If I have a right to life, then everyone must refrain from killing me--fair enough. If I have a right to education, someone has a duty to pay for it. Who? My parents? My local community? My country? The world? We generally say that Americans have a right to education, implying that other Americans have a duty-to-pay. But we don't say that a Haitian child's right to an education implies a duty for Americans to pay for it. Maybe we should.

    Likewise, my right not to be tortured imposes a duty on everyone else not to torture me. But does it impose a duty on everyone else to rescue me from being tortured? Even if it puts them in danger?

    A. #4: Rights, to be real, must be enforced

    Q. Who should enforce them?

  • Foreign states? But Luban says, p. 94, "a people always has the right not to go to war." Since most peoples don't want to fight for others' human rights, interventions seem unlikely.
  • Q. Is Luban right?

    Intervention abroad requires democratic consent at home.

    Intervention may have to be violent, since sanctions don't seem to work.

  • A superstate? This has its own problems, although see Doyle on the recent UN success in Cambodia.
  • Civil society? (Cf. the South African divestment campaign: Amstutz, p. 191.)
  • Q. If human rights are unenforceable, then are they rights?

    Luban prefers to argue from shame, rather than rights-and-duties. Shame is appropriate when we stand by as others suffer. One implication of his view: We don't have to help everyone. If we are helping some appropriate victims, then we needn't be ashamed.

    II. National Self-Determination

    Now the principle is not, "Honor human rights," but rather, "Let nations govern themselves."

    Q. What does this mean?

    Perhaps there are two theories ...

    A. #1: a nation (defined as some homogeneous identity group) ought to be independent

    This need not require the consent of the majority, let alone of all individuals. Spain under Franco was an independent nation. All the Spanish people were part of a political entity that no one else governed. They had self-determination, according to this first definition, even if Franco was unpopular. He was one of them. (Q. Does any state actually enjoy consent?)
    Note: National self-determination is often an argument in favor of minority rights, when the minority in question lacks its own state. E.g., the Basques under Franco and under the new Spanish democracy.

    A. #2: When a government is democratic, "the people" rule, so outsiders should not intervene. Here the emphasis is on democratic procedures.

    Then "the people" need not be homogeneous. They need not belong to one "nation." Cf. Belgium-one regime with two peoples-or many African states with arbitrary borders, or the USA, which increasingly defines membership in legal rather than cultural terms. To be American is to have a vote in the USA.
    But why should people within one geographical boundary determine their collective fate by voting? Why not draw the boundary differently? The democratic definition of self-determination usually relies on some sense of common nationhood, as in the US.
    The democratic conception of national self-determination does not support minority group rights.

    Q. Why should we favor self-determination?

    Consider the US. We have capital punishment and permit people to be sentenced to life imprisonment for cocaine possession. Others see these actions as barbarous.

    Q. Would it be good if some foreign country with 10 times our military power invaded us and forced us to abolish the death penalty?

  • A. #1: the intervention wouldn't be proportional to the human rights abuse. (But some interventions might be.)

  • A. #2: it would cause a backlash

  • A. #3: it wouldn't be democratic. (But note that this argument only applies to democratic states.)

  • A. #4: it would disrupt the evolution of our political community, and the evolution of independent historical communities is a good thing. However, who deserves to have their own community? All the people of Ireland? All the people of the United Kingdom? Kurds? Iraqis?
  • Q. What are some problems with self-determination?

    Overrides human rights. Consider, for example, the case of Argentina (a popular regime commits atrocities against an unpopular minority).

    Overrides minority rights (unless we think that the minority is a nation, in which case we may invoke national self-determination in their defense).

    The leaders of a state can bind their own people in ways that seem unjust, for example, by selling assets or borrowing deeply.

    III. State Sovereignty

    Here the principle is: There are roughly 190 states that are members of the United Nations. No state is to make war on another, even for moral reasons. Regardless of whether states represent nations or peoples, they are entities that have rights to operate on their own territory.

    Q. Why might one hold this view?

    A. #1: A "romantic" view of the state

    It's hard for Americans and many other moderns to understand this view, because we have a social contract theory of the state. (It is an association formed to advance individual interests.) But many people have seen states as having intrinsic moral worth. They are not just contracts among citizens. If you have the opportunity to die for France, that is an honorable thing. You are not dying for all the people who belong to France, but for La Patrie. Do any Americans feel this way about the USA?

    Note: identity is very important to politics, and it's an analytical mistake to assume that we always have the identity of private individuals. We can and do adopt broader identities, and nations have often provided those.

    A. #2: A pragmatic defense of sovereignty

    Intervening in other UN members' territory is a recipe for constant warfare. Cf. the Treaty of Westphalia. Reliably preventing all states from attacking all other states is the most pragmatic route to peace.

    Q. What are some problems with state sovereignty?

  • overrides individual rights

  • overrides minority rights

  • freezes in place a set of arbitrary boundaries that many oppose

  • doesn't apply to failed states

  • perhaps it doesn't apply to transnational conflicts like that represented by Al Qaeda.


    IV. Where do we stand?

    One answer might be: We care about human rights, but concerns about national self-determination and sovereignty cause us to impose limiting conditions:

  • last resort

  • UN sanction

  • proportionality

  • prospect of success

  • (cf. Amstutz, p. 135)

    Perhaps it is self-evident that we should only intervene when the prospects of success are good. But the "last-resort" limitation only applies if we care about sovereignty or national self-determination.

    V. Other issues

    Q. What is the morality of sending someone else (e.g., a soldier) to intervene in defense of human rights and not going oneself? What if there's a draft? What if there isn't?

    Q. Should we try to have "clean hands"? For example, is it right to divest from a wicked country even if the effects are neutral, or negative, just so that we don't trade with bad people?

    VI. Cases

  • Kosovo (note: bombing from 30,000 feet to avoid allied casualties. Note: symbolic resemblance to Holocaust. Note: European setting)

  • South African sanctions

  • Iraq
    permanent link | comments (1) | category: philosophy

    October 11, 2005

    thoughts about game theory

    The Nobel Prize for Tom Schelling (which is enormously exciting for everyone in Maryland's School of Public Policy) makes me think of a few points about game theory:

    1. It's a form of political theory that harkens back to classical authors from Hobbes to Rousseau (with echoes of Plato's Crito and other ancient works). That is, it makes certain assumptions about the preferences and goals of "players"--usually individuals or states--and then asks what must happen when they interact. This is the same method that led Hobbes to believe that individuals, motivated by the goal of minimizing pain, would kill one another absent a state. Hobbes' conclusions were rejected by other theorists, but his method remains alive in modern game theory. There is a rival tradition of political theory that treats people as deeply embedded in cultural contexts. For Hegel, Nietzsche, Dewey, Foucault, Habermas, and others, the important question is how and why culture has changed, not how individuals will act under specified theoretical conditions. Some results of game theory seem to generalize across all existing cultures--which wouldn't have surprised Hobbes or Locke.

    2. Since game theory starts with players and models their interaction, it can handle markets, wars, and votes equally well. Schelling's work is typical in that it doesn't fit within the borders of his own field (economics), but could equally belong to political science or--in the case of his famous model of racial "tipping points"--sociology. There is something impressive about a theory that explains human behavior without arbitrary limits.
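
    Schelling's tipping-point idea is simple enough to sketch in code. What follows is a minimal one-dimensional toy of my own devising, not Schelling's original checkerboard: agents of two types sit on a ring, each content so long as at least two of its four nearest neighbors share its type (a preference fully compatible with perfect integration), and pairs of discontented agents trade places.

```python
import random

def fraction_alike(line, i, radius=2):
    """Fraction of an agent's 2*radius nearest neighbors (on a ring) sharing its type."""
    n = len(line)
    neighbors = [line[(i + d) % n] for d in range(-radius, radius + 1) if d != 0]
    return sum(1 for x in neighbors if x == line[i]) / len(neighbors)

def average_alike(line):
    """Mean like-neighbor fraction across all agents; 0.5 is a fully mixed ring."""
    return sum(fraction_alike(line, i) for i in range(len(line))) / len(line)

random.seed(0)
line = ["X"] * 50 + ["O"] * 50
random.shuffle(line)
print(round(average_alike(line), 2))    # about 0.5: a well-mixed neighborhood

THRESHOLD = 0.4    # content if at least 40% (here, 2 of 4) of neighbors are alike
for _ in range(20000):
    i, j = random.randrange(len(line)), random.randrange(len(line))
    if (line[i] != line[j]
            and fraction_alike(line, i) < THRESHOLD
            and fraction_alike(line, j) < THRESHOLD):
        line[i], line[j] = line[j], line[i]   # two discontented agents trade places

print(round(average_alike(line), 2))    # usually well above 0.5: mild preferences, stark clusters
```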

    3. Some people assume that the "players" in game theory are selfish. That is not true. A game-theoretical model can work very well to explain behavior driven by any motives. Usually, altruism makes human interactions turn out better, and then games are uninteresting--but not always. Consider, for example, the bad outcomes that can result when X and Y are picking a restaurant, and X only wants to eat at Y's favorite place, and Y only wants to go where X wants to go. They may withhold information about their own preferences, causing a big mess, even though their motives are selfless.
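
    That restaurant game is easy to make concrete. In this toy rendering (the venues and payoff numbers are my own illustration), each player simultaneously proposes a venue, the evening succeeds only if the proposals agree, X is satisfied only by Y's favorite, and Y is satisfied whenever X's proposal prevails:

```python
from itertools import product

Y_FAVORITE = "A"   # Y's true favorite, which Y politely declines to reveal

def payoffs(x_proposal, y_proposal):
    """Simultaneous proposals; a disagreement means the evening falls apart."""
    if x_proposal != y_proposal:
        return (0, 0)
    venue = x_proposal
    x_payoff = 1 if venue == Y_FAVORITE else 0   # X only wants Y's favorite
    y_payoff = 1                                 # Y is happy wherever X's wish prevails
    return (x_payoff, y_payoff)

for x_p, y_p in product("AB", repeat=2):
    print(f"X proposes {x_p}, Y proposes {y_p}: payoffs {payoffs(x_p, y_p)}")

# If Y conceals her preference and guesses that X wants "B" (most people favor
# their own first choice), while X proposes "A" in the hope that it is Y's
# favorite, the proposals clash and both get 0: a mess produced entirely by
# selfless motives.
```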

    4. If game theory has a limitation, it is not an assumption of selfishness but rather a presumption that the players have preferences and identities prior to interacting. For instance, if the players are the USA and USSR (as in Schelling's classic work), then their identities are those of the two nations and their goals are assumed to be security, or domination, or whatever. However, a person's identity as a representative of the USA or the USSR is not just given; it is forged as a consequence of social and historical change, and it can fall apart. Soviet officials were supposed to bear the identity of "international Communists"; they really identified with the USSR or narrowly with their individual security interests; and then suddenly around 1990 most began to see themselves as Russians or even Europeans, but not as Soviets. This was a massive political change.

    Even given players with fixed identities, it is not obvious that they will want any particular goals (such as security, pleasure, dominance, honor, or salvation). We may start wanting one thing and persuade ourselves to value something different. It's not clear that these processes of identity-formation and preference-setting can themselves be modeled as games. When we deliberate about who we are and what we want, the reasoning is not strategic in the same way. However, this is not a criticism of game theory, simply an argument that it belongs in a broader context.

    permanent link | comments (2) | category: philosophy

    September 27, 2005

    making room for God without "intelligent design"

    Here's a proposal for how to think about evolution if you want to believe in divine providence: Even if science explains the "efficient causes" of evolution, God can be the "final cause."

    Aristotle argued that "Why?" can generally be answered in four ways simultaneously. The efficient cause is a preceding event that reliably generates the result, as my finger hitting the keyboard causes a letter to appear on the screen. The material cause lies in the stuff of which the object is made. For instance, my computer is built of materials so structured that they produce letters when touched in a certain way. (Perhaps a better example: I am communicating right now because it is in the nature of the substance known as "mind" to think and to express its thoughts.) The formal cause relates to the essence or definition of the object. For instance, my computer generates letters because that's what computers do; that's its "form." And the final cause concerns goals and purposes. I type these letters in order to communicate certain ideas to you.

    Modern science began when Francis Bacon argued that experiments could (only) reveal efficient causes. Since Bacon, scientists have considered Aristotle's material and formal causes to be simplifications of efficient causes. If you want to know how a computer works, you really need to understand a chain of predictable events. To say that a computer is an object that generates text (an example of a formal cause) is not to explain it.

    As for final causes, they are not detectable by science. According to Bacon, natural events don't happen for a purpose; they happen because something else happened first and led regularly to the result. Without Darwin, we would assume that there were final causes in the biological world, for organisms seem to move and evolve toward goals. But biology since Darwin has been free of final causes (it has become "non-teleological"). For example, biologists do not really believe that genes are "selfish" and "want" to propagate. Rather, genes mutate as a result of prior chemical processes, and the mutations that help organisms survive tend to proliferate. It is unnecessary to cite purpose. Higher organisms have wills and goals, but those arise because of prior, physical causes.

    Stripping the natural world of purpose upsets religious believers. Indeed, I can understand why a non-teleological universe is profoundly disturbing: in it, nothing has a purpose. Thus some believers are moved either to deny that evolution has occurred at all, or to claim that an "intelligent designer" is the efficient cause of evolutionary change.

    The latter is a dangerous strategy--on theological grounds. Efficient causes should be detectable by experiments. To hypothesize an efficient cause is to make an empirically testable claim. In theory, an experiment might reveal the existence of a hidden intelligence--or it might not. Or (in principle) it might detect an intelligence that is fairly powerful and fairly wise--but that would not be the Judeo-Christian/Islamic god, who is all-powerful. Real omnipotence cannot be detected, because all that one can ever see is a finite quantum of power. I am reminded of the controlled experiments that (supposedly) find a statistically significant effect from prayer. To claim that invoking God has an effect of a few percentage points strikes me as almost blasphemous. Is this the God of glory who thundereth, who breaketh the cedars of Lebanon?

    Besides, any efficient cause is itself an effect. If we were to discover an Intelligent Designer who plays with genes to achieve certain ends through evolution, then we should be able to explain what caused that designer to exist. But God has no efficient cause.

    Thus I think orthodox monotheists should choose a different course. They should accept Bacon's idea that efficient causes are detectable by experiments. But God should not be subjected to such tests; that would not be compatible with faith or with the concept of omnipotence. Still, efficient causes can co-exist with final causes. Perhaps Homo sapiens evolved because of a random genetic mutation in a primate ancestor (that would be the efficient cause of our species). At the same time, people could have evolved in order that God would have the opportunity to sacrifice His only-begotten son. The latter claim would be untestable, but it would follow logically from faith and revelation.

    People who accept this recommendation (which has surely been made before) cannot find a justification for teaching "intelligent design" in science classes. However, they can be confident that no experiment will ever threaten their faith or reduce divine power to something observable and (therefore) finite.

    permanent link | comments (0) | category: philosophy

    September 8, 2005

    against "systematizing" in ethics

    In a recent comment, Metta Spencer asks, "I'm ... curious about your notion that systemizing ethical principles is not a good way to go. I would love to hear more about that. I suppose it's more than just not being a Kantian, but I can't fill in the blanks to guess what you mean, and there aren't citations in these blog thingies." I thought an answer would be worth a full post, so here goes. My position could be summarized as follows:


    1. There is a category of concepts that includes the traditional virtues and vices, many institutions (such as marriage and democracy), and many emotions (certainly including love). These concepts have the following features:

    a. They are indispensable for moral reasoning. We cannot, for example, do without the concept of "love."
    b. They are "thick" terms, in Bernard Williams' sense. That is, they combine fact and value. For instance, to say that someone "loves" someone else is to make a factual claim that is also essentially laden with moral evaluation.
    c. They have moral significance, but it is unpredictable. Sometimes their presence makes an act better than it would be otherwise; sometimes, it makes the act worse. (This idea is the heart of Jonathan Dancy's "particularism.")
    d. They have vague borders. We can use them effectively to communicate, yet they cannot be defined by pointing to any essential common features. They are examples of what Wittgenstein called "family-resemblance" words.
    e. Their vagueness and unpredictability reflect truths about the world, or at least reflect our accumulated experience of life. We know, for instance, that "love" can mean many things and has an unpredictable moral significance. Thus we should not try to gain moral clarity by splitting "love" into two categories (e.g., eros and agape). Love is not just the union of two concepts, one good and the other bad. Part of the definition of love is that it can be either good or bad, or can easily change from good to bad (or vice-versa), or can be good and bad at the same time in various complex ways.
    2. If indispensable moral concepts are also unpredictable and vaguely defined, then moral theory has severe limitations, because moral theory is composed of concepts, abstracted from particular circumstances. That is true not only of Kantian theory, but also of utilitarianism and virtue ethics.

    3. What justifies the use of a "thick" moral concept in a particular context is not a theory but a story, one that describes what happened earlier and later with reference to people's motivations, purposes, and beliefs. There is much more to say about the logic of narrative and how it supports moral judgment, but my view essentially follows J.L. Austin.

    Citations: I advanced part of this argument in a book entitled Living Without Philosophy: On Narrative, Rhetoric, and Morality. However, Dancy's ideas about particularism, acknowledged in that book, are now much more central to my position. My latest views are explained and defended in my manuscript entitled The Myth of Paolo and Francesca: Poetry, Philosophy, and Adultery in Dante and Modern Times, which is out for review by publishers. A summary is online.

    permanent link | comments (0) | category: philosophy

    August 9, 2005

    empathy versus systematic thought

    For the second day in a row, here's a response to an opinion piece in The New York Times. The new article, entitled "The Male Condition," has two distracting features. First, it takes Larry Summers' side in the argument about women in science. Second, it's written by someone called Baron-Cohen--not the very funny Sacha, but his distinguished cousin Simon. If you get past Larry and Sacha, the article is as interesting as it is disturbing.

    Dr. Baron-Cohen argues that people can be placed on a continuum from systematic to empathetic. "Systemizing involves identifying the laws that govern how a system works. Once you know the laws, you can control the system or predict its behavior. Empathizing, on the other hand, involves recognizing what another person may be feeling or thinking, and responding to those feelings with an appropriate emotion of one's own."

    Baron-Cohen says that males are statistically more likely to be systematic; females, to be empathetic. Some of this difference may be cultural, but there may also be a biological factor, since (a) the difference shows up very early in childhood; and (b) it may be connected to the fact that prenatal testosterone correlates negatively with sociability in young children. If the level of prenatal testosterone affects where one falls on the empathetic-systematic spectrum (which has not been shown directly), then males would tend to be more systematic and females would tend to be more empathetic, although there would be much variation and overlap.

    Three thoughts occur to me in response ...

    1. Even if our degree of empathetic versus systematic thinking is ordained by biology or culture, we can still consider when it is better to be systematic or empathetic. Despite being male, I have spent a lot of time arguing against systematic thinking in ethics. I adopted this position because I was persuaded by certain arguments in favor of empathy and against abstract principles. The same arguments don't apply to mathematics or engineering; they only concern ethics. Thus it's possible to be reflective about the role of empathy in various domains and to adjust our thinking accordingly. Which brings me to the second point ...

    2. Some leaders, and some cultures, have taken very strong positions on one side or the other of the continuum. Calvin, Lenin, and Khomeini were three men who built whole regimes dedicated to the notion that behavior should be guided by a few principles. Maybe they had a lot of testosterone before they were born, but that's not the point. The point is that historical circumstances favored their vision. In contrast, Aristotle and Hume favored the cultivation of moral sentiments, including empathy. Hume's culture prized sentiments and refined a curriculum designed to make people empathetic.

    Presumably, the quantity of prenatal testosterone per capita is pretty stable. Perhaps the amount of systematic versus empathetic thought is also constant--although I have no idea how to measure this. But one thing varies by culture and can be changed through political action: the role of systematic thought in morality and the law.

    3. Baron-Cohen ends with a striking hypothesis. He thinks that autism is an extreme form of systematic thinking, resulting (in part) from the union of two parents who are both quite far over on the systematic side of the spectrum. I don't know how plausible this theory is. It does occur to me, however, that it could provide an explanation for the rapid increase in autism reported in industrialized democracies. The rate of what Baron-Cohen calls "assortative mating" (similar people mating with each other) has perhaps increased as people have begun to marry members of their own profession. Even forty years ago, an engineer would most likely be a man who would marry a woman of the same social origins but quite different skills. Today, there is a higher probability that he will marry another engineer--perhaps a woman from a very different background, but with similar mental proclivities.

    permanent link | comments (3) | category: philosophy

    August 1, 2005

    Tony Blair and the Doctrine of Double Effect

    Reflecting on the July 7 bombings, the leftist MP George Galloway (on whom I have written before) said that London had reaped the consequences of "Mr. Blair's involvement in Iraq." Most people to Galloway's right--which means most people--think he is wrong to blame Blair for the terrorism. Yet it seems likely that a causal chain does connect the bombings to British participation in the Iraq invasion. Muslim extremists attacked UK targets only after they had become incensed by the presence of British "crusaders" in Iraq. The use of terrorism against civilian British targets was a fairly foreseeable result of the invasion and occupation.

    I wouldn't try to deny Blair's causal role, but I would argue that someone can be a cause of something without being morally responsible for it. Blair set in motion a chain of events that led to the bombings, but the bombers are completely responsible for what they did, and Blair is completely innocent of it. Thomas Aquinas' Doctrine of Double Effect comes in handy here. The New Catholic Encyclopedia, as quoted in the Stanford Encyclopedia of Philosophy, explains that the Doctrine excuses an act (in this case, the invasion) that has bad results under these conditions:

    1. The act itself must be morally good or at least indifferent. 2. The agent may not positively will the bad effect but may permit it. If he could attain the good effect without the bad effect he should do so. The bad effect is sometimes said to be indirectly voluntary. 3. The good effect must flow from the action at least as immediately (in the order of causality, though not necessarily in the order of time) as the bad effect. In other words the good effect must be produced directly by the action, not by the bad effect. Otherwise the agent would be using a bad means to a good end, which is never allowed. 4. The good effect must be sufficiently desirable to compensate for the allowing of the bad effect.

    Thus, to take Tony Blair's side, we would say: The act of invading Iraq to remove Saddam Hussein was morally good. Even if its net consequences turn out to be bad for Iraq (mainly because of the incompetent US leadership), British participation was well-intentioned and reasonable. Blair did not will a terrorist response to the invasion, even if he had reason to predict it. The removal of Saddam was a direct consequence of the invasion; the London bombings were highly indirect results. Finally, the end that Blair willed was sufficiently good to compensate for the death of Londoners.

    The Doctrine is relevant to other current events as well. For example, last Friday, the IRA promised to renounce violence. Did the Doctrine of Double Effect ever excuse its use of terror? Alison McIntyre would say "no." She writes in the Stanford Encyclopedia article, "The terror bomber aims to bring about civilian deaths in order to weaken the resolve of the enemy: when his bombs kill civilians this is a consequence that he intends." Thus bombing a pub or train station is a bad act with a bad intention, and the Doctrine never excuses it.

    However, McIntyre thinks that bombing campaigns undertaken by people in uniform can be permissible under the Doctrine. She writes, "The strategic bomber aims at military targets while foreseeing that bombing such targets will cause civilian deaths. When his bombs kill civilians this is a foreseen but unintended consequence of his actions. Even if it is equally certain that the two bombers will cause the same number of civilian deaths, terror bombing is impermissible while strategic bombing is permissible."

    Supporters of the IRA deny this distinction. They argue that it's unfair to defend established nations with large budgets that drop bombs from airplanes--yet damn individuals as "terrorists" if they kill smaller numbers of people with car bombs.

    Of course, one response is that no one has the right to kill anyone else except in the most immediate self-defense. Then the Doctrine of Double Effect would not cover the IRA, but it wouldn't excuse Tony Blair, either. By invading Iraq, he willed the death of Iraqis (and Brits); a pacifist would deny him that right. But the true pacifist would also say that Neville Chamberlain impermissibly willed the death of civilians when he declared war on Hitler. Once we admit that someone can cause death for a good reason, then we are either "consequentialists" (i.e., we assess acts by subtracting their costs from their benefits), or we subscribe to the Doctrine of Double Effect.

    In the last few months, three different people have told me that the IRA bombings had a good consequence: they brought the British and the Unionists to the bargaining table. I do not know whether this is true or whether the same result could have been obtained by peaceful means. Consequentialist reasoning might possibly rationalize the IRA bombings, but not those of Hamas and the other Palestinian terrorist groups. It seems to me that suicide bombings in Israel and the Occupied Territories have had one overall consequence: Israel has begun to build a security fence. Thus Hamas indirectly caused the fence. For consequentialists, that makes Hamas responsible for damage to Palestinian national interests--which is indeed what I believe. However, according to the logic of the Doctrine of Double Effect, Hamas might be causally responsible for the fence, yet Israel might have sole moral responsibility for it.

    (See also a good short article by William Solomon from the Encyclopedia of Ethics.)

    permanent link | comments (0) | category: Iraq and democratic theory , philosophy

    June 30, 2005

    why moral positions should be explicit in literary criticism

    During a conversation with a friend on Wednesday, I realized why most contemporary literary criticism bothers me--from a moral perspective. (I use the word "moral" broadly, to mean any issue about how we should live or how our institutions should function.) Although there are some exceptions, whom I admire, most critics have moral agendas that they keep implicit.

    They look in fiction for themes of race and gender, politics, or markets because they have views about those topics themselves. For instance, to identify an authorial voice with "imperialism" is to criticize the author; to say that he "subverted" a "patriarchal order" is to praise him. Yet literary critics rarely make their own moral positions explicit. Wayne Booth and Martha Nussbaum have explained critics' resistance to explicit moralizing as a result of several assumptions. Critics tend to believe that moral opinions are arbitrary, that openly moral evaluation of fiction might justify censorship, that moral criticism must be reductive, and so on. Yet implicit moral evaluation remains very widespread.

    I don't believe that literary critics should supply elaborate philosophical arguments in favor of their moral views. I'm a "particularist," someone who believes that moral judgments are about concrete cases, not about general categories. Therefore, I'm not a big fan of abstract philosophical arguments, even when they appear in philosophy books written by professionals. It would be even stranger--and less valuable--if abstract moral argumentation started to appear in works of literary criticism. For example, imagine that a critic had to prove that imperialism is bad before he or she could use the word "imperialism" with a negative connotation in an interpretation of Shakespeare's Tempest or A Passage to India. This would not be helpful.

    Nevertheless, I do believe that critics should explicitly state the moral views that are implied by their literary interpretations. Making explicit one's own moral position forces one to confront the possibility of exceptions, tradeoffs, and limits. For example, "imperialism" has a bad ring to it. But what are we against, if we oppose the imperialism that can be found, for instance, in The Tempest? (Whether Shakespeare approves of that imperialism is a different question.) If forced to express a moral view explicitly, instead of merely using "imperialism" as an epithet, a critic would have to adopt a position like one of these:

  • Imperialism is a form of coercion, and coercion is bad. (But what is the alternative to some form of political authority? Are literary critics who always take the side of weak individuals against states and big institutions willing to endorse anarchism?) Or ...

  • It's always wrong to build a large, centralized nation by force. (That verdict would apply to ancient Rome and China, or to the creation of the Spanish and French nation-states.) Or ...

  • Empires may be good or bad, but the British imperialism that began in Shakespeare's day was immoral. (Why? Was the fact that white people conquered non-white people the essential problem? Or was the problem economic exploitation, which also occurs within nations?) Or ...

  • Imperialism has bad aspects and effects under certain circumstances, including the circumstances on Caliban's Island. (Then one would have to explain what precisely is wrong with the invaders' behavior).

    Fundamentally, I believe it's irresponsible not to state one's own moral positions clearly enough that their scope and implications are evident. I suspect that many literary critics are willing to be "irresponsible" because they see themselves as outsiders, as adversaries of mainstream culture and the social order. It's not their job to plan or manage societies; they just identify bad things like imperialism and violence. But this stance strikes me as bad faith, since professors in rich countries are actually quite powerful. Irresponsibility is also a recipe for the alienation that I described here recently.

    permanent link | comments (0) | category: philosophy

    June 2, 2005

    why the commons is not for communists

    "The commons" is composed of our shared assets: the earth's atmosphere, oceans, and water-cycle; basic scientific knowledge (which cannot be patented); the heritage of human creativity, including folklore and the whole works of Plato, Shakespeare and every other long-dead author; the Internet, viewed a single structure (although its components are privately owned); public law; physical public spaces such as parks and plazas; the broadcast spectrum; and even cultural norms and habits. Some of us believe that protecting and enhancing the commons is a central political task of the 21st century. For different flavors of that argument, see, for example, OnTheCommons, The Tomales Bay Institute, and Lin Ostrom's workshop at Indiana.

    I have suggested that enhancing the commons might be a strategy for increasing equality. If that strategy belonged to the radical left, I would not hesitate to embrace it. However, I don't think that it has much to do with traditional leftist thought. It is worthwhile to distinguish the theory of the commons from Marxism, just for the sake of clarity. I see several fundamental points of difference.

    The commons is not state-centered. Some common assets are completely un-owned (e.g., the ozone layer), and some are jointly owned and managed by associations. Some belong legally to states and are controlled by them: think of Yellowstone. However, it is by no means clear that states are ideal--or even adequate--owners of commons. I realize that some Marxists have also been skeptical of the state--including perhaps old Karl himself, who wished that it would wither away. Nevertheless, a major current in Marxism has been statist, and the commons isn't.

    The commons is only a part of a good society, not the whole. Some anarchists want everything to be treated as a common asset, but most of us simply value the common assets we already have and want to protect them against corporate "enclosure," over-use, and other threats. We have no interest in abolishing either the state or the market; on the contrary, we think that both work better if they can draw appropriately on a range of un-owned assets, from clean air to scientific knowledge.

    The commons supports "negative liberty." Isaiah Berlin famously contrasted the absence of constraints ("negative liberty") with the capacity to do something ("positive liberty"). For example, the First Amendment gives us negative liberty by removing the constraint of censorship, but we don't have positive freedom unless we own a newspaper--or a website. Marx's own ideas about liberty were complex and perhaps ambiguous. But most Marxists have believed that positive liberty is more important than negative liberty--or have even dismissed the latter as a snare and a delusion. Although a commons may enhance positive liberty, what it most obviously provides is negative liberty. If something is un-owned, then there is no legal constraint on our using it. This is both the beauty of a commons and its weakness. The commons, if anything, is a utopian libertarian idea rather than a Marxist one (although some libertarians have forgotten that they are inspired by freedom, not by markets).

    The commons is not (literally) a revolutionary idea. Preserving the commons may take radical action at a time when the oceans are being depleted, big companies are privatizing the software that underlies the Internet, and scientific research is being diverted to produce patented products. However, I don't think we need fundamentally different national institutions from the ones we have today, and therefore I see no need to upset our polity. On the contrary, we ought to revive old and powerful traditions that support the commons. At the global level, I suspect that treaties and trans-national popular movements will be sufficient to protect the commons; there is no need for anything like a global state. It is good that we don't need revolutionary political change, because revolutions almost always go wrong and destroy what they set out to promote.

    permanent link | comments (0) | category: Internet and public issues , philosophy , revitalizing the left

    May 3, 2005

    neuroscience and morality

    I recently had occasion to poke around in the growing literature on neuroscience and morality.* I have not had time to read some of the big and important books on this subject, so the following are just preliminary notes, largely untutored.

    Some evidence from brain science suggests that people need emotions in order to reason effectively about human behavior. Patients with damage to certain brain regions are able to think clearly about many matters but cannot make smart practical judgments, even in their own self-interest. An old example was Phineas Gage, the nineteenth-century American railroad foreman who lost a portion of his brain in a freak accident and could think perfectly well about everything except human behavior. He also lacked emotions. Often patients with similar brain damage are devoid of all empathy and guilt; they act like sociopaths. It seems that moral emotions (such as care) are biologically connected to all reasoning about human beings.

    These studies support Aristotle's view, according to which an emotion is always a combination of desire and cognition. Anger, for example, is a desire accompanied by distress, where what is desired is retribution for an apparent slight, the slight being unjustified (Rhetoric 1378a). I could get red in the face and have a high pulse rate, but I wouldn't be angry unless I believed that someone had done wrong. Empirical beliefs and moral interpretations form part of my emotional state.

    Some leading brain researchers hypothesize that human beings evolved to respond emotionally to categories of situations. These instinctive responses allowed people to read one another and generated the limited altruism necessary for group survival in prehistory. Moral reasoning and theory then arose after the fact. Today, we use principles to rationalize judgments that we make on the basis of instinctive emotions. For example, the Golden Rule developed as a generalization from our emotional reactions to concrete cases. However, our intuitions are often inconsistent. For example, we oppose actively killing an individual to save more lives, but we accept inaction that would have the same effect. We regret an involuntary action that harms other people, but we don't feel bad when we do the same thing without any harmful consequences. (For example, as Michael Slote observed, if you stray across the median and kill someone, you will feel terrible for a long time; but if you make the same driving mistake and nothing happens, you will soon forget about it.) These responses make little sense within most moral theories, but they can be explained as the result of an emotional aversion to active killing that arose in prehistoric times.

    In essence, the brain researchers believe in an Aristotelian theory of the emotions plus a Darwinian theory of morality. The Darwinian part of their account strikes me as bad news, because it suggests that our moral intuitions are instincts that developed so that our ancestors could preserve their genes. Our instincts are biased in various ways: for example, in favor of our genetic relatives. Therefore, we cannot rely on our intuitions as guides to truly good behavior, yet we are so "wired" that instincts powerfully influence us. Fortunately, the Darwinian explanation seems empirically less certain than the basic finding that emotions and cognitions are interdependent.

    The neuroscience raises but doesn't answer important normative questions: What kind of moral reasoning can we reasonably expect of human beings? Are we at our best when we rely openly and fully on emotions and/or narratives, or when we try to use moral theories or rules? Which features of human practical reasoning are good, and which are bad?

    *See, for example, Joshua Greene and Jonathan Haidt, "How (and Where) Does Moral Judgment Work?" Trends in Cognitive Sciences, vol. 6, no. 12 (2002), pp. 517-523; Steven W. Anderson, Antoine Bechara, Hanna Damasio, Daniel Tranel and Antonio R. Damasio, "Impairment of Social and Moral Behavior Related to Early Damage in Human Prefrontal Cortex," Nature Neuroscience, vol. 2, no. 11 (Nov. 1999), pp. 1032-1037.

    permanent link | comments (0) | category: philosophy

    April 25, 2005

    how to argue for the moral value of literature

    At least since Ovid (see EI.VI:1-54), some people have argued that reading fine literature improves us morally. In particular, fiction and poetry are supposed to enhance our empathy and make us more humane. This effect is a staple theme--perhaps even a cliche--of commencement addresses and English textbooks.

    Judge Richard Posner has considered that case and found it lacking. "There is no evidence," he writes, "that talking about ethical issues improves ethical performance. This is not the place to expound and test a theory of how people become moral. Genes, parental upbringing, interactions with peers, and religion must all play a role. That casuistic analysis stimulated by imaginative works of literature also plays a role is unproven and implausible. Moral philosophers, their students, literary critics, and English majors are no more moral in attitude or behavior than their peers in other fields."

    Since Posner wrote that passage, a tidbit of relevant evidence has emerged. A survey sponsored by the National Endowment for the Arts in 2004 found that reading "literature" (defined as "any novels or short stories, plays, or poetry") correlated with habits of volunteering and charity work when education, gender, income, and race were statistically controlled. This finding is consistent with the theory that stories and poems enhance human sympathies, but the data certainly do not prove causation. Much more social science would have to be conducted before we could assess the impact of various types of literature on various moral attitudes and behaviors, or compare literature to other forms of communication.
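
    For readers who want to see what "statistically controlled" involves, here is a hypothetical sketch in Python. The file name, column names, and model specification are all invented for illustration; I do not know what methods the NEA's analysts actually used.

        # Hypothetical sketch: does reading literature predict volunteering
        # once education, gender, income, and race are held constant?
        import pandas as pd
        import statsmodels.formula.api as smf

        # Imagined person-level file; every column name here is invented.
        df = pd.read_csv("survey.csv")

        model = smf.logit(
            "volunteers ~ reads_literature + education + income"
            " + C(gender) + C(race)",
            data=df,
        ).fit()

        # A positive, significant reads_literature coefficient would mean the
        # correlation survives the controls; it still would not prove causation.
        print(model.summary())

    Even a correlation that survives such controls is consistent with reverse causation (humane people may seek out literature), which is why the paragraph above stops short of a causal claim.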

    Besides, anyone who wants to claim that literature has morally good effects must explain cases in which the opposite is true. For example, I have argued (in an article and a book chapter) that Nabokov's Lolita is a devastating portrait of selfishness, moral blindness, and rape. The four-foot-ten child whom Humbert Humbert calls "Lolita" is actually Dolores Haze (sad and bewildered). She is repulsed by her rapist but maintains a certain dignity despite him.

    Unfortunately, many of Nabokov's contemporaries reached the opposite conclusion. Lionel Trilling excused Humbert by claiming that Lolita, "perpetually the cruel mistress, lacked any emotions that could be violated." Robertson Davies asserted that Nabokov's theme was "not the corruption of an innocent child by a cunning adult, but the exploitation of a weak adult by a corrupt child." Soon the word "Lolita" entered the English language as a synonym for an adolescent temptress (not a pre-adolescent rape victim).* We might conclude that Nabokov's masterpiece had bad effects; it reinforced an urge for selfish sexual "liberation" among leading (male) critics of the 1950s.

    That would be true, yet the fault would lie with the critics and not with the text. While some of the early consequences of Lolita were harmful, the book is profoundly good, and one can justify that assessment with a close reading. Even if most literature has negative effects on most people (which is surely too pessimistic), our first duty is still to use it for our own moral growth and improvement.

    In short, I believe that literature is morally justified, but not because of its consequences. One might ask what is so great about stories if even the morally best ones can have bad effects. I would answer: a good narrative, read correctly, contains moral truth--which is available nowhere else. In my current work, I'm trying to explain what makes a morally good narrative and a morally good reading.

    *One of the search terms that often brings people to this website is "Lolita." I don't think they are looking for articles about Nabokov's ethics, but I hope this entry will make them pause. The word "Lolita" derives from a book about a lecherous middle-aged man who rapes a child and then tries to justify his behavior in hundreds of pages of brilliantly insidious rhetoric. Despite Humbert's best efforts to dominate and control his readers, Dolores' perspective emerges between the lines. Abused but unbroken, she is Nabokov's greatest heroine.

    permanent link | comments (1) | category: philosophy

    April 21, 2005

    me on the radio, from down under

    People are interested right now in the "Straussians"--the somewhat cliquish followers of the late Leo Strauss, some of whom hold influential political positions in the Bush Administration. In my Nietzsche book, I argued that Leo Strauss was not the conservative proponent of natural law that he appeared to be on the surface; he was actually a secret Nietzschean with radical, "postmodern" beliefs. This interpretation became the basis of my novel Something to Hide. I've summarized the arguments in a previous blog. Recently, I was interviewed on the subject for an Australian radio program. The audio file is available here.

    permanent link | comments (0) | category: philosophy

    March 25, 2005

    ethical criticism of literature

    Wayne Booth (in The Company We Keep, 1988) observed that most people, including most sophisticated literary critics, evaluate literature ethically, asking whether particular stories are good for us to read and how we should react to them. Yet literary theory since the 1940s has usually been hostile to ethical evaluation. I've just come across an article by Noel Carroll from 2000 ("Art and Ethical Criticism: An Overview of Recent Directions for Research," Ethics, 110, pp. 350-387) that begins with a similar observation: "Of course, despite the effective moratorium on ethical criticism in philosophical theories of art, the ethical evaluation of art flourished. ... Indeed, with regard to topics like racism, sexism, homophobia, and so on, it may even be the case today that the ethical discussion of art is the dominant approach on offer by most humanistic critics, both academics and literati alike."

    At the core of Carroll's article are three theoretical objections to ethical criticism, and his response to each. I would paraphrase them as follows:

    criticism #1: The value of art cannot be ethical, because some great art has little or no ethical purpose (consider purely abstract music); and some art is good even though its ethical meaning is on balance bad (e.g., Wagner).
    response: Not all art has the same kind of value. Ethical evaluation of some genres is appropriate, but not of others. The ethical value of art is only one kind of value, but it is important.

    criticism #2: The moral propositions implied by even the best works of art are usually unoriginal, and sometimes even trivial. For example, "Perhaps the moral of Emma is that people (such as Emma) should not treat persons (such as Harriet) simply as means." But Kant was much clearer on that point. "If James's Ambassadors shows the importance of acute perceptual discrimination for moral reflection, well, Aristotle already demonstrated that."
    response: One kind of knowledge is propositional--"knowledge that." Art rarely provides such knowledge in sophisticated or original forms. But there is also "knowledge how" (i.e., skill). And there is "knowledge of what it is like," or "knowledge of what it would be like." Art provides these forms of knowledge much better than moral philosophy does. For instance, Aristotle said: Be perceptive of other people. James shows what moral perception is like, and gives us opportunities to practice it.

    criticism #3: The moral consequences of art are unresearched and probably impossible to predict. Who knows whether reading James makes people finely perceptive of others' inner states? Maybe it causes a backlash against such concerns. Who knows whether a racist novel creates racists or makes people angry about racism? Who even knows whether reading novels is good or bad for character?
    response: For thousands of years, people have been interested in the ethical meaning or structure or purpose of particular works of art, quite apart from their effects on any particular audience. For instance, we can discuss Henry James' ethical intentions in writing The Ambassadors. Or we can discuss the ethical meaning of the text (leaving James' intentions aside). It is yet a third question whether The Ambassadors has, or could have, a positive effect on readers of any particular type.

    If people misread a book, that can be because the author is insufficiently clear and persuasive (a fault in the text), or because the audience has been inattentive (their fault), or because the author holds bad values and the audience chooses to interpret him critically and subversively. In any case, the ethical function and the moral consequences of a story are different. Most readers are rightly concerned with the former, because our job in reading is to decide what a book means, not what most other people may think of it.

    permanent link | comments (0) | category: philosophy

    March 14, 2005

    evolution in schools

    Today's Washington Post reports:

    WICHITA -- Propelled by a polished strategy crafted by activists on America's political right, a battle is intensifying across the nation over how students are taught about the origins of life. Policymakers in 19 states are weighing proposals that question the science of evolution.

    Here are some scattered thoughts of mine ....

    1. Although it is desirable for public schools to be neutral about religion, pure neutrality isn't possible. To teach evolution is to put the weight of the state behind a set of views that some people find theologically abhorrent. To teach both evolution and "intelligent design" is to give arbitrarily equal attention to two doctrines, while omitting many others (including the Biblical account). To avoid offense by skipping the origins and history of life is to give members of certain denominations a veto over the curriculum for religious reasons.

    2. My opinion on this subject may not be worth anything, but I think it's a theological mistake for fundamentalist Christians to try to place creationism on an equal footing with evolution in schools, or to champion "intelligent design" as a scientific hypothesis. The Post quotes Senator Santorum: "students should be exposed to 'the full range of scientific views that exist. ... My reading of the science is there's a legitimate debate [between evolution and 'intelligent design']. My feeling is let the debate be had.'" If I were a fundamentalist, I would not accept the idea that core principles of my faith were testable hypotheses on a par with those of science--subject to confirmation or refutation. First of all, I wouldn't want to give so many hostages to fortune. What if the data do not ultimately support the existence of God--must I then agree that there is no deity? In any case, the data will not support the Genesis account, and surely it's a retreat to move from Genesis to "intelligent design." Even if I were confident that the scientific evidence would ultimately corroborate my beliefs, I wouldn't want religion to rest on data. Faith is faith. It should stand against all evidence.

    3. Civil libertarians should be aware of, and concerned about, a tension in this debate. According to the Post, "Alabama and Georgia legislators recently introduced bills to allow teachers to challenge evolutionary theory in the classroom. Ohio, Minnesota, New Mexico and Ohio have approved new rules allowing that." On one hand, it may offend the constitutional separation of church and state for an agent of the state, the biology teacher, to challenge evolutionary theory on religious or quasi-religious grounds. On the other hand, doesn't the First Amendment grant a biology teacher a right to say what he or she believes? I can probably be talked into a "gag order," but not without deep regrets about the offense to the teacher's rights.

    4. I generally like the idea of "teaching the controversy." In this case, that would mean teaching high school students some philosophy of science. I realize that schools face excessive mandates already, but I suspect that debating the meaning and purpose of science is more important than knowing most particular scientific facts and theories. Thus, for example, some assert that science consists only of conjectures that stand until evidence refutes them. In that case, Darwinism is "just a theory," and so is "intelligent design"--but so is heliocentrism. We merely hypothesize that the earth circles the sun, and we stand ready to change our theory. Is this a plausible philosophy of science, or is there such a thing as certainty (or near-certainty)? If so, it would seem that evolution has a lot more evidence behind it than intelligent design.

    Meanwhile, are articles of religious faith also conjectures that stand until evidence makes them fall? Or is science fundamentally different from religion?

    Sociology of science becomes relevant here, too. We can know very little directly about nature. Even if we make our own observations, we must use instruments and techniques that others created. Thus trust in other people is essential to science. The kind of people who believe in evolution are very different from the kind of people who believe in creationism or intelligent design. I'm not saying that one group is better than the other, only that they have radically different sociologies. The evolutionists dominate biology departments at Research-1 universities. The proponents of intelligent design mostly work at independent outfits funded by wealthy fundamentalists, or in academic departments other than biology. On one view of the sociology of science, the dominant strand is just more powerful: it's the one with money and prestige. That's what Senator Santorum means when he says: "Anyone who expresses anything other than the dominant worldview is shunned and booted from the academy." (Note: this kind of diagnosis is also common on the postmodern left.) On a different theory, mainstream science is a self-correcting, transparent, rational community. Students, as budding citizens, need to develop informed opinions about science and scientists.

    5. Fundamentalist opponents of evolution may do some damage if they prevent our students from gaining access to modern biology--not to mention geology, medicine, anthropology, physics, psychology, and other disciplines that have embraced the notions that the earth is very old and that natural selection explains many biological changes. The damage is likely to be worst for young people who come from relatively sheltered--and often disadvantaged--backgrounds.

    However, I am at least as worried about the threat from today's "Darwinian fundamentalists," who believe that almost all important social, economic, and even moral questions can be answered by speculating about what traits must have increased our ancestors' chances of survival in the early Pleistocene. We are evolved, physical creatures with certain inherited limitations. But we know much less about these limitations than many pop-Darwinians claim. Besides, our evolved traits or tendencies do not tell us much about what is valuable. Roaches are very durable and "fit" (in the Darwinian sense), whereas tigers only survive today on human charity. Yet it is important to be able to see that tigers are beautiful and priceless. The equation of the fit with the good is a great mistake, more characteristic of our age than religious fundamentalism.

    permanent link | comments (0) | category: philosophy

    March 2, 2005

    the computer as a metaphor for the brain

    Last Friday, some colleagues and I discussed a very strong paper by Joe Oppenheimer et al. that bridged rational choice theory and cognitive psychology. The authors of the paper (and the texts they quoted) said that memories are "stored," "linked," "tagged," "called up," and "retrieved" by the brain. These metaphors came originally from various domains of human activity. (I suppose that shopkeepers store things, dogs retrieve things, and archaeologists tag things--to name just a few uses). However, the proximate source for all these words, obviously, is computer programming. Without thinking twice about it, we use the computer as an analogy for the human brain. This analogy can be illuminating, but we must be careful to remember that it is not literal. Brains are like today's computers in some respects, but not in many others. It struck me that in John Locke's day, the main metaphor for the brain was painting: i.e., representation of sense-data on flat surfaces. Painting was a very advanced technology in 1700--better than it is today. But it was an imperfect metaphor for cognition, and so is computing in our era.
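
    Since the analogy carries so much weight, it may help to display the source domain literally. Here is a toy sketch in Python of the operations that supply the vocabulary--storing, tagging, linking, and retrieving records. Everything in it is invented for illustration, and it makes no claim about how brains actually work; it only shows what the borrowed words mean at home.

        # Toy illustration of the computing operations behind the metaphor.
        memories = {}   # store: key -> record
        tags = {}       # index: label -> set of keys

        def store(key, content, *labels):
            # File a new record and index it under zero or more tags.
            memories[key] = {"content": content, "links": set()}
            for label in labels:
                tags.setdefault(label, set()).add(key)

        def link(key_a, key_b):
            # Cross-reference two records, in both directions.
            memories[key_a]["links"].add(key_b)
            memories[key_b]["links"].add(key_a)

        def retrieve(label):
            # "Call up" every record filed under a tag (order may vary).
            return [memories[k]["content"] for k in tags.get(label, ())]

        store("m1", "a day at the beach", "summer", "family")
        store("m2", "learning to swim", "summer")
        link("m1", "m2")
        print(retrieve("summer"))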

    permanent link | comments (0) | category: philosophy

    February 24, 2005

    why Dante is "good to think with"

    The Cambridge philosopher Myles Burnyeat says that Plato is "good to think with" (pdf, p. 20). I believe the same of Dante, which is why I chose to write a book about current moral issues by interpreting sections of the Divine Comedy. Like Plato's dialogues, the Comedy is a concrete story in which abstract ideas appear as statements by embodied characters in specific historical circumstances, who attempt (to various degrees) to live by what they say. In both works, the question of irony arises. Plato is not Socrates, and Dante-the-poet is not Dante-the-pilgrim. It isn't clear what the author thinks of his main character's views.

    It is not obvious why we should use old literary works to think about current moral issues, especially if the authors of those texts refused to say straightforwardly what they believed. However, the humanities are premised on the idea that we should think with novels, dialogues, and other narratives.

    One explanation is that any text from the distant past provides an alternative perspective on the world. For instance, the Divine Comedy helps us to understand what it would be like to see everything (historical events, the behavior of animal species, even the movements of the stars) as if it had a moral purpose. But I must say that I do not find a morally teleological universe at all plausible; thus it may be interesting to understand Dante's medieval teleology, but it is not life-altering. Perhaps it would be more challenging for a modern democrat to take seriously Dante's celebrations of aristocratic and martial virtues.

    However, Dante's exotic perspective is not what I find most useful in him. The Divine Comedy is good to think with because it embodies several moral perspectives in vivid characters and situations--including the character of the author. Embodying moral values is how we must think if we want to make really serious ethical choices.

    Philosophers often hope to be able to construct persuasive moral arguments that run inevitably from premises to conclusions. So, for example, Robert Nozick argues that if you value freedom, then you cannot favor schemes to guarantee particular distributions of wealth. Peter Singer argues that if you believe that we must minimize the quantity of suffering in the world, then you cannot permit vivisection. Judith Jarvis Thomson argues that if you believe that individuals may refuse to be involuntary life-support systems for other individuals, then you must permit abortion in cases of rape and incest.

    Impressive as some of the arguments are, they have two major limitations. First, there is substantial and reasonable disagreement about the premises that generate the conclusions, and there may never be arguments strong enough to decide the premises. Second, there cannot be abstract arguments that address a wide (and crucial) range of questions involving our choice of a life or our valuation of characters and institutions. It is simply implausible that an argument, abstracted from context, could decide whether I should lead an active or a contemplative life, advise the powerful or seek power myself, pursue civic engagement or study mathematics, raise children or devote myself to work, or prefer the political economy of Norway to that of Hong Kong (or vice-versa). To grapple with such issues, we need detailed, thick descriptions that give us portraits of whole situations over time.

    Thus, when I wanted to consider whether it was better to take moral guidance from stories or from philosophical principles, I found it most illuminating to think with a story--the Divine Comedy--in which that choice is a major theme, woven into the structure and not merely talked about. The tension between Dante's love of human particularity and his commitment to abstract principles is embodied in the narrator's ambivalence toward his main character; in the gradual but relentless movement from concrete and emotional narrative toward abstract speculation; and even in the metrical scheme, terza rima, which marries a metronomic regularity to great variety of rhythm and texture. Thus all aspects of literary criticism, including formal analysis, can help us to identify the values of Dante-pilgrim and of Dante-poet, and to decide whether we should agree with either of them.

    permanent link | comments (0) | category: philosophy

    January 24, 2005

    strategy, for intellectuals

    In a comment on last Thursday's post, Michael Weiksner argues that political theorists employ a "high risk/high return" strategy for social change. They develop comprehensive, sometimes radical arguments that can be used in public debates. Mostly, such arguments have little influence, if only because there is no organized constituency or institution with the capacity to realize them. "But every now and again, you have Machiavelli or JS Mill or Rawls, and their frameworks impact society for decades or longer." In contrast, Michael says, people like me take a "hedged position." We work closely with practitioners and communities. This strategy increases our odds of making a small difference but rules out any major effect. For instance, as a result of the projects I'm involved in, some day there may be better civics courses in high schools. There will definitely not be a new social order.

    One problem with the high-risk strategy is that it may achieve catastrophically bad results. From Plato through Calvin to Marx, many of the most influential theorists have been, in my opinion, disastrously wrong. They have been wrong precisely because they have not been anchored in practical experience.

    But there are also drawbacks to the low-risk strategy. Some thinkers who are deeply immersed in practice suffer from narrow horizons or excessive caution. John Dewey was an exemplary "engaged scholar," yet he made some spectacularly bad calls (applauding World War I and opposing US entry into World War II, for instance). In any case, there is nothing dangerous about most of today's highly abstract political theory. For example, Elizabeth Anderson's arguments against natural property rights, posted on left2right, were what originally got me thinking about the role of political theory. If Anderson were somehow to influence popular opinion, no harm would follow--perhaps some good.

    Nevertheless, I'm against the high risk/high return strategy for a different reason, one that's specific to our time. Mainstream political philosophy has long been consumed with questions of distribution--who should get what goods and rights. For most liberals, property should be redistributed (to some limited degree). For most libertarians, existing property distributions should be left alone. I suppose that on a completely theoretical level, I lean the liberals' way. But I see two problems with this whole debate:


    1. The only mechanisms we have for distributing wealth and protecting rights are the actual governments that exist today. I can argue for equality of opportunity (or even for some degree of welfare equality), but I cannot defend the proposition that our government spends our money very effectively, transparently, accountably, or equitably. Thus a debate about how much wealth individuals should keep and how much should be redistributed is fundamentally sterile. It's politically irrelevant because it doesn't confront the main argument against government, which is not libertarian but pragmatic (i.e., government doesn't work very well). The debate is also normatively weak because it assumes that we can have better institutions than we do. That's like the economist on the desert island who "assumes a can-opener."

    2. The premises of the debate are zero-sum. Politics, according to both left-liberals and libertarians, is a mechanism for distributing the goods that already exist. But citizens also have constructive potential; through politics, we can make new goods. Of course, one could develop an abstract political theory in favor of political creativity. But I suspect this would be very vague and unpersuasive unless it were anchored in current examples.

    By coincidence, I recently read two separate scholarly descriptions of government in Hampton, VA. Hampton is an old, blue-collar city, not in any way privileged. Yet the city has thoroughly reinvented its government and civic culture so that thousands of people are directly involved in city planning, educational policy, police work, and economic development. The prevailing culture is deliberative; people truly listen, share ideas, and develop consensus, despite differences of interest and ideology. Young people hold positions of responsibility and leadership. Youth have made believers out of initially suspicious police officers and school administrators.

    Imagine that the whole country were more like Hampton. Then we could have a really interesting debate about distribution. If some people argued that the government should tax and spend more, their fellow citizens would see the potential advantages. They would feel capable of influencing the use of tax money. Meanwhile, libertarians would be able to make arguments in favor of markets and individual freedom; they might even prevail. But the whole discussion would be the opposite of sterile.

    So how can intellectuals help to make America more like Hampton? First of all, they should be aware of the civic innovation that is going on today. Thanks to the scholars who identified Hampton as a site of innovation, I was able to read two articles about that city. These scholars found Hampton through their own networks of practitioners. We need plenty more of that kind of writing. Second, we must grapple with the subtle and difficult issues that all such cases raise. How did Hampton get where it is today? Are its achievements sustainable? Are they replicable? Is the city's deliberation truly inclusive? Does all that participation generate good economic and social outcomes? Is democracy worth all the time people have to spend in meetings? To me, these are the crucial questions, much more likely to yield real social change than any novel argument in favor of equality.

    permanent link | comments (2) | category: philosophy

    December 31, 2004

    particularism and coherence

    I'm a moral particularist. I believe that some words and concepts have moral significance, but we can only tell whether they are good, bad, or neutral in particular cases. Abstracted from any specific context, they have indeterminate significance. Examples include love, loyalty, pleasure, courage, and generosity. These words and concepts are indispensable. We cannot replace them with ones that have determinate and predictable significance without oversimplifying morality. Therefore, moral judgment ought to be about whole situations, not about abstract concepts.

    I'm also a cultural particularist. I believe that people have large sets of values, experiences, preferences, and opinions that jointly constitute their cultures. Often, many people who live at roughly the same time and place share a lot of ideas, values, etc., and then we say that they belong to the same culture. However, there is usually no single perspective, worldview, premise, or foundation that defines or underlies their culture. Thus there may be no precise boundary to a culture; and we can often classify one person as a member of several cultures at once. Some philosophers have argued that the various cultures of the world are fundamentally incompatible or unable to comprehend one another. But every member of a complex, reasonably free society will have slightly different ideas, experiences, and values, so each person can be described as having his or her own culture. This is a reductio ad absurdum; it suggests that there can be no deep incompatibility among cultures (or else no one could understand anyone else). If everyone in a society does share exactly the same set of values, then we suspect that they are deprived or politically repressed.

    These two forms of particularism are independent and separable, but they go together well. The combined position has implications for moral reasoning and the humanities. I'm spelling out the implications in my book on Dante, which is nearly finished.

    However, I recently realized that there is a phenomenon that particularists have difficulty explaining: coherence.

    [Substantially revised on Jan. 3]: If "morality" is the set of all the right judgments about all the situations in the world, then I don't actually believe that it is very coherent. We should respond the same way to any two situations that have exactly the same morally significant empirical features. Beyond that, there is not much coherence to morality: there are just incorrect and correct judgments of cases.

    However, cultures are different from "morality." They are plural, because they consist of aesthetic, spiritual, and moral judgments as well as goals, preferences, empirical beliefs, and expectations about certain questions. Why are cultures often internally coherent? Why is the set of values and preferences held by a group often harmonious, not completely random and unpredictable? We could also ask this question about individuals. Why do all the opinions and values of a person tend to cohere, at least to some degree?

    I acknowledge the phenomenon of coherence. I suspect it arises for several reasons. First, some people have a preference for coherence itself. They believe that all their separate judgments ought to arise from as few premises as possible. They also believe that everyone should share these premises. If this wish came true, then the whole society would become uniform and consistent. We see a preference for coherence in Calvinists, utilitarians, Marxists, and Freudians, among others--despite their deep disagreement about virtually everything else. This preference is not a good thing, in my view. It doesn't reflect some deep truth about the universe (i.e., that everything must follow from one or a few assumptions). On the contrary, it causes people to force situations into a Procrustean bed. But I recognize that the preference is widely held, and it has caused people to make their various views cohere.

    Second, some people are deeply influenced by a few stories or situations. Whether one is especially moved by the Passion of Christ, the Holocaust and the foundation of Israel, or one's own rags-to-riches story, it can have a powerful influence on many or all of one's moral judgments. If a group of people constantly refer to a few stories or situations, then they will share a culture that is relatively coherent. As a particularist, I don't believe that any story ought to provide the foundation of morality. But I acknowledge that a story may provide the basis for all the views of certain people and societies at certain times.

    Third, institutions prize coherence--and for good reasons. For example, even though the best moral judgments arise only from careful consideration of particular cases, we worry that real judges and juries may be biased, incompetent, or simply unpredictable. Therefore, we define legal concepts in general terms and ask courts to enforce them almost mechanically. This is beneficial even though there must be an imperfect match between law and morality. Religious denominations and educational institutions also pursue a degree of internal coherence. Furthermore, it is useful for the various institutions of a single society to harmonize with one another. Thus there is social pressure toward coherence, for basically pragmatic reasons.

    permanent link | comments (0) | category: philosophy

    December 28, 2004

    aesthetics and history

    Last week in Bruges, Belgium, at the medieval Hospital of St. John, we saw an altarpiece by Hans Memling that's sometimes entitled the "Mystic Marriage of St. Catharine." (The picture to the right is just a detail; click here for a photo of the whole original painting.)

    Even if you knew nothing about this work, you might like it--not necessarily in a digital photograph, but in its original 31 square feet of paint. The figures are extraordinarily realistic. The cloth is rich; the colors are luminous and balanced. The woman wears an expression of repose and kindness. Her pale white skin, the ruddier skin of the man behind her, and the wool of the lamb create interesting tactile contrasts. However, if you somehow thought this were a modern illustration, you might not give it a great deal of thought. You would have to acknowledge the artist's technique, since practically no one can paint light, texture, and skin so naturalistically today. But then again, naturalistic oil painting isn't very useful now that we have color photographs. And if the image turned out to be a photo of models in medieval clothing, it would be downright strange.

    Actually, the altarpiece was painted between 1474 and 1479. That fact makes it much more beautiful than it would otherwise be, I believe. But how can an external fact increase the beauty of an image? The colors would be as rich and harmonious if they had been painted yesterday.

    I think that the date and provenance of a work are relevant to its aesthetic value--for two reasons. First, a painting can evoke a whole lost culture. Flanders in the 15th century was cruel, superstitious, oppressive, dirty, and sometimes vulgar. (There is even some vulgarity in the right wing of the "Mystic Marriage of St. Catharine.") The same civilization was also dynamic, prosperous, and vigorous--the world's leader in international commerce--yet capable of spiritual purity and calm. An image like Memling's altarpiece reflects the best of its entire cultural milieu, which greatly increases its beauty.

    Second, a great work from the past belongs to the "history of art." We tell this story as a series of discoveries and revolutions (borrowing ideas from other fields of history). It is a heroic tale, beginning with the Archaic Greeks and ending with Picasso and Matisse, if not with post-modernism. Each era or movement is described as solving problems or overcoming prejudices inherited from the past. Once the great artists of a particular moment have solved their problems, we no longer admire repetitions of their success. Thus Memling is impressive because he can imply complex interactions among multiple figures much better than his predecessors, Van Eyck and Van der Weyden, could. But any journeyman artist of the 17th century could place eight people in an organized open space and show how each related to the others. So what is original in Memling is commonplace two centuries later. And what is original is also beautiful, because we view the whole history of (Western) art as a moving narrative.

    Our emphasis on the historical development of art is itself a feature of our own civilization, not something universal. The first people to tell heroic stories about the development of art were Pliny and Vasari, each coming after a great era of creativity. Their way of appreciating painting and sculpture works perfectly in a secular museum, less well in a temple or a church, which has a different purpose. Memling himself would have had a very limited understanding of the history of art, as shown by the fact that he placed biblical figures in late-Gothic, Flemish settings. Yet our historical sense is what makes us find Memling so beautiful.

    permanent link | comments (1) | category: fine arts , philosophy

    December 15, 2004

    against "cultural preservationism"

    Near the end (p. 227) of Anne Fadiman's The Spirit Catches You and You Fall Down (which I discussed on Monday), there's a dialogue between a doctor and a psychotherapist. They have been talking about Lia Lee, the Hmong girl whose treatment for epilepsy violated several basic Hmong beliefs. I've reformatted Fadiman's paragraphs into a mini-dialogue:

    Physician: You have to act on behalf of the most vulnerable person in the situation, and that's the child. The child's welfare is more important than the parents' beliefs. You have to do what's best for the child, even if the parents oppose it, because if the child dies, she won't get the chance to decide twenty years down the road if she wants to accept her parents' beliefs or if she wants to reject them. She's going to be dead.
    Psychotherapist (tartly): Well, that's the job you have taken on in your profession.
    Physician: I'd feel the same way if I weren't a doctor. I would feel I am my brother's keeper.
    Psychotherapist: That's tyranny. What if you have a family who rejects surgery because they believe an illness has a spiritual cause? What if they see a definite possibility of eternal damnation for their child if she dies from the surgery? Next to that, death might not seem so important. What's more important, the life or the soul?
    Physician: I make no apology. The life comes first.
    Psychotherapist: The soul.

    The psychotherapist mentions beliefs about the after-life, which are especially thorny because no one can know what happens after death--there is no empirical evidence. If a treatment saves lives but causes damnation, then one should certainly forgo the treatment. However, just because parents believe that a treatment will put their child's soul in peril of eternal torture, that doesn't make them right. Parents do not own their children. As I argued earlier in discussing the Amish, there is a profound conflict between children's freedom and parental freedom. I believe that a liberal state should protect children against their parents, although it is harrowing to read about California's unjust and harmful decision to take custody of Lia Lee.

    In any case, the Hmong don't believe in eternal damnation. Although Lia's parents were concerned about what would happen to her reincarnated soul if her blood were drawn (violating a taboo), that was not the main problem. The main problem was their belief in the efficacy of traditional Hmong healing and their skepticism about the effects of Western medicine. In short, they thought that a Hmong shaman could cure their daughter, while American doctors were making her worse. Fadiman argues that there was some limited truth to this; the physicians made serious errors, whereas Hmong shamans are non-invasive healers who work only on the spiritual level and often get good psychological results. They would have done Lia no harm and might at least have helped her parents.

    But ultimately, Western medicine is going to work better than Hmong shamanism for a lot of diseases. Hmong people are learning this; some are even becoming doctors. Thus their traditional culture is bound to change. Even if they preserve shamanistic medicine, it will have a new meaning for them. They will either use it to fill gaps left by Western medicine (especially psychiatry), or they will choose to preserve it because of its cultural significance. But a ritual performed because it is traditional is fundamentally different from a ritual performed because it cures a disease.

    Cultural institutions address problems and must change when they are no longer effective. Sometimes there is a lag, because people understandably cling to what they know; but there is no way to stop history. Contrary to the racist articles that described Hmong immigrants as moving out of the Stone Age when they reached America, they had been part of history all along. In fact, they had participated in high-tech battles and suffered a holocaust during the Vietnam War. Some had learned to fly fighter jets. And this was by no means the first time that they had adjusted to a changing world.

    The argument against preservationism also applies to cases in the West. For example, some people want to preserve jobs for Yorkshire coal-miners and Chesapeake watermen. But their ways of life no longer make sense. Coal is expensive and bad for the atmosphere; crab-trapping doesn't pay. Preserving these traditional jobs and cultures would require state subsidies or new business models based on tourism instead of commodity sales. A tough, blue-collar culture must change fundamentally if its function changes. It cannot be preserved, because its traditional values included efficiency and self-sufficiency, and those are gone. The only way is forward.

    permanent link | comments (2) | category: philosophy

    November 21, 2004

    humanistic versus technical philosophy

    My two good friends from as early as kindergarten, the brothers Marcus and Jason Stanley, are guest-blogging with Brian Leiter. Lately, they have considered the very question that I have been writing about as I try to finish my current book-in-progress: the distinction (if there is one) between humanistic and technical philosophy.

    My expertise, to the extent that I have any, is strictly limited to moral and political questions. In those fields of philosophy, there are not two distinct camps, the humanists versus the technical analysts. But there are two poles in a continuum. The same continuum defined moral philosophy in the Renaissance, when humanists (writers and teachers who practiced the studia humanitatis) challenged the highly technical Scholastics, who saw philosophy as a science. I believe that we should move closer to the humanistic pole today, reviving certain aspects of Renaissance humanism. [Warning: The rest of this post is long, because I've pasted a section from my book into it.]

    "Technical" moral philosophy resembles medieval scholasticism in several important respects. First, technical ethicists (like the Scholastics) usually analyze raw materials that come from outside of contemporary academic philosophy. For the most part, they analyze intuitions--i.e., the judgments and opinions of contemporary people, especially those who are socially and culturally similar to the author--or canonical doctrines from the past, such as Kantianism and utilitarianism. Philosophers strive to make these raw materials more consistent and clear and reject any aspects that prove fatally contradictory.

    In my view, however, philosophy is unsatisfactory if all it does is to analyze exogenous data, whether modern intuitions or doctrines from the past. The best moral philosophy has been synthetic and generative rather than merely analytical. Philosophers have proposed new and challenging moral ideas. Today, analytical moral philosophers sometimes achieve novel results by applying canonical doctrines in new ways. (For instance, Peter Singer showed that certain forms of utilitarianism bar the exploitation of animals.) At least as often, they debunk received moral opinions by showing that these ideas cannot be stated in highly clear and consistent language. But we need moral opinions, even if we cannot state them in perfectly clear and mutually consistent ways. Indeed, clarity and consistency are easily overrated. We are better off wrestling with a set of incompatible, partial, but demanding truths, rather than retaining only the ones that fit comfortably together. In any event, it is unlikely that our store of canonical theories and conventional judgments is satisfactory, even once analyzed and made consistent. To renew its traditional role, philosophy must generate and defend moral ideas, rather than merely refine or reject existing ones.

    Second, technical ethical philosophy is ahistorical. Philosophers are, of course, aware that cultural change occurs. Yet their efforts to refine and restate pre-modern philosophy often resemble Aquinas's reconstruction of Aristotle. For instance, a reconstructionist reading of Kant's moral theory does not ask what Kant meant to say. He was a pietist from eighteenth-century Königsberg who held many superannuated beliefs that need not concern us. Rather, the point is to develop a true doctrine by retaining and clarifying persuasive aspects of Kant's writing while jettisoning the rest. This was exactly the Scholastics' approach to Aristotle.

    Again, I think this is a largely misguided method for moral philosophy. It may make sense in other fields. For example, Strawson wrote: "When I allude to the system of Leibniz, I will scarcely be troubled if the doctrines I discuss are not at each point identical with the historical doctrines espoused by the philosopher called Leibniz." Leibniz was simply a good aide for Strawson as he considered metaphysics. However, the raw materials of moral analysis--the intuitions of the present and the philosophical doctrines of the past--are always reflections of local circumstances. They arise because of people's experiences in the world, including the representations and stories that they have found persuasive. Moral ideas are never self-evident, axiomatic, or self-justifying, although they may appear self-evident to people who have narrow horizons. Nor are moral ideas and judgments self-contained: they always assume and imply numerous other ideas. Philosophers should treat intuitions and philosophical theories as cultural phenomena that must be understood before they can be judged--and that can only be understood in context.

    Third, the style of analytic philosophy is third-person exposition. There is no reason to wonder whether the author whose name appears on the title page actually holds the views that are described, as unambiguously as possible, in the contents. Nor is there much reason to wonder about the context, audience, or motivation of the work. To learn that the author has a hidden agenda or fails to follow his own moral advice is merely to engage in gossip; the value of a book lies exclusively in its arguments. Note, however, that this was not true of some of the best moral philosophy of the past, in which questions of irony, intention, and context were complex and essential.

    Finally, "technical" moral philosophy adopts an implicitly superior position vis--vis the narrative arts, such as history and fiction. These arts generate stories; moral philosophy decides whether the judgments and intuitions supported by such stories are correct. The superiority of moral theory was more explicit and uncontroversial in the Middle Ages. Then, most writers described the various disciplines not as independent ways of thinking, but as parts of an overall hierarchy of knowledge. For instance, theorists constructed many rival lists of the seven liberal arts, but all lists described a progression from the elementary disciplines of the trivium (from which we derive the word trivial) to the advanced sciences of the quadrivium. Some theorists placed moral philosophy and theology in the quadrivium; others saw them as higher pursuits than all seven of the liberal arts. But consistently, medieval theorists assumed a progression from grammar and rhetoric toward philosophy. The former disciplines were simply tools for communicating truth (or falsehoods). They were taught by exposing students to Latin stories and speeches. Students were expected to master grammar and rhetoric early, so that they could proceed to study truth as revealed by philosophy and theology. These disciplines, in turn, were abstract and encyclopedic, not concrete or based in narrative.

    Renaissance humanism ultimately undermined the medieval system. We sometimes think of it as a new set of philosophical doctrines about the dignity and value of human beings. On this view, Pico della Mirandola's Oration on the Dignity of Man is the central text. But Pico was neither original nor highly influential. His ideas would have been broadly familiar a century earlier, although he knew more Greek and wrote better classical Latin than his medieval predecessors. He was part of a philosophical tradition that continued for at least the next century--mystical, eclectic (in the original sense), and speculative--but he had little to do with humanism.

    A better way to understand humanism is as a revolt of the trivium. The first people to call themselves humanists were independent tutors who provided advanced undergraduates with instruction in grammar and rhetoric. They taught what they called the studia humanitatis on the side, while the university's formal curriculum emphasized logic and theology. Parents paid for this humanistic instruction because they wanted their sons to learn eloquence to succeed at court or in the law. Humanist pedagogy consisted of reading and imitating ancient narrative authors, with attention to style and form, plot and character.

    The truly innovative and representative works of Renaissance humanist philosophy do not consistently endorse the dignity of human beings. If they have anything in common, it is not any doctrine, but rather a similarity of form. Many are literary texts that are explicitly concerned with character, context, voice, irony, and plot. In each case, the role of philosophical argumentation is itself a theme. Thus, for example, Thomas More's Utopia contains a blueprint of a society, complete with arguments for why that polity is ideal. In this respect, it resembles Rawls's Theory of Justice. However (just as in Plato's Republic) the account of an excellent society is set in a complex and deliberate literary and rhetorical frame. The narrator, also an Englishman named Thomas More, is visiting Flanders on a mission for his king. He meets a friend and colleague named Peter Giles, who is talking by accident with an old and somewhat ragged man whom More takes for a sailor. "But you are much mistaken," said Giles, "for he has not sailed as a seaman, but as a traveler, or rather a philosopher." (He is later described as a friend of Plato.) This man's name turns out to be Raphael Hythloday, and he relates how he had debated economics with a lawyer in the very household in which More had been raised: that of Cardinal John Morton. The scene retold by Hythloday involves not only his opponent (the lawyer) and the even-handed Cardinal, but also an incompetent jester who speaks truths, and a hot-tempered friar.

    More is so impressed with Hythloday's recollected arguments that he tries to persuade the wise traveler to become a counselor to princes--as More is. Hythloday responds that his advice, based on philosophical arguments and experience, would be so radical that no one would pay him any attention; so he prefers a private life (the opposite of More's). Several times in the course of this discussion, Hythloday alludes to a superior society that he had visited, called Utopia ("No Place"). The character Thomas More doubts Hythloday's philosophical position--which is an attack on private property--but he seems to recognize that the concrete existence of a real society superior to modern England might be persuasive. Thus he earnestly begs Hythloday to "describe that island very particularly to us." There follows Hythloday's description of Utopia.

    The Praise of Folly is a book by More's friend Erasmus. (In fact, the Latin title, Encomium Moriae, could be translated as "Praise of More," an inside joke.) It is a speech by Folly eulogizing herself. Self-praise is always foolish, and anything that fools say is the opposite of wise; so one might assume that every claim that comes out of her mouth is the precise reverse of the truth. Thus, for example, when Folly calls scholastic theologians her servants and praises them for interpreting scripture and history as illustrations of abstract truths--without concern for literal details or authorial intentions--it seems clear that this is Erasmus's attack on those methods. However, Folly is extraordinarily learned (if fallible); and some of her arguments resemble those that Erasmus made elsewhere under his own name, for instance, his critique of monastic orders. She even quotes and compliments him.

    Finally, consider Machiavelli's Prince. This book looks like a treatise on government, an argument in favor of tyranny. But it is also a letter written by the exiled and recently tortured author to a particular prince at a particular moment. Therefore, some readers have long suspected that Machiavelli was deeply ironic. As Rousseau wrote: "Machiavelli was a proper man and a good citizen; but, being attached to the court of the Medici, he could not help veiling his love of liberty in the midst of his country's oppression. The choice of his detestable hero, Caesar Borgia, clearly enough shows his hidden aim." This may not be an accurate theory of Machiavelli's motives, but the fact remains that Machiavelli is a character in the Prince, living in particular historical circumstances, writing with particular motives, and not necessarily identical to the author. It is possible that he is as much of a fool as Folly--or Hythloday, or Erasmus, or More.

    Each of these works invites us to ask whether the author agrees with the doctrines that are expressed inside its complex narrative frame. There is a layer of ambiguity that violates the modern (or Scholastic) philosopher's preference for clarity. We cannot paraphrase a humanistic work without losing its significance, whereas a modern philosophical argument is supposed to be subject to restatement and summary. In order to assess the intended purpose of these books--which is only one of several questions we might pursue in interpreting them--we must explore the immediate context in which they were written. For instance, Machiavelli's real relationship with the Medici is relevant to interpreting The Prince.

    I believe that the humanists meant something very serious by adopting the forms that they did. They assumed that philosophical arguments were important, but not universally binding. Moral arguments were appropriate to particular people in particular settings. They were always partial truths, because other people, differently situated, could legitimately hold and believe different values. This did not mean that ethics was a matter of individual preference and taste. But readers always had to ask whether the reasons and conclusions of any speaker were relevant to them. This question required a holistic judgment of the circumstances described in the text and those of the reader. Since all the circumstances had to be considered together, humanist authors described settings, personalities, and even facial expressions as well as arguments.

    Humanists derived all of these literary devices from classical philosophy. They were able to do so because they paid attention to the literary qualities of texts by Plato, Cicero, and Plutarch, their favorite moral philosophers. Whereas a Scholastic reader would consider a doctrine of Plato (probably via a medieval Islamic treatise), the humanists debated the character of Socrates, his rhetorical figures, and his behavior under various concrete circumstances. Since their greatest books made use of dramatic irony, it seems likely that they treated Plato's dialogues, too, as possibly ironic.

    Today, moral philosophy could take at least four forms if it became more of a humanistic discipline. First, philosophers could tell stories with moral themes. Fashioning plausible and moving fiction is a special skill not often possessed by people who are also good at philosophical analysis, although Iris Murdoch, Rebecca Goldstein, and a few others have shown that this combination remains possible. In any case, philosophers have another option, which is to write true stories in order to highlight moral themes. A philosopher's version of a narrative would be distinctive. Compared to historians and novelists, philosophers are more explicitly concerned with moral analysis and more likely to put theoretical arguments in the mouths of characters; but they can still write concrete and particular narratives. An extraordinary example is Susan Brison's autobiographical Aftermath: Violence and the Remaking of a Self.

    Second, philosophers could closely read fictional and historical stories and legal testimony in order to elucidate moral themes. A fine example of a philosopher's close, sensitive, and original narrative interpretation is Richard Rorty's chapter entitled "The Barber of Kasbeam: Nabokov on Cruelty." Rorty uncovers a subtle but moving subtext in Lolita and uses it to illustrate the theme of moral obliviousness, which (in turn) motivates his form of liberalism.

    A moral philosopher who reads narratives ought to borrow some methods and concerns from the other humanistic disciplines. Thus, for example, Rorty rightly considers literary issues such as point-of-view, style, and irony, as well as historical issues such as context and audience. At a more practical, everyday level, professional interpreters ought to read their texts in the original languages (whenever possible) and trace allusions and other intertextual references. Whereas a conventional modern work of analytical philosophy is meant to be self-contained, narratives almost always incorporate other stories by reference.

    On the other hand, moral philosophers need not simply replicate the methods of literary critics and historians. Critics examine single works or combinations of texts that share common authorship, genre, or provenance. They often (and appropriately) investigate matters that have little bearing on moral judgment. Historians study periods, traditions, or communities--and, like critics, they often investigate non-moral questions as well as moral ones. In contrast, moral philosophers should look for common moral themes, not only in literary texts and episodes from the past, but also in legal testimony, contemporary newspaper accounts, and hypothetical cases. Furthermore, moral philosophers have a comparative advantage when they analyze the explicitly theoretical statements that literary and historical characters and narrators often make. While these statements and arguments should be understood in the context of the overall genre and purpose of the works in which they appear, they should also be analyzed--a task that philosophers can perform especially well.

    A third approach to humanistic moral philosophy is to look for patterns and developments in the history of ideas. For example, in After Virtue, Alasdair MacIntyre tells a story about the progressive loss of teleology--of a sense that human life aims toward some knowable end--in Europe after the Middle Ages. More modestly, Seyla Benhabib once showed that classical liberals, despite their claim to reason a priori from the state of nature, actually drew a line between the public and private spheres that mirrored the traditional distinctions between male and female work-roles. MacIntyre and Benhabib both practice genealogical criticism, arguing that widely shared assumptions are based on suspect moves made at particular points in the past.

    Finally, philosophers who are humanists can help to recover attitudes and frames of reference from past or distant places that challenge widespread current assumptions. Clifford Geertz writes, "The essential vocation of interpretive anthropology is not to answer our deepest questions, but to make available to us answers that others, guarding their sheep in other valleys, have given, and thus to include them in the consultable record of what man has said." Anthropologists are very good at this, as are historians and critics; but sometimes it takes a philosopher, steeped in the distinctions of moral theory, to recognize the hidden moral assumptions of a distant time or place. An example is the concept of moral luck--incompatible with both Christian and liberal thought--that Bernard Williams discovered in Greek tragedy. It is possible to describe moral luck as a doctrine: we are not in control of our moral condition, but can be made better or worse by chance. However, I find it much more fruitful to see moral luck as a theme, a tendency in particular circumstances for individuals to become better or worse by sheer luck. Williams's analysis of moral luck does not prove that it is a correct theory (which would imply, in turn, that Kantian and Christian ethics are fundamentally mistaken). In fact, the contrast between Greek notions of moral luck and modern ethics seems fairly intractable. But Williams performed a major service in revealing a lost theme.

    permanent link | comments (1) | category: philosophy

    November 18, 2004

    how deep is cultural diversity?

    "Historicism" is the view that our values are phenomena of our cultural backgrounds and contexts; and contexts differ from time to time and place to place. Although even the ancient Greeks recognized some degree of moral diversity, true historicism was a discovery of the nineteenth century.

    However, the modern natural and social sciences have suggested that some important aspects of psychology are common to all members of homo sapiens, the results of our evolved physical natures. For example, it appears that all people place a higher value on a certain gain than on a probable gain of much greater expected worth; but they have the opposite view of losses. For related reasons, people will go to great lengths to save $5 on a $10 purchase (fifty percent off!), but will not inconvenience themselves to save exactly the same $5 on a $125 purchase. A loss of money reduces happiness more than an equivalent gain increases it.
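
    (To make the shape of these findings concrete: the loss-aversion and proportional-saving claims are often summarized with a prospect-theory-style "value function." The Python sketch below is only an illustration; the curvature and loss-aversion parameters are the standard estimates associated with Tversky and Kahneman, assumed here rather than established, and the $5 example is simple arithmetic.)

        # Illustrative only: a prospect-theory-style value function. The
        # curvature (0.88) and loss-aversion (2.25) parameters are the
        # standard Tversky-Kahneman textbook estimates, assumed here.
        def value(change, alpha=0.88, loss_aversion=2.25):
            if change >= 0:
                return change ** alpha
            return -loss_aversion * ((-change) ** alpha)

        # A loss lowers the value index more than an equal gain raises it:
        print(round(value(100), 1), round(value(-100), 1))  # 57.5 -129.5

        # And the same $5 saving looms large or small with the price,
        # because we judge it as a proportion rather than an amount:
        for price in (10, 125):
            print(f"$5 off ${price} = {5 / price:.0%}")  # 50% vs. 4%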

    I mention these findings because we are told that they emerge consistently in studies from around the world; they may reflect mental heuristics that evolved when people were hunter-gatherers. Robert Wright tells us that people's minds were designed to maximize fitness in the world in which those minds evolved, our ancestral state, which apparently resembles modern life among the !Kung San people of the Kalahari Desert or the Inuit of the Arctic.

    However, even if such claims are true, they do not negate the existence of deep diversity in other aspects of psychology and moral judgment. If our physical natures directly determined our answers to all moral questions, then we would not debate ethics or literally wage wars over differences of principle. Besides, many of the features of human psychology that are universal are not moral. Perhaps we evolved to be aggressive toward competitors and altruistic toward relatives. Yet we also have the capacity to limit our aggression and to generalize our altruism beyond family and tribe. People disagree about when aggression is appropriate and in what circumstances one must be altruistic. These differences are especially evident when one compares individuals from long ago or far away. Thus the natural basis of aggression and altruism does not in any way reduce the importance of moral diversity and disagreement.

    Finally, the very science that generates findings about human nature is embedded in a particular time and society. This does not mean that truth is inaccessible to science or that its findings are arbitrary. It does mean that we should ask whether the questions and methods of recent science are at least somewhat limited by our local interests and capacities. In sum, Isaiah Berlin was right: "human beings differ, their values differ, their understanding of the world differs; and some kind of historical or anthropological explanation of why such differences arise is possible, though that explanation may itself to some degree reflect the particular concepts and categories of the particular culture to which these students of this subject belong."

    permanent link | comments (0) | category: philosophy

    October 10, 2004

    Derrida (the death of the author)

    Jacques Derrida died on Friday. All the obituaries I have seen have fundamentally mischaracterized his thought and the movement he inspired, deconstruction. (The Times gets the biographical facts right but avoids defining deconstruction by stressing its obscurity.) I found Derrida annoying when, as an undergraduate, I watched him sign students' t-shirts and then cross out his name to put it "under erasure." I criticized him in my Nietzsche and the Modern Crisis of the Humanities (pp. 175-181). After I finished that book in 1992, I ignored him. So did many others, for he became increasingly irrelevant--a fate that may have bothered him much more than angry criticism. So I don't think much of Derrida; but we ought to associate his name with views that he actually held, not with the vaguely Marxist (materialist and historicist) opinions that are often pinned on him.

    Derrida claimed that certain prejudices, which he called "logocentric," are to be found in "all the Western methods of analysis, explication, reading or interpretation" [Of Grammatology, translated by Gayatri Chakravorty Spivak (Baltimore, 1974), p. 46]. These prejudices include a preference for the world over language, for reality over fiction, for sounds over letters, for the signified over the signifier, and for masculinity over femininity. A classic deconstructionist reading of a text involves (a) demonstrating that the text presumes these dichotomies and (b) calling the distinctions and value-judgments into question. For instance, one might very plausibly argue that Dante combines irrationality, verbosity, femininity, and falsehood in the figure of Francesca da Rimini, whereas God is male, rational, silent, and true. Drawing attention to this dichotomy would be deconstructionist criticism.

    Derrida went beyond standard deconstruction, however--starting at the latest with Glas (1974). He knew that any argument against logocentrism would itself be logocentric, just because it would be an argument. He wanted to get outside a form of thinking that was, according to him, universal. To achieve "exorbitant" effects (ones that went outside the normal orbit), he played with styles of writing. For example, Glas consists of two parallel columns, one inspired by Hegel and the other by Genet. Hegel was a great systematic thinker who could incorporate all alternative views within his comprehensive system. Criticizing Hegel would be playing the philosopher's own game. So Derrida analyzed a completely different author in the same book, discussed disgusting bodily functions, stretched puns beyond any reasonable limit, and said, in effect, "Philosophize this."

    Everything depends upon the universality of the logocentric prejudices that Derrida identified. If they are omnipresent and important, then Derrida was engaged in a radical project of some interest (but of doubtful value). I think, however, that calling the West logocentric was a massive oversimplification. There are binary oppositions in our thinking, but also trinities and unities. Some of us believe that written text is merely a representation of sounds, which are primary; but others disagree. If the thinking of the West is deeply diverse, then there is no way out of its orbit. In that case, Derrida invented a rather easy game for himself: escaping prejudices that plenty of people had always disagreed with. Some deconstructionist readings are trenchant and plausible, but Derrida's own works mainly look ridiculous.

    Jack Balkin has a nicer take, as does Michael Bérubé.

    permanent link | comments (2) | category: philosophy

    July 23, 2004

    two doses of realism about democracy

    I'm an egalitarian, participatory democrat (with a lower-case "d"). I believe that everyone should have as close as possible to an equal say in the political process. We can then decide fairly what scope we will give to markets. I also believe that participating in political institutions and community work can be intrinsically rewarding; therefore, as many people as possible should have the skills and opportunities to participate. Finally, I believe that everyone has knowledge, talents, and energies to contribute.

    Nevertheless, political equality has two limitations that I think we should face squarely:

    1. "Business has a privileged position," as Charles Lindblom noted long ago. Corporations shouldn't be able to buy influence through campaign contributions or control of the mass media. However, they will be influential in any commercial society--and I believe that that's what we have, by virtual consensus, in the United States. Without even seeking to affect government policies, they will allocate investments in communities and in nations that have favorable economic policies. Governments will compete to attract investment, and this competition will put downward pressure on taxes and regulation. Although there should be countervailing pressures, the influence of business is unavoidable in a commercial society.

    If this is true, then we should be concerned about the degree of alignment between business interests and those of the rest of the public. Peter Peterson, Nixon's Secretary of Commerce, recently lamented the demise of "corporate patriotism" and the lack of "corporate statesmen" today. He recalled the essential role that business had played in passing the Employment Act of 1946 (attacked at the time as "socialistic"), creating the president's Council of Economic Advisers and the World Bank and IMF, and selling the Marshall Plan. Each of these reforms can be criticized for its substance, but each had broad support on the left.

    We will be particularly suspicious of such reforms if we view the very idea of benign business influence as a myth and a sham. My sense is that business interests sometimes align sufficiently with public interests to allow compromises that are about the closest we can get to social justice in a commercial society. I also have the sense that such alignment is less likely today than in the period 1945-1970. Big businesses should be concerned about the federal government's long-term fiscal solvency, and also about extremes of wealth and poverty, since their broader self-interest is involved. Yet they have little tangible positive influence today.

    I suspect that business interests are most likely to align with broader interests if (a) firms have a lot of sunk costs and cannot casually move their investments around; (b) the personal standing of their leaders is connected to their reputations for public service; (c) they are forced, by collective-bargaining and other arrangements, to consult regularly with workers and consumers, so that they are aware of other perspectives; and (d) they know that corporate statesmanship is valued by religious congregations, community associations, colleges, and the press. Each of these factors is weaker than it used to be because of globalization, market worship, and declining unions.

    2. Civic engagement is a minority taste. All types of people can and do participate in politics and civil society, whether they are young or old, rich or poor, white or people of color, women or men, citizens, residents, or even illegal aliens. However, participation is not for everyone. Only a minority of any community will attend meetings regularly, closely follow the news, lead and form associations, and organize and motivate others.

    If this is true, then we should care whether these civic activists are a diverse and representative group, whether their interests align with those of average people, what techniques they use to gain influence, and how public-spirited they are. We should also care what resources they have at their disposal.

    This is an abstract argument, but it has concrete, practical implications. For example, I have argued in favor of some kind of separate space on the Internet that imposes civic norms (decided on by the participants) and that serves civic activists. One way to do this would be to have a separate .civ (dot-civ) domain in which websites would be governed by norms that they enacted deliberatively.
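
    (No such system exists, so the following Python toy is strictly hypothetical, with invented names; it only models the one idea in play--norms that the participants enact deliberatively--as members proposing a norm that binds the site when a majority adopts it.)

        # Hypothetical toy model (invented names): a dot-civ site whose
        # norms bind only when a majority of members votes to adopt them.
        from dataclasses import dataclass, field

        @dataclass
        class CivSite:
            members: frozenset              # the site's participants
            norms: list = field(default_factory=list)

            def propose(self, norm, votes_for):
                """Adopt the norm only with majority support among members."""
                if len(set(votes_for) & self.members) * 2 > len(self.members):
                    self.norms.append(norm)
                    return True
                return False

        site = CivSite(members=frozenset({"ana", "ben", "carla"}))
        site.propose("no anonymous personal attacks", votes_for={"ana", "carla"})
        print(site.norms)  # ['no anonymous personal attacks']

    The point of the toy is just that the rule set is endogenous: the norms are data that the members themselves, and no outside authority, can change.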

    There's an argument against such an approach. The dot-civ space would doubtless become a kind of walled garden for people who are already civically active--uninteresting to those who go online for other reasons, including pop culture. Beth Noveck writes (pdf, p. 22) that my proposal was roundly criticized and rejected by the group assembled to consider it. I remember the same conversation as considerably more balanced. In any case, I would argue--as a general matter--that it can be more effective to provide resources and networks for the civic tenth in all our communities than to try to infuse small doses of civic values into mass culture. Again, we must be concerned about how diverse the active citizens are, but it's a mistake to imagine that they will be very numerous.

    permanent link | comments (2) | category: philosophy

    July 22, 2004

    Paolo & Francesca

    Among the most common keyword searches that lead visitors to this website are "Paolo" and "Francesca." I don't blog about those two doomed lovers from Canto V of Dante's Inferno, but I am (slowly) writing a book about them. It's an odd book (which may prove very hard to publish), because it combines rather detailed readings of the Inferno and various modern versions of Francesca da Rimini's story with a lot of analytical philosophy to build an argument for a certain way of thinking about morality. I've recently rewritten the Introduction to match the evolving content of the book.

    permanent link | comments (2) | category: philosophy

    June 14, 2004

    libertarianism and socialization: replies

    (Written in Macon, GA): It's amazing how a comment about libertarianism draws more attention than almost anything else in the "blogosphere." In a post from last week, I argued that libertarians ought to be concerned about how parents and communities raise their kids, because most people are not raised to value individual liberties as highly as libertarians would want. I also expressed some openness to pragmatic libertarianism while rejecting a pure philosophical form of the ideology. This post provoked comments on my site, in my email inbox, and on the Crooked Timber site, thanks to a nice mention by Kieran Healy. I'd like to respond to several of these comments together:

    1. I predicted that people who grow up in communities that bar public displays of political opinion will lack respect for the First Amendment. How do I know this?

    I don't. In theory, raising kids in speech-free zones could provoke a reaction: the next generation could passionately embrace political debate. It's always hard to predict the effects of social arrangements on political beliefs. This is one good reason not to use state power to manipulate private choices. (The other reason is liberty itself.)

    However, it seems at least plausible that the next generation will lack respect for free speech if we raise many of them in affluent and well-ordered communities that deliberately banish signs, leaflets, and canvassers. If I were a libertarian, I would worry enough about this that I would want to collect data on the attitudes of young residents of homeowners' associations. I might also launch a rhetorical campaign to support libertarian associations--those that choose to allow (or even to encourage) public displays of free speech.

    More generally, adults' political attitudes and behaviors are heavily influenced by their parents' political views and actions. Of course, there are exceptions: people who renounce the party or ideology of their parents. Libertarians are often examples, since libertarianism is a rather contrarian philosophy. However, the statistics are clear: parents' beliefs correlate very strongly with children's beliefs. We don't choose our parents, yet their beliefs tend to influence our choices for the rest of our lives. This is a conundrum that ought to provoke more thinking in libertarian circles.

    2. How important is a ban on political signs and canvassers? After all, neither form of political "speech" is common anywhere. Maybe in walled communities, campaign signs are banned, but everyone is inside checking out political websites. Then the ban would do no harm.

    This is a good point and a source of some consolation. Nevertheless, I worry that an explicit and deliberate ban on a certain kind of speech sends the message that such speech is socially undesirable. Why shouldn't people be able to put up small signs with political messages? Why does banning such signs seem to increase property values?

    3. "Bill" points out that if we are pragmatic libertarians (who embrace markets only when they work better than governments), we need a method for deciding when to embrace market solutions. One method is exactly what I favor: "have a big public argument and then let politicians decide (subject to any discipline voters place on them by voting them out)." Bill adds: "This ... has problems. Politicians often do not have proper incentives to decide the right way even for the x for which there is a big public argument. ..."

    I agree that politicians have incentives to make the wrong decisions, and a "big public argument" can turn out badly. (Among other things, it can turn into majority tyranny.) However, it seems clear to me that markets work for some things and not for others. They don't provide national defense, finance universal education, protect the ozone layer, etc. I don't believe that any existing social theory can tell us when they work and when they don't, because success is a normative matter, not a scientific one. Therefore, we must have a "big public argument" followed by a decision by our elected representatives. This is a flawed process and not one that can be perfected; but it can be improved. We have an array of safeguards to employ, starting with the Madisonian toolkit (checks and balances, a free press) and moving to more radical ideas (decentralization and subsidiarity, citizens' deliberations).

    4. I said, "I believe that human beings may make claims on others for economic support; that some of these claims are morally obligatory..." Craig asks, "I for one would like to hear(see) those claims made; offhand I can't think of any that I'd be persuaded by, excepting familial claims."

    I think most Americans would agree with me that when we are born, helpless and ignorant, we deserve at least an affordable education through the 12th grade, protection against abuse by our own parents, shelter and nutrition, protection against crime and foreign invasion, and basic health care. If we have rights to these things, then someone has a correlative duty to pay for them. Parents certainly have the primary duty, but many cannot afford education, housing, and health care for their children. Some might say that it is wrong for them to have children, but it happens, and it's not the kids' fault.

    Why should citizens of a person's nation pay the difference between what his parents can afford and what he needs? Why doesn't the obligation apply to all citizens of the world, or only to the local neighborhood? There is no a priori argument that nation-states ought to provide safety nets, but we do have some positive experience with states that do so. By far the highest standards of living ever attained in human history exist in democratic states that guarantee a package of social services: the United States and its allies in North America, Western Europe, and East Asia. It would take a very strong argument in favor of a different political unit before I'd want to reject the social contract that has made Norway, Australia, Japan, the US, and similar countries such extraordinarily good places to live.

    permanent link | comments (1) | category: philosophy

    June 6, 2004

    thoughts on libertarianism

    Since I'm at a Liberty Fund conference with several libertarians, I'd like to make two comments about this ideology:

    1. I'm open to pragmatic but not philosophical libertarianism: If you come at me with a coherent and radical version of libertarianism, I will resist it. In contrast to libertarians, I believe that human beings may make claims on others for economic support; that some of these claims are morally obligatory; and that governments may enforce such claims through taxing and spending. I don't see a tax as an immoral taking of sacrosanct private property. This is only one place where I part company with abstract libertarian theory.

    However, libertarians have also developed a whole set of pragmatic arguments to accompany their core philosophical beliefs. They say that governments tend to fail at their own explicit purposes, are often captured by special interests, and promote upward economic redistribution; and that markets work better. Libertarians often assert that these arguments must apply in all (or almost all) circumstances. They rely on fundamental theoretical reasons that derive from economics, not philosophy--for example, the idea that markets efficiently deliver what everyone demands. I think, in partial contrast, that market solutions often work in particular domains and are worth testing. In practice, this means that I am open to, and interested in, libertarian arguments that take the form, "A market will solve problem x" (where x is something like poverty, crime, or environmental degradation). Pure philosophical libertarianism, however, says, "We shouldn't structure the ground rules of society in order to solve problems of this type; we should simply respect private individual liberty." I disagree with this formulation, but that doesn't prevent me from learning practical lessons from libertarianism.

    For example, my colleague Bob Nelson is a libertarian who has argued for a long time that cities ought to grant all their zoning power to neighborhood associations. I can imagine granting such associations the right to buy garbage and sewer services on the open market, and the right to operate charter schools. Local police precincts could also be made accountable to the same associations. I suspect that in poor neighborhoods, people could do better for themselves than the city government can do for them. I'm not positive that this is a libertarian position, but whatever it is, it's well worth a try.

    2. Libertarians should be much more concerned than they are with political socialization: For well over a century, libertarian authors have been arguing eloquently for a minimal state. Yet most Americans favor Social Security and Medicare, oppose drug legalization, and are even lukewarm about the Bill of Rights. What's gone wrong? Perhaps libertarian arguments are not compelling. (That is my own view.) Or perhaps parents and communities are raising their kids to be other than libertarians. A shelfload of books and articles by the likes of Hayek, Nozick, and Ayn Rand cannot counteract powerful socialization by millions of parents.

    I mentioned an example in my last post, but let me spell it out a little more. In some metropolitan areas, there's a stark contrast between neat, safe, prosperous private communities in which open displays of political opinion are banned, and poor, relatively high-crime urban neighborhoods in which you often see political signs and even some picketers and canvassers. There is also a contrast between fancy suburban malls--considered private property--in which canvassing and leafleting are banned, and decrepit urban streets in which you can see all kinds of political speech, including graffiti. If millions of kids grow up in communities that are wealthy but intolerant of public speech, they are likely to draw the conclusion that speech is detrimental to order and prosperity. As I wrote in my last post, this is political socialization for fascism.

    Libertarians are loath to restrict private contracts, even those that voluntarily restrict speech. They have a point: we aren't free if we cannot associate in intolerant communities. But if many people choose to ban freedom within their commonly-owned private property, then they are highly unlikely to raise libertarian kids. This is a big problem for libertarianism. Paper guarantees of freedom mean nothing if most people are against freedom.

    The great libertarian economist Frank Knight wrote in 1939:

    The individual cannot be the datum for the purposes of social policy, because he is largely formed by the social process, and the nature of the individual must be affected by social action. Consequently, social policy must be judged by the kind of individuals that are produced by or under it, and not merely by the type of relations which subsist among individuals taken as they stand.

    Moral: if you want libertarian policies, you need "social processes" that make people libertarians, and those policies may not arise as a result of free choices by individuals "taken as they stand." What's more, free parents make choices that overwhelmingly shape their children, which means that there can be tradeoffs between parental liberty and the liberty of the next generation. As Knight wrote, "liberalism is more 'familism' than literal individualism." But if families don't produce children who strongly prize freedom, then liberalism and "familism" will work at cross purposes.

    permanent link | comments (4) | category: philosophy

    June 4, 2004

    condos, gated communities, and shadow governments

    Montréal: I'm at a Liberty Fund conference on private neighborhood associations. The Liberty Fund is a basically libertarian foundation that organizes more than 100 small conferences a year. The participants are not all libertarians--or else I would not have been invited.

    It turns out that some 50 million Americans now live in some kind of community governed by an association: a condominium, cooperative, or a planned community with a board. Often a developer subdivides some land or constructs an apartment building and sells the units with deeds that (a) impose numerous rules on the buyer; and (b) create a board or other body that can legislate further and enforce existing rules.

    These are voluntary associations: you don't have to buy a house or an apartment in any particular condo or planned community. At the same time, they act like governments, taxing, regulating, and fining residents and enforcing their decisions in courts. Indeed, they are more powerful than conventional governments, which are restrained by the Constitution of the United States. Residential associations can ban--and actually have banned--the display of signs critical of themselves, the sale of certain newspapers, even the private possession of materials they deem pornographic. The rationale for these rules is to increase property values, although the rules may also have other purposes, benign or malevolent.

    These quasi-governments raise questions of interest to libertarians and others. For example:

  • Are they ways for people to secede from their responsibilities to the broader society? If so, will they lead--for better or worse--to less redistribution from rich to poor, as rich people become responsible for their own streets, schools, and policing, and refuse to pay into the common pool? Or will they offer opportunities for self-government to all, including the poor in inner cities? (Note: it's practically much more difficult to create residential associations in existing cities, where there are existing deeds, than in open fields.)
  • Do they replace the special-interest politics that is said to be typical of cities with efficient market transactions? Or does each association become a special interest, pressuring the government to give it favorable treatment?
  • What happens to the individual rights of the members of households that join these associations? For example, in a city, a teenager has a First Amendment right to picket, but not in a gated community. If his parents voluntarily move into the gated community, are his rights violated?
  • What kind of political socialization will these living arrangements create? Will residents grow up thinking that government is unnecessary, since a private association provides for their needs? Or will they decide that security and prosperity depend upon pervasive regulation of private behavior? If they learn to rely on regulation without political participation and individual rights, then they will be socialized for fascism.
  • Existing residential associations are pretty homogeneous. They aim to increase property values by preserving certain kinds of bourgeois appearance and decorum. Will this always be true, or will there be more condos, co-ops, and planned communities that are dedicated to utopian experiments: kibbutzes, communes, and 21st century Oneida communities?
    permanent link | comments (4) | category: philosophy

    May 21, 2004

    the Nuremberg defense

    I have long supported the Nuremberg Doctrine: soldiers are individually responsible for war crimes, and following orders is no excuse. Nor is it an excuse to say that an action seemed acceptable and triggered no feelings of bad conscience. War often suppresses our conscience or turns it upside down, causing us to view mercy as a tempting form of weakness that we are obliged to avoid. Nevertheless, when we carry guns, operate prisons, or give orders, it is our responsibility to make sure that our conscience is working right. As Hannah Arendt observes, "politics is not like the nursery." A person with a gun is not a child who knows that he is good if only he is obedient. One can follow orders without meaning to violate a law, and still be culpable.

    However, I now see a complication. In the military, you are legally required to disobey illegal orders, but you are equally obligated to obey every legal command. A mistake in either direction can send you to a court martial. In civilian life, we have much more margin for error. If someone, even my boss, tells me to do something, I can say, "I don't know if that's legal (or moral), so I won't do it." Or I can make an arbitrary excuse to get out of doing something that I fear may be wrong. The worst that can happen to me if I avoid making a yes-or-no decision is losing my job. Because we have this leeway, we should be held fully accountable for participating in any illegal acts, even if we don't understand the law or realize that we're doing something wrong. It's our responsibility to do the right thing, and if we're not sure, we can duck the issue.

    But soldiers are in a much tougher position. They must obey or disobey--immediately. It may be genuinely difficult to see that a grievous wrong is illegal under the hellish circumstances of war. Both historical evidence and experiments in social psychology show that most people will do the wrong thing in hellish contexts. They will kill and maim other human beings out of duty, even though they don't want to harm anyone. If most people will act this way, then I must assume that I would, too. And if I have no leeway, no opportunity to get myself out of the situation, then I am especially likely to make the wrong choice.

    Thus it seems to me the rule ought to be: Don't obey patently illegal orders. Indeed, this appears to be the legal standard. It is then a hard question whether the despicable acts committed at Abu Ghraib were obviously illegal. If the accused soldiers were free-lancing--deciding on their own to humiliate and abuse prisoners, and hiding their actions from their superiors--then they are guilty. If they were following orders, even vague ones, then I am open to a verdict of "not guilty," as long as their commanders are held accountable.

    permanent link | comments (0) | category: philosophy

    May 18, 2004

    why stories are good for moral thinking

    I believe in the moral value of narrative. A story, whether fictional or historical, is a coherent description of a set of events. Its coherence is not simply causal, such that the first event causes the second, which causes the third, etc. Instead, narrative coherence can take many forms, including: unity of character (one agent does a set of things sequentially); unity of community (a set of connected agents do a set of things); teleological unity (a set of events build up to a significant conclusion); or thematic unity (many things with similar meanings are described). Often more than one form of unity applies.

    I would like to mention four features of narratives that make them useful for moral reasoning:

    1. Narratives enable thick descriptions. In Gilbert Ryle's famous example, we may either say that someone contracted his eyelid or that he winked conspiratorially. The former is a thin description; the latter, a thick one. Thick descriptions often have moral significance. Contracting an eyelid is neutral, but winking conspiratorially is morally dubious. If it turns out that the contracting eyelid was a signal to commit murder, then that even thicker description marks the act as prima facie immoral.

    What justifies a thick description is almost always a story. For example, a video camera would record the same wink whether it was a signal to commit murder or the result of biting a lemon. We know that it is one thing rather than the other because of what comes before and after it. But we don't consider every prior and subsequent event, nor do we focus exclusively on actions that cause the wink or are caused by it. Rather, we thicken the description by placing the event within a coherent narrative. This brings me to the second point.

    2. The selection of events in a coherent narrative is moral: Human institutions and actions are always dramatically overdetermined; they arise because of many events that are insufficient but necessary parts of unnecessary but sufficient (INUS) conditions. It is a common ambition of social science to measure as many of these factors as possible in order to assess their relative contribution to the outcome. For instance, we try to predict the decision to vote in terms of factors like the voter's demographics, the nature of the election, and the voter's opinions and preferences. Only an unreconstructed positivist would claim that this approach is value-neutral. Social scientists must always omit some contributing factors, and they must always decide how to measure the factors that are included in their models. (For example, demographic background includes race, which is a morally contested category.) Nevertheless, social science aspires to neutrality and comprehensiveness. Ideally, every contributing factor goes into the model. If the morally significant factors play no explanatory role, so be it.
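
    Before turning to the contrast with narrative, here is a minimal sketch in Python of the kind of model the paragraph above describes. Everything in it is hypothetical: the data are synthetic, and the factor names and weights are invented purely for illustration. The point is only the form of the exercise--measure many contributing factors and estimate each one's relative weight in predicting the outcome.

        # A sketch of the modeling ambition described above. All data are
        # synthetic; all variable names and weights are hypothetical.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n = 1000

        # Hypothetical measured factors for each citizen.
        age = rng.uniform(18, 90, n)           # demographics
        education = rng.integers(8, 21, n)     # years of schooling
        competitive = rng.integers(0, 2, n)    # nature of the election
        interest = rng.uniform(0, 1, n)        # opinions and preferences

        # Simulated turnout: a noisy function of the factors above.
        score = 0.03*age + 0.15*education + 0.8*competitive + 2.0*interest - 5.0
        voted = rng.random(n) < 1 / (1 + np.exp(-score))

        # The fitted model assigns each factor a weight (its estimated
        # contribution), but nothing in the procedure says which factors
        # matter morally, or whether a factor belonged in the model at all.
        X = np.column_stack([age, education, competitive, interest])
        model = LogisticRegression().fit(X, voted)
        for name, coef in zip(["age", "education", "competitive", "interest"],
                              model.coef_[0]):
            print(f"{name}: {coef:+.2f}")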

    In contrast, a historian almost always emphasizes factors of moral significance--especially the intentions of human beings. (So does a novelist, in constructing fictional narratives.) Writers of narrative combine causal explanation with moral judgment by making salient those causes that they deem most morally weighty. They are not engaged in retrospective prediction; their goal is much closer to moral interpretation. I think social science is extremely useful, because it allows us to assess causes that may not be deliberate or intentional. But if we want to make judgments and decisions, we need to tell stories.

    3. Narratives help to ascribe responsibility for collective actions: Chris Kutz argues that we make the following assumptions: "I am accountable for a harm only if what I have done made a difference to that harm's occurrence," and "I am accountable for a harm's occurrence only if I could control its occurrence, by producing or preventing it." Unfortunately, we may belong to groups that do very serious harms, yet each member of the group can rightly say, "I made no difference to the outcome, and I couldn't control what happened." In these cases--which probably create the bulk of the world's evils--no one is responsible or accountable for the wrong.

    You need not will an end to be responsible for it; you only have to be knowingly part of a group that is moving toward some end. And it doesn't matter whether the predictable or intended outcome of the group is actually reached: you are accountable if you associate yourself with a group that has a bad telos. Unfortunately, it is often unclear whether a person is an intentional participant in a group. It's one thing when I voluntarily join a defined and formal body. For example, if I choose to buy stock in a company whose negligence kills people, that is my problem (morally), even if I had no reason to know about the company's behavior. But there are many harder cases, especially ones involving loose social networks.

    When we consider whether someone morally belongs to a group, the form of our reasoning is a narrative. We want to know whether people are intentionally part of a set of coherent actions that lead toward some telos. Novelists are good at showing that sets of characters are linked in morally salient ways; indeed, such linkages often provide the main themes of bourgeois novels. Like novelists, historians tell stories that link people together for teleological reasons. Their methods, which we also use in ordinary life, are the only means we have for ascribing responsibility for group behavior.

    4. Stories have themes. A theme is usually a concept or situation that is significant and that repeats throughout the narrative. Determining the theme of a story is a dynamic process. We become gradually aware that a concept or situation is going to be repeated. As we look for themes, we also decide what is literally going on in a text. For instance, in the first scene of King Lear, is Cordelia proud and hurt, or young and very shy, or perplexed by the formal ritual? Our answer does not determine the words she utters, but it decides much else (her tone, body language, location, expression). The only way to determine how she literally behaves is to consider what Lear is about as a whole. Thus Roger Seamon argues that a story's theme is not some general proposition that we derive (validly or invalidly) from the words on the page. Rather, our emerging sense of a theme helps to tell us what literally happens.

    The importance of thematic interpretation has at least two moral implications. First, themes are essential to rhetoric. We deliberate by telling (putatively) factual stories that have themes; therefore we need to know how to tell good thematic stories and how to judge their quality.

    Second, it was Hannah Arendt's view that modern history has no causal coherence. The terrible events of her century could not be retrospectively predicted by measuring the factors that jointly created them. We must understand these events, but their explanation beggars the mind. At best, we are capable of identifying repeating motifs in history. That is why Arendt's Origins of Totalitarianism is not a causal explanation of Hitler and Stalin, but rather a search for relevant themes in preceding history. It describes "certain fundamental concepts which run like red threads through the whole." If we can identify the major themes of our own time, we are doing the best that can be done.

    permanent link | comments (0) | category: philosophy

    April 28, 2004

    Christopher Kutz on Complicity

    Yesterday, I went to the National Institutes of Health to hear Chris Kutz discuss his book, entitled Complicity: Ethics and Law for a Collective Age. Kutz sets himself the following problem. As a matter of common sense, I assume that "I am accountable for a harm only if what I have done made a difference to that harm's occurrence." I also assume that "I am accountable for a harm's occurrence only if I could control its occurrence, by producing or preventing it." We are raised to make these two assumptions. Unfortunately, we may belong to groups that do very serious harms, yet each member of the group can rightly say, "I made no difference to the outcome, and I couldn't control what happened." In these cases--which probably create the bulk of the world's evils--no one is responsible or accountable for the wrong.

    The case that we discussed most deeply yesterday was the firebombing of Dresden by allied forces during World War II, which probably caused 35,000 civilian deaths in one night and did nothing to advance the Allied victory over Nazism. The firestorm (which sucked oxygen out of the air and caused civilians in shelters to die of asphyxiation) was caused by bombs from 1,000 airplanes. Eight thousand crewmen flew in those planes, and "many thousands further were involved in planning and support." Exactly the same number of deaths would have occurred if 999 bombers had flown instead of 1,000. Thus each crewman or ground-support person can rightly say, "I made no difference, and I had no control over the outcome."
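
    The logic of each crewman's excuse can be put almost arithmetically. Below is a minimal sketch of an overdetermined harm; the threshold of 500 planes is my own invented number (nothing in Kutz's discussion supplies one), and the model is deliberately crude. Once participation exceeds whatever level suffices to produce the outcome, each individual's counterfactual difference is zero.

        # A toy model of an overdetermined harm. The threshold is
        # hypothetical; the point is structural, not historical.

        def deaths(bombers: int, threshold: int = 500, toll: int = 35_000) -> int:
            """If enough planes fly to ignite the firestorm, the full toll
            results; below the threshold, assume no firestorm at all."""
            return toll if bombers >= threshold else 0

        # Each of the 1,000 crews can run the same counterfactual:
        print(deaths(1000) - deaths(999))  # 0: no individual difference

    On the "individual difference" principle, that zero would settle the matter; Kutz's complicity principle, discussed below, denies that it does.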

    Indeed, because these people were not causally responsible as individuals, I think that no one should accuse them of homicide. But they do have a deep and permanent moral connection to the Dresden firestorm, unlike someone who was home in Iowa at the time. This moral connection requires actions and attitudes on their part: for instance, regret, memory, confession, self-scrutiny, and perhaps active support for peace with post-War Germany. We should consider as morally defective anyone who says, "I was part of a group that killed 35,000 civilians for no military purpose, but I had no effect on the numbers killed, so I don't care what happened."

    At the most general level, Kutz argues that "I am accountable for what others do when I intentionally participate in the wrong they do or the harm they cause. I am accountable for the harm or wrong we do together, independently of the actual difference I make." This "complicity principle" conflicts with the common-sense principles of "individual difference" and "individual control" that I mentioned earlier. The conflict is the main subject of Complicity.

    The difficulties, which Kutz handles very skillfully, arise when it's not clear whether a person is an intentional participant in a group. It's one thing when I voluntarily join a defined and formal body. For example, if I choose to buy stock in a company whose negligence kills people, that is my problem (morally), even if I had no reason to know about the company's behavior. But there are many harder cases. For instance, everyone drives too quickly on the Washington Beltway, resulting in at least one death per day. But the average driver does not make the roads any more dangerous than they would be without him. In fact, if you slowed down, that would make the Beltway modestly more dangerous. Are you complicit in unnecessary deaths if you drive to work at 70 mph?

    Or what about a journalist traveling with a military unit in Iraq? If the unit kills a civilian, is the reporter part of the group and therefore subject to moral scrutiny for the death? Does it matter whether the journalist is "embedded"? Does it matter whether she comes from one of the Coalition countries? I am not assuming that being responsible for killing a civilian implies some severe punishment or censure--there is a war on, and civilian casualties may be unavoidable. But those involved in the killing morally owe an account, and ought to feel emotions such as deep regret. Do these obligations also apply to an embedded reporter who is present at the event?

    Since a critical review by John Gardner is currently the top result when one searches for "Christopher Kutz [and] Complicity" on Google, I want to address a mistake in that review. Contrary to what Gardner says, Kutz acknowledges that a person owes special kinds of accountability when he is directly and causally responsible for a harm, whether or not he acts as part of a group. Complicity is an additional layer of responsibility that arises only in virtue of our participation in a group that does something wrong, regardless of whether we affect the outcome.

    Complicity is clear, precise, well organized, original, and morally challenging. I must disclose that I know the author very well; nevertheless, I can report that this book is prized by philosophers working on problems of collective responsibility.

    permanent link | comments (0) | category: philosophy

    April 22, 2004

    moderate "particularism"

    Here is an argument for a moderate form of the philosophical position known as "particularism." A full-blown particularist believes that whole situations are either good or bad; they can be validly judged. However, the separate qualities or aspects of situations can only be assessed in context. A quality is neither good nor bad in all the cases where it arises. The very same quality may make x better and yet make y worse. For instance, the quality of generosity is (normally) good if it makes me donate to the homeless, but it is bad (and makes matters worse) if I give generously to a terrorist organization.

    According to particularists, the moral aspects of situations are analogous to splashes of red paint. (This is Simon Blackburn's analogy.) Adding a red splash might make a painting by de Kooning better, but a Vermeer worse; by itself, the red splash is neither beautiful nor ugly. The de Kooning (overall) is a good painting and the Vermeer (overall) is a great one. We can make valid judgments, but only about whole works of art, not about small components of them.

    Note: there is a problem here about what constitutes a "component" or a "whole." Can one make moral judgments about people, about policies and institutions, about whole societies? Is a law a component of a society, or a whole object in itself? The same problem sometimes arises in aesthetics, because it may be valuable to assess a whole suite of paintings, or a small detail of a picture, rather than a single and complete work.

    In contrast to radical particularists, I think our moral vocabulary is very heterogeneous. It includes:

    1. concepts that are tautologically good or bad. For instance, the right thing to do is always right.
    2. concepts that are good or bad pro tanto, which is Latin for "as far as that goes." For instance, one might argue that kindness always makes things better, but an act can be both kind and stupid, and the stupidity is sometimes more important than the kindness. Thus kindness is only pro tanto good.
    3. concepts that are good or bad prima facie, "on their face." For example, we rightly assume that a generous act is good, overall. But sometimes unusual circumstances arise that make generosity bad;
    4. concepts that are morally neutral most of the time, although in rare circumstances they take on moral significance (e.g., redness or bigness); and finally
    5. concepts that operate as particularists would expect them to: they usually make situations morally better or worse when they apply, yet we cannot tell in advance whether they will help or hurt in each circumstance. We must look at the whole situation.

    Radical particularists imply that every important concept fits in category #5. I am a moderate particularist because I believe that the other categories also exist, but #5 is common and unavoidable.
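
    The structural point lends itself to a toy formalization. The sketch below is my own rendering, not anything drawn from the particularist literature, and the category assignments are hypothetical. What it shows is the asymmetry: for concepts in categories 1 through 4, a valence can be read off the concept itself (at least presumptively), while a category-5 concept's valence is a function of the whole situation, so no verdict is available in advance.

        # A toy rendering of the five categories above. All assignments
        # are hypothetical and for illustration only.
        from enum import Enum, auto

        class Valence(Enum):
            GOOD = auto()
            BAD = auto()
            NEUTRAL = auto()
            WHOLE_SITUATION = auto()  # category 5: no verdict in advance

        CONCEPTS = {
            "rightness": Valence.GOOD,        # 1: tautologically good
            "kindness": Valence.GOOD,         # 2: pro tanto good
            "generosity": Valence.GOOD,       # 3: prima facie good
            "redness": Valence.NEUTRAL,       # 4: usually morally neutral
            "love": Valence.WHOLE_SITUATION,  # 5: particularist
        }

        def judge_whole(situation):
            # Stand-in for holistic moral judgment, which on the
            # particularist view cannot be reduced to a lookup table.
            return Valence.GOOD if situation == "unencumbered adults" else Valence.BAD

        def valence(concept, situation=None):
            v = CONCEPTS[concept]
            if v is not Valence.WHOLE_SITUATION:
                return v  # readable off the concept, perhaps defeasibly
            return judge_whole(situation)  # category 5: look at everything

        print(valence("generosity"))                # Valence.GOOD
        print(valence("love", "adulterous court"))  # Valence.BAD

    The sketch deliberately flattens categories 1 through 3 into a single GOOD; what distinguishes them is how that verdict can be defeated, which no lookup table captures.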

    For instance, I would place love in category #5. The Romantics thought that love was always pro tanto good. To say that someone was in love might not be the only thing to say about a situation, but it was always a good thing. I think the Romantics were wrong. If Guinevere is married to Arthur, then Guinevere's love for Lancelot is not even pro tanto good; it is bad, and she should work to reduce it. (I believe that we have the capacity to control the emotion of love, but that is a psychological claim and it is not important to my overall argument. Even if love is not in our control, adulterous love is still bad.)

    An opponent of particularism may say: Love is sometimes good and sometimes bad. This makes it a highly imperfect concept. We would actually be better off with two words, for instance, "good-love" and "bad-love." The definition of these words would not be morally tautological; we wouldn't just say that "good-love" is love whenever it is good. Instead, a proper definition would connect "good-love" to more general moral concepts like justice and virtue, which we would also define. For example, good-love might include love between two consenting, unmarried adults, because our general theories of the good and of freedom would tell us to value love when it arises freely between unencumbered adults.

    The anti-particularist's goal is to use only words that are pro tanto or prima facie good and bad. The goal is to excise words that are morally tautologous and words that have unpredictable moral valences. In practice, of course, we'll always retain our inherited vocabulary. We won't actually talk about "good-love" (because it's an ugly coinage), but we will explain what forms of love, in general, are good or bad.

    As a moderate particularist, I reply: love is an extremely important moral concept. It is morally ambiguous, in the precise sense that it only has a moral valence in context--sometimes it makes things better pro tanto, and sometimes it makes things worse, but it is almost always morally significant. Although it may be good more often than it is bad, it is not prima facie good (because it's highly unpredictable).

    Furthermore, we cannot live morally without the concept "love," nor can we split it into two categories. Love is not just the union of two concepts: good-love and bad-love. Part of the definition of "love" is that it can be either good or bad, or can easily change from good to bad (or vice versa), or can be good and bad at the same time in various complex ways.

    Although I don't know how to prove that one cannot replace all morally ambiguous concepts with ones that are pro tanto or prima facie good or bad, I strongly doubt that this effort can ever succeed. If, for example, one tried to reason with the concept of "good-love," and defined it so that it included love between unmarried consenting pairs of adults, there would be many cases in which good-love turned out to be bad. So I think "good-love" would quickly collapse into a tautologously good thing (that is, it would mean "love in all cases where love is good"), or it would turn out to be unpredictably good or bad, depending on the context. But that was the problem with our ordinary concept of love.

    In short, there is no escaping particularism about love, although we don't have to be particularists about everything.

    permanent link | comments (0) | category: philosophy

    March 24, 2004

    the limitations of analytic moral philosophy

    Analytic philosophy is the dominant tradition in the English-speaking world today, and I belong to it. (I was trained in the rival tradition known as "continental" philosophy, but have moved over; see this post for the distinction.) It recently occurred to me that analytic moral philosophy really is "analytical"; it takes views, values, and positions from outside of modern philosophy and analyzes them to see whether they are internally consistent, whether they match our intuitions about a range of cases, whether they agree with various other plausible views, and so on. Virtually all modern analytic philosophers endorse some form of what John Rawls called "reflective equilibrium". They think that we should go back and forth between intuitions (which we obtain from outside of philosophy) and philosophical arguments, trying to make each conform to the other. If our intuitions are inconsistent, we should change our intuitions; but if our philosophical arguments are counter-intuitive, we should change our arguments.

    Until at least 1900, philosophers were in the business of generating new moral views and positions. Modern analytic philosophers, by contrast, often analyze the views of long-dead theorists, but they do not develop new moral views of their own. Animal rights is one of the few examples of a moral or political doctrine that arose from philosophical inquiry--in this case, Peter Singer's. In general, philosophers don't possess a method for creating or discovering moral positions, whereas they do have a toolbox for analyzing positions that are, so to speak, "exogenous" to philosophy.

    Analysis is useful, but it is not the only kind of relatively abstract and general moral thought that we need. In fact, I tend to agree with Bernard Williams that analysis reduces our confidence in received moral ideas, but our major problem now is that we have not too many but too few, and we need to cherish as many as we can. (Ethics and the Limits of Philosophy, 1985, p. 117).

    permanent link | comments (0) | category: philosophy

    March 16, 2004

    what does it mean to be "civic"?

    I spend most of my time in and around groups and institutions that have explicitly civic goals: CIRCLE, the Campaign for the Civic Mission of Schools, the National Commission for Civic Renewal, the Kettering Foundation, and the National Alliance for Civic Education--to name just five. Civic rhetoric seems to be spreading and deepening. But what does it mean to be civic today?

    Good citizens care about issues and debates--often passionately. They want to save unborn children or to defend women's reproductive freedom, to rescue the environment or to promote growth, to achieve world peace or to punish America's enemies. These are matters of life and death, so naturally we want our positions to win, and we are entitled to fight for public support.

    But a civic attitude begins when we notice that a great democracy is always engaged in such debates. It matters not only which side wins each round, but also what happens to the nation's public life over the long term. Are most people inclined to participate in discussions and decisions (at least within their neighborhoods and schools), or are many citizens completely alienated or excluded? Do young people grow up with the necessary skills and knowledge to allow them to participate, if they so choose?

    Do we seriously consider a broad range of positions? Do good arguments and reasons count, or has politics become just a clash of money and power? Can we achieve progress on the goals that we happen to share, or have our disagreements become so sharp and personal that we cannot ever cooperate?

    Being civic means asking these questions. It is compatible with fighting hard for a position--even a radical one--but it requires avoiding collateral damage to the civic infrastructure. It asks us to worry about long-term civic health, not just immediate tactical victory. And it obliges us to care about our public institutions, not just particular policies.

    More specifically, being civic means keeping the following principles in mind:

    1. We should choose styles of engagement that expand participation. Politics, political debate, and social action have become considerably less popular over time. According to National Election Study data, Americans are about half as likely as they were 30 years ago to talk about public affairs, to follow serious news, and to attend local meetings. One reason, I am convinced, is that politics is optional. Most other voluntary activities (shopping, dining out, tourism) promise polite and harmonious interactions. But all forms of politics--from neighborhood meetings to televised debates--tend to be uncomfortable, so many people avoid them.

    Politics cannot be consistently civil: sometimes it is necessary to challenge the powerful and generate anger. Since politics is our main way of addressing deep disagreements in a diverse society, it will not always be a friendly business. And even if we would like most people at a neighborhood meeting to be polite to one another, everyone (even the local lunatic) has a right to participate. So civility is not a realistic standard. Nevertheless, if we are concerned about our long-term civic health, then we should strive to make politics as amicable and welcoming as possible. Often, harsh rhetoric wins points in the short term but also drives people out of public life altogether--a good example of collateral damage.

    2. Arguments should be about ideas, not about people.
    A great way to win a political debate is to show that one's opponent is hypocritical or selfish. But some people hold wise and generous positions for selfish reasons (to get reelected, for example), while others have altruistic motives for espousing foolish ideas. Thus making personal accusations very rarely advances public understanding. Maybe every Democratic incumbent wants to seize more of your income to spend it on programs that will help him stay in office, and maybe every Republican just wants to cut taxes for his wealthy friends. Nevertheless, some federal programs and some tax cuts are good policy. It is a logical mistake (the ad hominem fallacy) to oppose an idea just because the person who espouses it happens to be flawed.

    Besides, it is always possible to charge an opponent with bad motives, yet we can never tell if the accusation is true. We cannot even be sure how pure our own motives are, so how can we possibly know why George W. Bush favors a constitutional amendment to ban gay marriage or what John Kerry hopes will happen in Iraq? We can, however, decide whether the amendment is good and what we should do in Iraq.

    Everyone can learn to assess the merits of a policy, but only insiders really know the characters of powerful politicians. So a political process that revolves around motives and personalities gives tremendous authority to anonymous officials and the famous reporters who know them, to kiss-and-tell autobiographers, black-sheep relatives, and former White House consiglieri. Because personalized politics makes a few well-placed insiders into the only experts, it is profoundly elitist.

    Personal attacks are effective, so they encourage politicians and parties to try to bring down their enemies, rather than win a mandate for their ideas. Neither liberalism nor conservatism has recently developed a popular governing vision, and one reason is that partisans have found it too easy to knock each other's knights off their white horses. Think of Jim Wright, Newt Gingrich, Robert Bork, and many other political ghosts (living and dead) who can testify to the power of the personal attack.

    As charges and counter-charges accumulate, politics begins to look generally unseemly, even though politicians are probably not as unprincipled as their opponents imply. Under such circumstances, many people tune the whole business out. Meanwhile, personal attacks keep good people from taking leadership roles out of fear that someone will charge them--falsely, but irrefutably--with hypocrisy or selfishness.

    3. We should see politics as creative, not just a zero-sum game: People across the political spectrum demand that certain groups give up something of value. They argue that the rich should be taxed more heavily to pay for education for the poor, or that welfare recipients should be denied their checks, or that incumbent politicians should be kicked out of office.

    Probably at least some of these arguments are valid. But whatever you think about these proposals, they are not all there is to politics. Governments, parties, and local civic organizations don't just move existing goods, rights, jobs, and powers from some interests to others; they also make new goods. Think what happens when we start a neighborhood watch, teach a community to eat healthy foods, generate trust or mutual understanding through sustained dialogue, or reinvent a government agency to make it work better.

    It might seem that making new goods is a workable strategy only for the rich and powerful; the poor need help at someone else's expense. But when poor people simply demand subsidies or rights, they almost never get what they want. It is only when they are able to build institutions of their own that they acquire enough power to win at zero-sum politics. The African-American church is perhaps the best example.

    Sometimes, zero-sum messages are a good way to mobilize citizens by making them angry and giving them a political outlet. Generating anger can get citizens to the polls or persuade them to open their wallets for a cause. But such mobilization is almost always followed by defeat, discouragement, and burnout. Activists who stay involved for the long haul are the ones who have learned how to collaborate--even with some of their supposed enemies--to create new, durable institutions. As Lewis A. Friedland and Carmen Sirianni show in their book Civic Innovation in America, lifelong activists do not assume that they can only make progress by defeating someone. They take pride in the institutions and programs that they have built together.

    4. Truth-telling is a civic obligation, even when it's a tactical nuisance. [This section needs some fleshing-out with examples, but the point is clear enough.]

    5. We should avoid rampant partisanship. The word "civic" sounds almost synonymous with "non-partisan." In classic civic republican thought, from Aristotle to Rousseau, parties were always seen as evidence of faction and strife, their appearance proof that civic virtue had waned. To be a good citizen was to serve the nation and to apply honest principles. Service to a party required disloyalty to the broader community; and arguments among parties indicated that at least one side was not being honest and principled.

    It is clear today that parties and partisan competition are valuable. We citizens lack the time, information, and inclination to form opinions about the proposals and personal merits of every candidate on the ballot. Party endorsements tell us that candidates are at least minimally qualified and that they belong to one of the major political ideologies of the day, from which we can choose. If anything, it helps if the major parties differ rather starkly in their ideologies, so that we can choose clearly.

    Moreover, we need institutions that have long-term, national horizons, that do not simply try to win the next election at any cost to their reputations, but that build over time. Parties fit the bill, better at least than candidates and political consultants. And we need parties to compete avidly for power, because competition keeps the powerful honest.

    Notwithstanding these arguments for parties, civic-minded citizens think that Washington is too partisan. And they have a point. The problem is not that there are stark differences in philosophy or a fierce competition over how to govern the country; if anything, the debate may be too blurred. The problem is rather that parties and interest groups fight over matters that are not connected to their philosophies or their visions for the future.

    In Federalist No. 50, James Madison criticized Pennsylvania's Council of Censors (which had met in 1783 and 1784) as overly partisan. He wrote: "Throughout the continuance of the council, it was split into two fixed and violent parties. In all questions, however unimportant in themselves, or unconnected with each other, the same names stand invariably contrasted on their opposite columns. Every unbiased observer may infer ... that, unfortunately, passion, not reason, must have presided over their decisions. When men exercise their reason coolly and freely on a variety of distinct questions, they inevitably fall into different opinions on some of them. When they are governed by a common passion, their opinions, if they are so to be called, will be the same."

    Madison's description would apply not only to Washington, DC in 2004, but also to the "blogosphere." Many prominent blogs are designed to score points, day in and day out, against an opposing party or ideology.

    These are five rules to guide the behavior of individual citizens. But it is equally important to think about the structures and institutions within which individuals act. I have said, for example, that we should pay attention to reasons and arguments, and not speculate about the hidden (probably selfish) motives of our leaders. But this is an unreasonable demand if politicians raise money from the very interests that they regulate. Rather than ask citizens to believe that money has no influence, we need to clean up the system.

    There are many other ways in which flawed institutions can make a civic approach naïve. For example, to have a reasonable chance of winning, today's campaigns must target the most likely voters, and not waste their resources on young people and other unlikely participants. As a result, no one makes an effort to mobilize great masses of citizens. A system that revolved around parties might do a better job.

    Similarly, many nonprofit groups now raise their funds through bulk-mail appeals that seem to work best if they deliver an inflammatory message to a friendly mailing list. Civil society would be more inclusive and less polarized if nonprofit groups were built the old-fashioned way, as coalitions of local chapters. Changes in the tax structure could encourage nonprofits to reorganize themselves this way.

    A civic spirit thus pushes us to think about changes in procedures and institutions. (That is why organizations with "civic" in their name tend to be concerned about process, not about particular policies.) Unfortunately, as soon as we start debating reform proposals, reasonable people disagree--partly because of differences in their underlying political ideologies. For instance, conservatives sincerely oppose campaign-finance reform that requires new federal regulations, just as liberals sincerely welcome legal limits.

    We need to debate the merits of reforms, without bogging down in partisan strife. One final rule should help:

    6. Institutions should be designed to work well for the ages, not to get the results we want tomorrow. We might suspect that calls for reform are always just indirect ways for partisans to advance their everyday interests. Some liberals, for instance, call for campaign-finance reform because they predict that making politicians less beholden to corporate donors will result in liberal legislation. Meanwhile, some conservatives advocate term limits and federalism because they believe that incumbent federal politicians usually drift to the left once they become caught in Washington's iron triangle of career politicians, lobbyists, and regulators.

    In practice, however, it is very difficult to predict the impact of political reform. Republicans suspected that the system of full public financing for presidential campaigns that was enacted after Nixon fell would benefit Democrats, yet Ronald Reagan prospered under it.

    Then, in the early 1990s, liberals and Democrats championed easier voter registration laws, in the name of inclusion and democracy. They also thought that the new registrants would be poor and would vote for them. Participation did rise, but at least half the new voters turned out to be Republicans.

    The Law of Unintended Consequences applies, and it is good news. It means that we cannot safely manipulate the political system to get the results we want--so we might as well consider any proposed change on its merits.

    The founders of our Republic often guessed wrong about the future. Their wisdom was not foresight. Rather, they were wise enough to know that they could not predict the future, so they had to create institutions that would work well under a variety of unpredictable circumstances. If we follow their example, we can debate how to reform our political institutions to encourage and reward civic behavior.

    permanent link | comments (1) | category: philosophy

    March 4, 2004

    Straussophobia

    Straussians are back in the news--and all over blogs--because of the controversy surrounding the President's Council on Bioethics. The Council's chair, Leon Kass, was influenced by the late Leo Strauss. Two of its members have just been replaced--possibly for dubious ideological reasons. I'm not going to comment on that controversy, since I don't know the facts. I do enjoy the renewed attention to Straussianism, because it allows me to follow the postings of various young folks who are under Strauss's influence. See, for example, the collection of links after Jacob Levy's post, or this guide to "How to Spot a Straussian."

    Strauss is generally seen as a cultural conservative. However, his form of writing is indirect. He doesn't say what his personal views are; instead, he "reads" classic authors of the past. He explains that great philosophers are always in peril because of the unpopularity of their views, so they write "esoterically"--with coded or hidden messages. Strauss rarely (if ever) says what the messages of these past authors are. If, however, you apply Strauss' interpretative methods to his own writing, you find some evidence that he is actually endorsing a profound moral skepticism, akin to Nietzsche's philosophical position. It so happens that Nietzsche used the same methods of encoding secret messages in his own writing, and explicitly described himself as an esoteric author. Thus I have argued that Strauss was the opposite of a cultural conservative. He was a God-is-dead Nietzschean.

    Then the sociological question becomes: Which Straussians (protégés of Leo Strauss) are in on this game? My guess is, not many. One can actually do very interesting work as a Straussian minus the esoteric nihilism. Strauss drew our attention to the perilous position of critical thinkers in most, if not all, societies, and thus invited us to read the classics for hidden messages. This can be a productive approach. He also took some hard and effective shots at modern liberalism. I doubt that he favored straightforward conservatism as the alternative. But I do think he identified some of the deepest problems with liberalism, especially its tendency to support moral relativism as a moral absolute (a position that comes very close to self-refutation). Since Strauss, there has been a sophisticated and wide-ranging discussion of that issue, so he hardly had the last word. But he introduced an important topic.

    Finally, Straussians make useful colleagues because they are relentlessly opposed to political correctness and are willing to be "elitists." When we carelessly repeat nostrums like "the people's right to know," it's great to have a Straussian around to say, "That's complete nonsense." They are excellent prods to actual thinking--which may have been Leo Strauss' only goal in the first place.

    permanent link | comments (1) | category: philosophy

    February 27, 2004

    "social capital": political and apolitical

    Robert Putnam is mainly famous for reviving the concept of "social capital." As he measures it, social capital is the aggregate of certain habits and attitudes that individuals possess--especially trust for other people and membership in groups.

    There are two main interpretations of social capital theory. The political interpretation says that people deliberately develop organizations and networks in order to solve public problems. Trust is a by-product of this work; it is also something that people deliberately enhance by developing personal relationships and by raising children as members of communities. It is good to develop social capital because it enhances a community's capacity to solve problems in the future.

    The apolitical interpretation assumes that social capital goes up or down because of large social forces and trends, such as suburbanization, the work environment, and exposure to television. (TV makes people less trusting and less sociable.) The reason we should care--according to this interpretation--is that social capital correlates with mental health, longevity, and good educational outcomes. Therefore, if we can, we should tinker with big institutions to increase social capital.

    Although these two theories reflect different values, there are also empirical differences. It is either true or false that people can create social capital through deliberate action at the local level. I'm optimistic that they can, but I'm not sure how strong the evidence is.

    permanent link | comments (0) | category: philosophy

    January 1, 2004

    the moral value of literary themes

    For several years, I've been developing a moderate version of moral particularism, which says that the appropriate things to judge are situations, choices, or events, not concepts or categories (such as lying, happiness, or justice). I am therefore skeptical about the more ambitious forms of moral philosophy, which do focus on concepts. Lately, I've become interested in literary themes as an alternative.

    It's not easy to explain how any story can provide moral guidance for people who are not actually named in it. Unless one is an elderly land-owner with three daughters, it is not morally illuminating to learn that Lear should have given a third of his kingdom to Cordelia. If King Lear has moral value, the value lies in its themes, not its direct messages or "morals." Stanley Cavell demonstrates good thematic interpretation when he shows that Lear depicts several people who are moral skeptics. They refuse to act kindly toward others until they can prove to themselves that these others have good natures and that nature itself is good. This search for proof, Cavell says, is just one way of "avoiding love" that is portrayed in the play. If we wanted to base a moral rule on Lear, it would be something like this: "Act kindly without seeking ultimate reasons." But as general advice, this seems unsophisticated and unpersuasive, especially compared to the way that Shakespeare handles the "avoidance-of-love" theme in his concrete fictional world. Among other things, he shows that moral skepticism can result in distance, coldness, and cruelty.

    What is a theme?

    In a nice 1989 article entitled "The Story of the Moral," Roger Seamon argues that a story's theme is not some general proposition that we derive (validly or invalidly) from the words on the page. Rather, our emerging sense of a theme helps to tell us what literally is going on.

    No narrative can supply sufficient information to tell us what to imagine as we read or listen. For example, the Inferno doesn't tell us about Francesca da Rimini's appearance, facial expressions, or tone of voice. Another story might fill in some of those details, but it would necessarily omit others. Thus readers must supply information, and much of it will depend on our evolving sense of the story's theme. Some readers have seen Francesca as a regal figure, suffering with dignity on account of her selfless but forbidden love; others have imagined her as a carnal sinner who refuses to acknowledge her sexual misbehavior. When she says that she and Paolo "read no more," some readers imagine a sly wink, while others are deeply offended by that very suggestion. Each camp might choose a different actress to play Francesca and would expect her to utter her lines in a different way.

    As we read, we develop such assumptions and judgments, influenced by the text but not completely constrained by it. These assumptions are sometimes moral judgments, yet they influence our view of what happens in the story. (For instance, how we imagine Francesca's tone of voice depends on our interpretation of the overall themes of the Canto.) Thus, in narrative, fact and value are deeply intertwined; and it is not simply that facts imply values--the reverse is also true.

    To detect a moral doctrine in Francesca's story (e.g., "Adultery is wrong") would mean reducing the text to the most trivial moral message. Yet the story is extremely challenging and useful for thinking about adultery in conjunction with related concepts or themes, including sentimentality and the abuse of literature. Because it describes a concrete case, the text can explore these ideas together, without analyzing or defining them abstractly; and then we can look for roughly similar situations in the real world.

    permanent link | comments (0) | category: philosophy

    December 17, 2003

    philosophy & the young child

    I love Gareth B. Matthews' Philosophy & the Young Child (1980). It's full of dialogues in which kids between the ages of 4 and 10 explore profound issues of metaphysics, epistemology, logic, and ethics with an adult who's genuinely interested in their perspective. They supply fresh vision and curiosity; the adult provides some useful vocabulary and provocative questions.

    Matthews believes that it's hard to think straight about fundamental philosophical questions once you've been encumbered by a bunch of conventional theories--and once you've been told that most deep questions are really simple and obvious. For example, we're inclined to think that a kid is silly if she asks why she doesn't see double, since she has two eyes. Actually, this is not such an easy question to answer, but most of us are soon socialized to dismiss such matters as childish.

    Matthews skewers the great developmental psychologists, especially Piaget, who assumed that children first express naive views and then develop correct adult positions. Matthews points out that many of the "primitive" statements quoted by Piaget are actually more philosophically defensible than the adult positions he espouses without thinking twice. For instance, Piaget asserts that small children confuse "the data of the external world and those of the internal. Reality is impregnated with self and thought is conceived as belonging to the category of physical matter." When you grow up, according to Piaget, you realize that there are two separate domains: thought and matter. But Matthews quotes his own teacher, W.V.O. Quine (often called the greatest American philosopher), who told him, "Let's face it, Matthews. It's one world and it's a physical world." This is exactly the position that Piaget calls "primitive" and expects kids to drop as they "develop."

    Another treat in Matthews' book is his identification of a whole genre of children's literature: "philosophical whimsy." In some books that small children love, the plot is not driven forward by a practical problem or threat or a clash among characters. Rather, the protagonists face purely logical or epistemological puzzles. A simple example is Morris the Moose by B. Wiseman, in which Morris keeps trying to prove to other animals that they are moose like him. "My mother is a cow, so I'm a cow," says the cow. "You're a moose, so your mother was a moose," Morris replies. The whole book is about what makes a proof. This is a short and lightweight example, but the genre of philosophical whimsy also embraces Alice in Wonderland, Winnie the Pooh, and the Wizard of Oz.

    permanent link | comments (0) | category: philosophy

    December 5, 2003

    universal v. particular in ethics

    In ethics, the words "universal," "general," and "particular" are used in three entirely different contexts. First, there is the issue of cultural difference. Some people say, "Morality is universal," meaning that the same rules or judgments ought to apply to members of any culture. Their opponents reply that at least some moral principles are particular to cultures (they only bind people who come from some backgrounds).

    Meanwhile, some people say, "Obligations are universal," meaning that we have the same duties to all human beings. For instance, perhaps we are required to maximize everyone's happiness, to the best of our ability, not favoring some over others. Opponents of this kind of universalism reply that we have stronger obligations to particular people, such as our own children or compatriots. (See, for example, this good article by blogger and public intellectual Amitai Etzioni.)

    Finally, some people say, "What is right to do in a particular case is shown by the correct application of a general or universal moral rule." Their opponents reply that we can and should decide what to do by looking carefully at all the features of each particular case. They agree that there is a right or wrong thing to do in each circumstance; but general rules and principles are unreliable guides to action. Any rule or principle that makes one situation good may make another one bad.

    These three arguments are distinct analytically. If you take the "universalist" side in one debate, it does not follow that you must also take it in the others. One can, for example, believe that all people (regardless of culture) ought to be partial toward their own particular children. That view would combine two forms of universalism with one variety of particularism. Or one can believe that very abstract, general rules are never good guides to action, yet everyone from every culture should agree that this mother, in this particular set of circumstances, was right to feed her own child and to let a stranger go hungry. Or one can believe that we ought to treat everyone with precise equality, but only because we are members of a distinctly Western and modernist culture; there is an abstract rule of equal treatment that binds us but does not apply elsewhere.

    I think that the only illogical combination is resistance to universal rules plus commitment to impartiality, because impartiality seems best construed as a rule that applies in all cases ("treat everyone alike"). Particularism is consistent with partiality, if partiality just means that sometimes it's OK to discriminate.

    I suspect that there is a psychological tendency for some people to embrace universalism in all its forms, or else all forms of particularism; but there is no logical reason for this tendency. On the contrary, there may be some illogic involved. For example, some people favor partiality towards kin and countrymen, and they think that they can support this value by rejecting cultural universalism. That is a non sequitur, although probably a common one. (We see it in "Romantic" reactions against "Enlightenment" universalism.) Likewise, one might fear the nihilistic consequences of cultural relativism and therefore favor abstract, rule-based ethics, but this is another illogical move.

    My own view, in a nutshell, combines cultural universalism (everyone should agree in their assessment of any particular case, if they understand it fully); openness to partiality (sometimes it is right to discriminate in favor of certain people with whom one has special relationships); and "particularism" about ethical judgments (we can and should judge cases by closely examining their details, not by applying rules).

    permanent link | comments (0) | category: philosophy

    November 24, 2003

    libertarians and socialists have something in common

    I see libertarianism and modern democratic socialism as flawed for similar (or parallel) reasons:

  • Libertarians believe in markets, which they consider just and free as well as efficient. They see politics as a threat, because masses of people may decide to interfere with markets by taxing and spending or by regulating industry. To libertarians, such political interference is morally illegitimate as well as foolish. It means that some individuals are robbing others of freedom.
  • Democratic socialists believe in egalitarian politics: in one-person, one-vote. They don't trust markets, because unregulated capital may exit a locality or country that chooses to impose high taxes or tight regulations. Even in the US, the bond market will fall if investors suspect that the federal government is going to borrow and spend, no matter how popular this policy may be. When investors discipline democracies by withdrawing their capital, socialists see a morally illegitimate constraint on the people's will and interests.
  • For what it's worth, I think that markets and politics are both inevitable. There's much that we do not know, and it's always wise to remain skeptical and open to new possibilities, yet I doubt very much that we can ever escape from a few basic laws that govern the political and economic spheres. A study of politics tells us, for example, that great masses of people have power. They can be suppressed temporarily by dictators, but tyrants tend to meet grisly ends. They can be restrained by constitutions, but complex systems that frustrate popular will usually get changed. If politics is inevitable, then libertarians have no practical way to attain the minimal state they dream of, unless one day most of their fellow citizens come to share their values (which is highly unlikely). Meanwhile, markets are obdurate too. Even a popular, legitimate, democratic government cannot create a supply of goods unless consumer demand produces a high enough price to motivate producers. Thus, when markets discipline governments, this is not corrupt or illegitimate interference; it is reality coming into play.

    All this explains why every successful country in the modern world is a mixed economy, with a substantial public and private sector, majoritarian institutions and free markets. But the successful models differ in important respects, and there is room for debate about whether the US approach is better or worse than that of Germany, Sweden, Japan, or Canada. The criteria of excellence would include efficiency, sustainability, liberty, and quality of life (broadly defined) for the poorest as well as average residents.

    permanent link | comments (0) | category: philosophy

    November 5, 2003

    Renaissance humanism today

    I think that Renaissance humanist philosophy is often misunderstood; and this mistake matters to me because I favor a revival of the real methods of the humanists. The standard view is that Renaissance humanists taught original doctrines, especially the "dignity of man" that was the theme of Giovanni Pico della Mirandola's famous oration. They are thought to be "humanists" because they believed in the centrality of human beings as opposed to God.

    In fact, Pico was neither original (in the context of medieval thought) nor especially influential. But Renaissance humanism did introduce a revolutionary change. Medieval scholastic philosophy had involved a particular style of writing. In the Middle Ages, philosophical works were third-person treatises: systematic, abstract, theoretical, and very logically sophisticated compared to anything written in the Renaissance. They included concrete examples, but always extracted from their original contexts to support abstract points. In contrast, Renaissance humanists meant by "philosophy" the dialogues, speeches, and moralistic biographies of ancient times, especially those written by Plato, Cicero, Seneca, and Plutarch. Plot and character featured prominently in these works. Humanist readers were mainly interested in philosophers (such as Socrates or Diogenes) as role models, as men who had demonstrated virtues and eloquence in specific situations. The works they enjoyed were also full of irony: for example, Plato did not speak except through Socrates, for whom he probably had complex and ambiguous feelings.

    In turn, Renaissance humanists wrote, not abstract treatises, but stories told by and about literary characters in concrete situations. Often these works were ironic. Utopia, the Praise of Folly, and the Prince share a surprising feature: people have argued for centuries about whether their authors were serious or joking. Utopia and the Praise of Folly are narrated by fictional characters, distant from their authors. And Machiavelli wrote the Prince for a ruler who was likely to execute him if he spoke his mind. Its real meaning may be ironic.

    Today, mainstream moral philosophy is "scholastic": sophisticated, aiming at systematic rigor and clarity, logical, abstract, and ahistorical. But there are also works that try to make philosophical progress by interpreting past works in all their literary complexity, ambiguity, and original context. I'm thinking of Alasdair MacIntyre's After Virtue, Martha Nussbaum's Fragility of Goodness, Bernard Williams' Ethics and the Limits of Philosophy, and Richard Rorty's Contingency, Irony, and Solidarity. These authors have no common theme or message, but they treat philosophy as a particular kind of discipline. They place it among the humanities, not the sciences. In this respect they are "humanist" philosophers in the Renaissance tradition.

    permanent link | comments (1) | category: philosophy

    October 27, 2003

    the Amish and freedom

    We're just back from a family weekend in Lancaster, PA--Amish country. It's dispiriting to watch real Amish people walk or ride in wagons past huge Amish-themed tourist attractions. (One store is actually called "Amish Stuff Inc.") Extreme simplicity seems to attract the worst form of consumerism.

    The Amish raise a philosophical dilemma that has often been written about. If you believe in freedom, this must include freedom of religion, which means the ability to raise your own children within your faith. Central to most religions are detailed rules or traditions concerning the rearing of children. However, if you believe in freedom, then you must believe in the right of individuals to choose their own values and commitments. Parents can interfere profoundly with such freedom. Indeed, all parents necessarily do. Anyone who grows up in a family is constrained by the legacy of family beliefs and values. (Even those who rebel have been influenced.) However, the tension between parental freedom and children's liberty is especially sharp and clear in cases like the Amish, who prefer to be as isolated as possible from the rest of the world. In particular, they prefer their children to "drop out" of school in late childhood.

    This means, on the one hand, that Amish kids lack the skills and breadth of experience necessary to understand or pursue a wide range of alternative forms of life. A book that I skimmed in Pennsylvania claimed that it was "nonsense" to complain about the limits that Amish children face, for those who leave the faith can always find work locally as farm hands. To me, this proves the point.

    On the other hand, if we bring children up in a "liberal" way, so as to maximize their ability to make free choices, then they cannot become Amish. Amish culture would be entirely different if most of its members spent their childhood and adolescence in mainstream society. Being Amish means being intentionally naive; it means not knowing much about the corrupt modern world. It means living with a small group of people who all came from the same background, very few of whom leave the fold. And it means valuing communal solidarity more than choice. Thus, if we insist on children's freedom of choice, then we can't let the Amish raise their kids as they want. Not only would this reduce the freedom of each adult generation; it would erase an alternative culture whose existence broadens all of our horizons.

    I'm still seeing powerful mental images of Amish farmers walking behind their horse-drawn plows past huge outlet stores. The stores represent "choice" in its most extreme form: millions of affordable items for your house, stomach, and wardrobe. But how much choice do you have if you don't realize that choice is itself an option, and incompatible with some of the best ways of living?

    permanent link | comments (1) | category: philosophy

    October 21, 2003

    the capabilities approach

    I was just refreshing my memory about the "capabilities approach" pioneered by Amartya Sen, Martha Nussbaum, and others. (I have been asked to comment on a paper about "positive youth development," and I thought that Sen's ideas would be relevant and helpful.) The rough idea is that we ought to implement social policies that maximize people's capabilities. The important human capabilities can be listed, although theorists differ somewhat about what belongs on the list. Enhancing capabilities is better than maximizing a set of behaviors or goods, because people should be able to choose what to own and how to behave, within broad limits; and different things are valued in different cultures. Thus trying to maximize goods or behaviors is too prescriptive. Enhancing capabilities is also better than simply giving people what they say they want or need. People can want completely bad things, e.g., crack cocaine. Or they can want too much, as in the case of Hollywood actors who want to have six Hummers. Or they can want too little, which is a common problem among the world's very poor.

    In contrast, capabilities are inherently good things, yet increasing one's capabilities does not restrict one's freedom. Furthermore, capabilities are defined loosely enough so that they are compatible with various forms of diversity. For instance, I would say that there is a capability of "raising children." Increasing this capability does not compel anyone to raise actual children. And people can choose to express it in diverse ways, from parenthood within a nuclear family, to participation in a peasant village where everyone raises all the kids, to working in a convent orphanage.

    Applying the capabilities approach to adolescent development would mean saying that we want (and will help) teenagers to develop a list of capabilities, such as: providing for themselves financially; loving others; expressing themselves creatively; developing spiritually; understanding nature; raising the next generation; and participating politically.

    permanent link | comments (0) | category: philosophy

    October 16, 2003

    analytic versus continental philosophy

    Ten to fifteen years ago, when I first studied philosophy, the great divide was between the "analytic" and "continental" traditions. Some people wouldn't talk to colleagues in the opposite camp, and departments fell apart as a result. I think the conflict is dying down today, partly because of the waning significance of the French postmodern thinkers. They were the figures in the continental canon who provoked the deepest contempt from the analytic side. Many analytic philosophers can understand why one would study Hegel, Nietzsche, or Husserl, but not Derrida or Baudrillard.

    The two groups are difficult to define. (One analytic colleague told me, in all seriousness, that "continental" means "unclear," an example of an unhelpful definition.) In my view, analytic philosophers are those who treat science as the paradigm of knowledge. Science is cumulative, so studying its past is not particularly important for progress. Everyone admits that scientists have cultural biases, but science aims to be universal and uses techniques to overcome bias. Not all analytic philosophers are pro-science; some are skeptics, relativists, or political critics of organized science. However, they all see science as the paradigm of thinking, even if they criticize it. And some actually see philosophy as a branch of science (consisting of the most abstract parts of physics, math, and neurobiology).

    In contrast, continental philosophers think that philosophy is an expression of a culture. Thus there is Greek philosophy, German philosophy, and post-modern philosophy, but philosophy per se is only an abstraction. As Richard Rorty said, philosophy is a kind of writing, similar to other written cultural products such as novels and plays. This does not mean that continental philosophers must always be relativists. Some discern a pattern in cultural history: for example, a story of progress (as in Hegel and Marx) or decline (as in Heidegger). Or they may believe that it's possible to advance a rational critique of a culture from within. But they see philosophy as more similar to fiction and literary criticism than to science.

    This explains the prevailing difference in methodology. Analytic philosophers try to solve problems. They do think about others' work, especially recent articles that embody the latest thinking. But a perfect analytic argument would require no footnotes or quotations; it would be self-contained and persuasive, without any recourse to authority. By contrast, the typical continental philosopher tries to show what Famous Dead Philosopher X thought about an issue of his day. For continental philosophers, the history of the discipline is not merely of "antiquarian" interest, as it would be for an analytic philosopher. Rather, the deepest philosophical truths (if there are any to be known) are patterns in the history of thought.

    permanent link | comments (0) | category: philosophy

    September 8, 2003

    the Iowa political futures market

    A well-known experiment, run by Iowa Electronic Markets, allows traders to place bets on the outcome of political elections, including the current California governor's race. According to a paper by Joyce Berg and others, the Iowa Political Market has outperformed polls in 9 out of 15 elections. Its average error in predicting election results is about 1.5%, compared to about 2% for an average poll. In some past elections, the Market avoided major errors that marred all the major national surveys, while never making a gross mistake itself. The apparently uncanny ability of the Iowa Electronic Market to predict the future was one of the reasons that the Defense Department recently floated the grisly idea of a futures market in terrorism.

    I'm struggling to understand the theoretical explanation for this phenomenon. I realize that markets efficiently aggregate the knowledge of investors (who must try to make honest predictions, since their money is on the line). But where do the investors in a political futures market get their knowledge? They cannot simply ask themselves how they intend to vote. As Berg et al. note, traders are "not a representative sample of likely voters; they are overwhelmingly male, well-educated, high income, and young" (p. 2). Some are not even US residents. Thus their own choices in the real election, assuming they vote at all, will be very different from those of the American people. Yet they seem to be able to predict the actual result more accurately than a random-digit telephone poll.

    One clue is that a relatively small number of "marginal traders" drive the market; they make many more trades than other people and are less prone to sticking with an unlikely bet out of loyalty. I would guess that these "marginal traders" are political junkies: people who have no sentimental attachment to any of the candidates but love to prognosticate about elections. We can assume that they have seen all the polls—but that still doesn't explain how they outperform surveys on average. Could it be that they instinctively recognize a consistent error in polling, and adjust accordingly? For example, maybe polls tend to pick the real winner but predict a larger margin of victory than actually occurs. (Races tend to "tighten" right at the end.) Or maybe polls tend to make inflated predictions for the Democrats' share of the vote, because they count too many low-income people as "likely voters." It's also possible that the marginal traders rely on one or two polls that are better than the average. (Then we would find that the market outperformed polls in general, but was no more accurate than the best of the polls.)

    These are hypotheses backed by no evidence. But if one of them turns out to be true, then we don't need a market to improve on surveys. We just need to make the same adjustment to poll results that the marginal traders (a.k.a. the political junkies) are making. Likewise, we would not benefit from a futures market in terrorism, but we should strive to understand how the best-informed and least sentimental observers of terrorism make their predictions.
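
    As a thought experiment, the first hypothesis is easy to state precisely. Here is a minimal sketch in Python of the adjustment a "marginal trader" might make if polls pick the right winner but overstate the margin. Every number in it is invented for illustration, not real polling data.

        # Hypothetical illustration: shrink each poll's predicted winning margin,
        # on the assumption that races "tighten" at the end. Invented numbers.
        polls = [
            # (poll's predicted two-party share for the leader, actual share)
            (0.56, 0.54),
            (0.54, 0.51),
            (0.58, 0.53),
        ]
        SHRINK = 0.5  # keep only half of the predicted margin over 50%

        for predicted, actual in polls:
            adjusted = 0.50 + SHRINK * (predicted - 0.50)
            print(f"raw error {abs(predicted - actual):.3f} -> "
                  f"adjusted error {abs(adjusted - actual):.3f}")

    If the marginal traders are doing something like this, the market's edge over the polls is just a repeatable correction, and anyone could apply it.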

    permanent link | comments (0) | category: philosophy

    August 27, 2003

    the 18th century comments on Campaign '04

    (Written while stuck in the Manchester, NH, airport, and posted on Thursday): Imagine that some of the major political philosophers of the eighteenth century are observing modern politics from their permanent perches in Limbo. What would they say?

    Edmund Burke: We should normally maintain the status quo (whatever it may be), since people have learned to adjust to it and it embodies the accumulated wishes and experiences of generations. I am especially skeptical of efforts to reform societies quickly by imposing ideas that came from other cultures or from the exercise of "universal reason" (as if there were such a thing). Good conservatives are hard to find today. This Newt Gingrich person represents the polar opposite of my views. Daniel Patrick Moynihan was sensible throughout his career, from his days opposing Great Society programs to his battles to preserve welfare (always in the interests of maintaining an existing social structure). Some modern leftists are Burkeans, in their efforts to conserve indigenous cultures against markets. The IMF and the World Bank remind me of the British Raj—they are arrogant purveyors of a rationalist philosophy that will backfire in distant lands. I'd vote Green, just to shock people.

    Edward Gibbon: The Roman Republic exemplified the main civic virtues: patriotism, military discipline, sobriety, love of the common good, and worldly reason. These virtues were undermined by Christianity, which was other-worldly, pacifistic, superstitious, and hostile to national pride. I have a soft spot for your deist Founding Fathers, but I can't find anyone to like these days. Conservatives share my list of virtues, but they're revoltingly pious. Things continue to decline and fall.

    Thomas Jefferson: The New Dealers used to like me because I was a civil libertarian and a political populist. They built me a nice monument. Now conservatives love to quote statements of mine like "That government is best which governs least." But I've given up on politics. I don't know what to make of a society in which independent family farmers represent much less than one percent of the population. I was surprised when governments started enacting expensive programs with the intention of benefiting ordinary people; that never happened before 1850. Did the programs of the Progressive Era and the New Deal represent popular will, or did they interfere excessively in private life? I can't decide. In any case, my own dead hand should not weigh heavily on the living, so I advise you to ignore any advice I gave in my own lifetime. I now spend my whole time working on labor-saving gadgets.

    James Madison: I sought to construct a political system that would tame the ruling class (to which I admit that I belonged) and align our interests with those of the broad public. The ruling elite in my day included Southern planters and Northern traders, manufacturers, and bankers. They had reasons to care about their own families' reputations (especially locally), and thus could be induced to play constructive roles. Also, they had conflicting interests: planters stood on the opposite side of many issues from manufacturers and shippers. Thus each group could be persuaded to check the worst ambitions of the others. I expected men of my class to hold all the offices in an elaborate system of mutually competitive institutions. They would seize opportunities to feather their own nests, but they would also care about the long-term prospects of their home communities, the institutions within which they served, and the United States. Therefore, they would act in reasonably public-spirited ways. In contrast, today's ruling class consists of large, publicly traded corporations. They have no concern with their political reputations, and no loyalty to communities or the nation. You moderns need to look for a different mechanism for inducing today's ruling class to serve public purposes. I do not view the system that I created as adequate for that purpose.

    Jean-Jacques Rousseau: All patriotic, decent people have the same interests and goals. Disagreements arise because people chatter together privately in little groups or factions, and also because some people mislead others with their clever rhetoric. A perfect democracy would have no factions and no debate. I am heartened to read in a book by Hibbing and Theiss-Morse that millions of Americans are Rousseauians. They hate political debate, parties, legislatures, and professional politicians, for they realize that all decent people have the same interests. I like this Schwarzenegger fellow; he seems so natural.

    Tom Paine: Most Americans still agree with me, and yet the aristocrats run things. I'm going to endorse Dean.

    Adam Smith: Everyone realizes now that international trade creates wealth, that markets encourage specialization (and thus efficiency), and that official monopolies and trade barriers are bad for the economy. Fewer people pay attention to my moral philosophy and my account of civil society. I get plenty of praise, but some of it from embarrassing quarters.

    permanent link | comments (0) | category: philosophy

    July 23, 2003

    against intuitionism

    I'm still in Indianapolis at the Kettering Foundation retreat. Meanwhile, here's something I've been thinking about lately:

    Most moral philosophers appeal to intuitions as the test of an argument's validity. At the same time, they presume that our moral judgments should conform to clear, general rules or principles. An important function of modern moral philosophy is to improve our intuitions by making them more clear, general, and consistent.

    This methodology can be attacked on two fronts. From one side, those who admire the rich, complex, and ambiguous vocabulary that has evolved within our culture over time may resist the effort to reform traditional moral reasoning in this particular way.

    As J.L. Austin wrote: "Our common stock of words embodies all the distinctions men have found worth drawing, and all the connexions they have found worth marking, in the lifetime of many generations." Thus there is a lot of wisdom contained in the vague and morally indeterminate vocabulary that ordinary language gives us. Words like "love" introduce a complex and not entirely predictable penumbra of allusions, implications, and connotations. Barely conscious images of concrete events from history, literature, and our personal lives may flit through our heads when someone uses such words. Everyone may recall a somewhat different set of such images, sometimes with contrary moral implications. This array of sometimes inconsistent references is problematic if we prize clarity. Hence moral theorists attempt to excise overly vague terms or to stipulate clear meanings. But the complexity and vagueness of words is beneficial (rather than problematic) if human beings have embodied in their language real family resemblances and real ambiguities. There really are curries, and it would reduce our understanding of food to ban the word "curry" for vagueness or to define it arbitrarily. Likewise, there really is "love," and it would impoverish our grasp of moral issues to try to reason without this concept or to define it in such a way that it shed its complex and ambiguous connotations, some of which derive from profound works of poetry, drama, and fiction.

    The methods of modern philosophy can be attacked on another flank, too. Instead of saying that philosophers are too eager to improve our intuitions, we could say that they respect intuitions too much. For classical pagans and medieval Christians alike, the test of a moral judgment was not intuition; it was whether the judgment was consistent with the end or purpose of human life. However, modern moral philosophers deny that there is a knowable telos for human beings. Philosophers (as Alasdair MacIntyre argues) are therefore thrown back on intuition as the test of truth. Even moral realists, who believe that there is a moral truth independent of human knowledge, must still rely on our intuitions as the best evidence of truth. But this is something of a scandal, because no one thinks that intuitions are reliable. It is unlikely that we were built with internal meters that accurately measure morality.

    permanent link | comments (0) | category: philosophy

    June 24, 2003

    freedom of speech for universities

    For me, one of the most interesting aspects of Monday's Supreme Court decisions on affirmative action was Justice O'Connor's deference to universities. In her majority opinion, she writes:

    The Law School's educational judgment that such diversity is essential to its educational mission is one to which we defer. ... Our scrutiny of the interest asserted by the Law School is no less strict for taking into account complex educational judgments in an area that lies primarily within the expertise of the university. Our holding today is in keeping with our tradition of giving a degree of deference to a university's academic decisions, within constitutionally prescribed limits. ... We have long recognized that, given the important purpose of public education and the expansive freedoms of speech and thought associated with the university environment, universities occupy a special niche in our constitutional tradition. ... In announcing the principle of student body diversity as a compelling state interest, Justice Powell invoked our cases recognizing a constitutional dimension, grounded in the First Amendment, of educational autonomy: 'The freedom of a university to make its own judgments as to education includes the selection of its student body.'

    Courts have occasionally deferred to universities, not only in admissions, but also in free-speech cases. Most people think that it is unacceptable for a university, especially a public one, to discriminate against students or faculty who adopt radical views, even in the classroom or in their writing. However, most people think that a university can discriminate against teachers and students for failing to use appropriate methods of reasoning in the classroom, in papers, and in publications. The First Amendment does not guarantee you a passing grade even if your final exam is lousy. Thus "academic freedom" is not only an individual right; it is also an institutional right of colleges to set their own standards of discourse. (See J. Peter Byrne, "Academic Freedom: A 'Special Concern of the First Amendment'," Yale Law Journal, November, 1989, pp. 251 ff.) In Bakke and other cases, justices have extended institutional freedom to cover admissions and hiring decisions, within broad limits. Peter Byrne observes that moderate jurists like O'Connor and Frankfurter are the ones who typically argue this way. Strong liberals and conservatives of each generation want to decide constitutional issues that arise within colleges; moderates prefer to defer to academic institutions.

    Deference to universities could be grounded in freedom of association—but this defense would not apply to state institutions. Byrne and other commentators want to base institutional academic freedom on respect for academia as a separate social sphere. They say that science and scholarship should be masters of their own domains. After about a decade in the academic business, I can't decide whether this degree of respect is warranted. Sometimes I think that academia is an impressive social sector guided by Robert Merton's KUDOS norms: knowledge held in common, universalism, disinterestedness, and organized skepticism. At other times, I think that academia is a snake pit of favoritism, logrolling, and faddish conformity. I also think that the broader question is complicated, i.e., Should (or must) democratic governments defer to professions as the authorities within their own spheres of expertise?

    permanent link | comments (0) | category: philosophy

    June 12, 2003

    why blog?

    A friend of mine saw my May 23 entry, which is about the moral dangers of seeking fame, and asked: "Is writing a blog part of an effort to become famous?" I replied (in effect): "I have looked deep within and discovered that 75% of my original motivation for starting the blog was self-aggrandizement." (At least I'm honest.) But I do have other goals, including:

    1. To explore the ethics of recording ideas and experiences in a public way—that is, in a way that's honest and potentially interesting for other people, and that respects others' privacy rights and my own duties to the institutions that I work for. Being public in this way is somewhat tricky, and it's supposed to be a modest experiment in living democratically.
    2. To experiment with this new genre ("the blog") by writing unusual kinds of entries. For the most part, I try not to offer statements of personal opinion or simple links to other sites, but instead I like to pose moral or philosophical questions that have arisen in some recent experience.
    3. To create a notebook from which I can later borrow for longer, more systematic writing.
    4. To have a platform for presenting short comments for a small audience, easily and quickly.
    5. To present myself to anyone who's interested. The best description of who I am (as a professional) is a record of what I've been doing.
    permanent link | comments (0) | category: philosophy

      June 6, 2003

      ideology: pros and cons

      Is it good to be ideological? This seems to be an important question, since ideologies are what many people use to engage in political and civic life, yet there are good reasons to be against ideology.

      First of all, what is ideology? I think we are "ideological" to the degree that our concrete judgments are determined by a set of assumptions that cohere or grow from a common root. Thus:

      degree of ideology = [(range of judgments generated by a set of assumptions) x (coherence of the set)] / (number of items in the set of assumptions)

      For example, Ayn Randians have a very small set of assumptions—maybe just one. Their belief that individual freedom is the only moral value generates a very wide range of judgments, not only about politics and economics, but also about religion, the virtues, and aesthetics. For them, a good novel must be about an iconoclastic genius, because individual creativity and freedom are all that matters. So Ayn Randians are highly ideological.

      Classical liberals are somewhat less ideological, according to this theory, because the range of judgments supported by their initial assumptions is narrower. For instance, they may say that liberalism only tells us how to organize a state; it says nothing about what makes a good novel, or whether God exists, or what are the best personal virtues.
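
      Since the definition is an explicit ratio, it can be toyed with directly. Below is a minimal sketch in Python; the 0-to-1 scores assigned to each factor are invented purely to show how the ratio behaves, not proposed measurements.

          # Toy version of the "degree of ideology" ratio defined above.
          # All scores are invented 0-to-1 ratings, not measurements.
          def degree_of_ideology(judgment_range, coherence, num_assumptions):
              return (judgment_range * coherence) / num_assumptions

          # An Ayn Randian: one assumption, perfectly coherent, generating
          # judgments about almost everything.
          print(round(degree_of_ideology(0.9, 1.0, 1), 3))   # 0.9

          # A classical liberal: more assumptions, settling only political questions.
          print(round(degree_of_ideology(0.3, 0.8, 4), 3))   # 0.06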

      So is it good to be highly ideological? I would say Yes if:

      • there is a small set of coherent and true principles that can guide us.
      • everyone is inevitably ideological, in which case an overt ideology is more honest than a hidden one.
      • the alternatives are unpalatable (e.g., we must make no judgments at all, or we can only decide randomly).
      • ideology gives us roughly correct answers while lowering the cost of political participation, thereby allowing poor and poorly educated people to participate.
      • ideology is the only way to solve "voting cycles".

      I would say No if:

      • there is not a small set of coherent and true principles.
      • it is possible to make judgments individually, and generalizations distort a complex reality.
      • there are preferable alternatives to ideology.

      permanent link | comments (0) | category: philosophy

      June 2, 2003

      Jonathan Dancy's particularism

      I think that Jonathan Dancy, a British moral philosopher, has made an important contribution with an argument that I would loosely paraphrase as follows. (See this webpage for his own statement.)

      Although moral philosophy is highly diverse in its methods and conclusions, it almost always involves an effort to identify concepts or words that have consistent moral significance. For instance, when we examine a complex case of adultery, we may detect numerous features that are morally relevant: promise-breaking, deceit, self-indulgence, lust, pleasure, happiness, love, freedom, and self-fulfillment. We may not know how to judge the case, since its features push us in various directions. But we do know—or we think we know—the valence of each concept. Regardless of our overall judgment of an adultery story, the fact that it involves a broken promise makes it worse than it would otherwise be. The fact that it expresses freedom or increases happiness makes it better. And so on.

      This kind of analysis has the advantage of allowing what Dancy calls "switching arguments." We form a strong opinion about the moral polarity of a concept that arises in well-understood cases, and then we apply (or "switch") it to new situations. So, for example, if we admire conventional marriage because it reflects long-term mutual commitment, then we ought to admire the same feature in gay relationships.

      But what if moral concepts do not have the same valence or polarity in each case? What if they are not always good or bad (even "all else being equal"), but instead change their polarity depending on the context? Clearly, this is true of some concepts. Pleasure, for example, is often a good thing, but not if it comes from observing someone else's pain—then the presence of pleasure is actually bad, even if it has no impact on the sufferer. In my view, it is a mistake to isolate "pleasure" as a general moral concept, because one cannot tell whether it makes things better or worse, except by examining how it works in each context.

      Philosophers have always been eager to reject some potential moral concepts as ambiguous and unreliable; but they have wanted to retain at least a few terms as guides to judgment. Thus, for instance, Kant drops "pleasure" and "happiness" from the moral lexicon, but "duty" remains. It would be revolutionary to assert, as Dancy does, that "every consideration is capable of having its practical polarity reversed by changes in context." Dancy believes that no concepts, reasons, or values have the same moral polarity in all circumstances. Whether a feature of an act or situation is good or bad always depends on the context, on the way that the feature interacts with other factors that are also present in the concrete situation. To shake our confidence that some important moral concepts have consistent polarities, Dancy provides many examples in which the expected moral significance of a concept is reversed by the context. For example, truth-telling is generally good. But willingly telling the truth to a Gestapo agent, even about some trivial matter such as the time of day, would be regrettable. Returning a borrowed item is usually good, but not if you learn that it was stolen, in which case it is wrong to give it back to the thief.
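
      One crude way to picture the disagreement is as the difference between a lookup table and a function of context. The sketch below, in Python, uses the examples from this entry; the table, the context keys, and the numbers are invented stand-ins, not anything Dancy himself proposes.

          # The generalist picture: each concept has a fixed polarity.
          DEFAULT_POLARITY = {"pleasure": 1, "truth-telling": 1, "returning": 1}

          # Dancy's picture: context can reverse the polarity of any concept.
          def polarity(feature, context):
              if feature == "pleasure" and context.get("source") == "another's pain":
                  return -1
              if feature == "truth-telling" and context.get("audience") == "Gestapo":
                  return -1
              if feature == "returning" and context.get("item") == "stolen":
                  return -1
              return DEFAULT_POLARITY[feature]

          print(polarity("pleasure", {}))                            # 1
          print(polarity("pleasure", {"source": "another's pain"}))  # -1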

      permanent link | comments (0) | category: philosophy

      May 23, 2003

      perils of fame

      I received this year's edition of The Higher Education Exchange today, with an interview of me by David Brown. The interview starts with me worrying about academics who pursue fame. I think that the desire for fame is a major motivation in academia; in fact, status and fame seem to be professors' main selfish goals. (Curiosity is one of their main unselfish motives.) I'm interested in this because I think that both the pursuit of fame and its attainment can have distorting—even corrupting—effects on scholars. I also think that fame goes to the already famous in a way that's unfair and that undermines meritocracy in the university. This would be a good subject for a serious philosophical article, I believe.

      permanent link | comments (0) | category: philosophy

      May 12, 2003

      Brian Barry on inequality

      Brian Barry spoke at Maryland on Friday, making a good old-fashioned case for economic equality. He cited the following statistics as evidence that we do not have much social mobility in the US: If you are a male born in the poorest tenth of the population, you have only a 1.3 percent chance of reaching the top ten percent during your lifetime, and just a 3.7 percent chance of becoming at all wealthy (in the top fifth). If you are born in the bottom tenth, the odds are more than even that you will never make it out of the bottom fifth. Barry's source is Samuel Bowles and Herbert Gintis, "The Inheritance of Inequality," Journal of Economic Perspectives 16 (2002): 3-30, at p. 3.

      permanent link | comments (0) | category: philosophy

      May 6, 2003

      legacy preferences

      At a seminar today, some colleagues and I discussed Senator John Edwards' proposal to eliminate the preference for "legacies" (children of alumni) in college admissions. Some people are saying that legacy preferences are on the same footing with affirmative action for racial minorities and women. If we ban affirmative action as a form of discrimination that undermines meritocracy, we should ban legacy admissions as well. If we keep one, we may (or must) keep the other. A third problematic policy is the preference that public universities often give to in-state students. Isn't it discriminatory for UC Berkeley to prefer Californians?

      (It is worth noting that being denied admission to Harvard because one's place went to a "legacy" is not a tragedy—there are many other fine schools. Being denied admission or financial aid at Michigan because one lives in Kentucky is at least as unfair.)

      I think this issue is fairly complicated. First, there are practical considerations. Presumably a policy banning legacy preferences would cause at least some rich alumni to curtail their contributions, thus removing some financial support from scholarship and education. Likewise, a policy banning in-state preferences could lead states to withdraw support from their own colleges. However, either or both of these fears might turn out to be unwarranted.

      If one justifies legacy preferences mainly on practical, economic grounds, then it doesn't make sense to prefer the children of alumni who have never contributed anything to a college. Yet most colleges deny that they prefer donors' children; that would be too crass. Implicitly, their argument seems to rest on freedom of association and the value of preserving their membership as a community over time.

      Private universities probably have a right as associations to prefer their own members (alumni, staff, and current students). That doesn't make a legacy policy morally admirable, however. It certainly has the disadvantage of preserving a hereditary elite and undermining meritocratic competition. Thus we might want to use the leverage of federal funding to discourage such preferences. On the other hand, maybe it is admirable to build community bonds within private associations. In that case, is it equally acceptable for states to treat themselves as exclusive communities that prefer their own citizens? Should federal policy allow or discourage this?

      permanent link | comments (0) | category: philosophy

      May 5, 2003

      why Dante damned Francesca da Rimini

      I looked at statistics for this site recently and was surprised to see that the most popular search terms that take people here include "Dante," "Paolo," "Francesca," and "Inferno." I am surprised because I think of myself as a civics, democracy, and political-reform guy; I have not contributed much to the study of Dante, and this website certainly doesn't offer much on the topic (beyond the one page about my ongoing Dante project). Today, however, I posted one of my published Dante articles, and I will add more soon—all in the interests of serving my audience.

      In "Why Dante Damned Francesca da Rimini," I argue that there are two explanations for Dante's decision to place Francesca in Hell (even though her real-life nephew was his patron and benefactor). First, he may have sympathized with this fellow lover of poetry who tells her own sad story so movingly, but he realized that she had committed the mortal sin of adultery. Thus he damned her because his philosophical reason told him that she was guilty, and he wanted to suggest that moral reasoning is a safer guide than stories and the emotions that they provoke. For the same reason, the whole Divine Comedy moves from emotional, first-person, concrete narrative toward abstract universal truth as Dante ascends from Hell to Heaven.

      But there is also another, subtler reason for his decision. Francesca loves poetry, but she reads it badly. Her speech is a tissue of quotations from ancient and medieval literature, but every one is inaccurate. In general, she takes difficult, complex texts and misreads them as simple cliches that justify her own behavior. Meanwhile, she says nothing about her lover or her husband—not even their names—which suggests that she cannot "read" them well or recall their stories. Her failure as a reader suggests that Dante was not necessarily against poetry and in favor of philosophical reason. Instead, perhaps he wanted to point out some specific moral pitfalls involved in careless reading.

      permanent link | comments (0) | category: philosophy

      May 4, 2003

      on praising one's own children

      I like to say nice things about other people, in their presence and also behind their backs. Yet I try not to say overly nice things about myself. Praising others makes me feel good (and often comes naturally); praising myself makes me feel guilty. I used to be able to follow both principles consistently—until I had kids. Now, I often want to say nice things about my children, even when they are not around. But many people see praising one's own offspring as a way of bragging about oneself. This is especially true of other parents, for we moms and dads are a very competitive lot (even the nicest ones). Indeed, when I praise my own children behind their backs, I feel a tinge of guilty pride that resembles the feeling I would have if I had just bragged about myself, even though I honestly do not see myself as responsible for the good things that my children do. (Then again, I'm not sure that I'm responsible for any good things I may do.) Is this feeling of pride a sign that it is wrong—immodest—to praise one's children when they are not present? Or is it right to praise them, as long as one does not feel pride when doing so? (After all, they are individuals in their own right, so why should anyone think about their parents when they are discussed?) Or is it right to praise them and to feel proud about their good qualities, even though it is wrong to praise oneself?

      permanent link | comments (0) | category: philosophy

      May 2, 2003

      thinking about the fetus without analogy

      Here's a question prompted by a seminar discussion today. (The speaker was my colleague Robert Sprinkle.) Would it be possible to consider the moral status of a human fetus without analogizing it to something else? The standard way to think about the morality of abortion is to ask what fetuses are most like—babies, organisms (fairly simple ones at first), or tumors. We know that babies cannot be killed, that simple organisms can be killed for important reasons, and that tumors can be removed and destroyed without regret. So an analogy can help us to answer the fundamental moral question about abortion. (It's not necessarily the end of the matter. Judith Jarvis Thomson, and many others, have argued that you may kill a fetus even if it is like a person, because it is inside another person.) But a fetus isn't something else; it's a fetus. So could you simply consider it and reach moral conclusions? One might reply: "There is no way of reasoning about this entity; there is nothing to say to oneself about its moral status—unless one compares it to another object whose moral status one already knows." But how do we know the moral status of (for example) human beings? Presumably, experience and reason have rightly driven us to the conclusion that human beings have a right to life. Similarly, most of us have decided that insects do not have rights. Couldn't we reach conclusions about the moral status of fetuses without analogizing them to anything else?

      (Some religious readers may say: "Experience and reason are not the basis of our belief in human rights—we get this belief from divine revelation." But there is no explicit divine revelation about fetuses, so the question arises even for religious people: Could we think morally—and perhaps prayerfully—about fetuses, without analogizing them to other things?)

      permanent link | comments (0) | category: philosophy

      April 23, 2003

      deliberation and philosophy

      I have been thinking a little about the contrast between public deliberation and the professional discipline of philosophy. Philosophers like to make and explore novel distinctions. In part, this is because they pursue truth, and an ambiguity or equivocation is an obstacle to truth. Philosophers can do nothing about faulty or inadequate data, but they can show that A is logically different from B, even when it has hitherto been seen as the same.

      A second reason is that philosophers, like academics in general, need to say something new. Only original arguments can be published and otherwise rewarded. Since the most obvious distinctions are well known, philosophers get ahead by finding obscure ones.

      In contrast, citizen deliberators tend to gravitate toward language that is vague enough to suppress distinctions, when possible. This is because there is always some pressure to gain agreement, and distinctions drive groups apart. Citizens may care about truth, but often their top priority is to reach acceptable agreements, and to that end they may be willing to overlook vagueness. There is even an art to devising rhetorical formulas that can accommodate different positions. (Diplomats speak of "creative ambiguity.") Also, unlike philosophers, deliberating citizens don't care much about novelty or originality. Sometimes a new perspective can have a powerful effect in a public conversation, because it can break a deadlock or reinvigorate the participants. But at least as often, novelty per se is an impediment, because people don't have time to absorb a completely new idea. Besides, a novel argument may be associated too closely with its author, so others will not endorse it wholeheartedly.

      Thus it will often be easy for professional philosophers to tear apart a consensus statement issued by a large and diverse group of deliberators. But professional philosophers would not be able to run a democratic community.

      permanent link | comments (0) | category: deliberation , philosophy

      March 6, 2003

      what is moral philosophy?

      My Dante book (in progress) is really an essay on the limitations of moral theory. But what is that? I'm playing with the following definition: Moral theories are collections of descriptive terms, each of which has a known moral valence. For example, "unjust" is a descriptive term with moral significance. We might argue that anything that is "unjust" is wrong—at least all else being equal. In that case, the moral valence of the term "unjust" is negative; calling something "unjust" pushes us toward rejecting that action (or institution, or character).

      Knowing the moral valence of a descriptive term does not always tell us what to do, because a single act can be described in multiple ways. A given action may be "unjust" but also "loving." (For example, a parent might favor her own children over others.) In such cases, the negative moral valence of injustice is countered by the positive moral valence of love, and we have a difficult decision to make. In another kind of situation, an action may be "unjust" but also "necessary"; and if something is necessary, then we may have to ignore moral considerations altogether.

      Few (if any) philosophers have ever believed that moral theories could be sufficient to determine action; we also need judgment to tell us which moral terms to apply in particular cases, and how to balance conflicting terms. Nevertheless, philosophers generally think it is useful to have a moral theory composed of terms with known moral valences.

      A moral theory can simply be a list of such terms (this was W.D. Ross' view); but preferably it is an organized structure. For example, a theorist may argue that some moral terms underlie and explain others, or trump others, or negate others. The more the full list can be organized and/or shortened, the more the theory has achieved.

      permanent link | comments (0) | category: philosophy

      January 13, 2003

      earned and unearned income

      The President's surprise proposal to abolish dividend taxes is big news. There are many ways to evaluate the idea, including consideration of the effects on short- and long-term economic growth, equity, and the federal budget. The Administration also emphasizes that a dividend tax is unfair, since the income that was used to buy the shares was already taxed once. (The response from commentators like Paul Krugman is that there are many double taxes, including all sales taxes.) Another consideration seems to be overlooked. I believe we should retain a distinction between earned and unearned income, and we should be less hesitant to tax the latter. There is nothing wrong with investment income. But work—purposeful human effort—is much more closely linked to human dignity and value. As Pope John Paul II wrote in Laborem Exercens (1981), "Work is a good thing for man—a good thing for his humanity—because through work man not only transforms nature, adapting it to his own needs, but he also achieves fulfillment as a human being and indeed in a sense becomes 'more a human being.'"

      Robert Nozick, the great libertarian philosopher, denied that there was any moral difference between work and other activities (such as investing) that produce value. He was reacting to the old leftist idea that labor alone creates value, and therefore laborers deserve the full price of their products. On that view, it is a scandal of capitalism that some of the reward goes instead to capitalists, who do not work. In the words of Ralph Chaplin's old Wobbly anthem, "Solidarity Forever" (1915):

      It is we who plowed the prairies; built the cities where they trade;
      Dug the mines and built the workshops, endless miles of railroad laid
      * * *
      All the world that's owned by idle drones is ours and ours alone.
      We have laid the wide foundations; built it skyward stone by stone.
      It is ours, not to slave in, but to master and to own.

      This song implies that we should recoup the money that capitalists have taken from the workers who really made everything. Dividend taxes would then be a good idea, and the higher the better. Unfortunately, the song is pretty clearly wrong: investors, managers, and inventors create value, just as workers do. However, we can still understand labor as morally different from other economic activities. Compare two people, one of whom makes a living by digging ditches, while the other profits from inherited stocks even though she is comatose after an accident. The first labors; the second does not. An intermediate case is someone who actively invests, mixing knowledge, intellectual labor, and accumulated capital to generate wealth. I think that the work aspect of wealth-creation is virtuous, onerous, and not sufficiently rewarded by the market. This is an argument for policies (such as dividend taxes) that favor work.

      (All this is "auto-plagiarized" from a law review article I wrote some time ago.)

      My wife and I went to an event at my 3-year-old's nursery school today. She saw me still in my pajamas just before she left for the day, and said "Daddy, you will get dressed before you come to my school, won't you?" This is the beginning of at least 18 years of her worrying about whether I am about to embarrass her in public.

      permanent link | comments (0) | category: philosophy
