May 6, 2011
how to save the Enlightenment Ideal
If there is such a thing as the "Enlightenment Ideal," it says that individuals should hold general, publicly articulable, and correct moral principles that, in turn, guide all their opinions, statements, and actions. That is a view that--with some variations--Kant, Madison, J.S. Mill, and many others of their era explicitly defended. None of those writers was naive about the impact of "prejudice [and] voluntary ignorance" (Mill), "accident and force" (Madison), or "laziness and cowardice" (Kant) on actual people's thought and behavior, but they presumed that ideals could have causal power, shaping actions. Reasons were supposed to be motives.
That assumption has seemed to recede into implausibility as evidence has accumulated about the scant impact of reasons or values on actions. It seems that people cannot articulate consistent moral reasons for their opinions. We choose our moral principles mainly to rationalize our decisions after we have made them.*
Scholars who reflect on this evidence seem either to dismiss the relevance of morality entirely or to defend a different model of the moral self. This alternative model presumes that our intuitive, non-articulable, not-fully-conscious, private reactions to situations can be valid, can affect our behavior, and can be improved by appropriate upbringings and institutions. The new model retains some Enlightenment optimism about the importance of morality and education, but at the cost of treating moral judgment as intuitive and non-discursive.
I would propose that we misinterpret the empirical findings and miss their normative implications if we rely on a dichotomy of conscious, logical, articulable reasons versus unconscious, emotional, private intuitions. There is more than one kind of valid, publicly articulable reason.
The Enlightenment thinkers cited above and their skeptical critics seem to share the view that a good moral reason must be highly general and abstract. They have in mind a kind of flow chart in which each of one's concrete choices, preferences, and actions should be implied by a more general principle, which should (in turn) flow from an even more general one, until we reach some kind of foundation. This is not only how Kant thinks about the Categorical Imperative and its implications, but also how J.S. Mill envisions the "fundamental principle of morality" (utilitarianism) and the "subordinate principles" that we need to "apply it." Consistency and completeness are hallmarks of a good overall moral structure.
But many people actually think in highly articulate, public, reflective ways about matters other than general principles and their implications. They think, argue, and publicly defend views about particular people, communities, situations, and places. They do not merely have intuitions about concrete things; they form reasonable moral opinions of them. But their opinions are not arranged in a hierarchical structure with general principles implying concrete results. Sometimes one concrete opinion implies another. Or a concrete opinion implies a general rule. That may not be post hoc rationalization but an example of learning from experience.
Moral thinking must be a network of implications that link various principles, judgments, commitments, and interests. We are responsible for forming moral networks out of good elements and for developing coherent (rather than scattered and miscellaneous) networks. But there is no reason to assume that the network should look like an organizational flowchart, with every concrete judgment able to report via a chain of command to more general principles.
I plan to support this argument by comparing two clear and reasonable moral thinkers, John Rawls and Robert Lowell. Both were lapsed Protestants who were educated in New England prep schools, were drafted during World War II, and taught at Harvard, and they shared many political views. In his writing, Rawls both endorsed and employed highly abstract moral principles; Lowell was equally precise and rigorous, but his moral thinking was a tight network of associations among concrete characters, events, and situations.
*One summary of the evidence, with an emphasis on sociology, is Stephen Vaisey, "Motivation and Justification: A Dual-Process Model of Culture in Action," American Journal of Sociology, vol. 114, no. 6 (May 2009), pp. 1675-1715.
permanent link | comments (0) | category: philosophy
April 29, 2011
the character of poets and of people generally
In Coming of Age as a Poet (Harvard, 2003), Helen Vendler interprets the earliest mature verse of four major poets: Milton, Keats, Eliot, and Plath. She argues that great poets reach maturity when they develop consistent diction and formal styles; favored physical and historical milieux; major symbolic referents; characters or types of characters whom they include in their verse; and some sort of (at least implicit) cosmology. They often retain these combinations to the ends of their careers.
Robert Lowell provides an example (mine, not Vendler's). From the 1940s until his death, his characteristic milieu is New England--specifically the coastal region from Boston to Nantucket--over the centuries from the Puritan settlement to the present. His diction mimics the diverse voices of that region's history, from Jonathan Edwards to Irish Catholics, but he brings them into harmony through his own regular rhymes and rhythms. His major symbolic referents include gardens, graveyards, wars of aggression, the Book of Revelation, and the cruel ocean. He avoids presenting a literal cosmology, but he describes several worldviews in conflict. Sometimes, the physical and human worlds are cursed or damned and we are estranged from an angry, masculine God. Other times, the world is a garden: organic, fecund, and pervasively feminine. (See my reading of "At the Indian Killer's Grave" for detail.)
A combination of diction, favored characters, milieux, subjects of interest, value-judgments, and a cosmology could be called a "personality." I don't mean that it necessarily results from something internal to the author (a self, soul, or nature-plus-nurture). Personality could be a function of the author's immediate setting. For instance, if Robert Lowell had been forcibly moved from Massachusetts to Mumbai, his verse would have changed. Then again, we often choose our settings or choose not to change them.
A personality is not the same thing as a moral character. We say that people are good or virtuous if they do or say the right things. Their diction and favorite characters seem morally irrelevant. For example, regardless of who was a better poet, Lowell was a better man (in his writing) than T.S. Eliot was, because Eliot's verse propounded anti-Semitism and other forms of prejudice, whereas Lowell's is full of sympathy and love.
So we might say that moral character is a matter of holding the right general principles and then acting (which includes speaking and writing) consistently with those principles. Lowell's abstract, general values included pacifism, anti-racism, and some form of Catholic faith. Eliot's principles included reactionary Anglicanism and anti-Semitism--as well as more defensible views. The ethical question is: Whose abstract principles were right? That matter can be separated from the issue of aesthetic merit.
I resist this way of thinking about virtue because I believe it is a prejudice to presume that abstract and general ideas are foundational and that all concrete opinions, interests, and behaviors should follow from them. One kind of mind does treat general principles as primary and puts a heavy emphasis on being able to derive particular judgments from them. Consistency is a central concern (I am tempted to write, a hobgoblin) for this kind of mind. But others do not organize their thoughts that way, and I would defend their refusal to do so. Moral thinking is instead a network of implications that link various principles, judgments, commitments, and interests. There is no reason to assume that the network must look like an organizational flowchart, with every concrete judgment able to report via a chain of command to more general principles. The hierarchy can be flatter.
To return to Lowell, one way of interpreting his personality would be to try to force it into a structure that flows from the most abstract to the most concrete. Perhaps he believed that there is an omnipotent and good deity who founded the Catholic church when He gave the keys of heaven to Peter. Peter's successors have rightly propounded doctrines of grace and nature that are anathema to Puritans. Puritans massacred medieval Catholics and Native Americans who loved nature and peace. Therefore, Lowell despises Puritans and admires both medieval Catholics and Wampanoags. In his diction, he mocks Puritans and waxes mournful over their victims. His poetic style follows, via a long chain of entailments, from his metaphysics.
But I think not. It is not clear to me that Lowell, despite his conversion to Catholicism, even believed in a literal deity. (Letter to Elizabeth Hardwick, April 7, 1959: "I feel very Montaigne-like about faith now. It's true as a possible vision such as War and Peace or Saint Antony--no more though.") The point is, literal monotheism did not have to be the basis or ground of all his other opinions, such as his love for and interest in Saint Bernard or his deep ambivalence toward Jonathan Edwards. Those opinions could come first and could reasonably persuade him to join the Catholic Church. By mimicking the diction of specific Puritans in poems like "Mr. Edwards and the Spider," Lowell could form and refine opinions of Puritanism that would then imply attitudes toward other issues, from industrial development to monasticism.
Poets are evidently unusual people, more self-conscious and aesthetically oriented than most of their peers, and more concerned with language and concrete details than some of us are. As a "sample" of human beings, poets would be biased.
But they are a useful sample because they leave evidence of their mental wrestling. Poetry is a relatively free medium; the author is not constrained by historical records, empirical data, or legal frameworks. Poets say what they want to say (although it need not be what they sincerely believe), and they say it with precision.
I think the testimony of poets at least suffices to show that some admirable people begin with concrete admirations and aversions, forms of speech, milieux and referents, and rely much less on abstract generalizations to reach their moral conclusions. Their personalities and their moral characters are one.
permanent link | comments (0) | category: philosophy
April 4, 2011
why political recommendations often disappoint: an argument for reflexive social science
In an essay entitled "Why Last Chapters Disappoint," David Greenberg lists American books about politics and culture that are famous for their provocative diagnoses of serious problems but that conclude with strangely weak recommendations. These include, in his opinion, Upton Sinclair's The Jungle (1906), Walter Lippmann's Public Opinion (1922), Daniel Boorstin's The Image (1961), Allan Bloom's The Closing of the American Mind (1987), Robert Shiller's Irrational Exuberance (2000), Eric Schlosser's Fast Food Nation (2001), and Al Gore's The Assault on Reason (2007). Greenberg asserts that practically every book in this list, "no matter how shrewd or rich its survey of the question at hand, finishes with an obligatory prescription that is utopian, banal, unhelpful or out of tune with the rest of the book." The partial exceptions are works like Schlosser's Fast Food Nation that provide fully satisfactory legislative agendas while acknowledging that the most important reforms have no chance of passing in Congress.
The gap between diagnosis and prescription is no accident. Many serious social problems could be solved if everyone chose to behave better: eating less fast food, investing more wisely, using less carbon, or studying the classics. But the readers of a given treatise are too few to make a difference, and even before they begin to read they are better motivated than the rest of the population. Therefore, books that conclude with personal exhortations seem inadequate.
Likewise, some serious social problems could be ameliorated by better legislation. But the readers of any given book are too few to apply sufficient political pressure to obtain the necessary laws. Therefore, books that end with legislative agendas disappoint just as badly.
The failure of books to change the world is not a problem that any single book can solve. But it is a problem that can be addressed, just as we address complex challenges of description, analysis, diagnosis, and interpretation that arise in the social sciences and humanities. Every work of empirical scholarship should contribute to a cumulative research enterprise and a robust debate. Every worthy political book should also contribute to our understanding of how ideas influence the world. That means asking questions such as: "Who will read this book, and what can they do?"
Who reads a book depends, in part, on the structure of the news media and the degree to which the public is already interested in the book’s topic. What readers can do depends, in part, on which organizations and networks are available for them to join and how responsive other institutions are to their groups.
These matters change over time. Consider, for example, a book that did affect democracy, John W. Gardner's In Common Cause: Citizen Action and How It Works (1972). After diagnosing America's social problems as the result of corrupt and undemocratic political processes and proposing a series of reforms, such as open-government laws and public financing for campaigns, Gardner encouraged his readers to join the organization Common Cause. He had founded this organization two years earlier by taking out advertisements in leading national newspapers, promising to build "a true citizens' lobby—a lobby concerned not with the advancement of special interests but with the well-being of the nation. … We want public officials to have literally millions of American citizens looking over their shoulders at every move they make." More than 100,000 readers quickly responded by joining Gardner's organization and sending money. Common Cause was soon involved in passing the Twenty-Sixth Amendment (which lowered the voting age to 18), the Federal Election Campaign Act, the Freedom of Information Act, and the Ethics in Government Act of 1978. The book In Common Cause was an early part of the organization's successful outreach efforts.
It helped that Gardner was personally famous and respected before he founded Common Cause. It also helped that a series of election-related scandals, culminating with Watergate, dominated the news between 1972 and 1976, making procedural reforms a high public priority. As a book, In Common Cause was well written, fact-based, and clear about which laws were needed.
But the broader context also helped. Watergate dominated the news because the news business was still monopolized by relatively few television networks, agenda-setting newspapers, and wire services whose professional reporters believed that a campaign-finance story involving the president was important. Everyone who followed the news at all had to follow the Watergate story, regardless of their ideological or partisan backgrounds. In contrast, in 2010, some Americans were appalled by the false but prevalent charge that President Obama's visit to Indonesia was costing taxpayers $200 million per day. Many other Americans had no idea that this accusation had even been made, so fractured was the news market.
John Gardner was able to reach a generation of joiners who were setting records for organizational membership.* Newspaper reading and joining groups were strongly correlated, and presumably people who read the news and joined groups also displayed relatively deep concern about public issues. Thus it was not surprising that more than 100,000 people responded to Gardner's newspaper advertisements about national political reform by joining his new group. By the 2000s, the rate of newspaper reading had dropped by half, and the rate of group membership was also down significantly. The original membership of Common Cause aged and was never replaced in similar numbers after the 1970s. John Gardner's strategy fit his time but did not outlive him.
Any analysis of social issues should take account of contextual changes like these. Considering how one’s thought relates to the world means making one's scholarship "reflexive," in the particular sense advocated by the Danish political theorist Bent Flyvbjerg. He notes that modern writers frequently distinguish between rationality and power. "The [modern scholarly] ideal prescribes that first we must know about a problem, then we can decide about it. … Power is brought to bear on the problem only after we have made ourselves knowledgeable about it."** With this ideal in mind, authors write many chapters about social problems, followed by unsatisfactory codas about what should be done. As documents, their books evidently lack the capacity to improve the world. Their rationality is disconnected from power. And, in my experience, the more critical and radical the author is, the more disempowered he or she feels.
Truly "reflexive" writing and politics recognizes that even the facts used in the empirical or descriptive sections of any scholarly work come from institutions that have been shaped by power. For example, in my own writing, I frequently cite historical data about voting and volunteering in the United States. The federal government tracks both variables by fielding the Census Current Population Surveys and funding the American National Election Studies. Various influential individuals and groups have persuaded the government to measure these variables, for the same (somewhat diverse) reasons that they have pressed for changes in voting rules and investments in volunteer service. On the other hand, there are no reliable historical data on the prevalence of public engagement by government agencies. One cannot track the rate at which the police have consulted residents about crime-fighting strategies or the importance of parental voice in schools. That is because no influential groups and networks have successfully advocated for these variables to be measured. Thus the empirical basis of my work is affected by the main problem that I identify in my work: the lack of support for public engagement.
Reflexive scholarship also acknowledges that values motivate all empirical research. Our values--our beliefs about goals and principles--should be influenced and constrained by what we think can work in the world: "ought implies can." Wise advice comes not from philosophical principles alone, but also from reflection on salient trends in society and successful experiments in the real world. An experiment can be a strong argument for doing more of the same: sometimes, "can implies ought." If there were no recent successful experiments in civic engagement, my democratic values would be more modest and pessimistic. If recent experiments were more robust and radical than they are, I might adopt more ambitious positions. In short, my values rest on other people’s practical work, even as my goal is to support their work.
Finally, reflexive scholarship should address the question of what readers ought to do. A book is fully satisfactory only if it helps to persuade readers to do what it recommends and if their efforts actually improve the world. In that sense, the book offers a hypothesis that can be proved or disproved by its consequences. No author will be able to foresee clearly what readers will do, because they will contribute their own intelligence, and the situation will change. Nevertheless, the book and its readers can contribute to a cumulative intellectual enterprise that others will then take up and improve.
*In 1974, 80 percent of the "Greatest Generation" (people who had been born between 1925 and 1944) said that they were members of at least one club or organization. Among Baby Boomers at the same time, the rate of group membership was 66.8 percent. The Greatest Generation continued to belong at similar rates into the 1990s. The Boomers never caught up with them, their best year being 1994, when three-quarters reported belonging to some kind of group. In 1974, 6.3 percent of the Greatest Generation said they were in political clubs. The Boomers have never reached that level: their highest rate of belonging to political clubs was 4.9 percent in 1989. (General Social Survey data analyzed by me.)
**Bent Flyvbjerg, Making Social Science Matter (Cambridge: Cambridge University Press, 2001), p. 143.
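The cohort comparison in the first footnote can be reproduced from a GSS extract in a few lines of pandas. This is only an illustrative sketch: the column names (`cohort` for birth year, `memnum` for number of group memberships, `mempolit` for political-club membership) follow common GSS variable conventions but should be checked against the actual codebook, and the tiny data frame below is a made-up stand-in for a real survey file.

```python
import pandas as pd

# Toy stand-in for a GSS extract; a real analysis would load the survey file
# and apply survey weights. Assumed columns (check the GSS codebook):
# 'cohort' (birth year), 'memnum' (number of group memberships),
# 'mempolit' (1 = member of a political club).
df = pd.DataFrame({
    "year":     [1974] * 6,
    "cohort":   [1930, 1935, 1940, 1950, 1955, 1960],
    "memnum":   [2, 1, 0, 1, 0, 0],
    "mempolit": [1, 0, 0, 0, 0, 1],
})

def cohort_rates(frame, lo, hi):
    """Membership rates among respondents born between lo and hi, inclusive."""
    g = frame[frame["cohort"].between(lo, hi)]
    return {
        "any_group": (g["memnum"] >= 1).mean(),   # share belonging to >= 1 group
        "political": (g["mempolit"] == 1).mean(), # share in a political club
    }

greatest = cohort_rates(df, 1925, 1944)  # "Greatest Generation" birth years
boomers  = cohort_rates(df, 1946, 1964)  # Baby Boomer birth years
print(greatest, boomers)
```

The same two calls, run against the full survey with the cohort boundaries above, would yield the generational rates reported in the footnote.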
permanent link | comments (0) | category: philosophy
February 14, 2011
a real alternative to ideal theory in political philosophy
In philosophy, "ideal theory" means arguments about what a truly just society would be like. Sometimes, proponents of ideal theory assert that it is useful for guiding our actual political decisions, which should steer toward the ideal state. John Rawls revived ideal theory with his monumental A Theory of Justice (1971). His position was egalitarian and liberal, but Robert Nozick joined the fray with his libertarian Anarchy, State, and Utopia (1974), and a huge literature followed.
Recently, various authors have been publishing critiques of ideal theory. I am, for example, reading Raymond Geuss' Philosophy and Real Politics (2008) right now. One of the most prominent critiques is by Amartya Sen in The Idea of Justice (2009). Sen argues that there is no way to settle reasonable disagreements about the ideal state. Knowing what is ideal is not necessary to make wise and ethical decisions. Even an ideally designed set of public institutions would not guarantee justice, because people must be given discretion to make private decisions, but those decisions can be deeply unjust. Finally, there is an alternative to the tradition of developing ideal social contracts, as Plato, More, Locke, Rousseau, Rawls, Nozick, and many others did. The alternative is to compare on moral grounds actually existing societies or realizable reforms, in order to recommend improvements, a strategy epitomized by Aristotle, Adam Smith, Benjamin Constant, Tocqueville, and Sen (among many others).
I am for this but would push the critique further than Sen does. The non-ideal political theories that he admires are still addressed to some kind of sovereign: a potential author of laws and policies in the real world, a "decider" (as George W. Bush used to call himself). Sen, for example, in his various works, addresses two kinds of audiences: the general public, understood as sovereign because we can vote, and various specific authorities, such as the managers of the World Bank. In his work aimed at general readers, he envisions a "global dialogue," rich with "active public agitation, news commentary, and open discussion," to which he contributes guiding principles and methods. In turn, that global dialogue will influence the actual decision-makers, whether they are voters and consumers in various countries or powerful leaders.
Unfortunately, no reader is really in the position of a sovereign. You and I can vote, but not for elaborate social strategies. We vote for names on a ballot, while hundreds of millions of other people also vote with different goals in mind. If I prefer the social welfare system of Canada to the US system, I cannot vote to switch. Nor can I persuade millions of Americans to share my preference, because I don't have the platform to reach them. Even legislators are not sovereigns, because there are many of them, and the legislature shares power with other branches and levels of government and with private institutions.
Thus "What is to be done?" is not a question that will yield practical guidance for individuals. It is a more relevant question for Sen than for me, because he has spent a long life in remarkably close interaction with famous and distinguished leaders from Bengal to California. (The "acknowledgments" section of The Idea of Justice is the longest I have ever seen and reads like a Who's Who of public intellectuals.) But if Sen's full "theory of change" is to become internationally famous and then advise leaders, it is a strategy that can work for only a very few.
What then should we do (I who write these words and you who read them, along with anyone whom we can enlist for our causes)? That seems to be the pressing question, but not if the answer stops with changes in our personal behavior and immediate circumstances. National and global needs are too important for us only to "be the change" that we want in the world. We must also change the world. Our own actions (yours and mine) must be plausibly connected to grand changes in society and policy. Thinking about what we should do raises an entirely different set of questions, dilemmas, models, opportunities, and case studies than are familiar in modern philosophy.
permanent link | comments (0) | category: philosophy
January 21, 2011
artistic excellence as a function of historical time
The New York Times music critic Anthony Tommasini has compiled his top ten list of all-time greatest classical composers. As explanations for his choices, he offers judgments about the intrinsic excellence of these composers along with comments about their roles in the development of music over time.
These temporal or historical reasons prove important to Tommasini's overall judgments. For example, Beethoven's Fourth Piano Concerto, when played between works composed in the 20th century, "sound[s] like the most radical work in the program by far." Schubert's "Ninth paves the way for Bruckner and prefigures Mahler." Brahms, unfortunately, "sometimes become[s] entangled in an attempt to extend the Classical heritage while simultaneously taking progressive strides into new territory." Bach "was considered old-fashioned in his day. ... [He] was surely aware of the new trends. Yet he reacted by digging deeper into his way of doing things." Haydn would make the Top Ten list except that his "great legacy was carried out by his friend Mozart, his student Beethoven and the entire Classical movement."
It seems that originality counts: it's best to be ahead of one's time. On the other hand, if, like Haydn, you launch something that others soon take higher, you are not as great as those who follow you. Bach is the greatest of all because instead of moving forward, he "dug deeper." So originality is not the definition of greatness--it is an example of a temporal consideration that affects our aesthetic judgments.
One might think that these reasons are mistaken: timing is irrelevant to intrinsic excellence or "greatness." It doesn't matter when you make a work of art; what matters is how good it is. But I'm on Tommasini's side and would, like him, make aesthetic judgments influenced by when works were composed. Why?
For one thing, an important aspect of art (in general) is problem-solving. One achievement that gives aesthetic satisfaction is the solution of a difficult problem, whether it is representing a horse in motion or keeping the kyrie section of a mass going for ten minutes without boring repetition. The problems that artists face derive from the past. Once they solve the problems of their time, repeating their success is no longer problem-solving. To be sure, one only appreciates art as problem-solving if one knows something about the history of the medium. That is why art history and music history enhance appreciation, although that is not their only purpose.
Besides, in certain artistic traditions, the artist is self-consciously part of the story of the art form. Success means taking the medium in a productive new direction. This is how traditions such as classical music, Old Master Painting, Hollywood movies, and hip-hop have developed. It is not the theory of all art forms in all cultures. Sometimes, ancient, foundational works are seen as perfect exemplars; a new work is excellent to the extent that it resembles those original models.
The Quarrel of the Ancients and the Moderns was a debate about whether the European arts and sciences should be progressive traditions or should aim to replicate the greatness of their original Greco-Roman models. The Moderns ultimately won that debate, not only promoting innovation in their own time but also reinterpreting the past as a series of original achievements that we should value as contributions to the unfolding story of art. Since we are all Moderns now, we all think in roughly the way that Tommasini does, admiring Beethoven because his contemporaries thought his late works were incomprehensible.
Meanwhile, classical music and Old Master painting have become completed cultures for many people. Their excellence is established and belongs to the past. Beethoven was great because he was ahead of his time, but now the story to which he contributed is over. The Top Ten lists of classical music are closed. I am not sure this is true, but it seems a prevalent assumption. Maybe we are all Ancients now.
permanent link | comments (0) | category: fine arts , philosophy
January 10, 2011
upside-down Foucault
Hypothesis: every space where Michel Foucault discovered the operation of power is also a venue for creativity, collaboration, and a deepening of human subjectivity.
By way of background: I respect Foucault as one of the greatest thinkers of the 20th century. Although deeply influenced by other writers and activists, he made his own crucial discoveries. In particular, he found power operating in places where it had been largely overlooked, such as clinics, classrooms, and projects of social science. Further, he understood that power is not just a matter of A deliberately making B do what A wants. It rather shapes all of our desires, goals, and beliefs. Its influence on beliefs suggests that knowledge and power are inseparable, so that even our understanding of power is determined by power. Despite the skeptical implications of Foucault's epistemology, he struggled in an exemplary fashion to get the theory right, revising it constantly. He traveled a long intellectual road, directed by his own conscience and experience rather than any kind of careerism.
So it is as a kind of homage to Foucault that I suggest flipping his theory upside-down. Just as close, critical observation of people in routine settings can reveal the operations of power, so we can detect people developing, growing, reflecting, and collaborating voluntarily. To be sure, social contexts fall on a spectrum from dehumanizing to humanizing, with prisons at one end (not far from office cubicles), and artists' ateliers at the other. But it would be just as wrong to interpret a whole society as a prison as to view it all as a jazz band. And, I would hypothesize, even in the modern US prison system--swollen in numbers, starved of resources for education and culture, plagued by rape and abuse, and racially biased--one could find evidence of creativity as well as power.
permanent link | comments (0) | category: philosophy
December 10, 2010
the philosophical foundations of civic education
Ann Higgins-D’Alessandro and I have published an article under this title in Philosophy & Public Policy Quarterly. It is actually a version (with due permission) of a chapter we published in The Handbook of Research on Civic Engagement in Youth, edited by Lonnie Sherrod, Judith Torney-Purta, and Constance A. Flanagan (John Wiley & Sons, 2010). Here it is online.
We note that educating young people for citizenship is an intrinsically moral task. Even among reasonable people, moral views about citizenship, youth, and education differ. We describe conflicting utilitarian, liberal, communitarian, and civic republican conceptions and cite evaluations of actual civic education programs that seem to reflect those values. We conclude:
With a few exceptions, such as Facing History and Just Communities, one cannot find much explicit moral argumentation in either the justifications or the evaluations of civic programs. Disclosing one’s own ethical judgments as facts about oneself is relatively straightforward. Defending them is harder, especially if one does not resort automatically to utilitarianism. Moral argumentation requires a shift out of a positivist framework, as one gives non-empirical reasons—reasons that go beyond observable facts— for one’s positions. Moral philosophy and normative social theory—as we have argued—provide rich resources for arguments about the values that society should hold and that it ought to try to transmit through civic education to future generations.
Alas, references to influential and relevant schools of philosophy, such as the capabilities approach of Sen and Nussbaum, are entirely missing in the empirical literature on youth civic engagement. The problem, however, goes both ways. Recent academic philosophy in all of its schools has not benefited enough from reflecting on innovative youth programs, a method that Plato, Erasmus, Rousseau, Dewey, and others found generative in earlier times.
permanent link | comments (0) | category: advocating civic education , philosophy
November 4, 2010
against a cerebral view of citizenship
For a faculty seminar tomorrow, a group of us are reading Aristotle's Politics, Book III, which is a classic and very enlightening discussion of citizenship. Aristotle holds that the city is composed of citizens: they are it. Citizenship is not defined as residence in a place, nor does it mean the same thing in all political systems. Rather, it is an office, a set of rights and responsibilities. Who has what kind of citizenship defines the constitution of the city.
According to Aristotle, the core office or function of a citizen is "deliberating and rendering decisions, whether on all matters or a few."* In a tyranny, the tyrant is the only one who judges. In such cases, the definition of a good man equals that of a good citizen, because the tyrant's citizenship consists of his ruling, and his ruling is good if he is good. Practical wisdom is the virtue we need in him, and it is the same kind of virtue that we need in dominant leaders of other entities, such as choruses and cavalry units. Aristotle seems unsure whether a good tyrant must first learn to be ruled, just as a competent cavalry officer first serves under another officer, or whether one can be born a leader.
In democracies, a large number of people deliberate and judge, but they do so periodically. Because they both rule and obey the rules, they must know how to do both. Rich men can make good citizens, because in regular life (outside of politics) they both rule and obey rules. But rich men do not need to know how to do servile or mechanical labor. They must know how to order other people to do those tasks. Workers who perform manual labor do not learn to rule; they have no opportunities to develop practical wisdom and instead become servile as a result of their work. Thus, says Aristotle, the best form of city does not allow its mechanics to be citizens.
Note the philosopher's strongly cognitive or cerebral definition: citizenship is about deliberating and judging. Citizenship is not about implementing or doing, although free citizens both deliberate and implement decisions.
But what if we started a different way, and said that "the city" (which is now likely a nation-state) is actually composed of its people as workers? It is what they do, make, and exchange. In creating and exchanging things, they make myriad decisions, both individually and collectively. Some have more scope for choice than others, but average workers make consequential decisions frequently.
If the city is a composite of people as workers, then everyone is a citizen, except perhaps those who are idle. It does not follow logically that all citizens must be able to deliberate and vote on governmental policies. Aristotle had defined citizens as legal decision-makers (jurors and legislators); I am resisting that assumption. Nevertheless, being a worker now seems to be an asset for citizens, not a liability. Only the idle do not learn both to rule and to be ruled.
Aristotle's definition of citizenship has been enormously influential, but it has often been criticized: by egalitarians who resist his exclusion of manual workers and slaves; by Marxists and others who argue that workers create wealth and should control it; and by opponents of his cerebral bias, like John Dewey. The critique that interests me most is the one that begins by noting the rich, creative, intellectually demanding aspects of work. That implies that working, rather than talking and thinking, may be the essence of citizenship. I draw on Simone Weil, Harry Boyte, and others for that view.
*Politics 1275b16, my translation.
permanent link | comments (0) | category: philosophy , populism
July 19, 2010
the visionary fire of Roberto Mangabeira Unger
We are deep into our annual Summer Institute of Civic Studies, with as many as six and a half hours of class and many hundreds of pages of reading each day. The most blogging I can manage will be less-than-daily notes about the texts we discuss. Today, one important text is Roberto Mangabeira Unger's False Necessity: Anti-Necessitarian Social Theory in the Service of Radical Democracy. (Unger is a Harvard Law Professor and cabinet member in his home country of Brazil.)
Unger takes "to its ultimate conclusion" the thesis "that society is an artifact" (p. 2). All our institutions, mores, habits, and incentives are things that we imagine and make. We can change each of these things, "if not all at once, then piece by piece" (p. 4). When we observe that people are poisoning their environment or slaughtering each other--or are suffering from a loss of community and freedom--we should view the situation as our work and strive to change it. He "carries to extremes the idea that everything in society is politics, mere politics"--in the sense of collective action and creation (p. 1).
Unger is a radical leftist but a strong critic of Marxism. He views Marxism as one example of "deep-structure" theory. Any deep-structure theory identifies some "basic framework, structure, or context" beneath all our routine debates and conflicts. It treats each framework as "an indivisible and repeatable type of social organization." And then it explains changes from one framework to another in terms of "lawlike tendencies or deep-seated economic, organizational, and psychological constraints" (pp. 14-15). So--according to Marxists--all the politics that we observe today is a function of "capitalism"; capitalism is a unitary thing that can repeat or end; and the only way forward is from capitalism to a different deep structure, namely socialism.
Unger argues that this theory fails to acknowledge the virtually infinite forms of social organization that we can make (including, for instance, many definitions of private property and many combinations of property with other laws and institutions). It suggests that nothing can be done to alter the arc of history except to start a revolution that changes the unitary underlying structure of the present society. But that solution is generally (perhaps always) impractical, so the leftist thinker or leader is reduced to denouncing capitalist inequality. "Preoccupied with the hierarchy-producing effects of inherited institutional arrangements, the leftist reaches for distant and vague solutions that cannot withstand the urgent pressures of statecraft and quickly give way to approaches betraying its initial aims" (p. 20).
Instead, writes Unger, the leftist should be constantly "inventing ever more ingenious institutional instruments." The clearest failure of actual Marxism was its refusal to experiment, which was legitimized by its deep-structure theory. (Once capitalism was banished, everything was supposed to be fixed). "The radical left has generally found in the assumptions of deep-structure social analysis an excuse for the poverty of its institutional ideas. With a few exceptions ... it has produced only one innovative institutional conception, the idea of a soviet or conciliar type of organization" (p. 24). In theory, a "soviet" was a system of direct democracy in each workplace or small geographical location. But, Unger writes, that was an unworkable and generally poor idea.
In contrast, Unger is a veritable volcano of innovative institutional conceptions. He wants a new branch of government devoted to constant reform that is empowered to seize other institutions but only for a short time; mandatory voting; automatic unionization combined with complete independence of unions from the state; neighborhood associations independent from local governments; a right to exit from public law completely and instead form private associations with rules that protect rights; a wealth tax; competitive social funds that allocate endowments originally funded by the state; and new baskets of property rights.
None of these proposals is presented as a solution. Together they are ways of creating "a framework that is permanently more hospitable to the reconstructive freedom of the people who work within its limits" (p. 34). The task is to "combine realism, practicality, and detail with visionary fire" (p. 14).
On deck: Madison, Hayek, and Burke--all defenders of tradition and enemies of the Ungerian project.
permanent link | comments (0) | category: philosophy
July 9, 2010
on hope as an intellectual virtue
My favorite empirical research programs try to help something good work in the world. For instance, scholars who study Positive Youth Development assess initiatives that give young people opportunities to contribute to their communities. Scholars of Common Pool Resources study how communities manage common property, such as fisheries and forests. Scholars of Deliberative Democracy investigate the impacts on citizens, communities, and policies when people talk in structured settings.
These are empirical research programs, committed to facts and truth. They do not seek to celebrate, but to critically evaluate, their research subjects. However, an obvious goal is to make the practical work succeed by identifying and demonstrating positive impacts and by helping to sort out the effective strategies from the ineffective ones. Underlying these intellectual efforts is some kind of hope that the practical programs, when done well, succeed.
As a philosopher, I am especially interested in that hope and why scholars have it. I like to ask what motivates these research projects. The motives are largely hidden, because positivist social science cannot handle value-commitments on the part of researchers; it treats them as biases to be minimized and disclosed only if they prove impossible to eliminate. Often the search for motives is critical and suspicious: one tries to show that a given research project is biased by some value-judgment, cultural assumption, or self-interest on the scholars' part. But I look for motives in an appreciative spirit, believing that an empirical research program in the social sciences can only be as good as its core values.
Note that it is not at all obvious why we should hope that Positive Youth Development, Common Property Resource Management, and Deliberative Democracy work. These are expensive and tricky strategies. For instance, the core empirical hypothesis of Positive Youth Development is that you will get better outcomes for youth if you help them contribute than if you use surveillance and remediation. But it would be cheaper and more reliable if we could cut crime with metal detectors in every school instead of elaborate service-learning programs. So why should we hope that Positive Youth Development is right?
Likewise, it would be easier to turn all resources into private or state property than to encourage communities to manage resources as common property. And it would be easier for professionals to make city plans and budgets than to turn those decisions over to citizens. So why do scholars evidently hope that good common property regimes produce more sustainable and efficient economic outcomes than expert management, and that deliberations generate more legitimate and fair policies than governments do?
I think part of the reason is simply that things are not going very well in the world, and scholars seek alternatives that may be uncontroversially better: more efficient or sustainable, less corrupt and wasteful. That's part of the reason, but it doesn't fully explain the focus of these research projects. If you're worried about violence in American high schools, you should look for something new that works. But why should that new approach include service and leadership programs, instead of better metal detectors and video cameras?
Ultimately, all three of my examples are anchored in commitments that I would describe as "Kantian." The individual is a sovereign moral agent and our responsibility to others is always to help develop their capacities for autonomy and voluntary cooperation. Real Kantianism is dismissive of utilitarian outcomes (such as efficient public services) and is willing to defend autonomy even if the consequences for health and welfare turn out to be bad. But real Kantianism just doesn't fly. It doesn't influence power and it doesn't satisfy most people's intuitions. So I think the research projects I have mentioned here are motivated by a kind of soft or strategic Kantianism. The best initiatives, on this view, are the ones that achieve efficient and reliable improvements in tangible human welfare by enhancing people's autonomy. Strategies like Positive Youth Development and common property regimes stand out as worthy of study because of their Kantian values. But they deserve critical scrutiny on utilitarian grounds. If they fail to deliver the promised practical outcomes, they should be improved before they are abandoned. The same attention should not be given to surveillance systems or top-down managerial structures. In theory, those solutions might work just as well, but helping them to succeed would not enhance autonomy.
I realize that it is a risky strategy in our culture for scholars to admit their core moral commitments. The smartest move is to pretend that a research program is simply scientific and all the outcomes of interest are utilitarian. But those assumptions have the disadvantage of being wrong. They distort research in various subtle but damaging ways. Even though it is idealistic, I think we should take on positivism directly and not accept the presumption that values are simply biases.
permanent link | comments (0) | category: philosophy
July 6, 2010
moral thinking is a network, not a foundation with a superstructure
When we talk together about public concerns, a whole range of phrases and concepts is likely to emerge. Imagine, for example, that the topic is a local public school: how it is doing and what should change. In talking about their own school, parents and educators may use abstract moral concepts, like fairness or freedom: concepts that have clear moral significance but controversial application in the real world. Fairness is a good thing, by definition. It is not the only good thing, and it can conflict with other goods. But the bigger challenge is to decide which outcomes and policies actually are fair.
Other concepts are easy to recognize in the world but lack clear moral significance. We either bus students to school or we do not bus them, but whether busing is good is debatable. (In this respect, it is a very different kind of concept from fairness.) Still other concepts carry great moral weight, but their valence is unpredictable. You can't use the word love seriously without making some kind of morally important point. But you need not use that word positively: sometimes love is bad, and the same is true of free and achieve.
People string such concepts together in various ways. They may make associations or correlations ("The girls are doing better than the boys in reading"). They may make causal claims ("The math and reading tests are causing us to overlook the arts.") They may apply general concepts to particular cases. Often they will describe individual teachers, administrators, events, classes, and facilities with richly evaluative terms, such as beautiful or boring. Frequently, they will tell stories, connecting events, individuals, groups, concepts, and intentional actions over time.
All these ways of talking are legitimate in a democratic public discussion. But the heterogeneity of our talk seems problematic. So many different kinds of ideas are in play that it seems impossible to reach any principled or organized resolution. We talk for some arbitrary amount of time, and then a decision must be made by the pertinent authorities or by a popular vote. It is not clear whether the decision was correct based on the discussion that preceded it.
It seems beneficial to organize and systematize public discussion, and several kinds of experts stand ready to help:
- Social scientists propose to organize public discussions by identifying reliable causal relationships among concepts that can be empirically identified in the world. For instance, success comes to mean passing a test or graduating on time, and class size is found to influence (or not to influence) success. The hope is—if not to end the discussion—at least to focus and rationalize it.
- Managers (both actual administrators of our institutions and experts on management) hope to limit or organize public discussions by pronouncing on which strategies will work and which are permissible under the current rules and policies.
- Ideological thinkers try to simplify the discussion by putting heavy weight on certain moral concepts, which then trump others. (For example, personal liberty is a trump card for libertarians; equal welfare, for social democrats.)
- Lawyers are trained to guide public discussions by explaining which options are legal or obligatory under laws, precedents, and constitutions.
- Moral and political philosophers have less public influence than the other groups mentioned so far, but they hold the most subtle and sophisticated views of how public discussions ought to be improved. Contemporary academic philosophers are often disarmingly modest about their contributions, yet a core professional goal is to improve discussions by identifying morally clear and invariant concepts that should then influence decisions. Depending on which philosophical school one defends, those concepts might include rational autonomy, maximum utility, or virtue.
All of these forms of expert and disciplined guidance can be useful. But they often conflict, and so the very fact that they all help should tell us something. There is no methodology that can replace or discipline our public discussions or bring them to a close. This is because of the nature of moral reasoning itself.
Moral concepts are indispensable. We cannot replace them with empirical information. Even if smaller class sizes do produce better test scores, that does not tell us whether our tests measure valuable things, whether the cost of more teachers would be worth the benefits, or whether the state has a right to compel people to pay taxes for education.
But moral concepts are heterogeneous. Some have clear moral significance but controversial application in the world. (Fairness is always good, and murder is always bad.) Others have clear application but unpredictable moral significance. (Homicide is sometimes murder but sometimes it is justifiable.) Still others are morally important but are neither predictable nor easily identified. (Love is sometimes good and sometimes regrettable, and whether love exists in a particular situation can be hard to say.) A method that could bring public deliberation to closure would have to organize all these concepts so that the empirically clear ones were reliably connected to the morally clear ones.
That sometimes happens. For instance, waterboarding either happens or it does not happen. The Bush Administration's lawyers defined it in obsessive detail: "The detainee is lying on a gurney that is inclined at an angle of 10 to 15 degrees to the horizontal. ... A cloth is placed over the detainee's face and cold water is poured on the cloth from a height of approximately 6 to 18 inches …" Waterboarding is, in my considered opinion, an example of torture. Torture is legally defined as a felony, and the reason for that rule is a moral judgment that torture is always wrong (in contrast to punishment or interrogation, which may be right). Therefore, waterboarding is wrong. This argument may be controversial, but it is clear and it carries us all the way from the concrete reality of a scene in a CIA interrogation room to a compelling moral judgment and a demand for action. The various kinds of concepts are lined up so that moral, legal, and factual ideas fit together. There is room for debate: Is waterboarding torture? Who waterboarded whom? But the debate is easily organized and should be finite.
If all our moral thinking could work like that, we might be able to bring our discussions to a close by applying the right methods--usually a combination of moral philosophy plus empirical research. But much of our thinking cannot be so organized, because we confront moral concepts that lack consistent significance. They are either good or bad, depending on the circumstances. Nevertheless, they are morally indispensable; we cannot be good human beings and think without them. Love and freedom are two examples. To say that Romeo loves Juliet--or that Romeo is free to marry Juliet--is to say something important, but we cannot tell whether it is good or bad until we know a lot about the situation. There is no way to organize our thinking so that we can bypass these concepts with more reliable definitions and principles.
A structured moral mind might look like the blueprint of a house. At the bottom of the page would be broad, abstract, general principles: the foundation. An individual's blueprint might be built on one moral principle, such as "Do unto others as you would have them do unto you." Or it might start even lower, with a metaphysical premise, like "God exists and is good." At the top of the picture would be concrete actions, emotions, and judgments, like "I will support Principal Jones's position at the PTA meeting." In between would be ideas that combine moral principles and factual information, such as, "Every child deserves an equal education," or "Our third grade curriculum is too weak." The arrows of implication would always flow up, from the more general to the more specific.
I think most people's moral thinking is much more complex than this. Grand abstractions do influence concrete judgments, but the reverse happens as well. I may believe in mainstreaming special-needs children because of an abstract principle of justice, and that leads me to support Mrs. Jones at the PTA meeting. Or I may form an impression that Mrs. Jones is wise; she supports mainstreaming; and therefore I begin to construct a new theory of justice that justifies this policy. Or I may know an individual child whose welfare becomes an urgent matter for me; my views of Mrs. Jones, mainstreaming, and justice may all follow from that. For some people, abstract philosophical principles are lodestones. For others, concrete narratives have the same pervasive pull—for example, the Gospels, or one's own rags-to-riches story, or Pride and Prejudice.
We must avoid two pitfalls. One is the assumption that a general and abstract idea is always more important than a concrete and particular one. There is no good reason for that premise. The concept of a moral "foundation" is just a metaphor; morality is not really a house, and it does not have to stand on something broad to be solid. Yet we must equally avoid thinking that we just possess lots of unconnected opinions, none intrinsically more important than another. For example, the following thoughts may all be correct, but they are not alike: "It is good to be punctual"; "Genocide is evil"; and "Mrs. Jones is a good principal." Not only do these statements have different levels of importance, but they play different roles in our overall thinking.
I would propose switching from the metaphor of a foundation to the metaphor of a network. In any network, some of the nodes are tied to others, producing an overall web. If moral thinking is a network, the nodes are opinions or judgments, and the ties are implications or influences. For example, I may support mainstreaming because I hold a particular view of equity; then mainstreaming and equity are two nodes, and there is an arrow between them. I may also love a particular child, and that emotion is a node that connects to disability policy in schools. A strong network does not rest on a single node, like an army that is decapitated if its generalissimo is killed. Rather, a strong network is a tight web with many pathways, so that it is possible to move from one node to another by more than one route. Yet in real, functioning networks, all the nodes do not bear equal importance. On the contrary, it is common for the most important 20 percent to carry 80 percent of the traffic--whether the network happens to be the Internet, the neural structure of the brain, or the civil society of a town.
I suspect that a healthy moral mind is similar. It has no single foundation, and it is not driven only by abstract principles. Concrete motives (like love or admiration for a particular individual) may loom large. Yet the whole structure is network-like, and it is possible for many kinds of nodes to influence many other kinds. My respect for Mrs. Jones may influence how I feel about the concept of the welfare state, and not just the reverse. I need many nodes and connections, each based on experience and reflection.
I do not mean to imply that a strong network map is a fully reliable sign of good moral thinking. A fascist might have an elaborate mental map composed of many different racial and national prejudices and hatreds, each supported by stories and examples, and each buttressing the others. That would be a more complex diagram than the ones possessed by mystics who prize purity and simplicity. Purity of Heart Is to Will One Thing, wrote Søren Kierkegaard, and the old Shaker hymn advises, "'Tis the gift to be simple, 'tis the gift to be free, 'tis the gift to come down where we ought to be." A righteous Shaker would do more good than a sophisticated fascist. But even if complexity is not a sufficient or reliable sign of goodness, a complex map is both natural and desirable. It reflects the real complexity of our moral world; it reduces the odds of becoming fanatical; it hems in self-interest; and it is resilient against radical doubt.
Four conclusions follow from this discussion.
1. We should banish a certain kind of moral skepticism which arises from thinking that moral conclusions always rest on foundations, but alas there is nothing below our biggest, most abstract ideas. For example, you may believe in the Golden Rule but be unwilling to say why it is true. You may feel that there is no answer to the "Why?" question, and therefore morality is merely prejudice or whim. Your moral house has a foundation (the Golden Rule), but the foundation is floating in air. Fortunately, our whole morality does not rest on any such rule, nor must a principle rest on something below it to be valid. The Golden Rule is part of a durable network. It gains credibility because it seems consistent with so many other things that we come to believe. If it or any other node is knocked out of the network, the traffic can route around it.
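The "route around it" idea can be made concrete with a toy graph. Everything below is invented for illustration (the node names, links, and helper functions are mine): moral judgments are nodes, implications are edges, and a breadth-first search shows that the remaining judgments stay mutually reachable even after a heavily connected node is knocked out.

```python
from collections import deque

# A hypothetical "moral network": judgments as nodes, implications as edges.
# All names and links here are invented for this illustration.
edges = {
    "golden_rule": {"fairness", "equity"},
    "fairness": {"golden_rule", "equity", "mainstreaming"},
    "equity": {"golden_rule", "fairness", "mainstreaming"},
    "mainstreaming": {"fairness", "equity", "support_mrs_jones"},
    "support_mrs_jones": {"mainstreaming", "admire_mrs_jones"},
    "admire_mrs_jones": {"support_mrs_jones"},
}

def connected(graph, start):
    """Return every node reachable from `start` by breadth-first search."""
    seen, queue = {start}, deque([start])
    while queue:
        for neighbor in graph.get(queue.popleft(), ()):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen

def without(graph, node):
    """Return a copy of the graph with `node` and all its edges removed."""
    return {k: v - {node} for k, v in graph.items() if k != node}

# Knock out the most foundational-looking node; the rest still reach one another.
pruned = without(edges, "golden_rule")
print(sorted(connected(pruned, "fairness")))
# -> ['admire_mrs_jones', 'equity', 'fairness', 'mainstreaming', 'support_mrs_jones']
```

The point of the sketch is only structural: because there are multiple paths between most pairs of nodes, no single node plays the load-bearing role that a foundation plays in the blueprint picture.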
2. Moral thinking is influenced by worldly experience, by practice and by stories, and not only by abstract theories and principles. I wrote that it "is influenced" by experience; I have not shown that our thinking should be deeply experiential. But at the least, we can say that there is no reason to put abstract thinking on a pedestal, to treat it as if it were intrinsically and automatically more reliable than concrete thinking. I can be just as certain that I love my children as I am of the truth of the Golden Rule.
3. We can handle diversity. If individuals' conclusions derived from the foundations of their thought, we would face a serious problem whenever we encountered people who had different foundations from our own. It would be hard to tolerate them, let alone deliberate with them. The existence of a different foundation can even provoke vertiginous skepticism in our own minds. If my worldview rests on utilitarianism, and yours depends on faith in Jesus' resurrection, perhaps neither of us has any reason to hold our own position. But if our respective worldviews are more like networks, then they probably share many of the same nodes even though they differ in some important respects. What's more, each person's network must be slightly different from anyone else's—even his twin brother's. Thus when we categorize people into "cultures," we are crudely generalizing. There is actually one population of diverse human beings who are capable of discussing their differences even though they may not reach agreement.
4. Expertise plays a limited role in reaching good decisions. The moral network in my mind cannot be--and should not be--radically simplified by applying any sophisticated methodology. I can learn from experts about what causes what and about how we should define various concepts and principles. But at the end of that process, I will still have my own moral network map, nourished by many sources other than the experts, and I will have to make decisions both alone and in dialog with my peers. There is no substitute for thinking together about problems and solutions.
permanent link | comments (0) | category: philosophy
April 15, 2010
what was Rawls doing?
John Rawls was the most influential recent academic political philosopher in the English-speaking world, or at least the most influential academic who defended liberal views. If you take him at face value, he is a very abstract kind of thinker. In fact, he says in section 3 of A Theory of Justice:
- My aim is to present a conception of justice which generalizes and carries to a higher level of abstraction the familiar theory of the social contract as found, say, in Locke, Rousseau, and Kant. In order to do this we are not to think of the original social contract as one to enter a particular society or to set up a particular form of government. Rather, the guiding idea is that the principles of justice for the basic structure of society are the object of the original agreement.
In a famous methodological move, he defines the "original position" as one in which persons are ignorant of all morally irrelevant facts so that each cannot "tailor principles to the circumstances of [his or her] own case." By making us ignorant of most empirical facts about ourselves, Rawls makes his theory seem more abstract than even Kant's.
As Rawls works out the actual framework of justice, it turns out that the government should do certain things and not others. Parties to the original contract would want there to be "roughly equal prospects of culture and achievement for everyone similarly motivated and endowed. The expectations of those with the same abilities and aspirations should not be affected by their social class." To achieve this outcome, the government should fund education and channel educational resources to the least advantaged. I presume it should also regulate employment contracts to prevent discrimination, thus enacting the principle of "careers open to talents." But the government should not be in charge of child-rearing, even though families affect people's capacities and motivations. ("Even the willingness to make an effort, to try, and so to be deserving in the ordinary sense is itself dependent upon happy family and social circumstances.") The state should compensate people from unhappy families, but should not take over the family's traditional function.
Why not? One answer might be that Rawls was insufficiently radical and consistent. He arbitrarily excluded the family from his program of reform because of prejudice. I take a different view--one more favorable to Rawls' conclusions but less supportive of his methods.
I don't believe that his reasoning was nearly as abstract as he claimed. Instead, I think he was a reader of newspapers and an observer of life in America, ca. 1945-1975. He observed that the actual government did a pretty good job of providing universal education but could still improve the equality of educational opportunity. The government policed employment contracts increasingly well to prevent racial and gender discrimination, albeit with room for improvement. But the government didn't do child-rearing well. (The foster care system was only an emergency response that, in any case, relied on private volunteers.) Rawls derived from the immediate past and present some principles for further reform.
That interpretation makes Rawls a good thinker, sensible and helpful, but not quite the kind of thinker he believed himself to be. In my view, he was less like Kant (elucidating the universal Kingdom of Ends from the perspective of pure reason) and more like Franklin Roosevelt or Lyndon Johnson, defending the course of the New Deal and the Great Society in relatively general and idealistic terms. Or he was like John Dewey, critically observing reality from an immanent perspective. The reason this distinction matters is methodological. As we go forward from Rawls, I think we need more social experimentation and reflection on it, not better abstract reasoning about the social contract.
permanent link | comments (0) | category: philosophy
April 6, 2010
philosophers dispensing advice
Yesterday, for fun, I posted a clip of the philosopher Jonathan Dancy on the Late Late Show. His interview raises an interesting and serious question. Asked whether philosophers should dispense moral advice, Dancy says: No. I would agree with that, for reasons stated below. But Dancy goes further and suggests that philosophers shouldn't address substantive moral issues at all. He implies that people's ethical judgments are already in pretty good shape. A philosopher's job is to understand what kind of thing an ethical judgment is. In other words, moral philosophy is meta-ethics.
That is a controversial claim. John Rawls, Peter Singer, Robert Nozick, Judith Jarvis Thomson, and many other modern philosophers have advanced and defended challenging theses about morality. Since the great renaissance of ethics in the English-speaking world (1965-1975), its ambitions have diminished, I think, and a distinction has arisen between ethics (which is very "meta") and applied ethics (which is mostly about a given topic area, and not very philosophical). This split seems a harmful development, because the best moral philosophy is methodologically innovative and challenging and also addresses real issues.
Why shouldn't philosophers dispense advice? Because what one needs to advise people well is not only correct general views (which, in any case, many laypeople hold), but also good motivations, reliability and attention, fine interpretative skills, knowledge of the topic, judgment born of experience, and communication ability (meaning not only clarity but also tact). There is no reason to think that members of your local philosophy department are above average on all these dimensions.
But correct general views are valuable, and philosophers offer proposals that enrich other people's moral thinking. You wouldn't ask John Rawls to run a governmental program or even to advise on specific policies, but your thinking about policies may be better because you have read Rawls. It so happens that he held some interesting ideas about meta-ethics, but those were merely complementary to his core views, which were substantive.
I'm afraid I detect a general withdrawal from offering and defending moral positions in the academy. Humanists like to "problematize" instead of proposing answers. Social scientists are heavily positivist, regarding facts as given and values as arbitrary and subjective (thus not part of their work). If moral philosophers begin to consider the offering of moral positions as beyond their professional competence, there's virtually no one left to do it.
permanent link | comments (0) | category: philosophy
April 5, 2010
a philosopher hits the big time
I'm an adherent of a very small and obscure philosophical school called "particularism." (Of course, because I'm an academic, I have to have my own special flavor of it.) The best known particularist is Jonathan Dancy, whom I only met once but who nicely reviewed a book manuscript of mine. And his work has had a big influence on me, even though I come at things from a different angle. Anyway, unbelievable as it may seem, here he is explaining particularism on Craig Ferguson's "Late Late Show" on CBS:
I've never seen his show, but this Ferguson guy strikes me as pretty smart. And Dancy does a credible job in a terrifying situation. It turns out he's the actress Claire Danes' father-in-law. That relationship--rather than the arguments in "Are Basic Moral Facts both Contingent and A Priori?" (2008)--may be the reason for Dancy's new TV career. Whatever the reason, long may it prosper.
permanent link | comments (0) | category: philosophy
March 25, 2010
state, market, and original sin
Imagine that the pure and original human condition is freedom from all political constraint; and when governments intervene, they introduce arbitrary and illegitimate power. Then the market is Eden and the government is original sin. In that case, anyone who deliberately increases the scope of government must be either a purposeful or a deluded friend of sin. Regardless of what the Congressional Budget Office or the American Medical Association may say about the new health care act, it can only be a snake in the garden. The difference between literally "taking over one sixth of the economy" by nationalizing health care and merely adding some new insurance regulations and subsidies (as Congress did this week) is immaterial, because sin is sin. On this view, the only important political distinction is between those who would protect freedom from the state and those who would use government for their ends. Communists, fascists, liberals, and moderate conservatives--despite what I observe as profound differences--run together.
I am certainly not the first to note a similarity between this specific kind of libertarianism and religious thought. In 1922, Charles A. Beard argued:
About the middle of the nineteenth century, thinkers [in the field of Political Economy] were mainly concerned with formulating a mill owner's philosophy of society; and mill owners resented every form of state interference with their 'natural rights.' ... The state was regarded as a badge of original sin, not to be mentioned in economic circles. Of course, it was absurd for men to write of the production and distribution of wealth apart from the state which defines, upholds, taxes, and regulates property, the very basis of economic operations; but absurdity does not stay the hand of the apologist.
Beard wanted to rebut the idea that markets were primeval and natural by demonstrating that states originally created modern markets by seizing territory, chartering corporations, coining money, literally building physical exchanges, and so forth. But Beard's language suggests another point. The doctrine of laissez-faire echoes Christian principles, but almost precisely in reverse. (And to teach an inverted Christian doctrine would be blasphemous.) The conventional Christian view is that property was absent in Eden and among Jesus' apostles. Property entered because of sin; anointed or otherwise legitimate governments rightly restrain it with law.
I think Tom Paine represents an intermediary stage between the original doctrine (property is sin) and its laissez-faire inversion (property is pristine). In Common Sense, he writes:
[Natural] Society is produced by our wants, and government by our wickedness; the former promotes our happiness positively by uniting our affections, the latter negatively by restraining our vices. The one encourages intercourse, the other creates distinctions. This first is a patron, the last a punisher. Society in every state is a blessing, but government, even in its best state, is but a necessary evil . . . Government, like dress, is the badge of lost innocence.
This is not yet philosophical libertarianism, because Paine thinks that government, like dress, is a good idea under the circumstances. But it introduces the association of government with original sin.
Glenn Beck waded into the same territory when he denounced churches that embrace "social justice." His sense of sin was religious, I think, although his doctrine was the precise reverse of what all Christian denominations still officially hold. Jim Wallis has a nice rebuttal in the Huffington Post. If the official and traditional religious position still influences believers, then Beck bit off more than he could chew.
permanent link | comments (0) | category: philosophy
March 23, 2010
debating Bleak House
Steven Maloney has a thoughtful post about moral issues in Dickens' Bleak House. He cites two of my posts on the same subject, so this is a bit of a back-and-forth. I would summarize my thoughts about the novel as follows:
1. Mrs. Jellyby illustrates how an author's judgment of a character can be correct even though the same author's choice of that character is problematic. I find Mrs. Jellyby awful, as does Dickens. She is callously unconcerned about her own family because she is obsessed with an obviously foolish charitable scheme in Africa, a place of which she knows nothing. No doubt there were women like that in Dickens' day, when paths to national political and civic leadership were reserved for men. But bourgeois women were also struggling to play useful public roles despite a powerful cult of domesticity. Dorothea Brooke in Middlemarch--for example--is a great soul largely squelched by her narrow opportunities for improving the world. So it bothers me that Dickens would choose to portray a woman who should just stop worrying about society and serve her family better.
Steven makes a fair point that a whole range of characters populates Bleak House, and both the men and women exhibit various levels of social and domestic responsibility. The fact that Messrs. Skimpole and Carstone are as irresponsible as Mrs. Jellyby reduces the misogyny of the novel. Yet there is no female character with any capacity for social improvement--despite the terrible needs that Dickens portrays--and that seems a flaw.
The general category that interests me here encompasses fictional characters who have genuine virtues or vices, but whose description reinforces a harmful stereotype.
2. I think that Bleak House is a nationalistic novel, encouraging readers to broaden their sympathies to encompass all Englishmen (while stopping at the coasts of England). That's certainly not my favorite ethical stance, but it's better than a narrower frame or a vacuous and sentimental concern for human beings in general. Such nationalism is a form of solidarity, not just empathy. Building the nation-state as a community of mutual concern was an arduous task that could still fail today. Bleak House (and the liberalism it represents) improved the world.
Steven makes an important observation about Mr. Skimpole, who professes literally not to understand his social obligations. That creates an interesting problem for moral assessment. I think Steven is right that Skimpole is ultimately a charlatan and his kind of non-understanding is either inexcusable or spurious.
I've written much more about the ethical interpretation of literature in Reforming the Humanities: Literature and Ethics from Dante through Modern Times (Palgrave Macmillan, 2009).
permanent link | comments (0) | category: fine arts , philosophy
February 25, 2010
idea for a moral philosophy survey
I suspect that people make moral judgments based on a mix of principles, rules, virtues, moral exemplars, and stories. My own philosophical position is that these factors are on a single plane. Principles need not underlie stories, for example. There can be a web of influence or implication that connects all these different kinds of factors. It can be legitimate for a story to imply a principle, a principle to imply respect for an exemplar, the exemplar to suggest respect for a virtue, which implies a different principle. None is necessarily primary or foundational.
As an empirical matter, people differ (I assume) in how their moral thought is organized. If you envision each moral factor as a node, and each implication from one factor to another as a network tie, then we each have a moral network map in our mind. But for some, the map will look like an organizational chart, with a few very broad principles at the bottom, which imply narrower principles, which imply specific judgments. For others, a single story (like the Gospels, or one's own traumatic experience) lies at the center, and everything else radiates out. Some may have a random-looking network map, with lots of nodes and connections but no order. And some--whether by chance or not--will have what's called a "scale-free" network, in which 20% of the nodes are responsible for 80% of the ties. That kind of network is robust and coherent, but not ordered like a flow chart. The 20% of "power nodes" may be a mix of stories, exemplars, principles, and virtues.
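The 80/20 idea about "power nodes" can be made concrete. Here is a minimal Python sketch of one person's moral network map as a list of ties among factors; every node name below is hypothetical, invented purely for illustration, not drawn from any survey. The `hub_share` function checks what fraction of tie endpoints the most-connected fifth of the nodes account for.

```python
from collections import Counter

# Hypothetical moral network map: each node is a moral "factor"
# (a principle, story, exemplar, or virtue -- all on a single plane),
# and each tie means "this factor supports or implies that one."
ties = [
    ("Golden Rule", "no lying"),
    ("Golden Rule", "keep promises"),
    ("the Gospels", "Golden Rule"),
    ("the Gospels", "compassion"),
    ("compassion", "give to charity"),
    ("grandmother's example", "compassion"),
    ("grandmother's example", "patience"),
    ("no lying", "correct the cashier's error"),
]

def hub_share(ties, top_fraction=0.2):
    """Fraction of all tie endpoints accounted for by the top
    `top_fraction` of nodes -- a rough 80/20 check on the map."""
    degree = Counter()
    for a, b in ties:
        degree[a] += 1
        degree[b] += 1
    ranked = [d for _, d in degree.most_common()]
    k = max(1, round(len(ranked) * top_fraction))
    return sum(ranked[:k]) / sum(ranked)

print(round(hub_share(ties), 2))
```

In this toy map the top fifth of nodes carries well under 80% of the ties, so it would not count as scale-free; the point is only that the question is checkable once a map is written down.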
I would further hypothesize that people of similar cultures have similar moral network maps.
How to find out? I wonder if you could give people an online survey that led with a fairly realistic but fictional moral situation.* It would be something close to lived experience, not a scenario like a trolley problem that is contrived to bring abstract principles to the surface.
Respondents could then be asked:
1. What principles (if any) influence you when you think about what you should do?
2. Whom would you imitate (if anyone) when you're deciding what to do?
3. What virtues (if any) would you try to embody when you're deciding what to do?
4. What stories (if any) come to mind when you're deciding what to do?
All of a respondent's answers could then be displayed on a screen, randomly scattered across the plane. The respondent could be given a drawing tool and asked to draw arrows (one-way or two-way) between factors that seem to influence or support other ones. Those data would generate a moral network map for the individual, and we would see how much the structures of people's maps differ.
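As a sketch of how the drawn arrows might be analyzed, the following Python fragment computes a crude structural summary for each respondent's map; both respondents and all the node labels are hypothetical, made up for illustration. A map that radiates from a single central story yields a higher "hub share" than a more chain-like, hierarchical one.

```python
def map_stats(arrows):
    """Crude structural summary of one respondent's moral network map:
    node count, tie count, and the share of ties that touch the single
    most-connected node (a high share suggests a radial, hub-centered map)."""
    nodes = {n for arrow in arrows for n in arrow}
    touch = {n: sum(n in arrow for arrow in arrows) for n in nodes}
    hub = max(touch.values())
    return {"nodes": len(nodes), "ties": len(arrows),
            "hub_share": round(hub / len(arrows), 2)}

# Hypothetical respondent whose map radiates from one central story:
radial = [("the Gospels", x) for x in
          ("forgive debts", "honesty", "my pastor", "humility", "charity")]

# Hypothetical respondent with a more hierarchical, principle-driven map:
chain = [("utility", "reduce suffering"), ("reduce suffering", "give blood"),
         ("reduce suffering", "vote for aid"), ("utility", "honesty")]

print(map_stats(radial))
print(map_stats(chain))
```

The first map's hub touches every tie (hub share 1.0), while the second spreads its ties across two mid-level nodes; with real survey data, statistics like these would let us compare map structures across individuals and cultures.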
*It would be very challenging to write a scenario that didn't bias responses toward one kind of moral factor. It would also be difficult to create a fictional scenario that had salience for different people. But the general idea would be to create a nuanced, complex, realistic situation demanding a moral response. For me personally, the kind of fictional story that would resonate would be something like this: "Your child attends a local public school. She's doing well academically and learning some academic material in classes, although not as much as she could. The school is racially and culturally diverse, and she benefits from learning about people who are demographically different. White, middle-class students perform better on standardized tests within this school than their peers who are children of color. The principal is caring and concerned with equity but does not seem to have a vision. The teacher is not especially nice but does seem effective at raising all children's test scores. Options for you include moving your kid to a different school, becoming more involved in the school's governance, or advocating for a policy change. What do you feel you should do?"
permanent link | comments (0) | category: philosophy
February 24, 2010
going deeper on gay marriage
At a meeting last week, we discussed whether gay marriage makes a good topic for discussion in a philosophy or civics course at the high school or college level. Some participants argued that there are no good secular, public reasons against gay marriage. Students (at any level) may have personal convictions against it, but they can only disclose those convictions (if they dare). They will not be able to make arguments relevant to fellow students who hold different convictions. All the neutral arguments favor gay marriage. And that makes it a poor choice for a discussion topic.
I'm not certain that's correct, but I do think that gay marriage is nested in broader issues that make better discussion topics. IF we should live in a liberal, democratic state that is neutral about religion, AND IF that state should give special legal recognition and benefits to "marriage," defined as a very specific contract between pairs of consenting adults, THEN that recognition and those benefits should be available to gay citizens as well as straight ones. That argument seems very straightforward to me and virtually impossible to refute on its own terms. But ...
Should we live in a liberal, democratic state that is neutral about religion? That's a good, complicated, heavily-discussed topic. It raises thorny cases. For example, Martin Luther King was a Christian minister and theologian who made brilliant, "faith-based" arguments against segregation. Those arguments influenced policymakers and voters in our liberal democracy. Was his influence appropriate? If so, why?
Second, should the state recognize and provide benefits for only certain kinds of contracts, defined as "marriages"? Today, in some states, gays may marry legally. But everyone who marries enters into a contract that has certain features. It is designed to be permanent, although there is an intentionally difficult escape hatch in the form of divorce. It combines in one package monogamous sexual intimacy, economic unity, parenting and adoption rights, cohabitation, tax benefits, inheritance, and other legal privileges. Clearly, these elements could be unpacked and offered a la carte.
In practice, marriages do differ. Some people who marry are never sexual partners nor plan to be. Some couples do not expect or value monogamy. Prenuptial agreements may override the principle of economic unity or common property. Yet it remains important that the state -- and social custom -- favors one model of marriage (even when gay marriage is permitted).
I think this second issue (standardized legal marriages versus a la carte contracts) is pretty interesting. If legal marriage became very flexible, it would be like forcing everyone to negotiate their own prenuptial agreements. I would personally hate that idea. It seems extremely stressful to have to invent one's own model of marriage as a couple and then write it all down in legal terms. I would much rather buy into an existing legal and social norm. But this seems like a worthy topic of discussion.
permanent link | comments (0) | category: philosophy
January 6, 2010
why I am not a libertarian
I have a lot of respect for the pragmatic kind of libertarianism that says: Market solutions might work better than government programs, and we should try them. For example, I think it's right to experiment with voucher systems as alternatives to government-run schools. This experiment will either work or not (under various circumstances), but it's worth trying.
A voucher system would not, however, bring about true philosophical libertarianism. The government would still collect mandatory taxes to fund education, and would still make certain educational experiences mandatory for every child. In fact, voucher systems are standard in some of the Western European countries that we call "socialist."
True philosophical libertarianism says: Government taxation and regulation are affronts to personal liberty. My life is mine, and no one, including a democratic state, may take goods from me or direct my actions without restricting my freedom. At most, minor restrictions on my liberty are acceptable for truly important reasons, but they are always regrettable.
That doctrine simply does not feel plausible to me, experientially. Imagine that all levels of government in the United States reduced their role to providing national defense and protecting us against crimes of violence and theft. Gone would be an interventionist foreign policy, criminalization of drugs and prostitution, and--more significantly--publicly funded schools, colleges, medical care, retirement benefits, and environmental protection. As a result, a family like mine could probably keep 95% of the money we now have to spend on taxes, paying only for a minimal national defense and some police and courts. We would have perhaps one third more disposable income,* although we would have to purchase schooling for our kids, a bigger retirement package, and more health insurance; and we would have to pay the private sector somehow for things like roads and airports.
I have my doubts that we would be better off in sheer economic terms. In any case, I am fairly sure that I would not have more freedom as a result of this change. And freedom (not economic efficiency or impact) is the core libertarian value.
I don't think one third more discretionary income would make me more free because I know plenty of people who already have that much income and they don't seem especially free. With an extra billion dollars, I could do qualitatively different things from what I can do now; but an amount under $100,000 would just mean more stuff. Meanwhile, when I consider the actual limits to my freedom, the main ones seem to fall into two categories. First, there is a lack of time to do what I want. I suppose not having to pay taxes would give me a bit more time because I could work fewer hours. But my work is a source of satisfaction to me (and is also somewhat competitive with others' work). I would be very unlikely to cut my hours if the opportunity arose, nor would doing so feel like an increase in my freedom. The way to get more time is to stop wasting it.
Second, I feel limited by various mental habits: too much concern with material things, too much fear of disease and death, too much embroilment in trivialities. I hardly think that being refunded all my taxes would help with those problems, especially if I then had to shop for schools, retirement packages, and insurance. That sounds like a perfect snare.
I have been talking about me and my family. Whatever the impact on us of a libertarian utopia, it would be worse for people poorer than us. Unless you take a very dim view of the quality of government services such as Medicaid and public schools, you should assume that low-to-moderate income citizens get more from the state than they could afford on the market. They would have reason to worry about whether they could afford basic services at all, and such insecurity would decrease their freedom as well as their welfare.
Overall, economic libertarianism seems to me a materialistic doctrine. (Civil libertarianism, which I endorse, is a different matter.) You risk being called elitist for saying that we are unfree because we have too much stuff and care too much about it. But it happens to be true.
*I don't know how much my family spends on total taxes (income, sales, property, local, state, federal, Social Security, etc.), but the Statistical Abstract of the United States says that 12% of all personal income goes to taxes, and I am presuming that we pay three times the average rate because we have higher income and live in Massachusetts.
permanent link | comments (0) | category: philosophy
January 5, 2010
Habermas illustrated by Twitter
The contemporary German philosopher Jürgen Habermas has introduced a set of three concepts that I find useful. They play out in the 140-character messages, "tweets," that populate Twitter. Here are Habermas' three concepts, with tweets as illustrations. (I found these examples within seconds as I wrote this blog post.)
Lifeworld is the background of ordinary life: mainly private, maybe somewhat limited or biased, but also authentic and essential to our satisfaction as human beings. When in the Lifeworld, we mostly communicate with people we know and who share our daily experience, so our communications tend to be cryptic to outsiders and certainly not persuasive to people unlike us. Real examples from Twitter: "y 21st bday with my beloved fam, bf and bff :)" ... "Getting blond highlights for new year." ... "Thanks! You too! I hope you get a chance to rest over the weekend before 'life' comes back at us."
The Public Sphere is the set of forums and institutions in which diverse people come together to talk about common concerns. It includes civic associations, editorial pages of newspapers, New England Town Meetings, and parts of the Internet. The logic of public discourse demands that one give general reasons and explanations for one's views--otherwise, they cannot be persuasive. Examples from Twitter: "Is it time to admit that the failures in our intelligence on terrorism are not systemic/technical but human/cultural?" "Clyburn Compares Health Care Battle To Struggle For Civil Rights" ... "Reports from Iran of security forces massing in squares as new footage of protests is posted." (Note that each of these tweets had an embedded link to some longer document.)
The "System" is composed of formal organizations such as governments, corporations, parties, unions, and courts. People in systems have official roles and must pursue pre-defined goals (albeit with ethical constraints on how they get there). For example, defense lawyers are supposed to defend their clients; corporate CEOs are supposed to maximize profit; comptrollers are supposed to reduce waste in their own organizations. You can see the "System" at work on Twitter if you follow Microsoft ("The Official Twitter of Microsoft Corporate Communications"), The White House, or NYTimes.
When well designed, Systems can be efficient, predictable, and fair. But they prevent participants from reasoning about what ought to be done, because officials have pre-defined goals. Thus it is dangerous for the System to "colonize" the public sphere and the Lifeworld. It is also dangerous for people to retreat entirely from the public sphere into the privacy of the Lifeworld. The Twitter Public Timeline shows this struggle playing out in real time.
permanent link | comments (0) | category: Internet and public issues , philosophy
September 23, 2009
Reforming the Humanities (coming soon)
My new book is in production and has a cover and an Amazon page. It's entitled Reforming the Humanities: Literature and Ethics from Dante through Modern Times. Two blurbs are on the back:
“Levine has written an erudite, balanced, insightful book integrating moral philosophy and literary interpretation. His choice of Dante's story of Francesca and Paolo is inspired, enabling him to illustrate his methodological and substantive points with a literary masterpiece. If anyone doubts that literature is ethical or that ethics can benefit from literature, this book will prove him wrong. I see here the beginnings of a new and promising humanistic discipline—narrative ethics.”—Colin McGinn, Professor of Philosophy, University of Miami
“The virtues of this book are many: it makes clear and compelling arguments for moderate particularism and historicism in moral reasoning, it deftly shows how Dante himself pursued these goals despite his own penchant for moral universalism, it generously but insistently illustrates the limitations of extremity (in particularism, historicism, and also universalism) through wide-ranging references to periods in art, literature, music, and philosophy, and it finally allies itself with a still burgeoning humanistic revival led by literary critics and moral philosophers. The author’s learnedness and intellectual curiosity are on display on every page…Philosophers and literary critics have much more to learn from each other right now. In the humanities, we dwell too much on what to read and how to read, but too little on why to read. This book offers a distinctive and compelling answer to that last question.”—Daniel S. Malachuk, Western Illinois University and author of Perfection, the State, and Victorian Liberalism
permanent link | comments (0) | category: philosophy
September 3, 2009
ethical reasoning as a scale-free network
All of us have many ethical thoughts--about this person, that activity, and also about general concepts like virtues and principles. Some of our ethical thoughts are linked to other ones. One entails another, or trumps it, or incorporates it. So you could make a diagram of my moral or ethical worldview that would consist of my thoughts and links among them.
What kind of network would it be? And what kind of network should it be? These are, respectively, an empirical/psychological question (the answer to which might differ for individuals) and a moral/philosophical question (which probably has one correct answer). By the way, instead of asking these questions about individuals, one could pose them for cultures or institutions.
Ethics might turn out to involve one of three kinds of networks:
1. An ordered hierarchy. This kind of network map would resemble the organizational flowchart of the US Army. At HQ would be some very general, core principles, mutually consistent: like Kant's Categorical Imperative or the utilitarian principle of the greatest good for the greatest number. Division commanders would be big principles like "no lying" or "spend government money to reduce suffering." The foot soldiers would be particular judgments. The chain of command would ideally be clear. Real people might have confused structures, but then we should try to rationalize them. The purpose, for example, of trolley problems is to identify the core principles of people's ethics so that inconsistencies can be reduced.
2. A random-looking network. In a truly random network, any node has an equal chance of being linked to any other. As in a bell curve, the node with the most links would not be that different from the mean node. Our ethical map would not be truly random, because there are reasons that one moral thought entails another. But the links among concepts and opinions might be distributed so that they were mathematically similar to those in a randomly-generated network.
I doubt that this is a good description of morality. David McNaughton and Piers Rawling are correct to say that some ethical concepts are "central." They are not just more weighty than other concepts (as rape is more weighty than jaywalking). They are also more central in the sense that they turn up more often and we rely on them more for judgments (“Unprincipled Ethics,” in Hooker and Little, eds., Moral Particularism, p. 268).
3. A scale-free network: This is a mathematical phrase for a network in which just a few nodes have enormous numbers of links and basically hold the whole thing together. Scale-free networks have no "scale" because there is no typical number of links per node that could anchor a y-axis of popularity. Instead, the number of nodes with a given number of links falls off according to a "power law." From Wikipedia:
"An example power law graph, being used to demonstrate ranking of popularity. To the right is the long tail, to the left are the few that dominate (also known as the 80-20 rule)."
In the case of ethics, we might find that equality, freedom, self-improvement, and compassion were power hubs with enormous numbers of links. Gratitude, fidelity, and the like might appear in an important second tier. (I am drawing here on W.D. Ross's list of prima facie duties.) Not cutting ahead in line would be out on the "long tail" of the distribution, along with reading Tolstoy and smiling at bus drivers.
Empirically, I think we could find out whether people (some or all of them) had scale-free moral network maps in their heads. One method would be to obtain a lot of text in which they reasoned about ethical issues--say, interview transcripts. One would identify and code concepts and connections among them, justifying each addition to the map with a quote. Whether the network is scale-free then becomes a mathematical question.
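The last step, treating scale-freeness as a mathematical question, could be roughed out as follows. This is only a heuristic sketch with toy data that I made up (a rigorous treatment would fit and compare candidate distributions statistically): if a distribution follows a power law, its degree counts fall on a roughly straight line on log-log axes.

```python
import math
from collections import Counter

# Heuristic sketch: given the number of links per coded concept, estimate
# the slope of the log-log degree-count plot by least squares. A roughly
# constant (negative) slope is consistent with a power law.
def loglog_slope(degrees):
    counts = Counter(degrees)
    xs = [math.log(k) for k in counts]
    ys = [math.log(counts[k]) for k in counts]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

# Toy data: invented link counts for a handful of coded moral concepts --
# a couple of hubs, a second tier, and a long tail of one-link concepts.
degrees = [50, 24, 12, 12, 6, 6, 6, 6] + [3] * 8 + [1] * 40
print(round(loglog_slope(degrees), 2))
```

On real interview transcripts, the `degrees` list would come from the coding step described above: one entry per concept, counting its links to other concepts.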
Philosophically, I like the idea of morality as a scale-free network. It means that some concepts are much more important than others, but everything needn't rest on a consistent and coherent foundation. The network can be strong even though it accommodates tensions. Further, since there is no foundation, doubting any one premise doesn't undermine morality as a whole. It just knocks out one hub and the traffic can be redirected. Finally, this metaphor helps us to think about differences in ethical thinking among individuals and among cultures. It's not that we have incommensurable perspectives, but that our network maps have (somewhat) different hubs. That suggests that dialog is possible even though disagreement should be expected (which sounds to me like the truth).
permanent link | comments (0) | category: philosophy
July 30, 2009
reforming the humanities
Last week, I submitted the copy-edited version of my next book for layout and production. It is entitled Reforming the Humanities: Literature and Ethics from Dante Through Modern Times, and it will be published by Palgrave Macmillan this year. The first paragraph says:
This is a book about ethics and stories. Ethics (or morality) encompasses what is right or good, what we ought to do, and how laws and institutions should be organized. I argue that a good way to make ethical judgments and decisions is to describe reality in the form of a true narrative. Fictional stories also support moral conclusions that can translate into real life. I argue that when the moral judgments supported by a good story conflict with general principles, we ought to follow the story and amend or suspend our principles, rather than the reverse. What makes a story “good” for this purpose is not its conformity to correct moral principles, but its merits as a narrative--for instance, its perceptiveness and coherence and its avoidance of cliché, sentimentality, and euphemism.
permanent link | comments (0) | category: philosophy
July 8, 2009
a tendency to generic thinking
When we try to think seriously about what should be done, we have a tendency or temptation to think in generic terms--about categories rather than cases.
- In social science, quantitative research evidently requires categorization; it is the search for relationships among classes of things.
- In applied philosophy/ethics, most of the discussion is about categories that can be defined by necessary and sufficient conditions, e.g., abortion, war, marriage. Thinking about categories allows what Jonathan Dancy calls “switching arguments." For instance, you decide what is good about heterosexual marriages, and if the same reasons apply to gay marriages, you should favor them as well. By thinking categorically, you can switch from one case to another.
- In policy analysis, lots of research is about generic policies: vouchers, foreign aid payments, prison sentences. I should, however, note the important exception that some scholars study major individual policies, such as the decision to invade Iraq or the No Child Left Behind Act.
- In ideological politics, the underlying values are strong general principles, e.g., "markets are good" or "there should be more equality." Categories of policies are then used as wedges for advancing an ideology. For example, libertarians promote school choice in order to demonstrate that markets work better (in general) than governments.
I have a gut-level preference for particularism: the idea that, in each situation, general categories are "marinaded with others to give some holistic moral gestalt" (Simon Blackburn's phrase). That implies that applying general categories will distort one's judgment, which should rather be based on close attention to the case as a whole.
I will back off claims that I made early in my career that we should all be thorough-going particularists, concerned mainly with individual cases and reluctant to generalize at all. My view nowadays is that there are almost always several valid levels of analysis. You can think about choice in general, about choice in schooling, about charters as a form of choice, or about whether an individual school should become a charter. All are reasonable topics. But the links among them are complex and often loose. For instance, your views about "choice" (in general) may have very limited relevance to the question of whether your neighborhood school should become a charter. Maybe the key issue there is how best to retain a fine incumbent principal. Would she leave if the school turned into a charter? That might be a more important question than whether "choice" is good.
The tendency to generalize is enhanced by certain organizational imperatives. For instance, if you work for a national political party, you need to have generic policy ideas that reinforce even more generic ideological ideas. The situation is different if you are active in a PTA. Likewise, if you are paid to do professional policy research, you are likely to have more impact if your findings can generalize--even if your theory explains only a small proportion of the variance in the world--than if you concentrate on some idiosyncratic case. On the other hand, if you are paid to write nonfictional narratives (for instance, as a historian or reporter), you can focus on a particular case.
I'm inclined to think that we devote too much attention (research money, training efforts, press coverage) to generic thinking, and not enough to particular reasoning about complex situations and institutions in their immediate contexts. There is a populist undercurrent to my complaint, since generic reasoning seems to come with expertise and power, whereas lay citizens tend to think about concrete situations. But that's not always true. Martha Nussbaum once noted that folk morality is composed of general rules, which academic philosophers love to complicate. Some humanists and ethnographers are experts who think in concrete, particularistic terms. Nevertheless, I think we should do more to celebrate, support, and enhance laypeople's reasoning about particular situations as a counterweight to experts' thinking about generic issues.
permanent link | comments (1) | category: philosophy
June 15, 2009
ethics from nature (on Philip Selznick)
(en route to the Midwest for a service-learning meeting.) Here is a fairly comprehensive ethical position. It is my summary of Philip Selznick's The Moral Commonwealth, chapter 1, which is presented as an interpretation of Dewey's naturalistic ethics. I have not investigated whether Selznick gets Dewey right--that doesn't matter much, because Selznick is a major thinker himself. His position has just a few key ingredients:
1. "The first principle of a naturalist ethic is that genuine values emerge from experience; they are discovered, not imposed" (Selznick, p. 19). So we shouldn't expect to ground ethics in a truth that is outside of experience, as Kant advised.
2. Experience is the understanding of nature, broadly defined. Such experience has moral implications. There is "support in nature for secure and fruitful guides to moral reflection and prescription" (p. 27). Yet "humanity is in the business of amending nature, not following it blindly" (p. 18).
3. The study of nature that we need for ethics is more like "natural history" than "theoretical science." In other words, it looks for generalities and patterns, but it doesn't assume that true knowledge is highly abstract and universal. "For modern theoretical scientists, nature is not known directly and concretely but indirectly and selectively. Ideally embodied in mathematical propositions, nature becomes rarified and remote. In contrast, students of natural history--naturalists--are interested in the situated wholeness of objects and organisms. They perceive a world of glaciered canyons, burnt prairies, migrating geese." They exhibit "love for the world" (p. 26).
4. Certain facts about human beings (not to be sharply separated from other natural species) emerge from such empirical observation and are ethically important. For instance, human beings have a potential for growth or development in interaction with community, and such growth gives us well-being. "When interaction is free and undistorted--when it stimulates reflection and experiment--powers are enhanced, horizons expanded, connections deepened, meanings enriched. Growth depends on shared experience, which in turn requires genuine, open communication" (pp. 27-8).
Dewey/Selznick begin with observable facts about us as a natural species, identify growth as a "normative idea" (p. 28), and are soon on their way to strong ethical conclusions. For instance, Dewey claimed that democracy is the best system of government because it permits free collective learning; but a democracy is desirable to the extent that discussion and experimentation prevail (rather than the mere tabulation of votes).
This approach suggests that it's better to "benchmark" than to set ideals. That is, it's better to assess where we are as a species, or as a community, or as an individual, and then try to enhance the aspects that seem best, rather than decide what a good society or a good character should be like in principle. Dorf and Sabel have tried to work out a whole political theory based on this distinction.
I find Selznick's view attractive, but I have two major methodological concerns. First, I'm not sure that the selection of natural features is as straightforward as Selznick and Dewey presume. We are naturally capable of learning together in cooperative groups, thereby developing our own competence and enriching our experience. We are also capable of exploitation, cruelty, faction, brutality, and waste. These all seem equally "natural." I suspect the pragmatist's preference for "growth" is closer to a classical philosophical premise than a naturalist observation. In fact, it sounds a lot like Kant's requirement that we develop ourselves and others.
We could read Dewey's conclusions as simply a contribution to public debate. He likes "growth"; others can discuss his preference. If we reach consensus within our community, we have all the ethical certainty we need. If we disagree, our task is to discuss.
That's all very well as long as we recognize that consensus is highly unlikely. (This is my second objection.) Imagine Dewey in a debate with an Iranian Ayatollah. The latter would reject Dewey's method, since revelation should trump experience; Dewey's understanding of natural history, since the world began with creation and will end apocalyptically; and Dewey's goals, since salvation after death is much more valuable than growth here on earth. No experience can directly settle this debate, because we only find out what happens after death after we die. And until the Mahdi actually returns, it's possible that he is waiting.
But here's an argument in favor of Dewey's method. The debate is not just about abstract principles and unfalsifiable predictions. It's also about how principles play out in real, evolving institutions. So we should compare not just the metaphysics of a Shiite Ayatollah and an American pragmatist, but also the institutions that each one endorses: contemporary Iran versus a Deweyan model, such as a laboratory school or a settlement house. It seems to me that contemporary Iran is not doing very well, and Dewey has a "naturalist" explanation of why not. The fundamental principles of the Iranian revolution are not in sync with nature. That's not going to persuade a diehard revolutionary, because he will expect everything to improve as soon as the Mahdi returns. But it is an observation that a devout Shiite can accept and use as an argument for reform. Thus there is a meaningful debate between reformers like Khatami and diehards like Ahmadinejad. If Khatami ultimately wins, score one for Dewey and Selznick, because Iran will have turned out to be governed by natural laws of growth and reflection.
permanent link | comments (0) | category: philosophy
May 7, 2009
two paths to abstraction
1. At first, artists depict the world as they think it actually is. They even show heaven and other eternal and transcendent scenes in terms of their own times, places, and styles. Then they realize that they have a manner, a method, and a style of representation; and many such styles are possible. They learn to imitate art from distant places and times, which requires a certain sympathy or compassion. Their ability to represent the world as depicted by others reduces their attachment to their own style, which begins to seem arbitrary. For example, it seems arbitrary that the center of a flat piece of art should always appear to recede into the distance, and that one side of each object should be visible. Why not show all the sides at once, as in cubism? Gradually, artists' enthusiasm for any form of representative art diminishes. One important option becomes renunciation, in the form of minimalism and abstraction. Showing the world in any style means embodiment; but the mind can transcend the body. True art then becomes not the naive representation of the world, nor a sentimental imitation of someone else's naive style, but just a field of color on a canvas. That seems the way to make the artist's arbitrary will and narrow prejudices disappear, and beauty appear.
2. The Buddha's "Karaniya Metta Sutta," translated by the Amaravati Sangha:
Even as a mother protects with her life
Her child, her only child,
So with a boundless heart
Should one cherish all living beings;
Radiating kindness over the entire world:
Spreading upwards to the skies,
And downwards to the depths;
Outwards and unbounded,
Free from drowsiness,
One should sustain this recollection.
This is said to be the sublime abiding.
By not holding to fixed views,
The pure-hearted one, having clarity of vision,
Being freed from all sense desires,
Is not born again into this world.
The image is Ad Reinhardt, "Abstract Painting" (1951-2). (Reinhardt, influenced by Zen through his friend Thomas Merton, sought to make painting “a free, unmanipulated, unmanipulatable, useless, unmarketable, irreducible, unphotographable, unreproducible, inexplicable icon.”)
permanent link | comments (0) | category: philosophy
May 1, 2009
what shape is a field of vision?
At an idle moment recently, I was wondering what shape my field of vision has. A quick Google search took me to Alexander Duane's and Ernst Fuchs's 1899 textbook of ophthalmology, which is online. I am sure there is much more recent work--both empirical and conceptual--but I didn't explore it. Instead, I began to speculate that this is a fairly complicated question.
My first responses were in terms of two-dimensional spaces--for instance, I thought that perhaps my field of vision was an oval with a perturbation around my nose. It's oval rather than round because I have two eyes, and each has a separate field like the one pictured here. Putting them together creates an oval. So if you wanted to represent what I can see, you would take a wide-angle photo from my vantage point and cut out a roughly oval shape.
But my retinas are three-dimensional, as is the world they see. So should we say that my field of vision is a section of an ovoid with some irregularities created by my nose, eyebrows, and hair? Even that answer seems oversimplified, since my eyeballs are capable of focusing at different depths (and even rolling around, although that might be forbidden in a test of one's field of vision); and the world itself is not pasted on the inside of an oval--it extends into the distance. If we said that the shape of my field of vision was roughly ovoid, how big would that ovoid be? The night sky that is sometimes part of it is awfully far away. And I haven't even mentioned that we see moving things and bright colors more easily than stable, dull things. By now, it's beginning to sound as if my field of vision has no shape. But surely that can't be right; my vision has limits and moves as I change my orientation. We've begun talking about the world, not what I see of it.
By the way, it is interesting how easily we accept a photograph as a representation of vision, even though it is flat and rectangular, whereas our field of vision is--at the very least--irregular and vaguely bordered.
Wittgenstein seems to want us to dispense with the goal of analogizing inner experience to something else, as if everyday experience required some explanation in terms other than its own:
- And above all do not say 'After all my visual impression isn't the drawing; it is this--which I can't shew to anyone.'--Of course it is not the drawing, but neither is it anything else of the same category, which I carry within myself. ... If you put the 'organization' of a visual impression on a level with colours and shapes, you are proceeding from the idea of the visual impression as an inner object. Of course this makes this object into a chimera; a queerly shifting construction. For the similarity to a picture is now impaired. -- Philosophical Investigations, translated by Anscombe, IIxi.
For Wittgenstein, I take it, a field of vision has no shape, and we only feel that that's strange because we are in the grip of a model of vision as inner photography. It's actually something else entirely. And yet I keep returning to my initial thought that what I see is an oval with my nose intruding from the bottom.
permanent link | comments (0) | category: philosophy
March 12, 2009
a new book on the way
Palgrave Macmillan has offered me a contract to publish my "Dante book" (which needs an actual title--and I'm not sure what that should be). I have been working on the manuscript for 14 years, and it has gone through many profound structural changes as my thoughts have evolved and as I've assimilated useful criticism. It is great to think that the project will be done and between covers within months.
Here is the beginning of the introduction:
This is a book about ethics or morality and fiction. Ethics encompasses what is right or good, what we ought to do and think, and how laws and institutions should be organized. I argue that we should often make ethical judgments and decisions by describing reality in the form of true narratives. Fictional stories provide excellent opportunities to deliberate about situations and issues that also occur in real life, and should be read, in part, as ethical statements. I argue that when the moral judgments supported by a good story conflict with general principles, we ought to follow the story and amend or suspend our principles, rather than the reverse. What makes a story “good” for this purpose is not its conformity to correct moral principles, but its merits as a narrative—for instance, its perceptiveness and coherence and its avoidance of cliché, sentimentality, and euphemism.
The relationship between stories and moral principles is connected to other issues that I also explore: the proper role of emotion and reason in ethics; the scope of ethical judgments (i.e., how widely or in how many different contexts a given judgment ought to apply); cultural diversity and what that means for morality; partiality, or whether it is appropriate to favor people whom one knows; what kinds of context are relevant to the interpretation of literary texts; and the value of fictional versus true narratives.
This is a book of humanistic scholarship: specifically, literary criticism and moral philosophy. Those are my roots, even though I spend almost all my time on quantitative social science or policy analysis. My day job is to study and promote "civic engagement" or "active citizenship"; and it has proved useful to study those topics empirically. (Hence CIRCLE.) I don't think either phrase appears in this book manuscript. But there is a deep connection in my mind, which I hope to make explicit in a later project.
The thesis of my "Dante book" is that an indispensable technique for moral judgment is the description of concrete, particular situations in narratives. I argue that no set of principles, no procedure, no algorithm for weighing values, and no empirical data could ever replace this process of description. It is an art and a skill; some people practice it better than others, and it can be taught. But it is not the special province of any credentialed experts, such as lawyers, economists, or moral philosophers. It cannot be replaced--even in a distant utopia--by rules or systems.
In my "Dante book," I draw some conclusions about the purposes and methods of the humanities. (In fact, it has been suggested that I entitle the volume, Dante's Moral Reasoning: Reforming the Humanities.) In my other work, I follow the implications beyond the academy into the domain of politics. We cannot tell what is right and good unless active, engaged citizens discuss concrete cases. They will only be motivated to discuss and to inform their conversations with experience if they have practical roles in self-government. That is the fundamental connection between my two main interests: moral judgment and civic engagement.
permanent link | comments (0) | category: philosophy
March 6, 2009
critical thinking about "critical thinking"
Here are three interestingly complementary comments. The first is from the moderate-conservative New York Times columnist David Brooks:
A few years ago, a faculty committee at Harvard produced a report on the purpose of education. "The aim of a liberal education," the report declared, "is to unsettle presumptions, to defamiliarize the familiar, to reveal what is going on beneath and behind appearances, to disorient young people and to help them to find ways to reorient themselves."
The report implied an entire way of living. Individuals should learn to think for themselves. They should be skeptical of pre-existing arrangements. They should break free from the way they were raised, examine life from the outside and discover their own values.
This approach is deeply consistent with the individualism of modern culture, with its emphasis on personal inquiry, personal self-discovery and personal happiness. But there is another, older way of living, and it was discussed in a neglected book that came out last summer called "On Thinking Institutionally" by the political scientist Hugh Heclo.
In this way of living, to borrow an old phrase, we are not defined by what we ask of life. We are defined by what life asks of us. As we go through life, we travel through institutions — first family and school, then the institutions of a profession or a craft. ...
New generations don’t invent institutional practices. These practices are passed down and evolve. So the institutionalist has a deep reverence for those who came before and built up the rules that he has temporarily taken delivery of. "In taking delivery," Heclo writes, "institutionalists see themselves as debtors who owe something, not creditors to whom something is owed."
The second comment is from the influential Yale literary and queer theorist Michael Warner (hardly a moderate conservative, nor a pundit--although he might be a pandit). In a chapter entitled "Uncritical Reading," Warner writes that the standard justification of college-level English is to teach students to be critical readers, ones who aren't fooled by various forms of ideology, emotion, bias or writerly tradecraft.
Critical reading is the folk ideology of a learned profession, so close to us that we seldom feel the need to explain it. ... Since literary critics tend to think of critical reading as a necessary form of any self-conscious reading, they seldom think of it as the kind of practice that might have--as I think it does have--a history, an intergenetic mix of forms, a discipline. ... The very specific culture of critical reading is not the only normatively or reflexively organized method of reading, to which all others should be assimilated.
Warner ends with a quote from the philosopher Bernard Williams (who, considering his politics as a British social democrat, makes a nice third leg of this stool):
This ideal [of critical reason] involves an idea of ultimate freedom, according to which I am not entirely free as long as there is any ethically significant aspect of myself that belongs to me simply as a result of the process by which I was contingently formed. If my values are mine simply in virtue of social and psychological processes to which I have been exposed, then (the argument goes) it is as though I had been brainwashed: I cannot be a fully free, rational, and responsible agent.
Williams is skeptical about this ideal of separating the "criticizing self" from "everything that a person contingently is." To put the point in my terms (not his): We can criticize any value. We can always ask, Why? Why should people have freedom of speech? Because they have equal dignity. But why should they have equal dignity? When moral words and phrases have emotional appeal, we can learn to disassociate ourselves from the positive emotions by asking critical questions. That process, carried to its relentless conclusion, leaves nothing.
Thus a good life is not simply a critical one; it also requires appreciation of contingency and solidarity with others. In my opinion, it is right to appreciate the diverse values that people have inherited (for contingent reasons) and to feel solidarity with them despite these differences. In that case, critical thinking and critical reading are not satisfactory goals of education, at any level. Some critical independence is valuable, but there must also be a positive affective dimension.
A separate question is to what extent critical thinking really dominates at institutions like Harvard. My sense is that the faculty report that Brooks quotes is only part of the picture. Universities also powerfully teach respect or even reverence for various institutions and traditions. Indeed, they try to teach students to revere academia itself--not mainly as a venue for critical debate but as a social gatekeeper and arbiter of norms. The fact that "critical reading" takes place in the seminar room helps to justify the institution's major function, which is to bestow membership and recognition on some and not on others.
permanent link | comments (4) | category: academia , philosophy
February 24, 2009
the politics of negative capability
Zadie Smith's article "Speaking in Tongues" (The New York Review, Feb 26) combines several of the fixations of this blog--literature as an alternative to moral philosophy, deliberation, Shakespeare, and Barack Obama--and makes me think that my own most fundamental and pervasive commitment is "negative capability." That is Keats's phrase, quoted thus by Zadie Smith:
- At once it struck me, what quality went to form a Man of Achievement especially in Literature and which Shakespeare possessed so enormously—I mean Negative Capability, that is when man is capable of being in uncertainties, Mysteries, doubts, without any irritable reaching after fact and reason.
Other critics have noted Shakespeare's remarkable ability not to speak on his own behalf, from his own perspective, or in support of his own positions. Coleridge called this skill "myriad-mindedness," and Matthew Arnold said that Shakespeare was "free from our questions." Hazlitt said that the "striking peculiarity of [Shakespeare’s] mind was its generic quality, its power of communication with all other minds--so that it contained a universe of feeling within itself, and had no one peculiar bias, or exclusive excellence more than another. He was just like any other man, but that he was like all other men." Keats aspired to have the same "poetical Character" as Shakespeare. Borrowing closely from Hazlitt, Keats said that his own type of poetic imagination "has no self--it is every thing and nothing--It has no character. … It has as much delight in conceiving an Iago as an Imogen. What shocks the virtuous philosop[h]er, delights the camelion poet.” When we read philosophical prose, we encounter explicit opinions that reflect the author’s thinking. But, said Keats, although "it is a wretched thing to express … it is a very fact that not one word I ever utter can be taken for granted as an opinion growing out of my identical nature [i.e., my identity]."
In Shakespeare's case, it helps, of course, that he left no recorded statements about anything other than his own business arrangements: no letters like Keats' beautiful ones, no Nobel Prize speech to explain his views, no interviews with Charlie Rose. All we have is his representation of the speech of thousands of other people.
Stephen Greenblatt, in a book that Smith quotes, attributes Shakespeare's negative capability to his childhood during the wrenching English Reformation. Under Queen Mary, you could be burned for Protestantism. Under her sister Queen Elizabeth, you could have your viscera cut out and burned before your living eyes for Catholicism. It is likely that Shakespeare's father was both: he helped whitewash Catholic frescoes and yet kept Catholic texts hidden in his attic. This could have been simple subterfuge, but it's equally likely that he was torn and unsure. His "identical nature" was mixed. Greenblatt argues that Shakespeare learned to avoid taking any positions himself and instead created fictional worlds full of Iagos and Imogens and Falstaffs and Prince Harrys.
What does this have to do with Barack Obama? As far as I know, he is the first American president who can write convincing dialog (in Dreams from My Father). He understands and expresses other perspectives as well as his own. And he has wrestled all his life with a mixed identity.
Smith is a very acute reader of Obama:
- We now know that Obama spoke of Main Street in Iowa and of sweet potato pie in Northwest Philly, and it could be argued that he succeeded because he so rarely misspoke, carefully tailoring his intonations to suit the sensibility of his listeners. Sometimes he did this within one speech, within one line: 'We worship an awesome God in the blue states, and we don't like federal agents poking around our libraries in the red states.' Awesome God comes to you straight from the pews of a Georgia church; poking around feels more at home at a kitchen table in South Bend, Indiana. The balance was perfect, cunningly counterpoised and never accidental.
The challenge for Obama is that he doesn't write fiction (although Smith remarks that he "displays an enviable facility for dialogue"), but instead holds political office. Generally, we want our politicians to say exactly what they think. To write lines for someone else to say, with which you do not agree, is an important example of "irony." We tend not to like ironic leaders. Socrates' "famous irony" was held against him at his trial. Achilles exclaims, "I hate like the gates of hell the man who says one thing with his tongue and another in his heart." That is a good description of any novelist--and also of Odysseus, Achilles' wily opposite, who dons costumes and feigns love. Generally, people with the personality of Odysseus, when they run for office, at least pretend to resemble the straightforward Achilles.
But what if you are not too sure that you are right (to paraphrase Learned Hand's definition of a liberal)? What if you see things from several perspectives, and--more importantly--love the fact that these many perspectives exist and interact? What if your fundamental cause is not the attainment of any single outcome but the vibrant juxtaposition of many voices, voices that also sound in your own mind?
In that case, you can be a citizen or a political leader whose fundamental commitments include freedom of expression, diversity, and dialogue or deliberation. Of course, these commitments won't tell you what to do about failing banks or Afghanistan. Negative capability isn't sufficient for politics. (Even Shakespeare must have made decisions and expressed strong personal opinions when he successfully managed his theatrical company). But in our time, when the major ideologies are hollow, problems are complex, cultural conflict is omnipresent and dangerous, and relationships have fractured, a strong dose of non-cynical irony is just what we need.
permanent link | comments (0) | category: Barack Obama , Shakespeare & his world , deliberation , philosophy
February 23, 2009
consolation of mortality
I just finished Julian Barnes' Nothing to be Frightened Of, which is the memoir of a novelist who fears death. I read it because the quotations in reviews were very funny; because, as a fellow chronophobiac, I hoped that some wisdom and solace might be mixed in with the humor; and because I knew the author's brother Jonathan at Oxford around 1990 and wanted to understand more about this philosopher who "often wears a kind of eighteenth-century costume designed for him by his younger daughter: knee breeches, stockings, buckle shoes on the lower half; brocade waistcoat, stock, long hair tied in a bow on the upper." (This is Julian's description. I would add that the effect is less foppish than you'd think. The wearer resembles a plain-spun, serious Man of the Enlightenment much more than a dandy.)
Anyway, it's a good book and certainly amusing. But Barnes treats the most powerful consolation of mortality very subtly--if he recognizes it at all. I mean the consolation of the first person plural. I will die, but we will live on. We think in both the singular and the plural, and we probably began with the latter, when we stared at our parents. Language, thought, culture, desire--everything that matters is both individual and profoundly social.
"After I die, other people will go about their ordinary lives, laughing, singing, complaining about trifles, never mourning or even missing me." That is the solipsist's jealous lament. But the mood changes as soon as the grammar shifts. "Even though I must pass, our ordinary life will continue in all its richness and pleasure."
What we count as the "we" is flexible--it can range from a dyad of lovers to the whole human race. No such "we" is guaranteed immortality. It depresses Julian Barnes that humanity must someday vanish along with our solar system (and we may finish ourselves off a lot faster than that). But no large collectivity of human beings is doomed to a fixed life span. We can outlive you and me, and you and I can help to make that happen. This is a consolation available to all human beings, whatever they may believe about souls and afterlives. But it is not, I think, much of a comfort to Julian Barnes.
permanent link | comments (0) | category: philosophy
February 18, 2009
fundamental orientations to reform
(This is a rambling post written during a flight delay at Washington National. It lacks an engaging lead. In brief, I was thinking about various conservative objections to utopian reform and how social movements, such as the Civil Rights Movement, can address some of those objections.)
The French and Russian revolutions sought dramatically different objectives--the French Jacobins, for example, were fanatical proponents of private property--but they and their numerous imitators have been alike in one crucial way. Each wave of revolutionaries has considered certain principles to be universal and essential. They have observed a vast gap between social reality and their favored principles. They have been willing to seize the power of the state to close this gap. Even non-violent and non-revolutionary social reformers have often shared this orientation.
I see modern conservatism as a critique of such ambitions. Sometimes the critique is directed at the principles embodied in a specific revolution or reform movement. The validity of that critique depends on the principles in question. For example, the Soviet revolution and the New Deal had diametrically opposed ideas about individual liberty. One could consistently oppose one ideology and support the other.
Just as important is the conservative's skepticism about the very effort to bring social reality into harmony with abstract principles (any principles). Conservatives argue: Regardless of their initial motivations, reformers who gain plenipotentiary power inevitably turn corrupt. No central authority has enough information or insight to predict and plan a whole society. The Law of Unintended Consequences always applies. There are many valid principles in the world, and they trade off. The cost of shifting from one social state or path to another generally outweighs the gains. Traditions embody experience and negotiation and usually work better than any plan cooked up quickly by a few leaders.
These are points made variously by Edmund Burke, Joseph de Maistre, James Madison, Lord Acton, Friedrich von Hayek, Isaiah Berlin, Karl Popper, Daniel Patrick Moynihan, and James C. Scott, among others: a highly diverse group that includes writers generally known as "liberals." But I see their skepticism about radical reform as emblematic of conservative thought.
Two different conclusions can follow from their conservative premises. One is that the state is especially problematic. It monopolizes violence and imposes uniform plans on complex societies. Its power reduces individual liberty. Individuals plan better than the state because they know their own interests and situations, and they need only consider their own narrow spheres. They have limited scope for corruption and tyranny. Therefore the aggregate decisions of individuals are better than the centralized rule of a government. This is conservative libertarianism: the law-and-economics "classical liberalism" of Hayek, not the utopian libertarianism of Ayn Rand or Robert Nozick (as different as those authors were).
The alternative conclusion is that local traditions should generally be respected. Reform is sometimes possible, but it should be gradual, generally consensual, and modest. The odds are against any effort to overturn the status quo, imperfect as that may be. This is Burkean traditionalist conservatism. The Republican Party has very little interest in it today, but it motivates crunchy leftists who prize indigenous customs and cultures and oppose "neo-imperialism" (just as Burke opposed literal imperialism).
These two strands of conservative thought often come into conflict, because actually existing societies do not maximize individual liberty or minimize the role of the state (or of state-like actors, such as public schools, religious courts, clans, and bureaucratic corporations). Traditionalists and libertarians disagree forcefully about what to do about illiberal societies.
Take the case of Iraq under Saddam. The so-called neoconservatives (actually libertarians of a peculiar type) claimed that the main problem with Iraq was a tyrannical state, and the best solution was to invade, liberate, and then constrain the successor regime sharply. Private Iraqis should govern their own affairs under a liberal constitution. The Burkean response was that Iraq was a predominantly non-liberal society, deeply religious and patriarchal; therefore, a liberal constitution would be an alien, utopian imposition that would never work.
We can envision a kind of triangular argument among utopian revolutionaries, Burkean traditionalists, and libertarians--with strengths and weaknesses on all sides. But there is a fourth way. That is the deliberately self-limiting utopian social movement. The Gandhian struggle in India, the Civil Rights Movement in the United States, and the anti-Apartheid movement in South Africa shared the following features: (1) regular invocation of utopian principles, portrayed as moral absolutes and as pressing imperatives; (2) deep respect for local cultures, traditions, and faiths; (3) pluralism and coalition politics, rather than a centralized structure; and (4) strict, self-imposed limits.
The South African ANC had a military wing that aimed to capture the state, whereas Gandhi and the Civil Rights Movement were non-violent. But I would describe non-violence as simply an example of a self-limitation designed to prevent corruption and tyranny. It's a good strategy, because violence tends to spin out of control, to the detriment of the reformers themselves. But it isn't intrinsically or inevitably better than other strategies. The ANC managed to use violence but to restrain itself--as did the American revolutionaries of our founding era.
So now we see a four-way debate among utopian reformers, libertarians, traditionalists, and social-movement reformers. Social movements have answers to several of the chief arguments made by the other sides. They can address conservative worries about arrogance, corruption, and tyranny while also seeking to change the world in principled ways. The problem for social movements is institutionalization. Such movements tend to crest and then fall away, unlike the regimes that the other ideologies promote.
permanent link | comments (1) | category: philosophy
January 28, 2009
measuring what matters
(Washington, DC) I am here for a meeting of a federal committee--one of dozens--that helps to decide which statistics to gather from public school students. We are especially focused on socio-economic "background variables" that may influence kids' success in schools. What to measure often boils down to what correlates empirically with test scores or graduation rates. For instance, a combination of parents' income, education, and occupation can explain about 15%-20% of the variance in test scores. And so we measure these variables.
But the mere fact of a correlation between A and B doesn't mean we should measure both. We could look for correlations between the length of students' noses and the weight of their earlobes. Instead, we look for covariance between parental income and the total number of questions a kid can answer correctly on a test that we write and make him take. Why? Because of moral commitments: beliefs about what inputs, outputs, and causal relationships matter ethically in education.
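The "percent of variance explained" figure mentioned above can be made concrete with a toy regression. This is only an illustrative sketch: the variable names and coefficients are invented, and the synthetic data stand in for (not reproduce) the real background variables the committee discusses.

```python
# Toy illustration of "percent of variance explained" (R^2).
# All data here are synthetic; "ses" is a made-up stand-in for a
# composite of parental income, education, and occupation.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
ses = rng.normal(size=n)             # hypothetical socio-economic composite
noise = rng.normal(size=n)           # everything else that moves scores
test_score = 0.45 * ses + noise      # a weak-to-moderate relationship

r = np.corrcoef(ses, test_score)[0, 1]
r_squared = r ** 2                   # share of score variance tracked by ses
print(f"R^2 = {r_squared:.2f}")      # population value in this setup is ~0.17
```

With a coefficient of 0.45 and unit-variance noise, the predictor accounts for roughly the 15%-20% of variance cited in the post, which is why a correlation of that size can look substantial in policy discussions while still leaving most of the variation unexplained.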
So it's worth getting back to fundamental principles. These would be mine:
First, the quality of schooling (education that the state provides) should be equal, or should actually be better for less advantaged kids. Quality does not mean effectiveness at raising test scores--it means what is actually good. That may include intrinsically valuable experiences, such as making and appreciating art. But quality probably includes effective practices that raise scores on meaningful, well-designed tests.
Second, it's good when outcomes are equal, but equality trades off against other values, such as freedom for children and parents, and cultural diversity. Also, a narrow focus on equality of outcomes almost inevitably leads to narrow definitions of success and can put excessive pressure on teachers and kids.
Third, individuals' aptitude probably varies (and the degree to which it varies is an empirical question), but every kid who is not performing very well could probably perform better if he or she got a better education. Thus differences in aptitude do not excuse failure to educate.
Fourth, out-of-school resources affect educational outcomes. These resources vary, and that is not fair. We should do something to equalize kids' chances. But resources fall into various categories that raise different moral questions:
1. Fungible resources, such as parents' income or wealth. We can compensate for these inequalities by, for instance, spending more on schools in poor communities. (We tend to do the opposite, but I am writing about principles, not reality.) Note, however, that family income alone explains a small amount of variance in test scores.
2. Attributes of parents that cannot be exchanged or bought, such as their knowledge, skills, abilities, social networks, and cultural capital (the ability to function well in privileged settings such as universities and white-collar businesses). It is interesting, for example, that the number of books in a student's home is a consistent predictor of educational success. This is related to income, but it's not the same thing. You may be more educationally advantaged if your parents are poor graduate students with lots of books than if they are rich but vapid aristocrats, especially if they devote time to you. The challenge is that parental attributes cannot be changed without badly restricting freedom.
3. Prevalent attitudes, such as racial prejudice/white privilege, that may affect students' self-image; or values relevant to education, such as the belief in Amish communities that a basic education is sufficient. These attitudes vary in how morally acceptable they are. But they have in common the fact that the state cannot change them without becoming highly coercive.
In the end, I think we measure parental resources and their relationship to test scores because we think that (a) it's especially important to compensate for inequalities in cash, and (b) we presume that test scores measure educational success. Both presumptions are debatable, but I believe them enough that I'll keep attending meetings on how to measure them better.
permanent link | comments (0) | category: education policy , philosophy
January 12, 2009
should lying to the public be a crime?
This is an argument from my side of the aisle, so to speak, that really upsets me. (Frank Rich, Dec. 13):
- Blagojevich’s alleged crimes pale next to the larger scandals of Washington and Wall Street. Yet those who promoted and condoned the twin national catastrophes of reckless war in Iraq and reckless gambling in our markets have largely escaped the accountability that now seems to await the Chicago punk nabbed by the United States attorney, Patrick Fitzgerald.
The Republican partisans cheering Fitzgerald’s prosecution of a Democrat have forgotten his other red-letter case in this decade, his conviction of Scooter Libby, Dick Cheney’s chief of staff. Libby was far bigger prey. He was part of the White House Iraq Group, the task force of propagandists that sold an entire war to America on false pretenses. Because Libby was caught lying to a grand jury and federal prosecutors as well as to the public, he was sentenced to two and a half years in prison. But President Bush commuted the sentence before he served a day.
It is not against the law to lie to the public or to start a war on false pretenses. Because those acts are not illegal, Libby was not charged with them. He was not investigated for lying to the public; no evidence to that effect was ever put before a jury. No one examined him to see whether his assertions were (a) false and (b) knowingly so. He could not defend himself in court against an accusation of deliberately misleading the American people, because no such accusation was made. If, as Frank Rich apparently wishes, Libby was convicted because he lied to the public about a war, that was a flagrant violation of the rule of law, one of whose fundamental principles is nullum crimen et nulla poena sine lege ("no crime and no punishment without a law").
Having gotten that off my chest, I'd like to raise a more theoretical question: Would it make any sense to create a criminal law against lying to the public? The elements of this crime would have to include intent and serious consequences. In other words, it would be a defense to say that you didn't know your information was wrong; and it would be a defense to say that your lie was inconsequential. The law could govern any public utterance, or only certain contexts, such as formal speeches given by high officials. We already have perjury laws that apply to sworn testimony; these would be broadened. Another precedent is the Oregon law that says that candidates' personal statements in state voter guides must be true. Former Congressman Wes Cooley was convicted of falsely claiming that he had served in the Special Forces.
In favor of this reform: Lying is wrong. It can cause serious harm to other people. Lying by public officials can undermine the public's sovereignty by giving citizens false information to use in making judgments. Although it can be challenging to prove intent, that is certainly possible in some circumstances, as we know from perjury trials.
Against: There could be a chilling effect on free speech, because people who participate in heated debates do occasionally stray from the truth. It would be bad to suppress such debates altogether. Also, criminalizing lying would shift power from the legislative and executive branches to the judiciary, which might therefore become even more "political." The reform might reduce the public's sense that we are responsible for scrutinizing our government's statements and actions and punishing bad behavior at the ballot box.
Finally, it would distort the political debate if there were frequent, high-stakes battles over whether individuals had knowingly lied about specific facts. Often a specific prevarication is not nearly as important as someone's bad values and priorities. For instance, the Bush Administration very publicly and openly denigrated the importance of foreigners' human rights and chose an aggressive and bellicose strategy. These were not lies; they were public choices that unfortunately happened to be quite popular.
permanent link | comments (3) | category: philosophy
September 9, 2008
"love" as a family-resemblance word
This is one of several recent posts in which I struggle with definitions of the word "love" as a way of thinking about how we define moral concepts, generally. Here I borrow the idea of “family-resemblance” from the later Wittgenstein. Sometimes we recognize that people belong to a family, not because they all have one feature in common, but because each individual looks like many of his or her relatives in many ways. Maybe eight out of twelve family members have similar noses; a different six out of the twelve have the same color hair; and yet another seven have the same chin. Then they all resemble each other, although there is no (non-trivial) common denominator. Wittgenstein argued that some--although not all--perfectly useful words are like this. They name sets of objects that resemble one another; but members of each set do not share any defining feature. Their resemblance is a statistical clustering, a greater-than-random tendency to share multiple traits.
A good example is “curry,” which the dictionary defines as a dish flavored with several ground spices. The word “curry” thus describes innumerable individual cases, where each one resembles many of the rest, but there is no single ingredient or other characteristic that they all share. Nor is there a clear boundary between curry and other dishes. Is bouillabaisse a curry? Clearly not, although the dictionary’s definition applies to it. Indeed, any definition will prove inadequate, yet we can learn to recognize a curry and distinguish it from other kinds of food. If we want to teach someone how to use the word “curry,” we will serve several particular examples and also perhaps some dishes that are not curries. If the student draws the conclusion that a curry must always contain coriander, or must be soupy, or must be served over rice, then we can serve another curry that meets none of these criteria. Gradually, he will learn to use the word. Even sophisticates will debate about borderline cases, but that is the nature of such concepts. Their lack of definition does not make them useless.
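The "statistical clustering" reading of family resemblance can be sketched computationally. In this toy model (the dish and trait names are invented for illustration), every pair of members shares at least one trait, yet no single trait runs through the whole family:

```python
# A toy family-resemblance concept: each member overlaps with every
# other member, but no trait is common to all of them.
from itertools import combinations

dishes = {
    "dish_a": {"coriander", "turmeric", "soupy"},
    "dish_b": {"turmeric", "served_over_rice", "chili"},
    "dish_c": {"coriander", "chili", "served_over_rice"},
    "dish_d": {"soupy", "chili", "coriander"},
}

# Every pair of dishes shares at least one trait...
assert all(a & b for a, b in combinations(dishes.values(), 2))
# ...but the intersection across the whole family is empty:
# no necessary-and-sufficient defining feature exists.
assert set.intersection(*dishes.values()) == set()
print("pairwise overlap, but no shared defining feature")
```

This is why any proposed definition ("a curry must contain coriander") can be defeated by a clear member of the family that lacks the proposed feature, while the concept remains perfectly usable.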
It seems to me that “love” is also a family-resemblance word, because there is no common denominator to love for ice cream, love for a newborn baby, love of country, brotherly love for humanity, self-love, tough love, Platonic love, making love, amor fati, philately, etc. Some (but not all) of these forms of “love” involve a high regard for the object. Some (but not all) imply a commitment to care for the object. Some (but not all) signify an intense emotional state. Dictionaries cope by providing numerous definitions of love, thus suggesting that “love” means “lust” or “enthusiasm” or “adoration” or “agape” or “loyalty.” But “love” never quite means the same as any of these other words, because we faintly recognize all of its other meanings whenever it is used in a particular way. For instance, “love” is always different from “lust,” just because the former word can mean loyal adoration as well as sexual desire.
The experience of love is complex because one has usually loved before in several different ways and has seen, heard, or read many descriptions of other loves; and these past examples and descriptions become part of one's present experience. “Love” is a family-resemblance word that brings its family along when it visits.
When we read a literary work that vividly describes an example of love, it changes our experience of the concept. Any philosophical discussion of "love" must be a discussion of the experience; and therefore what we conclude philosophically must depend (in part) on how love has been portrayed for us in the arts. (Cf. Tzachi Zamir, Double Vision: Moral Philosophy and Shakespearean Drama, p. 127).
permanent link | comments (0) | category: philosophy
September 4, 2008
what would Kant say about Peggy Noonan?
Yesterday morning, the speechwriter and columnist Peggy Noonan published a piece in the Wall Street Journal arguing that Sarah Palin was a great choice for vice president: potentially a "transformative political presence." Later the same day, she was recorded saying that Palin was not the best qualified person and was chosen because of "political bullshit about narratives and youthfulness."
What's wrong with this? Perhaps it's evidence of a lie. In the morning, Noonan published a proposition about her own feelings toward Palin. In the afternoon, she asserted a different proposition about her own feelings. If the two claims were contradictory, then she lied, unless she had changed her mind. But I'm not sure they're flatly contradictory, since the original column was at least somewhat conflicted: Palin, she wrote, "is either going to be brilliant and groundbreaking, or will soon be the target of unattributed quotes by bitter staffers shifting blame in all the Making of the President 2008 books." I think that's compatible with saying that Palin was chosen for a foolish reason. Noonan could be hopeful about Palin, yet suspicious of the reasons she was chosen. In short, the case for a lie seems weak to me.
Instead of treating Noonan's private remarks as evidence of mendacity, we could accuse her of violating Kant's principle of publicity: "All actions relating to the right of other human beings are wrong if their maxim is incompatible with publicity." The idea is that one can test the rightness of an action by asking whether the actor's private reason for so acting could be made public. If you cannot disclose the reason you have done P, you should not do P. Peggy Noonan's private remarks suggest that she thought Palin was probably a bad choice. But she could not say that in the Wall Street Journal without hurting the Republican ticket and costing herself powerful friends. So she shouldn't have written her Wall Street Journal column, according to at least one interpretation of Kant.
The publicity principle can seem over-demanding. Does it mean that one cannot mutter something to one's spouse unless one would also announce it in an office meeting? The glare of publicity can expunge the safe shadows of a private or personal life. That thought gives me a little sympathy for public figures like Peggy Noonan who are caught on tape being frank with friends. (Jesse Jackson and many others have done the same.) But Kant offered his publicity principle in a book about politics (Perpetual Peace), and he qualified it by limiting it to "actions relating to the right of other human beings." In other words, it applies to willing participants in the world of power, law, and politics--not to private individuals. By writing a column in the Wall Street Journal, Noonan committed herself to a public role. The implied promise to her readers was that she was acting transparently and sincerely in that public arena. If her private remarks show otherwise, then she violated Kant's publicity principle.
permanent link | comments (4) | category: philosophy
August 24, 2008
the moral evaluation of literary characters
I'm on p. 521 of Dickens' Bleak House--hardly past half-way--but so far Mrs Jellyby is proving to be a bad person. Like many of my friends (like me, in fact) she spends most of her days reading and writing messages regarding what she calls a "public project"--in her case, the settlement of poor British families on the left bank of the River Niger at the ridiculously named location of Borrioboola-Gha. Meanwhile, her own small children are filthy, her clothes are disgraceful, her household is bankrupt, her neglected husband is (as we would say) clinically depressed, and she is casually cruel to her adolescent daughter Caddy. Caddy finds a man who pays some attention to her, but Mrs Jellyby is completely uninterested in the wedding and marriage:
- "Now if my public duties were not a favourite child to me, if I were not occupied with large measures on a vast scale, these petty details [sc. the wedding] might grieve me very much. ... But can I permit the film of a silly proceeding on the part of Caddy (from whom I expect nothing else), to interpose between me and the great African continent? ..."
"I hope, Ma," sobbed poor Caddy at last, "you are not angry?"
"O, Caddy, you really are an absurd girl," returned Mrs Jellyby, "to ask such questions, after what I have said of the preoccupation of my mind."
"And I hope, Ma, you give us your consent, and wish us well?" said Caddy.
"You are a nonsensical child to have done anything of this kind," said Mrs Jellyby, "and a degenerate child, when you might have devoted yourself to a great public measure. But the step is taken, and I have engaged a boy [to replace Caddy as her secretary], and there is no more to be said. No, pray, Caddy," said Mrs Jellyby--for Caddy was kissing her--"don't delay me in my work, but let me clear off this heavy batch of papers before the afternoon post comes in!"
Mrs Jellyby's friends dominate the wedding breakfast and are "all devoted to public projects only." They have no interest in Caddy or even in one another's social schemes; each is entirely self-centered.
Within the imaginary world of Bleak House, Mrs Jellyby is bad, and her moral flaws should provoke some reflection in the rest of us--especially those of us who spend too much time sending emails about distant projects. The evident alternative is Esther Summerson, a model housekeeper who cares lovingly for her friends and relatives and refuses to interfere with distant strangers' lives on the ground "that I was inexperienced in the art of adapting my mind to minds very differently situated ...; that I had much to learn, myself, before I could teach others ..."
Fair enough, but we could also ask why Dickens decided to depict Mrs Jellyby instead of a different kind of person, for instance, a man who was so consumed with social reform that he neglected his spouse, a woman who successfully balanced public and private responsibilities, or a woman, like Dorothea Brooke, who yearned for a public role but instead devoted her life to the private service of men. Both the intention and the likely effect of Dickens' portrait are to suppress the public role of women.
The general point I'd like to propose is this: the moral assessment of literary characters (lately returned to respectability by theorists like Amanda Anderson) requires two stages of analysis. First one decides whether a character is good or bad--or partly both--within the world of a fiction. And then one asks whether the author was right to choose to create that character instead of others.
permanent link | comments (0) | category: none
August 18, 2008
broadening philosophy
Moral philosophy (or ethics) forms a diverse and eclectic field, about which few accurate generalizations can be made.* However, I think I detect a very widespread preference for concepts whose significance is always the same--either positive or negative--wherever they appear. In defining moral concepts, philosophers like to identify necessary and sufficient conditions, such that if something can be done, it will always be obligatory, praiseworthy, desirable, permissible, optional, regrettable, shameful, or forbidden to do it. These moral propositions may have to be considered along with other valid propositions that also apply in the same circumstances. For instance, honesty may be obligatory (or at least praiseworthy); yet tact is also desirable. Honesty and tact can conflict. Hardly anyone doubts that we face genuine moral conflicts and dilemmas. Yet the hope is to develop general moral propositions, built of clearly defined concepts, that are always valid, at least all else considered.
But what should we say about complex and ambiguous phenomena that have evolved over biological and historical time and that now shape our lives? I am thinking of concepts like love (recently discussed here), marriage, painting, the novel, lawyers, or voting. We can't use these words in a deontic logic made up of propositions like "P is necessary." They are sometimes good and sometimes not. We could try to divide them into subconcepts. For instance, love could be divided into agape, lust, and several other subspecies; painting can be categorized as representational, abstract, religious, etc. Once we have appropriate subconcepts, we can say that they have a particular moral status if (and only if) specified conditions apply.
The urge is to avoid weak modal verbs like "may" and "can" or other qualifiers like "sometimes" and "often." Love can be wonderful; it can also be a moral snare. Paintings sometimes invoke the sublime; sometimes they don't. Lawyers have legitimate and helpful roles in some cases and controversies, but not in others. A core philosophical instinct is to get rid of these qualifiers by using tighter definitions. For example, agape (properly defined) might turn out to be always good and never a snare. You always need and have a right to a lawyer when you are arraigned. All paintings by Giorgione or similar to Giorgione's are sublime. And so on.
My fear is that the pressure to avoid soft generalizations prevents us from saying anything useful about a wide range of social institutions, norms, and psychological states. They don't split up neatly into subcategories, because they didn't evolve or develop so neatly. They won't work in a deontic logic unless we allow ourselves soft modals like "may" and "can." And yet, outside of philosophy, much of the humanities involves moral evaluations of just such concepts. For example, a great nineteenth-century novel about marriage does not claim that marriage is always good or bad, or always good or bad under specified conditions. The novel evaluates one or two particular marriages and supports qualified conclusions: marriage (in general) can be a happy estate, but it also has dangers. It is wise, when contemplating a marriage, to consider how events may play out for both partners. "Marriage," of course, means marriage of a specific, culturally-defined type (monogamous, exogamous, heterosexual, voluntary, permanent, patriarchal, and so on). That institution will evolve subtly and may be altered suddenly by changes in laws and norms. The degree to which the implied advice of the novel generalizes is a subtle question which the novel itself may not address.
Much contemporary philosophy has a forensic feel. The goal is to work out definitions and rules that, like good laws, permit the permissible and forbid the evil. I do not doubt the value of forensic thinking--in law. I do doubt that it is adequate for moral thinking. It seems to me that the search for clearly defined and consistent concepts narrows philosophers' attention to discrete controversial actions (abortion, torture, killing one to save another) and discourages their consideration of complex social institutions. It also directs their energy to metaethics, where one can consider questions about moral propositions, rather than "applied" topics, which seem too messy and contingent.
*I am struggling a bit to test my claims about what is central and peripheral, given the enormous quantity of articles and books published every year. If you use the Philosopher's Index (a fairly comprehensive database) to search for words that have been chosen as "descriptors" for books and articles, you will find 2,131 entries on utilitarianism, 445 on Kantianism, and 541 on metaethics; but also 2,121 on love and 351 on marriage. Given what is typically taught in philosophy departments, I was surprised to find a moral topic (love) almost matching a philosophical approach (utilitarianism). Closer inspection reveals much diversity. There are articles in the Index on classical Indian philosophical writing, and articles on Victorian novels that seem more like literary criticism than philosophy. (The Index encompasses some interdisciplinary journals in the humanities.) There is much contemporary Catholic moral theory that seems to be in conversation mainly with itself. I will stick to my claims about what is most influential, highly valued, and canonical in the profession today, although I acknowledge that people with jobs as philosophers have written about practically everything and in practically all imaginable styles.
permanent link | comments (0) | category: philosophy
July 14, 2008
worrying about "love"
What is the meaning of a principle like "causing needless pain is bad" or "lying is wrong"? These principles are not always right--think about the pain of an athletic event or lying to the Gestapo. Various explanations have been proposed for the relationship between such principles and their exceptions. Maybe lying is wrong if certain conditions are met, and those conditions are common. Or maybe lying is really the union of two concepts--"mendacium" (mendacious untruths) and "falsiloquium" (blameless misleading), to borrow the medieval terms. Or maybe lying and pain-causing are always bad "pro tanto"--as far as that goes. They are always bad but their badness can be outweighed.
Mark Norris Lance and Maggie Little have another theory: "defeasible generalization."* The following are defeasible generalizations taken from science: Fish eggs turn into fish. A struck match lights. These assertions are certainly not always true. In fact, very few fish eggs actually turn into fish, and I rarely get a match going on the first try. Nevertheless, a fish egg turns into a fish unless something intervenes. Even though the probability of its reaching the fish stage is low, to do so is its nature. The privileged cases are the ones in which the egg turns into a fish and the struck match catches fire. All the other outcomes, even if they are more common, are deviant. To understand that something will normally or naturally turn into a fish is to realize that it is a fish egg.
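Defeasible generalizations of this kind resemble what logicians call default rules, in the style of Reiter's default logic (a formalism Lance and Little do not themselves invoke; I offer it only as an illustrative parallel):

```latex
% Normal default rule: if x is a fish egg, and it is consistent with
% everything else we know to conclude that x becomes a fish, then
% conclude that x becomes a fish.
\frac{\mathrm{FishEgg}(x) \;:\; \mathrm{Fish}(x)}{\mathrm{Fish}(x)}
```

The conclusion holds by default but is withdrawn whenever a defeater (a hungry predator, a damp match) is added to the premises. The rule still describes the privileged case, however statistically rare that case may be.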
Lance and Little make a close analogy to moral issues: "Many key moral concepts--indeed, the workhorses of moral theory--are the subjects of defeasible moral generalizations. ... Take the example of pain. We believe it is important to any adequate morality to recognize that defeasibly, pain is bad-making." In other words, it is correct that causing pain is bad, even though there are exceptions that may turn out to be common. "To understand pain's nature, then, is to understand not just that it is sometimes not-bad, but to understand that there is an explanatory asymmetry between cases in which it is bad and cases in which it is not: it is only because pain is paradigmatically bad-making that athletic challenges come to have the meaning they do, and hence provide a kind of rich backdrop against which instances of pain can emerge as not-bad-making, as not always and everywhere to-be-avoided." Moral discernment is grasping the difference between paradigm cases and aberrant ones. We learn this skill, but it is not just a matter of applying rules. It may not be codifiable.
This seems plausible to me. But I do not think that every moral issue works this way. Take the absolutely crucial concept of love. We might say, as a defeasible generalization, that love is good. We know that in some cases love is bad. Adultery, obsessive love, and lust are common examples (although each of these bad categories admits counter-examples that happen to be good). But maybe it is true to say that love is good just in the same way that it is true to say that fish eggs turn into fish. This principle (arguably) reveals an understanding of the concept of love even though many cases are exceptional.
Here is my worry. I do believe, as a statistical generalization, that most cases of love are good. However, I also believe that we have a tendency to overlook the bad side of love, especially if we are the subject or object of it. We have biases in favor of love that presumably arise from our biological desires for sex and companionship and from the legacy of a million stories, poems, paintings, movies, and songs in which the protagonists fall in love and are admired for it. So the principle that love is good, if treated as a defeasible generalization, a default position, or a rebuttable presumption, is likely to mislead.
And we have an alternative. That is to say that love is nearly always morally significant. It is rarely neutral. Yet you cannot know, without looking at the whole situation, whether love is a good or a bad thing. Given the important possibility that love may be bad, or that a good love may have some element or danger of bad love (or vice-versa), it is not right to make any presumption about its moral "valence" until you hear the whole story.
This is exactly the position that Jonathan Dancy calls "particularism" (and Anthony W. Price has called "variabilism"). Dancy says at times that it applies to every reason, principle, or value--none has a good or bad "valence" that we can know in advance. Whether anything is good depends on the context. I would argue that particularism or variabilism applies to love--but not to lying or causing pain. Still, this is only a minor setback for particularism, because love is a hugely important issue and is unlikely to be the only one that behaves this way. In fact, I suspect that most of Aristotle's list of virtues (courage, temperance, liberality, friendliness, patience, etc.) are like love. We can make the defeasible generalization that they are morally significant. That shows that we understand these concepts. But to say that they are good means jumping to conclusions, even if we insist that there are exceptions.
Incidentally, there are various alternatives to particularism about love that I have not addressed here. Most alternatives would involve categorizing types of love or explaining the general conditions under which love is good or bad. I think these are, at best, heuristics. Love is relatively unlikely to be good if Emma loves Rodolphe while Emma is married to Charles, for example. But there are plenty of real and fictional stories in which adulterous love is a good thing. The differences between good and bad love are unlikely to be codifiable, and the effort to divide "love" into its good and bad forms misses a basic fact about it. Love just is something that can be great, or can be awful, or can be both; and you have to be careful about it.
* See Mark Norris Lance and Maggie Little, "From Particularism to Defeasibility in Ethics," in Mark Norris Lance, Matjaž Potrč, and Vojko Strahovnik, eds., Challenging Moral Particularism (New York: Routledge, 2008), pp. 53-74. This chapter is very similar, but not identical, to Mark Norris Lance and Margaret Olivia Little, "Defending Moral Particularism," in James Dreier, ed., Contemporary Debates in Moral Theory (Oxford: Blackwell, 2006), pp. 305-321.
permanent link | comments (0) | category: philosophy
July 2, 2008
good lives
Friends returned recently from Alaska, where they had encountered people who prefer to live alone and "off the grid," with as little interaction with the United States as possible. I don't think this is a great form of life. I admire people who provide more service to humanity. Also, I'm not impressed by a way of life that must be denied to most other human beings (for we simply don't have enough space on the planet to allot each family many acres). It's possible that some day we'll all benefit from Alaskan survivalists--we may need their special knowledge. But that would make the case easy. Let's keep it hard by presuming that they will never do any practical good for anyone other than themselves.
This example is an opportunity to try to make sense of three premises:
1. Some ways of life are better than others;
2. It takes many types of lives (each with its own prime virtue) to make a livable world; and
3. It's a better world if it contains many different types of character and virtue, rather than a few.
I take 1 as pretty obvious. If you don't agree with me that Alaskan survivalists lead less meritorious lives than hospice workers, you must at least concede that hospice workers are better people than Storm Troopers. It might sound pretentious to assert that some lives are lived better than others. But the alternative is to deny that it makes any difference how we live, and that makes life a joke.
I think 2 is also pretty obvious. If we didn't have people who were committed to practical organizing work and productive labor, we'd starve. If there were no one who was concerned about security (and willing at least to threaten legitimate force on behalf of the community), we'd be in grave danger. Were it not for curious scientists, we would live shorter lives. But what follows from these examples? Not that several different kinds of lives are equally meritorious. Aristotle knew that it took many types of people, including manual laborers and soldiers, to sustain the polis. He nevertheless believed that the life of dispassionate inquiry was the single best life. He could hold these two positions together because he was no moral egalitarian. For him, it did not follow that if we need laborers and soldiers as well as philosophers, therefore all three are equally valuable. Moral egalitarianism is not self-evident or universal, although I certainly endorse it.
One can combine 1 and 2 by saying that there is a list of valuable ways of life, which includes all the necessary roles (e.g., producers, protectors, healers) plus some that have less practical advantages: for example, artists and abstract thinkers. This is a limited kind of pluralism. It supports moral distinctions but admits more than one type of goodness.
I'm inclined to go further and say that the world is better if it includes forms of life that are neither essential nor intrinsically meritorious. Our environment is simply more interesting if it contains Alaskan survivalists as well as productive farmers and cancer researchers. Thus I would propose that an individual who goes off the grid is probably not leading the best possible life for him; yet it is better that some people do this than that none do.
permanent link | comments (3) | category: philosophy
June 16, 2008
the ethics of liking a fictional character
(Waltham, Mass.) I have mentioned before that Middlemarch is my favorite book. Specifically, I am fond of Dorothea Brooke, its heroine. I like her; I want her to succeed and be happy. Allowing for the fact that she is a fictional character, I care about her.
Such feelings represent moral choices. Caring about someone is less important when that person happens to be fictional, but novels are at least good tests of judgment. Thus I am interested in whether I am right to care about the elder Miss Brooke. It seems to me that George Eliot was also especially fond of her heroine, and one could ask whether that was an ethical stance. Or, to put the question differently, was Eliot right to pull together a set of traits into one fictional person and describe that person in such a way as to make us like her?
The traits that seem especially problematic are Dorothea's beauty, her high birth, and her youth. She is a young woman from the highest social stratum of the hierarchical community of Middlemarch, surpassed by no one in rank. She is consistently described as beautiful, not only by other characters, but also by the narrator. In fact, these are the very first lines of Chapter One:
Miss Brooke had that kind of beauty which seems to be thrown into relief by poor dress. Her hand and wrist were so finely formed that she could wear sleeves not less bare of style than those in which the Blessed Virgin appeared to Italian painters; and her profile as well as her stature and bearing seemed to gain the more dignity from her plain garments, which by the side of provincial fashion gave her the impressiveness of a fine quotation from the Bible,--or from one of our elder poets,--in a paragraph of to-day’s newspaper. She was usually spoken of as being remarkably clever, but with the addition that her sister Celia had more common-sense.
This introduction contains no physical detail, in contrast to the portrayals of other characters in the same novel, such as Rosamond and Ladislaw. The simple fact of Dorothea's beauty is not complicated by the mention of any particular form of beauty that a reader might happen not to like.
We have a tendency, I think, to want beautiful and high-born but lonely young ladies to live happily ever after. When we were young, we heard a lot of stories about princesses. We expect a princess to become happy by uniting with a young and attractive man; and whether that will happen to Dorothea is a suspenseful question in Middlemarch.
If we are prone to admire and like Dorothea because she is beautiful, Eliot complicates matters in three ways. First, she produces a second beautiful young woman in need of a husband, but this one is bad and thoroughly unlikable. (At least, it is very challenging to see things from Rosamond's perspective, as perhaps we should try to do.) Second, in Mary Garth, Eliot creates a deeply appealing young female character who, we are told, is simply plain. Third, Eliot makes Dorothea not only beautiful, but also "clever" and good.
Evidently, beauty does not guarantee goodness, nor vice-versa; yet several people in Middlemarch think that Dorothea's appearance and quality of voice manifest or reflect her inner character. This seems to be a kind of pathetic fallacy: people attribute virtues to her face, body, and voice as poets sometimes do to flowers or stars. But of course the characters who admire Dorothea's appearance as a manifestation of her soul may be right, within the world that Eliot has created in Middlemarch. Or perhaps character and appearance really are linked. Rosamond, for instance, could not be the same kind of person if she were less pretty.
I presume that it is right to like someone for being good, but it is not right to like someone because she is beautiful. One could raise questions about this general principle. Is someone's goodness really within his or her control? Perhaps we should pity (and care about) people like Rosamond who are not very virtuous. On the other hand, if we can admire beauty in nature and art, why not in human beings? And what about cleverness, which is not a moral quality but is certainly admired?
One interpretation of the novel is that Dorothea does not have a moral right to her inheritance or to her social status. These are arbitrary matters of good fortune, and she is wise to be critical of them. She does, however, according to the novel, deserve a happy marriage to a handsome man because she is both good and beautiful (and also passionate). The end of the novel feels happy to the extent that she gets the marriage she deserves. Does this make any sense as a moral doctrine? Is it an acceptable moral doctrine within a fictional world, but inapplicable to the real world?
Beautiful people tend to find other beautiful people, just as the rich tend to marry the rich and (nowadays) the clever marry the clever. Lucky people have assets in the market for partners. But is this something we should want to see? What if the plain but nice Mary Garth ended up with a broodingly handsome romantic outsider, and Dorothea married a nice young man from the neighborhood? Would that ending be wrong because beauty deserves beauty, or would it only be an aesthetic mistake (or a market failure)?
permanent link | comments (0) | category: none
June 5, 2008
teach philosophy of science in high school
I think controversies about whether to allow the teaching of "intelligent design" and whether teachers should present global warming as a fact are more complicated than is presumed by most scientific and liberal opinion. To announce that evolution is "science," while intelligent design is "religion," begs a lot of questions about what science is and how it should operate. To say that global warming is a "fact" implies a view about facts and what justifies them. Serious people hold relativist views, arguing that what we call science is a phenomenon of a particular culture. Others favor what used to be called "the strong programme in the sociology of science." That is the view that science is a social institution with its own power structure, and one can understand current scientific opinions by understanding the power behind them. I don't hold that view myself, but it's interesting that it originated on the left, and yet many people who hold it today are religious fundamentalists. And you can understand (without necessarily endorsing) their perspective when you consider that people who are anointed as "scientists" by older scientists get to control public funds, institutions, degrees, jobs, curricula, and policies in areas like health and the environment. These scientists are mostly very secular and declare that only secular beliefs qualify as science. There is a prima facie case here for skepticism, and it deserves a reasoned response.
Even among people who are strongly supportive of science (which includes most contemporary philosophers in the English-speaking world), there are live controversies about what constitutes scientific knowledge, whether and how a theory differs from other falsifiable assertions, how and why scientific theories change, how theories relate to data, etc. To tell students that evolution is a theory and that creationism isn't is dogmatism. It glosses over the debate about what a theory is.
There are also important questions that cross over from philosophy of science to political philosophy. Does a teacher have an individual right to teach creationism if he believes in it? Does he have an individual right to promote Darwinism even if local authorities don't want it taught? Should the Institute for Creation Research in Texas be allowed to issue graduate degrees? Does it have a right of association or expression that should permit this, or does the state have the right--or obligation--to license certain doctrines as scientific? And why?
I am one of the last people (I hope) to pile more tasks on our schools. In fact, I published an article arguing that we shouldn't ask schools to teach information literacy, even though it is important, because they simply have too much else to accomplish. (Instead, I argued, we need to make online information and search functions as reliable as possible). Yet I think philosophy of science is a real candidate for inclusion in the high school curriculum--or at least we ought to experiment to see if it can be taught well. I'd stake my case on two principles:
1. Making critical judgments about science as an institution is an essential task for citizens in a science-dominated society; and
2. Students are being required to study science (as defined by scientists), and taxpayers are being required to fund it. Fundamental liberal principles require that such requirements be openly debated.
permanent link | comments (1) | category: none
May 20, 2008
why join a cause?
I have been involved in a lot of causes--mostly rather modest or marginal affairs, but ones that have mattered to me: public journalism, campaign finance reform, deliberative democracy, civilian national service, civic education, media reform, and service-learning, among others. The standard way to evaluate such causes and decide whether to join the movements that support them is to ask about their goals and their prospects of success. To be fully rational, one compares the costs and benefits of each movement's objectives with those of other movements, adjusting for the probability and difficulty of success. A rationally altruistic person joins the movement that has the best chance of achieving the most public good, based on its "cause" and its strategies.
To use an overly-technical term, this is a "teleological" way of thinking. We evaluate each movement's telos, or fundamental and permanent purpose. Friedrich Nietzsche was a great critic of teleological thought. He saw it everywhere. In a monotheistic universe, everything seems to exist for a purpose that lies in its future but was already understood in the past. Nietzsche wished to raise deep doubts about such thinking:
the cause of the origin of a thing and its eventual utility, its actual employment and place in a system of purposes, lie worlds apart; whatever exists, having somehow come into being, is again and again reinterpreted to new ends, taken over, transformed, and redirected by some power superior to it; all events in the organic world are a subduing, a becoming master, and all subduing and becoming master involves a fresh interpretation, an adaptation through which any previous "meaning" and "purpose" are necessarily obscured or even obliterated. However well one has understood the utility of any physiological organ (or of a legal institution, a social custom, a political usage, a form in art or in a religious cult), this means nothing regarding its origin ... [On the Genealogy of Morals, Walter Kaufmann's translation.]
I think that Nietzsche exaggerated. In his zeal to say that purposes do not explain everything, he claimed that they explain nothing. In the human or social world, some things do come into being for explicit purposes and then continue to serve those very purposes for the rest of their histories. But to achieve that kind of fidelity to an original conception takes discipline, in all its forms: rules, accountability measures, procedures for expelling deviant members, frequent exhortations to recall the founding mission. The kinds of movements that attract me have no such discipline. Thus they wander from their founding "causes"--naturally and inevitably.
As a result, when I consider whether to participate, I am less interested in what distinctive promise or argument the movement makes. I am more interested in what potential it has, based on the people whom it has attracted, the way they work together, and their place in the broader society. I would not say, for example, that service-learning is a better cause or objective than other educational ideas, such as deliberation, or media-creation, or studying literature. I would say that the people who gather under the banner of "service-learning" are a good group--idealistic, committed, cohesive, but also diverse. Loyalty to such a movement seems to me a reasonable basis for continuing to participate.
permanent link | comments (0) | category: philosophy
April 28, 2008
three different ways of thinking about the value of nature
These are three conflicting or rival positions:
1. People value nature, and the best measure of how much they value it is how much they would be willing to pay for it. Actual market prices may not reflect real value because of various flaws in existing markets. For example, if you find an old forest that no one owns, chop it down, and burn the wood for fuel, all that activity counts as profit. You don't have to deduct the loss of an asset or the damage to the atmosphere. However, it would be possible to alter the actual price of forest wood by changing laws and accounting rules. Or at least we could accurately estimate what its price should be. The real value of nature is how much human beings would be willing to pay for it once we account for market failures.
2. Nature has value regardless of whether people are willing to pay for it. Perhaps nature's value arises because God made it, called it "good," and assigned it to us as His custodians. Or perhaps nature has value for reasons that are not theistic but do sound religious. Emerson:
The stars awaken a certain reverence, because though always present, they are inaccessible; but all natural objects make a kindred impression, when the mind is open to their influence. Nature never wears a mean appearance. ... The greatest delight which the fields and woods minister, is the suggestion of an occult relation between man and the vegetable. I am not alone and unacknowledged. They nod to me, and I to them.
Emerson's view is sharply different from #1 because he believes that his fellow men do not value nature as they should. "To speak truly, few adult persons can see nature. Most persons do not see the sun. At least they have a very superficial seeing. ..." Thus prices do not reflect nature's value.
If you're an economist or a scientist, you may not personally feel that God is present in nature or that nature is ineffably precious. Regardless, you can respect your fellow citizens who hold those feelings. One version of scientific positivism says that there are (a) testable facts about nature and (b) opinions about nature as a whole. The latter are respectable but not provable. They are manifestations of faith, neither vindicated nor invalidated by science. This sounds like the early Wittgenstein.
3. Nature has value irrespective of price: real value that may or may not be recognized by people at any given moment. But this value does not derive from a metaphysical premise about nature as a whole, e.g., that God made the world. We can make value judgments about particular parts of nature, not all of which have equal value. We can change other people's evaluations of nature by providing valid reasons.
Yosemite is more precious than your average valley. How do we substantiate such a claim? Not by citing a foundational, metaphysical belief, but by describing Yosemite itself. Careful, appreciative descriptions and explanations of natural objects are valid arguments for their value, just as excellent interpretations of Shakespeare's plays are valid arguments for the excellence of those works.
This view rejects a sharp distinction between facts and values. "Thick descriptions" are inextricably descriptive and evaluative. This view also rejects the metaphor of foundations, according to which a value-judgment must rest on some deeper and broader foundation of belief. Why should an argument about value be like the floor of a building, which is no good unless it sits on something else? It may be sufficient on its own. (This all sounds like the later Wittgenstein.)
This third position contrasts with Emerson's. He says:
Nature never wears a mean appearance. Neither does the wisest man extort her secret, and lose his curiosity by finding out all her perfection. Nature never became a toy to a wise spirit. The flowers, the animals, the mountains, reflected the wisdom of his best hour, as much as they had delighted the simplicity of his childhood.
This third view says, pace Emerson, that nature varies in quality. Tigers are more magnificent than roaches. A good way to make such distinctions is indeed to "extort [the] secrets" of nature. When we understand an organism better--including its functioning, its origins, and its place in the larger environment--we often appreciate it more, and rightly so. The degree to which our understanding increases our appreciation depends on the actual quality of the particular object under study.
permanent link | comments (1) | category: philosophy
three different ways of thinking about the value of nature
These are three conflicting or rival positions:
1. People value nature, and the best measure of how much they value it is how much they would be willing to pay for it. Actual market prices may not reflect real value because of various flaws in existing markets. For example, if you find an old forest that no one owns, chop it down, and burn the wood for fuel, all that activity counts as profit. You don't have to deduct the loss of an asset or the damage to the atmosphere. However, it would be possible to alter the actual price of forest wood by changing laws and accounting rules. Or at least we could accurately estimate what its price should be. The real value of nature is how much human beings would be willing to pay for it once we account for market failures.
2. Nature has value regardless of whether people are willing to pay for it. Perhaps nature's value arises because God made it, called it "good," and assigned it to us as His custodians. Or perhaps nature has value for reasons that are not theistic but do sound religious. Emerson:
The stars awaken a certain reverence, because though always present, they are inaccessible; but all natural objects make a kindred impression, when the mind is open to their influence. Nature never wears a mean appearance. ... The greatest delight which the fields and woods minister, is the suggestion of an occult relation between man and the vegetable. I am not alone and unacknowledged. They nod to me, and I to them.
Emerson's view is sharply different from #1 because he believes that his fellow men do not value nature as they should. "To speak truly, few adult persons can see nature. Most persons do not see the sun. At least they have a very superficial seeing. ..." Thus prices do not reflect nature's value.
If you're an economist or a scientist, you may not personally feel that God is present in nature or that nature is ineffably precious. Regardless, you can respect your fellow citizens who hold those feelings. One version of scientific positivism says that there are (a) testable facts about nature and (b) opinions about nature as a whole. The latter are respectable but not provable. They are manifestations of faith, neither vindicated nor invalidated by science. This sounds like the early Wittgenstein.
3. Nature has value irrespective of price: real value that may or may not be recognized by people at any given moment. But this value does not derive from a metaphysical premise about nature as a whole, e.g., that God made the world. We can make value judgments about particular parts of nature, not all of which have equal value. We can change other people's evaluations of nature by providing valid reasons.
Yosemite is more precious than your average valley. How do we substantiate such a claim? Not by citing a foundational, metaphysical belief, but by describing Yosemite itself. Careful, appreciative descriptions and explanations of natural objects are valid arguments for their value, just as excellent interpretations of Shakespeare's plays are valid arguments for the excellence of those works.
This view rejects a sharp distinction between facts and values. "Thick descriptions" are inextricably descriptive and evaluative. This view also rejects the metaphor of foundations, according to which a value-judgment must rest on some deeper and broader foundation of belief. Why should an argument about value be like the floor of a building, which is no good unless it sits on something else? It may be sufficient on its own. (This all sounds like the later Wittgenstein.)
This third position contrasts with Emerson's. He says:
Nature never wears a mean appearance. Neither does the wisest man extort her secret, and lose his curiosity by finding out all her perfection. Nature never became a toy to a wise spirit. The flowers, the animals, the mountains, reflected the wisdom of his best hour, as much as they had delighted the simplicity of his childhood.
This third view says, pace Emerson, that nature varies in quality. Tigers are more magnificent than roaches. A good way to make such distinctions is indeed to "extort [the] secrets" of nature. When we understand an organism better--including its functioning, its origins, and its place in the larger environment--we often appreciate it more, and rightly so. The degree to which our understanding increases our appreciation depends on the actual quality of the particular object under study.
permanent link | comments (1) | category: philosophy
April 22, 2008
against legalizing prostitution
The Eliot Spitzer fiasco generated some blog posts (which I neglected to bookmark) arguing that prostitution should be legal. The bloggers I read acknowledged that Governor Spitzer should be liable for breaking the law, but they argued that the law was wrong. Their premise was libertarian: private voluntary behavior should not be banned by the state. One can rebut that position without rejecting its libertarian premise, by noting that many or most prostitutes are actually coerced. In the real world, incest, rape, violence, and human trafficking seem to be inextricably linked to prostitution. But that fact will only convince libertarians if the link really is "inextricable." If some prostitution is voluntary, then it should be legal, according to libertarian reasoning.
Which I reject. Libertarians are right to prize human freedom and to protect a private realm against the state; but issues like prostitution show the limits of libertarian reasoning. We are deeply affected by the prevailing and official answers to these questions: What is appropriate sexual behavior? What can (and cannot) be bought and sold? Our own private, voluntary behavior takes on very different meanings and significance depending on how these questions are answered. Answers vary dramatically among cultures and over time. Deciding how to answer them is a core purpose of democracy.
This position can make liberals uncomfortable because of its implications for other issues, such as gay marriage. One of the leading arguments in favor is that adults should be allowed to do what they like, and the fact that two men or two women decide to marry doesn't affect heterosexuals. Actually, I think gay marriage does affect heterosexual marriage by subtly altering its social definition and purpose. I happen to think that the change is positive. It underlines the principle that marriage is a voluntary, permanent commitment (which is clearly appropriate for gays as well as for straight people). Other moral principles also favor gay marriage, including equal respect and, indeed, personal freedom. But for me, personal freedom does not trump all other considerations.
By the way, because prostitution seems to be so closely linked to incest, rape, and violent coercion, I think the best policy would be very strict penalties against soliciting. It is buying, rather than selling, sex that seems most morally odious.
permanent link | comments (1) | category: philosophy
April 4, 2008
philosophy of the middleground
1. Should the government require national service?
That's a question that modern political philosophers are primed and ready to address. It concerns the proper power of the state and the responsibilities of its citizens. Libertarians, communitarians, civic republicans, and others have fundamental principles that they can easily apply to this question. I call it a "background" issue because it deals with the fundamental rights and duties that define a whole society. It's like a question about whether everyone has a right to health care or free speech, or whether the government may compel taxation. These "background" issues are central to modern political theory.
2. Should I enlist in the military or join a civilian service program such as CityYear?
This is also a topic that political philosophers are equipped to address. It raises fundamental ethical questions about the use of force, membership in hierarchical organizations, duties to the community, and the shape of a good life. Pacifists, communitarians, various kinds of virtue-ethicists, pluralists, and others have fundamental principles that apply pretty directly to this question. I call it a "foreground" issue because it deals with a matter very close to the individual--a personal choice. It is like questions about whether to marry, have an abortion, or join a church. Such foreground issues are central to modern ethics.
3. What would a good service program be like and how could we make such a program come into being?
This is the kind of question that modern philosophers are not very good at addressing. One cannot easily answer it by applying the fundamental intuitions that drive mainstream theories of ethics and political theory. There isn't necessarily a libertarian or communitarian answer.
As a result, the question tends to be addressed in thoroughly empirical, administrative, or tactical ways. The empirical issue is what consequences result from various types of service programs. The administrative issue is what rules or processes increase the probability that the program will be run well. And the tactical issue is how one can build and sustain political support for the program.
All these questions have crucial moral dimensions. It's not enough to know whether a given program causes a particular outcome (such as higher incomes, or more civic duty). We must also decide whether those outcomes are good, whether they are distributed fairly, whether any harms to others are worthwhile, and what means of achieving those outcomes are acceptable. Further, it's not enough to understand how to run or structure a good program. We must also decide what forms of governance or administration are ethical. (Mussolini made the trains run on time, but that was not an adequate defense of fascism.) Finally, it's not enough to know that a given argument or "message" would produce political support for a program. We must also decide which forms of argument are ethically acceptable.
Thus it's a shame that philosophers tend to cede the "middleground" to social scientists, administrators, and tacticians. As a result, no one raises the serious, complex moral issues that arise when one thinks about political tactics, the design of programs, and their administration. This is not only bad for policy and public discourse; it is also bad for philosophy. Theories are impoverished when they miss the middleground. For example, it would be a decisive argument against requiring national service if it were impossible to build and sustain a good service program. So any argument for national service that depends entirely on first principles is a lousy argument. It needs its middleground.
Some areas of philosophy have developed a middleground and thereby not only served public purposes but also enriched the discipline. Medical ethics is the best example. It is no longer restricted to matters of individual ethics (e.g., should a physician perform an abortion?) or matters of basic structure (e.g., is there a right to life?); it also addresses matters of administration, politics, and program design. Medical ethicists work in hospitals, advise commissions, and review policies. Harry Brighouse has argued that the philosophy of education should follow the same model. I would generalize and say that across the whole range of policy and social questions, it is worth asking moral questions not only about basic rights and individual behavior, but also about institutional arrangements and political tactics.
permanent link | comments (0) | category: philosophy
March 27, 2008
happiness over the course of life
Imagine two people who experience exactly the same amounts of happiness over the course of their whole lives. A experiences most of his happy times near the beginning, whereas B starts off miserable but ends in happiness.* We are inclined to think that B is more fortunate, or better off, than A. If the story of A's life were written down, it would be tragic, whereas B's tale has a happy ending. But does B really have more welfare?
One view says no. The happiness of a life is just the happiness of all the times added up. Maybe we feel happier when we are on an upward trajectory, but that extra satisfaction should be factored into an accurate estimate of our happiness. If A and B really have identical total quantities of happiness over the courses of their lives, they are equally well off. Any aesthetic satisfaction that we obtain from the happy ending of B's life is no reason to declare him better off.
Another view says that happiness is equally valuable at any time, but we devoutly wish that our own happiest times are still to come. That wish colors our estimation of other people's lives; but perhaps it shouldn't. Just because I want the end of my life to be (even) better than the beginning, it doesn't follow that B was better off than A. Once the ledgers are closed at death, it no longer matters how the happiness was distributed.
A third view says: even if the amount of happiness is the same at two times of life, somehow the quality of happiness is better if it comes later, because then it's more likely to be the outcome or satisfaction of one's plans and one's work. That is sometimes true, but it's not necessarily the case. One can be happy late in life because of sudden dumb luck. One can have early happiness as the well-deserved accomplishment of youthful efforts.
I incline to a fourth view. Happiness is not more valuable if it happens to come later. But a morally worthwhile life is one that develops, and one should take satisfaction in one's own development. Thus we think of the old person who has learned, grown, and become better--and who is satisfied with that achievement--as a moral paradigm. He or she happens to be happy, but what matters is that the happiness is justified. The child who is naively happy makes us glad but does not inspire our admiration. Thus our intuition that happiness is better late in life does not mean that it has a greater impact on welfare. Our intuition is a somewhat confused reflection of our admiration for a particular kind of mature satisfaction.
*This topic was raised by Connie Rosati in a fine paper she delivered at Maryland this week. These views are my own and I'm deliberately not summarizing her interesting thesis because I didn't seek permission.
permanent link | comments (0) | category: philosophy
March 25, 2008
the "general turn to ethics" in literary criticism
I need to revise my book manuscript about Dante, which is under consideration by a publishing house. In the book, I argue that interpreting literature has moral or ethical value. Literary critics, I claim, almost always take implicit positions about goodness or justice. They should make those positions explicit because explicit argumentation contributes more usefully to the public debate. Also, the need to state one's positions openly is a valuable discipline. (Some positions look untenable once they are boldly stated.)
I had taken the stance that contemporary literary theorists and academic critics were generally hostile to explicit ethical argument. My book was therefore very polemical and critical of the discipline. But I was out of date. In her brilliant and influential book The Way We Argue Now: A Study in the Cultures of Theory (Princeton, 2006), Amanda Anderson announces: "We must keep in mind that the question, How should I live? is the most basic one" (p. 112).
This bold premise associates her with what she rightly calls the "general turn to ethics" that's visible in her profession today (p. 6). This turn marks a departure from "theory," meaning literary or cultural theory as practiced in the humanities from the 1960s into the 1990s. "Theory" meant the use of (p. 4) "poststructuralism, postmodernism, deconstruction, psychoanalysis, Marxism, feminism, postcolonialism, and queer theory" in interpreting texts and discussing methods and goals within the humanities.
"Theory" tended to deprecate human agency. Poststructuralism "limit[ed] individual agency" by insisting that we could not overcome (or even understand) various features of our language, psychology, and culture. Multiculturalism added another argument against human agency by insisting "on the primacy of ascribed group identity." Anderson, in contrast, believes in human agency, in the specific sense that we can think morally about, and influence, the development of our own characters. We don’t just "don styles [of thinking and writing], … as evanescent and superficial as fashion" (p. 127). Instead, we are responsible for how we develop ourselves.
Focusing on character does not imply a faith in untrammeled free will or individualism. "Such an exercise can (and, in my view, ideally should) include a recognition of the historical conditions out of which beliefs and values emerge (psychological, social, and political) that can thwart, undermine, or delay the achievement of such virtues and goods" (p. 122).
Anderson takes the side of liberals, Enlightenment thinkers, and proponents of deliberation in the public sphere, theorists like Jurgen Habermas (p. 5). But she emphasizes that a rational, critical, analytical stance--sometimes seen as the liberal ideal--is just one kind of character. Like other character types or identities, it must be cultivated in oneself and in others before it can flourish. Thus a Kantian or Habermasian stance is not an abstract ideal, but a way of being in the world that requires education, institutional support, and "an ongoing process of self-cultivation" (p. 127). Like other character types, the critical rationalist and the civic deliberator must be assessed morally. The primary question is how one should live. Living as a critical rationalist is just one response, to be morally examined like the others (p. 112).
For all that they seem to reject deliberation about how to live, postmodernist theorists also have views about ethos (character). For example, Stanley Fish and Richard Rorty have presented the ironist as an ideal character type. “With varying degrees of explicitness and self-awareness, I argue, contemporary theories present themselves as ways of living, as practical philosophies with both individualist and collective aspirations” (p. 3). Most of The Way We Argue Now is devoted to close, often sympathetic, but also critical readings of theoretical texts. Anderson is very insightful about character, form, irony, ambiguity, and development in these works--elements that we usually associate with literature, not with literary theory. She defends several postmodernist and multicultural authors by showing that they embody moral stances or characters that have value. She is a pluralist, in contrast to a liberal or deliberative democrat who would see the only valuable theory as one that embodied the character traits of reasonableness or tolerance. She believes that the question, "How should I live?" opens a broad discussion in which the radical theoretical movements of the 1960s to 1990s have a place.
To investigate the link between each theory and the character of those who endorse and live by it would broaden the discussion beyond "identity politics, performativity, and confessionalism," which "have exercised a certain dominance" (p. 122). Identity politics reduces the choice to either the "espousal" or the "subversion of various ascriptive and power-laden identities (gender, race, ethnicity, class, sexuality); such enactments are imagined, moreover, as directly and predominantly political in meaning and consequence." There is more to be discussed than how we relate to ascribed identities in political contexts. "Ultimately, a whole range of possible dimensions of individuality and personality, temperament and character, is bracketed, as is the capacity to discuss what might count as intellectual or political virtue or, just as importantly, to ever distinguish between the two" (pp. 122-3).
permanent link | comments (0) | category: philosophy
March 17, 2008
science from left and right
On the left today, most people seem to think that science is trustworthy and deserves autonomy and influence. The Bush Administration must be a bunch of rubes, because they continually get into struggles with scientists. Thus, for example, the first masthead editorial in today's New York Times is entitled "Science at Risk." The Times says:
As written in 1970, the [Clean Air Act] imposes one overriding obligation on the E.P.A. administrator: to establish air quality standards "requisite to protect the public health" with "an adequate margin of safety." Economic considerations--costs and benefits--can be taken into account in figuring out a reasonable timetable for achieving the standards. But only science can shape the standards themselves. Congress wrote the law this way because it believed that air quality standards must be based on rigorous scientific study alone and that science would be the sure loser unless insulated from special interests.
But the definitions of "requisite to protect the public health" and an "adequate margin of safety" could never be scientific. These were always value-judgments--implicit decisions about how to balance mortality and morbidity versus employment and productivity. Costs always factored in, because the only level of emissions that would cause no harm to human health is zero. EPA has allowed enormous quantities of emissions into the air, surely because the agency balances moral goods against moral evils. What the Clean Air Act said was: professional scientists (not politicians or judges) shall estimate the costs of pollution. Since it is unseemly to talk about human deaths and sickness as "costs," scientists shall not use this word, nor set explicit dollar values on lives. Instead, they shall declare certain levels of safety to be "adequate," and present this as a scientific fact.
I well remember when people on the left were the quickest to be skeptical of such claims. Science is frequently an ally of industry and the military. It is intellectually imperialistic, insensitive to cultural traditions. It is arrogant, substituting expertise for public judgment even when there are no legitimate expert answers to crucial questions. (For instance, What is the economic value of a life?). Science is a human institution, driven by moral and cultural norms, power, and status. It is not an alternative to politics.
So progressives used to say. Yet scientific consensus now seems to favor progressive views of key issues such as climate change. The conservative coalition encompasses critics of science, such as creationists. And, as Richard Lewontin wrote immediately before the 2004 election, "Most scientists are, at a minimum, liberals, although it is by no means obvious why this should be so. Despite the fact that all of the molecular biologists of my acquaintance are shareholders in or advisers to biotechnology firms, the chief political controversy in the scientific community seems to be whether it is wise to vote for Ralph Nader this time."
Short-term political calculations lead progressives to ally themselves with science and endorse its strongest claims to power. If we are going to defend science, we should do so on the basis of principle, not political calculation. I agree with the Times that the EPA should clamp down on air pollution. I disagree that this would represent a triumph of science over politics. It would be a moral and political victory--and that is all.
permanent link | comments (1) | category: philosophy
March 12, 2008
conservative relativism
Moral relativism is the idea that there isn't any objective or knowable right or wrong; there are only the opinions of individuals or cultures at particular times in history. Some famous conservatives have made their names by attacking moral relativism: Bill Bennett and Allan Bloom, for instance. Many of us also object to it from the left, since it undermines claims about social justice. But conservatives and liberals sometimes make moral-relativist arguments when it suits them.
Consider Justices Roberts and Thomas in the case of Parents Involved in Community Schools v. Seattle School District (2007). This is a racial segregation/integration case. Defendants want to use race as a factor in assigning kids to schools, for the purpose of increasing diversity or integration. They claim that this goal is benign, unlike segregationists' use of race, which was malicious. They ask the court to allow racially conscious policies that are well-intentioned, reasonably supported by evidence, and enacted through democratic procedures.
In response, Justice Roberts quotes Justice O'Connor from an earlier case: "The Court's emphasis on 'benign racial classifications' suggests confidence in its ability to distinguish good from harmful governmental uses of racial criteria. History should teach greater humility… . '[B]enign' carries with it no independent meaning, but reflects only acceptance of the current generation's conclusion that a politically acceptable burden, imposed on particular citizens on the basis of race, is reasonable." Justice Thomas likewise argues that allowing a school system to promote diversity through racial classification means acceding to "current societal practice and expectations." That was the approach, he argues, that led the majority in Plessy v. Ferguson to uphold Jim Crow laws, which were the fad of that time. "How does one tell when a racial classification is invidious? The segregationists in Brown argued that their racial classifications were benign, not invidious. ... It is the height of arrogance for Members of this Court to assert blindly that their motives are better than others."
These justices doubt that there is a knowable difference between benign and invidious uses of race. But surely there are moral differences between Seattle's integrationist policy of 2005 and the policy of Mississippi in 1940: differences of intent, principle, means, ends, expressive meaning, and consequences or outcomes. If we cannot tell the difference, we are moral idiots. There can be no progress, and there isn't any point in reasoning about moral issues.
To be sure, Seattle's policy is open to critique. The conservative justices quote some politically correct passages from the school district's website to good satirical effect, and the policy could also be attacked from the left. Whether Seattle should be able to decide on its use of race, or whether that should be decided by judges, is a good and difficult question. But it's almost nihilistic to assert that "benign" has "no independent meaning" and reflects only the opinions of the "current generation." That equates Seattle's policy with that of, say, George C. Wallace when he "barred the schoolhouse door."
permanent link | comments (0) | category: philosophy
January 18, 2008
on shared responsibility for private loss
(Syracuse, NY) Yesterday, I wrote a fairly frivolous post in response to Steven Landsburg's New York Times op-ed, because I found one of his analogies risible. But I suppose it's worth summarizing the standard serious, philosophical argument against his position (which is libertarian, in the tradition of Robert Nozick). Landsburg asks whether we should compensate workers who would be better off without particular free-trade agreements that have exposed them to competition and have thereby cost them their jobs.
One way to think about that is to ask what your moral instincts tell you in analogous situations. Suppose, after years of buying shampoo at your local pharmacy, you discover you can order the same shampoo for less money on the Web. Do you have an obligation to compensate your pharmacist? If you move to a cheaper apartment, should you compensate your landlord? When you eat at McDonald’s, should you compensate the owners of the diner next door? Public policy should not be designed to advance moral instincts that we all reject every day of our lives.
I need not compensate a pharmacist if I buy cheaper shampoo than she sells, because I have a right to my money, just as she has a right to her shampoo. We presume that the distribution of property and rights to me and to the pharmacist is just. We're then entitled to do what we want with what we privately own. But who says that the distribution of goods and rights on the planet as a whole is just? It arose partly from free exchanges and voluntary labor--and partly from armed conquest, chattel slavery, and enormous helpings of luck. For example, some people are born to 12-year-old mothers who are addicted to crack, while others are born to Harvard graduates.
Given the distribution of goods and rights that existed yesterday, if we let free trade play out, some will become much better off and some will become at least somewhat worse off as a result of voluntary exchanges. Landsburg treats the status quo as legitimate--or given--and will permit it to evolve only as a result of private choices (which depend on prior circumstances). However, the Constitution describes the United States as an association that promotes "the general Welfare." Within such an association, it is surely legitimate for people who are becoming worse off to state their own interests, and it is morally appropriate for others to do something to help. (How much they should do, and at what cost to themselves, is a subtler question.)
Of course, one can question the legitimacy of the American Republic. It is not really a voluntary association, because babies who are born here are not asked whether they want to join. And its borders are arbitrary. That said, one can also question the legitimacy of our system of international trade. It is based on currencies, corporations, and other artificial institutions.
The nub of the matter is whether you think that individuals may promote their own interests in the market, in the political arena, or both. If one presumes that the economic status quo is legitimate, then the market appears better, because it is driven by voluntary choice. But if one doubts the legitimacy of the current distribution of goods and rights, then politics becomes an attractive means to improve matters. Because almost all Americans believe in the right and duty of the government to promote the general welfare, even conservatives like "Mitt Romney and John McCain [battle] over what the government owes to workers who lose their jobs because of the foreign competition unleashed by free trade."
permanent link | comments (3) | category: philosophy
October 2, 2007
tightening the "nots"
For what it's worth, I have listed my fundamental commitments and beliefs here. I can also define my own position by saying what kind of a scholar/writer I am not:
Not a positivist, because I don't believe that one can isolate facts from values, nor that one can live a good life without reasoning explicitly about right and wrong.
Not a technocrat, because I don't believe that any kind of technical expertise is sufficient to address serious public problems.
Not a moral relativist, because the arguments for moral relativism are flawed, and the consequence of relativism is nihilism.
Not a post-modernist of the type influenced by Foucault (who is a major influence across the cultural disciplines), because I believe that deliberate human choices and actions matter and freedom is real.
Not a social constructivist, because I believe we are responsible for understanding the way the world actually works.
Not a utopian, because I believe that any persuasive theory of justice must incorporate a realistic path to reform. An ideal of justice that lacks a praxis is meaningless, or worse.
Not a utilitarian, because I don't believe that any social welfare function can define a good society.
Not a deontologist, because I doubt that any coherent list of principles can define a good society.
Not a pure pragmatist, because we need criteria for assessing whether a social process for defining and addressing problems is fair and good. Such criteria are extrinsic to the process itself.
Not a pluralist (in the political-science sense), because I believe there is a common good. But also not a deliberative democrat (in the Habermas version), because I believe that there are real conflicts of interest.
permanent link | comments (2) | category: philosophy
September 19, 2007
where morality comes from
Nicholas Wade's New York Times article, entitled "Is 'Do Unto Others' Written Into Our Genes?" started off badly enough that I had a hard time reading it. Stopping would have been a loss, because I appreciated the reference to YourMorals.org, where (after registering) one can take a nifty quiz.
Wade begins: "Where do moral rules come from? From reason, some philosophers say. From God, say believers. Seldom considered is a source now being advocated by some biologists, that of evolution."
First of all, the evolutionary basis of morality is not "seldom considered." It has been the topic of bestselling books and numerous articles. Even the student commencement speaker at the University of Maryland last year talked about it.
More importantly, Wade's comparison of philosophers and biologists is misleading. Biologists may be able to tell us where morals "come from," in one sense. As scientists, they try to explain the causes of phenomena, such as our beliefs and behaviors. We call some of our beliefs and behaviors "moral." Biology may be able to explain why we have these moral characteristics; and one place to look for biological causes is evolution.
But why are we entitled to call some of our beliefs and behaviors moral, and others--equally widespread, equally demanding--non-moral or even immoral? Why, for example, is nonviolence usually seen as moral, and violence as immoral? Both are natural; both evolved as human traits. Moreover, not all violence is immoral, at least not in my opinion. Not even all violence against members of one's own group is wrong.
Morality "comes from" reason, not in the sense that reason causes morality, but because we must reason in order to decide which of our traits and instincts are right and wrong, and under what circumstances. Evolutionary biology cannot help us to decide that. If biologists want to study the origins of morality, they must use a definition that comes from outside of biology. One approach is to use the definition held by average human beings in a particular population. But why call that definition "moral"? I would call it "conventional." Conventional opinion may, for example, abhor the alleged "pollution" caused by the mixing of races or castes. It is useful to study the reasons for such beliefs, but it is wrong to categorize them as moral.
Perhaps I wrote that last sentence because of my genes, my evolutionary origins, or what I ate for breakfast this morning. Whether it is true, however, depends on reason.
permanent link | comments (1) | category: philosophy
August 29, 2007
hypocrisy
If Senator Larry Craig opposed gay rights and said hostile things about gays while occasionally soliciting gay sex, he was hypocritical. Hypocrisy is one of the easiest faults to prove, but it is not one of the worst faults, especially in a leader.
Hypocrisy is easy to establish, once the facts are out, because it involves a contradiction between the person's statements and his actions. (Likewise, lies are evident when a person's statements contradict what he knows or believes.) You can have very few moral commitments and very little knowledge of issues, and yet detect other people's hypocrisy.
But what if Larry Craig were completely heterosexual and totally faithful to his wife, yet anti-gay? In my view, his position would then reflect injustice and intolerance. These are worse faults than hypocrisy; they have far more serious consequences. But many Americans are uncomfortable about charging anyone with injustice. That's because: (1) the charge is controversial, given that definitions of justice vary; (2) the accusation reflects deep moral commitments, which are incompatible with moral relativism or skepticism; and (3) the claim requires knowledge of issues and policies. The issue of gay rights happens to be relatively easy to understand, but I would argue that Senator Craig's votes on economic policy display equally serious injustice. To make that claim, I have to follow politics fairly closely and develop strong moral commitments.
Thus I think that Americans who are disconnected from politics and issues tend to jump on evidence of hypocrisy as if it were very momentous (and interesting) news, whereas far worse faults are ignored.
(It's not even crystal-clear that Larry Craig is a hypocrite, because one could oppose certain rights for gays and yet be gay or bisexual, without a contradiction. If Craig is a hypocrite, it's not because of his policy positions but because he falsely denies being gay himself--or so his accusers claim. I happen to feel considerable sympathy for a gay person who hides his orientation, given the general climate of intolerance and the tendency of police to entrap gay men. But hypocrisy, while not the worst moral fault, is wrong. The wrongness, it seems to me, lies in the failure to treat other people as responsible and rational agents who can make decisions on the basis of facts. Instead, the hypocrite feels it necessary to deceive in order to get the results he wants. This is manipulative; it is using someone else as a means to one's ends, not as an end in himself. But of course there are many forms of political manipulation that do not involve hypocrisy--for example, fear-mongering and exaggeration.)
permanent link | comments (5) | category: philosophy
July 18, 2007
stability of character
I think most people believe, as a matter of common sense, that individuals have stable characters. In fact, it turns out that the word "character" comes from a Greek noun for the stamp impressed on a coin. We think that adults have been "stamped" in some way, so that one person is brave but callous; another, sensitive but vain. We make fine discriminations of character and use them to predict behavior. We also see categories of people as stamped in particular ways. For instance, we may think that men and women have different characters, although that particular distinction is increasingly criticized--and for good reasons.
Experiments in social psychology, on the other hand, tend to show that most or all individuals will act the same way in specific contexts. Details of the situation matter more than differences among individuals. For instance, in a famous experiment, seminary students on their way to give a lecture on helping needy people are confronted with an actor who is slumped over and pretending to be in distress. Whether the students stop depends on how late they believe they are--a detail of the context. All the self-selection, ideology, training, and reflection that goes into seminary education seems outweighed by the precise situation that a human being confronts on his way to an appointment.
On a much broader scale, we are all against slavery and genocide today. But almost all White people condoned slavery in America ca. 1750, and almost all gentile Germans turned a blind eye to genocide ca. 1940. It seems safe to say that context made all the difference, not that our characters are fundamentally better than those of old. (For a good summary, see Marcia Homiak, "Moral Character," The Stanford Encyclopedia of Philosophy [Spring 2007 Edition], edited by Edward N. Zalta.)
My question is why the common sense or folk theory of character seems so attractive and is so widespread. If human behavior depends on the situation and is not much affected by individuals' durable personality traits, why do we all pay so much attention to character?
In fact, most people we know are rarely, if ever, confronted with new categories of challenging ethical situations. Neither the political regime nor one's social role changes often, at least in a country like the USA. An individual may repeatedly face the same type of situation, and these circumstances differ from person to person. Thus a big-city police officer in the US faces morally relevant situations of a certain type--different from those facing a suburban accountant. An American lives in a different kind of social/political context from an Iraqi. Individuals occupy several different social roles at once. But the roles themselves are pretty stable. They are, to varying degrees, the result of choices that we have made.
Thus what we take to be "character" may be repeated behavior resulting from repeated circumstances--which, in turn, arise because of the roles we occupy, which (to some degree) we choose. In that case, it is reasonable to expect people to act "in character," yet situations are what drive their behavior. By the way, this seems a generally Aristotelian account.
permanent link | comments (1) | category: philosophy
July 16, 2007
the purposes of political philosophy
(In Philadelphia for the National Conference on Volunteering and Service) Why would a person sit down at a desk to write general and abstract thoughts about politics? This is a significant question, because people who think hard about politics are likely to be interested in social change. Yet it is not obvious that writing abstract thoughts about politics can change anything.
One might write political theory in order to persuade someone with the power to act on one's recommendations: for instance, the sovereign. Machiavelli addressed his book The Prince "ad Magnificum Laurentium Medicem"--"to Lorenzo (the Magnificent) de' Medici"--a man who surely had the capacity to govern.
Today, political theorists still occasionally write papers for the World Bank or a national government, preserving the tradition of philosophy as advice to the ruler. Ronald Dworkin, Thomas Nagel, Robert Nozick, John Rawls, et al. sent a brief to the Supreme Court whose first section was headed, "Interest of the Amici Curiae." The authors explained their "interest" as follows: "Amici are six moral and political philosophers who differ on many issues of public morality and policy. They are united, however, in their conviction that respect for fundamental principles of liberty and justice, as well as for the American constitutional tradition, requires that the decisions of the Courts of Appeals be affirmed."
Unfortunately, one rarely finds a sovereign willing to act on morally demanding principles. And if one's principles happen to be republican, one may not wish to serve or help the sovereign at all. (It is a subtler question whether a powerful Supreme Court is compatible with republicanism.)
Rousseau, being a republican, thought that Machiavelli's advice to Lorenzo had to be ironic. Machiavelli's real audience was--or so Rousseau presumed--the Florentine people, who would realize that a prince, in order to be secure, must be ruthless and cruel. They would therefore rise up and overthrow Lorenzo, becoming what they should always have been: the sovereign. In this "theory of change," the philosopher addresses the sovereign as an apparently loyal courtier, but his real effect is to sow popular discontent and rebellion.
Whether or not Rousseau's reading of Machiavelli was correct, many philosophers have addressed themselves to the public as the sovereign. Rousseau himself dedicated his Discourse on Inequality "To the Republic of Geneva." He began: "Magnificent, very honorable, and sovereign sirs, convinced that it is only fitting for a virtuous citizen to give to his nation the honors that it can accept, for thirty years I have labored to make myself worthy to offer you a public homage. ..."
There is, I'm sure, some irony in Rousseau's dedication. He didn't expect the oligarchs of Geneva to whom he addressed his discourse to act in accord with his ideas. He understood that "la Republique" was not the same as the "souverains seigneurs" who might actually read his book.
Today, a dedication or appeal to the public would seem pretentious in a professional philosophy book--partly because it's clear that "the public" won't read such a work. John Rawls' A Theory of Justice is dedicated to his wife, a common (and most appropriate) opening. Still, I think we can assume that Rawls wanted to address the whole public indirectly. He believed that the public was sovereign. He knew, of course, that most citizens would not read his book, which was fairly hard going. Even if it had been an easier work, most people were not interested enough in abstract questions of politics to read any "theory of justice." But Rawls perhaps hoped to persuade some, who would persuade others--not necessarily using his own words or techniques, but somehow fortified by his arguments.
This is a third "theory of change" that may be implicit in most modern academic political theory. The idea is: We must first understand the truth. Since it is complex and elusive, we need a sophisticated, professional discussion that draws on welfare economics, the history of political thought, and other disciplines not easy for a layperson to penetrate. But the ultimate purpose of all this discussion is to diffuse true ideas into the public domain. We do that by lecturing to undergraduates, writing the occasional editorial, persuading political leaders, filing amici briefs, etc.
This theory is not foolish, but I don't believe in it. I doubt that a significant number of people will ever have the intellectual interests or motivations to act differently because they are exposed to philosophical arguments.
I further doubt that one can develop an intellectually adequate understanding of politics unless one thinks through a theory of change. It is easy, for example, to propose that the state should empower people by giving them various political rights. But what if saying that has no effect on actual states? What if saying it actually gives states ideas for propaganda? (Real governments have sometimes used political theory as the inspiration for entirely hypocritical rhetoric.) What if talking about the value of particular legal rights misdirects activists into seeking those rights on paper, when the best route to real freedom lies elsewhere? In my view, an argument for political proposition P is an invalid argument if making it actually causes not-P. And if you argue for P in such a way that you can never have any impact on P, I am unimpressed.
Finally, I doubt that philosophical arguments about politics are all that persuasive, except as distillations and clarifications of experience. Too much about politics is contingent on empirical facts to be settled by pure argumentation. (In this sense, political philosophy is profoundly different from logic.) Thus I read A Theory of Justice as an abstract and brilliant rendition of mid-20th-century liberalism. But the liberalism of the New Deal and the Great Society was not caused in the first place by political theory. It arose, instead, from practical experimentation and negotiation among social interests. Rawls' major insights derived from his vicarious experience of the New Deal and the Great Society--which makes one wonder how much efficacy his work could possibly have. It was interesting analysis, no doubt; but could it matter?
A fourth "theory of change" is implicit in a work like John Gaventa's Power and Powerlessness (1980). This book has no official dedication, but the preface ends, "Most of all, I am indebted in this study to the people of the Clear Fork Valley. Since that summer in 1971, they have continued to teach, in more ways than they know." It's not clear whether Gaventa expected the residents of an Appalachian valley to read his book, but he did move to the region to be a leader of the Highlander Folk School. Gaventa's theory was: Join a community or movement of people who are motivated and organized to act politically. Learn from them and also give them useful analysis and arguments. Either expect them to read your work directly, or use your academic work to develop your analysis and then share it with them in easier formats.
I am the opposite of a Marxist in most respects, but I think we have something to learn from Marxists on the question of "praxis": that is, how to make one's theory consequential. In his Theses on Feuerbach, Marx wrote, "Philosophers have hitherto only interpreted the world in various ways; the point is to change it." That seems right to me, not only because we have a moral or civic obligation to work for social change, but also because wisdom about politics comes from serious reflection on practical experience.
Thus I will end with one more quote from a preface--the preface to the 1872 German edition of the Communist Manifesto. Here we see Marx addressing an organized social movement: "The Communist League, an international association of workers, which could of course be only a secret one, under conditions obtaining at the time, commissioned us, the undersigned, at the Congress held in London in November 1847, to write for publication a detailed theoretical and practical programme for the Party. Such was the origin of the following Manifesto, the manuscript of which travelled to London to be printed a few weeks before the February Revolution."
Now that is political writing with a purpose.
permanent link | comments (0) | category: philosophy
June 14, 2007
Günter Grass’s memories
The June 4 New Yorker presents an excerpt from Günter Grass’s memoir, Peeling the Onion. For the first time, we get the novelist's own lengthy account of his experiences in the Waffen S.S., a story that he had suppressed for about 60 years. The New Yorker (or possibly Grass) chose an excerpt that is action-packed. There is not too much rumination about what the experience meant or why he failed to mention it during the decades when he bitterly denounced German hypocrisy about the Nazi past. Instead, the thrilling adventures of a young man at war make us highly sympathetic. We root for him to survive, notwithstanding the double-S on his collar. And as we read the exciting story (under the flip headline of "Personal History: How I Spent the War"), our eyes wander to amusing cartoons about midlife crises.
I would not be quick to condemn a 16-year-old for joining the S.S., although that was a much worse thing to do than joining a gang and selling drugs, for which we imprison 16-year-olds today. For me, the interesting moral question is what the famous and accomplished adult Günter Grass did with his memories.
So ... why run an excerpt that is mainly about his exciting adventures in the war? Why not write about the 60-year cover-up? Why introduce the memoir in English in a very lucrative venue, America's most popular literary magazine? Also, why write only from his personal perspective, saying almost nothing about the nature of the S.S. or its reputation among German civilians at the time?
Grass cannot recall precisely what the S.S. meant to him when he was assigned to it. But he thinks it had a "European aura to it," since it comprised "separate volunteer divisions of French and Walloon, Dutch and Belgian. ..." The von Frundsberg Division, to which he was assigned, was named after "someone who stood for freedom, liberation." And once Grass was in the S.S., where he was exposed to many months of training, "there was no mention of the war crimes that later came to light."
This paragraph continues: "But the ignorance I claim cannot blind me to the fact that I had been incorporated into a system that had planned, organized, and carried out the extermination of millions of people. Even if I could not be accused of active complicity, there remains to this day a residue that is all too commonly called joint responsibility. I will have to live with it for the rest of my life."
I do not know whether the factual claim here is credible. I must say I find it very surprising that in the course of a whole autumn and winter of S.S. training, there was "no mention" of war crimes. Maybe the details of the death camps were not discussed, but I am amazed that the S.S. trainers never talked in general terms about violence against Jewish, Gypsy, Slavic and other civilian populations. That was a different kind of "European aura": the attempted slaughter of several whole European peoples.
Regardless of what precisely Grass heard in his S.S. training, I find his reflection on "joint responsibility" troubling. He says he has no "active complicity," even though he had joined the S.S. when he could have found his way into the army. His involvement in the Holocaust is passive: "I was incorporated into a system. ..." As a result of this bad moral luck, he feels "joint responsibility"--a term that is, he says, "all too commonly" used. (Actually, I find this sentence hard to interpret and evasive. Is the term "joint responsibility" used when it does not apply? Does it apply in his case?) Finally, Grass emphasizes the distress that his passive complicity has always caused him and will continue to cause him for the rest of his life. There is no hint of an apology for the harm that his active decision to join the S.S. might have caused other people. And then the memoir proceeds to make him its hero--his survival a happy ending.
I would forgive Grass instantly if he took personal responsibility for what he did at ages 16 and 17. I am not so sure I like how he is behaving at age 80.
permanent link | comments (2) | category: philosophy
May 31, 2007
a typology of democracy and citizenship
I've been in Chicago for an interesting research conference on civic participation. There was some discussion about how empirical research should relate to "normative" thinking, i.e., arguments about how citizens ought to act, or how institutions should treat citizens. One of my colleagues* suggested that it might be helpful to provide empirical researchers with a menu of reasonable normative ideals, each of which might support different policies and outcome measures.
I'd first note that many people care about politics because they have substantive goals: for instance, social justice, individual liberty, moral reform, or concern for nature. Thus we could begin by listing substantive political ideals. But that would produce a huge array, especially once we cross-referenced each substantive goal with various ideas about appropriate political behavior. (For instance, you can be an environmentalist who believes in public deliberation, an environmentalist revolutionary, or an environmentalist who thinks that consumers and conservationists should bargain with business interests.) Thus I'd begin by conceding that there will be debates about what makes a good (or better) society. Assuming that the people engaged in these debates want to handle their differences democratically, we can turn to various rival views of democracy:
1. Theories of democratic participation
a. Equal influence in an adversarial system: The main purpose of politics is to bend institutions to one's own purposes, nonviolently. As in the title of Harold Lasswell's 1958 book, politics is "Who Gets What, When, How." It is desirable that poor and marginalized people participate in politics effectively, because this is their way to counter massive inequality in the economy. Voting is a core measure of participation; votes should be numerous, and the poor should be at least as prone to vote as the rich. Other forms of political engagement are also aimed at the state or at major private institutions, e.g., persuading others to vote, protesting, and filing lawsuits. The value of a political act depends on its impact, which is empirically measurable. For example, a protest may affect the government more or less than a vote, depending on the circumstances.
b. Deliberation: The main purpose of politics is to exchange ideas and reasons so that opinions can become more fair and informed before people take action. A vote is not a good act unless it is well informed and reflects ethical judgment and learning. Participation in meetings is good, especially if the meetings include ideologically diverse people, operate according to fair rules and norms, and conclude with agreement. The use of high-quality news and opinion sources is another indicator of deliberation.
c. Public work: Citizens create public goods by working together--especially in civil society, but also in markets and within the government if these venues are reasonably fair. Public goods include cultural products, the creation of which is an essential democratic act. Relevant individual-level indicators include "working with others to address a community problem" (a standard survey question) or--specifically--participation in environmental restoration, educational projects, public art, etc. Perhaps the best indicators are not measures of individual behavior but rather assessments of "the commonwealth," which is the sum of public goods.
d. Civic republicanism: Political participation is an intrinsically dignified, rewarding, and honorable activity, superior in particular to consumerism. It is implausible that voting once a year could be dignified and rewarding; but deliberation or public work could be.
Civic participation is not only a means to change society; it is also part of the citizen's life. Thus we also need to consider:
2. Theories of the good life
a. Critical autonomy: The individual should be as free as possible from inherited biases and presumptions. We should hold our opinions and roles by choice and revise them according to evidence and alternative views. Not only should people choose their substantive political values, but they should decide, after due reflection, whether or not to engage politically.
b. Eudaimonism: A good life is a happy life, if happiness is properly understood. (And that's a matter of debate.) The happiness of all human beings should matter to each of us, which implies strong and universalistic moral obligations.
c. Communitarianism: We are born into communities that profoundly shape us. Although we should have some rights of voice within our communities and exit in cases of oppression, true autonomy is a chimera and membership is a necessary source of meaning. Participation in a community is essential, but what constitutes appropriate participation is at least somewhat relative to local norms.
d. Creativity: The good life involves some measure of innovation, expression, and the creation of things that have lasting value. Creative work can be collaborative, in which case it requires civic engagement.
These two lists could be combined to create an elaborate grid or taxonomy (which would become 3-D if we added substantive political goals). I'm struck that especially my second list looks rather idiosyncratic, even though my intention was merely to summarize prevailing, mainstream views. I'm not sure what that says about me or this subject.
*I have a self-imposed policy against identifying other people who attend meetings with me.
permanent link | comments (0) | category: philosophy
May 23, 2007
philosophy and concrete moral issues
The Philosopher's Index (a database) turns up 25 articles that concern "trolley problems." That's actually fewer than I expected, given how frequently such problems seem to arise in conversation. Briefly, they involve situations in which an out-of-control trolley is barreling down the tracks toward potential victims, and you can affect its course by throwing a switch that sends it plowing into a smaller group of victims, or by throwing an innocent person in front of the tram. Or you can refrain from interfering.
The purpose of such thought experiments is to use our intuitions as data and learn either: (a) what fundamental principles actually underlie our moral choices, perhaps as a result of natural selection, or (b) which moral theory would consistently and appropriately handle numerous important cases. In either case, the "trolley" story is supposed to serve as an example that brings basic issues to the fore for consideration. The assumption is that we have, or ought to have, a relatively small set of general principles that generate our actual decisions.
I do not think this approach is useless, but it doesn't interest me, for the following reason. When I consider morally troubling human interactions and choices, I imagine a community or an institution like a standard American public school. The issues that arise, divide, perplex, and worry us in such contexts usually look like this: Ms. X, a teacher, believes that Mr. Y, her colleague, is not dedicated or effective. How should she relate to him in staff meetings? Or, Ms. X thinks that Johnny is not a good student. Johnny is Latino, and Ms. X is worried about her own anti-Latino prejudices. Or, Ms. X assigns Charlotte's Web, a brilliant work of literature but one whose tragic ending upsets Alison. Should Alison's parents complain? Or, Mr. and Mrs. B believe that Ms. X is probably a better teacher than Mr. Y. Yet they cannot be sure. Should they try to get their little Johnny into Ms. X's class, even if that means insulting Mr. Y? Or should they allow Johnny to be assigned by the principal?
Possibly, philosophy has little value in guiding, or even analyzing, such choices. I would like to think that is wrong, and that philosophical analysis can be helpful. But it is very hard to see how trolley problems can get us closer to wise judgment about concrete cases.
permanent link | comments (2) | category: philosophy
March 29, 2007
what I believe
(In Albuquerque) For whatever it's worth, here are the most basic and central positions I hold these days. The links refer to longer blog posts on each idea:
Ethical particularism: The proper object of moral judgment is a whole situation, not an abstract noun. Some general concepts have deep moral significance, but their significance varies unpredictably depending on their interplay with other factors present in any given situation.
Historicism: Our values are deeply influenced by our collections of prior experiences, examples, and stories. Each person's collection is his or her "culture." But no two people have precisely the same background; one culture shades into another. A culture is not, therefore, a perspective (i.e., a single point from which to observe everything), nor a premise or set of premises from which our conclusions follow. There are no barriers among cultures, although there are differences.
Dialectic over entropy: Cultural interaction generally leads to convergence. Convergence is bad when it is automatic and the result is uniformity. It is good when it is deliberate and the result is greater complexity.
Narratives justify moral judgments: We make sense of situations by describing them in coherent, temporal terms--as stories. Narratives make up a large portion of what we call culture.
Populism: It is an appropriate general assumption--for both ethical and practical reasons--that all people can make valuable contributions to issues of moral significance that involve them. (Note that ethical particularism rebuts claims to special moral authority or expertise.)
Public deliberation: When judgments of situations and policies differ, the people who are affected ought to exchange ideas and stories under conditions of peace and reasonable equality, with the objective of consensus. This process can, however, be local and voluntary, not something that encompasses the whole polity.
Public work: Deliberation should be connected to action. Otherwise, it is not informed by experience, nor is it motivating. (Most people don't like merely to talk.)
Civic republicanism: Participation--the liberty of the ancients--is not only a means to an end; it is also intrinsically dignified.
Open-ended politics: We need a kind of political leadership and organizing that does not aim at specific policies or social outcomes, but rather increases the prevalence of deliberation and public work. Like other forms of politics, this variety needs strategies, messages, constituencies, and institutions.
The creative commons: Many indispensable public goods are not just given (like the sun or air) but are created by collective effort. Although there is a global creative commons, many public goods are local and have a local cultural character.
Developmentalism: Human beings pass through a life course, having different needs and assets at different points. Development is not a matter of passing automatically through stages; it requires opportunities. Active citizens are made, not born. They acquire culture and help make it.
Associations: Voluntary private associations create and preserve public goods, host deliberations, and recruit and teach the next generation.
Some of these ideas fit together very neatly, but there are tensions. For example, how can I be skeptical about judging abstract moral concepts and yet offer a positive judgment of "participation," which is surely an abstract idea? As a matter of fact, I don't think participation is always intrinsically good; I simply think that we tend to undervalue it or overlook its intrinsic merits. But how weakly can I make that claim without undermining it entirely?
permanent link | comments (1) | category: philosophy
March 22, 2007
consequentialists should want torture to "work"
I ended yesterday's post with the question, "if killing is worse than torturing, why should we ban the latter--especially if it proves an efficient means of preventing casualties?" I said "if" because this is a controversial empirical hypothesis. Human rights groups argue that torture does not work. It does not prevent terrorism or other grave evils, because those who are tortured can lie or can change their plans once they are captured. It generates false information that justifies even more torture without actually serving national security or any other acceptable end.
This sounds at least plausible. But it isn't impossible to imagine a situation in which a particular form of torture (duly limited and overseen) actually has beneficial net effects on human happiness. That is, the few people who suffer under torture may--in this hypothetical world--cough up enough true information that there is less terrorism, tyranny, or war. Their suffering is far outweighed by the increased security of numerous others.
What I find interesting is that I don't want this scenario to be empirically true. I believe in universal human rights, which rest on a sense of the dignity and intrinsic worth of all people. I also think that virtue excludes the use of torture, which is dishonorable. However, I am not so much of a "deontologist" that I'll stick to principles regardless of their consequences. I won't say "fiat iustitia, pereat mundus"--let justice be done even if the world perishes. Instead, I hope that the effects of torture prove harmful, because then arguments about consequences will line up with arguments about principles and virtues and the case will be easy.
One could, however, be a consistent consequentialist and argue that we should institute torture (with appropriate safeguards and limits) if and only if its net effects are positive. If that is your view, you should actually hope that torture is highly effective. If any practice, P, has both costs and benefits, a consequentialist should want its benefits greatly to outweigh its costs and should then press to institutionalize P. A consequentialist should oppose torture if, as the human rights groups say, it doesn't work. But I see no consequentialist grounds for hoping that it doesn't work.
permanent link | comments (2) | category: philosophy
March 20, 2007
Wittgenstein in the kitchen
Wittgenstein used "game" as an example of a word that we can use effectively even though the examples are highly various. Some games are competitive, some are fun, and some have rules--but some have none of these features. Indeed, Wittgenstein thought that there was no defining feature of "games," but there were many individual games that were similar to many others. The word marked a cluster of cases that one could learn to "see" without being able to identify a common denominator. It might be right or wrong to call a given object a "game," but the test would not be whether the object met any particular criterion.
My favorite example of such words is not "game," but "curry"--a kind of hobson-jobsonism derived from a Tamil word meaning "sauce or relish for rice." But there are plenty of curries served without rice, and plenty of rice sauces that aren't curries. Webster's defines the English word "curry" as "a food, dish, or sauce in Indian cuisine seasoned with a mixture of pungent spices." But there are millions of curries that don't come from India, and some Indian curries are not particularly pungent.
Here are the ingredients for two curries, taken from cookbooks in our house. 1) Whole chicken, onions, blanched almonds, coriander seeds, cardamom pods, pepper, yogurt, salt. 2) Flank steak, peanut butter, coconut milk, basil leaves, fish sauce, sugar, cumin, white pepper, paprika, galanga root, kaffir lime leaves, coriander, peppercorns, lemon grass, garlic, shallots, salt, and shrimp paste. These recipes both contain coriander and salt, but it is not hard to find other curries without the coriander, and you can leave out the salt. It is hard to find any two curries that share absolutely no common ingredient. Yet the ingredients that any two share may not be found in a third.
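This overlap structure can be put in set-theoretic terms and checked mechanically. Here is a toy sketch with hypothetical ingredient sets (loosely abridged from the two recipes, plus an invented third curry): every pair of sets intersects, yet no single ingredient runs through all of them--the family-resemblance pattern with no common denominator.

```python
# Hypothetical ingredient sets for three "curries" (illustrative only).
curries = {
    "chicken_korma": {"chicken", "onion", "almond", "coriander", "cardamom", "yogurt", "salt"},
    "thai_beef": {"beef", "coconut milk", "fish sauce", "coriander", "lemon grass", "garlic"},
    "invented_third": {"lamb", "onion", "garlic", "turmeric"},
}

names = list(curries)

# Every pair of curries shares at least one ingredient...
pairwise_overlap = all(
    curries[a] & curries[b]
    for i, a in enumerate(names)
    for b in names[i + 1:]
)

# ...but no ingredient is common to all three.
common_to_all = set.intersection(*curries.values())

print(pairwise_overlap)  # True
print(common_to_all)     # set()
```

Any definition of "curry" that required a fixed ingredient would fail on such a collection, even though the cases form a recognizable cluster.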
If "curry" cannot be defined by its components, perhaps it refers to some cooking method? Many curries involve pastes or thick sauces composed of ground ingredients. But that's also a good description of romesco sauce from Catalonia, pesto from Italy, or chile con carne. No one would call a minestrone with pesto a curry. We could try to define "curry" by listing countries of origin. But there are dishes from India that aren't curries. "Country captain" is arguably a curry of English origin. And what about adobo from the Philippines or a lamb stew from Iran? Curries or not?
In short, you can teach or learn the correct meaning of "curry" (albeit with some controversial borderline cases), but you cannot define it in a sentence that will communicate its meaning. Learning requires experience. I believe the same is true of "love," "happiness," and "virtue"--but that's another story.
permanent link | comments (0) | category: philosophy
February 21, 2007
building alternative intellectual establishments
Think back to the year 1970....
Almost all university professors are men. They seem to be interested only in male historical figures and male issues. They select their own advanced students and colleagues and decide which manuscripts are published. They defend their profession as rigorous, objective, and politically neutral. Feminists respond by criticizing those claims; some also try to create a parallel set of academic institutions (women's studies departments, feminist journals) that can confer degrees and tenure and publish.
Certain academic disciplines, including law, history, and political science, are seen as predominantly liberal. They seem to support a liberal political establishment that has considerable power. For example, law professors are gatekeepers to the legal profession, which produces all judges. Professors in these fields choose their own successors and claim to be guardians of professionalism, expertise, independence, and ethics. Conservatives--disputing these claims--decide to build a parallel set of research institutions, including the right-wing think tanks and organizations like the Federalist Society (founded 1982).
The National Endowment for the Arts gives competitive grants to individual artists. NEA peer-review committees are composed of artists, critics, and curators. They are said to be insulated from politics and capable of choosing only the best works. The artists they support tend to come from the "Art World" to which they also belong: a constellation of galleries, art schools, small theaters, and magazines, many based in New York City. Most of the funded work is avant-garde. It is usually politically-correct, aiming to "shock the bourgeoisie." Critics complain about some particularly controversial artists, and ultimately the individual grants program is canceled.
Almost all professional biologists are Darwinians. They assert the legitimacy of science; but their religious critics believe that they depend on false metaphysical assumptions. Biologists use peer review to select their students, to hire colleagues, to disburse research funds, and to choose articles for publication. Religious critics cannot get through this system, so they build a parallel one composed of the Institute for Creation Research, Students for Origins Research, and the like.
The most influential news organs in the country (some national newspapers and the nightly television news programs) claim neutrality, objectivity, accuracy, and comprehensiveness: in a phrase, "all the news that's fit to print." Critics from both the left and right detect all sorts of bias. They try (not for the first time in history) to construct alternative forms of media, including NPR (founded in 1970) and right-wing talk radio.
If you are influenced by Nietzsche's Genealogy of Morals and Foucault, you may see all knowledge as constructed by institutions to serve their own wills to power. Then you must view all of the efforts mentioned above with equanimity--or perhaps with satisfaction, since they have unmasked pretentious claims to Truth. If you believe in separate spheres of human excellence, then you may lament the way that various disciplines and fields have been enlisted for political organizing. You may concede that all thought has a political dimension, but you may be sorry that scholarly and artistic institutions have been used as strategic resources in battles between the organized left and right. (I owe this idea to Steven Teles.)
I guess my own response is ad hoc and mixed. For example, I think that conservative ideas about law, history, and political science are interesting and challenging and should be represented in academia. I'm sorry that some legal conservatives have found their way to the Supreme Court, but the solution is to win the public debate about the meaning of the Constitution--not to wish that conservatives would go away. The Federalist Society provides liberals with a valuable intellectual challenge.
I suspect that the NEA's peer-review committees of the 1970s and 1980s often identified the best artists: meaning those who were most innovative, sophisticated, and likely to figure in the history of art as it is written a century from now. (Although who can tell for sure?) But I'm not convinced that taxpayers' money should be devoted to the "best" artists. Other criteria, such as geographical dispersion, various sorts of diversity, and public involvement, should perhaps also count. If it's fair to say that the New York Art World disbursed public money to itself, that sounds like a special-interest takeover of a public agency.
Finally, "creation science" and "intelligent design theory" strike me as both scientific and theological embarrassments, destined to disappear but not before they have done some damage. Nevertheless, the anti-Darwinian organizations reflect freedom of association and freedom of speech and must certainly be tolerated.
(These ad hoc judgments are probably not consistent or coherent at a theoretical level.)
permanent link | comments (0) | category: academia , philosophy
January 11, 2007
consequences of particularism
I drafted a paper more than a year ago that drew some political implications out of a philosophical doctrine called "moral particularism" (click for pdf). I haven't had a chance to improve and expand that paper for publication. It actually covers a huge amount of ground very thinly (which makes it inappropriate, in its current form, for academic publication). Here are a few key ideas:
Some concepts have the following features:
1) They are morally important. When they show up as features of a situation, they usually ought to influence our moral judgment, albeit in conjunction with other features.
2) These concepts lack consistent moral "valence." Depending on the situation, they can make it worse or better. There are no general rules that reliably tell us what their valence will be in all the instances of a certain description. By way of analogy (which I owe to Simon Blackburn), we can't tell in advance--or by means of a principle or rule--whether a splash of red paint will make a painting better or worse. That is because the proper unit of aesthetic analysis is the whole painting, not an area within it. Nevertheless, a splash of red paint is important to the overall beauty of a painting. It might ruin a Vermeer but save a De Kooning. Likewise, we can't tell whether love makes a situation better or worse; but it usually matters.
3) These concepts are indispensable. We cannot resolve moral questions appropriately by appealing only to concepts that avoid 1) and 2).
I think these three features apply to all of the traditional virtues and vices: courage, pride, partiality, respect, and many more. As an example, consider love. Love is morally significant and can be either good or bad depending on the situation. The question is whether we can use a rule or principle to delimit the good cases of love from the bad. Then we could replace the ambiguous word "love" with two words, one for the good form and the other for the bad. (Or there might turn out to be more than two subsets of love.)
I don't have a proof that such an analysis must fail. But I doubt that it can succeed, because I suspect that we humans happen to have an emotion, love, that can be either good or bad, or can easily change from good to bad (or vice-versa), or can be good and bad at the same time in various complex ways. Even when love is good, it carries some problematic freight because of its potential to be bad. And when love is wrong, it nevertheless has some redeeming qualities because it is akin to love that is good. I think the instinct to divide it into eros and agape or other such subcategories is fundamentally mistaken.
(That does not mean, however, that there is no difference between good and bad love. Moral judgment is necessary, but it has to be about situations, not about concepts in the abstract.)
Numerous implications follow from this doctrine, but the one I want to mention here is political. For a particularist, there can be no technique for resolving moral questions that is analogous to the techniques of economics, engineering, or law. First, there cannot be a computational method (such as the one that utilitarianism promises) because that would presume that one consistent good, such as happiness, is the only concept that counts. The particularist replies that other concepts must also be considered, and they happen to be unpredictable. Second, the particularist doubts that we can develop a set of sharp and valid moral definitions or principles and then apply them to cases. Although some moral concepts may be definable in ways that give them consistent moral valence, others cannot.
Thus there is no expertise or procedure that will yield wise moral judgments. However, particularism is consistent with public deliberation. When people discuss what should be done, they apply rules and principles--sometimes validly and sometimes not. But they also tell stories so as to bring out the salient features of a situation and depict those features in a positive or a negative way. They place particular aspects of the situation in various contexts. And they bring out themes (not rules or principles but repeated motifs of moral significance). In making such arguments, they apply their distinct backgrounds and perspectives. This is the best form of moral reasoning, assuming that particularism is right.
I do not assume that everyone has equally valid and useful points to contribute in deliberation. Yet we should allow everyone to participate because: (a) rules that exclude some and favor others tend to be biased--merely to serve special interests, and (b) many more people have valid moral insights than one might think. Thus particularism does not imply egalitarianism, but it counts in its favor.
permanent link | comments (0) | category: philosophy
October 30, 2006
the origins of government
Would this work as a definition of a government? "An institution designed to outlast individual human beings that operates within a fixed geographical territory; it has permanent fiscal accounts, offices with mutually consistent and complementary roles that are held temporarily by individuals, and real property. It has some authority over all the people and institutions within its territory (where 'authority' means the ability to make and enforce rules claimed to be legitimate)."
If this definition works, then Florence had a government in 1300. Dante, for example, held various offices for his city, was paid for his work out of public accounts, made binding decisions while he was a city magistrate, and represented the government abroad. When he was exiled, he left the jurisdiction and employ of Florence; his office and legal power passed to another man.
In Dante's time, England basically lacked a government. That is not to say that England was disorganized or backward. The English erected great cathedrals, castles, schools, and universities; their leading cities were international entrepôts; their knights were capable of ransacking France. Nor was England an individualistic and atomized society--on the contrary, people were bound to one another by obligations, often inherited and unshakable.
But there was no English government. A baron was a personal vassal of the king, to whom he owed certain duties and from whom he could expect protection. Each baron had many vassals who owed him duties (as men personally obligated to other men). And each peasant was a vassal of a minor lord, entitled to certain birthrights, such as use of particular fields and woods, but obligated to work the land of his ancestral village and share the crop with his lord. The borders of the realm depended on what fiefs the monarch had inherited; thus the "national" territory might shift with each change of king.
None of the offices of the realm, from monarch to peasant, was governmental in the modern sense. Take Justices of the Peace: they were the closest equivalents of modern police, but they were not paid, trained, or overseen. They were just vassals of the monarch who were morally obligated to preserve the King's Peace by sword or by persuasion. There was a public treasury, the Exchequer, but it had very minor importance. Even when Queen Elizabeth I ascended the throne in 1558, she was expected to pay for what we would call "government" (e.g., foreign embassies) out of her inherited wealth, rents on the extensive lands that she personally owned, plus some import duties. Her claims to sovereign power were controversial, and in any case, she lacked the personnel, the files, and the budget needed to "govern" in the modern sense.
She did obtain an effective espionage service when Sir Francis Walsingham started paying for secret information out of his own pocket; Elizabeth then authorized him to supplement those payments from her treasury. Even so, the English secret service was really just a group of Sir Francis' servants and retainers, and he was a personal retainer of the Queen. When Walsingham died, so did the organization.
In men like Walsingham, we see the origins of government. He was a professionally trained expert (a lawyer), not a nobleman with any hereditary powers. He held an appointed office, Mr. Secretary, which he was free to quit. He structured his civil service as a bureaucracy and tried to serve the permanent interests of England as a Protestant state, not merely those of his Queen. However, had Elizabeth married François, the Duke of Anjou and Alençon (as she threatened), then Walsingham would have faced a choice. This Puritan lawyer could have become a personal servant of a Catholic French nobleman, or he could have quit public life.
The medieval case shows that we could have elaborate social structures without governments; that is a relevant conclusion at a time of globalization, when governments are losing authority over fixed territories. It is not clear, however, that we can have elaborate social structures and personal liberties without governments.
permanent link | comments (3) | category: philosophy
October 24, 2006
smelling memories
(On my way back to Chicago for another meeting.) Sit quietly, close your eyes, and recall the scent of a lemon ... soy sauce ... pepper ... gasoline ... a baked apple. Inhale through your nose as you remember these smells. I find this entertaining, and I can get quite precise about it. For example, I can choose whether to remember a bitter lemon smell (with some of the white pith), or the pure scent of the inside of the fruit.
It appears that memories of smells decay more slowly than other sensory memories. This is a bit surprising, because "each olfactory neuron in the epithelium only survives for about 60 days, to be replaced by a new cell." Dr. Maturin in one of the Patrick O'Brian novels notices the power of smells to restore memories and hypothesizes that it's because we don't have many words for scents. He thinks that because we translate our visual and auditory experiences into language, we tend to forget them, whereas we retain our olfactory sensations in their raw form.
When people (like O'Brian and Proust) write about memory and smell, they usually describe the power of real scents to evoke lost memories. The reverse is interesting, too: the power of deliberate recollection to conjure up imaginary smells.
permanent link | comments (5) | category: philosophy
September 26, 2006
torture: against honor and liberty
In the Hamdan decision, the Supreme Court said that torture was our responsibility. We couldn't allow the president to decide secretly whether and when to obey the Geneva Convention. There would have to be a public law, passed by our representatives, subject to our review at the next election.
Alas, the Congress appears likely to pass legislation that will permit torture, buoyed by polls that suggest the American people prefer to sacrifice our ancient common law principles in favor of spurious security. Our national honor and liberty are at risk. Those are old-fashioned terms, more securely anchored in conservative than in progressive thought. Yet they are precisely the correct terms, as I shall argue here.
Torture is dishonorable because of the perverted personal relationship that it creates between the torturer and the victim. That is why people of honor do not torture, and nations with honor do not condone it. As David Luban writes: "The torturer inflicts pain one-on-one, deliberately, up close and personal, in order to break the spirit of the victim--in other words, to tyrannize and dominate the victim. The relationship between them becomes a perverse parody of friendship and intimacy: intimacy transformed into its inverse image, where the torturer focuses on the victim's body with the intensity of a lover, except that every bit of that focus is bent to causing pain and tyrannizing the victim's spirit."
Torture may not be the worst injustice. To bomb from 30,000 feet can be more unjust, because more may die. To imprison 5.6 million Americans may be more unjust, because one in 37 of us spends months or years in dangerous, demeaning, state-run facilities. But there is a difference between injustice and dishonor. Bombing people and locking them up are impersonal, institutional acts. Torture is as intimate as rape. It sullies in a way that injustice does not. That is why the House of Lords ruled in 2005: "The use of torture is dishonourable. It corrupts and degrades the state which uses it and the legal system which accepts it."
Torture threatens liberty because it gives the state the power to generate testimony and evidence contrary to fact, contrary even to the will of the witness. It thus removes the last constraint against tyranny, which is truth. Torture has been forbidden in English common law since the Middle Ages, not because medievals were squeamish about cruelty--their punishments and executions were spectacularly cruel--but because a king who could use torture in investigations and interrogations could reach any conclusions he wanted.
Torture is personal, yet torture is an institution. One cannot simply decide to torture in a one-off case, a hypothetical instance of a ticking time bomb. To be effective, torture requires training, equipment, expertise, and settings. The bureaucracy of torture then inevitably seeks to justify and sustain itself--if necessary, by using torture to generate evidence of its effectiveness. As Phronesisaical says, "Torture requires an institution of torture, which ... entails a broader torture program than the administration would have us believe." Again, the Lords were right:
The lesson of history is that, when the law is not there to keep watch over it, the practice is always at risk of being resorted to in one form or another by the executive branch of government. The temptation to use it in times of emergency will be controlled by the law wherever the rule of law is allowed to operate. But where the rule of law is absent, or is reduced to a mere form of words to which those in authority pay no more than lip service, the temptation to use torture is unrestrained.
permanent link | comments (0) | category: Iraq and democratic theory , philosophy
September 25, 2006
being Pope means never having to say you're sorry
I have now read the full text of Pope Benedict's Sept. 12 lecture, a passage of which provoked global controversy and violence. I read it with an open mind and genuine interest, but it seems to me that the section on Islam is gratuitous and rather poorly argued.
As the Pope said in his quasi-apology, he meant his discussion of Islam to be incidental to his main theme, which concerns the relationship between faith and reason in Christianity. This is the skeleton of his argument:
The Greeks, being philosophical, decided that God could not (or would not) act "unreasonably": in other words, against logos. On this basis, Socrates and other sophisticated Greek thinkers rejected myth, which had described gods acting arbitrarily. Their equation of divinity with reason already influenced Jewish thought before Jesus' time. The Hebrew Bible evolved from mythical thinking toward an abstract, rational, omniscient deity (first evident in the words from the burning bush: "I am"). The association of reason with divinity was also essential in the Gospels, as shown by John's prologue: "In the beginning was ho logos."
According to Benedict, the union of faith and reason naturally took place in Europe, where reason had been born, not in the irrational East: "Given this convergence, it is not surprising that Christianity, despite its origins and some significant developments in the East, finally took on its historically decisive character in Europe."
However, faith and reason have come apart in Europe since the 16th century. First Protestants tried to strip the Bible of Greek metaphysics and treat it only as a sequence of literal events. Liberal theologians (including some Catholics) reinforced this tendency when they advocated a "return simply to the man Jesus and to his simple message, underneath the accretions of theology and indeed of hellenization."
It is a mistake to drive philosophical reason out of religion, Benedict argues, because God is rational and can be understood by means of philosophy. It is also an error to imagine science without faith:
[The] modern concept of reason is based, to put it briefly, on a synthesis between Platonism (Cartesianism) and empiricism, a synthesis confirmed by the success of technology. On the one hand it presupposes the mathematical structure of matter, its intrinsic rationality, which makes it possible to understand how matter works and use it efficiently: this basic premise is, so to speak, the Platonic element in the modern understanding of nature. On the other hand, there is nature's capacity to be exploited for our purposes, and here only the possibility of verification or falsification through experimentation can yield ultimate certainty.
Because modern rationality assumes that nature has a mathematical character, science hints at transcendence. But because it views empirical verification as the criterion of rationality, it rules out the possibility of God. This is a contradictory position, Benedict thinks. He recommends that we "acknowledge unreservedly" the benefits of science, yet we must "[broaden] our concept of reason and its application" so that it can encompass faith. By reuniting faith and reason, the West will reopen a dialogue with "profoundly religious cultures," which cannot fathom "a reason which is deaf to the divine."
All of the above seems fairly mainstream for a conservative Catholic theologian. But the Pope chooses to illustrate his argument with a digression about Islam. He says that for the Byzantine emperor Manuel II Paleologus, "spreading the faith through violence is something unreasonable. Violence is incompatible with the nature of God and the nature of the soul." This "statement is self evident" to "a Byzantine shaped by Greek philosophy." In contrast, for an "educated Persian" who debates Paleologus, "God is absolutely transcendent ..., not bound even by his own word."
This is a very odd example to support Benedict's major point. Did Paleologus really emphasize that conversion by the sword was "unreasonable"--incompatible with logos--and thus alien to God? Or did he simply say that it was wrong? Did the Persian really reply that God was "absolutely transcendent," and therefore it was appropriate to convert people forcibly despite the dictates of reason? Or did the Persian agree with the Emperor about forcible conversion, citing Qur'an 2:256: "There shall be no compulsion in religion: the right way is now distinct from the wrong way."
Benedict calls this passage from the Qur'an "one of the suras of the early period, when Mohammed was still powerless and under threat." Later, according to Benedict, Mohammed preached holy war. I am not competent to assess that interpretation of the Qur'an. But I would note a resemblance between Paleologus and the young Mohammed: both led groups who were very vulnerable to conquest. Indeed, Byzantium soon fell to a Moslem army (one that tolerated Christians and Jews). On the other hand, when Christians have been triumphant, they have not always been eager to argue that faith must be voluntary.
David Cook writes, "Islam was not in fact 'spread by the sword'--conversion was not forced on the occupants of conquered territories--but the conquests created the necessary preconditions for the spread of Islam." One could write exactly the same thing about Christianity. For example, the Catholic Encyclopedia notes the advantages enjoyed by the first Franciscans in Mexico: "The fact that they had found the territory conquered, and the inhabitants pacified and submissive, had greatly aided the missionaries; they could, moreover, count on the support of the Government, and the new converts on its favour and protection."
The Catholic Encyclopedia denies that Mexican natives were converted by force, but there were certainly wars declared for the purpose of converting countries to Christianity. As the Encyclopedia itself states: "The meaning of the word crusade has been extended to include all wars undertaken in pursuance of a vow, and directed against infidels, i.e. against Mohammedans, pagans, heretics, or those under the ban of excommunication. The wars waged by the Spaniards against the Moors constituted a continual crusade from the eleventh to the sixteenth century; in the north of Europe crusades were organized against the Prussians and Lithuanians; the extermination of the Albigensian heresy was due to a crusade, and, in the thirteenth century the popes preached crusades against John Lackland and Frederick II."
Thus I can imagine the "educated Persian" (a patronizing description, by the way) arguing that mass conversions to Christianity have often followed conquest. He could have observed cases in which Moslems tolerated Jews and Christians and cited the Book of Revelation to illustrate Christian bloodthirstiness: "And out of his mouth goeth a sharp sword, that with it he should smite the nations: and he shall rule them with a rod of iron: and he treadeth the winepress of the fierceness and wrath of Almighty God."
The Pope was widely criticized for his lecture. As we know, he issued a new statement:
At this time, I wish also to add that I am deeply sorry for the reactions in some countries to a few passages of my address at the University of Regensburg, which were considered offensive to the sensibility of Muslims. These in fact were a quotation from a medieval text, which do not in any way express my personal thought.
I by no means condone violent reactions to Pope Benedict's lecture. However, it strikes me that:
1) The digression about Islam and violence was gratuitous in an essay supposedly about faith and reason;
2) The Emperor Paleologus was obviously quoted to express Benedict's personal thoughts;
3) The equation of Europe with reason (and the East with arbitrariness) is disturbing; and
4) It shows bad faith to depict Islam as a religion spread by the sword without at least noting the advantages that Christianity has reaped from violence.
permanent link | comments (0) | category: philosophy
August 24, 2006
how to respond to the terror risk
A diverse range of people are arguing that we have overreacted to terror threats after 9/11. Their arguments include the following:
1) The statistical risk of being killed by a terrorist is very low. As John Mueller writes in a paper for the libertarian Cato Institute (pdf), "Even with the September 11 attacks included in the count, the number of Americans killed by international terrorism since the late 1960s (which is when the State Department began counting) is about the same as the number of Americans killed over the same period by lightning, accident-causing deer, or severe allergic reaction to peanuts."
2) Responses to terror, however, can be very costly. Consider the price and inconvenience of airport screening procedures. Or the deaths caused when people drive instead of fly because they are afraid of terror. Or public support for the Iraq war.
3) Acting terrified of terror encourages terrorists. It means that they can damage America simply by talking about plots. There is an emerging "we-are-not-afraid" movement that argues we ought to react to terrorist threats in a calm and unruffled manner. The alleged British bombing plot probably shows a desire to blow up airplanes, but the conspirators may have been far from being able to pull off the terror of which they dreamed. (Phronesisaical has links.)
4) Fear of terror steers public resources to certain agencies and companies that have an incentive to stoke the fear further.
5) Irrational fear of terror distorts public opinion, to the advantage of incumbent politicians. Some see evidence of Machiavellian manipulation; but Mueller draws a more cautious conclusion: "There is no reason to suspect that President Bush's concern about terrorism is anything but genuine. However, his approval rating did receive the greatest boost for any president in history in September 2001, and it would be politically unnatural for him not to notice. ... This process is hardly new. The preoccupation of the media and of Jimmy Carter's presidency with the hostages taken by Iran in 1979 to the exclusion of almost everything else may look foolish in retrospect. ... But it doubtless appeared to be good politics at the time--Carter's dismal approval rating soared when the hostages were seized."
I think these are good points, but there is another side to consider. It's unreasonable to adopt a strictly utilitarian calculus that treats all deaths as equally significant. Every human being counts the same, yet we are entitled to care especially about some tragic events. If deaths were fungible, then none would really matter; they would all be mere statistics.
In particular, as a nation, we are entitled to care more about the 2,700 killed on 9/11 than about the roughly similar number of deaths from tonsil cancer in 2001. Pure utilitarianism would tell us that 9/11 happened in the past; thus it's irrational to do anything about it, other than to try to prevent a similar disaster in the future. And it's irrational to put resources into preventing a terrorist attack if we could prevent more deaths by putting the same money and energy into seat belts or cancer prevention. However, the attack on 9/11 was a story of hatred against the United States, premeditated murder, acute suffering, and heroic response. Unless we can pay special attention to moving stories, there is no reason to care about life itself.
In my view, we can rationally respond to 9/11 by bringing the perpetrators to justice, even at substantial cost, and even if they pose no threat. That violates the utilitarian reasoning that underlies Mueller's argument. However, note that the Bush administration has not brought Bin Laden to justice. Also note that the 9/11 story may justify vengeance, but it does not justify excessive fear about similar attacks.
Finally, we must think carefully about responsibility. On a pure utilitarian calculus, we might be better off with virtually no airport security. A tiny percentage of people would be killed by bombers, because there aren't very many terrorists with the will and the means to kill. By getting rid of airport screenings, we would save billions of dollars and vast amounts of time, and possibly even save lives by encouraging more people to fly instead of drive. But this reasoning doesn't work. If a government cancelled airport screening procedures, some people would die, and it would not be irrational to pin the responsibility for those deaths on the government.
Thus no government can dismiss the terror threat, because people understandably hold the national security apparatus responsible for protecting them against terror. In contrast, protection against tonsil cancer is not seen as a state responsibility. I like the following passage by Senator McCain (quoted in Mueller), but I'm not sure that any administration could get away with using it as an anti-terror policy:
Get on the damn elevator! Fly on the damn plane! Calculate the odds of being harmed by a terrorist! It's still about as likely as being swept out to sea by a tidal wave. Suck it up, for crying out loud. You're almost certainly going to be okay. And in the unlikely event you're not, do you really want to spend your last days cowering behind plastic sheets and duct tape? That's not a life worth living, is it?
permanent link | comments (5) | category: philosophy
August 16, 2006
the difference between economics and psychology
To tell the truth, I have never taken a single course in either economics or psychology. However, my professional interests have led me to read a fair amount in both disciplines and to talk to scholars of both persuasions. I think I have noticed a basic difference.
Economists are interested in concrete actions: behaviors. They began by studying financial exchanges, but now they will investigate practically anything, including learning, war, marriage, and civic participation, as long as it involves observable or reportable acts. In contrast, psychologists (since the decline of behaviorism) are interested in mental states, many of which are not directly observable. You can't see what someone's identity or mood or capacity is, nor can you necessarily ask the person directly. Psychologists tend to measure these mental states by asking many questions or making many observations and creating statistically reliable "constructs." Thus they like to use scales and factor analysis. (See this apparently classic 1955 paper.) Economists are suspicious of such constructs because there is always an imperfect correlation between the construct and its directly measured components.
I don't think you can tell the difference between the disciplines by asking what they study: psychologists explore human behavior in markets, and modern economists investigate practically everything. Instead, the divide is between a kind of empiricism or nominalism that distrusts general constructs, versus a kind of philosophical "realism" that takes unobserved mental states seriously.
As for political science--with apologies to my many friends in that field--it isn't a discipline at all, but rather a topic area that uses methods from economics, psychology, philosophy, and narrative history.
permanent link | comments (2) | category: philosophy
March 23, 2006
democracy as education, education for democracy
I've been commissioned to write an article about John Dewey's 1927 book, The Public and its Problems, and what it implies for contemporary democratic practice. Given my own interests, I have focused on its implications for public deliberation and civic education. My whole first draft is pasted "below the fold" for anyone who's interested in Dewey or the philosophy of democratic education.
For John Dewey, the link between democracy and learning was profound and reciprocal. Dewey defined "democracy" as any process by which a community collectively learns, and "education" as any process that enhances individuals' capacity to participate in a democracy. Although these definitions pose difficulties, they constitute an insightful and original theory that remains relevant 80 years after Dewey wrote The Public and its Problems. His theory is especially illuminating for those concerned about public deliberation and civic education.
On a conventional definition, "democracy" is a system of government that honors equity and freedom. In a democracy--or so we are taught--every adult has one vote, and all may speak freely. For Dewey, however, such rules were merely tools that happened to be in current use. No institution (including free elections and civil rights) could claim "inherent sanctity." There were no general principles, no "antecedent universal propositions," that distinguished just institutions from unjust ones. The nature of the good society was "something to be critically and experimentally determined." [1927, p. 74]
As described so far, Dewey's theory of democracy gives no guidance and makes no distinctions. If we reject all "antecedent universal propositions," then we cannot know that a system of free elections is better than a tyranny. However, Dewey had one profound commitment, to collective learning. Thus he valued the American constitutional system, not because all human beings were truly created equal, and not because elections would generate fair or efficient outcomes, but because democracy promoted discussion, and discussion was educative. "The strongest point to be made in behalf of even such rudimentary political forms as democracy has already attained, popular voting, majority rule and so on, is that to some extent they involve a consultation and discussion which uncover social needs and troubles."[1927, p. 206]
If learning is our goal, then we could spend our time reading books or observing nature. However, the kind of learning that Dewey valued most was social and experiential. A democracy was a form of social organization in which people realized that they were interconnected and learned by working together. "Wherever there is conjoint activity whose consequences are appreciated as good by all singular persons who take part in it, and where the realization of the good is such as to effect an energetic desire and effort to sustain it in being just because it is a good shared by all, there is in so far a community. The clear consciousness of a communal life, in all its implications, constitutes the idea of democracy." [1927, p. 149]
It might seem strange to evaluate societies and institutions largely as opportunities for collective education. But that approach emerged from Dewey's beliefs about the purpose of life itself. In Democracy and Education (1916), he argued that individual life had value as experience; and the richer the experience, the better. The value of a society was to permit individuals to share and enlarge their experiences by communicating. "The ulterior significance of every mode of human association," he wrote, is "the contribution which it makes to the improvement of the quality of experience." [1916, p. 12] It followed that a "democracy is more than a form of government; it is primarily a mode of associated living, of conjoint communicated experience." [1916, p. 93]
I think that Dewey's rejection of universal propositions in favor of continuous collective learning was problematic. As he noted, "every social institution is educative in effect." [1916, p. 12] However, not every educative institution is democratic. Consider science, which Dewey valued very highly. Science is a collective enterprise and an excellent means of learning. However, when it works as advertised, it is meritocratic, not democratic. If we equate democracy with collective learning, then we may weaken our commitment to equality and try to organize the government on the same principles as science (as Dewey recommended in Liberalism and Social Action, 1935), or we may try to democratize scientific research. Both reforms are mistakes, in my view.
Or consider any society in which some oppress others and deprive them of rights. Such arrangements are consistent with "learning": the oppressors learn to dominate, and the oppressed learn to manage. Indeed, the two classes learn together, and they may learn continuously. I would deny that such a system is democratic, because it violates antecedent principles of equality. But Dewey's deep pragmatism prevented him from endorsing such external principles.
In Democracy and Education, Dewey recognized that "in any social group whatever, even in a gang of thieves, we find some interest held in common, and we find a certain amount of interaction and cooperative intercourse with other groups. From these two traits we derive our standard. How numerous and varied are the interests which are consciously shared? How full and free is the interplay with other forms of association?" In a "criminal band," Dewey thought, the shared interests must be narrow ("reducible almost to a common interest in plunder") and the group must isolate itself from outsiders. [1916, p. 89]. In a good society, by contrast, everyone has everyone else's full range of interests at heart and there are dense networks connecting all sectors.
This ideal seems more satisfactory than a simple commitment to "learning," but it relies on the kind of abstract moral principles that Dewey elsewhere rejects. For example, concern for the holistic wellbeing of all fellow human beings is a strong moral commitment, characteristic of Kantianism. It does not derive logically from the concept of communal learning, but is a separate principle. It is not clear to me how a Deweyan pragmatist can embrace it.
Notwithstanding this qualification, there is much of value in Dewey's theory. For those who promote concrete experiments in public deliberation, a theory of democracy-as-learning is inspirational. It explains why adults should be, and are, motivated to gather and discuss public problems: discussion is virtually the purpose of human life. Dewey's theory also provides a response to those who say that small-scale public deliberation is "just talk," that it lacks sufficient impact on votes and policies. Dewey would reply that the heart of democracy is not an election or the passage of a law, but personal growth through communication. "There is no liberal expansion and confirmation of limited personal intellectual endowment which may not proceed from the flow of social intelligence when that circulates by word of mouth from one to another in the communication of the local community." [1927, p. 219]
Dewey's endorsement of verbal communication does not mean, however, that speech should be disconnected from action. "Mind," he thought "is not a name for something complete by itself; it is a name for a course of action in so far as that is intelligently directed." [1916, p. 139] Likewise, deliberation (which is thinking by groups) should be linked to concrete experimentation. Public deliberation is most satisfying and motivating-and most informed and disciplined-when the people who talk also act: when they argue from personal, practical experience and when their decisions have consequences for their individual and collective behavior.
Dewey was a developmental thinker: he understood that human beings change over the course of their lives and that a society needs different contributions from each generation. For adults, learning must be collective and voluntary. Adults cannot be given reading assignments on government or public affairs. The forms of adult learning that most interested Dewey were face-to-face adult deliberations, membership in voluntary associations, and communication via the mass media (in his day, newspapers and radio).
However, in a complex society, he thought, children have too much to learn in too short a time for them to be allowed simply to experience discussions and associations. For them, "the need of training is too evident; the pressure to accomplish a change in their attitude and habits is too urgent. ... Since our chief business with them is to enable them to share in a common life we cannot help considering whether or no we are forming the powers which will secure this ability." Thus the need for a "more formal kind of education": in other words, "direct tuition or schooling." [1916, p. 10] Note again that the purpose of education is to prepare students to "share in a common life" of continual learning.
Contrary to what some critics of Dewey claim, he favored "direct tuition" as an efficient means of transmitting accumulated knowledge to children so that they could become competent citizens within a reasonable amount of time. However, he recognized that merely imparting information was not good pedagogy. "Formal instruction ... easily becomes remote and dead--abstract and bookish, to use the ordinary words of depreciation." [1916, p. 11] Besides, the most profound effects of education (for better or worse) came from the way schools operated as mini-societies, not from the formal curriculum. "The development within the young of the attitudes and dispositions necessary to the continuous and progressive life of a society cannot take place by direct conveyance of beliefs, emotions, and knowledge. It takes place through the intermediary of the environment." [1916, p. 26] In other words, what adults demonstrated by how they organized schools was more important than what they told their students in lectures and textbooks.
Dewey argued that young people were more "plastic" than their elders, more susceptible to being deliberately educated. Recent research bears him out. There is ample evidence that civic experiences in adolescence have lasting effects. For example, in an ongoing longitudinal study of the high school class of 1965, Kent Jennings and his colleagues have found that participation in student government and other civic extracurricular activities has a positive effect on people's participation in civil society almost forty years later. More than a dozen longitudinal studies of adolescent participation in community service have found positive effects as much as ten years later. And Doug McAdam's rigorous study of the Freedom Summer voting-rights campaign shows that the activists' experience in Mississippi (admittedly, an intense one) permanently transformed them.
In contrast, few studies of deliberately educative civic experiences find lasting effects on adult participants. We can explain the difference as follows. Young people must form some opinion about politics, social issues, and civil society when they first encounter those issues in adolescence. Their opinion may be the default one (disinterest) or it may be critical engagement, enthusiastic support, or some other response. Once they have formed a basic orientation, it would take effort and perhaps some psychological distress to change their minds. Therefore, most young adults settle into a pattern of behavior and attitudes in relation to politics that lasts for the rest of their lives, unless some major shock (such as a war or revolution) forces them to reconsider. In a country like the United States, when adults change their political identities, the change results from voluntary experiences, not from exhortations or any form of mandatory civic education.
It would be immoral to write off adults because they are much less "plastic" than adolescents and less susceptible to deliberate civic education. But it is crucial to invest in the democratic education of young people, since they will be permanently shaped by the way they first experience politics, social issues, and civil society. Civic education, as Dewey recommended, must include not only formal instruction but also concrete experiences and the whole "environment" of schools. Indeed, "one of the weightiest problems with which the philosophy of education has to cope is the method of keeping a proper balance between the informal and the formal, the incidental and intentional, modes of education." [1916, p. 12]
Dewey and some of his contemporaries tried to "reorganize" American education "so that learning takes place in connection with the intelligent carrying forward of purposeful activities." [1916, p. 144]. Dewey called this reorganization "slow work," and it did encounter many frustrations. Nevertheless, he and his fellow educational Progressives achieved some striking reforms.
First, to give students opportunities for purposeful civic activities, the Progressives founded student governments and school newspapers. Evaluations find that these activities have lasting positive effects on students' civic engagement, yet the percentage of American students who participate has declined by 50 percent since the 1960s, in large part because high schools have been consolidated. (Fewer schools means fewer school governments and newspapers.)
The Progressives also created the first courses on "civics" and "social studies." These subjects grew at the partial expense of history, which followers of Dewey saw (mistakenly, in my opinion) as an overly "academic" discipline. In 1915, the US Bureau of Education formally endorsed a movement for "community civics" that was by then quite widespread. Its aim was "to help the child know his community--not merely a lot about it, but the meaning of community life, what it does for him and how it does it, what the community has a right to expect from him, and how he may fulfill his obligations, meanwhile cultivating in him the essential qualities and habits of good citizenship."
In 1928-9, according to federal statistics, more than half of all American ninth-graders took "civics." That percentage had fallen to 13.4 by the early 1970s. In 1948-9, 41.5 percent of American high school students took "problems of democracy," another Progressive innovation, which typically involved reading and debating stories from the daily newspaper. By the early 1970s, that percentage was down to 8.9.
Nevertheless, the percentage of high school students who have taken any government course has been basically steady since 1915-1916. Although the historical data have gaps, it appears most likely that "civics" and "problems of democracy" have disappeared since 1970, while American history, world history, and American government have either stayed constant or grown. As Nathaniel Schwartz notes, the old civics and problems of democracy textbooks addressed their readers as "you" and advocated various forms of participation. Today's American government texts discuss the topics of first-year college political science: how a bill becomes a law, how interest groups form, how courts operate. Social studies arose during the Progressive Era, when philosophical pragmatists argued for a curriculum of practical relevance to democracy. Social studies and civics seem to be waning at a time when academic rigor is the first priority and high schools take their cues from colleges.
Finally, Dewey and his allies were interested in the overall design of schools: their location, physical architecture, bureaucratic structure, and rules of admission and graduation. They sought to integrate schools into the broader community and to make them into democratic spaces in which young people and adults would practice citizenship by working together on common tasks.
Today, however, many students attend large, incoherent, "shopping mall" high schools that offer long lists of courses and activities, as well as numerous cliques and social networks. Students who enter on a very good track or who have positive support from peers and family may make wise choices about their courses, friends, co-curricular activities, and next steps after graduation. They can obtain useful civic skills and habits by choosing demanding courses in history and social studies, by joining the student newspaper or serving in the community, and by interacting with administrators. However, relatively few students--usually those on a path to college--can fill these roles in a typical high school. Other students who are steered (or who steer themselves) into undemanding courses and away from student activities will pay a price for the rest of their lives. Serious and lasting consequences follow from choices made in early adolescence, often under severe constraints.
Typical large high schools also tend to have frequent discipline problems, a general atmosphere of alienation, and internal segregation by race, class, and subculture. Often, they occupy suburban-style campuses, set far apart from the adult community of work, family, religion, and politics. Even worse, some of these huge schools occupy prison-like urban blocks, secured with gates and bars. Parents and other adults in the community have little impact on these big, bureaucratic institutions. Therefore, schools are rarely models of community participation, nor do they create paths for youth to participate in the broader world.
Although large high schools offer opportunities for self-selected students to be active citizens--running for the student government, creating video broadcast programs, and engaging in community service--most of their fellow students have no interest in their work. Why pay attention to the student government, or watch a positive hip-hop video that your peers have produced, if you do not share a community with them? Commercial products are more impressive and entertaining.
Since the 1960s, one of the most consistent findings in the research on civic development is the following: Students who feel that they and their peers can have an impact on the governance of their own schools tend to be confident in their ability to participate in their communities and interested in public affairs. However, it is impossible for anyone to influence the overall atmosphere and structure of a huge school that offers a wide but incoherent range of choices and views its student population merely as consumers. To make matters worse, school districts have been consolidated since Dewey's time, so that there are dramatically fewer opportunities for parents and other adults to govern their own public schools. According to data collected by Elinor Ostrom, the number of elected school board seats has shrunk by 86% since 1930, even as the population has more than doubled.
Those with the most education (relative to their contemporaries) are by far the most likely to participate in democracy--which suggests that education prepares people for citizenship. During the course of the twentieth century, each generation of Americans attained, on average, a higher level of education than those before. Educational outcomes also became substantially more equal. When we put these facts together, we might assume that participation must have increased steadily during the 1900s. On the contrary, voting rates are considerably lower than they were a century ago; levels of political knowledge are flat; membership in most forms of civic association is down; and people are less likely to say that they can make a difference in their communities.
Although many causes have been suggested for these declines, part of the problem is surely a decline in the quality of civic education. People are spending many more years in school, but getting less education for democracy. What we need is just what Dewey and his allies championed--not merely government classes (although they have positive effects and are in danger of being cut), but also community-service opportunities that are connected to the academic curriculum, student governments and student media work, and the restructuring of schools so that they become coherent communities reconnected to the adult world.
--
Sources
Dewey, John, Democracy and Education, 1916 (Carbondale and Edwardsville: Southern Illinois University Press, 1985).
----------------, The Public and Its Problems (New York: Henry Holt, 1927)
permanent link | comments (2) | category: philosophy
January 16, 2006
an exercise for Martin Luther King Day
I find it useful to teach WALKER v. CITY OF BIRMINGHAM, 388 U.S. 307 (1967) as an example of legal and moral reasoning. This is the case that originated with the arrest of Martin Luther King and 52 others in Birmingham, AL, at Easter, 1963. It is a rich example for exploring the rule of law, civil disobedience, religion versus secular law, procedures versus justice, and even the way that our moral conclusions follow from how we choose to tell stories.
By way of background:
In 1963, the Southern Christian Leadership Conference (SCLC) hoped to generate massive protests in Birmingham before the end of the term of Eugene 'Bull' Connor, the violently racist Commissioner of Public Safety. As the protests began, Connor obtained a state-court injunction against the marchers. When the SCLC leaders received the injunction on April 11, they stated, "we cannot in good conscience obey" it. King called it a "pseudo" law which promotes "raw tyranny under the guise of maintaining law and order."
At this point, the Direct Action campaign is in crisis: there have been only 150 arrests so far, and no more bail credit is available. On April 12 (Good Friday), Norman Amaker, an NAACP lawyer, says that the injunction is unconstitutional, but breaking it will result in jail time. King disappears from a tense conference, reappears in jeans. "I don't know what will happen ... But I have to make a faith act. ... If we obey this injunction, we are out of business." Leads 1,000 marchers; he and 52 others are arrested. He is sent to solitary confinement. In NYC, Harry Belafonte raises $50,000 for bail. The New York Times and President Kennedy condemn marches as ill-timed.
April 15 (Easter Sunday): MLK is released from solitary confinement, still in jail. Writes "Letter from a Birmingham Jail."
April 26: King is sentenced to five days with a warning not to protest. Sentence is held in abeyance.
May 2: Children's march. King: “We subpoena the conscience of the nation to the judgment seat of morality."
May 20: Supreme Court strikes down Birmingham's segregation ordinances. A deal is worked out.
September: bomb kills four little girls at Birmingham's Sixteenth Street Baptist Church.
SCLC appeals King's conviction for two reasons: to overturn the Birmingham parade ordinance, and to prevent future uses of injunctions against civil rights marchers. The case is [Wyatt Tee] Walker v. City of Birmingham. It is not decided until 1967 by the Supreme Court, which upholds King's arrest and imprisonment on basically procedural grounds:
The text of the Supreme Court decision, written by Potter Stewart | My commentary and questions |
On Wednesday, April 10, 1963, officials of Birmingham, Alabama, filed a bill of complaint in a state circuit court asking for injunctive relief against 139 individuals and two organizations. | With whom does the opinion begin? How are those people described? What do we usually think of when we hear "city officials"? How else could these particular men be described? (Hint: the Klan was powerfully influential in city government). How would the narrative read if it started with King and the other civil rights leaders? |
The bill and accompanying affidavits stated that during the preceding seven days: ... | How are the petitioners described? Were the petitioners a "mob" -- or a group of citizens assembled to petition for the redress of their grievances? Is there a correct answer to this question? What is not said about them? What context is missing? What are their alleged actions? How else could the SCLC's actions be described? |
The bill stated that these infractions of the law were expected to continue and would "lead to further imminent danger to the lives, safety, peace, tranquility and general welfare of the people of the City of Birmingham," and that the "remedy by law [was] inadequate." | Apart from unrest, what else might the city officials fear? |
The circuit judge granted a temporary injunction as prayed in the bill, enjoining the petitioners from, among other things, participating in or encouraging mass street parades or mass processions without a permit as required by a Birmingham ordinance | Is the ordinance constitutional? If not, why not? Why did Connor get an injunction instead of arresting people under the ordinance? Does the opinion explain his motivations? Would it read differently if it did? |
Five of the eight petitioners were served with copies of the writ early the next morning. Several hours later four of them held a press conference. There a statement was distributed, declaring their intention to disobey the injunction because it was "raw tyranny under the guise of maintaining law and order." At this press conference one of the petitioners stated: "That they had respect for the Federal Courts, or Federal Injunctions, but in the past the State Courts had favored local law enforcement, and if the police couldn't handle it, the mob would." That night a meeting took place at which one of the petitioners announced that "[i]njunction or no injunction we are going to march tomorrow." The next afternoon, Good Friday, a large crowd gathered in the vicinity of Sixteenth Street and Sixth Avenue North in Birmingham. A group of about 50 or 60 proceeded to parade along the sidewalk while a crowd of 1,000 to 1,500 onlookers stood by, "clapping, and hollering, and [w]hooping." | Does the SCLC "respect" the state courts? Should it? Why are the SCLC's disrespectful words quoted here? (See footnote #3: petitioners "contend that the circuit court improperly relied on this incident in finding them guilty of contempt, claiming that they were engaged in constitutionally protected free speech. We find no indication that the court considered the incident for any purpose other than the legitimate one of establishing that the participating petitioners' subsequent violation of the injunction by parading without a permit was willful and deliberate." Why then quote them verbatim?) The crowd is described as "hollering and [w]hooping." How else could they be described? Who's being quoted here? |
Some of the crowd followed the marchers and spilled out into the street. At least three of the petitioners participated in this march. Meetings sponsored by some of the petitioners were held that night and the following night, where calls for volunteers to "walk" and go to jail were made. On Easter Sunday, April 14, a crowd of between 1,500 and 2,000 people congregated in the midafternoon in the vicinity of Seventh Avenue and Eleventh Street North in Birmingham. One of the petitioners was seen organizing members of the crowd in formation. A group of about 50, headed by three other petitioners, started down the sidewalk two abreast. At least one other petitioner was among the marchers. Some 300 or 400 people from among the onlookers followed in a crowd that occupied the entire width of the street and overflowed onto the sidewalks. Violence occurred. Members of the crowd threw rocks that injured a newspaperman and damaged a police motorcycle. | What of factual significance is described here? Why say "Violence occurred"? (NB: Garrow mentions no violence; Branch says MLK was "suddenly seized without warning by police.") Were the city officials justified in their initial fears? (They feared violence; violence occurred.) Does this make the injunction valid? |
The next day the city officials who had requested the injunction applied to the state circuit court for an order to show cause why the petitioners should not be held in contempt for violating it. At the ensuing hearing the petitioners sought to attack the constitutionality of the injunction on the ground that it was vague and overbroad, and restrained free speech. They also sought to attack the Birmingham parade ordinance upon similar grounds, and upon the further ground that the ordinance had previously been administered in an arbitrary and discriminatory manner. The circuit judge refused to consider any of these contentions, pointing out that there had been neither a motion to dissolve the injunction, nor an effort to comply with it by applying for a permit from the city commission before engaging in the Good Friday and Easter Sunday parades. | Why didn't the SCLC go back to Connor for a permit? How does the Court want the SCLC to treat Connor? Does Connor merit this? |
Consequently, the court held that the only issues before it were whether it had jurisdiction to issue the temporary injunction, and whether thereafter the petitioners had knowingly violated it. Upon these issues the court found against the petitioners, and imposed upon each of them a sentence of five days in jail and a $50 fine, in accord with an Alabama statute. | |
... The generality of the language contained in the Birmingham parade ordinance upon which the injunction was based would unquestionably raise substantial constitutional issues concerning some of its provisions. ... The petitioners, however, did not even attempt to apply to the Alabama courts for an authoritative construction of the ordinance. | What is the Supreme Court's attitude toward the Alabama courts? Were those courts legitimate? |
...The breadth and vagueness of the injunction itself would also unquestionably be subject to substantial constitutional question. But the way to raise that question was to apply to the Alabama courts to have the injunction modified or dissolved. | |
... The petitioners also claim that they were free to disobey the injunction because the parade ordinance on which it was based had been administered in the past in an arbitrary and discriminatory fashion. In support of this claim they sought to introduce evidence that, a few days before the injunction issued, requests for permits to picket had been made to a member of the city commission. One request had been rudely rebuffed, and this same official had later made clear that he was without power to grant the permit alone, since the issuance of such permits was the responsibility of the entire city commission. | Petitioners raise the issue of past discrimination. What kind of discrimination would this have been? (racial) Has race been mentioned at all in the opinion? Why does Justice Stewart say "a member of the city commission" instead of "Connor"? (According to testimony by Lola Hendricks, this is what happened: "I asked Commissioner Connor for the permit, and asked if he could issue the permit, or other persons who would refer me to, persons who would issue a permit. He said, 'No, you will not get a permit in Birmingham, Alabama to picket. I will picket you over to the City Jail,' and he repeated that twice.") Why does Stewart say that Connor "made clear" his lack of authority to issue permits? (Connor actually did issue permits to other groups.) Why not use the words "asserted" or "claimed"? |
This case would arise in quite a different constitutional posture if the petitioners, before disobeying the injunction, had challenged it in the Alabama courts, and had been met with delay or frustration of their constitutional claims. But there is no showing that such would have been the fate of a timely motion to modify or dissolve the injunction. There was an interim of two days between the issuance of the injunction and the Good Friday march. The petitioners give absolutely no explanation of why they did not make some application to the state court during that period. | What was the significance to the Civil Rights leaders of Easter? Why was it important for them to have innocent people jailed on Good Friday and released on Easter Sunday? How does this reasoning and motivation collide with that of the legal system? |
... The rule of law that Alabama followed in this case reflects a belief that in the fair administration of justice no man can be judge in his own case, however exalted his station, however righteous his motives, and irrespective of his race, color, politics, or religion. This Court cannot hold that the petitioners were constitutionally free to ignore all the procedures of the law and carry their battle to the streets. One may sympathize with the petitioners' impatient commitment to their cause. But respect for judicial process is a small price to pay for the civilizing hand of law, which alone can give abiding meaning to constitutional freedom. | The "civilizing hand of law." Does this value count against the marchers? Or against Connor? "... which alone can give abiding meaning to constitutional freedom." Alone? Contrast MLK, in Atlanta (1962): "legislation and court orders can only declare rights. They can never thoroughly deliver them. Only when people themselves begin to act are rights on paper given life blood." |