From: George Kane


Dear Bob,

I have a few comments on your section on morality and ethics [Appendix section - RKK]. You should decide at the outset whether morality is objective or merely a social convention. If it is merely conventional, then how can you expect ever to achieve a universal morality? Why would you want to? Each society should have a morality appropriate to its specific history and its economic and social structure. If, on the other hand, there is an objectivity to morals which you will merely discover, on what is it based? You dismiss divine creation. Perhaps you should consider Kant, who claimed that his ethical system was valid for all rational beings.

You might discover a universal morality by identifying what is common to all moral systems. But if the answer requires a cultural reference (e.g., actions in accordance with precepts that are necessary for the society to satisfy its material needs and minimize conflict), the implementation may be very different in different times and places. On the other hand, if you assert that morality is purely conventional, you have taken a major step towards eliminating the concepts of right and wrong. Moral systems are all designed to influence people to act for reasons other than the satisfaction of selfish, transitory pleasures. To justify the institution of morality you have to come up with persuasive reasons.

Your remark that moral systems are sometimes called "social contracts" has to be put into context. Nobody ever thought there was a social contract before the Age of Enlightenment, when ideology was being shaped by the ascendant merchant class. Certainly there was no contract agreed to by the slaves and serfs who obeyed God-given laws and worked for hereditary masters who ruled by divine right. The contract was an appealing analogy for the time because it enshrined a basic relationship required for the existence of trade and the merchant class. I think that over the centuries the analogy has lost its plausibility.

Your distinction between law and morality was culturally limited. In our society there is a distinction between the two for the reasons you state. But in tribal, nomadic and especially preliterate societies, the distinction is probably moot.

At one point you indicate that you want to design a morality that will start generating precepts. I get the impression that you have in mind a humanist ten commandments. I don't think that moral canons are truly useful, because they are all just "high percentage shots". (The phrase comes from Berkeley chess players of the 1960s, but I hope my intended meaning here is clear.) "Thou shalt not steal" is good advice a very high percentage of the time. But when the theft of e-mail exposed abuses by Chiquita (United Fruit) that included bribery, exposing foreign workers to harmful pesticides, circumventing laws restricting land ownership by setting up dummy landholders and hiding their connection to the parent company, union-busting, etc., I think the theft was a highly moral act.

The point to my mind is that acts are not good or bad due to intrinsic characteristics, but only according to their consequences. Moral judgment requires being able to foresee the consequences of actions, and then evaluating the goodness of those consequences according to defined criteria. I think the overriding criterion has to be some form of the "greatest good for the greatest number" principle you discuss. But an essential point of a teleological, result-based morality, as opposed to a deontological morality such as one given by a god, is that we can never know with certainty whether our actions are good or bad. All that we can do is act upon our best guesses.

So I think that proper moral instruction should not include the recitation of canons, but training in the skills of fact-based decision making.


George,

I do think that morality is a convention, but that some moral conventions are better than others. The more that people benefit by abiding by the convention, the better the convention is. If one tribe's moral convention results in the people being happy, and another tribe's convention results in them being miserable, I would regard the happy tribe's convention as better. That is the tribe I would prefer to be in and the convention I would prefer to follow. The best possible convention could be considered to be the basis for an "absolute" morality.

Since moral conventions facilitate cooperation among people, it helps if as many people as possible share the same convention. If the blue tribe says it is wrong to kill other blues but it is OK to kill members of the evil green tribe, and the green tribe says it is OK to kill blues but not other greens, we have a bad situation likely to result in war. A common moral system in which neither color may be killed would clearly be better. Many bitter wars have been fought in which both sides were basically true to their own moral conventions. So I do feel that it is important that we try to agree on a common moral convention, and that it ought to be the one that is "best". I think that this is what Kant was aiming at, although I have the impression that he avoided any concept of happiness as a factor. To me happiness must be the basis on which we compare alternative sets of moral rules.

As to groups with different moral principles because of different circumstances, a more general set of moral principles should be employed which takes into account the circumstances. If a desert tribe considers wasting water a serious offense but a jungle tribe does not, it is not hard to find a rule that will satisfy both - such as "don't waste water where it is in short supply". Rules such as "do unto others as you would have them do unto you" or "work for the greatest good for the greatest number" are very flexible and should work well for almost every circumstance.

I think it is useful to think of morality as a "social contract" because a contract provides a set of conditions that people follow for mutual benefit, and I think that is what a moral system should be if it is to be at all useful. To be sure, people don't deliberately enter into moral systems like they would a formal contract. Normally they are indoctrinated into the system as children. The "mutual benefit" of a moral system may be questionable in some cultures, particularly as we go back in history.

It seems like some catch-all principles would be beneficial as part of the ideal moral system, although I don't think they should be dogmatic in the sense of the "Ten Commandments". The ideas that we shouldn't assault or steal from people ought to be recognized as universal enough that they should only be violated in unusual circumstances. I don't want people to think that an assassination is moral just because they think the world would be better off without that individual, although I would think that in an extreme enough case, such as Hitler, the overall benefit to humanity might be great enough to justify violating the prohibition against murder. I certainly approve of training in fact-based decision making together with the idea that the decisions ought to be aimed at the benefit of humanity. Still, some canons would be very useful for the cases where the facts are unclear.

Bob


Dear Bob,

Until I read your reply, I thought that I agreed with you 90% of the time on moral questions. I read your central assertions to be:

1. Morality is absolute.
2. The measure of the value of a moral system is happiness.
3. The social contract is the basis for morality.
4. Moral precepts should guide our actions.

I disagree with you on every point!

If you read over your comments, I think you will admit that you present no substantiation for a claim of objectivity of morals. You identify "happiness" as the measure for evaluating which moral system is "preferable", and suggest a high enough level of generality for finding a "common" system. "Preference" and "commonality" are relative terms - they relate different moral systems to each other. Moral absolutism signifies that one moral system is correct, and deviations from it are in error. I think you are a moral relativist, and should be proud of that.

Your judgment that a "happy" tribe's moral system is superior to an "unhappy" tribe's is highly subjective. The moral code of the unhappy tribe has developed to suit its specific needs. If they are continually besieged by invaders, their morality will emphasize tribal loyalty and such martial virtues as courage, sacrifice and subservience to a command hierarchy. You cannot call their moral code a failure because the people are unhappy.

Even if we compare societies in ostensibly comparable situations, you are judging "happiness" as an unqualified outsider. As an example: we idealize democratic Athens, with its illustrious intellectual elite, and demonize militaristic Sparta. I expect that you would judge Athenians to be happier - or more fulfilled or actualized - than Spartans. But Spartans were ferociously dedicated to their social code, and contemptuous of Athens. What society would abandon its moral code or rituals, traditions and customs because some outsider didn't much care for them?

You have already refuted the postulate that moral systems are justified by the greatest happiness of the individuals by pointing out that moral action requires individual sacrifice. People willingly surrender their own lives to defend their nation or ideals. Recognizing this, you should have concluded that the basis of morality cannot be a mere prescription for individual happiness.

Correct judgements in morality, as in economics and politics, are always circumscribed by the historical moment. We have a prejudice for democracy and are opposed to dictatorship, for example, yet undoubtedly there are times when people must surrender self-government to a tyrant. Rome was so traumatized by its civil war that the Republic was surrendered to strong Caesars to avoid another.

My point is that the relative happiness of two societies as judged by an outsider is not an objective basis upon which to validate a moral system. You place a society's morals under a much sharper microscope if you instead judge their appropriateness to its specific circumstances. For example, the "family values" preached today by the Promise Keepers and the rest of the Christian right, in which a wife must stay home to raise the children, are an anachronism in an economy in which exploitation has developed to the point that two incomes are required to sustain the standard of living that one provided a half century ago. Family values must adapt to changed economic and social realities.

The fiction of a social contract fails because the contract is extorted, and therefore invalid. Any court would invalidate a contract to which one side agrees when it has a gun pointed at its head. The acquiescence of individuals to the law is certainly no contract. The choice is between compliance and jail. Now, in your posted article, you point out the difference between a legal system and a moral system. But if morality is a social contract, how do we deal with people who don't sign up? Rakes and rogues and ne'er-do-wells merit our disapprobation, but do they care (or, do we care) unless society enshrines that disapprobation with legal sanctions? The social contract has didactic utility in certain arguments, but always keep in mind that it is entirely a fiction.

I hope I can convince you to abandon your fondness for moral precepts! As I wrote previously, I regard moral precepts as nothing more than high percentage shots. I have no objection to incorporating them into moral instruction as such. We must always be clear, however, that an act is always and only right because of its consequences, never because of compliance to any pre-existing rule. Realizing this, a moral person will always be cautious in his moral decisions in recognition of his limited ability to foresee their consequences. He must be respectful of uncertainty.

To follow your own example, I would have no problem saying to a student of morality, "In most cases one should not assassinate leaders of government for their political actions. Substantial evidence for this includes the severe legal consequences to yourself, the brutalization of the political process and the predictable public reaction of repugnance at the act. One must also balance against the possibility of ending the victim's abhorrent policies the uncertainty of the far-reaching consequences. A moral person must be humble. You have to be damned sure that killing this person will result in better consequences than his continued life, and that it will justify the consequences to you, before you assassinate anyone."

I end with a question: What is the function you are trying to fulfill? If you believe that there are objective moral laws, you are a discoverer who is presenting the evidence for your discovery. If you are a social scientist, you are observing the nature and effects of existing moral conventions. But I think that your true aspiration is to be a moral guru, who will persuade people to a better way. You have in mind what you think would be a nice moral system, and say to the world "let's do it!" Subjectivist errors are inevitable in this Utopian approach. As I remarked earlier, a more productive approach is to evaluate the appropriateness of a society's moral system to its objective social, economic and historic circumstances.


George,

The last question first: What function am I trying to fulfill? In the context of "Responsible Thinking", I want to reduce conflict between people caused by false beliefs. When the conflict results from different beliefs about testable assertions, it can be addressed by scientific methods, and if the resolution is too difficult, both sides should recognize their uncertainty. If the conflict is caused by a difference in moral beliefs, how do we resolve it? It would be helpful to understand the nature of morality, and, ideally, find a common basis for resolving moral questions.

We probably agree on what would be the moral action in at least 90% of problems that might come up, and in fact are probably in general agreement that we seek the greatest good for the greatest number. Some of the remaining disagreements may be semantic.

When I use the word "happiness", I don't intend to imply any momentary emotion, but equate it with whatever people want, assuming they are in a stable mental state. What people want when in a fit of anger or depression is probably temporary and doesn't represent their long-term wants. Different people will want different things, so I don't envision a good moral system imposing predefined conditions on people. Instead it should be set up to make it as easy as possible for everyone to pursue their own goals by promoting cooperation instead of conflict.

I don't care for the word "absolute" as a description of the sort of morality I would promote because it has connotations of rigidity and some sort of magical defining source. However, the "greatest good for the greatest number" concept is fairly objective, and I feel that people's actions that work against this principle should be discouraged (that is, considered wrong) even if their culture approves of those actions. This leaves flexibility for a culture like that of Sparta, which had to impose harsh measures because of a harsh situation, since the measures were (presumably) necessary for the "greatest good". It still leaves us room to criticize things like genocide, torture of heretics, slavery, human sacrifice, and despotism even though they may be regarded as good or acceptable within certain cultures. I am not comfortable judging a culture by whether its practices are appropriate to its circumstances because the word "appropriate" is subjective. The judgement would still be based on moral values, but we aren't being explicit about what those values are.

You suggested that the fact that morality sometimes requires individual sacrifice indicates that it must not be justified on the basis of individual happiness. I agree that morality doesn't necessarily make every individual happier. Based on my own desire to be happy, however, I might want to be part of a society with a moral system because I expect ON THE AVERAGE to be happier. If the system does not improve average happiness, complete amorality would be preferable. In general, the benefit I would expect to gain from the sacrifices of others would exceed the losses I have from my own sacrifices. This is where I see the similarity to a contract. Normally with a contract, my own action tends to cost me something (money or work or possessions), but I enter into it because I expect the actions of the others involved to provide benefits that exceed my costs. I'm not sure why you object so strongly to the reference to contracts. I don't see anything misleading about it that would justify calling the term "social contract" a falsehood. Perhaps someone has used the term in the past to justify some oppressive principles, but it doesn't seem like the concept encourages that.

I agree that specific precepts are indeed "high percentage shots". I think of their use more as a strategy than as an essential part of the justification for morality. The issue would be whether "the greatest good for the greatest number" is better served by promoting precepts or only by promoting the "greatest good" directly. If it turns out that precepts get in the way, then they shouldn't be used. I also could imagine that precepts might be different for groups in different situations, which some might consider a form of cultural relativism. I do think, however, that the greatest good (overall happiness) should be the ultimate measure of all moral action, and this would be universal (absolute) for all cultures.

Bob


I agree that much of what separates us in this argument is semantic. I think, though, that very few moral absolutists would accept your claim to be their colleague. If we simply apply an arithmetic principle of the greatest good for the greatest number, we will end up praising different societies for adopting very different moral codes. That's moral relativism. But I agree with you. I'm not sure of the best formulation of the arithmetic principle, but it is the irreducible minimum underlying value that should decide moral reasoning.

I think that we remain in disagreement over what you characterize as a "strategic" issue, the development of moral precepts. I think that if they become strategic, then we are already violating what should be the core of moral instruction. Precepts should be posited only for tactical merit, to inform a moral decision. But we should never call an act right because it complies with a precept, nor wrong because it violates one.

Let's agree that some form of the greatest good for the greatest number principle should underpin any moral judgment. Moral instruction should discourage slavish obedience to any precepts. An acute moral eye should be especially sensitive to exceptions, and treat each decision on its individual merits. I've previously stated what I consider to be the essential points to be stressed: 1.) Moral judgment requires the ability to foresee the consequences of actions, which puts correct facts and valid reasoning at the crux of morality; and 2.) since our knowledge and reasoning are limited, we must always appreciate that we may be wrong.

You don't understand my objection to the "social contract", and I cannot understand the fascination it has for you. I don't think you have addressed my points that no one in fact signs up (it is a fiction), that if such a contract did exist the courts would declare it invalid because it was coerced and extorted, and that it contains neither a clause for enforcement nor an escape clause. The only reason I see for you to posit it is the didactic utility of explaining why people should sometimes do things that contradict their individual self-interest. But it seems to me that you've already got that covered with your arithmetic principle of the greatest good for the greatest number. Society may justifiably sanction an individual for immoral actions either through law or social disapprobation, because of harm to society (in other words, it violates the greatest good principle), but it's silly to claim that a contract was violated. The social contract adds nothing to your position, Bob. As social allegories go, it's a cripple.


George,

I deliberately avoid calling myself a "moral absolutist" for the very reason that you describe - I differ greatly from most of the people who claim their morality is absolute. However, since you asked me to pick a side, I felt that absolute was more accurate, if perhaps misleading. It may be that you tend to think of anyone who is not an extreme absolutist as a relativist, while I tend to think of anyone who is not an extreme relativist as an absolutist. When I think of moral relativists, I think of post-modernist types that, perhaps overeager to embrace multiculturalism, act as if all moral systems are equally valid. I've also heard comments suggesting that virtually all culturally based moral principles originally derived from some sort of necessity. While some may have this origin, I believe many just derive from accidental tradition and irrational or self-serving dictates of religious and political leaders. The use of the word "social" in "social contract" suggests to me that we are talking about something different from a legal contract. Since the context is the philosophy of morality, it should be pretty obvious to everyone that we are not talking about a legal contract. Whether courts would uphold it or whether it lacks certain "clauses" seems irrelevant to me. The characteristic it shares with traditional contracts is that those involved have an incentive to abide by its requirements because they expect to gain from the conformance of the other people involved.

The idea of a "social contract" cannot be "covered" by the greatest good rule because it is in fact a device to help illustrate why the greatest good rule is worthwhile. It is more sensible to support a "contract" which maximizes the average benefit to each of us than it is to support one that requires abiding by traditional values or the dictates of a leader. While participation in a moral system is not typically voluntary for the mass of the population, we are looking at it as philosophers trying to decide what is a good rule to base morality on. As such, we are not coerced. Perhaps more importantly, if we want to persuade others to abandon arbitrary or counter-productive moral beliefs in favor of the greatest good principle, we need their voluntary compliance.

I wouldn't say the idea of a social contract has "fascination" for me. I simply think it is a useful way of looking at the issue. On the other hand, while I could understand that you might feel the term lacks explanatory value, I'm surprised that you would be hostile enough to the idea that you would use such derogatory terms as "fiction" and "cripple".

Bob


It seems that we will always be at odds on this "social contract." I'll continue to base my moral judgments on the consequences of actions, considering each action upon its individual merits, conscious all the while that I could easily be wrong. If I at times break the social contract, refusing conformance to the contract's dictates because I am persuaded that the consequences to society would be negative, so be it. It is a fundamental characteristic of my existence that I am an independent moral agent. I may at times guide my actions by conformance to socially acceptable standards, but if those acts produce harmful consequences, I am at fault. My obedience to the social contract does not absolve me of responsibility.

I think that I've said all that I can to show that the contract is an invalid allegory. All that I can add is that you don't need it. Your only use for the contract is to provide the individual with a motive for acting in accordance with values other than purely self-serving hedonism. What is the real reason that a person will sacrifice his own pleasure for the good of the greater society? It is not because of a calculation that, because he does so, others will also, and he will be better off. Moral decision-making will always play these two values off against each other -- what is good for me vs. what is good for the greater society.

The choice between the two must always be decided by the merits of the instance. Should a soldier throw himself upon a grenade to save his colleagues? The courage and sacrifice of such an act would always be an inspiration. Whether or not it would be "right" would depend upon the individual circumstances and consequences. We can imagine that the person who ran away instead, resulting in the death of a few colleagues, might later be plagued by guilt at his cowardice. He may conclude that the resulting deaths were much more tragic than his own would have been, if, say, one of his colleagues had never seen his infant son, and another was the sole support of his invalid, widowed mother.

These are the types of considerations a person would take into account in determining whether he had done the right thing, and some arguments in favor of the soldier sacrificing his life might be very persuasive. But to say that his act of cowardice had violated a social contract is a non sequitur. There are limitations on the sacrifices to which one can contractually agree. A contract that requires someone to sacrifice his life would be odious to most reasonable people.

You do not have to postulate a social contract to make a persuasive case that people should sometimes act in ways opposed to their own selfish interests.


George,

Your last sentence presents the crux of the problem. I really do need some reason to ask people to act unselfishly. Religion typically says that God will reward you for acting morally, so it ultimately would be in the individual's own self-interest to act unselfishly. Since I don't feel we can rely on those who claim they know what some god or gods want, we need some other motive for participating in a moral system. The only motive I know of is that, on the average, it is in our own self-interest; and that is because our own participation allows us to benefit from the participation of others.

I have apparently neglected a crucial aspect of the social contract as I see it - it must be a commitment made long before any situation requires an extreme sacrifice such as falling on a grenade to save one's comrades. Our commitment to moral beliefs normally takes place over a long period of time and becomes ingrained in us until it becomes a matter of our own self-respect. Most often people develop their internalized moral code based on the respect or disrespect their peers show for various actions, but obviously that doesn't apply when we are trying to determine what the moral code "ought" to be. The decision to use the "greatest good" principle is the logical one to make from the standpoint of our own self-interest at a time long before we know whether it will require great sacrifices of our own or whether we would benefit from the sacrifices of others. For those of us who promote this principle intentionally, the feedback we give to others will be based on it and we will recognize as "valid" the feedback we get from others that conforms to the rule. Fortunately, much of the morality already in use is consistent with the "greatest good" rule so we don't have a huge conflict with previously ingrained moral values.

You seemed concerned that the social contract idea was in opposition to the principle that moral judgements should be based on the consequences of actions. I don't intend to imply that at all. In fact I see it as providing justification for trying to achieve the greatest good for the greatest number, which is an outcome-based principle.

Bob

