So far, we’ve been imagining that a utilitarian would evaluate the ethicality of an action one action at a time. Suppose, for example, that we want to know whether or not you should cheat on an upcoming exam, so we ask a utilitarian (of the kind we’ve been imagining so far) “Should this person cheat on the test?”
In order to answer this question, the utilitarian would need to know many specific facts about the particular test we have in mind, including whether or not you’ll need to know the material later on, what your chances are of getting caught cheating, what grade you’d probably get if you didn’t cheat, what grade you’d probably get if you did cheat, and how happy these respective grades would make you and everybody else, in the short and long term. Only then would the utilitarian be able to tell you whether or not you should cheat on the exam.
On the one hand, if it turns out that you’ll never need to use the material later on, that you’d fail if you didn’t cheat, and that the consequences of your failing are dire and wide-reaching, the utilitarian would counsel you to cheat. On the other hand, if it turns out that you will need to know the material later on, that you won’t really learn it if you cheat, and that the consequences of you not knowing the material are generally bad, the utilitarian would tell you not to cheat.
So is it good or bad to cheat in general? “Nobody ever cheats ‘in general,’” responds this utilitarian, “People cheat on specific exams. Sometimes it’s good. Sometimes it’s bad. It all depends upon the consequences of that particular act of cheating.”
This kind of utilitarianism is called “act utilitarianism,” because it evaluates actions one at a time, saying that an action is good if it produces the greatest happiness for the greatest number, and bad if it produces unhappiness. Act utilitarianism only requires us to answer one question – “Does this particular action maximize happiness?” To that extent it’s a pretty simple ethical theory, although, as we’ve seen, the process of answering that question can be very complex because it needs to take into account many features of the action being evaluated.
Until this point, we’ve been taking act utilitarianism for granted, as though it’s the only kind of utilitarianism. But it’s not. Let’s ask another kind of utilitarian whether or not you should cheat on an exam. “Should this person cheat on the test?” we inquire.
“No!” this utilitarian answers, “She shouldn’t cheat!”
“But the material on the exam really isn’t important,” I explain, “It’s a basket-weaving class. And if this student doesn’t cheat, she’ll fail the test. And if the student fails the test, she’ll fail the course. And if the student fails the course, her scholarship will be revoked. This will make everyone very unhappy because this student is in a premed program and…”
At this point, the utilitarian would interrupt me. “Stop,” he’d say, “It doesn’t matter. I don’t care. I don’t need to know the particular details of this sad story. You asked me if this student should cheat on an upcoming exam. I considered what general rule the student would be following if she did cheat and decided that the rule would be something like ‘cheat on exams.’ I then considered whether this rule, if generally followed, would maximize happiness. And I think it’s clear that it wouldn’t. If people generally cheated on exams, exams would lose their function as assessment tools. Teachers wouldn’t be able to tell what topics they should spend more time on. Unqualified people would be getting driver’s licenses, even medical licenses. No. Cheating on exams would definitely not maximize happiness. If anything, it would maximize unhappiness. This means that cheating is wrong, and so your student shouldn’t cheat. The fact that cheating on this particular exam might make people happy doesn’t matter. Cheating is wrong because, as a general rule, it wouldn’t make people happy.”
See the difference? This kind of utilitarianism doesn’t assess individual actions for their utility, but rather focuses on the utility of the general rules of which a particular action would be an instance. Not surprisingly, this is called “rule utilitarianism,” and it says that an action is good if it conforms to a rule which, if generally practiced, would produce the greatest happiness for the greatest number. As we’ve seen, rule utilitarianism would have us ask two questions: 1) “What general rule would I be following if I did this particular action?” and 2) “Would this rule, if generally followed, maximize happiness?” To this extent, it’s more theoretically complex than act utilitarianism, but because it gives us general rules to follow, it’s simpler to apply. We no longer need to worry too much about the specific circumstances surrounding any particular ethical decision. And maybe rule utilitarianism will enable us to resolve some of the problems with utilitarianism we’ve discussed. Let’s see if it allows us to respond to the objections we’ve encountered so far.