Can’t get no Ethical Satisfaction (and it’s not just me)
I have this
problem in my life. It’s been lurking creepily behind me for quite a long time
(I inserted “behind me” in post, because I know people like physical metaphors,
even if half-assed). I mean, it’s not really my unique, personal problem (it is
not up to me to explain how this amendment affects my half-assed physical
metaphor); it’s a problem of abstract philosophy that causes me emotional
issues. It’s very vaguely related to the better-known Frege-Geach problem – which, incidentally, is itself a problem that
has been known to cause emotional issues (although not to me, only to people
who refuse to see that saying that all ethical discussion and activity consists
in expressive/commissive speech acts is just blatantly untenable, because
that’s clearly not a satisfactory account of what’s been going on in
philosophical ethics for, you know, the last few thousand years (how can you propose
to explain ethical language games simpliciter
unless you can account for Plato, Aristotle, Spinoza, Kant, Bentham, Mill, Foot,
Nussbaum, Singer and Parfit?)). In fact, what I just said didn’t emphasise this
point enough. If you’ll permit me to apply a positive scalar to the vector
we’re already skating along (people love quasi-physical linear maths metaphors
the most), I want to suggest that ‘my’ problem – I use the possessive pronoun
with a bashful knitting of my limbs (narrowing of the body) and a downcast gaze
– is actually a problem that a very sizeable category of persons have… even if the
instances of this category are not smart enough to realise it and consequently
(logical, atemporal sense) do not experience any associated disquiet (to
clarify a point in no need of such remediation for comic effect, if these
people are depressed or anxious, it has nothing to do with this abstract
philosophical problem).
Now, as a comically self-undermining disdainer of dithering coyness –
sort of like Polonius (“Brevity is the soul of wit” says the least brief man in
the world, much as the counter-Enlightenment mystic Jordan Peterson professes
to be a Man of Science) – I staunchly refuse to dilly-dally in relating to the
Reader what category of persons I am referring to in the above, and instead
will immediately spit the answer out following this colon:
the utilitarians.
I want it known that I do not myself belong to this category of
persons. Personally, I think that utilitarianism is a false God. And one thing
that I will do very soon is explain why. But before even that, I feel strangely
as if I have some sort of moral obligation (a
rather rum thing, what) to say in what this problem consists – or to put it
less prolixly – what this problem is.
(As the peerless philosophy stylist, Jerry Fodor, showed us, brevity
has never been, and will never be, the soul of wit. If anything, it is the
opposite. There is nothing witty about Hemingway. And imagine a taciturn
stand-up comic. Nor are cowboys funny…)
As Ecclesiastes might urge, I will cut the hilarious bullshit now,
because now is not the time. The
problem is this:
Like any other smart, philosophically trained chump who is politically
and ethically engaged in the world and has beliefs about what is right and
wrong, about how things could go better and worse (and all that jazz), I strongly
believe that the things (actions, trends, institutions) that I think are bad
are bad not because they violate some divine command or some ‘rule’ forced upon
us by Reason itself, but because they have (will probably have, have had) bad consequences; mutatis mutandis, that the things that I think are good are good
because they have (will probably have, have had) good consequences. However, I don’t actually have any kind of
systematic framework that would, as it were, ‘ground’ all these positions I
take – which tend to be justified case by case, often relying on principles
which I don’t take seriously wholesale – because, as I’ve made clear, I
categorically reject utilitarianism (certainly, as a philosophy for living).
Now, if you recall the comment I made before to the effect that I think that
the problem I’m stating here is actually an unrecognised problem for pretty
much all utilitarians, you might be somewhat confused: utilitarians, after
all, shouldn’t have this problem, since by virtue of being utilitarians they
have the systematic framework to ground the ethical judgments they make.
Unfortunately, my response to this is
rather controversial – repugnant to most utilitarians (a “repugnant
conclusion”). I hold the extremely edgy view that, if you think that accepting
the Humean “bundle theory” of “the self” isn’t sufficient to make a person a
Buddhist (i.e. to be a Buddhist you have to actually do Buddhist activities),
then, by the exact same logic, there isn’t a person alive on this entire planet
who is actually a utilitarian, because no utilitarian actually lives as a
utilitarian – certainly if we’re not including in this assessment any devious,
J.S. Mill-influenced Satanists who think that “utility” encompasses “eudaimonia”
(I feel such a move is a sure path towards not identifying as a utilitarian anymore).
Why
do I hold such a mad view? Not merely because it gives me great pleasure to deflate
the absurd pretensions of utilitarians – and it surely does, a major hobby of
mine being to dialectically disrobe them until they are quivering like freshly
shorn lambs in front of me, bleating for abatement and détente – but for two
major reasons (neither of them original, both of them well-known critiques of
utilitarianism), which I shall now explicate.
1.)
Nobody can even tell us what utility is; there
is no formalism, nothing resembling a “neurochemical reduction”, no way of
defining it beyond resort to vague, non-technical and imprecise words like
“pleasure”, “pain”, “harm”, “enjoyment” and “preference realisation”. How do you measure utility? How do
you compare utility? How do we weigh up human versus animal utility? If you’re
not the kind of utilitarian who rejects
Mill’s claim about dissatisfied Socrates and the satisfied pig (i.e. you’ve
moved past Bentham), then you have clearly given up on nebulous neurobabble-accounts
of how you compare utility (“if the ‘pleasure receptors’ are more excited in
person A than in person B, then A has more utility!”) – which is good, except that
there also seems very little to stop you sliding back towards more ‘intuitive’,
commonplace thinking about ethics. If you refuse to state quantitatively the
ratio of ethical significance between the higher pleasures and the base
pleasures (and why would you do such a patently silly thing?), then the ethical
territory open to you becomes just as wide as that open to someone who rejects
utilitarianism: you can go full Nietzschean and say that it’s ok if a society
is highly inegalitarian and there’s lots of poverty and suffering so long as
that society is producing Great Men who make magnificent works of art and
achieve great scientific advances (because you think that these goods are so much more important in their
contribution of utility than goods like a reduction of the proportion of the
population with pneumonia or whatever); or you can go full Marxian and say that
a world without Great Men would be more than fine if no-one was hungry and
everyone had a nice, clean place to live; or anything in between. All is
licensed. And that’s not what utilitarians want at all! The dream is a system
that forces you to adopt certain judgments that aren’t even emotionally
attractive to you, by virtue of the fact that they are the “correct” ones. If
nobody fucking knows what this whole utility thing is about, how the fuck does
this work?
2.)
A utilitarian (one who isn’t drifting towards non-utilitarianism) literally has no idea what is good
or bad, or what they should be doing (this is, ahem, A MASSIVE PROBLEM). If you
assume, for no good reason
whatsoever, that there is some vaguely conceptually appropriate sciency account
of what utility really is in terms of
brain states (a bunch of sentences of neurochemistry or computational
neuroscience or some such) that ‘we’ will eventually arrive at (what I mean by
“vaguely conceptually appropriate” is that the account will fit with enough of
our existing utilitarian conclusions (and what these conclusions even are,
beyond conclusions that other ethical ‘frameworks’ or ways of thinking already
deliver, it is not clear to me (maybe ‘we shouldn’t brutally slaughter animals’,
and ‘there’s nothing inherently wrong with fucking a dead chicken (as long as
we don’t actually do it such that we cause disgust to other people)’ and ‘there
are certain conceivable circumstances in which nuking a million people would be
morally good’) to satisfy us that there is no better specific sciency account
of this concept), and if you assume that
if a person with “full knowledge” and understanding compares any two
meaningfully distinct “world states” X and Y, there can only be one comparative
judgment licensed by this account of utility (X = Y, X > Y or X < Y, with
the same accompanying axioms as the central axioms of decision theory), then
utilitarianism might be a viable organisational system for a future race of
superintelligent aliens (actually, even that’s not clear at all, but whatever).
However, as things stand, in the boring present, utilitarians don’t know what
utility is and they can’t compare world states. Utilitarians (those who haven’t
gone in the Millian direction and aren’t sliding towards a richer cosmos of
values (becoming more like the rest of us chumps)) can’t even confidently
assert morally banal propositions like that Hitler contributed more evil to the
world than good. They simply cannot, because they don’t know if it’s “true”
according to their system. The world is far too complicated to decide. What if all
the history-trajectories involving Hitler and people very similar to Hitler
taking control of Germany in the 1930s (people we can also call Hitler, for
convenience (metaphysics like this is too impossible to be a good use of
time)) are massively outnumbered by ‘adjacent’ history-trajectories where the
planet experiences nuclear Armageddon and civilisation is wiped out? NOBODY
FUCKING KNOWS. Which means that even extremely basic moral judgments are unavailable
for utilitarians! THIS IS A HUGE PROBLEM. UTILITARIANS PAY ATTENTION PLEASE!
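For what it’s worth, the ordering assumption invoked above can be written down precisely. The notation below – a set of world states $W$ and a relation $\succeq$ (“at least as good as”) – is my own gloss on “the central axioms of decision theory”, not anything any utilitarian has actually supplied:

```latex
% Hypothetical idealisation: W is the set of meaningfully distinct
% world states, and \succeq orders them by goodness.
\text{Completeness:}\quad \forall X, Y \in W:\; X \succeq Y \,\lor\, Y \succeq X
\text{Transitivity:}\quad \forall X, Y, Z \in W:\;
  (X \succeq Y \,\land\, Y \succeq Z) \Rightarrow X \succeq Z
```

Given completeness and transitivity (plus suitable technical conditions on $W$), the standard representation theorems deliver a real-valued utility function $U$ with $X \succeq Y \iff U(X) \ge U(Y)$ – exactly the three-way verdict ($X = Y$, $X > Y$ or $X < Y$) demanded above. The complaint in (1.) and (2.) is precisely that no candidate for $U$, or even for $\succeq$, has ever been produced.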
Anyhow,
to reiterate my key point, what this means is that utilitarians may believe
they have a systematic framework for making ethical judgments, but they
actually don’t; instead, they’re just making shit up and being edgy on taboos
because they’re assholes (that is essentially what I think).
Meanwhile, as I kept saying before, I am myself in a major bind because
I do actually think there is such a thing as ethical reasoning, and I believe
in evaluating goods and bads by consequences, but I don’t actually take that
logic all the way, which is, at least on
the face of it (and we’ll see why we might need to look beneath the face in
a minute), illogical. As I’ve explained before (e.g. in this essay (https://writingsoftclaitken.blogspot.com.au/2016/12/a-philosophically-involved-work.html),
which, incidentally, I am very ambivalent about and which shouldn’t be taken as
a statement of fully mature thoughts), analytic ethics survives because (crudely)
some ethical judgments are more logical than others. In slightly more precise
terms, I have previously noted the following:
“The way rational ethical discourse works can be illustrated by a
simple, abstract model:
Person A agrees with person B that x (where x is
some ethical principle or a highly general ethical judgment). Person B points
out that person A is violating x in the case of y (where y is
some specific issue: women’s rights, race, animal welfare, abortion, whatever).
Person A argues that their position on y is not a violation of x because
of an error in Person B’s argument, or because of empirical considerations
which Person B has overlooked. The debate then either continues with
further discussion of the merits of each other’s arguments or moves on to
further discussion of the empirical considerations.”
This model explains why I
am unambiguously right that, e.g., cat cullings in Australia are a good
thing (I know this is a weird case study but, strangely, it’s pretty much optimal
for the purpose, apart from the fact of its being ‘weird’):
People who oppose the culling
of cats will claim that it is deeply wrong because YOU SHOULDN’T KILL ANIMALS
IN COLD BLOOD, because IT IS WRONG TO KILL or because WE SHOULDN’T INTERFERE
WITH “NATURE” (it tends to be very emotive, very deontological thinking, or, as
with the last one, rather quaint and philosophically idiotic teleological
thinking). But if you point out that native marsupials – all of which are
vulnerable – and native birds are killed in their millions by these cats, in a fashion
that surely causes tremendous suffering to these prey animals (being stalked,
chased, leapt upon, bitten, scratched and ripped open is presumably a significantly
more painful way to die than getting cleanly shot, as the cats are), they
literally have no response, because this is also majorly unpalatable to them or
should be unacceptable to them according to the ‘reasons’ they actually give.
And so the inevitable outcome of such a response is that they either get up
and walk away angrily, moronically repeat what they already said, or concede
that you are right and that they were mistaken. (Also, you can point out that the
teleological way of thinking about nature as something that shouldn’t be “interfered
with” is silly on every possible level, given that God doesn’t exist and such, and also the fact, in this specific case, that WE INTRODUCED CATS TO AUSTRALIA TO BEGIN WITH.)
This model probably also
explains why anyone who says that the Iraq War was a horrific crime is correct,
and explains similar such things. (I’m actually not being ironic, which may be
disappointing to some.)
Anyhow, the reason that this model is emotionally unsatisfying on a
deeper level is that it relativises correct and incorrect ethical verdicts to
the principles that people choose to bring into play and can agree on when
debating a specific ethical problem. Which is another way of saying what I said
before: like it or not, there is no systematic framework. Which is another way
of saying that there’s no way of justifying an entire network of ethical positions,
i.e. your whole ethical worldview. Which is another way of saying that you can’t
necessarily reconcile your ethical positions on different issues, even if those
ethical positions were each arrived at in a pretty rational sort of way (with
thinking consistent with my model of desirable ethical thinking). And responding
with this sort of rubbish (https://twitter.com/michaelshermer/status/949454134716936192
“Why not just use all the frameworks, picking and choosing whenever we
encounter a problem?”) is not any kind of solution, because those “theories”
that Shermer mentions in that tweet are simply logically inconsistent with each
other in all kinds of complicated ways (there’s almost nothing that survives as
a coherent doctrine if you were to actually try to mash them all together), and
the contextual picking and choosing of which “theory” to apply where will have
nothing to do with Reason.
So no, there’s no happy ending, and everyone has to deal with this
problem. And that sort of sucks.