Thursday 29 March 2018

i keep writing the same shit over and over...

Many educated people are familiar with the classic debate over whether a person's meta-ethical position ought to have a major impact on their behaviour (even if they are not familiar with the debate in those terms). In the modern day, this debate is most typically played out between atheists/agnostics and religious believers (particularly monotheists). Many monotheists are wont to claim, first, that disbelief in God immediately implies that purpose, or good and evil, lack any fully mind-independent status, and, second, that belief in this doctrine of a universe without divine teleology is highly liable to promote a social collapse into bad behaviour, including criminal behaviour. (The second claim seems to commit them to a model with the positive prediction that non-monotheists should be less polite, more criminal, and so on, unless they hold that present-day disbelievers are morally special in some way, but that seems like an 'epicycle', so to speak.) In making the first claim, they seem to tacitly come down on the God-as-dictator side of the Euthyphro Dilemma: they deny the possibility of rationally identifying as an irreligious moral realist (this is what they get right!).
      More interesting to me is the fact that a similar debate has also played out in academic, non-religious meta-ethics. Derek Parfit, the atheist moral realist (a believer in eternal "normative truths" which require no 'metaphysical grounding'), makes it clear in On What Matters that he thought the meta-ethical issues were of tremendous importance to humanity. There are several nods to Effective Altruism in Volume I, and maybe a few in Volume II (I read most of both volumes in 2015, and I can't remember them perfectly), despite the fact that neither volume is actually about practical ethics. I think he thought, with some merit (as I'll shortly discuss), that people without a realist meta-ethical bent are much less likely to want to be Effective Altruists. Famously, he even says (I think first in the introduction) that, if he is wrong, his entire life would have been "a waste". Obviously, this can't be an 'objective' "waste" if he is wrong, so in that very remark he is tacitly admitting that his emotional universe wouldn't change much if he convinced himself he were wrong. But, as I say, I do at least get the strong impression that he thought he might well have given up some elements of his self-denying altruism if he had been proven wrong. This impression is especially strong in the section on his good friend Bernard Williams, who, as Parfit notes, was a decent and virtuous man but not nearly so morally obsessive about distant suffering. 'Why,' Parfit might ask himself were he to give up belief in 'objective reasons', 'do I give away so much of my income to people whose emotional reactions don't actually make me suffer in the here and now? I could spend that money instead on things that make me happy more directly, while making an effort to remove unpleasant thoughts about starving people in distant countries from my mind!' I think that Singer, Pinker, Bloom, Greene, Harris and the other members of the "Effective Altruism" movement - who advocate the view that you can 'do the most good' by thinking about how to promote "wellbeing" generally, with a decision-theoretic mindset - also quite clearly think that if one does not adopt the doctrine that moral behaviour should be motivated by consistency between the general normative principles to which we claim to be committed and our specific behaviours, then one is less likely to carry out actions which are "beneficial". So, again, here we have a mild version of the same concern as the monotheist's: if you do not try to systematise the logic of your decision-making in the way the above figures do, then you are going to be a much worse person than those who do.
     I actually think that it makes perfect sense that one's meta-ethical position would affect one's actions at least a little, and that, for the average person, significantly changing one's meta-ethical stance probably does have some effect on one's emotional universe. I myself have recently, consciously raised my credence in the view that all forms of systematic consequentialism - and all systematic moral frameworks - are doomed to failure. More specifically: not all 'moral reasoning' is mere "rationalisation", since, in a given dialectical context (a situation where one is debating an ethical issue), it will often be the case that both parties are willing to commit themselves to a relevant, very generally phrased "ethical principle" and yet only one party's position on the issue in question is compatible with the logic of that principle (one case study in the essay linked); but the idea that there is some 'absolutely true', self-evident, fully consistent, finite set of such principles, decidable for all moral problems (in the sense of giving a solid verdict for every possible 'moral situation'), that everyone should rationally accept, is really quite absurd. It is absurd for too many reasons to count. The first few I already discussed in the essay linked, but I'd like to go over them again, along with some others. One is that (even very intelligent) people don't agree on what the good is, or even on what wellbeing is. Another is that we have no way to measure wellbeing. Yes, we have proxies like life expectancy, spending power and level of education for human populations, but thinking about human wellbeing even in terms of these measures quickly leads you into some very tricky territory, like the Repugnant Conclusion. And it doesn't seem at all impossible (in fact it seems damn likely to me!) that we will never turn general organismal wellbeing into a measurable theoretical construct in some properly formal model. Can we quantify grades of emotional complexity among different organisms lol? Even adopting an individualist rather than social or Aristotelian perspective on wellbeing, it doesn't become less absurd: given the computational theory of mind, it is in principle silly to think of happiness as reducible to arrangements of chemicals, so our 'best' bet would presumably be some kind of computational reduction or something ("The total level of wellbeing w for an organism o at time t is, for all currently perceived worldstates s_i (even if delusionally perceived) and the corresponding desired worldstates ds_i, the average degree of match (a continuous variable on the real interval [0,1]) between ds_i and s_i, formalised in terms of the theory of mathematical isomorphism according to structurally-abstracted, idealised models of the s_i and the ds_i"), but since my attempt in that preceding parenthetical interjection was a dumb joke, there is no prospect for this.
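     For what it's worth, here is the dumb joke tidied up as a display formula. To be clear, this is only a sketch of my own made-up notation: w, o, t, the perceived worldstates s_i, the desired worldstates ds_i, the count n of perceived worldstates, and the completely undefined 'match function' m are all just inventions from the parenthetical above, and the formula has exactly as much content as the joke it restates (i.e., none):

% all symbols are made-up notation from the joke above, not a real theory
\[ w(o, t) \;=\; \frac{1}{n} \sum_{i=1}^{n} m(ds_i, s_i), \qquad m(ds_i, s_i) \in [0, 1] \]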
Another is that we have extremely limited ability to predict future social dynamics (notwithstanding the excellent work of my man Peter Turchin!). So, even supposing for a second that the world can objectively be a better or worse place (that is, assuming there is some measurable quantity in the fully mind-independent world that goes up and down depending on our actions), no-one can actually claim with much confidence at all, no matter what they are doing, that they are "making the world a better place", nor can we say that about political policies or decisions, including war. The utilitarian may want to say, as I would, that the Vietnam War, or the Iraq War, were evil atrocities, but they always have the butterfly effect in the back of their mind... what about the unintended longer-term consequences? Hmm, I can't be too confident that these things were even overall bad, can I? (Objection: but do we really have to be that sceptical? Can't we have high credence that even from a long-term perspective the Iraq War was bad? Sure you can, but with no scientific justification whatever - and that's my point: you have no scientific justification (none of us do!).) Perhaps the only thing a utilitarian can have some confidence is the right thing to do, on the basis that they value the "wellbeing" of all organisms in general, or "reducing suffering" in general, is to devote their life to investigating and trying to raise alarm bells about existential risks, as Nick Bostrom does. But, as always with utilitarian commitments, there is then no way to justify all the time you spend not working on this project, unless that time is directly targeted at improving your ability to work on the project (like resting your brain, or trying to make a lot of money), and this means that to take this utilitarian logic fully seriously you would have to become a total fucking loser weirdo with a horrible life. (Even then, given that "good" and "bad" are not fully mind-independent properties - we know this because they don't feature in any scientific models of mind-independent phenomena which make reliable predictions - the claim that "if the world were without life completely/if humans were wiped out/if civilisation were wiped out, it would be a worse or less valuable place" is yet another expressive speech act, and gibberish from a scientific standpoint. It is very easy to persuade humans that they should be concerned about existential risk, because it does seem to follow from a lot of the expressive speech acts we make about ethics that existential risks are extremely important; and yet, strangely (or not so strangely), it is impossible to get a human being to behave as if existential risks are the most important thing to deal with at all times. Even Bostrom doesn't really act like this; he has a girlfriend and writes poems.)
     Another, related reason is that a lot of the 'noble' or 'altruistic' or 'benevolent' things people say they do for utilitarian reasons don't make sense for utilitarian reasons, and are instead just complicated deprivation rituals - rituals which have a very ancient legacy in the religions of virtually all human societies. For example, people like Peter Singer - people I know - say that they are vegetarian or vegan because of utilitarian logic.
But even if you don't press on the claim they usually use to back up this statement - "If a lot of people became vegetarian or vegan, there would be more wellbeing in the world/less suffering in the world" (a mere expressive speech act, given that the aggregate wellbeing of all organisms on the planet is immeasurable, because we don't even know what wellbeing is (same with suffering)) - even if you pretend that this expressive speech act is something more, and just take for granted that there is some quantity "wellbeing" which would go up were more people to become vegetarians or vegans, and a dual quantity "suffering" which would go down were this to happen (which I am usually happy to do, because it is so intuitive and comfortable to think like this), there is still no reason why an individual person should never eat meat, because individual consumption decisions in any given situation are totally inconsequential, and the real goal should be to try one's best to persuade as many other people as possible to be vegetarian or vegan, independently of your own consumption decisions. Why not, for example, just eat whatever you want when nobody will find out, but try to convince other people to go vegetarian or vegan? That way, you will have a chance of having an actually consequential impact on the world without depriving yourself of delicious meat and making it way harder to keep your iron and B12 levels in a healthy range. (If you're South Asian, your iron and B12 levels will probably stay in a healthy range even if you go fully vegetarian and don't take any supplements or specialise your diet, but otherwise it can be difficult.) In my experience, people who identify as "utilitarians" are, without fail, just pretenders. They act like normal people, except they occasionally do random shit like ask people to donate money to charity while they mutilate themselves in some way or some shit. (And then they want praise for giving money to charity, which is absurd, because why are they wasting their time on charity pranks when they should be working on existential risk lol, or at least trying to get into banking or politics lol?)
     Anyhow, to get back to the point, I've noticed - and I'm not sure which way the causal arrow points here - that my meta-ethical change has coincided with a more conscious embrace of Machiavellianism on my part. I have decided that I'm happy to cheat, lie and scheme my way through life, and do whatever it takes to get on top and enjoy myself. I can suppress guilt and have a good time.
    ...
     In all seriousness, if there are no reasons for action but instrumental reasons, then the Humean conclusion is that, insofar as you can take 'control' of your guilt responses (and most people cannot, for 'big' wrongs, admittedly), you should scheme and deceive your way through life, if that gets you the outcomes you want and you can avoid detection as a knave and blackguard. You should only work to mitigate suffering insofar as you are directly motivated to do so. And you have no reason to care whether you sometimes make expressive speech acts which, interpreted in a stricter fashion, imply moral commitments that are at odds with the way you are currently spending your time. Because what do such 'inconsistencies' even matter? They don't even matter if you care about logic, because it's not really an issue of logic, unless you choose to make it into one by treating expressive speech acts as if they were more (as, admittedly, I always do myself in ethical debates).
     Anyhow, we gotta save the world from climate change, I'm telling ya.
