
Thursday 29 March 2018

i keep writing the same shit over and over..

Many educated people are familiar with the classic debate over whether a person's meta-ethical position ought to have a major impact on their behaviour (even if they are not familiar with the debate in those terms). This debate is most typically played out today between atheists/agnostics and religious believers (particularly monotheists). Many monotheists are wont to claim, first, that disbelief in God immediately implies that purpose, or good and evil, lack any fully mind-independent status, and, second, that belief in this doctrine of a universe without divine teleology is highly liable to promote a social collapse into bad behaviour, including criminal behaviour (which seems to imply commitment to a model positively predicting that non-monotheists should be less polite, more criminal, etc., unless they think that current disbelievers are morally special or something (but that seems like an 'epicycle', so to speak)). In doing so, they seem tacitly to come down on the God-as-dictator side of the Euthyphro Dilemma: they deny the possibility of rationally identifying as an irreligious moral realist (this is what they get right!).
      More interesting to me is the fact that a similar debate has also played out in academic, non-religious meta-ethics. Derek Parfit, the atheist moral realist (a believer in eternal "normative truths" which don't require any 'metaphysical grounding'), makes it clear in On What Matters that he thought the meta-ethical issues are of tremendous importance to humanity. There are several nods to Effective Altruism in Volume I (and maybe a few in Volume II (I read most of both volumes in 2015, and I can't remember them perfectly)), despite the fact that neither volume is actually about practical ethics - I think he thought, with some merit (as I'll shortly discuss), that people without a realist meta-ethical bent are much less likely to want to be Effective Altruists. Famously, he even says (I think he might say it first in the introduction) that, if he is wrong, his entire life would have been "a waste". Obviously, this can't be an 'objective' "waste" if he is wrong, so in that very remark he is tacitly admitting that his emotional universe wouldn't change much if he convinced himself he were wrong. But, as I say, I do at least get the strong impression that he thinks he may well have given up some elements of his self-denying altruism had he been proven wrong. This impression is especially strong in the section on his good friend Bernard Williams, who, as Parfit notes, was a decent and virtuous man, yet not nearly so morally obsessive about distant suffering. 'Why,' Parfit might ask himself were he to give up belief in 'objective reasons', 'do I give so much of my income to people whose emotional reactions don't actually make me suffer in the here and now? I could spend money instead on things that make me happy more directly, while making an effort to remove unpleasant thoughts about starving people in distant countries from my mind!'
I think that people like Singer, Pinker, Bloom, Greene, Harris and other members of the "Effective Altruism" movement - who advocate the view that you can 'do the most good' by trying to think about how to promote "wellbeing" generally, with a decision-theoretic mindset - also quite clearly think that if one does not adopt the doctrine that moral behaviour should be motivated by considerations of consistency between the general normative principles to which we claim to be committed and our specific behaviours, then one is less likely to carry out actions which are "beneficial". So, again, here we have a mild version of the monotheist's concern: if you do not try to systematise the logic of your decision-making in the way the above figures do, then you are going to be a much worse person than those who do.
     I actually think that it makes perfect sense that one's meta-ethical position would affect one's actions at least a little, and that, for the average person, significantly changing one's meta-ethical stance probably does have some effect on one's emotional universe. I myself have recently, consciously, raised my credence in the view that all forms of systematic consequentialism - and all systematic moral frameworks - are doomed to failure. More specifically: whilst not all 'moral reasoning' is mere "rationalisation" - since, in a given dialectical context (a situation where one is debating an ethical issue), it will often be the case that both parties are willing to commit themselves to a relevant, very generally phrased "ethical principle" and yet only one party's position on the issue in question is compatible with the logic of that principle (one case study in the essay linked) - the idea that there is some 'absolutely true', self-evident, fully consistent, finite set of such principles, decidable for all moral problems (in the sense of giving a solid verdict for every possible 'moral situation'), which everyone should rationally accept, is really quite absurd. It is absurd for too many reasons to count. The first few I already discussed in the essay linked, but I'd like to go over them again, and add some more. One is that (even very intelligent) people don't even agree on what the good is, or even what wellbeing is. Another is that we have no way to measure wellbeing (yes, we have proxies like life expectancy, spending power and level of education for human populations, but thinking about human wellbeing even in terms of these measures quickly leads you into some very tricky territory, like the Repugnant Conclusion), and it doesn't seem at all impossible (in fact it seems damn likely to me!)
that we will never turn general organismal wellbeing into a measurable theoretical construct in some properly formal model (can we quantify grades of emotional complexity among different organisms lol?). Even adopting an individualist rather than a social or Aristotelian perspective on wellbeing, it doesn't become less absurd: given the computational theory of mind, it is in principle silly to think of happiness as reducible to arrangements of chemicals, so our 'best' bet would presumably be some kind of computational reduction or something ("The total level of wellbeing w for an organism o at time t is, over all currently perceived worldstates s_i (even if delusionally perceived) and the corresponding desired worldstates ds_i, the average degree of match (a continuous variable on the real interval [0,1]) between ds_i and s_i, formalised in terms of the theory of mathematical isomorphism according to structurally-abstracted, idealised models of the s_i and the ds_i") - but since my attempt in that parenthetical interjection was a dumb joke, there is no prospect of this. Another is that we have extremely limited ability to predict future social dynamics (notwithstanding the excellent work of my man Peter Turchin!), so, even supposing for a second that the world can objectively be a better or worse place (assuming there is some measurable quantity in the fully mind-independent world that goes up and down depending on our actions), no-one can actually claim with much confidence at all, no matter what they are doing, that they are "making the world a better place"; nor can we say that about political policies or decisions, including war (the utilitarian may want to say, as I would, that the Vietnam War, or the Iraq War, were evil atrocities, but they always have in the back of their mind the butterfly effect... what about the unintended longer-term consequences? Hmm, I can't be too confident that these things were even overall bad, can I?)
(Objection: but do we really have to be that sceptical? Can't we have high credence that, even from a long-term perspective, the Iraq War was bad? Sure you can, but with no scientific justification whatever - and that's my point: you have no scientific justification (none of us do!).) Perhaps the only thing a utilitarian can be somewhat confident is the right thing to do - on the basis that they value the "wellbeing" of all organisms in general, or "reducing suffering" in general - is to devote their life to investigating and trying to raise alarm bells about existential risks, as Nick Bostrom does. But, as always with utilitarian commitments, there is then no way to justify all the time you spend not working on this project, unless it is directly targeted at improving your ability to work on the project (like resting your brain, or trying to make a lot of money), and this means that to take this utilitarian logic fully seriously you would have to become a total fucking loser weirdo with a horrible life. (Even then, given that "good" and "bad" are not fully mind-independent properties - we know this because they don't feature in any scientific models of mind-independent phenomena which make reliable predictions - the claim that "if the world were without life completely/if humans were wiped out/if civilisation were wiped out, it would be a worse or less valuable place" is yet another expressive speech act, and gibberish from a scientific standpoint. It is very easy to persuade humans that they should be concerned about existential risk, because it does seem to follow from a lot of the expressive speech acts we make about ethics that existential risks are extremely important; and yet, strangely (or not so strangely), it is impossible to get a human being to behave as if existential risks are the most important thing to deal with at all times (even Bostrom doesn't really act like this; he has a girlfriend and writes poems).)
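To show just how arbitrary any such formalisation would be, here is the joke wellbeing measure from above rendered as toy code. Every choice in it - the feature-dictionary representation of a 'worldstate', the matching function, the equal weighting across worldstates - is invented purely for illustration, which is exactly the point:

```python
# A deliberately silly toy: "wellbeing" as the average match between
# perceived and desired worldstates, each represented as a feature dict.
# Every modelling choice here is arbitrary -- that's the joke.

def match(perceived: dict, desired: dict) -> float:
    """Fraction of desired features the perceived state satisfies, in [0, 1]."""
    if not desired:
        return 1.0  # no desires: vacuously perfect match
    hits = sum(1 for k, v in desired.items() if perceived.get(k) == v)
    return hits / len(desired)

def wellbeing(pairs: list) -> float:
    """Average match over all (perceived, desired) worldstate pairs."""
    if not pairs:
        return 0.0
    return sum(match(p, d) for p, d in pairs) / len(pairs)

# Example: one desire fully met, one half met.
pairs = [
    ({"fed": True}, {"fed": True}),
    ({"warm": True, "rested": False}, {"warm": True, "rested": True}),
]
print(wellbeing(pairs))  # 0.75
```

Note that even this toy immediately raises the unanswerable questions from above: who chooses the features, why should all desires weigh equally, and how would you ever populate these dictionaries for a real organism?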
Another, related reason is that a lot of 'noble' or 'altruistic' or 'benevolent' things people say they do for utilitarian reasons don't make sense for utilitarian reasons, and are instead just complicated deprivation rituals - rituals which have a very ancient legacy in the religions of virtually all human societies. For example, people like Peter Singer - and people I know - say that they are vegetarian or vegan because of utilitarian logic. But even if you don't press on the claim they usually use to back up this statement - "If a lot of people became vegetarian or vegan, there would be more wellbeing in the world/less suffering in the world" (a mere expressive speech act, given that the aggregate wellbeing of all organisms on the planet is immeasurable, because we don't even know what wellbeing is (same with suffering)) - even if you pretend that this expressive speech act is something more, and just take for granted that there is some quantity "wellbeing" which would go up were more people to become vegetarians or vegans, and a dual quantity "suffering" which would go down were this to happen (which I am usually happy to do, because it is so intuitive and comfortable to think like this) - there is still no reason why an individual person should never eat meat, because individual consumption decisions in any given situation are totally inconsequential; the real goal should be to try one's best to persuade as many other people as possible to be vegetarian or vegan, independent of your own consumption decisions. Why not, for example, just eat whatever you want when nobody will find out, but try to convince other people to go vegetarian or vegan?
That way, you will have a chance of having a non-inconsequential impact on the world without depriving yourself of delicious meat and making it way harder to keep your iron and B12 levels in a healthy range (if you're South Asian, your iron and B12 levels will probably stay in a healthy range even if you go fully vegetarian and don't take any supplements or specialise your diet, but otherwise it can be difficult). In my experience, people who identify as "utilitarians" are, without fail, just pretenders. They act like normal people, except they occasionally do random shit like ask people to donate money to charity while they mutilate themselves in some way or some shit. (And then they want praise for giving money to charity, which is absurd, because why are they wasting their time on charity pranks when they should be working on existential risk lol, or at least trying to get into banking or politics lol?)
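The structure of that argument can be put in toy expected-value terms. To be clear, every number below is an invented placeholder, not an estimate of anything; the sketch only shows why, on the utilitarian's own decision-theoretic logic, persuasion dominates personal abstention:

```python
# Toy expected-value comparison (all numbers invented) under the assumed
# quantity "animals not farmed per year". Illustrative only.

personal_abstention = 25         # animals/year one person's diet change might avert
persuasion_success_rate = 0.02   # assumed chance one pitch converts someone
pitches_per_year = 500           # assumed number of people you talk to per year

# Expected impact of persuading others, ignoring your own diet entirely:
expected_from_persuasion = (
    persuasion_success_rate * pitches_per_year * personal_abstention
)

print(personal_abstention)       # 25
print(expected_from_persuasion)  # 250.0
```

On these made-up numbers the persuasion strategy is worth ten abstentions a year, and the two strategies are independent - which is why the consistent utilitarian, by their own lights, has no obvious reason to tie them together.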
     Anyhow, to get back to the point, I've noticed - and I'm not sure where the causal arrow points in this situation - that my meta-ethical change has coincided with more of a conscious embrace on my part of Machiavellianism. I have decided that I'm happy to cheat, lie and scheme my way through life, do whatever it takes to get on top and enjoy myself. I can suppress guilt and have a good time.
    ...
    In all seriousness, if there are no reasons for action but instrumental reasons, then the Humean conclusion is this: insofar as you can take 'control' of your guilt responses (and most people cannot, for 'big' wrongs, admittedly), you should scheme and deceive your way through life, if that allows you to get the outcomes you want, and if you can avoid detection as a knave and blackguard. You should work to mitigate suffering only insofar as you are directly motivated to do so. And you have no reason to care whether you sometimes make expressive speech acts which, interpreted in a stricter fashion, imply moral commitments at odds with the way you are currently spending your time. Because what do such 'inconsistencies' even matter? They don't even matter if you care about logic, because it's not really an issue of logic, unless you choose to make it into one by treating expressive speech acts as if they are more (as, admittedly, I myself always do in ethical debates).
    Anyhow, we gotta save the world from climate change I'm telling ya.

the extended intelligence theory

Try to develop a complete and correct algorithm for a complicated programming problem in your head. Try to do a complete mathematical proof in your head for an important theorem you haven't proved before (in a branch of mathematics whose axioms you understand well). Try to seriously think through complicated philosophical territory in the philosophy of language in your head.

Try to solve any major intellectual problem in your head.  Just in your head, with no paper at all. It shouldn't be so challenging...

It's impossible. Working memory gets overloaded so quickly.

Not sure to what extent people realise, as F. Chollet does (https://medium.com/@francois.chollet/the-impossibility-of-intelligence-explosion-5be4a9eda6ec), that it is technology and culture that have raised us far above our fellow organisms cognitively, not the cognitive endowment itself (except insofar as it allowed for the possibility of complex culture to begin with). Racist people, from history or today, who mock and disdain the primitive technology of hunter-gatherers (a group that includes great philosophers like Hume and Kant) fail to realise that, if a person with their genetic recipe were raised in such an environment (if 'they' were raised in that environment), they would just be yet another hunter-gatherer armed with a bow and arrow, bawdy jokes, a fanatical attachment to bizarre and traumatic initiation rituals, and a very rich botanical and ethological knowledge of their local environment (i.e. they wouldn't fucking invent a slingshot, let alone writing, let alone prove one of Euclid's theorems, let alone...). People do not sufficiently appreciate the fact that the 20th- and 21st-century intellectual giants we recognise in the natural and mathematical sciences - Einstein, Curie, Turing, von Neumann etc. - who made astonishing advances unprecedented in history, did not have a kind of brain-wiring or processing speed unprecedented in humans in history. This should be obvious, but I get the impression many people don't reflect on it deeply. Our intelligence as humans is so culture-dependent, even if this doesn't show up much on IQ tests (except that it does a bit, as the Flynn Effect suggests).

Monday 19 March 2018

Peter Godfrey-Smith on the Evolution of Consciousness in Other Minds

Peter Godfrey-Smith, of the City University of New York and the University of Sydney, is one of my favourite philosophers. His work in the philosophy of biology, particularly on the problem of biological individuality and on the evolution of consciousness, is very interesting and, I think, important. I got into his work while doing a unit at university on exactly that subject, and reading his take on the first of the two problems mentioned above. I found myself experiencing tremendous admiration for his bipartite schema for the determination of biological individuality and his ingenious cubic diagram (https://writingsoftclaitken.blogspot.com.au/2017/06/the-problem-of-individuals-and-species.html). Anyhow, I just finished his book Other Minds, a meditation on the cognitive faculties of octopus and cuttlefish species, particularly as they illuminate issues generally in the evolution of intelligence and consciousness (but also because this stuff is damn fascinating in its own right). It's a very elegantly written and very compelling and insightful book. I'm not going to properly review it, because I don't properly review books (I loathe summarising things). I just want to highlight a couple of his observations and thoughts from the section on "consciousness", which, incidentally, is a very good section, with some genuinely original thoughts raised in defence of a 'gradualist' position on the evolution of consciousness, and, by extension, a 'continuous' position on consciousness itself (as opposed to boring David Chalmers-type 'consciousness-is-a-special-mystery-essence' BS.)
The first observation I want to mention is a rather simple evolutionary insight, but, I think, an extremely important and highly underappreciated one. It goes as follows: since mobile animals (as opposed to plants, or microscopic creatures, or most insects) affect the world they interact with in a very salient way - at the very least, by moving their bodies, they significantly alter the scene coming in through their perceptual systems - they absolutely 100% need to somehow model what 'they' are doing to change things around them, as opposed to what other creatures are doing, or what is happening due to non-organismal forces like currents in the ocean, the wind, etc. This attribute of having some kind of model of the self, or at least some kind of 'awareness' of actions taken, must therefore be very ancient. The nervous systems of fish in the Cambrian explosion probably included some mechanisms that transmitted motor information to other parts of the system. Another interesting thought Godfrey-Smith raises is that acting abnormally in response to a negative stimulus - like tightening up, moving rapidly in a certain direction, or producing a sound - and avoiding sources of pain, is a definite sign of this ability, and a definite sign of feeling pain or experiencing suffering of some kind. And, in this connection, he points out that ants do not act any differently when they are severely damaged; ants still try to drag themselves around even when their abdomens are squished, as if nothing has happened at all.
A more subtle theory he proposes on this topic is that organisms that have evolved to be curious - in the sense that they have evolved to be versatile foragers, looking for different food at different times or in different places, and to take advantage of resources (e.g. both habitation and food resources) in new environments - a category which does seem to encompass both us and the octopi, along perhaps with our chimp cousins and definitely our hominin ancestors - may possess a greater degree of consciousness, because such creatures have to rely heavily on highly integrated thinking, having to really take in all aspects of their environment and solve problems using knowledge from different areas of the brain. It's hard to connect this directly to 'subjectivity', but of course the Integrated Information Theory of Consciousness is a very popular one, and something seems right about it (more on this in a second).
He also suggests - extremely plausibly, as I see it - that humans may have a uniquely strong sense of subjectivity because of internal language, and the meta-thinking it probably enables: thinking about one's thoughts and reflecting on one's mental states. In his nod to the integrated information/workspace theory of consciousness, he hypothesises that language may also be a major tool in helping the mind bring together information coming from different modules (strangely enough, he and I share this theory (https://writingsoftclaitken.blogspot.com.au/2016/10/an-essay-on-mysteries-of-mind.html (I want to be clear that I have been convinced, since I wrote this, that the Chomsky-Berwick hypothesis on the origin of language is very likely quite significantly wrong)), and we both think that there may be evidence for this from child psychology research by the likes of Elizabeth Spelke (actually, I can't remember if he cites Spelke, but he mentions some child psychology research)). Fascinatingly, he hypothesises that it might be reasonable to think of Kahneman and Tversky's category of "System 2 Thinking" as linguistic thinking: thinking which brings all our knowledge to one 'workspace' and allows us to slow down and process things symbolically. And he suggests also that one is less conscious when that internal language is more muted - when that 'voice' is switched off. This last bit does accord with my own experience, I think, when I reflect. And surely it is no coincidence that meditation-boosters talk of the feeling of being 'freed of the self' by eliminating that incessant interior monologue.
So pain, and some kind of monitoring of oneself as an agent, come inevitably with being a creature of a certain size acting in the world (Godfrey-Smith namedrops Dewey in relation to this stuff, which I like, because Dewey is super cool), but maybe only humans have that real special sort of awareness, because that comes with language. Cool stuff.

Also I agree with Godfrey-Smith's claim in this book that intelligence isn't one thing, and that there is no such thing as General Intelligence with a capital G and I (https://writingsoftclaitken.blogspot.com.au/2017/12/a-rambling-essay-called-notion-of.html).

Thursday 15 March 2018

A Great Essay on Sexual Politics

At times, this article from the LRB hints at a robust blank-slatism (does the author know, as Pinker discusses in the mostly terrific book How the Mind Works, that low-status young men running "amok" is a social phenomenon attested cross-culturally? I am also concerned that this is someone who dismisses evolutionary psychology - sexual-selection theory applied to humans - wholesale, when the correct take on evo psych is only that most of it is garbage, whereas figures like Pinker do a pretty respectable job of cleaving to the evo psych hypotheses for which there is solid evidence, or which are at least testable in principle)... but it's a very good discussion of these issues overall. I, too, think we should try to "dwell in that ambivalent place" of which the author speaks.
And I obviously love that the author echoes exactly many of my own thoughts and observations, including noting the strange, disturbing fact that angry incel MRA-types can almost invariably trace most of their externally imposed suffering to fellow men, and yet respond by joining male solidarity movements... https://writingsoftclaitken.blogspot.com.au/2018/01/curious-social-fact.html

Wednesday 7 March 2018

monbiot vs pinker

https://www.theguardian.com/commentisfree/2018/mar/07/environmental-calamity-facts-steven-pinker

lol finally monbiot himself wrote the thing i've been writing


reminds me of monbiot's savage takedowns in 2010 of matt ridley's shitty environmental research:
https://www.theguardian.com/commentisfree/2010/may/31/state-market-nothern-rock-ridley
https://www.theguardian.com/commentisfree/cif-green/2010/jun/18/matt-ridley-rational-optimist-errors


economic libertarianism is literally incompatible with a good understanding of environmental issues... it doesn't survive