Monday 16 May 2016

A Debate about Ethical Eating


In a few paragraphs’ time, I shall post some extracts from the debate that took place in the comments section below this video (https://www.youtube.com/watch?v=oaMbU8vVLOI) of a discussion on “the philosophy of ethical eating” between the excellent Massimo Pigliucci (an evolutionary biologist turned philosopher of science and professional Stoic) and Daniel Kaufman (a philosophy professor at Missouri State University). Overall, I found the video quite stimulating, like most of their previous discussions, and I would recommend watching it.[1]
There are two reasons why I am posting the extracts: 
1.)    I think they capture the crux of the debate around ethical eating.
2.)    As you will see, last night I did something rather outrageous in the major thread below the video, accusing one of the participants in the video (Dan Kaufman) of having views akin to a 19th Century patriarch. As you will also see, I then got a reply from Kaufman himself, only hours later. *GASP*
Since this skirmish is already in the public domain, I hope my interlocutors won’t mind having their words reproduced here, along with their YouTube thumbnails and usernames.
However, as the son of two lawyers, I will say that if either of my interlocutors happen to stumble upon this blog and do not approve of this reproduction, I will immediately take this post down.
So, anyway, here are the extracts:

My First Comment:
Wish you talked more about ecology, environmental degradation, population ethics and the planet. Early in the video, Massimo Pigliucci made a brief comment about the environmental component of vegetarianism and veganism, but the line of thought was not pursued. It seems to me that, on a planet already polluted, plundered, scarred, defoliated and ravaged by human activity, whose climate is being thrust into ever greater flux by the terrifying juggernaut of accelerating climate change (in this possibly late stage of the Anthropocene), any act or way of living that would significantly reduce one's carbon and environmental footprint has truly supreme ethical value. Indeed, I would go so far as to say that significantly reducing one's environmental impact has a level of ethical value unrivalled by basically any other kind of virtuous behaviour. If the future of civilisation and the human race are at stake -- and they undoubtedly are to some degree, because the population can't keep growing indefinitely (under our current global economic system) and there are /already/ millions starving (as Peter Singer would point out) -- then one's environmental impact means everything. (Incidentally, this is the same logic that supposedly motivates that cyborg Nick Bostrom, who genuinely thinks his warnings about AI might be the salvation of humanity.)
I put it to you that even according to a loose virtue ethics framework, the goal of reducing one's carbon and environmental footprint should be a basic requirement -- i.e. a basic mark of good character. In fact, significantly reducing one's carbon and environmental footprint should be regarded as a kind of virtuous behaviour roughly equivalent to, say, volunteering in the community, or mentoring a promising student out of uni-hours.
If you accept this logic, the only questions remaining are the following: 1.) Does switching from a standard Western omnivorous diet to a vegetarian or vegan diet actually end up significantly reducing one's carbon and general environmental footprint? 2.) How does this aggregate over multiple agents?
I am no expert on these questions -- but that's exactly why I was hoping for a serious discussion.
(And notice that I argued this from within the virtue ethics framework, which makes it harder.)

I have since watched Cowspiracy, and am now very confident that this speculation was spot-on. I think the ecological arguments for switching to a vegan diet are overwhelming.

The Main Thread:
At about the 10:00 frame, Massimo's interlocutor begins to castigate vegans. That mystified me. I have been a vegan who very infrequently eats a vegetarian diet when I travel, and I cannot fathom his insistence that there is no moral difference between choosing not to kill and eat animals and choosing to be a carnivore. That position is morally dubious to say the least. Just for the record, it is easy to prepare and eat a vegan diet. It simply means that you make your meals from whole foods and think about how to make them nutritious. After a brief initiation phase, it honestly becomes simple second nature.

Also, Massimo's citing the mistaken idea that the cultivation of non-meat crops would mean that more land would have to be placed under cultivation (if the global diet were more predominantly vegan) is quite simply mistaken, and it is often simply propaganda propagated by the meat production industry.

As for Massimo's presentation of Peter Singer's dilemma at about frame 26:00, WHY does he postulate leather shoes? It is easy to buy comfortable non-leather shoes. Why should we gauge our self-esteem by whether we are wearing designer labels on our feet?

Vegan choices are easy to make. They do not make one miserable, as Massimo contends at about frame 29:00. In fact, as an added bonus, vegan choices can make your life richer, more creative and more FUN! For example, I love creating a vegan dinner party. The carnivores almost always rave about how good the food is. That makes me happy. Like much of life, making vegan choices does not necessarily result in being humorless and puritanical.

Eudaimonic happiness is completely harmonious with veganism and also with epicurean happiness btw ;)
+Aravis Tarkheena I certainly do not feel I was dishonest in my characterization of your comments. "Castigating" seems appropriate when you characterize vegans in the negative ways you did within the video. In response to your assertion that "Vegan choices may not make you miserable, but they would make me miserable," I would be curious to know how long you attempted to eat a vegan diet. In my experience vegan cuisine is healthful, easy to prepare, and economical, and I would be curious to know why your experience is so negative.
You said at 10:00 I "castigated" vegans. 'Castigate' means "reprimand someone severely." I did no such thing. Neither at 10:00 nor at any other point of the discussion.

As for your question, I am a lover of great world cuisine. I would be miserable to never eat sushi again. Never eat shwarma again. Never eat lamb vindaloo again. Never eat schnitzel again. Never eat lobster again. Never eat great charcuterie. These are some of the great pleasures in life. I would no more give up eating meat than I would going to the Louvre or the Metropolitan Opera.
+Aravis Tarkheena I don't have the time to watch the video again, so I'm not going to post the exact frames, but anyone who watches the video will clearly see that you often portray vegans in a negative light in the video. For starters, you say point blank at about frame 8:00 that you didn't admire vegans. If you don't like the word "castigate" feel free to substitute the synonyms "censure," "criticize," or "upbraid," all of which would serve the purpose equally well.

You did not choose to comment on whether or not you have seriously tried a vegan diet. And as for your food choices, I have no intention of moralizing or trying to tell you what your food choices should be. What I'm saying is that veganism is not an ascetic, labor-intensive choice. After the initial learning curve, preparing a vegan diet can enhance the flourishing and the eudaimonic effect on one's existence. However, if you feel you must eat shwarma, lamb vindaloo, and lobster in order to have personal contentment, then you will disagree with me. I am not a deontologist, so I'm not going to preach to you about your food choices. It would be futile for me to do so anyway.
+2b Sirius I never once made any hostile or otherwise harsh remarks about vegans, which is what "castigate" and its "synonyms" mean. I simply explained why I do not believe that the sort of diet they embrace is obligatory. The discussion was academic and cordial. Fortunately, anyone who wants to know what I actually said -- as opposed to your mischaracterization -- can see it for him or herself.

Since I do not believe veganism is obligatory, I see no reason to take on the "learning curve" you describe. Just as I wouldn't see how long I can go without setting foot in the Metropolitan Museum of Art or the Avery Fisher Music Hall, I am not going to see how long I can get along without the cuisine I grew up with and love (like my grandmother's Paprikas Csirke recipe).
+Aravis Tarkheena Sorry, but you were both hostile and dismissive of vegans and veganism. But I do agree that viewers of the video can watch your characterizations and draw their own conclusions. As for your defensiveness about being unwilling to explore veganism for yourself, no one is twisting your arm and trying to make you do so. However, if you think that eating veal and going to the MoMA are moral equivalents, I would love to see a forensic analysis of the ethics which allow you to make such claims.
+2b Sirius Worth noting that patriarchs were very attached to the tradition of women as nurturing, feminine domestic mannequins before the two major Women's Lib movements. Might they have tried to defend this beautiful tradition within some loose Aristotelian virtue ethics framework? Very possibly -- in fact, I think that's what some of them did (Janet Radcliffe Richards has made this kind of point when talking about the concept of "the natural" in relation to feminism).
You can be amoral about animal welfare and the future of the planet if you want, but I strongly suspect the kind of apologia voiced in this video will look very bad in a couple of decades -- like, seriously bad. Like, repugnant. Just as we do when we reflect on the progress of women, we may well wonder, "How were so many of us so selfish and cruel on this matter?" And we will bow our heads in shame. As I do myself, knowing that my own commitments are weak.
As for Singer being a hypocrite, well, at least he's doing his best. At least he is preaching something that is worth preaching, even if he doesn't quite live up to his ideal. I admire him for that far more than I admire the non-hypocrisy of someone who defends indifference to animal welfare and the future of the planet on the basis of the aesthetic value of meat in their lives. (Again, wasn't there a certain aesthetic value in the traditional role for women, those dainty, elegant, bedizened women in their kidney-crushing corsets who pouted and fanned themselves and curtseyed for strong, handsome men with mutton chops? Didn't the moral value of equality and liberty and self-determination and a shared humanity outweigh this aesthetic value? Might it in this case too?)
You can be amoral about animal welfare and the future of the planet if you want, but I strongly suspect the kind of apologia voiced in this video will look very bad in a couple of decades -- like, seriously bad. Like, repugnant. ---------------------------------- Right. All the great chefs of the world will come to be viewed as serial killers, and the Michelin Star will be known as a badge of infamy. Charcuteries will be shut down by righteous mobs, and the shwarma hawkers in the souk will be chased down as the villains they are. I think, I'll be just fine. Thanks. And I'll put up my good works against yours any day of the week. It's easy to be righteous on the internet.
+Thomas Aitken I think it is very possible that in the future the facile social justifications for eating the flesh of dead animals will come to be seen in a very negative light. After all, in the past slavery and cannibalism were condoned as cultural norms, as was the complete subjugation of women, as you pointed out in your comment.

I did sound awfully righteous, I will admit, but I also said that I myself felt “shame”. As for his mocking point about chefs and butchers, I think he’s not entirely missing the mark, but I really don’t think he should be so smug. The stats quite strongly suggest that vegetarianism will continue to grow in popularity over the next few decades (see the graphs in The Better Angels of our Nature), and it wouldn’t surprise me if people who remained highly recalcitrant and defensive during this period (resorting, for example, to “aesthetic” arguments to defend their carnivorous eating habits) came to be seen as selfish and barbaric in a couple of decades’ time. It wouldn’t surprise me at all, in fact. And I really don’t think the analogy with feminism is that misguided either. Yes, it may seem absurd, but it also would have seemed absurd in the 19th Century to accuse a patriarch of acting in a morally repugnant way, or to tell a woman of that time that she was being oppressed from the womb to the grave.
If that sounds melodramatic (and it does), so be it. 



[1] One interesting thing about their discussion is that they both mount their cases from the point of view of old-fashioned “virtue ethics”. This has the consequence that they sidestep entirely the usual utilitarian arguments of vegetarians, and talk little about animal welfare (no mention of Jeremy Bentham or his famous epigram). My own feeling is that this is a weakness of their discussion: I think that one sort of has to assume a more utilitarian framework when wading into debates about vegetarianism and veganism (but that is not a point I will argue).

Wednesday 11 May 2016

An Adapted and Extended Uni Essay on the notorious Newcomb's Problem

An Atypical Defence of One-boxing in Newcomb’s Problem

I am an unashamed, incorrigible one-boxer, and I think you should be too. My arguments for this position are not typical, though they are not completely original either. All of them emerge from my reckoning that questions of free will are at the heart of the problem. For reasons that will become clear, it is the arguments of three computer scientists that form the backbone of my attack: Scott Aaronson, Radford Neal and Seth Lloyd. The first two of this trio show a way of re-imagining Newcomb’s problem that simultaneously: a) accounts for the “common cause” dilemma, b) shows why one-boxers don’t have to reject Causal Decision Theory, and c) repudiates the claim that the fully deterministic one-box argument is assuming “backwards causation”. The third of these has written a paper proving a computational/philosophical notion he calls the “Theorem of Human Unpredictability”, which I believe shows that anyone who tries to apply CDT to the imagined ‘post-Prediction’ stage of Newcomb’s problem (what Burgess calls “the second stage”), without reformulating the problem in the manner of Aaronson and Neal, cannot have the slightest clue what the requisite ‘subjunctive conditional’ probabilities actually are, and thus can’t legitimately apply CDT.[1]

In order to stimulate thoughts about free will, let us begin by means of a second-person vignette – imagining what it would be like to carry out Newcomb’s problem in real-life:
It is 11 AM on a Monday, and you are walking into a room, alone. (To add mystery and intrigue, we can say it’s a dark and moody room, with red carpet, like an old cinema.) The room is empty up until the back wall, where there sit two futuristic-looking boxes (I imagine them placed on identical and symmetrically positioned black pedestals). One is transparent and contains $1000. The other is opaque. For whatever reason, you know (for certain) that the opaque box either contains $1,000,000 or $0, and that whichever sum it contains has been chosen on the basis of a super accurate prediction of what you will choose. You know that you are only allowed to choose either to take the transparent and opaque boxes together, or the opaque box alone. You know that if the prediction has been made for the former choice, there will almost certainly be nothing in the opaque box if you take it, and you know that if the prediction has been made for the latter choice, there will almost certainly be $1,000,000 in the opaque box if you take it.  You don’t know how this super accurate prediction really works, but you know that it is super accurate. To be precise, you know that it predicts correctly which box people pick 99.9% of the time.
This is what you are thinking as you near the boxes:
“If the prediction is really 99.9% accurate, then there is only a 0.1% chance that it will not get things right, which means that if I choose both boxes, then it is 99.9% certain that there will be nothing in the opaque box and if I choose the one opaque box, then it is 99.9% certain that there will be $1,000,000 in this box, which means I should obviously choose the one opaque box. Wait no, none of this makes any sense. The sums of money that are in the boxes have already been decided; there is no backwards causation, the prediction has already been made. I can’t be 99.9% certain that if I choose both boxes there will be nothing in the opaque box and that if I choose the one opaque box there will be $1,000,000 in this box. But, hang on, if that is not how this thing is working, then how can the Predictor be 99.9% accurate? Am I really not freely deliberating right now? Are my thoughts not reflecting some kind of spontaneous process of reasoning? How can I really not be making an unpredictable choice here? I mean… I don’t know. I still don’t even know which box I’m going to pick. And I could do something really crazy. No, actually, it would probably be too predictably crazy to do something crazy. Actually, maybe it is too predictable of me to think that kind of self-reflexive thought, and devolve into ever-increasing levels of bluffing. Onebox-twoboxes-onebox-twoboxes-onebox-twoboxes.
You know, I could base my decision on some really random aspect of my personal biography – like the number of letters in the name of my year 2 teacher, Mrs Evans. It would make sense if I two-boxed for an even number of letters and one-boxed for an odd number of letters. But, actually, I don’t know which it is in her case: with the appellation, her name is 8 letters long, but without it, it’s 5. Do I say it’s 5 or 8? One box or two box? Shit. Maybe I should just use the number of letters in some other person’s name, or the name of the campsite we used to go to when I was a kid. Maybe I should use the number of letters in some species of grass – you know, the Latin name or something. Actually, no – what if I still use Mrs Evans’ name, except do two-boxing for the odd number and one-boxing for the even? Only problem is that I still don’t know whether to include the appellation or not.
You know, one good thing about thoughts is how little sense they all make – I reckon I’m being totally unpredictable in having them. They seem unpredictable, don’t they? But probably “seem” is perhaps the key word here. And maybe I’m going too far with the lack of sense. Maybe the prediction had enough information on me to know that that’s the kind of thing I would do, so maybe I should do something that just makes sense in a straightforward way – but, then again, presumably the prediction would have taken into account my agonising vacillations and recursive self-looping and meta-entanglement. So I really don’t know. I really don’t know. Ah, I guess, I’ll just take the one box – no two boxes – no one box – no two boxes – yes I’ll take two boxes – no, I’ll take one.”
You take one. When you leave the room, you open the opaque box: it is stuffed with wads of cash – $1,000,000. The prediction was a success, just as an outside observer would expect. You are not an outlier, you are not unusual. With a feeling of immense confusion and disquiet, you suddenly realise that if you had taken two boxes, you would have got $1,001,000. This seems deeply strange. Beating the prediction would have made you a total outlier (the first one to fail in the last 1000 enactments), and yet you feel as if you were, at multiple moments, totally on the verge of picking the two boxes. In other words, you feel that you were totally on the verge of becoming an outlier. You feel as if your decisions in that room were totally unpredictable – that your mind was running wild, in a way that was profoundly organic and spontaneous and weird. Were they not? You don’t know what to think. This is really strange.
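(A quick aside before unpacking this: the bare statistics the story stipulates are easy to check. The sketch below is my own illustrative toy, not anything from the literature; it simply treats the Predictor as a black box that mirrors your actual choice with 99.9% accuracy and tallies the average payoffs, while deliberately saying nothing about how such accuracy could be achieved, which is of course the real puzzle.)

```python
import random

ACCURACY = 0.999  # the Predictor's stipulated hit rate

def payoff(choice):
    # The Predictor mirrors your actual choice with probability ACCURACY,
    # and fills the opaque box only if it predicts one-boxing.
    correct = random.random() < ACCURACY
    prediction = choice if correct else ("two-box" if choice == "one-box" else "one-box")
    opaque = 1_000_000 if prediction == "one-box" else 0
    return opaque if choice == "one-box" else opaque + 1_000

TRIALS = 100_000
for choice in ("one-box", "two-box"):
    average = sum(payoff(choice) for _ in range(TRIALS)) / TRIALS
    print(choice, round(average))   # roughly 999,000 vs 2,000
```

From the outside observer's vantage point, in other words, habitual one-boxers walk away with roughly $999,000 on average and habitual two-boxers with roughly $2,000.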
I think that this story really brings out what I believe to be the core of Newcomb’s problem: the age-old question of ‘free will’. As this way of presenting the problem highlights, there is a deep weirdness in Newcomb’s problem. Like all people, I have a conviction that, at least when I am thinking in a very self-conscious way (listening to my own interior monologue), there is a profound spontaneity, unpredictability and – dare I say – freedom about my thoughts (and the actions they lead to). Bizarre things can happen: I can think things like, “I’m going to raise my arm, I’m going to raise my arm, I’m going to raise my arm” and then do nothing for three seconds and then lower my arm slightly, and then thrust it back upwards. These actions make no sense – not even to me. In fact, on such occasions, I feel that I can’t even predict my own actions.
Newcomb’s problem is complicated largely by the fact that we can so easily imagine someone thinking in just such a self-conscious way as they approach the decision between one-boxing and two-boxing. We can easily imagine someone agonising over the clash between Causal Decision Theory and Evidential Decision Theory, trying to sort out their thoughts about what has made the Predictor so successful in the past, and generally unravelling into an increasingly recursive, self-referential and involute thought-vortex (until they are at last forced to just make a choice). And yet, no matter how complicated and self-referential these thoughts get (and no matter if the decision-maker ends up basing their decision on something totally random, like the number of letters in the name of their second-grade teacher), we are forced by the set-up of the problem to think that their final action will have been fully predictable in advance. This, in turn, seems to force us to regard our thoughts as we think about the problem – no matter how convoluted, self-referential and ‘meta’ they get – as still being fully predictable.
Now, some philosophers regard the bizarreness of the story above as indicating something wrong with Newcomb’s problem itself. One group believes, for various subtle reasons, that Newcomb’s problem is either fundamentally insoluble (because it’s a true paradox) or fundamentally incoherent, and should be dissolved (Levi 1975, Sorensen 1987, Priest 2002, Maitzen and Wilson 2003, Jeffrey 2004, Slezak 2005). Another, smaller group holds that, despite claims to the contrary, a Predictor that was really perfect (or even near-perfect) would have to be some kind of supernatural being relying on “backwards causation” (Mackie 1977, Kavia 1980, Bar-Hillel and Margalit 1980, Cargile 1980, Schlesinger 1980). This claim is either used to argue for two-boxing (‘let us just use Causal Decision Theory and not worry about the lack of probabilistic independence between the states and the actions’) or to argue for the dissolution of the problem (‘why discuss an absurdity?’). I shall now discuss an argument for one-boxing that simultaneously repudiates all these deflationary and ‘corrective’ positions. The argument is independently attributable to the computer scientist Scott Aaronson and to the computer scientist Radford Neal. I shall therefore propound it using quotes from both.
In chapter 19 of his book Quantum Computing Since Democritus, titled “Free Will”, Aaronson introduces Newcomb’s problem as a choice between one-boxing, two-boxing and “the boring “Wittgenstein” position” (Aaronson, 2013, p. 295). Aaronson seems to conceive of the Wittgenstein position as the recognition of the inherent contradiction between the presupposition of free will that Newcomb’s problem makes (you are making a decision) and its simultaneous requirement that we really have no choice (the Predictor knows what you will do). Evidently, this Wittgenstein position is one of those held by many of the former group of deflationists. Though Aaronson admits that he himself used to be a “Wittgenstein” on Newcomb’s problem, he now describes himself as “an intellectually fulfilled one-boxer” (Aaronson, 2013, p. 296). His and Neal’s argument explains why: by showing that Newcomb’s problem is possible within our own universe and that one-boxing is absolutely the right way to go, Aaronson and Neal refute both the latter, “backwards causation” group and the former, hard-deflationist group in one fell swoop.
In order to make their case, both Aaronson and Neal pull a stunt more usually used by two-boxers seeking to overcome the “common cause” dilemma: they imagine what the Predictor would have to be like in real life.[2] Since they are computer scientists, both Neal and Aaronson imagine not that the Predictor is some kind of demon, alien or God, but that the Predictor is some kind of computer. However, one important difference between them and someone like Burgess (2004) is that they don’t believe that the Predictor could be as super-accurate as the problem stipulates with anything resembling current technology. To them, the Predictor’s having the ability to, say, conduct brain-scanning and to create a profile of you from your internet activity is nowhere near sufficient to explain the phenomenal powers of prediction that the problem stipulates.[3] The fundamental reason why our modern technology is so inadequate is revealed in the fact that the Predictor would have to successfully predict thought processes like those I outlined above. As Neal writes, “While some of our actions are easy to predict, other actions could only be predicted with high accuracy by a being who has knowledge of almost every detail of our memories and inclinations, and who uses this knowledge to simulate how we will act in a given situation” (Neal, 2006, p. 13). So though we might imagine that some of the Predictor’s successes would have come with people who were pre-committed “one-boxers” or “two-boxers”, or people who don’t overthink things, we must also assume that the Predictor would have had success with highly self-conscious and recursive reasoners.
Importantly, since ‘you’ could easily be one of those tricky reasoners, the Predictor would have to know “absolutely everything about you” and would need to solve a “you-complete problem” (Aaronson, 2013, p. 296). And this leads to an earth-shattering conclusion: “the Predictor needs to run a simulation of you that’s so accurate it would essentially bring into existence another copy of you” (Aaronson, 2013, p. 297). In Neal’s words, “you have no way of knowing that you are not [the simulated being]” (Neal, 2006, p. 13). The implications of this for one’s decisions may initially be unclear, but they are certainly not even-handed. Aaronson unpacks them nicely: “If you’re the simulation, and you choose both boxes, then that actually is going to affect the box contents: it will cause the Predictor not to put the million dollars in the box. And that’s why you should just take the one box” (2013, p. 297).
The brilliance of this line of thought should be clear. It overcomes the common cause dilemma because you yourself are the common cause. And it suggests a way that the Predictor could always (or almost always) get the right answer without the invocation of backwards causation, or indeed any supernaturalism at all. Perhaps most significantly, it dissolves the central issue of the Newcomb debate: the supposed conflict between Causal Decision Theory and Evidential Decision Theory (in particular, the supposed necessity that one-boxers have to ‘reject’ the former). If you don’t know whether you’re the decision-maker or the simulation of the decision-maker, there can be no conflict between the two decision theories at all. Both decision theories recommend one-boxing in this story, since the indicative conditional probabilities required by EDT are exactly the same as the subjunctive conditional probabilities required by CDT. To put this in formal terms:
For Evidential Decision Theory, EU(A) = ∑i P(Oi | A) × U(Oi). If we assume that utility equals monetary value, the expected utility of one-boxing in Aaronson and Neal’s story is simply 1 × 1,000,000 (+ 0 × 0) = 1,000,000. The expected utility of two-boxing, by contrast, is 1 × 1,000 (+ 0 × 0) = 1,000. For Causal Decision Theory, CEU(A) = ∑i P(A □→ Oi) × U(Oi), where “A □→ Oi” is the subjunctive conditional “if I were to do A, outcome Oi would come about”. If we make the same assumption about utility, the expected utility of one-boxing in Aaronson and Neal’s story is once again 1 × 1,000,000 (+ 0 × 0) = 1,000,000. Just as before, the expected utility of two-boxing is 1 × 1,000 (+ 0 × 0) = 1,000. So one-boxing always wins, and there is no problem with CDT.[4]
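A tiny sketch may help make the “you are the common cause” point vivid. This is my own illustrative toy, not Aaronson’s or Neal’s code, and it assumes utility equals monetary value: the key feature is simply that one and the same decision procedure is run once by the Predictor (to fill the box) and once “for real”, so the prediction and the choice cannot come apart, and the indicative and subjunctive probabilities coincide.

```python
# Toy version of the Aaronson/Neal framing (illustrative only, utility = money).
# One decision procedure is run twice: once by the Predictor to fill the opaque box,
# and once as 'your' actual choice. Prediction and choice therefore always agree.

def decide():
    return "one-box"          # change to "two-box" to see the other outcome

def play(decision_procedure):
    prediction = decision_procedure()                  # the Predictor's simulation of you
    opaque = 1_000_000 if prediction == "one-box" else 0
    choice = decision_procedure()                      # the 'real' decision
    return opaque if choice == "one-box" else opaque + 1_000

print(play(decide))   # 1,000,000 for a one-boxing procedure; 1,000 for a two-boxing one
```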

I should say that I do not expect committed two-boxers to be convinced by this argument. No doubt their reply would be that this is just a cheap way of overcoming the common cause dilemma. Contra my claim in footnote 2, they would probably say that this re-imagining does destroy the structure of the problem, because it is functionally equivalent to injecting backwards causation or clairvoyance into the problem, which the original populariser of the problem, Robert Nozick, explicitly tells us not to do (Nozick, 1969, p.134). There’s a sense in which this reply shows the futility of even arguing Newcomb’s problem: if the two sides conceive of the problem in different ways, what can you do to forge agreement? Though I don’t wholly reject this nihilistic position, I hope to show why there is one right answer: namely, one-boxing. 
The standard two-boxer tack, exemplified by someone like Burgess (2004), is to seek out a common cause for the alignment between predictions and decisions that allows for some degree of freedom. The novel thing about Burgess’ argument is that he ends up breaking the problem up into two stages: stage one, the stage “we are currently in”, before the Predictor has gained all the information on which the prediction will be based; and stage two, the post-Prediction period in which the decision has to be made (Burgess, 2004). Burgess believes that Newcomb’s problem is normally considered just in terms of the “second stage”, and believes, like all two-boxers, that in this stage “the conditional expected outcomes support two-boxing”, since “for all values of φ [P(According to your epistemic state, you will one-box)], your conditional expected outcome for two-boxing will be $1000 better than one-boxing” (Burgess, 2004, p. 280). However, in the first stage, he says, you have “an opportunity to influence your BATOB [brainstate at the time of brainscan]”. This means you can influence “the alien’s prediction” and “whether or not the $1m is placed in the opaque box” (Burgess, 2004, p. 280). This extra detail throws a spanner in the works, since it dramatically changes the conditional expected outcomes:
If your brainstate during the prediction can influence the prediction, the expected utility of absolutely and sincerely committing yourself to one-boxing in this first stage, according to Causal Decision Theory, would be something like 0.95 × 1,000,000 + 0.05 × 0 = 950,000. By contrast, the expected utility of remaining a confident two-boxer would be something like 0.8 × 1,000 + 0.2 × 1,001,000 = 201,000.[5]
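Putting that arithmetic into code makes the gap stark. The probability values below are just the rough illustrative figures used above, not anything Burgess himself commits to:

```python
# First-stage expected utilities, using the rough illustrative probabilities above.
# p_one_box is the probability that the Predictor ends up predicting "one-box",
# given the policy you adopt now; two_box_later says whether you take both boxes anyway.

def first_stage_eu(p_one_box, two_box_later=False):
    bonus = 1_000 if two_box_later else 0
    return p_one_box * (1_000_000 + bonus) + (1 - p_one_box) * bonus

print(first_stage_eu(0.95))                       # sincerely commit to one-boxing: 950,000
print(first_stage_eu(0.20, two_box_later=True))   # remain a confident two-boxer:  201,000
```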
This logic leads Burgess to a strange recommendation: “Decide to commit yourself to one-boxing before you even get brainscanned. Indeed, you may as well commit yourself to one-boxing right now” (2004, p. 280). At the same time, he does not at all dismiss his reasoning about the rationality of two-boxing in the second stage. Thus, his ultimate conclusion is as follows: “if you are in the first stage, you should commit yourself to one-boxing and if you are in the second stage, you should two-box” (2004, p. 280). This supposedly holds even though one’s sincere commitment to one-boxing will make it almost impossible to perform the rational act in the second stage.
I think Burgess’ argument is mostly very strong. One counterargument that has been made against it is that his temporal specification destroys the abstract structure of the problem. Peter Slezak, one of the first group of deflationists who thinks Newcomb’s problem is a true paradox (produced by a “cognitive illusion”) says the following of Burgess’ reformulation: “Burgess’s strategy attempts to avoid the demon’s prediction by assuming that a presentiment or tickle is merely evidence of the choice and not identical with the choice decision itself which is, thereby, assumed to be possible contrary to the tickle. Clearly, however, this must be ruled out since, ex hypothesi, as a reliable predictor, the demon will anticipate such a strategy” (Slezak, 2005, p. 2029). I’m not sure Slezak is right about this, since Burgess seems to concede that most likely one won’t be able to renounce one’s commitment in the second stage, and therefore the demon won’t be outwitted. (I shall later give a different argument for why Burgess is mistaken to think that Causal Decision Theory is actually rational in the second stage.)
To me, the interesting thing about Slezak’s own argument is its similarity to Aaronson’s and Neal’s. Slezak thinks what really makes Newcomb’s problem paradoxical is that it is “a way of contriving a Prisoner’s Dilemma against one’s self” (Slezak, 2005, p. 2030). As he elaborates, “The illusion of a partner, albeit supernatural, disguises the game against one’s self – the hidden self-referentiality that underlies Priest’s (2002) diagnosis of “rational dilemma,” Sorensen’s (1987) instability and Maitzen and Wilson’s (2003) vicious regress. Newcomb’s Problem is a device for externalizing and reflecting one’s own decisions” (Slezak, 2005, p. 2030). Of course, the difference between the arguments of Slezak and the two computer scientists is that Aaronson and Neal show that conceiving of the problem as a Prisoner’s Dilemma against one’s self doesn’t have to be paradoxical. But even though Slezak might be wrong to believe that conceiving of the problem as a battle against oneself is inherently paradoxical, there is, I believe, something important to Slezak’s preoccupation with the paradox of “self-reference” in Newcomb’s problem – and this insight (conveniently) takes us back to Burgess.
The reason why I don’t agree with Burgess, or indeed any two-boxers (i.e. the reason why I don’t think Causal Decision Theory can be rationally applied to “the second stage”) is actually directly related to the problem of self-reference – in particular, something called the “Theorem of Human Unpredictability”.

 In his paper, “A Turing Test for Free Will”, the quantum computing pioneer Seth Lloyd proves this Theorem of Human Unpredictability. As he describes it, this theorem boils down to the claim “that the problem of predicting the results of one’s decision making process is computationally strictly harder than simply going through the process itself” and that “the familiar feeling of not knowing what one’s decision will be beforehand is required by the nature of the decision making process” (Lloyd, 2013, p. 2-3). He says his argument (based on “uncomputability” and Turing’s Halting Problem) can be thought of as “making mathematically precise the suggestion of Mackay that free will arises from a form of intrinsic logical indeterminacy, and Popper’s suggestion that Gödelian paradoxes can prevent systems from being able to predict their own future behaviour” (Lloyd, 2013, p. 6). Lloyd’s mathematical reasoning would take up too much room to include, but, in essence, he provides what seems to me to be convincing proof of one matter that bears directly on Newcomb’s problem: that one cannot have any idea of the subjunctive conditional probabilities required to apply CDT in the second stage.
Causal Decision Theory tells us that in “stage two” of Newcomb’s problem two-boxing is the better option irrespective of the exact values of one’s subjunctive conditional probabilities (however likely one thinks it is that the Predictor will have predicted a given choice, based on introspection) because the two-boxing option always gives us an extra $1000. But Lloyd’s theorem raises the question: What if it is simply a mistake to try to find any values for these subjunctive conditional probabilities? To even make this attempt assumes, to at least some extent, the ability to know oneself. Yet, as Lloyd shows, recursive reasoners don’t know themselves.
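To get a feel for the self-reference at work, here is a toy diagonalisation in the spirit of the Halting Problem argument Lloyd invokes. It is emphatically not his proof, just my own illustrative sketch: an agent that can itself run and consult any purported predictor of its behaviour can always falsify that predictor by doing the opposite.

```python
# Toy diagonalisation (illustrative only, not Lloyd's theorem): an agent that can run
# a predictor of itself and act on the result can always do the opposite of what is
# predicted, so no predictor the agent has access to can be right about it.

def make_contrarian(predictor):
    def agent():
        return "two-box" if predictor(agent) == "one-box" else "one-box"
    return agent

def naive_predictor(agent):
    return "one-box"          # any guess the agent can inspect will do

me = make_contrarian(naive_predictor)
print(naive_predictor(me), "was predicted;", me(), "was chosen")  # always mismatched
```

A predictor that instead tried to settle matters by actually running the agent would, in this toy set-up, never halt, which is (very roughly) the Halting Problem flavour of Lloyd’s point.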
The consequences of this are clear: if you don’t know yourself, you shouldn’t be so arrogant as to believe you can insert the probabilities that second-stage CDT requires. Even if you don’t want to re-imagine Newcomb’s problem in the manner of Aaronson and Neal (even if you want to keep the Predictor’s powers mysterious), you should still not try to apply CDT to the second stage of Newcomb’s problem. You should still not regard it as rational to try to outwit the Predictor. You should instead accept the power of the Predictor, and take the $1,000,000. In short, you should become a one-boxer, like me.[6]
Reference List

Cited:

Book:  Aaronson, S. 2013, Quantum Computing Since Democritus, Cambridge University Press, Cambridge.

Articles:  Burgess, S. 2004, “The Newcomb Problem: An Unqualified Resolution”, Synthese 138 (2): 261–87. Accessed from:
<http://www.jstor.org/stable/20118389>

Lloyd, S. 2013, “A Turing Test for Free Will”, Massachusetts Institute of Technology Department of Mechanical Engineering. Accessed from:
<http://arxiv.org/pdf/1310.3225.pdf>

Neal, R. 2006, “Puzzles of Anthropic Reasoning Resolved Using Full Non-indexical Conditioning”, Technical Report No. 0607, Department of Statistics, University of Toronto. Accessed from:
<http://arxiv.org/pdf/math/0608592.pdf>                                  

Nozick, R. 1969, "Newcomb's Problem and Two Principles of Choice", Essays in Honor of Carl G Hempel, ed. Rescher, Nicholas. Accessed from:
<http://faculty.arts.ubc.ca/rjohns/nozick_newcomb.pdf>

Slezak, P. 2005, "Newcomb's Problem as Cognitive Illusion", Proceedings of the 27th Annual Conference of the Cognitive Science Society, Bruno G. Bara, Lawrence Barsalou & Monica Bucciarelli eds., Mahway, N.J.: Lawrence Erlbaum, pp. 2027-2033. Accessed from:
<http://csjarchive.cogsci.rpi.edu/proceedings/2005/docs/p2027.pdf>

Uncited:

Craig, W.L. 1987, "Divine Foreknowledge and Newcomb's Paradox," Philosophia 17: 331-350. Accessed from:
<http://www.leaderu.com/offices/billcraig/docs/newcomb.html>

Hedden, B. Lecture notes.

Lewis, D. 1981, “Why Ain’cha Rich?”, Noûs 15 (3): 377–80. Accessed from:
<http://www.jstor.org/stable/2215439>

Maitzen, S & Wilson, G. 2003, “Newcomb’s Hidden Regress”, Theory and Decision 54:2 151-162. Accessed from:
<http://commonweb.unifr.ch/artsdean/pub/gestens/f/as/files/4610/13602_104700.pdf>






[1] It should be noted that I do not aim to argue that Causal Decision Theory is generally incorrect; instead, like most philosophers, I believe that CDT is overall superior to its competitor, Evidential Decision Theory. One of the goals of my essay is to show that advocating one-boxing in Newcomb’s problem doesn’t have to entail spurning CDT (in other words, that Newcomb’s problem is not identical to those other common cause problems). The Aaronson and Neal re-imagining demonstrates clearly the non-necessity of spurning CDT, as does the temporally specified CDT-based “first-stage” argument of Simon Burgess, which (as will become clear) I more or less accept.
The problem with using CDT comes when we keep the Predictor’s powers mysterious (but without allowing backwards causation). It is its application in these contexts that I do find illegitimate, and EDT is left as the only formal alternative.
[2] While doing this does lead them away from the standard, abstract formulation of the problem, I nevertheless think it is a valid way of arguing the one-box case, because, as I shall explain, it doesn’t destroy the fundamental weird structure of the problem.
[3] Notwithstanding the results of the famous “Libet” experiments in the 1980s, which Aaronson mentions after he finishes his argument.
[4] My repeated use of the probability value “1” here is perhaps open to question, since we are conventionally not meant to assume that the Predictor is absolutely perfect. But Aaronson and Neal have an answer to this: they say that the simulation could be slightly off sometimes, resulting in extremely rare errors, but you’d still have no way of knowing whether you are the simulation or the decision-maker. Thus the probability value of “1” seems permissible (perhaps it should be lowered to 0.99 or so, but this would make little difference).
[5] As I shall argue, it is actually very difficult to have any hold on these subjunctive conditional probabilities.
[6] Admittedly, there is still something very odd about this. A futility lingers. It remains true (if you don’t re-imagine the story in the manner of Aaronson and Neal) that if a seriously committed two-boxer walks into the room with the boxes, there will be no money in the opaque box, and therefore it wouldn’t be rational to take only this box. But I do have an answer in this case. I would say the following: Imagine if such a committed two-boxer suddenly decided to take the one box. The stipulations of the problem tell us that there would still almost certainly be $1,000,000 in it. The reason this is not absurd is that, unbeknownst to her, this committed two-boxer actually harboured signs that she wasn’t as fully committed as other two-boxers during the prediction. In other words, she didn’t know herself… And this would be true of all two-boxers who suddenly decided to take one box.
Another response is to take a step back like Burgess and say, “It would have been more rational to commit yourself to one-boxing to begin with”.