One day you’re gambolling through the woods, when you happen upon three genies. Each tries to win your (peculiarly exclusive) friendship with a distinct offer:
Genie A: If you pick me, I’ll give you a 90% chance of infinite (positive) utility, and a 10% chance of business as usual.
Genie B: If you pick me, I’ll give you a 10% chance of infinite (positive) utility, and a 90% chance of business as usual.
Genie C: If you pick me, I’ll give you a 100% chance of business as usual and a nice juicy apple.
Incidentally, these are quite unstable genies, so it’s just possible one will pop out of existence while you’re deciding - so you’d better specify the order of the three, not just your top pick.
Since this is the Real World(™) you’re allowed to be as sceptical of their claims as one would hope you actually would be. All three genies can and will perform any feats you can imagine asking of them to prove their power (though they won’t be tricked into giving you infinite utility, nice apples, or anything else that would devalue their offering in the process), and all three seem as perfectly sincere, honest and confident in their claims as you’re capable of telling.
I have a few seemingly incompatible intuitions here:
- (A) Genie A is obviously a slam dunk, and why are you asking this stupid question?
- (B) I want to maximise my expected utility, so Genies A and B are interchangeably the best two (as arguably is Genie C, since we apparently believe that the normal world has a positive chance of infinite utility)
- (C) Assuming we're not positing logic-defying supermagic genies, there's no trick so amazing, no finite number of utilons any genie could brandish before me, that would give me any evidence that it could actually generate infinite utilons. So if I'm not to completely throw away my understanding of how the world works, I must assume that Genies A and B are liars (or mistaken), and that Genie C is making me the best offer
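To make the expected-utility intuition concrete, here's a minimal Python sketch (my own toy, not part of the original scenario), using `float("inf")` as a stand-in for infinite utility and an arbitrary one-utilon value for the apple:

```python
# Toy expected-utility comparison of the three genies.
# Assumptions (mine): business as usual is normalised to 0 utilons,
# the apple is worth 1 utilon, and infinite utility behaves like
# IEEE-754 infinity under ordinary real arithmetic.
INF = float("inf")
BUSINESS_AS_USUAL = 0.0
APPLE = 1.0

ev_a = 0.9 * INF + 0.1 * BUSINESS_AS_USUAL  # Genie A
ev_b = 0.1 * INF + 0.9 * BUSINESS_AS_USUAL  # Genie B
ev_c = 1.0 * (BUSINESS_AS_USUAL + APPLE)    # Genie C

print(ev_a, ev_b, ev_c)  # inf inf 1.0
```

On this naive accounting, Genies A and B come out exactly tied at infinite expected utility (0.9 × ∞ = 0.1 × ∞ = ∞), which is precisely why the "maximise expected utility" intuition can't distinguish them.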
A couple of friends offered other intuitions:
- (D) I might have some infinitesimal credence in Genie A’s and/or Genie B’s offers that would multiply out to a finite amount of expected utility, which could be weighed against the apple.
- (E) I might think (after the genies have demonstrated sufficiently extraordinary abilities) that our understanding of the world has been so undermined that our mathematical/physical reasoning could be fatally flawed - thus even if I agreed with the logic of B or C, I would pick Genie A, betting on my own reasoning being flawed
I don’t have a firm reason for accepting or rejecting any of these lines of thought, but I lean towards C.
I reject A because, while it’s intuitive that (say) turning the observable universe into lemonade with a finger click is evidence that you could produce infinite lemonade, I can’t see any reason why it really is - you would be no closer to having infinite lemonade than you were before the finger click.
I reject B for the same reason, plus its inability to give us any direction whatsoever. I take it as basically an axiom that my system of ethics should be able to order at least some decisions, which feels like an unsatisfying reason for rejection, but that doesn’t mean it’s not the right one - or at least, good enough for now.
I reject D because even if multiplying infinitesimals by infinities can lead to finite positives (a question which I don’t feel competent to opine on), I have enough trouble telling what decile my credences are in (eg in the real world I would struggle to tell the difference between a 60% bet and a 50 or 70% one). So I don’t see how I could possibly judge that I had infinitesimal credence in something, let alone learn which infinitesimal credence I had, which I would need to figure out my expected utility.
And I reject E because it doesn’t give us any useful guidance until we actually happen to meet such genies, which rather defeats the point of the thought experiment. Less glibly, I reason similarly to my rejection of A - no amount of evidence can make me reach conclusions of which I can’t make logical sense, so nothing the genies could do could make me genuinely accept the premise that everything I know is wrong.
C also keeps me safest from Pascalian Wagers. So C it is for now, though I'd like to find a more thorough way of reasoning about this.
[Update] 24 hours after posting this, I've updated quite strongly in favour of D, on the thought that I don't necessarily need to introspect to find my credence in a proposition: I can instead assert that as a payoff becomes larger, my a priori credence in it decreases, probably fast enough to decrease my expectation overall. I would have a higher expectation from someone who offered me £5 than from someone who offered me £5000, and higher from either of them than from someone who offered me £5 trillion, for fairly intuitive reasons, and I don't see any reason to think the principle wouldn't generalise.