Jake

Important If True 21: The Real Monkeys


32 minutes ago, Urthman said:

But the paradox is that once you get to the decision phase, the AI has already either put the money in the box or not.  You can't change that by your decision.  You can't change the past by choosing just one box, so you might as well just go ahead and take both boxes.  Anyone who could see the contents of both boxes would tell you to take both boxes.

I mean, but so what though? A decision that's been made in the past unbeknownst to us is not observably different from a decision that's made on the spot right in front of us. I don't see how that should affect our reasoning at all, especially since we have so much to gain from, in essence, colluding with the AI. I have to assume that someone in the past thought they could get one over on the AI and it figured them out; why would I think I could do any better just by being a 5-year-old's idea of sneaky?


So what would you say to me if I could see the contents of both boxes and said, truthfully, "You will get more money if you take both boxes"?


The interesting part of the paradox question to me - and what I originally thought it was trying to get people to consider - was the risky behavior of the box-chooser. I understood it as having a third party watch you over a period of time and then try to guess what you would pick based on its observations of your life.

 

Also, the amount of money is an important detail, because as I was hearing Chris read the setup I had already chosen the non-$1,000 box in my head before I knew what was potentially in it, just because $1,000 isn't life-changing.

 

Lastly, I really enjoyed Chris's anecdote about the board games; I like that this show has space for that kind of story. The other time Chris related a story like that, which I enjoyed just as much, was the one about the woman who got out of an Uber/taxi, smashed a glass bottle to drink the liquid inside it, and then got back in the car and drove away. I still think about the weirdness of that sometimes.

On 7/11/2017 at 2:59 PM, Urthman said:

So what would you say to me if I could see the contents of both boxes and said, truthfully, "You will get more money if you take both boxes"?

Assuming an infallible observer or at least one with the knowledge that you would be there, I would say "begone, foul trickster!" and pick the closed box, because the reason why I would get more money by taking both boxes would change depending on my answer.  I'm not changing the past by doing this, but revealing what the observer already knew (or guessed at) all along.

Quote

Another related problem is the meta-Newcomb problem. The setup of this problem is similar to the original Newcomb problem. However, the twist here is that the predictor may elect to decide whether to fill box B after the player has made a choice, and the player does not know whether box B has already been filled. There is also another predictor - a meta-predictor, who has also predicted correctly every single time in the past - who predicts the following: "Either you will choose both boxes, and the predictor will make its decision after you, or you will choose only box B, and the predictor will already have made its decision."

 

I just feel like this whole paradox is a bit forced; a lot of the interesting elements come from making the situation more convoluted.


The more I think about this problem, the less impressed I am by these scientists, and I was skeptical from the beginning. It seems like they didn't even consider the idea that the AI might factor the way you would react to these last-minute game-theory calculations into its answer, which is something it would have to do in order to make an accurate prediction. The only way for the so-called paradox to occur is if the AI is somehow able to determine what your decision would be without taking into account the possibility that you might decide to play 4D chess at the end.

23 hours ago, Lork said:

Assuming an infallible observer or at least one with the knowledge that you would be there, I would say "begone, foul trickster!" and pick the closed box, because the reason why I would get more money by taking both boxes would change depending on my answer.  I'm not changing the past by doing this, but revealing what the observer already knew (or guessed at) all along.

 

But it's not a trick!  I'm looking at the contents of both boxes, I can see how much money is in each one, and I'm telling you 100% truthfully that you will get more money if you take both boxes.  The AI already made its decision.  The money is either there or it isn't.  I can already see how much is there, and I'm telling you truthfully, there's more total money if you take both boxes.

 

If I see that the million-dollar box is empty, I'm thinking, "Dude, you're not gonna magically make that million dollars appear by taking just the one box.  You'd better take both and at least get $1000."

If I see that the million-dollar box has $1,000,000, I'm thinking, "Dude, you're not gonna magically make that million dollars disappear by taking both boxes.  Why not take them both and get $1000 more?"


You, like the people who created this scenario, seem to have a blind spot for the period after the AI tells you what its prediction is. In order for the prediction to be accurate, the AI would have had to know that you would be there and factor that into its calculations. If I'm the kind of person who would be swayed by your argument, the AI would've sniffed that out beforehand, and upon taking both boxes I'd learn that I'd been hoisted. On the other hand, if we accept that a third party can come in and muck things up unbeknownst to the AI, then it really has no idea what I'm going to pick and all bets are off.

 

The "paradox" relies on the AI being able to make an accurate prediction with incomplete information, which is a huge cop out.


The entire point of this box thing is to illustrate the difference between two decision-making procedures that have been proposed by philosophers.

 

According to evidential decision theory, the right decision to make is the one that is based on the evidence available to you. Because the evidence suggests that the predictor is correct, you want to pick the single opaque box, because that way the correct predictor will have put $1,000,000 in the box.

 

According to causal decision theory, the right decision to make is one that is based on what consequences your choice could have. Because it's too late for your choice to influence how much money is in the boxes, you should pick both boxes, because that way you get more money. You get more money if the opaque box is empty, because you get $1,000 vs. $0. You get more money if the opaque box is full, because you get $1,001,000 vs. $1,000. So either way, you should pick both boxes.
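
Since both recommendations fall out of simple arithmetic over the payoffs, here is a minimal sketch of the two calculations in Python. The 99% predictor accuracy is an assumption made up purely for illustration; the thought experiment only says the predictor has been right every time so far.

```python
# Minimal sketch of how the two decision rules diverge on Newcomb's boxes.
# ASSUMPTION: the 0.99 predictor accuracy is made up for illustration; the setup
# only says the predictor has never been wrong so far.

VISIBLE = 1_000            # money in the transparent box
OPAQUE_PRIZE = 1_000_000   # money in the opaque box if a one-box choice was predicted
ACCURACY = 0.99            # assumed probability the prediction matches your choice

def payoff(choice, opaque_full):
    """Total money received for a given choice and a given state of the opaque box."""
    total = OPAQUE_PRIZE if opaque_full else 0
    if choice == "two":
        total += VISIBLE
    return total

# Causal reasoning: hold the (already fixed) contents constant and compare choices.
for opaque_full in (True, False):
    gain = payoff("two", opaque_full) - payoff("one", opaque_full)
    print(f"opaque box full={opaque_full}: two-boxing is worth {gain:,} more")

# Evidential reasoning: treat your choice as evidence about what was predicted.
def evidential_ev(choice):
    p_full = ACCURACY if choice == "one" else 1 - ACCURACY
    return p_full * payoff(choice, True) + (1 - p_full) * payoff(choice, False)

print(f"EV(one box)   = {evidential_ev('one'):,.0f}")   # 990,000
print(f"EV(two boxes) = {evidential_ev('two'):,.0f}")   # 11,000
```

The causal theorist points at the first two lines of output (two-boxing is worth $1,000 more whatever the contents); the evidential theorist points at the last two.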

 

It was not presented super well in the email sent to the podcast for various reasons. For instance, it's important to make clear to people that your goal is to make the most money possible - this screens off choices like Chris's "I'll pick one box because $1,000 is not enough to make a big deal in my life," which is a sensible choice in real life but which doesn't get at the point of the thought experiment. On the other hand, the thought experiment is really only useful for dividing evidential from causal decision theory, and that's a question basically nobody cares about, so maybe it's fine to get something like Chris's response.

 

You can find more details here if you're interested.


If the goal is to make the most money possible and they wanted actual clarity on that goal, they should have just loaded them both with something close to the same amount, like $1000/$2000. The way this is framed, it's taking a huge and unnecessary risk for a 0.1% gain. Causal decision theorists, good job, you just screwed yourself out of a million dollars with your bad decision-making.

 

9 minutes ago, Problem Machine said:

If the goal is to make the most money possible and they wanted actual clarity on that goal, they should have just loaded them both with something close to the same amount, like $1000/$2000. The way this is framed, it's taking a huge and unnecessary risk for a 0.1% gain. Causal decision theorists, good job, you just screwed yourself out of a million dollars with your bad decision-making.

 

It's a hard thing to calibrate, because the visible amount has to be enough that it's worthy of consideration but not so much that it makes the contents of the opaque box irrelevant. It'd probably be different from person to person in most cases.
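
For what it's worth, the calibration can be made concrete if you score the choices by evidential expected value: the more lopsided the two amounts are, the less accurate the predictor has to be before one-boxing pays. A quick sketch under that reading (the break-even formula just comes from setting the two expected values equal; the dollar pairs are the ones mentioned in this thread):

```python
# Break-even predictor accuracy under the evidential expected-value reading.
# visible = amount in the transparent box, opaque = amount the predictor may hide.
#   EV(one box)   = p * opaque
#   EV(two boxes) = (1 - p) * (opaque + visible) + p * visible
# Setting them equal gives p = 0.5 + visible / (2 * opaque).

def breakeven_accuracy(visible, opaque):
    return 0.5 + visible / (2 * opaque)

for visible, opaque in [(1_000, 1_000_000), (1_000, 2_000)]:
    p = breakeven_accuracy(visible, opaque)
    print(f"${visible:,} visible vs ${opaque:,} hidden: one-boxing needs accuracy above {p:.2%}")
```

With the original amounts the predictor barely has to beat a coin flip (50.05%); with $1000/$2000 it has to be right three times out of four, which is part of why the original framing makes one-boxing feel so obvious.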

2 hours ago, Problem Machine said:

If the goal is to make the most money possible and they wanted actual clarity on that goal, they should have just loaded them both with something close to the same amount, like $1000/$2000. The way this is framed, it's taking a huge and unnecessary risk for a 0.1% gain. Causal decision theorists, good job, you just screwed yourself out of a million dollars with your bad decision-making.

 

The problem is that causal decision theorists don't screw themselves out of anything with their bad decision-making, if by "bad decision-making" you mean picking both boxes. The only thing that can lose you the $1,000,000 is the predictor predicting that you'll pick two boxes. Your actual choice is irrelevant: it's your actions prior to the prediction that matter.

 

If your point is just that being a causal decision theorist while the AI evaluates you is bad, that's obviously true. Nobody disagrees about that. The causal decision theorist does not say "you should be a causal decision theorist during the time the AI is observing your behavior." That would be goofy! If you think one side of this debate is doing something goofy, you're missing the point. Neither side is goofy.

 

So, the non-goofy thing that the causal decision theorist is saying is that, once it's time to make the choice, you might as well pick both boxes. By that point, the prediction has already been made, and you're guaranteed to get more money if you pick both boxes. You get more money if the opaque box is empty: you get $1,000 vs $0. You get more money if the opaque box is full: you get $1,001,000 vs. $1,000,000. So, either way, you get more money if you pick both boxes. So you should pick both boxes!

 

You're not taking any "risk" by picking both boxes, according to the causal decision theorist. It's too late to risk anything by picking both boxes. The AI has already put the money in the box. Nobody can take the money out of the box. It's there for you to take. If there's no money in the box, the only risk is taking that empty box rather than that empty box plus the $1k box. So of course you shouldn't take that risk.


You are taking the risk by deciding to pick both boxes and choosing to be the jackass who takes both boxes. This is a 'paradox' created by picking apart the timeline in a highly specific way rather than creating an actual strategy. The best play is to choose to take one box. Maybe in the future it would be better to take both, but making the decision now you choose to take one, and you commit to that decision because to do otherwise would be to fuck with the prediction that is making it the best choice to choose one box.

 

See, you're coming at this from the perspective that if people have spent a long time arguing about it it's an intractable problem. I'm coming at it from the perspective that if people have spent a long time arguing about it it's probably a stupid problem. Most 'paradoxes' are just tricks of semantics and sloppy thinking.

9 minutes ago, Problem Machine said:

See, you're coming at this from the perspective that if people have spent a long time arguing about it it's an intractable problem. I'm coming at it from the perspective that if people have spent a long time arguing about it it's probably a stupid problem. Most 'paradoxes' are just tricks of semantics and sloppy thinking.

 

That is... a hell of a statement, even if we just confine it to philosophical abstracts. Do you have any more to this, or is it just an intuitively felt thing for you?

2 minutes ago, Problem Machine said:

You are taking the risk by deciding to pick both boxes and choosing to be the jackass who takes both boxes.

I already demonstrated why it's not a risk. Why is picking both boxes a "jackass" move? 

 

2 minutes ago, Problem Machine said:

This is a 'paradox' created by picking apart the timeline in a highly specific way rather than creating an actual strategy.

It's not a 'paradox' at all. It's just an illustration of the different outcomes generated by different decision procedures. There is no "picking apart the timeline in a highly specific way rather than creating an actual strategy." Both decision procedures have an actual strategy - in fact, they have very detailed strategies that you can read up on here if you're interested. If you don't want to read up on it, please just trust me when I say there are definitely "actual strategies" out there that pick two boxes, and for good reason.

 

4 minutes ago, Problem Machine said:

The best play is to choose to take one box.

According to evidential decision theory, yes. According to causal decision theory, the best play is to take two boxes. This is the entire point of the thought experiment: it shows us that the two decision theories come apart in certain specific cases, which is interesting.

 

5 minutes ago, Problem Machine said:

Maybe in the future it would be better to take both, but making the decision now you choose to take one, and you commit to that decision because to do otherwise would be to fuck with the prediction that is making it the best choice to choose one box.

It's impossible to "fuck with the prediction." The prediction happened in the past: it is now immune to fuckery. No matter how badly you wish to fuck with it, it's immune to being fucked with, the same way you can't influence anything else that happened in the past. This is how causation works: stuff in the past is immune to being changed by stuff in the future. This is why the 2-boxing strategy is recommended by what's called "causal" decision theory. That decision theory is focused on what effects you could possibly cause, right here, right now. Because you can't possibly change the amount of money in the boxes, causal decision theory recommends picking both boxes, because no matter what's in them, that strategy will give you more money.

 

7 minutes ago, Problem Machine said:

See, you're coming at this from the perspective that if people have spent a long time arguing about it it's an intractable problem.

It's not an intractable problem. It's not even a problem! It's simply a thought experiment designed to illustrate the differences between evidential and causal decision theories. Your mistake is thinking about it as a paradox or a problem or whatever. Nobody's up at night sweating whether to pick one or two boxes. The choice is obvious. What's interesting is that the obvious choice differs between these two theories. Of course, there might be a nearby intractable problem: which decision theory should I pick? That's not what we're talking about here, though. This problem is quite tractable. Simply pick your favorite decision theory and it'll tell you what to do.

 

8 minutes ago, Problem Machine said:

I'm coming at it from the perspective that if people have spent a long time arguing about it it's probably a stupid problem. Most 'paradoxes' are just tricks of semantics and sloppy thinking.

Again, this is not a "paradox," and to the extent there is any sloppy thinking going on, it is entirely in your corner. Believe me when I say that I and other philosophers don't waste our time on stupid problems, tricks of semantics, or sloppy thinking. We're not all a bunch of fucking yahoos.


ez

 

Tell the clairvoyant to close his eyes. When he does, punch him in the face. If he reacts before your fist lands, take box B. Otherwise, take both.

Just now, Reyturner said:

ez

 

Tell the clairvoyant to close his eyes. When he does, punch him in the face. If he reacts before your fist lands, take box B. Otherwise, take both.

I know you're joking, but the point of this problem is that the causal decision theorist would say that even if the clairvoyant does react, you should still take both boxes. For whatever reason, people sometimes get angry when presented with this sort of reasoning and insist that two boxers are a bunch of idiots, etc. But they aren't, and it can be interesting to think about why it would make sense to pick both boxes even if you're given good evidence that this person is clairvoyant. If you can figure that out, you can see what the causal decision theorists are talking about.

43 minutes ago, Gormongous said:

 

That is... a hell of a statement, even if we just confine it to philosophical abstracts. Do you have any more to this, or is it just an intuitively felt thing for you?

Mostly based on observation, but also based on what a paradox is: The idea of a paradox is that something is contradictory, but reality doesn't contradict itself; only the symbols we use to describe reality can contradict. The problems presented by paradoxes are entirely symbolic, and thus can be resolved by rephrasing the problem, or by discovering there was never an underlying problem at all, just a weird semantic trick.

 

Quote

It's not a 'paradox' at all. It's just an illustration of the different outcomes generated by different decision procedures. There is no "picking apart the timeline in a highly specific way rather than creating an actual strategy." Both decision procedures have an actual strategy - in fact, they have very detailed strategies that you can read up on here if you're interested. If you don't want to read up on it, please just trust me when I say there are definitely "actual strategies" out there that pick two boxes, and for good reason.

This is what I mean when I say that this distinction comes down to weird time-picking: You should always, from our perspective now outside the problem, choose to take the opaque box. However, from the perspective of being inside the room at the moment-of, you should always choose to take both boxes -- except that, by being the person who in the moment would take both boxes, you've just again screwed yourself out of a million. Oops.

So the solution, I suppose, is to always pick the opaque box and then end up unintentionally somehow picking up both. Does the AI account for clumsiness??? Does the AI account for you having been introduced to the problem already on a podcast/forum?

 

 

Quote

It's impossible to "fuck with the prediction." The prediction happened in the past: it is now immune to fuckery. No matter how badly you wish to fuck with it, it's immune to being fucked with, the same way you can't influence anything else that happened in the past. This is how causation work: stuff in the past is immune to being changed by stuff in the future. This is why the 2-boxing strategy is recommended by what's called "causal" decision theory. That decision theory is focused on what effects you could possible cause, right here, right now. Because you can't possibly change the amount of money in the boxes, causal decision theory recommends picking both boxes, because no matter what's in them, that strategy will give you more money.

Except it's not impossible to fuck with the prediction when you assume that the machine has already simulated your exact decision-making process in the past. In that case, it is to your benefit to be the person who will come to the decision to take just the one box. Causality works in reverse as well: Effects have causes.

 

Quote

 

It's not an intractable problem. It's not even a problem! It's simply a thought experiment designed to illustrate the differences between evidential and causal decision theories. Your mistake is thinking about it as a paradox or a problem or whatever. Nobody's up at night sweating whether to pick one or two boxes. The choice is obvious. What's interesting is that the obvious choice differs between these two theories. Of course, there might be a nearby intractable problem: which decision theory should I pick? That's not what we're talking about here, though. This problem is quite tractable. Simply pick your favorite decision theory and it'll tell you what to do.

 

Again, this is not a "paradox," and to the extent there is any sloppy thinking going on, it is entirely in your corner. Believe me when I say that I and other philosophers don't waste our time on stupid problems, tricks of semantics, or sloppy thinking. We're not all a bunch of fucking yahoos.

My argument is that one of the decision theories is based on a belief in the supernatural power of free will to escape causality, and is thus actually completely incorrect. And literally everyone wastes their time on stupid problems, semantics, and sloppy thinking, so let's not preemptively eliminate that as a possibility.

 

Also re: all this not a paradox stuff, isn't this called "Newcomb's Paradox"???

20 minutes ago, Problem Machine said:

So the solution, I suppose, is to always pick the opaque box and then end up unintentionally somehow picking up both. Does the AI account for clumsiness??? Does the AI account for you having been introduced to the problem already on a podcast/forum?

The solution is to pick the opaque box, if you're an evidential decision theorist, or to pick both boxes, if you're a causal decision theorist. The AI accounts for whatever it accounts for - it's not particularly important how it makes its decision except that however it does it, it's very good at doing it, and also the decision in this particular case (for your boxes) has already been made.

 

20 minutes ago, Problem Machine said:

Except it's not impossible to fuck with the prediction when you assume that the machine has already simulated your exact decision-making process in the past. In that case, it is to your benefit to be the person who will come to the decision to take just the one box.

We are not arguing over what kind of person to be while the AI makes its prediction. That is not an interesting question. Everyone has the same answer to that question. The answer to that question is to be a one-boxer. But that is not the question we are asking. The question we are asking is how many boxes to pick, once the prediction has already been made. That question has two legitimate answers: one box or two boxes. Evidential decision theorists recommend one box. Causal decision theorists recommend two boxes.
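
If it helps, the two questions can literally be computed separately. Here's a toy sketch of that separation; the simplifying assumption (mine, not part of the original setup) is that the predictor just reads your disposition perfectly:

```python
# Toy separation of the two questions:
#   1) which disposition pays off while the predictor is studying you?
#   2) which act pays off once the boxes are already set?
# ASSUMPTION (mine): the predictor reads your disposition perfectly.

VISIBLE, HIDDEN = 1_000, 1_000_000

def fill_opaque_box(predicted_choice):
    """The predictor fills the opaque box before you choose."""
    return HIDDEN if predicted_choice == "one" else 0

def winnings(actual_choice, opaque_contents):
    return opaque_contents + (VISIBLE if actual_choice == "two" else 0)

# Question 1: everyone agrees the one-boxing disposition wins.
for disposition in ("one", "two"):
    total = winnings(disposition, fill_opaque_box(disposition))
    print(f"{disposition}-boxer by disposition walks away with {total:,}")

# Question 2: once the contents are fixed, the two-box act adds the same $1,000 either way.
for contents in (0, HIDDEN):
    extra = winnings("two", contents) - winnings("one", contents)
    print(f"with {contents:,} already in the opaque box, two-boxing adds {extra:,}")
```

The disagreement in this thread is really about which of those two computations the word "choice" should refer to.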

 

20 minutes ago, Problem Machine said:

Causality works in reverse as well: Effects have causes.

You misunderstood my point. My point is not that effects do not have causes. My point is that causes cannot cause effects in the past. Causes can only cause effects in the future.

 

20 minutes ago, Problem Machine said:

My argument is that one of the decision theories is based on a belief in the supernatural power of free will to escape causality, and is thus actually completely incorrect.

Neither decision theory relies on any such belief. Your mistake is believing that causal decision theory tries to "escape causality." It does not try to escape causality. The causal decision theorist accepts that it's too late to escape causality. The box has already been determined. It either already has the money or already doesn't. It's too late to fix that. Your only choice here is how many boxes to pick: one or two. And two is the better choice. Maybe it's literally impossible for there to be money in the box if you've been a causal decision theorist in the past. That's fine. Maybe causal decision theorists are stuck always getting $1,000. They're okay with that. Causal decision theorists don't say that their view will always net them $1,001,000. Maybe their view will always net them $1,000. That's okay. It's still the right decision to pick two boxes, according to causal decision theory.

 

20 minutes ago, Problem Machine said:

And literally everyone wastes their time on stupid problems, semantics, and sloppy thinking, so let's not preemptively eliminate that as a possibility.

I'm not preemptively eliminating this as a possibility; I'm speaking from experience, having spent many hours of my life studying this question in quite a bit of detail. You, meanwhile, don't even understand it yet, let alone have a particularly interesting take on the problem. I think it is a little presumptuous of you to declare that an entire field of people (a field that includes me), people who are not idiots (I don't think I'm an idiot), has wasted its time on stupid problems, mere semantics, and sloppy thinking. My claim is not that it's impossible that this has ever happened, but merely that it hasn't happened in this case, and you're going to have to do better than presenting mistaken understandings of the problem (and bad conclusions supported by these mistaken understandings) in order to convince me otherwise about this case.

20 minutes ago, Problem Machine said:

Mostly based on observation, but also based on what a paradox is: The idea of a paradox is that something is contradictory, but reality doesn't contradict itself; only the symbols we use to describe reality can contradict. The problems presented by paradoxes are entirely symbolic, and thus can be resolved by rephrasing the problem, or by discovering there was never an underlying problem at all, just a weird semantic trick.

 

I suspect that "reality doesn't contradict itself" is an a priori proposition borne of the assumption that reality is entirely defined by comprehensible and ineluctable rules. If your evidence for all paradoxes being semantics is that reality appears to be internally consistent, and your evidence for why reality appears to be internally consistent is that all paradoxes are semantics, I don't really know that I can agree with you, not with counterintuitive things like quantum theory in play.

23 minutes ago, Problem Machine said:

Also re: all this not a paradox stuff, isn't this called "Newcomb's Paradox"???

Yes, but that is a misnomer. Also, it's not always called Newcomb's Paradox. It is more often called Newcomb's Problem.


I'm having trouble with how we can agree that causal decision making will definitely cost you a million dollars but you can still argue that it's not a bad way to approach the problem. This sounds like, rather than a dilemma created to demonstrate two separate but equally viable systems, a dilemma contrived to make one of those systems look foolish.

 

13 minutes ago, Gormongous said:

 

I suspect that "reality doesn't contradict itself" is an a priori proposition borne of the assumption that reality is entirely defined by comprehensible and ineluctable rules. If your evidence for all paradoxes being semantics is that reality appears to be internally consistent, and your evidence for why reality appears to be internally consistent is that all paradoxes are semantics, I don't really know that I can agree with you, not with counterintuitive things like quantum theory in play.

I didn't assume any such thing. What would reality contradicting itself even look like? I'm not arguing that there's some inherent logic of reality, but that whatever reality is, it is what it is. Whether it's logical or not is just a matter of whether what reality is is something we can understand -- and in any case, that's a flaw in our logic, in the symbolic systems describing the underlying reality, more than anything else.

2 minutes ago, Problem Machine said:

I'm having trouble with how we can agree that causal decision making will definitely cost you a million dollars but you can still argue that it's not a bad way to approach the problem. This sounds like, rather than a dilemma created to demonstrate two separate but equally viable systems, a dilemma contrived to make one of those systems look foolish.

Yes, causal decision theory does look foolish in this case. That's one interesting feature of the case, because normally causal decision theory looks fine. However, there are cases that make evidential decision theory look foolish, and which make causal decision theory look much better, so overall it's not a huge deal. In effect, everyone agrees causal decision theory is a bad way to approach the problem if by that you mean this is the only decision you will ever have to make in your life, and if this is the only decision you'll ever have to make in your life, obviously you should be an evidential decision theorist.

 

In reality, though, we make lots and lots of decisions in our lives, and in fact we never come up against the Newcomb's Box decision, so causal decision theory is not in much hot water merely because it fails to handle one (very weird, very specific) case very well. Mostly the interesting thing about the Newcomb's Box problem is that causal decision theory and evidential decision theory give different answers. Prior to this, one might have thought they always gave the same answers, and in fact one might not have realized that there are two ways of making decisions in the first place. So, Newcomb's Box is helpful because it helps us think about the ways that we make decisions.

 

And yes, one small incidental feature is that it makes causal decision theory look slightly bad, because it can't handle this case well, but then again, nobody will ever face this case in their lives, so it's not a huge deal if causal decision theory can't handle it very well. It's like saying nobody should ever go to culinary school at the most prestigious chef university because they don't teach you how to cook pickled yak testicles, and one day someone might threaten to kill you unless you perfectly cook pickled yak testicles. It's like, well, okay, that's true, if that's the criterion then I shouldn't go to the fancy culinary school. However, this is super unlikely, so probably I'm going to make up my mind based on other stuff, like whether the culinary school is likely to get me a job.

 

Similarly, if someday you're going to face Newcomb's Box, and also you won't have to make any other choices, then probably you should just go ahead and be an evidential decision theorist. But that's not what our lives look like, so nobody is ever really convinced to abandon causal decision theory just on the basis of its failing to handle this super weird, implausible case very well.


Okay, but we agree that this specific case highly incentivizes that one particular branch of decision-making, which means it's not very good at highlighting their relative strengths. And, sure, creating a problem specifically to highlight a situation in which they generate different outcomes is interesting, but every presentation of the problem has been as a paradox or as a dilemma, not as an illustration of the consequences of different decision-making processes. My argument is not that evidential decision making is always better, just that clearly the correct decision in this case is to take the opaque box. Maybe that's different in other cases, in which case that's fine. I'd be interested in seeing those cases though, if you have any examples handy for me to search.

