Important If True 21: The Real Monkeys

24 minutes ago, Problem Machine said:

Okay, but we agree that this specific case highly incentivizes that one particular branch of decision-making, which means it's not very good at highlighting their relative strengths.

I am not sure where I suggested that this specific case is good at highlighting the relative strengths of these two decision theories. Where exactly did you get the notion that this was the goal of the case? As I've said a few times in this thread already, the point of the case is to show how the two theories differ in their answers.

 

24 minutes ago, Problem Machine said:

And, sure, creating a problem specifically to highlight a situation in which they generate different outcomes is interesting, but every presentation of the problem has been as a paradox or as a dilemma, not as an illustration of the consequences of different decision-making processes.

This is one of the reasons why, in my very first post in this thread, I said the presentation of this problem in the podcast was not very good, and that the right way to think about the problem is to think of it as highlighting the difference between the two decision theories. I'm not trying to be a jerk here, but it seems like you're not even reading my posts. I've been at pains to explain that this is not a paradox or a dilemma; it's just a way to distinguish two decision theories. You're happy to spend your time accusing philosophers of wasting our time on bullshit and semantics, but when a philosopher tries to explain to you why we're not all wasting our time, you ignore everything he says and substitute your own views about what's going on. Do you see how this might strike me as a little objectionable and frustrating? You're accusing me of wasting my life on worthless bullshit, and when I try to tell you I'm not, you ignore what I say and continue to claim that my entire field is full of clueless bozos or cranks or both.

 

24 minutes ago, Problem Machine said:

My argument is not that evidential decision making is always better, just that clearly the correct decision in this case is to take the opaque box. 

No, that's the correct decision according to evidential decision theory. The correct decision according to causal decision theory is to take both boxes. Now, you might think that really the question isn't "how many boxes should I pick" but "which decision theory should I pick." That's fine. You're more than welcome to ask that question. The question we're talking about right now is not that question. The question we're talking about right now is how many boxes to pick, and you can only answer that question once you pick a decision theory.

If you pick causal decision theory, the right answer is two boxes. There is no disputing this. Everyone agrees that, given causal decision theory, you should two-box. Everyone also agrees that, given evidential decision theory, you should one-box. What's interesting is that the two theories diverge, and also that there's no general agreement about which decision theory to pick.

If Newcomb's Box were the only decision people ever faced, it would be clear which decision theory to pick. But we face many decisions in our lives, none of which are anything like Newcomb's Box. On the basis of these other decisions, many philosophers think that we should be causal decision theorists. Because of this, these philosophers think we should pick both boxes if faced with the Newcomb's Box problem.
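If it helps to see the two calculations side by side, here's a rough sketch of the expected-value math each theory runs, written in Python. The dollar amounts ($1,000 in the transparent box, $1,000,000 in the opaque box) and the 99% predictor accuracy are my own fill-ins for the standard presentation, not numbers from this thread.

# Sketch of the two expected-value calculations, with assumed payoffs.
PREDICTOR_ACCURACY = 0.99  # assumed reliability; the setup only says "very good"

def evidential_value(action):
    # EDT: treat your own choice as evidence about what was predicted, so the
    # probability that the opaque box is full depends on which action you take.
    if action == "one-box":
        return PREDICTOR_ACCURACY * 1_000_000
    return PREDICTOR_ACCURACY * 1_000 + (1 - PREDICTOR_ACCURACY) * 1_001_000

def causal_value(action, prob_box_already_full):
    # CDT: the contents were fixed before you chose, so hold that probability
    # constant no matter which action you are evaluating.
    expected_opaque = prob_box_already_full * 1_000_000
    return expected_opaque + (1_000 if action == "two-box" else 0)

print(evidential_value("one-box"), evidential_value("two-box"))
# EDT: 990000.0 vs 11000.0, so one-boxing wins

for p in (0.0, 0.5, 1.0):
    print(p, causal_value("one-box", p), causal_value("two-box", p))
# CDT: for any fixed p, two-boxing is exactly $1,000 better

The toy numbers are only there to show that the disagreement isn't about arithmetic: the two theories plug different probabilities into the same formula.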

 

24 minutes ago, Problem Machine said:

 Maybe that's different in other cases, in which case that's fine. I'd be interested in seeing those cases though, if you have any examples handy for me to search.

One of the classic cases used to argue for causal as opposed to evidential decision theory is the case described here about smoking lesions.

4 minutes ago, TychoCelchuuu said:

I am not sure where I suggested that this specific case is good at highlighting the relative strengths of these two decision theories. Where exactly did you get the notion that this was the goal of the case? As I've said a few times in this thread already, the point of the case is to show how the two theories differ in their answers.

 

This is one of the reasons why, in my very first post in this thread, I said the presentation of this problem in the podcast was not very good, and that the right way to think about the problem is to think of it as highlighting the difference between the two decision theories.

I'm doing my best to understand your approach, but it's not helped by me not really being clear on when you're agreeing or disagreeing with me. My contention is just that this is an obvious choice, based on one approach consistently yielding far better results, and that doesn't seem to be something you disagree with me on. What you do seem to disagree with me on is the original intent of the problem posed, to which I say, "fine, but that's not how it was originally presented to me, so you can see why I'd find it silly in its original context." I now acknowledge the utility of it as a tool to explain the difference in approaches to problem solving, but I'd suggest that this utility is somewhat undercut by it presenting a circumstance where one approach is clearly superior to the other, so it could probably be formulated better if that's the intent. I don't feel that I'm ignoring the things you say, but sometimes nuances fall through the cracks, so perhaps I've missed something in my characterization of our exchange.

 

Now, having had a chance to do a bit of reading re: evidential decision-making vs causal decision-making, my stance has shifted. I don't really understand the point of evidential decision-making except as a stand-in for when there's a lack of visible causal evidence, but I also don't feel this decision is a good illustration of causal vs evidential decision-making, since saying it is ignores the causal link between deciding to take the second box, being the sort of person who would take the second box, and that box being full of money. You're saying that in the moment there's only one correct decision for the causal decision-maker to make, but there's no moment without the moment before, no effect without cause, and in this case the cause of the box being full of money is making the decision to take that box.

 

Think of it this way. Say I'M the predictor: I can choose now to either fill the boxes with a million dollars or not. I know that afterwards my memory is going to be wiped and I will be presented with this problem (not being told that I was the predictor). If I predict wrong then they kill me or whatever, something that ensures that I try as hard as possible to predict correctly. Under those circumstances, I'm going to do my best to essentially collude with myself: I'm going to put a million dollars in the box, knowing that my logical process would lead me to take that box. There is a clear causal relationship between my decisions now/later and what's going to be in the box. A machine that could mechanistically predict my actions would be, if anything, more reliable than me trying to predict my own actions -- saying that acknowledging that somehow puts us beyond the realm of causation is difficult for me to swallow. Both the predictor and the chooser are operating as different teeth on the same gear, moving at a time offset but in the same direction.

11 minutes ago, Problem Machine said:

Now, having had a chance to do a bit of reading re: evidential decision-making vs causal decision-making, my stance has shifted. I don't really understand the point of evidential decision-making except as a stand-in for when there's a lack of visible causal evidence, but I also don't feel this decision is a good illustration of causal vs evidential decision-making, since saying it is ignores the causal link between deciding to take the second box, being the sort of person who would take the second box, and that box being full of money. You're saying that in the moment there's only one correct decision for the causal decision-maker to make, but there's no moment without the moment before, no effect without cause, and in this case the cause of the box being full of money is making the decision to take that box.

The cause of the box being full of money is not making the decision to take the box. That's literally impossible. It would require something now (the decision to take the box) to cause something in the past (the AI predicting you will take the box). Causation does not work backwards in time like that. Causation can only work forwards in time. Things right now can only cause things subsequent to now.

 

11 minutes ago, Problem Machine said:

Think of it this way. Say I'M the predictor: I can choose now to either fill the boxes with a million dollars or not. I know that afterwards my memory is going to be wiped and I will be presented with this problem (not being told that I was the predictor). If I predict wrong then they kill me or whatever, something that ensures that I try as hard as possible to predict correctly. Under those circumstances, I'm going to do my best to essentially collude with myself: I'm going to put a million dollars in the box, knowing that my logical process would lead me to take that box.

But that's only because you're a one-boxer. If you were a two-boxer, you'd follow the same line of reasoning except you wouldn't put the money in the box. This has nothing to do with one-boxing vs. two-boxing. It just has to do with making a good prediction. And of course the prediction depends on what kind of person you are making a prediction about.

 

11 minutes ago, Problem Machine said:

 There is a clear causal relationship between my decisions now/later and what's going to be in the box. 

This is half correct. There is a clear causal relationship between your decision now and what's going to be in the box. You, the predictor, get to pick what goes in the box. There is no causal relationship between your decision later and what's going to be in the box, unless by "your decision later" you mean "your prediction now of your decision later." Your decision later can't have any causal effect on what's happening now, because your decision later is in the future, and the future cannot causally impact the present. That would require time travel.

 

11 minutes ago, Problem Machine said:

A machine that could mechanistically predict my actions would be, if anything, more reliable than me trying to predict my own actions -- saying that acknowledging that somehow puts us beyond the realm of causation is difficult for me to swallow.

Acknowledging that the machine is a reliable predictor does not require saying that the machine is beyond the realm of causation. The only time we say the machine is beyond the realm of causation is if we say it is not predicting the future on the basis of information it has right now, but rather being influenced by the future by somehow getting information from the future right now. That second thing is impossible. We cannot get information from the future right now. That would require time travel. Why is it difficult to swallow the idea that getting information from the future requires time travel? That's how time travel works: it takes things from the future and brings them backwards. Without time travel, things cannot go from the future to the past. They can only go from the past to the future.

 


Argh now I'm frustrated because I feel like you're not acknowledging what I'M saying.

The idea is that no choice is made in isolation; it's always made within the context of the mind's framework, which is mechanistically deterministic. Making the decision to take the box now is a result of the mechanisms of the mind, which also determine what's in the boxes. You can't just choose to take a different thing than you've chosen; that would require spontaneously changing the mechanical composition of your brain, in other words supernatural interference. Therefore, the causal effect you have on the contents of the box is in having the mind-layout that makes you make the correct choice. Sure, what box you choose doesn't change the past, but what box you will choose already has. Believing that you can just jump the tracks is supernatural nonsense.

Getting information from the future doesn't require time travel in a mechanically deterministic universe; it only requires complete information about the current moment and the capacity to make complete mechanical predictions based upon that -- basically just simulating the universe and time-stepping it forwards a few hours. We humans do a simple and shitty version of this all the time, but in this case I'm positing a predictor who can do the real thing.
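A toy version of what I'm picturing, in Python (the dispositions and payoffs are made up for illustration): the chooser's decision is a pure function of a fixed mental state, and the predictor just runs that same function ahead of time, so nothing has to flow backwards from the future.

# Toy model: the choice is a deterministic function of the agent's "brain state",
# so a predictor that runs the same function earlier never errs.

def choose(disposition):
    # The agent's decision procedure: same input, same output, every time.
    return "one-box" if disposition == "one-boxer" else "two-box"

def run_newcomb(disposition):
    prediction = choose(disposition)   # the predictor simulates the agent first
    opaque = 1_000_000 if prediction == "one-box" else 0
    choice = choose(disposition)       # the "real" choice, made afterwards
    return opaque + (1_000 if choice == "two-box" else 0)

print(run_newcomb("one-boxer"))   # 1000000
print(run_newcomb("two-boxer"))   # 1000

On this picture the one-boxer walks away richer not because the choice reached back in time, but because the predictor and the chooser are running the same deterministic procedure.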

9 minutes ago, Problem Machine said:

The idea is that no choice is made in isolation; it's always made within the context of the mind's framework, which is mechanistically deterministic. Making the decision to take the box now is a result of the mechanisms of the mind, which also determine what's in the boxes. You can't just choose to take a different thing than you've chosen; that would require spontaneously changing the mechanical composition of your brain, in other words supernatural interference. Therefore, the causal effect you have on the contents of the box is in having the mind-layout that makes you make the correct choice. Sure, what box you choose doesn't change the past, but what box you will choose already has. Believing that you can just jump the tracks is supernatural nonsense.

Nobody has ever claimed you can jump the tracks, though. Your mistake is in thinking that it's possible for the predictor to err. Perhaps the predictor cannot err. Perhaps the predictor is 100% right because it knows physical mechanics well enough to predict your choice with 100% accuracy. That's fine. That doesn't disagree with a single thing I've said. (It's not really part of the setup of the problem because it's irrelevant either way. We can imagine the predictor has 100% accuracy because of physics, or we can imagine the predictor doesn't, and they're right because they're very good at psychology, or whatever. Again, it doesn't matter.)

 

My point is merely that, according to a causal decision theorist, the right choice is two boxes. Maybe it's impossible for you to make that choice, and the predictor knows it's impossible, so they put the money in the box, because they know you'll one-box. Still, you're making the wrong decision. You should two-box (if you could). You can't, of course. You're stuck making the wrong decision. As you put it, you'd have to spontaneously change the mechanical composition of your brain, which is impossible. But just because you're forced to make a decision, this is no reason to think it's the right decision. According to the causal decision theorist, it's the wrong decision. (According to the evidential decision theorist, it's the right decision! So that's good news.)

 

9 minutes ago, Problem Machine said:

Getting information from the future doesn't require time travel in a mechanically deterministic universe; it only requires complete information about the current moment and the capacity to make complete mechanical predictions based upon that -- basically just simulating the universe and time-stepping it forwards a few hours. We humans do a simple and shitty version of this all the time, but in this case I'm positing a predictor who can do the real thing.

That's not literally getting information from the future. That's getting information from the present and the past and using it to predict the future. The way to see that this is not literally getting information from the future is to imagine that the laws of physics change ten minutes from now, after you've made your prediction. Your prediction can't account for this: there's no way to know the laws of physics were going to give out! But if you were literally getting information from the future, you'd know the laws of physics were about to change.


Eh, is 100% reliable prediction of the future different from information directly from the future? I don't see any difference there.

5 minutes ago, TychoCelchuuu said:

Nobody has ever claimed you can jump the tracks, though. Your mistake is in thinking that it's possible for the predictor to err. Perhaps the predictor cannot err. Perhaps the predictor is 100% right because it knows physical mechanics well enough to predict your choice with 100% accuracy. That's fine. That doesn't disagree with a single thing I've said. (It's not really part of the setup of the problem because it's irrelevant either way. We can imagine the predictor has 100% accuracy because of physics, or we can imagine the predictor doesn't, and they're right because they're very good at psychology, or whatever. Again, it doesn't matter.)

I don't understand; my entire point was that it isn't possible for the predictor to err. I mean, everything else you say tracks; I just found that confusing.

 

5 minutes ago, TychoCelchuuu said:

My point is merely that, according to a causal decision theorist, the right choice is two boxes. Maybe it's impossible for you to make that choice, and the predictor knows it's impossible, so they put the money in the box, because they know you'll one-box. Still, you're making the wrong decision. You should two-box (if you could). You can't, of course. You're stuck making the wrong decision. As you put it, you'd have to spontaneously change the mechanical composition of your brain, which is impossible. But just because you're forced to make a decision, this is no reason to think it's the right decision. According to the causal decision theorist, it's the wrong decision. (According to the evidential decision theorist, it's the right decision! So that's good news.)

My perspective is that 'decisions' aren't something that happens in just one moment; they're something that extends over time. Maybe at the exact moment your decision is enacted it's the wrong one, but on a timescale that includes the predictor's read on your decision it's the correct one -- the result is just being skewed by artificially cropping the time window down to the 'climax' of your decision.

15 minutes ago, Problem Machine said:

Eh, is 100% reliable prediction of the future different from information directly from the future? I don't see any difference there.

It's the difference between being good at math and physics vs. being able to travel through time. One is perfectly understandable and the other contravenes the laws of physics.

 

15 minutes ago, Problem Machine said:

I don't understand; my entire point was that it isn't possible for the predictor to err. I mean, everything else you say tracks; I just found that confusing.

I was merely pointing out that "it's impossible for the predictor to err" is something you're bringing to the table. That's not part of the original setup. The original setup is just that the predictor is very very good and so far has not made any mistakes.

 

16 minutes ago, Problem Machine said:

My perspective is that 'decisions' aren't something that happens in just one moment; they're something that extends over time. Maybe at the exact moment your decision is enacted it's the wrong one, but on a timescale that includes the predictor's read on your decision it's the correct one -- the result is just being skewed by artificially cropping the time window down to the 'climax' of your decision.

I think this is an indefensible view of what a "decision" consists of, and if you were forced to work out all the implications of your view you'd quickly find yourself unable to defend it. But, showing this is a lot of work, because it requires providing an entire theory of what it is to decide, which is very difficult. Suffice it to say that I think on your view, it turns out people either never make any decisions, or they only ever make one decision.

13 minutes ago, TychoCelchuuu said:

It's the difference between being good at math and physics vs. being able to travel through time. One is perfectly understandable and the other contravenes the laws of physics.

Uh huh, but is there any difference in the information obtained?

 

13 minutes ago, TychoCelchuuu said:

I think this is an indefensible view of what a "decision" consists of, and if you were forced to work out all the implications of your view you'd quickly find yourself unable to defend it. But, showing this is a lot of work, because it requires providing an entire theory of what it is to decide, which is very difficult. Suffice it to say that I think on your view, it turns out people either never make any decisions, or they only ever make one decision.

I think we just use 'decision' as a descriptive term for an inflection point in human behavior, but these are just arbitrary descriptions that we use out of convenience. So, depending on what magnitude or shape you require to describe something as a decision, we can describe someone as either making a million decisions a second or zero decisions over their entire lifetime. However, what drives those decisions is always deeply rooted in a person's history, and is inseparable from that history.

4 minutes ago, Problem Machine said:

Uh huh, but is there any difference in the information obtained?

Only if something changes in between the prediction and the future, which as far as we know is not going to happen (the laws of physics don't seem to be going anywhere), but then again, who knows?

 

5 minutes ago, Problem Machine said:

I think we just use 'decision' as a descriptive term for an inflection point in human behavior, but these are just arbitrary descriptions that we use out of convenience. So, depending on what magnitude or shape you require to describe something as a decision, we can describe someone as either making a million decisions a second or zero decisions over their entire lifetime. However, what drives those decisions is always deeply rooted in a person's history, and is inseparable from that history.

If "decision" is this loosey goosey then it's not clear where you get off saying anything at all about the right "decision" to make when presented with this choice, since it's all effectively a load of bullshit that depends on whatever arbitrary description you pick to use for the sake of convenience.


True! That's why I say that it's the right decision over a long timescale and the wrong decision over a small timescale. My argument is that it's only wrong when viewed in a very specific way, whereas in the vast majority of circumstances it's right.

How would you define a decision? The exact moment when you pick up the box? When you start moving to pick up the box? When you look at the box? When you think to yourself "I will take this box"? Language is by necessity imprecise, but we can at least be precise about its imprecision, and acknowledge that we often mean many different points in time when we talk about when someone makes a decision.

33 minutes ago, Problem Machine said:

How would you define a decision? The exact moment when you pick up the box? When you start moving to pick up the box? When you look at the box? When you think to yourself "I will take this box"? Language is by necessity imprecise, but we can at least be precise about its imprecision, and acknowledge that we often mean many different points in time when we talk about when someone makes a decision.

As long as the decision, whatever exactly it is, occurs after the money is in the box, we don't have to get more specific. Something that vague is fine, because once it's too late to change what's in the boxes, we get the division between causal and evidential decision theory.


I have tasked an AI with observing this thread. Let me check on it...

 

 

oh... oh no

57 minutes ago, Jake said:

I have tasked an AI with observing this thread. Let me check on it...

 

oh... oh no

 

Jake, I'm so sorry. I just wanted jokes about pseudo-omniscient gameshow robots and for Nick to be disagreeable about something. I didn't mean to unleash a minuscule amount of darkness and frustration on your life. I'd much prefer tiny flecks of joy.

 

Also, TychoCelchuuu, you've caught me dead to rights on misrepresenting the problem for the sake of podcast hilarity. I'm a bioethicist, so my familiarity is all second-hand anyway. My roommate did some conference presentations on the topic, and I just reviewed drafts and followed along.

 

More importantly, what's your actual field of study?

1 hour ago, LyonArtime said:

More importantly, what's your actual field of study?

I'm a grad student and I do social and political philosophy.

