Posts posted by TychoCelchuuu


  1. 3 hours ago, Ben X said:

    Really the only thing I can complain about is that no matter how low I put the volume on everything else, I still can't hear Atlas' instructions if anything else is going on. A lot of games seem to have this issue, I've found. I always have to put music volume down to at most 40%. Even the old Lucasarts adventures did it.

    Are you sure nothing's fucked up in your audio setup, like in Windows or whatever? If Windows thinks you have 5.1 surround sound but you only have 2 speakers and it's piping all the dialogue through the center speaker that doesn't exist or whatever, that might explain that sort of thing. Although if this reaches back even to the old Lucasarts adventures then chances are it's not technical.


  2. 33 minutes ago, Problem Machine said:

    How would you define a decision? The exact moment when you pick up the box? When you start moving to pick up the box? When you look at the box? When you think to yourself "I will take this box"? Language is by necessity imprecise, but we can at least be precise about its imprecision, and acknowledge that we often mean many different points in time when we talk about when someone makes a decision.

    As long as whatever the decision is, it's something that occurs after the money is in the box, we don't have to get more specific. Something that vague is fine, because once it's too late to change what's in the boxes, we get the division between causal and evidential decision theory.


  3. 4 minutes ago, Problem Machine said:

    Uh huh, but is there any difference in the information obtained?

    Only if something changes in between the prediction and the future, which as far as we know is not going to happen (the laws of physics don't seem to be going anywhere), but then again, who knows?

     

    5 minutes ago, Problem Machine said:

    I think we just use 'decision' as a descriptive term for an inflection point in human behavior, but these are just arbitrary descriptions that we use out of convenience. So, depending on what magnitude or shape you require to describe something as a decision, we can describe someone as either making a million decisions a second or zero decisions over their entire lifetime. However, what drives those decisions is always deeply rooted in a person's history, and is inseparable from that history.

    If "decision" is this loosey goosey then it's not clear where you get off saying anything at all about the right "decision" to make when presented with this choice, since it's all effectively a load of bullshit that depends on whatever arbitrary description you pick to use for the sake of convenience.


  4. On 7/15/2017 at 8:23 PM, Turgid said:

    Also, that football story is a tad too much, IMO. Jon Bois has really gone beyond profound sports-adjacent goofery, into high-concept sci-fi.

    It's only too much if all you're looking for is profound sports-adjacent goofery! For people who either aren't looking for that, or who are looking for that but also high-concept sci-fi, it's perfect. I've only read like half of it so far but I love it.


  5. 15 minutes ago, Problem Machine said:

    Eh, is 100% reliable prediction of the future different from information directly from the future? I don't see any difference there.

    It's the difference between being good at math and physics vs. being able to travel through time. One is perfectly understandable and the other contravenes the laws of physics.

     

    15 minutes ago, Problem Machine said:

    I don't understand, my entire point was that it isn't possible for the predictor to err. I mean, everything else you say tracks, I just found that confusing.

    I was merely pointing out that "it's impossible for the predictor to err" is something you're bringing to the table. That's not part of the original setup. The original setup is just that the predictor is very very good and so far has not made any mistakes.

     

    16 minutes ago, Problem Machine said:

    My perspective is that 'decisions' aren't something that happens in just one moment, that they're something that extends over time. Maybe at the exact moment your decision is enacted it's the wrong one, but on a time scale that includes the predictor's read on your decision it's the correct one -- the result is just being skewed by artificially cropping the time window down to the 'climax' of your decision.

    I think this is an indefensible view of what a "decision" consists of, and if you were forced to work out all the implications of your view you'd quickly find yourself unable to defend it. But showing this is a lot of work, because it requires providing an entire theory of what it is to decide, which is very difficult. Suffice it to say that I think, on your view, it turns out that people either never make any decisions or only ever make one decision.


  6. 9 minutes ago, Problem Machine said:

    The idea is no choice is made in isolation, it's always made within the context of the mind's framework which is mechanistically deterministic. Making the decision to take the box now is a result of the mechanisms of the mind, which also determine what's in the boxes. You can't just choose to take a different thing than you've chosen, that would require spontaneously changing the mechanical composition of your brain, in other words supernatural interference. Therefore, the causal effect you have on the contents of the box is in having the mind-layout that makes you make the correct choice. Sure, what box you choose doesn't change the past, but what box you will choose already has. Believing that you can just jump the tracks is supernatural nonsense.

    Nobody has ever claimed you can jump the tracks, though. Your mistake is in thinking that anything I've said requires it to be possible for the predictor to err. Perhaps the predictor cannot err. Perhaps the predictor is 100% right because it knows physical mechanics well enough to predict your choice with 100% accuracy. That's fine. That doesn't disagree with a single thing I've said. (It's not really part of the setup of the problem because it's irrelevant either way. We can imagine the predictor has 100% accuracy because of physics, or we can imagine the predictor doesn't, and has just been right so far because it's very good at psychology, or whatever. Again, it doesn't matter.)

     

    My point is merely that, according to a causal decision theorist, the right choice is two boxes. Maybe it's impossible for you to make that choice, and the predictor knows it's impossible, so they put the money in the box, because they know you'll one-box. Still, you're making the wrong decision. You should two-box (if you could). You can't, of course. You're stuck making the wrong decision. As you put it, you'd have to spontaneously change the mechanical composition of your brain, which is impossible. But just because you're forced to make a decision, this is no reason to think it's the right decision. According to the causal decision theorist, it's the wrong decision. (According to the evidential decision theorist, it's the right decision! So that's good news.)

     

    9 minutes ago, Problem Machine said:

    Getting information from the future doesn't require time travel in a mechanically deterministic universe, it only requires complete information about the current moment and the capacity to make complete mechanical predictions based upon that -- basically just simulating the universe and time stepping it forwards a few hours. We humans do a simple and shitty version of it all the time, but in this case I'm positing a predictor who can do the real thing.

    That's not literally getting information from the future. That's getting information from the present and the past and using it to predict the future. The way to see that this is not literally getting information from the future is to imagine that the laws of physics change ten minutes from now, after you've made your prediction. Your prediction can't account for this: there's no way to know the laws of physics were going to give out! But if you were literally getting information from the future, you'd know the laws of physics were about to change.


  7. 11 minutes ago, Problem Machine said:

    Now, having had a chance to do a bit of reading re: evidential decision-making vs causal decision-making, my stance has shifted. I don't really understand the point of evidential decision-making except as a stand-in for when there's a lack of visible causal evidence, but I also don't feel this decision is a good illustration of causal vs evidential decision-making, since saying it does ignores the causal link between deciding to take the second box, being the sort of person who would take the second box, and that box being full of money. You're saying that in the moment there's only one correct decision for the causal decision-maker to make, but there's no moment without the moment before, no effect without cause, and in this case the cause of the box being full of money is making the decision to take that box.

    The cause of the box being full of money is not making the decision to take the box. That's literally impossible. It would require something now (the decision to take the box) to cause something in the past (the AI predicting you will take the box). Causation does not work backwards in time like that. Causation can only work forwards in time. Things right now can only cause things subsequent to now.

     

    11 minutes ago, Problem Machine said:

    Think of it this way. Say I'M the predictor: I can choose now to either fill the boxes with a million dollars or not. I know that afterwards my memory is going to be wiped and I will be presented with this problem (not being told that I was the predictor). If I predict wrong then they kill me or whatever, something that ensures that I try as hard as possible to predict correctly. Under those circumstances, I'm going to do my best to essentially collude with myself, I'm going to put a million dollars in the box knowing that my logical process would lead me to take that box.

    But that's only because you're a one-boxer. If you were a two-boxer, you'd follow the same line of reasoning except you wouldn't put the money in the box. This has nothing to do with one-boxing vs. two-boxing. It just has to do with making a good prediction. And of course the prediction depends on what kind of person you are making a prediction about.

     

    11 minutes ago, Problem Machine said:

     There is a clear causal relationship between my decisions now/later and what's going to be in the box. 

    This is half correct. There is a clear causal relationship between your decision now and what's going to be in the box. You, the predictor, get to pick what goes in the box. There is no causal relationship between your decision later and what's going to be in the box, unless by "your decision later" you mean "your prediction now of your decision later." Your decision later can't have any causal effect on what's happening now, because your decision later is in the future, and the future cannot causally impact the present. That would require time travel.

     

    11 minutes ago, Problem Machine said:

    A machine that could mechanistically predict my actions would be if anything more reliable than me trying to predict my own actions -- saying that acknowledging that is somehow beyond the realm of causation is difficult for me to swallow.

    Acknowledging that the machine is a reliable predictor does not require saying that the machine is beyond the realm of causation. The only time we say the machine is beyond the realm of causation is if we say it is not predicting the future on the basis of information it has right now, but rather being influenced by the future by somehow getting information from the future right now. That second thing is impossible. We cannot get information from the future right now. That would require time travel. Why is it difficult to swallow the idea that getting information from the future requires time travel? That's how time travel works: it takes things from the future and brings them backwards. Without time travel, things cannot go from the future to the past. They can only go from the past to the future.

     


  8. 24 minutes ago, Problem Machine said:

    Okay, but we agree that this specific case highly incentivizes that one particular branch of decision-making, which means it's not very good at highlighting their relative strengths.

    I am not sure where I suggested that this specific case is good at highlighting the relative strengths of these two decision theories. Where exactly did you get the notion that this was the goal of the case? As I've said a few times in this thread already, the point of the case is to show how the two theories differ in their answers.

     

    24 minutes ago, Problem Machine said:

    And, sure, creating a problem specifically to highlight a situation in which they generate different outcomes is interesting, but every presentation of the problem has been as a paradox or as a dilemma, not as an illustration of the consequences of different decision-making processes.

    This is one of the reasons why, in my very first post in this thread, I said the presentation of this problem in the podcast was not very good, and that the right way to think about the problem is to think of it as highlighting the difference between the two decision theories. I'm not trying to be a jerk here, but it seems like you're not even reading my posts? I've been at pains to explain to you that this is not a paradox or dilemma, it's just a way to divide two decision theories up, etc. Like, you're happy to spend your time accusing philosophers of wasting our time on bullshit and semantics and so on, but when a philosopher is trying to explain to you why we're not all wasting our time, you just ignore everything he says and you substitute your own views about what's going on? Do you see how this might strike me as a little objectionable and frustrating? You're accusing me of wasting my life on worthless bullshit and when I try to tell you I'm not, you ignore what I say and continue to claim that my entire field is full of clueless bozos or cranks or both.

     

    24 minutes ago, Problem Machine said:

    My argument is not that evidential decision making is always better, just that clearly the correct decision in this case is to take the opaque box. 

    No, that's the correct decision according to evidential decision theory. The correct decision according to causal decision theory is to take both boxes. Now, you might think that really the question isn't "how many boxes should I pick" but "which decision theory should I pick." That's fine. You're more than welcome to ask that question. The question we're talking about right now is not that question. The question we're talking about right now is how many boxes to pick, and you can only answer that question once you pick a decision theory. If you pick causal decision theory, the right answer is two boxes. There is no disputing this. Everyone agrees that, given causal decision theory, you should two box. Everyone also agrees that, given evidential decision theory, you should one box. What's interesting is that the two theories diverge, and also that there's no general agreement about which decision theory to pick. If Newcomb's Box were the only decision people ever faced, it would be clear which decision theory to pick. But we face many decisions in our lives, none of which are anything like Newcomb's Box. On the basis of these other decisions, many philosophers think that we should be causal decision theorists. Because of this, these philosophers think we should pick both boxes if faced with the Newcomb's Box problem.

     

    24 minutes ago, Problem Machine said:

     Maybe that's different in other cases, in which case that's fine. I'd be interested in seeing those cases though, if you have any examples handy for me to search.

    One of the classic cases used to argue for causal as opposed to evidential decision theory is the smoking lesion case, described here.


  9. 2 minutes ago, Problem Machine said:

    I'm having trouble with how we can agree that causal decision making will definitely cost you a million dollars but you can still argue that it's not a bad way to approach the problem. This sounds like, rather than a dilemma created to demonstrate two separate but equally viable systems, a dilemma contrived to make one of those systems look foolish.

    Yes, causal decision theory does look foolish in this case. That's one interesting feature of the case, because normally causal decision theory looks fine. However, there are cases that make evidential decision theory look foolish, and which make causal decision theory look much better, so overall it's not a huge deal. In effect, everyone agrees that causal decision theory is a bad way to approach the problem if this is the only decision you will ever have to make in your life; in that case, obviously you should be an evidential decision theorist.

     

    In reality, though, we make lots and lots of decisions in our lives, and in fact we never come up against the Newcomb's Box decision, so causal decision theory is not in much hot water merely because it fails to handle one (very weird, very specific) case very well. Mostly the interesting thing about the Newcomb's Box problem is that causal decision theory and evidential decision theory give different answers. Prior to this, one might have thought they always gave the same answers, and in fact one might not have realized that there are two ways of making decisions in the first place. So, Newcomb's Box is helpful because it helps us think about the ways that we make decisions.

     

    And yes, one small incidental feature is that it makes causal decision theory look slightly bad, because it can't handle this case well, but then again, nobody will ever face this case in their lives, so it's not a huge deal if causal decision theory can't handle it very well. It's like saying nobody should ever go to culinary school at the most prestigious chef university because they don't teach you how to cook pickled yak testicles, and one day someone might threaten to kill you unless you perfectly cook pickled yak testicles. It's like, well, okay, that's true, if that's the criterion then I shouldn't go to the fancy culinary school. However, this is super unlikely, so probably I'm going to make up my mind based on other stuff, like whether the culinary school is likely to get me a job.

     

    Similarly, if someday you're going to face Newcomb's Box, and also you won't have to make any other choices, then probably you should just go ahead and be an evidential decision theorist. But that's not what our lives look like, so nobody is ever really convinced to abandon causal decision theory just on the basis of its failing to handle this super weird, implausible case very well.


  10. 20 minutes ago, Problem Machine said:

    So the solution, I suppose, is to always pick the opaque box and then end up unintentionally somehow picking up both. Does the AI account for clumsiness??? Does the AI account for you having been introduced to the problem already on a podcast/forum?

    The solution is to pick the opaque box, if you're an evidential decision theorist, or to pick both boxes, if you're a causal decision theorist. The AI accounts for whatever it accounts for - it's not particularly important how it makes its prediction, except that however it does it, it's very good at it, and the prediction in this particular case (for your boxes) has already been made.

     

    20 minutes ago, Problem Machine said:

    Except it's not impossible to fuck with the prediction when you assume that the machine has already simulated your exact decision-making process in the past. In that case, it is to your benefit to be the person who will come to that decision.

    We are not arguing over what kind of person to be while the AI makes its prediction. That is not an interesting question. Everyone has the same answer to that question. The answer to that question is to be a one-boxer. But that is not the question we are asking. The question we are asking is how many boxes to pick, once the prediction has already been made. That question has two legitimate answers: one box or two boxes. Evidential decision theorists recommend one box. Causal decision theorists recommend two boxes.

     

    20 minutes ago, Problem Machine said:

    Causality works in reverse as well: Effects have causes.

    You misunderstood my point. My point is not that effects do not have causes. My point is that causes cannot cause effects in the past. Causes can only cause effects in the future.

     

    20 minutes ago, Problem Machine said:

    My argument is that one of the decision theories is based on a belief in the supernatural power of free will to escape causality, and is thus actually completely incorrect.

    Neither decision theory relies on any such belief. Your mistake is believing that causal decision theory tries to "escape causality." It does not try to escape causality. The causal decision theorist accepts that it's too late to escape causality. The contents of the box have already been determined. The box either already has the money or already doesn't. It's too late to fix that. Your only choice here is how many boxes to pick: one or two. And two is the better choice. Maybe it's literally impossible for there to be money in the box if you've been a causal decision theorist in the past. That's fine. Maybe causal decision theorists are stuck always getting $1,000. They're okay with that. Causal decision theorists don't say that their view will always net them $1,001,000. Maybe their view will always net them $1,000. That's okay. It's still the right decision to pick two boxes, according to causal decision theory.

     

    20 minutes ago, Problem Machine said:

    And literally everyone wastes their time on stupid problems, semantics, and sloppy thinking, so let's not preemptively eliminate that as a possibility.

    I'm not preemptively eliminating this as a possibility; I'm speaking from experience, having spent many hours of my life studying this question in quite a bit of detail. You, meanwhile, don't even understand it yet, let alone have a particularly interesting take on the problem. I think it is a little presumptuous of you to dismiss an entire field of people (a field that includes me), people who are not idiots (I don't think I'm an idiot), as having wasted its time on stupid problems, mere semantics, and sloppy thinking. My claim is not that it's impossible that this has ever happened, but merely that it hasn't happened in this case, and you're going to have to do better than presenting mistaken understandings of the problem (and bad conclusions supported by these mistaken understandings) in order to convince me otherwise about this case.


  11. Just now, Reyturner said:

    ez

     

    Tell the clairvoyant to close his eyes. When he does, punch him in the face. If he reacts before your fist lands, take box B. Otherwise, take both.

    I know you're joking, but the point of this problem is that the causal decision theorist would say that even if the clairvoyant does react, you should still take both boxes. For whatever reason, people sometimes get angry when presented with this sort of reasoning and insist that two-boxers are a bunch of idiots, etc. But they aren't, and it can be interesting to think about why it would make sense to pick both boxes even if you're given good evidence that this person is clairvoyant. If you can figure that out, you can see what the causal decision theorists are talking about.


  12. 2 minutes ago, Problem Machine said:

    You are taking the risk by deciding to pick both boxes and choosing to be the jackass who takes both boxes.

    I already demonstrated why it's not a risk. Why is picking both boxes a "jackass" move? 

     

    2 minutes ago, Problem Machine said:

    This is a 'paradox' created by picking apart the timeline in a highly specific way rather than creating an actual strategy.

    It's not a 'paradox' at all. It's just an illustration of the different outcomes generated by different decision procedures. There is no "picking apart the timeline in a highly specific way rather than creating an actual strategy." Both decision procedures have an actual strategy - in fact, they have very detailed strategies that you can read up on here if you're interested. If you don't want to read up on it, please just trust me when I say there are definitely "actual strategies" out there that pick two boxes, and for good reason.

     

    4 minutes ago, Problem Machine said:

    The best play is to choose to take one box.

    According to evidential decision theory, yes. According to causal decision theory, the best play is to take two boxes. This is the entire point of the thought experiment: it shows us that the two decision theories come apart in certain specific cases, which is interesting.

     

    5 minutes ago, Problem Machine said:

    Maybe in the future it would be better to take both, but making the decision now you choose to take one, and you commit to that decision because to do otherwise would be to fuck with the prediction that is making it the best choice to choose one box.

    It's impossible to "fuck with the prediction." The prediction happened in the past: it is now immune to fuckery. No matter how badly you wish to fuck with it, it's immune to being fucked with, the same way you can't influence anything else that happened in the past. This is how causation works: stuff in the past is immune to being changed by stuff in the future. This is why the two-boxing strategy is recommended by what's called "causal" decision theory. That decision theory is focused on what effects you could possibly cause, right here, right now. Because you can't possibly change the amount of money in the boxes, causal decision theory recommends picking both boxes, because no matter what's in them, that strategy will give you more money.

     

    7 minutes ago, Problem Machine said:

    See, you're coming at this from the perspective that if people have spent a long time arguing about it it's an intractable problem.

    It's not an intractable problem. It's not even a problem! It's simply a thought experiment designed to illustrate the differences between evidential and causal decision theories. Your mistake is thinking about it as a paradox or a problem or whatever. Nobody's up at night sweating whether to pick one or two boxes. The choice is obvious. What's interesting is that the obvious choice differs between these two theories. Of course, there might be a nearby intractable problem: which decision theory should I pick? That's not what we're talking about here, though. This problem is quite tractable. Simply pick your favorite decision theory and it'll tell you what to do.

     

    8 minutes ago, Problem Machine said:

    I'm coming at it from the perspective that if people have spent a long time arguing about it it's probably a stupid problem. Most 'paradoxes' are just tricks of semantics and sloppy thinking.

    Again, this is not a "paradox," and to the extent there is any sloppy thinking going on, it is entirely in your corner. Believe me when I say that I and other philosophers don't waste our time on stupid problems, tricks of semantics, or sloppy thinking. We're not all a bunch of fucking yahoos.


  13. 2 hours ago, Problem Machine said:

    If the goal is to make the most money possible and they wanted actual clarity to that goal, they should have just loaded both boxes with something close to the same amount, like $1000/$2000. The way this is framed, it's taking a huge and unnecessary risk for a 0.1% gain. Causal decision theorists, good job, you just screwed yourselves out of a million dollars with your bad decision-making.

     

    The problem is that causal decision theorists don't screw themselves out of anything with their bad decision-making, if by "bad decision-making" you mean picking both boxes. The only thing that can lose you the $1,000,000 is the predictor predicting that you'll pick two boxes. Your actual choice is irrelevant: it's your actions prior to the prediction that matter.

     

    If your point is just that being a causal decision theorist while the AI evaluates you is bad, that's obviously true. Nobody disagrees about that. The causal decision theorist does not say "you should be a causal decision theorist during the time the AI is observing your behavior." That would be goofy! If you think one side of this debate is doing something goofy, you're missing the point. Neither side is goofy.

     

    So, the non-goofy thing that the causal decision theorist is saying is that, once it's time to make the choice, you might as well pick both boxes. By that point, the prediction has already been made, and you're guaranteed to get more money if you pick both boxes. You get more money if the opaque box is empty: you get $1,000 vs $0. You get more money if the opaque box is full: you get $1,001,000 vs. $1,000,000. So, either way, you get more money if you pick both boxes. So you should pick both boxes!

     

    You're not taking any "risk" by picking both boxes, according to the causal decision theorist. It's too late to risk anything by picking both boxes. The AI has already either put the money in the box or left it empty, and nobody can change that now. If the money is there, it's there for you to take. If there's no money in the box, the only risk is taking that empty box rather than that empty box plus the $1k box. So of course you shouldn't take that risk.
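
    If it helps to see that argument laid out, here's a minimal sketch of the payoffs; the dollar amounts come from the problem itself, and the little loop is just my illustration:

    ```python
    # Payoffs once the prediction has been made, as the causal decision theorist
    # sees them: the predictor has already either filled the opaque box or not.
    CLEAR_BOX = 1_000  # the transparent box always holds $1,000

    for opaque in (0, 1_000_000):        # opaque box empty vs. full
        one_box = opaque                 # take only the opaque box
        two_box = opaque + CLEAR_BOX     # take both boxes
        print(f"opaque box holds ${opaque:,}: "
              f"one-boxing gets ${one_box:,}, two-boxing gets ${two_box:,}")

    # In both cases two-boxing comes out exactly $1,000 ahead, which is the
    # dominance reasoning behind the causal decision theorist's answer.
    ```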


  14. The entire point of this box thing is to illustrate the difference between two decision making procedures which have been proposed by philosophers.

     

    According to evidential decision theory, the right decision is the one that the evidence available to you suggests will turn out best. Because the evidence suggests that the predictor is correct, you want to pick only the opaque box, because then the predictor, being correct, will have put $1,000,000 in it.

     

    According to causal decision theory, the right decision is the one based on what consequences your choice can actually have. Because it's too late for your choice to influence how much money is in the boxes, you should pick both boxes, because that way you get more money. You get more money if the opaque box is empty, because you get $1,000 vs. $0. You get more money if the opaque box is full, because you get $1,001,000 vs. $1,000,000. So either way, you should pick both boxes.
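
    To put rough numbers on that contrast, here's a minimal sketch of the evidential calculation. The 99% accuracy figure is purely my illustration; the thought experiment only says the predictor is very, very good:

    ```python
    # Evidential decision theory compares expected payoffs conditional on your
    # choice, treating the choice as evidence about what the predictor already did.
    ACCURACY = 0.99      # illustrative assumption, not part of the problem
    OPAQUE = 1_000_000   # what the predictor puts in the opaque box, if anything
    CLEAR = 1_000        # always sitting in the transparent box

    # If you one-box, the predictor (probably) foresaw it and filled the opaque box.
    ev_one_box = ACCURACY * OPAQUE + (1 - ACCURACY) * 0
    # If you two-box, the predictor (probably) foresaw that and left it empty.
    ev_two_box = ACCURACY * CLEAR + (1 - ACCURACY) * (OPAQUE + CLEAR)

    print(f"expected value of one-boxing: ${ev_one_box:,.0f}")   # about $990,000
    print(f"expected value of two-boxing: ${ev_two_box:,.0f}")   # about $11,000
    ```

    The causal decision theorist doesn't dispute that arithmetic; the disagreement is over whether your choice should be treated as evidence about boxes whose contents are already settled.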

     

    It was not presented super well in the email sent to the podcast for various reasons. For instance, it's important to make clear to people that your goal is to make the most money possible - this screens off choices like Chris's "I'll pick one box because $1,000 is not enough to make a big deal in my life," which is a sensible choice in real life but which doesn't get at the point of the thought experiment. On the other hand, the thought experiment is really only useful for dividing evidential from causal decision theory, and that's a question basically nobody cares about, so maybe it's fine to get something like Chris's response.

     

    You can find more details here if you're interested.


  15. I moderately enjoyed both films, the latter one a bit more because it was prettier and it had marginally less "people with guns literally run towards John Wick in order to get shot." I'm not exactly Mr. REALISM PLEASE in movies about a society of assassins with strict rules and special gold coins and so on, but it does look silly just to see a bunch of people run towards the hero in order to get shot, this despite the fact that they have guns, a tool INVENTED to PREVENT YOU FROM HAVING TO RUN AT SOMEONE IN ORDER TO HURT THEM. The Peter Serafinowicz cameo was great and the mirrors sequence was cool too. I liked the bad guy who spoke in sign language.