Problem Machine

Posts posted by Problem Machine


  1. Eh I gotta disagree with people wanting a more humor-centric TAZ. I think the humor was great as-is, and all the more so for being contrasted against some heavy shit. That said, I would prefer a move away from the 'epic' towards more personal stories, since the personal interactions and improvisations were really where the show excelled.


  2. It super doesn't matter if any of us condemn him! We're not a jury. This fixation on needing to decide whether someone is a bad person or not is one of the least healthy forms of discourse and it's fucking everywhere and I'm sick of it. So, so much of the vapidity and wastefulness of modern discourse falls away if we just let go of the need to determine Who Is Right and Who Is Good and Who Is Bad. He did some things he shouldn't have done and is probably going to face some consequences for it -- and as far as we, the public, are concerned, that's essentially the end of it. If someone is a fan of Nick's work and feels a desire to continue to support him, the question is not "is it okay to support a bad person?"; it's "am I okay with supporting a person who has done the things he has done?"


  3. Short answer is it's not really any of our business, but as I understand it, it mostly has to do with misrepresenting himself and being generally overly insistent and dishonest to get nudes and possibly more.

     

    But this still sounds a lot like not trusting women to know the difference between flirting and harassment, and believing that they will constantly interpret benign behavior in a negative way -- treating that as a trait that, again, crosses all cultural boundaries besides gender. I'm pretty sure that just being clueless or awkward doesn't accumulate a string of harassment stories from women throughout an entire industry. If you want more information on whether your own social behavior is acceptable, this is probably not the most beneficial avenue of investigation: I believe there are articles out there for dudes who are worried about being creepy, so maybe mining stories of people's personal trauma isn't the best source of data.


  4. For me the Nick situation is pretty simple: When a bunch of women independently say a dude has harassed them, you can either believe that A: A bunch of people tied together only by their gender decided to make up rumors about a guy for reasons or B: he's a creeper. One of those is hugely misogynist, so I do the other one. Beyond that, the specifics aren't particularly relevant.


  5. Well my personal stance is that masculinity and femininity are stupid cultural ideas, but that building communities, developing self-reliance, and caring for, protecting, nurturing, and providing for others are all desirable traits that overall benefit society and aren't really in conflict with each other. However, regardless of whether these ideas are stupid, they still exist, and certain traits, positive and negative, are coded as masc or femme.


  6. I have no particular gender associations with someone living alone.

    What's maybe more interesting about that image search is how different the results are if you search for "do it yourself" instead of the acronym.


  7. The "mother bear" scenario is not considered typical gender behavior though, the way that scenario is usually presented is that a mother's caring nature is so overpowering that it overcomes her typical feminine inclination towards non-violence and peaceful resolution. Like, the entire reason it's A Thing at all is because men are the ones who are stereotypically supposed to be the defenders.


  8. 1) Those aren't the same things except in a very vague sense, 2) The words we choose to describe things still matter, 3) Self-reliance is absolutely 100% more associated with masculinity????


  9. 11 hours ago, itsamoose said:

    From what I'm seeing, the softboy/fuckboy thing seems to highlight a failure of the dating and communication process more so than an aspect of masculinity.  Certainly in my life, and this seems to be true with my conversations with others, sex or any physical aspects of a relationship are simply unacceptable goals in a romantic endeavor.  Not only is this the case, but seeking these things directly is seen as a manipulation or subversion of the process.  To this end sex has become a sort of currency in the process rather than a part of it.

    If you're looking for a fuck, say you're looking for a fuck. There are a lot of websites out there for this explicit purpose. It's disingenuous to say you're looking for something else rather than a fuck if all you want is a fuck. So there's a baseline miscommunication/deceit which is itself not intrinsically masculine, but then masculinity is weaponized towards this deceitful goal. Also, what drives that deceit is the toxic masculine behavior of 'scoring' or seeing women as prizes to be won.

     

    11 hours ago, itsamoose said:

    So far in this thread I've noticed much discussion of masculinity, perhaps in totality, as a bad thing or as a thing misused by the deceitful.  I'd be curious to know, what would you say masculinity is?  Not what is it used for, or where it might exist, or who might exhibit it, or what version of it is best, but what is it exactly?  As someone who has had a quite positive experience in my life with (my) idea of masculinity I'm curious to know what you all think of the concept in its own context.

    Positive masculine values are generally along the lines of dadliness -- self-reliance and a desire to protect and provide for another. These aren't exclusively male traits, obviously, but they are traditionally considered masculine, as opposed to the more feminine equivalents of community-building and a desire to care for and nurture another.


  10. True! That's why I say that it's the right decision over a long timescale and the wrong decision over a short timescale. My argument is that it's only wrong when viewed in a very specific way, whereas in the vast majority of circumstances it's right.

    How would you define a decision? The exact moment when you pick up the box? When you start moving to pick up the box? When you look at the box? When you think to yourself "I will take this box"? Language is by necessity imprecise, but we can at least be precise about its imprecision, and acknowledge that we often mean many different points in time when we talk about when someone makes a decision.


  11. 13 minutes ago, TychoCelchuuu said:

    It's the difference between being good at math and physics vs. being able to travel through time. One is perfectly understandable and the other contravenes the laws of physics.

    Uh huh, but is there any difference in the information obtained?

     

    13 minutes ago, TychoCelchuuu said:

    I think this is an indefensible view of what a "decision" consists of, and if you were forced to work out all the implications of your view you'd quickly find yourself unable to defend it. But, showing this is a lot of work, because it requires providing an entire theory of what it is to decide, which is very difficult. Suffice it to say that I think on your view, it turns out people either never make any decisions, or they only ever make one decision.

    I think we just use 'decision' as a descriptive term for an inflection point in a human behavior, but these are just arbitrary descriptions that we use out of convenience. So, depending on what magnitude or shape you require to describe something as a decision, we can describe someone as either making a million decisions a second or zero decisions over their entire lifetime. However, what drives those decisions is always deeply rooted in a person's history, and is inseparable from that history.


  12. Eh, is 100% reliable prediction of the future different from information directly from the future? I don't see any difference there.

    5 minutes ago, TychoCelchuuu said:

    Nobody has ever claimed you can jump the tracks, though. Your mistake is in thinking that it's possible for the predictor to err. Perhaps the predictor cannot err. Perhaps the predictor is 100% right because it knows physical mechanics well enough to predict your choice with 100% accuracy. That's fine. That doesn't disagree with a single thing I've said. (It's not really part of the setup of the problem because it's irrelevant either way. We can imagine the predictor has 100% accuracy because of physics, or we can imagine the predictor doesn't, and they're right because they're very good at psychology, or whatever. Again, it doesn't matter.)

    I don't understand; my entire point was that it isn't possible for the predictor to err. I mean, everything else you say tracks, I just found that confusing.

     

    5 minutes ago, TychoCelchuuu said:

    My point is merely that, according to a causal decision theorist, the right choice is two boxes. Maybe it's impossible for you to make that choice, and the predictor knows it's impossible, so they put the money in the box, because they know you'll one-box. Still, you're making the wrong decision. You should two-box (if you could). You can't, of course. You're stuck making the wrong decision. As you put it, you'd have to spontaneously change the mechanical composition of your brain, which is impossible. But just because you're forced to make a decision, this is no reason to think it's the right decision. According to the causal decision theorist, it's the wrong decision. (According to the evidential decision theorist, it's the right decision! So that's good news.)

    My perspective is that 'decisions' aren't something that happens in just one moment, that they're something that extends over time. Maybe at the exact moment your decision is enacted it's the wrong one, but on a time scale that includes the predictor's read on your decision it's the correct one -- the result is just being skewed by artificially cropping the time window down to the 'climax' of your decision.


  13. Argh now I'm frustrated because I feel like you're not acknowledging what I'M saying.

    The idea is that no choice is made in isolation; it's always made within the context of the mind's framework, which is mechanistically deterministic. Making the decision to take the box now is a result of the mechanisms of the mind, which also determine what's in the boxes. You can't just choose to take a different thing than you've chosen; that would require spontaneously changing the mechanical composition of your brain, in other words supernatural interference. Therefore, the causal effect you have on the contents of the box is in having the mind-layout that makes you make the correct choice. Sure, what box you choose doesn't change the past, but what box you will choose already has. Believing that you can just jump the tracks is supernatural nonsense.

    Getting information from the future doesn't require time travel in a mechanically deterministic universe, it only requires complete information about the current moment and the capacity to make complete mechanical predictions based upon that -- basically just simulating the universe and time stepping it forwards a few hours. We humans do a simple and shitty version of it all the time, but in this case I'm positing a predictor who can do the real thing.
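
    To make that concrete, here's a minimal toy sketch of what I take a mechanical predictor to be (entirely my own illustration -- the function names and payoffs are hypothetical, not part of any canonical formulation): the predictor just runs the same deterministic decision procedure the chooser will later run, so the prediction and the eventual choice can't come apart.

    ```python
    # Toy Newcomb setup: a perfect predictor "simulates" the chooser by running
    # the same deterministic decision procedure before filling the boxes.
    # All names and payoffs here are hypothetical illustrations.

    def one_boxer(boxes):
        return "opaque"            # always takes only the opaque box

    def two_boxer(boxes):
        return "both"              # always takes both boxes

    def run_newcomb(decide):
        predicted = decide(None)   # the predictor time-steps the chooser forward
        boxes = {"clear": 1_000,
                 "opaque": 1_000_000 if predicted == "opaque" else 0}

        choice = decide(boxes)     # the actual choice, made "later"
        assert choice == predicted # a deterministic chooser can't surprise the predictor
        return boxes["opaque"] if choice == "opaque" else boxes["opaque"] + boxes["clear"]

    print(run_newcomb(one_boxer))  # 1000000
    print(run_newcomb(two_boxer))  # 1000
    ```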


  14. 4 minutes ago, TychoCelchuuu said:

    I am not sure where I suggested that this specific case is good at highlighting the relative strengths of these two decision theorists. Where exactly did you get the notion that this was the goal of the case? As I've said a few times in this thread already, the point of the case is to show how the two theories differ in their answers.

     

    This is one of the reasons why, in my very first post in this thread, I said the presentation of this problem in the podcast was not very good, and that the right way to think about the problem is to think of it as highlighting the difference between the two decision theories.

    I'm doing my best to understand your approach, but it's not helped by me not really being clear on when you're agreeing or disagreeing with me. My contention is just that this is an obvious choice based on one approach consistently yielding far better results: This doesn't seem to be something you disagree with me on. What you do seem to disagree with me on is the original intent of the problem posed, to which I say, "fine, but that's not how it was originally presented to me, so you can see why I'd find it silly in its original context". I now acknowledge the utility of it as a tool to explain the difference in approach to problem solving, but suggest that this utility is somewhat undercut by it presenting a circumstance where one approach is clearly superior to the other, so it could probably be formulated better if that's the intent. I don't feel that I'm ignoring the things you say, but sometimes nuances fall through the cracks, so perhaps I've missed something in my characterization of our exchange.

     

    Now, having had a chance to do a bit of reading re: evidential decision-making vs causal decision-making, my stance has shifted. I don't really understand the point of evidential decision-making except as a stand-in for when there's a lack of visible causal evidence, but I also don't feel this problem is a good illustration of causal vs evidential decision-making, since saying so ignores the causal link between deciding to take the second box -- between being the sort of person who would take the second box -- and that box being full of money. You're saying that in the moment there's only one correct decision for the causal decision-maker to make, but there's no moment without the moment before, no effect without cause, and in this case the cause of the box being full of money is making the decision to take that box.

     

    Think of it this way. Say I'M the predictor: I can choose now to either fill the boxes with a million dollars or not. I know that afterwards my memory is going to be wiped and I will be presented with this problem (not being told that I was the predictor). If I predict wrong then they kill me or whatever, something that ensures that I try as hard as possible to predict correctly. Under those circumstances, I'm going to do my best to essentially collude with myself: I'm going to put a million dollars in the box knowing that my logical process would lead me to take that box. There is a clear causal relationship between my decisions now/later and what's going to be in the box. A machine that could mechanistically predict my actions would be, if anything, more reliable than me trying to predict my own actions -- and saying that acknowledging that is somehow beyond the realm of causation is difficult for me to swallow. Both the predictor and the chooser are operating as different spokes on the same gear, moving at a time offset but in the same direction.


  15. Okay, but we agree that this specific case highly incentivizes that one particular branch of decision-making, which means it's not very good at highlighting their relative strengths. And, sure, creating a problem specifically to highlight a situation in which they generate different outcomes is interesting, but every presentation of the problem has been as a paradox or as a dilemma, not as an illustration of the consequences of different decision-making processes. My argument is not that evidential decision making is always better, just that clearly the correct decision in this case is to take the opaque box. Maybe that's different in other cases, in which case that's fine. I'd be interested in seeing those cases though, if you have any examples handy for me to search.


  16. I'm having trouble with how we can agree that causal decision-making will definitely cost you a million dollars while you still argue that it's not a bad way to approach the problem. This sounds like, rather than a dilemma created to demonstrate two separate but equally viable systems, a dilemma contrived to make one of those systems look foolish.

     

    13 minutes ago, Gormongous said:

     

    I am skeptical that "reality doesn't contradict itself" is not an a priori proposition borne of the assumption that reality is entirely defined by comprehensible and ineluctable rules. If your evidence for all paradoxes being semantics is that reality appears to be internally consistent, and your evidence for why reality appears to be internally consistent is that all paradoxes are semantics, I don't really know that I can agree with you, not with counterintuitive things like quantum theory in play.

    I didn't assume any such thing. What would reality contradicting itself look like? I'm not arguing that there's some inherent logic of reality, but that whatever reality is is what it is. Whether it's logical or not is just a matter of whether what reality is is something we can understand -- in any case, that's a flaw in our logic, in the symbolic systems describing the underlying reality, more than anything else.


  17. 43 minutes ago, Gormongous said:

     

    That is... a hell of a statement, even if we just confine it to philosophical abstracts. Do you have any more to this, or is it just an intuitively felt thing for you?

    Mostly based on observation, but also based on what a paradox is: The idea of a paradox is that something is contradictory, but reality doesn't contradict itself; only the symbols we use to describe reality can contradict. The problems presented by paradoxes are entirely symbolic, and thus can be resolved by rephrasing the problem, or by discovering there was never an underlying problem at all, just a weird semantic trick.

     

    Quote

    It's not a 'paradox' at all. It's just an illustration of the different outcomes generated by different decision procedures. There is no "picking apart the timeline in a highly specific way rather than creating an actual strategy." Both decision procedures have an actual strategy - in fact, they have very detailed strategies that you can read up on here if you're interested. If you don't want to read up on it, please just trust me when I say there are definitely "actual strategies" out there that pick two boxes, and for good reason.

    This is what I mean when I say that this distinction comes down to weird time-picking: You should always, from our perspective now outside the problem, choose to take the opaque box. However, from the perspective of being inside the room at the moment-of, you should always choose to take both boxes -- except, by being the person who in the moment would take both boxes, you've just again screwed yourself out of a million. Oops.

    So the solution, I suppose, is to always pick the opaque box and then end up unintentionally somehow picking up both. Does the AI account for clumsiness??? Does the AI account for you having been introduced to the problem already on a podcast/forum?

     

     

    Quote

    It's impossible to "fuck with the prediction." The prediction happened in the past: it is now immune to fuckery. No matter how badly you wish to fuck with it, it's immune to being fucked with, the same way you can't influence anything else that happened in the past. This is how causation work: stuff in the past is immune to being changed by stuff in the future. This is why the 2-boxing strategy is recommended by what's called "causal" decision theory. That decision theory is focused on what effects you could possible cause, right here, right now. Because you can't possibly change the amount of money in the boxes, causal decision theory recommends picking both boxes, because no matter what's in them, that strategy will give you more money.

    Except it's not impossible to fuck with the prediction when you assume that the machine has already simulated your exact decision-making process in the past. In that case, it is to your benefit to be the person who will come to the decision that puts the million in the box. Causality works in reverse as well: Effects have causes.

     

    Quote

     

    It's not an intractable problem. It's not even a problem! It's simply a thought experiment designed to illustrate the differences between evidential and causal decision theories. Your mistake is thinking about is as a paradox or a problem or whatever. Nobody's up at night sweating whether to pick one or two boxes. The choice is obvious. What's interesting is that the obvious choice differs between these two theories. Of course, there might be a nearby intractable problem: which decision theory should I pick? That's not what we're talking about here, though. This problem is quite tractable. Simply pick your favorite decision theory and it'll tell you what to do.

     

    Again, this is not a "paradox," and to the extent there is any sloppy thinking going on, it is entirely in your corner. Believe me when I say that I and other philosophers don't waste our time on stupid problems, tricks of semantics, or sloppy thinking. We're not all a bunch of fucking yahoos.

    My argument is that one of the decision theories is based on a belief in the supernatural power of free will to escape causality, and is thus actually completely incorrect. And literally everyone wastes their time on stupid problems, semantics, and sloppy thinking, so let's not preemptively eliminate that as a possibility.

     

    Also re: all this not a paradox stuff, isn't this called "Newcomb's Paradox"???


  18. You are taking the risk by deciding to pick both boxes and choosing to be the jackass who takes both boxes. This is a 'paradox' created by picking apart the timeline in a highly specific way rather than creating an actual strategy. The best play is to choose to take one box. Maybe in the future it would be better to take both, but making the decision now you choose to take one, and you commit to that decision because to do otherwise would be to fuck with the prediction that makes choosing one box the best choice.

     

    See, you're coming at this from the perspective that if people have spent a long time arguing about it, it's an intractable problem. I'm coming at it from the perspective that if people have spent a long time arguing about it, it's probably a stupid problem. Most 'paradoxes' are just tricks of semantics and sloppy thinking.


  19. If the goal is to make the most money possible and they wanted actual clarity about that goal, they should have just loaded both boxes with something close to the same amount, like $1000/$2000. The way this is framed, it's taking a huge and unnecessary risk for a 0.1% gain. Causal decision theorists, good job, you just screwed yourself out of a million dollars with your bad decision-making.
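
    For what it's worth, here's a quick sketch of the arithmetic behind that framing, using the standard $1,000 / $1,000,000 payoffs (the sweep over predictor accuracy is my own addition, purely for illustration):

    ```python
    # Expected payoff of each strategy as a function of predictor accuracy p,
    # using the standard $1,000 / $1,000,000 Newcomb payoffs. The accuracy
    # sweep is an illustrative assumption, not part of the original problem.

    def expected_payoffs(p):
        one_box = p * 1_000_000                 # opaque box is full iff the prediction was right
        two_box = (1 - p) * 1_000_000 + 1_000   # opaque box is full only if the predictor erred
        return one_box, two_box

    for p in (0.5, 0.51, 0.9, 1.0):
        one, two = expected_payoffs(p)
        print(f"accuracy {p:.2f}: one-box ${one:,.0f}, two-box ${two:,.0f}")

    # With a perfect predictor, two-boxing nets $1,000; the extra $1,000 on top of
    # a full opaque box would only ever be a 0.1% improvement on $1,000,000.
    ```

    On those payoffs, one-boxing comes out ahead whenever the predictor is right more than about 50.05% of the time.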