nelmis


Posts posted by nelmis


  1. 6 hours ago, unimural said:

     

    When I was a teenager I had devised a thought experiment that bothered me a great deal. You have the power to heal people by physically harming them. Specifically, whatever you damage will be healed perfectly. So to heal someone of lung cancer you have to physically damage their lungs. However, if anyone ever comes to believe that this is what you do, you lose the power for good. What, if any, would be an ethical way of using this power?

     

    Be McDonald's


  2. "Jelly is nutritious" is a hilarious and incredibly Breckonian hill upon which to die, and I am delighted that the debate reared its head again.  It's also great to have the video available so we can relive the experience of "is he doing a thing here, this is a thing... it isn't a thing."  JM Smucker, the 22nd brain.

     

    Speaking of which, has this already been reported?  45 Actual Brains Actually Found.


  3. The scenario didn't specify the distribution whereby the mean would be established, which I feel is critical.

     

    I like to think that on the morning of Remoween, everybody in the world gets up exactly the same as they were, except for one guy, all of whose attributes have been reset to whatever bonkers proportions they would need to be such that Chris Remo is statistically perfectly average in every way.


  4. 32 minutes ago, Jake said:

     

    That's bleak! I had thought a similarly bleak thing: It's a place to raise monkeys used for medical/drug testing by either that specific university, or many universities.

     

    I thought of an even grosser combination of the two.  They make weapons there + they need primates nearby = biological weapons testing.

     

    17 hours ago, LyonArtime said:

     

    If you follow Nick's advice, you will have literally left $1,000 on the table. The AI made its choice before the show started, so literally nothing is stopping you from taking both... unless the AI predicted you would do that before you walked on stage, in which case the closed box is empty. 

     

    Also, unsurprisingly, Nick thought "the AI guessed correctly 50 times" equated to "the AI is infallible", which I probably should have anticipated considering your beliefs about robots.

     

    I dunno; I think both of those are a bit unfair.  I'm not a philosophy-man, but it seems to me that the whole point of the exercise is that it makes for a weird unsolvable paradox where you can't say right now what you should do later, because there's an intervening factor that is a near-infallible or infallible prediction about what you will do later, and that prediction will be based on what you do from now until then.  Looking at the moment in time where you're deciding which box to pick suggests that you have some control at that point, but really the question comes down to whether you're the kind of person who will pick A, or the kind of person who will pick B.  The $1.001 million solution is "be the kind of person who will pick the closed box, but then don't do it."  Which is paradoxical.  It's not really true that you will have literally left $1,000 on the table, unless the paradoxical situation obtains, in which case, paradox.

     

    So Nick's original solution -- if you don't assume you can trick the nigh-infallible AI/gamer-god -- is optimal.  Be the kind of person who will take the million, and then take the million, and don't even try to f with the reverse causality hoist bomb that's waiting if you try to get a little extra.

     

    The way the question is framed makes it irrational to assume otherwise, I think.  It's the classic genie setup: you don't know exactly what weird rules govern this interaction, and you can profit slightly more if you push the margins and come out ahead... but also you're standing on the mangled corpses of all the people from before who tried to game the system, and there's a million dollars in that box if you just deal with it.
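    For what it's worth, the expected-value math backs this up. A minimal sketch, assuming the standard formulation (none of these numbers beyond the $1,000,000 and $1,000 are spelled out in the thread): the closed box holds $1,000,000 only if the predictor foresaw one-boxing, the open box always holds $1,000, and the predictor is right with probability p.

```python
# Expected payouts for Newcomb's problem under the assumptions above.

def ev_one_box(p):
    """Expected value of taking only the closed box."""
    # Right prediction (prob p): closed box is full -> $1,000,000.
    # Wrong prediction (prob 1 - p): closed box is empty -> $0.
    return p * 1_000_000

def ev_two_box(p):
    """Expected value of taking both boxes."""
    # Right prediction (prob p): closed box is empty -> just the $1,000.
    # Wrong prediction (prob 1 - p): both payouts -> $1,001,000.
    return p * 1_000 + (1 - p) * 1_001_000

# One-boxing wins whenever p exceeds the break-even point:
#   p * 1_000_000 = p * 1_000 + (1 - p) * 1_001_000  =>  p = 0.5005
for p in (0.5, 0.5005, 0.9):
    print(p, ev_one_box(p), ev_two_box(p))
```

    So "be the kind of person who takes the closed box" pays off against any predictor even slightly better than a coin flip, let alone one that's gone 50 for 50, which is the sense in which Nick's original answer is optimal.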


  5. 14 hours ago, Turgid said:

    I think when it comes to legal defense, any sentence is going to have to be way more baroque and involved than that to make the meaning completely clear and unambiguous to a judge. You're basically writing a contract, there's a reason people study for years to do that kind of thing. Also, it doesn't have to hold up to human law, it has to hold up to genie law, so good luck making your human language sufficiently secure. 

     

    Yeah.  And then the more clauses you add, the more opportunities for weird bootstrapping interactions between clauses.  As I was listening to Harvey's description of his intentions with the proposed wish to submit to the jinn, my first thought was: this is exactly why people so hate and so need us lawyers.

     

    My SECOND thought was: oh, no, this is actually the origin story for Lord Hoistmas.  Chris remarked upon this later, but there's only one way "ironic twist-proof" ends. I'm way ahead of you, mythical creature renowned for always being way ahead of people who think they're way ahead of you, said the man as the entire world turned into an enormous petard.

     

    Then I heard the language for the wish, and it's pretty tightly worded, all things considered, but sadly, humanity is still fucked in fifteen words or less.

     

    Paraphrasing, the sentence begins: "I wish that every human being - the definition of which will still include everyone affected by this wish - alive today and born at any point in the future..."

     

    It's at this point that the jinn's eyes light up like a god-damned christmas tree and the sky starts swirling purple and red.

     

    The definition of "human being" is left up to interpretation*, EXCEPT that it will include "everyone" affected by the wish.  OK, says the jinn, and implements the wish to apply to robots. 

     

    I leave it as an exercise for the reader what then unfolds as a result of these "human beings" who can now summon from nothing any "foodstuffs" required to "maintain optimal functioning" of their "systems," so long as those functions don't harm THEMSELVES.

     

    *To be fair to Harvey, I think that it's clear by the reference to "human being" and "alive" and "born" and "human body" what the intent was: human being means homo sapiens, and the language about "still include" means that beings that seem to be homo sapiens, but now have this new magical ability, don't result in some kind of paradox because "human beings" can't do magic.  But just because it's clear that's what he meant doesn't mean that's what it says, and this is a jinn.  If the jinn says hey, OK, man, that's a weird definition, but I guess "human beings" must include birds, because I just made this wish apply to birds! there's not going to be any recourse for the rest of us.  And then say hello to what we've wrought: a nightmare scape of rotting flesh and worms, capriciously willed into being ten pounds times however many birds there are at a time.


  6. I think it's an amazing essay.  I found it very very very difficult to get through, as an educated white guy who has probably recommended 20 different DFW things to 20 different women over the course of my life.  I'm sitting here stewing in a visceral kind of horrified revulsion about it.  I... don't think I'm going to recommend DFW anymore until I've thought about it, like, a lot.