Nachimir

AI and the Uncanny Valley.


Gamasutra are lovely, and published another article I wrote.

Really short version: As AI becomes more sophisticated, before it ever acts convincingly and consistently like a human being, it's going to creep us the fuck out by behaving in an almost human way.


I'm both looking forward to, and... the opposite of that, this happening. For example, in a game like GTA, who would dare to do anything if people reacted completely human...ly? Obviously we'll reach the point where they look completely human long before they act like it, but won't almost all gameplay concerning human (or animal) interaction become impossible for most people?

I wouldn't dare play Test Drive Unlimited if I risked causing a horrible accident with "real" characters dying and crying and acting all human. It would be a totally horrible sight to behold! And FPS games? Those would actually start becoming Jack Thompson's killing sims.

Or are we going to "adapt" to this? Will we always be able to distance ourselves from even the most realistic VR-projected experience of murdering people, just because we know it's not real?

If any of you have played that Sumotori Dreams game, you'll know what's got me thinking: the self-balancing algorithm in it is pretty amazing. In a couple of years, we'll probably have lots more stuff like that: characters that are essentially "rag-dolls" all the time, who'll trip, fall down stairs, break limbs, try to get up... I don't know. Aren't we approaching a more and more horrible and unbearable gaming experience? Will all future games have to adopt some wild and crazy art style to make them feel unreal?
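Self-balancing of the Sumotori Dreams kind can be caricatured as a feedback controller fighting to keep a character's lean angle near vertical, handing over to plain rag-doll physics once it loses. Here's a deliberately tiny, hypothetical 1D sketch; none of this is the game's actual algorithm, and all the constants are invented:

```python
# Hypothetical sketch: a 1D "stay upright" PD controller.
# Gravity pulls the character's lean angle over (inverted-pendulum
# approximation); a corrective torque pushes it back toward vertical.

def balance_step(angle, velocity, dt=0.016, kp=40.0, kd=8.0, gravity=9.8):
    """Advance one physics step and return the new (angle, velocity)."""
    torque = -kp * angle - kd * velocity   # PD correction toward upright (0.0)
    accel = gravity * angle + torque       # gravity destabilises, torque fights back
    velocity += accel * dt
    angle += velocity * dt
    return angle, velocity

angle, velocity = 0.3, 0.0                 # shoved 0.3 radians off vertical
for _ in range(200):                       # ~3.2 seconds of simulation
    angle, velocity = balance_step(angle, velocity)

print(abs(angle) < 0.05)                   # recovered close to upright
```

Real systems drive dozens of joint torques from full-body state rather than a single angle, but the core loop, measuring the error from upright and pushing back against it, is the same idea.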

Also, the text formatting stuff is broken. I've said it before!


The discussion on game characters displaying signs of autism reminded me of a recurring theme in Douglas Coupland's work (Microserfs, jPod). He seems to believe that most programmers suffer from autism to some degree. I'm not too sure I believe that, but if true it may be a case of the art reflecting the artist.

You do make a good case for the application of the uncanny valley phenomenon. The closer something gets to appearing human, the more the little deviations freak me out. I never got a chance to try out Facade, due to a hard-coded minimum CPU requirement that literally won't let me play.


You can see the whole spectacle of a burning Elmo here:

http://video.google.com/videoplay?docid=4211335897073017321

I wouldn't dare play Test Drive Unlimited if I risked causing a horrible accident with "real" characters dying and crying and acting all human.

I think I'd still dare to play it, but it would strongly affect my in-game decisions :)

If any of you have played that Sumotori Dreams game, you'll know what's got me thinking: the self-balancing algorithm in it is pretty amazing. In a couple of years, we'll probably have lots more stuff like that: characters that are essentially "rag-dolls" all the time, who'll trip, fall down stairs, break limbs, try to get up... I don't know. Aren't we approaching a more and more horrible and unbearable gaming experience? Will all future games have to adopt some wild and crazy art style to make them feel unreal?

The stuff Natural Motion are doing with this right now is absolutely amazing. Everything in that vid is procedurally generated rather than animated by hand. There's also a free edition to play with:

http://www.naturalmotion.com/ele.htm

Seeing a demo of NM's software is one of the things that inspired the article: in terms of the animation they can now do, behaviour is convincingly there for *action*, but what happens when the action stops and the characters have to start talking? They fall flat. The game gets put back on rails.

I asked them if they were thinking much about body language and affect. They didn't say much but were friendly ;)

If you look at Heavy Rain, it shows that body language is the kind of thing that can be hand animated (brilliantly in that case), but the next step will be to define it with some form of syntax so that it can occur automatically.
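At its very crudest, a body-language "syntax" of that sort might be a table mapping emotional states to posture parameters that an animation blender then consumes. All the names and numbers below are invented purely for illustration:

```python
# Hypothetical sketch: emotional state -> posture parameters for an
# animation system to blend. Every identifier here is made up.

POSTURE_RULES = {
    "anxious":   {"shoulder_raise": 0.7, "gaze_aversion": 0.8, "gesture_rate": 0.2},
    "confident": {"shoulder_raise": 0.0, "gaze_aversion": 0.1, "gesture_rate": 0.6},
    "sad":       {"shoulder_raise": 0.3, "gaze_aversion": 0.6, "gesture_rate": 0.1},
}

def blend_posture(states):
    """Weighted blend of posture parameters from {emotion: weight}."""
    total = sum(states.values()) or 1.0
    out = {}
    for emotion, weight in states.items():
        for param, value in POSTURE_RULES[emotion].items():
            out[param] = out.get(param, 0.0) + value * weight / total
    return out

# A character who is mostly anxious with a little sadness:
print(blend_posture({"anxious": 0.8, "sad": 0.2}))
```

The point is the division of labour: animators author the exemplar postures by hand, and the runtime system composes them automatically from the character's current state.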

The discussion on game characters displaying signs of autism reminded me of a recurring theme in Douglas Coupland's work (Microserfs, jPod). He seems to believe that most programmers suffer from autism to some degree. I'm not too sure I believe that, but if true it may be a case of the art reflecting the artist.

While it's true that slightly geekish and technical people tend to show autism spectrum traits, few of them are so severe as to actually receive a psychiatric diagnosis that places them on the spectrum. For instance, my brother and I have a bunch of strong Asperger's traits, but not so severe that we'd ever be diagnosed with it.

The thought of art reflecting the artist is an interesting one, but I don't think it's the case here. It's more what the art is capable of: AI right now is spectacularly basic, but gets a tiny bit better during each hardware cycle (There's also a massive gulf between academic AI and game AI, with the former trying to simulate the way cognition works, and the latter just trying to put on a good show - mechanics versus aesthetics if you will). Also, many AI programmers I've met are perfectly charming people.

The autistic behaviour of AI isn't just an artistic cul-de-sac game developers accidentally turned down; I don't think there's any elegant hack we missed for making realistic behaviour out of the ingredients we already have. We just don't know how this thing is meant to work, and it's being painstakingly built one chunk at a time. For game and academic AI both, we don't have the syntax of human behaviour and cognition pegged down yet.


Isn't the next Indy game supposed to have stuff like this, or is that just more advanced animation blending?


No, you're right, it has stuff very much like this; they call it their Euphoria AI engine. I saw a demo of it at GDC and it looks really impressive, way better than floppy, nonsensical rag-doll animations. The NPCs clearly want to remain upright, and they will struggle to do so.

No, you're right, it has stuff very much like this; they call it their Euphoria AI engine. I saw a demo of it at GDC and it looks really impressive, way better than floppy, nonsensical rag-doll animations. The NPCs clearly want to remain upright, and they will struggle to do so.

That's totally awesome. Is it always on, though, or is it a triggered "unbalanced state" or something? Or maybe you aren't actually making the game.

No, you're right, it has stuff very much like this; they call it their Euphoria AI engine. I saw a demo of it at GDC and it looks really impressive, way better than floppy, nonsensical rag-doll animations. The NPCs clearly want to remain upright, and they will struggle to do so.

The Euphoria Physics stuff is Natural Motion's work. It's improved a lot since that Indiana Jones demo.

Rockstar have also signed up with them.

Regarding making the game: NM are keen to stress that animators are never written out of the loop. Their tools aren't about automating *everything*, just using automation to significantly increase the power and output of animators.

The same goes for all procedurally generated content. Procedurally generated cities will have to be tweaked by artists and designers. Procedural AI will trump the basic tree structures of AI responses explicitly defined by designers.

To put it another way: in any job, the boring, left-brained parts are syntactical and can all, eventually, be automated. The creative, right-brained parts are semantic and consistently require a human designer.
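For contrast, the basic tree structures of AI responses explicitly defined by designers amount to something like this hypothetical table of hand-authored reactions, where anything the designer didn't anticipate falls through to a default:

```python
# Hypothetical sketch of a hand-authored response tree: every reaction
# a designer explicitly wrote down, nothing emergent.

RESPONSE_TREE = {
    "player_draws_weapon": {
        "guard":    "draw_own_weapon",
        "civilian": "flee",
    },
    "player_bumps_npc": {
        "guard":    "warn_player",
        "civilian": "grumble",
    },
}

def react(event, npc_type):
    # Anything not authored falls through to a canned default.
    return RESPONSE_TREE.get(event, {}).get(npc_type, "idle")

print(react("player_draws_weapon", "civilian"))  # flee
print(react("player_sings", "guard"))            # idle: no authored branch
```

The brittleness is visible immediately: every plausible event-by-NPC combination has to be enumerated by hand, and everything outside the table degrades to "idle", which is exactly the on-rails feeling described above.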


I really liked reading that article; I find the whole concept of the uncanny valley and its theoretical basis really interesting. I just want to clarify the point you made below:

A lot of what we intuitively know about people remains immeasurable because of limitations on our technology and knowledge. For instance, the positive and negative valence of emotional states in others is obvious to most people through facial expression, voice intonation, posture, and so forth, yet none of these constitute an impossible-to-fake, objectively reliable measure, and magnetic resonance imaging has not yet reached a fine enough resolution to allow sufficient neurological observation.

I totally agree that solving the uncanny valley is going to need recourse to psychology. I work in experimental psychology and am currently researching the area about which you write. I work with computer scientists, AI guys and developers, and if psychology didn't inform, and wasn't necessary for, furthering video games, I wouldn't have a job.

Moreover, experimental psychology has looked into emotional valency a great deal, concluding that the most important thing in evolutionary terms is to differentiate the emotion; from there on in, categorical perception determines the valency. See Beatrice de Gelder for more on this (she's great). This type of research, like everything in experimental psychology, is empirical in nature, using functional MRIs, skin conductance, reaction times etc., validated over years of testing, eradicating anomalous findings and statistically controlling for error stringently (if the research is any good, that is).

The uncanny valley, when it is reached, will be an academic issue, as all things are theoretical before they can be applied to VR. I don't think it's first and foremost a developer's issue. I for one definitely believe that it can be crossed, and I can't wait, because once developers achieve photorealism they can go back to focusing on style more. I don't think it's quite been reached yet, and I think that the revulsion experienced at something like Heavy Rain is more an approximation of it than the real thing.

I totally agree that solving the uncanny valley is going to need recourse to psychology. I work in experimental psychology and am currently researching the area about which you write. I work with computer scientists, AI guys and developers, and if psychology didn't inform, and wasn't necessary for, furthering video games, I wouldn't have a job.

Thanks. I'll look Beatrice de Gelder up.

A couple of quick questions:

What's the best resolution of MRI scanning at the moment? Last time I read anything about it, it was 4 seconds.

Has anyone actually found a reliable way to measure valence yet? I understand how crucial the concept is to emotion research, but haven't really seen any measurement go beyond arousal (which I understand is pretty easy through heart rate, GSR, etc.)

The uncanny valley, when it is reached, will be an academic issue, as all things are theoretical before they can be applied to VR. I don't think it's first and foremost a developer's issue. I for one definitely believe that it can be crossed, and I can't wait, because once developers achieve photorealism they can go back to focusing on style more. I don't think it's quite been reached yet, and I think that the revulsion experienced at something like Heavy Rain is more an approximation of it than the real thing.

Totally agreed. Photorealism and aesthetics are the only other things I've written about for Gama ;) Looking at the uncanny valley in terms of visuals was interesting, but I always had a niggling doubt that it applied to far more, and the idea for the article I linked above hit me in February.

That we're only just passing over the lip of the valley and things are going to get worse is a really interesting idea... :grin:


Yeah, it's cool to think about how much creepier it's going to get! fMRI is slowly improving; our scanner breaks down about every couple of weeks, and it's one of the best available! It's interesting because it's really more applicable to games than anything else: one of the major problems with fMRI is getting participants to stay still for an hour while being subjected to 120 dB of droning, and it's been shown that participants playing games in the scanner don't even notice the time go by (sounds about right).

The valence issue in emotion is something that Beatrice de Gelder would focus on to a degree, but it's not that hot a topic right now. (By the way, de Gelder is a rare kind of academic who has put all of her publications as PDFs on her site, beatricedegelder.com.) It's just accepted that there is a continuum of facial expressions, and the exemplars of each are the most recognisable across the board, and lie in the middle of each expression's range in the continuum, if you get me. That's where categorical perception comes in: the idea that they are discrete categories.
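That continuum-plus-exemplars idea can be caricatured as nearest-exemplar classification: expression values vary smoothly, but perception snaps each one to the closest exemplar, yielding discrete categories. A toy sketch, not a model anyone actually uses in research:

```python
# Toy sketch of categorical perception: expressions lie on a continuum
# (here 0.0 = neutral .. 1.0 = full smile), but perception snaps each
# point to the nearest exemplar, producing discrete categories.

EXEMPLARS = {"neutral": 0.0, "mild_smile": 0.5, "full_smile": 1.0}

def perceive(expression_value):
    """Return the exemplar category closest to a point on the continuum."""
    return min(EXEMPLARS, key=lambda name: abs(EXEMPLARS[name] - expression_value))

# A smooth morph from neutral to smiling is perceived in discrete jumps:
print([perceive(v / 10) for v in range(11)])
```

The smooth morph in, and the stepped category labels out, is the whole phenomenon in miniature: perception is doing the quantising, not the stimulus.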

It's a really cool area to write about. I hope you don't think I was criticising you! Just really interested!


It's not just about showing the correct emotions though, it's also about thinking like a sane person, which can be hard to get right.


There's a pretty good post here that brings up an interesting point: Using behaviour to define intelligence is probably incorrect.

I think that cuts into the difference between game AI and academic AI quite neatly: whereas game AI tries to put on a good act through its behaviour, academic AI tries to get right down and simulate the mechanics of a working mind.

It's a really cool area to write about. I hope you don't think I was criticising you! Just really interested!

No worries, and even if you did criticise it'd probably do me good.

I get what you mean by categorical perception, and I do really like Ekman's work on basic emotions, but I think it's a crucial point that expressions of emotion are not the emotions themselves.

"Emotion" in itself is a poorly defined term being grabbed at by many schools of thought right now, and the relationship of facial expressions to internal states is a very complex one. States elicit expressions, but manifesting expressions can also induce states to a certain extent. Expressions of emotion can be suppressed, and also faked with deception, or electrodes in the right spots (Duchenne de Boulogne once did some astonishing work with that).

I spent quite a while looking at motivation theory and worked out that what I knew as "emotion" seemed to be missing from most of it (for instance Maslow's work deals in extrinsic motivators but posits little to nothing about the internal states of a human being). Looking at emotions from such a functional point of view and trying to build flexibility into motivation, I plumped for a dimensional model mixed with attachment theory, which is why I'm so interested in valence: It's something I feel all the time, it makes for a compelling (pet) theory, but there's shit all in the way of objective measures ;)

However, I think that as a syntax of motivation it would be an ideal behavioural AI project. I have to learn a programming language. I've sketched out the logic for it and can already see a little of the maths...
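A dimensional model of that general kind, valence and arousal as two axes with behaviour chosen by where the agent currently sits, might start out as simply as the toy illustration below. This is purely an invented sketch of the general idea, not the actual logic referred to above:

```python
# Toy illustration of a dimensional (valence/arousal) model driving
# behaviour selection. All thresholds and labels are invented.

def choose_behaviour(valence, arousal):
    """valence, arousal in [-1, 1]; pick a broad behavioural mode."""
    if arousal > 0.3:
        return "approach" if valence > 0 else "fight_or_flee"
    if arousal < -0.3:
        return "rest" if valence > 0 else "withdraw"
    return "idle"

# An event nudges the agent's state, and behaviour follows the new position:
valence, arousal = 0.2, 0.0                        # mildly content, calm
valence, arousal = valence - 0.6, arousal + 0.8    # sudden threat appears
print(choose_behaviour(valence, arousal))          # fight_or_flee
```

The appeal of the dimensional approach for AI is exactly what's described above: the state space is continuous and compact, events just move a point around in it, and behaviour falls out of position rather than out of hand-authored branches for every situation.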

one of the major problems with fMRI is getting participants to stay still for an hour while being subjected to 120 dB of droning, and it's been shown that participants playing games in the scanner don't even notice the time go by

That's really interesting. What kind of games do you get them to play from the inside of a scanner?


Oh, sorry! I was waffling away about perception of emotion, because that's what I'm looking at right now. Scary; I didn't realise how narrow-focused I was becoming. Research is weird like that. As regards programming, I'm a complete noob (refusing to use 0's). I've got to learn C++ and Visual Basic and all that jazz, and I am completely lost; my brain won't work that way! The experience of emotion is something I have no idea about, apart from some undergrad courses. It is interesting, though, and makes a lot of sense tying that into the autism spectrum.

On an entirely unrelated subject, I just got stung by a scorpion in Animal Crossing. WTF?!


That's the thing: "emotion" is more like a stack of processes we're in the course of defining, all the way from functional viewpoints to affect expression, perception, socialisation, social construction... argh. It's insanely complex, has innumerable overlaps, and is pretty much all relevant to games. It was a stunning moment when I realised no one has actually come up with a good and widely accepted definition of what an emotion is yet.

Apart from user engagement, emotionally expressive characters seem to have been the entire focus and understanding of emotion in games recently. While that's obviously extremely applicable to AI and uncanny valley issues, I think there's way more in terms of research and viewpoints for game devs to dig at.

the experience of emotion is something i have no idea about

I interrupt this fascinating intellectual discussion to point out a probably not actually particularly humorous construction.


Fair enough, you pedantic little man. I do not have a research history within the field of the study of the subjective experience of emotion. Better?

Meanwhile, regarding the definition of emotion: social psychology has one, neuropsychology has one, personality psychology, etc. Within my field we don't look at anything except the physiological nature of emotion, the perception of emotion, and that sort of observable thing. To my mind the subjective, introspective stuff is part psychology, part philosophy. A social constructivist might say that, regarding AI, the subjective stuff is unnecessary, because if you take it into account you're getting yourself into a Chinese room situation, so a good pragmatic definition might lean more on perception and physiology. Although of course someone working in another field will have a different definition.

Lyle did not recompense me for my sting. Hmph.


I've never really been able to take the Chinese room seriously. Searle seems to really overlook that the person in the room is not the only entity in the situation. To me, saying that because the man in the room doesn't understand Chinese there is no understanding occurring is like saying that because one of my neurons doesn't understand English, neither do I. He tries to dodge this by saying that one could "internalize" the system, but that really doesn't add a damn thing to his argument. It gets even worse when he's confronted later with parallel processing and calls it a Chinese gym, with room after room translating together. He later realized that he was just describing neurons and hasn't mentioned the article since. Gotta love it when a guy realizes he's defeated his own argument.

Given the contentiousness of the whole situation and the problem of other minds, I don't think any definition of emotion aside from a physiological one is practical in any respect. Interesting to debate, sure, but not particularly useful to anyone. As such, I'd go with the perception and physiology definition over anything else, just because it involves things it's possible to know and can actually amount to something rather than more argument.

