Yu-Chung recently drew my attention to the following Gamasutra article:
http://www.gamasutra.com/view/feature/1436/uncanny_ai_artificial_.php
Basically, it describes how the Uncanny Valley effect might also apply to social behavior, not only to the visual appearance or animation of a virtual character. David Hayward suggests that we might get better AI if we studied more psychology and psychiatry. I like how he compares the behavior of today’s AI to autism.
But this reminded me of a recent TED Talk I really enjoyed:
It’s Jeff Hawkins talking about recent ideas about how the brain works. The talk is funny and lasts only 20 minutes, but if you don’t have the time, fast-forward to 10:30. There, he points out that the really important attribute of intelligence is not some special kind of distinct, smart behavior but the ability to predict events. He describes how prediction is a major part of what our brain does and how it helps us deal with our world.
When you combine the two, you get this:
Smart AI is not the kind of AI that behaves in some fancy, special way but the kind of AI that can somehow predict and anticipate what will happen next, especially what the player might do. Right now, game designers tend to focus on virtual characters that the player might feel empathy for. Instead, we should do it the other way around: the computer needs to read the player’s feelings. Jenova Chen’s flOw is, in a way, a humble beginning, because it is a game design that takes the skills of the player into consideration. However, I think we could go even beyond that. I imagine a detailed input-analysis routine running in the background, detecting patterns in the player’s input. Some basic stuff first – is the player at the keyboard at all? We already have some games that do that. Then you can help with usability: is the player struggling with some functions of the program? By checking for common strategic mistakes, you can see how well the player understands the rules.
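To make that a bit more concrete, here is a minimal sketch (in Python) of what I mean by such a background routine. Everything in it – the event names, the thresholds, the “struggle” heuristic – is made up for illustration; a real game would tune all of this against actual play data.

```python
import time
from collections import deque

class InputAnalyzer:
    """Minimal sketch of a background input-analysis routine.

    All thresholds and action names here are invented for illustration;
    a real game would calibrate them against real telemetry.
    """

    IDLE_THRESHOLD = 30.0     # seconds without input -> player probably away
    STRUGGLE_WINDOW = 60.0    # only analyze the last minute of events
    STRUGGLE_RETRIES = 5      # same action repeated this often -> confusion?

    def __init__(self):
        self.last_input_time = time.monotonic()
        self.recent_events = deque()  # (timestamp, action_name) pairs

    def record(self, action_name):
        """Call this from the game's input handler for every player action."""
        now = time.monotonic()
        self.last_input_time = now
        self.recent_events.append((now, action_name))
        # Drop events that have fallen out of the analysis window.
        while self.recent_events and now - self.recent_events[0][0] > self.STRUGGLE_WINDOW:
            self.recent_events.popleft()

    def player_is_idle(self):
        """The most basic question: is the player at the keyboard at all?"""
        return time.monotonic() - self.last_input_time > self.IDLE_THRESHOLD

    def struggling_with(self):
        """Crude usability heuristic: an action repeated many times in a
        short window may mean the player is fighting the interface."""
        counts = {}
        for _, action in self.recent_events:
            counts[action] = counts.get(action, 0) + 1
        return [a for a, n in counts.items() if n >= self.STRUGGLE_RETRIES]

# Hypothetical usage inside a game loop:
analyzer = InputAnalyzer()
analyzer.record("open_inventory")
if analyzer.player_is_idle():
    pass  # e.g. pause the game, or have an NPC comment on the silence
for action in analyzer.struggling_with():
    pass  # e.g. surface a contextual hint for that action
```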
But then you could go deeper and try to see whether you can predict what goal the player is working towards. Then, of course, you could somehow react to it. The interaction with the AI could finally happen on a mutual, deeper level of understanding – on the subtext of the choices. Here, I’m running out of ideas. I wonder: how could you test this kind of idea? What kind of game would be good for testing such a system? It should be as simple as possible. Does anybody have ideas?
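As a toy version of that goal-prediction step, one could match the player’s recent actions against a handful of hand-written goal models and pick the best fit. Again, the goals, action names, and weights below are all invented; a serious system would learn these models from play traces instead.

```python
from collections import Counter

# Hypothetical goal models: which actions each goal tends to produce.
# In a real game these would be learned from play traces, not hand-written.
GOAL_MODELS = {
    "build_army":  Counter({"train_unit": 5, "gather_gold": 3}),
    "expand_base": Counter({"build_structure": 5, "gather_wood": 3}),
    "rush_attack": Counter({"train_unit": 3, "move_to_enemy": 5}),
}

def predict_goal(observed_actions):
    """Score each goal by its overlap with the observed actions.

    A deliberately naive scheme: count how often the player's recent
    actions appear in each goal's model, return the best match plus
    all scores so the caller can judge how confident the guess is.
    """
    observed = Counter(observed_actions)
    scores = {}
    for goal, model in GOAL_MODELS.items():
        # Overlap = sum over actions of min(observed count, model weight).
        scores[goal] = sum(min(observed[a], w) for a, w in model.items())
    best = max(scores, key=scores.get)
    return best, scores

goal, scores = predict_goal(["train_unit", "train_unit", "move_to_enemy"])
print(goal, scores)  # -> rush_attack, with the per-goal scores
```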
Very nice talk. (A shame the TED page doesn’t work properly in Opera.)
After watching it, I can’t help but remember one related thing I skimmed before: unpredictability as another important feature of autonomous agents.
Of course, this unpredictability in behaviour is something entirely different from the ability to predict the future. I just wanted to throw it into the discussion.
Unpredictability as another important feature of autonomous agents?
Refresh my memory, where does that come from again?
Where did you read about it?
Somewhere in the KHM library, I think. The book was about, well, autonomous agents – more about AI in the context of robotics. Not sure.
I didn’t dig too deeply into it, as I was doing research for my Vordiplom at the time, and soon dismissed it as “irrelevant”.