March 23, 2008

Link Madness, Part 1: the Hyperbolic

by Andrew Stern · 3:39 pm

I occasionally make posts composed of link dumps, to help GTxA readers find articles they might enjoy and may have missed. This time I need to split the dump into two parts, the first part being a set of articles ranging from the slightly over-the-top to the truly hyperbolic. I will gently attempt to challenge, refute or debunk each as I go. :-)

Let’s come down to earth in Part 2 of my link dump…

7 Responses to “Link Madness, Part 1: the Hyperbolic”


  1. Patrick Says:

    You may be interested in this exchange. My argument is that, whatever the limitations, AGI plus smoke and mirrors is more interesting than mere AI plus smoke and mirrors; the problem comes when one assumes that AGI means more than it does in that instance.

    Also, when you say “hand waving”, are you trying to play down speculation at the expense of near-term pragmatic discussion? Or are you emotionally perturbed by the idea that reality-as-you-know-it could cease to exist in the very near future?

  2. andrew Says:

    By handwaving, I mean it’s quite a leap from an agent observing player behavior in MMOs — for example see my mention of Neil Madden’s work here — to a “dramatic acceleration” that results in highly intelligent agents. How does one get from A to B? There’s a lot of ongoing research in the field of machine learning; it’s a really hard problem.

    When it comes to worrying about reality-as-I-know-it ceasing to exist due to advances in AI in my lifetime — I sleep like a baby.

    Not that I don’t expect reality to be different in significant ways, say, 20 years from now. Twenty years ago (when I was a freshman in college), I couldn’t hold a small wireless device in my hand that would let me get almost any tidbit of information / human knowledge I wanted at any time.

  3. yolanpa Says:

    I don’t understand what this AGI is supposed to be. I think of ‘generalization’ as a process, and ‘generalized’ as a comparative adjective. So is generalized AI pretty much just the integration of multiple mere-AIs? Does it need to use an algorithm/process which is more singular/cohesive/elegant than simply a sum of mere-AIs?

    Phrased another way, if I’m working on an AGI which is incomplete (or otherwise limited) and it can currently do only a subset of the things I want it to do, how is that different from somebody else working on mere-AI that can only do that same set of things?

    The Wikipedia page on Strong AI (which is what Google returns for AGI) seems to imply that AGI/Strong AI is an anthropomorphic specialization of the more generalized mere-AI.

  4. Patrick Says:

    Andrew: “By handwaving, I mean it’s quite a leap from an agent observing player behavior in MMOs — for example see my mention of Neil Madden’s work here — to a “dramatic acceleration” that results in highly intelligent agents. How does one get from A to B? There’s a lot of ongoing research in the field of machine learning; it’s a really hard problem.”

    Is it a hard problem or a wicked problem? I suspect it’s a wicked problem with various hard problems embedded in it, and the reason Ben is being so assertive with his proverbial hand is that he suspects he’s got a wicked solution that handles at least one hard problem sufficiently. I must admit, when it comes to the Singularity, I want to believe, though recognizing that desire keeps me from falling into the trap of belief. If I can wring some good interactivity out of Novamente, or similar architectures, then that’s wicked.

    I suspect there will be, for at least the next two or three years, relative strengths to narrow and general AI applied to games — like employing marionettes versus a Deus ex Machina. The latter is less focused, less goal-oriented; the former can be wielded with more subtle craft. There’s definitely an Uncanny Valley situation that limits AGI-powered agents from inhabiting specific dramatic roles until a major qualitative leap has been made.

    yolanpa:

    “I don’t understand what this AGI is supposed to be. I think of ‘generalization’ as a process, and ‘generalized’ as a comparative adjective. So is generalized AI pretty much just the integration of multiple mere-AI? Does it need to use an algorithm/process which is more singular/cohesive/elegant then simply a sum of mere-AI?”

    As I understand it, narrow AI is designed around a specific problem or application, while a general AI is an architecture that can be applied to whatever problem or activity its underlying dynamics provide suitable results for. I think it does need to be more than a Frankenstein. Novamente uses the MOSES algorithm along with probabilistic learning networks and a variety of other components, so you could say there is a degree of stitchedness under the hood — in the sense that bones and sinew and a bloodstream stitch together limbs and organs. I’m really not an expert on AI architecture, though; you should visit their site and look at some of the lit they have. If you want the real nuts and bolts, the architecture is documented in some books Dr. Goertzel has written.

    “Phrased another way, if I’m working on an AGI which is incomplete (or otherwise limited) and it can only currently do a subset of things I want it do, how is that different from somebody else working on mere-AI that can only do that same set of things?”

    The AGI aggregates pattern data that can, at least in Novamente, be converted into new processes, so the function of the system is outside the scope of its applications.

  5. andrew Says:

    It’s wickedly hard, and not hardly wicked.

  6. Game AI Roundup Week #13 2008: 6 Stories, 1 Quote, 1 Video, 1 Seminar — AiGameDev.com Says:

    [...] Grand Text Auto – Link Madness [...]

  7. andrew Says:

    Just two days ago, New Scientist reported on Goertzel’s research.

    Bruce Blumberg, anyone? Geez, even research only 5 years old seems to go unmentioned.