February 27, 2008

by Noah Wardrip-Fruin, 6:10 am

The finite-state machine is much like the quest flag or the dialogue tree. Each is conceptually simple, easy to implement, places low demand on system resources, and — over a certain level of complexity — becomes difficult to author and prone to breakdown. A quick look at the structure of FSMs shows the reasons for this.1

An FSM is composed of states and rules for transitioning between states. For example, an FSM could describe how to handle a telephone. In the initial state, the phone is sitting on the table. When the phone rings, the FSM rules dictate a transition to picking the phone up and saying “Hello.” If the caller asks for the character who answered, the rules could say to transition to a conversation state. If the caller asks for the character’s sister, the transition could be to calling the sister’s name aloud. When the conversation is over (if the call is for the character who answered the phone) or when the sister says “I’m coming” (if the call is for her) the phone goes back on the table.
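The phone example can be sketched as a transition table plus a lookup. The state and event names below are illustrative inventions for this example, not drawn from any implemented system:

```python
# A minimal finite-state machine for the telephone example.
# States and events are invented names for illustration.

TRANSITIONS = {
    ("on_table", "ring"): "say_hello",
    ("say_hello", "call_for_me"): "conversation",
    ("say_hello", "call_for_sister"): "call_sister_name",
    ("conversation", "call_over"): "on_table",
    ("call_sister_name", "sister_says_coming"): "on_table",
}

def step(state, event):
    """Return the next state, or stay in the current state if no rule matches."""
    return TRANSITIONS.get((state, event), state)

state = "on_table"
for event in ["ring", "call_for_sister", "sister_says_coming"]:
    state = step(state, event)
print(state)  # back to "on_table"
```

Every new feature (caller ID, call waiting, the fire check) means more entries in this table, and every entry can interact with every other — which is exactly the authoring problem described below.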

As promised, all of this is quite simple. But what happens when the situation becomes more complex? We might add caller ID, after which the character needs to check who the call is from before answering (and sometimes not answer). We might add call waiting, transitioning to the “Hello” state from the conversation state (rather than from the phone on the table state) and then running a second conversation that, when it ends, returns to the previous conversation state (rather than putting the phone down on the table). We might add a check to see if the building is on fire, in which case the character asks for help from anyone who calls (or perhaps only from local callers). We might add a check to see if the character is deep in conversation with another physically-present character, in which case the ringing phone leads them to say, “Let ’em leave a message” (rather than transitioning to the “Hello” state or the caller ID check). Now imagine all of the character’s behavior organized into a giant, interconnected mass — covering phone conversations, in-person conversations, eating and drinking, sleeping, working, playing games, reading, visiting museums, getting into traffic accidents, all the possible transitions between them, and all the possible combinations (playing a game while drinking, getting into an accident while talking). While authoring such an FSM would be possible, in practice the complexity would be a huge challenge to manage, and any change would threaten to produce unexpected results.

For this reason, game behavior authors have sought to manage FSM complexity in different ways. Jeff Orkin, in his work on the game No One Lives Forever 2 (Hubbard et al, 2002), gave characters sets of goals that compete for activation (Orkin, 2006). Each goal has its own FSM, of a much more manageable size than an FSM for controlling the character’s entire behavior, and the rules for transitioning between these goal-specific FSMs are decoupled from the FSMs themselves (instead handled at the level that determines the current highest-priority goal). Orkin describes the results for non-player characters (referring to them, in game industry parlance, as “A.I.”).

[E]ach goal contained an embedded FSM. There was no way to separate the goal from the plan used to satisfy that goal. . . .
Characters in NOLF2 were surrounded by objects in the environment that they could interact with. For example someone could sit down at a desk and do some work. The problem was that only the Work goal knew that the A.I. was in a sitting posture, interacting with the desk. When we shot the A.I., we wanted him to slump naturally over the desk. Instead, he would finish his work, stand up, push in his chair, and then fall to the floor. This was because there was no information sharing between goals, so each goal had to exit cleanly, and get the A.I. back into some default state where he could cleanly enter the next goal.

Of course, adding more states and transitions to the Work FSM could have addressed this. But pursuing such a path in general would have raised the same complexity issues that motivated the decision to compartmentalize the NOLF2 FSM for each goal. The problem lies in the compartmentalization strategy itself as a method for addressing the complexity of character behavior.

Ken Perlin, who pioneered computer graphics based on procedural textures and animations, describes the consequences of a similar problem in a rather different sort of game — The Sims (Wright et al, 2000):

Playing The Sims is lots of fun, but one thing conspicuously lacking from the experience is any compelling feeling that the characters are real. Much of this lack comes from The Sims’s reliance on sequences of linear animation to convey the behavior of its characters. For example, if the player indicates to a Sims character that the character should feed her baby, then the character will run a canned animation to walk over to the baby’s bassinet, pick up the baby, and make feeding movements. If the player then tells her to play with the baby, she will put the baby down, return to a previous position, then begin the animation to approach the bassinet, again pick up the baby, and start to play. One result of this mechanical behavior is that there is no real possibility of willing suspension of disbelief on the part of the player as to the reality of the character. (2004)



In The Sims the problem comes from something other than the logic of FSMs. Rather than animations driven by FSMs that are segregated according to character goals, in The Sims actions and their animations are compartmentalized via smart objects (e.g., a “shower object” contains information about its impact on the Sim and how it is used, including a pointer to the animation involved). This is another approach to managing complexity using a simple structure with limited points of interconnection — one which I will discuss further in a later chapter. This form of compartmentalization made it possible to manage the complexity created by the massive number of expansion packs for The Sims (each of which introduced many new objects) but also, as Perlin observed, resulted in breakdowns similar to those seen in NOLF2.

Avoiding these compartmentalization-driven breakdowns requires an approach to managing complexity that doesn’t isolate each action from the rest of the world. As it happens, I have already outlined such an approach: Tale-Spin’s. Each planbox in Tale-Spin is able to access a shared memory space — and a character only attempts to alter the world to suit planbox preconditions if those conditions aren’t already met. So, for example, a Tale-Spin planbox for playing with a baby might have a precondition of holding the baby. If the Tale-Spin character is already feeding the baby, rather than returning to a neutral state (which involves putting down the very same baby), the planbox would check memory, see the precondition was already met, and move on to the next step.
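A minimal sketch of this precondition check, with a hypothetical shared memory and planbox (Tale-Spin itself was a Lisp-family system; the names here are invented for illustration):

```python
# Tale-Spin-style precondition checking against a shared memory space.
# The memory contents and planbox names are hypothetical.

memory = {"holding(baby)"}  # shared world state, visible to every planbox
                            # (seeded as if the character is already feeding the baby)

def pick_up_baby():
    memory.add("holding(baby)")

def play_with_baby():
    # Precondition: must be holding the baby. Only alter the world if needed.
    if "holding(baby)" not in memory:
        pick_up_baby()
    return "playing with baby"

print(play_with_baby())  # precondition already met: no redundant put-down/pick-up
```

Because the precondition is checked against shared memory rather than an embedded per-goal state machine, the transition from feeding to playing skips the spurious return to a neutral state.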

Tale-Spin was far from alone in employing planning based on actions, with preconditions checked against a central memory, intended to move the world from the current state to a goal state. This was already a widely practiced AI technique in the 1970s, one that “scruffy” AI took in a particular direction, rather than an approach initiated by Schank and Abelson. An earlier form, developed at the Stanford Research Institute, formed the basis for Orkin’s response to the character behavior and authoring problems of NOLF2.

Strips and F.E.A.R.

In 1968, Douglas Engelbart’s Augmentation Research Center — at the Stanford Research Institute (SRI) — stunned the computing world with a demonstration that combined the first public showings of the mouse, hypermedia, and teleconferencing (Engelbart and English, 1968). The research that followed this “mother of all demos” laid the foundations for the modern computing environment. Meanwhile, Engelbart’s colleagues in SRI’s Artificial Intelligence Center were pursuing research that, while it shaped the agenda for decades of work in AI, isn’t nearly as well known or recognizable in our daily lives. This project involved an aptly-named robot called “Shakey” — a small wheeled cart, carrying a boxy set of control systems, topped by a tower holding range finders, a television camera, and a radio link antenna. An SRI promotional film from that time shows Shakey moving around a brightly-lit space of right-angled walls populated by two shapes of blocks, its tower bouncing from the sudden stops at the end of each action, while Dave Brubeck’s “Take Five” repeats in the background (Hart, Nilsson, and Wilber, 1972).

Shakey’s antenna allowed it to connect with what, for the time, was a large computer. This computer controlled Shakey using a program called Planex. This program, in turn, guided Shakey through the execution of plans developed by another program called Strips (for Stanford Research Institute Problem Solver). Strips carried out laborious planning processes, using stored knowledge about Shakey’s physical environment. Planex used a basic representation of the plan and the reasoning behind it to attempt to guide Shakey to the goal state, addressing problems along the way. These problems included finding doors blocked, blocks moved, and registration mismatches between the actual world and the world model in memory (the possible range of which increased with every movement).2

Strips plans

Rather than something like Schank’s conceptual dependency expressions, the Strips system stored the state of the world using first-order predicate calculus. This is a type of formal logical representation that, among other things, allowed Strips to use theorem proving in its planning operations — for example, to see if an action’s preconditions had already been met. Rather than searching exhaustively for a set of actions that could move the world from the current state to the goal state, Strips instead — following a “means-end analysis” strategy widely used since the seminal General Problem Solver (1957) — worked backwards from a goal state. This backwards planning movement depended on a library of actions, each classified according to its preconditions and effects. Strips identified actions with effects that could move the world to the goal state, the preconditions of those actions were identified as subgoals, other actions were found to achieve the subgoals via their effects, the preconditions for those actions were identified, and so on. As one of the system’s architects, Richard Fikes, puts it:

When STRIPS finds an acceptable plan, it has computed the world model anticipated before and after each action in the plan, has proven that the preconditions of each action in the plan are true in the model anticipated at the time of the action’s execution, and has proven that the task statement is satisfied in the model anticipated after completion of the plan’s execution. (1971)

The basics of this proof, especially the preconditions and effects of each action, were the materials used by Planex to guide the movements of Shakey. But the Strips model of planning has also been adopted by very different systems, in which the roles of Planex and Shakey disappear or are transformed significantly. An example is Orkin’s system for character behavior that took the next step after NOLF2: the AI system for F.E.A.R.
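The backward-chaining movement described above can be sketched as follows. This toy omits the first-order predicate calculus and theorem proving of the real system, representing the world as a simple set of true propositions; the operator names and predicates are invented:

```python
# A toy sketch of Strips-style backward chaining over an action library.
# Each operator has preconditions ("pre") and effects ("eff").

OPERATORS = {
    "push_box_to_door": {"pre": {"at(robot, box)"}, "eff": {"at(box, door)"}},
    "goto_box":         {"pre": set(),              "eff": {"at(robot, box)"}},
}

def plan(goal, state):
    """Work backwards from the goal: each unmet precondition becomes a subgoal."""
    if goal in state:
        return []  # already true in the current world model
    for name, op in OPERATORS.items():
        if goal in op["eff"]:
            sub_plans = [plan(pre, state) for pre in op["pre"]]
            if any(s is None for s in sub_plans):
                continue  # this operator's preconditions are unachievable
            return [a for s in sub_plans for a in s] + [name]
    return None  # no operator achieves the goal

print(plan("at(box, door)", set()))  # ['goto_box', 'push_box_to_door']
```

Note how the ordering falls out of the recursion: the subgoal’s action is emitted before the action that required it, yielding an executable forward sequence.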

F.E.A.R. plans

F.E.A.R. (Hubbard et al, 2005) is a first-person shooter in which the player is a new member of an elite unit — “First Encounter Assault Recon” — assigned to deal with unusual threats. Like non-player characters in NOLF2, the AI enemies in F.E.A.R. have goals that compete for activation, such as KillEnemy, Dodge, and Goto. But rather than each goal having an embedded FSM, each goal can be reached by sequencing actions. As in Strips, these actions have preconditions and effects, and the effects satisfy goals (e.g., AttackFromCover and AttackMelee both satisfy the KillEnemy goal).

This may seem like a simple change, but it has a powerful impact. Because different actions can be chosen to satisfy goals — and the state of the world (through preconditions) can influence the choice of action — a character at a desk could choose an action to satisfy a Death goal that is appropriate to someone sitting. Similarly, the fact that actions are not compartmentalized makes it easy to author new logical conditions on actions. Orkin describes an example of this:

Late in development of NOLF2, we added the requirement that A.I. would turn on lights whenever entering a dark room. In our old system, this required us to revisit the state machine inside every goal and figure out how to insert this behavior. This was both a headache, and a risky thing to do so late in development. With the F.E.A.R. planning system, adding this behavior would have been much easier, as we could have just added a TurnOnLights action with a LightsOn effect, and added a LightsOn precondition to the Goto action. This would affect every goal that was satisfied by using the Goto action.



Orkin also made certain changes to the Strips model. One involves costs for actions. Rather than simply choosing the shortest chain of actions toward a goal, F.E.A.R. chooses the lowest-cost chain. Making the generic Attack action more costly than the two actions GotoNode and AttackFromCover causes AI enemies that have available cover to prefer the safer option (Orkin, 2005).
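Orkin’s cost modification can be illustrated with invented numbers — the point is that a longer chain wins when its summed cost is lower:

```python
# Invented per-action costs: the generic Attack is risky, so it is priced
# higher than the two-step route through cover.
COSTS = {"Attack": 5.0, "GotoNode": 2.0, "AttackFromCover": 1.0}

def chain_cost(chain):
    """Total cost of a sequence of actions."""
    return sum(COSTS[action] for action in chain)

# Two candidate chains that both satisfy the KillEnemy goal:
candidates = [["Attack"], ["GotoNode", "AttackFromCover"]]
best = min(candidates, key=chain_cost)
print(best)  # ['GotoNode', 'AttackFromCover']: longer, but cheaper
```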



F.E.A.R. also differs from Strips in the presence of squad behaviors. These look for AI characters to fill slots in coordinated plans. If their preconditions are met, the AI characters are given goals that cause them to fulfill their roles — for example, one giving support fire while the others advance to cover that is closer to the player. In addition, and this is where F.E.A.R. shines as an authoring effort, squad dialogue behaviors are used to address the Tale-Spin effect.

Rather than a character behavior system that only communicates to the player through animation (leaving open the possibility that the player will assume the system is as simple as that employed in other games), F.E.A.R.’s enemy squads are in verbal communication with each other in a manner that exposes some of the internal state and workings of the system. When I played F.E.A.R.,3 I knew the AI characters had lost track of my position when one asked, “Do you see him?” and another replied, “Quiet down!” When I’d hidden particularly well I could hear them ordered, first, to search the area — and, eventually, to return to their posts. Dialogue also reveals planning-developed decisions about heading for cover, advancing, and other activities. It even explains non-activity, as Orkin describes:

If an A.I. taking fire fails to reposition, he appears less intelligent. We can use dialogue to explain that he knows he needs to reposition, but is unaware of a better tactical position. The A.I. says “I’ve got nowhere to go!” (2006)

In my experience, Orkin’s adaptation of 35-year-old AI techniques was a great success. By comparison, the AI enemies in even the best games released around the same period (such as Half-Life 2, 2004) feel like part of a shooting gallery — more moving parts of the landscape than characters. That this was accomplished with limited development resources demonstrates the power of even quite dated AI approaches for modern digital media authoring.



But the experience appears to have left Orkin looking for something more. After F.E.A.R. he left industry for graduate study at MIT’s Media Lab. In his 2006 paper on F.E.A.R. he describes the goal of his MIT research group as creating “robots and characters that can use language to communicate the way people do.” And it’s very unlikely that people operate in a manner at all like Strips or F.E.A.R.



Critiques of Strips and F.E.A.R.

In his dissertation’s discussion of Tale-Spin, James Meehan addresses Strips as an important piece of prior work in AI. But he also critiques it severely, arguing that it is a very unlikely model for human cognition. For example, Meehan points to the way that Strips stores information about the world using first-order predicate calculus. This represents all facts about the world as formal logical expressions, with “reasoning” about those facts performed using theorem proving techniques. Meehan and other scruffy researchers argued strongly — with some experimental data for support — that humans don’t remember all facts about the world in the same way or reason about them all using the same approach. Instead, scruffy researchers argued that human intelligence operates in different ways within different problem domains.

Even if some might disagree with Meehan’s critique of Strips, few would argue that F.E.A.R.’s planning uses a model that matches human memory. In F.E.A.R., rather than first-order predicate calculus, plans consult a model of the world composed of a fixed-size array (essentially, a data box divided into a pre-determined number of smaller compartments, all accessible at any time). This makes checking preconditions much faster, but has no psychological credibility.
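The speed of that arrangement is easy to see in a sketch: with a fixed-size array, checking any precondition is a single indexed lookup. The slot names below are invented for illustration, not taken from F.E.A.R.:

```python
# A fixed-size working-memory array: every fact has a pre-determined slot,
# so any precondition check is O(1). Slot names are hypothetical.
from enum import IntEnum

class Fact(IntEnum):
    AT_COVER = 0
    WEAPON_LOADED = 1
    TARGET_DEAD = 2

world = [False] * len(Fact)        # the whole world model, pre-allocated
world[Fact.WEAPON_LOADED] = True

def precondition_met(fact):
    """One array index -- no theorem proving, and no psychological credibility."""
    return world[fact]

print(precondition_met(Fact.WEAPON_LOADED))  # True
```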

Well after Meehan’s work, the 1980s and 90s brought fundamental critiques of the Strips approach from new directions. Phil Agre, in Computation and Human Experience (1997), provides a careful overview of the functions of (and relationship between) Strips and Planex before launching into a discussion of the recurring patterns of technical difficulty that will plague systems of this sort. The problems arise from the fundamental distinction between making plans and following plans — a recapitulation of the mind/body split of Descartes, and another kind of action compartmentalization — which is reified in the distinction between Strips and Planex but in the background of almost all work on planning.

Agre points to 1980s work of David Chapman, demonstrating that “plan construction is technically tractable in simple, deterministic worlds, but any non-trivial form of complexity or uncertainty in the world will require an impractical search” (156). In other words, traditional planning only works in microworlds, such as the Aesop’s fables world of Tale-Spin or the simple geometric spaces of Shakey. Agre proposes that, rather than being based on plans, “activity in worlds of realistic complexity is inherently a matter of improvisation” (156).

How might a Strips-like system be more improvisational? It seems clear that Agre would remove the boundary between Strips and Planex. He also quotes Strips authors Fikes, Hart, and Nilsson as follows:



Many of these problems of plan execution would disappear if our system generated a whole new plan after each execution step. Obviously, such a strategy would be too costly. (Agre, 1997, 151)



But it might not be too costly with modern computational power and a simplified world representation. In fact, improvisation of this sort is close to how planning operates in F.E.A.R. Plans are very short term — with each “goal” essentially being the next compound action (e.g., getting to cover, using the lowest-cost set of one or more actions to do so). If the plan is shown to be flawed in the course of execution (e.g., by the player invalidating the cover), F.E.A.R. doesn’t attempt to recover the plan via Planex-style compensations. Instead, a new plan is generated.4 F.E.A.R. can do this — and do it in real time, even with most of the available computational power claimed by game graphics — precisely because it is a microworld. The approach wouldn’t work in our everyday world of complexity and contingency. But, of course, all games are authored microworlds.
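This replan-on-invalidation loop might be sketched as follows, with a toy world model and invented action names; the real system is of course far more elaborate:

```python
# Replan-on-invalidation: when execution reveals the current plan is flawed,
# discard it and plan again, with no Planex-style plan repair.
# World model and action names are invented for illustration.

def make_plan(world):
    # Plan a route to the first cover point still standing.
    usable = [c for c in world["covers"] if c not in world["destroyed"]]
    return ["goto_" + c for c in usable[:1]]

def run(world, plan, max_steps=10):
    trace = []
    for _ in range(max_steps):
        if not plan:
            break
        cover = plan[0].removeprefix("goto_")
        if cover in world["destroyed"]:   # plan invalidated mid-execution
            plan = make_plan(world)       # throw it away and replan
            continue
        trace.append(plan.pop(0))
    return trace

world = {"covers": ["crate", "pillar"], "destroyed": set()}
plan = make_plan(world)            # plans a route to the crate
world["destroyed"].add("crate")    # the player blows up the crate mid-run
print(run(world, plan))  # ['goto_pillar']: replanned rather than repaired
```

In a microworld this cheap, regenerating the whole plan is affordable; the bounded `max_steps` crudely caps the dithering that Richard Evans describes in the comments below.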

Given this discussion, we might still critique F.E.A.R. For example, its ambitions — improving the combat-centric behavior found in first-person shooters — are rather limited, in the broad spectrum of digital media. For these sorts of characters to play fictional roles beyond combat would require much more complexity. Specifically, these characters would need to be able to engage in multiple behaviors simultaneously and work toward both long-term and short-term goals. I will discuss this further in a later chapter, in connection with the characters of the Oz Project.



Yet the fact remains that F.E.A.R. demonstrates the power of traditional AI approaches as authoring tools. It also shows that many of the most damning critiques of these traditional approaches fall away when they are employed as authoring tools for games on modern computing hardware, rather than as research into systems for intelligent action in the everyday world. The only critique that remains — which, in fact, gains even more traction — is that these approaches lack validity as models of human cognition. As discussed in an earlier chapter, I believe this matters only to the extent that insights into human intelligence are what we hope to gain from our work with computational models. And this was precisely the goal of a major story system that followed Tale-Spin.

Notes

1The basic FSM structure can be implemented in a number of ways, some more efficient than others, in a manner that embodies the same operational logic.

2Something like the Strips/Planex distinction between plan formulation and execution was also found in Tale-Spin. For example, MTrans and PTrans had “action module” versions (Do-MTrans and Do-PTrans) that did additional work not contained in the versions used for planning (Meehan, 1976, 43). As Meehan puts it, “You don’t worry about runtime preconditions until you’re executing the plan. If I’m planning to get a Coke out of the machine upstairs, I worry about having enough money, but I don’t worry about walking up the stairs until I’m at the stairs” (41).

3I played the Xbox 360 version (Hubbard et al, 2006).

4In fact, Orkin writes of F.E.A.R.’s approach, “the biggest benefit comes from the ability to re-plan.”



Comments


Richard Evans on paragraph 31:

This is a fascinating discussion, but I don’t think that continual re-planning is sufficient to fix the deep problems you identify.

The really deep problem here is, as you say, that planning and plan execution are separate. Re-planning won’t make this problem go away – in bad cases, it will just make the plan executor appear to be stuttering.

The problem is that plan execution is typically a self-contained routine which has no access to the data of the planner. The planner can weigh up as many options as it likes, but then, once it has made its decision, the only piece of data that is passed to the plan executor is that decision. The reasons for the decision, and all the other ambient issues which the planner considered, are lost to the executor. But all that the player directly sees is the movement of the executor. All that information wasted!

A concrete example: the planner decides to run away from the guard because the guard is a Zorg, and Zorgs are scary. It passes its decision to the plan executor who routes in the other direction. Unfortunately, this route goes through a cluster of Zorgs! The problem here, to reiterate to the point of tedium, is that the plan executor (the router) does not have access to the information (avoid Zorgs!) which prompted the decision (to avoid the guard).

February 27, 2008 12:40 pm
noah :

Yes, the solution for F.E.A.R. works so well (in my experience) because the characters are called on to do limited things and live for short periods of time. (In Orkin’s GDC 06 slides, the estimated enemy AI lifespan is 12.23 seconds.)

Given these conditions, it’s okay for a character to have cover invalidated, run to different cover, and then have that invalidated — the character will probably get killed before they bounce back and forth a couple more times and look to be dithering. (As your example character might, running back and forth between guard Zorgs and the cluster of Zorgs.)

The hard question is the next step, beyond F.E.A.R. (as it were).

For Agre the next step is to approach AI a different way, based on his ideas about how people operate intelligently in our contingent world — a matter, as I mention in this section, of improvisation.

But when we’re making media, like games, I don’t think we should worry too much about how real people operate in the everyday world. So our question is different, though still very challenging.

To return to your example, are you sure that re-planning after each execution step (as in Agre’s quote from the Strips creators) wouldn’t work here? If the planner decides to run to a safe place (away from the Zorg guards), then starts to run to the identified place (only to find a cluster of Zorgs), then (because a step has taken place) re-plans to find a different Zorg-free place to run (or take a different action, if the preconditions for running away can no longer be met, since no Zorg-free location is accessible) wouldn’t this potentially produce both appropriate and legible NPC/AI behavior? In fact, we could even add something to the system that causes the character to act increasingly at wit’s end the more quickly plans are being invalidated. Or it could open the powerful characterization device of showing how characters respond to pressure, with each character responding in their own way to a string of necessary re-planning, in a manner maximally legible to the audience — from exaggerated frightened dithering to cool calculation.

But back to the larger question about the effectiveness of re-planning. Does it all, perhaps, come down to the size of the execution steps after which re-planning occurs?

February 27, 2008 2:47 pm
alexjc on paragraph 32:

Replanning regularly is feasible, not every frame but say at 5Hz; it’s being done in some other shooters. But as Richard says it’s a cosmetic fix.

It makes things harder to debug, as you get the “memento” effect:
http://aigamedev.com/essays/memento-debugging-planner

I think it’s also very wasteful, and potentially a source of oscillation unless you have good safeguards in place. There have been a few projects in robotics and planning that integrate planning and execution, but I hope to experiment with these ideas myself in my Game::AI++ project… (See AiGameDev.com for details. :)

February 27, 2008 2:51 pm
noah :

Alex, let me start by saying that you and Richard are clearly much more expert in this area than I am. I’m very glad to be getting your feedback before the book goes to press!

That said, shouldn’t we make a distinction between continual replanning (as in your Memento post), replanning after each compound action and/or when a plan is invalidated (as I understand F.E.A.R.), and replanning after each execution step (as in Agre’s quote from the Strips creators)?

Obviously, it’s not specified what replanning after each execution step would mean in the context of media. Following Phoebe Sengers, I’d be inclined to define an “execution step” for an NPC/AI as something like “an action we have reason to believe the audience has perceived.” And I would want to imagine the current action context (what I’m doing now, and why, and what I was doing before) to be part of what informs the planning process. But maybe this kind of approach has pitfalls I’m not sufficiently acknowledging.

Meanwhile, I’m going to go back and look again at your three video posts on “Behavior Trees for Next-Gen Game AI” (starting here, for those following the discussion).

February 27, 2008 3:59 pm
Richard Evans on paragraph 31:

No matter how frequent the replanning, I don’t think it is enough to fix the deep issue that you articulated earlier (the division between mind and body hypostasised as planner versus executor). Suppose that every n seconds someone injected you with a temporary amnesia pill. This pill only lasted briefly, but while it was active, you couldn’t remember anything. Then your behavior would be erratic: it would alternate between guidedness and foolishness. No matter how short the pill lasted, if you performed any sort of action while it was active, it would be the wrong action.

Suppose the planner tells you to get to your vehicle. There are three paths to your vehicle, A, B and C. Both B and C are more time-efficient, but are infested with Zorg guards. A is less time-efficient, but safe. Now if your executor (the router) has no understanding that it has to avoid Zorg guards, then it will choose B or C. Once it starts executing, it will replan, perhaps it will try again and choose C this time. No matter how frequently it replans, the router (which is by hypothesis blind to the planner’s reasons) will continue to choose B or C (because they are nearer to the vehicle by hypothesis). So the agent will oscillate foolishly between B and C and never get to his vehicle via A. This is so, no matter how frequently you re-plan. The problem is deeper: the router doesn’t understand what the planner knows. (Of course, we can “fix” this problem by telling the router/executor to avoid the last n failed routes, but that is to cover up the problem, rather than to solve it. The real solution is to allow the executor to have access to all the information that the planner has access to.)

February 27, 2008 4:18 pm
noah :

Absolutely — I have no doubt that such a gap exists any time there’s a separation between planning and execution. And that’s an important point for this section.

But the next idea I’m trying to get at in this section (which clearly needs some reworking) is this one: In an authored microworld (like a game) can we create a situation in which that gap won’t matter to the audience?

For example, take the three routes to the vehicle. If the planner says “Get to the vehicle” then the executor can choose the wrong route, due to lack of information. If the planner says “Get to the vehicle via this route” then the executor can choose the wrong method of movement (e.g., choosing skiing, when the whole reason to move is that the snow is almost completely melted). And so on. The problem can always appear at the level where the planner hands off any meaningful decision to the executor.

In the everyday world, we’ll always have more in the world than the planner can handle. But in a simplified, authored microworld, shouldn’t we be able to plan “all the way down”? That’s to say, what if the planner chooses to leave and chooses which route to take — and the world involves no choice between skiing and walking? In this case, it seems like the NPC/AI can start to execute something that made sense during planning, then perhaps find out it doesn’t make sense, then re-plan, and the result is perfectly appropriate character behavior (e.g., starting to take route B, finding it too dangerous, visibly registering surprise, taking a moment to re-plan, turning around and going another way).

For this to work, of course, we need to enable it. We could try to give the executor all the information about what would invalidate the plan (e.g., Planex knows what the state of the world is supposed to be before the execution of each stage of the plan) and then the executor can run until something invalidates the plan. Or, alternately, we could re-plan after anything important happens (e.g., the NPC makes the first move down the path and there’s an explosion, or some Zorgs come in from an unexpected direction, or new information comes in from a teammate, or the character actually makes it to the vehicle, or…). My thought, in this section, is that F.E.A.R. points to something like this second option, doing quite well for its limited characters and fictional world.

But our conversation so far has me concerned. Is there something seriously flawed about this reasoning? Or is there a wider point that this way of posing things misses? Or, perhaps, is the problem that I’m not stating things clearly? I appreciate your thoughts on this.

February 27, 2008 8:38 pm
Mark Nelson on paragraph 31:

It may not fix all the problems either, but not all planning systems have such a strict separation between plan construction and execution as the classical ones do, and not all rely on replanning as the only way to deal with changes in the world or unexpected results. There’s a whole body of work on conditional planning, for example (sometimes called contingency planning), which instead of building a “this is what you should do” plan, builds a whole set of “this is what you should do, unless [x], in which case you should do this other thing”, etc., etc. In the limit case you have a policy for what to do in any possible situation.
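To illustrate the idea Mark describes, a conditional plan can be pictured as a branching structure whose conditions are checked at execution time rather than at planning time. This is a hypothetical sketch; the plan representation and the execute() helper are invented, not drawn from any particular planner:

```python
# Toy sketch of a conditional (contingency) plan: "do this,
# unless [x], in which case do this other thing." Illustrative only.

conditional_plan = {
    "action": "move to vehicle via route A",
    "unless": [
        # (condition observed at execution time, alternate sub-plan)
        ("route A blocked", {"action": "move to vehicle via route B",
                             "unless": []}),
    ],
}

def execute(plan, observed_conditions):
    """Follow the plan, taking an alternate branch when its condition holds."""
    for condition, alternate in plan["unless"]:
        if condition in observed_conditions:
            return execute(alternate, observed_conditions)
    return plan["action"]

print(execute(conditional_plan, set()))                # route A
print(execute(conditional_plan, {"route A blocked"}))  # route B
```

In the limit case Mark mentions, the "unless" branches cover every possible situation, and the plan becomes a full policy.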

February 27, 2008 6:53 pm
noah :

Mark, that’s a good point. The current section is basically saying, “Look, game worlds are so much more defined than the everyday world, and modern computers are so fast, that with F.E.A.R.-style cleverness we can re-plan whenever we need to.” But it’s also worth pointing out that a lot of work has been done on planning since Strips — including that of the sort you describe.

February 28, 2008 11:08 am
Jeff Orkin on paragraph 9:

Your comparison between NOLF2 and The Sims is very apropos, in that NOLF2's non-combat AI was very much Sims-inspired. Our worlds were filled with “Smart Objects”, which dictated animation sets in a similar manner to The Sims (e.g. writing at desks, napping in beds, dancing to boom boxes, etc.). We wanted the player to feel as though the other characters truly *lived* in the world, rather than simply waiting for the player to arrive, and what better model for living domestic environments than The Sims.

February 27, 2008 8:22 pm
noah :

Jeff, that’s a helpful piece of information. I’m going to make sure to work that into the manuscript.

February 28, 2008 11:08 am
Jeff Orkin on paragraph 20:

The major difference between FEAR's planner and STRIPS is the simplification in representing the state of the world — we limited world state to a fixed set of variables, each of which can be assigned one value. I refer to FEAR's planner as a Goal-Oriented Action Planning (GOAP) system. A GOAP system facilitates real-time planning under tight processing constraints by distributing processing to minimize the load during plan formulation. Processing is distributed among asynchronously updated sensors and subsystems that all share a centralized dynamic Working Memory, as well as a static Blackboard. Several other developers are starting to employ GOAP systems — I'm beginning to keep track of them here: http://www.jorkin.com/goap.html
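The fixed-variable world state described here can be pictured as a flat table of named values, with goals mentioning only the variables they care about. This is an illustrative sketch only; the variable names and the satisfies() helper are invented, not FEAR's actual representation:

```python
# Toy sketch of a fixed-variable world state: each variable
# holds exactly one value. Illustrative only; not FEAR's code.

world_state = {
    "AtNode": "cover_7",
    "WeaponLoaded": True,
    "TargetDead": False,
}

# A goal mentions only the variables it cares about.
goal_state = {"TargetDead": True}

def satisfies(state, goal):
    """A goal holds when every variable it mentions has the required value."""
    return all(state.get(var) == val for var, val in goal.items())

print(satisfies(world_state, goal_state))  # False
world_state["TargetDead"] = True
print(satisfies(world_state, goal_state))  # True
```

Checking goal satisfaction is then a cheap comparison over a small table, which is part of what makes real-time planning tractable.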

February 27, 2008 8:52 pm
noah :

The simplified representation of the world in F.E.A.R. is one of the things I largely left out of my description here — but now that strikes me as a mistake. Instead, I should mention it, both as something that makes planning faster and as something that allows plan execution to have access to the same model of the world as plan formulation.

Thanks for the pointer to your GOAP list. I’m guessing it will get even longer before the paper version of Expressive Processing is in the hands of readers, but I think I’ll still add a footnote about how the approach is finding traction with other creators.

February 28, 2008 11:11 am
Jeff Orkin on paragraph 21:

The planning for squads is actually external to the STRIPS-like planner. The important point is that it is a two-layered system where squads send orders to individuals, and the individuals use the planner to formulate a sequence of actions in response to an order. Individuals can choose to disobey orders as well, if the AI feels that some other goal is more relevant (e.g. running from a grenade is more relevant than getting to a cover position). The squad-level planner was more ad hoc than the more formalized planner for individuals.

The problem is that STRIPS is not a hierarchical planner. When we think about squads, we need a plan that can handle multiple agents acting in parallel. More recent developments in planning include Hierarchical Task Network (HTN) planners, which may be well suited for games. Since FEAR was my first foray into planning, I chose to focus on formalizing planning for individuals, rather than biting off more than I could chew.
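The two-layered arrangement described above can be pictured as a squad order competing with an individual's own goals on relevance. This is a hypothetical sketch; the relevance numbers and goal names are invented for illustration, not FEAR's actual code:

```python
# Toy sketch of the two-layer idea: the squad's order is just one
# goal candidate, which an individual may outrank with its own goals.

def choose_goal(order_goal, own_goals):
    """Pick the most relevant goal; the squad order can be disobeyed."""
    candidates = [order_goal] + own_goals
    return max(candidates, key=lambda goal: goal["relevance"])

order = {"name": "move to cover position", "relevance": 5}
own = [{"name": "flee grenade", "relevance": 9}]
print(choose_goal(order, own)["name"])  # flee grenade
print(choose_goal(order, [])["name"])   # move to cover position
```

The winning goal is then handed to the individual's planner, which formulates the actual action sequence.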

February 27, 2008 9:20 pm
noah :

Right, I should mention that the Strips-style planner is only for the individuals, and not for squads.

It’s been a while since I’ve read your F.E.A.R. papers. Do you say anything specific about the implementation of squad behaviors? If not, would you be willing to say a little here?

February 28, 2008 11:12 am
Jeff Orkin on paragraph 25:

I absolutely agree that humans operate in far more sophisticated ways than simple logical planners. In the world of games, we tend to be more interested in the illusion of human-like intelligence than in actually operating like humans.

But what is important to note is that the motivation to pursue a planning system for FEAR was really to make authoring complex behaviors more comprehensible for human designers. By breaking behavior into bite-sized modular chunks, and offloading low-level action sequencing to the AI characters themselves, we freed ourselves to focus on the higher level of complexity — squad behaviors.

February 27, 2008 9:28 pm
noah :

Jeff, I completely agree with you on that point. Maybe I misread this line of your paper?

March 1, 2008 7:47 am
Jeff Orkin on paragraph 31:

As Mark points out, there are many ways to implement a planner. FEAR's plan executor actually does have access to the same Working Memory as the plan formulator has — this is where sensors record information which may be vital to determining that the current plan has become invalidated (e.g. the destination cover position is no longer valid). In addition, when a plan fails, in some cases the AI records the reason for failure in Working Memory, for use in future planning. For example, when a door is blocked, the AI remembers this so that next time he can try diving through the window instead. So, AIs are not necessarily prone to the amnesia Richard describes.

I agree with Richard in principle that the separation between planning and executing is not a perfect solution, and is not the way real humans work, but in the case of an FPS, FEAR's planner should be able to handle the cases Richard brought up. As Noah suggested, an AI running from Zorgs could repeatedly re-plan, running to new destinations until all options are exhausted, at which point he could cower in fear. This is almost exactly how AI in FEAR handle grenades — run away if you can find a route, and crouch for cover from the blast otherwise.
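That grenade fallback can be pictured roughly like this; the function and route names are invented for illustration, not FEAR's actual code:

```python
# Toy sketch of run-if-possible, crouch-otherwise: try escape routes
# until options are exhausted, then fall back. Illustrative only.

def respond_to_grenade(routes):
    """Try each escape route in turn; crouch for cover when none is clear."""
    for route in routes:
        if route["clear"]:
            return "run via " + route["name"]
    return "crouch for cover"

print(respond_to_grenade([{"name": "door", "clear": False},
                          {"name": "window", "clear": True}]))  # run via window
print(respond_to_grenade([{"name": "door", "clear": False}]))   # crouch for cover
```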

February 27, 2008 9:48 pm
noah :

I think the solution here may be to delve further into how F.E.A.R. works in this section (hopefully without too much lengthening). In particular, things like the ongoing work of sensors, whose results are available to both planning and execution, are not what people familiar with the traditional arrangement will expect.

Meanwhile, in a later section I’m going to get into things like Oz, The Expressivator, and Facade — so will at least approach some of the work that leads up to Alex’s on behavior trees.

March 1, 2008 8:06 am
Jeff Orkin on paragraph 34:

I’m not sure if you’ve mentioned one of the biggest reasons that planning can be a success in games, as opposed to planning for robots in the real world. Robots have terrible problems estimating the state of the world from noisy sensor data, while in games we have the luxury of perfect world state information. Many approaches to AI that were abandoned years ago as not robust enough for robots in the physical world are great fits for characters in game worlds. These AI approaches used to be too costly for games, but as processors improved (and multiplied) and graphics moved onto dedicated video processors, we finally had the resources we needed to apply academic techniques.

February 27, 2008 9:57 pm
noah :

Yes, I’m trying to get at this sort of thing with the phrase “all games are authored microworlds.” But I think it would be good to make your comment’s particular angle more explicit. One of the fascinating things about Shakey was the way it used a video camera and edge detection to try to compensate for divergences between its actual location and the presumed location in its logical model. Game characters not only don’t have to try to compensate in these ways (they can know exactly where they are) but also don’t have to worry about many kinds of unknowability, ambiguity, and so on.

March 1, 2008 8:11 am
Mark Riedl on paragraph 26:

A distinction one could make with regard to problem solving is whether one is operating in the everyday, known, and familiar world, or whether one is operating in a novel situation. Schank and his predecessors were acutely concerned with how people operate in the everyday. Thus they were concerned with scripts, schemas, and case-based reasoning. An interesting article (Ratterman et al., 2001, “Partial and total-order planning: Evidence from normal and prefrontally damaged populations,” Cognitive Science, 25) provides evidence that Strips-style reasoning is not necessarily a bad model for problem solving in novel situations.

February 28, 2008 9:52 am
noah :

Mark, you’re definitely right about routine action being the focus of structures like scripts. That’s why Meehan couldn’t use straight scripts, Turner couldn’t just re-use cases, and so on — because they valued novelty in stories.

That said, while I'm interested in the article you mention (and will check it out), the main point of the section of this chapter I'll post on Monday is that, when we're building media, I don't think we should be too concerned about how well our AI models match theories of human cognition. Still, since I'm mentioning the scruffy argument here, it would probably make sense to footnote the fact that there's some evidence that supports viewing Strips as a model of some kinds of human thought.

March 1, 2008 7:59 am
Jeff Orkin on paragraph 9:

FYI, we had a fun PC Gamer interview about our NOLF2 AI and Smart Objects, scanned and posted here:
http://web.media.mit.edu/~jorkin/pcgamer.html

February 28, 2008 11:28 am
Jeff Orkin on paragraph 21:

Take a look at pp. 13-15 of “3 States & a Plan.” That's a pretty good overview of how our squads worked — and as noted in that document, this approach was inspired by Richard Evans's GDC talk and Gamasutra article “Social Activities: Implementing Wittgenstein.”

February 28, 2008 11:31 am
Jeff Orkin on paragraph 25:

I actually like the point that you’re making. But regarding what I meant by that line — I should have worded it “robots and characters that communicate in human-like ways”. The point being that the combat chatter in FEAR was basically an effect layered on top of a centralized squad controller, rather than the result of individuals truly communicating with each other.

March 1, 2008 8:20 am
noah :

That makes sense. I should probably re-write that paragraph then.

BTW, have you written anything about your move that I might cite here? Or, if it’s not already out there, could I convince you to write something about the larger personal context of your work — like Meehan on Tale-Spin and Turner on Minstrel?

March 1, 2008 8:51 am
