I met with a ton of people last week, including my adviser, and have made a lot of progress in bringing the paper together. This post will summarize the paper-relevant feedback I got from the meetings/interviews before I jump into taking a stab at a full first draft.
While I have been able to meet with three groups and get three different graphs of their authoring processes, the graphs (as I mentioned in my last post) didn’t converge into a similar-enough shape. In many ways, comparing the three systems (ABL, FAtiMA, and BOD/POSH) was comparing apples to oranges. Because ABL is the most complicated and in some ways encompasses the other systems entirely (in terms of their general purposes and components), we decided to focus on it in the paper and reserve the other systems as supporting evidence for our claims. PAPER NOTE: Going to need to describe how they are different and why we chose ABL in citeable ways.
I identified what I still temporarily call the “magic step” in authoring in all of these systems, which can be defined as the process by which an intermediate/expert author processes an abstract task and creates the functional code that represents this task. “Abstract task” here is the spec of the behavior in natural language, and the “functional code” output is the code that can successfully be run by the system and that drives the agents in the system to perform the abstract task according to its spec.
All systems share this magic step, but the process within the magic step is different for each system. A “useful” authoring tool for a system must quantifiably alleviate some part of the magic step, transferring the process from a human author to an external source. This is my personal definition of “useful.” Also note that the external source does not have to be digital or computational: a whiteboard where you offload a process as a process map is technically an authoring tool. However, if the whiteboard doesn’t save time (is this our ultimate definition of “quantifiability”?) or help us make behaviors that would otherwise be too complex for someone to hold inside their brain, is it not “useful”? Is this definition going to be troublesome? Maybe!
Last post detailed the magic step of BOD/POSH because of its simplicity.
I had some words here, but they went away. I’m sure Swen will be able to articulate what I’d say better anyhow!
I spoke with Paulo in order to tease out the magic step of FAtiMA. While I thought it might start with a list of abstract actions like BOD or ABL, FAtiMA is primarily concerned with goals and motivations instead. Teasing those out in quantifiable portions is the key challenge of FAtiMA authoring.
The whiteboard up there shows the process. Instead of starting with actions, you start with goals (being altruistic, personal safety/fear), which are driven by the decision points (helping the soldier find his lost interpreter or not), which are often broken down into sub-decisions (analyze the photo or not, tell the soldier the info or not). Each decision must also have a quantifiable reason to pick one alternative or another (wanting to help vs. a 10% chance of being harmed). Actions must also be made to support all the agents moving through the tree (moving/speech acts that are implemented separately). Finally, you make sure that different people have different personality scalar values, which ultimately help choose which decision path each person takes. It takes a lot of trial-and-error testing to make sure the decision paths work as intended! [Samuel in Portugal said his most difficult authoring tasks were the tedious number-tweaking].
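To make the “personality scalars pick the decision path” idea concrete, here is a minimal hypothetical sketch. None of this is FAtiMA’s actual code; the traits, weights, and function names are all invented for illustration.

```python
# Hypothetical sketch: decision points scored by personality scalars.
# All names and numbers are invented, not taken from FAtiMA.

def choose(alternatives, personality):
    """Pick the alternative with the highest personality-weighted score."""
    def score(alt):
        # Each alternative lists (trait, weight) pairs, e.g. helping the
        # soldier appeals to altruism; the 10% chance of harm, to fear.
        return sum(personality.get(trait, 0.0) * w
                   for trait, w in alt["drives"].items())
    return max(alternatives, key=score)

help_decision = [
    {"name": "help_find_interpreter", "drives": {"altruism": 1.0, "fear": -0.1}},
    {"name": "refuse",                "drives": {"altruism": -0.2, "fear": 0.5}},
]

# Different personality scalars send different agents down different paths.
cautious = {"altruism": 0.3, "fear": 0.9}
kind     = {"altruism": 0.9, "fear": 0.2}

print(choose(help_decision, cautious)["name"])  # prints "refuse"
print(choose(help_decision, kind)["name"])      # prints "help_find_interpreter"
```

Even this toy version hints at where the tedious number-tweaking comes from: the author has to tune weights until every persona takes its intended path.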
Paulo shared an interesting insight about authoring his recent paper. He spent a while building up his chain of goals and decisions, but he was having problems with simultaneous goal consideration. Paulo had to contact a more experienced FAtiMA author to learn an implicit limitation of the system: an agent can only consider one active goal at a time, which means any simultaneous considerations must be modeled as intent goals. We want the authoring tool to inform the user about these necessities to save valuable time, as Paulo had to redesign his structure based on the system’s limitation. The act of writing an authoring tool forces questions like these to get answered (Can there be more than 1 of X? What do you need to complete a Y?)
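This is exactly the sort of implicit rule a tool could check mechanically before the author gets deep into a structure. A hypothetical sketch (the goal representation and field names here are invented, not FAtiMA’s actual format):

```python
# Sketch of a structural check an authoring tool could run to surface
# implicit system limits, like the one-active-goal rule Paulo hit.
# The goal structure and field names are invented for illustration.

def check_single_active_goal(agent_goals):
    """Warn if more than one goal is marked active simultaneously."""
    active = [g["name"] for g in agent_goals if g.get("active")]
    if len(active) > 1:
        return (f"Only one goal may be active at a time; found "
                f"{len(active)}: {', '.join(active)}. "
                f"Model the extras as intent goals instead.")
    return None  # structure is fine

goals = [
    {"name": "stay_safe",    "active": True},
    {"name": "help_soldier", "active": True},  # should be an intent goal
]
print(check_single_active_goal(goals))
```

A handful of checks like this, run on save, could have spared Paulo the redesign.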
Speaking of referencing more knowledgeable people…
Every ABL author has to ask for help or reference at some point or another, but novice and intermediate authors have to do it A LOT. ABL is its own language with no IDE or language support other than a text editor and filterable print statements. Debugging it is a nightmare, and it has huge room for improvement. The following interviewees are listed in increasing order of ABL authoring experience.
Will works on taking the new assets from the animators, along with other basic functionality (like locomotion), and connecting them as ABL acts for programmers to use. The most authoring he’s done in ABL is using behavior templates for super basic stuff that involves only one character (which I’m not familiar with because I never had to touch them). Will wishes for an IDE and basic language features (like iteration, if-then, etc.) to help reduce code bloat. He also imagines a kind of isomorphic debugger that would help debug logical errors, but something more usable than the ABL Debugger, which he found very intimidating.
The above image outlines Karen’s behavior authoring process. In ABL, the behavior being authored is called an SIU: Social Interaction Unit. The first step involves sitting down and, in great detail, figuring out what the behavior should do. The process involves understanding what signals, WMEs, animations, and responses will be used/required, and if any of them are missing, Karen must make sure they get made. Andrew will also point her in the direction of similar code that has been made in the past for her to use as examples. This whole first process is the most crucial step, because if Karen asks the right questions and it is clear what is expected of the behavior, there will be fewer tweaks to author later. Karen also notes that she is rarely asked to simply author a whole behavior; it is the tweaking/adding-functionality bits at the end that require most of her attention/task allotment.
Coding then begins. If anything is needed from other people, she contacts them. If she needs to use other people’s code as examples, she asks them to explain their logic. If she runs into trouble in debugging, she will ask Larry/Josh/Andrew in that order (going up the food chain). She takes notes along the way if there are any decisions she had to make that she did not foresee (ex: Andrew may not have told her HOW to make something gracefully halt, so she tries something that she thinks will work). When the list of things she distilled from the initial spec is completed, the “First Draft” is done and a meeting with Andrew is scheduled to discuss it.
The second big meeting is where everything is analyzed. Invariably, specs need to be changed, animations need to be tweaked or replaced (which may mean waiting on outside work), or the entire behavior of the SIU needs to change (ex: “watch others” was meant to be a short enhanced idle, and turned into the “reactionary state” the Head of Household is always in, ready to react to disturbances). Further iterations of behavior authoring and Andrew-meetings occur. And once the behavior is pronounced ‘done,’ or at least ‘done enough,’ Karen may be asked at any point in the future to tweak/change its animations, performance, or interaction with other SIUs (what is considered “polishing”). An SIU is NEVER DONE.
Karen’s interview offered great insight into the interpersonal process by which ABL SIUs are authored, but not so much into her personal coding problems. She feels that she is novice enough in the project that her main operations are copy-pasting existing code and tweaking it as necessary. An authoring tool would be able to offer her less code support (other than extending currently-existing templates) and more performance-based visualization.
Larry’s interview was exceptionally fruitful, as we dug deeply into his current authoring task: aiming a rifle. This act is dramatically punchy because it has extreme social repercussions, and it is very complicated because it must function between the player and NPCs, as well as in the case where the Initiator and Responder both have rifles. Because of those extreme social repercussions, handling the reaction to a gun being raised/lowered is very important: what distractions, and for how long, are allowed to interrupt the performance of the SIU? Very muddy and difficult questions for ABL authoring.
What you see in the image above is 3-4 interconnected sub-trees of ABL logic showing the raise-rifle SIU performance of the PC/NPC, Initiator/Responder performance, possible recursion, performance conclusion, and interruption resolution. It would take a long time to go into detail about each of these graph partitions, probably more detail than we have to spare in the paper beyond a complicated-looking diagram. What is most important here, however, is the need for a graph visualizer for the flow in ABL, something more usable than the ABL debugger. Larry needs to be able to show, easily, that the different roles using this SIU will behave properly going down their appropriate paths, including when other SIUs interrupt. A template should also be generated/created to serve as an example for such complicated behavior in the future. And not just tree structures should be visualizable: the repeated performances by all the agents should be traceable and easily seen as well.
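The per-role tracing Larry needs could be prototyped as something very small: a toy behavior tree that records which nodes each role visits as it ticks. Everything below (the tree shape, the node kinds, the role tests) is invented for illustration; real ABL trees are far richer than a two-leaf selector.

```python
# Toy sketch of tracing which branch each role takes through a behavior
# tree, the kind of per-role trace an ABL flow visualizer could show.
# The tree and role logic are invented; this is not ABL's semantics.

def tick(node, role, trace):
    """Tick a node for a role, appending every visited node to trace."""
    trace.append(node["name"])
    kind = node.get("kind", "leaf")
    if kind == "leaf":
        return node["succeeds"](role)
    if kind == "selector":  # first child to succeed wins
        return any(tick(c, role, trace) for c in node["children"])
    if kind == "sequence":  # all children must succeed in order
        return all(tick(c, role, trace) for c in node["children"])

raise_rifle = {
    "name": "raise_rifle", "kind": "selector", "children": [
        {"name": "initiator_raise", "succeeds": lambda r: r == "initiator"},
        {"name": "responder_react", "succeeds": lambda r: r == "responder"},
    ],
}

for role in ("initiator", "responder"):
    trace = []
    tick(raise_rifle, role, trace)
    print(role, "->", " / ".join(trace))
```

Even this tiny trace makes the role divergence visible at a glance, which is the property the visualizer would need to scale up to real SIUs (and to interruptions by other SIUs).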
Josh has not been part of the behavior-authoring gig in a long time, but he was the author of some of the original SIU structures and basic templates that exist now. He primarily works on the ABL compiler making language improvements, specifically adding CiF social rules to the ABL structure. (Making it MORE complicated to author, AAAAH!!!) But he was the most helpful in showing me how to run the ABL Debugger, how (in general) to make an Eclipse patch for basic IDE help (which Claudio Pedica supported), and other ideas for connecting tools to the ABL runtime code. I have many clear authoring paths forward (for future work in ABL).
Andrew is the IMMERSE Project Coordinator and leader of our little band of coding misfits. He helps design the scenarios and the SIUs that are needed for the scenario behaviors. He codes infrastructure and SIU support so that all the other authors know where their SIUs sit in the grand scheme of things, and also how their SIUs may be interrupted. His authoring methods were described as follows:
- Figure out the spec in descriptive terms: Initiator/Responder, Signals, and the logic of decisions
- Make a basic version in the code of step 1
- Look at the basic version from step 2, improve/tweak it, and make sure it plays nice (interruptions) with other SIUs
This process supports Karen’s description.
Andrew agrees that tools wouldn’t be as useful for him because he is so experienced with the code base. However, it is clear that help with visualization and debugging is needed. Currently all they have are Josh’s code templates. A better animation viewer would also be nice. Josh’s task to make a way to test SIU sequencing without animation was in the works, but never got finished.
The biggest coding roadblock is the visuals: turnaround time is slow, and recording to get them made is slow. Nico is our only interface with the animators, and the animators never get to see how their animations play/fit into the bigger system. There are also technical bugs in the animation system, animation blending, and the performance manager (which doesn’t have animation blending yet).
This section pretty much sums up what I spoke to Josh about. Listed in rough order of difficulty and time, at least by my estimate:
- Eclipse Patch: Requires so little domain knowledge it’d probably be more suited to one of Lyn’s group because it requires making a grammar. Josh estimated a day or two of someone sitting down and just making it. Would help with Eclipse coding support that some people (mainly Will and more industry folk) would like.
- Animation-less Sequencer: A different IMMERSE build (but it would work with any ABL runtime) that would skip/ignore animation code and only worry about action sequences. Should have a GUI for choosing which actions to keep track of, and maybe some other intelligent filtering. Maybe specifying what sequence you’re expecting and catching deviations? Separating animation from logic debugging is a key advantage that FAtiMA/BOD/POSH/basically all other systems of this type have.
- ABL Debugger 2.0: Revives the old ABL debugger. Adds a more searchable/configurable behavior tree visualization (seeing multiple zoomed sections of the tree at once, and saving those zoom points for fast iteration). Easier/better breakpoints. Maybe stepping behaviors forward? Possibly a re-runnable GUI for quick iterations on small parts of the tree. Maybe a search function to explore all possible instances of code written? Including interruptions and signal noise. Basically allowing for more robust code.
- Better Animation Viewer: Possibly linked into the ABL Debugger. Most likely linked into the ABL runtime. Expanding the ABL debugger with graphical stuff, basically. Might need to brainstorm what this would look like/how it would be different from the ABL debugger…. or how much more it would take.
- Some support for Social Games — not really implemented or well-known, so no idea how hard it will be to author or debug.
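The “specify the sequence you’re expecting and catch deviations” idea from the Animation-less Sequencer item could look something like the sketch below. The function and act names are invented; a real version would consume action events from the ABL runtime rather than lists.

```python
# Hypothetical sketch of sequence checking for an animation-less
# sequencer: compare the observed act sequence to the author's
# expectation and report the first point of deviation.

def first_deviation(expected, observed):
    """Return (index, expected_act, observed_act) at the first mismatch,
    or None if observed matches the expectation. (Extra trailing acts
    beyond the expectation are ignored in this sketch.)"""
    for i, exp in enumerate(expected):
        obs = observed[i] if i < len(observed) else None
        if obs != exp:
            return (i, exp, obs)
    return None

expected = ["greet", "raise_rifle", "demand_id", "lower_rifle"]
observed = ["greet", "raise_rifle", "lower_rifle"]  # skipped demand_id

print(first_deviation(expected, observed))
# prints (2, 'demand_id', 'lower_rifle')
```

The payoff is exactly the separation noted above: a failed assertion here points at a logic bug, with animation ruled out entirely.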
I did interview Claudio Pedica, lead designer and programmer of the Impulsion Project, which was written in C# using Unity’s behavior tree library with some custom tweaks. He agrees that we need a tool to visualize the state of a BT, because what makes up the state is spread all throughout the tree, and that it is very hard when debugging to tell an error in the code apart from a character that is simply misbehaving. Reactive planning is great at helping make sure characters respond to the right thing at the right time, but it makes debugging the ‘behavior sequence’ extremely challenging.
We have 3 cases where we analyzed a behavior (lost interpreter) and the authoring process to create it down to the code level. Each case shows a different level of complexity, using systems with different philosophies behind them, but all of them share the “magic step” by necessity. This paper analyzes the “magic step” process of each of these systems to better understand what would be required in an authoring tool to alleviate the author’s burden in executing this step. Hopefully the details made explicit here will aid any group looking to reduce the authoring burden in creating interactive characters.