It is so hard to get back on a horse you’ve fallen off of, when it should be galloping away into the sunset. The good news is that the FDG paper is finally sent off, so at least I did something! Sorry, it’s been kind of rough personally lately. I will do my best to get back into the swing of things for this post.
~~~ ***** ~~~
The caveats that I wrote (and didn’t really need) in the last few posts were meant for this one. It is probably my weakest link so far in terms of research, but I can’t avoid it anymore.
I have been assuming an agent flow something along the lines of:
Sensing -> Processing -> Deciding -> Actuating
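To make that flow concrete, here is a minimal sketch of the loop in Python. Every name here is my own illustration, not any particular architecture’s API; the point is just how the four stages hand data to each other each tick.

```python
# A minimal sketch of the Sensing -> Processing -> Deciding -> Actuating
# flow. All names are illustrative, not any real architecture's API.

def run_agent(world, agent, steps=3):
    for _ in range(steps):
        percepts = agent["sense"](world)       # Sensing
        beliefs = agent["process"](percepts)   # Processing
        action = agent["decide"](beliefs)      # Deciding
        agent["act"](world, action)            # Actuating

# A trivial agent that walks toward a target position.
world = {"agent_pos": 0, "target": 3}
agent = {
    "sense":   lambda w: {"pos": w["agent_pos"], "target": w["target"]},
    "process": lambda p: {"dist": p["target"] - p["pos"]},
    "decide":  lambda b: 1 if b["dist"] > 0 else 0,
    "act":     lambda w, a: w.update(agent_pos=w["agent_pos"] + a),
}
run_agent(world, agent)
print(world["agent_pos"])  # 3 after three steps
```

The architectures below differ mostly in how the Processing and Deciding stages are filled in; the loop itself stays the same.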
And I am searching for some architecture that helps with Processing and Deciding. There have to be better AI terms and definitions for what I mean, but in my paper I called it the “Decision-Making Mechanism.” Architectures that help with it can come from any model, discipline, or motivation: affective computing, cognitive science, planning, or something I’ve failed to mention so far, so long as it fits what I want. The following are listed in no particular order; they are just architectures I have heard of and my impressions of them. (Wikipedia lists so many more… Do I even want to try to look at them all? Apparently other people are a lot better at it than I am: http://ai.eecs.umich.edu/cogarch0/ )
I’m using Soar as a token cognitive architecture here.
From its FAQ: “Also, if you only have limited time that you can spend developing the system, then Soar is probably not the best choice. It currently appears to require a lot of effort to learn Soar, and more practice before you become proficient than is needed if you use other, simpler, systems, such as Jess.” http://acs.ist.psu.edu/soar-faq/soar-faq.html#G8
They list their pros as being able to handle learning, interruptibility, large-scale rule systems, parallel reasoning, and a design approach based on problem spaces (I’m not entirely sure what that means, but okay!).
Their primary concerns seem to be functionality and adherence to their model, not necessarily usability. I’ve worked with large rule systems before, and it is difficult to make those rules make sense, to imagine them working in conjunction as a whole, and to keep them organized. I would need to do a lot of work imposing structure on the undefined nature of everything and organizing the rules into something coherent.
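To show what I mean about flat rule systems, here is a toy production system in Python: rules match working memory and fire, modifying it, until nothing matches. This is my own minimal illustration, not Soar syntax, but even two rules hint at how hard a few hundred would be to read as a whole.

```python
# A toy production system: each rule is a (condition, action) pair over a
# working-memory dict. Fire one matching rule per cycle until quiescence.
# Illustrative only -- not Soar syntax.

def run(rules, wm, max_cycles=10):
    for _ in range(max_cycles):
        fired = False
        for condition, action in rules:
            if condition(wm):
                action(wm)
                fired = True
                break  # fire at most one rule per cycle
        if not fired:
            break  # quiescence: no rule matched
    return wm

rules = [
    # If there is a goal but no plan yet, propose one.
    (lambda wm: "goal" in wm and "plan" not in wm,
     lambda wm: wm.update(plan="make-tea")),
    # If the plan is set and the kettle is off, turn it on.
    (lambda wm: wm.get("plan") == "make-tea" and not wm.get("kettle_on"),
     lambda wm: wm.update(kettle_on=True)),
]

wm = run(rules, {"goal": "tea"})
print(wm)  # {'goal': 'tea', 'plan': 'make-tea', 'kettle_on': True}
```

The structure is all implicit in the conditions: nothing in the system itself says which rules belong together, which is exactly the organization problem I’m worried about.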
My token representation of affective computing (and general planning). Again, other people have done better comparisons than I have. http://www.cogsys.org/pdf/paper-3-2-39.pdf
On the surface, it seems that using emotions as a model makes the authoring a bit more understandable and transparent — I am afraid of something, so I won’t want to do it! Simple. However, the goal structures that determine scenarios feel weird, the personality files that control changes in opinion feel even weirder, and something feels off about distilling everything into a handful of positive/negative feelings. Dramatic stories are conflicted, not always so clear-cut, and it is difficult to be expressive when all agents are cut from the same decision-making cloth.
ABL was used to make Facade, and it seems to be the clearest answer out of the lot of them. That’s sort of what this whole thing has been leading to on purpose, but let’s see how well it all works out anyway.
- Favors usability and common-sense logic – ABL’s structure is a Behavior Tree (ABL Behavior Tree or ABT), made up of choices. It’s one of the simplest, fundamental representations of branching logic.
- Low price of admission – One of ABL’s biggest flaws (like any architecture on this list) is its complexity and capability. My paper talked about how novice authors need pre-defined idioms to even get started. However, with a fundamental set of idioms and structures written for an author, the difficulty of making behaviors is drastically lowered. I know, I’ve been there!
- Capable of higher complexity – An agent can have its own ABT, or one tree can govern multiple agents. Reactive planning allows an ABT to grow to be as complex or simple as necessary in any given moment.
- Scalable – Facade proves its capabilities of scaling to a satisfactory degree.
- Hierarchical and Modular – Reactive planning ensures that sub-goals are self-contained (as they have to be when they get added onto the ABT). A behavior tree made of behavior Legos.
- Embodied – Hooks up into Unreal or whatever the user wants.
- Interactive – Reactive planning is called such because it reacts to the user’s actions; it is interactive by definition.
- Dramatic – Facade shows that it’s at least capable of dramatic authoring.
- Expressive – Facade shows that it’s at least capable of expressiveness.
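The hierarchical/reactive combination above can be sketched in a few lines of Python. This is not ABL syntax, just my own illustration of the idea: behaviors have preconditions, a goal expands into whichever behavior currently applies, and sub-goals expand recursively, so the tree only grows where it’s needed.

```python
# A rough sketch of the behavior-tree idea behind an ABT. Behaviors are
# nodes with preconditions; a goal reactively expands into the first
# behavior that applies right now. Illustrative only -- not ABL syntax.

class Behavior:
    def __init__(self, name, precondition, steps):
        self.name = name
        self.precondition = precondition  # callable: state -> bool
        self.steps = steps                # sub-goal names (str) or callables

def pursue(goal, behaviors, state, log):
    """Expand a goal reactively: pick the first applicable behavior."""
    for b in behaviors.get(goal, []):
        if b.precondition(state):
            for step in b.steps:
                if isinstance(step, str):
                    pursue(step, behaviors, state, log)  # sub-goal
                else:
                    step(state, log)                     # primitive act
            return
    log.append(f"no applicable behavior for {goal}")

behaviors = {
    "greet_player": [
        Behavior("wave", lambda s: s["player_near"],
                 [lambda s, log: log.append("wave hello")]),
        Behavior("approach", lambda s: not s["player_near"],
                 ["walk_over", lambda s, log: log.append("wave hello")]),
    ],
    "walk_over": [
        Behavior("walk", lambda s: True,
                 [lambda s, log: (log.append("walk to player"),
                                  s.update(player_near=True))]),
    ],
}

log = []
pursue("greet_player", behaviors, {"player_near": False}, log)
print(log)  # ['walk to player', 'wave hello']
```

If the player had already been nearby, the "wave" behavior would fire instead and "walk_over" would never be expanded; that on-demand expansion is what keeps the tree as simple or complex as the moment requires.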
~~~ ***** ~~~
I probably went about this the wrong way… But I felt I had to answer the questions: Why ABL? Why reactive planning? Why not another architecture or method? I don’t think I really answered them with the above. I don’t think I could even enumerate every architecture I could find (although that might be a useful exercise in the future), and I’m not sure how well that would answer the questions either. I shouldn’t take the power of ABL for granted.
Behavior Trees do not scale well on their own and are not capable of accounting for every possibility at every moment for an embodied agent. The ABT built of reactive planning makes the best of both worlds: it takes the simplicity of design of behavior trees and the potential complexity of situated AI and makes it work. ABL has no model or structure of the human mind, emotions, or any of that built-in, but it is capable of supporting whatever an author may choose is their most natural form of decision-making. ABL enables whatever folk psychology an author has, and it is up to the author whether it will work, make sense, or be designed well enough to function. I get to show authors what is possible, what ABL and its agents are capable of, not chain them to a particular philosophy and force them to work within its boundaries.
That is why I chose ABL out of any others. That and I have experience with it, and access to Michael Mateas whenever I need it >=D.