Blog 658: Too Bot To Handle

I’ve actually flirted with game-playing artificial intelligence a few times in my life. The most notable attempt has to be the chatterbots behind my Warcraft III map The Arena, who roved around the land, picked up items, bought new equipment, ran home when hurt and, yes, taunted and responded to textual prompts rather more than people liked.

Alas, No Excuses will require bots with slightly more finesse, because they are not to be infinitely respawning players in an enclosed arena. They need to hold down consistent jobs, but get distracted sometimes and then go back to work. They need to make sure they’re not trying to shoot through their allies, but also try to avoid taking hits.

That is a heady cocktail.

Too Bot To Handle

Luckily, I’ve already written a good deal of bot code for No Excuses. In previous videos, I’ve demonstrated the player leading bots around corners and hiding from them, letting them relax and return to their patrol routes. I’ve demonstrated their ability to navigate corners and jump over obstacles as they do it, too.

What I have not demonstrated is the hideous mishmash of code that underpinned all of that, and, well, I’m not going to demonstrate that now. I am, however, going to discuss my current work to rebuild it Better.

The major problem was that the nitty-gritty details of navigating the landscape from point A to point B were all wired up with the details of patrolling and pursuing, which were all mixed up with the details of target acquisition. Spaghetti code!

I had polymorphic behaviour classes that nominally encapsulated different attitudes to life, patrolling and pursuing and attacking and so on, but these subclasses traded blows up and down with common code in the base class, and the lines blurred in awkward places. (Inheritance hierarchies are still useful and powerful, I just let this one run away a bit, okay? Sheesh, it’s not like I’ve done this before.)

The various parts were so tangled up that I just couldn’t pinpoint the sources of some strange actions, so something had to give. I mean, sometimes they’d just stand on the spot and spin around — while that sort of thing is hilarious it also makes for terrible gameplay.

The player has to be Red, the enemies have to be Blue. It’s been a universal truth since UT99.

So I’ve stripped it all back. There are several layers to a bot’s behaviour, and they are now conveniently distinct (there’s a rough sketch of them in code just after the list). All of these layers are tied together by the Controller, which stands in for the mouse & keyboard input parser of a human player: my bots act by passing “orders” to their units exactly the same way the player does. No omnipotent cheating AIs for me!

  1. Vision: the ability to understand what units (either allies or enemies) the vehicle’s pilot can “see” from its cockpit; beyond its elevation to a first-class MonoBehaviour component, this is unchanged from how I’ve described it before
  2. Navigation: the act of walking from point A to point B, including deciding whether or not to jump over gaps and working out how to get around obstacles
  3. Tracking: the bot drops “breadcrumbs” as it goes so it can retrace its steps if it needs to (e.g. after pursuing an opponent and killing them)
  4. Objective: what the bot actually wants to be doing right now, from standing still to following patrol routes and attacking enemies
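To make the shape concrete, here is a minimal sketch of those layers as Unity components. The Navigator, Vision and Objective names come from the post itself; the Tracker class name and all of the member signatures are my own stand-ins, not the actual No Excuses code, and the Controller that ties everything together is sketched further down.

```csharp
using UnityEngine;

// Minimal stubs for the four layers as Unity components. The member
// signatures here are illustrative guesses, not the real code.
public class Vision : MonoBehaviour
{
    // Line-of-sight queries from the cockpit, as described previously.
    public bool CanSee(Transform unit)
    {
        return false; // placeholder for the real occlusion checks
    }
}

public class Navigator : MonoBehaviour
{
    // Point A to point B: steering, jumping gaps, rounding obstacles.
    public void MoveToward(Vector3 point)
    {
        // placeholder: issue movement/jump orders, just like player input
    }

    public float DistanceTo(Vector3 point)
        => Vector3.Distance(transform.position, point);
}

public class Tracker : MonoBehaviour
{
    // Drops breadcrumbs so the bot can retrace its steps later.
    public void DropBreadcrumb(Vector3 position)
    {
        // placeholder: record the position on a trail
    }
}

// What the bot wants to be doing right now; concrete objectives below.
public abstract class Objective
{
    public bool Complete { get; protected set; }
    public Objective Next { get; set; } // successor once this one finishes
    public abstract void Tick(Navigator navigator);
}
```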

My previous approach to navigation, which I was so proud of that I did a lightning talk on it at work, actually had a pretty massive flaw in it. It was based on the bot “looking down” at an angle at the ground in front of it, to keep an eye on whether it was about to walk into a wall or not, and it was so good at detecting walls that the bots jumped over steps they should have been happy walking up. Having just added slightly raised kerbs to my scratchpad scene, in anticipation of it becoming a plausible game space, I watched things fall apart pretty fast.

My new approach is based on raycasting directly downwards instead. Ground is detected by the cast hitting something, and walls are now implied by the height difference between the unit and the impact point of the cast rather than by an impact with a vertical surface. A lack of impact, meanwhile, means either that the wall is so high the cast started completely inside it, or that there is a deep pit ahead: the bot does a quick forward cast to see if it’s a wall or if it can jump the gap before returning a result.
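In code, that probe might look something like the following; the class name, thresholds and distances are all illustrative rather than lifted from the real project.

```csharp
using UnityEngine;

// A rough sketch of the downward probe described above. Names and
// tuning values are my guesses, not the real No Excuses settings.
public enum ProbeResult { Walkable, JumpableStep, Wall, Gap }

public class GroundProbe : MonoBehaviour
{
    [SerializeField] private float lookAhead = 1.0f;   // how far in front to probe
    [SerializeField] private float castHeight = 3.0f;  // cast origin above the feet
    [SerializeField] private float castDepth = 8.0f;   // how far down to look
    [SerializeField] private float stepHeight = 0.3f;  // kerbs: just walk up these
    [SerializeField] private float jumpHeight = 1.5f;  // obstacles worth a jump

    public ProbeResult Probe()
    {
        // Cast straight down from a point just ahead of the unit
        // (transform.position is assumed to sit at the unit's feet).
        Vector3 origin = transform.position
                       + transform.forward * lookAhead
                       + Vector3.up * castHeight;

        if (Physics.Raycast(origin, Vector3.down, out RaycastHit hit, castDepth))
        {
            // Walls are implied by height difference, not by hitting
            // a vertical surface.
            float rise = hit.point.y - transform.position.y;
            if (rise <= stepHeight) return ProbeResult.Walkable;
            if (rise <= jumpHeight) return ProbeResult.JumpableStep;
            return ProbeResult.Wall;
        }

        // No impact: either the cast started inside a wall taller than
        // castHeight (Unity raycasts don't hit a collider from inside it),
        // or there's a deep pit ahead. A quick forward cast tells them apart.
        if (Physics.Raycast(transform.position + Vector3.up * stepHeight,
                            transform.forward, lookAhead + 0.5f))
            return ProbeResult.Wall;

        return ProbeResult.Gap; // the Navigator decides whether to jump it
    }
}
```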

This means bots can do floor detection slightly closer to their centres, which should allow them to manoeuvre more consistently and correctly in tight spaces and around finicky bits of geometry. I’m sure this will throw up different oddities in future, but that’s life.

If those enemy mechs weren’t armed with cannons and therefore couldn’t kill me so fast, I’d have got away because they’d have refused to jump down the cliffs. Alas!

After that comes the “objective” system, the new polymorphic heart of my AI routines. Objectives delegate all navigation details to the imaginatively-named Navigator, allowing them to focus entirely on the high-level desire at the front of the bot’s mind.

For example, a Pathing objective will tell the bot to move towards a target pathing node. When it reaches that point, the objective becomes Complete and cedes its place to its Next objective — another Pathing objective that leads to the next node in its current route. The important thing is that an individual objective holds some state, like the pathing route the bot is following, but does very little work beyond asking the Navigator for directions to that point and recognising when the task has been completed. This keeps them clean and simple, which is great because I want to be able to vary them based on mood or character traits in future (either with internal switches or completely distinct classes).
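Sketched under the same assumed names as before (and with a stand-in PathNode, since I do not know the real route representation), a Pathing objective might look like this:

```csharp
using UnityEngine;

// A stand-in route node; the real route representation is unknown to me.
public class PathNode
{
    public Vector3 Position;
    public PathNode NextNode; // the following node on the current route
}

public class PathingObjective : Objective
{
    private readonly PathNode target;
    private const float ArrivalRadius = 1.0f; // illustrative threshold

    public PathingObjective(PathNode target)
    {
        this.target = target;
    }

    public override void Tick(Navigator navigator)
    {
        // Very little work: ask the Navigator for directions...
        navigator.MoveToward(target.Position);

        // ...and recognise when the node has been reached.
        if (navigator.DistanceTo(target.Position) < ArrivalRadius)
        {
            Complete = true;
            // Cede our place to the next leg of the route.
            if (target.NextNode != null)
                Next = new PathingObjective(target.NextNode);
        }
    }
}
```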

It’s the controller at the top that handles swapping between objectives, pushing them onto the stack when the bot gets distracted and popping them off when actions are completed. For example, a Pathing objective is very likely to be interrupted by an Attacking objective when the bot spots an enemy. The Attacking objective then ensures the bot tries to get the target within range of its primary weapons, at which point it will open fire (eventually, this is the objective where circle-strafing and dodge-jumping will come in too).

Once the target has been eliminated, the objective becomes Complete and is replaced by a RetracingSteps objective — this ensures that the bot is able to return to where it left off without getting stuck on odd terrain and corners. Once it reaches the starting point again, the original Pathing objective is popped off the stack and it continues as if nothing has changed. Delightful!
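Here is a hedged sketch of that stack handling, again with guessed names layered on top of the earlier stubs:

```csharp
using System.Collections.Generic;
using UnityEngine;

public class BotController : MonoBehaviour
{
    private readonly Stack<Objective> objectives = new Stack<Objective>();
    private Navigator navigator;

    private void Awake()
    {
        navigator = GetComponent<Navigator>();
    }

    // Push a distraction, e.g. an Attacking objective when Vision
    // reports an enemy. Target acquisition itself lives elsewhere.
    public void Interrupt(Objective urgent) => objectives.Push(urgent);

    private void Update()
    {
        if (objectives.Count == 0) return;

        Objective current = objectives.Peek();
        current.Tick(navigator);

        if (current.Complete)
        {
            objectives.Pop();
            // A finished objective may chain into a successor (Attacking
            // handing over to RetracingSteps, say) before the interrupted
            // objective underneath resumes as if nothing had changed.
            if (current.Next != null)
                objectives.Push(current.Next);
        }
    }
}
```

The nice property, if it works the way I have sketched it, is that the controller never needs to know what any individual objective actually does: distractions nest arbitrarily deep and unwind on their own.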

With the lasers in hand… I still didn’t manage to out-cannon my opponents. Better make them dumber again!

I have to retract my opening paragraph, though.

The dream is, yes, to create large open-level action-adventure missions with conversation trees and inventory Tetris, where robots ask each other “What can change the nature of a man?” and you talk politics with bartenders. There will be dungeons and there will be empty plains, places to explore and alternative approaches to uncover and daring raids across the solar system. That is the dream.

Rome was not built in a day, however, so, as I also alluded to in the opening, I intend to begin with a singleplayer hero arena. I want to grow it from a simple no-frills deathmatch to a rip-roaring mishmash of competing objectives and overpowered villains, just like I did for that WC3 map so many years ago. This will allow me to refine the core loop, build and expand all the basic features, bulk out the enemy variety — and then launch into the grand plan.

Ideally, this approach will produce a demo sooner rather than later. It is, after all, almost the festive season — it’s the most productive time of the year!
