So last time I talked about bot navigation, which is all very well and good, but bots are nothing until they shoot back. Before they can start shooting, however, they have to be able to see their targets.
But they are bots. They don’t have eyes. How can they see anything?
Ice Cream Cone
The vision cone should have been an easy thing to introduce. Cycle through all the enemies within a sphere around the bot, then cut them down to those within a certain angle window of the direction it’s facing. Easy peasy — we have verified that the target is within the bot’s “field of view”.
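That starting check boils down to a distance test plus a dot product against the cone's half-angle. Here's a minimal sketch of that first version (Python standing in for the actual engine code; the range and angle values are invented for illustration):

```python
import math

def in_vision_cone(bot_pos, bot_forward, target_pos,
                   max_range=30.0, half_angle_deg=60.0):
    """Return True if a single point sits inside the bot's vision cone.

    bot_forward is assumed to be a unit vector; max_range and
    half_angle_deg are made-up tuning values, not the game's real ones.
    """
    dx = target_pos[0] - bot_pos[0]
    dy = target_pos[1] - bot_pos[1]
    dz = target_pos[2] - bot_pos[2]
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)
    if dist > max_range or dist == 0.0:
        return False
    # Dot product of the facing direction with the normalised
    # direction-to-target gives the cosine of the angle between them.
    dot = (bot_forward[0] * dx + bot_forward[1] * dy + bot_forward[2] * dz) / dist
    return dot >= math.cos(math.radians(half_angle_deg))
```

A nearby target that is physically huge can still fail this test, because only its centre point is ever fed in — which is exactly the bug described below.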
The fatal flaw is the assumption baked into that approach: that an enemy is a single point which is either visible or not visible. Au contraire, people; enemies have size and shape.
This led to some hilarity in that, while my bots appeared to be picking targets just fine at range, they couldn’t acquire targets right under their noses. Turned out, of course, that even if an enemy’s whole turret was filling a bot’s face, the geometric centre of the enemy was still outwith the cone of vision. (Obviously I am doing vision cones from head position rather than geometric centre, because I never like to make things easy on myself when there are difficult options available.)
“Lost sight of target,” the bot said to the log trace, desperately trying to walk forward to the last place it saw its target. Its way was being blocked by the target it had just lost sight of, another bot which was trying to do the exact same thing back at it.
There’s always a balance to be struck with things like this — how accurate does the bot’s “sight” really need to be? Do I want to draw a thousand rays across the vision cone to get a perfect understanding of every object that is visible, or can I fudge something a little less intensive?
Turns out that, whew, yes I can.
Luckily for me, when we find a potential target inside the sphere of possibly-visible, we can then read its radius and its height. That gives us a rough, but good enough, approximation of its extremities. From this, we can calculate four points on the square outline as would be seen by the bot — its top left, top right, bottom left and bottom right according to the bot’s eyesight. Now all we need to do is check each of those extremities in the same way we started out with: one, is that point within the angle of the vision cone, and two, is there any terrain obstacle blocking sight of it?
If any one of those points passes the test, then the whole unit is considered visible. Turns out that, yes indeed, even if the target is right up in the bot’s face some of its extremities will still be within the cone and it can lock on to unleash hell. It’s good enough to be convincing, but not so granular that each bot will spend all its time processing sight or be unable to work in more cluttered levels.
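A sketch of that extremity trick, assuming the target is described by a centre point, a radius and a height (the function names are illustrative, the left/right labels assume a Y-up frame, and the two predicates stand in for the real cone and occlusion tests):

```python
import math

def extremity_points(target_centre, radius, height, eye_pos):
    """Approximate the target's silhouette as four corner points,
    as seen from eye_pos."""
    # Direction from eye to target, flattened onto the ground plane.
    dx = target_centre[0] - eye_pos[0]
    dz = target_centre[2] - eye_pos[2]
    flat = math.hypot(dx, dz) or 1.0
    # A vector perpendicular to the line of sight, scaled to the
    # radius, gives the two side edges of the silhouette.
    side = (-dz / flat * radius, 0.0, dx / flat * radius)
    half_h = height / 2.0
    cx, cy, cz = target_centre
    return [
        (cx - side[0], cy + half_h, cz - side[2]),  # top, one side
        (cx + side[0], cy + half_h, cz + side[2]),  # top, other side
        (cx - side[0], cy - half_h, cz - side[2]),  # bottom, one side
        (cx + side[0], cy - half_h, cz + side[2]),  # bottom, other side
    ]

def target_visible(points, point_in_cone, line_of_sight_clear):
    """The whole unit counts as visible if ANY extremity is both
    inside the cone and unoccluded."""
    return any(point_in_cone(p) and line_of_sight_clear(p) for p in points)
```

Four rays per candidate target instead of a thousand — which is the whole “good enough, cheap enough” trade-off.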
This all ties quite nicely into the pursuit mechanics I established last week. Bots spend all their time patrolling (or standing idle), until they spot a target and enter pursuit mode (though I reckon pursuit mode is itself made up of several independent behaviours). They read their weapons to find the distance they need to stand at to ensure their shots will hit and attempt to close the distance — following the target around corners as best they can. If they lose sight of the target, they’ll start casting their gaze left and right in an attempt to reacquire, walking up to the last point they laid eyes on their target.
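That patrol/pursue/search loop could be drawn as a small state machine. The state names and transition conditions below are my guesses at the behaviour described, not the actual implementation:

```python
from enum import Enum, auto

class BotState(Enum):
    PATROL = auto()   # wander the patrol route (or stand idle)
    PURSUE = auto()   # target in sight: close to weapon range, fire
    SEARCH = auto()   # lost sight: walk to last-seen point, scan about

def next_state(state, sees_target, reached_last_seen, search_timed_out):
    """One-step transition for the behaviour described above.
    A sketch only -- the real bots likely track more than three states."""
    if sees_target:
        return BotState.PURSUE
    if state is BotState.PURSUE:
        # Target just vanished: go look where it was last seen.
        return BotState.SEARCH
    if state is BotState.SEARCH and reached_last_seen and search_timed_out:
        # Gave up: back to the patrol route.
        return BotState.PATROL
    return state
```

The “wily player hides, bot gives up after a pause” pattern in the next paragraph falls straight out of the SEARCH-to-PATROL transition.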
A wily player will of course hide at this point, knowing the bot will give up and turn around after a short pause. Break cover too soon, though, and the bot will lock on again and the whole chase repeats until it is in position to open fire.
There are currently four weapons in circulation: mining rams, laser rifles, shard rifles, and cannons. The bots are pretty happy with the latter three, all of them being automatic hold-down-the-trigger spray-and-pray weapons. Unfortunately, the mining rams need to be charged and then released and they can’t cope with this just yet. There’s another finite state machine in there that I need to tease out.