Design epiphany

Until recently, I thought of the game as a series of alternating levels: PvP level, parkour level, PvP level, parkour level...

That's a lot of content to build. It makes much more sense to design each level to work equally well in both contexts. First, you play the level facing off against another online player. The game overrides the level's color scheme and render settings to achieve the feeling of tension I want, and to communicate critical information about the game state.

Then you play the same level in parkour mode, where each level can look and feel wildly different.

It's not a new idea; plenty of games do this. The tricky part is tightening up the design so that each element in mode X serves a corresponding purpose in mode Y.

Custom nav meshes

For reasons not yet clear, I find myself in need of a bot-friendly method of navigating around a level.

Now, I already have a traditional nav mesh for bipedal bots, thanks to Recast. It's super nice and can even be modified at runtime.

But this bot needs to be able to shoot itself around the level, attaching to walls and ceilings like the player does. I need a different kind of navigation mesh for this.

Here's the plan:

Task 1. Sprinkle points across all the surfaces in the level.
Task 2. Do a bunch of raycasts to determine which points connect to each other.

Of course this is pretty compute-intensive, so I do it at build time.
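
To make that concrete, here's roughly what the build step could look like. Everything below is a placeholder sketch: the `NavGraph` layout and the function names are mine, not anything final.

```cpp
#include <vector>

struct Vec3 { float x, y, z; };
struct LevelMesh; // engine-specific level geometry

// Output of the build step: sampled surface points plus raycast-verified links.
struct NavGraph
{
    std::vector<Vec3> points;                 // task 1: points sprinkled on surfaces
    std::vector<std::vector<int>> neighbors;  // task 2: indices of connected points
};

std::vector<Vec3> sample_surface_points(const LevelMesh&); // task 1, sketched below
std::vector<std::vector<int>> connect_points(const LevelMesh&, const std::vector<Vec3>&); // task 2, sketched below

// Runs offline; the result gets serialized alongside the level.
NavGraph build_nav_graph(const LevelMesh& level)
{
    NavGraph graph;
    graph.points = sample_surface_points(level);
    graph.neighbors = connect_points(level, graph.points);
    return graph;
}
```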

Surface parameterization

I try to code up task #1 live on Twitch, resulting in this:

Eh, it's a bit off. The problem is, I want the points to be regularly spaced in a nice grid on each surface. This is closer to what I want:

Here's the process I end up with:

1. Loop through each triangle in the scene.
2. For each triangle, calculate two "basis" vectors for the grid on the triangle.
3. Use a standard triangle rasterizer to generate all the points on the grid, projecting each one back into 3D space.
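
Here's a minimal sketch of that process, reusing the `Vec3` type from the skeleton above; `sample_surface_points` would just run this over every triangle (step 1). A real rasterizer would use proper fill rules, so treat the inclusive edge tests here as an approximation.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Minimal vector math; in the real project this comes from the engine.
static Vec3 sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 add(Vec3 a, Vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3 mul(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 cross(Vec3 a, Vec3 b)
{
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}
static Vec3 normalize(Vec3 a) { return mul(a, 1.0f / std::sqrt(dot(a, a))); }

struct Vec2 { float x, y; };

// 2D edge function: positive if p is to the left of the edge (a -> b).
static float edge(Vec2 a, Vec2 b, Vec2 p)
{
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

// Steps 2 and 3: grid-sample a single triangle and emit 3D points.
static void sample_triangle(Vec3 a, Vec3 b, Vec3 c, float spacing, std::vector<Vec3>* out)
{
    // Step 2: two orthonormal basis vectors spanning the triangle's plane.
    Vec3 u = normalize(sub(b, a));
    Vec3 n = normalize(cross(sub(b, a), sub(c, a)));
    Vec3 v = cross(n, u); // perpendicular to u, still in the plane

    // Project the vertices into 2D (u, v) coordinates relative to vertex a.
    auto project = [&](Vec3 p) { Vec3 d = sub(p, a); return Vec2 { dot(d, u), dot(d, v) }; };
    Vec2 p0 = project(a), p1 = project(b), p2 = project(c);

    // Step 3: rasterize the 2D triangle, visiting grid points at multiples of `spacing`.
    float min_x = std::min({ p0.x, p1.x, p2.x }), max_x = std::max({ p0.x, p1.x, p2.x });
    float min_y = std::min({ p0.y, p1.y, p2.y }), max_y = std::max({ p0.y, p1.y, p2.y });
    float area = edge(p0, p1, p2); // signed; encodes the winding

    for (float y = std::floor(min_y / spacing) * spacing; y <= max_y; y += spacing)
    {
        for (float x = std::floor(min_x / spacing) * spacing; x <= max_x; x += spacing)
        {
            Vec2 p = { x, y };
            float w0 = edge(p1, p2, p), w1 = edge(p2, p0, p), w2 = edge(p0, p1, p);
            // The point is inside if all three edge functions agree with the winding.
            if ((area > 0 && w0 >= 0 && w1 >= 0 && w2 >= 0)
                || (area < 0 && w0 <= 0 && w1 <= 0 && w2 <= 0))
                out->push_back(add(a, add(mul(u, x), mul(v, y)))); // back into 3D
        }
    }
}
```

Note that each triangle gets its own grid origin and orientation, so grids from adjacent triangles don't line up at shared edges.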

The end result works pretty well, although I'm still struggling with thin triangles slipping through the cracks of the grid, and with sampling around the edges of triangles. Here's a shot of a particularly bad configuration resulting in a lot of missed points:

Adjacency calculation

Next, I connect each point with up to 48 of its closest neighbors, like this:

Here you can see the sparse point sampling completely missed the walkway. Not good. I'll probably revisit this problem at some point.
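
The connection pass itself is simple; this is the naive version, before any spatial acceleration. `raycast_clear` stands in for whatever segment query the collision system provides (a chunked version is sketched in the next section), and `distance_sq` is the usual squared Euclidean distance.

```cpp
#include <algorithm>
#include <vector>

const int max_neighbors = 48;

// Engine-specific: true if the segment from a to b hits no level geometry.
bool raycast_clear(const LevelMesh& level, const Vec3& a, const Vec3& b);

float distance_sq(const Vec3& a, const Vec3& b);

// Link each point to up to 48 of its closest mutually visible neighbors.
// Naive O(n^2 log n); acceptable for a build step.
std::vector<std::vector<int>> connect_points(const LevelMesh& level,
                                             const std::vector<Vec3>& points)
{
    std::vector<std::vector<int>> neighbors(points.size());
    for (int i = 0; i < (int)points.size(); i++)
    {
        // Sort every other point by distance to point i.
        std::vector<int> candidates;
        candidates.reserve(points.size() - 1);
        for (int j = 0; j < (int)points.size(); j++)
            if (j != i)
                candidates.push_back(j);
        std::sort(candidates.begin(), candidates.end(), [&](int a, int b)
        {
            return distance_sq(points[i], points[a])
                 < distance_sq(points[i], points[b]);
        });

        // Keep the closest candidates that pass the visibility raycast.
        for (int j : candidates)
        {
            if ((int)neighbors[i].size() == max_neighbors)
                break;
            if (raycast_clear(level, points[i], points[j]))
                neighbors[i].push_back(j);
        }
    }
    return neighbors;
}
```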

At first it takes about 5 minutes to generate and connect ~4000 points; the raycasts are the bottleneck. I end up splitting the level mesh into chunks, which speeds up the raycasts immensely. The whole process now takes less than 30 seconds.
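
Here's one way the chunking could look, assuming the level's triangles have already been bucketed into chunks with precomputed bounds. `segment_hits_triangle` stands in for a real intersection routine like Möller-Trumbore.

```cpp
#include <algorithm>
#include <utility>
#include <vector>

struct Triangle { Vec3 a, b, c; };

struct Chunk
{
    Vec3 bounds_min, bounds_max;     // AABB enclosing this chunk's triangles
    std::vector<Triangle> triangles;
};

// Standard slab test: does the segment (a -> b) overlap the box at all?
static bool segment_hits_aabb(Vec3 a, Vec3 b, Vec3 lo, Vec3 hi)
{
    float origin[3] = { a.x, a.y, a.z };
    float delta[3]  = { b.x - a.x, b.y - a.y, b.z - a.z };
    float box_lo[3] = { lo.x, lo.y, lo.z };
    float box_hi[3] = { hi.x, hi.y, hi.z };
    float tmin = 0.0f, tmax = 1.0f; // segment parameterized over [0, 1]
    for (int i = 0; i < 3; i++)
    {
        float inv = 1.0f / delta[i]; // +/-inf on axis-aligned segments; fine for typical input
        float t0 = (box_lo[i] - origin[i]) * inv;
        float t1 = (box_hi[i] - origin[i]) * inv;
        if (t0 > t1)
            std::swap(t0, t1);
        tmin = std::max(tmin, t0);
        tmax = std::min(tmax, t1);
        if (tmin > tmax)
            return false;
    }
    return true;
}

// Stand-in for a real segment/triangle test (e.g. Möller-Trumbore).
bool segment_hits_triangle(Vec3 a, Vec3 b, const Triangle& tri);

// The chunked raycast: reject whole chunks before touching any triangles.
bool raycast_clear(const std::vector<Chunk>& chunks, Vec3 a, Vec3 b)
{
    for (const Chunk& chunk : chunks)
    {
        if (!segment_hits_aabb(a, b, chunk.bounds_min, chunk.bounds_max))
            continue; // most chunks die here; this is where the speedup comes from
        for (const Triangle& tri : chunk.triangles)
            if (segment_hits_triangle(a, b, tri))
                return false;
    }
    return true;
}
```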

Most of the points in the graph are maxed out at the 48-neighbor limit. The connectedness of the graph is insane.

Pathfinding

I code up a quick implementation of A* and run it.

Turns out, when every point in the graph has 48 neighbors, A* blows up: the branching factor is so high that even a path of only 2 or 3 hops takes a good 30 ms to calculate. Granted, my implementation is unoptimized, and I could try another algorithm entirely, but I suspect any algorithm would struggle with this graph. The good news is that, with connectivity this high and points connecting across long distances, I probably won't see paths longer than 3 or 4 hops in practice.
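
For reference, this is roughly the shape of my A* loop; the graph and heuristic details are simplified, but the inner neighbor loop is the part that gets hammered 48 times per expanded node.

```cpp
#include <algorithm>
#include <functional>
#include <limits>
#include <queue>
#include <vector>

float distance(const Vec3& a, const Vec3& b); // Euclidean; also used as the heuristic

// Plain A* over the point graph (NavGraph from the build-time skeleton above).
// Returns the path as point indices, or an empty vector if no path exists.
std::vector<int> find_path(const NavGraph& graph, int start, int goal)
{
    std::vector<float> g(graph.points.size(), std::numeric_limits<float>::max());
    std::vector<int> came_from(graph.points.size(), -1);

    // Open set ordered by f = g + heuristic, smallest first.
    typedef std::pair<float, int> Entry; // (f score, point index)
    std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> open;

    g[start] = 0.0f;
    open.push({ distance(graph.points[start], graph.points[goal]), start });

    while (!open.empty())
    {
        int current = open.top().second;
        open.pop();

        if (current == goal)
        {
            std::vector<int> path; // walk the came_from chain backward
            for (int i = goal; i != -1; i = came_from[i])
                path.push_back(i);
            std::reverse(path.begin(), path.end());
            return path;
        }

        // With 48 neighbors per point, this loop is where all the time goes.
        for (int next : graph.neighbors[current])
        {
            float cost = g[current] + distance(graph.points[current], graph.points[next]);
            if (cost < g[next]) // stale queue entries fail this test and do no harm
            {
                g[next] = cost;
                came_from[next] = current;
                open.push({ cost + distance(graph.points[next], graph.points[goal]), next });
            }
        }
    }
    return std::vector<int>(); // goal unreachable
}
```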

I end up putting A* on a separate thread. Similar to the threaded renderer, I communicate with the AI thread via a simple bytecode protocol written to a pair of ring buffers. Results are returned via callbacks.
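
I won't reproduce the real protocol here, but the core of that setup is a pair of single-producer/single-consumer ring buffers, something like the sketch below. The `AIMessage` format and the opcode names are made up for illustration.

```cpp
#include <atomic>
#include <cstdint>

// Minimal lock-free single-producer/single-consumer ring buffer.
// One instance carries requests to the AI thread; a second carries results back.
template<typename T, uint32_t capacity> // capacity must be a power of two
struct RingBuffer
{
    T data[capacity];
    std::atomic<uint32_t> head { 0 }; // advanced by the consumer
    std::atomic<uint32_t> tail { 0 }; // advanced by the producer

    bool push(const T& value) // producer side
    {
        uint32_t t = tail.load(std::memory_order_relaxed);
        if (t - head.load(std::memory_order_acquire) == capacity)
            return false; // full
        data[t & (capacity - 1)] = value;
        tail.store(t + 1, std::memory_order_release);
        return true;
    }

    bool pop(T* value) // consumer side
    {
        uint32_t h = head.load(std::memory_order_relaxed);
        if (h == tail.load(std::memory_order_acquire))
            return false; // empty
        *value = data[h & (capacity - 1)];
        head.store(h + 1, std::memory_order_release);
        return true;
    }
};

// A made-up message format: one opcode plus a few arguments.
struct AIMessage
{
    enum Op : uint8_t { FindPath, PathResult };
    Op op;
    int32_t request_id; // lets the main thread match a result to its callback
    int32_t start, goal;
};

RingBuffer<AIMessage, 256> to_ai;   // main thread -> AI thread
RingBuffer<AIMessage, 256> from_ai; // AI thread -> main thread
```

The AI thread spins on `to_ai.pop`, runs the pathfind, and pushes a `PathResult` back; the main thread drains `from_ai` once per frame and fires the callback registered under `request_id`.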