Apologies for the late reply, I'm okay sharing information here for other game devs that want to learn about it too. 

Regarding AI:

Our AI has two senses: hearing and vision. The player's footsteps create "noise" which the AI can hear within a certain range. Noise strength matters (e.g. crouching produces less noise than walking), and so do occluders (blocking entities such as walls apply a multiplier that reduces noise strength). Noise and vision raise the AI's suspicion, which puts the AI into investigate mode -> go to the suspicious point and look around. If the AI doesn't spot the player he goes back to patrolling; if he finds the player he starts chasing him. Vision makes the AI go into chase mode almost immediately.
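To make the hearing model above concrete, here's a minimal sketch of noise attenuation and suspicion. All names and values (strength per movement state, the per-wall multiplier, the hearing range, the suspicion threshold) are my own illustrative assumptions, not the game's actual numbers:

```python
# Illustrative sketch: footstep noise attenuated by distance and occluders,
# feeding the AI's suspicion. All constants are assumed, not from the game.

NOISE_STRENGTH = {"crouch": 0.3, "walk": 1.0, "run": 1.8}  # assumed values
OCCLUDER_MULTIPLIER = 0.5   # each wall between player and AI halves the noise
HEARING_RANGE = 10.0        # assumed hearing radius
SUSPICION_THRESHOLD = 1.0   # suspicion level that triggers investigate mode

def perceived_noise(move_state, distance, occluders):
    """Noise strength as heard by the AI after range and occlusion falloff."""
    if distance > HEARING_RANGE:
        return 0.0
    strength = NOISE_STRENGTH[move_state]
    strength *= OCCLUDER_MULTIPLIER ** occluders      # walls reduce strength
    strength *= 1.0 - distance / HEARING_RANGE        # linear distance falloff
    return strength

# Walking close by with no walls is loud; crouching far away behind two
# walls is barely audible.
loud = perceived_noise("walk", 2.0, occluders=0)
faint = perceived_noise("crouch", 8.0, occluders=2)
```

Anything above the suspicion threshold would flip the AI into investigate mode; the exact falloff curve (linear here) is just one reasonable choice.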

Vision and hearing are done using raycasts, yes. OnSee/OnHear events feed data into the AI, and the AI decides what to do with that data.
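The event flow described above can be sketched like this. The state names, threshold, and method signatures are assumptions for illustration (the raycasting itself is omitted, since that lives in the engine):

```python
# Hedged sketch of the OnSee/OnHear flow: sensor events feed the AI,
# the AI decides what to do. States and threshold are assumed.

class GuardAI:
    def __init__(self):
        self.state = "patrol"
        self.suspicion = 0.0
        self.target = None

    def on_hear(self, noise_strength, position):
        # Hearing only raises suspicion; investigate once it crosses a threshold.
        self.suspicion += noise_strength
        if self.suspicion >= 1.0 and self.state != "chase":
            self.state = "investigate"
            self.target = position

    def on_see(self, player_position):
        # Vision is near-immediate: jump straight to chase.
        self.state = "chase"
        self.target = player_position

ai = GuardAI()
ai.on_hear(0.6, (4, 2))   # not enough noise yet -> still patrolling
ai.on_hear(0.6, (4, 2))   # crosses threshold -> investigate the sound
ai.on_see((5, 1))         # line of sight -> chase
```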

Chase mode is quite simple -> pathfind your way to the player using a pathfinding algorithm, and when close enough, trigger the kill-player sequence.
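That loop might look something like the following. I'm using BFS on a grid as a stand-in for whatever pathfinding the engine actually provides, and the kill range is an assumed value:

```python
# Sketch of the chase loop: pathfind to the player, trigger the kill
# sequence when close enough. BFS and the grid are stand-ins; KILL_RANGE
# is an assumption.

from collections import deque

def bfs_path(grid, start, goal):
    """Shortest path on a 4-connected grid; 1 = wall, 0 = walkable."""
    queue = deque([start])
    came_from = {start: None}
    while queue:
        cur = queue.popleft()
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if (0 <= nx < len(grid[0]) and 0 <= ny < len(grid)
                    and grid[ny][nx] == 0 and nxt not in came_from):
                came_from[nxt] = cur
                queue.append(nxt)
    return None

KILL_RANGE = 1  # grid cells; assumed

def chase_step(grid, ai_pos, player_pos):
    path = bfs_path(grid, ai_pos, player_pos)
    if path and len(path) - 1 <= KILL_RANGE:
        return "kill_sequence"
    return path[1] if path and len(path) > 1 else ai_pos

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
```

In a real engine you'd call its navmesh query instead of BFS, but the shape of the loop is the same: repath, step, check range.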

Investigate mode is quite simple too: it just makes the AI go to a suspicious point and look around.
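As a tiny sketch of that behaviour: walk to the suspicious point, look around for a while, then fall back to patrol (or switch to chase if the player is spotted on the way). The timer value and the teleport-style movement are simplifying assumptions:

```python
# Sketch of investigate mode: reach the suspicious point, look around,
# then return to patrol. LOOK_AROUND_TIME is assumed; movement is a
# stand-in for real pathfinding.

LOOK_AROUND_TIME = 3.0  # seconds, assumed

def investigate(ai, dt, player_visible):
    if player_visible:
        return "chase"
    if ai["pos"] != ai["sus_point"]:
        ai["pos"] = ai["sus_point"]      # stand-in for pathfinding there
        ai["look_timer"] = 0.0
        return "investigate"
    ai["look_timer"] += dt
    if ai["look_timer"] >= LOOK_AROUND_TIME:
        return "patrol"                  # nothing found -> back to patrol
    return "investigate"
```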

Patrol mode doesn't currently use breadcrumbs; it uses predefined, designer-made paths. Investigate is what triggers breadcrumbs, you could say. Ideally we would fine-tune patrol mode so that the player drops breadcrumbs occasionally and the AI goes there instead of the default fallback waypoint. However, this would be hard to balance within the game jam scope and we had way too much other stuff we wanted to do. We used to have a "run path" life echo which made the AI follow the player's previous path (from before he died), but this didn't fit at all and ours was too strict (it followed the exact path, etc.).
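The breadcrumb idea described there could be sketched like this: the player drops a crumb every few seconds, only the freshest few are kept, and after investigating the AI heads to the newest crumb instead of its default fallback waypoint. The interval and capacity are assumed values:

```python
# Sketch of occasional breadcrumbs: the AI prefers the freshest crumb
# over the default fallback waypoint. DROP_INTERVAL and MAX_CRUMBS are
# assumptions.

from collections import deque

DROP_INTERVAL = 2.0     # seconds between crumbs, assumed
MAX_CRUMBS = 5          # keep only the freshest few

class BreadcrumbTrail:
    def __init__(self):
        self.crumbs = deque(maxlen=MAX_CRUMBS)
        self.since_drop = 0.0

    def update(self, dt, player_pos):
        self.since_drop += dt
        if self.since_drop >= DROP_INTERVAL:
            self.crumbs.append(player_pos)
            self.since_drop = 0.0

    def next_target(self, fallback_waypoint):
        # Freshest crumb wins; otherwise the designer-made patrol waypoint.
        return self.crumbs[-1] if self.crumbs else fallback_waypoint
```

Tuning `DROP_INTERVAL` is where the balance problem mentioned above lives: drop too often and the AI follows an exact path (the "too strict" problem), too rarely and the crumbs are useless.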

Overall, our AI is pretty simple, but it does its job well in each mode. For a full release or an actual game I would definitely make it way more complex, with more stuff going on behind the scenes. Off the top of my head I would:

1. Add a better patrol algorithm -> move toward the player's area if the AI has been far away for a long time (between x,y seconds).

2. Proper searching after investigating -> search/patrol nearby areas even without the player triggering sounds. We had something like this that barely worked, so we scrapped it due to the scope.
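Improvement 1 could be sketched as a simple timer. The original post leaves the (x, y) second range unspecified, so the bounds below are placeholders, as are the distance threshold and state names:

```python
# Sketch of improvement 1: if the AI has been far from the player for a
# randomized interval, bias patrol toward the player's area. The bounds
# stand in for the post's unspecified (x, y) seconds; all values assumed.

import random

FAR_DISTANCE = 15.0              # assumed "far" threshold
MIN_WAIT, MAX_WAIT = 8.0, 20.0   # placeholders for the post's (x, y)

def patrol_bias(state, dt, dist_to_player):
    if dist_to_player < FAR_DISTANCE:
        state["far_timer"] = 0.0     # close enough -> reset the clock
        state["deadline"] = None
        return "patrol_route"
    if state["deadline"] is None:
        state["deadline"] = random.uniform(MIN_WAIT, MAX_WAIT)
    state["far_timer"] += dt
    if state["far_timer"] >= state["deadline"]:
        return "head_to_player_area"
    return "patrol_route"
```

Randomizing the deadline keeps the AI from feeling like it's on a metronome while still guaranteeing it eventually drifts toward the player.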

We also had to nerf the AI about 4 times lmao, he was too good at his job. Some of these nerfs literally halved his power: vision, hearing, and chase speed all got cut nearly in half (I was too good at the game apparently ;P)


Regarding the modelling process of assets:

For this project we had one person working on the art.

Modelling the player and the modular parts took approximately 24 hours total. We didn't make all the animations ourselves -> we got some from Mixamo, but we did make some, like the cutscene animations, which took about half a day. Every single thing you see in the game was modelled by our artist except, I think, 2 props (3 max).

We made our own textures for absolutely everything, which also took a day. Still only one person doing this. The enemy is a tweaked player model with different textures and slightly different model changes depending on what state the game is in (player's 1st life, 2nd life, etc.). So this wasn't a big modelling job, no need for a 2D template, etc. This allowed us to focus on other stuff, but it's also completely in line with the hidden message in our game, which I will not spoil. All I will say is there's a reason the player gains clarity after each death.

Thanks for taking an interest <3 If I glossed over anything or you have more questions please ask.

I actually went and added light breadcrumb functionality today to make the AI play more around the player. Feedback's been pretty good for it. Thank you for making me think about it again lmaoo