Posted January 09, 2026 by Nyaki
As the end goal for our pre-production stage of Next Chapter (New™ working title btw, to match the changes to the game's setting and design), we wanted to put together a proof of concept that would demonstrate the feasibility of implementing our core mechanics and show off the dynamics that arise from the systems in our first memory section. To accomplish this, we decided to build a fully functioning prototype of the office minigame that makes up roughly a third of that section, focusing on implementing all of the systems and mechanics involved in it.
As the lead programmer on this project, most of this work naturally fell on me, while the rest of the team worked on cleaning up and fleshing out our pitch for the game as a whole and our production plan for the next stage of development. Even in this early prototyping stage, I wanted to make our fundamental systems as modular as was reasonable within the timeframe, which meant pulling some of my favourite old tools from my Unity toolkit. I'm getting ahead of myself, though; let's go through the two systems that best highlight this modularity, and I'll explain each of these tools in context.
First off, the interaction system. The logic behind this system was actually made for another project I'm working on, and is something I've gradually built up over time, but I haven't documented it yet, so I'm gonna go into all the juicy details anyway.
To start, let's define what this system is intended to accomplish. We wanted our interaction system to allow the player to make something happen in the game when they click on an object within a given radius of their character. This could be anything from dialogue to scene transitions to playing a sound effect: pretty much any kind of in-game event you can think of (those more familiar with C# might already see where this is going).
The first step, of course, is just getting objects to recognize when they are clicked. On the surface, this seems pretty easy. Just throw a Collider component on the object and use Unity's built-in OnMouseDown message, maybe check the distance between it and the player character (which can be easily accessed through the singleton instance, done using a reusable Singleton class that I got from my wonderful professor Douglas Gregory), and it should work just like that, right? Almost. This was my first attempt at getting interaction working, and it worked great right up until I started placing objects in the level, where I found that other colliders could block the object from registering mouse input.
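To make the problem concrete, here's a minimal sketch of that first attempt. The `Player.Instance` singleton access and the `interactionRange` field are assumptions based on the setup described above; the names are illustrative, not the actual project code.

```csharp
using UnityEngine;

// First attempt: rely on Unity's built-in OnMouseDown message.
// Assumes a Player singleton (via a reusable Singleton<T> base class)
// exposing its Transform; field names here are illustrative.
[RequireComponent(typeof(Collider))]
public class ClickInteractable : MonoBehaviour
{
    [SerializeField] private float interactionRange = 2f;

    private void OnMouseDown()
    {
        // Only react if the player is close enough to this object.
        float distance = Vector3.Distance(
            transform.position, Player.Instance.transform.position);

        if (distance <= interactionRange)
        {
            Debug.Log($"{name} was clicked within range.");
        }
    }
}
```

The catch, as noted above, is that OnMouseDown only fires on the first collider under the cursor, so any other collider in the way silently eats the click.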
So I needed a different solution, which actually came from a friend's own attempt at implementing a similar kind of interaction system. They tried using a raycast down from the mouse's position, which ran into the same issue of getting blocked by other colliders. But raycasts have one thing that OnMouseDown doesn't: layer masking. Using a layer mask, I could set object layers to be ignored by the raycast, so it would only register objects on an interactable layer. From there, I could just let the object hit by the raycast know that it had been interacted with by getting the Interactable component from it and calling its Interact() method (you could also use GameObject.SendMessage, but I personally prefer to avoid that wherever possible). This also has the added benefit of moving the interaction logic off of the interactable objects and centralizing it in a single component that can be attached to the player, which makes handling interaction range really easy (and also lets us use our handy-dandy singleton class again).
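The centralized approach might look something like this sketch: a component on the player that raycasts from the mouse position, filtered through a layer mask so only interactable objects can be hit. The component and field names are assumptions for illustration.

```csharp
using UnityEngine;

// Centralized interaction handling, attached to the player.
// The layer mask limits the raycast to the interactable layer,
// so other colliders can no longer block the click.
public class PlayerInteractor : MonoBehaviour
{
    [SerializeField] private LayerMask interactableMask;
    [SerializeField] private float interactionRange = 2f;

    private void Update()
    {
        if (!Input.GetMouseButtonDown(0)) return;

        Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
        if (Physics.Raycast(ray, out RaycastHit hit, Mathf.Infinity, interactableMask))
        {
            // The range check lives here instead of on every object.
            if (Vector3.Distance(transform.position, hit.transform.position) > interactionRange)
                return;

            if (hit.collider.TryGetComponent(out Interactable interactable))
                interactable.Interact();
        }
    }
}
```

Because all of this lives on the player, individual objects only need an Interactable component and the right layer; they carry no input logic of their own.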
Now that we can tell our interactable objects that they've been interacted with, we need to actually make something happen in response to that interaction. This is where my beloved UnityEvents come in. For those who are unaware, the UnityEvent class is a delegate class that works somewhat similarly to C#'s "event" keyword, but with one absolutely massive difference: UnityEvents are natively serializable. This means that they appear in Unity's Inspector window, and since they appear in the Inspector, you can have them call methods on objects in the game scene. They do have some limitations, notably that methods wired up through the Inspector cannot take more than a single parameter and cannot have a return type, but these are easy restrictions to work around, and they are massively outweighed by the value of dynamically running events on scene objects and the time saved by being able to edit what happens when an event fires without touching any code.
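Putting the pieces together, the single Interactable component described here can be as small as the following sketch (again, names are assumed for illustration):

```csharp
using UnityEngine;
using UnityEngine.Events;

// The one component every interactable object needs. Designers wire
// up responses (dialogue, scene transitions, sound effects, ...) in
// the Inspector via the serialized UnityEvent; no per-object code.
public class Interactable : MonoBehaviour
{
    [SerializeField] private UnityEvent onInteract;

    public void Interact()
    {
        onInteract.Invoke();
    }
}
```

All of the per-object behaviour then lives in serialized Inspector data rather than in scripts, which is exactly what makes the system easy for non-programmers to work with.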
With UnityEvents, our interaction system becomes extremely modular, letting us do pretty much anything we want off of an interaction with only a single component class. I've also used them elsewhere to achieve similar levels of modularity, such as for triggering dialogue and other events when the player has completed a certain number of tasks in the office minigame.
The other system I want to highlight here is the Tasks system, which is specifically used for the office section of our first memory. The high-level purpose of this system is to generate "tasks" for the player to complete, recognize when the player has completed or failed the task, and handle the consequences of that completion or failure. Since tasks are just a series of objects that the player has to interact with, this in practice means selecting a sequence of objects, and, when the player interacts with an object, checking that object against the current "task" sequence to determine whether that task has been completed, failed, or is still ongoing.
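The sequence check described above is simple enough to sketch directly. Here, `TaskObject` stands in for whatever identifies a task-related object (more on that below), and the class and method names are assumptions for illustration:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch of the task-sequence check: the player must interact with
// the objects in the current sequence, in order.
public class TaskRunner : MonoBehaviour
{
    private List<TaskObject> currentSequence;
    private int nextIndex;

    public void BeginTask(List<TaskObject> sequence)
    {
        currentSequence = sequence;
        nextIndex = 0;
    }

    // Called whenever the player interacts with a task-related object.
    public void ReportInteraction(TaskObject interacted)
    {
        if (currentSequence == null) return;

        if (interacted == currentSequence[nextIndex])
        {
            // Correct object: the task is still ongoing, or now complete.
            nextIndex++;
            if (nextIndex >= currentSequence.Count)
                Debug.Log("Task completed.");
        }
        else
        {
            // Wrong object: the task is failed.
            Debug.Log("Task failed.");
            currentSequence = null;
        }
    }
}
```

The interesting design question is what `TaskObject` actually is, which is where the modularity problem below comes in.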
The key issue for modularity here was in the objects that compose the tasks. They needed to connect with a bunch of different things, including the actual interactable object in the game scene, the task, and the sprite that appears on the UI. Initially, I tried to solve this using a combination of an enum and direct references to the scene objects, but that quickly got frustratingly complicated and didn't scale in the way I wanted it to.
The solution was found in one of Unity's most interesting and versatile tools: Scriptable Objects. Scriptable Objects are a kind of data container, similar to a struct or data class, but with the distinction that they can be stored as assets in a Unity project and referenced the same way other assets can be. This meant I could create a Scriptable Object class that holds a reference to the sprite associated with a given object, as well as any future data that might need to go with it, and then hand a reference to the Scriptable Object asset to whatever needed it. It also allowed me to create a single generic component for all task-related objects in the scene, using a reference to a "task object" Scriptable Object to define its sprite and behaviour.
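A "task object" asset along these lines could be as small as this sketch (field names are assumptions):

```csharp
using UnityEngine;

// A "task object" asset: the shared data that connects the scene
// object, the task definitions, and the UI sprite.
[CreateAssetMenu(menuName = "Tasks/Task Object")]
public class TaskObject : ScriptableObject
{
    public Sprite uiSprite;
    // Any future per-object data can be added here without
    // touching the scene or the components that reference it.
}
```

The generic scene component then just holds a `TaskObject` reference, so the same component works for every task-related object in the level.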
I also used Scriptable Objects to define each of the individual tasks that the player might have to complete, mainly because it allowed me to give the tasks meaningful names, such as "Fetch Documents" or "Reload Printer", rather than having them be generic lists of scene object references.
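Each task can then be its own named asset, along the lines of this sketch (the class name is an assumption):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Each task ("Fetch Documents", "Reload Printer", ...) becomes a
// named asset holding the ordered sequence of task objects involved.
[CreateAssetMenu(menuName = "Tasks/Task")]
public class TaskDefinition : ScriptableObject
{
    public List<TaskObject> sequence;
}
```

Since the asset's filename is its name, designers can see at a glance which task is which, instead of digging through anonymous lists of references.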
These two tools alone have consistently allowed me to create modular and scalable systems that are extremely easy for other designers to work with, even without a strong programming background. They highlight some of my favourite aspects of Unity as an engine, and why I just can't seem to tear myself away from it. For other designers, programmers, or aspiring game developers, hopefully you learned something valuable from this, even if just from getting a peek into my personal process for designing modular tools and systems, both for my own sake and for the sake of my teammates.