Posted July 03, 2025 by antjowie
In case the images don't load, you can check out the PDF version here
This post is about the topics we dealt with during the GoedWare Game Jam and my reflections on them. It's a combination of a dev log and a retrospective. In case you haven't played the game yet, you can try it out in your browser here.
The jam takes place over 10 days. Additionally, our team is remote. To ensure we delivered something workable, I set up a few processes:
To start brainstorming, we dedicated an hour to talking on Discord and filling out a mind map about the theme.
Note that we're Dutch, so our communication tends to be a mix of English and Dutch. Miro has a really nice mind map tool that automatically positions your nodes.
That was all for day 1, as it was getting late (I still have work and the others have school). For day 2 we would brainstorm again, but now that we had discussed the theme broadly, we gave ourselves the task of figuring out how to turn these notes into a game.
I noticed that things didn't really go well during the 2nd day. Delving deeper into our ideas and trying to see how they relate to resources just wasn't sparking any creativity. So, half an hour into our 2nd brainstorm, I decided the team should take a 1-hour break and use that time to individually come up with a gameplay idea based on what we'd been discussing. We would then pitch these ideas against each other and combine all the good parts into our game.
This went quite well; the pitch we ultimately went with was this one:
With a gameplay idea set, we went on to make a little roadmap to see if the work would fit.
I expected this roadmap to change after pretty much the first day, and it did, but its goal was to align on a vision and set expectations for the upcoming days. With that, our planning was done and we were ready to start on day 3.
A technique I had wanted to try out for a while is creating 2D assets from 3D models. It's a workflow Dead Cells makes use of, and it allows you to quickly create and adjust tons of animations. There were 2 things I wanted to achieve:
To achieve this, you first create the model and animation in Blender. To create the station, I created a hollow cylinder and used the discombobulate modifier on it. Afterwards I disabled all viewport gizmos and rendered the viewport twice:
I exported 120 frames for the station to get somewhat smooth movement. This rendering generates 120 separate images, so we still had to turn them into a single texture/spritesheet.
To merge them I used Aseprite: you can "import as spritesheet", selecting all the images, and then "export as spritesheet". Initially our texture was about 500 MB, but after enabling the settings to cull empty space and reducing the resolution, we managed to shrink the texture all the way down to 10 MB.
The world is a very important system as it drives the whole game. It does a few things:
To generate the world we're using a combination of multiple noise maps and thresholds. I actually wanted to explore a Voronoi-based solution for some more interesting visuals, but we had to cut our scope.
All ores sample from their own mixed noise texture, and we use threshold values and multipliers to control which resources overwrite which.
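To give an idea of what that looks like, here's a minimal sketch in Unity C# of layered Perlin noise with thresholds; the enum values, frequencies, and thresholds are illustrative, not our actual tuning:

```csharp
using UnityEngine;

public enum ResourceType { Empty, Rock, Iron, Gold, Crystal }

public static class WorldGenerator
{
    public static ResourceType Sample(int x, int y, float seed)
    {
        // Base terrain: anything below the threshold stays empty space.
        float terrain = Mathf.PerlinNoise(x * 0.05f + seed, y * 0.05f + seed);
        if (terrain < 0.4f) return ResourceType.Empty;

        // Each ore samples its own (offset) noise map.
        float iron    = Mathf.PerlinNoise(x * 0.10f + seed + 100f, y * 0.10f + seed + 100f);
        float gold    = Mathf.PerlinNoise(x * 0.15f + seed + 200f, y * 0.15f + seed + 200f);
        float crystal = Mathf.PerlinNoise(x * 0.20f + seed + 300f, y * 0.20f + seed + 300f);

        // Later checks overwrite earlier ones, which is how we control which resource "wins".
        ResourceType result = ResourceType.Rock;
        if (iron    > 0.75f) result = ResourceType.Iron;
        if (gold    > 0.85f) result = ResourceType.Gold;
        if (crystal > 0.90f) result = ResourceType.Crystal;
        return result;
    }
}
```

Because crystal is checked last, it overwrites gold and iron wherever its noise exceeds the threshold, which is the "which resource overwrites which" control mentioned above.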
Terrain data itself is stored in a 2D array of enums. We have several data structures:
We have another array for the fog mask, whose values are just stored as floats. Then we have several functions to interact with this array, like getResourcesAffectedByPayload, damage, or getAdjacentResourcesOfType.
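Roughly, the shape of that data and the damage path, sketched and simplified (DamagePayload and the exact signatures here are stand-ins, and it reuses the ResourceType enum from the sketch above):

```csharp
using System.Collections.Generic;
using UnityEngine;

public struct DamagePayload
{
    public float damage;
    public int radius;
}

public class World
{
    ResourceType[,] terrain;  // one enum per cell
    float[,] fog;             // fog mask, 0 = hidden, 1 = fully revealed

    // Mutations are broadcast so decoupled systems (renderer, audio, ...) can react.
    public event System.Action<List<Vector2Int>> ResourcesChanged;

    public void Damage(Vector2Int center, DamagePayload payload)
    {
        List<Vector2Int> affected = GetResourcesAffectedByPayload(center, payload);
        foreach (Vector2Int cell in affected)
            terrain[cell.x, cell.y] = ResourceType.Empty;
        ResourcesChanged?.Invoke(affected);
    }

    // Everything within the payload's radius that isn't already empty.
    List<Vector2Int> GetResourcesAffectedByPayload(Vector2Int center, DamagePayload payload)
    {
        var cells = new List<Vector2Int>();
        for (int x = center.x - payload.radius; x <= center.x + payload.radius; x++)
        for (int y = center.y - payload.radius; y <= center.y + payload.radius; y++)
        {
            if (x < 0 || y < 0 || x >= terrain.GetLength(0) || y >= terrain.GetLength(1)) continue;
            if (terrain[x, y] != ResourceType.Empty) cells.Add(new Vector2Int(x, y));
        }
        return cells;
    }
}
```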
The render system simply takes the array and makes a texture from it; based on each cell's ResourceType, it writes a pixel of that resource's color. We actually wanted to use tiled textures and not use this texture directly, but sample it to index into the tiled textures and add some detailing to the terrain.
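A sketch of that straightforward version (the colors are arbitrary and it reuses the ResourceType enum from earlier):

```csharp
using UnityEngine;

public class WorldRenderer
{
    Texture2D texture;

    public Texture2D BuildTexture(ResourceType[,] terrain)
    {
        int w = terrain.GetLength(0), h = terrain.GetLength(1);
        texture = new Texture2D(w, h, TextureFormat.RGBA32, false);
        texture.filterMode = FilterMode.Point; // keep the chunky pixel look

        var pixels = new Color32[w * h];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                pixels[y * w + x] = ColorOf(terrain[x, y]);

        texture.SetPixels32(pixels);
        texture.Apply(); // upload to the GPU
        return texture;
    }

    static Color32 ColorOf(ResourceType type) => type switch
    {
        ResourceType.Empty => new Color32(0, 0, 0, 0),
        ResourceType.Rock  => new Color32(110, 100, 95, 255),
        ResourceType.Iron  => new Color32(170, 120, 90, 255),
        ResourceType.Gold  => new Color32(230, 190, 60, 255),
        _                  => new Color32(150, 220, 240, 255),
    };
}
```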
We have a custom shader for the planet that takes a fog texture. We simply use it to modify the color in the fragment shader, though we were thinking of using it as a mask over a fog texture instead.
This is again a different texture that feeds into the same shader. Our approach was quite naive: we took the fog texture, where the red channel represents alpha, and then repeatedly spawned and translated it along the light direction. This way you end up with a texture that represents where light would land.
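Conceptually it looks like the CPU-side sketch below (our real version lived in textures and shader work, so treat the names and the per-cell loops as an illustration of the idea rather than our actual code):

```csharp
using UnityEngine;

public static class GodRayStamper
{
    // fogAlpha: red channel of the fog texture (1 = fully foggy).
    // Returns a mask where 1 = lit, 0 = fully shadowed.
    public static float[,] StampLightMask(float[,] fogAlpha, Vector2 dirToSun, int steps)
    {
        int w = fogAlpha.GetLength(0), h = fogAlpha.GetLength(1);
        var occlusion = new float[w, h];
        Vector2 step = dirToSun.normalized;

        // Each iteration "stamps" the fog texture shifted one step further towards the sun.
        for (int i = 1; i <= steps; i++)
        {
            Vector2 offset = step * i;
            for (int x = 0; x < w; x++)
            for (int y = 0; y < h; y++)
            {
                int sx = x + Mathf.RoundToInt(offset.x);
                int sy = y + Mathf.RoundToInt(offset.y);
                if (sx < 0 || sy < 0 || sx >= w || sy >= h) continue;
                // Anything foggy between this cell and the sun occludes it.
                occlusion[x, y] = Mathf.Max(occlusion[x, y], fogAlpha[sx, sy]);
            }
        }

        var light = new float[w, h];
        for (int x = 0; x < w; x++)
            for (int y = 0; y < h; y++)
                light[x, y] = 1f - occlusion[x, y];
        return light;
    }
}
```

Doing this every frame is O(width × height × steps), which lines up with how badly it performed.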
It was really bad for performance, but with smoothing it did look very nice.
Next time I'm thinking of implementing this as a post-process shader. You already have an occlusion texture (the fog texture), so for each fragment you trace a ray towards the sun; if it collides somewhere in the fog texture, you know whether the fragment is shaded, and it should be much more performant.
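A rough sketch of that per-fragment march, written here as plain C# over the fog array for clarity; in practice this loop would sit in the post-process shader and sample the fog texture:

```csharp
using UnityEngine;

public static class SunOcclusion
{
    // Returns true if nothing foggy lies between the cell and the sun.
    public static bool IsLit(float[,] fogAlpha, Vector2Int cell, Vector2 dirToSun, int maxSteps)
    {
        int w = fogAlpha.GetLength(0), h = fogAlpha.GetLength(1);
        Vector2 pos = cell;
        Vector2 step = dirToSun.normalized;

        for (int i = 0; i < maxSteps; i++)
        {
            pos += step;
            int x = Mathf.RoundToInt(pos.x), y = Mathf.RoundToInt(pos.y);
            if (x < 0 || y < 0 || x >= w || y >= h) return true; // marched out of the texture unblocked
            if (fogAlpha[x, y] > 0.5f) return false;             // hit fog: this fragment is shaded
        }
        return true;
    }
}
```

The win over the stamping approach is the early-out: each pixel stops marching as soon as it hits fog, instead of stamping the whole texture N times.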
The weapons were downscoped a lot as well. Initially there would be an upgrade system for your weapons as well as an energy system, and you would have to queue projectiles and build stations to increase how many projectiles you could shoot in parallel. Not only was this obviously overscoped, it also increased the complexity of the game a lot, meaning we would have to teach the player all of this, and the game already runs on a timer.
Needless to say, it got scrapped. The weapons themselves just call into the World object, modifying the terrain by calling Damage with a payload. There isn't anything too crazy to mention. All weapon and projectile stats are defined in the script variables, so when I implemented upgrades, I changed those to getters and applied the upgrade multipliers there. It's not the most robust solution (and I think the current upgrades are actually bugged), but it worked quite well for this project.
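Roughly what that getter change looks like; UpgradeManager and the multiplier names are illustrative stand-ins for however you store the upgrade state:

```csharp
using UnityEngine;

public class UpgradeManager
{
    public static UpgradeManager Instance { get; } = new UpgradeManager();
    public float DamageMultiplier = 1f;
    public float FireRateMultiplier = 1f;
}

public class Weapon : MonoBehaviour
{
    // Base values tweaked in the inspector.
    [SerializeField] float baseDamage = 10f;
    [SerializeField] float baseFireRate = 2f;

    // Stats are read through getters so upgrades apply everywhere automatically.
    public float Damage   => baseDamage   * UpgradeManager.Instance.DamageMultiplier;
    public float FireRate => baseFireRate * UpgradeManager.Instance.FireRateMultiplier;
}
```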
Performance was fine in the editor, but the game ran terribly in a cooked build; I've never seen a game perform this much worse in a build than in the editor. I suspected it was due to the resolution, but I started profiling it anyway. Whenever you scrolled to the bottom of the planet, the game would drop to 15-30 fps. We also had an issue where the game would freeze every time you broke resources.
For the fps issue, the profiler showed that 40 ms were spent in VSync on the renderer. I was shocked to see that and thought, wow, just disabling vsync will save the game. This didn't do anything, and it sent me on a wild goose chase where I suspected the build profile settings weren't being applied and also tried setting vsync off in script.
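For reference, "setting vsync off in script" amounts to something like this (in our case it made no difference because, as it turned out, we weren't actually waiting on vsync; and as far as I know, WebGL builds largely ignore these settings anyway since the browser drives presentation):

```csharp
using UnityEngine;

public class DisableVsync : MonoBehaviour
{
    void Awake()
    {
        QualitySettings.vSyncCount = 0;    // 0 = don't wait for vertical sync
        Application.targetFrameRate = -1;  // -1 = uncapped
    }
}
```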
After some further research, it turns out the profiler reports time as VSync when the GPU is busy; it isn't necessarily vsync that it's actually waiting on. What ended up being the bottleneck was the god ray shader and all the iterations we did to stamp the fog texture for light info. Disabling smoothing made the god rays quite ugly, but performance was back in the hundreds of fps.
For the freeze on resource destruction, we initially thought it was our raycast function, but after profiling it turned out to be our world render script. Since our world systems are decoupled, they communicate all mutations with events, and for the render part we were just re-rendering the whole texture. This was solved by passing along an array of modified resources, so we can easily see which cells are affected and only update those.
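Sketched out, the event now carries the modified cells and the renderer only rewrites those pixels (simplified, and it assumes the World/ResourcesChanged sketch from earlier):

```csharp
using System.Collections.Generic;
using UnityEngine;

public class WorldRenderListener : MonoBehaviour
{
    World world;        // assigned by whoever creates the world
    Texture2D texture;  // built once from the full terrain array

    void OnEnable()  => world.ResourcesChanged += OnResourcesChanged;
    void OnDisable() => world.ResourcesChanged -= OnResourcesChanged;

    // Only touch the pixels that actually changed instead of re-rendering everything.
    void OnResourcesChanged(List<Vector2Int> modified)
    {
        foreach (Vector2Int cell in modified)
            texture.SetPixel(cell.x, cell.y, Color.clear); // cell was destroyed, so it becomes empty
        texture.Apply(); // single GPU upload per event
    }
}
```

A single Apply() per event keeps the upload small compared to rebuilding the entire texture every time something breaks.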
Playtesting happened on the last 2 days, together with polishing. During playtesting we added a ton of missing feedback, like showing the resources you've gained, hinting at the weapon you should use, and the upgrade system (we decided this was the one thing truly missing). With 1 day left we had just enough time to add those systems.
Nobody on the team had any prior experience with Unity, but one of our members wanted to learn it, and this seemed like a good opportunity to do so.
Overall, I'm not the biggest fan. Being unable to read some engine API internals leaves me guessing, and some of the newer systems, like the new Input System, didn't work well with the short timespan we had.
For example, I tried to use 2 PlayerInput components on different GameObjects because I assumed they would both forward input events to our scripts. Instead it bound the 2nd input to another device, which makes sense for co-op games, but I couldn't find a way around it.
Another issue was that pressing the UI would not eat input events. To solve this, the official docs mention: "...The easiest way to resolve such ambiguities is to respond to in-game actions by polling from inside MonoBehaviour.Update…". But polling kind of goes against having an event-driven input system, right? A lot of my confusion can also be attributed to not wanting to invest in learning the system given the timeline we had, so I do give it the benefit of the doubt.
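For completeness, this is roughly what the polling approach looks like with the new Input System, using EventSystem.IsPointerOverGameObject as one common way to stop clicks on UI from also firing into the game; the "Fire" action and class names are just examples:

```csharp
using UnityEngine;
using UnityEngine.EventSystems;
using UnityEngine.InputSystem;

public class FireInput : MonoBehaviour
{
    [SerializeField] InputActionReference fireAction; // bound to e.g. <Mouse>/leftButton

    void OnEnable()  => fireAction.action.Enable();
    void OnDisable() => fireAction.action.Disable();

    void Update()
    {
        // Ignore clicks that land on UI elements.
        if (EventSystem.current != null && EventSystem.current.IsPointerOverGameObject())
            return;

        if (fireAction.action.WasPressedThisFrame())
            Fire();
    }

    void Fire() { /* call into the weapon */ }
}
```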
On the positive side, it did contain all the tooling needed to create our project. It was also quite performant: we did run into performance issues, but when I tried to make the same project in GDScript, iterating over 200,000 pixels already froze my application for 500 ms. And lastly, it did increase my motivation to use a custom engine or Godot for my next projects.
This was our first game jam. At the beginning, time felt quite abundant, but it quickly became apparent that the scope was too big. We cut quite a few features but still managed to make something that feels quite polished and fun to play.
Getting to experience working together towards a deliverable in a short timespan has been very insightful. The biggest concern was that our idea required quite a few systems before we could start testing, so we never really knew whether it would be fun to play until it was done (and we didn't have anyone on the team with a focus on design to ensure we were making something that's actually fun).
Once the systems were finished and we tried it out, it was quite rough, but by playing, coming up with ideas to improve the gameplay, and ensuring they stayed within scope, we were able to deliver something that we're satisfied with.
For next time I want to focus on: