Posted January 25, 2026 by LyffLyff
#shader #game-of-life #emergence #godot
Introduction:
Hello and the best of mornings, noons, or nights to anyone who somehow found their way to this random Devlog!
A few years ago, I released this project after creating it basically in one night, without any real experience in life or in making projects like this. Because of that, it genuinely makes me happy that people still seem to stumble across it every day.
For that reason—and because I kind of fell in love with shader programming—I decided to do a Performance Update. The goal was to migrate my old codebase from basic CPU-based logic to new, shiny, and much more performant code using Godot’s Shader Language. I actually tried to do this back then, but since my skills fell short at the time, I had to give up.
In this Devlog, I want to briefly explain why the old version kind of sucked, why the old shader version didn’t work, and how you (yes, you) can create a fully procedural, generation-based shader in Godot using ping-pong buffering.
The Problem with the Old Version:
When you’re trying to create a simulation with cells on a large grid—each cell needing to be calculated individually—what would you do?
Use the CPU (for example, GDScript), which processes everything serially, calculating one cell after another literally millions of times per frame? ORRRR should you use the GPU (via shaders), which can perform those calculations in parallel? Spoiler: the old version used the CPU.
While this approach was much easier to implement and gave me the freedom of an effectively infinite, unbounded universe without the annoyances of Shader Languages, it was also wildly inefficient. Once more than a few hundred cells were visible on screen at the same time, performance tanked hard. The whole thing started lagging like a duck.
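To make the serial cost concrete, here is a hedged sketch of what such a CPU-side update loop typically looks like (Godot 4 GDScript; the names `cells`, `step_generation`, and so on are illustrative, not the actual project code):

```gdscript
# Live cells are stored sparsely: Vector2i -> true.
var cells: Dictionary = {}

func count_live_neighbors(pos: Vector2i) -> int:
    var n := 0
    for dx in [-1, 0, 1]:
        for dy in [-1, 0, 1]:
            if dx == 0 and dy == 0:
                continue
            if cells.has(pos + Vector2i(dx, dy)):
                n += 1
    return n

func step_generation() -> void:
    # Every live cell and each of its neighbors must be visited
    # one after another, every single generation.
    var candidates := {}
    for pos in cells.keys():
        for dx in [-1, 0, 1]:
            for dy in [-1, 0, 1]:
                candidates[pos + Vector2i(dx, dy)] = true
    var next := {}
    for pos in candidates.keys():
        var n := count_live_neighbors(pos)
        # Conway's rules: survive with 2 or 3 neighbors, birth with exactly 3.
        if (cells.has(pos) and (n == 2 or n == 3)) or (not cells.has(pos) and n == 3):
            next[pos] = true
    cells = next
```

Every cell means another pass through those nested loops on a single core, which is exactly why the frame time balloons once the grid fills up.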
Why the Old Shader Code Didn’t Work:
After seeing the poor performance of the CPU version—and motivated to finally learn Godot’s shader language—I created a shader-based version of Conway’s Game of Life (CGOL) using screen reading. I thought that would solve everything. It didn’t. The result was not a simulation at all, just static pixels pretending to be part of something greater.
The problem is that when a shader reads from the screen, Godot hands it a capture of an earlier frame, not the pixels the shader itself is currently writing. There is no usable feedback loop, so the simulation never advances to the next generation. It’s permanently stuck on the first one.
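In Godot 3-era syntax, the broken attempt looked roughly like this (a sketch from memory of the general pattern, not the original code):

```glsl
shader_type canvas_item;

void fragment() {
    // This samples the engine's capture of an *earlier* frame,
    // never the pixels this very shader is producing right now.
    vec4 prev = texture(SCREEN_TEXTURE, SCREEN_UV);
    // Any Game of Life rule applied to `prev` here just re-derives the
    // same state every frame, so the grid never leaves generation one.
    COLOR = prev;
}
```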
Back then, people on forums suggested using BackBufferCopy or a SubViewport. Unfortunately, none of that worked for me at the time, and most of it completely flew over my head—though the idea of using a SubViewport wasn’t bad at all.
Ping-Pong Buffering:
To get around these issues, I eventually discovered the concept of ping-pong buffering. Not only does it have a funny name, it also completely solved the problem of the automaton being stuck in its first generation. The core idea is pretty simple.
You use two SubViewports, each with the same ShaderMaterial attached; these shaders perform the actual simulation calculations. Each generation, one viewport acts as the source—holding the last finished generation—while the other acts as the target, sampling the source’s texture and writing the next generation into its own. A variable toggling between 0 and 1 tracks which viewport plays which role, and determines which one is currently displayed. Then the roles swap, back and forth, hence “ping-pong.”
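A minimal sketch of that setup, assuming Godot 4 (the uniform name `prev_gen` and the node names `ViewA`, `ViewB`, `Canvas`, and `Display` are my own placeholders). The shader on each viewport computes one CGOL step from the other viewport’s texture:

```glsl
shader_type canvas_item;

// The *other* viewport's texture, i.e. the previous generation.
uniform sampler2D prev_gen;
uniform vec2 grid_size; // grid resolution in cells

float cell(vec2 uv) {
    return step(0.5, texture(prev_gen, uv).r);
}

void fragment() {
    vec2 px = 1.0 / grid_size;
    float alive = cell(UV);
    float n = 0.0;
    for (int x = -1; x <= 1; x++) {
        for (int y = -1; y <= 1; y++) {
            if (x == 0 && y == 0) { continue; }
            n += cell(UV + vec2(float(x), float(y)) * px);
        }
    }
    // Conway's rules: a live cell survives with 2-3 neighbors,
    // a dead cell is born with exactly 3.
    float next = (alive > 0.5)
        ? ((n == 2.0 || n == 3.0) ? 1.0 : 0.0)
        : ((n == 3.0) ? 1.0 : 0.0);
    COLOR = vec4(vec3(next), 1.0);
}
```

And the toggling variable from above, driving the swap each frame:

```gdscript
var flip := 0

func _process(_delta: float) -> void:
    var src: SubViewport = $ViewA if flip == 0 else $ViewB
    var dst: SubViewport = $ViewB if flip == 0 else $ViewA
    # The target's shader reads the source's finished generation...
    var mat: ShaderMaterial = dst.get_node("Canvas").material
    mat.set_shader_parameter("prev_gen", src.get_texture())
    dst.render_target_update_mode = SubViewport.UPDATE_ONCE
    # ...and the on-screen sprite shows whichever viewport was just written.
    $Display.texture = dst.get_texture()
    flip = 1 - flip
```

Because each viewport only ever reads a texture that finished rendering last frame, the stale-read problem from the screen-reading approach disappears entirely.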
Summary:
If you’ve read this far, thank you for your time. I’d be very happy to answer any questions you might have about this topic, whether in the comments or anywhere else you might contact me.
Have a great day and an even better life!