Today was a deep dive into two critical fronts: refining the Combat UI and strengthening the testing pipeline.
On the combat side, I addressed several issues in turn flow, UI behavior, and ship logic, including a bug where only the defender was firing and the End Turn button did nothing. The combat log now displays, though it's still empty and its minimize function remains broken. The ship details panel also lacks cooldown indicators, which I've flagged for follow-up.
Testing revealed a deeper issue: Claude was reporting successful test runs even when GUT (Godot Unit Test) was throwing warnings and leaking memory in-editor. The culprit? Autoloads behave differently in headless testing versus the editor. That discovery forced a rethink of the workflow: for now, I'll be running tests manually.
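For reference, GUT does ship a command-line runner for headless execution. A minimal wrapper like the one below can make a run fail loudly instead of trusting a green-looking log; the test directory, flag set, and the "leaked" grep are assumptions for this project, and exit-code behavior can vary by GUT version, so treat this as a sketch rather than the exact setup.

```shell
#!/usr/bin/env sh
# Hypothetical CI wrapper: run GUT headless, capture the log, and fail
# on either a non-zero exit or a reported leak (leaks can surface as
# warnings rather than test failures).
GODOT="${GODOT:-godot}"    # path to the Godot 4 binary (assumption)
"$GODOT" --headless -s addons/gut/gut_cmdln.gd \
  -gdir=res://tests -ginclude_subdirs -gexit > gut_out.txt 2>&1
status=$?
if [ "$status" -ne 0 ] || grep -qi "leaked" gut_out.txt; then
  echo "TESTS FAILED"
  exit 1
fi
echo "TESTS PASSED"
```

The grep is the important part here: since the autoload discrepancy means a headless run can look clean while the editor leaks, scanning the captured log for leak reports catches what the exit code alone would miss.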
To address this, I created two new Claude agents.
Both were drafted with ChatGPT, then refined via Claude. While the test suite still needs cleanup — failing tests, memory leaks, and noisy logs — the pipeline is improving.
In parallel, I started building a Zig-based test harness to simulate full game runs via a TCP server embedded in Godot. This system will allow me to run batches of games and collect real gameplay data — offering better insights into balance, exploit detection, and rough edges that might be invisible during manual testing.
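The harness idea can be sketched end-to-end with a tiny JSON-lines protocol over TCP. Everything here is an assumption for illustration: the command names (`run_game`, `quit`), the message shape, and the stub server standing in for the real one embedded in Godot. The Zig client would follow the same pattern; Python keeps the sketch short.

```python
import json
import socket
import threading

def stub_godot_server(listener):
    """Stand-in for the TCP server embedded in Godot: accepts one
    connection and answers each JSON command with a fake result."""
    conn, _ = listener.accept()
    with conn, conn.makefile("r") as rf, conn.makefile("w") as wf:
        for line in rf:
            cmd = json.loads(line)
            # A real server would advance the simulation and report
            # gameplay stats; here we just acknowledge the command.
            wf.write(json.dumps({"ok": True, "cmd": cmd["cmd"]}) + "\n")
            wf.flush()
            if cmd["cmd"] == "quit":
                return

def run_batch(port, n_games):
    """Ask the server to play n_games and collect one result per game."""
    results = []
    with socket.create_connection(("127.0.0.1", port)) as sock:
        with sock.makefile("r") as rf, sock.makefile("w") as wf:
            for _ in range(n_games):
                wf.write(json.dumps({"cmd": "run_game"}) + "\n")
                wf.flush()
                results.append(json.loads(rf.readline()))
            wf.write(json.dumps({"cmd": "quit"}) + "\n")
            wf.flush()
    return results

listener = socket.socket()
listener.bind(("127.0.0.1", 0))   # let the OS pick a free port
listener.listen(1)
threading.Thread(target=stub_godot_server, args=(listener,), daemon=True).start()
batch = run_batch(listener.getsockname()[1], 3)
print(len(batch))  # → 3
```

One line of JSON per message keeps framing trivial on both ends, which matters when the server side lives inside a game loop and can't afford a complicated parser.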