(+1)

Your answers are great; I'm grateful for them no matter how long they are.

Your test on the timers clarified the situation. I'm actually using them for things like jumping (interpolating between different gravities) and as an optimization feature (checking a condition every 0.2 seconds, for example).

This already alerts me to another project where musical synchronization is important; I need to create my own timers fed by the framerate to correct possible desynchronizations. I was entrusting this function to the standard timers.

About Debug mode, I still need to take a better look at it; for now, I use it only to read logs (to correct the flow of actions). It was a good reminder.

My strategy for the spikes was to select the closest one (this one I had to deactivate the prices, because it was generating bugs) and turn them into tiles (they were animated sprites).

Does the function of selecting objects by the condition "the distance between objects is less than" have a big impact on performance compared to your selection box? Anyway, your strategy looks good and I'll try to apply it. I wonder if there's a better optimization than this: maybe sectorizing the objects on screen one step before checking the imaginary box. For example, creating 4 variations of the spike object, each added to a specific corner of the screen. Then, instead of checking "The X position of Spike >= Player.X() – 32" for all the spikes every so often, check only those in a specific corner (quarter of the screen), according to the player's position. But maybe it's too much work for little performance gain.

We're having a great conversation here, but it's okay if you can't continue it :)

(I wish I could contribute some useful knowledge for you too, but I don't have the knowledge for that, sorry haha)

(+1)

I'd like to clarify that, although my programming is very frame-based, I do use TimeDelta() to mitigate the effects of lag (I don't have time to explain how I do that right now), and I think that it can be a very effective way to program games. I do not have a personal vendetta against the Timer system, though I do wish its Frame-independent nature was more clearly explained.

Your use of timers sounds consistent with how features behaved at lower framerates: the gravity felt faster than the jump.

  • "This already alerts me to another project where musical synchronization is important, I need to create my own timers fed by the framerate to correct possible desynchronizations. I was entrusting this function to the standard timers."

This isn't hard advice, as I've never done music synchronization, and, from what I hear, it is very difficult! But I would probably use TimeDelta() instead of the framerate for synchronizing something to music, which plays according to time and not frames. I'm not sure if I would use the built-in Timer system (which I am very unfamiliar with), but I would use TimeDelta() in conjunction with variables for timing.
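To illustrate the variable-plus-TimeDelta() idea, here's a minimal sketch in TypeScript-style code rather than GDevelop events (BPM, beatTimer, and onBeat are illustrative names, not engine APIs):

    // A per-frame update that accumulates real elapsed time into a beat
    // timer. "timeDelta" stands in for GDevelop's TimeDelta() expression.
    const BPM = 120;
    const secondsPerBeat = 60 / BPM; // 0.5s per beat at 120 BPM

    let beatTimer = 0;
    let beatIndex = 0;

    function onBeat(index: number): void {
      console.log(`beat ${index}`); // trigger the synchronized event here
    }

    function update(timeDelta: number): void {
      beatTimer += timeDelta;
      while (beatTimer >= secondsPerBeat) {
        beatTimer -= secondsPerBeat; // keep the remainder; don't reset to 0
        onBeat(beatIndex++);
      }
    }

Subtracting secondsPerBeat instead of resetting the timer to zero carries the leftover time forward, which is what keeps the beat from drifting over a long session.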

  • "My strategy for the spikes was to select the closest one (this one I had to deactivate the prices, because it was generating bugs) and turn them into tiles (they were animated sprites)."

Could you clarify what "deactivate the prices" refers to?

Otherwise, that sounds reasonable. While "closest" isn't the most performant (as I'll talk about later), it shouldn't be particularly heavy. It also seems like tiles have better performance.

The sectorization is an interesting hypothesis. I don't think it would be necessary for a project like this, and it might create some weird edge cases (what if the player is in the Top-Left quadrant and needs to collide with a Spike on the edge of the Top-Right quadrant?). Additionally, trying to program four different kinds of spikes sounds a bit stressful. However, keeping in mind the use of unique objects to reduce load is important.

  • "Does the function of selecting objects by the condition "the distance between objects is less than" does it have a big impact on performance compared to your selection box?"

Yes.

I want to preface that I don't have intimate knowledge of how these conditions work; I mostly just do testing within the engine to come to my conclusions.

"[Object A] distance to [Object B] is below X pixels" is a very good condition. It uses a distance calculation (which probably involves square roots) between the Origin of the first and second objects. Basically, it's an "imaginary circle". It's especially useful for "area of effect" type processes. However, while this calculation is very useful for logic reasons, it is not performant.

Since I wasn't entirely sure about this, I decided to make a test project.

There's one player and some spikes. The player can be moved with WASD. Inside the frame, I have 67 spikes organized semi-randomly, with a grid of Spikes close to where the player starts. Then, at the beginning of the scene, I create 50,000 spikes at -128;-128 to increase the load.


There are four different methods for picking and checking for collisions between Player and Spike: (-1) No Cull, (0) Box Method, (1) Nearest To, (2) Distance Below. I can switch between them by modifying "Mode" with the 0-3 keys. Here's what the code looks like (minus a couple of group headers, for readability):
[screenshot: the event sheet for the four methods]
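As a rough illustration of the Box Method cull (a hypothetical TypeScript sketch, not the actual event sheet; the 64px margin and the names are made up):

    interface Obj { x: number; y: number; }

    // Only objects inside an imaginary box around the player get the full
    // collision check. Four cheap comparisons per object, no square roots.
    function insideBox(player: Obj, spike: Obj, margin: number): boolean {
      return spike.x >= player.x - margin && spike.x <= player.x + margin &&
             spike.y >= player.y - margin && spike.y <= player.y + margin;
    }

    function spikesToTest(player: Obj, spikes: Obj[], margin = 64): Obj[] {
      return spikes.filter(s => insideBox(player, s, margin));
    }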
(Note: The profiler is not an exact sort of thing, especially since it measures the time it takes my computer to process things, which is different on every machine. There's also a lot of fluctuation in the time it takes to do stuff. I've mitigated this a bit by creating 50,000 spikes to exaggerate the differences between the methods.)

So, using the four methods and 50,067 spikes, I measured each method with the profiler for 600 frames.


Method           Time
No Cull          10.96 ms
Box Method        1.83 ms
Nearest To        5.45 ms
Distance Below   10.35 ms

(Note: I only included the time it took to process these specific events. I didn't include the massive amounts of time it took the game to process and render the objects. The actual full process time came out to about 50-70 ms to run one frame. 50,000 objects is a tad too many!)

I was very surprised that "Distance Below" is almost identical to not culling at all. I speculate that the collision algorithm already uses a similar distance calculation to "Distance Below". Again, "Distance Below" is a good logical check, but it does not improve performance.

"Nearest To" is a very useful condition. and it seems to perform quite nicely. For example, I used "Nearest To" when deciding which Lightbulb in "Logic Bulb Adventure" should be interactable, since there should be only one of those at any time. It's a distance calculation, so it's essentially "imaginary lines", where the object picked has the shortest line.

But it can be problematic for collisions with multiple objects in close proximity. Also, if the positions being compared aren't in the "correct" place, it can produce edge cases. In the test project, the origins of the Spike and Player are both at 0;0 (top-left), which produces situations where the player is clearly overlapping with a spike, but the "Nearest" spike is not that one.
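A hypothetical sketch of such a pick (TypeScript, illustrative only), which makes the origin problem visible: the comparison is origin-to-origin, so an overlapping spike can lose to a non-overlapping one whose origin happens to sit closer:

    interface Point { x: number; y: number; }

    // Picks the object whose origin is closest to the player's origin.
    function nearest(player: Point, spikes: Point[]): Point | undefined {
      let best: Point | undefined;
      let bestDistSq = Infinity;
      for (const s of spikes) {
        const dx = s.x - player.x;
        const dy = s.y - player.y;
        const distSq = dx * dx + dy * dy; // squared distance is enough to rank
        if (distSq < bestDistSq) {
          bestDistSq = distSq;
          best = s;
        }
      }
      return best;
    }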

Of course, this could all be irrelevant to the performance in Visions of Ethereal; it might be something completely unrelated! You'll have to run your own tests to find out where the heaviest loads in the code are.

(1 edit) (+1)

These tests shed a lot of light on the conversation.

I understand that timers, TimeDelta(), and frame counts each have their moments to shine, depending on the context.

Talking about the game with music sync: I'll still try it, though it won't be a game with very high rhythmic precision. At the moment everything is tightly connected to the timers (with BPM calculation and other details), so to avoid a big rework, I'm going to use frame count or TimeDelta() as a second layer of confirmation for the events. It's something I'll think about later.

"Could you clarify what "deactivate the prices" refers to?"

Haha, I'm sorry! A small mistake I missed when reviewing the translation tool's output. The correct sentence would be:

"My strategy for the spikes was to select the closest one (this one I had to deactivate in a hurry [near the end of the jam], because it was generating bugs) and turn them into tiles (they were animated sprites)."

And now, about: "I was very surprised that "Distance Below" is almost identical to not culling at all. I speculate that the collision algorithm already uses a similar distance calculation to "Distance Below""

I was as surprised as you were, and I share the hypothesis you raised. Anyway, this test project was awesome, and now I'll be a follower of box method supremacy! haha

In the end, each strategy works well in specific cases.

Now, after doing some testing on the project...

First, I decided to do the jump smoothing using frame count instead of timers. The result started to look nice, but I still had timing problems; when reviewing, I remembered that the jump sustain time is measured in seconds (i.e., the platformer behavior has at least one timer running inside it; the solution is to turn it off and do manual air control by frames).

However, I was a bit apprehensive: I started printing the frame count on screen while the jump button was held, and it was taking more than a second to reach 60.

I improved some events using the debugger tool (very nice tool!), but found nothing particularly heavy; rendering and pre-events were still the biggest performance drains, but again, nothing serious.

So I decided to print other information, including FPS (using the extension), which sat constantly at 37 frames, and that's when things started to get even weirder...

Not understanding what could be draining so much performance, I did something extreme: I turned off all events and deleted almost all objects from the scene, leaving only the character and the ground. The result:
[screenshot]
As if everything wasn't confusing enough, I changed the FPS cap from 60 to 0 (unlimited) and at that point, my game went to 69~76 FPS in a regular scene.



I went back to 60fps, went to a blank scene with just the value-printing events copied over, and the result?
[screenshot]
* "Scene Real Time" is the scene time counter.

** "Scene Frame Time" is a count of every frame/60.

I'm going to take a break from testing now and come back to it later, but I don't know if there's anything else to be done. As soon as I had the first case of slowness a few days ago, I cleaned up all the Windows background processes, leaving only the OS essentials, and it didn't solve it. One detail: other games run normally here, even some 3D ones that demand more from the GPU.

The mystery continues...

Thanks for another great reply :)

(+1)

.....

What is your monitor's refresh rate?

For reference: web browsers use V-Sync by default. Electron, the Chromium-based runtime that GDevelop uses to make desktop builds (and, by extension, previews), uses V-Sync by default, because it is built on the same principles as a web browser. Basically, GDevelop games have V-Sync by default and by necessity. I have my own opinions on this fact, but it is what it is.

This is also why it's hard to use FPS to determine performance.

(+1)

I have 3 monitors: two are 1080p 60hz, and the biggest one is 1440p 75hz.

To give a bit of a report: this performance issue started suddenly for me within a single Windows session, and normal performance never came back.

I've tried changing some properties like minimum FPS, the rendering type/filter for the pixels, and other minor things, and nothing delivers the expected FPS.

(+1)

Just to be clear, is it running at the same speed on the 60hz monitor as on the 75hz monitor?

(2 edits) (+1)

Yes, the game runs at the same speed regardless of where I place the window or start running the application. However, your new message made me dig deeper: I disabled all the other displays and ran the game again on each one. Result: both 60hz monitors ran the application at 60 frames; the 75hz monitor ran it at 37.5 frames.

I forced the 75hz monitor to 60hz and, again, the framerate was stable. In the same setup, I let the game run with an unlimited maximum frame rate, but again, it was stable at 60fps. In that case, is it the counter that doesn't work, or does the game really not go above 60fps?

EDIT: Additionally, with the 3 monitors active and the game at 60fps max, I set monitor 1 to 50hz (1080p, not the main one, just port 1). I ran the game from monitor 2 (1440p, 60hz, main) and the game ran at 50fps. In this case, another monitor influenced the total frame count.

This story gets more intriguing, but at least my case of lag is "resolved". Perhaps a background Nvidia update was what started the problem over here? (Even at 75hz, the game used to run normally.)

This also doesn't solve your lag case or other people's, but I certainly have a point to keep an eye on and ask about now.

I wasn't prepared to think so far out of the box; maybe the next step would be to use a multimeter to measure the alternating current of the residential electrical network? haha

(2 edits) (+1)

I should probably elaborate on my earlier point about VSync.

A lot of software uses VSync, or "Vertical Sync". It tries to prevent "screen tearing", a phenomenon where frames rendering out of sync with the monitor's refresh rate result in halves of two different frames being displayed at the same time. It does so by limiting the software's ability to process frames to either the refresh rate or a half-divisor of the refresh rate.

For example, in software with VSync, if a monitor has a refresh rate of 60hz, it will always output 60hz, 30hz, 15hz, etc. Even if the software could output 72fps or 120fps, it will only process, at most, 60fps. And if a monitor has a refresh rate of 75hz, it will always output 75hz, 37.5hz, etc.

Basically all web browsers use VSync by default. Electron, the runtime that enables GDevelop to output desktop builds and previews, is based on Chromium, which is essentially a web browser, so it, too, uses VSync. I'm not sure at present whether it's possible to change this, but it's what produces the strange behavior.

[EDIT: Just saw your 50fps edit, added it to table]

Max FPS      Monitor Refresh Rate   Final FPS
60           60hz                   60
60           75hz                   (75 / 2) == 37.5
Unlimited    60hz                   60
Unlimited    75hz                   75
60           50hz                   50

The reason it cuts in half to 37.5fps at a refresh rate of 75hz is that, since the Max FPS is 60, it goes down to the nearest half-divisor (75 / 2).
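One way to model this (an assumption that happens to fit every row of the table above, not a documented formula): under VSync the display only presents on refresh ticks, so the effective rate is the refresh rate divided by the smallest whole number that brings it under the cap.

    // Hypothetical model of VSync quantization; "effectiveFps" is an
    // illustrative name, not an Electron or GDevelop API.
    function effectiveFps(refreshHz: number, maxFps: number): number {
      if (maxFps <= 0) return refreshHz; // "unlimited" just syncs to refresh
      const divisor = Math.ceil(refreshHz / maxFps);
      return refreshHz / divisor;
    }

    console.log(effectiveFps(60, 60)); // 60
    console.log(effectiveFps(75, 60)); // 37.5 == 75 / 2
    console.log(effectiveFps(60, 0));  // 60 (unlimited)
    console.log(effectiveFps(75, 0));  // 75
    console.log(effectiveFps(50, 60)); // 50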

While it might seem quite stupid (and I'm not convinced that it isn't), this is also part of the reality of building HTML5 applications for web browsers, where the standard 60hz monitor is essentially king. Even if you did change the behavior of Electron to make it not use VSync, people's web browsers are still going to use VSync, and most people have 60hz monitors.

I hope this doesn't sound patronizing; it's a very complex topic.

(+1)

I knew the concept of V-sync superficially, but I didn't know that it only works in fractions (/2). It makes sense when you think about the concept, and it makes me understand the benefit and complexity of Free/G-Sync.

In the medium term, the way this game behaves on the Web won't matter so much, since the final version will have to be compiled to an executable to be put on Steam.

However, if I manage to get a publisher for the consoles, a port of the game would probably be done with a Unity browser running the HTML version of the game.

Anyway, maybe the V-sync is still there in the executable. Would working with max frames at 0 (unlimited) be bad practice?

The monitor you used to test the game here at itch, is it 60hz?

(+1)
  • "The monitor you used to test the game here at itch, is it 60hz?"

Yes, my monitor is 60hz. As far as I am able to tell, 60hz is the general industry standard for computer monitors. While many high-end gaming hobbyists have a fondness for the fidelity of 75hz, 120hz, and 144hz monitors, the vast majority of software users and game players use monitors or TVs with a refresh rate of 60hz, which has been the NTSC industry standard for decades.

For context: the 60hz standard existed before video games or home computers; it was originally the American standard for AC (alternating current) electric grid frequency. CRT monitors used this as their refresh rate because it was easy to synchronize.

  • "Anyway, maybe the V-sync is still in the executable. Would working with max frame 0 (unlimited) be a bad practice?"

If the game's coding is going to be mostly or entirely Frame-based, it is probably a bad practice to export with an unlimited max frame rate. If the game's coding is going to be mostly or entirely Time-based, it will matter a lot less.

With Frame-based coding, non-standard refresh rate users are always going to experience problems until they set their monitors to 60hz, but I personally think limiting the frame rate is the correct course of action in this scenario. If the frame rate is unlimited, this can result in insanely high speeds (a game designed for 60fps running at 144fps, for example). If the frame rate is limited, this will result in slowdown, with one exception: 120hz monitors will experience a perfect 60fps. While this is probably a matter of preference, I think a bit of slowdown is preferable to a theoretically large speed-up.
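To make the speed-up concrete, here's an illustrative TypeScript sketch of the difference (names and numbers are made up, not from either project):

    // Why frame-based code speeds up at high refresh rates.
    const SPEED = 240; // intended speed in pixels per second (at 60fps)

    // Frame-based: moves 4px every frame, so real speed scales with FPS.
    //   60fps  -> 240 px/s (as designed)
    //   144fps -> 576 px/s (2.4x too fast)
    function updateFrameBased(player: { x: number }): void {
      player.x += SPEED / 60;
    }

    // Time-based: moves according to elapsed time, so speed is FPS-independent.
    function updateTimeBased(player: { x: number }, timeDelta: number): void {
      player.x += SPEED * timeDelta;
    }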

There are no great solutions to this problem. I think a lot of this comes from mismanaged hype on the part of manufacturers of high-end computer monitors. Additionally, this is mostly a result of GDevelop's "Web-First" design philosophy, and while I would personally like it if there was native support for disabling VSync in desktop distributions, I do not currently expect them to devote resources to that, especially while they're still developing the experimental 3D engine.