
Badwrong Games

A member registered May 28, 2019

Recent community posts

Yes I saw your email as well.

To test, go into User Event 0 of the light_engine object, and on line 5 where it says "if (false) // Force GLSL ES for testing" change it to true and see if that changes anything.  My assumption is that the wrong shaders are being used when it runs on iOS.

Are there any other assets being used?

I remember something similar happened to someone, and they had another asset doing something that caused the same issue.

It might be the ambient occlusion value of your material.

I'll look at the fire object too.

Do you have any screenshot examples to show exactly what you mean?

The light attenuation model uses an easing function to soften the area where the falloff occurs.  You could edit the shaders to use another easing function.

For example, in the fragment shader of shd_light_ss_hlsl on line 189 the easing is done by:

float edge = sin((1.0 - smoothstep(focus, radius + 1.0, light_len)) * M_PI_2);

You could keep just:

float edge = smoothstep(focus, radius + 1.0, light_len);

That would remove the easing and might get what you want.
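For illustration, here is a sketch of a few alternatives in shader code.  The intermediate `t` and the `edge_*` names are mine, not part of Eclipse, and `M_PI_2` is assumed to be defined as in the original shader:

```glsl
// Each alternative maps the same 0..1 attenuation factor through a different curve.
float t = 1.0 - smoothstep(focus, radius + 1.0, light_len);

float edge_sine = sin(t * M_PI_2); // current behavior: eases out, softer falloff edge
float edge_none = t;               // linear in t, sharper transition
float edge_quad = t * t;           // ease-in: dims sooner, harder edge
```

Any similar monotonic 0-to-1 curve will work; it only changes the shape of the falloff band, not its position.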

The PBR shaders would work, but the shadow casting and geometry are designed for 2D planes.  It is also specific to an orthographic projection.

Your screenshot shows an FPS of 919 with an average of 1479.

Ah, that makes some sense then.  Integrated graphics are going to be extremely inconsistent depending on what rendering is being done.  It is weird that it all sits at surface reset, but how VRAM is handled when the graphics are integrated would affect that.

I use a lot of multiple render target (MRT) outputs in the shaders, and I bet integrated graphics do not handle that well at all.  So, yes, you'll want to figure out why it decides to use your integrated graphics.  If you do not use it at all, then I would suggest disabling it entirely in your BIOS settings.  Then GM will have no choice when selecting a device.

For example, I added 20 of those particle fire lights and ran the profiler:

The important ones to look at here are: 

Name                  Time (ms)
surface_reset_target  0.01
part_system_drawit    0.042

These are great metrics, and this is with VM of course.  So, it would be faster with YYC.

My computer is not a good benchmark though, because I built a new one a few months ago with an RTX 4090 and other high-end parts.  I have tested Eclipse on mobile, a Surface Pro (2014), and my old Linux machine/server.  I've never seen a problem with particles, so I'm curious what it looks like for you.

The particles rely on GM's built-in particle system, so it could be limited by how efficient they have made things internally.  Unfortunately, GM still has a very old graphics API implementation, and yes, you will find that lesser things cost more performance than they would in newer games like you mentioned.

To troubleshoot, please answer:

  • What is the actual time in milliseconds that you can see?
  • Have you run the same test compiled with YYC? (you'd need to output your own timing or FPS values)
  • What is your GPU utilization actually at when the FPS is that low?  (Task Manager > Performance > GPU)
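If you don't already have timing output, a rough GML sketch of the kind of measurement I mean (the variable names are just examples, not part of Eclipse):

```gml
/// Create event
frame_start = 0;
frame_ms    = 0;

/// Begin Step event
frame_start = get_timer(); // microseconds since game start

/// Draw GUI End event -- after all rendering for the frame
frame_ms = (get_timer() - frame_start) / 1000; // convert to milliseconds
draw_text(8, 8, string_format(frame_ms, 1, 3) + " ms / " + string(fps_real) + " fps");
```

That works the same under VM and YYC, so you can compare the two builds directly.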

surface_reset_target() is a pipeline change, but the fact that it shows up as the most costly thing is actually good, because it is not costly.

15% of the step is fine (but we really need to know milliseconds), and it is normal for rendering to cost the most performance.  The question is where is the other 85% going?  GM did recently change some things about particles, and there could be a lot of data going between CPU and GPU which would be slow if done poorly.

Yes that is possible.

Are you changing your camera at more than one point during the draw events? 

Do you have different origins on the two different tilemaps?

Do you have normal maps also assigned?

Ah, it won't be the same with sprite_index, because that sets the full sprite to the single value only for metal/roughness.  The actual sprite colors would come out really weird as metal/roughness.  It is possible to edit the shaders and use, say, grayscale values, but it will still be a bit odd I think.  The material packer that comes with it is your best bet for easily making materials for sprites/pixel art, as most workflows for materials are much more in-depth and require full tools.

If I understand what you are asking, you can do full-image metal/roughness just by setting those values and not using a material map sprite.  When the material map sprite is assigned to the same sprite_index (i.e., left as default), the shaders will draw metal/roughness using the current sprite_index with every pixel at the metal/roughness values you set.

Cool.  You may not really need a shader; just setting the global draw color might work.  Basically, the RGB values of the draw color would be the same as the intensity of the sun light.  If it looks good then there's no need to do much more; if you need more control then a shader might be needed.

Layer scripts are the easiest, and you can get the intensity value from the day/night object (make sure it exists or use "with").

There is no way in Eclipse to have a light only change a specific layer, object, material, etc.  Something like that would require a totally separate render pass.

My suggestion would be to use layer scripts on your background and set a simple shader that takes the day/night values such as intensity as uniforms to change the background.

Currently, if you set up the layers correctly you can have backgrounds that are totally unaffected by Eclipse (covered in the basic setup tutorial).  So, all you would need to do is set up your background like that and add layer scripts.
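A hedged sketch of that setup in GML; the object, layer, shader, and uniform names here are all placeholders, not Eclipse's API:

```gml
/// Layer script functions for the background layer
function bg_layer_begin() {
    // Read intensity from the day/night controller if it exists
    var _intensity = 1.0;
    if (instance_exists(obj_day_night)) _intensity = obj_day_night.intensity;

    shader_set(shd_background_tint);
    shader_set_uniform_f(shader_get_uniform(shd_background_tint, "u_intensity"), _intensity);
}

function bg_layer_end() {
    shader_reset();
}

/// Run once, e.g. in a controller's Create event
layer_script_begin("Background", bg_layer_begin);
layer_script_end("Background", bg_layer_end);
```

The begin/end pair means only that one layer is drawn with the tint shader, so the rest of the scene stays untouched.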

I just updated Eclipse with a fix for the tile culling that YYG added.  You can upgrade to the newest GM runtime.

I'll need to get the newest GM version and see what needs to be updated in Eclipse to match it then.

Without normal maps and materials there is information missing in how things are rendered.  This is a PBR lighting solution after all.

If you were to use an earlier version, anything from around March 2023 or earlier should work, since that is roughly when the last Eclipse update was.

I think I will release a fix soon so that the issue with the camera culling tiles is resolved.  I need to replicate it locally first, however.

YYC will not change anything with the graphics API.

It may be an issue with the tilemap culling.  YYG recently fixed something internally and now tilemaps are culled correctly according to the camera.  When the normal maps and materials made with tiles are drawn there is no camera applied.

Do you have normal maps and materials set for your tiles?

Does it happen if there is no shadow caster on them?

Use an emissive sprite to draw light without shadow.  A transparent or semi-transparent one would work well as a player light radius.

Mikk,

I just pushed an update with the new attenuation model.  It should now act more like you would have expected it to.

It's possible to make multiple particle objects and put them on different layers.  That would require keeping track of their IDs, of course.

Now that static variables are really easy to use I might try to add another setup I use for "depth" based particles.  Basically it's a static array of particle systems that gets cycled through and the depth changes.

It's just annoying that entire particle systems in GM are assigned a single depth or layer.
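The static-array idea could be sketched like this; `burst_at_depth` is a hypothetical helper, not something shipped with Eclipse:

```gml
#macro PSYS_POOL_SIZE 8

// Burst a particle type at a given depth, cycling through a static pool of
// systems so each call can sit at its own depth despite GM assigning one
// depth per particle system.
function burst_at_depth(_type, _x, _y, _count, _depth) {
    static _pool = undefined;
    static _next = 0;

    if (_pool == undefined) {
        _pool = array_create(PSYS_POOL_SIZE);
        for (var i = 0; i < PSYS_POOL_SIZE; i++) _pool[i] = part_system_create();
    }

    var _ps = _pool[_next];
    _next = (_next + 1) mod PSYS_POOL_SIZE;

    part_system_depth(_ps, _depth);                    // re-depth this system
    part_particles_create(_ps, _x, _y, _type, _count); // emit at the new depth
    return _ps;
}
```

The trade-off is that particles already alive in a recycled system jump to the new depth, so the pool needs to be large enough to outlast your particle lifetimes.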


Gotcha.  So, that macro packs alpha and the rotation value into one float.  Then in the shader it rotates the normal vector around the z axis according to the image angle of the instance.

Ah, I was looking through things, and it is the regular instance variable depth that needs to be changed along with the particle system.

I have it setup to use layers which work well for drawing over things.  

I'll modify things for the next update, and until then set the instance "depth" variable in the effects object as you would any instance.

Sounds good, and thanks for sharing.

Note that I use the depth buffer to sort the depth of everything when drawing the normal and material maps.  If not using the depth buffer things will simply draw in order, which may be ok depending on your own game.

One reason this type of drawing isn't supported directly by Eclipse is because it would be far simpler to just do it in full 3D.  For a normal map to display correctly after applying a transform such as draw_sprite_pos_fixed(), you need to use tangent space to create a TBN matrix.  The efficient way to use that matrix is to transform the actual light positions by its transpose.  Since Eclipse is more or less a 2D lighting solution (using some 3D maths), that entire process is skipped because the normal maps all sit on a plane facing upward toward the viewpoint.  It's cheaper and works well with how default GM drawing works.

So, doing full 3D transformed sprites would really be easier in a normal 3D lighting solution.

Some other things to note: when drawing normal maps I use the alpha channel to encode the emissive value of that pixel.  So, you would actually want to draw with 0 alpha where there should be no emissive.  That can be done for the entire normal map sprite, or per-pixel in the normal map itself.

That fire effect combines a light with the particle system.  So, you would set the light depth of the object, and the depth of the particle system.

If that asset uses a shader, then it would have to be completely merged with many of the Eclipse shaders, including the extra transform that its special shader uses.

The only simple method would be to draw it first to a surface then draw it normally from that surface with the eclipse stuff.  That also means getting the current shader, current MRT surfaces, etc.  Then after drawing it, restore those. 
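A rough GML sketch of that round-trip; the surface and size variables are placeholders, and a full version would also save and restore whatever MRT surfaces Eclipse has bound:

```gml
// Save state, then render the third-party effect into its own surface...
var _prev_shader = shader_current(); // -1 if no shader is set

if (!surface_exists(fx_surf)) fx_surf = surface_create(fx_w, fx_h);
surface_set_target(fx_surf);
draw_clear_alpha(c_black, 0);
// ...the other asset draws here with whatever shader it wants...
surface_reset_target();

// ...restore state, then draw the result like any normal sprite so
// Eclipse's own shaders and MRT outputs apply to it.
if (_prev_shader != -1) shader_set(_prev_shader);
draw_surface(fx_surf, x, y);
```

The extra surface costs a texture copy per frame, which is why this only makes sense as a fallback when merging shaders isn't practical.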

Try setting the "shadow_depth" variable on the particle system object.  It is normalized 0 - 1 as well. 

It uses both the instance "depth" variable and that.  One is just for the basic draw order (depth sorted) and the other is for how it is lit.  If that doesn't work let me know.

They would have to release it before it can be considered.  There is no point trying to bug fix and make it work when the feature isn't even completed.  In the future when they move to the new runtime, then it would be worth looking into and possibly work anyway since the newer graphics API is needed.

It's supposed to remove as much work as possible from the user.  Watch some of the tutorials to see what you think.  You still have to create the assets for it to work though.

Why would the shadow not cast "north" if the sun was in the "south"? 

The track that is above would be a shadow caster with a higher shadow depth than the track below it.  The sun would also have a depth, but a light depth, which would be higher than the top track's (or equal, since it uses >=).

Sun is light depth 1.0

Top track is shadow depth 0.9

Bottom track is shadow depth 0.8

The shadow would then cast correctly.  The only thing you can't really do is have it cast as if the track below has any slope to it since there is no 3D model to work with, i.e., it has a constant shadow depth in a plane.

The top track would be on a different layer than the bottom track, and cars would be on layers right above each.  At the right places on the track those cars would switch layers as well.  They should switch their shadow depth to match the part of track they are on.
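That layer switch could be sketched like this in GML; the layer names and the `on_upper_track` flag are hypothetical, and the shadow depths match the example values above:

```gml
// In the car object, when it crosses a track transition point:
if (on_upper_track) {
    layer        = layer_get_id("CarsUpper"); // draw above the top track
    shadow_depth = 0.9;                       // lit/shadowed like the top track
} else {
    layer        = layer_get_id("CarsLower");
    shadow_depth = 0.8;                       // match the bottom track
}
```

Assigning the built-in `layer` variable moves the instance between layers, so the draw order and the lighting depth change together.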

The blocker thing would also just be a shadow caster, then the car shadow would be covered by it.

Well that blocker object would need to also be a shadow caster.  We don't have full 3D geometry to work with here.

If the thing blocking the light didn't also cast a shadow, it would be odd, since it is tall enough to block one.

I think I get it.

You will want to set the light depth and shadow depth: 

The value is normalized from 0 to 1, which lets you set up your own values based on whatever depth you use for your layers/objects.  A value of 1 is closest to the top and 0 is the bottom.  So, you could lerp() between your maximum and minimum layer depth values for your lights and shadow casters.
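For example, a hedged sketch of that mapping in GML; the depth bounds are hypothetical and would be whatever your game actually uses:

```gml
#macro DEPTH_TOP    -1000 // your closest-to-camera depth (example value)
#macro DEPTH_BOTTOM  1000 // your furthest-back depth (example value)

// Map an instance's GM depth into the normalized 0..1 range (1 = top, 0 = bottom).
// In GM, smaller depth values draw on top, hence the inversion at the end.
var _t = clamp((depth - DEPTH_TOP) / (DEPTH_BOTTOM - DEPTH_TOP), 0, 1);
shadow_depth = 1 - _t;
```

The same expression works for light depth; only the variable you assign changes.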

I'm not sure what you mean by that.  The shadows cast according to the light source position, and the light type determines how it's lit.

Using the light depth and shadow depth you can affect some things like that, or another shadow caster blocking.

You'll have to explain in more detail, since there is no way to select a single shadow like that.  It's similar to a 3D game's shadows.

Thank you very much for the support.


My guess with the error is you need to include event_inherited(); in the create event of your object.

Eclipse is a PBR lighting solution, so if the plan was just basic lighting then it would be hard to justify its use.

At the moment I am creating a layer and depth based lighting engine which does not use PBR materials and is specifically designed so that lights are created on layers or by depth just like normal instances.  I can surely give you a copy when it is a little further along.

What sort of light settings are you using?  Also, the light color matters: you could use smaller values and the light will have less radiance.  For example, on the light you could use set_color(make_color_rgb(80, 80, 80));

When you dragged the image file into the clip window, did you drop it in the bottom right?  After that drag the same image file into the material window in the albedo slot.  That's the base color for your sprite or material and in the packer it is just there to preview things.  You can tick the Lit box to see a sort of preview using Eclipse and then play with values.  Once you have that figured out you can go to places like https://freepbr.com/ and just download materials that look like they would create what you need and just use the normal map.  It will of course be a textured look and not exact to the sprites pixels.  If you want that you can generate them as I think you already found the site, or there are other programs that generate much nicer normal maps.