
Thank you! The performance of the effect is one thing, but the performance of the whole scene / engine is yet another to-be-optimized issue. I just realized that I didn't follow one of the most important rules: keep the surface count low. For instance, 7000 grass meshes (2 quads, crossed) that are made with CopyEntity and placed and oriented around the camera via a contingent are still 7000 surfaces, a huge impact.

I just optimized it here from 20 fps to 30 fps on my little card, simply by kicking the contingent system out and creating 70,000 grass meshes plus 10x10 dummy meshes that are distributed evenly over the area. Then I AddMesh the grass to the dummies, depending on their location. This way the grass is split up into 100 segments, each one containing only 3 surfaces, because there are 3 different brushes / grass types. DirectX does that: when you AddMesh things together, it optimizes the brush count by reusing already existing identical brushes and adds the geometry to the corresponding surface, keeping the surface count low. The camera range can then easily exclude a lot of the grass sectors. It really speeds things up.

I'll do the same with the trees and bushes and get rid of the LOD system. Funny, it all started with the LOD. Trying to get better grass and testing some alternative ground, I'll add a screenie.
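For what it's worth, a minimal sketch of that sector batching in Blitz3D, assuming a `grassType()` array already holds the 3 source tuft meshes and the area size / counts from above (all names and sizes here are made up for illustration):

```blitzbasic
; Sketch of the sector batching described above (names/sizes are assumptions).
; A 10x10 grid of empty dummy meshes covers the area; each grass tuft is
; merged into the dummy of its sector with AddMesh, so each sector ends up
; with only a few surfaces (one per brush / grass type).

Const AREA#   = 1000.0          ; world size covered by grass (assumed)
Const SECTORS = 10              ; 10 x 10 grid
Const CELL#   = AREA / SECTORS

Dim sector(SECTORS - 1, SECTORS - 1)
For i = 0 To SECTORS - 1
	For j = 0 To SECTORS - 1
		sector(i, j) = CreateMesh()
	Next
Next

; grassType(0..2) is assumed to hold the 3 tuft meshes (2 crossed quads each)
For n = 1 To 70000
	x# = Rnd(0, AREA)
	z# = Rnd(0, AREA)
	tuft = CopyMesh(grassType(Rand(0, 2)))
	RotateMesh tuft, 0, Rnd(360), 0     ; vary orientation
	PositionMesh tuft, x, 0, z          ; bake position into the vertices
	i = Floor(x / CELL) : If i >= SECTORS Then i = SECTORS - 1
	j = Floor(z / CELL) : If j >= SECTORS Then j = SECTORS - 1
	AddMesh tuft, sector(i, j)          ; identical brushes get merged
	FreeEntity tuft
Next
```

Since each `sector(i, j)` is a single entity, a normal CameraRange (or manual HideEntity by distance) then culls whole 100x100 patches at once.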

Whether render-to-texture is faster than CopyRect I don't know, but in theory, when you render to the texture, you can skip the CopyRect part; you have to render anyway. So I guess yes, it might be faster, even though CopyRect 256x256 to a 256-flagged texture took only 1 or 2 ms here. I guess optimizing the scene as mentioned has a bigger impact, especially since the scene is rendered twice.
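The CopyRect path mentioned above looks roughly like this in Blitz3D (camera and resolution names are assumptions; flag 256 keeps the texture in VRAM, which is what makes the buffer copy fast):

```blitzbasic
; Sketch: render the scene small, then copy the backbuffer into a
; 256x256 texture created with flag 256 (VRAM storage, for fast CopyRect).

tex = CreateTexture(256, 256, 256)      ; flag 256 = store in VRAM
CameraViewport cam, 0, 0, 256, 256      ; render only the region we copy
RenderWorld
CopyRect 0, 0, 256, 256, 0, 0, BackBuffer(), TextureBuffer(tex)
CameraViewport cam, 0, 0, GraphicsWidth(), GraphicsHeight()
```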


That's awesome! 😊 Looking forward to the update 👍

I also did a side-by-side comparison with the FastExt version and noticed one major thing: the FastExt rays can be set not to reach (or simply do not reach) the whole scene.

Click here for screenshot 

Perhaps this can be one way to improve the framerate and performance. Also, the rays are more defined without washing out the scene.

...and how about just making the rays based on the light position?

You can just do it the same way FastExt does, using CameraProject and TFormVector.
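A minimal sketch of the CameraProject part, assuming `cam` and `sun` entities (the ray-quad handling is left as a comment, since that depends on the effect):

```blitzbasic
; Sketch: find where the sun lands on screen, so the ray overlay can be
; anchored to the light position instead of the camera.

CameraProject cam, EntityX(sun), EntityY(sun), EntityZ(sun)
sx# = ProjectedX()
sy# = ProjectedY()
If ProjectedZ() > 0 Then                ; light is in front of the camera
	; position / scale the ray quad around sx, sy here
EndIf
```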


It's been a decade since I purchased the FastExt lib; I didn't even remember they had this. But yes, the alignment of the effect should still be fixed, it's not logical right now. If the effect is fixed to an object like the sun, you can lower the polycount and texture size of the ray mesh. But somehow I like the screen filter approach, which fixes the filter to the camera. It just really needs a fix that adjusts the angle and position, so certain camera motions don't make it look illogical. Right now I am more concerned about optimizing the render time; fixing the effect angle etc. I'm saving as the easy part, for dessert.

Also, a small texture size causes flickering rays because of marching pixels (and the kind of pre-mipmapping in the DDS texture has an impact: masked textures flicker more with the sharp "nearest neighbor" method). A blur of that render might fix it, but it's costly; yet another challenge.


Great stuff! 👌


Wow I just used the word "fix" 6 times ^^

Side question, on another effect: do you think EMBM is possible and usable without using Tom Speed's DX7 DLL?

I have no idea. I don't even know what Speed's DX7 DLL is. That said, displacement mapping is the only kind of quasi bump mapping that I really like. B3D has only dot3 / normal mapping. I sometimes use masked textures to "bump" a texture, but it only works when Y-offset by just a tiny bit, and then it's barely noticeable.

Currently working on better bushes etc. for Relic Hunter, pls give some feedback. The longer I work on it, the less I can tell whether it actually looks better in general. The old ones had like 16 triangles; they couldn't be shaded without revealing the cheating. The new ones (basically just a mushroom-aligned bunch of 4-sided, bottomless cones) give more detail and depth, yet they lack the kind of self-illumination you would expect... When you look at these things for too long, it's like a painter dropping out of creative mode and becoming 100% technical.
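Guessing at the construction described (stacked 4-sided cones without base caps, merged into one mesh), something like this sketch; the counts, scales and offsets are invented here:

```blitzbasic
; Sketch of the bush build described above: a few 4-sided, bottomless cones
; stacked mushroom-style and merged into a single mesh via AddMesh.

bush = CreateMesh()
For k = 0 To 3
	cone = CreateCone(4, False)               ; 4 segments, no base cap
	ScaleMesh cone, 1.0 - k * 0.2, 0.6, 1.0 - k * 0.2
	RotateMesh cone, 0, k * 45, 0             ; stagger the quads
	PositionMesh cone, 0, k * 0.4, 0
	AddMesh cone, bush
	FreeEntity cone
Next
```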

It's more about the texture distortion or perturbation effect for EMBM, but that is another topic to discuss.

I'd be interested to see the new optimization methods you mentioned above, but for overall feedback, I'd say performance is normal considering there are still alpha textures involved.

btw, any new z-sorting code in Relic Hunter? or same old tried and tested work? 😉 haven't checked that part yet..

All the grass, trees and bushes are masked mode. Only a few things are alpha, like the fake shadows (see latest screenies), but they stick to the ground, so there is barely any z-fighting. Also, those grass islands under a grass mesh are masked too and slightly, uniquely elevated: to prevent z-fights, they are elevated depending on their x/y position (the fractional part, divided by 20 or so).
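The deterministic lift described above could look like this (a sketch assuming an `island` entity on flat ground, with x/z as the ground plane and `groundY` made up):

```blitzbasic
; Sketch: raise each grass island by a tiny amount derived from the
; fractional part of its position, so overlapping masked quads never
; share the exact same height and cannot z-fight.

x#    = EntityX(island)
z#    = EntityZ(island)
frac# = (x - Floor(x)) + (z - Floor(z))       ; 0.0 .. < 2.0, unique per spot
PositionEntity island, x, groundY# + frac / 20.0, z
```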

So there's no z-sorting by me. I did z-sort alpha in the past, by repositioning the meshes in order of distance to the camera, which works, but as resolutions increase and I learn more about good masking, it seems kind of unnecessary.

I had the Intel DDS export plugin for Photoshop years ago, which allowed making masked textures that could fade in as smoothly as alpha ones. Now I use Paint.NET; still good DDS export options, but not as good as the ones of the other plugin.

Anyway, in case you didn't know, this is the most important thing when working with masked textures (DDS, PNG or TGA): duplicate the actual layer, then blur the one below (Gaussian, radius 4 or so), then set its alpha to about 3%. This helps DX fade the outline to the correct color (otherwise it fades to black).