That's a cool idea. I've never heard of an autostereogram like that, but I'm certainly willing to try it. The only problem I can think of is that edges of objects might look more aliased.

Also really interested in adding color, but I'm not sure what you mean by alternating between both images. 

The stereogram consists essentially of 2 images constructed of noise that has been shifted. Distribute the RGB colour across each image and across frames (across space and time). So say you want a pixel to be purple... the left and right corresponding 2x5 pixel groups might be a collection of red and blue (#FF0000 and #0000FF) in a random distribution across the screen and across sequential frames...
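Something like this per dot, per frame, maybe (a rough sketch in C, just the colour-picking part; none of these names are from the actual project):

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

/* For a dot that should read as purple, emit pure red on some
   pixels/frames and pure blue on others; averaged across space
   and time, the eye blends them back toward purple. */
static uint32_t purple_dot(void) {
    return (rand() & 1) ? 0xFF0000u    /* red,  #FF0000 */
                        : 0x0000FFu;   /* blue, #0000FF */
}

int main(void) {
    for (int i = 0; i < 8; i++)
        printf("#%06X\n", (unsigned)purple_dot());
    return 0;
}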

The big improvement will come if you can manage to implement the depth shifting, which I feel goes hand in hand with the pixel shifting... this will give you full screen resolution and effectively infinite depth, and it negates aliasing.

Might sound hard to believe, but I had this very idea about 12 months ago, so I've had some time to think about it. We were going to use idtech4 BFG and integrate other C++ physics simulators to develop and train A.I., but our principal programmer had real-life issues so we never got around to really starting. I hope you appreciate what you are about to unleash... turning every 2D screen into a 3D puppet show. The games will be one thing... the applications in real-time medical imaging... well, let's just say you're a pioneer, sir, and it's a real honor to greet you...

Pardon me, I should not be so presumptuous... perhaps a ma'am.

While I appreciate the praise, most of the work was done by the guys who wrote the paper, and the guy who wrote DoomLoader. I just mashed the two together. :)

And thanks for all the suggestions; I'm definitely going to look into them. Maybe I should start a discussion board for this...

Can you give us a brief description of your "mashing"? I'm gathering from other comments that you averaged? and rounded? rendering to 16? depths... can you shift these depth locations on each sequential frame? That would be a major step.

So DoomLoader basically ported DOOM to Unity. What I'm doing is running DoomLoader in the background (so it's still a 3D game, technically), but I overlay a UI texture across the screen. I have a script that takes the 3D camera feed, turns it into a render texture (just an array of the pixels that the camera would normally display to the player), applies the algorithm from the paper, and then feeds that to the UI texture as the end display.

The key is that the algorithm needs a depth map. So I made a couple of shaders that almost all the objects are now using, which ignore the DOOM textures and instead render every object in greyscale according to its distance from the player. So underneath the stereogram, the game is just playing, and every frame looks like a depth map. I'll post an image of that below, with a high "render distance" setting.
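In C rather than shader code, the mapping is roughly this (not the actual shader, and I'm glossing over which end is white):

#include <stdio.h>
#include <stdint.h>

/* Map a pixel's distance from the camera to an 8-bit grey value,
   clamped at the render distance. The real thing is a Unity shader;
   this is only the mapping it performs. */
static uint8_t depth_to_grey(float distance, float render_distance) {
    float z = distance / render_distance;   /* 0 = at camera, 1 = far */
    if (z < 0.0f) z = 0.0f;
    if (z > 1.0f) z = 1.0f;
    return (uint8_t)(z * 255.0f + 0.5f);    /* near = black, far = white */
}

int main(void) {
    printf("%d %d %d\n", depth_to_grey(10.0f, 100.0f),
                         depth_to_grey(50.0f, 100.0f),
                         depth_to_grey(500.0f, 100.0f));
    return 0;
}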


The rounding that I've mentioned is in reference to the paper, specifically line 27 if you look at the code in Figure 3 here. The equation is s = round((1-mu*z) * E/(2-mu*z)), which will make more sense if you read the rest.
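For anyone following along, here's that equation as C, with a small driver to print a few values (the macros follow the paper; the DPI is just an assumption for illustration):

#include <stdio.h>

#define round(X)      (int)((X)+0.5)
#define DPI           72                /* assumed; use your screen's real DPI */
#define E             round(2.5*DPI)    /* eye separation in pixels */
#define mu            (1.0/3.0)         /* depth of field */
#define separation(Z) round((1-mu*(Z))*E/(2-mu*(Z)))

int main(void) {
    /* z = 0 is the far plane, z = 1 the near plane */
    for (double z = 0.0; z <= 1.0001; z += 0.25)
        printf("z = %.2f -> separation = %d px\n", z, separation(z));
    return 0;
}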


Okay, I understand, so it converts to 4-bit greyscale, thus 16 depths... so it needs to convert to 8-bit, thus 256 depths... but grab every 16th layer on one frame, then shift forward one and grab every 16th+1 depth on the next, 16th+2 on the next, and so on. That seems relatively simple... that means each of the 256 depths would be displayed about 4 times every second... that sounds perfect... could implement Fibonacci later, but keep it simple for now, I suppose.

It's not quite that simple. I'm already using full 32-bit textures, so 8 bits for the grey. Greyness corresponds to z in the algorithm (z is a value between 0 and 1, representing distance from the player). The problem isn't the granularity of the z values, but the fact that two different z values will often round to the same separation value in the algorithm.
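To see it concretely: sweep z through all 256 grey levels and count how many distinct separations come out (same assumed DPI as above; with these numbers you land on roughly 20 layers):

#include <stdio.h>

#define round(X)      (int)((X)+0.5)
#define DPI           72                /* assumed for illustration */
#define E             round(2.5*DPI)
#define mu            (1.0/3.0)
#define separation(Z) round((1-mu*(Z))*E/(2-mu*(Z)))

int main(void) {
    /* separation() is monotonic in z, so counting changes counts
       the distinct values: 256 grey levels collapse to far fewer */
    int distinct = 0, prev = -1;
    for (int g = 0; g < 256; g++) {
        int s = separation(g / 255.0);
        if (s != prev) { distinct++; prev = s; }
    }
    printf("256 z levels -> %d distinct separations\n", distinct);
    return 0;
}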


so among other things this needs tweaking?

#define separation(Z) round((1-mu*Z)*E/(2-mu*Z))

I'm sorry, I'm not much of a programmer myself.

No, sorry, it is this that needs tweaking:

#define round(X) (int)((X)+0.5)

This is just a standard way to round to the nearest int, which needs to be done because you can't have a separation of, say, 60.5 pixels; pixels come in wholes.
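For example:

#include <stdio.h>
#define round(X) (int)((X)+0.5)

int main(void) {
    /* 60.4 -> 60, 60.5 -> 61: separations snap to whole pixels */
    printf("%d %d\n", round(60.4), round(60.5));
    return 0;
}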

So the only things I can change that would affect the layers are DPI and mu (depth of field). There are combinations of these that give a better spread of separations, but they often end up messing up the stereogram.

Okay, it's making more sense now. I can see you have an option for render distance... I take it this needs to be a specific value? Can you dynamically cycle these 3 values +, ++, +++, +, ++, +++ on consecutive frames? It would go some way towards doing it.


Okay, I was wrong... there appear to be about 20 layers... int + 0.5...

Can you change that 0.5 value to a cycling value on each consecutive frame?

It needs to be a dynamic value.
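To make that concrete, something like this maybe (a rough sketch; the offset schedule is arbitrary, and whether the eye would fuse the per-frame flicker into in-between depths is untested):

#include <stdio.h>

#define DPI 72                  /* assumed for illustration */
#define E   (int)((2.5*DPI)+0.5)
#define mu  (1.0/3.0)

/* Same separation formula, but the rounding offset is a parameter
   that can be cycled per frame instead of being fixed at 0.5. */
static int separation_dithered(double z, double offset) {
    return (int)((1 - mu*z) * E / (2 - mu*z) + offset);
}

int main(void) {
    const double offsets[] = { 0.25, 0.50, 0.75 };
    const double z = 0.40;      /* one sample depth */
    for (int frame = 0; frame < 6; frame++) {
        int s = separation_dithered(z, offsets[frame % 3]);
        printf("frame %d: separation = %d px\n", frame, s);
    }
    return 0;
}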

Fibonacci would be easy to introduce... just cut out the furthest 23 depths and deal with 233 depths. By using Fibonacci ratios the rendered scene will not "pulse", although that could look cool, I suppose, for a gaming environment.