Okay, I understand, so it converts to 4-bit greyscale, thus 16 depths. So it needs to convert that to 8-bit, thus 256 depths, but grab every 16th depth on one frame, then shift forward one level and grab every 16th + 1 depth on the next, every 16th + 2 on the one after, and so on. That seems relatively simple, and it means each of the 256 depths will be displayed about 4 times every second, which sounds perfect. Could implement the Fibonacci idea later, but keep it simple for now, I suppose.
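Something like this, maybe (a minimal sketch of that cycling idea, not anyone's actual code; the 8-bit depth buffer, the function name, and the "snap each depth to this frame's slice" reading of "grab every 16th depth" are all my assumptions):

```c
#include <stdint.h>
#include <stddef.h>

/* Quantise an 8-bit depth map to this frame's slice of 16 levels.
 * Slice k keeps levels k, k+16, k+32, ..., so over 16 frames every
 * one of the 256 levels gets a turn; at 60 fps that is 60/16 = 3.75
 * full cycles per second, i.e. the "about 4 times every second" above. */
void quantize_slice(const uint8_t *depth, uint8_t *out, size_t n, unsigned frame)
{
    unsigned offset = frame % 16;              /* which slice this frame uses */
    for (size_t i = 0; i < n; i++) {
        /* pick the slice value in the same 16-level band as the true depth */
        unsigned level = (depth[i] / 16u) * 16u + offset;
        out[i] = (uint8_t)(level > 255u ? 255u : level);
    }
}
```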
It's not quite that simple. I'm already using full 32-bit textures, so 8 bits for the grey. Greyness corresponds to z in the algorithm (z is a value between 0 and 1, representing distance from the player). The problem isn't the granularity of the z values, but the fact that two different z values will round to the same separation value in the algorithm.
This is just a standard way to round to the nearest int, which needs to be done because you can't have a separation of, say, 60.5 pixels; pixels come in wholes.
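For anyone following along, this is roughly the kind of rounding being described. A sketch that assumes the separation formula from the classic Thimbleby/Inglis/Witten SIRDS paper, separation(z) = (1 - mu*z) * E / (2 - mu*z) with E the eye separation in pixels; the DPI, eye-separation and mu values here are purely illustrative:

```c
#include <stdio.h>

#define DPI 72.0                     /* illustrative value                 */
#define E   (2.5 * DPI)              /* assumed eye separation in pixels   */
#define MU  (1.0 / 3.0)              /* depth of field, illustrative value */

/* separation in whole pixels for a depth z in [0, 1] */
static int separation(double z)
{
    double s = (1.0 - MU * z) * E / (2.0 - MU * z);
    return (int)(s + 0.5);           /* the round-to-nearest-int step      */
}

int main(void)
{
    /* two neighbouring 8-bit grey levels collapsing onto one separation */
    printf("%d %d\n", separation(128.0 / 255.0), separation(129.0 / 255.0));
    return 0;
}
```

With these numbers both calls print 82, which is exactly the collapse being described: two different z values, one separation.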
So the only things I can change that would affect the layers are the DPI and mu (the depth of field). There are combinations of these that give a better spread of separations, but they often end up messing up the stereogram.
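If it helps, one quick way to see that spread is to count how many distinct whole-pixel separations the 256 grey levels actually land on for a given DPI and mu (same assumed formula as the sketch above; the two DPI/mu pairs below are arbitrary examples):

```c
#include <stdio.h>

/* count the distinct whole-pixel separations produced by the 256 levels */
static int count_distinct(double dpi, double mu)
{
    double e = 2.5 * dpi;            /* assumed eye separation in pixels */
    int seen[1024] = {0};            /* big enough for any sane DPI here */
    int distinct = 0;
    for (int level = 0; level < 256; level++) {
        double z = level / 255.0;
        int sep = (int)((1.0 - mu * z) * e / (2.0 - mu * z) + 0.5);
        if (!seen[sep]) { seen[sep] = 1; distinct++; }
    }
    return distinct;
}

int main(void)
{
    printf("72 dpi, mu=1/3: %d distinct separations\n", count_distinct(72.0, 1.0 / 3.0));
    printf("96 dpi, mu=1/2: %d distinct separations\n", count_distinct(96.0, 0.5));
    return 0;
}
```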
You have obviously chosen the depth-of-field and DPI values carefully. If you can cycle through all the values that give a good separation, you may be able to approximate a 3D scene.
You could have a separation of half a pixel; it's just that the pixel takes on two values over two consecutive frames.
To this end you could have any fraction of a pixel, within the limits of frame rate, bit depth, and human perception.
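That reads to me like temporal dithering of the separation. A sketch of one way to do it (the function name and the 4-frame cycle are my own illustration, nothing from the actual renderer):

```c
/* Approximate a fractional separation by showing the two neighbouring
 * whole-pixel values in the right proportion over a 4-frame cycle, so
 * quarter-pixel resolution at best; a longer cycle gives finer steps. */
static int temporal_separation(double exact_sep, unsigned frame)
{
    int lo = (int)exact_sep;                              /* e.g. 60 for 60.5          */
    double frac = exact_sep - lo;                         /* fractional part, e.g. 0.5 */
    unsigned high_frames = (unsigned)(frac * 4.0 + 0.5);  /* frames drawn at lo + 1    */
    return (frame % 4) < high_frames ? lo + 1 : lo;
}
```

For 60.5 that gives 61, 61, 60, 60, 61, 61, ... which averages out to 60.5, and the cycle length is what ties the achievable fractions back to the frame-rate and perception limits you mention.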