And is this process reversible, since you can just un-distort the noise when you zoom out? How consistent is the addition and subtraction of the noise, given that there's slight variation every time you zoom in and out?
The distortion is not destructive in itself; I'm just gradually modulating the base image with Perlin noise, so there's no real adding and subtracting, just interpolating.
There are inconsistencies, though, as you noticed: they come from the moments where I store the current state and use it as a new base image. I should have offset the Perlin noise based on the exact zoom position where the snapshot happened, but I didn't. This makes the distortion vary depending on the exact position of the previous snapshot.
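A minimal sketch of what I mean, in Python. This isn't the actual code from the piece: `noise2d` is a cheap hash-based stand-in for real Perlin noise, and `zoom_offset` is the hypothetical parameter that would fix the snapshot inconsistency by shifting the noise field to match where the snapshot was taken:

```python
import math

def noise2d(x, y, seed=0):
    # Cheap hash-based value noise, standing in for Perlin noise.
    # Returns a pseudo-random value in [0, 1) for each (x, y).
    n = math.sin(x * 12.9898 + y * 78.233 + seed * 37.719) * 43758.5453
    return n - math.floor(n)

def distort(base, amount, zoom_offset=(0.0, 0.0)):
    """Modulate a grayscale image (list of rows) with a noise field.

    The distortion only interpolates sample positions, so it is
    non-destructive: amount=0 returns the base image unchanged.
    zoom_offset shifts the noise field; passing the exact zoom
    position of the previous snapshot would keep successive
    snapshots consistent with each other.
    """
    h, w = len(base), len(base[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            # Two independent noise channels drive x/y displacement.
            nx = noise2d(x + zoom_offset[0], y + zoom_offset[1]) - 0.5
            ny = noise2d(x + zoom_offset[0], y + zoom_offset[1], seed=1) - 0.5
            # Sample the base image at the displaced position,
            # clamped to the image bounds.
            sx = min(w - 1, max(0, int(round(x + amount * nx))))
            sy = min(h - 1, max(0, int(round(y + amount * ny))))
            row.append(base[sy][sx])
        out.append(row)
    return out
```

With `amount=0` the output is identical to the input, which is the sense in which the distortion is reversible; the inconsistency in the piece comes from restarting this process on a snapshot without carrying the offset forward.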
I keep coming back to this page every once in a while. I've tried to implement this about six times and never quite got where I wanted to. Would it be possible for you to share any source code or give a bit more detail? I'm very interested in procedural generation and fractals and would love to finally know the answer to this burning question. Thanks!