
Dstoop

5 Posts · 1 Topic · 1 Following
A member registered Oct 23, 2018

Recent community posts

Pic2Jelly community · Created a new topic Layer Depth

Hello JITD, I am glad to hear more updates are coming soon. 
It is still not clear how we're supposed to add layer depth to an image without losing visibility.

Let me be clearer: when adding different images for separate layers, the layers do not actually interact; if you select a new layer, the program treats them as separate images. If we're to make more complex works, we need to be able to limit a layer's transparency so that the moving parts on one layer don't interact with another layer.

Example:
https://imgur.com/a/BUd8jG0

Say I cut the left girl's and the right girl's hair out of the picture completely. I'd like to animate just their hair, so I open it up on a second layer, and animate the breasts of the top girl on the background layer.

But when I go to create the image, it only recognizes the background image.

It doesn't render all of the layers. Why is that the case?

Basically, a lot of this boils down to:

Please give us more tools to determine and judge the 3d nature of the objects we're working with.

Like, we need to know what color does what. What does dark green mean? What does red mean? We have a general idea from experimenting, but we need an exact answer to know exactly what we're working with. Please tell us what all of the colors mean.

I don't understand why you didn't just add an opacity modifier for the layers so that users don't have to load in a transparent image.

I'm saying the program should do that by itself.

At best it's wasting resources. An average person is gonna try to use the second layer, then see that everything goes black.

They're gonna look for a way to adjust it, and there won't be any tool for that. Users have to guess, which is really confusing.
Are you saying that I need to go find a blank transparent PNG to work with? But even if I load that in, I still won't be able to see the layer underneath! Are you saying I need to convert the image I'm working with to a transparent PNG? But that will cause all sorts of graphic errors!
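To sketch what I mean (just an illustration of the slider I'm asking for, not a claim about your internals — `composite_over` is a name I made up): a per-layer opacity control is ordinary "over" blending per pixel, so no transparent PNG should ever be needed.

```python
# Illustration of a per-layer opacity slider -- standard "over"
# blending, NOT how Pic2Jelly actually works internally.
# Pixel channels are floats in [0, 1]; `opacity` is the slider value.

def composite_over(top, bottom, opacity):
    """Blend one RGB pixel of the upper layer over the lower layer."""
    return tuple(opacity * t + (1.0 - opacity) * b
                 for t, b in zip(top, bottom))

# A white pixel at 50% opacity over a black background comes out grey:
print(composite_over((1.0, 1.0, 1.0), (0.0, 0.0, 0.0), 0.5))  # (0.5, 0.5, 0.5)
```

The point is that the blend needs nothing but a number between 0 and 1 per layer; the user never has to supply their own transparency image.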

Maybe you should show me an example or something.

I don't understand what you mean by alpha for the layers. I'm simply suggesting that you allow users to control the camera from all angles in 360 degrees so that it's easier to understand where and how to layer the objects more accurately to the image. Limiting us to 180 degrees of vision is needlessly restrictive.

Actually, now that I think about it, you haven't even given us 180 degrees of perspective; in reality, all we can see is maybe 60 degrees. Is there something stopping the program from being capable of 360-degree vision? If we could see from all angles, it would make things much easier.
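By 360-degree vision I mean something like an orbit camera. A rough sketch of the idea (the function name and numbers are made up, not your code):

```python
# Sketch of the orbiting viewpoint I'm describing -- a hypothetical
# helper, not Pic2Jelly's code. The picture sits at the origin and the
# camera moves on a circle of the given radius around it.
import math

def orbit_camera(yaw_deg, radius=5.0):
    """(x, y, z) position of a camera orbiting the picture's center."""
    yaw = math.radians(yaw_deg)
    return (radius * math.sin(yaw), 0.0, radius * math.cos(yaw))

print(orbit_camera(0))    # directly in front of the picture
print(orbit_camera(180))  # directly behind it
```

Sweeping the yaw through a full 360 degrees is all it would take to let us inspect an artifact from every side.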

What I'm talking about with the line on the artifacts is this: with the focus point in the center, if the line is pointing up at 90 degrees, it means the object you created is like a hill in the shape you made it, with the surface being the picture. You can see what I'm talking about if you simply turn the object inside out.

The picture is a flat surface, and the artifact is an elevated object that you're putting on it. Correct me if my understanding is wrong.

Now, if you point the line down, as in to 270 degrees, it seems as though you're making a hole in the picture, or 'pressing' downwards on it. I believe this may be where some of the artifact clashing could be coming from: objects being placed so that they interact at different layers of depth on the z-axis when they aren't meant to interact.
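If my reading is right, the line angle just sets the sign and size of the depth. My guess at the rule, written out (not your implementation — `z_displacement` is a name I invented):

```python
# My guess at the rule, not the program's actual code: the artifact's
# line angle decides whether the surface bulges out of the screen
# (positive) or presses into the picture (negative).
import math

def z_displacement(angle_deg, amplitude=1.0):
    """Positive = toward the viewer (a 'hill'), negative = into the picture."""
    return amplitude * math.sin(math.radians(angle_deg))

print(z_displacement(90))   # bulges out of the screen
print(z_displacement(270))  # presses into the picture
```

That would also explain the clashing: two artifacts whose angles put them at overlapping z ranges end up fighting over the same depth.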

Again, on the hips, as well as all the other tools without a focus center, it is difficult to use them without knowing what the depth is. It's confusing to be given a tool without being able to determine what depth it has.

Thank you for replying.

The other software that I found is Wallpaper Engine, and an app called 'AndWobble'.

Wallpaper Engine is a lot more complex than your program, but as I stated earlier, it does not allow for exporting the projects you make, so screen recording is the only option. The same goes for the other app I mentioned.

Wallpaper Engine can do much of what your program does, including extra particle effects and all sorts of lighting, but it does not operate with any depth or third dimension. Everything seems to be done in 2D, with very extensive layering options.


Now, I don't really understand why the layers in your program have to work the way they do. Why not simply let us lower the opacity to make the image transparent? It is very difficult to work with any depth this way. You're basically making the user load in their own opacity screen. Do they all have to be PNG images to be transparent?

Another thing: for the artifacts to operate, I recognize that they work in a 3D manner, and by playing with the scaler I can tell which direction the Z axis is going. But the way it's configured is obviously wonky to work with, and it leaves me wondering how I can tell which direction to point things with the line on all of the images. I figured out that you made it so that if the line is pointing up, the image points 'up' and 'out' towards the user, coming out of the screen. And if you reverse the line so that it is pointing down, it will dig into the image instead, in the opposite direction, going into the screen.

I find this extremely fascinating, and it makes me wonder if it would be possible to make more in-depth models this way, but you haven't given us a way to examine the artifacts from all angles yet. You need to implement a perspective tool that grants 360 degrees of movement around the picture. This would allow for much more accurate modeling and may even make full 3D models possible. Do you think the program is powerful enough to render that many artifacts at once?


As for the app I found named AndWobble: it has a very simple and similar application to Pic2Jelly, except again, I do not believe it operates with any depth or third-dimensional axis. However, somehow they've figured out a way to make the objects within an image clash with very little polygon meshing or funny business. I'm actually not completely convinced it isn't 3D yet, because of the vagueness of how they named a certain tool in the app.

As for the double hip direction: why isn't there any line indicator like the jiggle objects have? Are we supposed to just rely on the Z force multiplier? How do we calculate the angle without the marker? Does it not operate in 3D like the other tools?

Additional questions:

Is there any other way we can contact you besides this comment section? This feels so 2003, and with all the different communication platforms available, you would think you'd use them.

Give us an email or something. I messaged you on your old reddit account when you made the beta test from 4 months ago. 

I have big hopes for the project and I hope you respond soon.

Hey, I have a ton of questions.

I don't understand how to prevent artifact clashes, how your hip tool works, or how to layer without the whole screen going blank.

Discovering the program piqued my interest in many others like it, and I'm currently frustrated at the difficulty of using separate programs.

However, no other program with a similar application to this one allows for exporting images and video, which leaves me in a complicated situation.

I want to learn how to use your app better, so that I don't have to rely on the other apps for now. It would make life much easier.