
AI Generated vs AI Assisted: Where's the lines?

A topic by DrakeAR3 created 24 days ago Views: 476 Replies: 18

I'm quite new to Itch.io and I'm looking to create my first project page within a month or two, so I'm familiarizing myself with the rules and expectations now.
I find myself a little uncertain about where the AI content rules lie. Currently, I find ChatGPT to be rather helpful in two specific areas. I am autistic and struggle with the syntax of language. It took me a very long time to become decent in English, and I still struggle with syntax and articulation. Typing the lines of code is still super difficult even when I understand the pseudo-code logic.
I also find AI helpful for teaching me concepts, like "Hey, can you teach me the logic of this A* Pathfinding I heard about?" or "I really don't like having this branching IF block inside a loop that runs every frame, assessing the same conditions over and over. Can I just have the IF block run once on object creation, assign the outputs to variables, and then have the loop use those variables to optimize the code? I can? Great!"
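For what it's worth, that optimization (hoisting a branch out of a per-frame loop) can be sketched in a few lines of Python. The `Enemy` class and its fields here are made up purely for illustration, not taken from any engine:

```python
# Sketch of the optimization described above: evaluate the branch once at
# object creation, store the results, and let the per-frame update just
# read them instead of re-checking the same unchanging condition.

class Enemy:
    def __init__(self, difficulty: str):
        # The IF block runs exactly once, when the object is created.
        if difficulty == "hard":
            self.speed = 120
            self.damage = 15
        elif difficulty == "normal":
            self.speed = 80
            self.damage = 10
        else:
            self.speed = 50
            self.damage = 5

    def update(self, delta: float) -> float:
        # The per-frame loop only reads the precomputed values.
        return self.speed * delta


enemy = Enemy("hard")
print(enemy.update(0.016))  # distance moved in one ~60 FPS frame
```

If the condition can change at runtime, this trade-off no longer holds and the check has to stay in the loop, so it only works for values fixed at creation time.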
In regards to the community rules and expectations, where, if anywhere, is the distinction between asking AI to "do it for me" and asking it to assist me with learning concepts and proofreading code?

I personally think that there is absolutely nothing inherently bad about using AI for code as long as the result is not trash, which unfortunately AI still generates quite often. So I frequently use AI to generate a 'proof of concept', but I make it a rule never to copy-paste code from AI directly, instead using it as a reference to write the code myself. Not because of some prejudice, but because, due to the nature of LLMs, AI is exceptionally adept at generating 'almost correct' code, with subtle errors that make it very difficult to debug, especially if you did not take the time to understand it. FWIW, I do the same with all the code I get online, even from comparatively reliable sources like Stack Overflow.

The same thing goes for learning -- advanced AI models perform very well in a teacher role, with infinite patience and boundless erudition. However, they can easily and convincingly mislead you, simply because of a random fluctuation in one of a hundred billion neuron weights. So when I use AI to learn something, I chat with it to get a quick understanding, and then go look at the original source (in your example, the A* paper or a textbook on algorithms) to make sure there are no mistakes in my understanding.
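As a concrete example of the kind of thing worth double-checking against a textbook, a minimal, textbook-style A* on a 4-connected grid might look like this in Python. The grid and tuple representation is my own choice for the sketch, not from any particular source:

```python
import heapq

def astar(grid, start, goal):
    """Textbook A* on a 4-connected grid: grid[y][x] == 0 means walkable.
    Returns the path as a list of (x, y) cells, or None if unreachable."""
    def h(p):
        # Manhattan distance: admissible (never overestimates) on a
        # 4-connected grid with unit step cost.
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_heap = [(h(start), start)]  # priority queue keyed by f = g + h
    g = {start: 0}                   # cheapest known cost to each cell
    came_from = {}                   # predecessor links for path rebuild
    while open_heap:
        _, cur = heapq.heappop(open_heap)
        if cur == goal:
            path = [cur]
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) and grid[ny][nx] == 0:
                new_g = g[cur] + 1
                if new_g < g.get(nxt, float("inf")):
                    g[nxt] = new_g
                    came_from[nxt] = cur
                    heapq.heappush(open_heap, (new_g + h(nxt), nxt))
    return None
```

Comparing an AI's explanation of A* against something this small is a quick sanity check: if its version skips the heuristic admissibility point or the cost comparison, you know to dig further.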

I believe in a few more years of progress those concerns will diminish, but we are certainly not there yet.

Yes, I've noticed. I've had some very interesting debugging sessions already. I'm not going to claim to be some AI whisperer, but there do seem to be common themes to the mistakes it makes, and honestly, my goal is ultimately to learn, so asking the AI to elucidate the programming concepts behind its recommendations until my intuition kicks in has been worth the extra headache.
That said, I'm still using it for learning and for generating boilerplate that I then refactor to my liking, and I was asking where this community's opinions and standards tend to land, since it is becoming a bit of an issue. I see escalating AI misuse every day, but as an autistic person with a hugely different learning style from conventional education, it's been nearly life-changing for me. (And honestly, the claim that AI makes mistakes isn't really a deal breaker for me, because have you -seen- how often flesh-and-blood human teachers make mistakes and oversights too? At least the AI can parse my word salad reliably.)

The line between AI assisted and AI generated: for such a line to exist, those two would have to be on the same scale, and increasing something would make it switch over from one state to the other.

For walking and running that is speed and the line is when both feet leave the ground.

What is the something that changes for AI involvement? I guess it is decision making and sample size. But it depends on the context. If a photographer takes 1000 pictures and selects 1, did he create the photo? Or was it the camera and the objects that were photographed? Does an orchestra conductor create music?

Those two are generally considered art. But if you prompt a gen AI system to make 1000 pictures after your direction and select 1, it currently is not considered art, and it is debated if it counts as creation. And the selected image will be AI generated, even if you change something, because an image is a big sample size.

For code creation you have templates and textbook examples, discussion threads, and existing code to copy from. Was it you that made the code, if you copied it and modified it? If your sample size or your own decision making for the modification is sufficient to not violate copyright, yeah, you made it.

So, for code generation, I would put that line at the point where you let the AI do basically the game mechanics, instead of parts like algorithms and functions. Also, you do not ask AI to give you 1000 functions and choose one. If it works, it works; if it does not, you make it work.

And for image generation, that line is shifted. If you ask an AI to draw the hands on your stick figure or you have the AI make the stick figure and correct the hands, both those big samples are AI generated. 

As for the rules on Itch, if you do not have AI artworks and merely AI "assisted" code, but not large portions of AI code, I would just leave the disclosure blank for the time being, if possible. That disclosure feature has a lot of issues, both in the declaration and in how usable the feature is for players. I have yet to see someone lamenting how bad AI-coded games are, which makes me think that players do not care about the code. They care about story and artwork. But Itch lumps all "AI" together with its "no-ai" filter.

Thank you. Yes, I'm very involved in the coding process. Even when AI leads me down a wild rabbit trail while trying to debug one of its own problems, it's still really educational. Heck, I remember trying to debug collision between CharacterBody2D and Area2D nodes in Godot. AI had me using _on_area_entered(area) when I really needed _on_body_entered(body).
Before it was all over, I had print commands set up to spam the Output panel with the ID, collision layer, and mask layer of every bullet being generated. Absolutely everything was lined up, but hits weren't registering, until my gut told me that particular function might be the culprit. Looked it up, and sure enough, it's just a Godot quirk: the area_entered signal only fires for other Area2Ds, while physics bodies like CharacterBody2D come through body_entered. Learned so much that day.
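For anyone hitting the same wall, the layer/mask part of that debugging session boils down to a bitwise AND. Here is a rough Python model of the check, not Godot's actual API, just the logic: a node can detect another only if its collision mask shares at least one bit with the other node's collision layer.

```python
# Illustrative model of Godot-style collision layers and masks.
# A detector "sees" another node only when its mask overlaps the
# other node's layer. The variable names below are made up.

def can_detect(mask: int, layer: int) -> bool:
    # Detection requires at least one bit in common.
    return (mask & layer) != 0

bullet_layer = 0b0010   # bullet lives on layer 2
player_mask  = 0b0010   # player's hitbox scans layer 2
wall_mask    = 0b0001   # wall scans layer 1 only

print(can_detect(player_mask, bullet_layer))  # True: bit 2 overlaps
print(can_detect(wall_mask, bullet_layer))    # False: no bit in common
```

Printing each node's layer and mask as binary (e.g. `bin(mask)`) makes mismatches like this much easier to spot than decimal IDs.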

(+1)

The thing is, in coding you have library calls. This is not present in art. You do not paint a thing and instead of a hand put a reference to the hand-maker with the parameter of 5 fingers. Reusing existing things is kinda the whole point of programming.

But if you rehash art, people get angry, for various reasons. It does not matter if you do it by hand or with the extra step of using a large language model. There will also be a line between copying/rehashing and actually learning how to do a thing. If AI crosses that line, it will be hard to argue that it plagiarizes via its training data. It will depend only on the results. A human can also plagiarize. But a human looking at artworks will not be accused of rehashing those in memory to be able to reproduce those works.

As long as those systems put out the wrong number of fingers, they are quite on the other side of that proposed line. Or, in the case of code, your example with the wrong function. The training data probably had examples of those in a different context, but the AI did not grasp what you actually needed; it only gave you a thing that looks similar to the answer.

I hadn't thought of it that way. If recycling and refactoring are already such a crucial part of the coding process, then having AI generate code snippets and generic function templates instead of grabbing them from design bibles, sample projects, or code repositories isn't really that much different in the long run, is it?
Thanks! I'll get back to working on my project. I hope to have a playable tech demo soon.

For code snippets, yes. If you prompt for a whole match-3 game, that would be a bit different. But essentially, code you write is built on other code. You use Godot. The things you do with it would not be as easy without Godot existing. There are people doing certain things from scratch (in relative terms; they still use operating system calls and whatnot).

In coding there is also the concept of a hierarchy of languages: how abstract the code is from the machine code. The continuation of that concept is using natural language, where the pseudo code is the code. The prompt you use would be the code. But since these systems are trained on the usual abstract languages, that approach is flawed at the moment, as you saw with your example.

In the arts, there are not that many abstraction layers and "library" calls. There are some, especially when working digitally. And artists also have issues with techniques like "tracing", or "referencing", which is just a nice word for one artist copying how a thing looks by observing or tracing the outlines of other art. It is what you would do in coding by looking at examples.

Anyway, that is my opinion on why AI is not as frowned upon in coding as it is in "art".

(4 edits)

For normal-minded people (applicable to art and to programming):
-AI assisted: you used AI to generate something, then worked on it, or used it as a reference, or mixed generative and manual work in some amount, or you made things like the concept, style, and pose of the final generated image, so it involves a decent amount of conscious work and not purely random output. You may then even paint over it, change it manually, or repeat this whole process many times until you get the final thing.
-AI generated: you just select one or two options and push a button, and you get a random image.

For fanatics against generative art: everything is AI generated and crap.
For pure AI fanatics: these people barely exist, so this rarely comes up, but I have met people who disliked everything that was not purely generated and didn't want to even do postwork on an image.

(+2)

Vegan comes to mind. Some AI haters approach it with the same mindset: if the glue on the label of a bottle of water is not animal-free, the water is not vegan. And yes, there are non-vegan water bottles. And there are some lemonades that are filtered with gelatine.

But I think it is the same dilemma for human-assisted work. You have two people, where one is designated the assistant. How much assistance can the assistant give for the work to still count as merely assisted? When do the roles switch, so that who the assistant is exists only on paper?

I don't think what you are doing would count as "generated content." What you are doing is similar to copying a few code snippets from tutorials.

Now if you were having an LLM spit out artwork, audio, levels, and other assets that you simply plop into your game, then you would need the tag.

From https://itch.io/docs/creators/quality-guidelines#accurately-tag-your-use-of-gene...:

Generative AI refers to artificial intelligence systems that create new content (text, images, music) by learning from large datasets. This includes large language models like ChatGPT and image generation models like DALL-E, Midjourney, and Stable Diffusion that create new outputs based on training data. 

We ask that you accurately tag your project if it contains materials produced by generative AI by utilizing the AI Disclosure section on your project’s edit page.

They don't even mention code here, and it's not like you are generating pages of code anyway, so I think you should NOT use the tag.

(+1)

It is mentioned here, where no one reads. https://itch.io/t/4309690/generative-ai-disclosure-tagging

The disclosure has 4 buttons and the "no-ai" filter lumps all of them together.

You can specifically search for it positively. https://itch.io/games/tag-ai-generated-code

I wish for a "no-ai-content" filter and an "ai-generated-content" filter that would only group text, images, and music together, as it is currently written in the FAQ, since I think that would better reflect the preferences of users.

When you upload your game, it says it must be marked with the "AI generated" tag, literally "even if you hand edited it".

So even if you used a piece of code, it must have the tag. Or if you used an image as a reference, even if you worked on it too. Or a chord in a piece of music.

Because of that, there's a long discussion in the forum about whether there should be an option to set "AI assisted" instead of "AI generated". "AI generated" sounds like everything is made by AI only.

(+1)
So even if you used a piece of code, it must have the tag.

By that logic, most things would need that "tag". You will not find many premade libraries, engines, operating system calls, and so forth where you can certify that they do not contain a "made by AI, but modified by a human" line of code. By the same logic, most pieces of software are not even (fully) created by the developer, since they are bound to have copied some piece of them out of a tutorial and modified it.

An image is not a game. You ask the AI for an image and hand edit it. That is still an AI generated image. If you put the image in a game, it therefore has AI generated assets.

If you ask the AI for a game and then hand edit it, that would be the same. The game is AI made.

But images are not the same clusterdump of individual parts as code. You do not use the body of a stock super hero, put a custom-drawn head on it, ask the internet how hair curls look, adapt the tutorial pics onto your custom head, and call it "your" image. But, simplified, that is what coding is. If you use an engine like Godot or Unity, 99+% of the code is not by the developer. And the chances are high that those 99% already have some AI in them, or will in a future version. AI tools and assistance are becoming tools of the trade. Just like digital artists use a fill function to avoid coloring in the pixels by hand, and apply a shading gradient. Or how 3D render artists do not even draw at all, but use a model and texture and let the software render the image. I wonder when, or if, those fill functions in Photoshop will be seen as such.

AI generated sounds like all is made by AI only

Yup. The disclosure is quite unhelpful in the way it is now. If I look at Steam, the disclosure (that the user sees) is wordy. It describes what was actually done, so potential customers can make an informed decision, usually about how much they value the art/story/whatever that was described as being gen AI. And I do not even know if they ask for or disclose code "assistance". Depending on where you look, there are claims that around a third of all lines of code are allegedly written by AI currently. Whatever that means, as most code is repetition with variation. There are bound to be games among those, and I have yet to see a Steam game with a code disclosure.

Yes, sadly, as the AI disclosure is now, everything becomes "AI generated" if you used even a bit. That's exactly what is asked when you upload.

On Steam they let you explain exactly how AI is used. That works for people who are rational, modern, and live in 2025. But for the "vegan mindset", as you said, it's exactly the same: "AI generated". We could talk about "Photoshop generated", or "filter generated", or "IntelliSense generated", or a lot more, if we applied the same criteria.

On the other side, many who use AI, especially in art, are not honest, and tend to say "I drew it" or "I painted it" when they actually didn't at all, not even partially.

One can notice it very easily: AI tends to produce standardized outputs, and when people don't create their own styles or concepts, or just publish the raw output, you find:
- In images, they all look like they have the same author (this happens without AI too, actually), as if they were all made by the same person. No personal style at all.

- In code, it makes very conventional apps and games.

- In sound and music, it makes strange stops and incoherence.

And so on. People can actually notice when something is the fruit of a personal style and intervention, and when it's not. Those who can't will be able to notice in a few years.

But the people who don't want any AI at all will never accept it. And the tag is for those people, so they can keep their activity isolated from games made with AI, especially regarding art.

I use AI in my art, with a lot of customization and manual intervention, but I marked the tag anyway, as I honestly don't want that kind of public either. For an extended explanation they can go to my site and read the About, or just play the game, watch it, etc.

I recommend that any person who uses AI use the tag. They will save themselves a lot of headaches. Those people will live in their world, isolated from the major tendency, for years to come. Also, if you don't make manga it's easier, because that is the only style most of them like. Some like more things, but the majority doesn't. This is probably because the first generators focused on manga (Waifu Diffusion).

When, in one or two years, everything is made using AI the way we use Photoshop or Blender, everyone will be OK with it.

The best vaccine against fearing AI is to use it. You quickly see the limitations. The crap it produces is massive. Then you start to see the good results it can produce if you intervene, or mix it with other things: mix coding with your brain, generative art with drawing and painting, composing with small parts of music (I suppose). Then you see that a human is always needed, and the fear stops, unless you want to produce massive amounts of crap (as sadly many do, but I think that's because it's something new).

(+1)

I see it pragmatically. Most of the "AI" games on Itch would not even exist if not for AI. Similar to game engines. Single developers usually cannot program a game from scratch, or have the budget to commission art. Being able to do both art and code is rare. Engines helped with the coding. Stock images, things like Blender, and other tools helped with the art. And lately, AI-generated images. Someone familiar with generating code has an easier time figuring out how to operate an AI system. It is not as easy as some people think. I like to compare it to photography. A photographer can't draw and only clicks a button, to the lay person. Then suddenly everyone has a digital camera in their pocket, and everyone's a photographer.

Attacking AI over the origin of the training data is short-sighted. The outcome will not go away. It will be improved, and probably legally and morally cleansed. Or there will be a procedural gen AI. Anyhow, there will still be a way for lay persons to create images "by clicking a button".

For games and bigger studios, they do have a budget. They had better use a professional to create their content. There I have little tolerance for AI usage. They are not lay persons. They are not hobby devs trying to make pocket money or become professionals.

And I think many people feel the same, judging by the popularity of games. Exactly because AI is so recognisable when a lay person uses it, you spot images and text right away.

But if I were to see a disclosure that says "we did some boring subroutines with AI helpers in the code", meh, who cares. If they say they wrote the plot with AI... that is a different beast. If it is not an experimental game or some such, I would rather not read that.

But currently there is no distinction between AI assisted or generated, so all this is moot. And people would probably try to stretch any definition to their advantage.

Agreed with all you said.

(+1)

Honestly, for my "tutorial project" it's more educational than anything. Once I've done something a few times, like setting up a new npc.tscn, I can do it on my own; I only needed ChatGPT to hold my hand the first one or two times. I'm declaring signals, connecting them, and reasoning my way through node interactions more or less on my own.

But like a textbook, each and every time I turn the metaphorical page and see something new I may need to be walked through the process once or twice.

And funny thing, I've noticed ChatGPT can actually catch several of its mistakes if prompted differently.

AI: So, you wanna do $this, var this, and _that().
Me: That function call looks suspicious. Are you sure that's the right one for doing [pseudo-code step here]?
AI: Oh yeah, good catch. It's documented that it isn't. Try using _this() instead of _that().
And then that one super specific thing works on the first try in test play.

Yes, that's how they behave. Microsoft Copilot does the same. Just by using them, you notice it's like working with someone who makes mistakes: they have access to a lot of information, but they are just a help, not what people think, and they don't replace anyone.