(+7)

I am...pretty negative on AI. I think my main objection to it is really just an objection to capitalism: artists and coders need money to live, and AI is a way to devalue their work and let tech billionaires siphon up even more money, and turn the Internet into even more of a slop heap than it already was in the process.

It's grotesque when corporations with actual budgets use it. Maybe it's the future and maybe it's not, but the future's shaping up to be a dystopian nightmare, and AI has been just one more step down that road. Here on itch, anyone selling AI assets and trying to make a buck off them directly is just trashy. Honestly I think itch should just ban the practice, or require all AI assets to be free.

A solo dev using AI...I personally find it distasteful, but I can recognize that it's more complicated. It's hard to make a game, and if the things you're passionate about--the gameplay, the characters, the story you want to tell--don't include the art or physically getting it to run as a game, I can see how it's tempting to offload that work to AI and focus on the parts that make your eyes light up. I'm not sure I have a good answer for this. Human creativity is still precious, and if AI is the difference between a game being made at all and a game never existing outside of your head...I don't know, I don't want to judge someone for that too harshly.

As for the cover images: both the original and the new covers read as AI to me. The original is more subtle and it honestly does have a personality that I'd like, if it wasn't generated. The new one...I took one glance and immediately went "Yeah a human didn't make this." It's got this uncanny tried-to-be-pixel-art-but-doesn't-know-what-pixel-art-is look. Was it a stock image you purchased?

(+3)

Thank you so much for your thoughtful response!!!

I completely agree with you when it comes to the use of AI by large corporations—it often feels like a way to cut costs at the expense of real artists and professionals who deserve recognition and fair pay. That’s definitely something we should push back against.

At the same time, for people like me, who aren’t skilled at drawing or painting, generative AI can be a helpful tool—especially for things like placeholder art, concept design, or, in my case, the original cover. It allowed me to visualize something I wouldn’t have been able to create myself.

That said, I totally understand the concerns, and that’s actually part of why I decided to replace the AI-generated cover. 

By the way, the new one is based on a real photo my father took—funny enough, it's from the real village where the game takes place!

Really BEAUTIFUL place :)

Thanks again for taking the time to share your perspective. I really appreciate it!!!


(4 edits) (+4)

AI is a tool and a tool is no more evil than the person who wields it.

Every major invention is followed by upheaval until society adapts to its new circumstances. How many scribes lost their jobs when the printing press was invented? How many laborers were laid off when the industrial revolution came along? How many people in the horse industry fell into irrelevancy after the invention of cars and locomotives? Yet all these inventions benefited humanity in the long run.

Self-driving cars and AI are just the newest batch of inventions that society complains about, because people are hyper-focused on the short-term consequences and not the greater benefits they could bring. I know nobody likes being told their education, skills, and career are becoming irrelevant... If you want to stay competitive, you have to provide something AI can't, or do it better than AI does.

Most industries already have little regard for their employees and customers... Cutting corners to maximize profits has been their modus operandi for a while now, even before AI. They have grown corrupt, immoral, and unsustainable, all in the name of short-term profits and goals. We're partially to blame too, because despite our complaints we still buy from them or use their services.

(+1)

"No more evil than the person who wields it" I would like to add that AI is FAR more than the sum of its inputs. From what I know, it's totally capable of jumping to conclusions and self-modifying out any safeguards. That makes it capable of much, much more evil than any tool I know of. True, you can use a knife without cutting yourself if you're careful, but what if the knife was self-aware and decided that you were redundant?

To use a phrase from an old cartoon, "I wouldn't touch you with a 99 1/2 foot pole."

Besides that, I don't believe that machines are capable of their own reasoning. Take that how you will.

Totally agree on the sad state of industry and politics, but AI isn't the problem there, just a sign.

(+1)

You might have been misled by the term "AI". There is nothing intelligent nor self-aware about a large language model.

LLMs are essentially this — https://en.wikipedia.org/wiki/Markov_chain — with a lot of sophistication, an immensely large model, and added dimensions. The "AI" we are talking about needs a prompt. It will then generate a probable answer for that prompt. And that "probable" includes pseudo-randomness.
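To make the comparison concrete, here is a toy word-level Markov chain text generator. This is a minimal sketch for illustration only — real LLMs are vastly more sophisticated (neural networks over long contexts, not word-pair tables) — but the basic loop is the same: given what came before, sample a probable next token.

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words that follow it in the text."""
    words = text.split()
    chain = defaultdict(list)
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def generate(chain, start, length, seed=None):
    """Walk the chain, sampling each next word at random from the followers."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break  # dead end: this word never had a follower in the training text
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran"
chain = build_chain(corpus)
print(generate(chain, "the", 6, seed=1))
```

Note that the output is locally plausible (every adjacent word pair occurred in the training text) without the model "understanding" anything — which is exactly the point being made above.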

Since there is chance involved, your results can have random errors. Or systematic errors, if the training data is faulty or biased. Bias is also its own category of problems. Or the output will just not really fit the prompt, it will only look like it might. I have even seen factually wrong answers from AI for trivial things. For pictures, the most common error is the number of fingers, just like in the picture in the OP. The gloved hand has only 4. The character has no teeth either. Or look at the numbers on the clock. From afar they look like they might be Roman numerals, but they are wrong.

The AI we talk about is not the sum of / more than the sum of its input (training data). It is a condensed version, boiled down, its essence. One could even say a very good model would be like a super-high-efficiency lossy compression algorithm. One needs only a good prompt and a random seed number to almost recreate nearly all of the training inputs, plus a lot more outputs that are similar to the inputs in principle and reachable by random variations.
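The "prompt + seed" point is worth illustrating: generative models are pseudo-random, so the same prompt with the same seed reproduces the same output exactly, while a different seed gives a different variation. This toy stand-in (not a real model, just seeded randomness over a few fake "pixels") shows the mechanism:

```python
import random

def sample(prompt, seed):
    """Toy stand-in for a generative model: same prompt + seed -> same output."""
    rng = random.Random(f"{prompt}|{seed}")
    # pretend these four numbers are pixels or tokens
    return [rng.randint(0, 255) for _ in range(4)]

a = sample("a village at dusk", seed=42)
b = sample("a village at dusk", seed=42)
c = sample("a village at dusk", seed=43)
print(a == b)  # -> True: same prompt and seed reproduce the output exactly
print(a == c)  # almost certainly False: a different seed gives a variation
```

This is why image generators like Stable Diffusion expose the seed to the user: it is the only source of variation once the prompt and settings are fixed.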

Dang, I've programmed NPCs that are smarter than that. You keep using your algorithm. As a game dev you might do better to build your own though.

import random

total = 0                      # the "globalStorageVar"
while True:                    # forever
    i = int(input())           # get input (I)
    r = random.randint(1, 10)  # pick a random number 1-10, store it (R)
    t = i * r                  # I * R = T
    print(t)                   # print (T)
    total += t                 # accumulate T into global storage

(JK this code is still totally useless, 98% comedy and now 100% Python)

I replied to this statement below. It made you sound like you have seen one too many Terminator movies and other science fiction with bad robots and applied it to the generative "AI" thingies. You can debate ethics of generative AI and call it evil tools, but your argument of how it is evil because it can do certain stuff is based on a false premise.

From what I know, it's totally capable of jumping to conclusions and self-modifying out any safeguards. That makes it capable of much, much more evil than any tool I know of. True, you can use a knife without cutting yourself if you're careful, but what if the knife was self-aware and decided that you were redundant?
(1 edit)

Call me crazy if you like, but I'm not drawing on science fiction for any of this. I had a really intense conversation with someone who firmly and fully believes in the capabilities and benefits of AI, and that conversation is where most of my reference comes from. Self-modifying code is super unstable in my opinion; if it can change without any moderation, then there are no limits. Murphy's law: what can go wrong WILL go wrong. Even if there ARE limits, then you know the developers just make it show you targeted ads and tell you to buy their software. Who wouldn't? It's job security.

Maybe the knife analogy was overkill, so just forget that. I really wanted to make a point and got carried away.

Terminator was actually a really bad movie in my opinion. Also note that I said "From what I know." If you know differently feel free to correct me, but don't insult my intelligence. Take everything in this discussion board with a grain of salt.

The AI systems we are talking about are not self-modifying code. They are LLMs: large language models. There is a model that is a condensed essence of the training data, and that so-called large language model is capable of creating a response to a prompt. The same applies to images. Obtaining the training data and using the output of such systems is criticized on ethical grounds, hence the "evil" aspect.

But maybe you were talking about self driving cars.

The Genetic Arms Race | How CRISPR and AI Destroy the World

The Quantum Apocalypse: All Your Secrets Revealed

Artificial Intelligence Out of Control: The Apocalypse is Here | How AI and ChatGPT End Humanity

Why is it not just called a database then?

(1 edit) (+1)

If you guys wanna talk about the dangers of technology, please do so in another topic. Whatever your point is, it is off-topic.

This thread is about usage of the output of large language models (falsely named AI) in game development. That means images from applications like Stable Diffusion or text from applications like ChatGPT. Things like that. And not general "AI" and "quantum".

You can run Stable Diffusion on your own computer to see what it does, what it can do, and what it cannot do. Try it. The results are impressive, and it will teach you more about these models than videos that are commercially made to bait clicks and show advertisements.