
You might have been misled by the term "AI". There is nothing intelligent or self-aware about a large language model.

LLMs are essentially this, https://en.wikipedia.org/wiki/Markov_chain, with a lot of sophistication, an immensely huge model, and added dimensions. The "AI" we are talking about needs a prompt. It will then generate a probable answer for that prompt, and that "probable" includes pseudo-randomness.
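To make the Markov-chain comparison concrete, here is a toy word-level chain. The corpus and function names are made up for illustration; a real LLM works on tokens with a neural network instead of a lookup table, but the "prompt in, probable continuation out, with randomness" shape is the same:

```python
import random
from collections import defaultdict

# Tiny made-up "training data" for the toy model.
corpus = "the cat sat on the mat the cat ate the fish".split()

# "Training": record which word follows which.
transitions = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    transitions[a].append(b)

def generate(prompt, length=5, seed=None):
    # The prompt picks the starting state; randomness picks among
    # the probable continuations, like sampling from an LLM.
    rng = random.Random(seed)
    word, out = prompt, [prompt]
    for _ in range(length):
        followers = transitions.get(word)
        if not followers:
            break
        word = rng.choice(followers)
        out.append(word)
    return " ".join(out)

print(generate("the", seed=42))
```

Every run stitches together fragments of the training data in a plausible-looking order, which is the whole trick, just at a vastly smaller scale.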

Since there is chance involved, your results can have random errors, or systematic errors if the training data is faulty or biased (bias is also its own category of problems). Or the output will just not really fit the prompt, it will only look like it might. I have even seen factually wrong answers from AI for trivial things. For pictures, the most common error is the number of fingers, just like in the picture in the OP: the gloved hand has only 4. The character has no teeth either. Or look at the numbers on the clock. From afar they look like they might be Roman numerals, but they are wrong.
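The "random error" part can be sketched with a toy sampler. The answer probabilities below are invented for illustration; the point is that sampling keeps a small but nonzero chance of emitting a low-probability, flat-out wrong answer:

```python
import random

# Invented distribution over possible "answers" to some prompt.
# The top choice is correct, but wrong answers keep a small probability.
answers = {"Paris": 0.90, "Lyon": 0.07, "Berlin": 0.03}

def sample(dist, rng):
    # Draw one answer proportionally to its probability.
    r = rng.random()
    cumulative = 0.0
    for answer, p in dist.items():
        cumulative += p
        if r < cumulative:
            return answer
    return answer  # fallback for floating-point edge cases

rng = random.Random(0)
draws = [sample(answers, rng) for _ in range(1000)]
print(draws.count("Berlin"))  # a small fraction of draws is simply wrong
```

No amount of asking nicely makes that tail probability zero; it can only be made smaller.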

The AI we are talking about is not the sum of its input (training data), let alone more than the sum. It is a condensed version, boiled down, its essence. One could even say a very good model would be like a super-high-efficiency lossy compression algorithm: one needs only a good prompt and a random seed number to recreate almost all of the training inputs, plus a lot more outputs that are similar to the inputs in principle and reachable by random variations.
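The "prompt plus seed" point can be seen in miniature with any seeded generator. The function, word lists, and prompt below are illustrative stand-ins, not a real model: with the same prompt and the same seed the output is reproduced exactly, while a different seed yields a variation.

```python
import random

def variation(prompt, seed):
    # Stand-in for a generative model: once the seed is fixed,
    # expanding the prompt is fully deterministic.
    rng = random.Random(seed)
    adjectives = ["quiet", "red", "ancient", "tiny"]
    nouns = ["forest", "engine", "harbor", "moon"]
    return f"{prompt} {rng.choice(adjectives)} {rng.choice(nouns)}"

a = variation("a picture of a", seed=1234)
b = variation("a picture of a", seed=1234)
c = variation("a picture of a", seed=9999)
print(a == b)  # same prompt + same seed -> identical output
print(a, "|", c)  # a different seed usually gives a different variation
```

This is why image tools like Stable Diffusion let you set the seed: the same prompt and seed regenerate the same image bit for bit.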

Dang, I've programmed NPCs that are smarter than that. You keep using your algorithm; as a game dev you might do better to build your own, though.

import random

global_storage_var = 0

while True:  # forever

    I = int(input())  # get Input (I)

    R = random.randint(1, 10)  # pickRandom <1-10>, store (R)

    T = I * R

    print(T)

    T += global_storage_var  # still pointless: T is thrown away next loop

(JK this code is totally useless, 98% comedy and 2% Python, but at least it runs now)

I replied to this statement below. It made you sound like you have seen one too many Terminator movies and other science fiction with bad robots, and applied that to the generative "AI" things. You can debate the ethics of generative AI and call it an evil tool, but your argument that it is evil because it can do certain stuff is based on a false premise.

From what I know, it's totally capable of jumping to conclusions and self-modifying out any safeguards. That makes it capable of much, much more evil than any tool I know of. True, you can use a knife without cutting yourself if you're careful, but what if the knife was self-aware and decided that you were redundant?

Call me crazy if you like, but I'm not drawing on science fiction for any of this. I had a really intense conversation with someone who firmly and fully believes in the capabilities and benefits of AI, and that conversation is where most of my reference comes from. Self-modifying code is super unstable in my opinion; if it can change without any moderation, then there are no limits. Murphy's law: what can go wrong WILL go wrong. Even if there ARE limits, then you know the developers will just make it show you targeted ads and tell you to buy their software. Who wouldn't? It's job security.

Maybe the knife analogy was overkill, so just forget that. I really wanted to make a point and got carried away.

Terminator was actually a really bad movie, in my opinion. Also note that I said "from what I know". If you know differently, feel free to correct me, but don't insult my intelligence. Take everything in this discussion board with a grain of salt.

The AI systems we are talking about are not self-modifying code. They are LLMs: large language models. There is a model that is a condensed essence of the training data, and that so-called large language model is capable of creating a response to a prompt. The same applies to images. Obtaining the training data and using the output of such systems is criticized on ethical grounds, hence the "evil" aspect.

But maybe you were talking about self-driving cars.

The Genetic Arms Race | How CRISPR and AI Destroy the World

The Quantum Apocalypse: All Your Secrets Revealed

Artificial Intelligence Out of Control: The Apocalypse is Here | How AI and ChatGPT End Humanity

Why is it not just called a database then?


If you guys wanna talk about the dangers of technology, please do so in another topic. Whatever your point is, it is off-topic.

This thread is about the usage of the output of large language models (falsely named AI) in game development. That means images from applications like Stable Diffusion or text from applications like ChatGPT, things like that. Not general "AI" and "quantum".

You can run Stable Diffusion on your own computer to see what it does, what it can do, and what it cannot do. Try it. The results are impressive, and it will teach you more about these models than videos that are commercially made to bait clicks and show advertisements.

thanks mate 😅