
AI is a tool and a tool is no more evil than the person who wields it.

Every major invention is followed by upheaval until society adapts to its new circumstances. How many scribes lost their jobs when the printing press was invented? How many laborers were laid off when the industrial revolution came along? How many people in the horse industry fell into irrelevancy after the invention of cars and locomotives? Yet all these inventions benefited humanity in the long run.

Self-driving cars and AI are just the newest batch of inventions that society complains about, because people are hyper-focused on the short-term consequences and not the greater benefits they could bring. I know nobody likes being told their education, skills, and career are becoming irrelevant... If you want to stay competitive, you have to provide something AI can't, or do it better than it does.

Most industries already have little regard for their employees and customers... Cutting corners to maximize profits has been their modus operandi for a while now, even before AI. They have grown corrupt, immoral, and unsustainable, all in the name of short-term profits and goals. We're partially to blame too, because despite our complaints we still buy from them or use their services.


"No more evil than the person who wields it" I would like to add that AI is FAR more than the sum of its inputs. From what I know, it's totally capable of jumping to conclusions and self-modifying out any safeguards. That makes it capable of much, much more evil than any tool I know of. True, you can use a knife without cutting yourself if you're careful, but what if the knife was self-aware and decided that you were redundant?

To use a phrase from an old cartoon, "I wouldn't touch you with a 99 1/2 foot pole."

Besides that, I don't believe that machines are capable of their own reasoning. Take that how you will.

Totally agree on the sad state of industry and politics, but AI isn't the problem there, just a symptom.


You might have been misled by the term "AI". There is nothing intelligent or self-aware about a large language model.

LLMs are essentially this https://en.wikipedia.org/wiki/Markov_chain with a lot of sophistication, an immensely huge model, and added dimensions. The "AI" we are talking about needs a prompt. It will then generate a probable answer for that prompt, and that "probable" includes pseudo-randomness.
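To make that concrete, here is a toy word-level Markov chain in Python (my own minimal sketch, nothing to do with any real LLM's internals): it records which words follow which in some training text, then picks a "probable" continuation at random.

import random
from collections import defaultdict

def train(text):
    # Record, for every word, the words that followed it in the training text.
    chain = defaultdict(list)
    words = text.split()
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain, word, length=10):
    # Starting from a "prompt" word, repeatedly pick a plausible next word.
    out = [word]
    for _ in range(length):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))  # the pseudo-randomness
    return " ".join(out)

chain = train("the knight drew the sword and the knight rode into the night")
print(generate(chain, "the"))

An LLM replaces the word-follower table with a neural network over token contexts, but the "predict a probable next token, with randomness" loop is the same basic idea.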

Since there is chance involved, your results can have random errors, or systematic errors if the training data is faulty or biased (bias is also its own category of problems). Or the output will just not really fit the prompt, only look like it might. I have even seen factually wrong answers from AI for trivial things. For pictures, the most common error is the number of fingers, just like in the picture in the OP: the gloved hand has only four. The character has no teeth either. Or look at the numbers on the clock: from afar they look like they might be Roman numerals, but they are wrong.

The AI we are talking about is not the sum of its input (training data), nor more than that sum. It is a condensed version, boiled down to its essence. One could even say a very good model would be like a super-high-efficiency lossy compression algorithm: one needs only a good prompt and a random seed number to almost recreate all of the training inputs, plus a lot more outputs that are similar to the inputs in principle and reachable by random variation.
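To illustrate that last point with a toy stand-in (not any real model's API): because the "randomness" comes from a deterministic pseudo-random generator, the same prompt plus the same seed reproduces exactly the same output.

import random

def generate(prompt, seed, length=8):
    # Toy stand-in for a generative model: output depends only on prompt + seed.
    rng = random.Random(seed)
    words = prompt.split()
    return " ".join(rng.choice(words) for _ in range(length))

print(generate("a knight guards the old clock tower", seed=42))
print(generate("a knight guards the old clock tower", seed=42))  # identical output
print(generate("a knight guards the old clock tower", seed=7))   # different output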

Dang, I've programmed NPCs that are smarter than that. You keep using your algorithm; as a game dev, you might do better to build your own, though.

import random

global_storage_var = 0

while True:                       # forever {
    i = float(input())            # get Input (I)
    r = random.randint(1, 10)     # function; pickRandom <1-10> store (R)
    t = i * r                     # I*R=T
    print(t)                      # print (T)
    t += global_storage_var       # T+=globalStorageVar (result goes nowhere)

(JK this code is totally useless, 98% comedy and 2% Python)

I replied to this statement below. It makes you sound like you have seen one too many Terminator movies and other science fiction with bad robots, and applied that to the generative "AI" thingies. You can debate the ethics of generative AI and call it an evil tool, but your argument for how it is evil, because it supposedly can do certain stuff, is based on a false premise.

From what I know, it's totally capable of jumping to conclusions and self-modifying out any safeguards. That makes it capable of much, much more evil than any tool I know of. True, you can use a knife without cutting yourself if you're careful, but what if the knife was self-aware and decided that you were redundant?

Call me crazy if you like, but I'm not drawing on science fiction for any of this. I had a really intense conversation with someone who firmly and fully believes in the capabilities and benefits of AI, and that conversation is where most of my reference comes from. Self-modifying code is super unstable in my opinion; if it can change without any moderation, then there are no limits. Murphy's law: what can go wrong WILL go wrong. And even if there ARE limits, then you know the developers will just make it show you targeted ads and tell you to buy their software. Who wouldn't? It's job security.

Maybe the knife analogy was overkill, so just forget that. I really wanted to make a point and got carried away.

Terminator was actually a really bad movie in my opinion. Also note that I said "From what I know." If you know differently feel free to correct me, but don't insult my intelligence. Take everything in this discussion board with a grain of salt.

The AI systems we are talking about are not self-modifying code. They are LLMs: large language models. The model is a condensed essence of the training data, and that so-called large language model is capable of creating a response to a prompt. The same can be applied to images as well. Obtaining the training data and using the output of such systems is criticized under ethical aspects, hence the "evil" part.

But maybe you were talking about self driving cars.

The Genetic Arms Race | How CRISPR and AI Destroy the World

The Quantum Apocalypse: All Your Secrets Revealed

Artificial Intelligence Out of Control: The Apocalypse is Here | How AI and ChatGPT End Humanity

Why is it not just called a database then?


If you guys wanna talk about the dangers of technology, please do so in another topic. Whatever your point is, it is off-topic here.

This thread is about the usage of the output of large language models (falsely named AI) in game development. That means images from applications like Stable Diffusion or text from applications like ChatGPT. Things like that. Not general "AI" and "quantum".

You can run Stable Diffusion on your own computer to see what it does, what it can do, and what it cannot do. Try it. The results are impressive, and it will teach you more about these models than videos that are commercially made to bait clicks and show advertisements.
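For example, a minimal sketch using the Hugging Face diffusers library (the checkpoint name and hardware assumptions here are mine, not from this thread; you need a GPU with a few GB of VRAM for the half-precision variant):

import torch
from diffusers import StableDiffusionPipeline

# Download a Stable Diffusion checkpoint and move it to the GPU.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # one commonly used checkpoint (assumption)
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# Generate one image from a text prompt and save it.
image = pipe("a clock tower with roman numerals", num_inference_steps=25).images[0]
image.save("output.png")

Count the fingers and read the clock faces in what comes out, and you will see the same systematic errors described above.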

thanks mate 😅