
If you manage to make such a sentient crawler - you won't need to bother with research or games - you can just sell the crawler!

Realistically, the first few minutes of my games are logo screen, instructions, menu screen, awards screen - then the game. Your crawler would have to press the right buttons to navigate through all of that. It would need to read the instructions to know what to do. Only then could it "play" the game - and only if someone programs those instructions into it. I don't think you're going to be able to do that with a bot.

You might be more successful just gathering the various screenshots and/or videos and getting your data from those instead.

One of our first projects in this area used screenshots collected while playing back speedrun input files: https://adamsmith.as/papers/KEG-18_paper_4.pdf (tested on Super Mario World and Super Metroid). It showed that you could search by screenshots and find specific results within a given game, but it raised the question: where do you get the speedruns from?

In another project, we extracted screenshots and audio transcripts from YouTube Let's Play videos: https://adamsmith.as/papers/GAMNLP-19-3.pdf. We could find the "horses" in The Last of Us and the "selfie" in Life is Strange, but still, can a machine do the playing instead of a human?

Of course a machine can -- and even really dumb strategies can sometimes do well. We tried out some simple strategies on games spanning from Atari 2600 to Nintendo 64: https://adamsmith.as/papers/KEG_2019_paper_9.pdf
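
To give a feel for what "really dumb strategies" means here, a minimal sketch of a random button-masher is below. The emulator interface (`ToyEnv` and its `step` method) is a made-up stand-in for illustration, not the actual harness we used; real setups drive an emulator and hash screenshots instead of integer positions.

```python
import random

BUTTONS = ["up", "down", "left", "right", "a", "b"]

class ToyEnv:
    """Stand-in for an emulator: position on a line; 'right' makes progress."""
    def __init__(self):
        self.pos = 0
    def step(self, button):
        if button == "right":
            self.pos += 1
        elif button == "left":
            self.pos = max(0, self.pos - 1)
        return self.pos  # the "screen" we observe this frame

def random_agent(env, presses=500, hold=8):
    """Press a random button, hold it for a few frames, repeat.

    Returns the set of distinct observations (screens) reached,
    a crude measure of how much ground the agent covered.
    """
    seen = set()
    for _ in range(presses):
        button = random.choice(BUTTONS)
        for _ in range(hold):
            seen.add(env.step(button))
    return seen

random.seed(0)
coverage = random_agent(ToyEnv())
print(len(coverage))  # number of distinct "screens" reached
```

Holding each button for several frames (rather than resampling every frame) is one of the simple tricks that makes even random play surprisingly effective in platformers.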

But, like, can the algorithm cover serious ground on some timescale that compares with human gameplay? We looked at that in Super Mario World and Legend of Zelda: https://adamsmith.as/papers/PID5953685.pdf. It turns out that both humans and machines lose steam as you let them explore for longer. Interleaving human and machine exploration led to the most ground covered for a given amount of time invested.

Do these things just press random buttons, or do they learn from experience? Our first algorithms just pressed random buttons, yes, but the new ones (tested on in-development Unity games that integrate our machine playtesting script) do learn from experience and improve over time: https://adamsmith.as/papers/MonsterCarlo2.pdf They can bootstrap from human gameplay demonstrations or from their own random flailing.
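
To illustrate the "bootstrap from demonstrations, then improve" idea in miniature: below is a tiny epsilon-greedy learner over button choices whose reward estimates can be seeded from human (button, reward) pairs. This is just a sketch of the concept; it is not the MonsterCarlo implementation, and all names are illustrative.

```python
import random
from collections import defaultdict

class ButtonLearner:
    """Epsilon-greedy choice over buttons with incremental reward estimates."""
    def __init__(self, buttons, epsilon=0.1):
        self.buttons = buttons
        self.epsilon = epsilon
        self.value = defaultdict(float)  # running mean reward per button
        self.count = defaultdict(int)

    def bootstrap(self, demo):
        """Seed estimates from (button, reward) pairs in a human demo."""
        for button, reward in demo:
            self.update(button, reward)

    def choose(self):
        if random.random() < self.epsilon or not self.count:
            return random.choice(self.buttons)  # explore (random flailing)
        return max(self.buttons, key=lambda b: self.value[b])  # exploit

    def update(self, button, reward):
        self.count[button] += 1
        # incremental mean: nudge estimate toward the observed reward
        self.value[button] += (reward - self.value[button]) / self.count[button]

learner = ButtonLearner(["left", "right", "jump"])
learner.bootstrap([("right", 1.0), ("right", 1.0), ("left", 0.0)])
```

With no bootstrap data the learner starts out as a pure random button-presser and only gradually sharpens; seeding it from a demonstration skips that cold start, which is the same trade-off the paper explores.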

We're in the business of creating all this stuff and just giving it away for free. My lab brings in money by proposing interesting new work to be done (to be paid for by taxpayers) rather than selling the technology we developed last year. I'm scratching on the door of Itch as part of a proposal to get more money (from the US National Science Foundation) to do the next steps in these projects. It's tricky to get funding for research around games, but we're trying.