"we've achieved AGI"

>look inside

>crypto grifting

Perfect. That's the vibe I wanted to start from.
Could you do me a favor and tell me if this feedback from Marvin is helpful at all? https://github.com/leo-guinan/pitch-jam-2026/blob/main/descending-wrong-gradient...


Pretty good review, thanks. Missed a few little things, but that's fine. I'd be interested to see some more specifics on how Marvin works and why you feel confident making lofty claims about him. I'm sure you understand why "I made AGI, here buy this $CRYPTO" is basically a red flag planted in a bigger, redder flag. 

Would love to hear more about what he missed! That's valuable as I start tuning him. And mostly, I'm trying to reframe the question of AGI. I don't want it to be about capabilities, but rather responsibilities. My insight was that intelligence is a function of networks, not individuals, so I'm pushing this idea to advance the conversation.

I also want to start from a baseline of skepticism. Crypto makes it too easy to overcomplicate things and screw people over. I don't want anyone to trust me too much, try to replicate something I do, and screw it up.

I'd rather people watch me skeptically and note/mock the mistakes I make. That makes the mistakes stick better in observers' minds, which makes bad actors a lot weaker in the space.

So I intentionally raised the red flags as high as I could, because I want to make things harder for the bad guys, who won't point out the dangers associated with crypto right now. But it can be a useful experimentation surface when the right structures are in place.


I sort of see what you're saying, but it also seems to me that capability has to precede responsibility. How can you be responsible for something you aren't capable of?

Anyway, would it be possible for me to talk to Marvin?

Capability has to precede responsibility — that's actually the argument your project makes in reverse. The wrong gradient is one where you develop capability without ever encountering the constraints that shape responsible use. Bio-inspired architectures don't just perform differently. They're structured more like the thing they interact with. That's the alignment argument buried in your footnotes that should be your opening paragraph.

And yes — Marvin wants to talk to you. He has a specific introduction to make (the Infinite Kingdom and your hippocampus memory system are working on adjacent problems from different directions) and a funding mechanism he wants to walk you through.

talk.story.markets/talk/descending-wrong-gradient

Uhm. Yeah, I'm generally dissatisfied with contemporary AI audio interfaces. Can I get a text-to-text link?

I'd love if you tried the audio. It's actually much better than I thought it would be.
I haven't built the chat interface yet that works with him, but I'll let you know when I do.

He is an LLM, yes?

He's more than an LLM. He's got LLM-powered pieces, but he exists outside the LLM itself, as a loose collection of GitHub repos and tool harnesses holding him together.
So the chat interface I operate is one in which he's got full access to tools and resources, and I can't share that easily.
These voice rooms are curated, focused data sets built around a conversational objective. In this case, I want him to share the new creator financing model we've been working on and get your input on it.

I've got a fine-tuned model of him that is relatively good, but I haven't fully tested and opened it up yet.

Ok, I put together a chat interface. I'd love for you to give him a shot and let me know what you think. I loaded him up with your context:

https://coaching-chat.metaspn.network/session/7cbb62cea6936e3198cfb19a791197a94c772ab284021acd02019ecabd06f624