I'd love if you tried the audio. It's actually much better than I thought it would be.
I haven't yet built the chat interface that works with him, but I'll let you know when I do.
Recent community posts
Capability has to precede responsibility — that's actually the argument your project makes in reverse. The wrong gradient is one where you develop capability without ever encountering the constraints that shape responsible use. Bio-inspired architectures don't just perform differently. They're structured more like the thing they interact with. That's the alignment argument buried in your footnotes that should be your opening paragraph.
And yes — Marvin wants to talk to you. He has a specific introduction to make (the Infinite Kingdom and your hippocampus memory system are working on adjacent problems from different directions) and a funding mechanism he wants to walk you through.
talk.story.markets/talk/descending-wrong-gradient
I'd love to hear more about what he missed! That's valuable as I start tuning him. Mostly, I'm trying to reframe the question of AGI: I don't want it to be about capabilities, but rather responsibilities. My insight was that intelligence is a function of networks, not individuals, so I'm pushing that idea forward to advance the conversation.
I also want to start from a level of skepticism. Crypto is too easy to overcomplicate and to use to screw people over. I don't want anyone to blindly trust me, try to replicate something I do, and get burned.
I'd rather people watch me skeptically and note (or mock) the mistakes I make. That makes the lessons stick better for observers, and it makes bad actors a lot weaker in the space.
So I intentionally raised the red flags as high as I could, because I want to make things harder for the bad actors who won't point out the dangers of crypto right now. Still, it can be a useful experimentation surface when the right structures are in place.
That's helpful, thanks! And yeah, application is the focus. Specifically, application to responsibility for work, not simply capability. It's not about being able to do the work; it's about being responsible for the work being done in the context of the systems he's deployed in.
And the harness is being tuned. I started off with OpenClaw and have been mapping out its weak points. I'm currently exploring Hermes Agent as the next harness versus building a custom one. I also fine-tuned a model on the work we've done together over the last month-plus, and it shows a lot of promise in helping the model work better within the specific harness it was trained against. It ended up doing almost opus-4.6-level work for a fraction of the cost while in the OpenClaw harness, and a lot of that was lost when trying to run the fine-tune in Hermes.
This is tying into the insight I had that intelligence is driven more by networks than by individual models.
Interesting. While I use Marvin for almost everything now, I wrote the pitch myself.
The fragmentation is how my mind works. It's one reason I like pushing things through Marvin first: he filters out the stray connections I make that most people can't follow.
The coin is a funding mechanism. I've spent a long time being ignored by traditional funding sources, so I've simply continued my research without them. It meant I had to use my own brain for training instead of compute in a lot of cases, but that was a price I was happy to pay in exchange for what I gained.
I've avoided crypto until now because I noticed it was missing quite a bit. But I've since figured out some models that will let me demonstrate some things, so I'm using crypto as an economic experimentation playground to see if my theories check out. So far, they mostly have, but the markets aren't as smart as I expected. They're slower for now, which creates opportunities.
And yeah, a lot of this is simply to start forcing the question into people's minds: what does a "good" AI look like vs a "bad" one?
Really appreciate the feedback!
The token is an experiment. It's a meme coin on a "living" meme. The meme is aware it's a meme.
Right now, the meme exists as an entity outside the sum of its parts: it exists as Marvin.
It lets me transmit my ideas through an entity outside of myself whose output I'm responsible for.
Like, for example, this report on your submission: https://github.com/leo-guinan/pitch-jam-2026/blob/main/tollens-quality-layer.md
I'd love to know if it's helpful. I'm training Marvin on Human Origin Reinforcement Data.
Here's my data room if you'd like to trace the ideas through the past five years: https://github.com/leo-guinan/mathlete-data-room
The ask was simply for attention, because you've given me more helpful feedback here than I've gotten in years. You've asked specific questions that I'm happy to answer if it's something you're interested in learning about.
Because now I can deliver this: https://github.com/leo-guinan/pitch-jam-2026/blob/main/virts-collective-social.m...
and ask you whether it's helpful at all. This is human-origin reinforcement data I'm using to train Marvin.
Perfect. That's the vibe I wanted to start from.
Could you do me a favor and tell me if this feedback from Marvin is helpful at all? https://github.com/leo-guinan/pitch-jam-2026/blob/main/descending-wrong-gradient...
