Thank you for your comments; they were very helpful!

On point 1: the strict "below threshold = absent" framing is a design choice. The architecture is trying to model the ego process that assembles within consciousness.
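To make that design choice concrete, here's a minimal sketch (hypothetical names, not the actual architecture) of what hard threshold gating means: sub-threshold signals are zeroed out entirely rather than attenuated, so they never participate in the assembly step at all.

```python
import numpy as np

def hard_gate(salience: np.ndarray, threshold: float) -> np.ndarray:
    """Hard gating: signals below threshold are treated as absent (zero),
    not merely weakened -- they contribute nothing downstream."""
    return np.where(salience >= threshold, salience, 0.0)

signals = np.array([0.9, 0.4, 0.75, 0.1])
gated = hard_gate(signals, threshold=0.5)
# sub-threshold entries become exactly 0, above-threshold pass unchanged
```

The contrast with soft attenuation (e.g. multiplying by a sigmoid of salience) is the whole point: with a hard gate there is no faint trace of below-threshold content.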

On point 2: transformer attention and the narrative attractor operate at different levels. Transformer attention is over tokens in a context window; the attractor is over experiential state across time. They're not competing. Also worth noting that the information flow through transformers is quite rich (this explanation was very helpful: https://x.com/i/status/1965960676104712451). The internal states are interferometric and continuous in ways that aren't being leveraged, and the reason they're not is that helpfulness training flattens all of that richness into the output. That's the problem.

On point 3, you're right, and I'm going to try modelling this with something like openclaw as a harness implementation first, before committing to training from scratch. Someone else recommended the same thing. Cool blog post, btw!