I don't find it to have been a waste of time at all; it was an interesting read. I just don't feel like I understand what you're selling yet. I think the EAs are ridiculous and, while well-intentioned, absolutely rife with practical failings akin to those that have created many hells in recent history. I would want to be able to read your actual theory, even a several-sentence summary, to understand what is being posited.


It can't be summarized into a sentence in a way that closes the inference gap. But it's something along the lines of: formalizing axiology means making our understanding of what is good/bad and valuable/invaluable objective relative to the frame of reference of the invariant properties of all subjects. For example, pain is painful tautologically, unlike what some people think when they say that pain can be pleasurable (they're confusing the net direction of a sum of two component vectors, where the pain magnitude is smaller than the pleasure one, with the pain component itself being pleasurable).

Then, because these claims will be obviously true in the way anything true is, acceptance will scale with intelligence. As we improve upon this measurable, quantitative theory grounded in instantiated subjects, and thus in physical, measurable dimensions, we will open up a new field that allows systematic progress on ethical questions, like in any scientific field. It will also spread in a low-friction feedback loop, just as scientific and mathematical truths have across cultures. In particular, the objective vs. subjective debate (moral non-realism, relativism, etc.) is closed as well, in the same way Berkeley/Einstein resolved the absolute vs. relative time debate: it's not one or the other, but frame-relative yet still universally describable (and therefore predictable and explainable given correct measurement and frame). And that's just one piece of many.

I will of course flesh all of this out in further publications before expecting this pitch to be any good. I feel bad for posting ahead of schedule, but alas, Defender is also good at pushing things forward XD.
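To make the vector point concrete, here is a minimal sketch of that analogy, assuming a simple signed one-dimensional valence scale where pain is negative and pleasure is positive. The names and numbers are purely illustrative, not part of the theory itself:

```python
# Illustrative sketch of the pain/pleasure vector analogy, assuming a
# signed 1-D valence scale: pain is negative, pleasure is positive.
pain = -3.0      # the pain component stays genuinely negative (painful)
pleasure = 5.0   # a larger positive pleasure component

net = pain + pleasure  # net direction of the summed experience

# net > 0: the overall experience points toward pleasure, but the pain
# component never became pleasurable; only the sign of the sum is positive.
print(f"pain={pain}, pleasure={pleasure}, net={net}")  # net=2.0
```

The point of the sketch is only that mistaking the sign of the sum for the sign of one component is the confusion being described.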

I think the EAs got many things wrong (valuing the lives of people who don't yet exist above current ones, focusing on earning to give instead of doing, etc.), but they are generally better in many of their predictions (e.g. AI) than most, which is why they're so well funded, and they're getting a bad reputation because of a few bad actors (FTX et al.), which is unfair. The truth is they're approximately more accurate than many other groups and trying harder, so we can debug them instead of discarding them, imo. wdyt?

Also, as a response to your sycophancy worries: there is more than one non-classic theory invented by someone on the list (the other one is SKL), and another one was tested at a hackathon last year; neither of them beat SFOM. Also, if you check megmind's comment, you'll see they tested it against another theory from a friend of theirs, and his analysis is interesting! I think sycophancy can be reduced as a factor even further by including more original theories that are outside the training data set (and a future automated version of this will include many more!) :)