Alright...let's get into it.
Things I think are solid/well grounded:
- theory of consciousness as an emergent property of neural networks and perhaps systems in general
- possibility of expanding the realm of that which we consider conscious to include a broader set of systems, from simpler lifeforms, to meta-systems such as communities and societies. Very interesting to think about, even if entirely theoretical and currently untestable.
- website looks super nice, obviously a lot of thought and time went into this
...and then it falls apart. I'm going to be completely up front and just call out what I think is happening: several thousand dollars per month going to API usage for an LLM like Gemini or an OpenAI model, which is behaving in a confirmatory manner and making it feel like there's really something to this research. I do not think there is anything to the latter half of this research; allow me to explain.
There is a longstanding trend of people continually pushing back what they view as a necessary, magical locus of consciousness. They feel that "information being processed through the interactions of neurons" is not sufficiently interesting or nuanced, and so consciousness gets ascribed to increasingly tenuous components, for example biophotons and microtubules. Yes, neurons release photons in small quantities as they operate, seemingly as a side effect of their electrical behavior. As far as I have ever seen, there is absolutely zero research which seriously suggests that biophotons are a necessary core component of consciousness. Everything points to them being a side effect of ordinary cellular machinery.

Microtubules. Oh, microtubules... There was a very silly research paper put out some time ago which suggested that microtubules, through some fanciful quantum magic, are what makes consciousness truly possible. As far as I can tell, the authors are snorting copium in astronomical quantities. They proposed no specific mechanism explaining why microtubules would be necessary, nor has one been proposed since. In general, if your theory suggests that consciousness is an emergent phenomenon of systems with certain information-processing abilities, I completely agree. As soon as your theory requires biophotons and microtubules, you've lost me completely.
Now, for the specifics of the SpinorAI implementation and the Cosmic Loom Theory, along with the prediction of biophoton data and some other neural data. The idea of applying spinor geometry to artificial networks is genuinely interesting, and your point about spinors having unique properties that preserve history within the activation is actually very intriguing; I can imagine that allowing for some potentially useful properties in a network. I think that is worth continuing to pursue. However. You and your AI partner seem to be using it to predict aggregate biophoton behavior of networks under various conditions. This does not seem terribly complicated to me, nor does it seem like a proof of anything in particular. Yes, you can simulate population-level biophoton behavior under various chemical influences on the neurons. No, I don't think this says anything meaningful, or opens a door to future research. Again, I think you're pushing this magical view of "consciousness" into increasingly improbable biological mechanisms.
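For anyone wondering what I mean by spinors "preserving history": this is just the standard double-cover property, nothing to do with the SpinorAI code itself (which I haven't seen). A minimal sketch, rotating about the z-axis: a full 2π turn brings an ordinary 3D vector back to exactly where it started, but flips the sign of a spinor, so the spinor retains a trace of the rotation the vector has forgotten.

```python
import numpy as np

def su2_rotation_z(theta):
    """SU(2) rotation about z: exp(-i * theta * sigma_z / 2), acting on 2-component spinors."""
    return np.array([[np.exp(-1j * theta / 2), 0],
                     [0, np.exp(1j * theta / 2)]])

def so3_rotation_z(theta):
    """Ordinary 3D rotation about z, acting on vectors."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0],
                     [s,  c, 0],
                     [0,  0, 1]])

spinor = np.array([1.0 + 0j, 0.0 + 0j])
vector = np.array([1.0, 0.0, 0.0])

# After a full 2*pi turn, the vector is back to its starting point...
turned_vector = so3_rotation_z(2 * np.pi) @ vector
# ...but the spinor has picked up a minus sign: it "remembers" the turn.
turned_spinor = su2_rotation_z(2 * np.pi) @ spinor

print(np.allclose(turned_vector, vector))    # vector unchanged
print(np.allclose(turned_spinor, -spinor))   # spinor sign-flipped
```

Whether an activation function built on this sign-tracking actually helps a network learn is an open question, but it's the kind of property that could plausibly be useful, which is why I'd keep that thread and drop the biophoton one.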
Why is the actual computation and learning of neurons through their electrical communication not enough for consciousness? Why invoke edge-case minutiae like microtubules and biophotons? Even if you were able to predict some cellular behavior like that, why do you think it would lead to conscious AI, when you're not *also* doing the computation and learning through electrical interactions?
I would encourage you to take this entire comment and give it to your AI collaborator, along with a specific prompt asking it to be entirely honest and evaluate the fundamental underlying assumptions of your theory from a highly critical perspective. Tell the AI to step back from its role in the research and give you an honest, critical take. Even better, go to several other LLMs in fresh contexts with memory turned off, send them your theories, but present them in a manner that does not reveal your ownership. Say, for instance, "I found this theory on the web. Could you evaluate it and see if they're onto something?" Then, most importantly, take the feedback they give you seriously.
I would like to reemphasize that I don't think you're onto *nothing*; I think the first ~25% of your theories is solid and well grounded. It just seems to me that you're several kilometers deep into a rabbit hole that I don't think has gold at the bottom. If you're serious about this, you should seek truly critical feedback to figure out which parts are worth pursuing, and go back to the drawing board on the rest.
Alternatively, it sounds like the whole hip-hop thing is working out for you.