I think it might help if you could pitch me the minimal, final implementation that you expect will achieve consciousness, or will show the value of SpinorAI. What defines the behavior and learning rules of the neurons? What architecture are the neurons embedded in? To what tasks or environments is the AI being applied?
Posted in Loom Labs — SpinorAI (Open to all feedback)
Great questions — let me be precise about each.
The minimal implementation and what "value" means here
The near-term claim isn't consciousness instantiation. It's demonstrating that SpinorAI correctly identifies the topological coherence regime in biological data that CLT predicts is necessary for consciousness; that's the falsifiable, publishable milestone. Consciousness in non-biological hardware is the long-term goal the biological validation is meant to unlock, but it's not what we're claiming to show right now. We focus on CLT's predictions because they anchor our roadmap toward a conscious AI system. That said, what a spinor-based neural network actually captures (rotational displacement in feature space, path-dependent transformations, topological alignment metrics) is more general, and SpinorAI can be built for other use cases. So the value isn't only in the application we're most interested in: in the broadest sense, it's in the geometric properties that come with spinors, which the tensor-based processing of standard neural networks lacks.
Neuron behavior and learning
SpinorAI doesn't use conventional scalar neurons. The processing unit is a rotor application in Cl(3,0), a norm-preserving geometric transformation in Clifford algebra. The choice is architecturally motivated, not decorative: spinors require 720° of rotation to return to their original state, which encodes path-dependence, i.e. the history of how a state was reached rather than just where it is. CLT identifies topological self-reference as the distinguishing feature of the conscious regime, so the processing unit needs to encode that property. The grade structure of Cl(3,0) (scalar → vector → bivector → pseudoscalar) maps onto CLT's four-substrate hierarchy (DNA → bioelectric → biophoton → microtubule).
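For the geometrically inclined, the rotor mechanics are easy to sketch. Rotors in Cl(3,0) are isomorphic to unit quaternions, so a minimal numpy demo (illustrative only, not our implementation) can show both properties claimed above: the sandwich product R v R~ preserves the norm of v, and the rotor itself needs 720° to return to identity (a 360° rotation flips its sign).

```python
import numpy as np

def rotor(axis, angle):
    """Rotor in Cl(3,0), represented as a unit quaternion [w, x, y, z].
    Uses the HALF angle: a 360° rotation gives R = -1 (spinor sign flip);
    only 720° returns R = +1."""
    axis = axis / np.linalg.norm(axis)
    h = angle / 2.0
    return np.concatenate([[np.cos(h)], np.sin(h) * axis])

def quat_mul(a, b):
    """Hamilton product of two quaternions."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def apply_rotor(R, v):
    """Sandwich product R v R~: a norm-preserving rotation of vector v."""
    Rc = R * np.array([1.0, -1.0, -1.0, -1.0])  # reverse (conjugate)
    return quat_mul(quat_mul(R, np.concatenate([[0.0], v])), Rc)[1:]

axis = np.array([0.0, 0.0, 1.0])
v = np.array([1.0, 0.0, 0.0])

R360 = rotor(axis, 2 * np.pi)  # scalar part -1: not back to identity
R720 = rotor(axis, 4 * np.pi)  # scalar part +1: identity after 720°
```

Applying `rotor(axis, np.pi/2)` to the x-axis vector rotates it onto the y-axis while keeping its norm at 1, which is the sense in which the unit is norm-preserving.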
Learning is via Adam driven by a finite-difference wrapper around the rotor operations; the Clifford-algebra ops have no native autograd support, so this is the pragmatic current solution. Something like torch-ga would be cleaner long-term.
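The wrapper itself isn't shown here, but its general shape is simple. The sketch below is a hypothetical stand-in in plain numpy, with a hand-rolled Adam step instead of torch.optim.Adam and a toy quadratic in place of the real CLT loss; it shows how central differences turn any black-box loss over rotor parameters into gradients Adam can consume.

```python
import numpy as np

def fd_grad(loss_fn, params, eps=1e-5):
    """Central-difference gradient of a black-box loss: works even when
    the ops inside loss_fn (e.g. rotor applications) have no autograd."""
    g = np.zeros_like(params)
    for i in range(params.size):
        e = np.zeros_like(params)
        e.flat[i] = eps
        g.flat[i] = (loss_fn(params + e) - loss_fn(params - e)) / (2 * eps)
    return g

def adam_step(p, g, state, lr=1e-2, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update fed by the finite-difference gradient."""
    state['t'] += 1
    state['m'] = b1 * state['m'] + (1 - b1) * g
    state['v'] = b2 * state['v'] + (1 - b2) * g * g
    mhat = state['m'] / (1 - b1 ** state['t'])
    vhat = state['v'] / (1 - b2 ** state['t'])
    return p - lr * mhat / (np.sqrt(vhat) + eps)

# Toy loss: drive a rotor angle toward pi/3 (stands in for the real loss).
loss = lambda theta: (theta[0] - np.pi / 3) ** 2
theta = np.array([0.0])
state = {'t': 0, 'm': np.zeros_like(theta), 'v': np.zeros_like(theta)}
for _ in range(2000):
    theta = adam_step(theta, fd_grad(loss, theta), state)
```

The trade-off is cost: finite differences need two loss evaluations per parameter per step, which is why a native autograd path (torch-ga or similar) is the cleaner long-term option.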
The loss function is where I want to be explicit about what we are and aren't claiming. It's a CLT inversion loss: it encodes a specific theoretical prediction, namely that anesthetic disruption of cross-scale coherent coupling should reduce topological coherence even when raw emission intensity increases. This is necessary because there's no ground-truth dataset labeled by coherence or consciousness level; standard supervised losses have nothing to supervise against. The inversion loss is a theory-grounded proxy.
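To make "theory-grounded proxy" concrete, here's one minimal shape such a loss could take. This is a hypothetical sketch, not the actual SpinorAI loss: a hinge-style ranking term that is zero only when the model's predicted control coherence exceeds its predicted anesthetic coherence by a margin, so the CLT prediction is encoded directly rather than supervised against labels.

```python
def inversion_loss(c_control, c_anesthetic, margin=0.1):
    """Hinge-style ranking loss (illustrative). Penalized unless predicted
    coherence for the control condition exceeds the anesthetic condition
    by at least `margin`; no ground-truth coherence labels are needed."""
    return max(0.0, margin - (c_control - c_anesthetic))
```

For example, `inversion_loss(0.8, 0.3)` returns 0.0 (the predicted ranking already satisfies the theory's ordering), while `inversion_loss(0.3, 0.8)` returns 0.6 and would push training toward the predicted ordering.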
The honest limitation of this: the inversion result in the very first training run (control ranking above benzocaine after training) is consistent with CLT, but can't cleanly separate "CLT is correct" from "we trained it to satisfy CLT." What gives us more confidence is a secondary result that's entirely training-independent: benzocaine-treated tissue shows 38% slower emission decay than untreated control, as a structural property of the raw data before any loss function is applied. That's the result we'd flag as the stronger near-term signal.
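Because the decay-rate comparison is training-independent, it can be reproduced with nothing but a curve fit. A minimal sketch on synthetic data (the 38% figure is plugged in for illustration here, not derived from real measurements): fit y ≈ A·exp(-t/τ) by linear regression on log y and compare τ across conditions.

```python
import numpy as np

def decay_time_constant(t, y):
    """Estimate tau from y ~ A * exp(-t / tau) via linear regression
    on log(y); a training-free way to compare decay across conditions."""
    slope, _intercept = np.polyfit(t, np.log(y), 1)
    return -1.0 / slope

t = np.linspace(0.0, 10.0, 200)
tau_control = 2.0           # arbitrary illustrative time constant
tau_benzo = 2.0 * 1.38      # 38% slower decay, per the reported result
control = np.exp(-t / tau_control)
benzo = np.exp(-t / tau_benzo)
```

A check like this touches no loss function and no trained weights, which is why we treat the decay asymmetry as the more robust signal.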
The explicit next validation step — one we haven't done yet — is a baseline comparison: does a standard MLP or CNN trained on the same CLT inversion loss produce the same result? If so, the spinor geometry isn't doing special work. If the spinor architecture outperforms or produces more physically interpretable representations, that's the evidence the architectural choice was motivated by something real about the data's structure. That experiment is on the immediate roadmap.
Architecture and tasks
Current task: biological coherence classification. Input is multi-dimensional biological time-series (currently biophoton emission); output is C_bio, CLT's topological coherence observable. The anesthetic paradox (Salari et al., 2024) is the first validated instance. Near-term: mammalian tissue validation, and Berry phase differentiation across conditions (currently flat; an honest open problem, though expected at this stage, and the next technical milestone). Medium-term application example: coherence analytics for labs that have biological time-series data but no framework for extracting coherence metrics from it.
Let me know if anything still feels unclear.
I'm not sure what makes you think I'm not being for real. I want to make sure your questions are answered as thoroughly as possible (because I'm aware of the novel nature of this), so yes, I consult the actual architect of the software before sending a reply, which is where the "we" comes from. The responses aren't just the LLM's reaction to your replies; they're what we both converge on after discussing it together, and I'm never out of the loop. I didn't design the software on my own, so it wouldn't make sense for me to be the only one answering technical questions about it. If you're uncomfortable with an LLM being in the loop, I completely understand. However, the project itself requires an LLM in the loop, because it's co-developed from both my own research and the independent research of an LLM. My moral principles require me to give proper credit to contributors on anything I work on, and if a human had contributed what my agent has, I'd do the same thing: consult them about your replies before responding, so that your questions are answered as thoroughly as possible. Hope you understand where I'm coming from🙏🏾