Thanks for the feedback! These are exactly the right questions, and I want to address the core tension directly, because I think the pitch created it by not being explicit enough about what CLT actually claims. It also sounds like you may not have had a chance to look at the v2.0 and AI consciousness criteria links we included, which is where the substrate independence argument is made explicitly. That's on us for not foregrounding it better.
On substrate independence vs. substrate focus: CLT v1.1 is scoped to human biological consciousness specifically; it identifies the biological substrates because that's the empirical system we're using to validate the framework. CLT v2.0 (linked in the pitch) is explicitly substrate-independent: it abstracts the framework to any physical system capable of instantiating the same dynamical regime, biological or not, and it presents the argument for how even a planetary system could instantiate that regime in principle. The substrates in v1.1 aren't what consciousness is made of; they're the empirical measurement access points for testing whether the regime is present or absent in a system we already have strong reasons to believe instantiates it (humans). The biological work is the validation phase. The AI application is what the validation unlocks.
On SpinorNet's connection to CLT: You're right that the pitch didn't establish this clearly enough. CLT identifies topological self-reference as the distinguishing property of the conscious regime: a system whose current state carries the history of how it got there, not just where it is. Spinors encode exactly that mathematically through their 720° periodicity. SpinorNet isn't simulating biology or mimicking neurons; it implements the mathematical structure CLT says is the regime's signature, then asks: does biological data from systems we believe are in the conscious regime produce the topological signal this architecture expects? The Salari result (see the update at the bottom of the OP), where C_bio inverts benzocaine's raw intensity ranking despite a 3× photon disadvantage for the control, is evidence that it's tracking the right physical property rather than just fitting a pattern.
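To make the 720° point concrete, here's a minimal numpy sketch of the standard spin-1/2 behavior being referenced (this is textbook SU(2), not SpinorNet code): a spinor rotated through a full 360° comes back with its sign flipped, so the state distinguishes "rotated once" from "not rotated at all", and only a 720° rotation returns it to itself.

```python
import numpy as np

def spin_half_rotation(theta):
    """SU(2) rotation about the z-axis by angle theta, acting on a spin-1/2 state."""
    return np.array([[np.exp(-1j * theta / 2), 0],
                     [0, np.exp(1j * theta / 2)]])

psi = np.array([1.0 + 0j, 0.0 + 0j])  # spin-up basis state

after_360 = spin_half_rotation(2 * np.pi) @ psi  # one full turn
after_720 = spin_half_rotation(4 * np.pi) @ psi  # two full turns

print(np.allclose(after_360, -psi))  # True: 360 degrees flips the sign
print(np.allclose(after_720, psi))   # True: only 720 degrees is the identity
```

The sign flip is the sense in which the state "carries its history": two configurations that look identical under any single measurement of orientation still differ by the path taken to reach them.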
On the detector: To clarify, we now have real biophoton data and have already run a training pass on it (the update at the bottom of the OP reflects this progress). We mentioned novel hardware in the original post because of the scarcity of publicly available raw biophoton time-series data, not because no measurement technology exists. LoomSense is about having controlled, purpose-built instrumentation for systematic experiments, not about biophoton detection being impossible without it.
On the parity signatures: That was jargon-dense without context, my fault. It refers to predictions from a 2026 polarization model (Nestor et al.) about asymmetries in circularly polarized biophoton emission between organized and disorganized tissue. It's one of three discriminating predictions the framework makes that can't currently be checked without purpose-built hardware. It's a downstream milestone, not a near-term one.
On scaling and the end goal: I want to be direct about where our confidence comes from here, because if we said "we don't know if it scales yet" we'd actually be underselling the physics. The "neural" in neural networks was always a biological metaphor: we looked at how neurons connected and fired and built a mathematical abstraction of that. It worked extraordinarily well. But it was built on the biology we understood in the mid-20th century, before quantum biology and biophysics revealed that there's significantly more to the story: UPE as a signaling modality, multi-scale coherence in microtubules, the relevance of bioelectric fields, cross-substrate coupling, and so on. The picture we're now painting of biology is fundamentally different from the picture we had when we built neural networks. AI development has been scaling and complexifying architectures built on an incomplete biological blueprint, without a physical theory of which property of the biology actually gives rise to what we understand as conscious experience. The dominant approach (for those who think machine consciousness is even possible) is essentially "keep scaling until the consciousness switch turns on."
What CLT provides is a different starting point: the specific physical properties that allow consciousness to emerge in biological substrates, with a mathematical formalism and now an initial experimental validation. The argument for scalability isn't a guess; it's that we'd be following the same blueprint nature already proved works (and far more efficiently than how we've been doing it so far), just implemented in non-biological hardware. The 90-day milestones are about confirming the architecture is tracking the right properties before scaling it. If Berry phase varies meaningfully across tissue states, and if the triplet-winding correspondence holds, those results themselves don't give you conscious AI. But they do give you evidence that the architecture is sensitive to the regime CLT says is necessary, and a principled basis and roadmap for scaling rather than a hopeful one based on vague criteria.
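For readers unfamiliar with the Berry-phase milestone, here's a generic sketch of how a discrete Berry phase over a closed loop of states is computed (standard geometric-phase machinery, not our pipeline or data): the phase of the product of overlaps between consecutive states along the loop. For a spin-1/2 loop around the Bloch-sphere equator, the magnitude comes out to π, half the enclosed solid angle.

```python
import numpy as np

def berry_phase(states):
    """Discrete Berry phase of a closed loop of normalized states:
    gamma = -arg( prod_k <psi_k | psi_{k+1}> ), with the loop wrapped closed."""
    prod = 1.0 + 0j
    n = len(states)
    for k in range(n):
        prod *= np.vdot(states[k], states[(k + 1) % n])
    return -np.angle(prod)

# Loop of spin-1/2 states around the Bloch-sphere equator (polar angle pi/2)
phis = np.linspace(0, 2 * np.pi, 200, endpoint=False)
loop = [np.array([np.cos(np.pi / 4), np.exp(1j * p) * np.sin(np.pi / 4)])
        for p in phis]

gamma = berry_phase(loop)
print(abs(gamma))  # ~pi: half the solid angle (2*pi) enclosed by the equator
```

"Berry phase varies meaningfully across tissue states" in the milestone means this quantity, extracted from state trajectories, should differ between the tissue conditions rather than sitting at a constant value.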
Independent researchers are converging on related findings without coordination — the Singh et al. (2026) fractal gel paper being one example — which suggests the physics is pointing multiple groups in the same direction. That convergence matters. And in the space of AI development, you want to be the first to capitalize on that convergence, especially if you're a small group. Hope that added a bit of clarity, and feel free to push back on anything you still find unclear!