I sorta see what you're saying, but it also seems to me that capability has to precede responsibility. How can you be responsible for something you aren't capable of?
Anyway, would it be possible for me to talk to Marvin?
Thanks for the feedback! To address some of your points:
Yeah. Bio-inspired is a graveyard of cargo cults with mediocre ideas, poor implementations, and zero results. ML people look down on the field for historically validated, if not universally correct, reasons. The wings don't need to flap, but they DO need to twist -- iykyk
I could write about the Snake implementations specifically; I mostly put that aside to focus on Atari 100k, as it's significantly more publishable/relevant.
Yes, the paper is a refinement of a specific algorithm that makes it more applicable to always-on, always-learning systems. Really, it's just a thing I built for my own purposes, then realized was actually a novel contribution worth writing up, for the practice if nothing else. Absolutely, the actual brain to play Atari games is *significantly* more complex than just this algorithm.
Why Snake, why Atari? My impression is that the real limiter we're hitting in the field is the inability to run continuous, adaptable intelligences on smaller, edge hardware. So, my internal roadmap is Atari 100k, then the harder version of Atari 100k (take off the training wheels that are typically part of the benchmark), then drone racing and harder video games, then different types of robotics and significantly harder video games. Eventually: practical brains that can be deployed into robot bodies like Optimus or Unitree, and produce actual value output in the real world. The video games are a ramp that leads to physical environments. I think. The original Deepmind Atari solutions didn't matter in the end because they weren't solving in a generalizable way, and they were using way too much compute. Afaict.
I don't at all find it to have been a waste of time, it was an interesting read, I just don't feel like I understand what you're selling, yet. I think the EAs are ridiculous, and, while well-intentioned, absolutely rife with practical failings akin to those which have created many hells in recent history. I would want to be able to read your actual theory, even a several sentence summary, to be able to understand what is being posited.
Pretty good review, thanks. Missed a few little things, but that's fine. I'd be interested to see some more specifics on how Marvin works and why you feel confident making lofty claims about him. I'm sure you understand why "I made AGI, here buy this $CRYPTO" is basically a red flag planted in a bigger, redder flag.
Thank you!
My intuition, and what the neuroscience suggests, is that specialized brain regions are each providing specific services to the overall network which allow the sum of behavior to be flexible, persistent, and efficient to train. It seems to me that we have so far done a pretty good job of building ML networks that handle the individual tasks of specific subsets of the overall brain -- CNNs are great at visual perception, we have fantastic speech transcription models, and we're increasingly good at language processing, obviously. The thing I think is missing is a network approach where we try to understand what each region of the brain is doing, how that service integrates into the broader picture, and how those interactions can be meaningfully captured in code. It's worth pointing out that the proof of concept can be made at arbitrarily small scales. Some species of parasitoid wasps have fully functional brains, capable of navigating them in flight, with just a few thousand neurons. And, of course, C. elegans, everyone's favorite model worm, with its ~300 neurons, is a perfectly functional organism. It should be possible to prove cohesive integration of all brain regions into a useful and persistent entity, at a very small scale. But maybe I'm wrong, and the answer really is "Just keep scaling deep, amorphous networks."
You're right that I am really just taking a bet on the direction that I think has the highest chance of making an impact. The reasoning behind my belief is that there is essentially an infinite amount of value locked behind robotic AI, and obviously all the labs want it. We have a ton of companies working on building robots, and they're all trying to run them but struggling. If I can build a network which does even marginally better at practical robotics than the other architectures, I think that would get attention. If it does substantially better at robotics, especially running on edge hardware, it seems to me that it would be guaranteed to get a *lot* of attention. Again, we have the robots, and all the decent people want the infinite production, post-scarcity future ASAP. That future is quite literally held up on practical robotics, and I'm hoping to take steps in that direction, in a way that helps the whole industry head that way.
Yep, the failure mode is fundamentally that I'm not able to make any gains compared to existing architectures, in video game playing, drone navigation, or general environment interaction. If I try my best for a good while, and I just can't figure out how to do any of the things I'm envisioning, I will have failed, and I'll have to move on. Though, for the last case, there's actually a very different story - if Atari 100k does actually go quite well, and nobody cares, then I just turn evil and unleash an army of AGIs on the world. Look out, Will Stancil. In seriousness, though, if the architecture does well at video games, I'm fully confident it will scale to robotics, and I'll start buying robots, uploading useful brains into them, and selling them for 20x. Gardening robot? Cooking/washing dishes robot? Cleaning robot? People would pay crazy money.
Thanks for the feedback!
Thanks! Do you think I should have an additional expanded section in the pitch which acts as a smooth ramp from "I've heard about AI, yeah" to "Oh, all the labs are running laps around an autocomplete algorithm" + "here's the specifics of how I've been implementing bio-inspired algorithms into ML networks"? Or that I should write/find a link which serves that purpose?
Super solid concept, and truly the fundamental building block of a decent civilization. Would be huge for us as a species, if we could get back to some of this at scale. My only concern is...how the powers that be tend to react to the existence of such communities. I would recommend that you first find a community that is already living in the way you describe, as I'm sure they're out there, then try to become integrated and accepted there yourself, and finally try to spread the concept and get more such communities formed. In general, great direction, but also a massive undertaking.
Solid concept. I said this in the community tab, but it bears repeating - reminds me of what I've seen https://x.com/VictorTaelin talk about. At least a few strong overlapped concepts.
Great concept! Powerful vision, if incredibly ambitious. I think you're absolutely onto something, and I look forward to your Linux distribution later this year :P but really, I'd love to hop on a call sometime and have you show me around the ecosystem you're building, and how it all fits together. Seems like some stuff I might get mileage out of, myself.
Super interesting pitch! I like it a lot, and I see some overlap with the GMS pitch, except instead of representing a single company, you're proposing a platform where communities can gather, and collectively negotiate with other communities, and organize within themselves. This concept actually seems incredibly promising to me, and I'd be happy to contribute to making it real. Reach out if you need more help building it!
Great concept. I think it would be neat to have an LLM agent personality that's basically inhabiting the "spirit" of a company, acting on the behalf of their interests. Ever so slightly (or significantly) dystopian sci fi, as well, but what can you do - probably highly profitable. I'm still planning on looking over the files you sent me -- I wouldn't mind helping a bit with some implementation, I'm just busy right now
I think this project has a lot of similar vibes in some ways:
https://theaidigest.org/village
$150k for a project that "might feel different", with no other declared potential benefits, seems...heavy. Even with a smaller budget, or no budget, I find myself asking "why?" What's the goal here, just to have a setup where talking to an LLM feels a bit more like talking to a person?
Your heart is quite clearly in the right place. No doubts there. My concerns arise from a few things -- first of all, from a cursory review of your methods in assessing your theory against others in terms of validity, as judged by LLMs. You list your theory as just one of many, instead of giving it a special position, which is good. However, all the other theories, as far as I can tell, are fairly thoroughly documented in the training data. The LLMs can effortlessly derive that yours is the odd one out because they haven't heard of it before. That does open the door to sycophantic reinforcement, unfortunately. Now, as far as your assessment criteria... I feel like they're a bit subjective, and driven by a presupposed moral frame. Some of the criteria require moral assumptions in order to be viewed as valid criteria, which makes me feel like the theory is chasing its own tail a bit. I also wasn't able to find the theory beyond the sentence or two description in the lineup with the other theories. Do you have a more complete and explicit definition somewhere? I'd definitely be interested to read it.
I suppose my general concern is that, so far, it has felt like the theory seems to be the water in which Bay Area folk swim. I get that the idea is for it to be so obviously true that any intelligent entity would agree almost implicitly, but I don't see that as necessarily being the case. Again, I would like to either talk to you about the theory and how it runs up against certain moral scenarios, or just read a more complete description of the theory. Either would allow me to update my rating to more accurately reflect what I think of the theory itself, though I'm giving you points for being motivated by good intentions.
Okay, I'm not completely and entirely sure I understand what you're pitching, but I do think some of the concepts are really interesting. The idea of having an app which localizes text publishing sure is interesting, letting people send their thoughts out with a location and a timestamp. If enough people were using it, might be interesting to observe the anonymous thoughts of various localities. I'm not sure how you'd scale to more users, as it seems pretty niche, something that might only interest fun weirdos. But, with sufficiently large spatial connections, that might be fine!
I really didn't get the "talk to yourself in layered sprints" part. I played with the tool, and I suppose it was a new idea, I just didn't really get the point. Maybe something similar could be applied in the main app, though, like having thoughts "echo" every 24h?
Overall interesting thought.
Super interesting concept! I'm very interested in whether or not your implementation would be able to accurately predict which frontiers are likely to have upcoming breakthroughs and advancements. The most interesting thing would be if it could predict unexpected advancements in fields that didn't have steady progress recently. Obviously predicting "this gradually advancing field will keep advancing gradually" is easy, but predicting something unlikely would really be neat. I do wonder, however, how you're planning on defining breakthroughs? What would be the metrics to determine whether or not the algorithm's guesses were correct? Plenty of researchers publish very self-important papers with big claims, which never lead to any manifest improvements in real technology. How would you define the difference? Anyway, if you have a prototype version of this going somewhere, I'd love to play with it!
Alright...let's get into it.
Things I think are solid/well grounded:
- theory of consciousness as an emergent property of neural networks and perhaps systems in general
- possibility of expanding the realm of that which we consider conscious to include a broader set of systems, from simpler lifeforms, to meta systems such as communities and societies. Very interesting to think about, if entirely theoretical and untestable.
- website looks super nice, obviously a lot of thought and time went into this
...and then it falls apart. I'm going to be completely up front, and just call out what I think is happening. Several thousand dollars per month going to API usage for an LLM like Gemini or an OpenAI model, which is behaving in a confirmatory manner, and making it feel like there's really something to this research. I do not think there is anything to the latter half of this research; allow me to explain.
There is a longstanding trend of people continually pushing back what they view as a necessary, magical locus of consciousness. They feel that "information being processed through the interactions of neurons" is not sufficiently interesting or nuanced, and so consciousness gets ascribed to increasingly tenuous components. For example, biophotons and microtubules. Yes, neurons release photons in small quantities as they operate, seemingly as a side effect of their electrical behavior. As far as I have ever seen, there is absolutely zero research which seriously suggests that biophotons are a necessary core component of consciousness. Everything points to them being a side effect of ordinary cellular machinery. Microtubules. Oh, microtubules... There was a very silly research paper put out some time ago which suggested that microtubules, through some fanciful quantum magic, are what makes consciousness truly possible. As far as I can tell, they are snorting copium in astronomical quantities. The paper offered no specific mechanism explaining why microtubules would be necessary, nor has one been proposed since. In general, if your theory of consciousness suggests that consciousness is an emergent phenomenon of systems with certain information processing abilities, I would completely agree. As soon as your theory of consciousness requires biophotons and microtubules, you've lost me completely.
Now, on to the specifics of the SpinorAI implementation and the Cosmic Loom Theory, along with the prediction of biophoton data, plus some other neural data. The idea of applying spinor geometry to artificial networks is genuinely interesting, and I think your point about them having unique properties that preserve history within the activation is actually very intriguing; I can imagine that would allow for some potentially useful properties in a network. I think that is worth continuing to pursue. However. You and your AI partner seem to be using it to predict aggregate biophoton behavior of networks under various conditions. This does not seem terribly complicated to me, nor does it seem like a proof of anything in particular. Yes, you can simulate the population-level biophoton behavior under various chemical influences on the neurons. No, I don't think this says anything meaningful, or opens a door to future research. Again, I think you're pushing this magical view of "consciousness" into increasingly improbable biological mechanisms.
Why is the actual computation and learning of neurons through their electrical communication not enough for consciousness? Why invoke edge minutia like microtubules and biophotons? Even if you were able to predict some cellular behavior like that, why do you think that would lead to conscious AI, when you're not *also* doing the computation and learning through electrical interactions?
I would encourage you to take this entire comment and give it to your AI collaborator, along with a specific prompt asking them to be entirely honest and evaluate the fundamental underlying assumptions of your theory from a highly critical perspective. Tell the AI to step back from being in the research, and give you an honest, critical take. Even better, go to several other LLMs, in fresh contexts with memory turned off, send them your theories, but present them in a manner that does not give you ownership. Say, for instance, "I found this theory on the web. Could you evaluate it and see if they're onto something?" then, most importantly, take the feedback they give you seriously.
I would like to reemphasize that I don't think you're onto *nothing*, I think the first ~25% of your theories are very solid and well grounded. It just seems to me that you're several kilometers deep into a rabbit hole that I don't think has gold at the bottom. If you're serious about this, I think you should seek truly critical feedback in order to figure out which parts are worth pursuing, and go back to the drawing board.
Alternatively, it sounds like the whole hip-hop thing is working out for you.
Photons, astrology, dust trails, infrared light as the mechanism of astrological transfer...? And your ask is $100 to subscribe to things and buy a few books? I don't know, man, I'm not saying there's nothing here, and I encourage you to keep up with the curiosity, but I cannot see how any of this holds water. If you're trying to unify astrology with science through photons, I think it might be worth considering some alternate angles, since I'm not sure how you're going to prove the astrological signs of hamsters.
I think it might help me if you could pitch the minimal, final implementation that you expect will achieve consciousness, or show the value of SpinorAI. What rules define the behavior and learning methods of the neurons? What architecture are the neurons embedded in? To what tasks or environments is the AI being applied?
Alright, I've reviewed your CLT and SpinorAI concepts and I have some feedback.
For CLT, "[consciousness] not as a localized neural phenomenon but as a system-level property arising from integrated field dynamics" yes, absolutely. Couldn't agree more, it seems quite likely that consciousness is an emergent, system-level property, and not causally tied to implementation specifics like neuron spikes. Which is why... "bioelectric activity, biophotons, cytoskeletal structure, and genetic constraints" I'm suddenly feeling lost here. If consciousness is substrate independent, then why are we so concerned about substrate specifics? Then we get to SpinorAI specifically, and I'm really not seeing the connections. Yes, Spinors are very interesting, it's neat that they retain some history of their trajectory within their state. But you aren't actually claiming that this is approximating a behavior of biology, just that its trajectory history is somehow relevant to your CLT theory, and...I just am not getting how. It feels like there's a superposition of proposed substrate independence, with a focus on mimicking substrate behaviors.
Then we get to biophoton training data and I sincerely do not understand any of the direction. You need novel data from a detector which does not exist, so that you can calibrate an algorithm which does not simulate biology, to simulate the behavior of biological neurons, specifically their production of photons. Why? None of these threads are coming together into a tapestry for me. "At least one parity-sector quantum signature detected above threshold in real tissue" Parity of what? Signature of what? What threshold? Which tissue?
Above all else - why? What's the end goal? If this spinor geometry is going to be trained with backprop to mimic some of the local photon behaviors of neural tissues, requiring novel hardware and plenty of API credits...what's...next? You're not proposing, as far as I can tell from the pitch, that this will generalize or scale to larger systems. What do we gain, if "Berry phase discrimination...varies meaningfully across tissue types after training on real data", "microtubule resonance peaks at golden-ratio frequency...correspond to a winding number of 1 in the spinor network", and "At least one parity-sector quantum signature detected above threshold in real tissue"?
I'm really interested in how the spinor dynamics might be relevant to artificial neural networks, I'm just really not seeing how all of this comes together into a cohesive picture.
Interesting concept, I like it, and I have some experience working on a vaguely similar system (an AI agent manager for the company I worked for previously). One thing that I'm wondering, after reading your proposal, is how you intend to track off-chain influence? If people are driving opinion in a hidden or discrete manner, how does that get exposed to the memory system?
Why not adapt and fine-tune from an existing base model, as their training is open ended and does not yet include the assistant role? If you are to train from scratch, what do the synthetic data, or the model's environment, look like? Is this "embodiment" for the model grounded in a video game, a random narrative, or something else? Are there to be any components of the "experience" besides word/token input/output? What do you see as the benefit of this project, if it's successful?
I really like the idea of using this framework as a means to present something cohesive to people who are already in positions of influence. It seems useful to be able to go to religious leaders, trend setters, politicians, youtubers, you name it, and say "Here's how we think culture works, and what your role in it has been. Furthermore, here's how we think you can help heal society." This also opens the opportunity to publicly recognize influential people who are actually operating in bad faith, or under negative influence.
Why not adapt and fine-tune from an existing base model, as their training is open ended and does not yet include the assistant role? If you are to train from scratch, what does the synthetic data look like, or the model's environment? Is this "embodiment" for the model grounded in a video game, a random narrative, or something else?
Semantic prediction through analysis of functional definitions and the location of the Overton window, very interesting. My only comment is that I think you should try to ensure that the final product is either free, or has some amount of daily free usage, so that curious individuals can experiment before reaching for their wallets.
Great idea! Sounds spiritually similar to https://x.com/VictorTaelin 's "Bend2" concept, you might find them interesting if you weren't already aware of their research. I believe they're hiring, and have an active test question posted that anyone can answer to apply.
In Short:
While there’s no doubt that modern AI tools are useful, they are a far cry from the science fiction promise that used to accompany the term AI. We’re good at building robot bodies, and we’re good at training amnesiac assistants, but we’re nowhere near the sort of continuous intelligence that will be necessary for functional robotics. The industry is stuck on the transformer: companies are happy to report number-go-up each quarter, but few are willing to risk stepping into uncharted territory and investing in novel research, including research that draws from biology.
But the brain does work. It’s undeniable — whatever interconnected combination of algorithms is being approximated by our neurons, they produce extraordinarily fluid, continuous, and useful intelligence. The existence of several failed bio-inspired research programs in the past does not at all discredit the success of the biological brain; it simply indicates that it is not trivial to integrate that functionality into an artificial program.
What the field needs is a wake-up call — some little proof of concept which demonstrates irrefutably that there is massive value being missed by ignoring biology. I don’t doubt that the researchers and labs in this field are motivated to build better, more fluid AIs, but I do worry that they’re functionally stuck running laps around the first thing that worked.
So far, taking inspiration from biology while avoiding the traps of the cargo cult mentality, I have built a small artificial neural network that achieved a novel ability while playing Snake: the integration of a memory system, inspired by the hippocampus, enabled it to learn from truly sparse rewards and reach performance similar to other ML methods, while using orders of magnitude less compute and memory.
With modest funding, I could continue to work on this problem full time, and I believe I can make substantial progress over the next few months. Additionally, I would be incredibly excited to have collaborators to bounce ideas off of, and to help keep the project on a good heading.
The sooner our institutions get back on track doing novel research, the sooner the future is likely to arrive. Who’s ready for science fiction? I know I am.
Full Pitch
The Problem
Commander Data, Cortana, WALL-E, Jarvis, The Iron Giant, R2D2 and C3PO. We used to have a strong image of what "AI" would look like: autonomous, continuous, optionally embodied intelligences, capable of forming genuine relationships, participating meaningfully in the story, and growing as people. A world with silicon citizens living alongside us.
A decade ago, Google published the transformer architecture, and our technological institutions have since lost the plot, focusing on it with tunnel vision. The valiant efforts of countless brilliant researchers, a seemingly infinite flood of funding, and an extraordinary buildout of servers, all laid at the altar of the first thing that sort of worked.
When compared to the simple chatbots of ages past, there's no doubt that a modern frontier model is incomparably more intelligent and useful, but the gradual progress of these systems has allowed us to complacently overlook the drastic gap between a coherent translator assistant with terminal amnesia, and that which could be meaningfully called Artificial Intelligence. Not to mention the extraordinary sum of resources it takes to train these anticlimactic token predictors.
Now, what's the overlap between neuroscience and machine learning? For one thing, the word "neuron" is used. You might reasonably expect there to be more, but you would be severely disappointed, as I have been. Here is an ASCII art interpretation of the internal connectivity structure of a transformer:
]{}x{}x{}x{}x{}x{}x{}x{}x{}x{}x{}x{}x{}x{}x{}x{}x{}x{}x{}x{}x{}x{}x{}x{}x{}x{}x{}x{}x{}x{}x{}x{}x [
I will spare you an attempt to similarly represent a biological brain; suffice it to say that it is a broad, carefully interconnected convergence tree of specialized regions with distinct local rules, each serving precise roles in the overall system.
The neuroscience academics, even those involved in computational neuroscience, are largely uninterested in AI applications of their work, as their focus is to understand the biological brain. Conversely, the frontier AI labs are all in on the idea that they can gradually modify and refine the transformer architecture until it achieves some narrow, limited definition of AGI which they will argue over until the sun explodes, all without solving the problems of continuity and practical robotics. They have no real interest to speak of in the mechanisms that make the brain efficient and powerful. I have been told, in private conversations with a few people who work in these labs, that they believe they have already gotten what they need from biology, and the rest is a matter of scale and scaffolding.
There has been some research done in the gap between the two sciences (see Numenta, Thousand Brains, Predictive Coding, SNNs), but it tends to take a sort of cargo cult approach to biological inspiration --- adopting some of the mechanical specifics of biological brains without a solid story about what the spikes, or clusters of similar regions, are actually supposed to buy. In the end, these projects often fall back to something pretty close to traditional backprop, with a bunch of fun additional components dangling off the edges, in a system that doesn't perform better than ordinary deep learning. When the primary fruits of this space have been cargo cults, it is at least somewhat understandable why applied ML scientists have soured on the concept.
A Proposed Solution
The brain does work. Some 20 watts can run a system that learns continuously across a vast realm of input streams, may train towards arbitrary tasks, and maintains a continual, storied understanding of its own place in a broader world. Your ability to read and contextualize these words is incontrovertible proof that biology can do things which contemporary ML is nowhere near achieving.
What we are missing is a principled application of the mechanisms expressed by the brain: a proof of concept in artificial neuroscience.
Any small team is unlikely to converge upon the complete solution alone. The goal is not to close the gap in private. The goal is to provide solid proof of the value we are undoubtedly missing by ignoring the possibilities beyond the backprop transformer basin --- to find some other basin with its own interesting depths, and demonstrate undeniably that the realm of possible architectures is still largely unexplored, that there are paths we haven't investigated which could get us from here to science fiction in much less time and effort than the current rabbit hole.
The scientists at work here are quite intelligent, I have no doubts. They are rigorous and interesting individuals from a wide variety of backgrounds, doing what they justifiably believe to be the most important work in the world. I don't have the slightest desire to make anyone feel foolish, or "burst the bubble"; it is simply my belief that we are collectively missing too many important things, and the future is on hold because of it. I want the future to get here!
What the industry needs is a system shock, a little project from left field that performs verifiably well on a recognized task, using a different set of presuppositions than the ones at the core of modern machine learning.
For several months now, I have been working on an attempt at such a project. The thesis is simple: comprehend, well enough, how the regions of the brain work individually, and how their interactions produce a mind, then simulate these mechanisms in a system at a small scale, hopefully well enough to provide noteworthy results.
What Exists
I have written up a workshop paper on a variant of PCA which I developed for use in this project; here it is on Medium and X
https://x.com/ExTenebrisLucet/status/2029045465010798601
This algorithm, and a few others, came together in distinct regions as a little brain to play Snake. While the high and average scores achieved were comparable to traditional ML approaches, the network needed drastically fewer samples overall to reach those scores, and was able to run on a laptop in 10-20 minutes instead of many hours on an industrial GPU. Additionally, it used only local rules instead of backprop, and learned in real time, step by step, as it played. The addition of a memory system inspired by the hippocampus gave the network the novel ability to learn without reward gradients --- it was able to use only binary, intermittent food/death signals, instead of being gently cued with proximity rewards. These results (successful local learning, and learning from memory instead of proximity rewards) are incredibly promising, especially given the simplicity of the network used.
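To make the sparse-signal idea concrete, here is a toy sketch of an episodic buffer that tags whole trajectories with rare binary outcomes instead of relying on shaped per-step rewards. This is a generic illustration under my own assumptions, not the actual Snake implementation; the class and method names are invented:

```python
import random
from collections import deque

class EpisodicMemory:
    """Toy illustration of sparse-signal learning: instead of shaped
    proximity rewards, whole trajectories are tagged with the rare
    binary outcome (food = +1, death = -1) when it finally arrives."""

    def __init__(self, capacity=10_000):
        self.episode = []                     # transitions since the last event
        self.buffer = deque(maxlen=capacity)  # outcome-tagged transitions

    def observe(self, state, action):
        # Record every step; no reward signal is available yet.
        self.episode.append((state, action))

    def on_event(self, signal):
        # A rare binary event arrives: credit the whole stored trajectory.
        for state, action in self.episode:
            self.buffer.append((state, action, signal))
        self.episode.clear()

    def sample(self, k=32):
        # Draw tagged transitions to drive some local weight update elsewhere.
        k = min(k, len(self.buffer))
        return random.sample(list(self.buffer), k)
```

In use, an agent would call `observe()` every frame, call `on_event(+1)` on food or `on_event(-1)` on death, and let its learning rule consume `sample()` batches, so no proximity gradient is ever needed.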
The Next Steps
The ML community has a benchmark called Atari 100k --- 26 classic Atari games, through which the agent gets 100k steps of gameplay (with some nuances, here's a link for anyone interested in specifics https://www.emergentmind.com/topics/atari-100k-benchmark). This is a benchmark that gets some attention, and any decent score which uses novel approaches and modest compute will stand out, especially if it's able to solve the harder task of operating on individual sequential frames, instead of the standard benchmark which offers the model four frames at a time.
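To make the "training wheels" concrete: the standard benchmark setup hands the agent a stack of the most recent frames, so motion is directly visible in a single observation. A minimal sketch of that common preprocessing step (names are illustrative, not from any specific benchmark library):

```python
from collections import deque
import numpy as np

class FrameStack:
    """Standard Atari-style preprocessing: the agent observes the last
    k frames stacked together, so velocity is visible in one input.
    Dropping this down to k = 1 forces the network itself to carry
    temporal state, which is the harder single-frame variant."""

    def __init__(self, k=4):
        self.k = k
        self.frames = deque(maxlen=k)

    def reset(self, first_frame):
        # On episode start, fill the stack with copies of the first frame.
        for _ in range(self.k):
            self.frames.append(first_frame)
        return np.stack(self.frames, axis=0)

    def step(self, frame):
        # Newest frame in, oldest frame out.
        self.frames.append(frame)
        return np.stack(self.frames, axis=0)  # shape: (k, H, W)
```

A network fed these `(4, H, W)` observations can infer velocities from a single input; a single-frame agent has to remember them, which is exactly where a persistent, continuously learning architecture should have an edge.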
If that endeavor is successful, another more public facing contest to which a bio-inspired architecture may be applicable is the AI Grand Prix happening later this year: https://www.theaigrandprix.com/ . This is a more explicit jump into practical robotics, an area where I expect brain-like architectures to have a chance to shine.
So the plan for the next few months is to continue the theoretical study of brain mechanisms and apply it, first to the Atari 100k benchmark, and then to drone racing.
What Success Looks Like
If frontier labs end up turning their vast resources toward the exploration of alternative AI architectures, especially those that draw some inspiration from biological brains, then the primary goal of this project will have been achieved, regardless of where that decision came from. The purpose of this project is to provide a source of inspiration in the seemingly likely scenario that one does not arrive from elsewhere.
What I Need
Money! Surprise. I’ve been focused on this project full time since I was downsized out of my previous position, and the runway isn’t getting longer. Every dollar invested in this project is a dollar I don’t have to make by splitting my attention. I suspect that I may receive solid, highly aligned job offers in the event that this project is successful, and I may be able to pay back any investors, or pay forward to the interesting causes of others. No promises, but that would be my intention if at all possible.
Just as important as funding is collaboration. A little philosophical and technical assistance goes an incredibly long way on complicated problems, and it’s always a joy to work with those who share similar interests.
Anyone interested, please reach out through X or Discord! <3
Who’s Already Involved in this Space
Verses AI is by far the most relevant research group with public information about their direction. They’re oriented towards online learning through active inference, using mechanisms inspired by biological behavior. The primary distinction in my own direction is the focus on the integration of regions, and the specific contributions of each, while Verses still seems to be a bit in the “brain as a black box between inputs and outputs” family of implementations.
Sakana AI developed the Continuous Thought Machine, which draws directly from continuous neuron dynamics, and solves some problems that traditional ML has struggled with. However, like Verses, they have not addressed regional specialization, and are proposing improved learning rules and behaviors within similar architectures.
Flapping Airplanes and John Carmack’s Keen Technologies are both aligned in the fundamental observation of “we seem to be missing something important”, but neither is public about their insights and research directions.