Innovation
What the human brain can teach us about building smarter AI
Artificial Intelligence (AI) is rapidly evolving, raising profound ethical questions and challenging philosophical questions about consciousness and the nature of intelligence. In a recent talk, inspired by neurobiology professor Peter Robin Hiesinger's provocative book "The Self-Assembling Brain",[1] I spoke about the fascinating parallels and important distinctions between biological brains and artificial neural networks (ANNs).

The brain as a self-assembling system
Imagine your brain as an extraordinarily complex Lego structure, except that this Lego set builds itself. Unlike a traditional Lego set, where the final outcome is guaranteed, the brain offers no such certainty. DNA is thought to hold the "instructions" that guide a process known as algorithmic growth, which is not fixed but changes through interaction with the environment. It is this process of self-assembly that gives rise to the brain's capacity for intelligence.
Biological neural networks, such as the human brain, do not follow rigid pre-established patterns; they develop organically through genetically encoded interactions and are constantly reshaped by environmental stimuli. Artificial neural networks, in contrast, start as so-called "blank slates": an architecture pre-defined and inflexibly structured by their developers before any learning takes place.
Nature vs. nurture revisited
The self-assembly process gives the classic nature vs. nurture debate a new twist. Nature encompasses the genetic blueprints that direct initial brain development, while nurture encompasses the experiences and interactions that shape and refine it. Hiesinger argues that nature and nurture are not distinct forces but entwined components of a single developmental dance, inseparably governing neural development and intelligence.
In AI systems, however, we keep these processes separate: we first define and develop an architecture (nature) and then train it on datasets and experience (nurture). This early-stage separation may limit how closely AI can ever approach human-like intelligence, because it discards characteristics intrinsic to the way intelligence actually develops in biology.
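To make the separation concrete, here is a minimal Python sketch (toy code invented for this article, not any particular framework's API): the structure is frozen in phase one, and training in phase two can only adjust numbers inside it.

```python
import random

random.seed(0)

# Phase 1, "nature": the architecture is fixed before any learning happens.
# The layer sizes are hypothetical design choices, frozen up front.
LAYER_SIZES = [2, 4, 1]  # 2 inputs -> 4 hidden units -> 1 output

def build_network(sizes):
    """Allocate one weight matrix per layer pair; the structure never changes."""
    return [
        [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]
        for n_in, n_out in zip(sizes, sizes[1:])
    ]

# Phase 2, "nurture": training only nudges the numbers inside the
# pre-defined structure. It cannot add neurons or rewire connections.
def train_step(network, lr=0.01):
    for layer in network:
        for neuron in layer:
            for i in range(len(neuron)):
                neuron[i] -= lr * random.uniform(-1, 1)  # stand-in for a real gradient

net = build_network(LAYER_SIZES)
shape_before = [(len(layer), len(layer[0])) for layer in net]
for _ in range(100):
    train_step(net)
shape_after = [(len(layer), len(layer[0])) for layer in net]
assert shape_before == shape_after  # "nature" was never touched by "nurture"
```

However long we train, the shapes never move: the "nature" phase ended before the "nurture" phase began, which is exactly the separation biology does not make.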
The information problem
One of the principal questions is thus: how does the limited genetic information in DNA give rise to a highly structured, intelligent brain? A useful analogy is that of cellular automata, as realized in Conway's Game of Life[2], where simple rules applied iteratively over time generate complex, unforeseeable patterns. Brains similarly begin with simple initial conditions: undifferentiated cells carrying simple genetic instructions. Over time, and through the ongoing use of energy, these starting conditions develop into a highly complex and intelligent entity.
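The analogy is easy to see in code. A complete Game of Life update fits in a few lines of Python, yet the patterns it produces, such as the famous "glider" below, appear nowhere in the rules themselves:

```python
from collections import Counter

def step(live_cells):
    """Advance Conway's Game of Life one generation on an unbounded grid.
    live_cells is a set of (x, y) coordinates of live cells."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for x, y in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {
        cell
        for cell, n in neighbour_counts.items()
        if n == 3 or (n == 2 and cell in live_cells)
    }

# A glider: five cells whose pattern travels across the grid,
# behaviour that no single rule mentions.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
gen = glider
for _ in range(4):
    gen = step(gen)
# After four generations the glider reappears shifted by (1, 1).
assert gen == {(x + 1, y + 1) for x, y in glider}
```

The motion of the glider is not stored anywhere; it only exists once the rules have been applied step after step, just as the genome does not store the finished brain.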

This process demonstrates a fundamental idea: meaningful complexity and information have to arise over many incremental steps. There is simply no shortcut or simplification. For AI, this means that the shortcuts taken during training may inherently limit the depth of intelligence that can be achieved.
Biological brains vs. artificial neural networks
Key distinctions between biological brains and artificial neural networks highlight critical limitations in current AI approaches:
- Starting Point: Biological brains begin with genetic instructions; AI typically starts with an engineered blank slate.
- Development Process: Biological brains simultaneously grow, organize, and learn, whereas AI systems are generally designed first and then trained.
- Time and Energy: Brain development is gradual, taking years and consuming significant energy, contrasting sharply with AI’s rapid learning expectations.
These differences are substantial. Biological brains' inherent flexibility allows adaptive, self-modifying architectures, a capability that most current AI models lack due to their rigidity.
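As a thought experiment, a "self-modifying architecture" can be sketched in a few lines. The model below is a toy illustration invented for this article, not an established algorithm: it starts with no units at all and grows a new one whenever its existing units fail to account for an input, so structure and learning unfold together.

```python
class GrowingModel:
    """Toy model whose capacity is not fixed up front: it adds a unit
    whenever no existing unit accounts for the current input."""

    def __init__(self):
        self.centers = []  # each "unit" memorises one prototype input

    def error(self, x):
        # Distance from x to the nearest existing unit.
        if not self.centers:
            return float("inf")
        return min(abs(x - c) for c in self.centers)

    def observe(self, x, tolerance=0.5):
        # Growth rule: if the model cannot explain x, grow a new unit.
        if self.error(x) > tolerance:
            self.centers.append(x)

model = GrowingModel()
for x in [0.1, 0.2, 3.0, 3.1, 7.5]:
    model.observe(x)
# Three clusters in the data -> three units grew; none was pre-allocated.
assert model.centers == [0.1, 3.0, 7.5]
```

The point is not the algorithm itself but the inversion of the usual pipeline: here the architecture is an outcome of experience rather than a precondition for it.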
Implications for AI Development
Because of these disparities, Hiesinger and other scientists propose that existing AI methods may be missing essential ingredients for achieving human-level AI. Emulating processes of biological development and growth, instead of simply training fixed architectures, may open novel avenues toward more intelligent, more flexible AI systems. Understanding evolutionary processes and adopting growth-based developmental approaches may bring us nearer to truly intelligent systems.
Further, there are deep philosophical implications. In examining the self-assembly of the brain, we can not only understand more deeply how intelligence arises but also rethink what we mean by life. Erwin Schrödinger, in his classic "What is Life?"[3], suggests a definition in which life is characterized by a system's ability to sustain low entropy internally, effectively generating complexity and information in opposition to universal thermodynamic tendencies. Biological neural networks demonstrate this principle exactly, continually evolving towards higher complexity and function.
Final words
Incorporating biological discoveries into AI could be a revolutionary leap. Just as no scientist examining the DNA of an apple seed can foresee the future tree without observing how it grows, AI systems may need to embrace actual developmental processes. Only by permitting these systems to "grow," evolve, and adapt naturally do we open the door to genuinely intelligent artificial systems.
As we adopt these lessons from biology, we're not just developing technology—we're shedding light on our understanding of intelligence itself, revealing the deep connections between the living brain and computing systems.
References
[1] "The Self-Assembling Brain" by Peter Robin Hiesinger
[2] Conway's Game of Life
[3] "What is Life?" by Erwin Schrödinger (with a foreword by Roger Penrose)


Did this article start to give you some ideas? We’d love to work with you! Get in touch and let’s discover what we can do together.