Throughout history, humanity has often looked to its most advanced technologies as metaphors to understand the universe.
During the mechanical era, we viewed the cosmos as a giant machine. The advent of clocks led to the notion of a “Clockwork Universe,” steadily ticking forward with a predictable rhythm. When steam engines came into play, we imagined our universe as a thermodynamic system, dissipating energy over time. Then, with the rise of computers, the concept of a “computational universe” emerged – one where the fabric of existence could be understood as an intricate set of calculations.
Today, computational technology has advanced to the point where neural networks are bringing us closer than ever to unraveling the mysteries of the human mind.
These networks, modeled loosely on how neurons work in our brains, are providing unprecedented insights into how thought, learning, and even consciousness might emerge from simple computational rules.
I recently watched a fascinating discussion between physicist Brian Greene and computer scientist Stephen Wolfram, in which they explored, with depth and insight, the idea of the cosmos as a massive computation. Could everything around us – all matter, energy, and even the abstract thoughts we have – be reducible to complex computational processes?
And if the universe is a computation, what role do our minds play in this vast, computational theatre?
The Mind as an Advanced Neural Network
Our minds are often seen as something unique. Mysterious. Unfathomable.
But what if they aren’t?
Stephen Wolfram suggests that the processes of thought may have a lot in common with neural networks – similar to the way AI works today.
Think about that for a second.
Our brains could be like an incredibly advanced neural network. Capable of learning. Of recognizing patterns. And, most importantly, of using tools.
What we find easy, neural networks can also learn to find easy. Though it may take them significant computational power.
Take language, for example.
We, as humans, are naturally good with language. But we can only comprehend about 150-200 words per minute. That’s our limit.
AI, on the other hand? Large language models can process thousands of words per second. Thousands. And they do it not just sequentially—they do it in parallel, while using external tools seamlessly.
The communication between AI and its tools is astonishingly fast. Far faster than what we can achieve with language alone.
So, what does this mean for us?
If we are good at something, AI can potentially become good at it too. But with the added advantage of multitasking and sheer speed.
Just think for a minute about what that means.
For instance, when we use tools like a calculator to solve math problems, we’re augmenting our cognitive abilities. We extend our minds beyond their natural limits.
Similarly, AI, especially large language models (LLMs), use external tools to extend their functionality. While our brains might be limited in running mathematical code step-by-step internally, AI models can integrate external computational power seamlessly. They perform calculations, fetch data, and explore possibilities far beyond the constraints of their intrinsic structure.
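To make the tool-use idea concrete, here is a small, purely illustrative Python sketch. Everything in it – the answer_with_tools function, the calc: prefix, the toy toolbox – is hypothetical; real LLMs delegate through model-specific function-calling APIs. But the division of labour is the same: the model decides what to hand off, and the tool does the heavy computation.

```python
import math

# Hypothetical toolbox: the names here are illustrative, not any real LLM API.
TOOLS = {
    # Toy calculator: evaluates an arithmetic expression with math functions only.
    "calculator": lambda expr: eval(expr, {"__builtins__": {}}, vars(math)),
}

def answer_with_tools(question: str) -> str:
    """Toy dispatcher: if a question needs arithmetic, hand it to a tool
    instead of 'reasoning' through the digits internally."""
    if question.startswith("calc:"):
        expression = question.removeprefix("calc:").strip()
        result = TOOLS["calculator"](expression)
        return f"{expression} = {result}"
    return "Answered from the model's own weights."

print(answer_with_tools("calc: sqrt(2) * 365"))     # delegated to the external tool
print(answer_with_tools("What drives curiosity?"))  # stays 'in the head'
```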
AI is not just replicating us. It is augmenting what we do at speeds and scales we can barely comprehend.
A Universe of Irreducible Complexity
One of the ideas that really resonated with me is Wolfram’s concept of “computational irreducibility.”
It’s fascinating.
Even if we understand the rules governing a system, it doesn’t mean we can just skip to the end and predict the outcome.
There are systems where we have no choice but to let them evolve step-by-step to see what happens.
Think about that.
This means there are limits to what both we and AI can predict or simplify. Some things simply have to unfold in real time.
For me, this has deep implications for how we approach scientific inquiry.
There are processes, both in our minds and in the cosmos, that we can’t just solve or understand instantly.
We have to experience them.
Much like a computer running a full simulation.
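Wolfram’s own Rule 30 cellular automaton is the textbook illustration of this. The update rule fits in one line, yet there is no known shortcut for predicting what a distant row looks like; you have to compute every row before it. A minimal sketch in Python:

```python
def rule30_step(cells: list[int]) -> list[int]:
    """One update of the Rule 30 cellular automaton (wrapping at the edges)."""
    n = len(cells)
    # Rule 30: new cell = left XOR (center OR right)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n]) for i in range(n)]

# Start from a single 'on' cell and simply let the system run.
row = [0] * 31
row[15] = 1
for _ in range(15):
    print("".join("█" if c else " " for c in row))
    row = rule30_step(row)
# No shortcut is known: to see row N, you compute all N rows before it.
```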
Another real-world example is solving certain mathematical integrals. For some of them, there is no closed-form shortcut to the result. Instead, we have to proceed step by step, often using numerical methods to approximate the answer.
This highlights how, in many areas, we are bound by the necessity of following each computational step.
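A concrete case is the integral of e^(-x²) from 0 to 1. It has no elementary antiderivative, so we approximate it by actually summing small slices – here with the trapezoidal rule, a rough sketch of the kind of step-by-step grind involved (in practice you would reach for a library routine):

```python
import math

def trapezoid(f, a: float, b: float, n: int = 10_000) -> float:
    """Approximate the integral of f over [a, b] by summing n trapezoids."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

# e^(-x^2) has no elementary antiderivative; we have to grind through the steps.
approx = trapezoid(lambda x: math.exp(-x * x), 0.0, 1.0)
print(f"integral of e^(-x^2) from 0 to 1 ≈ {approx:.6f}")   # ≈ 0.746824
```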
AI can certainly help us spot patterns and navigate through complex data.
But it won’t be able to bypass these fundamental constraints.
Instead, AI becomes an invaluable partner. Helping us uncover insights that would otherwise take us a lifetime to find.
AI and the Human Desire to Care
The conversation also touched on a key distinction.
AI might be able to explore the universe. But what truly drives exploration is care.
A distinctly human attribute.
We choose which questions to ask.
We choose which paths to pursue.
AI can generate endless possibilities, discover new patterns, and even stumble upon “interconceptual spaces” – concepts and phenomena that exist between the definitions we have for things, like ‘cat’ and ‘dog.’
So what does an interconceptual space actually look like?
One way to relate to this is through abstract paintings – like those of Sir Howard Hodgkin. Different people see different images in them. Each canvas is a possibility space, where many images coexist, shaped by perception.
Similarly, AI explores these interconceptual spaces. In image generation models, AI starts with random noise, much like the chaos in abstract art. Through each iteration, it reduces the noise step-by-step, gradually refining towards a recognizable outcome. Imagine it transforming a cloud of randomness into something meaningful—like a cat or a dog. With each iteration, the abstract becomes clearer, and the final image emerges from a blend of possibilities, shaped into coherence.
This is how AI moves through a series of conceptual images, refining them to reach the final state. It’s about exploring those in-between, undefined areas, and turning them into something we can recognize and understand.
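Here is a deliberately oversimplified Python sketch of that intuition. It starts from pure noise and nudges every value a little closer to a target at each step. Real diffusion models learn the denoising step from data rather than being handed the target, so treat this strictly as an analogy for the iterative refinement described above.

```python
import random

random.seed(0)

# Toy "target image": a short pattern standing in for a finished picture.
target = [0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0]

# Step 0: begin with pure random noise, the 'cloud of randomness'.
image = [random.random() for _ in target]

# Each iteration removes a little noise, blending toward coherence.
for step in range(1, 11):
    image = [0.8 * pixel + 0.2 * goal for pixel, goal in zip(image, target)]
    print(f"step {step:2d}: {[round(p, 2) for p in image]}")
# After enough iterations the abstract blur settles into the recognizable pattern.
```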
But it’s up to us to decide which of these discoveries matter.
Ultimately, AI is an extension of ourselves.
It might resemble our way of thinking because, in many ways, it is built based on how we process language, recognize patterns, and learn.
But what AI lacks is the intrinsic motivation that drives human curiosity.
Our neural networks have evolved not just to understand, but to assign value.
To care.
The cosmos might be a vast computation.
But the human mind is unique in its ability to ask, “What does this mean for us?”
Conclusion
As we grapple with the metaphor of a computational universe, we’re also confronting the potential of AI as a co-explorer of this cosmic computation.
The universe may indeed be a massive computation.
And our minds might be its advanced neural network processors.
But the real magic lies elsewhere.
In our ability to care.
To focus our attention.
To use tools creatively.
To seek understanding beyond the mere sum of computational steps.
Just as we are starting to glimpse the full potential of AI as a partner in exploration, it’s worth remembering something.
Our ability to care.
To choose what matters.
It remains irreplaceably human.
In a computational cosmos, we are both the processors… and the dreamers.