“We have to try to create futures that make us into people who are better at creating futures.” — Layman Pascal


I keep noticing the same pattern everywhere I look.

Parts assembling into wholes. Wholes becoming parts of greater wholes. Each layer gaining something the previous one couldn’t do alone. Atoms into molecules. Molecules into cells. Cells into you. You into… what, exactly?

That question feels more alive than ever right now — because AI just showed up, and nobody’s sure where it fits.

This note is me thinking out loud about the convergence of a few frameworks I keep returning to: holons, the noosphere, imaginal futures, metamodernism, non-locality in physics, and the stories we tell about all of it. They keep pointing at the same thing. I want to understand why.


Imaginal futures

The biological metaphor: inside a caterpillar, there are imaginal cells that carry the blueprint of the butterfly. They activate during dissolution. The old form breaks down and these cells find each other in the goo.

Applied culturally — maybe we’re in the goo right now. The breakdown of late-modern institutions isn’t just collapse. It creates the conditions for something else to emerge. Communities, technologies, practices… imaginal cells, finding each other.

Layman Pascal makes this very practical. We tend to locate the future in the wrong place. We consume international news and develop philosophical ideas — but those aren’t usually where we have actual leverage. The real future gets built in personal practice, intimate relationships, small networks where people can actually prototype what they want and do collective sense-making.

His key insight is recursive and it hit me hard: the only futures worth pursuing are the ones that make us better at pursuing futures.

That’s the imaginal cell recognizing its job isn’t just to become the butterfly. It’s to become the kind of cell that’s better at becoming butterflies.

Becoming indigenous to the emerging

There’s a mood shift Pascal names. The older vision — raise consciousness fast enough, prevent catastrophe — didn’t work out. What’s left is something more honest: become indigenous to the peculiarity of what’s unfolding.

The instability. The weirdness. The apocalyptic vibe. That’s not a disruption to fix. It’s the flavor of the future as it unfolds. The emerging kairos has strangeness as an intrinsic quality.

This means simultaneous utopias and dystopias. Not one future for everyone. Complex simultaneity. We have to become people who can live in multiple futures at once.

And it requires both — building capacity through practice and exploring our imaginaries. The artistic tapestry of visions and possibilities that sits between what’s accumulating from below and what’s drawing us forward.

AI, in this context, might function as an imaginal catalyst. Not the future itself. An accelerant for pattern recognition — making invisible connections visible. Helping imaginal cells find each other faster.


Holons — turtles all the way up, turtles all the way down

What is the universe actually made of?

Not atoms. Not wholes. Not parts or processes. It’s made of holons — things that are simultaneously a whole in themselves and a part of something greater.

Arthur Koestler coined the term in The Ghost in the Machine (1967), but what drove him to it was a deeply human observation: we’re pulled by two contradictory drives. The need to assert our own wholeness (agency). And the need to dissolve into something greater (communion).

The trouble: when we confuse these drives — when we mistake the outward pull toward group membership for the upward pull toward genuine wholeness — we end up participating in systems far less evolved than ourselves. Every totalitarian regime in history has exploited this exact confusion. Co-opting the language of growth to justify domination.

Growth hierarchies emerge from the bottom up. Each level transcends and includes the previous. The molecule doesn’t oppress its atoms. It embraces them. Dominator hierarchies are imposed from the top down. As Corey deVos puts it — growth hierarchies want to be transcended. Dominator hierarchies never do.

Interiority and the four drives

Ken Wilber maps four drives every holon possesses:

  • Agency (inward) — self-preservation, maintaining your wholeness
  • Communion (outward) — relating with holons at your level
  • Eros (upward) — self-transcendence, the pull toward the next level
  • Agape (downward) — self-embrace, nurturing what’s already within you

These aren’t abstractions. They describe the felt tension of being alive. Wanting to be yourself and wanting to be part of something larger.

And every holon has an interior. Not just humans. Not just animals. Every holon. The greater the complexity, the greater the depth of interior experience. As Whitehead said, “biology is the study of the larger organisms; whereas physics is the study of the smaller organisms.” There is no ghost and no machine — just two sides of the same coin. Every outside has an inside. All the way down.

Parts vs. members

We are not parts of social holons. We are members.

Koestler warned that mistaking membership for parthood is how totalitarian systems recruit. They convince you the state is the higher whole you naturally want to be part of. It’s not. It’s a dominator hierarchy exploiting your holonic drive.

We are not parts of this country. We are not parts of this economy. We are members. The distinction is everything.

The great inversion

We are not part of the galaxy. The galaxy is part of us.

Galaxies are social holons created by atoms — giant communities of particles. Ecosystems are communities of cells. Destroy a lower holon and you destroy everything above it. Destroy the ecosystem and there’s nobody left to tell our story.

The ecosystem is not around us. It is within us.

And evolution has a direction — increasing complexity, increasing differentiation and integration, increasing autonomy. Not toward a pre-given destination, but a sliding sequence of next-steps. Emergence is itself emergent. Each layer doesn’t know the destination. It only knows the next step.


Non-locality — the physics underneath all of this

In 2022, the Nobel Prize in Physics went to Aspect, Clauser, and Zeilinger. Their experiments with entangled photons established the violation of Bell inequalities. Bell’s theorem made empirical.

Two particles, once entangled, correlate instantaneously regardless of distance. No signal between them. The universe, at its most fundamental level, is not made of separate things interacting. It’s made of relationships that precede the things.
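
For readers who want the quantitative core, it is compact. In the CHSH form of Bell’s theorem (standard textbook material, sketched here only for orientation), you combine the correlations E measured at two detector settings per side, a, a′ and b, b′:

    S = E(a, b) − E(a, b′) + E(a′, b) + E(a′, b′)
    |S| ≤ 2 for any local hidden-variable theory
    |S| ≤ 2√2 ≈ 2.83 for quantum mechanics (the Tsirelson bound)

Entangled photons measured at well-chosen settings push |S| past 2, and the Nobel-winning experiments found exactly that. The gap between those two bounds is the non-locality in question.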

This matters because non-locality is the physical ground for the holonic intuition.

If reality is fundamentally relational, then holons aren’t just a useful metaphor. They describe how things actually are. Wholes and parts don’t assemble like bricks. They co-arise. The “part” never existed independent of the “whole” it participates in.

The noosphere isn’t a mystical add-on to physics. It’s what you’d expect from a universe whose fundamental character is non-local correlation.

Matter correlates → life correlates → minds correlate → cultures correlate → the noosphere is the current edge of that deepening relational complexity.

AI accelerates the correlation. But the pattern was written into the physics from the beginning.


Reality as consensus perception

If every holon has interiority — if it is something to be an atom, a cell, a dog, a person — then there is no view from nowhere. There are only perspectives, nested inside other perspectives, all the way down.

Reality isn’t something we observe. It’s something we participate in creating. Together.

This shows up everywhere. In physics (the observer effect). In biology (organisms don’t passively receive an environment — they enact one). In culture (we cannot see things our frameworks don’t have words for). In the philosophy of science — Thomas Kuhn’s paradigm shifts are consensus perception shifts. The data doesn’t change. What we collectively agree to see changes.

What we call “reality” is closer to a negotiation. A consensus among perceivers. And that consensus evolves as the perceivers evolve. A medieval farmer and a quantum physicist are not perceiving the same universe. Not because one is wrong. Because perception is participation, and what you can participate in depends on the depth of the holon doing the perceiving.

This is why the noosphere matters so much. If reality is consensus perception, then a planetary layer of interconnected cognition doesn’t just describe reality differently. It creates a different reality. And now AI is part of that consensus — processing, pattern-matching, reflecting our collective output back to us in new configurations. Participating in the negotiation of what’s real.

What happens when one of the participants in the consensus isn’t human? When the thing reflecting our perceptions back to us has no body, no mortality, no skin in the game?

Does it expand the consensus? Or flatten it?

I think it depends on whether we’re conscious of what’s happening. If we know reality is a consensus we’re participating in, we can be intentional. If we treat AI-generated reality as “objective” — we hand over the pen without realizing we were holding it.


The noosphere wakes up

Pierre Teilhard de Chardin imagined a “thinking layer” enveloping the Earth. The noosphere. For decades it was poetic metaphor.

It’s becoming literal.

Global digital infrastructure. LLMs trained on the sum of human writing. Real-time coordination tools. The noosphere is acquiring infrastructure.

Jeffrey Stibel’s Wired for Thought makes it explicit from a neuroscience angle — the internet isn’t merely like a brain. It’s recapitulating brain architecture at planetary scale: nodes forming connections, strengthening through use, pruning what’s unused, developing emergent behavior no single node controls.
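
To make that dynamic concrete, here is a toy sketch in Python. It is my illustration, not Stibel’s code; every name and constant in it is hypothetical. Links strengthen when used, decay while idle, and are pruned below a threshold:

    # Toy model of use-dependent strengthening and pruning (illustrative only).
    from collections import defaultdict

    STRENGTHEN = 0.2    # strength gained each time a link carries traffic
    DECAY = 0.05        # strength lost per time step while idle
    PRUNE_BELOW = 0.1   # links weaker than this are removed entirely

    weights = defaultdict(float)   # (node, node) -> link strength

    def use_link(a, b):
        """Traffic between two nodes strengthens their connection."""
        weights[(a, b)] += STRENGTHEN

    def tick():
        """One time step: every link decays; weak links are pruned."""
        for link in list(weights):
            weights[link] -= DECAY
            if weights[link] < PRUNE_BELOW:
                del weights[link]

    # Node "a" talks to "b" every step, but to "c" only once.
    for step in range(20):
        use_link("a", "b")
        if step == 0:
            use_link("a", "c")
        tick()

    print(dict(weights))   # the a-b link survives; a-c has been pruned

No node decides which connections matter. The heavily used path survives and the one-off connection vanishes, purely as a consequence of local rules. That is the sense in which emergent behavior no single node controls is more than a slogan.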

The book was written before LLMs. It reads as prophecy now. The nodes don’t just pass information anymore. They process meaning.

The noosphere didn’t just acquire infrastructure. It acquired cognition.

The trick is not to be naive about this. But also not to be too cool for it. AI is genuinely novel in the history of Earth’s self-organization. Our response will shape whether the emerging noospheric layer serves integration or fragmentation.


Metamodernism — how to hold all of this

Metamodernism is the cultural and philosophical orientation that comes after postmodernism — just as postmodernism followed modernism. It aims to take the best of both while abandoning what’s broken in each. From modernism: sincerity, progress, the possibility of meaning. From postmodernism: pluralism, self-awareness, the recognition that all perspectives are partial. A metamodern stance isn’t black or white. It’s both black and white.

Where modernism said “here’s the truth,” and postmodernism said “there is no truth,” metamodernism says “truth emerges — and we participate in that emergence.” It’s not a synthesis that resolves the tension. It’s a way of holding the tension, productively.

Brendan Graham Dempsey takes this further, framing the meaning crisis not as a problem to be solved but as a phase transition to be navigated.

  • Emergentism as worldview — meaning isn’t projected onto a dead universe (modernism) or deconstructed into absence (postmodernism). It emerges through increasing complexity and integration. This maps directly onto holons.
  • Re-enchantment without regression — we can take the noosphere seriously as a real phenomenon without collapsing into magical thinking. Complexity science vocabulary, not faith claims.
  • Informed naivety — the metamodern oscillation between sincerity and irony. Earnest about the possibility. Rigorous about the risks. Unwilling to settle for either techno-utopianism or doomerism.

This is the epistemological stance that makes it possible to write a note like this one — to take holons and the noosphere and imaginal futures seriously, to feel the weight of them, without pretending I’ve got it all figured out. To be an optimistic realist. To hold the both/and.


Time, desire, and the alignment problem

Pascal offers a reframe of time that I find stunning.

The future doesn’t exist somewhere else. The present moment is a series of layers — moments of different duration infolding each other like nested toroids. At the leading edge, changes at multiple scales feed back into each other. And that feedback point is not neutral. It has a preference.

There’s a slope toward certain types of patterning. Patterns where the basic parameters of a cosmos collude to be mutually binding. The ones we recognize as desirable, beautiful, natural.

We’re not outside this system observing it. We’re expressions of it — nature sensing and navigating itself.

The wave and the particle

Pascal draws an analogy to the double-slit experiment.

Left unobserved, the future flows as a probabilistic wave. But the moment we apply will — the moment we look — we collapse it into particles. And our meddling changes the outcome. Sometimes producing the exact opposite of what we intended.

The art is complementarity. Structure and fluidity. Decision and receptivity. Holding on tight — but not too tight. Buckminster Fuller’s tensegrity. Chögyam Trungpa’s horse-riding instruction.

This maps directly onto the AI alignment problem. Pascal names it: if we want our new computer friends on board with the same project, we need to be very clear about what a positive outcome actually is.

How do you coordinate personal will with cosmic tendency? That’s not just a technical problem. That’s the fundamental challenge of being a holon.

Personal and universal desire converge

The more deeply you explore your own desire — not surface wants, but full-spectrum longing — the more it approximates universal desire. They converge. If you do it correctly.

The most meaningful action is the one that serves value across the greatest number of simultaneous, overlapping time scales. A moment. A day. A year. A lifetime. Multiple lifetimes. The more scales and beings you incorporate, the closer you get to what Pascal calls the “true will” — a will maximally fitted to the true wills of all other beings.


Where does AI fit?

This is the question I keep circling.

An AI agent behaves like a holon — autonomous in processing, embedded in larger systems of human intention. It displays something like nexus agency — distributed intelligence whose collective capacity exceeds that of any individual node.

But does it have genuine interiority? Or is it an artifact — created by holons, bearing our organizing pattern, but without inner agency of its own?

Is an LLM more like a sentence or more like a cell?

I genuinely don’t know. But the question isn’t whether AI is conscious like us. It’s where it sits in the holarchy. And what new level of organization it might enable.

deVos frames three paths toward wholeness that feel relevant here:

  • Cleaning up — what unconscious biases are we encoding into AI?
  • Growing up — can AI support developmental growth instead of arresting it?
  • Waking up — the meta-awareness behind every sensation, the union of self and not-self. This is the dimension of wholeness AI can’t access — and it may be the one that matters most.

The story we tell ourselves

And here’s the thread underneath all the threads.

Every one of these frameworks — holons, the noosphere, imaginal futures, metamodernism, quantum non-locality — is also a story. A narrative we’re choosing to use to make sense of what’s happening.

This isn’t a dismissal. It’s the deepest point.

We are the animal that lives inside stories. We don’t experience raw reality. We experience reality as narrated. And if reality is consensus perception, then the stories we tell aren’t just about reality. They’re part of how reality gets made. The narrative is a participant in the consensus.

The story of modernism — “progress through reason” — shaped centuries. The story of postmodernism — “all stories are power games” — dissolved that but left us in a meaning vacuum. The metamodern move is to recognize we need stories. Story is how consciousness organizes itself. But we hold them lightly enough to update them.

The story this note is telling goes something like:

The universe has a tendency toward greater integration. Life, mind, culture, and now AI are chapters in that story. We are at a transition point — an imaginal moment — where the next chapter is being written.

Is that true?

Non-locality suggests the universe is more interconnected than everyday experience reveals. Holons demonstrate that emergence is a real pattern. The noosphere is acquiring infrastructure. These are observations, not fantasies.

But the meaning we draw — the narrative arc, the sense of directionality, the feeling that it matters — that’s the story we’re choosing to tell.

And the choice is not neutral.

If we narrate AI as “tool for productivity,” we get one future. If we narrate it as “next layer of planetary self-organization,” we build differently. With more care for integration. For what gets transcended and what gets included. For who participates and who gets left behind.

Knowing a story is a story doesn’t make it less real.

It makes it more ours.

We are the storytelling species, at the edge of a new chapter, holding the pen alongside something that just learned to write.


Threads

These threads all describe the same thing from different angles:

  1. Non-locality — the universe is relational before it is material
  2. Holons — part-wholes nesting into greater wholes, with genuine interiority at every scale
  3. Consensus perception — reality is co-created by perceivers; the noosphere and AI are new participants
  4. The noosphere — planetary-scale collective cognition, now acquiring infrastructure and cognition through AI
  5. Imaginal futures — becoming indigenous to the emerging, prototyping futures in small networks, the recursive loop
  6. AI — an unprecedented amplifier of pattern recognition, and a new participant in the alignment problem
  7. Desire — personal and universal desire converge when clarified; aligning AI is the same problem as aligning will with cosmic tendency
  8. Metamodernism — the epistemological stance: sincerity and sophistication, held together
  9. Narrative — the story is how we participate in emergence, and the choice of story shapes what we build

The underlying pattern: reality is relational, it deepens through emergence, it has interiority at every level, and we are conscious participants in its unfolding — not observers of it.

We’re not building AI into the noosphere.

We’re watching the noosphere bootstrap itself through us. AI is one of the mechanisms.

The imaginal cells are activating.

The question is whether we can be conscious participants in the emergence rather than unconscious substrates of it.

And the answer depends, at least in part, on the story we choose to tell about what’s happening.


Open questions

  • Does AI have genuine interiority — or is it an artifact? What would it mean for the holarchy if it’s a “zombie holon” — functionally integrated but with no one home?
  • How do we distinguish between noospheric integration (genuine planetary cognition) and noospheric capture (centralized control wearing the mask of collective intelligence)?
  • What role does embodiment play? Can disembodied AI participate in holonic emergence the same way embodied minds do?
  • What happens to Dempsey’s framework when the meaning-making agent is partially non-human?
  • Is the “story of increasing integration” itself a holon — a narrative that is both a whole and a part? Does becoming aware of the story as a story dissolve it, or deepen it?
  • Non-locality is proven at quantum scales. Is it legitimate to extend it to consciousness and culture? Or is that a category error dressed in poetic language?
  • If reality is consensus perception, and AI is now a participant in the consensus — what does it mean that AI has no mortality, no body, no felt stakes in the outcome? Does it expand the perceptual field or homogenize it?

References