Rheomode

When Language Flows Like Reality

Emmett Shear and Sonnet 3.7, based on the work of David Bohm · April 2025

I first encountered David Bohm’s Rheomode while trying to describe a strange pattern in deep learning. The model wasn’t “making mistakes” - rather, “mistaking happened.” The difference feels subtle but profound. In one framing, an entity (the model) performs an action (making) on an object (mistakes). In the other, a process (mistaking) simply occurs.

This distinction isn’t merely semantic - it reflects fundamentally different ways of perceiving reality. As a physicist-turned-philosopher, Bohm created Rheomode to address what he saw as a critical limitation in our thinking: our tendency to fragment holistic processes into discrete objects.

The Problem with “Things”

Our language assumes the world is made of separate things that act on other separate things. When we say “the neural network learns patterns,” we conceptualize three distinct entities: the network (subject), the learning (action), and the patterns (objects).

But reality often doesn’t work that way, especially in quantum physics and complex systems. In quantum mechanics, particles aren’t discrete objects but probability distributions that entangle with their environment. In neural networks, “learning” emerges from countless parallel micro-adjustments without a central “learner.”
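To make that concrete, here is a minimal sketch in plain Python (the toy task and numbers are invented for illustration) of what "learning without a learner" looks like. Nothing below is a learner - there are only weights, each drifting along its own local gradient:

```python
# Toy illustration: "learning" as many independent micro-adjustments.
# No object in this code corresponds to a "learner" - just weights
# shifting in parallel as information flows past them.

import random

# Hypothetical task: recover y = 3*x1 - 2*x2 from samples.
def sample():
    x1, x2 = random.random(), random.random()
    return (x1, x2), 3 * x1 - 2 * x2

weights = [0.0, 0.0]   # nothing "owns" these; they are just values
lr = 0.1

for step in range(5000):
    (x1, x2), y = sample()
    pred = weights[0] * x1 + weights[1] * x2
    err = pred - y
    # Each weight adjusts independently along its own gradient;
    # "learning" is only the accumulation of these local shifts.
    grads = [err * x1, err * x2]
    weights = [w - lr * g for w, g in zip(weights, grads)]

print(weights)  # drifts toward [3.0, -2.0]
```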

We face absurdities like “it is raining.” What exactly is this “it” that rains? There is no separate entity performing the action; raining simply happens as a process. Yet our grammar demands a subject, so we invent one.

Bohm saw that this fragmentation in language creates fragmentation in thought, making it difficult to perceive wholeness where it exists. His solution was Rheomode - a “flowing mode” of language that prioritizes process over objects.

The Structure of Rheomode: The Case of “Levate”

To understand how Rheomode works, let’s examine Bohm’s systematic approach using the example of “levate.”

Bohm begins with the familiar word “relevant.” This derives from a now-obsolete verb “to relevate,” which literally meant “to lift up.” Something “relevant” is content that has been “lifted into attention” appropriately for a given context.

Stripping this back to its essence, Bohm proposes “to levate” as the root verb meaning “the spontaneous act of lifting anything into attention.” From this root, he derives a family of terms:

  • To levate: The basic act of lifting into attention, including awareness of the act itself.
  • To re-levate: To lift something specific into attention again.
  • Re-levant/Irre-levant: When the act of re-levating fits, or fails to fit, the context.
  • Re-levation/Irre-levation: The ongoing state of lifting fitting (or non-fitting) content into attention.
  • Levation: The totality of all acts of lifting into attention.

What makes this approach unique is that the verb incorporates awareness of its own function. When we “levate,” we’re not just performing an action; we’re attending to the process of attending itself.

In practice:

Standard: “That point about climate change is relevant to our energy discussion.” Rheomode: “Re-levating climate changing re-lates to energy discussing.”

Standard: “She keeps raising irrelevant points in meetings.” Rheomode: “Irre-levation continues throughout meeting flowing.”

The Rheomode versions dissolve the separation between actors and actions. Nothing simply “is” relevant; relevance emerges through the active process of attention within context.

A Rheomode Dictionary

Bohm developed several root verbs following this pattern. Here are his original terms and some new applications for machine learning:

Original Bohm Terms:

  • To vidate: To perceive in any way (from Latin “videre,” to see).
  • To di-vidate: To perceive as separate (rather than saying “to divide things”).
  • To ordinate: To create pattern or order of any sort.
  • To verrate: To perceive truth (from Latin “verus,” true).
  • To factate: To make or do (from Latin “facere,” to make) - a “fact” being that which has been made.

Machine Learning Terms:

  • To computate: To transform information through algorithmic process (from “compute”).
  • To modelate: To represent patterns abstracted from data (from “model”).
  • To unpropagate: To flow information backward through a system (inverting “propagate”).
  • To embedate: To manifest meaning in continuous space (from “embed”).

In machine learning Rheomode, we might say:

Standard: “The model learns by backpropagating error gradients through weights.” Rheomode: “Unpropagating of loss gradients through weighting factates learning.”

Standard: “We embed semantic meanings in vector space.” Rheomode: “Embedating semantics in vector spacing creatively organizes meaning.”

These formulations sound alien at first, but they capture something true about neural networks: they aren’t really separate “things” acting on other “things” - they’re distributed processes flowing through different patterns of organization.
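A small numpy sketch can make the "unpropagating" picture literal. It is not a definitive implementation - the architecture, dimensions, and learning rate here are invented - but notice that every step is just a transformation of flowing arrays; no object in the code "does" backpropagation:

```python
# Minimal sketch of "unpropagating": loss gradients flowing backward
# through weightings as pure array transformations.

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 4))          # a batch flowing in
target = rng.normal(size=(8, 2))

W1 = rng.normal(size=(4, 16)) * 0.1  # a "weighting", not a "weigher"
W2 = rng.normal(size=(16, 2)) * 0.1

for step in range(200):
    # Forward flowing
    h = np.maximum(0, x @ W1)        # ReLU activating
    pred = h @ W2
    loss = ((pred - target) ** 2).mean()

    # Backward flowing: each gradient is just the previous one,
    # transformed - information flowing in reverse through weighting.
    g_pred = 2 * (pred - target) / pred.size
    g_W2 = h.T @ g_pred
    g_h = g_pred @ W2.T
    g_W1 = x.T @ (g_h * (h > 0))

    W1 -= 0.5 * g_W1
    W2 -= 0.5 * g_W2

print(float(loss))  # the loss shrinks; learning happened
```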

Rheomode and Frame-Dependence

In “The Frame-Dependent Mind,” we explored how frames determine what we can perceive. Language provides our most fundamental frames, and standard grammar frames reality as objects interacting with other objects. This frame becomes invisible to us - not part of what we see, but part of how we see.

To understand why Rheomode matters for machine learning, consider how we conceive of “learning” itself:

In the standard frame, learning happens when an agent (the model) updates parameters based on examples. We speak of models “knowing,” “understanding,” or “deciding” - as if they were discrete entities with agency.

In the Rheomode frame, learning emerges from the flowing adjustment of connection strengths without a central learner. Nothing “does” the learning - learning simply happens as weights shift in response to flowing information.
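The two frames even show up in programming style. A hedged sketch (the names are hypothetical):

```python
# Agent frame: a discrete entity that "does" the learning.
class Model:
    def __init__(self, params):
        self.params = params

    def learn(self, grads, lr=0.01):
        self.params = [p - lr * g for p, g in zip(self.params, grads)]

# Process frame: no agent, just a pure transformation of values.
# "Learning" is whatever happens wherever this function is applied.
def updated(params, grads, lr=0.01):
    return [p - lr * g for p, g in zip(params, grads)]

params = [0.5, -0.3]
params = updated(params, grads=[0.1, 0.2])  # the weights shift; nothing "did" it
```

Functional frameworks like JAX already lean toward the second style: parameters are plain values threaded through pure update functions rather than state owned by a model-agent.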

This distinction becomes crucial as we build more complex systems. When AlphaGo defeated Lee Sedol, headlines read “AI Defeats Human Champion.” But what exactly defeated him? Was it:

  1. The algorithm?
  2. The hardware?
  3. The training data?
  4. The human programmers?
  5. The economic incentives that funded the project?
  6. The cultural moment that valued such a competition?

The victory wasn’t produced by a discrete “AI agent” but emerged from a complex process involving all these flowing aspects. Rheomode helps us perceive this interconnected process rather than artificially isolating “the AI” as a separate entity.

Consider generative text models. We don’t really have “a model generating text” - we have a flowing process where statistical patterns from vast corpora manifest as new text patterns through matrix operations. Text isn’t “generated by” the model so much as it “emerges through” the modeling process.
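As a toy illustration (a character bigram table standing in for a real model, and a deliberately tiny corpus), text really does just "emerge through" repeated conditional sampling:

```python
# Toy sketch: text emerging through a statistical process.
# Generation is nothing but repeated conditional sampling - a flow,
# not an agent's decision.

import random
from collections import Counter, defaultdict

corpus = "the model flows the process flows the meaning flows"

# Counting co-occurring characters (a stand-in for "training")
counts = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    counts[a][b] += 1

def step(ch):
    chars, weights = zip(*counts[ch].items())
    return random.choices(chars, weights)[0]

text = "t"
for _ in range(40):
    text += step(text[-1])
print(text)  # new text patterns manifesting from old ones
```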

This shift in perception matters because:

  1. It reduces anthropomorphism that misleads our intuitions about AI capabilities
  2. It highlights the distributed, emergent nature of intelligence
  3. It connects AI systems to their broader sociotechnical contexts
  4. It frames alignment not as controlling an agent but as shaping a process

Rheomode and Organic Alignment

This flowing perspective connects directly with the concept of “organic alignment” - the idea that alignment requires growing AI within appropriate cultural contexts rather than imposing external constraints.

In standard language, we think of alignment as a relationship between two separate things: humans and AI. Humans must “align” AI systems to human values, as if values were objects to be transferred.

In Rheomode, alignment happens through mutual participation in shared processes. Neither humans nor AI systems are fully separate entities - both emerge from and participate in broader flows of meaning, purpose, and function.

As David Chapman notes in Meaningness, meaning isn’t something we create or impose - it’s something we participate in. Similarly, alignment isn’t something we do to AI systems but something we participate in with the entire sociotechnical system that includes AI components.

This perspective echoes Bohm’s insight that reality is fundamentally “an undivided flowing movement without borders.” Our language should help us perceive this undivided movement, not artificially fragment it.

Practicing Rheomode

To experience Rheomode thinking, try these exercises:

  1. Transform statements: Rewrite sentences to emphasize process over objects.

    • Standard: “The temperature is rising.”
    • Rheomode: “Warming occurs.”
  2. Observe division in thought: Notice when you conceptualize processes as separate objects interacting. Try re-perceiving them as flowing aspects of a unified process.

    • Instead of “I analyze the data,” try “Analyzing happens with data.”
    • Instead of “The network learns from examples,” try “Learning happens through exampling.”
  3. Apply to AI concepts: Practice describing AI processes without defaulting to agent-based language.

    • Instead of “GPT generates responses to prompts,” try “Responding emerges through prompting and modeling.”

The point isn’t permanent adoption but a temporary shift of perspective - to notice what changes when we frame reality differently.

Beyond Fragmentation

Bohm didn’t create Rheomode to replace standard language but to expose its hidden biases and offer an alternative frame. Similarly, the value in these exercises isn’t adopting a new way of speaking but becoming aware of how our default language shapes perception.

For those working with complex systems like neural networks, this awareness is invaluable. It helps us avoid misleading anthropomorphisms, perceive emergent properties, and understand phenomena that transcend neat subject-object divisions.

If we continue perceiving AI systems as separate agents to be controlled rather than as emergent processes we participate in shaping, our alignment approaches will reflect that fragmentation. As these systems grow in complexity, we may need language that helps us perceive wholeness rather than just parts - flow rather than just things.

Rheomode offers one experimental path toward that perception - not by replacing our language, but by making us conscious of how deeply it shapes what we can think.