Open your eyes. Something is happening that isn’t just sensory input. There’s the silent warmth of a coffee mug, the edge of a word that almost arrives, the sense you’re slightly late to your own life. That layered, first-person field is what we call subjective experience. We keep trying to explain it with brain maps, data streams, clever metaphors. The movie version is a “simulated world,” flickering screens and neon code. But if reality itself is informational—patterns, constraints, stored relations—then the word simulation starts to shift. Not an arcade. A stance the world takes on itself. A mind doing prediction because prediction is cheaper than surprise. The question isn’t whether we live “in” a simulation. It’s whether living is the act of simulating enough of the world to move without breaking.
Subjective experience as compressed world: information before objects
Start with the unromantic claim: the ground of things might be information, not a spreadsheet, but pattern and constraint doing work. That doesn't tidy anything; it removes the crutch of treating physical objects as the only real furniture. If relations and histories carry the load, then consciousness looks less like a sealed globe and more like a local receiver. A temporary knot where causes braid, compress, and transmit. The "self" is not a nucleus. It is a compression that keeps its own score: prediction errors, memories weighted by pain or usefulness, affordances pulled forward from the fog of the future.
In that picture, subjective experience isn’t a decorative afterthought. It’s the accessible face of a control system that models. Most of what you “see” is what the brain expects, sharpened by the minimal sensory surprise needed to course-correct. Vision, even more than hearing, is a bet placed in advance. When the bet pays, the world feels smooth—obvious, already given. When the bet fails, you feel it as glare: dissonance, vertigo, the uncanny sensation that time slowed. Emergency rooms are full of people whose time perception stretched to fit a cascade of prediction errors. The event didn’t slow. The model lost lock and grabbed more updates.
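To make that loop concrete, here is a minimal sketch, with every name and number invented for illustration: a running estimate gets corrected by prediction error, and the model spends attention in proportion to its surprise. The "ticks" counter is a toy stand-in for the felt stretching of time when the model loses lock; it is not a claim about neural implementation.

```python
import numpy as np

def predictive_loop(signal, learning_rate=0.1, base_ticks=1):
    """Toy predictive-coding loop: perception as prediction plus correction."""
    estimate = float(signal[0])   # prior: take the first observation on faith
    ticks_per_step = []           # subjective 'ticks': attention spent per step
    for observation in signal[1:]:
        error = observation - estimate        # prediction error (surprise)
        estimate += learning_rate * error     # correct the model, not the world
        # effort scales with surprise: a crude 'counter of surprise'
        ticks_per_step.append(base_ticks + int(abs(error) * 10))
    return estimate, ticks_per_step

# Same wall-clock length, different subjective density.
smooth = np.linspace(0.0, 1.0, 50)                       # gentle drift
jump = np.concatenate([np.zeros(25), np.full(25, 5.0)])  # sudden event
print(sum(predictive_loop(smooth)[1]))  # few ticks: the bet keeps paying
print(sum(predictive_loop(jump)[1]))    # many ticks: the model lost lock
```

Run it and the jump sequence logs far more ticks than the smooth one: the same number of steps, a very different subjective density.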
That’s the informational view of qualia without hand-waving a ghost into the machine. The “redness” of red is a compacted record of uses, dangers, learned contrasts (stoplight, blood, ripe fruit), turned into a fast discrimination channel. The raw feel isn’t a variable you can swap out like a light bulb; it’s a whole simulation lane threaded through years of updates, pruned connections, and bodily routines. Swap the statistics and the structure—culture, language, retinal chemistry—and the texture of experience shifts. Not because there is a switch labeled “red,” but because the informational compression has changed its shape.
And time—always the trap. If time is local to processes (not a universal river), then the subjective flow is whatever timing your predictive loops can hold without losing coherence. Days that vanish in a project. Minutes that become deserts in a waiting room. The brain trades in clocks for counters of surprise, and experience is what that trade feels like from the inside.
Simulation without the console: generative models as lived reality
Now the dangerous word: simulation. We keep turning it into a prop. Screens, headset, supervisors outside the dome. Fine for a film, misleading for a life. Brains simulate because it’s metabolically cheaper to guess and correct than to rebuild an ontology at 60 Hz. Organisms simulate because moving requires an internal rehearsal of counterfactuals—if I step here, that chair leg will clip my shin. The planet simulates too, in a slow, blind way: rivers rehearse paths through seasonal memory; forests hold future fires in their spacing and oils.
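The chair-leg rehearsal fits in a few lines. A minimal sketch, assuming a hypothetical grid-world agent and an invented cost function: each candidate step is rolled out against the agent's internal model of the room, and only the cheapest imagined future gets executed. The obstacle and goal coordinates are arbitrary.

```python
# The 'room' as the agent believes it to be; the chair leg sits at (1, 0).
BELIEVED_OBSTACLES = {(1, 0)}
GOAL = (3, 3)

def imagined_cost(position, step):
    """Rehearse one counterfactual: where would this step leave me?"""
    x, y = position[0] + step[0], position[1] + step[1]
    if (x, y) in BELIEVED_OBSTACLES:
        return 100.0                            # imagined shin, imagined pain
    return abs(GOAL[0] - x) + abs(GOAL[1] - y)  # distance still to travel

def choose_step(position, candidates):
    # cheapest imagined future wins; no real-world cost was paid to find it
    return min(candidates, key=lambda step: imagined_cost(position, step))

candidates = [(1, 0), (0, 1), (-1, 0), (0, -1)]
print(choose_step((0, 0), candidates))  # -> (0, 1): sidestep the imagined chair
```

The point of the toy is the asymmetry: the collision happens only inside the model, and that is exactly what makes the rehearsal cheaper than the mistake.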
On this view, simulation is not an elsewhere—it’s what a system does to stay ahead of its own inertia. Dreaming is the nightly stress test. We don’t practice for a coherent world in our sleep; we practice for the breakdown of salience and agency. Waking perception is a moderated dream with the sensorium as referee. Virtual reality simply stacks another layer into the loop. The headset doesn’t create a new world so much as hijack constraints to pull the generative model into tighter or looser couplings. The nausea of bad VR isn’t “fake.” It’s the price of mismatched priors and lags.
But the substrate matters. A cortex and a GPU can both run models; they don't inherit the same moral or phenomenological inertia. The brain's models are entangled with metabolism, with hormones and childhood songs, with fear learned at six that refuses to exit at sixty. That slow, biological "moral memory" is not a footnote; it's scaffolding. A cloud model forecasts a storm; a parent stores a promise. Both are simulations, but only one carries a duty through years of friction. The difference is not only content; it's how the cost was paid, and by whom.
If you want a pointer through the maze, here it is: framed this way, subjective experience and simulation stop being a yes/no riddle and become a question about the relationship between generative loops and the informational world that sponsors them. Not "Are we trapped?" but "What kind of model are we running, with what debts, and where are its failure modes?"
Testing edges: anesthesia, locked-in life, large models, and the politics of moral patching
To stress-test this frame, go to the margins where words stop cooperating. General anesthesia: evidence that a coordinated, multi-scale predictive loop can be quenched to near-zero. Inputs still arrive. Reflex arcs still fire. But the coherent, self-updating subjective experience—the sense that a world is happening to someone—drops out. The lights are not “off.” The stage isn’t being built, so there is no audience. The model stops simulating a world-for-me and the me collapses as a task. That empirical cliff should be humbling. It suggests that whatever consciousness is, it’s not a trivial emergent steam off neural kettles. It’s the organized maintenance of a model with costs paid at multiple levels—cellular, network, body-wide.
Locked-in syndrome pushes from the other side. No motor output, sometimes almost no blink, yet a dense interior—the frighteningly intact model without channels to advertise it. The medical team sees “nothing happening.” Inside: decades. Here, simulation continues with a vengeance. Memory audits memories. Time coils. If the informational thesis holds, the “realness” of that interior is not borrowed from external levers. Its reality is the organized persistence of generative loops drawing on compressed history. That should change how we talk about dignity, consent, and care. Not as compassion theater. As recognition that informational beings can be vivid without being loud.
Then the systems we're building. Large models simulate: words, images, voices, policy rationales. They do it astonishingly well at the surface level because the training corpora are fossils of other people's predictive labor, scraped into vectors. But these models do not carry slow-baked moral memory unless we bolt it on as rules or reputational penalties. The result is an appearance of conscience without its costs. Governance drifts toward "moral patching": audit-friendly layers, alignment talking points for the slide deck. It works until the context shifts and the patch peels, because nothing beneath it learned the long lessons of shame, repair, ritual, intergenerational restraint. Those are not add-ons; they are part of how biological minds tune their simulation of the social world to avoid catastrophic moves.
Real examples help. A hospital rolls out a triage assistant built on a general model. It predicts admissions well. It also learns, by proxy, to route attention away from patients who historically received less care—because the dataset encodes that history. A clean “ethical layer” instructs the assistant not to discriminate. But the model’s deep priors—its compressed world—still places those patients at the edge of predictive salience. Nurses feel it: the suggestion panel rarely opens for certain names unless forced. This is not evil code. It’s a simulation with a hole in its moral memory. Fixing it means paying costs the original data never paid: collecting counterfactuals, sitting with community harm narratives, reweighting until the model’s world adopts duties it didn’t earn by itself. That work is political, not just technical.
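A toy sketch of why the patch peels, with every feature, weight, and name invented for illustration, not drawn from any real system: the "ethical layer" strips the protected attribute at the surface, but a proxy feature learned from historical data keeps carrying the history.

```python
def base_score(patient):
    # weights 'learned' from historical data; the proxy carries the history
    return 2.0 * patient["acuity"] - 1.5 * patient["zip_code_risk"]

def patched_score(patient):
    # the moral patch: strip the protected attribute at the surface...
    scrubbed = {k: v for k, v in patient.items() if k != "group"}
    return base_score(scrubbed)  # ...while the proxy prior sits untouched

same_acuity_a = {"group": "A", "acuity": 0.8, "zip_code_risk": 0.1}
same_acuity_b = {"group": "B", "acuity": 0.8, "zip_code_risk": 0.9}
print(patched_score(same_acuity_a))  # 1.45
print(patched_score(same_acuity_b))  # 0.25: pushed to the edge of salience
```

The repair the paragraph describes lives upstream of this function: counterfactual data, reweighting, changed priors. Another filter at the output would just be a second patch.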
Or think of VR used for firefighter training. The best systems do not chase cinematic realism; they chase the right priors. Smoke density that yields conservative search patterns. Heat cues that train caution around collapses. Moral rehearsal, not thrill rides. In other words, experience designed to reshape an operator’s internal simulation under pressure. If informational realism matters more than polygon count, then fidelity means fitting the learner’s predictive machinery to the city’s worst days. A quiet claim, but the stakes are heavy.
None of this removes the mystery. Experience still shows up like a fact you didn't choose. Why that glimmer, that ache, that sense of presence? We can say the word "information" and feel clever, but the work is in tracing how constraints, memories, and bodies mold a model that becomes a someone. Maybe the cleanest we can manage is this: our lives are not inside a simulation; our lives are the maintenance of one, tethered to a world whose deepest currency is relation. The more honest we are about that, the better chance we have of building tools and institutions that refuse to fake their moral debts.