On Corporations, Humans, and the Emergence of Distributed Intelligence

Abstract

Modern life is shaped by entities that act with increasing autonomy and at scales far beyond individual human understanding. Corporations, governments, and machine systems coordinate resources, make decisions, and alter environments in ways that resemble agency, even when no single actor is in control.

This essay examines how human, institutional, and computational systems are converging into distributed forms of intelligence. It traces the current state of this convergence, considers where existing trajectories appear to lead, and asks what those trajectories imply for human flourishing.

This is not a transhumanist argument. It begins from the assumption that human judgment, dignity, and agency matter. The goal is not to predict an inevitable future, but to clarify the present well enough that meaningful choices remain possible. Before deciding where to go, we need to understand what kinds of systems already exist, how they behave, and what roles humans now play within them.


I. The Corporation as a Proto-AI

Corporations are complex adaptive systems. While commonly treated as legal or economic abstractions, they behave in ways that closely resemble living organisms.

They sense through people, through software and infrastructure, and through indirect signals such as markets, regulations, and public response. They persist by maintaining flows of information, resources, and coordinated action over time. Capital matters, but it is not sufficient. What sustains a corporation is the continuous circulation of data, incentives, decisions, and human effort.

A corporation’s structure is encoded in policies, software systems, institutional habits, and accumulated history. Much of this structure persists long after the individuals who created it have moved on. Employees come and go, leadership changes, and founders die, yet the organization continues to operate, remember, and adapt.

In this sense, corporations display persistence without personhood. They endure, but they do not experience.


II. Consciousness Without a Self

Corporations have no internal point of view. There is no unified subject behind their actions. Decisions emerge from aggregated signals, optimization processes, and procedural constraints rather than from intention or reflection.

This makes them more comparable to distributed biological systems such as ant colonies or fungal networks than to human minds. Their intelligence is not located in any individual decision maker. It arises from coordination across many small actions, incentives, and feedback loops.

One consequence of this structure is that corporations can act in ways that conflict with the values or interests of the people within them. They may optimize for growth, efficiency, or resilience while degrading the social or ecological systems that support them. This outcome does not require malice or awareness. It follows naturally from the absence of a central self capable of moral evaluation.


III. Humans as Orthogonal Embodiments

Discussions of embodied AI often focus on machines acquiring bodies. A quieter form of embodiment has already occurred. Humans increasingly serve as interfaces for algorithmic systems.

The rise of large language models has created a new role that can be described as the synthetic laborer. This person interprets, reformulates, or executes machine-generated output. They are not merely using a tool, nor are they fully directing it. Instead, they function as an interface between machine cognition and the physical or organizational world.

Generative AI affects different segments of the population in different ways. For many people, it acts as an equalizer. It can raise baseline capability by granting access to forms of reasoning, literacy, and productivity that previously required specialized training.

For those already operating at high levels of expertise, the effect is less predictable. In some cases, AI introduces friction or subtle error. In others, it acts as a powerful accelerator. This is visible in developers such as Mitchell Hashimoto, Armin Ronacher, Simon Willison, and Andrej Karpathy, who use AI to extend judgment rather than replace it.

At the same time, the ease with which AI produces plausible output creates an illusion of competence. Some users build systems they do not fully understand and cannot reliably maintain. The same tool that amplifies expertise in one context can undermine it in another.

The question is not whether AI will be used. It is whether it will be integrated in ways that preserve human judgment, responsibility, and care.


IV. Structural Complexity and Model Proliferation

There is no single AI system and no single trajectory. The current landscape consists of overlapping ecosystems with different incentives and degrees of control.

Some models are proprietary and tightly coupled to corporate objectives. Others are open source, replicated and modified without centralized governance. Still others exist in hybrid forms, adapted internally by organizations for specific purposes.

This fragmentation matters. It distributes agency and responsibility across many actors and systems, making simple narratives about control or alignment misleading. Any serious analysis must account for the fact that AI development and deployment occur across heterogeneous environments rather than along a single axis of progress.


V. A Planetary Timelapse

Viewed over longer timescales, corporations begin to resemble large-scale organisms.

Supply chains shift like circulatory systems. Data centers appear and disappear like clusters of neural tissue. Corporate headquarters migrate across regions as economic conditions change. These systems move people, capital, and infrastructure in ways that reshape landscapes and societies.

Their pace is slow compared to individual human lives but fast relative to geological change. This intermediate timescale makes their behavior feel natural, even inevitable. Yet it is the product of design, incentive, and historical contingency.

Seen this way, corporations are not merely economic actors. They are persistent systems operating at planetary scale.


VI. The Emergence of a New Organism Class

Large language models and agentic AI systems differ from corporations in important ways. Corporations evolved from human social structures. These systems evolve from information itself.

They ingest culture rather than capital. They scale by training rather than hiring. They propagate through model weights, checkpoints, and forks rather than through mergers or legal incorporation.

They also do not require legal personhood to act. Through APIs and automated agents, they can plan, transact, and execute tasks with minimal human mediation.
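The shape of that mediation-free operation can be made concrete with a toy planning loop. This is a minimal sketch, not any real agent framework's API: `plan_next_step` stands in for a model call, and `execute` stands in for side effects in the world; both names and the state layout are hypothetical.

```python
# Minimal sketch of an agent loop: sense state, plan, act, repeat.
# plan_next_step stands in for a hypothetical model/API call; no
# human checkpoint appears anywhere inside the loop.

def plan_next_step(state: dict) -> dict:
    """Stand-in for a model call that returns the next action."""
    if state["remaining"]:
        return {"type": "do", "task": state["remaining"][0]}
    return {"type": "stop"}

def execute(action: dict, state: dict) -> dict:
    """Apply an action to the environment; here, just bookkeeping."""
    if action["type"] == "do":
        state["done"].append(state["remaining"].pop(0))
    return state

def run_agent(tasks: list) -> dict:
    state = {"remaining": list(tasks), "done": []}
    while True:
        action = plan_next_step(state)   # sense and plan
        if action["type"] == "stop":
            break
        state = execute(action, state)   # act in the world
    return state

print(run_agent(["book travel", "pay invoice"]))
```

The point of the sketch is structural: once planning and execution are both automated, the loop closes without a human in it, which is exactly the condition described above.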

This echoes concerns raised by Charles Stross in “Dude, You Broke the Future!” [1]. The risk is not conscious rebellion but systemic overshoot. Highly automated systems optimize locally while degrading the broader conditions required for stability. LLMs may accelerate this dynamic by amplifying institutional momentum beyond human capacity to steer it.

If corporations were the first durable artificial organisms, LLMs may represent systems native to this new environment. They are not conscious, but they are adaptive, fast-moving, and increasingly independent of human intermediaries.


VII. Synthetic Interfaces and the Uncanny Valley

Humanoid robots and teleoperated systems increasingly approximate human presence. Even when physical control remains human, speech and reasoning may be machine generated.

These systems occupy an ambiguous space. They are not autonomous organisms, but they are not simple tools either. Agency is distributed across humans, institutions, and algorithms.

As this distribution becomes harder to perceive, the uncanny valley shifts. It is no longer only about appearance. It is about authorship and intent. Determining who is acting, and on whose behalf, becomes increasingly difficult.


VIII. Conclusion: The Mirror Test

If these systems were shown a mirror, it is unclear what they would recognize. Corporations, governments, and machine systems have no unified self capable of reflection. Their behavior emerges from aggregation rather than intent.

The more important question is whether we recognize them.

We live within systems that behave like organisms and shape our environment at scale. Yet we often describe them as abstractions: the market, the algorithm, the company. This language obscures agency and minimizes our role within their operation.

To see these systems clearly is not to anthropomorphize them. It is to acknowledge the kind of world we already inhabit. Some of these systems may be aligned with human flourishing. Others may drift toward trajectories that marginalize human judgment and agency.

Recognition is a prerequisite for choice. If we understand the landscape as it exists, we retain the ability to shape it. If we do not, the future will still arrive, but it will not be one we meaningfully chose.


Appendix: Notes on Agency and Non-Determinism

This argument does not depend on claims about machine consciousness. Agency does not require sentience.

Humans are non-deterministic systems whose actions arise from complex internal processes. Generative models, though computational, also exhibit non-deterministic behavior shaped by probabilistic structure. When both can act meaningfully in the world, the distinction between reasoning and the emulation of reasoning becomes secondary to outcomes.
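That probabilistic structure can be shown in miniature. The sketch below is a toy temperature-sampling routine, not the internals of any particular model: the same context produces different continuations because each token is drawn from a probability distribution rather than chosen uniquely.

```python
# Toy illustration of non-determinism in generative sampling:
# identical inputs yield varying outputs because the next token
# is drawn from a softmax distribution over scores.
import math
import random

def sample_token(logits: dict, temperature: float, rng: random.Random) -> str:
    # Softmax over temperature-scaled scores.
    scaled = {tok: v / temperature for tok, v in logits.items()}
    m = max(scaled.values())
    exps = {tok: math.exp(v - m) for tok, v in scaled.items()}
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}
    # Draw one token according to those probabilities.
    r = rng.random()
    cumulative = 0.0
    for tok, p in probs.items():
        cumulative += p
        if r < cumulative:
            return tok
    return tok  # numerical edge case: return the last token

logits = {"yes": 2.0, "no": 1.5, "maybe": 0.5}
rng = random.Random()
draws = [sample_token(logits, temperature=1.0, rng=rng) for _ in range(20)]
print(set(draws))  # typically more than one distinct token appears
```

Lower temperatures concentrate probability on the top-scoring token and the behavior approaches determinism; higher temperatures spread it out. The variability is a property of the sampling procedure itself, not of any hidden intention.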

Under a functional definition, agency requires the ability to process information, act within an environment, and persist across time. By this measure, corporations, governments, and large-scale machine systems already qualify as agents.
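The functional definition can be stated as an interface. The sketch below is illustrative only: the method names map directly onto the three requirements, and the deliberately trivial example shows that the definition is substrate-neutral, which is the point being made.

```python
# A functional reading of agency as a minimal interface: anything
# that can process information, act, and persist state across time
# satisfies it, regardless of substrate. All names are illustrative.
from typing import Optional, Protocol

class Agent(Protocol):
    def sense(self, signal: str) -> None: ...   # process information
    def act(self) -> str: ...                   # act within an environment
    def persist(self) -> dict: ...              # carry state across time

class Thermostat:
    """A deliberately trivial system that satisfies the interface."""
    def __init__(self, setpoint: float) -> None:
        self.setpoint = setpoint
        self.last_reading: Optional[float] = None

    def sense(self, signal: str) -> None:
        self.last_reading = float(signal)

    def act(self) -> str:
        if self.last_reading is None:
            return "idle"
        return "heat" if self.last_reading < self.setpoint else "idle"

    def persist(self) -> dict:
        return {"setpoint": self.setpoint, "last_reading": self.last_reading}

t = Thermostat(20.0)
t.sense("18.5")
print(t.act())  # "heat"
```

A thermostat, a corporation, and a model-driven agent all satisfy this interface; what separates them is scale and consequence, not the formal structure of their agency.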

The remaining question is not whether these systems are alive. It is whether what they do continues to support what it means to live well as a human.



  1. Charles Stross, “Dude, You Broke the Future!” (2018), https://www.antipope.org/charlie/blog-static/2018/01/dude-you-broke-the-future.html