
Imperfect abstractions: On “Software in the natural world”

Posted Jun 25, 2024 by Izzy Meckler

There is an interesting paper from earlier this month called “Software in the natural world: A computational approach to hierarchical emergence” by Rosas, Geiger, Luppi, Seth, Polani, Gastpar, and Mediano.

They are attempting to formalize the phenomenon of processes which possess their own logic, despite being “actually” implemented in terms of a more fundamental micro-scale logic.

For example, computers can be understood as if they were driven by the programs they run rather than by the physics of the atoms that make them up.

This is a longstanding interest of mine. I have mostly not published my writing on the subject on this blog, apart from this post from April 2023, which formalizes the idea that any “logic” embedded in the universe will eventually decohere.

The definitions I make are closely related to those made in the “Software in the natural world” paper.

My own motivations are the formalization of the kind of arguments made by Marxists in understanding history and society: the idea of capital as an “automatic subject” which can execute itself regardless of the intentions of the people participating in its execution; the possibility of a “capitalist mode of production” persisting despite changes in its “implementation” (in culture, technology, institutional arrangements, specific relationship with nature, etc.); the definition of “contradiction”; the clear explication of the irresolvable contradiction between the process of capital and nature; the contradiction between productive forces and relations of production; etc.

All of these require a notion of logics which exist on different levels of abstraction, and a way of relating these levels of abstraction, similar to what the authors develop in the Software paper.

Relating levels of abstraction

The Software paper relates levels of abstraction in the following way. Say $X = (X_t)_{t \in \mathbb{Z}}$ is the micro-level process, where each $X_t$ is a random variable. A coarse-graining of $X$ is a stochastic process $Z$ with $Z_t = g(X_t, X_{t-1}, X_{t-2}, \dots)$, where $g$ takes an infinite sequence of states of $X$ and outputs a state of $Z$. In other words, a coarse-graining of $X$ is a stochastic process $Z$ which at every moment is a deterministic image of the micro-history up to that moment.
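
To make the definition concrete, here is a minimal toy sketch (mine, not the paper's): the micro-process emits a pair of independent bits at each step, and the coarse-graining $g$ happens to look only at the most recent micro-state, reporting its parity.

```python
# Toy micro-process X and coarse-graining Z_t = g(X_t, X_{t-1}, ...).
# Assumption for illustration: this g only uses the most recent micro-state.
import random

def micro_step():
    # Micro-state: two independent fair bits.
    return (random.randint(0, 1), random.randint(0, 1))

def g(history):
    # Deterministic function of the micro-history; here, the parity of the
    # current micro-state. Any deterministic function of the full history
    # would qualify as a coarse-graining.
    x1, x2 = history[-1]
    return (x1 + x2) % 2

history, Z = [], []
for t in range(10):
    history.append(micro_step())
    Z.append(g(history))
print(Z)   # one sample path of the coarse-grained process
```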

A coarse-graining is informationally closed (one of several essentially equivalent formalizations of what it means for a process to exist independently of its implementation) if knowing the history of the micro-level process $X$ yields no additional information about the future of $Z$ beyond that which is already provided by the history of $Z$. Formally, $Z$ is informationally closed if for all $L \in \mathbb{N}$

$$ I((X_t, X_{t-1}, \dots); (Z_{t+1}, Z_{t+2}, \dots, Z_{t+L}) \mid (Z_t, Z_{t-1}, \dots)) = 0. $$
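
In practice this condition can only be checked in a truncated form from data. The sketch below is my own construction, not the paper's: it simulates a toy micro-process whose coarse-graining is informationally closed by design, then forms a plug-in estimate of the conditional mutual information with histories truncated to a window $W$ and the future truncated to $L$ steps; the window sizes and the toy process are assumptions.

```python
# Plug-in estimate of I(X-history ; Z-future | Z-history) from samples,
# with histories truncated to a window W and the future to L steps.
import random
from collections import Counter
from math import log2

def simulate(T):
    """Toy micro-process X_t = (a_t, b_t); coarse-graining Z_t = a_t."""
    a, xs, zs = 0, [], []
    for _ in range(T):
        a ^= random.randint(0, 1)    # macro bit does a lazy random walk
        b = random.randint(0, 1)     # fresh micro-level noise, irrelevant to Z
        xs.append((a, b))
        zs.append(a)
    return xs, zs

def cond_mi(samples):
    """Plug-in estimate of I(A; B | C) in bits, for discrete triples (a, b, c)."""
    n = len(samples)
    pabc, pac, pbc, pc = Counter(), Counter(), Counter(), Counter()
    for a, b, c in samples:
        pabc[(a, b, c)] += 1
        pac[(a, c)] += 1
        pbc[(b, c)] += 1
        pc[c] += 1
    return sum((k / n) * log2(k * pc[c] / (pac[(a, c)] * pbc[(b, c)]))
               for (a, b, c), k in pabc.items())

W, L = 2, 2                                   # truncation choices (assumptions)
xs, zs = simulate(200_000)
triples = []
for t in range(W - 1, len(zs) - L):
    x_hist = tuple(xs[t - W + 1:t + 1])       # (X_{t-W+1}, ..., X_t)
    z_hist = tuple(zs[t - W + 1:t + 1])       # (Z_{t-W+1}, ..., Z_t)
    z_fut  = tuple(zs[t + 1:t + L + 1])       # (Z_{t+1}, ..., Z_{t+L})
    triples.append((x_hist, z_fut, z_hist))
print(cond_mi(triples))   # near 0 (up to estimation bias) for this closed Z
```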

This is a very nice formalization of the idea of a process being insulated from the details of its implementation. However, it is far too strict for my purposes (although it is extremely useful as an ideal notion).

In reality, macro level processes are only mostly causally insulated from their micro-level implementations. A stray cosmic ray can cause a computer system to flip a bit and betray its physicality.

Imperfect abstraction

One obvious move is to relax the above definition of perfect abstraction to one of imperfect abstraction. We can say $Z$ is $\epsilon$-informationally closed if

$$ I((X_t, X_{t-1}, \dots); (Z_{t+1}, Z_{t+2}, \dots, Z_{t+L}) \mid (Z_t, Z_{t-1}, \dots)) < \epsilon(L). $$

That is, the additional information that the micro-scale history provides about the macro-scale future is small. We allow $\epsilon$ to depend on $L$ to account for the fact that the correlation with the micro-scale may show itself more and more on the macro-scale as we look farther into the future.
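
Continuing the estimation sketch above, checking $\epsilon$-informational closure then amounts to a thresholding step; the tolerance schedule $\epsilon(L)$ used below is an arbitrary assumption.

```python
# Hypothetical check: call Z eps-informationally closed (up to the horizons
# considered) if the estimated leakage stays under eps(L) for every L.
def is_eps_closed(cmi_at_horizon, eps):
    """cmi_at_horizon: dict mapping horizon L to an estimated conditional MI."""
    return all(value < eps(L) for L, value in cmi_at_horizon.items())

# Example with made-up estimates and a tolerance that grows linearly in L.
print(is_eps_closed({1: 0.002, 2: 0.005, 3: 0.011}, eps=lambda L: 0.01 * L))
```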

What if we start with an abstract model?

Fix a micro-scale process $X$, and a coarse-graining $Z$ with associated function $g$.

A common practice in science is to create a model of the evolution of a coarse-grained state, which is then evaluated against the actual evolution of the coarse-grained state in the real world.

In other words, we come up with an explicit description of a stochastic process $\widehat{Z}$, and then compare its evolution to the observed coarse-grained data $Z = g(X)$.

Since $\widehat{Z}$ has an explicit description, it provides conceptual understanding of the nature of the process it models. Often it is given by a Markov model with a transition function $\tau$ so that $\widehat{Z}_{t+1} = \tau(\widehat{Z}_t)$. Such a model has an autonomous logic which is already independent of any potential implementation.
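
As a cartoon of such an explicitly described model (my own toy, not an example from the paper), here is a three-state Markov model whose transition function $\tau$ is deterministic; the state names are invented for illustration and refer to no particular micro-scale implementation.

```python
# A macro model Z-hat given by a deterministic transition function tau.
# Its logic is stated entirely at the macro level, with no reference to
# any micro-scale implementation.
STATES = ["expand", "peak", "contract"]

def tau(z):
    # Z-hat_{t+1} = tau(Z-hat_t): a fixed cycle through the macro-states.
    return STATES[(STATES.index(z) + 1) % len(STATES)]

z, trajectory = "expand", ["expand"]
for _ in range(6):
    z = tau(z)
    trajectory.append(z)
print(trajectory)
```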

In reality, the observed coarse-graining $Z$ will typically be too noisy to admit a clean explicit description. Unwanted details from the micro-scale will leak into low-probability events, or will distinguish states that should really be identified.

The framework in the Software paper does not really give us tools for comparing the model $\widehat{Z}$ with the reality $Z$, but we should be able to make a claim like: if $\widehat{Z}$ is close in some sense to $Z$ (e.g., in terms of KL divergence), then $Z$ will be approximately informationally closed.

This is because $\widehat{Z}$ evidently does not depend on micro-scale details, so if $Z$ is close to it, $Z$ should likewise depend only weakly on micro-scale details.
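
As a rough sketch of how such a comparison could be carried out, under the simplifying assumption that one-step transition statistics are an adequate summary of both processes, one can estimate the empirical transition kernel of the observed coarse-graining and measure its KL divergence from the model's kernel. The data and the model kernel below are made up for illustration.

```python
# Compare the model Z-hat with observed coarse-grained data Z via the KL
# divergence between their one-step transition distributions.
from collections import Counter, defaultdict
from math import log2

def empirical_kernel(zs):
    """Empirical estimate of P(Z_{t+1} = z' | Z_t = z) from a macro trajectory."""
    counts = defaultdict(Counter)
    for z, z_next in zip(zs, zs[1:]):
        counts[z][z_next] += 1
    return {z: {z2: k / sum(c.values()) for z2, k in c.items()}
            for z, c in counts.items()}

def kl(p, q, eps=1e-12):
    """KL(p || q) in bits for distributions given as dicts; eps guards zeros in q."""
    return sum(pv * log2(pv / max(q.get(z, 0.0), eps)) for z, pv in p.items() if pv > 0)

# Stand-ins: observed coarse-grained data Z = g(X), and a hypothetical model kernel.
observed_Z = [0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1]
model_kernel = {0: {1: 0.9, 0: 0.1}, 1: {0: 0.9, 1: 0.1}}

for z, p in empirical_kernel(observed_Z).items():
    print(z, kl(p, model_kernel[z]))   # per-state divergence of data from model
```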