On Thinking in the Limit
After physics, self-interest is right up there as a fundamental force
People who worry about AI often frame those worries in terms of reaching a future we expected to come much later “too fast,” i.e. experiencing a “century of technological progress in a decade,” an amount of change too difficult to steer or control in so short a time.
But some ask why the world “a century” from now has to be so dramatically different from the present, such that moving to it in a decade would be jarring. One thing they might point to is the level of technological change we saw between 1200 and 1300, let alone between 16,100 BC and 16,000 BC. For most of human history, technological progress was so close to non-existent that some people are even happy to debate whether a whole 300 calendar years were simply made up by a pair of emperors and the pope.
There are a couple of different places you can go from the observation that tech progress is rare and slow across history, but one that fascinates me is the idea, held by some, that we’re basically done: our current level of technology is at a qualitative peak, and most of what’s left to do is bring down costs, distribute what we have more widely, and increase reliability and resilience.
This view seems crazy.
It’s underpinned by some combination of hard natural constraints and inherent limits on what human society is able or willing to do. It leads to wild claims, like one a more-or-less professional technology commenter actually made on Twitter this year: that the porcelain cistern toilet, invented in the 19th century and refined to its current form by the mid-20th, will be a mainstay of life until the sun renders the Earth uninhabitable a billion years from now.
The logic goes: “this has been around for decades and today’s billionaires seem just as happy with it as poor people in rich countries, so why would we ever change it?”
I think this attitude reflects a failure to “think in the limit.” Thinking in the limit is a kind of big-picture thinking: you identify the general dynamics that have driven all the changes in history and apply those very-high-level dynamics to paint a broad picture of what the future will be like.
Thinking in the limit does not predict cistern toilets forever. Thinking in the limit predicts a very, very different world at some point. In particular, it predicts a world where we and/or our descendants take entirely different forms ourselves and live in radically different environments than those we’ve seen throughout natural history. It says life or intelligence or something like that will scale up dramatically and exert influence and control over much-finer-grained aspects of reality, down to something like the atomic level.
Below, I’ll try to define what I mean by thinking in the limit more precisely and discuss the main forces it says will drive the future before briefly gesturing at a couple of basic pictures of what the future might look like on this schema.
What is Thinking in the Limit?
What does the world look like when all the forces of nature and society have been so exhausted that we can reliably predict no further big changes?
I refer to whatever the answer is as "the limit" and thinking in the limit generally presupposes that there is such a state, that some civilization at some point will reach it, and that aspects of it are somewhat predictable. This approach isn't merely speculative futurism—it's an attempt to understand where fundamental forces inexorably lead when given sufficient time to play out.
In my experience, most people who think about the limit conclude that it presents a dramatically different future which entails a society far larger than ours and with much more fine-grained influence on the world it inhabits, down to the level of individual atoms. This vision contradicts the "tech plateau" view that assumes we've largely reached the apex of technological development, even along just a few dimensions, let alone all of them.
Below I'll say more about the relevant forces of nature and society and why I think they get us to an equilibrium like that.
Forces of Nature
Energy, Matter, and Time
In my understanding of physics, these seem like the truly "hard" constraints. The universe caps the amount of new matter or energy you can summon out of nothing at exactly zero, and you can’t move faster than light or otherwise change the flow of time. It’s generally agreed we can’t overcome these limits, but even if we could, the future would only be stranger. My point here is that it’s already likely to be plenty strange within these limits.
The main exercise of thinking in the limit is seeing how close technology can come to these constraints while still on balance serving some recognizably intelligent goal or social purpose (considering e.g. the costs of developing, maintaining, and using the relevant technology). This means imagining systems that approach perfect energy efficiency or complete utilization of available matter — not as fantastical constructs but as logical endpoints of ongoing technological trends.
What we see in the history of the natural world and human technology is increasingly efficient use of resources within these constraints towards some purpose: usually reproduction in nature and, more recently, the weirder things humans like, e.g. art, comfort, entertainment, and things instrumental to these like transportation and medicine. Simply extrapolating these very long-running trends points us towards a future where we get close to physical limits not too long from now on cosmic scales, and the reasons to think those trends will really halt forever seem quite flimsy by comparison.
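To make “not too long from now on cosmic scales” concrete, here’s a minimal back-of-envelope sketch in Python. The round-number inputs (roughly 20 TW of current human power use, the Sun’s total output, and a modest 2% annual growth rate) are my own illustrative assumptions, not figures from anyone quoted above:

```python
import math

# All inputs are rough, assumed round numbers for illustration only.
current_power_w = 2e13   # ~20 TW: approximate current human primary power use
solar_output_w = 3.8e26  # approximate total power output of the Sun
growth_rate = 0.02       # assumed 2% annual growth in energy use

# Years until energy use would equal the Sun's entire output,
# if the growth trend simply continued.
years = math.log(solar_output_w / current_power_w) / math.log(1 + growth_rate)
print(f"~{years:,.0f} years")  # on the order of 1,500 years
```

The specific number isn’t the point; the point is that even quite modest sustained growth collides with the hardest physical ceilings on timescales that are trivially short next to the billions of years the plateau view has to cover.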
Existence Proofs and Natural Extensions of Existence Proofs
One way to think about human technological progress specifically is that we notice nature making more efficient use of resources than we can and then find a way to harness the principles at stake for our own purposes. Nature provides "existence proofs" that certain capabilities are possible, and we extend these principles in novel directions.
For example, once you know how a bird flies, you can usually imagine a faster bird and eventually design planes around similar principles. The jet engine isn't found in nature, but the basic principle of powered flight demonstrably works, which gave us confidence to pursue and refine it. Similarly, we know fine-grained molecular manipulation is possible because life does it constantly.
We have things like fruit flies and aphids autonomously reshaping our whole world all the time, to say nothing of motor proteins that act as little robots inside of individual cells. These exist not as curiosities but as demonstrations that autonomous, self-replicating systems can exert outsized influence on their environments through distributed action at microscopic scales.
Evolution Itself
Evolution is basically the force of nature pushing things towards a limit — systems that make better use of the matter and energy available to them reproduce more and come to define their environment over time until they press against other natural limits. This process doesn't stop at "good enough." It continues indefinitely via intense competition for resources and ecological niches.
Eventually new systems mutate that can overcome the existing natural limits and the cycle presses forward. Any environment that appears stable is just waiting for the next innovation to bypass current constraints. The evolutionary pressure to use resources more efficiently doesn’t disappear no matter how comfy members of a single generation might feel.
Despite being a totally blind and dumb process, evolution got us humans and also aphids and motor proteins and many things in between. Humans, as we consciously try to solve our problems and satisfy our desires, are not nearly so constrained in what we could create or how long it would take us to create it.
Much like we invented planes to harness the capacities of birds for our own purposes, nothing in principle stops us from harnessing aphid-like technology to control more matter and energy for our purposes, though a reasonable question to ask is "will we have purposes that require aphid-level control over the world?"1 Looking at the trend, I think the answer is clearly yes.
Forces of Society
Society is all downstream of evolution, but it sure seems to have all the ingredients to give evolution a big boost and capture far more energy far more efficiently. After all, planes fly far faster than any bird, and solar panels capture sunlight tens of times more efficiently per unit area than most plants manage over a growing season. If we needed things like micro-organisms to achieve our goals, it seems like we could improve upon their potency as well – in some ways mRNA vaccines do a crude version of this.
People Want More From Less
The commenter above said he was mainly coming from the perspective that people don't want to live in dramatically different worlds, and so inferred that tech progress will stall out for lack of demand. I think this is naïve.
Ancient Greeks would probably not have wanted to be as sedentary and locked onto screens as we are now, but their descendants reliably became people who live as we do.
The reason is that this all happened incrementally. At first, we just wanted to weave garments or mill grain a little more efficiently, so we made looms and steam engines. We then found a bunch of other uses for these, including uses that raised new threats like industrial-scale war, which in turn created urgency around defensive technologies like computers (if the Bletchley Park story is roughly right). The cycle of incremental improvements → new use cases turned on, and our desire to satisfy social and physical needs drove us to jump at new technologies even though our ancestors two steps removed wouldn't have seen a use for them at all.
People Are Curious
The basic insight is that if you combine a naturally intelligent and curious species with problems (and I don't think many deny that we still have an absolute plethora of problems of one kind or another), you get new inventions with multiple use cases, and some of our old desires map onto those use cases. Then new problems arise, and before too long the world looks quite different.
Curiosity is perhaps the most underrated force in technological development. Even without immediate application, humans investigate and tinker because we're driven to understand. This exploration often yields unexpected applications that create new desires or solve problems we didn't know we had.
Feedback Loops
The new tools we stumble on this way also make yet newer tools. There's a lengthy debate about whether ideas are getting harder to find or whether progress has been slowing down (perhaps even trending towards a halt) since roughly 1970, but we're still discovering new things, and it seems entirely possible that one of them kickstarts this cycle again the way the steam engine once did.
AI is a particularly promising candidate for reasons you can imagine. Intellectual work has always pulled more than its share of the weight in technological progress. So by hypothesis, having a massive flood of genius-level would-be inventors enter the world does seem like a potential gamechanger.
The key insight across all these dynamics is that we don't merely have the capacity for innovation that could in theory replicate and outstrip everything evolution has managed to create so far; we also keep discovering, even in incremental technologies, new desires and options that turn more matter and energy towards ourselves and what we want to achieve.
I remember this TV news special from the mid 2000s about how much stuff modern Americans consume where they laid out on a lawn e.g. all the bottles of laundry detergent you'd go through in your life alongside all the other equivalent staples of life. The schtick was that it added up to a lot. I think an ancient looking at this would find it totally bizarre and alien to need that much random plastic in your life (whatever plastic is). I basically expect our reaction to how future people live to be the same.
In my imagination, much of the experience future people have will be mediated through something we'd most nearly recognize as a computer, on which they have extremely intense, detailed, and immersive virtual experiences. It seems like our ability to simulate virtual environments will go far beyond what our flesh-and-blood bodies can feel out in the everyday world, but it will be very computationally intensive. So we will scale up physical computing as much as we can, as everyone in the world wants these vivid experiences of who-knows-what.
This is all to say nothing of the AIs presumably doing all the work and needing lots of compute to run themselves. But whether or not it's compute, the point is that reality is made of a limited supply of matter and energy the actors in the world can put towards their ends.
History has been a long trend of natural and human actors putting a greater portion of the available matter towards their ends, and it's clear how and why. To our unlimited desires and problems, we set our limited matter. Technology has given us ever more accessible, cheap, and fine-grained control over that scarce matter, and there are examples from technology and nature already suggesting that control can get arbitrarily more fine-grained, down to the level of atoms at least. Just set your autonomous nanobots to the task of remaking the world in the image you want.
The Long Run Equilibrium
In the truly longest run, thinking in the limit says the laws of physics — entropy in particular — win out and the universe decays into a uniform nothingness.
But for a long time before that, various kinds of Earth-originating life and/or technology will achieve a steady state that could in theory hold for hundreds of billions of years before physics meaningfully messes with it. It might entail some large setbacks or reversions for what we might call social reasons (whatever the future equivalents of wars and plagues will be), but the incentives will push us back to some particular, meaningfully stable way of leveraging energy and relating to matter.
The core point across all of the many possible scenarios here is that life/technology will really leverage every bit of spare energy it can. Nature in many ways is already on this trajectory – various kinds of bacteria, fungi, plants, and viruses have increased the share of the Earth's energy they dominate in their unbounded drive to reproduce.
A major limitation these systems have is defending themselves against external extinction threats like asteroids and supervolcanoes. Perhaps that's where humans come in. Intelligence is an adaptation that allows agents to predict increasingly rare and novel threats or opportunities and thereby convert matter and energy into proliferation and longevity.
Two high level futures I think occupy outsized amounts of the admittedly huge possibility space here are Malthusian and non-Malthusian outcomes. The Malthusian outcome is one we're familiar with from nature: it's where organisms reproduce until they saturate their environment and then constantly bid down their costs of living until you have huge populations of beings living at subsistence in constant tension with the limits nature imposes on them and the limits they impose on each other. It is generally an ugly world.
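To see that dynamic rather than just assert it, here's a toy model in Python (every parameter is an arbitrary assumption, chosen only to show the shape of the outcome): population grows whenever income per head exceeds subsistence, and income per head falls as more beings split a fixed resource base.

```python
# Toy Malthusian dynamic. All numbers are arbitrary illustrative assumptions.
RESOURCES = 1_000_000.0    # fixed total output the environment can support
SUBSISTENCE = 1.0          # income per head at which population stops growing
GROWTH_SENSITIVITY = 0.05  # how strongly population responds to surplus income

population = 1_000.0
for generation in range(500):
    income_per_head = RESOURCES / population
    # Population expands in proportion to the surplus above subsistence
    # (and would shrink if income dipped below it).
    surplus_fraction = (income_per_head - SUBSISTENCE) / income_per_head
    population *= 1 + GROWTH_SENSITIVITY * surplus_fraction

# Population converges towards RESOURCES / SUBSISTENCE, with income per head
# pinned near bare subsistence.
print(round(population), round(RESOURCES / population, 3))
```

The exact numbers don't matter; the point is that unmanaged competition over a fixed resource base has subsistence as its attractor, which is what the next possibility tries to avoid.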
Another possibility is that some set of intelligent actors — perhaps a society we would recognize — bend technology to their will and cut off competitive dynamics before they devolve into pejorative Malthusianism. The pressures to revert to Malthusianism will always persist, but real intelligence is quite a new phenomenon in evolutionary terms, and I personally hold out hope that it can rationally resist those pressures. Everyone with the power to act can see a race to the bottom on the horizon and avoid taking the first steps down that undesirable path.
An important feature of that future or any other future will be control over all the nooks and crannies of the universe from which threats might emerge. I’ll concede that meaningfully threatening nooks and crannies would have to be at least somewhat large, maybe planet-sized, but the good things we can drive into being scale deep down to the level of atoms. What’s the smallest mind you can make? What are the returns to almost costless additional compute or energy to a society when you have intelligent nanobots to bring those atoms into a better order?
These quantities all seem high to me. Intelligence seems like not much more than creative and efficient use of what is available to you, plus the ability to make more available to you. Whether you’re birthing new minds or just adding new flourishes to the experience of existing minds, it seems both possible and desirable. So I think the burden of argument falls pretty hard on the side saying that either the innovation cycle stops near where we are, *or* we reach the point where we could arrange the accessible universe down to the atom in a way we’d prefer, at subjectively no net cost, and yet, out of some background laziness or some such, we simply decide to leave matter lying around or hew to our old traditions in a universal, unenforced boycott of all the new and doubtless fascinating possibilities in the world.
When matter and energy themselves become scarce and our technological power is near its physical limits, we (or some set of actors) will be able to bend the whole of nature towards the things we want and find valuable. By hypothesis it will be cheap to do so, and I think we will.
Conclusion
The claim that we're at or near the end of meaningful technological advancement ignores both the fundamental forces that have driven progress thus far and the vast gulf between our current capabilities and what the laws of physics actually permit. Thinking in the limit suggests we're nowhere near a technological plateau—we're merely at another waypoint in a much longer journey.
The cistern toilet isn't going to be with us for a billion years. Neither, likely, are many other fixtures of modern life we take for granted. What will replace them isn't necessarily predictable in specific detail, but the broad strokes—increasingly efficient utilization of energy and matter, more fine-grained control over physical reality, and radically different forms of intelligent life—follow logically from the dynamics that have shaped our world thus far.
Much as I’m loath to say it because it is so mundane for purposes of this post, we obviously want nanobots for healthcare to clear our arteries and surgically pick away at tumors without harming healthy tissue.
This is definitely the fun question. I really like Keynes (“Economic Possibilities for our Grandchildren”) on this question: http://www.econ.yale.edu/smith/econ116a/keynes1.pdf
It’s usually mocked today because, while he was right that we are fantastically richer today than in 1930, we still work quite substantial hours and the “ordinary person with no special talents” is still remarkably economically productive.
But I think there’s a sense in which we should ask whether we are approaching the point where we’ve solved the “economic problem.” I tend to think we will, and that while there will still be wants and progress on the economic end, much of the rest of history might proceed along an axis of thinking about the arts or purpose and, in general, finding meaning beyond prosperity.
But perhaps hedonism really is the final frontier.