"Are we at the Fermi Paradox filter moment?"
BWX Technologies, a company that builds power systems for submarines and aircraft carriers, is developing a mobile reactor for the US military that will fit into a standard shipping container and be delivered on a truck. It may not be long before you can order one for your RV on Amazon. The company expects installation to take about three days, and the unit will run for three years before it must be sent back to a service center for refueling.
Artificial intelligence may accelerate BWX’s timetable and provide even more engineering marvels. Is that a good thing? This past week I took a deep dive into the three-and-a-half-hour podcast conversation between Nate Hagens and Daniel Schmachtenberger on how artificial intelligence accelerates climate disruption and pushes us well beyond the many planetary boundaries we have already breached. For the sake of those with less time to spare, let me extract some of the more salient parts of that discussion from The Great Simplification. I recognize these are long sets of quotations, but they capture some of the best parts of a much longer and far more intricate and nuanced conversation.
NH: So humans are a social species and in the modern world, we self-organize as family units, as small businesses, as corporations, as nation states, as an entire global economic system around profits. Profits are our goal and profits lead to GDP or GWP globally. And what we need for that GDP is three things. We need energy, we need materials, and we need technology or in your terms, information. And we have outsourced the wisdom and the decision-making of this entire system to the market. And the market is blind to the impacts of this growth. We represent this by money, and money is a claim on energy. And energy from fossil hydrocarbons is incredibly powerful, indistinguishable from magic, effectively, on human time scales. It’s also not infinite. And as a society we are drawing down the bank account of fossil carbon and non-renewable inputs like cobalt and copper and neodymium and water aquifers and forests, millions of times faster than they were sequestered.
So there is a recognition that we’re impacting the environment and all of the risk associated with this. We label it the metacrisis or the polycrisis or the human predicament, but they’re all tied together. The system fits together, human behavior, energy, materials, money, climate, the environment, governance, the economic system, et cetera. So right now, our entire economic imperative as nations and as a world is to grow the economy partially because that’s what our institutions are set up to do, partially because when we create money primarily from commercial banks, increasingly from central banks, when governments deficit spend, there is no biophysical tether and the interest is not created. So if the interest is not created, it creates a growth imperative for the whole system and we require growth.
DS: GPT-3 getting a hundred million users in — I forget exactly what it was now — six weeks or something, which was radically faster than TikTok’s adoption curve, Facebook’s, YouTube’s, cell phones, anything, which were already radically faster than the adoption curve of oil or the plow or anything else. So world-changing powerful technologies at a speed of deployment, which then led to other companies deploying similar things, which led to people building companies on top of them, which leads to irretractability.
And so the speed of what started to happen between the corporate races, the adoption curves, and the dependencies understandably changed the conversation and brought it into the center of mainstream discourse, where it had previously been only in the domain of people paying attention to artificial intelligence and its associated risks or promises.
There are clusters of cognitive biases that go together to define default worldviews. And they’re not a single cognitive bias, they’re a kind of bunch of them. … One of them that I think is really worth addressing when it comes to AI is a general orientation to techno-optimism or techno-pessimism, which is a subset of a general orientation to the progress narrative. … I would argue that there are naive versions of the progress narrative: Capitalism is making everything better and better. Democracy is, science is, technology is. Don’t we all like the world much better now that there’s novocaine and antibiotics and infant mortality’s down and so many more total people are fed and we can go to the stars and blah, blah, blah?
Obviously there are true parts in everything I just said, but there is a naive version of that that does not adequately factor in all the associated costs. … One is costs like climate change and the oceans and insects, and the other is the one-time subsidy of non-renewable energy and inputs and the source capacity of the earth, and those are not infinite.
If you ask the many, many indigenous cultures who were genocided or extincted or who have just remnants of their culture left, or if you ask all of the extinct species or all of the endangered species or all of the highly oppressed people, their version of the progress narrative is different.
And just as the story of history, it’s written by the winners, not the losers. But if you add all of those up, the totality of everything that was not the winner’s story is a critique of the progress narrative. And so one way of thinking about it is that the progress narrative is there are some things that we make better. Maybe we make things better for an in-group relative to an out-group. Maybe we make things better for a class relative to another class, for a race relative to another race, or for our species relative to the biosphere and the rest of the species. … Or for our generation versus future generations. Short-term versus long-term.
We’re not saying that nothing could progress, we’re saying ‘Are we calculating that well?’ And if we factor all of the stakeholders, meaning not just the ones in the in-group but all of the people, and not just all the people but all the people into the future, and not just all the people but all the other life forms and all of the definitions of what is worthwhile and what is a meaningful life, not just GDP, then are these things … actually creating progress across that whole scope?
Picking up on my critique of nuclear energy from last week, my primary complaint was directed precisely at this point. The boundaries we have set for measuring societal and ecological impacts are far too narrow. We are willing to take risks with our own safety, possibly to obtain the creature comforts on offer or to reduce our carbon footprint, even at the price of more expensive electricity, but we are externalizing the real risks. We are not pointing the figurative revolver in a game of Russian Roulette at our own heads. We have pointed it at the head of our yet-to-be-born child, or any number of endangered species, or the entire ecological matrix that makes life possible. We have narrowed the boundaries of our value set to merely what is convenient and easily grasped. And then, willy-nilly, we are pulling the trigger. Click. Click.
The Hagens-Schmachtenberger discussion gets quite dense, but it is worth simplification, to borrow Hagens’ podcast’s title. Next week I may break it apart a little more, but let’s return to the discussion of AI, and how the world may be transforming rapidly before our eyes.
NH: In the short term, of course, I should advance AI applied to genomics to solve cancer, without thinking through the fact that the fourth-order effects might involve increasing bioweapons capability for everyone, and destruction of the world. So even the cancer solutions don’t matter in the course of those people’s lives. Is there enough perspective to be able to see how the things that seem wise from a narrow perspective actually look stupid?
DS: So when we talk about problems in capitalism, there are different expressions, but many of those problems can be seen in terms of environmental harm from optimizing narrow goals. … There was never a person or a group of humans that said, ‘Hey, let’s invent capitalism.’ It was always an emergent response to the challenges and the innovation and the coordination of intelligence toward problem-solving of the day, and it took on momentum, and then institutions and everything were built on top of it.
Rational actors will make a rational choice to utilize the resources that we have, intelligently, for the things that improve our lives the most, and that creates an incentive niche for people to innovate how to make goods and services that improve people’s lives more…. You get a benevolent god emerging from that decentralized collective intelligence, right?
The interview began with Daniel Schmachtenberger quoting the opening lines of Lao Tzu’s Tao Te Ching: “The Tao that can be spoken of is not the eternal Tao. The name that can be named is not the eternal name. The nameless is the beginning of heaven and earth.” Later, he returns to that verse.
DS: It is fair to say that the cause of the metacrisis and the growth imperative of the superorganism, or the capacity that gives rise to it, is that intelligence has created all the technologies — the industrial tech, the agricultural tech, the digital tech, the nuclear weapons, the energy harvesting, all of it — all those things. It has made the system of capitalism. It made the system of communism.
Now that system of intelligence takes corporeal capacities — things that a body could do — and externalizes them, the way that a fist can get extended through a hammer, or a grip can get extended through a plier, or an eye can get extended through a microscope or a telescope, or our own metabolism can get extended through an internal combustion engine. So it takes the corporeal capacity and extends it extracorporeally … in maximized recursion not bound by wisdom, driven by international multi-polar military traps, and markets, and narrow short-term goals, at the expense of long-term, wider values.
In the metacrisis there are many risks — synthetic biology can make bad pandemics, and extreme weather events can drive human migration and local wars, and this kind of weapon can do this, and this kind of mining can cause this pollution, and this kind of pesticide can kill these animals. Those are all risks within the metacrisis. AI is not a risk within the metacrisis; it is an accelerant to all of them.
AI is being used by all types of militaries, all types of governments, all types of corporations, for all types of purposes, achieving narrow goals, externalizing harm to wide goals. It’s an accelerant of the metacrisis on every dimension, and so now, as we take the intelligence that has driven these problems unbound by wisdom and we exponentialize that kind of intelligence, we get to see — whoa — superintelligence… with something that’s a trillion trillion times smarter and faster than humans. What goals are worth optimizing? It’s not global GDP, because I can increase GDP with war and addiction and all kinds of things and destroy the environment.
And so I say [to the AI instruction set] ‘Okay, it’s GDP plus Gini coefficient plus this other thing plus carbon removal plus whatever.’
Nope. There’s still lots of life that matters outside of those 10 metrics or 100 metrics that I can damage.
To improve [you’d need] a metric set that is definable. It’s like the Tao Te Ching — the Tao that is speakable in words is not the eternal Tao. The metric set that is definable is not the right metric set. So if I keep expanding the metric set to be GDP plus dot dot dot, I can still do a weighted optimization with an AI on this and destroy life. The unknown unknown means there will always be stuff that matters that has to be pulled in… the difference between the set of metrics you’ve identified as important and reality itself limits all your own models. That is not intelligence. That is wisdom.
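Schmachtenberger’s point (that a weighted optimization over any definable metric set can still damage what it does not measure) can be made concrete with a toy sketch. Everything here is invented for illustration: the metrics, the weights, the candidate actions, and the untracked “pollinators” value that the optimizer never sees.

```python
# Toy optimizer: score actions by a weighted sum of tracked metrics.
# Anything outside the metric set carries zero weight, so the optimizer
# will happily trade it away.
metrics = {"gdp": 1.0, "gini": -0.5, "carbon_removal": 0.8}  # weights

# Hypothetical actions: effects on tracked metrics, plus an untracked
# value ("pollinators") that matters but is not in the metric set.
actions = [
    {"gdp": 2.0, "gini": 0.1, "carbon_removal": 0.0, "pollinators": -5},
    {"gdp": 1.0, "gini": 0.0, "carbon_removal": 1.0, "pollinators": 0},
]

def score(action):
    # Weighted optimization over the definable metric set only.
    return sum(weight * action[name] for name, weight in metrics.items())

best = max(actions, key=score)
# The top-scoring action (1.95 vs 1.8) is the one that quietly
# destroys the unmeasured value.
```

Expanding `metrics` to include pollinators only moves the problem: whatever remains outside the set is still invisible to the optimizer, which is the “unknown unknown” in the quote above.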
To have portable nuclear power plants that can be plopped down into war zones today and homes tomorrow may seem intelligent. Lots of short-term gains. But is it wisdom?
In 1950 Enrico Fermi mused to his colleagues: the Milky Way is about 10 billion years old and 100,000 light-years across. If aliens had spaceships that could travel at 1 percent of the speed of light, the galaxy could have already been colonized 1,000 times. “So,” Fermi asked, “where is everybody?” If there were civilizations scattered across the stars by the billions, why haven’t we heard from them?
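Fermi’s round numbers are easy to check. A minimal sketch, using only the figures quoted above (a 100,000-light-year galaxy, 10 billion years old, ships at 1 percent of light speed):

```python
GALAXY_DIAMETER_LY = 100_000       # light-years across
GALAXY_AGE_YEARS = 10_000_000_000  # about 10 billion years
SHIP_SPEED_FRACTION_OF_C = 0.01    # 1 percent of the speed of light

# At 0.01c, covering one light-year takes 100 years, so crossing
# the whole galaxy takes:
crossing_years = GALAXY_DIAMETER_LY / SHIP_SPEED_FRACTION_OF_C  # 10 million

# How many such crossings fit into the galaxy's lifetime:
possible_crossings = GALAXY_AGE_YEARS / crossing_years  # 1,000
```

Ten million years per crossing, a thousand crossings in the galaxy’s lifetime: hence Fermi’s puzzlement that nobody seems to have shown up.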
Many explanations have been offered. According to the Drake equation, if a civilization could survive at least a century after developing radio transmission technology, there could be 10 such civilizations in our galaxy alone. But what if, after developing technology, advanced civilizations hit a biophysical wall and ceased to exist? Perhaps an advanced civilization cannot live for long after developing nuclear power, warming its climate, or otherwise soiling its nest. Is it such a stretch to say that we may be approaching the Fermi Paradox filter moment right now?
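The Drake equation itself is just a product of factors. The sketch below uses illustrative parameter values, assumptions chosen only so that the factors ahead of the lifetime term multiply to 0.1, reproducing the ten-civilizations figure for a 100-year lifetime; they are not canonical estimates.

```python
# Drake equation: N = R* * fp * ne * fl * fi * fc * L, the expected
# number of civilizations in the galaxy whose signals we might detect.
def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    # r_star:   star formation rate (stars per year)
    # f_p:      fraction of stars with planets
    # n_e:      habitable planets per star that has planets
    # f_l:      fraction of those where life arises
    # f_i:      fraction of those that develop intelligence
    # f_c:      fraction of those that become detectable (radio, etc.)
    # lifetime: years a civilization remains detectable
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

# Illustrative values only: the non-lifetime factors multiply to 0.1,
# so a 100-year detectable lifetime yields N = 10.
n = drake(r_star=1.0, f_p=0.5, n_e=2.0, f_l=1.0, f_i=0.5, f_c=0.2,
          lifetime=100)
```

The essay’s question is whether the lifetime term is the one that collapses: shorten `lifetime` and N falls with it, no matter how generous the other factors are.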
So, I asked ChatGPT. It replied: “As an AI language model, I don’t have real-time information or knowledge of specific events that have occurred after my last update in September 2021. At that time, humanity had not yet resolved the Fermi Paradox.”
Bard was more optimistic. “Only time will tell how close humanity is to crossing a threshold of the Fermi Paradox. However, the fact that we are even having this conversation is a sign that we are making progress. As our technology continues to advance, we will be able to search for extraterrestrial life more effectively. And who knows? Maybe one day we will finally find the answer to the Fermi Paradox.”
How very comforting.