The Technorealist alternative
Safetyism and Techno-optimism aren’t the only options for dealing with AI risks. Technorealism is the framework needed to guide us through these issues.
The day before Sam Altman was pushed out of OpenAI in a boardroom coup, the UK announced that it would not pass AI regulation for fear of stifling innovation. The Sunak Ministry had considered the issue carefully, struck an agreement with AI companies to allow testing, and decided to err on the side of innovation, leaving the door open to adapt as circumstances change. This seems sensible, I remember thinking as I read the FT report.
As of this writing, Sam Altman is back at OpenAI with an updated board. And yet the most significant legacy of all the drama may be public awareness of the debate over AI risks. Like a devil on each shoulder, the pessimists say “shut it down” while the optimists say “get out of the way and accelerate.” Both sides can seem dogmatic and juvenile at times. Where is the sensible pro-technology position like PM Rishi Sunak’s? Where is the “adult in the room” energy to accelerate and also address issues as they come up? Where is the concrete evidence backing up the assertion that AI poses existential risks in the first place?
The purpose of this essay is to put a stake in the ground on a pragmatic, pro-technology, geopolitically-aware position that is not reflected in today’s polarized discourse about AI and other technologies. I am calling this Technorealism.
The technology debate of the century
The debate over AI risks reflects competing visions for how to deal with the risks of powerful technologies. On one side are “Safetyists” who believe that some technologies pose existential risks to humankind and that these risks must be contained. To mitigate these risks, many Safetyists entertain heavy-handed solutions such as global governance, policing, or the deliberate deceleration of some types of innovation. The Safetyist position is best understood through a 2019 essay by Oxford philosopher Nick Bostrom called “The Vulnerable World Hypothesis” or VWH for short.
On the other side of the debate are “Techno-optimists” who believe technology innovation is imperative to human flourishing. This position is best understood through Marc Andreessen’s “Techno-Optimist Manifesto,” which champions aggressive innovation through “the techno-capitalist machine.” For Techno-optimists, innovation is a cultural issue more than anything. They see themselves as pushing back against a tide of stagnation, pessimism, and meddlesome regulators. Their enthusiasm for AI and other technology is compelling, but they have little to say about policy or geopolitics.
Beyond the false binary
The boardroom drama at OpenAI revealed the stakes of this divide. The board originally booted Sam Altman reportedly in part out of safety concerns following a rumored breakthrough in AI technology called Q* (Q-star).
An anonymous “e/acc” (effective accelerationism) account on X stated the view of many Techno-optimists at the time:
“As I see it now, this is war. Ideological, philosophical, memetic. It’s not a war for land, power, or money (even though these are at stake in the short term). It’s a war of ideas and principles, a war that will define the future of our civilization (I am dead serious on this), a war for the cultural zeitgeist. On one side, we have e/acc and Techno-Optimism that embrace technological advancement and social change as forces for progress. On the other side is Effective Altruism (EA), decels, wordcels, extinctionists, which offer a different perspective focused on what they claim is evidence-based solutions to pressing issues.”
The problem with this framing is that it is a false binary. The Techno-optimists’ rhetoric is compelling, but so is the philosophical reasoning of the Safetyists. It seems intuitive both that technology is good and also that some technologies are becoming so powerful and so accessible that they risk harming human civilization.
We need language to describe a position that is affirmatively pro-technology and also practical when it comes to dealing with technology risks and geopolitical considerations. We need technorealism.
Seven principles of technorealism
Technorealism is a practical pro-technology orientation that constantly asks: How can we accelerate technology innovation while addressing existential risks and geopolitical issues as they arise?
The following are seven working principles of Technorealism.
1. Technology acceleration is good.
Technorealists share Techno-optimists’ fundamental view that technology is essential to economic growth, human progress, and human flourishing. Technology helps humans do more with less. It is key to continued abundance and progress, particularly as the global population increases. Technorealism is pro-acceleration… but not blindly so!
2. Technology can pose existential and systemic risks.
Like Safetyists, Technorealists acknowledge that some technologies could pose existential risks that threaten humanity. They accept that Bostrom’s vulnerable world hypothesis might be true — and that if it isn’t true now, it could be in the future. To Technorealists, it seems naive not to acknowledge these risks considering the history of nuclear weapons or even the Covid experience. Technorealists accept that risk and uncertainty are inherent in technology innovation. They are clear-eyed about risks based on evidence.
3. Technology exists in geopolitical and socio-political contexts.
Imagine how different the world might look if Nazi Germany had won the race to develop nuclear weapons, or if Al-Qaeda had nuclear or bioweapon capabilities at the time of 9/11. Technorealists acknowledge the role of technology in geopolitics. They acknowledge that technology can be used as a tool of tyranny in some political systems. They acknowledge that some innovations can have disruptive effects by hurting democratic health, increasing inequality, or damaging people's mental or physical health. Technology is mostly good, but it doesn’t exist in a vacuum.
4. Centralized control is bad.
Bostrom’s vulnerable world hypothesis trades technology risks for global governance and heavy policing. But these come with their own risks of tyranny, dysfunction, and mono-culture. Peter Thiel’s January 2023 speech at Oxford responds to Bostrom on this point. “The big state is more of a problem than big tech companies and sometimes the state uses big tech as a vehicle for its power,” Thiel noted. One only needs to look at China’s surveillance state to see how technology is deployed to suppress human freedom.
At the same time, there are valid concerns about the concentration of power within technology companies too. Plutocracy fueled by technology monopolies is its own form of dystopia. Thus, Technorealists are wary of centralized power whether from government or monopolies. Technorealism embraces practical approaches that nurture democratic governance and market innovation.
5. Technology risks and issues are best approached with an adaptive, pragmatic orientation.
Technorealists recognize that there is no singular solution for dealing with the risks and issues presented by technology. They deal with risks and issues as they come. A world that is complex and adaptive requires a solutions orientation that is complex and adaptive. Policy-wise, this means embracing ruthless pragmatism and constant reorientation based on evidence. Think of this as a mashup of John Boyd’s emphasis on repeated reorientation and Elinor Ostrom’s work on polycentric governance to manage the commons.
6. State power cannot be ignored.
We are in an era where tech power is colliding with state power, and Technorealists see a role for the state both in regulating technology and in accelerating technology innovation. The latter point is worth emphasizing: Technorealists are in some sense more pro-acceleration than Techno-optimists because they know how to marshal state power to turbocharge innovation and understand the geopolitical considerations for doing so.
7. Pro-technology is pro-America.
America’s innovation advantage is one of our greatest strengths. Technorealists realize that stifling our own innovation can result in a win for our enemies, who aren’t as creative or effective at innovating and whose vision for the world may be contrary to democratic values. The techno-capitalist machine that Andreessen refers to is one of America’s greatest competitive advantages. It should be preserved and nurtured and, yes, improved upon, even as we address its risks and issues. Innovation is central to the continued success of America and its allies.
Technorealism compared
The following chart attempts to compare Technorealism to Techno-optimism and Safetyism across a range of issues. This is a working chart and these are generalizations, but here is the point: Technorealism is a coherent POV that deserves a place in the discourse.
Arguments against Technorealism
Friends I’ve discussed this with have raised three counterarguments, which I will address here.
Safetyists might say Technorealism’s pragmatic approach to containing risks is not sufficiently proactive. Coordination problems across governments are so severe that, for example, an Ebola-like global virus could not be stopped without some sort of global governance. But Safetyists overlook the risks inherent in empowering a global government and in heavy-handed regulation. Complex adaptive governance made up of multiple actors — what Ostrom refers to as polycentric governance — is both more politically viable and would do a better job of containing risks anyway.
Techno-optimists, on the other hand, might argue that Technorealism is too much of an invitation to regulate and meddle. But would Techno-optimists not seek to contain nuclear or chemical weapons? Would they oppose non-proliferation treaties? How would they have handled Covid? The reality is that Techno-optimism is effective as a vibe-based memetic movement to push back against stagnation, but it offers little from a policy standpoint. Technorealism can be seen as a policy-minded, realist-oriented fork of e/acc.
A more biting counterargument, which could come from either camp, is that Technorealism is an excuse to embrace the status quo. It shouldn’t be surprising that the Technorealist view tends to look like something in the real world! But while Technorealism may reflect the current mindset of some political leaders, this doesn’t mean Technorealists blithely accept the status quo or have no agenda to improve upon it. Again, the core question Technorealists ask is: How can we accelerate technology innovation while addressing existential risks and geopolitical issues as they arise?
Conclusion: Oxford meets Stanford in Sunak
Technorealism is not an excuse for inaction but rather a framework for navigating these issues. Applied to AI, a Technorealist policy agenda might not only monitor risks but look for ways to accelerate innovation by removing hardware bottlenecks, boosting computing power, and addressing strategic supply chain issues. It might address geopolitical considerations through policies that prevent foreign adversaries from stealing our technology, which has been all too easy over recent decades.
This seems to be the general direction Rishi Sunak is heading in the UK. Is he an appropriate standard-bearer for Technorealism? Maybe. Sunak is pro-technology, pragmatic, and a “realist” about risks and geopolitics. Even his educational background — Oxford undergrad, Stanford MBA — represents a synthesis of forces in the debate.
As a society, we need to be realistic about the essential role of technology in enabling human progress and flourishing. Yes, we should accelerate technology innovation. Yes, we should push past stagnation. And yes, we should be open to the possibility of existential risks and be more geopolitically aware, too.
There is no simple solution for navigating the world-changing possibilities and challenges raised by powerful technologies. It is a formidable collective challenge. The Technorealism framework is a compass that can guide us, though, and should be part of the conversation when it comes to AI and other technologies.
===========================================
👋 Thank you for reading this Monday Mandate. Please like, share, or comment to add to the conversation. If you have friends or colleagues who want to be on the list, they can sign up at boydinstitute.org
Recommended reading
The Vulnerable World Hypothesis, by Nick Bostrom
The Techno-Optimist Manifesto, by Marc Andreessen
Elinor Ostrom’s Nobel Prize Lecture
Pausing AI Developments Isn't Enough. We Need to Shut it All Down, by Eliezer Yudkowsky
The Precautionary Principle, by Nassim Taleb, Joe Norman, and others
The Technopolar Moment, by Ian Bremmer
Conversation starters
What do you think of Technorealism?
How would you update or revise the core principles?
What kind of Technorealist policy agenda would you like to see?