Will technology save humanity or kill us all? The recent discourse seems to be bifurcating into two violently different directions. It started with a16z’s techno-optimist manifesto, which is unapologetically pro-tech and engendered a backlash of responses. I read the manifesto and was somewhat surprised by the level of vitriolic reaction. To be candid, I’m more philosophically aligned with the pro-tech crowd and an optimist by nature. Reading some of the reactions, there are valid arguments, but I feel they miss the forest for the trees.
What do we mean by technology?
The word “technology” comes from Greek. The Greek “technología” is itself a combination of two other words: “techne,” which means art, skill, or craft, and “logía,” which means the study of. Technology is meant to make human life better.
I’m being a little facetious here, but I do think definitions matter very much in this case.
The physical world is filled with technology that we’ve invented for our benefit. Transportation, sanitation, agriculture, energy, and more are all in service of improving our lives. So at the most basic level, it’s hard to argue against continuously improving all technology.
The debate revolves specifically around software, whose recent advances are progressing at a torrid pace. My hunch is that people feel uneasy when they hear about technological progress precisely because of the harms (real and imagined) that software causes. I believe this stance is somewhat short-sighted: software is about to improve all the technologies we rely on and, dare I say, bring science fiction to life.
The harms are real
That said, there are of course some very real harms, and I hope I’m not treating them too lightly. For instance, time spent on screens is a significant societal problem. I imagine a large part of the discomfort and pushback against “techno-optimism” comes from the association of technology with screen time. As a new parent, I’m also terrified when I see other young children glued to a phone, and I have few reference points for what I should do. Short bursts of dopamine are leading to widespread screen addiction. That’s one end of the spectrum of harms; more corrosive uses like population control sit on the other.
A lot more research needs to be done here, and potentially regulation as well. I don’t think we understand the second- and third-order effects yet. I suspect we will find strong correlations with declining fertility due to fewer in-person social interactions, along with many other unintended consequences.
How do we solve these issues? Well, Pandora’s box is open and it can’t be closed. The only way to improve these tools is to invest more in them, making them safer and providing more powerful controls to the user. We’re driving cars before seat belts have even been invented, let alone mandated by law. This raises the question: how much should the government intervene and what role should it play?
AI regulation is premature (today)
What about AI becoming sentient and instantly killing us all? Many experts are vocal about the dangers of AI and are calling for the government to step in and regulate its development. I’m going to take the opposite stance and argue that regulating AI today is the more dangerous path.
My rationale is based on two points:
1) Centralizing control of these tools in the hands of a few corporations and governments will concentrate their power. What could possibly go wrong? It will unnecessarily slow down progress and create an oligopoly. Open source needs to be protected at all costs in order to avoid regulatory capture by a handful of big corporations.
2) These tools are on the brink of saving many lives. Slowing them down will cost very real human lives versus theoretical deaths from AGI. Take the example of self-driving cars. The technology has reached maturity and is fundamentally safer than human drivers. If the government impedes its progress, it will quite literally result in needless loss of life.
The same goes for healthcare. The role the government can play here is to remove barriers that were erected during the industrial age and help accelerate AI technologies in medicine.
Where I would focus government efforts is on more mature problems like social media addiction, where the dangers are better understood. We shouldn’t throw the baby out with the bathwater.
I have sympathy for the existential AI risk argument, partly because so many brilliant people believe it. It’s also an asymmetric risk: even if the chance is very small, it should be addressed. The question of AGI risk is way above my pay grade, but I tend to look at these technologies as tools that can do good as well as evil, just as fire can warm your home or burn it down. I have a sense that we will learn to coexist with AGI and that more good than harm will come as a result.
Moreover, we don’t need to wait for a theoretical and ill-defined AGI before real dangers start appearing. The tools don’t need to become sentient in order to harm us. The movie “Her” has already happened in real life. The promises and perils of AI are plentiful, with many moral and societal questions that are not yet properly understood. For instance, will the rise of extremely powerful digital companions further isolate us and reinforce the crisis of loneliness? How should autonomous weapon systems that kill without human intervention be programmed?
This leads to an important fork in the road: is there merit in at least slowing down development? Or should we go faster?
Is e/acc the answer?
Effective Accelerationism is the philosophy that we ought to accelerate technological development rather than slow it down. It stems from the belief that artificial intelligence will lead to a post-scarcity technological utopia. There’s a certain appeal to this concept for optimists like me, and there are also credible arguments to suggest we might be headed there.
Going back to the notion that technology is meant to improve everyone’s lives, we’re about to see radical advances in the physical world. Software has moved beyond the screen and is improving hardware. We’re on the cusp of technology having a dramatic effect on almost every aspect of society. It’s not just commercial applications: every major scientific field will see big breakthroughs thanks to AI.
From biology to agriculture to clean energy, the very way we conduct science is being upgraded by these new tools. Technology by its nature creates abundance, and we are about to have limitless intelligence powered by nearly free energy before the end of the century. This isn’t a panacea, of course, and it doesn’t solve very real social inequalities. But everyone should stand to benefit as major aspects of society improve, like a rising tide that lifts all boats.
The optimist in me is incredibly excited by what lies ahead. Will technology save the world? I think it will. The fact that it’s even a matter of debate shows how much it has progressed in recent years. In any case, I, for one, welcome our new AI overlords.