Today we start our review of the book “Life 3.0” by Max Tegmark. This book claims that greater than human level intelligence is inevitable and then discusses what can be done to keep it safe. Part One introduces the book and discusses the prelude and the first chapter.
Click here to download the PDF transcript. This episode's transcript has no illustrations.
[Please note that this may not exactly match the audio. However, there should be no significant differences.]
Metaphysics of Physics is the much needed and crucial voice of reason in the philosophy of science, rarely found anywhere else in the world today. We are equipped with the fundamental principles of a rational philosophy that give us an edge. This may make us misfits in the mainstream sciences, but it also attracts rational minds to our community.
With this show, we are fighting for a more rational world, mostly by looking through the lens of the philosophy of science. We raise awareness of issues within the philosophy of science and present alternative and rational approaches.
You can find all the episodes, transcripts, subscription options and more on the website at metaphysicsofphysics.com.
Hi everyone! This is episode fourteen of the Metaphysics of Physics podcast. I am Ashna, your host and guide through the hallowed halls of the philosophy of science. Thanks for tuning in!
Today we are going to start our review of “Life 3.0” by Max Tegmark. This will be the first part of a series where we go through many of the central ideas presented in this terrible book.
Today we will cover the prelude and the first chapter. Later parts will cover the remaining chapters at a rate of about two or three chapters per part, meaning that the entire series will be about three or four parts long.
But, without further ado, let us start with a quick introduction to the book itself.
The book is called “Life 3.0” and it is subtitled “Being human in the age of artificial intelligence”. Which, to be fair, does give you a good idea of what to expect.
Here is the end of the blurb provided on the inside jacket of the copy I have before me:
“What sort of future do you want? This book empowers you to join what may be the most important conversation of our time. It doesn’t shy away from the full range of viewpoints or from the most controversial issues – from super-intelligence to meaning, consciousness and the ultimate physical limits on life in the cosmos”.
Basically, it argues that artificial intelligence in the form of greater than human level intelligence is all but inevitable. And that we should start thinking about what this implies for us now, rather than in the future, when Max Tegmark believes it will be too late.
The book starts by making the case that the issue of how to handle the possible rise of artificial intelligence is the most important issue of our time.
It then goes on to show the possible benefits and dangers of AI and how it might drastically alter our lives and civilization. And what we should do to make sure AI does not prove dangerous enough to wipe us out.
Before we go any further, note that when I say “AI”, it should be assumed that I mean “strong AI” or “human-level intelligence” unless otherwise stated. Alright, now that we have that noted, let us continue.
What do we think of all of this? Well, the main issue we have is that it makes a huge, huge leap: that AI is possible in the first place. We have argued that, in fact, it is not.
You can see our argument for this presented way back in episode four.
If we were to assume that such AI is indeed possible, then we would probably leave the book alone. Since if AI were indeed possible, then some of what the book argues would certainly follow.
We disagree with this premise, so we are not going to leave the book alone. Instead, we are going to deal with Tegmark’s arguments for AI and whatever other philosophically dubious ideas we encounter.
This being Max Tegmark, we should not have a lot of trouble finding quite a few philosophically dubious ideas.