Now we see why Tegmark defined life as he did. He wants to make it easy to classify machines as potentially being a form of life.
This raises the question: What is learning?
He does not tackle the question of what learning is until chapter two! Even there, he does not really pin it down in any definite way.
He seems to consider learning to be, in my own words: “the ability to perform some task”.
To call that learning is rather ridiculous. Yes, a computer can acquire abilities that it did not originally have. For instance, a computer can acquire the ability to recognize human faces in photos or to identify fraudulent financial transactions. But does that mean that it has learned anything?
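To make the point concrete, here is a minimal sketch (my own illustration, not Tegmark's) of a machine "acquiring an ability". A simple perceptron is trained to separate two clusters of points; the data and parameters are made up for the example. After training, the machine can classify new points, yet all it holds is a handful of numbers adjusted by a blind, mechanical rule.

```python
# A minimal perceptron: after "training", the machine can separate
# two classes of points, yet it holds nothing but three numbers
# (two weights and a bias) nudged by a fixed mechanical rule.
def train(samples, labels, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred          # adjust weights mechanically
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def classify(w, b, point):
    return 1 if w[0] * point[0] + w[1] * point[1] + b > 0 else 0

# Two linearly separable clusters (made-up data).
samples = [(1, 1), (2, 1), (1, 2), (5, 5), (6, 5), (5, 6)]
labels  = [0, 0, 0, 1, 1, 1]
w, b = train(samples, labels)
print(classify(w, b, (1.5, 1.5)))  # prints 0: near the first cluster
print(classify(w, b, (5.5, 5.5)))  # prints 1: near the second cluster
```

Nothing in this process involves grasping anything; the "ability" is just a stable configuration of numbers.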
Learning is, in fact, the acquisition of knowledge: a mental grasp of reality acquired by perceptual observation or by a process of reasoning based on perceptual observation.
If a computer learns to identify human faces in a photo, has it learned anything? Has it any kind of mental grasp of reality? No. It has no mental grasp of anything; it has no mental abilities at all! In fact, it has no mind!
Does it have any capacity for perceptual observation? No. It has no perceptual faculties of any kind. It is merely a machine and has no consciousness of any kind.
So, how could it perceive anything? The ability to process input from a camera or similar device is not the same thing as consciousness.
It most certainly has no ability to reason! Reasoning is the ability of conscious beings with volition to direct their mental processes for the purpose of attempting to understand some aspect of reality.
Computers do not have volition, nor any kind of mental faculties with which to reason. So, if they are not aware of anything nor have any mental faculties, how can they be said to learn?!
This is why Tegmark avoids defining what learning is and tries to suggest that learning is simply acquiring the ability to do something. That way, at least for now, he does not have to deal with the issues of consciousness or whether or not something has any kind of mind.
Which brings us to his definition of intelligence…
Intelligence: The ability to acquire complex goals.
How does he define goals? He doesn't; he never says what he means by "goals".
We can turn to his definition of “having a goal” for some clue:
Having a goal: Exhibiting goal-orientated behaviour.
What is “goal-orientated behaviour”?
Goal-orientated behaviour: Behaviour more easily explained via its effect rather than its cause.
What this eventually leads to in a later chapter is treating a goal as some kind of "desired outcome", as though certain things act in order to bring something about.
You see, Tegmark likes to use teleology as an explanation for things. Many physicists are guilty of this, but Tegmark is very explicit.
For instance, physicists sometimes claim that gases behave the way they do because entropy must always increase.
That is not why gases act that way. Such an explanation treats gases as though they somehow know how they are meant to behave, as though things act the way they do in order to achieve some end state.
But this explains nothing. Things act the way they do because of their nature. There is something about their nature which results in them acting that way. They do not act that way in order to reach a certain state.
Tegmark is doing something similar here. He is pretending as though machines have a goal simply because they tend to operate in a way that results in certain outcomes.
It is then easier to assume that this happens because machines have the goal of achieving that outcome! This kind of thinking leads him to conclude that "even missiles have goals"!
Which is absurd. No, missiles do not have goals. Missiles do not strike their target because of some desired outcome. They do so because things have been set up in such a way that striking the target is what must happen; it is in the missile's nature to do so in that context.
Why does he use "goal" in this way? Well, because anything with the ability to acquire complex goals is apparently intelligent! So, as long as it is capable of achieving a complex desired outcome, it is intelligent!
The thing does not, apparently, need to be conscious or have any kind of mind to be intelligent. It simply has to achieve the desired outcomes.
Desired by whom? It does not matter. Apparently, it can be desired by the thing itself, or by its creator, or just by someone using it! Presumably, as long as it is able to achieve those outcomes, it is intelligent.
He gives no real criteria for what qualifies as a “complex” goal, making it very hard to know exactly what would qualify as intelligent.
According to this definition, a pocket calculator is certainly intelligent. It can be used to achieve the complex goal of performing mathematical operations such as finding square roots. Or, is that not complex enough?
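And finding a square root, for all its apparent complexity, is a blind mechanical procedure. A calculator's square-root routine is typically something along these lines (a sketch of Newton's iteration in Python, my illustration, not a claim about any particular calculator):

```python
def sqrt_newton(x, tol=1e-12):
    """Approximate the square root of x by Newton's iteration.

    The routine reaches its "goal" by blindly repeating one rule:
    replace the guess g with the average of g and x / g.
    """
    if x < 0:
        raise ValueError("negative input")
    if x == 0:
        return 0.0
    g = x
    while abs(g * g - x) > tol * x:
        g = (g + x / g) / 2.0
    return g

print(sqrt_newton(2.0))  # ≈ 1.41421356...
```

The routine achieves the "complex desired outcome" of producing a square root, yet at no point is there anything that desires, understands, or is aware of anything.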