
Halldor Stefansson

Book summary: Life 3.0 by Max Tegmark

Originally published on my blog at halldorstefans.com on April 29, 2019

1. How do you define life and intelligence?

The book splits life into three stages:

  • Life 1.0 (Biological stage) - Hardware and software evolve over generations
  • Life 2.0 (Cultural stage) - Software is designed (through learning)
  • Life 3.0 (Technological stage) - Hardware is also designed. This stage doesn't exist yet. Can AI reach it?

The three main camps in the debate are:

  • Techno-sceptics believe building superhuman artificial general intelligence (AGI) is so hard that it won't happen for hundreds of years.
  • Digital utopians view it as likely to happen this century and welcome it.
  • The beneficial AI movement also thinks it's likely this century, but a good outcome is not guaranteed and needs to be worked on.

We need to be precise about the meaning of key terms and beware of common misconceptions.
For example:

Terminology:

  • Life - A process that can retain its complexity and replicate
  • Intelligence - The ability to accomplish complex goals
  • Artificial General Intelligence (AGI) - The ability to accomplish any cognitive task at least as well as humans
  • Superintelligence - General intelligence far beyond the human level

Some common misconceptions are:

  • Myth: AI turning evil or conscious. Actual worry: AI turning competent, with goals misaligned with ours.
  • Myth: Superintelligence by 2100 is inevitable or impossible. Fact: It may happen in decades, centuries or never; AI experts disagree and we simply don't know.

- What future should we aim for and how?

2. Can matter have intelligence and learn?

Intelligence, using our definition from above, can't be measured by a single IQ, only by a range of abilities. Today's artificial intelligence is narrow: each system can only accomplish particular goals, while human intelligence is broad. For example, saying a sentence with different emphasis can change its meaning dramatically, something today's computers can't reliably interpret.

Memory, computation and learning have an abstract, intangible feel to them because they're able to take on a life of their own that doesn't depend on or reflect the details of their underlying physical material.

Memory

Any chunk of matter can serve as the physical substrate for memory as long as it has many different stable states. Imagine an egg-crate mattress with 16 valleys. Any one of these 16 valleys can represent a memory: if you put a piece of paper in valley number 7, you know you can always go back to valley number 7 to retrieve the information on that piece of paper. So we can define memory as something that stores information in a stable location. Other examples are books (information on a specific page) and hard drives (data at a particular location on the disk).
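
To make the valley picture concrete, here's a minimal Python sketch (my own illustration, not code from the book; the class name and the 16-valley setup simply follow the mattress analogy):

```python
# A toy model of memory as stable states: 16 "valleys", each of which
# can hold one piece of information.

class EggMattressMemory:
    def __init__(self, valleys: int = 16):
        # Each valley is a stable location; None means "empty".
        self.valleys = [None] * valleys

    def write(self, valley: int, note: str) -> None:
        # Put a "piece of paper" into a specific valley.
        self.valleys[valley] = note

    def read(self, valley: int):
        # The same valley always yields the same stored note.
        return self.valleys[valley]

memory = EggMattressMemory()
memory.write(7, "meet Alice at noon")
print(memory.read(7))  # -> "meet Alice at noon"
```

The physical details don't matter: valleys in foam, magnetised regions on a disk, or ink on a page all work, as long as the states are stable and addressable.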

Computation

Any matter can compute as long as it contains certain universal building blocks that can be combined to implement any function. NAND (NOT-AND) gates and neurons are important examples of such universal "computational atoms." The reason NAND is essential is that any Boolean (true/false) function can be computed using a combination of NAND gates.
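
To make the universality claim concrete, here's a short Python sketch (my own illustration, not from the book) that builds NOT, AND, OR and XOR purely out of a NAND function and checks them against Python's built-in operators:

```python
# NAND is universal: every other gate below is built only from it.

def nand(a: bool, b: bool) -> bool:
    return not (a and b)

def not_(a: bool) -> bool:
    return nand(a, a)

def and_(a: bool, b: bool) -> bool:
    return not_(nand(a, b))

def or_(a: bool, b: bool) -> bool:
    return nand(not_(a), not_(b))

def xor(a: bool, b: bool) -> bool:
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

# Verify against Python's built-in operators for all inputs.
for a in (False, True):
    for b in (False, True):
        assert not_(a) == (not a)
        assert and_(a, b) == (a and b)
        assert or_(a, b) == (a or b)
        assert xor(a, b) == (a != b)
print("All NAND-built gates match the built-in operators.")
```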

Learning

When humans learn, we build, change and/or connect pieces of our current understanding to get better. We rearrange our knowledge to increase our understanding. Pocket calculators, for example, never learn new things: they always give the same result at the same speed with the same accuracy, because humans arranged the calculator once and for all.
For matter to learn, it must rearrange itself to get better. A neural network is a powerful substrate for learning because, merely by obeying the laws of physics, it can rearrange itself to get better and better at implementing a desired computation.
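
As a minimal sketch of "rearranging itself to get better", the toy example below trains a single artificial neuron on the OR function using the classic perceptron rule (my own illustration of the learning idea, not code from the book):

```python
# A single neuron "learns" OR by nudging its weights whenever its
# prediction is wrong (the classic perceptron rule).

examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w = [0.0, 0.0]  # connection strengths, rearranged during learning
b = 0.0         # bias
lr = 0.1        # learning rate

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for epoch in range(20):
    for x, target in examples:
        error = target - predict(x)
        # Shift the weights slightly in the direction that reduces error.
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        b += lr * error

print([(x, predict(x)) for x, _ in examples])
# -> [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
```

After a few passes over the examples, the weights have rearranged themselves so that the neuron computes OR correctly; nothing outside the update rule ever tells it the answer.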

Similar to Moore's law: once technology gets twice as powerful, it can often be used to design and build technology that's twice as powerful again. That keeps halving the cost of information technology, enabling the information age.
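
As a back-of-the-envelope illustration of that compounding (the ~18-month doubling period is the commonly quoted Moore's-law figure, used here purely as an assumption):

```python
# Compound doubling: if capability doubles every period while price
# stays flat, the cost per unit of capability halves every period.

capability = 1.0          # arbitrary starting units
cost_per_unit = 1.0
years_per_doubling = 1.5  # the commonly quoted ~18 months (assumption)

for doubling in range(1, 11):
    capability *= 2
    cost_per_unit /= 2
    print(f"after {doubling * years_per_doubling:4.1f} years: "
          f"capability x{capability:6.0f}, cost per unit x{cost_per_unit:.4f}")
```

Ten doublings, roughly fifteen years under this assumption, already give a thousandfold improvement.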

3. What can happen in the near future?

Near-term AI progress could improve our lives: making our personal lives, power grids and financial markets more efficient, and saving lives with self-driving cars, surgical bots and AI diagnosis systems.

Systems controlled by AI need to be robust, which means solving tough technical problems around verification, validation, security and control. This matters especially for AI-controlled weapon systems, where the stakes can be enormous. AI researchers and roboticists are calling for an international treaty banning certain kinds of autonomous weapons, to avoid an out-of-control arms race.

The legal system could become fairer and more efficient if we figure out how to make AI transparent and unbiased. Laws need updating, since AI poses tough legal questions involving privacy, liability and regulation.

Intelligent machines are increasingly replacing us in the job market. AI-created wealth could be redistributed to make everyone better off; otherwise, economists warn, inequality will increase. With advance planning, a low-employment society could flourish financially, with people drawing their sense of purpose from activities other than jobs.

Career advice: Go into a profession that machines are bad at - involving people, unpredictability and creativity.

4. Can we control the intelligence explosion?

Building AGI may trigger an intelligence explosion that leaves us far behind. If a group of humans controls the intelligence explosion, they might take over the world within years. If we can't control it, the AI itself might take over.

A slow intelligence explosion, dragging on for years or decades, is likelier to lead to a multipolar scenario: a balance of power between a large number of independent entities. Superintelligence could lead either to "big brother" style control or to more individual empowerment.

- Will AI be the best or worst thing to happen to humanity?
- Which outcome do we prefer?
- How do we steer in that direction?

If we don't know what we want, we're unlikely to get it.

5. What could happen in the next 10,000 years?

The race toward AGI can end up in a broad range of scenarios:

  • Superintelligence coexists peacefully with humans, either because it's forced to or because it's "friendly" and wants to.
  • Superintelligence is prevented, by an AI or by humans, through forgetting the technology or lacking the motivation to build it.
  • Humanity goes extinct and is replaced by AIs, or by nothing at all.

- Which scenario is desirable?

We need to have that conversation, so we don't drift or steer in an unfortunate direction.

6. What about the next billion years and beyond?

Compared with the lifetime of the universe, an intelligence explosion would be a sudden event, in which technology rapidly approaches the ultimate limits set by the laws of physics. Given a fixed amount of matter, such technology could generate roughly 10 billion times more energy, store 12-18 orders of magnitude more information, and compute 31-41 orders of magnitude faster than today's technology.
Superintelligence would not only use existing resources more efficiently but could also grow today's ecosystem by about 32 orders of magnitude by acquiring resources from the wider universe.
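
For a sense of scale, "n orders of magnitude" just means a factor of 10^n; the tiny snippet below converts the lower bounds of the figures above into plain multipliers (my own arithmetic, not the book's):

```python
# "n orders of magnitude" means a factor of 10**n; convert the
# chapter's figures into plain multipliers (lower bounds shown).

limits = [
    ("energy generation", 10),    # ~10 billion times = 10 orders
    ("information storage", 12),  # 12-18 orders of magnitude
    ("computation speed", 31),    # 31-41 orders of magnitude
]
for name, orders in limits:
    print(f"{name}: at least a factor of 10^{orders} = {10**orders:.0e}")
```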

The main asset shared or traded across planetary distances is likely to be information. A distant central hub might encourage cooperation through communication, using local "guard" AIs. A collision of two civilisations may result in collaboration or war. It's entirely plausible that we're the only life form capable of making our observable universe come alive in the future.

Our technology needs to keep improving in order to prevent our extinction.
If we keep improving technology with care, life has the potential to flourish on Earth and far beyond for many billions of years.

7. What are our goals?

As mentioned before, intelligence is the ability to accomplish complex goals.
Human life evolved with replication as the goal. However, humans no longer pursue a simple goal like replication, and when our feelings conflict with the goals of our genes, we obey our feelings, for example by using birth control.

We're increasingly building intelligent machines to help us accomplish our goals. Aligning machine goals with our own involves three unsolved problems: making machines learn our goals, adopt them, and retain them.

- How do we apply ethical principles to non-human animals and to future AI?
- How do we instil in a superintelligent AI a goal that is neither undefined nor leads to the elimination of humanity? This makes it timely to rekindle research on some of the thorniest issues in philosophy.

8. What is consciousness?

In this book, consciousness is broadly defined as "subjective experience".
Three problems of consciousness are:

  • The 'pretty hard problem' - Predicting which physical systems are conscious.
  • The 'even harder problem' - Predicting subjective, conscious experience.
  • The 'really hard problem' - Why anything at all is conscious.

The 'pretty hard problem' is scientific, since a theory that predicts which of your brain processes are conscious is experimentally testable and falsifiable. According to neuroscience experiments, many behaviours and brain regions are unconscious, with our conscious experience representing a summary of large amounts of unconscious information.

Generalising consciousness predictions from brains to machines requires a theory. Consciousness appears to require a particular kind of information processing that's fairly autonomous and integrated, so that the whole system is rather autonomous but its parts aren't.

If consciousness is the way information feels when being processed in certain complex ways, then it's merely the structure of the information processing that matters, not the structure of the matter doing the processing.

If artificial consciousness is possible, the space of possible AI experiences is likely to be huge compared with what we humans can experience.

There is no meaning without consciousness; it's conscious beings that give meaning to our universe.


I really liked this book. Max brings up many questions and gets you thinking about what we need to discuss about the future of AI.
I sometimes had difficulty understanding some of the concepts, but overall I would recommend it to anyone interested in AI.


Thank you for reading. For weekly updates from me, you can sign up to my newsletter.
