How to understand the future of artificial intelligence

2022-09-21



Life 3.0 is a recently popular book that imagines the future of artificial intelligence. Its author, Max Tegmark, is a tenured professor of physics at MIT and a founder of the Future of Life Institute. The organization brings together more than 8,000 leading AI experts and public figures worldwide, including Stephen Hawking, Elon Musk and Bill Gates, and is committed to avoiding the existential risks that advanced AI could pose to humanity. This week I had an interesting conversation with Professor Tegmark, who offered many interesting observations on the future development of artificial intelligence.

Max Tegmark (left); the English cover of Life 3.0 (right). A Chinese edition was published by Zhejiang Education Press in 2018.

The theme of Life 3.0 is a three-stage account of the development of life. Life 1.0 is the original state at the beginning of life: alive, but without thought. Life 2.0 is the current state of us humans. In computing terms, our flesh and blood is the hardware of human life, shaped by millions of years of evolution, but our minds and ideas are not engraved on the body; they are not innate products of evolution but are acquired after birth. In other words, our thoughts can be designed within our lifetimes, and that is the difference between Life 2.0 and Life 1.0. Moreover, with the progress of science and technology, from language and writing to printing, modern science, and then computers and the Internet, the connections between individual human minds have become increasingly close, and the accumulation of humanity's collective wisdom is no longer limited by the capacity of any single brain. This is the beauty of Life 2.0, and it also lays the foundation for moving towards Life 3.0.

At the stage of Life 3.0, not only the software but also the hardware can be designed. Life no longer depends on the evolution of flesh and blood; silicon-based hardware can also become the carrier of life. For the first time, life can shed flesh and blood, no longer rely on biological evolution, and become the master of its own body. In short, the most important claim in Life 3.0 is that machines can become carriers of intelligence just as humans are; the biggest difference is that the hardware of machines is no longer constrained by evolution. Another subtext of Life 3.0 is that intelligence similar to ours does not necessarily depend on a naturally evolved human brain. There are many analogous examples in the history of invention: humans longed to fly, but the aircraft humans designed are quite different from the birds and insects of nature.

Life 3.0 is a book of rich imagination and rigorous logical reasoning that offers a grand analysis of the relationship between humans and machines after a great leap in artificial intelligence. Tegmark is optimistic about the future of AI; he believes that AGI (artificial general intelligence) may appear within our lifetime. However, unlike pure technology optimists, he does not assume that technological progress will inevitably lead to human progress or that humanity's problems will solve themselves. What he is thinking about is how to frame consensus and rules for the development of AI, to ensure that even if artificial intelligence one day matches or exceeds human intelligence, its goals will remain aligned with ours.

I have some reservations about Tegmark's view that machines can possess intelligence and that intelligence can be freed from flesh and blood. Some experts have proposed that intelligence may not be a matter of the brain alone: the senses, and the feedback the body provides, may work together with the brain as a whole to give rise to human intelligence. If so, the pace of AI development may be much slower than expected.

Nevertheless, Tegmark raises some important questions about the future of humans and machines. Whether the development of artificial intelligence turns out faster or slower than we expect, AI will become more and more powerful under compounding effects, and the room for human trial and error will shrink. After humanity developed nuclear weapons, conventions limiting them had to be reached, because once a nuclear war starts, the Earth, humanity's home, may face destruction; nuclear war would not give mankind a second chance, and the same is true of artificial intelligence. As it becomes more powerful, we must consider whether it could harm humanity, and therefore agree on certain operating rules to ensure that its goals stay consistent with ours.

There are too many examples of "dimension-reduction strikes" (a Chinese science-fiction term for an attack by an overwhelmingly superior power) in the history of the Earth. The extinction of the western black rhinoceros, declared in 2011, is a recent example; the root cause is that the survival goals of humans, with their far more powerful intelligence, diverged sharply from those of the rhinoceros. By analogy, once AGI exceeds human intelligence, if its goals deviate from the goal of human survival, human extinction may become a high-probability event. The development of artificial intelligence must therefore remain under human control, and two points are particularly important. First, we should ensure the stability and security of AI: on the one hand minimizing the chance of errors and breakdowns, on the other ensuring that AI cannot be manipulated by hackers. Second, AI must be trustworthy, which is a hot issue in AI research.

Getting AI to understand human goals takes three steps: learning them, adopting them, and retaining them. Yet given the current state of AI research, the human behaviors a machine can observe do not necessarily correlate clearly with the goals behind them, and it is not easy to infer real goals from a mass of behavior. Life 3.0 gives an example: seeing a firefighter run into a burning house to save a girl, an AI might conclude that the firefighter is willing to risk his life for the girl, or instead that the firefighter is cold and wants to warm up in the burning house. Getting artificial intelligence to form correct judgments, and to understand the ideas behind human behavior that we can grasp but not fully articulate, is the first step to ensuring that the future of AI stays aligned with human goals.

One question I raised with Tegmark is whether AI is more likely to promote centralization or decentralization in the future. His answer is very interesting. On the one hand, the history of human development shows that advances in science and technology have continually pushed humanity towards greater centralization: from scattered tribes and villages to towns, cities and empires, the trend has been gradual centralization. As the latest general-purpose technology, AI will surely push this trend further and make centralized planning more efficient. On the other hand, the development of AI also lets everyone acquire more knowledge; everyone will have stronger judgment, and everyone can be empowered. From this perspective, decentralization has its own value. How these two forces will ultimately play out is worth watching.

The conversation also touched on the question of cyborgs, that is, man-machine integration. I asked Professor Tegmark: if Life 2.0 is our current model, where the body is hardware and ideas are software, and Life 3.0 is a future possibility in which we no longer need biological bodies, can design new ones, and can transmit and share ideas as software, then will cyborgs, combinations of human and machine, be the intermediate stage?

Tegmark knows Musk well, and is also familiar with Neuralink, the company Musk co-founded to study brain-computer integration. On the one hand, he believes the prospects of brain-machine integration as a scientific experiment are worth watching. On the other hand, he argues that the biggest difference between man and machine is that the human brain, as a product of natural evolution, is subject to many constraints, including self-sufficiency, self-replication and limited bandwidth, while machines are not. Nor is it easy to bridge the biological and the physical interface, that is, to connect flesh-and-blood neurons to silicon-based chips, at any serious level. Whether it is meaningful to couple chips to a human brain that has so many limitations (such as bandwidth) is likewise debatable.

The most important question in our conversation was which milestones deserve attention on the road to AGI. Tegmark's analysis was very interesting: we can draw insight from the difference between human and cat intelligence. Cats, like humans, have complex neural networks, which give them better paw-brain coordination than machines; they can perceive the external environment and respond to it. The biggest difference between cats and people is that cats lack the capacity for logical reasoning. Looking back at the history of machine intelligence, first came human-designed, logic-based programming and algorithms, and then the current wave of AI built on neural networks. In the next wave, if logical reasoning can be combined with neural-network machine learning, it may drive the breakthrough from cat to human, that is, the breakthrough in machine intelligence.

Having put forward the concept of Life 3.0, would Tegmark be willing to practice it on himself? His answer was no: "unless I am terminally ill and have no other choice." This answer is somewhat unexpected. Evidently even researchers like Tegmark, optimistic as they are about the future, will, like most people, still choose the conservative option in the face of an uncertain future.
