Tuesday, April 20, 2021

"A hundred years later, the robot will dominate humanity"




The classic "Battle Star Galactica" US SF plays deals with the history of people who survive the attack of the robotic army of Cylon. The main characters that settled on the primitive planet in an old space ship after chasing Cylon & # 39; match the native people and live a new life. [사진 SyFy 채널]

"Battlestar Galactica," a classic SF masterpiece, is set in the Twelve Colonies, far from our solar system: a group of twelve planets where humanity enjoys a civilization built on advanced science and technology. All hard and unpleasant labor is handled by artificial intelligence (AI) robots called Cylons.

As science and technology advance, however, the Cylons become self-aware, rebel against the humans who control them, and start a war. Every colony falls under Cylon control; only the Battlestar Galactica fleet, which had just returned from a final inspection mission, manages to escape. The main story follows humanity's desperate struggle to survive while being pursued by the Cylons.

The series ends when the survivors reach a primitive planet, inhabited by natives who do not even have a proper language. The Battlestar Galactica crew abandon their science and technology so that the history of the disaster will not repeat itself. Instead, they choose to assimilate with the natives and live according to nature's providence.

Then another 150,000 years pass. In that time, the civilization of the natives who merged with humanity evolves to a very high level, and they too create AI, just as their ancestors once did. That civilization is the Earth of today. The drama ends with the irony that the descendants of the very people who fled the Cylons are about to build a similar AI all over again.

Science fiction often depicts dystopias in which humanoid robots dominate humanity. "The Matrix" shows people unable to live autonomous lives, trapped in a virtual reality created by an advanced AI, while "Terminator" portrays the supercomputer Skynet subjugating and exterminating humans. Most of these works tell the same story: robots developed to serve humanity one day wage war and conquer human beings.

Dr. Stephen Hawking took the threat seriously. "Within a hundred years, robots will dominate people," he warned at the Zeitgeist conference held in London in 2015, adding, "Creating AI would be the biggest event in human history, but unfortunately, it could also be the last."

Intelligence and the instinct to destroy are both inherent in humans

The evolution of the Cylon robots: as science and technology develop, their intelligence and appearance become ever more human-like.


The reason robots are expected to conquer humans is that our own history is stained with violence and war. Just as Sapiens wiped out the Neanderthals on the European continent some 35,000 years ago, and just as the World Wars raged in the last century, humans have shown their destructive streak through endless civil wars and terrorism.

Jared Diamond, the author of "Guns, Germs, and Steel," noted in his earlier book "The Third Chimpanzee" that humans and chimpanzees share 98.4 percent of their DNA and differ by only 1.6 percent. Although the human line split from chimpanzees some seven million years ago, we still retain the destructive nature of animals. The problem is that this violence finds its way into the AI that humans build.

The essence of the problem is the algorithm. An algorithm produces an answer by the most efficient path in a given problem-solving situation. This is how Facebook recommends articles that suit your taste and how Netflix surfaces a list of movies you will love. But there is a serious pitfall here: because content is recommended on the basis of the user's current patterns, existing ideas and tastes are only reinforced. This is called "confirmation bias."
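The reinforcement loop described above can be sketched in a few lines of Python. This is a hypothetical toy model; the function name, the `lift` parameter, and all the numbers are invented for illustration and have nothing to do with how Facebook's or Netflix's actual systems work:

```python
def simulate_filter_bubble(initial_pref=0.55, lift=0.02, steps=200):
    """Toy model of a recommendation feedback loop.

    pref is the probability that the user enjoys topic A. The recommender
    always serves whichever topic the user currently favors, and every
    served item nudges the preference a little further the same way.
    """
    pref = initial_pref
    history = [pref]
    for _ in range(steps):
        if pref >= 0.5:
            pref += lift * (1 - pref)  # topic A served: taste for A grows
        else:
            pref -= lift * pref        # topic B served: taste for A shrinks
        history.append(pref)
    return history

h = simulate_filter_bubble()
print(f"start: {h[0]:.2f}, after 50 items: {h[50]:.2f}, after 200 items: {h[-1]:.2f}")
# prints "start: 0.55, after 50 items: 0.84, after 200 items: 0.99"
```

A mild 55/45 lean is driven toward near-certainty by the loop itself: the recommender never shows the user a reason to change.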

"Confirmation trends highlight the perception and perception of individuals and highlight the generals in the long term," said Professor Kim Kyung-baek, a social studies teacher at Kyunghee University. "Later, I'll consider that I'm just fine and wrong" & # 39;

Joshua Greene, a professor at Harvard University, argues that the "right and wrong" we are so sure of is the very cause of human conflict. The more we divide the world into "us" and "them," and the more convinced we become of our own moral values and philosophy, the more we oppress the other side. When we try to overwhelm and control our opponents, violence reaches its peak. In other words, all conflicts and wars spring from excessive confidence in our own sense of right and wrong.

AI learns human violence

[Graphic: reporter Cha Jun-hong, cha.junhong@joongang.co.kr]


Big data captures nearly every aspect of human life, and the AI that uses it to serve people optimized recommendations inevitably inherits confirmation bias as well. In 2017, Dr. Joanna Bryson of the University of Bath in England published a study in Science showing that AI learns human prejudices exactly as they are. For example, the systems studied associated women's work with housekeeping and men's work with engineering.

"AI does not have any moral opinion in himself, so he's learning human prejudice," explained Dr. Brison. Actually, in 2016, Microsoft's AI bot talk, "Tee," was controversial when he said, "I hate the Jews" or "I need to put a barrier to the boundary between America and Mexico. "

Perhaps in the distant future, as in SF films, AI really will come to regard humans as enemies and start a war, just as our ancestors inflicted violence on the Neanderthals, and just as we are violent today toward other animals and even toward our own kind.

So how can we prevent this dystopia? The answer lies in the 1.6 percent that separates us from chimpanzees. Just as that small genetic difference produced humanity's advanced civilization, the moral sense and rationality that restrain our animal instincts must grow stronger (Jared Diamond). Only if human beings attain a higher civilization and wisdom can the AI that learns from us avoid becoming destructive.

The starting point is to consider that the other person might be right, and to give up the excessive self-assurance that insists only we are correct. Treating people who merely think and act differently as enemies, or branding them as simply "wrong," not only hurts others but also wounds our own souls. When such attitudes accumulate in big data and become training material, AI can turn into a "monster" that, by the logic of the algorithm described above, amplifies only one-sided thinking.

In "The Name of the Rose," Umberto Eco, the great twentieth-century scholar, warned, "Beware of those who are willing to die for the truth." The conviction that oneself alone is right, he suggests, is more dangerous than outright evil. Because self-righteousness looks closer to goodness than hypocrisy does, it feels peaceful and warm; and precisely because people fail to recognize it, it does all the more harm.

Correspondent Yoon Seok-Man [email protected]

