Does Artificial Intelligence Have Moral Boundaries?

Oct 24, 2019

A robot is out walking a puppy. The puppy runs onto a lawn to attack a child, but the sign on the lawn fence reads "No trampling." Should the robot enter the lawn to stop the puppy, or follow the rule and stay outside?


At the CNCC2019 (China National Computer Congress) sub-forum "Where Is the Moral Boundary of Artificial Intelligence Development?", Zeng Yi, a researcher at the Institute of Automation of the Chinese Academy of Sciences, offered this example to prompt the audience to think about what kind of intelligence humans should give to machines.


CNCC2019, themed "Intelligence + Leading Social Development", was held in Suzhou from October 17 to 19. The congress, an academic, technology, and industry event, is now in its 16th year. The three-day program included three main-conference theme forums and 79 frontier technical forums, with 15 computer experts and entrepreneurs from home and abroad delivering invited reports. Unlike previous years, beyond the rich technical content, this year's congress made the ethics of artificial intelligence a prominent topic. Mei Hong, chairman of the congress steering committee and an academician of the Chinese Academy of Sciences, explained that this is something scientific and technological professionals must consider.


In recent years, artificial intelligence has developed rapidly; many applications have changed daily life and driven social development. But new problems such as algorithmic discrimination and privacy violations have also emerged, and research on the legal, ethical, and social issues of artificial intelligence has been put on the agenda of many departments and institutions.


In May this year, when Zeng Yi's research center was established, it jointly released the "Artificial Intelligence Beijing Consensus" with Peking University, Tsinghua University, and the Institute of Automation of the Chinese Academy of Sciences. The consensus proposes 15 principles that the research and development, use, and governance of artificial intelligence should follow, such as the principle of "proper use and prudent use", so as to benefit the building of a community with a shared future for mankind and the development of society.


In June, the Ministry of Science and Technology released the "New Generation Artificial Intelligence Governance Principles: Developing Responsible Artificial Intelligence", putting forward eight principles, including harmony and friendliness, fairness and justice, and inclusiveness and sharing.


In April this year, the European Commission also issued ethics guidelines for artificial intelligence. By one count, more than 50 artificial intelligence ethics proposals have now been issued by governments, organizations, research institutions, and industry.


However, Zeng Yi found that these guidelines differ from one another because they are written from different perspectives. What is urgently needed, he believes, is strategic research on artificial intelligence risk, ethics, and governance, so that such thinking can be modeled, algorithmized, and systematized, ensuring that artificial intelligence develops in a more reliable and socially beneficial direction.


Professor Wang Guoyu, director of the Center for Applied Medicine Ethics at Fudan University, believes that artificial intelligence ethics originates in people's fear of and concern about the risks of artificial intelligence technology. The ethical problems of artificial intelligence are neither purely technical problems nor algorithm problems; they arise from the interaction between technical systems and the social life of human beings. Research on artificial intelligence ethics should therefore move from speculation about possibilities to exploration of what is feasible, which of course requires more interdisciplinary cooperation.


In recent years, Wu Tianyue, an associate professor at Peking University, has focused on the ethical challenges raised by cutting-edge technologies such as artificial intelligence and gene editing. He believes that there are no ready-made ethical rules for artificial intelligence, so humans must be forward-looking. Technology itself is not morally neutral, and the goals set for it should be subject to ethical standards. Artificial intelligence developers must therefore have a sense of professional ethics, and technicians should understand that artificial intelligence serves the core values of human beings, not capital and power. He proposed that humans and artificial intelligence should establish a new type of interactive relationship.


According to reports, the University of Chinese Academy of Sciences launched a course on the "Philosophy and Ethics of Artificial Intelligence" in 2018, and Peking University, Zhejiang University, Xi'an Jiaotong University, and other universities are also offering courses related to artificial intelligence ethics.