15.2 Social and Ethical Consequences
As the science and technology of AI develop, smart artifacts are increasingly being deployed, and their widespread deployment will have profound ethical, psychological, social, economic, and legal consequences for human society and our planet. Here we can only raise, and skim the surface of, some of these issues. Artificial autonomous agents are, in one sense, simply the next stage in the development of technology; in that sense, the normal concerns about the impact of technological development apply. In another sense, they represent a profound discontinuity: autonomous agents perceive, decide, and act on their own. This is a radical, qualitative change in our technology and in our image of technology, and it raises the possibility that these agents could take unanticipated actions beyond our control. As with any disruptive technology, there will be substantial positive and negative consequences, many that will be difficult to judge and many that we simply will not, or cannot, foresee.
The familiar example of the autonomous vehicle is a convenient starting point for consideration. Experimental autonomous vehicles are seen by many as precursors to robot tanks, cargo movers, and automated warfare. Although there may be, in some sense, significant benefits to robotic warfare, there are also very real dangers. Luckily, these are, so far, only the nightmares of science fiction.
Thrun (2006) presents an optimistic view of such vehicles. The positive impact of having intelligent cars would be enormous. Consider the potential ecological savings of using highways much more efficiently rather than paving over more farmland to build new ones. There is also the safety aspect of reducing the annual carnage on the roads: it is estimated that 1.2 million people are killed, and more than 50 million are injured, in traffic accidents each year worldwide. Cars could communicate and negotiate at intersections; besides the consequent reduction in accidents, there could be up to three times the traffic throughput. Elderly and disabled people would be able to get around on their own. People could dispatch their cars to a parking warehouse autonomously and recall them later; automated warehouses for autonomous cars would replace the surface land now used for parking. The positive implications of success in this area are indeed encouraging. That there are two radically different, but not inconsistent, scenarios for the outcomes of the development of autonomous vehicles suggests the need for wise ethical consideration of their use. The stuff of science fiction is rapidly becoming science fact.
AI is now mature, both as a science and, in its technologies and applications, as an engineering discipline. Many opportunities exist for AI to have a positive impact on our planet's environment. AI researchers and development engineers have a unique perspective and the skills required to contribute practically to addressing concerns of global warming, poverty, food production, arms control, health, education, the aging population, and demographic issues. We could, as a simple example, improve access to tools for learning about AI so that people could be empowered to try AI techniques on their own problems, rather than relying on experts to build opaque systems for them. Games and competitions based on AI systems can be very effective learning, teaching, and research environments, as shown by the success of RoboCup for robot soccer.
We have already considered some of the environmental impact of intelligent cars and smart traffic control. Work on combinatorial auctions, already applied to spectrum allocation and logistics, could further be applied to supplying carbon offsets and to optimizing energy supply and demand. There could be more work on smart energy controllers using distributed sensors and actuators that would improve energy use in buildings. We could use qualitative modeling techniques for climate scenario modeling. The ideas behind constraint-based systems can be applied to analyze sustainable systems. A system is sustainable if it is in balance with its environment: satisfying short-term and long-term constraints on the resources it consumes and the outputs it produces.
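To make the constraint-based view of sustainability concrete, here is a minimal Python sketch, not taken from the text, in which a system is judged sustainable exactly when it satisfies all of its short-term and long-term constraints on the resources it consumes and the outputs it produces. All names and numbers below are illustrative assumptions, not data.

```python
# A minimal sketch of sustainability as constraint satisfaction.
# The state of a hypothetical building's energy system over one year
# is a dictionary of resource and output levels; each constraint is a
# predicate over that state.

def sustainable(state, constraints):
    """Return True if the state satisfies every constraint."""
    return all(pred(state) for _, pred in constraints)

# Illustrative (made-up) state.
state = {
    "energy_consumed_MWh": 950,
    "energy_generated_MWh": 1000,
    "co2_emitted_t": 40,
    "co2_offset_t": 45,
}

constraints = [
    ("short-term: consumption within generation",
     lambda s: s["energy_consumed_MWh"] <= s["energy_generated_MWh"]),
    ("long-term: net emissions non-positive",
     lambda s: s["co2_emitted_t"] <= s["co2_offset_t"]),
]

if __name__ == "__main__":
    print(sustainable(state, constraints))  # True for these numbers
    for description, pred in constraints:
        print(description, "satisfied" if pred(state) else "violated")
```

A richer model would, of course, let a constraint solver search for operating policies that satisfy the constraints, rather than merely checking a given state; the sketch only illustrates the definition.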
Assistive technology for disabled and aging populations is being pioneered by many researchers. Assisted cognition is one application; others include assisted perception and assisted action, in the form of, for example, smart wheelchairs, companions for older people, and nurses' assistants in long-term care facilities. However, Sharkey (2008) warns of some of the dangers of relying on robotic assistants as companions for the elderly and the very young. As with autonomous vehicles, researchers must ask cogent questions about the use of their creations.
Indeed, can we trust robots? Given the way they are built now, they are not fully trustworthy or reliable, and there are real reasons why we cannot yet rely on them to do the right thing. So, can they do the right thing? Will they do the right thing? And what is the right thing? In our collective subconscious lurks the fear that robots may eventually become completely autonomous, with free will, intelligence, and consciousness, and may rebel against us as Frankenstein-like monsters.
What about ethics at the human-robot interface? Do we require ethical codes, for us and for them? It seems clear that we do, and many researchers are working on this issue. Many countries have come to realize that this is an important area of debate. There are already robot liability and insurance issues, and there will have to be legislation that targets robot issues, as well as professional codes of ethics for robot designers and engineers, just as there are for engineers in all other disciplines. We will have to consider what we should do ethically in designing, building, and deploying robots; how robots should make decisions as they develop more autonomy; and what ethical issues arise for us as we interact with them. Should we give them any rights? We have a human rights code; will there be a robot rights code?
To address these issues, let us break them down into three fundamental questions. First, what should we humans do ethically in designing, building, and deploying robots? Second, how should robots ethically decide, as they develop autonomy and free will, what to do? Third, what ethical issues arise for us as we interact with robots?
In considering these questions, we shall examine some interesting, if perhaps naive, proposals put forward by the science fiction novelist Isaac Asimov (1950), one of the earliest thinkers about these issues. His Laws of Robotics are a good starting point because, at first glance, they seem logical and succinct. His original three Laws are:
- I. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- II. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
- III. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Laws.
Asimov's answers to the three questions posed above are as follows. First, those laws should be built into every robot, and manufacturers should be required by law to do so. Second, robots should always follow the prioritized laws. He did not say much about the third question. Asimov's plots arise mainly from the conflict between what the humans intend a robot to do and what it actually does, or between literal and sensible interpretations of the laws, because the laws are not codified in any formal language. Asimov's fiction explored many implicit contradictions hidden in the laws and their consequences.
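To make the prioritization concrete, here is a minimal Python sketch, which is neither Asimov's nor this book's formalization, that treats the three laws as a lexicographic filter on candidate actions: a lower-priority law is enforced only among the actions that the higher-priority laws permit. The predicates for "harm," "obedience," and "self-preservation" are placeholder assumptions; deciding what they should actually mean is precisely the uncodified part that Asimov's plots exploit.

```python
# A toy lexicographic filter over candidate actions using the three laws.
from typing import Callable, List

Action = str
Law = Callable[[Action], bool]   # returns True if the action violates the law

def permitted(actions: List[Action], laws: List[Law]) -> List[Action]:
    """Filter actions by each law in priority order (Law I first).

    A lower-priority law is applied only if some surviving action satisfies
    it; otherwise it conflicts with a higher-priority law and is dropped.
    """
    surviving = actions
    for violates in laws:
        satisfying = [a for a in surviving if not violates(a)]
        if satisfying:               # keep the law only if it can be satisfied
            surviving = satisfying
    return surviving

# Placeholder judgments for a toy robot choosing among three actions.
def harms_human(a: Action) -> bool:
    return a == "push_person"

def disobeys_order(a: Action) -> bool:
    return a != "fetch_coffee"       # the human ordered coffee

def endangers_self(a: Action) -> bool:
    return a == "cross_busy_road"

laws = [harms_human, disobeys_order, endangers_self]   # Laws I, II, III

if __name__ == "__main__":
    print(permitted(["push_person", "fetch_coffee", "cross_busy_road"], laws))
    # ['fetch_coffee']
```

Even this toy sketch shows where the difficulty lies: the filter is trivial, but the predicates it relies on would require the robot to foresee and evaluate the consequences of its actions for humans, which is exactly the ability current systems lack.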
There is much discussion of robot ethics now, but much of the discussion presupposes technical abilities that we just do not yet have. In fact, Bill Joy (2000) was so concerned about our inability to control the dangers of new technologies that he called, unsuccessfully, for a moratorium on the development of robotics (and AI), nanotechnology, and genetic engineering. In this book we have presented a coherent view of the design space and clarified the design principles for intelligent agents, including robots. We hope this will lead to a more technically informed framework for the development of social and ethical codes for intelligent agents.
However, robotics may not even be the AI technology with the greatest impact. Consider the embedded, ubiquitous, distributed intelligence in the World Wide Web and other global computational networks. This amalgam of human and artificial intelligence can be seen as evolving to become a World Wide Mind. The impact of this global net on the way we discover and communicate new knowledge is already comparable to the effects of the development of the printing press. As Marshall McLuhan argued, "We first shape the tools and thereafter our tools shape us" [McLuhan (1964)]. Although he was thinking more of books, advertising, and television, this concept applies even more to the global net and autonomous agents. The kinds of agents we build, and the kinds of agents we decide to build, will change us as much as they will change our society; we should make sure it is for the better. Margaret Somerville (2006) is an ethicist who argues that the species Homo sapiens is evolving into Techno sapiens as we project our abilities out into our technology at an accelerating rate. Many of our old social and ethical codes are broken; they do not work in this new world. As creators of the new science and technology of AI, it is our joint responsibility to pay serious attention.