# U.S. military wants to teach robots how to make moral and ethical decisions

How do you code something as abstract as moral logic into a bunch of transistors?

Are robots capable of moral or ethical reasoning, knowing right from wrong? Not yet. But the U.S. government is spending millions on developing machines that understand moral consequence. The Office of Naval Research will award $7.5 million in grant money over five years to university researchers from Tufts, Rensselaer Polytechnic Institute, Brown, Yale and Georgetown to explore how to build a sense of right and wrong and moral consequence into autonomous robotic systems.

"Even though today's unmanned systems are 'dumb' in comparison to a human counterpart, strides are being made quickly to incorporate more automation at a faster pace than we've seen before," Paul Bello, director of the cognitive science program at the Office of Naval Research, told Defense One. "For example, Google's self-driving cars are legal and in use in several states at this point. As researchers, we are playing catch-up trying to figure out the ethical and legal implications. We do not want to be caught similarly flat-footed in any kind of military domain where lives are at stake."

The United States military prohibits lethal fully autonomous robots. And semi-autonomous robots cannot "select and engage individual targets or specific target groups that have not been previously selected by an authorized human operator," even in the event that contact with the operator is cut off, according to a 2012 Department of Defense policy directive.

Even if such systems aren't armed, they may still be forced to make moral decisions, Bello said. For instance, in a disaster scenario, a robot may be forced to make a choice about whom to evacuate or treat first, a situation where a bot might use some sense of ethical or moral reasoning. "While the kinds of systems we envision have much broader use in first-response, search-and-rescue and in the medical domain, we can't take the idea of in-theater robots completely off the table," Bello said.

Some members of the artificial intelligence, or AI, research and machine ethics communities were quick to applaud the grant. "With drones, missile defense, autonomous vehicles, etc., the military is rapidly creating systems that will need to make moral decisions," AI researcher Steven Omohundro told Defense One. "Human lives and property rest on the outcomes of these decisions and so it is critical that they be made carefully and with full knowledge of the capabilities and limitations of the systems involved. The military has always had to define 'the rules of war' and this technology is likely to increase the stakes for that."

"We're talking about putting robots in more and more contexts in which we can't predict what they're going to do, what kind of situations they'll encounter. So they need to do some kind of ethical reasoning in order to sort through various options," said Wendell Wallach, the chair of the Yale Technology and Ethics Study Group and author of the book Moral Machines: Teaching Robots Right From Wrong.

The sophistication of cutting-edge drones like British BAE Systems' batwing-shaped Taranis and Northrop Grumman's X-47B reveals more self-direction creeping into ever more heavily armed systems. The X-47B, Wallach said, "is enormous and it does an awful lot of things autonomously."

But how do you code something as abstract as moral logic into a bunch of transistors? The vast openness of the problem is why the framework approach is important, says Wallach.
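Before turning to that framework, here is a minimal, purely hypothetical Python sketch of the disaster-triage choice Bello describes. Nothing in it comes from the ONR-funded work: the `Casualty` fields, the utilitarian-flavored `moral_score` rule and all the names are invented for illustration only.

```python
# Hypothetical sketch only: a toy "whom to treat first" triage ranking.
# Every name here (Casualty, urgency, survival_chance, moral_score) is
# an invented illustration, not any system described in the article.
from dataclasses import dataclass

@dataclass
class Casualty:
    name: str
    urgency: float          # 0.0 (stable) .. 1.0 (critical)
    survival_chance: float  # estimated probability of survival if treated now

def moral_score(c: Casualty) -> float:
    # One possible (utilitarian-flavored) rule: weight urgency by the
    # expected benefit of intervening. Real systems would need far
    # richer reasoning; that gap is the open problem the grant targets.
    return c.urgency * c.survival_chance

def evacuation_order(casualties: list[Casualty]) -> list[Casualty]:
    # Treat or evacuate the highest-scoring casualties first.
    return sorted(casualties, key=moral_score, reverse=True)

if __name__ == "__main__":
    scene = [
        Casualty("A", urgency=0.9, survival_chance=0.2),
        Casualty("B", urgency=0.6, survival_chance=0.9),
        Casualty("C", urgency=0.3, survival_chance=0.95),
    ]
    for c in evacuation_order(scene):
        print(c.name, round(moral_score(c), 2))
```

Even this toy makes the difficulty visible: all of the "ethics" sit in a single designer-chosen scoring formula, exactly the kind of canned rule-following that Sharkey criticizes below.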
Some types of morality are more basic, and thus more code-able, than others. "There's operational morality, functional morality, and full moral agency," Wallach said. "Operational morality is what you already get when the operator can discern all the situations that the robot may come under and program in appropriate responses. Functional morality is where the robot starts to move into situations where the operator can't always predict what the robot will encounter and the robot will need to bring some form of ethical reasoning to bear."

It's a thick knot of questions to work through. But, Wallach says, it comes with a high potential to transform the battlefield. "One of the arguments for moral robots is that they may be even better than humans in picking a moral course of action because they may consider more courses of action," he said.

Ronald Arkin, an AI expert from Georgia Tech and author of the book Governing Lethal Behavior in Autonomous Robots, is a proponent of giving machines a moral compass. "It is not my belief that an unmanned system will be able to be perfectly ethical in the battlefield, but I am convinced that they can perform more ethically than human soldiers are capable of," Arkin wrote in a 2007 research paper (PDF). Part of the reason for that, he said, is that robots are capable of following rules of engagement to the letter, whereas humans are more inconsistent.

AI robotics expert Noel Sharkey is a detractor. He's been highly critical of armed drones in general and has argued that autonomous weapons systems cannot be trusted to conform to international law. "I do not think that they will end up with a moral or ethical robot," Sharkey told Defense One. "For that we need to have moral agency. For that we need to understand others and know what it means to suffer. The robot may be installed with some rules of ethics but it won't really care. It will follow a human designer's idea of ethics."

"The simple example that has been given to the press about scheduling help for wounded soldiers is a good one. My concern would be if the military were to extend a system like this for lethal autonomous weapons, weapons where the decision to kill is delegated to a machine; that would be deeply troubling," he said.

This week, Sharkey and Arkin are debating the issue of whether or not morality can be built into AI systems before the U.N., where they may find an audience very sympathetic to the idea that a moratorium should be placed on the further development of autonomous armed robots.

Christof Heyns, U.N. special rapporteur on extrajudicial, summary or arbitrary executions for the Office of the High Commissioner for Human Rights, is calling for a moratorium. "There is reason to believe that states will, inter alia, seek to use lethal autonomous robotics for targeted killing," Heyns said in an April 2013 report to the U.N.

The Defense Department policy directive on lethal autonomy offers little reassurance here, since the department can change it without congressional approval, at the discretion of the chairman of the Joint Chiefs of Staff and two undersecretaries of Defense. University of Denver scholar Heather Roff, in an op-ed for the Huffington Post, calls that a "disconcerting" lack of oversight and notes that "fielding of autonomous weapons then does not even rise to the level of the Secretary of Defense, let alone the president."

If researchers can prove that robots can do moral math, even if in some limited form, they may be able to defuse rising public anger and mistrust over armed unmanned vehicles. But it's no small task.
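As one way to picture the first two of Wallach's tiers, here is a hedged sketch, again entirely invented: operational morality as a hard-coded table mapping anticipated situations to responses, and functional morality as whatever reasoning takes over when a situation falls outside that table. The situation strings and function names are assumptions, not anything Wallach or ONR has specified.

```python
# Hypothetical illustration of Wallach's tiers; not his or ONR's design.

# Operational morality: the designer anticipated these situations and
# hard-coded the appropriate response for each one.
PREPROGRAMMED_RESPONSES = {
    "civilian_in_path": "halt",
    "operator_link_lost": "return_to_base",
}

def ethical_reasoner(situation: str) -> str:
    """Functional-morality placeholder, invoked for situations the
    designer never enumerated. Here it simply picks the most
    conservative action; building anything genuinely better than this
    stub is the open research problem the grant is meant to address."""
    return "halt_and_request_human_guidance"

def decide(situation: str) -> str:
    if situation in PREPROGRAMMED_RESPONSES:
        # Operational morality: a pre-scripted response suffices.
        return PREPROGRAMMED_RESPONSES[situation]
    # Functional morality: fall back on on-board ethical reasoning.
    return ethical_reasoner(situation)

print(decide("civilian_in_path"))         # -> halt
print(decide("novel_disaster_scenario"))  # -> halt_and_request_human_guidance
```

The design point of the sketch is the boundary itself: the moment a robot leaves the enumerable world of the lookup table, something has to stand in for `ethical_reasoner`, and that something is what the grant money is for.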
"This is a significantly difficult problem and it's not clear we have an answer to it," said Wallach. "Robots both domestic and military are going to find themselves in situations where there are a number of courses of action and they are going to need to bring some kinds of ethical routines to bear on determining the most ethical course of action. If we're moving down this road of increasing autonomy in robotics, and that's the same for Google cars as it is for military robots, we should begin now to do the research into how far we can get in ensuring the robot systems are safe and can make appropriate decisions in the context they operate."