In the continuous pursuit of greater efficiency, organizations increasingly robotize processes, meaning that processes are mainly or completely operated by robots programmed with self-learning and/or advanced mechanisms for highly complex tasks. Policy makers are progressively formalizing this idea to include the application of robots in public services (e.g., the SPARC euRobotics program; the OECD Forum "Teaching & Learning with Robots"; the 2016 Davos session "The Rise of Robots"), with main applications in public safety and surveillance (intelligent drones), health care services (Socially Assistive Robots, SARs), and Smart City traffic and utility planning.
However, data suggest that the introduction of robotized public services will likely meet with resistance. For example, in a large survey conducted in 27 European countries (Eurobarometer 382), over 50% of respondents indicated they wanted robots banned from providing care. In addition, almost 90% of respondents reported being uncomfortable with the thought of robots taking care of children and the elderly. In contrast, citizens see great opportunities for robots in public security and in rescue operations.
One barrier to the acceptance of robots is citizens' concern about adequate ethical standards for robotized services (Winfield et al., 2014). More autonomous robots could increase the efficiency of service delivery. However, increased robot autonomy implies that smart robots should be able to balance citizens' often conflicting rights without ongoing supervision. Therefore, as the cognitive, perceptual, and motor capabilities of robots expand, they are expected to have an improved capacity for making moral judgements (Picard and Picard, 1997). Several scholars have started the discussion on the need for ethical standards in robotized services. However, no thorough insight yet exists into public expectations of these ethical standards (Murphy & Woods, 2009; author reference). Therefore, our research question is: Do citizens have different expectations with respect to ethical standards for public service robots compared to public service humans?
We conducted two experiments in which we presented respondents with cases of unethical behavior in public services. In the first experiment (n = 198), nine cases were tested with a between-subjects design, where the unethical behavior concerned either a robot or a human. In the second experiment (n = 55), we tested with a within-subjects design whether respondents change their opinion about a single case of unethical behavior when it is first performed by a human and then by a robot (or in the opposite order).
Both experiments show that respondents have no different expectations with respect to ethical standards for robots and humans in public services, and do not change their opinion when the caregiver changes from human to robot, or vice versa. Based on our findings, we discuss further and alternative tests, and we discuss the consequences for the broader scientific and policy discussion on how ethical standards for robotized services should be concretized as a close approximation of human ethical standards.
F1b - Behavioral and Experimental Public Administration: Leadership and Decision-Making