TY  - JOUR
T1  - Outline of a sensory-motor perspective on intrinsically moral agents
JF  - Adaptive Behavior
Y1  - 2016
A1  - Christian Balkenius
A1  - Lola Cañamero
A1  - Philip Pärnamets
A1  - Birger Johansson
A1  - Martin V Butz
A1  - Andreas Olsson
AB  - We propose that moral behaviour of artificial agents could (and should) be intrinsically grounded in their own sensory-motor experiences. Such an ability depends critically on seven types of competencies. First, intrinsic morality should be grounded in the internal values of the robot arising from its physiology and embodiment. Second, the moral principles of robots should develop through their interactions with the environment and with other agents. Third, we claim that the dynamics of moral (or social) emotions closely follows that of other non-social emotions used in valuation and decision making. Fourth, we explain how moral emotions can be learned from the observation of others. Fifth, we argue that to assess social interaction, a robot should be able to learn about and understand responsibility and causation. Sixth, we explain how mechanisms that can learn the consequences of actions are necessary for a robot to make moral decisions. Seventh, we describe how the moral evaluation mechanisms outlined can be extended to situations where a robot should understand the goals of others. Finally, we argue that these competencies lay the foundation for robots that can feel guilt, shame and pride, that have compassion and that know how to assign responsibility and blame.
PB  - SAGE
VL  - 24
IS  - 5
UR  - https://journals.sagepub.com/doi/10.1177/1059712316667203
ER  - 

TY  - CHAP
T1  - Anticipating Rewards in Continuous Time and Space: A Case Study in Developmental Robotics
T2  - Anticipatory Behavior in Adaptive Learning Systems: From Brains to Individual and Social Behavior
Y1  - 2007
A1  - Arnaud J Blanchard
A1  - Lola Cañamero
ED  - Martin V Butz
ED  - Olivier Sigaud
ED  - Giovanni Pezzulo
ED  - Gianluca Baldassarre
AB  - This paper presents the first basic principles, implementation and experimental results of what could be regarded as a new approach to reinforcement learning, where agents—physical robots interacting with objects and other agents in the real world—can learn to anticipate rewards using their sensory inputs. Our approach does not need discretization, notion of events, or classification, and instead of learning rewards for the different possible actions of an agent in all the situations, we propose to make agents learn only the main situations worth avoiding and reaching. However, the main focus of our work is not reinforcement learning as such, but modeling cognitive development on a small autonomous robot interacting with an “adult” caretaker, typically a human, in the real world; the control architecture follows a Perception-Action approach incorporating a basic homeostatic principle. This interaction occurs in very close proximity, uses very coarse and limited sensory-motor capabilities, and affects the “well-being” and affective state of the robot. The type of anticipatory behavior we are concerned with in this context relates to both sensory and reward anticipation. We have applied and tested our model on a real robot.
T3  - Lecture Notes in Artificial Intelligence
PB  - Springer
CY  - Berlin, Heidelberg
VL  - 4520
SN  - 978-3-540-74261-6
UR  - https://www.springer.com/gp/book/9783540742616
ER  - 