<?xml version="1.0" encoding="UTF-8"?><xml><records><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>19</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">M Simon</style></author><author><style face="normal" font="default" size="100%">P Canet</style></author><author><style face="normal" font="default" size="100%">R Soussignan</style></author><author><style face="normal" font="default" size="100%">Philippe Gaussier</style></author><author><style face="normal" font="default" size="100%">Lola Cañamero</style></author><author><style face="normal" font="default" size="100%">Jacqueline Nadel</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Reconnaissance et résonance émotionnelle face à un humain et à un robot chez des enfants typiques et des enfants avec autisme de haut niveau</style></title><secondary-title><style face="normal" font="default" size="100%">Bulletin scientifique de l’Arapi</style></secondary-title></titles><dates><year><style  face="normal" font="default" size="100%">2008</style></year></dates><language><style face="normal" font="default" size="100%">eng</style></language></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Lola Cañamero</style></author><author><style face="normal" font="default" size="100%">Arnaud J Blanchard</style></author><author><style face="normal" font="default" size="100%">Jacqueline Nadel</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Attachment Bonds for Human-Like Robots</style></title><secondary-title><style face="normal" font="default" size="100%">International Journal of Humanoid Robotics</style></secondary-title></titles><dates><year><style  face="normal" font="default" 
size="100%">2006</style></year></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://www.worldscientific.com/doi/abs/10.1142/S0219843606000771</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">3</style></volume><pages><style face="normal" font="default" size="100%">301–320</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">If robots are to be truly integrated in humans' everyday environment, they cannot be simply (pre-)designed and directly taken &quot;off the shelf&quot; and embedded into a real-life setting. Also, technical excellence and human-like appearance and &quot;superficial&quot; traits of their behavior are not enough to make social robots trusted, believable, and accepted. Fuller and deeper integration into human environments would require that, like children, robots develop embedded in the social environment in which they will fulfill their roles. An important element to bootstrap and guide this integration is the establishment of affective bonds between the &quot;infant&quot; robot and the adults among whom it develops, from whom it learns, and who it will later have to look after. In this paper, we present a Perception–Action architecture and experiments to simulate imprinting — the establishment of strong attachment links with a &quot;caregiver&quot; — in a robot. Following recent theories, we do not consider imprinting as rigidly timed and irreversible, but as a more flexible phenomenon that allows for further adaptation as a result of reward-based learning through experience. 
After the initial imprinting, adaptation is achieved in the context of a history of &quot;affective&quot; interactions between the robot and a human, driven by &quot;distress&quot; and &quot;comfort&quot; responses in the robot.</style></abstract><issue><style face="normal" font="default" size="100%">3</style></issue></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Jacqueline Nadel</style></author><author><style face="normal" font="default" size="100%">M Simon</style></author><author><style face="normal" font="default" size="100%">P Canet</style></author><author><style face="normal" font="default" size="100%">R Soussignan</style></author><author><style face="normal" font="default" size="100%">P Blancard</style></author><author><style face="normal" font="default" size="100%">Lola Cañamero</style></author><author><style face="normal" font="default" size="100%">Philippe Gaussier</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Human Responses to an Expressive Robot</style></title><secondary-title><style face="normal" font="default" size="100%">Proceedings of the Sixth International Workshop on Epigenetic Robotics</style></secondary-title><tertiary-title><style face="normal" font="default" size="100%">Lund University Cognitive Studies</style></tertiary-title></titles><dates><year><style  face="normal" font="default" size="100%">2006</style></year></dates><urls><web-urls><url><style face="normal" font="default" size="100%">https://www.lucs.lu.se/LUCS/128/Nadeletal.pdf</style></url></web-urls></urls><publisher><style face="normal" font="default" size="100%">Lund University</style></publisher><pub-location><style face="normal" font="default" size="100%">Paris, France</style></pub-location><volume><style face="normal" font="default" size="100%">128</style></volume><pages><style 
face="normal" font="default" size="100%">79–86</style></pages><isbn><style face="normal" font="default" size="100%">91-974741-6-9</style></isbn><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">This paper reports the results of the first study comparing subjects' responses to robotic emotional facial displays and human emotional facial displays.
It describes step by step the building of believable emotional expressions in a robotic head, the problems raised by a comparative approach to robotic and human expressions, and the solutions found to ensure a valid comparison. Twenty adults and fifteen children aged 3 were presented with static (photos) and dynamic (2-D video clips or 3-D live) displays of emotional expressions produced by a robot or a person.
The study compares two dependent variables: emotional resonance (automatic facial feedback during an emotional display) and emotion recognition (emotion labeling), according to partner (robot or person) and to the nature of the display (static or dynamic). Results for emotional resonance were similar for young children and for adults. Both groups resonated significantly more to dynamic displays than to static displays, whether the expressions were robotic or human. In both groups, emotion recognition was easier for human expressions than for robotic ones.
Unlike children, who recognized dynamically displayed emotional expressions more easily, adults scored higher with static displays, reflecting a cognitive strategy independent of emotional resonance. Results are discussed from the perspective of the therapeutic use of this comparative approach with children with autism, who are described as impaired in emotion sharing and communication.</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>5</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Lola Cañamero</style></author><author><style face="normal" font="default" size="100%">Philippe Gaussier</style></author></authors><secondary-authors><author><style face="normal" font="default" size="100%">Jacqueline Nadel</style></author><author><style face="normal" font="default" size="100%">Darwin Muir</style></author></secondary-authors></contributors><titles><title><style face="normal" font="default" size="100%">Emotion Understanding: Robots as Tools and Models</style></title><secondary-title><style face="normal" font="default" size="100%">Emotional Development: Recent Research Advances</style></secondary-title></titles><dates><year><style face="normal" font="default" size="100%">2005</style></year></dates><publisher><style face="normal" font="default" size="100%">Oxford University Press</style></publisher><pages><style face="normal" font="default" size="100%">235–258</style></pages><isbn><style face="normal" font="default" size="100%">0-19-852883-3 (Hbk) 0-19-852884-1 (Pbk)</style></isbn><language><style face="normal" font="default" size="100%">eng</style></language><section><style face="normal" font="default" size="100%">9</style></section></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Arnaud J Blanchard</style></author><author><style 
face="normal" font="default" size="100%">Lola Cañamero</style></author></authors><secondary-authors><author><style face="normal" font="default" size="100%">Luc Berthouze</style></author><author><style face="normal" font="default" size="100%">Frédéric Kaplan</style></author><author><style face="normal" font="default" size="100%">Hideki Kozima</style></author><author><style face="normal" font="default" size="100%">Hiroyuki Yano</style></author><author><style face="normal" font="default" size="100%">Jürgen Konczak</style></author><author><style face="normal" font="default" size="100%">Giorgio Metta</style></author><author><style face="normal" font="default" size="100%">Jacqueline Nadel</style></author><author><style face="normal" font="default" size="100%">Giulio Sandini</style></author><author><style face="normal" font="default" size="100%">Georgi Stojanov</style></author><author><style face="normal" font="default" size="100%">Christian Balkenius</style></author></secondary-authors></contributors><titles><title><style face="normal" font="default" size="100%">From Imprinting to Adaptation: Building a History of Affective Interaction</style></title><secondary-title><style face="normal" font="default" size="100%">Fifth International Workshop on Epigenetic Robotics: Modeling Cognitive Development in Robotic Systems (EpiRob2005)</style></secondary-title></titles><dates><year><style  face="normal" font="default" size="100%">2005</style></year></dates><publisher><style face="normal" font="default" size="100%">Lund University Cognitive Studies</style></publisher><pages><style face="normal" font="default" size="100%">23–30</style></pages><isbn><style face="normal" font="default" size="100%">91-974741-4-2</style></isbn><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">We present a Perception-Action architecture and experiments to simulate imprinting—the establishment of strong attachment 
links with a &quot;caregiver&quot;—in a robot. Following recent theories, we do not consider imprinting as rigidly timed and irreversible, but as a more flexible phenomenon that allows for further adaptation as a result of reward-based learning through experience. Our architecture reconciles these two types of perceptual learning, traditionally considered distinct and even incompatible. After the initial imprinting, adaptation is achieved in the context of a history of &quot;affective&quot; interactions between the robot and a human, driven by &quot;distress&quot; and &quot;comfort&quot; responses in the robot.</style></abstract></record></records></xml>