<?xml version="1.0" encoding="UTF-8"?><xml><records><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Coninx, Alexandre</style></author><author><style face="normal" font="default" size="100%">Paul E. Baxter</style></author><author><style face="normal" font="default" size="100%">Oleari, Elettra</style></author><author><style face="normal" font="default" size="100%">Bellini, Sara</style></author><author><style face="normal" font="default" size="100%">Bierman, Bert</style></author><author><style face="normal" font="default" size="100%">Henkemans, Olivier Blanson</style></author><author><style face="normal" font="default" size="100%">Lola Cañamero</style></author><author><style face="normal" font="default" size="100%">Cosi, Piero</style></author><author><style face="normal" font="default" size="100%">Valentin Enescu</style></author><author><style face="normal" font="default" size="100%">Espinoza, Raquel Ros</style></author><author><style face="normal" font="default" size="100%">Antoine Hiolle</style></author><author><style face="normal" font="default" size="100%">Remi Humbert</style></author><author><style face="normal" font="default" size="100%">Kiefer, Bernd</style></author><author><style face="normal" font="default" size="100%">Kruijff-Korbayová, Ivana</style></author><author><style face="normal" font="default" size="100%">Looije, Rosmarijn</style></author><author><style face="normal" font="default" size="100%">Mosconi, Marco</style></author><author><style face="normal" font="default" size="100%">Mark A. 
Neerincx</style></author><author><style face="normal" font="default" size="100%">Giulio Paci</style></author><author><style face="normal" font="default" size="100%">Patsis, Georgios</style></author><author><style face="normal" font="default" size="100%">Pozzi, Clara</style></author><author><style face="normal" font="default" size="100%">Sacchitelli, Francesca</style></author><author><style face="normal" font="default" size="100%">Hichem Sahli</style></author><author><style face="normal" font="default" size="100%">Alberto Sanna</style></author><author><style face="normal" font="default" size="100%">Sommavilla, Giacomo</style></author><author><style face="normal" font="default" size="100%">Tesser, Fabio</style></author><author><style face="normal" font="default" size="100%">Yiannis Demiris</style></author><author><style face="normal" font="default" size="100%">Tony Belpaeme</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Towards Long-Term Social Child-Robot Interaction: Using Multi-Activity Switching to Engage Young Users</style></title><secondary-title><style face="normal" font="default" size="100%">Journal of Human-Robot Interaction</style></secondary-title></titles><dates><year><style  face="normal" font="default" size="100%">2016</style></year></dates><urls><web-urls><url><style face="normal" font="default" size="100%">https://dl.acm.org/doi/abs/10.5898/JHRI.5.1.Coninx</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">5</style></volume><pages><style face="normal" font="default" size="100%">32–67</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">Social robots have the potential to provide support in a number of practical domains, such as learning and behaviour change. 
This potential is particularly relevant for children, who have proven receptive to interactions with social robots. To reach learning and therapeutic goals, a number of issues need to be investigated, notably the design of an effective child-robot interaction (cHRI) to ensure the child remains engaged in the relationship and that educational goals are met. Typically, current cHRI research experiments focus on a single type of interaction activity (e.g. a game). However, these can suffer from a lack of adaptation to the child, or from an increasingly repetitive nature of the activity and interaction. In this paper, we motivate and propose a practicable solution to this issue: an adaptive robot able to switch between multiple activities within single interactions. We describe a system that embodies this idea, and present a case study in which diabetic children collaboratively learn with the robot about various aspects of managing their condition. We demonstrate the ability of our system to induce a varied interaction and show the potential of this approach both as an educational tool and as a research method for long-term cHRI.</style></abstract><issue><style face="normal" font="default" size="100%">1</style></issue><notes><style face="normal" font="default" size="100%">&lt;a href=&quot;https://dl.acm.org/doi/abs/10.5898/JHRI.5.1.Coninx&quot;&gt;Download&lt;/a&gt; (Open Access)</style></notes></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Kruijff-Korbayová, Ivana</style></author><author><style face="normal" font="default" size="100%">Oleari, Elettra</style></author><author><style face="normal" font="default" size="100%">Pozzi, Clara</style></author><author><style face="normal" font="default" size="100%">Sacchitelli, Francesca</style></author><author><style face="normal" font="default" size="100%">Bagherzadhalimi, 
Anahita</style></author><author><style face="normal" font="default" size="100%">Bellini, Sara</style></author><author><style face="normal" font="default" size="100%">Kiefer, Bernd</style></author><author><style face="normal" font="default" size="100%">Racioppa, Stefania</style></author><author><style face="normal" font="default" size="100%">Coninx, Alexandre</style></author><author><style face="normal" font="default" size="100%">Paul E. Baxter</style></author><author><style face="normal" font="default" size="100%">Bierman, Bert</style></author><author><style face="normal" font="default" size="100%">Henkemans, Olivier Blanson</style></author><author><style face="normal" font="default" size="100%">Mark A. Neerincx</style></author><author><style face="normal" font="default" size="100%">Rosemarijn Looije</style></author><author><style face="normal" font="default" size="100%">Yiannis Demiris</style></author><author><style face="normal" font="default" size="100%">Espinoza, Raquel Ros</style></author><author><style face="normal" font="default" size="100%">Mosconi, Marco</style></author><author><style face="normal" font="default" size="100%">Cosi, Piero</style></author><author><style face="normal" font="default" size="100%">Remi Humbert</style></author><author><style face="normal" font="default" size="100%">Lola Cañamero</style></author><author><style face="normal" font="default" size="100%">Hichem Sahli</style></author><author><style face="normal" font="default" size="100%">Joachim de Greeff</style></author><author><style face="normal" font="default" size="100%">James Kennedy</style></author><author><style face="normal" font="default" size="100%">Robin Read</style></author><author><style face="normal" font="default" size="100%">Lewis, Matthew</style></author><author><style face="normal" font="default" size="100%">Antoine Hiolle</style></author><author><style face="normal" font="default" size="100%">Giulio Paci</style></author><author><style face="normal" font="default" 
size="100%">Sommavilla, Giacomo</style></author><author><style face="normal" font="default" size="100%">Tesser, Fabio</style></author><author><style face="normal" font="default" size="100%">Athanasopoulos, Georgios</style></author><author><style face="normal" font="default" size="100%">Patsis, Georgios</style></author><author><style face="normal" font="default" size="100%">Verhelst, Werner</style></author><author><style face="normal" font="default" size="100%">Alberto Sanna</style></author><author><style face="normal" font="default" size="100%">Tony Belpaeme</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Let’s Be Friends: Perception of a Social Robotic Companion for children with T1DM</style></title><secondary-title><style face="normal" font="default" size="100%">Proc. New Friends 2015</style></secondary-title></titles><dates><year><style  face="normal" font="default" size="100%">2015</style></year><pub-dates><date><style  face="normal" font="default" size="100%">10/2015</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">https://mheerink.home.xs4all.nl/pdf/ProceedingsNF2015-3.pdf</style></url></web-urls></urls><pub-location><style face="normal" font="default" size="100%">Almere, The Netherlands</style></pub-location><pages><style face="normal" font="default" size="100%">32–33</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">We describe the social characteristics of a robot developed to support children with Type 1 Diabetes Mellitus (T1DM) in the process of education and care. We evaluated the perception of the robot at a summer camp where diabetic children aged 10-14 experienced the robot in group interactions. 
Children in the intervention condition also interacted with the robot individually, in one-to-one sessions featuring several game-like activities. These children perceived the robot significantly more as a friend than those in the control group. They also readily engaged with it in dialogues about their habits related to healthy lifestyle as well as personal experiences concerning diabetes. This indicates that the one-on-one interactions added a special quality to the relationship of the children with the robot.</style></abstract><notes><style face="normal" font="default" size="100%">&lt;a href=&quot;https://mheerink.home.xs4all.nl/pdf/ProceedingsNF2015-3.pdf&quot;&gt;Download full proceedings&lt;/a&gt; (PDF)</style></notes></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Aryel Beck</style></author><author><style face="normal" font="default" size="100%">Lola Cañamero</style></author><author><style face="normal" font="default" size="100%">Antoine Hiolle</style></author><author><style face="normal" font="default" size="100%">Luisa Damiano</style></author><author><style face="normal" font="default" size="100%">Cosi, Piero</style></author><author><style face="normal" font="default" size="100%">Tesser, Fabio</style></author><author><style face="normal" font="default" size="100%">Sommavilla, Giacomo</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Interpretation of Emotional Body Language Displayed by a Humanoid Robot: A Case Study with Children</style></title><secondary-title><style face="normal" font="default" size="100%">International Journal of Social Robotics</style></secondary-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">emotion</style></keyword><keyword><style  face="normal" font="default" size="100%">emotional body 
language</style></keyword><keyword><style  face="normal" font="default" size="100%">perception</style></keyword><keyword><style  face="normal" font="default" size="100%">Social robotics</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2013</style></year></dates><urls><web-urls><url><style face="normal" font="default" size="100%">https://link.springer.com/article/10.1007/s12369-013-0193-z</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">5</style></volume><pages><style face="normal" font="default" size="100%">325–334</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">The work reported in this paper focuses on giving humanoid robots the capacity to express emotions with their body. Previous results show that adults are able to interpret different key poses displayed by a humanoid robot and also that changing the head position affects the expressiveness of the key poses in a consistent way. Moving the head down leads to decreased arousal (the level of energy) and valence (positive or negative emotion) whereas moving the head up produces an increase along these dimensions. Hence, changing the head position during an interaction should send intuitive signals. The study reported in this paper tested children’s ability to recognize the emotional body language displayed by a humanoid robot. 
The results suggest that body postures and head position can be used to convey emotions during child-robot interaction.</style></abstract><notes><style face="normal" font="default" size="100%">&lt;a href=&quot;https://link.springer.com/article/10.1007/s12369-013-0193-z&quot;&gt;Download&lt;/a&gt;</style></notes></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Tony Belpaeme</style></author><author><style face="normal" font="default" size="100%">Paul E. Baxter</style></author><author><style face="normal" font="default" size="100%">Robin Read</style></author><author><style face="normal" font="default" size="100%">Rachel Wood</style></author><author><style face="normal" font="default" size="100%">Cuayáhuitl, Heriberto</style></author><author><style face="normal" font="default" size="100%">Kiefer, Bernd</style></author><author><style face="normal" font="default" size="100%">Racioppa, Stefania</style></author><author><style face="normal" font="default" size="100%">Kruijff-Korbayová, Ivana</style></author><author><style face="normal" font="default" size="100%">Athanasopoulos, Georgios</style></author><author><style face="normal" font="default" size="100%">Valentin Enescu</style></author><author><style face="normal" font="default" size="100%">Rosemarijn Looije</style></author><author><style face="normal" font="default" size="100%">Mark A. 
Neerincx</style></author><author><style face="normal" font="default" size="100%">Yiannis Demiris</style></author><author><style face="normal" font="default" size="100%">Raquel Ros-Espinoza</style></author><author><style face="normal" font="default" size="100%">Aryel Beck</style></author><author><style face="normal" font="default" size="100%">Lola Cañamero</style></author><author><style face="normal" font="default" size="100%">Lewis, Matthew</style></author><author><style face="normal" font="default" size="100%">Baroni, Ilaria</style></author><author><style face="normal" font="default" size="100%">Nalin, Marco</style></author><author><style face="normal" font="default" size="100%">Cosi, Piero</style></author><author><style face="normal" font="default" size="100%">Giulio Paci</style></author><author><style face="normal" font="default" size="100%">Tesser, Fabio</style></author><author><style face="normal" font="default" size="100%">Sommavilla, Giacomo</style></author><author><style face="normal" font="default" size="100%">Remi Humbert</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Multimodal Child-Robot Interaction: Building Social Bonds</style></title><secondary-title><style face="normal" font="default" size="100%">Journal of Human-Robot Interaction</style></secondary-title></titles><dates><year><style  face="normal" font="default" size="100%">2012</style></year></dates><urls><web-urls><url><style face="normal" font="default" size="100%">https://dl.acm.org/doi/10.5555/3109688.3109691</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">1</style></volume><pages><style face="normal" font="default" size="100%">33–53</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">For robots to interact effectively with human users they must be capable of coordinated, timely behavior in response 
to social context. The Adaptive Strategies for Sustainable Long-Term Social Interaction (ALIZ-E) project focuses on the design of long-term, adaptive social interaction between robots and child users in real-world settings. In this paper, we report on the iterative approach taken to scientific and technical developments toward this goal: advancing individual technical competencies and integrating them to form an autonomous robotic system for evaluation “in the wild.” The first evaluation iterations have shown the potential of this methodology in terms of adaptation of the robot to the interactant and the resulting influences on engagement. This sets the foundation for an ongoing research program that seeks to develop technologies for social robot companions.</style></abstract><issue><style face="normal" font="default" size="100%">2</style></issue><notes><style face="normal" font="default" size="100%">&lt;a href=&quot;https://dl.acm.org/doi/10.5555/3109688.3109691&quot;&gt;Download&lt;/a&gt; (Open Access)</style></notes></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Aryel Beck</style></author><author><style face="normal" font="default" size="100%">Lola Cañamero</style></author><author><style face="normal" font="default" size="100%">Luisa Damiano</style></author><author><style face="normal" font="default" size="100%">Sommavilla, Giacomo</style></author><author><style face="normal" font="default" size="100%">Tesser, Fabio</style></author><author><style face="normal" font="default" size="100%">Cosi, Piero</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Children Interpretation of Emotional Body Language Displayed by a Robot</style></title><secondary-title><style face="normal" font="default" size="100%">Proc. 
3rd International Conference on Social Robotics (ICSR 2011)</style></secondary-title></titles><dates><year><style  face="normal" font="default" size="100%">2011</style></year></dates><urls><web-urls><url><style face="normal" font="default" size="100%">https://link.springer.com/chapter/10.1007%2F978-3-642-25504-5_7</style></url></web-urls></urls><publisher><style face="normal" font="default" size="100%">Springer</style></publisher><pub-location><style face="normal" font="default" size="100%">Amsterdam, The Netherlands</style></pub-location><pages><style face="normal" font="default" size="100%">62–70</style></pages><isbn><style face="normal" font="default" size="100%">978-3-642-25504-5</style></isbn><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">Previous results show that adults are able to interpret different key poses displayed by the robot and also that changing the head position affects the expressiveness of the key poses in a consistent way. Moving the head down leads to decreased arousal (the level of energy), valence (positive or negative) and stance (approaching or avoiding) whereas moving the head up produces an increase along these dimensions [1]. Hence, changing the head position during an interaction should send intuitive signals which could be used during an interaction. The ALIZ-E target group are children between the age of 8 and 11. Existing results suggest that they would be able to interpret human emotional body language [2, 3].

Based on these results, an experiment was conducted to test whether the results of [1] can be applied to children. If so, body postures and head position could be used to convey emotions during an interaction.</style></abstract><notes><style face="normal" font="default" size="100%">&lt;a href=&quot;https://link.springer.com/chapter/10.1007%2F978-3-642-25504-5_7&quot;&gt;Download&lt;/a&gt;</style></notes></record></records></xml>