<?xml version="1.0" encoding="UTF-8"?><xml><records><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Nalin, Marco</style></author><author><style face="normal" font="default" size="100%">Baroni, Ilaria</style></author><author><style face="normal" font="default" size="100%">Kruijff-Korbayová, Ivana</style></author><author><style face="normal" font="default" size="100%">Cañamero, Lola</style></author><author><style face="normal" font="default" size="100%">Lewis, Matthew</style></author><author><style face="normal" font="default" size="100%">Beck, Aryel</style></author><author><style face="normal" font="default" size="100%">Cuayáhuitl, Heriberto</style></author><author><style face="normal" font="default" size="100%">Sanna, Alberto</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Children's Adaptation in Multi-session Interaction with a Humanoid Robot</style></title><secondary-title><style face="normal" font="default" size="100%">2012 IEEE RO-MAN: The 21st IEEE International Symposium on Robot and Human Interactive Communication</style></secondary-title></titles><dates><year><style face="normal" font="default" size="100%">2012</style></year></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://ieeexplore.ieee.org/document/6343778/</style></url></web-urls></urls><publisher><style face="normal" font="default" size="100%">IEEE</style></publisher><pages><style face="normal" font="default" size="100%">351–357</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">This work presents preliminary observations from a study of children (N=19, ages 5–12) interacting in multiple sessions with a humanoid robot in a scenario involving game activities. 
The main purpose of the study was to see how their perception of the robot, their engagement, and their enjoyment of the robot as a companion evolve across multiple interactions separated by one to two weeks. However, an interesting phenomenon was observed during the experiment: most of the children soon adapted to the behaviors of the robot in terms of speech timing, speed, and tone; verbal input formulation; nodding; gestures; and so on. We describe the experimental setup, the system, and our observations and preliminary analysis results, which raise interesting questions for further research.</style></abstract><notes><style face="normal" font="default" size="100%">&lt;a href=&quot;https://ieeexplore.ieee.org/document/6343778&quot;&gt;Download&lt;/a&gt;</style></notes></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Beck, Aryel</style></author><author><style face="normal" font="default" size="100%">Cañamero, Lola</style></author><author><style face="normal" font="default" size="100%">Damiano, Luisa</style></author><author><style face="normal" font="default" size="100%">Sommavilla, Giacomo</style></author><author><style face="normal" font="default" size="100%">Tesser, Fabio</style></author><author><style face="normal" font="default" size="100%">Cosi, Piero</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Children Interpretation of Emotional Body Language Displayed by a Robot</style></title><secondary-title><style face="normal" font="default" size="100%">Proc. 
3rd International Conference on Social Robotics (ICSR 2011)</style></secondary-title></titles><dates><year><style face="normal" font="default" size="100%">2011</style></year></dates><urls><web-urls><url><style face="normal" font="default" size="100%">https://link.springer.com/chapter/10.1007%2F978-3-642-25504-5_7</style></url></web-urls></urls><publisher><style face="normal" font="default" size="100%">Springer</style></publisher><pub-location><style face="normal" font="default" size="100%">Amsterdam, The Netherlands</style></pub-location><pages><style face="normal" font="default" size="100%">62–70</style></pages><isbn><style face="normal" font="default" size="100%">978-3-642-25504-5</style></isbn><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">Previous results show that adults are able to interpret different key poses displayed by the robot, and that changing the head position affects the expressiveness of the key poses in a consistent way. Moving the head down leads to decreased arousal (the level of energy), valence (positive or negative), and stance (approaching or avoiding), whereas moving the head up produces an increase along these dimensions [1]. Hence, changing the head position should send intuitive signals that could be used during an interaction. The ALIZ-E target group is children between the ages of 8 and 11. Existing results suggest that they would be able to interpret human emotional body language [2, 3].

Based on these results, an experiment was conducted to test whether the findings of [1] also apply to children. If so, body postures and head position could be used to convey emotions during an interaction.</style></abstract><notes><style face="normal" font="default" size="100%">&lt;a href=&quot;https://link.springer.com/chapter/10.1007%2F978-3-642-25504-5_7&quot;&gt;Download&lt;/a&gt;</style></notes></record></records></xml>