<?xml version="1.0" encoding="UTF-8"?><xml><records><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Coninx, Alexandre</style></author><author><style face="normal" font="default" size="100%">Baxter, Paul E.</style></author><author><style face="normal" font="default" size="100%">Oleari, Elettra</style></author><author><style face="normal" font="default" size="100%">Bellini, Sara</style></author><author><style face="normal" font="default" size="100%">Bierman, Bert</style></author><author><style face="normal" font="default" size="100%">Henkemans, Olivier Blanson</style></author><author><style face="normal" font="default" size="100%">Cañamero, Lola</style></author><author><style face="normal" font="default" size="100%">Cosi, Piero</style></author><author><style face="normal" font="default" size="100%">Enescu, Valentin</style></author><author><style face="normal" font="default" size="100%">Espinoza, Raquel Ros</style></author><author><style face="normal" font="default" size="100%">Hiolle, Antoine</style></author><author><style face="normal" font="default" size="100%">Humbert, Remi</style></author><author><style face="normal" font="default" size="100%">Kiefer, Bernd</style></author><author><style face="normal" font="default" size="100%">Kruijff-Korbayová, Ivana</style></author><author><style face="normal" font="default" size="100%">Looije, Rosemarijn</style></author><author><style face="normal" font="default" size="100%">Mosconi, Marco</style></author><author><style face="normal" font="default" size="100%">Neerincx, Mark A.</style></author><author><style face="normal" font="default" size="100%">Paci, Giulio</style></author><author><style face="normal" font="default" size="100%">Patsis, Georgios</style></author><author><style face="normal" font="default" size="100%">Pozzi, Clara</style></author><author><style face="normal" font="default" size="100%">Sacchitelli, Francesca</style></author><author><style face="normal" font="default" size="100%">Sahli, Hichem</style></author><author><style face="normal" font="default" size="100%">Sanna, Alberto</style></author><author><style face="normal" font="default" size="100%">Sommavilla, Giacomo</style></author><author><style face="normal" font="default" size="100%">Tesser, Fabio</style></author><author><style face="normal" font="default" size="100%">Demiris, Yiannis</style></author><author><style face="normal" font="default" size="100%">Belpaeme, Tony</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Towards Long-Term Social Child-Robot Interaction: Using Multi-Activity Switching to Engage Young Users</style></title><secondary-title><style face="normal" font="default" size="100%">Journal of Human-Robot Interaction</style></secondary-title></titles><dates><year><style face="normal" font="default" size="100%">2016</style></year></dates><urls><web-urls><url><style face="normal" font="default" size="100%">https://dl.acm.org/doi/abs/10.5898/JHRI.5.1.Coninx</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">5</style></volume><pages><style face="normal" font="default" size="100%">32–67</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">Social robots have the potential to provide support in a number of practical domains, such as learning and behaviour change.
This potential is particularly relevant for children, who have proven receptive to interactions with social robots. To reach learning and therapeutic goals, a number of issues need to be investigated, notably the design of an effective child-robot interaction (cHRI) to ensure the child remains engaged in the relationship and that educational goals are met. Typically, current cHRI research experiments focus on a single type of interaction activity (e.g. a game). However, these can suffer from a lack of adaptation to the child, or from an increasingly repetitive nature of the activity and interaction. In this paper, we motivate and propose a practicable solution to this issue: an adaptive robot able to switch between multiple activities within single interactions. We describe a system that embodies this idea, and present a case study in which diabetic children collaboratively learn with the robot about various aspects of managing their condition. We demonstrate the ability of our system to induce a varied interaction and show the potential of this approach both as an educational tool and as a research method for long-term cHRI.</style></abstract><issue><style face="normal" font="default" size="100%">1</style></issue><notes><style face="normal" font="default" size="100%">&lt;a href=&quot;https://dl.acm.org/doi/abs/10.5898/JHRI.5.1.Coninx&quot;&gt;Download&lt;/a&gt; (Open Access)</style></notes></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Kruijff-Korbayová, Ivana</style></author><author><style face="normal" font="default" size="100%">Oleari, Elettra</style></author><author><style face="normal" font="default" size="100%">Pozzi, Clara</style></author><author><style face="normal" font="default" size="100%">Sacchitelli, Francesca</style></author><author><style face="normal" font="default" size="100%">Bagherzadhalimi, Anahita</style></author><author><style face="normal" font="default" size="100%">Bellini, Sara</style></author><author><style face="normal" font="default" size="100%">Kiefer, Bernd</style></author><author><style face="normal" font="default" size="100%">Racioppa, Stefania</style></author><author><style face="normal" font="default" size="100%">Coninx, Alexandre</style></author><author><style face="normal" font="default" size="100%">Baxter, Paul E.</style></author><author><style face="normal" font="default" size="100%">Bierman, Bert</style></author><author><style face="normal" font="default" size="100%">Henkemans, Olivier Blanson</style></author><author><style face="normal" font="default" size="100%">Neerincx, Mark A.</style></author><author><style face="normal" font="default" size="100%">Looije, Rosemarijn</style></author><author><style face="normal" font="default" size="100%">Demiris, Yiannis</style></author><author><style face="normal" font="default" size="100%">Espinoza, Raquel Ros</style></author><author><style face="normal" font="default" size="100%">Mosconi, Marco</style></author><author><style face="normal" font="default" size="100%">Cosi, Piero</style></author><author><style face="normal" font="default" size="100%">Humbert, Remi</style></author><author><style face="normal" font="default" size="100%">Cañamero, Lola</style></author><author><style face="normal" font="default" size="100%">Sahli, Hichem</style></author><author><style face="normal" font="default" size="100%">de Greeff, Joachim</style></author><author><style face="normal" font="default" size="100%">Kennedy, James</style></author><author><style face="normal" font="default" size="100%">Read, Robin</style></author><author><style face="normal" font="default" size="100%">Lewis, Matthew</style></author><author><style face="normal" font="default" size="100%">Hiolle, Antoine</style></author><author><style face="normal" font="default" size="100%">Paci, Giulio</style></author><author><style face="normal" font="default" size="100%">Sommavilla, Giacomo</style></author><author><style face="normal" font="default" size="100%">Tesser, Fabio</style></author><author><style face="normal" font="default" size="100%">Athanasopoulos, Georgios</style></author><author><style face="normal" font="default" size="100%">Patsis, Georgios</style></author><author><style face="normal" font="default" size="100%">Verhelst, Werner</style></author><author><style face="normal" font="default" size="100%">Sanna, Alberto</style></author><author><style face="normal" font="default" size="100%">Belpaeme, Tony</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Let’s Be Friends: Perception of a Social Robotic Companion for children with T1DM</style></title><secondary-title><style face="normal" font="default" size="100%">Proc. New Friends 2015</style></secondary-title></titles><dates><year><style face="normal" font="default" size="100%">2015</style></year><pub-dates><date><style face="normal" font="default" size="100%">10/2015</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">https://mheerink.home.xs4all.nl/pdf/ProceedingsNF2015-3.pdf</style></url></web-urls></urls><pub-location><style face="normal" font="default" size="100%">Almere, The Netherlands</style></pub-location><pages><style face="normal" font="default" size="100%">32–33</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">We describe the social characteristics of a robot developed to support children with Type 1 Diabetes Mellitus (T1DM) in the process of education and care. We evaluated the perception of the robot at a summer camp where diabetic children aged 10–14 experienced the robot in group interactions.
Children in the intervention condition additionally interacted with it individually, in one-to-one sessions featuring several game-like activities. These children perceived the robot significantly more as a friend than those in the control group. They also readily engaged with it in dialogues about their habits related to a healthy lifestyle as well as personal experiences concerning diabetes. This indicates that the one-to-one interactions added a special quality to the children's relationship with the robot.</style></abstract><notes><style face="normal" font="default" size="100%">&lt;a href=&quot;https://mheerink.home.xs4all.nl/pdf/ProceedingsNF2015-3.pdf&quot;&gt;Download full proceedings&lt;/a&gt; (PDF)</style></notes></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Wang, Weiyi</style></author><author><style face="normal" font="default" size="100%">Athanasopoulos, Georgios</style></author><author><style face="normal" font="default" size="100%">Yilmazyildiz, Selma</style></author><author><style face="normal" font="default" size="100%">Patsis, Georgios</style></author><author><style face="normal" font="default" size="100%">Enescu, Valentin</style></author><author><style face="normal" font="default" size="100%">Sahli, Hichem</style></author><author><style face="normal" font="default" size="100%">Verhelst, Werner</style></author><author><style face="normal" font="default" size="100%">Hiolle, Antoine</style></author><author><style face="normal" font="default" size="100%">Lewis, Matthew</style></author><author><style face="normal" font="default" size="100%">Cañamero, Lola</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Natural Emotion Elicitation for Emotion Modeling in Child-Robot Interactions</style></title><secondary-title><style face="normal" font="default"
size="100%">Proc. 4th Workshop on Child Computer Interaction (WOCCI 2014)</style></secondary-title></titles><dates><year><style face="normal" font="default" size="100%">2014</style></year></dates><urls><web-urls><url><style face="normal" font="default" size="100%">https://www.isca-speech.org/archive/wocci_2014/wc14_051.html</style></url></web-urls></urls><publisher><style face="normal" font="default" size="100%">ISCA</style></publisher><pub-location><style face="normal" font="default" size="100%">Singapore</style></pub-location><pages><style face="normal" font="default" size="100%">51–56</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">Obtaining spontaneous emotional expressions is the first and vital step in affective computing studies, for both psychologists and computer scientists. However, it is quite challenging to record them in real life, especially when certain modalities are required (e.g. a 3D representation of the body). Traditional elicitation and capture protocols either make participants aware of the recording, which may impair the naturalness of their behaviors, or cause too much information loss. In this paper, we present natural emotion elicitation and recording experiments set in child-robot interaction scenarios. Several state-of-the-art technologies were employed to acquire the multi-modal expressive data that will be further used for emotion modeling and recognition studies. The obtained recordings exhibit the expected emotional expressions.</style></abstract><notes><style face="normal" font="default" size="100%">&lt;a href=&quot;https://www.isca-speech.org/archive/wocci_2014/wc14_051.html&quot;&gt;Download&lt;/a&gt; (Open Access)</style></notes></record></records></xml>