<?xml version="1.0" encoding="UTF-8"?><xml><records><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Josep Lluís Arcos</style></author><author><style face="normal" font="default" size="100%">D Cañamero</style></author><author><style face="normal" font="default" size="100%">Ramon López de Mántaras</style></author></authors><secondary-authors><author><style face="normal" font="default" size="100%">Althoff, Klaus-Dieter</style></author><author><style face="normal" font="default" size="100%">Bergmann, Ralph</style></author><author><style face="normal" font="default" size="100%">L Karl Branting</style></author></secondary-authors></contributors><titles><title><style face="normal" font="default" size="100%">Affect-Driven CBR to Generate Expressive Music</style></title><secondary-title><style face="normal" font="default" size="100%">Case-Based Reasoning Research and Development. Third International Conference on Case-Based Reasoning, ICCBR'99</style></secondary-title><tertiary-title><style face="normal" font="default" size="100%">Lecture Notes in Artificial Intelligence</style></tertiary-title></titles><dates><year><style  face="normal" font="default" size="100%">1999</style></year></dates><publisher><style face="normal" font="default" size="100%">Springer Berlin Heidelberg</style></publisher><volume><style face="normal" font="default" size="100%">1650</style></volume><pages><style face="normal" font="default" size="100%">1–13</style></pages><isbn><style face="normal" font="default" size="100%">978-3-540-66237-2</style></isbn><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">We present an extension of an existing system, called SaxEx, capable of generating expressive musical performances based on Case-Based Reasoning (CBR) techniques. 
The previous version of SaxEx did not take into account the possibility of using affective labels to guide the CBR task. This paper discusses the introduction of such affective knowledge to improve the retrieval capabilities of the system. Three affective dimensions are considered (tender-aggressive, sad-joyful, and calm-restless) that allow the user to declaratively instruct the system to perform according to any combination of five qualitative values along these three dimensions.</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">D Cañamero</style></author><author><style face="normal" font="default" size="100%">Josep Lluís Arcos</style></author><author><style face="normal" font="default" size="100%">Ramon López de Mántaras</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Imitating Human Performances to Automatically Generate Expressive Jazz Ballads</style></title><secondary-title><style face="normal" font="default" size="100%">Proc. AISB'99 Symposium on Imitation in Animals and Artifacts</style></secondary-title></titles><dates><year><style face="normal" font="default" size="100%">1999</style></year></dates><publisher><style face="normal" font="default" size="100%">AISB</style></publisher><pub-location><style face="normal" font="default" size="100%">Edinburgh, Scotland</style></pub-location><pages><style face="normal" font="default" size="100%">115–120</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">One of the main problems with the automatic generation of expressive musical performances is to grasp the way in which human performers use musical knowledge that is not explicitly noted in musical scores. 
Moreover, this knowledge is tacit and difficult to verbalize, and must therefore be acquired through a process of observation, imitation, and experimentation. For this reason, AI approaches based on declarative knowledge representations have serious limitations. An alternative approach is to directly use the implicit knowledge contained in recordings of human performances. In this paper, we describe a case-based reasoning system that generates expressive musical performances by imitating examples of expressive human performances.</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Josep Lluís Arcos</style></author><author><style face="normal" font="default" size="100%">D Cañamero</style></author><author><style face="normal" font="default" size="100%">Ramon López de Mántaras</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Affect-Driven Generation of Expressive Musical Performances</style></title><secondary-title><style face="normal" font="default" size="100%">Emotional and Intelligent: The Tangled Knot of Cognition. Papers from the 1998 AAAI Fall Symposium</style></secondary-title></titles><dates><year><style face="normal" font="default" size="100%">1998</style></year></dates><publisher><style face="normal" font="default" size="100%">AAAI Press</style></publisher><pages><style face="normal" font="default" size="100%">1–6</style></pages><language><style face="normal" font="default" size="100%">eng</style></language></record></records></xml>