TY - JOUR
T1 - Hedonic Value: Enhancing Adaptation for Motivated Agents
JF - Adaptive Behavior
Y1 - 2013
A1 - Cos, Ignasi
A1 - Cañamero, Lola
A1 - Hayes, Gillian M
A1 - Gillies, Andrew
KW - Actor-Critic
KW - Grounding
KW - Hedonic Value
KW - Motivation
KW - Reinforcement Learning
AB - Reinforcement learning (RL) in the context of artificial agents is typically used to produce behavioural responses as a function of the reward obtained through interaction with the environment. When the problem consists of learning the shortest path to a goal, it is common to use reward functions that yield a fixed value after each decision, for example a positive value when the target location is reached and a negative one at each intermediate step. However, this fixed strategy may be overly simplistic for agents adapting to dynamic environments, in which resources may vary over time. By contrast, there is significant evidence that most living beings internally modulate reward value as a function of their context to expand their range of adaptivity. Inspired by the potential of this operation, we review its underlying processes and introduce a simplified formalisation for artificial agents. The performance of this formalism is tested by monitoring the adaptation of an agent endowed with a motivated actor-critic model, incorporating our formalisation of value and constrained by physiological stability, to environments with different resource distributions. Our main result shows that the manner in which reward is internally processed as a function of the agent’s motivational state strongly influences the adaptivity of the behavioural cycles generated and the agent’s physiological stability.
PB - SAGE
VL - 21
UR - https://journals.sagepub.com/doi/10.1177/1059712313486817
IS - 6
ER -
TY - JOUR
T1 - Learning Affordances of Consummatory Behaviors: Motivation-Driven Adaptive Perception
JF - Adaptive Behavior
Y1 - 2010
A1 - Cos, Ignasi
A1 - Cañamero, Lola
A1 - Hayes, Gillian M
AB - This article introduces a formalization of the dynamics between sensorimotor interaction and homeostasis, integrated in a single architecture to learn object affordances of consummatory behaviors. We also describe the principles necessary to learn grounded knowledge in the context of an agent and its surrounding environment, which we use to investigate the constraints imposed by the agent’s internal dynamics and the environment. This is tested with an embodied, situated robot in a simulated environment, yielding results that support the formalization. Furthermore, we show that this methodology allows learned affordances to be dynamically redefined, depending on object similarity, resource availability, and the rhythms of the agent’s internal physiology. For example, if a resource becomes increasingly scarce, the value assigned by the agent to its related effect increases accordingly, encouraging a more active behavioral strategy to maintain physiological stability. Experimental results also suggest that combining motivation-driven and affordance learning in a single architecture should reduce its overall complexity while increasing its adaptivity.
PB - SAGE
VL - 18
UR - https://journals.sagepub.com/doi/10.1177/1059712310375471
IS - 3-4
ER -
TY - CONF
T1 - Ecological Integration of Affordances and Drives for Behaviour Selection
T2 - Proc. IJCAI 2005 Workshop on Modeling Natural Action Selection
Y1 - 2005
A1 - Cos-Aguilera, Ignasi
A1 - Cañamero, Lola
A1 - Hayes, Gillian M
A1 - Gillies, Andrew
ED - Bryson, Joanna J
ED - Prescott, Tony J
ED - Seth, Anil K
AB - This paper presents a study of the integration of physiology and perception in a biologically inspired robotic architecture that learns behavioural patterns through interaction with the environment. The architecture implements a hierarchical view of learning and behaviour selection in which adaptation rests on the relationship between reinforcement and the agent’s inner motivations. This view brings together the basic principles necessary to explain the underlying processes of learning behavioural patterns and the way these change via interaction with the environment. These principles have been tested experimentally, and the results are presented and discussed throughout the paper.
JF - Proc. IJCAI 2005 Workshop on Modeling Natural Action Selection
CY - Edinburgh, Scotland
SN - 1-902956-40-9
ER -
TY - CONF
T1 - Motivation Driven Learning of Action Affordances
T2 - Proceedings of the Symposium on Agents that Want and Like: Motivational and Emotional Roots of Cognition and Action (SSAISB'05)
Y1 - 2005
A1 - Cos-Aguilera, Ignasi
A1 - Cañamero, Lola
A1 - Hayes, Gillian M
ED - Cañamero, Lola
AB - Survival in the animal realm often depends on the ability to elucidate the potentialities for action offered by every situation. This paper argues that affordance learning is a powerful ability for adaptive, embodied, situated agents, and presents a motivation-driven method for learning affordances. The proposed method considers the agent and its environment as a single unit, thus intrinsically relating the agent’s interactions to fluctuations in its internal motivation. Since the motivational state is an expression of the agent’s physiology, the causal relationship between interactions and their effect on the motivational state is exploited as a principle to learn object affordances. The hypothesis is tested in a Webots 4.0 simulator with a Khepera robot.
JF - Proceedings of the Symposium on Agents that Want and Like: Motivational and Emotional Roots of Cognition and Action (SSAISB'05)
PB - AISB
CY - Hatfield, UK
SN - 1-902956-41-7
UR - https://aisb.org.uk/wp-content/uploads/2019/12/2_Agents_Final.pdf
N1 - Symposium proceedings (PDF)
ER -
TY - CONF
T1 - Using a SOFM to Learn Object Affordances
T2 - Proc. 5th Workshop of Physical Agents (WAF'04)
Y1 - 2004
A1 - Cos-Aguilera, Ignasi
A1 - Hayes, Gillian M
A1 - Cañamero, Lola
AB - Learning affordances can be defined as learning action potentials, i.e., learning that an object exhibiting certain regularities offers the possibility of performing a particular action. We propose a method to endow an agent with the capability of acquiring this knowledge by relating object invariants to the potential for performing an action via interaction episodes with each object. We introduce a biologically inspired model to test this learning hypothesis and a set of experiments to check its validity in a Webots simulator with a Khepera robot in a simple environment. The experiments aim to show the use of a GWR network to cluster the agent’s sensory input and, furthermore, that this neural clustering algorithm can be used as a starting point to build agents that learn the relevant functional bindings between cues in the environment and the internal needs of an agent.
JF - Proc. 5th Workshop of Physical Agents (WAF'04)
PB - University of Edinburgh
CY - Girona, Spain
UR - https://uhra.herts.ac.uk/handle/2299/9905
N1 - Open Access
ER -
TY - CONF
T1 - Learning Object Functionalities in the Context of Action Selection
T2 - Towards Intelligent Mobile Robots, TIMR'03: 4th British Conference on Mobile Robotics
Y1 - 2003
A1 - Cos-Aguilera, Ignasi
A1 - Cañamero, Lola
A1 - Hayes, Gillian M
ED - Nehmzow, U
ED - Melhuish, C
JF - Towards Intelligent Mobile Robots, TIMR'03: 4th British Conference on Mobile Robotics
CY - University of the West of England, Bristol
ER -
TY - CONF
T1 - Motivation-Driven Learning of Object Affordances: First Experiments Using a Simulated Khepera Robot
T2 - Proc. 5th International Conference on Cognitive Modelling (ICCM'03)
Y1 - 2003
A1 - Cos-Aguilera, Ignasi
A1 - Cañamero, Lola
A1 - Hayes, Gillian M
ED - Detje, Frank
ED - Dörner, Dietrich
ED - Schaub, Harald
JF - Proc. 5th International Conference on Cognitive Modelling (ICCM'03)
CY - Bamberg, Germany
ER -
TY - CONF
T1 - Relating Behavior Selection Architectures to Environmental Complexity
T2 - From Animals to Animats: Proc. 7th International Conference on Simulation of Adaptive Behavior
Y1 - 2002
A1 - Avila-García, Orlando
A1 - Hafner, Elena
A1 - Cañamero, Lola
ED - Hallam, Bridget
ED - Floreano, Dario
ED - Hallam, John
ED - Hayes, Gillian M
ED - Meyer, Jean-Arcady
JF - From Animals to Animats: Proc. 7th International Conference on Simulation of Adaptive Behavior
PB - MIT Press
CY - Edinburgh, Scotland
SN - 978-0-262-58217-9
ER -