Hedonic Value: Enhancing Adaptation for Motivated Agents

Title: Hedonic Value: Enhancing Adaptation for Motivated Agents
Publication Type: Journal Article
Year of Publication: 2013
Authors: Cos, I, Cañamero, L, Hayes, GM, Gillies, A
Journal: Adaptive Behavior
Volume: 21
Issue: 6
Pagination: 465–483
Publisher: SAGE
ISSN Number: 1059-7123
Keywords: Actor-Critic, Grounding, Hedonic Value, Motivation, Reinforcement Learning
Abstract

Reinforcement learning (RL) in artificial agents is typically used to produce behavioural responses as a function of the reward obtained through interaction with the environment. When the problem is to learn the shortest path to a goal, it is common to use reward functions that yield a fixed value after each decision, for example a positive value when the target location is reached and a negative one at each intermediate step. However, this fixed strategy may be too simplistic for agents to adapt to dynamic environments, in which resources may vary over time. By contrast, there is significant evidence that most living beings internally modulate reward value as a function of their context to expand their range of adaptivity. Inspired by the potential of this operation, we review its underlying processes and introduce a simplified formalisation for artificial agents. We test the performance of this formalism by monitoring the adaptation of an agent endowed with a motivated actor-critic model, embedding our formalisation of value and constrained by physiological stability, to environments with different resource distributions. Our main result shows that the manner in which reward is internally processed as a function of the agent's motivational state strongly influences the adaptivity of the behavioural cycles generated and the agent's physiological stability.
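As an illustration of the mechanism the abstract describes, below is a minimal sketch, not the paper's implementation: a tabular actor-critic in a one-dimensional corridor world where a fixed raw reward is modulated by a single hypothetical physiological deficit standing in for the motivational state. The world layout, learning rates, multiplicative hedonic mapping, and deficit dynamics are all assumptions made for illustration.

import numpy as np

# A minimal sketch (assumed details, not the authors' model): a tabular
# actor-critic in a corridor world. The resource sits at the last position;
# the raw reward is the fixed-value scheme from the abstract, and the
# hedonic value is the raw reward modulated by a physiological deficit.

rng = np.random.default_rng(0)

N_STATES = 10          # corridor positions; resource at the last one
N_ACTIONS = 2          # 0: step left, 1: step right
ALPHA_CRITIC = 0.1     # critic learning rate (assumed value)
ALPHA_ACTOR = 0.05     # actor learning rate (assumed value)
GAMMA = 0.95           # discount factor

V = np.zeros(N_STATES)                   # critic: state values
prefs = np.zeros((N_STATES, N_ACTIONS))  # actor: softmax preferences

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def hedonic_reward(raw_reward, deficit):
    # Illustrative hedonic modulation: the same raw outcome is worth
    # more when the physiological deficit is high.
    return raw_reward * deficit

for episode in range(500):
    state = 0
    deficit = 0.5                        # motivational state in [0, 1]
    for t in range(100):
        probs = softmax(prefs[state])
        action = rng.choice(N_ACTIONS, p=probs)
        next_state = min(max(state + (1 if action == 1 else -1), 0),
                         N_STATES - 1)

        at_resource = next_state == N_STATES - 1
        raw = 1.0 if at_resource else -0.01      # fixed-value scheme
        reward = hedonic_reward(raw, deficit)    # internally modulated value

        # Standard actor-critic TD update, driven by the modulated reward.
        target = reward + (0.0 if at_resource else GAMMA * V[next_state])
        td_error = target - V[state]
        V[state] += ALPHA_CRITIC * td_error

        # Policy-gradient update for the softmax actor.
        grad = -probs
        grad[action] += 1.0
        prefs[state] += ALPHA_ACTOR * td_error * grad

        # Physiology: deficit grows with effort, resets on consumption.
        deficit = 0.0 if at_resource else min(deficit + 0.01, 1.0)

        if at_resource:
            break
        state = next_state

The multiplicative mapping is only one possible choice of hedonic modulation; the paper's actual formalisation of value, and its physiological-stability constraint, are given in the article itself.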

URL: https://journals.sagepub.com/doi/10.1177/1059712313486817
DOI: 10.1177/1059712313486817