A Novel Adaptive Weight Selection Algorithm for Multi-Objective Multi-Agent Reinforcement Learning
Host Publication: 2014 IEEE World Congress on Computational Intelligence (WCCI)
Authors: K. Van Moffaert, T. Brys, A. Chandra, L. Esterle, P. Lewis and A. Nowé
Publication Place: China
Publisher: IEEE
Publication Date: Jul. 2014
Number of Pages: 8
ISBN: 978-1-4799-6627-1
Abstract: To solve multi-objective problems, multiple reward signals are often scalarized into a single value and further processed using established single-objective problem-solving techniques. While the field of multi-objective optimization has made many advances in applying scalarization techniques to obtain good solution trade-offs, the utility of these techniques in the multi-objective multi-agent learning domain has not yet been thoroughly investigated. Agents learn the value of their decisions by linearly scalarizing their reward signals at the local level, and acceptable system-wide behaviour results. However, the non-linear relationship between the weighting parameters of the algorithm and the learned policy makes the discovery of system-wide trade-offs time-consuming.
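As a minimal illustration of the linear scalarization described above (our own sketch, not the authors' implementation; all names and parameter values here are assumptions), a vector-valued reward is collapsed into a scalar via a weight vector, after which a standard single-objective learner such as tabular Q-learning applies:

```python
import numpy as np

def scalarize(reward_vector, weights):
    """Linearly scalarize a multi-objective reward; weights are assumed to sum to 1."""
    return float(np.dot(weights, reward_vector))

def q_update(Q, s, a, r_vec, s_next, weights, alpha=0.1, gamma=0.95):
    """Ordinary tabular Q-learning update on the scalarized reward signal."""
    r = scalarize(r_vec, weights)
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

# Hypothetical usage: two objectives, weight 0.7 on the first.
Q = np.zeros((10, 4))  # 10 states, 4 actions
q_update(Q, s=0, a=1, r_vec=np.array([1.0, -0.5]),
         s_next=3, weights=np.array([0.7, 0.3]))
```

Because the mapping from weights to learned policies is non-linear, sweeping such weight vectors uniformly need not cover the trade-off surface evenly, which motivates the adaptive weight selection studied in the paper.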
Our first contribution is a thorough analysis of well-known scalarization schemes within the multi-objective multi-agent reinforcement learning setting. The analysed approaches intelligently explore the weight space in order to find a wider range of system trade-offs. As our second contribution, we propose a novel adaptive weight algorithm that interacts with the underlying local multi-objective solvers and allows for a better coverage of the Pareto front. Our third contribution is the experimental validation of our approach by learning bi-objective policies in self-organising smart camera networks. We note that our algorithm (i) explores the objective space faster on many problem instances, (ii) obtains solutions with a larger hypervolume, and (iii) achieves a greater spread in the objective space.
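The hypervolume cited in (ii) measures the region of objective space dominated by a solution set relative to a reference point; a larger value indicates a better Pareto-front approximation. A minimal sketch for two maximization objectives (our own illustrative code, not the paper's evaluation pipeline):

```python
def hypervolume_2d(points, ref):
    """Hypervolume of a 2-D maximization front with respect to reference point `ref`."""
    # Keep points that strictly dominate the reference; sweep by first objective, descending.
    pts = sorted((p for p in points if p[0] > ref[0] and p[1] > ref[1]),
                 key=lambda p: p[0], reverse=True)
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:
        if y > prev_y:  # dominated points add no new area
            hv += (x - ref[0]) * (y - prev_y)
            prev_y = y
    return hv

# Hypothetical bi-objective front, as in the smart camera experiments' setting.
front = [(0.9, 0.2), (0.6, 0.6), (0.3, 0.8)]
print(hypervolume_2d(front, ref=(0.0, 0.0)))  # 0.48
```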