Self-Organizing Synchronicity and Desynchronicity using Reinforcement Learning
Host Publication: International Conference on Agents and Artificial Intelligence
Authors: M. Emilov Mihaylov, Y. Le Borgne, K. Tuyls and A. Nowé
Publication Year: 2011
Number of Pages: 10
Abstract: We present a self-organizing reinforcement learning (RL) approach for coordinating the wake-up cycles of nodes in a wireless sensor network in a decentralized manner. To the best of our knowledge, we are the first to demonstrate how global synchronicity and desynchronicity can emerge through local interactions alone, without the need for a central mediator or any form of explicit coordination. We apply this RL approach to wireless sensor nodes arranged in different topologies and study how agents, starting with a random policy, are able to self-adapt their behavior based only on their interactions with neighboring nodes. Each agent independently learns to which nodes it should synchronize in order to improve message throughput and, at the same time, with which nodes it should desynchronize in order to reduce communication interference. The obtained results show how simple and computationally bounded sensor nodes are able to coordinate their wake-up cycles in a distributed way and thereby improve the global system performance through (de)synchronicity.
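To give a flavor of the kind of decentralized learning the abstract describes, the following is a minimal sketch, not the paper's actual algorithm. It assumes a hypothetical WakeUpAgent that keeps one learned value per wake-up slot in a frame, picks a slot epsilon-greedily, and receives a toy reward: positive for waking up together with its intended receiver (synchronicity) and negative for sharing a slot with an interfering neighbor (desynchronicity). All names and parameters (num_slots, alpha, epsilon, the reward shaping) are illustrative assumptions.

```python
import random


class WakeUpAgent:
    """Illustrative node that learns in which slot of a frame to wake up.

    This is a hypothetical sketch of decentralized slot learning, not the
    algorithm from the paper; num_slots, alpha and epsilon are assumed.
    """

    def __init__(self, num_slots=10, alpha=0.1, epsilon=0.1):
        self.num_slots = num_slots
        self.alpha = alpha          # learning rate
        self.epsilon = epsilon      # exploration rate
        # One value per wake-up slot; all-zero values give a random initial policy.
        self.values = [0.0] * num_slots

    def choose_slot(self):
        """Epsilon-greedy choice of the slot to wake up in this frame."""
        if random.random() < self.epsilon:
            return random.randrange(self.num_slots)
        best = max(self.values)
        return random.choice([s for s, v in enumerate(self.values) if v == best])

    def update(self, slot, reward):
        """Move the chosen slot's value towards the observed reward."""
        self.values[slot] += self.alpha * (reward - self.values[slot])


def toy_reward(slot, receiver_slot, interferer_slots):
    """Assumed reward: +1 for matching the receiver's slot, -1 per shared
    slot with an interfering neighbor."""
    r = 0.0
    if slot == receiver_slot:
        r += 1.0
    if slot in interferer_slots:
        r -= 1.0
    return r


if __name__ == "__main__":
    agent = WakeUpAgent()
    # The node only observes local outcomes: here, a fixed receiver slot and
    # two interfering neighbors, purely for illustration.
    for _ in range(1000):
        slot = agent.choose_slot()
        agent.update(slot, toy_reward(slot, receiver_slot=3, interferer_slots={5, 7}))
    print("learned slot values:", [round(v, 2) for v in agent.values])
```

Run repeatedly, the agent's values concentrate on the receiver's slot while avoiding the interferers' slots, which mirrors, in a highly simplified way, the local synchronize/desynchronize trade-off the abstract describes.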