Neuro-Evolution for Multi-Agent Policy Transfer in RoboCup Keep-Away

Didi, Sabre and Nitschke, Geoff (2016) Neuro-Evolution for Multi-Agent Policy Transfer in RoboCup Keep-Away, Proceedings of International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2016), Singapore, 1281-1282, ACM.


Abstract

An objective of transfer learning is to improve and speed up learning on target tasks after training on different but related source tasks. This research is a comparative study of Neuro-Evolution (NE) methods for transferring evolved multi-agent policies (behaviors) between multi-agent tasks of varying complexity. The efficacy of five variants of two NE methods is compared for multi-agent policy transfer. The variants include the original methods (search directed by a fitness function), behavioral and genotypic diversity based search that replaces objective based search (fitness functions), and hybrid approaches combining objective based search with behavioral and genotypic diversity maintenance. These variants are tested to ascertain an appropriate method for boosting the task performance of transferred multi-agent behaviors. Results indicate that an indirect encoding NE method hybridizing objective based search with behavioral diversity maintenance yields significantly improved task performance for policy transfer between multi-agent tasks of increasing complexity. Comparatively, NE methods not using behavioral diversity maintenance to direct policy search performed relatively poorly in terms of efficiency (evolution times) and quality of solutions in target tasks.
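The hybrid search described above blends an objective score with a behavioral diversity (novelty) score when ranking candidate policies. The sketch below is a minimal illustration of that idea, not the paper's implementation: it assumes each individual has a scalar fitness and a behavior descriptor vector, computes novelty as the mean distance to the k nearest neighbors in behavior space, and returns a weighted blend of the two normalized scores. The function names and the 50/50 default weighting are illustrative assumptions.

```python
import math

def novelty(behaviors, i, k=3):
    """Mean Euclidean distance from individual i to its k nearest
    neighbours in behaviour-descriptor space (illustrative sketch)."""
    dists = sorted(
        math.dist(behaviors[i], b)
        for j, b in enumerate(behaviors)
        if j != i
    )
    return sum(dists[:k]) / min(k, len(dists))

def hybrid_scores(fitnesses, behaviors, weight=0.5, k=3):
    """Blend normalised objective fitness with normalised novelty.

    weight = 0 gives pure objective-based search; weight = 1 gives
    pure behavioural-diversity search; intermediate values give the
    hybrid ranking.
    """
    nov = [novelty(behaviors, i, k) for i in range(len(behaviors))]

    def norm(xs):
        lo, hi = min(xs), max(xs)
        span = (hi - lo) or 1.0
        return [(x - lo) / span for x in xs]

    f_n, n_n = norm(fitnesses), norm(nov)
    return [(1 - weight) * f + weight * n for f, n in zip(f_n, n_n)]

# Tiny example population: the last individual is both the fittest
# and the most behaviourally distinct, so it ranks highest.
fits = [1.0, 2.0, 3.0, 4.0]
behs = [(0.0, 0.0), (0.1, 0.0), (0.2, 0.0), (5.0, 5.0)]
scores = hybrid_scores(fits, behs)
```

Selection in the evolutionary loop would then rank individuals by `scores` instead of raw fitness, which is one common way to maintain behavioral diversity alongside objective pressure.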

Item Type: Conference poster
Subjects: Computing methodologies > Artificial intelligence
Date Deposited: 23 Nov 2017
Last Modified: 10 Oct 2019 15:32
URI: http://pubs.cs.uct.ac.za/id/eprint/1193
