Complex Learning
In the field of multi-agent systems (MAS), a well-known challenge faced by practitioners is the exploration-exploitation dilemma. This dilemma arises because gathering information (i.e., exploration) and using it (i.e., exploitation) tend to be mutually exclusive activities. In a collective learning scenario, where a system is tasked with learning the true state of its environment, a strongly exploration-biased system would take an excessive amount of time to reach a final consensus about its environment. Conversely, a strongly exploitation-biased one may wrongly characterize its environment, especially when the agents have to contend with noise (Raoufi et al., 2021). One method of altering a system's exploration-exploitation balance is to change the agents' level of connectivity (i.e., the number of neighbors with which an agent communicates directly) (Kwa et al., 2022). Indeed, it has been shown in several scenarios that there exists an optimal level of connectivity that maximizes a system's performance (Mateo et al., 2019; Kwa et al., 2020, 2021). The work presented here builds on the research of Crosscombe and Lawry (2021) on a decentralized MAS carrying out a collective learning task and further explores the role that agent connectivity plays in regulating the system's transition from exploration to exploitation during such a task.
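To make the connectivity knob concrete, the following minimal Python sketch illustrates how varying k, the number of nearest neighbors each agent communicates with, changes how quickly a group of agents pools noisy observations of a scalar environmental state. This is an illustrative assumption, not the model of Crosscombe and Lawry (2021) or of the cited works: the 2D arena, the averaging update, and all parameter names and values are hypothetical.

```python
# Minimal sketch (not the cited authors' implementation): tuning connectivity by
# varying k, the number of nearest neighbours each agent communicates with.
# The arena, belief update, and all parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_agents = 50        # swarm size (assumed)
k = 5                # connectivity: neighbours per agent (the tuning knob)
true_state = 1.0     # scalar environmental state to be learned (assumed)
noise_std = 0.5      # observation noise level (assumed)

positions = rng.uniform(0.0, 10.0, size=(n_agents, 2))  # static agent positions (assumed)
beliefs = rng.normal(0.0, 1.0, size=n_agents)            # initial belief of each agent


def k_nearest_neighbours(positions, k):
    """Return, for each agent, the indices of its k nearest neighbours."""
    diffs = positions[:, None, :] - positions[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    np.fill_diagonal(dists, np.inf)          # exclude self from the neighbourhood
    return np.argsort(dists, axis=1)[:, :k]


neighbours = k_nearest_neighbours(positions, k)

for step in range(100):
    # Exploration: each agent draws a noisy observation of the true state.
    observations = true_state + rng.normal(0.0, noise_std, size=n_agents)
    # Exploitation: each agent pools its belief with those of its k neighbours.
    pooled = (beliefs + beliefs[neighbours].sum(axis=1)) / (k + 1)
    # Convex blend of social pooling and private observation (assumed weights).
    beliefs = 0.9 * pooled + 0.1 * observations

print(f"k = {k}: mean belief = {beliefs.mean():.3f}, spread = {beliefs.std():.3f}")
```

In this sketch, increasing k shrinks the spread of beliefs more quickly, mirroring the shift toward exploitation described above, while a small k keeps beliefs more responsive to fresh, noisy observations, i.e., more exploratory.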