Andreagiovanni Reina
Research Associate in Collective Robotics, University of Sheffield, UK

My research interests are collective decision making, swarm robotics and, more generally, distributed cognition in natural and artificial systems.

Previously, I also worked on path planning.


Below is a list of the main (present and past) projects I have worked on.


Value-sensitive decision making


Value-sensitive decision-making is an essential task for organisms at all levels of biological complexity, from cells to insect colonies to primate brains. It consists of choosing an option among a set of alternatives and being rewarded according to the quality of the chosen option. We study collective consensus decisions, in which a group must reach a consensus without any leader: all members of the group contribute to a single collective decision. Our goal is to better understand decisions in different natural systems and, possibly, to identify unifying theories behind these processes.

Value-sensitive collective decisions are also interesting from an engineering point of view: we employ our understanding of collective decision making to design distributed systems such as robot swarms. The video below shows a swarm of 150 robots making a value-sensitive decision. The robots follow a set of rules inspired by the behaviour of house-hunting honeybees. See the DiODe project or the papers cited below for more info.
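To give a flavour of this class of models, here is a minimal sketch of a two-option, population-level commitment model with discovery, abandonment, recruitment and cross-inhibition. The functional forms and parameter choices are illustrative only, not taken from our papers: discovery and recruitment rates grow with an option's quality v, abandonment decays with it, and the two committed populations suppress each other.

```python
def simulate(v1, v2, steps=20000, dt=0.01):
    """Euler-integrate a two-option commitment model.

    x1, x2 are the fractions of the group committed to options 1 and 2;
    the remainder u = 1 - x1 - x2 is uncommitted. All rates here are
    illustrative choices, proportional (or inversely proportional) to
    the option qualities v1, v2.
    """
    x1 = x2 = 0.0
    for _ in range(steps):
        u = 1.0 - x1 - x2  # uncommitted fraction
        # discovery + recruitment (quality-proportional),
        # abandonment (inversely proportional), cross-inhibition
        dx1 = v1 * u - x1 / v1 + v1 * u * x1 - v2 * x1 * x2
        dx2 = v2 * u - x2 / v2 + v2 * u * x2 - v1 * x1 * x2
        x1 += dt * dx1
        x2 += dt * dx2
    return x1, x2
```

With unequal qualities (e.g. `simulate(5.0, 3.0)`), the group converges to a large majority committed to the better option: the value-sensitive outcome the section describes.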


[Video: value-sensitive house-hunting decision by the robot swarm]


A design pattern for decentralised decision making


Our world is ever more connected, and the number of intelligent (smart) devices that interact with each other is growing exponentially. Large-scale distributed systems, also called swarm systems, are becoming pervasive in several domains. Such systems are composed of a large number of autonomous agents (e.g. robots or smart devices) with typically limited capabilities, which coordinate and cooperate to perform a common task. A peculiar characteristic of such systems is the absence of any central leader. The underlying principle that allows a decentralised swarm to operate coherently is called self-organisation. A self-organising process results in an emergent swarm behaviour that is often difficult to design and predict, and that may easily slip out of the engineer's control. While the engineering of decentralised swarm systems may be very arduous and challenging, diverse domains would benefit from a reliable implementation of such systems. Some examples are nano-robots for surgical operations, cooperative spectrum allocation in cognitive radio networks, and collective exploration of remote or inaccessible locations, such as other planets or disaster areas.

While a general-purpose design methodology for distributed systems may be unworkable, we believe that a viable solution is the definition of design patterns for specific classes of problems. The underlying idea is to put to use the knowledge encoded in well-understood macroscopic models of distributed systems to formalise the implementation steps of engineered systems. A design pattern offers a set of models to analyse and predict the system behaviour at the macroscopic level, and a set of rules and guidelines for a multi-agent implementation that quantitatively links macroscopic predictions to the multi-agent outcome. The main novel contribution consists in providing the engineer with a top-down implementation methodology that allows full control over the system dynamics and accurate performance guarantees. We propose a design pattern for collective decision making inspired by the models conceived to describe the nest-site selection process in honeybee swarms.


[Figures: micro-macro link, bifurcation diagram (2D and 3D), and phase portrait]

The video above shows an implementation of the design pattern on a swarm of simulated e-puck robots. The robot swarm is asked to select (and exploit) the target area (black spot) that is the closest to the home area (gray spot). Each robot has limited sensing and communication capabilities, and the task is challenging due to the absence of a central leader coordinating the operation. Exploiting the design pattern, the designer is provided with complete control over, and accurate predictions of, the swarm dynamics. The agreement between the macroscopic dynamics and the multi-agent implementation is shown in the figures above: the density maps and the histograms report multi-agent simulation results, while the bifurcation lines and phase portrait describe the macroscopic dynamics (green unstable, blue stable). See the papers cited below for more info.
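The core of the micro-macro link can be sketched in a few lines: individual-level transition probabilities are derived from the rates of a macroscopic model (rate times timestep length), so that the swarm's aggregate behaviour tracks the model's predictions. All site qualities and rate constants below are illustrative, not taken from our papers.

```python
import random

QUALITY = {"A": 0.9, "B": 0.5}  # illustrative site qualities
# per-timestep probabilities = macroscopic rates x timestep length
P_DISCOVER, P_ABANDON, P_RECRUIT, P_INHIBIT = 0.05, 0.01, 0.3, 0.3

def step(states, rng):
    """One synchronous update of every agent ('U' = uncommitted)."""
    n = len(states)
    frac = {s: states.count(s) / n for s in ("A", "B")}
    out = []
    for s in states:
        r = rng.random()
        if s == "U":
            # discovery and recruitment, both proportional to quality
            for site in ("A", "B"):
                p = QUALITY[site] * (P_DISCOVER + P_RECRUIT * frac[site])
                if r < p:
                    s = site
                    break
                r -= p
        else:
            other = "B" if s == "A" else "A"
            # abandonment (higher for poor sites) plus cross-inhibition
            if r < P_ABANDON * (1 - QUALITY[s]) + P_INHIBIT * frac[other]:
                s = "U"
        out.append(s)
    return out

rng = random.Random(42)
states = ["U"] * 500
for _ in range(400):
    states = step(states, rng)
frac_a = states.count("A") / len(states)
```

Because the microscopic probabilities are derived from the macroscopic rates, the committed fractions of this agent-based simulation converge to the equilibria that the corresponding mean-field model predicts (here, a large majority at the better site A).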


zePPeLIN: Distributed Path Planning Using an Overhead Camera Network


We study a distributed approach to path planning. We focus on holonomic kinematic motion in cluttered 2D areas. The problem consists in defining the precise sequence of roto-translations of a rigid object of arbitrary shape that has to be transported from an initial to a final location through a large, cluttered environment. Our planning system is implemented as a swarm of flying robots that are initially deployed in the environment and take static positions at the ceiling. Each robot is equipped with a camera and only sees a portion of the area below. Each robot acts as a local planner: it calculates the part of the path relative to the area it sees, and exchanges information with its neighbours through a wireless connection. In this way, the robot swarm realises a cooperative distributed calculation of the path. The path is then communicated to ground robots, which move the object.

We introduce a number of strategies to improve the system's performance in terms of scalability, resource efficiency, and robustness to alignment errors in the robot camera network. We report extensive simulation results that show the validity of our approach, considering a variety of object shapes and environments.

We also validated the proposed approach in a set of experiments in a real setup. The holonomic object moving on the ground is implemented by two non-holonomic robots, the e-pucks, interconnected by a rigid structure. In this way, they form an object with a relatively large shape that is able to rotate and move in any direction. The size of the moving area is 33 m². The multi-robot system at the ceiling is implemented with a set of 4 cameras connected to different computers. Each camera is controlled by an independent process, which cooperates and communicates with the other processes, locally plans the path, and then directs the navigation of the e-puck system through the ground area under its local field of view.

The videos below show an example of path planning and movement execution (the camera logo indicates the camera that is currently in charge of driving the robots).
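The idea of a cooperative distributed calculation of the path can be illustrated with a toy wavefront planner: each "camera" repeatedly relaxes the distance-to-goal values of the grid cells in its own field of view, reading values across region boundaries, until the set of local planners converges to the same distance map a centralised planner would produce. This is a deliberately simplified, hypothetical sketch; the actual zePPeLIN system plans roto-translations of an arbitrary rigid shape, not a point on a grid.

```python
INF = float("inf")

def local_relax(dist, free, cells):
    """One pass of a single local planner: update only the cells this
    camera owns, reading neighbour values wherever they live (its own
    region or the boundary of another camera's region)."""
    changed = False
    rows, cols = len(free), len(free[0])
    for r, c in cells:
        if not free[r][c]:
            continue  # obstacle cell: never reachable
        best = dist[r][c]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                best = min(best, dist[nr][nc] + 1)
        if best < dist[r][c]:
            dist[r][c] = best
            changed = True
    return changed

def plan(free, goal, regions):
    """Iterate the local planners until none of them changes anything."""
    rows, cols = len(free), len(free[0])
    dist = [[INF] * cols for _ in range(rows)]
    dist[goal[0]][goal[1]] = 0
    changed = True
    while changed:
        changed = False
        for cells in regions:
            if local_relax(dist, free, cells):
                changed = True
    return dist

# two "cameras", each seeing half of a small obstacle-free grid
free = [[True] * 4 for _ in range(3)]
left = [(r, c) for r in range(3) for c in range(2)]
right = [(r, c) for r in range(3) for c in range(2, 4)]
dist = plan(free, goal=(0, 0), regions=[left, right])
# dist[2][3] == 5: the two local planners jointly recover the
# Manhattan distance from cell (2, 3) to the goal
```

A ground robot would then descend this distance field greedily to reach the goal; the same relaxation converges with any number of regions and any obstacle layout.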




Technology and tools for improving Swarm Robotics experiments


On the one hand, robot swarms promise characteristics such as scalability, robustness, adaptivity and low cost; on the other hand, they are complex to analyse, model and design because of the large number of nonlinear interactions among the robots. Mathematical and statistical tools to describe robot swarms are still under development, and a theoretical methodology to forecast the swarm dynamics given the individual robot behaviour is missing. As a consequence, it is common practice to resort to empirical studies to assess the performance of robot swarms.

IRIDIA's Arena Tracking System. The main purpose of the tracking system we implemented at the IRIDIA Lab is to provide a tool that allows a researcher to record and control the state of an experiment throughout its complete execution. Beyond experimental analysis, the tracking system has also been extended to enable augmented reality for robots.

  • A. Stranieri, A.E. Turgut, G. Francesca, A. Reina, M. Dorigo, M. Birattari. IRIDIA's Arena Tracking System. Technical Report TR/IRIDIA/2011-020, IRIDIA, Université Libre de Bruxelles, Brussels, Belgium, 2013.


Augmented reality for robot swarms. Experiments may be run either in simulation or with physical hardware. The former are easier to run and less time consuming than the latter. However, when experiments are performed only in simulation, there is no guarantee that the estimated performance matches the one measured on real hardware. In contrast, experiments with robots demonstrate that the investigated system functions on real devices, which involve challenging aspects intrinsic to reality and out of the designer's control, such as noise and device failures. However, experimentation with physical hardware is expensive, both in terms of money and time. In addition, hardware modifications are impractical and often impossible to realise when time and money are limited. We believe that a viable solution to these issues is to perform hybrid experiments that combine real robots with simulation. This work proposes a novel technology to endow a robot swarm with virtual sensors and actuators, immersing the robots in an augmented-reality environment.

The video below showcases the functionalities of ARK, Augmented Reality for Kilobots, through three demos. In Demo A, ARK automatically assigns unique IDs to a swarm of 100 Kilobots. Demo B shows the possibility of employing ARK for the automatic positioning of 50 Kilobots, one of the typical preliminary operations in swarm robotics experiments. These operations are typically tedious and time consuming when done manually; ARK saves researchers' time and makes operating large swarms considerably easier. Additionally, automating the operation gives more accurate control of the robots' start positions and removes undesired biases in comparative experiments. Demo C shows a simple foraging scenario where 50 Kilobots collect material from a source location and deposit it at a destination. The robots are programmed to pick up one virtual flower inside the source area (green flower field), carry it to the destination (yellow nest), and deposit it there. When performing actions in the virtual environment, a robot signals by lighting its LED in blue. When picking up a virtual flower from the source, the robot reduces the source's size for the rest of the robots (by reducing the area's diameter by 1 cm). Similarly, when a robot deposits a flower at the destination, the area's diameter increases by 1 cm. This demo shows that the robots can perceive (and navigate) a virtual gradient, can modify the virtual environment by moving material from one location to another, and can autonomously decide when to change the virtual environment that they sense (either the source or the destination).
More information is available at: http://diode.group.shef.ac.uk/kilobots/index.php/ARK
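The bookkeeping behind Demo C's virtual environment can be sketched as follows. This is a hypothetical re-implementation for illustration, not ARK's actual code; in ARK, the overhead system tracks the robots and applies these updates centrally.

```python
class VirtualArea:
    """A circular virtual area (source or nest) whose diameter changes
    as robots move material in and out of it."""

    def __init__(self, cx, cy, diameter_m):
        self.cx, self.cy, self.diameter_m = cx, cy, diameter_m

    def contains(self, x, y):
        return (x - self.cx) ** 2 + (y - self.cy) ** 2 <= (self.diameter_m / 2) ** 2

STEP_M = 0.01  # Demo C's rule: the diameter changes by 1 cm per flower

def pick_up(source, robot_xy):
    """A robot inside the source picks one virtual flower, shrinking the
    source for the rest of the swarm; returns True on success."""
    if source.diameter_m >= STEP_M and source.contains(*robot_xy):
        source.diameter_m -= STEP_M
        return True
    return False

def deposit(destination, robot_xy):
    """Depositing a flower at the destination grows it by the same step."""
    if destination.contains(*robot_xy):
        destination.diameter_m += STEP_M
        return True
    return False
```

Because every pick-up shrinks the source and every deposit grows the destination, the virtual environment itself records the progress of the foraging task, which is what lets the robots autonomously decide when to change what they sense.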




The video below illustrates the proposed virtual sensing technology through an experiment involving 15 e-pucks. In this experiment, the robots are equipped with a virtual pollutant sensor. The pollutant is simulated via the ARGoS simulator. The sensor returns a binary value indicating whether pollutant is present at the robot's location. In our experiment, we assume that the pollutant is present within a diffusion cone, shown in the video as a green overlay area. The robots move randomly within a hexagonal arena and, when a robot perceives the pollutant (i.e. it lies within the diffusion cone), it stops and lights up its red LEDs with probability P=0.3 per timestep.
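The binary virtual sensor amounts to a point-in-sector test. In this sketch, the diffusion cone is modelled as a circular sector with an apex (the source), an axis direction and a half-angle; the geometry, parameter values and data layout are illustrative assumptions, since the real pollutant model lives in the ARGoS simulation.

```python
import math
import random

def senses_pollutant(x, y, apex, direction_rad, half_angle_rad, max_range):
    """Binary virtual sensor: True iff (x, y) lies inside the diffusion
    cone, modelled as a circular sector rooted at the source."""
    dx, dy = x - apex[0], y - apex[1]
    dist = math.hypot(dx, dy)
    if dist == 0.0:
        return True  # standing on the source itself
    if dist > max_range:
        return False
    # signed angular offset between the cone axis and the robot's bearing
    bearing = math.atan2(dy, dx)
    offset = (bearing - direction_rad + math.pi) % (2 * math.pi) - math.pi
    return abs(offset) <= half_angle_rad

def control_step(robot, rng, p_stop=0.3):
    """Per-timestep rule from the experiment: a robot that perceives the
    pollutant stops and lights its red LEDs with probability 0.3."""
    if not robot["stopped"] and senses_pollutant(
            robot["x"], robot["y"], (0.0, 0.0), 0.0, math.radians(30), 2.0):
        if rng.random() < p_stop:
            robot["stopped"] = True
            robot["led"] = "red"
```

A robot inside the cone therefore keeps moving for a geometrically distributed number of timesteps before stopping, which is what produces the gradual "freezing" of the swarm inside the green overlay in the video.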





Automatic robot placement. Building on the virtual sensing technology, we developed a tool for automatic robot placement. This tool eases the setup of experiments by automatically navigating the robots through the environment to their user-defined final destinations. This functionality is particularly useful when conducting extensive real-robot experiments that require a controlled initial formation of the robots. An alternative application of this tool is the generation of artistic coordinated motion of robots.
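A toy version of such a placement tool: greedily assign each robot to the nearest still-free goal, then move every robot a bounded step towards its goal at each timestep. This is a hypothetical sketch (the real tool drives the robots through the virtual sensing infrastructure, and an optimal assignment, e.g. the Hungarian algorithm, could replace the greedy one).

```python
import math

def assign_goals(robots, goals):
    """Greedy nearest-goal assignment: robot i -> closest unclaimed goal."""
    remaining = list(goals)
    assignment = {}
    for i, (rx, ry) in enumerate(robots):
        g = min(remaining, key=lambda p: math.hypot(p[0] - rx, p[1] - ry))
        remaining.remove(g)
        assignment[i] = g
    return assignment

def step_towards(pos, goal, max_step=0.05):
    """Move at most max_step metres straight towards the goal."""
    dx, dy = goal[0] - pos[0], goal[1] - pos[1]
    d = math.hypot(dx, dy)
    if d <= max_step:
        return goal
    return (pos[0] + max_step * dx / d, pos[1] + max_step * dy / d)

robots = [(0.0, 0.0), (1.0, 0.0)]
goals = [(1.0, 1.0), (0.0, 1.0)]
assignment = assign_goals(robots, goals)
for _ in range(40):
    robots = [step_towards(robots[i], assignment[i]) for i in range(len(robots))]
```

After the loop, both robots sit exactly on their assigned goals, giving the controlled initial formation the experiments require.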



Wearable device for human-robot interaction. We designed and implemented a wearable device that allows an e-puck robot to localise a human. This tool is a pair of e-Geta, designed after the Japanese geta footwear. Each e-Geta has an LED ring (red for the left foot and green for the right foot) that glows when the human leans on the foot, activating the switch under the e-Geta's heel. The height of the LED ring is similar to that of the e-puck LED ring, so that the same camera calibration parameters may be used to localise both robots and humans through the e-Geta. During the movement of the foot, the e-Geta switches off its LEDs to reduce unreliable readings that may be caused by vertical shifting. We tested this tool in a human-swarm interaction case study, in which the innovative idea was to let the robot swarm guide the human. The underlying idea is to have robots capable of perceiving environmental features that a human cannot. In the case of hazardous features, the swarm has the role of escorting the human, preventing him/her from stepping into dangerous areas. In our work, the swarm encircles the human and signals the presence of nearby forbidden areas, both through its LEDs and by physically obstructing movement.

[Photos: the e-Geta wearable device and the human-swarm interaction experiment at IRIDIA]
