Autonomous Agent Learning Lab

Gary Parker, Jim O'Connor, and ConnColl Students/Alumni


The central theme of research in this lab is autonomous agent learning. The agents are robots, models of robots, and interactive video game players. The learning is usually a form of evolutionary computation and almost always some type of computational intelligence. The agents are autonomous in that they can operate on their own; the learning either takes place before operation or happens during operation using learning systems that run offline from the agent. Most of the learning is for control programs, although some is for the morphology of the robot or for control and morphology in combination. The following is a list of current or recent research topics.

Colony Robotics and Robot Construction, Gary Parker, Jim O'Connor, and Aaron Saporito '25. An 8x8 foot area has been set aside in the Robotics Lab for colony robotics research. We are working on developing a power supply for the robots, establishing communication links from the learning system computer to the robot, and implementing an overhead camera for colony observation. Several robots are being developed and constructed, including bipeds, quadrupeds, hexapods, mini-robots, two-wheel balancing robots, autonomous sailboats, and flying robots.

Genetic Algorithms, Gary Parker and Jim O'Connor. This research involves studying the genetic algorithm methods of selection, crossover, and mutation to improve results for categories of problem sets.
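
The three GA operators named above fit together in a short loop. The sketch below is illustrative only (a toy "OneMax" bit-counting problem with assumed parameter values, not the lab's problem sets): tournament selection picks parents, single-point crossover recombines them, and per-bit mutation perturbs the child.

```python
import random

# Minimal generational GA sketch. All parameters here are assumptions
# chosen for the toy problem, not values from the lab's research.
GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 20, 30, 60, 0.02

def fitness(genome):
    # Toy objective ("OneMax"): maximize the number of 1-bits.
    return sum(genome)

def tournament_select(pop, k=3):
    # Selection: the fittest of k randomly sampled individuals wins.
    return max(random.sample(pop, k), key=fitness)

def crossover(a, b):
    # Single-point crossover: splice a prefix of one parent onto a
    # suffix of the other.
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

def mutate(genome):
    # Per-bit mutation: flip each bit with small probability.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in genome]

def run_ga(seed=0):
    random.seed(seed)
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop = [mutate(crossover(tournament_select(pop),
                                tournament_select(pop)))
               for _ in range(POP_SIZE)]
    return max(pop, key=fitness)
```

Studying which selection, crossover, and mutation variants to plug into a loop like this, and for which problem categories, is the substance of the research described above.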

Speciation, Gary Parker, Thomas B. Edwards '18, and Jay Nash '26. This research involves the development of a genetic algorithm (GA - an algorithm that mimics evolutionary processes to solve difficult computing problems) that replicates speciation.
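
One common way a GA can be pushed toward forming species is fitness sharing; the sketch below shows that mechanism only as background (it is an assumption for illustration, not necessarily the method developed in this project). Individuals crowding the same niche divide up their raw fitness, so genetically distinct groups can coexist rather than one genotype taking over the population.

```python
# Fitness-sharing sketch for bit-string genomes. The sharing function
# and sigma value are illustrative assumptions.

def hamming(a, b):
    # Genetic distance between two bit-string individuals.
    return sum(x != y for x, y in zip(a, b))

def shared_fitness(individual, population, raw_fitness, sigma=3):
    # Niche count: neighbours within distance sigma contribute to the
    # crowding penalty (triangular sharing function).
    niche = sum(max(0.0, 1.0 - hamming(individual, other) / sigma)
                for other in population)
    return raw_fitness(individual) / niche
```

With two identical individuals and one distant one, the duplicated genotype's shared fitness is halved while the singleton keeps its full score, which is the pressure that lets separate species survive.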

Xpilot-AI, Gary Parker and Jim O'Connor. Xpilot is an online computer game that models space combatants in a 2D environment. We have determined how to access the client program so that agents with artificial intelligence can play the game (Xpilot-AI). With proper configuration, these agents look and act just like those controlled by human players. The long-term goal is to have them continually learn as they join games in progress and compete against human players.

Xpilot-AI and Neural Network Learning, Gary Parker, Jim O'Connor, and Nick Lorentzen '24. This research involves the use of NEAT (NeuroEvolution of Augmenting Topologies) to learn controllers for Xpilot-AI agents maneuvering through a race track.

Punctuated Anytime Learning in Evolutionary Robotics, Gary Parker, William Tarimo, Jim O'Connor, Ryan Zgombic '22, and Annika Hoag '26. This research uses periodic trials on the actual robot to evaluate the control programs learned on the robot model by evolutionary computation, improving the learning process by altering the learning algorithm (Fitness Biasing) or changing the model (Co-Evolving Model Parameters).
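
The structure of that loop can be sketched in a few lines. Everything below is a toy stand-in (the fitness functions, bias rule, and parameters are assumptions, not the lab's code): evolution runs on a cheap model, and every PERIOD generations the current best individual is tried on the "real robot", whose result is used to rescale the model's scores toward observed reality.

```python
import random

GENOME_LEN, POP, PERIOD, GENERATIONS = 10, 20, 5, 20

def model_fitness(g):
    # Stand-in for a cheap simulated evaluation on the robot model.
    return sum(g)

def robot_fitness(g):
    # Stand-in for a periodic trial on the physical robot, which here
    # systematically disagrees with the model.
    return 0.8 * sum(g)

def step(pop, fit):
    # One GA generation: tournament selection plus mutation (crossover
    # omitted to keep the sketch short).
    def tourney():
        return max(random.sample(pop, 3), key=fit)
    def mutate(g):
        return [b ^ 1 if random.random() < 0.05 else b for b in g]
    return [mutate(list(tourney())) for _ in range(POP)]

def punctuated_anytime_learning(seed=0):
    random.seed(seed)
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(POP)]
    bias = 1.0
    for gen in range(GENERATIONS):
        if gen % PERIOD == 0:
            # Punctuation: test the current best on the real robot and
            # bias the model's fitness scores toward what was observed.
            best = max(pop, key=model_fitness)
            bias = robot_fitness(best) / max(model_fitness(best), 1e-9)
        pop = step(pop, lambda g: bias * model_fitness(g))
    return max(pop, key=robot_fitness)
```

In this toy the bias is a single scalar, so it mainly corrects the reported scores; the actual Fitness Biasing and Co-Evolving Model Parameters methods adjust the learning algorithm or the model itself, as described above.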

The Co-Evolution of a Team of Cooperative Autonomous Agents, Gary Parker. In this research we are using evolutionary computation and punctuated anytime learning to evolve the control programs for the individuals in a team of robots or Xpilot-AI autonomous agents. They have a common task (such as pushing a box into a corner or catching prey) and need to cooperate to successfully accomplish it.
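
A minimal sketch of the cooperative-coevolution idea, under assumptions of my own (the toy task, representative-partner scheme, and parameters are not the lab's): each team member has its own population, and an individual is scored by pairing it with the other population's current representative, so fitness exists only through cooperation.

```python
import random

GENES, POP, GENS = 8, 20, 40

def team_fitness(a, b):
    # Toy joint task: the team scores when the two agents cover
    # complementary roles (each covers slots the other leaves free).
    return sum(x != y for x, y in zip(a, b))

def mutate(g):
    return [b ^ 1 if random.random() < 0.05 else b for b in g]

def evolve(pop, partner):
    # One generation: each individual is evaluated with the partner
    # population's representative, then tournament selection + mutation.
    fit = lambda g: team_fitness(g, partner)
    return [mutate(max(random.sample(pop, 3), key=fit))
            for _ in range(POP)]

def coevolve(seed=0):
    random.seed(seed)
    new_ind = lambda: [random.randint(0, 1) for _ in range(GENES)]
    pop_a = [new_ind() for _ in range(POP)]
    pop_b = [new_ind() for _ in range(POP)]
    rep_a, rep_b = pop_a[0], pop_b[0]       # current representatives
    for _ in range(GENS):
        pop_a = evolve(pop_a, rep_b)        # A adapts to B's representative
        pop_b = evolve(pop_b, rep_a)        # B adapts to A's representative
        rep_a = max(pop_a, key=lambda g: team_fitness(g, rep_b))
        rep_b = max(pop_b, key=lambda g: team_fitness(rep_a, g))
    return rep_a, rep_b
```

Because neither agent can score alone, selection in each population is driven by how well its members mesh with the other team member, which is the essence of co-evolving a cooperative team.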

Hardware Implementation of Neural Networks, Gary Parker and Mohammad Khan '17. We are working on the implementation of a neural network on Arduino chips. Each chip will have a single neuron and the program needed to learn using backpropagation. We have used 3 chips to create a neural network that can compute the AND, OR, and XOR functions, and have applied it to robot control.
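
The three-neuron arrangement can be sketched in software; the layout below is an assumption for illustration (two hidden sigmoid neurons feeding one output neuron, each standing in for one chip), trained by plain backpropagation on XOR.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class Neuron:
    # One "chip": a sigmoid unit with its own weights and bias.
    def __init__(self, n_inputs):
        self.w = [random.uniform(-1, 1) for _ in range(n_inputs + 1)]
    def forward(self, inputs):
        self.inputs = inputs
        self.out = sigmoid(sum(w * x
                               for w, x in zip(self.w, inputs + [1.0])))
        return self.out

def train_xor(epochs=20000, lr=0.5, seed=1):
    random.seed(seed)
    h1, h2, out = Neuron(2), Neuron(2), Neuron(2)
    data = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0),
            ([1.0, 0.0], 1.0), ([1.0, 1.0], 0.0)]
    for _ in range(epochs):
        for x, target in data:
            a = [h1.forward(x), h2.forward(x)]
            y = out.forward(a)
            # Backpropagation: output delta, then hidden deltas
            # computed with the pre-update output weights.
            d_out = (y - target) * y * (1 - y)
            d_h = [d_out * out.w[i] * a[i] * (1 - a[i]) for i in range(2)]
            for n, d in [(out, d_out), (h1, d_h[0]), (h2, d_h[1])]:
                for i, xi in enumerate(n.inputs + [1.0]):
                    n.w[i] -= lr * d * xi
    def predict(x):
        return out.forward([h1.forward(x), h2.forward(x)])
    return predict
```

In hardware, the weight updates and the forward pass would be distributed across the chips, with each chip exchanging activations and deltas with its neighbors over a serial link rather than through shared memory.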

Autonomous Deep Learning Robot, Gary Parker and Mohammad Khan '17. We have been working to configure this robot with the intent of using the robot's 3D sensor to build a map of the environment, which will allow it to detect humans within the environment. In addition, we've been looking at training it on datasets for human face recognition.

Using Cyclic Genetic Algorithms to Generate Gaits for Legged Robots, Gary Parker, William Tarimo, and Manan Isak '24. In this project we use CGAs (a form of evolutionary computation) to learn walking patterns for eight-, six-, and four-legged robots. Learning takes place on a model of the robot with the new control programs downloaded to the actual robot for testing. The hexapod (six-legged) robots have two degrees of freedom per leg, whereas the eight- and four-legged robots have three degrees of freedom per leg.
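
What makes the chromosome "cyclic" is that its genes are executed in a repeating loop, so the evolved sequence becomes a continuous gait. The decoding below is a simplified illustration, not the lab's exact representation: each gene is an (activations, repetitions) pair for one two-degree-of-freedom leg, and the gene list repeats for as long as the robot walks.

```python
from itertools import cycle, islice

def decode_gait(chromosome, steps):
    """Expand a cyclic chromosome into a flat list of servo commands."""
    one_cycle = [act for act, reps in chromosome for _ in range(reps)]
    return list(islice(cycle(one_cycle), steps))

# Toy chromosome for one two-DOF leg: each activation is
# (vertical_servo, horizontal_servo) in {-1, 0, 1}. The durations and
# ordering here are made up for illustration.
leg_cycle = [((1, 0), 2),     # lift the leg for 2 time steps
             ((0, 1), 3),     # swing it forward for 3 steps
             ((-1, 0), 2),    # lower it for 2 steps
             ((0, -1), 3)]    # pull the body forward for 3 steps
```

The CGA evolves the activations and repetition counts; because the loop wraps around, crossover and mutation reshape the whole repeating cycle rather than a one-shot action sequence.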

The Co-Evolution of Robot Control and Morphology, Gary Parker. This research involves concurrently evolving the body and the mind of a robot. We are using LEGO Mindstorms for the evolution of full body robots and the ServoBot for the evolution of sensor morphology.

Using Cyclic Genetic Algorithms to Evolve Multi-Loop Control Programs, Gary Parker. In this research we are expanding the use of CGAs to evolve multi-loop programs. Previous versions were limited to a single loop.

Emergent Gaits Through the Co-Evolution of Leg Cycles, Gary Parker and Cam Angliss '22. Incremental learning used a genetic algorithm to coordinate previously learned leg cycles. In this research, we attempt to have gaits emerge during the co-evolution of the leg cycles.


Research Videos

Evolved ServoBot Gait
Evolved ServoBot Gait from Ground Level
Evolved Stiquito Gait (2x speed)
ServoBot with Capacitors Recharging
ServoBot Sensing Low Power and Navigating to Charger (2x)
Xpilot-AI Evolved Controller
Xpilot-AI Evolved Controller 2
ServoBot with Evolved Sensor Morphology/Control
ServoBot with Evolved Sensor Morphology/Control 2


General Areas of Research

Evolutionary Robotics
Adaptive Learning Systems for Autonomous Robot Control
Gait Generation for Multi-Legged Robots
Cyclic Genetic Algorithms
Punctuated Anytime Learning
Co-Evolving Cooperative Teams of Robots


