In recent years, massive improvements have been made in game-playing artificial intelligence. Tasks ranging from simple board games to complex multiplayer battle games have been solved with reinforcement learning approaches. Although reinforcement learning has become the most popular method for these tasks, approaches based on neuroevolution have also shown promising results. In this work, we investigate the performance of evolutionary algorithms on the popular console fighting game Super Smash Bros. Melee (SSBM).
Our agents were trained with two objectives in mind: offense and defense. We measure both objectives in terms of damage. The more damage an agent deals, the more offensive it is; likewise, agents that receive less damage are more defensive. By making use of DEAP's non-dominated sorting genetic algorithm (NSGA-II), we can optimize both objectives at once and evolve a diverse set of agents that implement varying degrees of offensive and defensive strategies. A few of these strategies can be seen below:
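The offense/defense trade-off is what NSGA-II's notion of Pareto dominance captures: an agent is kept on the first front if no other agent both deals more damage and receives less. As a rough illustration of that idea (with made-up fitness tuples, not our actual training code; DEAP's `tools.selNSGA2` implements the full sort plus crowding distance):

```python
# Minimal sketch of the Pareto-dominance test underlying NSGA-II's
# non-dominated sort. Fitness is (damage_dealt, damage_received):
# the first objective is maximized, the second minimized.
# The fitness values below are hypothetical.

def dominates(a, b):
    """True if fitness tuple a Pareto-dominates b."""
    dealt_a, recv_a = a
    dealt_b, recv_b = b
    no_worse = dealt_a >= dealt_b and recv_a <= recv_b
    strictly_better = dealt_a > dealt_b or recv_a < recv_b
    return no_worse and strictly_better

def pareto_front(population):
    """Return the fitnesses not dominated by any other member."""
    return [p for p in population
            if not any(dominates(q, p) for q in population if q != p)]

# (damage_dealt, damage_received) for five hypothetical agents
fitnesses = [(120, 80), (90, 40), (60, 20), (100, 70), (50, 50)]
print(pareto_front(fitnesses))
```

Because the result is a whole front rather than a single best score, one run yields a spectrum of agents, from glass-cannon offense to low-risk defense, instead of one compromise agent picked by a fixed weighting.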
Our agent plays Captain Falcon; the opponent is a level 9 Falco CPU.
Our most defensive agents do not exhibit complex defensive strategies like running away and shielding. Instead, the strategy consists of rolling behind the opponent and approaching with a safe attacking option. This results in a fighting style that both deals and receives low amounts of damage.
As the agents become more offensive, they choose options that deal more damage but put them at greater risk of receiving damage. This middle-ground strategy combines the safe attacking options seen in our defensive agents with the riskier close-ranged jabs of our most offensive agents.
Our most offensive agents opt for grabs and close-ranged jabs. While this strategy is the most likely to deal large amounts of damage, it also leaves the agent prone to receiving large amounts of damage.
Tested on: Ubuntu 14.04 LTS & macOS Sierra
- Download the current stable version of Dolphin for Ubuntu, or Dolphin 5.0 for macOS.
- Super Smash Bros. Melee (NTSC 1.02) ISO
- Python 3
- Python packages: DEAP, numpy
Before running, set up a pipe in Dolphin to control your agents: https://github.com/luckycharms14/MeleeAI_Dolphin. Turn cheats on, and make sure the netplay community settings are enabled.
Pull the repo and run `python3 -m p3` before opening Dolphin. Stop with `^C`.
My final report on this project can be found here
Thanks to https://github.com/spxtr/p3 for the memory watcher, and to https://github.com/luckycharms14/MeleeAI_Dolphin for help with setting up the Dolphin pipe configuration. Inspiration for this project comes from https://github.com/vladfi1/phillip.