RL Collision Avoidance provides a complete pipeline for developing, training, and deploying multi-robot collision avoidance policies. It offers Gym-compatible simulation scenarios in which agents learn collision-free navigation via reinforcement learning. Users can customize environment parameters, leverage GPU acceleration for faster training, and export learned policies. The framework also integrates with ROS for real-world testing, ships pre-trained models for immediate evaluation, and includes tools for visualizing agent trajectories and performance metrics.
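A minimal sketch of the Gym-style evaluation loop this implies. The environment id `MultiRobotCollisionAvoidance-v0` is hypothetical, and the classic four-tuple `step()` API is assumed; the repository's actual environment names, policy-loading entry points, and multi-agent interfaces may differ.

```python
import gym

# Hypothetical environment id -- the repository's registered names may differ.
env = gym.make("MultiRobotCollisionAvoidance-v0")

obs = env.reset()
done = False
episode_return = 0.0
while not done:
    # Placeholder random policy; a trained policy would map obs -> action here.
    action = env.action_space.sample()
    # Classic Gym API (pre-0.26): step returns (obs, reward, done, info).
    obs, reward, done, info = env.step(action)
    episode_return += reward

print(f"Episode return: {episode_return:.2f}")
env.close()
```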
What is StarCraft II Reinforcement Learning Agent?
This repository provides an end-to-end reinforcement learning framework for StarCraft II gameplay research. The core agent uses Proximal Policy Optimization (PPO) to learn policy networks that interpret observations from the PySC2 environment and emit in-game actions. Developers can configure network layers, reward shaping, and training schedules to tune performance. The system supports multiprocessing for efficient sample collection, provides logging utilities for monitoring training curves, and includes evaluation scripts for running trained policies against scripted or built-in AI opponents. The codebase is written in Python and uses TensorFlow for model definition and optimization. Users can swap in custom reward functions, state preprocessing, or network architectures to suit specific research objectives.
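Because the agent is built around PPO, a short TensorFlow sketch of the standard PPO-Clip surrogate loss may help; the function name and tensor shapes are illustrative assumptions rather than the repository's actual code:

```python
import tensorflow as tf

def ppo_clip_loss(new_log_probs, old_log_probs, advantages, clip_eps=0.2):
    """Standard PPO-Clip surrogate objective, returned as a loss to minimize.

    Illustrative sketch only -- the repository's loss code may differ.
    All arguments are 1-D tensors over a batch of sampled transitions.
    """
    # Probability ratio pi_new(a|s) / pi_old(a|s), computed in log space.
    ratio = tf.exp(new_log_probs - old_log_probs)
    unclipped = ratio * advantages
    clipped = tf.clip_by_value(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # Negative sign because optimizers minimize while PPO maximizes the surrogate.
    return -tf.reduce_mean(tf.minimum(unclipped, clipped))

# Example call with dummy values:
loss = ppo_clip_loss(
    new_log_probs=tf.math.log(tf.constant([0.6, 0.3])),
    old_log_probs=tf.math.log(tf.constant([0.5, 0.4])),
    advantages=tf.constant([1.0, -0.5]),
)
```

Clipping the ratio bounds how far a single update can move the new policy from the one that collected the samples, which is what makes PPO stable enough to train with the multiprocessing-based sample collection described above.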
StarCraft II Reinforcement Learning Agent Core Features