Dive into the next generation of reinforcement learning in Unreal Engine 5! AMD Schola is a free, hardware-agnostic (yes, it works perfectly on NVIDIA too!) plugin that bridges your 3D UE5 environments with powerful Python AI frameworks like Stable-Baselines3 and Ray RLlib.
What's New in Schola v2?
Modular Architecture: A "plug-and-play" system that completely decouples your AI's "brain" (decision-making) from its "body" (Unreal Actor).
Imitation Learning: Native Minari dataset support lets you train AI using recorded human gameplay.
Dynamic Agent Management: Spawn and despawn AI mid-episode—perfect for Battle Royales and procedurally generated worlds.
Blueprint Power: Full Blueprint support for setting up your AI's vision and actions without writing C++.
Modern RL Support: Seamless compatibility with the latest tools like Gymnasium (1.1+), Ray RLlib, and Stable-Baselines3.
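To give a feel for the brain/body decoupling described above, here is a minimal sketch in plain Python. The class names (`Brain`, `Body`, `SeekBrain`) are illustrative only, not Schola's actual API: the point is that the decision-making component can be swapped (scripted logic, a trained policy, a remote trainer) without touching the actor that executes its actions.

```python
from abc import ABC, abstractmethod

class Brain(ABC):
    """Decision-making side: maps an observation to an action.
    In a real setup this could wrap a trained RL policy."""
    @abstractmethod
    def decide(self, observation):
        ...

class SeekBrain(Brain):
    """Toy scripted brain: step toward a target coordinate."""
    def decide(self, observation):
        target, position = observation
        return 1 if target > position else -1

class Body:
    """Actor side: owns world state and executes whatever action
    its attached brain picks. Knows nothing about how decisions
    are made, so brains are plug-and-play."""
    def __init__(self, brain, position=0):
        self.brain = brain
        self.position = position

    def tick(self, target):
        action = self.brain.decide((target, self.position))
        self.position += action
        return self.position

# Drive a body toward position 3 with the scripted brain.
body = Body(SeekBrain())
for _ in range(3):
    body.tick(target=3)
```

Because `Body` only depends on the abstract `Brain` interface, replacing `SeekBrain` with a learned policy is a one-line change.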
Getting Started
To jump in, you will need Unreal Engine 5.5+ and Python 3.10+.
(Note: If you want to see the installation steps, CLI commands, and the code to get this running, please visit the website!)
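While the full setup lives on the website, the shape of the workflow is standard Gymnasium: a trainer repeatedly calls `reset()` and `step()` on an environment. Here is a self-contained toy environment with that interface; `ToyEnv` and its reward logic are made up for illustration, whereas a real Schola environment would be backed by a running UE5 game and handed to a trainer like Stable-Baselines3.

```python
import random

class ToyEnv:
    """Stand-in with the Gymnasium-style reset/step interface that
    RL trainers drive. Toy logic: the agent 'wins' by walking its
    position up to a target value."""

    def __init__(self, target=5):
        self.target = target
        self.pos = 0

    def reset(self, seed=None):
        if seed is not None:
            random.seed(seed)
        self.pos = 0
        return self.pos, {}  # observation, info

    def step(self, action):
        self.pos += 1 if action == 1 else -1
        terminated = self.pos >= self.target
        # Small step penalty, big terminal reward (simple shaping).
        reward = 1.0 if terminated else -0.01
        return self.pos, reward, terminated, False, {}

# Random-policy rollout: the same loop an RL trainer runs internally.
env = ToyEnv()
obs, info = env.reset(seed=0)
total_reward = 0.0
for _ in range(100):
    action = random.choice([0, 1])
    obs, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    if terminated or truncated:
        break
```

Any environment exposing this five-tuple `step()` contract can be plugged into Gymnasium-compatible trainers, which is what makes Schola's UE5 environments interchangeable with standard benchmarks.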
Ready to train smarter NPCs and build dynamic procedural behaviors?
