Introduction: Play Video Games with Generative AI
Get an overview of the topics that will be covered in this chapter.
In addition to generating complex data, deep neural networks can be used to learn rules for how an entity (such as a video game character or a vehicle) should respond to an environment in order to optimize a reward; this field is known as reinforcement learning (RL). While RL is not intrinsically tied to either deep learning or generative AI, the union of these fields has produced a powerful set of techniques for optimizing complex behavioral functions.
In this chapter, we’ll learn how to apply GANs to learn optimal policies for agents navigating the OpenAI Gym simulation environment. To understand how these methods combine with traditional approaches in RL, we’ll first review the more general problem that RL tries to solve: determining the right action for an entity given a state, an action that yields a new state and a reward. The rules that map states to actions so as to optimize such rewards are known as a policy.
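To make the state-action-reward loop concrete, here is a minimal sketch of an agent interacting with a Gym environment. It assumes the classic gym package and its CartPole-v1 task (newer gymnasium releases return slightly different tuples from reset and step), and it stands in a random policy where a learned one would go:

```python
# A minimal agent-environment loop; `gym` and CartPole-v1 are assumptions,
# and the random policy is a placeholder for a learned one.
import gym

env = gym.make("CartPole-v1")
state = env.reset()

total_reward = 0.0
done = False
while not done:
    # A policy maps the current state to an action; here we sample randomly.
    action = env.action_space.sample()
    # The environment responds with a new state and a scalar reward.
    state, reward, done, info = env.step(action)
    total_reward += reward

env.close()
print(f"Episode finished with total reward {total_reward}")
```

Replacing the random sample with a function of the state is exactly what a learned policy does; the rest of the loop stays the same.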
We’ll focus on the following topics:
How deep neural networks have been used to learn complex policies from high-dimensional data, such as the raw pixels of Atari video games (see the first code sketch after this list).
How inverse reinforcement learning (IRL) learns the reward function by observing examples of the policy followed by an “expert” agent that makes optimal decisions; this type of algorithm is also known as imitation learning.
How we can use a GAN-style training objective to distinguish between expert and non-expert behavior (just as we distinguished between simulated and natural data in prior examples) and use the resulting discriminator to optimize a reward function (see the second code sketch after this list).
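For the first topic, the sketch below shows the kind of convolutional Q-network popularized by deep Q-learning on Atari: it maps a stack of raw pixel frames to one value per action. The 84x84x4 input shape and layer sizes follow the original DQN setup but are illustrative assumptions here, as are the function name and the choice of four actions:

```python
# A DQN-style convolutional Q-network; the input shape, layer sizes, and
# `build_q_network` name are illustrative assumptions, not the book's code.
import tensorflow as tf

def build_q_network(n_actions: int) -> tf.keras.Model:
    return tf.keras.Sequential([
        tf.keras.Input(shape=(84, 84, 4)),  # four stacked grayscale frames
        tf.keras.layers.Conv2D(32, 8, strides=4, activation="relu"),
        tf.keras.layers.Conv2D(64, 4, strides=2, activation="relu"),
        tf.keras.layers.Conv2D(64, 3, strides=1, activation="relu"),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(512, activation="relu"),
        tf.keras.layers.Dense(n_actions),   # one Q-value per action
    ])

q_net = build_q_network(n_actions=4)
q_net.summary()
```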
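For the third topic, a discriminator in this setting scores (state, action) pairs as expert-like or policy-generated, and its output can serve as a learned reward signal. The following is a minimal sketch assuming low-dimensional state and action vectors; the dimensions, layer widths, and the `build_discriminator` name are placeholders:

```python
# A GAN-style discriminator over (state, action) pairs; the dimensions and
# `build_discriminator` name are illustrative assumptions.
import tensorflow as tf

def build_discriminator(state_dim: int, action_dim: int) -> tf.keras.Model:
    state_in = tf.keras.layers.Input(shape=(state_dim,))
    action_in = tf.keras.layers.Input(shape=(action_dim,))
    x = tf.keras.layers.Concatenate()([state_in, action_in])
    x = tf.keras.layers.Dense(128, activation="tanh")(x)
    x = tf.keras.layers.Dense(128, activation="tanh")(x)
    # Probability that the (state, action) pair came from the expert.
    p_expert = tf.keras.layers.Dense(1, activation="sigmoid")(x)
    return tf.keras.Model([state_in, action_in], p_expert)

disc = build_discriminator(state_dim=4, action_dim=1)
disc.compile(optimizer="adam", loss="binary_crossentropy")
```

Training it with a binary cross-entropy loss on batches of expert and policy samples mirrors the real-versus-generated game played by the GAN discriminators in our earlier examples.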