Reinforcement learning agents have achieved remarkable success in simulated environments, but poor data efficiency remains a major obstacle to carrying that success over to real environments. Designing data-efficient agents calls for a deeper understanding of how information is acquired and represented. This tutorial offers a framework to guide the associated agent design decisions, inspired in part by concepts from information theory, which has grappled with data efficiency for many years in the design of communication systems. The authors shed light on what information an agent should seek, how it should seek that information, and what information it should retain. To illustrate these concepts, they design simple agents that build on them and present computational results highlighting data efficiency. The book will be of interest to students and researchers working in reinforcement learning, as well as to information theorists who wish to apply their expertise to reinforcement learning problems.