We’ve covered a lot of ground in computer vision by now, and one of the most important remaining steps in deepening our expertise is understanding how a model arrives at its predictions.

Interpreting What Convnets Learn

We will end this chapter by getting you familiar with a range of techniques for visualizing what convnets learn and for understanding the decisions they make. It’s often said that deep learning models are “black boxes”: they learn representations that are difficult to extract and present in a human-readable form. While this is partially true of some model types, such as densely connected networks, it is certainly not true of convnets, whose representations of visual concepts lend themselves well to visualization. There are many techniques for visualizing and interpreting these representations; we’ll cover three of the most important ones:

  • Visualizing intermediate convnet outputs (intermediate activations)
  • Visualizing convnet filters
  • Visualizing heat-maps of class activation in an image

For the first method—activation visualization—we’ll use the small convnet that we trained from scratch on the dogs-versus-cats classification problem in section 8.2. For the next two methods, we’ll use a pre-trained Xception model.
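As a quick reference, here is a minimal sketch of how that pretrained model can be loaded through keras.applications; the exact arguments (for instance, whether to keep the classification head via include_top) depend on the technique at hand:

```python
from tensorflow import keras

# Xception pretrained on ImageNet. Filter visualization typically drops
# the classification head; class-activation heatmaps keep it.
model = keras.applications.Xception(weights="imagenet", include_top=False)
```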

→ Visualizing Intermediate Activations

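Visualizing intermediate activations means displaying the feature maps that the convolution and pooling layers of a network output for a given input, which shows how successive layers decompose an image into the features they have learned. The following is a minimal sketch, assuming the section 8.2 model was saved as "convnet_from_scratch.keras", that it takes 180×180 inputs, and that a test image "cat.jpg" is on disk (all three names are placeholders):

```python
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt

# Placeholder path: the convnet trained from scratch in section 8.2.
model = keras.models.load_model("convnet_from_scratch.keras")

# Collect the outputs of every convolution and pooling layer.
layer_outputs = []
layer_names = []
for layer in model.layers:
    if isinstance(layer, (keras.layers.Conv2D, keras.layers.MaxPooling2D)):
        layer_outputs.append(layer.output)
        layer_names.append(layer.name)

# A model that maps one input image to every intermediate activation.
activation_model = keras.Model(inputs=model.input, outputs=layer_outputs)

# Load and batch a single test image (placeholder file name).
img = keras.utils.load_img("cat.jpg", target_size=(180, 180))
img_array = np.expand_dims(keras.utils.img_to_array(img), axis=0)

# One activation tensor per convolution/pooling layer.
activations = activation_model.predict(img_array)

# Display the fifth channel of the first layer's feature map.
first_layer_activation = activations[0]
plt.matshow(first_layer_activation[0, :, :, 4], cmap="viridis")
plt.title(layer_names[0])
plt.show()
```

From here, looping over activations and layer_names and tiling each layer’s channels into a grid gives a complete picture of how the representations grow more abstract with depth.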