Explaining a CNN-based image classifier (brief intro)

Jijie Liu
2 min read · May 24, 2020

My final internship for my diplôme d'ingénieur (Master's degree) was at EDF (Électricité de France) Lab Paris-Saclay. The goal was to explain how a CNN classifies images. After six months of study, our team proposed a prototype explanation with three levels:

  • Local Explanation: It shows the areas of an image that the model considers important.
  • Class Explanation: It captures the special features found in the important areas of each class.
  • Global Explanation: It analyses the relationships of these special features across all the classes.

Local Explanation

Local explanation identifies the important areas in an image, where an important area is made up of the pixels the CNN actually uses. To find these pixels, we traced the gradients through the network from top to bottom and reconstructed the input image from them.

The method we used is Grad-CAM combined with DeconvNet.
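The post does not include the implementation, but the core Grad-CAM computation can be sketched with NumPy alone: given the activations of a convolutional layer and the gradients of the class score with respect to those activations (which a framework like PyTorch or TensorFlow would provide), the heatmap is a ReLU of the gradient-weighted sum of activation maps. The shapes and data here are illustrative.

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heatmap from a conv layer's activations and the
    gradients of the class score w.r.t. those activations.
    Both arrays have shape (C, H, W)."""
    # Channel weights: global-average-pool the gradients
    weights = gradients.mean(axis=(1, 2))             # (C,)
    # Weighted sum of the activation maps
    cam = np.tensordot(weights, activations, axes=1)  # (H, W)
    # Keep only positive influence, then normalise to [0, 1]
    cam = np.maximum(cam, 0)
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# Toy example with random activations/gradients
rng = np.random.default_rng(0)
A = rng.random((8, 7, 7))
G = rng.random((8, 7, 7))
heatmap = grad_cam(A, G)
print(heatmap.shape)  # (7, 7)
```

In the full pipeline, this coarse heatmap would be combined with the pixel-level signal from DeconvNet to sharpen the important areas.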

Class Explanation

At this level, we capture the special features found in the important areas of each class. For example, to a human, a wheel is a special feature of the class Car. We want to see how the CNN treats the special features of a class.

The method we proposed is to transform the important areas into vectors and cluster these vectors using KMeans. Each cluster corresponds to one special feature.
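The clustering step can be sketched with scikit-learn. The post does not specify how an important area is turned into a vector, so the 128-dimensional vectors below are placeholders for that representation; each KMeans cluster then plays the role of one special feature.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical data: each important area of one class,
# already flattened/encoded into a fixed-length vector
rng = np.random.default_rng(0)
area_vectors = rng.random((60, 128))  # 60 areas, 128-dim each

# Cluster the area vectors; each cluster ~ one "special feature"
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0)
labels = kmeans.fit_predict(area_vectors)

# How many areas fall into each feature cluster
print(np.bincount(labels))
```

The number of clusters per class is a design choice; in practice it would be tuned, for instance with silhouette scores.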

Global Explanation

With the special features of each class, we are interested in understanding the relationships among these features.

The method we used is to visualise the distribution of the important areas using t-SNE. Before applying t-SNE, we transform these areas into vectors in the same way as presented in Class Explanation.
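This step can also be sketched with scikit-learn. As above, the vectors are placeholders for the encoded important areas; the class labels are only used to colour the 2-D projection, where nearby points across classes would suggest shared features.

```python
import numpy as np
from sklearn.manifold import TSNE

# Hypothetical data: area vectors from three classes
rng = np.random.default_rng(0)
vectors = rng.random((90, 64))
class_labels = np.repeat([0, 1, 2], 30)

# Project to 2-D for visual inspection of the feature distribution
emb = TSNE(n_components=2, perplexity=15, random_state=0).fit_transform(vectors)
print(emb.shape)  # (90, 2)
```

A scatter plot of `emb` coloured by `class_labels` then shows which classes occupy overlapping regions of the feature space.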

The original paper is presented below.
