Neural network image classifiers look at images and determine what they represent. When analyzing a network's behavior, we usually want to figure out what parts of an image are especially important to the network. What parts of a steak, in the net's eyes, are the most steak-y?
This work slides a black box over an image of steak, and sharpens the areas where the network's confidence dips the most. A small, Perlin-noised tint is applied for that extra meaty sheen.
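The sweep described above is an occlusion-sensitivity pass: black out one patch at a time, re-run the classifier, and record how much the confidence drops. A minimal sketch, where `predict`, the patch size, and the stride are placeholder assumptions (here `predict` is a toy stand-in, not a real network):

```python
import numpy as np

def occlusion_map(image, predict, patch=16, stride=8):
    """Slide a black patch over the image and record the classifier's
    confidence drop at each position. Bigger drop = more steak-y region."""
    h, w = image.shape[:2]
    base = predict(image)  # confidence on the unoccluded image
    rows = (h - patch) // stride + 1
    cols = (w - patch) // stride + 1
    heat = np.zeros((rows, cols))
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = 0.0  # the black box
            heat[i, j] = base - predict(occluded)     # confidence dip
    return heat

# Hypothetical stand-in for a trained classifier: "confidence" is the
# mean brightness of the centre crop, so occluding the centre dips most.
def toy_predict(img):
    return float(img[24:40, 24:40].mean())

img = np.ones((64, 64))
heat = occlusion_map(img, toy_predict)
```

The resulting heat map can then drive the sharpening (and the Perlin-noised tint) only where the dips are largest.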