Hierarchical abstraction appears to be what our brains do, an idea with growing support in the field of neuroscience. While some protest drawing parallels between the computation the brain performs and the silicon computing found in computers, others embrace them, such as Dana H. Ballard in his book "Brain Computation as Hierarchical Abstraction". Ballard is a Computer Science professor at the University of Texas, with ties to Psychology, Neuroscience, and the Center for Perceptual Systems.
Inspired by the hierarchical abstraction of the visual cortex, CNNs are themselves hierarchical, and that property is what allows them to do what they do. Exploiting exactly this property lets us create a really fun (and practical) algorithm, the focus of this lesson. It's called DeepDream: we associate odd, almost-there-but-not-there visuals with dreams, and the images are induced by a deep convolutional neural network. Here's a visual created with the code in this lesson:
In "Being You", British neuroscientist Anil Seth explains how his team used DeepDream to create a "hallucination machine", meant to computationally simulate overactive perceptual priors. That's a different way to say "emphasize what you expect". Where I'm from, folk wisdom states that "in fear, eyes are big", and it's often used to explain how easily you can see things that aren't there when you're afraid. You're embedding overactive perceptual priors in your projection, so the shirt you forgot to take off the chair now looks like an intruder in your home at night. In a similar way, you might recognize a cloud as the shape of a country on a map, or the sequence of characters "-_-" as a face with closed eyes and a flat mouth. This is formally known as pareidolia. In 2015, Google engineer Alexander Mordvintsev popularized a way to embed perceptual priors into CNNs, inducing and visualizing pareidolia. This algorithm is known as DeepDream.
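At its core, DeepDream runs gradient ascent on the input image itself: instead of updating the network's weights, it nudges the pixels to amplify whatever activations a chosen layer already produces, so the network's "priors" get painted back into the image. Here's a minimal NumPy sketch of that loop, using a single fixed random convolution filter in place of a trained network's layer; the toy setup (image size, filter, step size) is illustrative, not taken from the lesson's actual code:

```python
import numpy as np

def conv2d(img, kernel):
    """Valid cross-correlation of a 2D image with a 2D kernel."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def grad_wrt_input(act, kernel, img_shape):
    """Gradient of 0.5 * sum(act**2) with respect to the input image:
    scatter each activation back through the receptive field it came from."""
    grad = np.zeros(img_shape)
    kh, kw = kernel.shape
    for i in range(act.shape[0]):
        for j in range(act.shape[1]):
            grad[i:i + kh, j:j + kw] += act[i, j] * kernel
    return grad

rng = np.random.default_rng(0)
img = rng.normal(size=(16, 16))       # stand-in for the input photo
kernel = rng.normal(size=(3, 3))      # stand-in for one learned filter

step = 0.01
losses = []
for _ in range(20):
    act = conv2d(img, kernel)
    losses.append(0.5 * np.sum(act ** 2))   # "how strongly does the layer respond?"
    g = grad_wrt_input(act, kernel, img.shape)
    # Ascend: change the image to excite the filter more.
    # Normalizing by the mean gradient magnitude is a common DeepDream trick.
    img += step * g / (np.abs(g).mean() + 1e-8)
```

After the loop, `losses` increases: the image has been warped toward patterns the filter "expects". The real algorithm does the same thing with a deep pretrained CNN (autodiff computes the input gradient), plus refinements like running at multiple image scales.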