Artificial intelligence is all the rage these days, with AI solutions for everything from scheduling meetings to Big Data mining in gastronomy appearing at an unprecedented rate. Many of these products rely on a technique called "deep learning," which Wikipedia defines as:
"… a branch of machine learning based on a set of algorithms that attempt to model high-level abstractions in data by using model architectures, with complex structures or otherwise, composed of multiple non-linear transformations."
One of the key technologies used to implement deep learning systems is artificial neural networks, or ANNs:
"… a family of statistical learning models inspired by biological neural networks (the central nervous systems of animals, in particular the brain) and are used to estimate or approximate functions that can depend on a large number of inputs and are generally unknown. Artificial neural networks are generally presented as systems of interconnected "neurons" which exchange messages between each other. The connections have numeric weights that can be tuned based on experience, making neural nets adaptive to inputs and capable of learning."
But ANNs present researchers and engineers with a problem: how they work is next to impossible to understand. In an attempt to grok how ANNs function, Google engineers created Deep Dream, a program that, when given an image, looks for things its ANN has already been taught and inserts them into the image. It does this operation iteratively, and the end results are, to say the least, psychedelic; what Google's geeks have termed "Inceptionism" (see the main image above).
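At its core, that iterative operation is gradient ascent on the input image: each pass nudges the pixels in whatever direction makes a chosen network layer respond more strongly, so the features the network already "knows" get amplified. Here's a minimal sketch of that loop, assuming a toy setup: a random linear layer `W` stands in for a trained convolutional network, and a small vector stands in for the image (real Deep Dream uses a trained Inception-style network and operates on actual pixels).

```python
import numpy as np

# Toy sketch of Deep Dream's core loop: gradient ascent on the input.
# W is a hypothetical stand-in for a trained network layer.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16))   # random "layer" weights (assumption)
x = rng.standard_normal(16) * 0.1  # the "image" we will dream on

def activation(x):
    """How strongly the layer responds: L(x) = 0.5 * ||W x||^2."""
    return 0.5 * np.sum((W @ x) ** 2)

history = []
step = 0.05
for _ in range(20):
    grad = W.T @ (W @ x)                              # analytic gradient of L w.r.t. x
    x += step * grad / (np.linalg.norm(grad) + 1e-8)  # normalized ascent step on the input
    history.append(activation(x))

# Each iteration pushes the input toward patterns the layer responds to,
# which is why repeated passes hallucinate ever-stronger features.
```

The layer's response grows with every iteration; in the real system the same idea, applied across multiple scales of a trained image classifier, is what turns clouds into dogs.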
Many commentators have pointed out that the resulting images have a sort of Hieronymus Bosch feel, but that's very much a result of what Google's ANN had been taught, which apparently included lots of dogs (as above) and pagoda-like buildings.
If you want to play with, er, evaluate this technology, there are several Web sites, such as Google's own Deep Dream Generator and Psychic VR Lab, that will transmogrify your own images using the Deep Dream software that Google open sourced as an IPython notebook. Here's one of my photos morphed into a Deep Dream by Google.
Deep Dream has also been used to morph videos; here's a clip from the movie "Fear and Loathing in Las Vegas" that's been processed, and it is, quite obviously, really disturbing:
The problem with most online Deep Dream implementations is that you might have to wait for hours for your image to be processed (which is the case with Psychic VR Lab), and there's not a lot of control over the parameters of the transmogrification (as with Google's Deep Dream Generator). So, if you'd like greater control and faster processing (your gear permitting), you can either run the source code yourself (not trivial) or use one of the first commercial offerings: Realmac's Deep Dreamer.
Currently in beta for OS X Yosemite and above, Deep Dreamer provides more control over how the ANN processes an image than most of the online services I've tried, and it not only handles static images but also creates Deep Dream videos! For £9.90 this is a cheap way to explore the world of Deep Dreaming. Highly recommended.