Inception - From Meme to State of the Art (2014)

David Landup

The Inception network competed in the 2014 ILSVRC challenge, outperforming VGGNets in both accuracy and training speed and taking first place that year. The first network in the family tree is known as GoogLeNet, followed by InceptionV2, InceptionV3, InceptionV4 and Inception-ResNet.

GoogLeNet is also known as InceptionV1. InceptionV2 and InceptionV3 are a redesign of the network and were introduced together in a single paper ("Rethinking the Inception Architecture for Computer Vision"), while InceptionV4 and Inception-ResNet likewise share a single paper ("Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning").

The GoogLeNet name is an homage to LeCun's LeNet-5. The "Inception" name, however, comes from the "We need to go deeper" meme from the movie "Inception", which was viral at the time, reflecting the fact that the network was made to... go deeper than previous networks. The paper that started the Inception family tree was aptly named "Going Deeper with Convolutions" by Szegedy et al.

The authors noted that the most straightforward way to increase performance is to scale the network up, both in depth and in width. Scaling up, however, means you'll want to utilize parameters more efficiently, lest you waste precious compute. The Inception authors argue that the fundamental way to solve this is to move from densely connected architectures to sparsely connected ones, which the Inception module approximates in practice with parallel branches of dense components. With this in mind, GoogLeNet had only ~6M parameters compared to AlexNet's 60M and VGGNet's ~138M, while delivering better accuracy, faster training and far greater parameter efficiency. This was a nail in the coffin for the previous brute-force approach of simply scaling networks up.
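
To make the "width" idea concrete, here is a minimal sketch of an Inception module in Keras: parallel 1x1, 3x3 and 5x5 convolutions plus a pooled branch, concatenated along the channel axis, with 1x1 "reduce" convolutions keeping the larger filters cheap. The filter counts follow the inception (3a) block from the GoogLeNet paper; the `inception_module` helper name is ours, purely for illustration:

```python
import tensorflow as tf
from tensorflow.keras import layers

def inception_module(x, f1, f3_reduce, f3, f5_reduce, f5, pool_proj):
    # Branch 1: plain 1x1 convolution
    b1 = layers.Conv2D(f1, 1, padding='same', activation='relu')(x)
    # Branch 2: 1x1 "reduce" convolution, then 3x3
    b3 = layers.Conv2D(f3_reduce, 1, padding='same', activation='relu')(x)
    b3 = layers.Conv2D(f3, 3, padding='same', activation='relu')(b3)
    # Branch 3: 1x1 "reduce" convolution, then 5x5
    b5 = layers.Conv2D(f5_reduce, 1, padding='same', activation='relu')(x)
    b5 = layers.Conv2D(f5, 5, padding='same', activation='relu')(b5)
    # Branch 4: 3x3 max-pooling, then a 1x1 projection
    bp = layers.MaxPooling2D(3, strides=1, padding='same')(x)
    bp = layers.Conv2D(pool_proj, 1, padding='same', activation='relu')(bp)
    # Concatenate all branches along the channel axis
    return layers.Concatenate()([b1, b3, b5, bp])

# Filter counts from the inception (3a) block of GoogLeNet
inputs = tf.keras.Input(shape=(28, 28, 192))
outputs = inception_module(inputs, f1=64, f3_reduce=96, f3=128,
                           f5_reduce=16, f5=32, pool_proj=32)
tf.keras.Model(inputs, outputs).summary()  # output shape: (28, 28, 256)
```

The 1x1 reductions are the key cost trick: they shrink the channel dimension before the expensive 3x3 and 5x5 convolutions run, which is what lets the network grow wider without the parameter count exploding.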
