NASNet - Letting Machines Do (Part Of) The Work (2017)

David Landup

When designing networks, you test a lot of things out - and when you test a lot of things out, you look for ways to automate the process. Tuning frameworks such as Keras Tuner let you define a space of variations and run through them (randomly or guided by an optimization algorithm), but why should you have to create this space yourself? Well, you don't have to - and that idea is at the heart of Neural Architecture Search (NAS).

NAS isn't a new technique. It's a wide and diverse field, with a lot of research hours going into expanding it, and it has produced state-of-the-art networks. The process boils down to three components: a search space (the candidate network architectures), a search strategy (how the space is navigated), and an evaluation strategy (how each found architecture is scored before the search continues). Some NAS methods use reinforcement learning, some use evolutionary algorithms such as genetic algorithms, and others use classical optimization algorithms.

Some believe that the prohibitive time it takes to perform NAS doesn't justify the performance benefits, while others believe it to be the future of architecture design. In any case, the search space is usually constrained - either a set of allowed operations is given, or the search is made metric-aware (ensuring, for example, that inference/training time doesn't explode into an impractical range).
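To make the contrast concrete, here's a minimal sketch of the "define the space yourself" approach with Keras Tuner. The layer widths, learning rates, and the use of MNIST are arbitrary choices for illustration - the point is that every knob the tuner can turn is one we declared by hand:

```python
import keras_tuner
from tensorflow import keras

def build_model(hp):
    # The search space is declared by hand: the tuner can only vary
    # the knobs we expose here (layer width and learning rate).
    model = keras.Sequential([
        keras.layers.Flatten(input_shape=(28, 28)),
        keras.layers.Dense(
            hp.Int('units', min_value=32, max_value=512, step=32),
            activation='relu'),
        keras.layers.Dense(10, activation='softmax'),
    ])
    model.compile(
        optimizer=keras.optimizers.Adam(
            hp.Choice('learning_rate', [1e-2, 1e-3, 1e-4])),
        loss='sparse_categorical_crossentropy',
        metrics=['accuracy'])
    return model

# Random search is one of the simplest search strategies - each trial
# samples a point from the hand-defined space and trains a model there.
tuner = keras_tuner.RandomSearch(
    build_model,
    objective='val_accuracy',
    max_trials=5)

(x_train, y_train), _ = keras.datasets.mnist.load_data()
x_train = x_train / 255.0

tuner.search(x_train, y_train, epochs=2, validation_split=0.2)
best_model = tuner.get_best_models(num_models=1)[0]
```

NAS takes this a step further: instead of varying a few hyperparameters inside a fixed architecture, the architecture itself - which operations to use and how they connect - becomes the thing being searched.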
