Multi-fidelity Neural Architecture Search with Knowledge Distillation

Ilya Trofimov¹, Nikita Klyuchnikov¹, Mikhail Salnikov¹, Alexander Filippov², Evgeny Burnaev¹

¹Skolkovo Institute of Science and Technology, ²Huawei Noah's Ark Lab

arXiv 2020

Figure: Pearson correlation between high-fidelity and low-fidelity evaluations of architectures.

Abstract

Neural architecture search (NAS) aims to find the optimal architecture of a neural network for a problem or a family of problems. Evaluations of neural architectures are very time-consuming. One possible way to mitigate this issue is to use low-fidelity evaluations, namely training on a part of the dataset, for fewer epochs, with fewer channels, etc. In this paper, we propose to improve low-fidelity evaluations of neural architectures by using knowledge distillation. Knowledge distillation adds a term to the loss function that forces a network to mimic some teacher network. We carry out experiments on CIFAR-100 and ImageNet and study various knowledge distillation methods. We show that training on a small part of a dataset with such a modified loss function leads to a better selection of neural architectures than training with a logistic loss. The proposed low-fidelity evaluations were incorporated into a multi-fidelity search algorithm that outperformed search based only on high-fidelity evaluations (training on the full dataset).
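For illustration, below is a minimal sketch of the kind of distillation objective described in the abstract: the usual logistic (cross-entropy) loss on the true labels plus a soft term that pushes the student's softened predictions toward those of a teacher network. The temperature, weighting, and function name here are illustrative assumptions, not the paper's exact settings.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets,
                      temperature=4.0, alpha=0.5):
    """Hedged sketch of a knowledge-distillation loss.

    `temperature` and `alpha` are hypothetical hyperparameters chosen
    for illustration only.
    """
    # Hard-label term: standard cross-entropy (logistic) loss.
    hard = F.cross_entropy(student_logits, targets)
    # Soft-label term: KL divergence between temperature-softened
    # student and teacher distributions.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2
    # Mix the two terms; the balance is an assumption, not the paper's.
    return alpha * hard + (1.0 - alpha) * soft
```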

Materials

Paper

Code
