Title: TapNet: Neural Network Augmented with Task-Adaptive Projection for Few-Shot Learning
Authors: Sung-Whan Yoon, Jun Seo and Jaekyun Moon
Few-shot learning promises to let machines carry out previously unencountered tasks using only a small number of relevant examples. As such, it finds wide application wherever labeled data are scarce or expensive, which is far more often the case than not. Unfortunately, despite immense interest and active research in recent years, few-shot learning remains an elusive challenge for the machine learning community. For example, while deep networks now routinely achieve near-perfect classification scores on standard image test datasets given ample training data, reported results on few-shot learning still fall well below the levels that would be considered reliable in critical real-world settings.
We propose TapNets, neural networks augmented with task-adaptive projection for improved few-shot learning (see Figure). Employing a meta-learning strategy with episode-based training, a network and a set of per-class reference vectors are learned across widely varying tasks. At the same time, for every episode, features in the embedding space are linearly projected into a new space as a form of quick task-specific conditioning. This combination yields excellent generalization. When tested on standard datasets, we obtain state-of-the-art classification accuracies under various few-shot scenarios; as seen in Table, our method gives the best accuracy compared with existing few-shot learners.
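The per-episode conditioning described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes that the task-adaptive projection is taken as the null space of the error vectors between (normalized) per-class reference vectors and per-class prototypes of the support embeddings, so that references and class averages align in the projected space; the array names (`support`, `phi`, `M`) and the random stand-in embeddings are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes, n_support, dim = 5, 5, 16

# Stand-ins for support-set embeddings (normally produced by the learned
# embedding network) and the learned per-class reference vectors phi.
support = rng.standard_normal((n_classes, n_support, dim))
phi = rng.standard_normal((n_classes, dim))

# Per-class prototypes: average embedding of each class's support examples.
prototypes = support.mean(axis=1)

# Error vectors between normalized references and prototypes. The
# task-adaptive projection space is chosen as the null space of these
# errors, so projected references and prototypes coincide in direction.
eps = (phi / np.linalg.norm(phi, axis=1, keepdims=True)
       - prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True))

# Null-space basis via SVD: the rows of Vt beyond the rank of eps
# (generically n_classes) span the null space.
_, _, vt = np.linalg.svd(eps)
M = vt[n_classes:]  # (dim - n_classes) x dim projection basis

# Classify a query embedding by distance to the references after projection.
query = rng.standard_normal(dim)
dists = np.linalg.norm((query - phi) @ M.T, axis=1)
pred = int(np.argmin(dists))
```

Because `M` is recomputed from each episode's support set alone, it provides the quick task-specific conditioning without any gradient updates at episode time, while the embedding network and references are meta-learned across episodes.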