Online Meta-Learning. (arXiv:1902.08438v4 [cs.LG] UPDATED)

A central capability of intelligent systems is the ability to continuously
build upon previous experiences to speed up and enhance learning of new tasks.
Two distinct research paradigms have studied this question. Meta-learning views
this problem as learning a prior over model parameters that is amenable for
fast adaptation on a new task, but typically assumes that the tasks are
available together as a batch. In contrast, online (regret-based) learning
considers a sequential setting in which problems are revealed one after the
other, but conventionally trains only a single model without any task-specific
adaptation. This work introduces an online meta-learning setting, which merges
ideas from both the aforementioned paradigms to better capture the spirit and
practice of continual lifelong learning. We propose the follow-the-meta-leader
(FTML) algorithm, which extends MAML to this setting. Theoretically, this
work provides an $\mathcal{O}(\log T)$ regret guarantee with only one
additional higher-order smoothness assumption in comparison to the standard
online setting. Our experimental evaluation on three different large-scale
tasks suggests that the proposed algorithm significantly outperforms
alternatives based on traditional online learning approaches.
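To make the idea concrete, here is a minimal sketch of the follow-the-meta-leader pattern on a toy problem. All specifics are assumptions for illustration: each task is a simple quadratic loss centered at some point `c_i`, inner-loop adaptation is a single MAML-style gradient step, and the meta-gradient is approximated by finite differences rather than autodiff. At each round a new task is revealed, and the meta-parameters are re-fit to minimize the post-adaptation loss over all tasks seen so far (the "follow the leader" step applied to the MAML objective).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy tasks: task i is the quadratic loss ||w - c_i||^2.
def task_loss(w, c):
    return float(np.sum((w - c) ** 2))

def task_grad(w, c):
    return 2.0 * (w - c)

ALPHA = 0.1    # inner-loop (adaptation) step size
BETA = 0.05    # meta step size
META_STEPS = 25

def adapt(theta, c):
    """One MAML-style inner gradient step on the task centered at c."""
    return theta - ALPHA * task_grad(theta, c)

def meta_objective(theta, centers):
    """Average post-adaptation loss over all tasks revealed so far."""
    return np.mean([task_loss(adapt(theta, c), c) for c in centers])

def meta_grad(theta, centers, eps=1e-5):
    """Finite-difference meta-gradient (a stand-in for autodiff)."""
    g = np.zeros_like(theta)
    for j in range(theta.size):
        e = np.zeros_like(theta)
        e[j] = eps
        g[j] = (meta_objective(theta + e, centers) -
                meta_objective(theta - e, centers)) / (2 * eps)
    return g

# FTML loop: tasks arrive one at a time; after each arrival, approximately
# re-minimize the meta-objective over ALL tasks seen so far.
theta = np.zeros(2)
seen = []
for t in range(10):
    c_t = rng.normal(size=2)       # new task revealed at round t
    seen.append(c_t)
    for _ in range(META_STEPS):    # approximate the FTML argmin
        theta = theta - BETA * meta_grad(theta, seen)

print(meta_objective(theta, seen))
```

For these quadratics the FTML iterate converges toward the mean of the task centers, which is the point from which one adaptation step works best on average; the paper's regret analysis concerns this running meta-objective, not the toy losses used here.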
