AI/ML

Probabilistic Model-Agnostic Meta-Learning. (arXiv:1806.02817v2 [cs.LG] UPDATED)

Meta-learning for few-shot learning entails acquiring a prior over previous
tasks and experiences, such that new tasks can be learned from small amounts
of data. However, a critical challenge in few-shot learning is task ambiguity:
even when a powerful prior can be meta-learned from a large number of prior
tasks, a small dataset for a new task can simply be too ambiguous to identify
a single accurate model (e.g., a classifier) for that task. In this
paper, we propose a probabilistic meta-learning algorithm that can sample
models for a new task from a model distribution. Our approach extends
model-agnostic meta-learning, which adapts to new tasks via gradient descent,
to incorporate a parameter distribution that is trained via a variational lower
bound. At meta-test time, our algorithm adapts via a simple procedure that
injects noise into gradient descent, and at meta-training time, the model is
trained such that this stochastic adaptation procedure produces samples from
the approximate model posterior. Our experimental results show that our method
can sample plausible classifiers and regressors in ambiguous few-shot learning
problems. We also show how reasoning about ambiguity can be used for
downstream active learning problems.
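The core idea described above, adapting by injecting noise into gradient descent so that repeated adaptation runs yield different plausible task-specific models, can be sketched as below. This is a minimal illustration on a toy regression task, not the paper's exact algorithm; the function names, the Gaussian parameter prior, and the fixed noise scale are all simplifying assumptions made here.

```python
import numpy as np

rng = np.random.default_rng(0)

def adapt_with_noise(theta_mean, theta_log_var, grad_fn, x, y,
                     steps=5, lr=0.1, noise_scale=0.1):
    """Sample task-specific parameters via noisy gradient descent.

    Draw an initialization from a learned Gaussian parameter prior
    N(theta_mean, exp(theta_log_var)), then take a few gradient steps
    on the task's small support set, adding Gaussian noise at each
    step. Repeated calls return different plausible models, mimicking
    samples from an approximate model posterior (hypothetical
    interface; the paper's procedure differs in detail).
    """
    std = np.exp(0.5 * theta_log_var)
    theta = theta_mean + std * rng.standard_normal()
    for _ in range(steps):
        g = grad_fn(theta, x, y)
        theta = theta - lr * g + noise_scale * rng.standard_normal()
    return theta

# Toy 1-D linear regression: loss(theta) = mean((x * theta - y)^2)
def grad_fn(theta, x, y):
    return 2.0 * np.mean((x * theta - y) * x)

x = np.array([1.0, 2.0, 3.0])
y = 2.0 * x  # true slope is 2

# Each adaptation run samples a different plausible regressor.
samples = [adapt_with_noise(0.0, 0.0, grad_fn, x, y) for _ in range(10)]
```

On this unambiguous toy task the sampled slopes cluster around the true value of 2 while still varying across runs; on a genuinely ambiguous few-shot task the same mechanism would spread its samples over the distinct models consistent with the support data.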

Source link




