AI/ML

Deep Reinforcement One-Shot Learning for Artificially Intelligent Classification Systems. (arXiv:1808.01527v1 [cs.LG])

In recent years there has been a sharp rise in networking applications in
which significant events need to be classified, but only a few training
instances are available. These are known as cases of one-shot learning.
Examples include analyzing network traffic under zero-day attacks and computer
vision tasks performed by sensor networks deployed in the field. To handle this
challenging task, organizations often use human analysts to classify events
under high uncertainty. Existing algorithms use a threshold-based mechanism to
decide whether to classify an object automatically or send it to an analyst for
deeper inspection. However, this approach leads to a significant waste of
resources since it does not take the practical temporal constraints of system
resources into account. Our contribution is threefold. First, we develop a
novel Deep Reinforcement One-shot Learning (DeROL) framework to address this
challenge. The basic idea of the DeROL algorithm is to train a deep-Q network
to obtain a policy that is oblivious to the unseen classes in the testing
data. Then, in real time, DeROL maps the current state of the one-shot learning
process to operational actions based on the trained deep-Q network, so as to maximize
the objective function. Second, we develop the first open-source software for
practical artificially intelligent one-shot classification systems with limited
resources for the benefit of researchers in related fields. Third, we present
an extensive experimental study using the OMNIGLOT dataset for computer vision
tasks and the UNSW-NB15 dataset for intrusion detection tasks that demonstrates
the versatility and efficiency of the DeROL framework.
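
The abstract does not give implementation details, but the core mechanism it describes is a deep-Q network that maps the current state of the one-shot classification process to an operational action (for example, classify the sample automatically, delay it, or hand it to a human analyst). The Python sketch below illustrates that idea only; the state features, action names, network size, and exploration rate are assumptions made for illustration, not the design used in the paper or its open-source release.

# Illustrative sketch of a deep-Q decision policy in the spirit of DeROL.
# The state layout, action set, and hyperparameters are assumed here, not
# taken from the paper.
import torch
import torch.nn as nn

# Hypothetical operational actions for an arriving sample.
ACTIONS = ["classify_now", "delay", "send_to_analyst"]

class QNetwork(nn.Module):
    """Small MLP estimating Q(state, action) for each operational action."""
    def __init__(self, state_dim: int, num_actions: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

def select_action(q_net: QNetwork, state: torch.Tensor, epsilon: float = 0.05) -> str:
    """Epsilon-greedy choice over the operational actions."""
    if torch.rand(1).item() < epsilon:
        idx = int(torch.randint(len(ACTIONS), (1,)).item())
    else:
        with torch.no_grad():
            idx = int(q_net(state.unsqueeze(0)).argmax(dim=1).item())
    return ACTIONS[idx]

# Example state: assumed features such as classifier confidence, analyst
# queue length, and remaining delay budget for the current sample.
q_net = QNetwork(state_dim=3, num_actions=len(ACTIONS))
state = torch.tensor([0.42, 5.0, 0.8])
print(select_action(q_net, state))

In a full training loop, standard deep Q-learning updates would be driven by a reward that reflects the objective function mentioned in the abstract, for instance trading off classification accuracy against analyst load and delay penalties.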
