Exploratory Combinatorial Optimization with Reinforcement Learning. (arXiv:1909.04063v1 [cs.LG])

Many real-world problems can be reduced to combinatorial optimization on a
graph, where the subset or ordering of vertices that maximizes some objective
function must be found. With such tasks often NP-hard and analytically
intractable, reinforcement learning (RL) has shown promise as a framework with
which efficient heuristic methods to tackle these problems can be learned.
Previous works construct the solution subset incrementally, adding one element
at a time; however, the irreversible nature of this approach prevents the agent
from revising its earlier decisions, which may be necessary given the
complexity of the optimization task. We instead propose that the agent should
seek to continuously improve the solution by learning to explore at test time.
Our approach of exploratory combinatorial optimization (ECO-DQN) is, in
principle, applicable to any combinatorial problem that can be defined on a
graph. Experimentally, we show our method to produce state-of-the-art RL
performance on the Maximum Cut problem. Moreover, because ECO-DQN can start
from any arbitrary configuration, it can be combined with other search methods
to further improve performance, which we demonstrate using a simple random
search.
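The exploratory idea in the abstract can be sketched in a few lines. ECO-DQN itself chooses which vertex to flip using a learned deep Q-network; the sketch below is only an analogue that swaps the learned policy for a greedy flip rule on the Maximum Cut objective and adds random restarts, to show the overall search structure (all names here are illustrative, not taken from the paper's code):

```python
import random

def cut_value(edges, spins):
    """Cut value of a +/-1 spin assignment: count edges whose endpoints differ."""
    return sum(1 for u, v in edges if spins[u] != spins[v])

def greedy_flip_search(n, edges, restarts=10, seed=0):
    """Illustrative analogue of exploratory search: from each random start,
    repeatedly flip any single vertex that improves the cut, keeping the best
    value found across restarts. ECO-DQN replaces this greedy flip choice
    with Q-values from a trained network."""
    rng = random.Random(seed)
    best = 0
    for _ in range(restarts):
        spins = [rng.choice((-1, 1)) for _ in range(n)]
        improved = True
        while improved:
            improved = False
            current = cut_value(edges, spins)
            for v in range(n):
                spins[v] *= -1          # tentatively flip vertex v
                if cut_value(edges, spins) > current:
                    current = cut_value(edges, spins)
                    improved = True     # keep the improving flip
                else:
                    spins[v] *= -1      # revert a non-improving flip
        best = max(best, current)
    return best
```

Because each flip is reversible and the search can start from any configuration, restarts (or any other metaheuristic) compose naturally with it, which is the property the abstract highlights for ECO-DQN.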
