Many real-world problems can be reduced to combinatorial optimization on a
graph, where one must find the subset or ordering of vertices that maximizes
some objective function. Because such tasks are often NP-hard and analytically
intractable, reinforcement learning (RL) has shown promise as a framework for
learning efficient heuristic methods to tackle these problems.
Previous works construct the solution subset incrementally, adding one element
at a time; however, the irreversible nature of this approach prevents the agent
from revising its earlier decisions, which may be necessary given the
complexity of the optimization task. We instead propose that the agent should
seek to continuously improve the solution by learning to explore at test time.
Our approach of exploratory combinatorial optimization (ECO-DQN) is, in
principle, applicable to any combinatorial problem that can be defined on a
graph. Experimentally, we show our method to produce state-of-the-art RL
performance on the Maximum Cut problem. Moreover, because ECO-DQN can start
from any arbitrary configuration, it can be combined with other search methods
to further improve performance, which we demonstrate using a simple random
search.
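
To make the exploratory setting concrete, below is a minimal sketch of Maximum Cut as a sequence of reversible vertex flips combined with random restarts. This is not the authors' ECO-DQN implementation: the hand-coded "flip the highest-gain vertex" rule stands in for the learned Q-network, the graph is assumed to be a symmetric NumPy adjacency matrix with zero diagonal, and the restart loop mirrors the simple random search mentioned above.

```python
import numpy as np

def cut_value(adj, spins):
    """Total weight of edges crossing the cut defined by +/-1 spins."""
    # adj is symmetric with zero diagonal, so each edge is counted twice;
    # hence the factor of 1/4.
    return 0.25 * float(np.sum(adj * (1.0 - np.outer(spins, spins))))

def flip_gains(adj, spins):
    """Change in cut value obtained by flipping each vertex to the other side."""
    return spins * (adj @ spins)

def explore(adj, spins, steps):
    """Repeatedly flip the highest-gain vertex; flips are reversible, so earlier
    decisions can be revised, and the best configuration seen is kept."""
    best_spins, best_val = spins.copy(), cut_value(adj, spins)
    for _ in range(steps):
        v = int(np.argmax(flip_gains(adj, spins)))  # stand-in for the learned action choice
        spins[v] *= -1                              # reversible action
        val = cut_value(adj, spins)
        if val > best_val:
            best_spins, best_val = spins.copy(), val
    return best_spins, best_val

def random_restart_search(adj, episodes=10, steps=50, seed=0):
    """Run the exploratory search from several random starting configurations."""
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    best_val, best_spins = -np.inf, None
    for _ in range(episodes):
        spins = rng.choice([-1, 1], size=n)
        s, val = explore(adj, spins, steps)
        if val > best_val:
            best_val, best_spins = val, s
    return best_val, best_spins

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    n = 20
    upper = np.triu(rng.integers(0, 2, size=(n, n)), k=1)  # random unweighted graph
    adj = (upper + upper.T).astype(float)
    val, spins = random_restart_search(adj)
    print("best cut value found:", val)
```

The relevant design point is that every action (a vertex flip) is reversible and the best solution seen so far is tracked separately from the current one, so the search can be started from any configuration; this is what makes it natural to wrap such an agent in random restarts or other outer search procedures.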
