The solution convergence of Markov Decision Processes (MDPs) can be
accelerated by prioritized sweeping of states ranked by their potential impact
on other states. In this paper, we present new heuristics to speed up the
solution convergence of MDPs. First, we quantify the reachability of every
state using the Mean First Passage Time (MFPT) and show that this
reachability characterization accurately assesses the importance of states,
enabling effective state prioritization. Then, we introduce the notion of
backup differentials as an extension to the prioritized sweeping mechanism, in
order to evaluate the impact of individual states at a finer granularity. Finally, we
extend the state prioritization to the temporal process, where only partial
sweeping can be performed during certain intermediate value iteration stages.
To validate our design, we performed numerical evaluations comparing the
proposed heuristics with the corresponding classic baseline mechanisms. The
results show that our reachability-based framework and its differential
variants outperform state-of-the-art solutions in terms of both practical
runtime and number of iterations.
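To make the reachability idea concrete, the following is a minimal sketch (not the paper's exact algorithm) of MFPT-based state ordering combined with Gauss-Seidel value iteration. The MFPT to a designated goal state is obtained from the standard first-passage linear system, and states are then swept in MFPT order. All names here (`P_a`, `R`, `goal`, `gamma`) and the choice of averaging transitions over actions for the ranking are illustrative assumptions.

```python
import numpy as np

def mfpt_to_goal(P, goal):
    """Mean first passage time to `goal` for a Markov chain with
    transition matrix P, via the first-passage equations
    h[s] = 1 + sum_j P[s, j] * h[j] for s != goal, with h[goal] = 0."""
    n = P.shape[0]
    idx = [s for s in range(n) if s != goal]
    # Solve (I - P_restricted) h = 1 over the non-goal states.
    A = np.eye(n - 1) - P[np.ix_(idx, idx)]
    h = np.linalg.solve(A, np.ones(n - 1))
    full = np.zeros(n)
    full[idx] = h
    return full

def prioritized_value_iteration(P_a, R, gamma, goal, iters=100):
    """Gauss-Seidel value iteration that sweeps states in MFPT order.
    P_a has shape (A, n, n); R has shape (A, n)."""
    n = P_a.shape[1]
    # Illustrative assumption: rank states with the action-averaged chain,
    # sweeping the most reachable (smallest MFPT) states first.
    order = np.argsort(mfpt_to_goal(P_a.mean(axis=0), goal))
    V = np.zeros(n)
    for _ in range(iters):
        for s in order:  # in-place (Gauss-Seidel) Bellman backups
            V[s] = max(R[a, s] + gamma * P_a[a, s] @ V
                       for a in range(P_a.shape[0]))
    return V
```

Because the backups are performed in place, sweeping low-MFPT states first lets updated values propagate backward through the chain within a single sweep, which is the intuition behind reachability-based prioritization.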