Explainable planning is widely accepted as a prerequisite for autonomous
agents to successfully work with humans. While there has been substantial research
on generating explanations of solutions to planning problems, explaining the
absence of solutions remains an open and under-studied problem, even though
such situations can be the hardest to understand or debug. In this paper, we
show that hierarchical abstractions can be used to efficiently generate reasons
for unsolvability of planning problems. In contrast to related work on
computing certificates of unsolvability, we show that these methods can
generate compact, human-understandable reasons for unsolvability. Empirical
analysis and user studies demonstrate the validity of our methods as well as
their computational efficiency on a number of benchmark planning domains.
