AI/ML

Why Couldn't You do that? Explaining Unsolvability of Classical Planning Problems in the Presence of Plan Advice. (arXiv:1903.08218v1 [cs.AI])

Explainable planning is widely accepted as a prerequisite for autonomous agents to successfully work with humans. While there has been considerable research on generating explanations of solutions to planning problems, explaining the absence of solutions remains an open and under-studied problem, even though such situations can be the hardest to understand or debug. In this paper, we show that hierarchical abstractions can be used to efficiently generate reasons for the unsolvability of planning problems. In contrast to related work on computing certificates of unsolvability, these methods produce compact, human-understandable reasons for unsolvability. Empirical analysis and user studies demonstrate the validity of our methods as well as their computational efficiency on a number of benchmark planning domains.
