AI/ML

Bridging the Gap: Providing Post-Hoc Symbolic Explanations for Sequential Decision-Making Problems with Black Box Simulators. (arXiv:2002.01080v1 [cs.AI])




As more and more complex AI systems are introduced into our day-to-day lives, it becomes important that everyday users can work and interact with such systems with relative ease. Orchestrating such interactions requires the system to be capable of providing explanations and rationale for its decisions and to be able to field queries about alternative decisions. A significant hurdle to such explanatory dialogue is the mismatch between the complex representations the system uses to reason about the task and the terms in which the user views the task. This paper introduces methods for providing contrastive explanations in terms of user-specified concepts for deterministic sequential decision-making settings where the system dynamics may be best represented by black box simulators. We do this by assuming that the system dynamics can at least be partly captured in terms of symbolic planning models, and we provide explanations in terms of these models. We implement this method using a simulator for a popular Atari game (Montezuma’s Revenge) and perform user studies to verify whether people find explanations generated in this form useful.
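To make the setup concrete, here is a minimal sketch of the general idea described in the abstract: given only black-box simulator access and user-specified concept detectors, roll out a user's alternative plan (a "foil") and report which concepts fail to hold at the step where the foil breaks down. This is an illustration of the flavor of concept-level contrastive explanation, not the paper's actual algorithm (which learns concept classifiers and identifies missing preconditions of a symbolic planning model); all names below (`ToySimulator`, `explain_foil_failure`, the toy key/door domain) are hypothetical.

```python
"""Illustrative sketch: concept-level contrastive explanation of a failing foil,
using only black-box simulator access. Not the paper's implementation."""

from typing import Callable, Dict, List, Optional, Tuple

State = Dict[str, int]                      # toy state representation
ConceptDetector = Callable[[State], bool]   # user-specified concept test


class ToySimulator:
    """Stand-in for a black-box, deterministic simulator.

    Only `reset` and `step` are exposed; the explainer never inspects
    the transition logic directly.
    """

    def reset(self) -> State:
        return {"x": 0, "has_key": 0, "door_open": 0}

    def step(self, state: State, action: str) -> Tuple[State, bool]:
        """Return (next_state, action_succeeded)."""
        s = dict(state)
        if action == "move_right":
            s["x"] += 1
            return s, True
        if action == "pick_up_key" and s["x"] == 2:
            s["has_key"] = 1
            return s, True
        if action == "open_door" and s["has_key"] == 1 and s["x"] == 4:
            s["door_open"] = 1
            return s, True
        return state, False                 # action failed / had no effect


def explain_foil_failure(
    sim: ToySimulator,
    foil: List[str],
    concepts: Dict[str, ConceptDetector],
) -> Optional[str]:
    """Roll out the foil; at the first failing action, report which
    user-specified concepts were absent in that state."""
    state = sim.reset()
    for t, action in enumerate(foil):
        next_state, ok = sim.step(state, action)
        if not ok:
            missing = [name for name, holds in concepts.items()
                       if not holds(state)]
            return (f"Step {t}: action '{action}' failed in a state where "
                    f"these concepts did not hold: {missing}")
        state = next_state
    return None                             # foil succeeded; no contrastive gap found


if __name__ == "__main__":
    # User-specified concepts, given as boolean tests over raw states.
    concepts = {
        "holding_key": lambda s: s["has_key"] == 1,
        "at_door": lambda s: s["x"] == 4,
    }
    # The user's foil: try to open the door without fetching the key first.
    foil = ["move_right", "move_right", "move_right", "move_right", "open_door"]
    print(explain_foil_failure(ToySimulator(), foil, concepts))
```

Note that the explainer only needs `reset` and `step`, consistent with the black-box assumption in the abstract; the paper's method goes further, recovering the failing precondition as part of a partial symbolic planning model rather than simply listing concepts absent at the failure state.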

Source link




