On the Utility of Learning about Humans for Human-AI Coordination (arXiv:1910.05789v2 [cs.LG], updated)

While we would like agents that can coordinate with humans, current
algorithms such as self-play and population-based training create agents that
can coordinate with themselves. Agents that assume their partner is optimal, or
similar to themselves, can converge to coordination protocols that neither
understand nor are understood by humans. To demonstrate this, we introduce a
simple environment that requires challenging coordination, based on the popular
game Overcooked, and learn a simple model that mimics human play. We evaluate
the performance of agents trained via self-play and population-based training.
These agents perform very well when paired with themselves, but when paired
with our human model, they are significantly worse than agents designed to play
with the human model. An experiment with a planning algorithm yields the same
conclusion, though only when the human-aware planner is given the exact human
model that it is playing with. A user study with real humans shows this pattern
as well, though less strongly. Qualitatively, we find that the gains come from
having the agent adapt to the human's gameplay. Given this result, we suggest
several approaches for designing agents that learn about humans in order to
better coordinate with them. Code is available at
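The evaluation protocol described above, pairing a self-play agent with itself versus with a learned human model, can be illustrated with a minimal sketch. This is not the paper's Overcooked environment or its actual agents; it uses a hypothetical two-action matching game and hand-coded stand-in policies purely to show the cross-play comparison, and all numbers are illustrative.

```python
import random

EPISODE_LEN = 10  # steps per episode in the toy matching game


def self_play_policy(_obs):
    # A converged self-play convention: always pick action 0.
    return 0


def human_model_policy(_obs):
    # Stand-in for a learned human model: prefers action 1 (70% of the time).
    return 1 if random.random() < 0.7 else 0


def human_aware_policy(_obs):
    # Best response to the human model above: always pick action 1.
    return 1


def evaluate_pair(policy_a, policy_b, episodes=200):
    """Mean episode return when policy_a is paired with policy_b in a toy
    coordination game: +1 per step whenever the two actions agree."""
    total = 0
    for _ in range(episodes):
        for _ in range(EPISODE_LEN):
            total += int(policy_a(None) == policy_b(None))
    return total / episodes


random.seed(0)
sp_self = evaluate_pair(self_play_policy, self_play_policy)   # perfect: 10.0
sp_human = evaluate_pair(self_play_policy, human_model_policy)
ha_human = evaluate_pair(human_aware_policy, human_model_policy)
print(f"self-play w/ itself:      {sp_self:.2f}")
print(f"self-play w/ human model: {sp_human:.2f}")
print(f"human-aware w/ human:     {ha_human:.2f}")
```

The sketch reproduces the abstract's qualitative pattern: the self-play pair scores perfectly against itself, but an agent adapted to the human model outperforms it when both are paired with that model.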


