
Research Centre for Machine Learning meeting on Explainable AI

When: Fri, 29 November 2019, 4:00pm
Where: AG01, College Building

SHAP is an increasingly popular method for providing local explanations of AI system predictions. SHAP is based on the game-theory concept of Shapley values: the Shapley value is the unique solution for fairly attributing the payoff of a cooperative game among its players, subject to a set of local accuracy and consistency constraints. (An excellent introduction to Shapley values is given at https://www.youtube.com/watch?v=qcLZMYPdpH4&t=437s.)
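
To make the idea concrete, here is a small worked example (our own illustration, not from the meeting materials): exact Shapley values for a hypothetical three-player game, computed in Python straight from the classical weighted-marginal-contribution formula.

from itertools import combinations
from math import factorial

players = ["A", "B", "C"]

# Hypothetical payoffs v(S) for every coalition S; the numbers are invented
# purely for illustration.
payoffs = {
    frozenset(): 0, frozenset("A"): 10, frozenset("B"): 20, frozenset("C"): 30,
    frozenset("AB"): 40, frozenset("AC"): 50, frozenset("BC"): 60, frozenset("ABC"): 90,
}

def shapley(player):
    n = len(players)
    others = [p for p in players if p != player]
    value = 0.0
    # Weighted average of the player's marginal contribution v(S u {player}) - v(S)
    # over all coalitions S that exclude the player.
    for size in range(n):
        for S in combinations(others, size):
            weight = factorial(size) * factorial(n - size - 1) / factorial(n)
            value += weight * (payoffs[frozenset(S) | {player}] - payoffs[frozenset(S)])
    return value

for p in players:
    print(p, shapley(p))  # prints A 20.0, B 30.0, C 40.0

Note that the three attributions sum to v({A, B, C}) = 90: this is exactly the local accuracy (efficiency) property that SHAP inherits.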

We will be discussing Lundberg and Lee’s paper ‘A Unified Approach to Interpreting Model Predictions’ (2017), in which they propose SHAP and claim that it unifies six existing explainable AI methods. The aim of the meeting will be both to gain a better understanding of SHAP and to evaluate its usefulness. Dr Adam White will begin the meeting by providing a critical overview of SHAP.
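
For anyone who wants to experiment before the meeting, here is a minimal sketch of what using the authors’ open-source shap package looks like in practice, assuming scikit-learn is also installed; the dataset and model choices below are ours, not the paper’s.

import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles;
# shap.KernelExplainer is the model-agnostic fallback.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])  # local explanations for five predictions

# For each instance, the feature attributions sum to the difference between the
# model's prediction and the background expectation explainer.expected_value.
print(shap_values.shape)  # (5, 10): five instances by ten features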

As always, all welcome!
