
A New Approach to Understanding How Machines Think →


An interview with Been Kim on developing deep learning 'translators' for humans.

This is an interesting interview with Been Kim from Google Brain on developing systems for seeing how trained machines make decisions. One of the major challenges with neural network-based deep learning systems is that the decision chain used by the AI is a black box to humans. It’s difficult (or impossible) for even the creators to figure out what factors influenced a decision, and how the AI “weighted” the inputs. What Kim is developing is a “translation” framework that gives operators better insight into the decision chain of AI:

Kim and her colleagues at Google Brain recently developed a system called “Testing with Concept Activation Vectors” (TCAV), which she describes as a “translator for humans” that allows a user to ask a black box AI how much a specific, high-level concept has played into its reasoning. For example, if a machine-learning system has been trained to identify zebras in images, a person could use TCAV to determine how much weight the system gives to the concept of “stripes” when making a decision.
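
The core of the TCAV idea is to learn a linear "concept direction" in a network's activation space (by separating activations of concept examples, like striped images, from random examples) and then measure how sensitive a class prediction is to nudges along that direction. Here is a minimal sketch of that idea in Python; this is not Google's TCAV implementation, and the synthetic activations, layer width, and stand-in `class_logit` function are all illustrative assumptions (a real run would use a trained network's hidden-layer activations and backpropagated gradients):

```python
# Minimal TCAV-style sketch with synthetic data (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
dim = 64  # hypothetical width of the chosen hidden layer

# Hypothetical layer activations: "stripes" concept examples vs. random examples.
concept_acts = rng.normal(0.5, 1.0, size=(100, dim))
random_acts = rng.normal(0.0, 1.0, size=(100, dim))

# Step 1: fit a linear classifier separating concept from random activations.
X = np.vstack([concept_acts, random_acts])
y = np.array([1] * 100 + [0] * 100)
clf = LogisticRegression(max_iter=1000).fit(X, y)

# Step 2: the Concept Activation Vector (CAV) is the unit normal to the
# classifier's decision boundary -- the "stripes" direction in activation space.
cav = clf.coef_[0] / np.linalg.norm(clf.coef_[0])

# Stand-in for the network's "zebra" logit as a function of layer activations;
# a real implementation would differentiate through the trained model instead.
w_logit = rng.normal(size=dim)
def class_logit(a):
    return a @ w_logit

def directional_derivative(a, v, eps=1e-3):
    # Finite-difference sensitivity of the logit along the concept direction.
    return (class_logit(a + eps * v) - class_logit(a)) / eps

# Step 3: TCAV score = fraction of zebra inputs whose logit increases when
# activations move in the "stripes" direction.
zebra_acts = rng.normal(0.3, 1.0, size=(50, dim))
score = np.mean([directional_derivative(a, cav) > 0 for a in zebra_acts])
print(f"TCAV score for 'stripes' on class 'zebra': {score:.2f}")
```

A score near 1.0 would mean nearly every zebra prediction becomes more confident as the activations move toward "stripes," which is the kind of human-readable answer the "translator" framing is getting at.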