colemanm.org

A New Approach to Understanding How Machines Think →

January 22, 2019

This is an interesting interview with Been Kim from Google Brain on developing systems for seeing how trained machines make decisions. One of the major challenges with neural network–based deep learning systems is that the decision chain the AI uses is a black box to humans. It's difficult (or impossible) for even the creators to figure out what factors influenced a decision, or how the AI "weighted" its inputs. What Kim is developing is a "translation" framework that gives operators better insight into the AI's decision chain:

Kim and her colleagues at Google Brain recently developed a system called “Testing with Concept Activation Vectors” (TCAV), which she describes as a “translator for humans” that allows a user to ask a black box AI how much a specific, high-level concept has played into its reasoning. For example, if a machine-learning system has been trained to identify zebras in images, a person could use TCAV to determine how much weight the system gives to the concept of “stripes” when making a decision.
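The core idea is simple to sketch: collect the network's internal activations for examples of a concept (say, "stripes") and for random examples, fit a linear classifier separating them, and take the vector normal to the separating hyperplane as the Concept Activation Vector. The TCAV score is then the fraction of class examples whose prediction gradient points in the CAV's direction. Here's a minimal toy sketch of that recipe using synthetic activations and scikit-learn — the variable names, the fake data, and the stand-in gradients are all illustrative assumptions, not Google's actual implementation:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical activations: in a real setting these would be pulled from
# an intermediate layer of the trained image classifier.
rng = np.random.default_rng(0)
d = 16
concept_dir = rng.normal(size=d)
concept_dir /= np.linalg.norm(concept_dir)

# "Striped" examples cluster along concept_dir; random examples don't.
concept_acts = rng.normal(size=(100, d)) + 2.0 * concept_dir
random_acts = rng.normal(size=(100, d))

# 1. Fit a linear classifier separating concept vs. random activations.
X = np.vstack([concept_acts, random_acts])
y = np.array([1] * 100 + [0] * 100)
clf = LogisticRegression().fit(X, y)

# 2. The CAV is the unit normal of the separating hyperplane.
cav = clf.coef_[0] / np.linalg.norm(clf.coef_[0])

# 3. TCAV score: fraction of class inputs whose logit gradient has a
#    positive directional derivative along the CAV. These gradients are
#    made up here; in practice they come from backprop through the model.
logit_grads = rng.normal(size=(50, d)) + concept_dir  # 50 "zebra" images
tcav_score = float(np.mean(logit_grads @ cav > 0))
print(f"TCAV score for 'stripes': {tcav_score:.2f}")
```

A score near 1.0 would suggest the concept consistently pushes predictions toward the class; near 0.5, that the concept is irrelevant to the decision.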

© Coleman McCormick, 2010-2022.