Update my contributions to the interpretability section a bit #1020

Open · wants to merge 3 commits into base: master

Conversation

delton137
Contributor

Hi,

I looked at the changes I made and tweaked them.

I added some new references (marked with @doi and @arXiv tags). I hope updating the tags to the new style isn't too much of a headache. If there's anything I can do let me know.

My overall feeling is that this section is indeed a bit outdated, and there is much more that could be said here to give the reader a useful high-level overview, such as discussing the terminology, motivations, and desiderata for interpretation methods. I'm currently a little pressed for time with two paper deadlines coming up, so I didn't feel comfortable attempting that now. If there is a push to publish an updated review, though, I'd be happy to work more on this section.

A few ideas for how to improve this section:

  • More discussion of the many pitfalls of saliency maps. There are numerous papers on this subject (for instance "Sanity Checks for Saliency Maps").

  • Discuss why layer-wise relevance propagation (LRP) heatmapping is theoretically better than saliency methods (this would require some digging to explain correctly). LRP is becoming more popular, while saliency maps are becoming less so.

  • More discussion of LIME and Shapley values, as these are very popular, along with the possible pitfalls of these methods (see the rough Shapley sketch after this list).

  • Discuss the need for better benchmarking of explainability techniques, such as "scientific" testing (asking people to make predictions based on the outputs of interpretation methods). A great paper on this is "Evaluating Explainable AI: Which Algorithmic Explanations Help Users Predict Model Behavior?".
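On the Shapley values point, the basic idea could be illustrated with a Monte Carlo estimator along these lines (a rough sketch only; `model_fn`, `x`, `background`, and `feature` are placeholder names, not anything from the manuscript):

```python
import numpy as np

def shapley_estimate(model_fn, x, background, feature, n_samples=1000, rng=None):
    """Monte Carlo estimate of one feature's Shapley value: sample random
    feature orderings and average the feature's marginal contribution."""
    rng = np.random.default_rng() if rng is None else rng
    n_features = len(x)
    contribution = 0.0
    for _ in range(n_samples):
        perm = rng.permutation(n_features)        # random feature ordering
        pos = np.where(perm == feature)[0][0]
        with_f = background.copy()                # "absent" features take background values
        without_f = background.copy()
        preceding = perm[:pos]                    # features preceding `feature` in the ordering
        with_f[preceding] = x[preceding]
        without_f[preceding] = x[preceding]
        with_f[feature] = x[feature]              # only one coalition actually includes `feature`
        contribution += model_fn(with_f) - model_fn(without_f)
    return contribution / n_samples
```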

@AppVeyorBot

AppVeyor build 1.0.102 for commit 9e6b23e by @delton137 is now complete. The rendered manuscript from this build is temporarily available for download at:

@AvantiShri
Contributor

Definitely agree that we should include a reference to "Sanity Checks for Saliency Maps". Some of the drawbacks of DeconvNet/Guided Backprop are discussed in Mahendran2016_salient, which is cited in the current version, but it's mentioned only in passing (I think some of the original text was cut down to meet space constraints). People should absolutely be warned up front that Guided Backprop and DeconvNet are insensitive to the weights in higher network layers.
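For reference, the model parameter randomization test from that paper can be sketched roughly as follows (assuming PyTorch; `model`, `attribution_fn`, and `x` are placeholders for the reader's network, attribution method, and input):

```python
import copy
import torch
import torch.nn.functional as F

def cascading_randomization_check(model, attribution_fn, x):
    """Cascading randomization: re-initialize layers from the top down and
    track how similar the attribution map stays to the original one.
    A faithful method should decorrelate as learned weights are destroyed."""
    reference = attribution_fn(model, x)
    randomized = copy.deepcopy(model)
    similarities = []
    for layer in reversed(list(randomized.children())):   # top layer first
        for p in layer.parameters():
            torch.nn.init.normal_(p)                       # destroy learned weights
        current = attribution_fn(randomized, x)
        sim = F.cosine_similarity(current.flatten(), reference.flatten(), dim=0)
        similarities.append(sim.item())
    return similarities
```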

One point about Layerwise Relevance Propagation: some concerns have been raised about the LRP alpha-beta rule not passing sanity checks (see this ICML 2020 paper: https://arxiv.org/abs/1912.09818), so we should be careful about the recommendations we make. Also, the terminology might be confusing; in the "Sanity Checks for Saliency Maps" paper, the term "saliency" refers to a family of methods that includes LRP. Which saliency methods are you referring to when you say "layerwise relevance propagation heatmapping is theoretically better...than saliency methods"?
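For concreteness, the alpha-beta rule under discussion looks roughly like this for a single dense layer (a minimal NumPy sketch; `a`, `W`, and `R_out` are placeholder names for the layer input, weight matrix, and relevance arriving from the layer above):

```python
import numpy as np

def lrp_alpha_beta(a, W, R_out, alpha=2.0, beta=1.0, eps=1e-9):
    """Redistribute relevance R_out through one dense layer, weighting
    positive and negative contributions separately (alpha - beta = 1)."""
    assert np.isclose(alpha - beta, 1.0)
    z = a[:, None] * W                            # contribution of input j to output k
    z_pos = np.clip(z, 0, None)
    z_neg = np.clip(z, None, 0)
    frac_pos = z_pos / (z_pos.sum(axis=0, keepdims=True) + eps)
    frac_neg = z_neg / (z_neg.sum(axis=0, keepdims=True) - eps)
    return ((alpha * frac_pos - beta * frac_neg) * R_out[None, :]).sum(axis=1)
```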

A few points of feedback regarding the changes proposed in the commit:

  • I think it would be better to avoid listing DeconvNet under "several tools have been developed for visualizing the learned feature maps", given the issues with sanity checks.
  • The mentions of LRP can be combined with the discussion of backpropagation-based methods under "Assigning example-specific importance scores".
  • Regarding the point "The distributed nature of representations appears to be related to a curious finding by Szegedy et al. where they found that if they took a linear combination of units from a given layer instead of a single unit (or more precisely perform a random rotation / change in basis), and maximized that instead, they ended up with similar types of visualizations [@arXiv:1312.6199]" - I am concerned that this may be specific to computer vision, where the architecture itself has been found to place a strong prior on what the saliency maps look like (this point was also made in "Sanity Checks for Saliency Maps"). This observation of the architecture placing a strong prior unfortunately doesn't generalize to domains like genomics. (A rough sketch of the single-unit vs. random-direction comparison is included below.)
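For anyone skimming, the single-unit vs. random-direction comparison from Szegedy et al. can be sketched roughly as follows (assuming PyTorch; `feature_extractor`, `input_shape`, and `n_units` are placeholders):

```python
import torch

def maximize_direction(feature_extractor, input_shape, direction, steps=200, lr=0.1):
    """Gradient-ascend an input so the layer's activations align with a
    chosen direction (a single unit or a random linear combination)."""
    x = (0.1 * torch.randn(input_shape)).requires_grad_(True)
    optimizer = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        acts = feature_extractor(x).flatten()
        loss = -(acts * direction).sum()          # maximize projection onto `direction`
        loss.backward()
        optimizer.step()
    return x.detach()

# Single unit vs. random change of basis (n_units = flattened layer size):
# single_unit = torch.eye(n_units)[0]
# random_dir  = torch.nn.functional.normalize(torch.randn(n_units), dim=0)
```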

Thanks for taking the initiative!

Avanti

@agitter
Collaborator

agitter commented Aug 15, 2020

@delton137 thanks for continuing this pull request. It looks like you were able to work with the delton137-interpret branch successfully. I checked the AppVeyor HTML preview, and all of your tags and citations worked. I'll do a closer copy editing review once the content is finalized.

@AvantiShri thanks for the review. I'll let you take the lead on reviewing the scientific content here, and I can help with any Manubot formatting issues.

I'm also tagging @akundaje who previously discussed changes to this section in #986 (comment).
