From edc156a3d71256b4e98164f6d71714129c99b3f9 Mon Sep 17 00:00:00 2001
From: Shin Jia <134384192+shinjiaaa@users.noreply.github.com>
Date: Wed, 24 Sep 2025 16:04:28 +0900
Subject: [PATCH] shinjia - readme update

Added explanation of image contributions for model predictions.
---
 README.md | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/README.md b/README.md
index b02f20582..662f46dcf 100644
--- a/README.md
+++ b/README.md
@@ -51,6 +51,14 @@ Negative (blue) words indicate atheism, while positive (orange) words indicate c
 
+This image explains visually why the model predicted 'cat'.
+
+**Positive contribution pixels (green):** regions that increase the probability of the cat class. In the image, the cat's body and face are highlighted.
+
+**Negative contribution pixels (red):** regions that decrease the probability of the cat class. In the image, part of a dog's face and body are included.
+
+LIME divides the input image into small regions (superpixels) and measures how the model's output changes when each region is removed or altered.
+
 ## Tutorials and API
 
 For example usage for text classifiers, take a look at the following two tutorials (generated from ipython notebooks):
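The perturbation idea described in the added README lines can be sketched roughly as follows. This is a toy illustration of the mechanism only, not the lime library's actual API or implementation; the fixed grid "superpixels", the stand-in model, and the zero-fill baseline are all assumptions made for the example:

```python
import numpy as np

# Toy "model": scores an image by the mean intensity of its top-left
# quadrant (a hypothetical stand-in for a classifier's class probability).
def model_score(img):
    return img[:4, :4].mean()

def superpixel_importance(img, score_fn, block=4):
    """LIME-style probe: zero out each block ("superpixel") and record
    how much the model's score drops. Real LIME uses irregular superpixels
    from a segmentation algorithm and fits a local linear model instead of
    this one-at-a-time measurement."""
    base = score_fn(img)
    importances = {}
    for r in range(0, img.shape[0], block):
        for c in range(0, img.shape[1], block):
            perturbed = img.copy()
            perturbed[r:r + block, c:c + block] = 0.0  # "remove" the region
            importances[(r, c)] = base - score_fn(perturbed)
    return importances

rng = np.random.default_rng(0)
img = rng.random((8, 8))          # toy 8x8 image, a 2x2 grid of 4x4 blocks
imp = superpixel_importance(img, model_score)
# Only the top-left block feeds this toy model, so it alone gets a
# positive importance; the other three regions have no effect.
```

Blocks with positive importance correspond to the green (supporting) regions in the README's example image, and negative values would correspond to the red (opposing) regions.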