
Evaluating the Trade-Offs of Explainable AI for Galaxy Classification

Ayush Gouda, Gajanan M Naik

Abstract


Deep learning models now excel at galaxy image classification, yet their trustworthiness and utility are limited by their inherent "black box" nature, which obscures the reasoning behind their predictions. A quantitative comparison of popular Explainable AI (XAI) methods is needed to determine their practical trade-offs in an astronomical context. We evaluate three prominent XAI techniques, Grad-CAM, LIME, and DeepSHAP, applied to a fine-tuned ResNet-18 model for four-class galaxy classification, and assess each method quantitatively in terms of computational efficiency and explanation fidelity. The results reveal a clear trade-off: Grad-CAM, despite being the least computationally efficient in our implementation, demonstrated significantly higher fidelity, indicating that its explanations were most closely aligned with the model's core reasoning. We conclude that for scientific applications where trustworthiness is paramount, explanation fidelity outweighs computational cost, making Grad-CAM the recommended tool. These findings underscore the importance of careful method selection when generating reliable scientific insights and suggest that future work should validate these trade-offs on larger-scale models.
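
For context, the sketch below illustrates how Grad-CAM can be applied to a four-class ResNet-18 classifier in PyTorch, with per-explanation wall-clock time as a simple proxy for computational efficiency. It is a minimal, illustrative example only: the target layer ("layer4"), the input size, and the use of a randomly initialised model are assumptions for demonstration and are not details taken from the paper.

    # Minimal Grad-CAM sketch for a four-class ResNet-18 classifier.
    # Illustrative only: class count, target layer, and input size are assumptions.
    import time
    import torch
    import torch.nn.functional as F
    from torchvision import models

    def build_model(num_classes: int = 4) -> torch.nn.Module:
        model = models.resnet18(weights=None)                  # fine-tuned weights would be loaded here
        model.fc = torch.nn.Linear(model.fc.in_features, num_classes)
        model.eval()
        return model

    def grad_cam(model: torch.nn.Module, image: torch.Tensor, target_class: int) -> torch.Tensor:
        """Return a Grad-CAM heatmap (H x W, values in [0, 1]) for one preprocessed image."""
        activations, gradients = {}, {}

        def fwd_hook(_, __, output):
            activations["value"] = output                      # feature maps of the target layer

        def bwd_hook(_, __, grad_output):
            gradients["value"] = grad_output[0]                # gradients w.r.t. those feature maps

        # Hook the last convolutional block; "layer4" is the usual choice for ResNet-18.
        h1 = model.layer4.register_forward_hook(fwd_hook)
        h2 = model.layer4.register_full_backward_hook(bwd_hook)
        try:
            logits = model(image)
            model.zero_grad()
            logits[0, target_class].backward()                 # backprop the target-class score
        finally:
            h1.remove()
            h2.remove()

        acts = activations["value"]                            # (1, C, h, w)
        grads = gradients["value"]                             # (1, C, h, w)
        weights = grads.mean(dim=(2, 3), keepdim=True)         # global-average-pool the gradients
        cam = F.relu((weights * acts).sum(dim=1, keepdim=True))
        cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
        cam = cam.squeeze()
        return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)

    if __name__ == "__main__":
        model = build_model()
        dummy = torch.randn(1, 3, 224, 224)                    # stand-in for a preprocessed galaxy image
        start = time.perf_counter()
        heatmap = grad_cam(model, dummy, target_class=0)
        elapsed = time.perf_counter() - start
        print(f"heatmap shape: {tuple(heatmap.shape)}, time per explanation: {elapsed:.3f}s")

A fidelity check would additionally compare how the predicted class score changes when the most highly weighted regions of the heatmap are perturbed, but the specific metric used in the paper is not reproduced here.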



References


M. Mohammadi, J. Mutatiina, T. Saifollahi, and K. Bunte, "Detection of extragalactic Ultra-compact dwarfs and Globular Clusters using Explainable AI techniques," Astronomy and Computing, vol. 39, 2022, Art. no. 100555.

K. He, X. Zhang, S. Ren, and J. Sun, "Deep Residual Learning for Image Recognition," in Proc. IEEE Conf. Comput. Vis. Pattern Recognition (CVPR), 2016, pp. 770–778.

S. M. Lundberg and S.-I. Lee, "A Unified Approach to Interpreting Model Predictions," in Advances in Neural Information Processing Systems (NIPS), 2017.

M. T. Ribeiro, S. Singh, and C. Guestrin, ""Why Should I Trust You?": Explaining the Predictions of Any Classifier," in Proc. 22nd ACM SIGKDD Int. Conf. Knowl. Discovery Data Mining, 2016, pp. 1135–1144.

R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra, "Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization," in Proc. IEEE Int. Conf. Computer Vision (ICCV), 2017, pp. 618–626.

J. Brasse et al., "Explainable artificial intelligence in information systems: A review of the status quo and future research directions," Electronic Markets, vol. 33, no. 26, 2023.

N. Aftab. (2025, May). Deep Learning Model Interpretability with SHAP. [Online]. Available: https://medium.com/@naveed88375/deep-learning-model-interpretability-with-shap-63598b7aeff8

C. J. Lintott et al., "Galaxy Zoo: morphologies derived from visual inspection of galaxies from the Sloan Digital Sky Survey," Monthly Notices of the Royal Astronomical Society, vol. 389, no. 3, pp. 1179–1187, 2008.

S. Dieleman, K. W. Willett, and J. Dambre, "Rotation-invariant convolutional neural networks for galaxy morphology prediction," Monthly Notices of the Royal Astronomical Society, vol. 450, no. 2, pp. 1441–1459, 2015.

