# Neural Networks: Adaptive Inference and Interpretability with Explanation-Guided Training
## Background: Early-Exit Networks
Early-exit neural networks can make predictions at intermediate layers, reducing average inference cost. However, these early exits often lack interpretability and may attend to different features than deeper layers, which limits trust and explainability.
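To make the idea concrete, here is a minimal sketch of early-exit inference: each intermediate classifier head produces a prediction, and the network stops as soon as its softmax confidence clears a threshold. The toy ReLU blocks, classifier heads, and `threshold` parameter are illustrative assumptions, not the architecture used in the study.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def early_exit_infer(x, blocks, heads, threshold=0.9):
    """Run blocks sequentially; return the first prediction whose
    softmax confidence clears `threshold`, else the final one.
    Returns (predicted class, index of the exit that fired)."""
    h = x
    for i, (block, head) in enumerate(zip(blocks, heads)):
        h = np.maximum(block @ h, 0.0)   # toy ReLU feature block
        probs = softmax(head @ h)        # intermediate classifier head
        if probs.max() >= threshold or i == len(blocks) - 1:
            return int(probs.argmax()), i
```

Lowering the threshold trades accuracy for speed: more inputs exit early, so fewer blocks run on average.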
## Explanation-Guided Training (EGT): A New Approach
A new study introduces Explanation-Guided Training (EGT), a multi-objective framework that aims to improve interpretability and consistency in early-exit networks through attention-based regularization. EGT adds an attention consistency loss that aligns each early exit's attention maps with those of the final exit.
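One plausible form of such a loss is a mean squared error between each early exit's normalized attention map and the final exit's, added to the per-exit task losses with a weighting factor. This is a hedged sketch of the general idea; the function names, the normalization, and the weight `lam` are assumptions, and the paper's exact formulation may differ.

```python
import numpy as np

def normalize(a):
    # attention maps assumed non-negative; scale to sum to 1
    return a / a.sum()

def attention_consistency_loss(early_maps, final_map):
    """MSE between each early-exit attention map and the final exit's,
    averaged over exits (illustrative formulation)."""
    final = normalize(final_map)
    losses = [np.mean((normalize(m) - final) ** 2) for m in early_maps]
    return float(np.mean(losses))

def total_loss(task_losses, early_maps, final_map, lam=0.1):
    """Multi-objective training loss: mean per-exit task loss plus a
    weighted attention-consistency term."""
    return float(np.mean(task_losses)) + lam * attention_consistency_loss(
        early_maps, final_map)
```

When all exits attend to the same regions, the consistency term vanishes, so the regularizer only penalizes exits whose explanations diverge from the final one.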
## Experimental Results
Results on a real-world image classification dataset show that EGT reaches up to 98.97% overall accuracy, matching baseline performance, with a 1.97x inference speedup from early exits, while improving attention consistency by up to 18.5% over baseline models. The method yields more interpretable and consistent explanations across all exit points, making early-exit networks more suitable for explainable AI applications in resource-constrained environments.
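A speedup figure like the one reported follows from how many samples take each exit. The standard accounting is: expected cost = sum over exits of (fraction of samples exiting there) x (cost up to that exit), and speedup = full-network cost / expected cost. The helper below illustrates the arithmetic with made-up exit fractions and costs, not the study's measured distribution.

```python
def expected_speedup(exit_probs, exit_costs):
    """Speedup from early exiting: full-network cost divided by the
    expected cost under the given exit distribution.
    exit_probs -- fraction of samples stopping at each exit (sums to 1)
    exit_costs -- cumulative compute cost up to each exit
    """
    full = exit_costs[-1]
    expected = sum(p * c for p, c in zip(exit_probs, exit_costs))
    return full / expected
```

For example, if half the samples exit at one third of the network's cost and the rest run to the end, the speedup is 1 / (0.5 * 1/3 + 0.5 * 1) = 1.5x.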