This repository contains the PyTorch implementation of the lightweight CNN-based ECG classification approach with integrated explainable AI (XAI) capabilities presented in the paper "Lightweight Data-driven ECG Classification Approach with Explainable CAM Output" by Rytis Augustauskas, Ana Santos Rodrigues, Daivaras Sokas, Otilia Bularca, and Vaidotas Marozas, published at the 47th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) 2025.
This repository covers data wrangling for the PTB-XL dataset and a binary classification problem, where the data is separated into two classes: NORMAL and ABNORMAL. While the current implementation targets binary classification with integrated explainability in a single pass, the approach is adaptable to multiclass classification tasks and supports different CNN architectures. The codebase includes the full workflow:
- Data loading and overview
- Dataset splitting and preprocessing (with optional augmentation)
- Model training and evaluation
- Explainability analysis and visualization
The code implements all essential steps for signal preprocessing and augmentation, following the structure of the provided pipeline:
Scaling (standardization) transforms the signal into the range [−1, 1] according to the following expression:

$$X_{st} = \frac{X_d - \mathrm{median}(X_d)}{\max\left(\left|X_d - \mathrm{median}(X_d)\right|\right)}$$

Where:
- $X_d$ – detrended input signal
- $\mathrm{median}(X_d)$ – median of the detrended signal
- $X_{st}$ – standardized signal
*Figure panels: Original Signal | Detrended Signal | Scaled Signal | Augmentation Pipeline (Sample)*
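The detrending and scaling steps above can be sketched in a few lines of NumPy. This is a minimal illustration, not the repository's exact code: the linear-detrend helper and the max-absolute denominator are assumptions consistent with the formula above.

```python
import numpy as np

def detrend_signal(x):
    """Remove the linear trend via a least-squares line fit (assumed detrend method)."""
    t = np.arange(len(x))
    slope, intercept = np.polyfit(t, x, 1)
    return x - (slope * t + intercept)

def scale_signal(x_d):
    """Center a detrended signal on its median and scale into [-1, 1]."""
    centered = x_d - np.median(x_d)
    peak = np.max(np.abs(centered))
    return centered / peak if peak > 0 else centered

# Synthetic "ECG": a sine wave with a slow linear drift
ecg = np.sin(np.linspace(0, 8 * np.pi, 1000)) + 0.002 * np.arange(1000)
x_st = scale_signal(detrend_signal(ecg))
```

After detrending removes the baseline drift, the median-centered max-absolute scaling guarantees the output stays within [−1, 1].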
- This work adapts a class activation map (CAM)-based explainability technique, designed for 1D time-series ECG signals. The method enhances model interpretability by identifying which parts of the input signal most influenced the prediction.
- A non-trainable CAM output layer is added after the global average pooling layer. It computes feature importance by taking the dot product of the latent feature maps and the output layer weights. The result is a single explainability heatmap aligned with the input signal, rescaled to match its original length and normalized to the [0, 1] range.
- The explainability output is generated post-training and does not affect model parameters. In binary classification, one CAM map is produced. For multiclass problems, separate maps can be created per class. Importantly, this addition increases computational complexity by only ~15% while maintaining the same number of parameters (on the presented architecture).
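The CAM computation described above can be sketched framework-agnostically in NumPy (the repository itself uses PyTorch). The function name, shapes, and interpolation choice are illustrative assumptions: latent feature maps from the last convolutional layer are weighted by the output-layer weights of the target class, upsampled to the input length, and min-max normalized to [0, 1].

```python
import numpy as np

def class_activation_map(features, class_weights, signal_len):
    """
    features:      (channels, reduced_len) latent maps from the last conv layer
    class_weights: (channels,) output-layer weights for the target class
    signal_len:    length of the original input signal
    Returns a (signal_len,) heatmap normalized to [0, 1].
    """
    cam = class_weights @ features                    # dot product -> (reduced_len,)
    # Linearly interpolate the heatmap back to the input length
    x_old = np.linspace(0.0, 1.0, cam.shape[0])
    x_new = np.linspace(0.0, 1.0, signal_len)
    cam = np.interp(x_new, x_old, cam)
    # Min-max normalize to [0, 1]
    cam -= cam.min()
    span = cam.max()
    return cam / span if span > 0 else cam

# Illustrative shapes: 64 channels, temporal length reduced 8x by pooling
rng = np.random.default_rng(1)
feats = rng.standard_normal((64, 125))
w = rng.standard_normal(64)
heatmap = class_activation_map(feats, w, 1000)
```

Because the layer is a fixed dot product of existing tensors, it adds no trainable parameters, which matches the note above about unchanged parameter count.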
- Clone the repository:

```shell
git clone https://github.com/rytisss/XAI_SignalCAM.git
cd XAI_SignalCAM
```

- Install third-party dependencies:

```shell
pip install -r requirements.txt
```

- Open Jupyter:

```shell
jupyter notebook
```

- Launch `ptbxl_data_classification.ipynb`!
Demo video: `overview_SignalXAI.mp4`
```bibtex
@inproceedings{augustauskas2025lightweight,
  title={Lightweight Data-driven ECG Classification Approach with Explainable CAM Output},
  author={Augustauskas, Rytis and Rodrigues, Ana Santos and Sokas, Daivaras and Bularca, Otilia and Marozas, Vaidotas},
  booktitle={2025 47th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)},
  year={2025},
  organization={IEEE},
  pages={1--7},
  doi={10.1109/EMBC58623.2025.11253260}
}
```
This research was supported by the CVDLINK project (EU Horizon grant agreement N°101137278).