Modified ResNet for CIFAR-10: Achieving 94.74% Test Accuracy with 2.8M Parameters
This project implements a modified ResNet architecture optimized for the CIFAR-10 dataset, achieving 94.74% test accuracy with only 2.8 million parameters (56% of the 5M parameter limit). The model is designed to balance accuracy and computational efficiency, making it suitable for resource-constrained environments.
- Progressive Channel Scaling: Feature channels double at each downsampling stage (16→32→64), balancing accuracy and efficiency.
- Lightweight Residual Blocks: Basic blocks with two 3×3 convolutions keep the parameter count low while maintaining performance; both design choices are shown in the architecture sketch after this list.
- Data Augmentation: Random cropping, horizontal flipping, and per-channel normalization improve generalization (see the augmentation sketch after this list).
- Training Strategy: Adam optimizer with learning rate scheduling over 300 epochs ensures stable convergence (see the training sketch after this list).
- Parameter Efficiency: Achieves strong performance using only 56% of the 5M parameter budget.
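
The README does not include code, so the following is a minimal PyTorch sketch of the layout described above. The names `BasicBlock` and `ModifiedResNet` and the `(3, 3, 3)` block counts are illustrative assumptions, not the project's exact configuration; the precise depth and width that yield 2,797,610 parameters are not specified here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BasicBlock(nn.Module):
    """Lightweight residual block: two 3x3 convolutions with a shortcut."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        # 1x1 projection shortcut when spatial size or channel count changes
        self.shortcut = nn.Sequential()
        if stride != 1 or in_ch != out_ch:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False),
                nn.BatchNorm2d(out_ch),
            )

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + self.shortcut(x))

class ModifiedResNet(nn.Module):
    """Three stages; channels double at each downsampling step (16->32->64)."""
    def __init__(self, blocks_per_stage=(3, 3, 3), num_classes=10):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1, bias=False),
            nn.BatchNorm2d(16),
            nn.ReLU(inplace=True),
        )
        self.stage1 = self._make_stage(16, 16, blocks_per_stage[0], stride=1)
        self.stage2 = self._make_stage(16, 32, blocks_per_stage[1], stride=2)
        self.stage3 = self._make_stage(32, 64, blocks_per_stage[2], stride=2)
        self.head = nn.Linear(64, num_classes)

    @staticmethod
    def _make_stage(in_ch, out_ch, num_blocks, stride):
        layers = [BasicBlock(in_ch, out_ch, stride)]
        layers += [BasicBlock(out_ch, out_ch) for _ in range(num_blocks - 1)]
        return nn.Sequential(*layers)

    def forward(self, x):
        x = self.stage3(self.stage2(self.stage1(self.stem(x))))
        x = F.adaptive_avg_pool2d(x, 1).flatten(1)  # global average pooling
        return self.head(x)
```

With `(3, 3, 3)` blocks this builds a ResNet-20-style network; reaching 2.8M parameters would require more depth or width than shown.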
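
A sketch of the augmentation pipeline, assuming standard torchvision transforms. The pad-then-crop size (`padding=4`) and the normalization statistics are commonly used CIFAR-10 values, assumed here rather than taken from the project.

```python
from torchvision import transforms

# Commonly cited CIFAR-10 per-channel statistics (assumed values)
CIFAR10_MEAN = (0.4914, 0.4822, 0.4465)
CIFAR10_STD = (0.2470, 0.2435, 0.2616)

train_transform = transforms.Compose([
    transforms.RandomCrop(32, padding=4),   # random 32x32 crop from a zero-padded image
    transforms.RandomHorizontalFlip(),      # flip with probability 0.5
    transforms.ToTensor(),
    transforms.Normalize(CIFAR10_MEAN, CIFAR10_STD),
])

# Evaluation uses only normalization, no augmentation
test_transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(CIFAR10_MEAN, CIFAR10_STD),
])
```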
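
And a minimal training sketch. The README only states "Adam with learning rate scheduling over 300 epochs", so the learning rate, weight decay, and the choice of a cosine schedule below are assumptions.

```python
import torch
import torch.nn.functional as F

def train(model, train_loader, epochs=300, device="cuda"):
    # lr, weight_decay, and CosineAnnealingLR are assumed hyperparameters,
    # not the project's documented settings.
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=5e-4)
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=epochs)
    model.to(device).train()
    for epoch in range(epochs):
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = F.cross_entropy(model(images), labels)
            loss.backward()
            optimizer.step()
        scheduler.step()  # decay the learning rate once per epoch
```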
- Test Accuracy: 94.74%
- Parameter Count: 2,797,610 (56% of the 5M limit; see the counting snippet after this list)
- Training Duration: 300 epochs
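
The parameter budget can be verified with a one-liner. Note that the `ModifiedResNet` sketch above uses illustrative block counts, so it will not reproduce 2,797,610 exactly.

```python
model = ModifiedResNet()  # the sketch above, not the exact trained model
num_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"{num_params:,} trainable parameters")  # the reported model has 2,797,610
```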
We compared our modified ResNet with a DenseNet model trained under similar constraints. Key findings:
- Accuracy: ResNet outperformed DenseNet by 1.22 percentage points (94.74% vs. 93.52%).
- Parameter Efficiency: ResNet used 9.7% fewer parameters than DenseNet.
- Generalization: ResNet showed better generalization, with lower validation loss and higher test accuracy.
