This project focuses on recognizing human emotions from facial images using a Convolutional Neural Network (CNN). The system classifies facial expressions into four categories: Angry, Happy, Neutral, and Sad.
The project demonstrates dataset preparation, image preprocessing, CNN training, and performance evaluation using real-world facial expression data.
- Converted a YOLO-formatted dataset into a classification dataset
- Preprocessed images (grayscale, resizing, normalization)
- Applied data augmentation to improve generalization
- Trained a CNN model for emotion classification
- Evaluated performance on unseen test data
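The first step above, converting YOLO detection labels into per-class cropped images, can be sketched as follows. This assumes the standard YOLO label convention (one `class cx cy w h` line per box, coordinates normalized to [0, 1]); `CLASS_NAMES`, the class ordering, and the helper names are illustrative assumptions, not the project's actual code.

```python
import numpy as np

# Hypothetical label order; the original dataset's mapping may differ.
CLASS_NAMES = ["Angry", "Happy", "Neutral", "Sad"]

def yolo_box_to_pixels(line, img_w, img_h):
    """Parse one YOLO label line ('class cx cy w h', normalized)
    and return (class_id, x1, y1, x2, y2) in pixel coordinates."""
    cls, cx, cy, w, h = line.split()
    cls = int(cls)
    cx, cy, w, h = (float(v) for v in (cx, cy, w, h))
    x1 = int((cx - w / 2) * img_w)
    y1 = int((cy - h / 2) * img_h)
    x2 = int((cx + w / 2) * img_w)
    y2 = int((cy + h / 2) * img_h)
    # Clamp to the image bounds in case the box overflows.
    x1, y1 = max(x1, 0), max(y1, 0)
    x2, y2 = min(x2, img_w), min(y2, img_h)
    return cls, x1, y1, x2, y2

def crop_face(image, line):
    """Crop the labeled face region; the crop would then be saved
    into a per-class folder for classification training."""
    h, w = image.shape[:2]
    cls, x1, y1, x2, y2 = yolo_box_to_pixels(line, w, h)
    return CLASS_NAMES[cls], image[y1:y2, x1:x2]
```

In practice this loop runs over every image/label pair in the YOLO dataset, writing each crop to a folder named after its class.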
- Python
- OpenCV
- NumPy, Pandas
- TensorFlow / Keras
- Scikit-learn
- Jupyter Notebook
- Facial expression dataset originally formatted for YOLO detection
- Converted into a CNN-friendly classification dataset
- Image size: 48×48 grayscale
- Convolutional layers with ReLU activation
- MaxPooling layers
- Fully connected Dense layers
- Dropout for regularization
- Softmax output layer (4 classes)
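The layer list above corresponds to a compact Keras model along these lines. Filter counts, the dense width, and the dropout rate are illustrative assumptions; the original model's exact sizes may differ.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_model(num_classes=4):
    """Conv+ReLU blocks with max pooling, a dense head with
    dropout, and a softmax over the 4 emotion classes."""
    return keras.Sequential([
        layers.Input(shape=(48, 48, 1)),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])

model = build_model()
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```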
- Training accuracy: ~90%
- Test accuracy: ~54%
The large gap between training (~90%) and test (~54%) accuracy indicates overfitting, which is common in facial emotion recognition with limited and imbalanced data, and it highlights clear areas for future improvement.
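Aggregate accuracy hides which emotions the model confuses, so a per-class breakdown with scikit-learn is a useful diagnostic. The arrays below are synthetic stand-ins; in the real project `y_true` and `y_pred` would come from the test labels and `model.predict`.

```python
import numpy as np
from sklearn.metrics import classification_report, confusion_matrix

CLASS_NAMES = ["Angry", "Happy", "Neutral", "Sad"]

# Synthetic labels just to show the report's shape; real values
# come from the held-out test set and the trained model.
y_true = np.array([0, 1, 2, 3, 1, 2])
y_pred = np.array([0, 1, 2, 1, 1, 2])

print(classification_report(y_true, y_pred, target_names=CLASS_NAMES))
print(confusion_matrix(y_true, y_pred))
```

The confusion matrix in particular shows whether the test-accuracy drop is spread evenly or concentrated in one or two easily confused classes (e.g. Neutral vs. Sad).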
- Use deeper CNN architectures
- Apply transfer learning (e.g., MobileNet, ResNet)
- Improve dataset balance
- Tune regularization and augmentation strategies
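One of the improvement directions above, transfer learning, could look roughly like this with a frozen MobileNetV2 backbone. The input adaptation (upsampling the 48×48 grayscale images and repeating them to 3 channels) and all layer sizes are assumptions for illustration, not a tested recipe.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_transfer_model(num_classes=4, weights="imagenet"):
    """Frozen MobileNetV2 backbone with a small softmax head."""
    base = keras.applications.MobileNetV2(
        input_shape=(96, 96, 3),
        include_top=False,
        weights=weights,  # "imagenet" loads pretrained features
    )
    base.trainable = False  # freeze the backbone initially

    inputs = layers.Input(shape=(48, 48, 1))
    x = layers.Resizing(96, 96)(inputs)
    x = layers.Concatenate()([x, x, x])  # grayscale -> 3 channels
    x = base(x, training=False)
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Dropout(0.3)(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return keras.Model(inputs, outputs)
```

A common follow-up is to unfreeze the top of the backbone and fine-tune with a low learning rate once the new head has converged.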
This project is for educational and research purposes only.
Academic / learning project