- Implemented FP32-to-FP16 quantization of the MirrorNet model for deployment on the OAK-D camera, reducing the model's memory size by 49.28%.
- Performed post-training quantization (PTQ) on the MirrorNet model, reducing its memory size by 61.19% and improving inference speed by 50.48%.
- Performed post-training quantization (PTQ) on the GDNet model, reducing its memory size by 69.39% and improving inference speed by 51.06%.
- Implemented a joint model structure combining the quantized MirrorNet and GDNet models.
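The core idea behind the FP32-to-FP16 step above can be sketched in a few lines. This is a minimal NumPy illustration of why half-precision roughly halves a model's weight storage (the tensor name and shape are made up for the example; the actual deployment used the OAK-D toolchain, as described in the report):

```python
import numpy as np

# Hypothetical stand-in for one FP32 weight tensor (shape is illustrative).
fp32_weights = np.random.randn(256, 256).astype(np.float32)

# FP32 -> FP16 quantization: cast the weights to half precision.
fp16_weights = fp32_weights.astype(np.float16)

# FP16 stores 2 bytes per value instead of 4, so storage drops by ~50%.
reduction = 1 - fp16_weights.nbytes / fp32_weights.nbytes
print(f"memory reduction: {reduction:.2%}")

# The rounding error introduced is small for typical weight magnitudes.
max_err = np.max(np.abs(fp32_weights - fp16_weights.astype(np.float32)))
print(f"max quantization error: {max_err:.5f}")
```

The reported reductions (49.28% rather than exactly 50%) differ because a deployed model also contains non-weight data such as graph metadata, which casting does not shrink.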
For implementation details and results, please refer to our project presentation. For a more in-depth analysis of the methodology and findings, please see our project report.
- Link to project presentation - https://drive.google.com/file/d/1nd55v0WIXSTq1JCiPaat4qH54rYZ7Wmm/view?usp=sharing
- Link to project report - https://drive.google.com/file/d/1h2MZxJobZ8aIZM3p0UyWSc6Rtq_aMajR/view?usp=sharing