This section collects work on aligning models with human preferences, covering both the fundamental approaches and their applications; a minimal sketch of a pairwise preference loss follows the table below.
| Title | Venue | Year | Code | Keywords |
|---|---|---|---|---|
| RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback | CVPR | 2024 | Official | RLHF-V |
| Diffusion Model Alignment Using Direct Preference Optimization | CVPR | 2024 | Official | DiffusionDPO |
| Training Diffusion Models with Reinforcement Learning | ICLR | 2024 | Official | DDPO |
| RL-VLM-F: Reinforcement Learning from Vision Language Foundation Model Feedback | ICML | 2024 | Official | RL-VLM-F |
| Aligning Diffusion Models by Optimizing Human Utility | NeurIPS | 2024 | Official | Diffusion-KTO |
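As a concrete illustration of the preference objective that several of the papers above build on, here is a minimal sketch of a DPO-style pairwise loss. It is not taken from any listed paper's official code; the function name, tensor shapes, and `beta` value are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Illustrative DPO-style pairwise preference loss.

    Each argument is a tensor of per-sample log-probabilities of the
    preferred ("chosen") and dispreferred ("rejected") outputs under the
    trainable policy and a frozen reference model.
    """
    # Implicit reward: beta times the policy-to-reference log-ratio.
    chosen_reward = beta * (policy_chosen_logp - ref_chosen_logp)
    rejected_reward = beta * (policy_rejected_logp - ref_rejected_logp)
    # Push the chosen reward above the rejected one.
    return -F.logsigmoid(chosen_reward - rejected_reward).mean()

# Toy usage with random log-probabilities for a batch of 4 pairs.
torch.manual_seed(0)
print(dpo_loss(*[torch.randn(4) for _ in range(4)]))
```

Diffusion-oriented variants such as DiffusionDPO apply the same pairwise comparison to preferred and dispreferred generations, approximating the log-probabilities through the diffusion training loss.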
The next two tables turn to dataset distillation: surveys first, then representative methods (a toy gradient-matching sketch follows the methods table).

| Title | Venue | Year |
|---|---|---|
| Data Distillation: A Survey | TMLR | 2023 |
| A Comprehensive Survey of Dataset Distillation | T-PAMI | 2024 |

| Title | Venue | Year | Code | Keywords |
|---|---|---|---|---|
| Dataset Distillation | arXiv | 2018 | Unofficial | |
| Dataset Condensation with Gradient Matching | ICLR | 2021 | Official | gradient matching |
| CAFE: Learning to Condense Dataset by Aligning Features | CVPR | 2022 | Official | CAFE |
| Dataset Distillation by Matching Training Trajectories | CVPR | 2022 | Official | MTT, trajectory matching |
| Towards Lossless Dataset Distillation via Difficulty-Aligned Trajectory Matching | ICLR | 2024 | Official | lossless |
| Multisize Dataset Condensation | ICLR | 2024 | Official | multisize |
| Embarrassingly Simple Dataset Distillation | ICLR | 2024 | Official | RaT-BPTT |
| D4M: Dataset Distillation via Disentangled Diffusion Model | CVPR | 2024 | Official | D4M |
| Dataset Distillation by Automatic Training Trajectories | ECCV | 2024 | Official | ATT |
| Elucidating the Design Space of Dataset Condensation | NeurIPS | 2024 | Official | EDC |
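To make the gradient-matching idea from the table above concrete (e.g., Dataset Condensation with Gradient Matching), here is a toy sketch that learns a small synthetic set whose training gradients mimic those of real data. The sizes, the linear stand-in model, and the cosine matching loss are simplifying assumptions, not any paper's exact recipe.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy data: 1000 real samples condensed into 10 learnable synthetic ones.
real_x, real_y = torch.randn(1000, 32), torch.randint(0, 10, (1000,))
syn_x = torch.randn(10, 32, requires_grad=True)  # learnable synthetic inputs
syn_y = torch.arange(10)                         # one synthetic sample per class

model = torch.nn.Linear(32, 10)                  # stand-in for a real network
opt = torch.optim.SGD([syn_x], lr=0.1)

for step in range(100):
    # Gradients of the loss on real data w.r.t. the model parameters.
    g_real = torch.autograd.grad(
        F.cross_entropy(model(real_x), real_y), model.parameters())
    # Same for synthetic data, keeping the graph so the matching loss
    # can backpropagate into syn_x.
    g_syn = torch.autograd.grad(
        F.cross_entropy(model(syn_x), syn_y), model.parameters(),
        create_graph=True)
    # Matching loss: cosine distance between corresponding gradients.
    loss = sum(1 - F.cosine_similarity(s.flatten(), r.flatten(), dim=0)
               for s, r in zip(g_syn, g_real))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The actual methods alternate this matching step with updates to the network itself and average over many random initializations; trajectory-matching methods (MTT, ATT) instead match longer spans of the parameter trajectory rather than single-step gradients.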
Beyond class-label image data, distillation has also been extended to vision-language, text, and video datasets:

| Title | Venue | Year | Code | Keywords |
|---|---|---|---|---|
| Dataset Distillation with Attention Labels for Fine-tuning BERT | ACL | 2023 | Official | |
| Vision-Language Dataset Distillation | TMLR | 2024 | Official | |
| Low-Rank Similarity Mining for Multimodal Dataset Distillation | ICML | 2024 | Official | LoRS |
| Dancing with Still Images: Video Distillation via Static-Dynamic Disentanglement | CVPR | 2024 | Official | |
| DiLM: Distilling Dataset into Language Model for Text-level Dataset Distillation | NAACL | 2024 | Official | DiLM |
| Textual Dataset Distillation via Language Model Embedding | EMNLP | 2024 | N/A | |

A final table gathers recent work that rethinks tokenization and parameter representation for scaling:

| Title | Venue | Year | Code | Keywords |
|---|---|---|---|---|
| TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters | arXiv | 2024 | Official | TokenFormer |
| Large Concept Models: Language modeling in a sentence representation space | arXiv | 2024 | Official | LCM |
| Byte Latent Transformer: Patches Scale Better Than Tokens | arXiv | 2024 | Official | BLT |

Thanks to the following repositories: