Parmanu @ LCS2 IIT Delhi (@parmanu-lcs2)

Our research group is dedicated to pioneering techniques, from advanced quantization to sparse architectures, that make LLMs accessible and deployable.

Popular repositories

  1. llm-distil (Public)

    Toolkit implementing knowledge distillation techniques for LLMs with efficient fine-tuning. Based on the ACL 2025 paper **On the Generalization vs Fidelity Paradox in Knowledge Distillation**. (A generic distillation sketch appears after this list.)

    Python · 1 star

  2. peft (Public)

    Forked from huggingface/peft

    🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. (A short LoRA usage example appears after this list.)

    Python

  3. mpdistil (Public)

    MPDistil is a teacher-student collaborative knowledge distillation framework that enables compact student models to outperform larger teacher models through meta-learning and curriculum learning. B…

    Python

  4. TransJect (Public)

    Source code of TransJect, a manifold-preserving Transformer with enforced injectivity. Based on the EMNLP 2023 paper **Manifold-Preserving Transformers are Effective for Short-Long Range Encoding**

    Python

  5. efficient_pruners (Public)

    Toolkit implementing efficient compression methods for LLMs. Implements PruneNet from the ICLR 2025 paper **You only prune once: Designing calibration-free model compression with policy learning**. (A simplified pruning sketch appears after this list.)

    Python
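As context for llm-distil: the sketch below shows the standard knowledge-distillation training objective (Hinton et al., 2015), blending a KL divergence between temperature-scaled teacher and student logits with ordinary cross-entropy. It is an illustration of the general technique only, not llm-distil's actual API; the temperature and mixing weight are placeholder values.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Textbook KD objective: soft-label KL blended with hard-label CE.

    Generic formulation, NOT the specific loss from llm-distil;
    temperature and alpha are illustrative placeholders.
    """
    # Soften both distributions with the temperature, then match them.
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    kd = F.kl_div(soft_student, soft_teacher, reduction="batchmean")
    kd = kd * temperature ** 2  # rescale so gradients match the CE term

    # Ordinary cross-entropy against the ground-truth labels.
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce
```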
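Since the peft repository is a fork tracking huggingface/peft, the upstream library's canonical LoRA pattern applies: wrap a base model so that only low-rank adapter matrices are trained while the original weights stay frozen. The base model and hyperparameters below are illustrative choices, not anything prescribed by this fork.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Base model is a placeholder; any causal LM from the Hub works the same way.
model = AutoModelForCausalLM.from_pretrained("gpt2")

# LoRA freezes the base weights and trains low-rank updates injected into
# the attention projections. r and lora_alpha are illustrative defaults.
config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["c_attn"],  # GPT-2's fused QKV projection
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, config)
model.print_trainable_parameters()  # only a small fraction is trainable
```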
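efficient_pruners implements PruneNet, which learns a compression policy without calibration data; reproducing that method here would be speculative. As a much simpler stand-in, the sketch below shows plain unstructured magnitude pruning of linear layers, a common baseline that illustrates the general compression setting the toolkit targets.

```python
import torch
import torch.nn as nn

def magnitude_prune(model: nn.Module, sparsity: float = 0.5) -> None:
    """Zero out the smallest-magnitude weights in every Linear layer.

    Plain magnitude pruning -- a baseline for illustration, NOT
    PruneNet's calibration-free policy-learning method.
    """
    for module in model.modules():
        if isinstance(module, nn.Linear):
            w = module.weight.data
            k = int(w.numel() * sparsity)
            if k == 0:
                continue
            # Threshold = k-th smallest absolute weight in this layer.
            threshold = w.abs().flatten().kthvalue(k).values
            w.mul_(w.abs() > threshold)

# Usage: prune roughly half of the weights in a toy model.
model = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 10))
magnitude_prune(model, sparsity=0.5)
```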
