
Clarification: Backbone freezing and ablation studies during fine-tuning #23

@HasanOJ

Description


Request for clarification regarding backbone freezing during fine-tuning and ablation studies:

I would like to confirm whether the backbone (e.g., ResNet50) is frozen during fine-tuning in this repository. From reviewing the training scripts and documentation, it appears that the backbone is not explicitly frozen (i.e., requires_grad = False is never set for backbone layers) and the whole model is trained end-to-end.

However, in the paper (Appendix K: Ablation Studies, Section K.1), it is mentioned that ablation experiments were conducted by loading a pretrained FF-2048 ResNet50 model and initializing a new MRL layer, then selectively unfreezing different layers of the backbone to study the effect on inducing nesting. This suggests that freezing and partial unfreezing of backbone layers are relevant in some experimental contexts.

Could you please clarify:

  1. Is the backbone ever frozen during fine-tuning or training?
  2. Is there code/configuration in this repository to reproduce the selective freezing/unfreezing ablation experiments referenced in the paper?
  3. If not, is there a recommended way to freeze or partially unfreeze the backbone if desired?
  4. Were any experiments run comparing how freezing vs. unfreezing the backbone affects performance?

Thank you for clarifying!
