
Neko: Insights on Interpretability in MML #3

@bhavul

Description


The survey paper does not go into much detail on interpretability beyond leaving a few references to be studied:

  • Why and how Transformers perform so well in multimodal learning has been investigated [106], [299], [300], [301], [302], [303], [304], [305], [306]

This issue is about studying these references and extracting any strategies or insights from them that could be useful for Neko.

Metadata

Assignees

No one assigned

    Labels

    documentation (Improvements or additions to documentation), good first issue (Good for newcomers), help wanted (Extra attention is needed)

    Type

    No type

    Projects

    No projects

    Milestone

    No milestone

    Relationships

    None yet

    Development

    No branches or pull requests
