
visual-question

Here are 2 public repositories matching this topic...


This project demonstrates fine-tuning of Vision-Language Models (VLMs) using BLIP (Bootstrapping Language-Image Pre-training) for a variety of multimodal AI tasks. Whether you're working on image captioning, image-text retrieval, or visual question answering (VQA), this repository provides a comprehensive, hands-on guide to adapting BLIP to your own dataset.

  • Updated Apr 9, 2025
  • Jupyter Notebook
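
As a minimal sketch of the kind of VQA inference the repository above covers, assuming the Hugging Face `transformers` library (`BlipProcessor`, `BlipForQuestionAnswering`) and the public `Salesforce/blip-vqa-base` checkpoint; the repository's own fine-tuning code may differ, and `normalize_answer` is a hypothetical helper added here for illustration:

```python
from PIL import Image
from transformers import BlipProcessor, BlipForQuestionAnswering


def normalize_answer(text: str) -> str:
    # Hypothetical helper: tidy a generated answer for comparison.
    return text.strip().lower()


def answer(image_path: str, question: str) -> str:
    # Load the pretrained BLIP VQA checkpoint (downloads weights on first use).
    processor = BlipProcessor.from_pretrained("Salesforce/blip-vqa-base")
    model = BlipForQuestionAnswering.from_pretrained("Salesforce/blip-vqa-base")

    # Encode the image/question pair and generate a free-form answer.
    image = Image.open(image_path)
    inputs = processor(image, question, return_tensors="pt")
    out = model.generate(**inputs)
    return normalize_answer(processor.decode(out[0], skip_special_tokens=True))


# Example usage (downloads the model weights on first run):
# print(answer("photo.jpg", "How many dogs are in the picture?"))
```

Fine-tuning for a custom dataset would follow the same processor/model pairing, with the generated answers compared against ground-truth labels during training.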
