A workshop for collections of multi-modal LLM examples, samples, reference architecture and demos on Amazon SageMaker.
Updated Mar 16, 2025 · Jupyter Notebook
(Accepted: NeurIPS 2025 Workshop Mexico City 7HVU) AdCare-VLM: Leveraging Large Vision Language Models (LVLMs) to Monitor Long-Term Medication Adherence and Care