
Conversation

@pockers21 (Contributor) commented on Oct 14, 2025

Update Notes (2025-11-06)

  • CLI Merge
    • Fold the standalone Jina CLI into mtmd-cli’s projector‑only flow; remove the extra binary.
  • Conversion Script (set_gguf_parameters)
    • Emit vision keys using the standard naming: clip.has_vision_encoder, clip.vision.image_size/patch_size/embedding_length/block_count/projection_dim/feed_forward_length/attention.head_count (see the sketch after this list).
    • Write only projector_type (set to 'jinaclip2'); do not introduce projector_version.
  • Inference (mtmd)
    • Use ggml_rope_ext to implement 2D RoPE; reuse bicubic for image preprocessing.
  • Minimal Validation
    • Conversion succeeds; gguf_dump shows clip.projector_type='jinaclip2'.
    • Minimal inference passes for both text and image; C++ vs Python cosine/RMSE are within the expected range.
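
For reference, a minimal sketch of what emitting these keys via gguf-py's GGUFWriter can look like; the hparams field names and example values are assumptions for illustration, not the PR's exact code:

# Illustrative sketch: write the JinaCLIP v2 vision metadata with the standard clip.* key names.
import gguf

def write_vision_metadata(writer: gguf.GGUFWriter, hparams: dict) -> None:
    writer.add_bool("clip.has_vision_encoder", True)
    writer.add_string("clip.projector_type", "jinaclip2")  # only projector_type; no projector_version key
    writer.add_uint32("clip.vision.image_size",           hparams["image_size"])           # e.g. 512
    writer.add_uint32("clip.vision.patch_size",           hparams["patch_size"])           # e.g. 14
    writer.add_uint32("clip.vision.embedding_length",     hparams["hidden_size"])          # e.g. 1024
    writer.add_uint32("clip.vision.block_count",          hparams["num_hidden_layers"])    # e.g. 24
    writer.add_uint32("clip.vision.projection_dim",       hparams["projection_dim"])       # e.g. 512
    writer.add_uint32("clip.vision.feed_forward_length",  hparams["intermediate_size"])
    writer.add_uint32("clip.vision.attention.head_count", hparams["num_attention_heads"])
    writer.add_float32("clip.vision.rope_theta",          hparams.get("rope_theta", 10000.0))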

Reproduction: minimal commands & data (CPU)

  • Produce GGUF (with ST pooling metadata)
    • Text: jina-bert-v3.pooling_type = MEAN/CLS/LAST
    • Vision: clip.projector_type = jinaclip2, clip.vision.rope_theta = 10000 (default)
  • Text parity
    • C++: CUDA_VISIBLE_DEVICES= ./build/bin/llama-embedding -m /path/jina-text-converted.gguf -p "hello world" --n-gpu-layers 0 --pooling mean --embd-normalize 2 --embd-output-format array
    • Python: python3 <ref>/debug.py --mode text --input "hello world" --out-dir <dir> --fa off
    • Metric: read both 512-d outputs and compute cosine / RMSE
  • Image parity
    • C++: CUDA_VISIBLE_DEVICES= ./build/bin/llama-mtmd-cli --mmproj /path/mmproj-jina-vision-converted.gguf --image /path/img.jpg --n-gpu-layers 0 --embd-normalize 2 --embd-output-format array
    • Python: python3 <ref>/debug.py --mode image --input /path/img.jpg --out-dir <dir> --fa off
    • Metric: read both 512-d outputs and compute cosine / RMSE (see the parity sketch after this list)
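
For the parity metric above, a minimal numpy sketch; the file names and the plain-text array format are assumptions, so adapt the loading to however the two embeddings were actually saved:

# Illustrative parity check: load two 512-d embeddings (C++ vs Python) and report cosine / RMSE.
import numpy as np

def parity(cpp_path: str, py_path: str) -> None:
    a = np.loadtxt(cpp_path, dtype=np.float64)  # assumed format: whitespace-separated floats
    b = np.loadtxt(py_path, dtype=np.float64)
    assert a.shape == b.shape == (512,)
    cosine = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    rmse   = float(np.sqrt(np.mean((a - b) ** 2)))
    print(f"cosine={cosine:.6f} rmse={rmse:.6f}")

parity("cpp_embedding.txt", "py_embedding.txt")  # hypothetical file names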

mtmd: Add JinaCLIP v2 vision projector + GGUF support for jina-bert-v3 (merged-LoRA or adapter)

Overview

  • Converter: write jina-bert-v3 text tower params into GGUF (supports both merged-LoRA checkpoints and adapter-based inputs), and export vision metadata (projector_type=jinaclip, vision.rope_theta, image_size, patch_size, projection_dim, etc.).
  • Runtime: introduce PROJECTOR_TYPE_JINACLIP in the MTMD path (JinaCLIP v2 vision tower: 2D RoPE with shared frequency cache, attention/FFN internal LayerNorm, single-token output), and normalize with common_embd_normalize(..., 2).
  • CLI (core): add a minimal validation tool, llama-jinaclip-cli (built by default), for text/image embedding numerical and performance checks; it depends only on common, mtmd, and Threads, builds cross-platform, and has no third-party dependencies.
  • Compatibility: only activates when related GGUF metadata exists; doesn’t affect other projectors (e.g., LLaVA/Qwen2VL); no ggml op changes; no external dependencies.

Scope of changes

  • convert_hf_to_gguf.py
    • Text: support both merged-LoRA single checkpoints and adapter-based export.
    • Vision (JinaCLIP v2): export clip.projector_type=jinaclip, clip.vision.rope_theta (configurable), image_size/patch_size/projection_dim, and map tensors for fused and non-fused QKV layouts (see the sketch after this list).
  • tools/mtmd/clip.cpp, tools/mtmd/clip-impl.h
    • Add PROJECTOR_TYPE_JINACLIP: JinaCLIP v2 vision tower (2D RoPE with shared freq cache), attention internal LN, FFN sub-layer LN (enabled when both weight/bias present), single-token output (CLS-equivalent), unified L2 normalize.
    • clip_n_output_tokens() returns 1 for JinaCLIP; clip_n_mmproj_embd() returns projection_dim.
  • tools/mtmd/jinaclip-cli.cpp, tools/mtmd/CMakeLists.txt
    • Add llama-jinaclip-cli target (default); one command covers text/image minimal validation, thread scaling, encode_ms reporting, and saves embeddings for Python parity.
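
As a rough illustration of the fused/non-fused QKV mapping referenced above (tensor names and split logic are hypothetical, not the converter's exact code):

# Sketch: split a fused qkv.weight into separate q/k/v tensors; pass already-split tensors through.
import torch

def map_qkv(name: str, data: torch.Tensor, hidden: int):
    if name.endswith("attn.qkv.weight"):
        q, k, v = torch.split(data, hidden, dim=0)  # fused layout assumed: [3*hidden, hidden]
        base = name[: -len("qkv.weight")]
        return [(base + "q.weight", q),
                (base + "k.weight", k),
                (base + "v.weight", v)]
    return [(name, data)]  # non-fused checkpoints already ship separate q/k/v tensors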

Validation summary

  • CI: CPU-only ci/run.sh passes locally; no ggml op changes in this PR.
  • Correctness: perplexity is not applicable to embedding models, so we verify correctness via C++ vs Python parity.
    • TEXT (CPU, minimal sample): cosine=0.999996, RMSE=0.000125
    • IMAGE (CPU, minimal sample): cosine=0.990261, RMSE=0.006168
  • Performance: checked with CLI encode_ms and thread scaling; no regression observed. More data can be added if requested.
  • Compatibility: activated only when GGUF metadata (projector_type=jinaclip, etc.) is present; other projectors unaffected.
  • Reference: ModelScope uniontech-yourong/split_jina (used for Python-side parity).

Performance (absolute metrics, CPU-only minimal samples)

  • Environment
    • OS: Ubuntu 22.04.5 LTS
    • CPU: Intel Xeon Platinum 8352V (dual-socket, 2×32C/64T, SMT on), 128 threads total
    • Build: Release, GGML_CUDA=OFF (CPU-only), GCC 11.4, CMake 3.22
    • Model: JinaCLIP v2 vision tower (image_size=512, patch=14, depth=24, hidden=1024; official: https://huggingface.co/jinaai/jina-clip-v2); text tower (Jina Embeddings v3, output truncated to 512 dims)
    • Threads: primarily 8 threads for both text/image (with 1-thread comparison)
  • Metric definitions
    • Text: use CLI-reported JINACLIP_ENCODE_MS (pure inference, excludes load)
    • Image: use CLI line “image … done in … ms” (pure inference, excludes load)
  • Results (single sample, minimal)
    • Text (“hello world”, ≈5 tokens)
      • 1 thread: encode_ms ≈ 180.48 ms
      • 8 threads: encode_ms ≈ 34.08 ms
    • Image (512×512, single)
      • 8 threads: image done in ≈ 6154 ms (stabilizes ~6.1–6.4 s after warm-up)
  • Notes
    • Above numbers are CPU-only pure inference; end-to-end (including model load) is higher and not included.

GPU group (absolute metrics, minimal samples)

  • Environment
    • GPU: NVIDIA vGPU-32GB (cc=8.9, 32 GB), Driver 550.107, CUDA 12.4
    • Build: Release, GGML_CUDA=ON (CUDA backend), CUDA arch=89
    • Threads: -t 8 (host-side preprocessing threads)
  • Results (pure inference, excludes load)
    • Text (“hello world”, ≈5 tokens): encode_ms ≈ 84.88 ms
    • Image (512×512, single): image done in ≈ 827 ms

@pockers21 pockers21 requested review from CISC and ngxson as code owners October 14, 2025 09:04
@github-actions bot added the examples and python labels on Oct 14, 2025

@ngxson (Collaborator) left a comment:

add a minimal validation tool llama-jinaclip-cli (built by default) for text/image embedding numerical/performance checks;

I don't see why we need to add this new CLI. The mtmd-cli can do this with the -p and --image params.


# Top-level direct mappings
if src_no_vm == 'cls_token':
    return [('v.cls_token', data_torch)]

Collaborator review comment on the snippet above:

Use proper mapping instead
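
In converter terms the suggestion is to resolve names through the shared tensor-name mapping rather than returning a hard-coded GGUF name; roughly (a sketch, assuming the cls_token entry is registered in gguf-py's tensor mapping):

# Sketch: let the converter's mapping table produce the GGUF name instead of hard-coding it.
def modify_tensors(self, data_torch, name, bid):
    del bid  # unused here
    return [(self.map_tensor_name(name), data_torch)]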

Comment on lines 2229 to 2237
if (!ctx->jinaclip_rope_initialized) {
    const int half_dim = rope_dim / 2;
    std::vector<float> base_freqs(half_dim);
    for (int i = 0; i < half_dim; i++) {
        float arange_val    = i * 2.0f;                     // [0, 2, 4, ..., 30]
        float normalized    = arange_val / rope_dim;        // [0, 2/32, 4/32, ..., 30/32]
        float theta_powered = powf(freq_base, normalized);  // theta^normalized
        base_freqs[i] = 1.0f / theta_powered;               // 1.0 / theta^normalized
    }

Collaborator review comment:

Not sure what you're trying to do here, is this just 2D RoPE? (which we already supported)


@pockers21 (Contributor, PR author) replied:

This isn’t re‑implementing generic 2D RoPE; it implements JinaCLIP’s VisionRotaryEmbeddingFast.
It uses fractional‑position 2D RoPE (t = arange(ft)/ft * pt) and precomputes a full H×W cos/sin grid; the official 2D RoPE uses integer grid positions (pos_h/pos_w) with ggml_rope_ext and does not include these steps.
This is done to strictly match Jina’s Python semantics.
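
For illustration, a small numpy sketch of the fractional-position precompute being described; the example values (rope_dim=32, a 36×36 runtime patch grid, a 16×16 pretrained grid, theta=10000) are taken from the comments in the snippet above, and the exact channel interleaving of the real module is omitted:

# Sketch of a VisionRotaryEmbeddingFast-style precompute with fractional positions.
import numpy as np

rope_dim, ft, pt, theta = 32, 36, 16, 10000.0
half_dim   = rope_dim // 2
base_freqs = 1.0 / theta ** (np.arange(half_dim) * 2.0 / rope_dim)  # (16,): 1 / theta^(2i/32)
t          = np.arange(ft) / ft * pt                                # fractional positions: [0, 16/36, 32/36, ...]
freqs_1d   = np.outer(t, base_freqs)                                # (36, 16) per-axis angles
# Full H x W grid: height angles in the first half of the channels, width angles in the second half.
freqs_h  = np.repeat(freqs_1d[:, None, :], ft, axis=1)              # (36, 36, 16)
freqs_w  = np.repeat(freqs_1d[None, :, :], ft, axis=0)              # (36, 36, 16)
freqs_hw = np.concatenate([freqs_h, freqs_w], axis=-1)              # (36, 36, 32)
cos_grid, sin_grid = np.cos(freqs_hw), np.sin(freqs_hw)             # cached once, reused per forward pass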


@ngxson (Collaborator) replied on Oct 15, 2025:

fractional‑position 2D RoPE (t = arange(ft)/ft * pt)

Based on your code:

time_seq[i] = (float) i / ft_seq_len * pt_seq_len;  // [0, 16/36, 32/36, ..., 560/36]
...
freqs_h[t * half_dim + f] = time_seq[t] * base_freqs[f];

Then why don't we scale base_freqs[f] instead? The third param of ggml_rope_ext, the c tensor (freq_scale), is made for this purpose.

Honestly I think this is just YaRN
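
A quick numpy check of the equivalence being pointed at here: scaling the positions by pt/ft gives the same angles as keeping integer positions and scaling the frequencies, which is what a freq_scale-style parameter can express (same illustrative values as in the sketch above):

# (i / ft * pt) * base_freq  ==  i * (base_freq * pt / ft) for every position i and frequency.
import numpy as np

rope_dim, ft, pt, theta = 32, 36, 16, 10000.0
base_freqs = 1.0 / theta ** (np.arange(0, rope_dim, 2) / rope_dim)

angles_fractional = np.outer(np.arange(ft) / ft * pt, base_freqs)  # the PR's fractional-position construction
angles_scaled     = np.outer(np.arange(ft), base_freqs * pt / ft)  # integer positions + scaled frequencies
assert np.allclose(angles_fractional, angles_scaled)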

@pockers21 pockers21 force-pushed the feature/jinaclip-v2-projector branch from fd37a5c to 9d02918 Compare October 22, 2025 08:39
@pockers21 pockers21 force-pushed the feature/jinaclip-v2-projector branch from 9d02918 to e19eb27 Compare October 22, 2025 10:35
@pockers21 pockers21 force-pushed the feature/jinaclip-v2-projector branch from e19eb27 to 2d8885b Compare October 22, 2025 10:36
@pockers21 pockers21 force-pushed the feature/jinaclip-v2-projector branch from 2d8885b to b9f78de Compare October 22, 2025 11:07
@pockers21 pockers21 force-pushed the feature/jinaclip-v2-projector branch from b9f78de to 2787888 Compare October 23, 2025 02:07
@pockers21 pockers21 marked this pull request as draft October 24, 2025 05:45
@pockers21 pockers21 force-pushed the feature/jinaclip-v2-projector branch 2 times, most recently from 46f9ee2 to 542ed6a Compare October 28, 2025 03:17
@pockers21 pockers21 force-pushed the feature/jinaclip-v2-projector branch 4 times, most recently from 445e0d5 to bd46020 Compare October 28, 2025 10:02
@CISC (Collaborator) commented on Oct 28, 2025:

@pockers21 What's up?

@pockers21 (Contributor, PR author) commented on Oct 29, 2025:

@pockers21 What's up?

I'm currently adjusting the code and fixing issues. I had originally planned to answer your questions when moving the PR out of draft, but let me explain now. The link you shared (https://huggingface.co/jinaai/jina-clip-v2/blob/main/config.json#L15-L38) points to the official Jina config that includes LoRA. In our work, we modified the official Jina model to fuse the text-side LoRA into the base model and then exported it to GGUF. Under Jina's loading logic, those fields don't take effect when loading JinaCLIP v2; they are only used when loading the Jina Embeddings v3 model.

@pockers21 pockers21 force-pushed the feature/jinaclip-v2-projector branch 2 times, most recently from 7e0b15b to 2338880 Compare October 29, 2025 08:43
pockers21 pushed a commit to pockers21/llama.cpp that referenced this pull request Jan 22, 2026
…icubic;switch to 'jinaclip2'; fix converter constants
@pockers21 pockers21 force-pushed the feature/jinaclip-v2-projector branch 7 times, most recently from 764be54 to c93a390 Compare January 26, 2026 09:29
pockers21 pushed a commit to pockers21/llama.cpp that referenced this pull request Jan 26, 2026
…icubic;switch to 'jinaclip2'; fix converter constants
@pockers21 pockers21 force-pushed the feature/jinaclip-v2-projector branch 7 times, most recently from 5428aed to 6e89a8b Compare January 28, 2026 13:06
liyang and others added 3 commits February 3, 2026 01:45
…icubic;switch to 'jinaclip2'; fix converter constants
Remove unnecessary try/except Jina text hparams.

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
@pockers21 pockers21 force-pushed the feature/jinaclip-v2-projector branch from 00a6e01 to 62ce232 Compare February 4, 2026 10:45