[MIEB] CVBenchCount with laion/CLIP-ViT-B-32-DataComp.XL-s13B-b90K fails #1393

@Muennighoff

Description

ERROR:mteb.evaluation.MTEB:Error while evaluating CVBenchCount: Sizes of tensors must match except in dimension 1. Expected size 768 but got size 7 for tensor number 1 in the list.
Traceback (most recent call last):
  File "/data/niklas/mieb/mteb/scripts/run_mieb.py", line 86, in <module>
    results = evaluation.run(model, output_folder="/data/niklas/mieb/results-mieb-final", batch_size=1)
  File "/data/niklas/mieb/mteb/mteb/evaluation/MTEB.py", line 464, in run
    raise e
  File "/data/niklas/mieb/mteb/mteb/evaluation/MTEB.py", line 425, in run
    results, tick, tock = self._run_eval(
  File "/data/niklas/mieb/mteb/mteb/evaluation/MTEB.py", line 300, in _run_eval
    results = task.evaluate(
  File "/data/niklas/mieb/mteb/mteb/abstasks/AbsTask.py", line 126, in evaluate
    scores[hf_subset] = self._evaluate_subset(
  File "/data/niklas/mieb/mteb/mteb/abstasks/Image/AbsTaskAny2TextMultipleChoice.py", line 62, in _evaluate_subset
    scores = evaluator(model, encode_kwargs=encode_kwargs)
  File "/data/niklas/mieb/mteb/mteb/evaluation/evaluators/Image/Any2TextMultipleChoiceEvaluator.py", line 78, in __call__
    query_embeddings = model.get_fused_embeddings(
  File "/data/niklas/mieb/mteb/mteb/models/openclip_models.py", line 116, in get_fused_embeddings
    image_embeddings = self.get_image_embeddings(images, batch_size)
  File "/data/niklas/mieb/mteb/mteb/models/openclip_models.py", line 82, in get_image_embeddings
    image_outputs = self.model.encode_image(inputs.to(self.device))
  File "/env/lib/conda/gritkto4/lib/python3.10/site-packages/open_clip/model.py", line 268, in encode_image
    features = self.visual(image)
  File "/env/lib/conda/gritkto4/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/env/lib/conda/gritkto4/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/env/lib/conda/gritkto4/lib/python3.10/site-packages/open_clip/transformer.py", line 614, in forward
    x = torch.cat([_expand_token(self.class_embedding, x.shape[0]).to(x.dtype), x], dim=1)
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 768 but got size 7 for tensor number 1 in the list.
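For context, the failure is raised by the `torch.cat` in OpenCLIP's ViT forward pass, which prepends the class token to the patch embeddings along the sequence dimension; both tensors must agree in the embedding width (768 for this ViT-B/32 checkpoint), but here `x` arrives with a last dimension of 7. A minimal sketch (hypothetical helper and shapes, not the actual MIEB inputs) that reproduces the same error:

```python
import torch

def cat_class_token(class_embedding: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
    # Mirrors the failing line in open_clip/transformer.py: expand the
    # class token to the batch size and prepend it along dim=1.
    cls = class_embedding.view(1, 1, -1).expand(x.shape[0], -1, -1)
    return torch.cat([cls.to(x.dtype), x], dim=1)

width = 768  # ViT-B/32 embedding width
class_embedding = torch.zeros(width)

# Well-formed patch embeddings of shape (batch, num_patches, width) work:
ok = cat_class_token(class_embedding, torch.zeros(1, 49, width))
assert ok.shape == (1, 50, width)

# An input whose last dimension is 7 instead of 768 raises the same
# "Expected size 768 but got size 7" RuntimeError as the traceback,
# suggesting the images reaching encode_image are mis-shaped upstream.
try:
    cat_class_token(class_embedding, torch.zeros(1, 49, 7))
except RuntimeError as e:
    print(e)
```

This points at the preprocessing/stacking in `get_image_embeddings` rather than at the model itself: by the time `self.visual(image)` runs, the tensor no longer has the expected `(batch, 3, H, W)` layout.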
Labels: mieb (The image extension of MTEB)
