@lsz05 (Contributor) commented on Jul 18, 2025

This PR adds a Japanese sentiment classification dataset, JapaneseSentimentClassification.

We built this dataset based on MultilingualSentimentClassification. However, in the Japanese split of MultilingualSentimentClassification, sentences are split with spaces (which do not typically exist in natural Japanese text) inserted by morphological analysis tools. We found that performance with and without these spaces differs substantially, so we reverted the morphological analysis to remove the unnatural spaces. Our method is best-effort rather than perfect, as there are some corner cases at the boundaries between Japanese and non-Japanese words.
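
For illustration, here is a rough sketch of the detokenization idea (not the exact rule set used to build the dataset; the `JA` character class and `detokenize` helper below are hypothetical): a space is deleted only when both of its neighbours are Japanese characters, so spaces inside Latin-script runs such as "iPod nano" survive. This also shows where the corner cases at Japanese/non-Japanese boundaries come from.

```python
import re

# Japanese punctuation, kana, CJK ideographs, and fullwidth forms.
JA = r"[\u3000-\u30ff\u3400-\u4dbf\u4e00-\u9fff\uff00-\uffef]"

def detokenize(text: str) -> str:
    # Collapse whitespace runs, then drop a single space only when it is
    # flanked by Japanese characters on both sides (lookarounds keep
    # adjacent matches from consuming each other).
    text = re.sub(r"\s+", " ", text).strip()
    return re.sub(rf"(?<={JA}) (?={JA})", "", text)

print(detokenize("良い 商品 でした よ 、 ツムツム して る 人 に は お すすめ です !"))
# -> 良い商品でしたよ、ツムツムしてる人にはおすすめです!
```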

We previously made this dataset available in JMTEB, and here we cite the JMTEB dataset.

  • I have outlined why this dataset is filling an existing gap in mteb
  • I have tested that the dataset runs with the mteb package.
  • I have run the following models on the task (adding the results to the PR). These can be run using the mteb run -m {model_name} -t {task_name} command.
    • sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
    • intfloat/multilingual-e5-small
  • I have checked that the performance is neither trivial (both models gain close to perfect scores) nor random (both models gain close to random scores).
  • I have considered the size of the dataset and reduced it if it is too big (2048 examples is typically large enough for most tasks)

Here are some examples that show the difference between this dataset (JapaneseSentimentClassification) and the Japanese split of the original MultilingualSentimentClassification.

JapaneseSentimentClassification
In [1]: import mteb

In [2]: ja_sent_cls = mteb.get_task("JapaneseSentimentClassification")

In [3]: ja_sent_cls
Out[3]: JapaneseSentimentClassification(name='JapaneseSentimentClassification', languages=['jpn'])

In [4]: ja_sent_cls.load_data()

In [5]: ja_sent_cls.dataset
Out[5]: 
DatasetDict({
    train: Dataset({
        features: ['text', 'label'],
        num_rows: 9831
    })
    validation: Dataset({
        features: ['text', 'label'],
        num_rows: 1677
    })
    test: Dataset({
        features: ['text', 'label'],
        num_rows: 2552
    })
})

In [8]: ja_sent_cls.dataset["test"][500:510]
Out[8]: 
{'text': ['良い商品でしたよ、ツムツムしてる人にはおすすめです!また買います',
  '凄くいい感じです。厚みも気にならないし、買って良かったです。',
  '今使っている物が使えなくなったらのために買いました。早く発送されましたし、梱包も良かったです。',
  '今まで、いくつも延長ケーブルを使っていましたが、これは非常にいいです。がっしり接続しますし、接触不良が皆無です。最近 USB経由でMicroSDカードの変換アダプタを使っていたのですが、よく、数回に1度くらい認識しなかったり、"unkown device"になったり、最悪 "フォーマットしますか"とか出たり、結構、遭遇しました。いくつも、延長コードを替えて試しましたが、これがいい。密着したと言う、すごい安心感があります。家と会社、両方ともこれに置き換えました。お勧めです。',
  '床置きのミドルタワーPCに接続していたUSB2.0ハブ(ケーブル長1.5m)が購入後12年経過して壊れて(寿命と思われます)しまいました。当製品を購入し、ノートPC用として過去に購入済みのUSB2.0ハブ(ケーブル長6cm)を接続し、机上に置いて使用しています。ケーブル長がちょっと長過ぎたため、余分な長さは輪っかにして向かい合う2か所をそれぞれ結束バンドで括りました。 USBハブには、主に、コードレスマウスのレシーバーを接続しています。時々、フライトシミュレイターのジョイスティックを接続します。たまーに、USB外付けHDD、プリンター、iPod nano(第五世代)を接続します。 iPod nanoは電池残量なしの状態からフル充電まで3時間くらいかかりました。使用開始から3ヶ月程度経過したところです。『抜けやすい』とのカスタマーレビューを見かけましたが、届いた製品はしっかり噛んでくれます。USBハブ(コードレスマウスのレシーバーを接続)を接続したまま、ケーブルを持って逆さにして軽く10秒間ほど振っても外れませんでした。横方向に力を加えると少しグラグラしますが、外れることはないです。今のところ、抜き差しはちょうどいい感じです。特に問題ありません。',
  'PCから 1.5mケーブル付のUSBハブ→ 3m延長ケーブル(この商品)→ 1.5mUSBケーブル。という合計6mの長さで3Dプリンターを操作していますが、問題ありません。案外、大丈夫なんですね。',
  'PS3コントローラーを充電中でもいつもの位置で利用するために購入しました。問題なく充電できますし、操作遅延等もありませんでした。',
  '問題なく使えています。適度な弾力があり斜めにつって使用しても90度折れることはありません。対ノイズ効果については他の製品との違いや具体的な効果が目に見えて理解できてるわけではありませんので?です。',
  'PC本体を机下に置いているため、手元での接続用に購入しました。長すぎるかと思いつつ2mを選びましたが、モニタの支柱に1巻きしておけばずり落ち防止にもなるので、余裕のある長さを選んで正解でした。接続部は、邪魔にならずかつ見失うほど小さくもない、という大きさで、机の上での収まりがよいと思います。抜差しは、硬すぎずかつ外れにくい硬さだと思いますが、操作には両手が必要です。接続機器の認識速度はPCへの直挿しより数秒遅い気がします。',
  '接触不良を心配していたけれど、5本購入して1つも異常が無いのがありがたい。'],
 'label': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}

The Japanese split of MultilingualSentimentClassification
In [1]: import mteb

In [2]: senti = mteb.get_task("MultilingualSentimentClassification", languages=["jpn"])

In [3]: senti
Out[3]: MultilingualSentimentClassification(name='MultilingualSentimentClassification', languages=['jpn'])

In [4]: senti.load_data()

In [5]: senti
Out[5]: MultilingualSentimentClassification(name='MultilingualSentimentClassification', languages=['jpn'])

In [6]: senti.dataset
Out[6]: 
{'jpn': DatasetDict({
     train: Dataset({
         features: ['label', 'text'],
         num_rows: 9831
     })
     test: Dataset({
         features: ['label', 'text'],
         num_rows: 2552
     })
     validation: Dataset({
         features: ['label', 'text'],
         num_rows: 1677
     })
 })}

In [7]: senti.dataset["jpn"]["test"][500:510]
Out[7]: 
{'label': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
 'text': ['良い 商品 でした よ 、 ツムツム して る 人 に は お すすめ です ! また 買い ます\n',
  '凄く いい 感じ です 。 厚み も 気 に なら ない し 、 買って 良かった です 。\n',
  '今 使って いる 物 が 使え なく なったら の ため に 買い ました 。  早く 発送 さ れ ました し 、 梱包 も 良かった です 。\n',
  '今 まで 、 いく つ も 延長 ケーブル を 使って い ました が 、 これ は 非常に いい です 。  がっしり 接続 し ます し 、 接触 不良 が 皆無です 。  最近  USB 経由 で MicroSD カード の 変換 アダプタ を 使って いた のです が 、  よく 、 数 回 に 1 度 くらい 認識 し なかったり 、 "unkown  device" に なったり 、  最悪  " フォーマット し ます か " と か 出たり 、 結構 、 遭遇 し ました 。  いく つ も 、 延長 コード を 替えて 試し ました が 、 これ が いい 。  密着 した と 言う 、 すごい 安心 感 が あり ます 。  家 と 会社 、 両方 と も これ に 置き 換え ました 。  お 勧め です 。\n',
  '床 置き の ミドル タワー PC に 接続 して いた USB2.0 ハブ ( ケーブル 長 1.5m ) が  購入 後 12 年 経過 して 壊れて ( 寿命 と 思わ れ ます ) しまい ました 。  当 製品 を 購入 し 、 ノート PC 用 と して 過去 に 購入 済み の USB2.0 ハブ ( ケーブル 長 6cm ) を  接続 し 、 机上 に 置いて 使用 して い ます 。  ケーブル 長 が ちょっと 長 過ぎた ため 、 余分な 長 さ は 輪 っ か に して 向かい合う 2 か 所 を  それぞれ 結束 バンド で 括り ました 。  USB ハブ に は 、 主に 、 コードレス マウス の レシーバー を 接続 して い ます 。  時々 、 フライト シミュレイター の ジョイスティック を 接続 し ます 。  たまーに 、 USB 外 付け HDD 、 プリンター 、 iPod  nano ( 第 五 世 代 ) を 接続 し ます 。  iPod  nano は 電池 残 量 なし の 状態 から フル 充電 まで 3 時間 くらい かかり ました 。  使用 開始 から 3 ヶ月 程度 経過 した ところ です 。  『 抜け やすい 』 と の カスタマー レビュー を 見かけ ました が 、 届いた 製品 は  しっかり 噛んで くれ ます 。 USB ハブ ( コードレス マウス の レシーバー を 接続 ) を 接続 した まま 、  ケーブル を 持って 逆さ に して 軽く 10 秒 間 ほど 振って も 外れ ませ ん でした 。  横 方向 に 力 を 加える と 少し グラグラ し ます が 、 外れる こと は ない です 。  今 の ところ 、 抜き差し は ちょうど いい 感じ です 。  特に 問題 あり ませ ん 。\n',
  'PC から  1.5m ケーブル 付 のUSB ハブ → 3m 延長 ケーブル ( この 商品 ) → 1.5mUSB ケーブル 。  と いう 合計 6m の 長 さ で 3D プリンター を 操作 して い ます が 、 問題 あり ませ ん 。  案外 、 大丈夫な んです ね 。\n',
  'PS3 コント ローラー を 充電 中 でも いつも の 位置 で 利用 する ため に 購入 し ました 。  問題 なく 充電 でき ます し 、 操作 遅延 等 も あり ませ ん でした 。\n',
  '問題 なく 使えて い ます 。 適度な 弾力 が あり 斜めに つって 使用 して も 90 度 折れる こと は あり ませ ん 。 対 ノイズ 効果 に ついて は 他の 製品 と の 違い や 具体 的な 効果 が 目に見えて 理解 できて る わけで は あり ませ ん ので ? です 。\n',
  'PC 本体 を 机 下 に 置いて いる ため 、 手元 で の 接続 用 に 購入 し ました 。 長 すぎる か と 思い つつ 2 m を 選び ました が 、 モニタ の 支柱 に 1 巻き して おけば ずり 落ち 防止 に も なる ので 、 余裕 の ある 長 さ を 選んで 正解 でした 。 接続 部 は 、 邪魔に なら ず かつ 見失う ほど 小さく も ない 、 と いう 大き さ で 、 机 の 上 で の 収まり が よい と 思い ます 。 抜差し は 、 硬 すぎ ず かつ 外れ にくい 硬 さ だ と 思い ます が 、 操作 に は 両手 が 必要です 。 接続 機器 の 認識 速度 は PC へ の 直 挿し より 数 秒 遅い 気 が し ます 。\n',
  '接触 不良 を 心配 して いた けれど 、 5 本 購入 して 1 つ も 異常 が 無い の が ありがたい 。\n']}
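
As a quick illustrative check (a sketch assuming both tasks are loaded as in the sessions above; the two datasets align by index here, as the matching examples at 500:510 show), the texts match once whitespace and the trailing newline are ignored:

```python
# Compare one example from the new dataset against its tokenized
# counterpart in the original split, ignoring whitespace differences.
new = ja_sent_cls.dataset["test"][500]["text"]
old = senti.dataset["jpn"]["test"][500]["text"]
print(new.replace(" ", "") == old.replace(" ", "").rstrip("\n"))  # expected: True
```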

We tested several models to show that there is a significant difference depending on whether spaces are removed.

Evaluation script:
import mteb
import torch

from sentence_transformers import SentenceTransformer

model_names = [
    "cl-nagoya/ruri-v3-30m",
    "cl-nagoya/ruri-v3-70m",
    "cl-nagoya/ruri-v3-130m",
    "cl-nagoya/ruri-v3-310m",
    "intfloat/multilingual-e5-small",
    "intfloat/multilingual-e5-base",
    "intfloat/multilingual-e5-large",
    "sbintuitions/sarashina-embedding-v1-1b",
    "pkshatech/GLuCoSE-base-ja-v2",
    "pkshatech/RoSEtta-base-ja",
]

tasks = mteb.get_tasks(tasks=["MultilingualSentimentClassification", "JapaneseSentimentClassification"], languages=["jpn"])

all_results = {}

def evaluate(model_name):
    # Load weights in bfloat16 to reduce memory; trust_remote_code allows
    # models that ship custom modeling code to load.
    model = SentenceTransformer(model_name, trust_remote_code=True, model_kwargs={"torch_dtype": torch.bfloat16})
    evaluation = mteb.MTEB(tasks=tasks)
    results = evaluation.run(model, encode_kwargs={"batch_size": 4}, output_folder=f"results/{model_name.replace('/', '_')}")
    return results

for model_name in model_names:
    all_results[model_name] = evaluate(model_name)
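
The numbers in the tables below can be read back from the run results; a minimal sketch, assuming the TaskResult API (where get_score() returns the main score, accuracy for these tasks, as a fraction):

```python
# Print the main score per model and task from the results gathered above.
for model_name, results in all_results.items():
    for task_result in results:
        print(f"{model_name}\t{task_result.task_name}\t{task_result.get_score() * 100:.2f}")
```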

Test accuracy:

| model name | with spaces | without spaces |
|---|---|---|
| cl-nagoya/ruri-v3-30m | 76.80 | 87.71 |
| cl-nagoya/ruri-v3-70m | 80.97 | 88.47 |
| cl-nagoya/ruri-v3-130m | 84.40 | 89.42 |
| cl-nagoya/ruri-v3-310m | 89.13 | 90.47 |
| intfloat/multilingual-e5-small | 72.03 | 74.97 |
| intfloat/multilingual-e5-base | 72.38 | 78.44 |
| intfloat/multilingual-e5-large | 76.97 | 80.21 |
| sbintuitions/sarashina-embedding-v1-1b | 91.74 | 94.29 |
| pkshatech/GLuCoSE-base-ja-v2 | 70.31 | 80.58 |
| pkshatech/RoSEtta-base-ja | 65.27 | 73.28 |

Test F1:

| model name | with spaces | without spaces |
|---|---|---|
| cl-nagoya/ruri-v3-30m | 75.98 | 87.21 |
| cl-nagoya/ruri-v3-70m | 80.35 | 87.98 |
| cl-nagoya/ruri-v3-130m | 83.84 | 88.94 |
| cl-nagoya/ruri-v3-310m | 88.65 | 90.00 |
| intfloat/multilingual-e5-small | 71.46 | 74.40 |
| intfloat/multilingual-e5-base | 71.62 | 77.79 |
| intfloat/multilingual-e5-large | 76.48 | 79.67 |
| sbintuitions/sarashina-embedding-v1-1b | 91.32 | 93.97 |
| pkshatech/GLuCoSE-base-ja-v2 | 69.70 | 80.00 |
| pkshatech/RoSEtta-base-ja | 64.74 | 72.74 |

@isaac-chung changed the title from "Add JapaneseSentimentClassification" to "dataset: Add JapaneseSentimentClassification" on Jul 18, 2025
@isaac-chung (Collaborator) commented:
@lsz05 thanks for this interesting addition!

@isaac-chung merged commit 57438c2 into embeddings-benchmark:main on Jul 19, 2025 (8 of 9 checks passed).

isaac-chung added a commit that referenced this pull request on Aug 26, 2025:
* model: add image support for jina embeddings v4 (#2893)

* feat: unify text and image embeddings for all tasks

* fix: uniform batch size

* fix: update error message

* fix: update code task

* fix: update max length

* fix: apply review suggestions

* model: add kalm_models (kalm-emb-v2) ModelMeta (new PR) (#2889)

* feat: add KaLM_Embedding_X_0605 in kalm_models

* Update kalm_models.py for lint format

* kalm-emb-v2

* kalm-emb-v2

* kalm-emb-v2

* kalm-emb-v2

* kalm-emb-v2

---------

Co-authored-by: xinshuohu <xinshuohu@tencent.com>
Co-authored-by: Xinshuo Hu <yanshek.woo@gmail.com>

* Add Classification Evaluator unit test (#2838)

* Adding Classification Evaluator test

* Modifications due to the comments

* Update tests/test_evaluators/test_ClassificationEvaluator.py

Co-authored-by: Kenneth Enevoldsen <kenevoldsen@pm.me>

* Update tests/test_evaluators/test_ClassificationEvaluator.py

Co-authored-by: Kenneth Enevoldsen <kenevoldsen@pm.me>

* Modifications due to the comments

* Modifications due to the comments

---------

Co-authored-by: Kenneth Enevoldsen <kenevoldsen@pm.me>

* fix: update colpali engine models (#2905)

* adding vidore benchmarks

* fix typo

* clean vidore names + per lang eval

* lint

* vidore names

* bibtex fix

* fix revision

* vidore v2 citation

* update citation format and fix per-language mappings

* lint: citations

* typo citations

* fix revisiions

* lint

* fix colnomic3b revision

* fix colqwen2.5 revision + latest repo version

* fix query agmentation tokens

* colsmol revision

* 1.38.35

Automatically generated by python-semantic-release

* Evaluator tests (#2910)

* Adding Classification Evaluator test

* Modifications due to the comments

* Update tests/test_evaluators/test_ClassificationEvaluator.py

Co-authored-by: Kenneth Enevoldsen <kenevoldsen@pm.me>

* Update tests/test_evaluators/test_ClassificationEvaluator.py

Co-authored-by: Kenneth Enevoldsen <kenevoldsen@pm.me>

* Modifications due to the comments

* Modifications due to the comments

* Adding STSEvaluator and SummarizationEvaluator tests

* Correcting due to the comments

* Correcting due to the comments

---------

Co-authored-by: Kenneth Enevoldsen <kenevoldsen@pm.me>

* Classification dataset cleaning (#2900)

* Classification dataset cleaning

* Update pull request number

* Fix metadata test

* fix formatting

* add script for cleaning

* Update tasks & benchmarks tables

* dataset: Add JapaneseSentimentClassification (#2913)

Add JapaneseSentimentClassification

* Update tasks & benchmarks tables

* fix: change `passage` prompt to `document`  (#2912)

* change document to passage

* fix prompt names

* fix kwargs check

* fix default prompt

* 1.38.36

Automatically generated by python-semantic-release

* model: Add OpenSearch inf-free sparse encoding models (#2903)

add opensearch inf-free models

Co-authored-by: Isaac Chung <chungisaac1217@gmail.com>

* dataset: add BarExamQA dataset (#2916)

* Add BareExamQA retrieval task

* ran linter

* updated details

* updated details

* fixed subtype name

* fixed changes

* ran linter again

* Use `mteb.get_model` in adding_a_dataset.md (#2922)

Update adding_a_dataset.md

* fix: specify revision for opensearch (#2919)

specify revision for opensearch

* 1.38.37

Automatically generated by python-semantic-release

* Update the link for gemini-embedding-001 (#2928)

* fix: replace with passage (#2934)

* fix: Only import SparseEncoder once sentence-transformer version have been checked (#2940)

* fix: Only import SparseEncoder once sentence-transformer version have been checked

fixes #2936

* Update mteb/models/opensearch_neural_sparse_models.py

Co-authored-by: Isaac Chung <chungisaac1217@gmail.com>

---------

Co-authored-by: Isaac Chung <chungisaac1217@gmail.com>

* fix: Prevent incorrectly passing "selector_state" to `get_benchmark` (#2939)

The leaderboard would have (silent) errors where `get_benchmark` lead to a KeyError due to "selector_state" being passed as a default value. Setting `DEFAULT_BENCMARK_NAME` as the value solves this issue.

* docs: Update adding_a_dataset.md (#2947)

* docs: Update adding_a_dataset.md

* Update docs/adding_a_dataset.md

* ci: bump semantic release

* 1.38.38

Automatically generated by python-semantic-release

* dataset: Add BSARD v2, fixing the data loading issues of v1 (#2935)

* BSARD loader fixed

* BSARDv2 metadata fixed

* Update mteb/tasks/Retrieval/fra/BSARDRetrieval.py

---------

Co-authored-by: Kenneth Enevoldsen <kenevoldsen@pm.me>

* Update tasks & benchmarks tables

* dataset: add GovReport dataset (#2953)

* Added govreport task

* Updated description

* dataset: add BillSum datasets (#2943)

* Added BillSum datasets

* fixed billsumca

* Updated BillSumCA description

* Updated BillSumUS description

* Update mteb/tasks/Retrieval/eng/BillSumCA.py

Co-authored-by: Kenneth Enevoldsen <kenevoldsen@pm.me>

* Update mteb/tasks/Retrieval/eng/BillSumUS.py

Co-authored-by: Kenneth Enevoldsen <kenevoldsen@pm.me>

* lint

* lint

---------

Co-authored-by: Kenneth Enevoldsen <kenevoldsen@pm.me>
Co-authored-by: Isaac Chung <chungisaac1217@gmail.com>

* Update tasks & benchmarks tables

* fix: Add new benchmark beRuSciBench along with AbsTaskTextRegression (#2716)

* Add RuSciBench

* fix bitext mining lang

* Add regression task

* fix init

* add missing files

* Improve description

* Add superseded_by

* fix lint

* Update regression task to match with v2

* Add stratified_subsampling for regression task

* Add boostrap for regression task

* Rename task class, add model as evaluator argument

* fix import

* fix import 2

* fixes

* fix

* Rename regression model protocol

* Update tasks & benchmarks tables

* 1.38.39

Automatically generated by python-semantic-release

* qzhou-embedding model_meta & implementation (#2975)

* qzhou-embedding model_meta & implementation

* Update qzhou_models.py

* Update qzhou_models.py

Processing todo items(Add default instruction)

* Update qzhou_models.py

correct bge datalist

* Update qzhou_models.py

correct 'public_training_data'

* Update qzhou_models.py

* Update qzhou_models.py

* Update qzhou_models.py

* Update qzhou_models.py

* Update mteb/models/qzhou_models.py

Co-authored-by: Roman Solomatin <samoed.roman@gmail.com>

* Update mteb/models/qzhou_models.py

Co-authored-by: Kenneth Enevoldsen <kenevoldsen@pm.me>

* format qzhou_models.py for ruff check

---------

Co-authored-by: Roman Solomatin <samoed.roman@gmail.com>
Co-authored-by: Kenneth Enevoldsen <kenevoldsen@pm.me>

* model: Add Voyage 3.5 model configuration (#3005)

Add Voyage 3.5 model configuration

- Add voyage_3_5 ModelMeta with 1024 embed dimensions and 32000 max tokens
- Set release date to 2025-01-21 with revision 1
- Configure for cosine similarity with instruction support
- Include standard Voyage training datasets reference

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-authored-by: Claude <noreply@anthropic.com>

* model: BAAI/bge-m3-unsupervised Model (#3007)

* Add BAAI/bge-m3-unsupervised Model
(BAAI/bge_m3_retromae is commented out - the details are proper, but it fails during loading the model for me, so i commented out)

* Remove the commented retromae model

---------

Co-authored-by: fzowl <zoltan@voyageai.com>

* lint: Correcting lint errors (#3004)

* Adding Classification Evaluator test

* Modifications due to the comments

* Update tests/test_evaluators/test_ClassificationEvaluator.py

Co-authored-by: Kenneth Enevoldsen <kenevoldsen@pm.me>

* Update tests/test_evaluators/test_ClassificationEvaluator.py

Co-authored-by: Kenneth Enevoldsen <kenevoldsen@pm.me>

* Modifications due to the comments

* Modifications due to the comments

* Correcting the lint errors

---------

Co-authored-by: Kenneth Enevoldsen <kenevoldsen@pm.me>

* dataset: Added 50 Vietnamese dataset from vn-mteb (#2964)

* [ADD] 50 vietnamese dataset from vn-mteb

* [UPDATE] task metadata

* [UPDATE] import dependencies

* [UPDATE] task metadata, bibtext citation

* [UPDATE-TEST] test_model_meta

* [UPDATE] sample_creation to machine-translated and LM verified

* [ADD] sample creation machine-translated and LM verified

* [REMOVE] default fields metadata in Classfication tasks

* Update tasks & benchmarks tables

* model: Add Cohere embed-v4.0 model support (#3006)

* Add Cohere embed-v4.0 model support

- Add text-only embed-v4.0 model in cohere_models.py
- Add multimodal embed-v4.0 model in cohere_v.py
- Support configurable dimensions (256, 512, 1024, 1536)
- Support 128,000 token context length
- Support multimodal embedding (text, images, mixed PDFs)

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Add Cohere embed-v4.0 model support

Update cohere_v.py and cohere_models.py to include the new embed-v4.0 model with proper configuration and integration.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Claude <noreply@anthropic.com>

* Add OpenAI models with 512 dimension (#3008)

* Add OpenAI/text-embedding-3-small (512 dim)
Add OpenAI/text-embedding-3-large (512 dim)

* Correcting due to comments

---------

Co-authored-by: fzowl <zoltan@voyageai.com>

* Standardise task names and fix citation formatting (#3026)

fixes for name formatting

* Update tasks & benchmarks tables

* fix: Add missing training sets for qzhou (#3023)

* Supplement missing training sets

* reformat code

* Reorganize the data list format

* update qzhou_model meta

* 1.38.40

Automatically generated by python-semantic-release

* model: Add samilpwc_models meta (#3028)

* model: Add samilpwc_models meta

* Fix: Remove CONST

* Fix: Reformat File

* Update: model revision

* model: Add granite-vision-embedding model  (#3029)

* Add files via upload

* Address review comments

* Address review comments

* ruff format

* Update mteb/models/granite_vision_embedding_models.py

* lint error fix

---------

Co-authored-by: Kenneth Enevoldsen <kenevoldsen@pm.me>

* fix: incorrect revision for SNLRetrieval (#3033)

The provided revisions doesn't seem to be present on:
adrlau/navjordj-SNL_summarization_copy

Replacing with latest revision

* dataset: Add HumanEvalRetrieval task (#3022)

* Add HumanEvalRetrieval dataset

* Fix TaskMetadata structure and remove descriptive_stats

- Use TaskMetadata class instead of dict
- Remove descriptive_stats as requested in PR review
- Add date field and proper import structure

* Fix dataset path and use verified metadata

- Change path from zeroshot/humaneval-embedding-benchmark to embedding-benchmark/HumanEval
- Use actual description from HuggingFace dataset page
- Remove fabricated citation and reference
- Remove revision field that was incorrect
- Reference HuggingFace dataset page instead of arxiv

* Add correct revision hash to HumanEval

- Add revision hash: ed1f48a for reproducibility

* Fix HumanEval metadata validation

- Add date field for metadata completeness
- Add bibtex_citation field (empty string)
- Required for TaskMetadata validation to pass
- Should resolve PR test failure

* Address reviewer feedback

- Remove trust_remote_code parameter as requested
- Add revision parameter to load_dataset() calls for consistency
- Use metadata revision hash in dataset loading for reproducibility

* Fix field names in HumanEval dataset loading

Changed query_id/corpus_id to query-id/corpus-id to match actual dataset format.

* Fix deprecated metadata_dict usage

Use self.metadata.dataset instead of self.metadata_dict for v2.0 compatibility.

* Fix data structure for MTEB compatibility

- Organize data by splits as expected by MTEB retrieval tasks
- Convert scores to integers for pytrec_eval compatibility

* Address PR feedback for HumanEval dataset

- Add descriptive statistics using calculate_metadata_metrics()
- Enhance metadata description with dataset structure details
- Add complete BibTeX citation for original paper
- Update to full commit hash revision
- Add python-Code language tag for programming language
- Explain retrieval task formulation clearly

* Fix BibTeX citation formatting for HumanEvalRetrieval

- Update citation to match bibtexparser formatting requirements
- Fields now in alphabetical order with lowercase names
- Proper trailing commas and indentation

* Update tasks & benchmarks tables

* 1.38.41

Automatically generated by python-semantic-release

* ci: reduce parallel runs for when checking if a dataset exists (#3035)

The hope is that this will prevent many of the current [errors](https://github.com/embeddings-benchmark/mteb/actions/runs/17019125199/job/48245690831)

* ci: Updating rerun delays to prevent false positives errors

* ci: Updating rerun delays to prevent false positives errors

* model: Add GreenNode Vietnamese Embedding models (#2994)

* [ADD] 50 vietnamese dataset from vn-mteb

* [UPDATE] task metadata

* [UPDATE] import dependencies

* [UPDATE] task metadata, bibtext citation

* [UPDATE-TEST] test_model_meta

* [UPDATE] sample_creation to machine-translated and LM verified

* [ADD] sample creation machine-translated and LM verified

* [ADD] Vietnamese Embedding models

* [REMOVE] default fields metadata in Classfication tasks

* [UPDATE] model to vi-vn language specific file

* [FIX] lint

* [FIX] model loader

* model: add granite-embedding-english R2 models (#3050)

* fix: Updated revision for jina-embeddings-v4 (#3046)

* fix: jinav4 revision

Signed-off-by: admin <bo.wang@jina.ai>

* change revision instead of removing it

Signed-off-by: admin <bo.wang@jina.ai>

---------

Signed-off-by: admin <bo.wang@jina.ai>
Co-authored-by: admin <bo.wang@jina.ai>

* 1.38.42

Automatically generated by python-semantic-release

* Fix 3 VN-MTEB Pair Classification tasks (#3053)

* [ADD] 50 vietnamese dataset from vn-mteb

* [UPDATE] task metadata

* [UPDATE] import dependencies

* [UPDATE] task metadata, bibtext citation

* [UPDATE-TEST] test_model_meta

* [UPDATE] sample_creation to machine-translated and LM verified

* [ADD] sample creation machine-translated and LM verified

* [ADD] Vietnamese Embedding models

* [REMOVE] default fields metadata in Classfication tasks

* [UPDATE] model to vi-vn language specific file

* [FIX] lint

* [FIX] model loader

* [FIX] VN-MTEB 3 datasets PairClassification rename column

* dataset: Add mbpp retrieval (#3037)

* Add MBPP retrieval task

- Code retrieval task based on 378 Python programming problems
- Natural language queries matched to Python code implementations
- Uses python-Code evaluation language for code-specific metrics
- Includes proper citations and descriptive statistics

* Add MBPPRetrieval to imports

* Add descriptive statistics for MBPPRetrieval

* Reformatting

* Reformatting

* Update tasks & benchmarks tables

* dataset: Added wikisql retrieval (#3039)

* Add WikiSQL retrieval task

- Code retrieval task based on WikiSQL natural language to SQL dataset
- Natural language questions matched to SQL query implementations
- Uses sql-Code evaluation language for SQL-specific metrics
- Includes proper citations and descriptive statistics

* Add WikiSQLRetrieval to imports

* Add descriptive statistics for WikiSQLRetrieval

* Reformatting

* Reformatting

* Reformatting, correcting the revision

* Update tasks & benchmarks tables

* ci: Temporarily limit pytrec version to "pytrec-eval-terrier>=0.5.6, <0.5.8" to prevent errors

try to fix CI

* fix MBPPRetrieval revision (#3055)

Update MBPPRetrieval.py

Co-authored-by: Roman Solomatin <36135455+Samoed@users.noreply.github.com>

* fix: Add VN-MTEB benchmark and Leaderboard (#2995)

* [ADD] 50 vietnamese dataset from vn-mteb

* [UPDATE] task metadata

* [UPDATE] import dependencies

* [UPDATE] task metadata, bibtext citation

* [UPDATE-TEST] test_model_meta

* [UPDATE] sample_creation to machine-translated and LM verified

* [ADD] sample creation machine-translated and LM verified

* [ADD] VN-MTEB benchmark and leaderboard

* [FIX] wrong benchmark name

* [REMOVE] default fields metadata in Classfication tasks

* Update tasks & benchmarks tables

* 1.38.43

Automatically generated by python-semantic-release

* Add hc3finance retrieval (#3041)

* Add HC3Finance retrieval task

- Financial retrieval task based on HC3 Finance dataset
- Financial questions matched to human and AI-generated content
- Covers financial explanations, analysis, and educational content
- Includes proper citations and descriptive statistics

* Add HC3FinanceRetrieval to imports

* Add descriptive statistics for HC3FinanceRetrieval

* Reformatting

* Reformatting, correcting the revision

* Update mteb/tasks/Retrieval/eng/HC3FinanceRetrieval.py

---------

Co-authored-by: Isaac Chung <chungisaac1217@gmail.com>

* Add finqa retrieval (#3042)

* Add FinQA retrieval task

- Financial numerical reasoning retrieval task based on FinQA dataset
- Numerical financial questions matched to relevant document data
- Covers earnings reports with tables and quantitative financial data
- Includes proper citations and descriptive statistics

* Add FinQARetrieval to imports

* Add descriptive statistics for FinQARetrieval

* Reformatting

* Reformatting

* Update mteb/tasks/Retrieval/eng/FinQARetrieval.py

---------

Co-authored-by: Isaac Chung <chungisaac1217@gmail.com>

* Update tasks & benchmarks tables

* Add FinanceBenchRetrieval task (#3044)

* Add FinanceBenchRetrieval

* Update mteb/tasks/Retrieval/eng/FinanceBenchRetrieval.py

---------

Co-authored-by: Isaac Chung <chungisaac1217@gmail.com>

* Update tasks & benchmarks tables

* Add FreshStackRetrieval task (#3043)

* Add FreshStackRetrieval

* Reformatting, correcting the revision

* Dataset correction

* Update tasks & benchmarks tables

* dataset: Add ds1000 retrieval (#3038)

* Add DS1000 retrieval task

- Code retrieval task based on 1,000 data science programming problems
- Natural language queries matched to Python data science code
- Uses python-Code evaluation language for code-specific metrics
- Covers pandas, numpy, matplotlib, scikit-learn, and scipy libraries

* Add DS1000Retrieval to imports

* Add descriptive statistics for DS1000Retrieval

* Reformatting

* Reformatting

* Update tasks & benchmarks tables

* Add ChatDoctorRetrieval (#3045)

* Add ChatDoctorRetrieval

* Reformatting, correcting the revision

* Correct the dataset citation

* Correcting due to comments

* Update tasks & benchmarks tables

* Correcting the (new) DS1000 dataset's revision (#3063)

* Add DS1000 retrieval task

- Code retrieval task based on 1,000 data science programming problems
- Natural language queries matched to Python data science code
- Uses python-Code evaluation language for code-specific metrics
- Covers pandas, numpy, matplotlib, scikit-learn, and scipy libraries

* Add DS1000Retrieval to imports

* Add descriptive statistics for DS1000Retrieval

* Reformatting

* Reformatting

* Add DS1000Retrieval task implementation

* dataset: Add JinaVDR (#2942)

* feat: added jinavdr benchmark

* feat: added description for jinavdr

* feat: fixed licenses and added bibtex

* feat: made jinav4 compatible with vidore benchmark

* feat: corrected query numbers

* feat: removed print

* feat: added max pixel argument for jina models

* feat: score calculation on cpu

* feat: adjust jina model for new mteb code

* feat: code cleanup

* feat: corrected bibtex

* feat: make colpali run with jinavdr

* feat: fixed comments

* feat: better reference and fixed comments

* feat: added date for tasks

* feat: fixed missing metadata and bibtex

* feat: added descriptions per dataset

* Update tasks & benchmarks tables

* model: Add CoDi-Embedding-V1 (#3054)

* add codiemb-minicpm

* replace codiemb_minicpm with codi_model

* Update mteb/models/codi_model.py

Co-authored-by: Roman Solomatin <samoed.roman@gmail.com>

* Update mteb/models/codi_model.py

Co-authored-by: Roman Solomatin <samoed.roman@gmail.com>

* Update mteb/models/codi_model.py

Co-authored-by: Roman Solomatin <samoed.roman@gmail.com>

* update code

* update code

* reformat

---------

Co-authored-by: Roman Solomatin <samoed.roman@gmail.com>

* fix: ensure that there are always relevant docs attached to query (#3058)

* fix: ensure that there are always relevant docs attached to query

Here is brief test that it doesn't influence scores:
```py
t1 = mteb.get_task("TwitterHjerneRetrieval")
meta = mteb.get_model_meta("minishlab/potion-base-2M")

eval = mteb.MTEB(tasks=[t1])
res = eval.run(model=meta.load_model())

# before fix:
res[0].get_score()  # np.float64(0.02837)
res[0].scores
before_fix = {
    "train": [
        {
            "ndcg_at_1": 0.02597,
            "ndcg_at_3": 0.02213,
            "ndcg_at_5": 0.0262,
            "ndcg_at_10": 0.02837,
            "ndcg_at_20": 0.04548,
            "ndcg_at_100": 0.13527,
            "ndcg_at_1000": 0.24507,
            "map_at_1": 0.00866,
            "map_at_3": 0.01317,
            "map_at_5": 0.0149,
            "map_at_10": 0.01562,
            "map_at_20": 0.01898,
            "map_at_100": 0.02968,
            "map_at_1000": 0.03841,
            "recall_at_1": 0.00866,
            "recall_at_3": 0.02056,
            "recall_at_5": 0.02922,
            "recall_at_10": 0.03355,
            "recall_at_20": 0.08268,
            "recall_at_100": 0.43766,
            "recall_at_1000": 1.0,
            "precision_at_1": 0.02597,
            "precision_at_3": 0.02165,
            "precision_at_5": 0.01818,
            "precision_at_10": 0.01039,
            "precision_at_20": 0.01234,
            "precision_at_100": 0.01481,
            "precision_at_1000": 0.0034,
            "mrr_at_1": 0.025974,
            "mrr_at_3": 0.041126,
            "mrr_at_5": 0.04632,
            "mrr_at_10": 0.048485,
            "mrr_at_20": 0.058356,
            "mrr_at_100": 0.070186,
            "mrr_at_1000": 0.071349,
            "nauc_ndcg_at_1_max": 0.33969,
            "nauc_ndcg_at_1_std": -0.202864,
            "nauc_ndcg_at_1_diff1": -0.127,
            "nauc_ndcg_at_3_max": 0.409376,
            "nauc_ndcg_at_3_std": -0.039352,
            "nauc_ndcg_at_3_diff1": -0.022816,
            "nauc_ndcg_at_5_max": 0.250499,
            "nauc_ndcg_at_5_std": -0.115263,
            "nauc_ndcg_at_5_diff1": -0.057017,
            "nauc_ndcg_at_10_max": 0.238696,
            "nauc_ndcg_at_10_std": -0.138396,
            "nauc_ndcg_at_10_diff1": -0.045287,
            "nauc_ndcg_at_20_max": 0.154456,
            "nauc_ndcg_at_20_std": -0.070635,
            "nauc_ndcg_at_20_diff1": 0.074499,
            "nauc_ndcg_at_100_max": -0.005753,
            "nauc_ndcg_at_100_std": -0.074738,
            "nauc_ndcg_at_100_diff1": -0.005851,
            "nauc_ndcg_at_1000_max": 0.109439,
            "nauc_ndcg_at_1000_std": -0.089797,
            "nauc_ndcg_at_1000_diff1": -0.021634,
            "nauc_map_at_1_max": 0.33969,
            "nauc_map_at_1_std": -0.202864,
            "nauc_map_at_1_diff1": -0.127,
            "nauc_map_at_3_max": 0.385244,
            "nauc_map_at_3_std": -0.080638,
            "nauc_map_at_3_diff1": -0.060991,
            "nauc_map_at_5_max": 0.294871,
            "nauc_map_at_5_std": -0.119069,
            "nauc_map_at_5_diff1": -0.06234,
            "nauc_map_at_10_max": 0.285698,
            "nauc_map_at_10_std": -0.132856,
            "nauc_map_at_10_diff1": -0.055015,
            "nauc_map_at_20_max": 0.236619,
            "nauc_map_at_20_std": -0.100673,
            "nauc_map_at_20_diff1": -0.002619,
            "nauc_map_at_100_max": 0.15345,
            "nauc_map_at_100_std": -0.138888,
            "nauc_map_at_100_diff1": -0.02257,
            "nauc_map_at_1000_max": 0.171402,
            "nauc_map_at_1000_std": -0.134644,
            "nauc_map_at_1000_diff1": -0.034477,
            "nauc_recall_at_1_max": 0.33969,
            "nauc_recall_at_1_std": -0.202864,
            "nauc_recall_at_1_diff1": -0.127,
            "nauc_recall_at_3_max": 0.375072,
            "nauc_recall_at_3_std": -0.009643,
            "nauc_recall_at_3_diff1": -0.089168,
            "nauc_recall_at_5_max": 0.147691,
            "nauc_recall_at_5_std": -0.128654,
            "nauc_recall_at_5_diff1": -0.084259,
            "nauc_recall_at_10_max": 0.141055,
            "nauc_recall_at_10_std": -0.165932,
            "nauc_recall_at_10_diff1": -0.060966,
            "nauc_recall_at_20_max": 0.043863,
            "nauc_recall_at_20_std": -0.028374,
            "nauc_recall_at_20_diff1": 0.157575,
            "nauc_recall_at_100_max": -0.157183,
            "nauc_recall_at_100_std": -0.019437,
            "nauc_recall_at_100_diff1": 0.013395,
            # "nauc_recall_at_1000_max": nan,
            # "nauc_recall_at_1000_std": nan,
            # "nauc_recall_at_1000_diff1": nan,
            "nauc_precision_at_1_max": 0.33969,
            "nauc_precision_at_1_std": -0.202864,
            "nauc_precision_at_1_diff1": -0.127,
            "nauc_precision_at_3_max": 0.406318,
            "nauc_precision_at_3_std": 0.007031,
            "nauc_precision_at_3_diff1": -0.034709,
            "nauc_precision_at_5_max": 0.178131,
            "nauc_precision_at_5_std": -0.112493,
            "nauc_precision_at_5_diff1": -0.045535,
            "nauc_precision_at_10_max": 0.167897,
            "nauc_precision_at_10_std": -0.150626,
            "nauc_precision_at_10_diff1": -0.027811,
            "nauc_precision_at_20_max": 0.081428,
            "nauc_precision_at_20_std": -0.042304,
            "nauc_precision_at_20_diff1": 0.17278,
            "nauc_precision_at_100_max": -0.150619,
            "nauc_precision_at_100_std": 0.016133,
            "nauc_precision_at_100_diff1": -0.065571,
            "nauc_precision_at_1000_max": -0.017244,
            "nauc_precision_at_1000_std": 0.046614,
            "nauc_precision_at_1000_diff1": -0.028258,
            "nauc_mrr_at_1_max": 0.33969,
            "nauc_mrr_at_1_std": -0.202864,
            "nauc_mrr_at_1_diff1": -0.127,
            "nauc_mrr_at_3_max": 0.409511,
            "nauc_mrr_at_3_std": -0.064671,
            "nauc_mrr_at_3_diff1": -0.01911,
            "nauc_mrr_at_5_max": 0.319584,
            "nauc_mrr_at_5_std": -0.103546,
            "nauc_mrr_at_5_diff1": -0.025109,
            "nauc_mrr_at_10_max": 0.309614,
            "nauc_mrr_at_10_std": -0.117564,
            "nauc_mrr_at_10_diff1": -0.019691,
            "nauc_mrr_at_20_max": 0.262976,
            "nauc_mrr_at_20_std": -0.092222,
            "nauc_mrr_at_20_diff1": 0.024507,
            "nauc_mrr_at_100_max": 0.256052,
            "nauc_mrr_at_100_std": -0.094249,
            "nauc_mrr_at_100_diff1": 0.012432,
            "nauc_mrr_at_1000_max": 0.260112,
            "nauc_mrr_at_1000_std": -0.098845,
            "nauc_mrr_at_1000_diff1": 0.009697,
            "main_score": 0.02837,
            "hf_subset": "default",
            "languages": ["dan-Latn"],
        }
    ]
}

# with update:
res[0].get_score()  # np.float64(0.02837)
res[0].scores
with_fix = {
    "train": [
        {
            "ndcg_at_1": 0.02597,
            "ndcg_at_3": 0.02213,
            "ndcg_at_5": 0.0262,
            "ndcg_at_10": 0.02837,
            "ndcg_at_20": 0.04548,
            "ndcg_at_100": 0.13527,
            "ndcg_at_1000": 0.24507,
            "map_at_1": 0.00866,
            "map_at_3": 0.01317,
            "map_at_5": 0.0149,
            "map_at_10": 0.01562,
            "map_at_20": 0.01898,
            "map_at_100": 0.02968,
            "map_at_1000": 0.03841,
            "recall_at_1": 0.00866,
            "recall_at_3": 0.02056,
            "recall_at_5": 0.02922,
            "recall_at_10": 0.03355,
            "recall_at_20": 0.08268,
            "recall_at_100": 0.43766,
            "recall_at_1000": 1.0,
            "precision_at_1": 0.02597,
            "precision_at_3": 0.02165,
            "precision_at_5": 0.01818,
            "precision_at_10": 0.01039,
            "precision_at_20": 0.01234,
            "precision_at_100": 0.01481,
            "precision_at_1000": 0.0034,
            "mrr_at_1": 0.025974,
            "mrr_at_3": 0.041126,
            "mrr_at_5": 0.04632,
            "mrr_at_10": 0.048485,
            "mrr_at_20": 0.058356,
            "mrr_at_100": 0.070186,
            "mrr_at_1000": 0.071349,
            "nauc_ndcg_at_1_max": 0.33969,
            "nauc_ndcg_at_1_std": -0.202864,
            "nauc_ndcg_at_1_diff1": -0.127,
            "nauc_ndcg_at_3_max": 0.409376,
            "nauc_ndcg_at_3_std": -0.039352,
            "nauc_ndcg_at_3_diff1": -0.022816,
            "nauc_ndcg_at_5_max": 0.250499,
            "nauc_ndcg_at_5_std": -0.115263,
            "nauc_ndcg_at_5_diff1": -0.057017,
            "nauc_ndcg_at_10_max": 0.238696,
            "nauc_ndcg_at_10_std": -0.138396,
            "nauc_ndcg_at_10_diff1": -0.045287,
            "nauc_ndcg_at_20_max": 0.154456,
            "nauc_ndcg_at_20_std": -0.070635,
            "nauc_ndcg_at_20_diff1": 0.074499,
            "nauc_ndcg_at_100_max": -0.005753,
            "nauc_ndcg_at_100_std": -0.074738,
            "nauc_ndcg_at_100_diff1": -0.005851,
            "nauc_ndcg_at_1000_max": 0.109439,
            "nauc_ndcg_at_1000_std": -0.089797,
            "nauc_ndcg_at_1000_diff1": -0.021634,
            "nauc_map_at_1_max": 0.33969,
            "nauc_map_at_1_std": -0.202864,
            "nauc_map_at_1_diff1": -0.127,
            "nauc_map_at_3_max": 0.385244,
            "nauc_map_at_3_std": -0.080638,
            "nauc_map_at_3_diff1": -0.060991,
            "nauc_map_at_5_max": 0.294871,
            "nauc_map_at_5_std": -0.119069,
            "nauc_map_at_5_diff1": -0.06234,
            "nauc_map_at_10_max": 0.285698,
            "nauc_map_at_10_std": -0.132856,
            "nauc_map_at_10_diff1": -0.055015,
            "nauc_map_at_20_max": 0.236619,
            "nauc_map_at_20_std": -0.100673,
            "nauc_map_at_20_diff1": -0.002619,
            "nauc_map_at_100_max": 0.15345,
            "nauc_map_at_100_std": -0.138888,
            "nauc_map_at_100_diff1": -0.02257,
            "nauc_map_at_1000_max": 0.171402,
            "nauc_map_at_1000_std": -0.134644,
            "nauc_map_at_1000_diff1": -0.034477,
            "nauc_recall_at_1_max": 0.33969,
            "nauc_recall_at_1_std": -0.202864,
            "nauc_recall_at_1_diff1": -0.127,
            "nauc_recall_at_3_max": 0.375072,
            "nauc_recall_at_3_std": -0.009643,
            "nauc_recall_at_3_diff1": -0.089168,
            "nauc_recall_at_5_max": 0.147691,
            "nauc_recall_at_5_std": -0.128654,
            "nauc_recall_at_5_diff1": -0.084259,
            "nauc_recall_at_10_max": 0.141055,
            "nauc_recall_at_10_std": -0.165932,
            "nauc_recall_at_10_diff1": -0.060966,
            "nauc_recall_at_20_max": 0.043863,
            "nauc_recall_at_20_std": -0.028374,
            "nauc_recall_at_20_diff1": 0.157575,
            "nauc_recall_at_100_max": -0.157183,
            "nauc_recall_at_100_std": -0.019437,
            "nauc_recall_at_100_diff1": 0.013395,
            # "nauc_recall_at_1000_max": nan,
            # "nauc_recall_at_1000_std": nan,
            # "nauc_recall_at_1000_diff1": nan,
            "nauc_precision_at_1_max": 0.33969,
            "nauc_precision_at_1_std": -0.202864,
            "nauc_precision_at_1_diff1": -0.127,
            "nauc_precision_at_3_max": 0.406318,
            "nauc_precision_at_3_std": 0.007031,
            "nauc_precision_at_3_diff1": -0.034709,
            "nauc_precision_at_5_max": 0.178131,
            "nauc_precision_at_5_std": -0.112493,
            "nauc_precision_at_5_diff1": -0.045535,
            "nauc_precision_at_10_max": 0.167897,
            "nauc_precision_at_10_std": -0.150626,
            "nauc_precision_at_10_diff1": -0.027811,
            "nauc_precision_at_20_max": 0.081428,
            "nauc_precision_at_20_std": -0.042304,
            "nauc_precision_at_20_diff1": 0.17278,
            "nauc_precision_at_100_max": -0.150619,
            "nauc_precision_at_100_std": 0.016133,
            "nauc_precision_at_100_diff1": -0.065571,
            "nauc_precision_at_1000_max": -0.017244,
            "nauc_precision_at_1000_std": 0.046614,
            "nauc_precision_at_1000_diff1": -0.028258,
            "nauc_mrr_at_1_max": 0.33969,
            "nauc_mrr_at_1_std": -0.202864,
            "nauc_mrr_at_1_diff1": -0.127,
            "nauc_mrr_at_3_max": 0.409511,
            "nauc_mrr_at_3_std": -0.064671,
            "nauc_mrr_at_3_diff1": -0.01911,
            "nauc_mrr_at_5_max": 0.319584,
            "nauc_mrr_at_5_std": -0.103546,
            "nauc_mrr_at_5_diff1": -0.025109,
            "nauc_mrr_at_10_max": 0.309614,
            "nauc_mrr_at_10_std": -0.117564,
            "nauc_mrr_at_10_diff1": -0.019691,
            "nauc_mrr_at_20_max": 0.262976,
            "nauc_mrr_at_20_std": -0.092222,
            "nauc_mrr_at_20_diff1": 0.024507,
            "nauc_mrr_at_100_max": 0.256052,
            "nauc_mrr_at_100_std": -0.094249,
            "nauc_mrr_at_100_diff1": 0.012432,
            "nauc_mrr_at_1000_max": 0.260112,
            "nauc_mrr_at_1000_std": -0.098845,
            "nauc_mrr_at_1000_diff1": 0.009697,
            "main_score": 0.02837,
            "hf_subset": "default",
            "languages": ["dan-Latn"],
        }
    ]
}

# check
with_fix == before_fix  # True
```

* restructure

* format

* relax pytrec versions

* fix incorrect parsing

* 1.38.44

Automatically generated by python-semantic-release

* Correcting the JINA models with SentenceTransformerWrapper (#3071)

* ci: Add stale workflow (#3066)

* add stale workflow

* add permissions

* add bug label to bug issue template

* revert bug issue and only look at more info needed issues

* more accurate name

* override default

* fix: open_clip package validation (#3073)

* 1.38.45

Automatically generated by python-semantic-release

* fix: Update revision for  qzhou models (#3069)

* 1.38.46

Automatically generated by python-semantic-release

* Fix the reference link for CoDi-Embedding-V1 (#3075)

Fix reference link

* rename passage to document

* format

---------

Signed-off-by: admin <bo.wang@jina.ai>
Co-authored-by: Mohammad Kalim Akram <kalim.akram@jina.ai>
Co-authored-by: ItsukiFujii <42373615+ItsukiFujii@users.noreply.github.com>
Co-authored-by: xinshuohu <xinshuohu@tencent.com>
Co-authored-by: Xinshuo Hu <yanshek.woo@gmail.com>
Co-authored-by: fzowl <160063452+fzowl@users.noreply.github.com>
Co-authored-by: Kenneth Enevoldsen <kenevoldsen@pm.me>
Co-authored-by: Paul Teiletche <73120933+paultltc@users.noreply.github.com>
Co-authored-by: github-actions <github-actions@github.com>
Co-authored-by: Alexey Vatolin <vatolinalex@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: lsz05 <shengzhe.li@sbintuitions.co.jp>
Co-authored-by: zhichao-aws <zhichaog@amazon.com>
Co-authored-by: Isaac Chung <chungisaac1217@gmail.com>
Co-authored-by: Abdur-Rahman Butler <79828536+abdurrahmanbutler@users.noreply.github.com>
Co-authored-by: Feiyang <feiyangc@google.com>
Co-authored-by: Kenneth Enevoldsen <kennethcenevoldsen@gmail.com>
Co-authored-by: semantic-release <semantic-release>
Co-authored-by: Nikolay Banar <nikc20008@gmail.com>
Co-authored-by: Penny Yu <51702222+PennyYu123@users.noreply.github.com>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: fzoll <5575946+fzoll@users.noreply.github.com>
Co-authored-by: fzowl <zoltan@voyageai.com>
Co-authored-by: Bao Loc Pham <67360122+BaoLocPham@users.noreply.github.com>
Co-authored-by: Kritias <50093609+ElPlaguister@users.noreply.github.com>
Co-authored-by: roipony <roipony@gmail.com>
Co-authored-by: Aashka Trivedi <aashka.trivedi@gmail.com>
Co-authored-by: Saba Sturua <45267439+jupyterjazz@users.noreply.github.com>
Co-authored-by: admin <bo.wang@jina.ai>
Co-authored-by: Maximilian Werk <maximilian.werk@gmx.de>
Co-authored-by: Victor <zbwkeepgoing@126.com>
Co-authored-by: Yong woo Song <ywsong.dev@kakao.com>