- Use your model to generate all the results. For the other tasks, please run the coir package (not MTEB) to obtain the results directly. For CodeSearchNet and CodeSearchNet-CCR you will get results for six programming languages each (CodeSearchNet-go, CodeSearchNet-java, and so on, and CodeSearchNet-ccr-go, CodeSearchNet-ccr-java, and so on), giving 12 files in total. You then need to calculate the average over all sub-tasks within each of these two tasks separately (a small averaging sketch follows the example code below).
import coir
from coir.data_loader import get_tasks
from coir.evaluation import COIR
from coir.models import YourCustomDEModel
model_name = "intfloat/e5-base-v2"
# Load the model
model = YourCustomDEModel(model_name=model_name)
# Get tasks
# All tasks: ["codetrans-dl", "stackoverflow-qa", "apps", "codefeedback-mt", "codefeedback-st",
#             "codetrans-contest", "synthetic-text2sql", "cosqa", "codesearchnet", "codesearchnet-ccr"]
tasks = get_tasks(tasks=["codetrans-dl"])
# Initialize evaluation
evaluation = COIR(tasks=tasks, batch_size=128)
# Run evaluation
results = evaluation.run(model, output_folder=f"results/{model_name}")
print(results)
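Once the 12 per-language result files are available, each of the two task-level scores is the mean of its six sub-task scores. Below is a minimal sketch of that averaging step; it is not part of the coir API, the placeholder values must be replaced with the NDCG@10 numbers from your own result files, and the six-language split (go, java, javascript, php, python, ruby) is assumed to follow the standard CodeSearchNet setup.
from statistics import mean

# Per-language NDCG@10 scores read from your own result files (placeholders here).
codesearchnet = {
    "go": 0.0, "java": 0.0, "javascript": 0.0,
    "php": 0.0, "python": 0.0, "ruby": 0.0,
}
codesearchnet_ccr = dict(codesearchnet)  # fill in the CodeSearchNet-CCR scores the same way

# The task-level numbers to report are the unweighted means over the sub-tasks.
print(f"CodeSearchNet:     {mean(codesearchnet.values()):.2f}")
print(f"CodeSearchNet-CCR: {mean(codesearchnet_ccr.values()):.2f}")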
- Create a new issue in the repository with the following format:
- Title: Upload new model performance
- Content:
{ "Model": "Contriever", "Model Size (Million Parameters)": 110, "URL": "https://huggingface.co/facebook/contriever-msmarco", "Apps": 5.14, "CosQA": 14.21, "Synthetic Text2sql": 45.46, "CodeSearchNet": 34.72, "CodeSearchNet-CCR": 35.74, "CodeTrans-Contest": 44.16, "CodeTrans-DL": 24.21, "StackOverFlow QA": 66.05, "CodeFeedBack-ST": 55.11, "CodeFeedBack-MT": 39.23, "Avg": 36.40 }
- The repository administrator will update the results on the website.