Fix loading GAIA dataset in Open DeepResearch example #499
Conversation
I'm submitting tests on the test set this afternoon; after that we can merge this PR!
@albertvillanova tests are submitted, and we rank n°2 on the test set, so even better than our n°3 on the validation set, and we're the best pass@1 solution! 🥳
@aymeric-roucher, awesome news!!! 🚀 Regarding having custom local data, I am afraid that would make our results unreproducible.
I added a condition to download the GAIA data files only if they do not exist locally: 9f8237a
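The download-if-missing condition described above can be sketched as follows. This is a minimal illustration, not the commit's actual code: the `ensure_gaia_data` name and the injectable `download` callable are hypothetical (in practice the callable could wrap `huggingface_hub.snapshot_download` for the gated `gaia-benchmark/GAIA` dataset repo).

```python
from pathlib import Path
from typing import Callable

def ensure_gaia_data(data_dir: Path, download: Callable[[Path], None]) -> Path:
    """Trigger `download` only when the local GAIA data directory is missing.

    Hypothetical helper: `download` stands in for the actual fetch step,
    e.g. huggingface_hub.snapshot_download(repo_id="gaia-benchmark/GAIA",
    repo_type="dataset", local_dir=str(data_dir)).
    """
    if not data_dir.exists():
        data_dir.mkdir(parents=True)
        download(data_dir)  # only reached on the first run
    return data_dir
```

On subsequent runs the directory already exists, so the (slow, authenticated) download is skipped entirely.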
Failing test is unrelated. See fixing PR:
@albertvillanova the results are already based on custom local data. Could you add this disclaimer somewhere in the script or the README, as you deem fit?
# FULL REPRODUCIBILITY OF RESULTS
# The data used in our submissions to GAIA was augmented in this way:
# For each single-page .pdf or .xls file, the file was opened in a file reader (macOS Sonoma Numbers or Preview), and a ".png" screenshot was taken and added to the folder.
# Then, for any file used in a question, the file loading system checks whether a ".png" version of the file exists and loads it instead of the original if it does.
# This process was done manually but could be automated.
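The loading rule in the disclaimer above (prefer a `.png` sibling of the file when one exists) can be sketched like this. `resolve_file` is a hypothetical helper name, not a function from the repository:

```python
from pathlib import Path

def resolve_file(path: str) -> str:
    """Return the '.png' screenshot next to `path` if one exists, else `path`.

    Mirrors the manual augmentation described above: each single-page .pdf
    or .xls file may have a same-named ".png" screenshot in the same folder,
    which the loader should pick up in preference to the original.
    """
    png = Path(path).with_suffix(".png")
    return str(png) if png.exists() else path
```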
@aymeric-roucher: just to be sure, is it OK to do that data manipulation with the GAIA private test set as well?
eval_ds = eval_ds.rename_columns({"Question": "question", "Final answer": "true_answer", "Level": "task"})

def preprocess_file_paths(row):
@albertvillanova removing this change and using full file paths instead of shorter file names would take a big toll on agent scores.
Indeed, full paths in eval_ds["file_path"] look like this: '/Users/aymeric/.cache/huggingface/datasets/downloads/80a0374969d2717780654fca0340a4670da0ca36a97aab0459ad5daeb29c84a6'.
It's much harder for an LLM to write this in a tool call to inspect the file than to write the shortened name.
So I think we should keep relative file paths rather than absolute ones. They could be renamed to something like relative_file_path.
By the way, I don't know why file paths in the dataset don't match file names and have no extension; this could be another problem to fix.
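The shortening step under discussion could look like the sketch below. The `preprocess_file_paths` name matches the function in the diff, but the body, the `file_name`/`file_path` column usage, and the `target_dir` default are assumptions for illustration: the hashed, extension-less cache path is copied to a short name the agent can plausibly type in a tool call.

```python
import os
import shutil

def preprocess_file_paths(row, target_dir="data/gaia_files"):
    """Copy the hashed HF cache file to a short, extension-bearing name.

    Sketch only: assumes each row carries the original "file_name" alongside
    the absolute cached "file_path"; "data/gaia_files" is a hypothetical
    target directory, not one taken from the repository.
    """
    if row["file_path"]:
        short = os.path.join(target_dir, row["file_name"])
        if not os.path.exists(short):
            os.makedirs(target_dir, exist_ok=True)
            shutil.copy(row["file_path"], short)
        row["file_path"] = short  # agent now sees e.g. data/gaia_files/table.xlsx
    return row
```

A mapping like this could be applied per row with `eval_ds.map(preprocess_file_paths)`, leaving questions without an attached file untouched.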
Those paths appear in your case because you did not load the dataset with:
load_dataset("data/gaia/GAIA.py", "2023_all")[SET]
but with:
load_dataset("gaia-benchmark/GAIA", "2023_all")[SET]
Let me show you what I get.
Superseded by #1266.
Fix loading GAIA dataset in Open DeepResearch example.
Currently, an undocumented manual download of the data files is required. Otherwise, running the script raises a FileNotFoundError: