I am trying to run evaluation on ImageNet for a basic ViT, following the instructions:
```python
from datasets import load_dataset
from evaluate import evaluator
from transformers import pipeline

data = load_dataset("imagenet-1k", split="validation", use_auth_token=True)
pipe = pipeline(
    task="image-classification",
    model="google/vit-base-patch16-224",
)
task_evaluator = evaluator("image-classification")
eval_results = task_evaluator.compute(
    model_or_pipeline=pipe,
    data=data,
    metric="accuracy",
    label_mapping=pipe.model.config.label2id,
    device=0,
)
```
However, although the documentation says the evaluator automatically detects the GPU and runs on it, it does not. This can be fixed by passing the model and feature extractor separately instead of a pipeline:
```python
from datasets import load_dataset
from evaluate import evaluator
from transformers import ViTFeatureExtractor, ViTForImageClassification

data = load_dataset("imagenet-1k", split="validation", use_auth_token=True)
model = ViTForImageClassification.from_pretrained("google/vit-base-patch16-224")
feature_extractor = ViTFeatureExtractor.from_pretrained("google/vit-base-patch16-224")
task_evaluator = evaluator("image-classification")
eval_results = task_evaluator.compute(
    model_or_pipeline=model,
    feature_extractor=feature_extractor,
    data=data,
    metric="accuracy",
    label_mapping=model.config.label2id,
    device=0,
)
```
Is it possible to use a pipeline directly on the GPU instead of loading the model and feature extractor separately? Also, can we control the batch size during the evaluation?
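As a possible workaround (a sketch, not a confirmed fix): `transformers.pipeline()` itself accepts a `device` argument, and pipelines support batching via `batch_size`, so the pipeline could be placed on the GPU and given a batch size before it is handed to the evaluator:

```python
from transformers import pipeline

# Sketch: construct the pipeline on the GPU with batched inference,
# then pass it to task_evaluator.compute(model_or_pipeline=pipe, ...).
# The batch_size value of 64 is an arbitrary example.
pipe = pipeline(
    task="image-classification",
    model="google/vit-base-patch16-224",
    device=0,       # place the pipeline's model on the first GPU
    batch_size=64,  # batch inputs during inference
)
```

Whether the evaluator then respects this placement instead of moving the model itself is exactly what this issue is asking about.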
I am using transformers==4.21.2, evaluate==0.2.2, datasets==2.4.0.