Add runtime metrics to Evaluator #125

@lewtun

Description

Currently, the Evaluator class for text-classification computes the model metrics on a given dataset. In addition to model metrics, it would be nice if the Evaluator could also report runtime metrics like eval_runtime (latency) and eval_samples_per_second (throughput).

cc @philschmid
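
A minimal sketch of how this could look, assuming the runtime is measured around the existing `compute()` call of the evaluator; the `compute_with_runtime_metrics` helper and the Trainer-style metric names are illustrative, not part of the current API:

```python
import time

from datasets import load_dataset
from evaluate import evaluator


def compute_with_runtime_metrics(compute_fn, num_samples, **compute_kwargs):
    """Hypothetical helper: time an evaluation call and attach
    Trainer-style runtime metrics to the returned metrics dict."""
    start = time.perf_counter()
    metrics = compute_fn(**compute_kwargs)
    elapsed = time.perf_counter() - start
    metrics["eval_runtime"] = elapsed                           # latency in seconds
    metrics["eval_samples_per_second"] = num_samples / elapsed  # throughput
    return metrics


# Example usage on a small SST-2 slice (model and dataset choices are illustrative).
data = load_dataset("glue", "sst2", split="validation[:64]")
task_evaluator = evaluator("text-classification")
results = compute_with_runtime_metrics(
    task_evaluator.compute,
    num_samples=len(data),
    model_or_pipeline="distilbert-base-uncased-finetuned-sst-2-english",
    data=data,
    input_column="sentence",
    metric="accuracy",
    label_mapping={"NEGATIVE": 0, "POSITIVE": 1},
)
print(results)  # model metrics plus eval_runtime and eval_samples_per_second
```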
