Now that Whisper is integrated into transformers and there is a large number of speech2text models on the Hub, it might be interesting to have a speech2text evaluator that computes common metrics like word error rate.
This would be especially useful for optimum users who want to know whether their exported models suffer any degradation from quantization, etc.
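For reference, here's a rough sketch of the kind of workflow such an evaluator could wrap: run an ASR pipeline over a dataset and score it with the existing `wer` metric. The model and dataset names below are just examples, and text normalization is glossed over.

```python
import evaluate
from datasets import load_dataset
from transformers import pipeline

# Example model/dataset, not prescriptions; any ASR checkpoint on the Hub would do.
asr = pipeline("automatic-speech-recognition", model="openai/whisper-tiny.en")
ds = load_dataset("librispeech_asr", "clean", split="validation[:16]")

wer = evaluate.load("wer")

# Transcribe the raw waveforms (assumed to match the model's sampling rate)
# and compare against the reference transcripts.
predictions = [asr(sample["audio"]["array"])["text"] for sample in ds]
references = [sample["text"] for sample in ds]

print(wer.compute(predictions=predictions, references=references))
```

An evaluator would mainly take care of the boilerplate above (batching, column mapping, metric loading) behind a single `compute` call, similar to the existing text-classification evaluator.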
I don't have the bandwidth to tackle this right now, but maybe someone in the community is excited to?