The README.md says "more efficient batch inference resulting in large-v2 with *60-70x REAL TIME speed (not provided in this repo)".
Will this eventually be integrated into this repo, too? That would be really awesome. If so, is there a rough estimate of when that might happen?
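For context, what I have in mind is roughly the kind of batching sketched below, just an illustration using plain openai-whisper (not this repo's implementation); the segment files, language option, and batch contents are placeholders, and the segments are assumed to already be cut to at most 30 s, e.g. by a VAD step:

```python
import torch
import whisper

model = whisper.load_model("large-v2")
segment_paths = ["seg_000.wav", "seg_001.wav", "seg_002.wav"]  # hypothetical segment files

# Build a batch of log-Mel spectrograms, each padded/trimmed to the 30 s window.
mels = torch.stack([
    whisper.log_mel_spectrogram(
        whisper.pad_or_trim(whisper.load_audio(p)),
        n_mels=model.dims.n_mels,
    )
    for p in segment_paths
]).to(model.device)

# Decode the whole batch in one forward pass instead of segment by segment.
options = whisper.DecodingOptions(language="en", without_timestamps=True)
results = whisper.decode(model, mels, options)
for path, result in zip(segment_paths, results):
    print(path, result.text)
```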