More efficient batch inference resulting in large-v2 with *60-70x REAL TIME speed (now in custom v3 branch, see comment for details) #159

@DavidFarago

Description

The README.md says "more efficient batch inference resulting in large-v2 with *60-70x REAL TIME speed (not provided in this repo)".

Will this eventually be integrated into this repo, too? That would be really awesome. If so, is there a rough time estimate when it will be integrated?

Is this related to #57?

Labels: enhancement (New feature or request), question (Further information is requested)
