
Using Ollama models #2

@adieyal

`OSError: ollama/llama3.1 is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'`

It took me a while to figure out why I couldn't use llama3.1 here when it worked in another virtual environment. I eventually realised that the pipeline-llm extra is needed.

Perhaps it would be helpful to mention this extra in the installation instructions?
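
For anyone hitting the same error, installing with the pipeline-llm extra resolved it for me. A minimal sketch of the install command (the package name below is a placeholder, substitute this project's actual PyPI name):

```sh
# Install with the pipeline-llm extra so Ollama-backed models work.
# "your-package" is a placeholder for this project's PyPI package name.
pip install "your-package[pipeline-llm]"
```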
