Conversation

bradyjoslin (Contributor)

  1. Added the current Perplexity models to the default configuration template.

  2. The online models were giving low-quality responses. The Perplexity docs state: "It is recommended to use only single-turn conversations and avoid system prompts for the online LLMs (sonar-small-online and sonar-medium-online)."

    I experimented and found that omitting some of the ChatCompletionRequest properties fixes the response quality issues for these online models, so I added a conditional to mods.go.
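The conditional described above can be sketched roughly as follows. This is an illustrative Go snippet, not the actual mods.go diff: the struct fields and helper names (`isOnlineModel`, `buildRequest`) are assumptions standing in for the real go-openai types, and the idea is simply to detect an `-online` Perplexity model and send a minimal single-turn request without a system prompt or tuning parameters.

```go
package main

import (
	"fmt"
	"strings"
)

// ChatMessage mirrors the role/content shape of an OpenAI-style chat message.
type ChatMessage struct {
	Role    string
	Content string
}

// ChatCompletionRequest is a trimmed-down stand-in for the request struct;
// the field set here is illustrative, not the exact go-openai definition.
type ChatCompletionRequest struct {
	Model       string
	Messages    []ChatMessage
	Temperature float32
	TopP        float32
}

// isOnlineModel reports whether the model is one of Perplexity's "online"
// variants (e.g. sonar-medium-online), which the docs say work best with
// single-turn conversations and no system prompt.
func isOnlineModel(model string) bool {
	return strings.HasSuffix(model, "-online")
}

// buildRequest keeps the request minimal for online models: just the model
// and the latest user message. Other models get the full request.
func buildRequest(model, systemPrompt, userPrompt string) ChatCompletionRequest {
	if isOnlineModel(model) {
		return ChatCompletionRequest{
			Model:    model,
			Messages: []ChatMessage{{Role: "user", Content: userPrompt}},
		}
	}
	return ChatCompletionRequest{
		Model: model,
		Messages: []ChatMessage{
			{Role: "system", Content: systemPrompt},
			{Role: "user", Content: userPrompt},
		},
		Temperature: 0.7,
		TopP:        1,
	}
}

func main() {
	online := buildRequest("sonar-medium-online", "You are helpful.", "What happened today?")
	fmt.Println(len(online.Messages), online.Messages[0].Role) // single user turn

	chat := buildRequest("sonar-medium-chat", "You are helpful.", "Hello")
	fmt.Println(len(chat.Messages), chat.Messages[0].Role) // system + user
}
```

The key design point is that the branch is driven purely by the model name, so no configuration change is needed when a user switches between online and offline Perplexity models.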

Closes #201

@caarlos0 caarlos0 merged commit 51c32c5 into charmbracelet:main Mar 28, 2024
@caarlos0 (Member)

thanks @bradyjoslin

@bradyjoslin (Contributor, Author)

Happy to contribute, thanks for the review and approval, @caarlos0

Merging this pull request closed the issue: support perplexity