Support Local LLM
complete
Daniel Nguyen
complete
I've added this in version 1.16.0.
Daniel Nguyen
in progress
Daniel Nguyen
You can already customize the endpoint in Settings > Models > OpenAI > Advanced Settings.
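For context, a minimal sketch of what a custom OpenAI-compatible endpoint looks like from the client side, assuming the standard openai Python package; the localhost URL, port, and API key below are placeholder assumptions, not the app's actual values:

```python
# Sketch: point an OpenAI-compatible client at a custom local endpoint.
# base_url, api_key, and the port are placeholders, not values from this thread.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:4000/v1",  # custom endpoint instead of api.openai.com
    api_key="local-key",                  # local proxies usually accept any string
)

# List whatever models the local endpoint exposes.
for model in client.models.list():
    print(model.id)
```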
Daniel Nguyen: Tried setting it to localhost (which runs LiteLLM proxying to Ollama), and I can see GET /v1/models coming through locally, but when chatting with the docs it still seems to hit OpenAI instead.
Ah, ignore that. It is now working with the latest version, 1.4 (I was on 1.1).
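For anyone reproducing a similar LiteLLM-to-Ollama setup, a quick sanity check is to send a chat request straight at the local proxy and watch its console; the URL, port, and model alias below are assumptions about a typical local proxy, not values confirmed in this thread:

```python
# Sketch: confirm chat traffic reaches the local proxy rather than api.openai.com.
# The URL/port and model alias are assumptions; match them to your proxy config.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:4000/v1", api_key="local-key")

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # whichever alias the proxy maps to an Ollama model
    messages=[{"role": "user", "content": "ping"}],
)
print(response.choices[0].message.content)
# If the request shows up in the proxy's logs, chat calls are going local;
# if not, the client is still falling back to api.openai.com.
```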
Daniel Nguyen
Good idea. I will add this in one of the next versions.