From 8679294bed12e72052dce2a095826e856bef0749 Mon Sep 17 00:00:00 2001
From: Debanjum
Date: Tue, 5 Nov 2024 17:03:17 -0800
Subject: [PATCH] Remove server chat settings step from OpenAI proxy docs

This was previously required, but now it is only useful for more
advanced setups, not typical for self-hosting users. With recent
updates, the user's selected chat model is used for both Khoj's train
of thought and response.

This makes it easy to switch your preferred chat model directly from
the user settings page, without having to update it in the admin panel
as well.

Reflect these code changes in the docs by removing the now unnecessary
step for self-hosted users to create a server chat setting when using
an OpenAI proxy service like Ollama, LiteLLM, etc. Also renumber the
stale "repeat step 6" reference in the Ollama doc to match.
---
 documentation/docs/advanced/litellm.md          | 5 +----
 documentation/docs/advanced/lmstudio.md         | 5 +----
 documentation/docs/advanced/ollama.md           | 7 ++-----
 documentation/docs/advanced/use-openai-proxy.md | 5 +----
 4 files changed, 5 insertions(+), 17 deletions(-)

diff --git a/documentation/docs/advanced/litellm.md b/documentation/docs/advanced/litellm.md
index ccc06170..9dfaaf34 100644
--- a/documentation/docs/advanced/litellm.md
+++ b/documentation/docs/advanced/litellm.md
@@ -31,7 +31,4 @@ Using LiteLLM with Khoj makes it possible to turn any LLM behind an API into you
    - Openai Config: ``
    - Max prompt size: `20000` (replace with the max prompt size of your model)
    - Tokenizer: *Do not set for OpenAI, Mistral, Llama3 based models*
-5. Create a new [Server Chat Setting](http://localhost:42110/server/admin/database/serverchatsettings/add/) on your Khoj admin panel
-   - Default model: ``
-   - Summarizer model: ``
-6. Go to [your config](http://localhost:42110/settings) and select the model you just created in the chat model dropdown.
+5. Go to [your config](http://localhost:42110/settings) and select the model you just created in the chat model dropdown.
diff --git a/documentation/docs/advanced/lmstudio.md b/documentation/docs/advanced/lmstudio.md
index c08aeeec..5c5ab567 100644
--- a/documentation/docs/advanced/lmstudio.md
+++ b/documentation/docs/advanced/lmstudio.md
@@ -24,7 +24,4 @@ LM Studio can expose an [OpenAI API compatible server](https://lmstudio.ai/docs/
    - Openai Config: ``
    - Max prompt size: `20000` (replace with the max prompt size of your model)
    - Tokenizer: *Do not set for OpenAI, mistral, llama3 based models*
-5. Create a new [Server Chat Setting](http://localhost:42110/server/admin/database/serverchatsettings/add/) on your Khoj admin panel
-   - Default model: ``
-   - Summarizer model: ``
-6. Go to [your config](http://localhost:42110/settings) and select the model you just created in the chat model dropdown.
+5. Go to [your config](http://localhost:42110/settings) and select the model you just created in the chat model dropdown.
diff --git a/documentation/docs/advanced/ollama.md b/documentation/docs/advanced/ollama.md
index c65da0b8..7e90f767 100644
--- a/documentation/docs/advanced/ollama.md
+++ b/documentation/docs/advanced/ollama.md
@@ -28,9 +28,6 @@ Ollama exposes a local [OpenAI API compatible server](https://github.com/ollama/
    - Model Type: `Openai`
    - Openai Config: ``
    - Max prompt size: `20000` (replace with the max prompt size of your model)
-5. Create a new [Server Chat Setting](http://localhost:42110/server/admin/database/serverchatsettings/add/) on your Khoj admin panel
-   - Default model: ``
-   - Summarizer model: ``
-6. Go to [your config](http://localhost:42110/settings) and select the model you just created in the chat model dropdown.
+5. Go to [your config](http://localhost:42110/settings) and select the model you just created in the chat model dropdown.
 
-That's it! You should now be able to chat with your Ollama model from Khoj. If you want to add additional models running on Ollama, repeat step 6 for each model.
+That's it! You should now be able to chat with your Ollama model from Khoj. If you want to add additional models running on Ollama, repeat step 5 for each model.
diff --git a/documentation/docs/advanced/use-openai-proxy.md b/documentation/docs/advanced/use-openai-proxy.md
index 7e52020e..ec674767 100644
--- a/documentation/docs/advanced/use-openai-proxy.md
+++ b/documentation/docs/advanced/use-openai-proxy.md
@@ -31,7 +31,4 @@ For specific integrations, see our [Ollama](/advanced/ollama), [LMStudio](/advan
    - Openai Config: ``
    - Max prompt size: `2000` (replace with the max prompt size of your model)
    - Tokenizer: *Do not set for OpenAI, mistral, llama3 based models*
-4. Create a new [Server Chat Setting](http://localhost:42110/server/admin/database/serverchatsettings/add/) on your Khoj admin panel
-   - Default model: ``
-   - Summarizer model: ``
-5. Go to [your config](http://localhost:42110/settings) and select the model you just created in the chat model dropdown.
+4. Go to [your config](http://localhost:42110/settings) and select the model you just created in the chat model dropdown.
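Side note for reviewers: before selecting the new model in Khoj's settings, it can help to confirm that the proxy's OpenAI-compatible endpoint actually responds. Below is a minimal sketch using the `openai` Python package; it assumes Ollama's default base URL (`http://localhost:11434/v1`) and an already pulled `llama3.1` model, both of which you should swap for your own proxy's base URL, API key, and model name (LiteLLM typically serves `http://localhost:4000/v1`, LM Studio `http://localhost:1234/v1`).

```python
# Sanity-check an OpenAI-compatible proxy endpoint before pointing Khoj at it.
# Assumes Ollama's defaults; adjust base_url, api_key, and model for your setup.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # your proxy's OpenAI-compatible endpoint
    api_key="ollama",  # most local proxies accept any non-empty key
)

response = client.chat.completions.create(
    model="llama3.1",  # must match a model your proxy actually serves
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```

If this call returns a completion, the same base URL and model name are what the steps in these docs plug into the `Openai Config` and chat model fields on the Khoj admin panel.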