Mirror of https://github.com/khoaliber/khoj.git, synced 2026-03-03 05:29:12 +00:00
Clean AI model API providers documentation
@@ -61,14 +61,14 @@ Restart your Khoj server after first run or update to the settings below to ensu
 ollama pull llama3.1
 ```
 3. Create a new [AI Model API](http://localhost:42110/server/admin/database/aimodelapi/add) on your Khoj admin panel
-   - Name: `ollama`
-   - Api Key: `any string`
-   - Api Base Url: `http://localhost:11434/v1/` (default for Ollama)
+   - **Name**: `ollama`
+   - **Api Key**: `any string`
+   - **Api Base Url**: `http://localhost:11434/v1/` (default for Ollama)
 4. Create a new [Chat Model](http://localhost:42110/server/admin/database/chatmodel/add) on your Khoj admin panel.
-   - Name: `llama3.1` (replace with the name of your local model)
-   - Model Type: `Openai`
-   - Openai Config: `<the ollama config you created in step 3>`
-   - Max prompt size: `20000` (replace with the max prompt size of your model)
+   - **Name**: `llama3.1` (replace with the name of your local model)
+   - **Model Type**: `Openai`
+   - **AI Model API**: *the ollama AI Model API you created in step 3*
+   - **Max prompt size**: `20000` (replace with the max prompt size of your model)
 5. Go to [your config](http://localhost:42110/settings) and select the model you just created in the chat model dropdown.

 If you want to add additional models running on Ollama, repeat step 4 for each model.
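The `Api Base Url` in step 3 points Khoj at Ollama's OpenAI-compatible endpoint, and the trailing slash matters: OpenAI-style clients typically resolve endpoint paths against the base URL, so `.../v1` (no slash) and `.../v1/` yield different results. A minimal sketch of that resolution, using a hypothetical helper (not Khoj's actual code):

```python
# Sketch: how an OpenAI-compatible client derives the chat endpoint from the
# configured Api Base Url. `chat_endpoint` is a hypothetical helper for
# illustration, not part of Khoj or the openai package.
from urllib.parse import urljoin

def chat_endpoint(api_base_url: str) -> str:
    # urljoin keeps the "v1" path segment only if the base URL ends in "/";
    # otherwise the last segment is replaced rather than extended.
    return urljoin(api_base_url, "chat/completions")

print(chat_endpoint("http://localhost:11434/v1/"))  # .../v1/chat/completions
print(chat_endpoint("http://localhost:11434/v1"))   # "v1" is silently dropped
```

If chat requests from Khoj 404 against a running Ollama server, a missing trailing slash on the base URL is a likely cause.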