---
sidebar_position: 3
---
# Advanced Usage
## Search across Different Languages (Self-Hosting)
To search for notes in multiple, different languages, you can use a multi-lingual model. For example, `paraphrase-multilingual-MiniLM-L12-v2` supports 50+ languages and offers good search quality and speed. To use it:
- Manually update the search config on the server's admin settings page. Go to the search config and either create a new one, if none exists, or update the existing one. Set the `bi_encoder` to `sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2` and the `cross_encoder` to `cross-encoder/ms-marco-MiniLM-L-6-v2`.
- Regenerate your content index from all the relevant clients. This step is very important, as all your content needs to be re-encoded with the new model.
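The re-indexing step matters because semantic search scores are only meaningful when the query and every indexed entry are embedded by the same model. A toy sketch of the idea, with made-up vectors (this is not Khoj's actual code):

```python
# Toy sketch of embedding-based search. The 3-dim vectors below are
# invented for illustration; in practice each entry and query is encoded
# by the configured bi_encoder model. Rankings only make sense when the
# query and all entries come from the SAME model, which is why the index
# must be regenerated after switching models.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical embeddings of two indexed notes and one query
entries = {
    "note_en.org": [1.0, 0.0, 0.0],
    "note_de.org": [0.0, 1.0, 0.0],
}
query_embedding = [0.9, 0.1, 0.0]

# Rank entries by similarity to the query, best match first
ranked = sorted(
    entries,
    key=lambda name: cosine_similarity(query_embedding, entries[name]),
    reverse=True,
)
print(ranked[0])  # note_en.org
```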
## Query Filters
Use the structured query syntax to filter the entries from your knowledge base used for search results or chat responses.
- Word Filter: Get entries that include/exclude a specified term
  - Entries that contain term_to_include: `+"term_to_include"`
  - Entries that exclude term_to_exclude: `-"term_to_exclude"`
- Date Filter: Get entries containing dates in YYYY-MM-DD format from the specified date (range)
  - Entries from April 1st 1984: `dt:"1984-04-01"`
  - Entries after March 31st 1984: `dt>="1984-04-01"`
  - Entries before April 2nd 1984: `dt<="1984-04-01"`
- File Filter: Get entries from a specified file
  - Entries from the incoming.org file: `file:"incoming.org"`
- Combined Example
  - `what is the meaning of life? file:"1984.org" dt>="1984-01-01" dt<="1985-01-01" -"big" -"brother"`
  - Adds all the above filters to the natural language query. It should return entries
    - from the file 1984.org
    - containing dates from the year 1984
    - excluding the words "big" and "brother"
    - that best match the natural language query "what is the meaning of life?"
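To illustrate how the filters compose, here is a hypothetical sketch that separates filter terms from the natural language part of a query. This is just the syntax above expressed as regular expressions, not Khoj's actual parser:

```python
# Hypothetical filter splitter for the query syntax documented above
# (illustrative only, not Khoj's implementation).
import re

FILTER_PATTERN = re.compile(
    r'file:"[^"]+"'            # file filter, e.g. file:"1984.org"
    r'|dt(?:[<>]=?|:)"[^"]+"'  # date filter, e.g. dt:"1984-04-01" or dt>="1984-01-01"
    r'|[+-]"[^"]+"'            # word include/exclude, e.g. +"big" or -"brother"
)

def split_query(query):
    """Return the natural language part and the list of filter terms."""
    filters = FILTER_PATTERN.findall(query)
    natural = FILTER_PATTERN.sub("", query).strip()
    natural = re.sub(r"\s+", " ", natural)  # collapse leftover whitespace
    return natural, filters

natural, filters = split_query(
    'what is the meaning of life? file:"1984.org" '
    'dt>="1984-01-01" dt<="1985-01-01" -"big" -"brother"'
)
print(natural)  # what is the meaning of life?
print(filters)
```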
## Use an OpenAI-compatible LLM API Server (Self-Hosting)
Use this if you want to use non-standard, open or commercial, local or hosted LLM models for Khoj chat.
- Set up your desired chat LLM by installing an OpenAI-compatible LLM API server like LiteLLM or llama-cpp-python
- Set the environment variable `OPENAI_API_BASE="<url-of-your-llm-server>"` before starting Khoj
- Add a ChatModelOptions with `model-type` `OpenAI` and `chat-model` set to anything (e.g. `gpt-3.5-turbo`) during Config
- (Optional) Set the `tokenizer` and `max-prompt-size` relevant to the actual chat model you're using
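For reference, an OpenAI-compatible server only needs to answer the standard chat completions endpoint under `OPENAI_API_BASE`. A minimal sketch of the request shape, assuming `http://localhost:8000` as a fallback base URL (the code builds the request without sending it, so no server is required to run it):

```python
# Sketch of the request a client sends to an OpenAI-compatible server.
# The endpoint path and JSON payload follow the OpenAI chat completions
# API; the model name is just a label when the backing server maps it to
# a local or hosted model.
import json
import os
import urllib.request

def build_chat_request(messages, model="gpt-3.5-turbo"):
    base = os.environ.get("OPENAI_API_BASE", "http://localhost:8000")
    url = base.rstrip("/") + "/chat/completions"
    body = json.dumps({"model": model, "messages": messages}).encode()
    return urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request([{"role": "user", "content": "Hello"}])
print(req.full_url)
```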
### Sample Setup using LiteLLM and Mistral API
```shell
# Install LiteLLM
pip install litellm[proxy]

# Start LiteLLM and use Mistral tiny via Mistral API
export MISTRAL_API_KEY=<MISTRAL_API_KEY>
litellm --model mistral/mistral-tiny --drop_params

# Set OpenAI API Base to LiteLLM server URL and start Khoj
export OPENAI_API_BASE='http://localhost:8000'
khoj --anonymous-mode
```