Simplify integrating Ollama, OpenAI proxies with Khoj on first run

- Integrate with Ollama or other OpenAI-compatible APIs by simply
  setting the `OPENAI_API_BASE` environment variable in docker-compose etc.
- Update docs on integrating with Ollama and OpenAI proxies on first run
- Auto-populate all chat models supported by the OpenAI-compatible API
- Auto-mark vision as enabled for all commercial models
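As a minimal sketch of the first bullet (service name, host, and key value are illustrative; Ollama's OpenAI-compatible endpoint defaults to port 11434):

```yaml
services:
  server:
    environment:
      # Point Khoj at any OpenAI-compatible API, e.g. a local Ollama server.
      # Adjust the host/port for your setup.
      - OPENAI_API_BASE=http://host.docker.internal:11434/v1
      # Many proxies ignore the key but still require it to be non-empty.
      - OPENAI_API_KEY=ollama
```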

- Minor
  - Add the Hugging Face cache to the khoj_models volume. This is where
  chat models and (now) sentence transformer models are stored by default
  - Reduce verbosity of the web app's yarn install. Otherwise it hits the
  docker log size limit and stops showing logs after the web app install
  - Suggest `ollama pull <model_name>` to start the model download in the background
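The suggested pull step looks like this (a usage sketch; assumes the `ollama` CLI is installed, and the model name is illustrative):

```shell
# Download the model ahead of time so the first chat doesn't block on it
ollama pull llama3.1
```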
Author: Debanjum
Date: 2024-11-16 23:53:11 -08:00
Parent: 2366fa08b9
Commit: 69ef6829c1
6 changed files with 164 additions and 84 deletions

@@ -37,7 +37,7 @@ ENV PYTHONPATH=/app/src:$PYTHONPATH
 # Go to the directory src/interface/web and export the built Next.js assets
 WORKDIR /app/src/interface/web
-RUN bash -c "yarn install --frozen-lockfile --verbose && yarn ciexport && yarn cache clean"
+RUN bash -c "yarn install --frozen-lockfile && yarn ciexport && yarn cache clean"
 WORKDIR /app
 # Run the Application