Update docs to show how to set up llama-cpp with Khoj

- How to pip install khoj to run offline chat on GPU
  After the migration to llama-cpp-python, more GPU types are supported,
  but they require a build step, so document how to do it
- New default offline chat model
- Where to get supported chat models from on HuggingFace
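The GPU build step mentioned above can be sketched as below. This is a hedged illustration, not the exact commands from the changed docs: the `CMAKE_ARGS` flag names (`LLAMA_CUBLAS` for NVIDIA CUDA, `LLAMA_METAL` for Apple Silicon) are llama-cpp-python build options from around this period, and the `khoj-assistant` package name is an assumption.

```shell
# Hypothetical sketch: build llama-cpp-python with CUDA support so
# Khoj's offline chat can run on an NVIDIA GPU, then install Khoj.
# Flag and package names are assumptions, not confirmed by this commit.
CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 \
  pip install llama-cpp-python --force-reinstall --no-cache-dir
pip install khoj-assistant

# On Apple Silicon, the Metal backend would be used instead, e.g.:
# CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python --force-reinstall --no-cache-dir
```

Passing `CMAKE_ARGS` forces pip to rebuild the llama.cpp native library with the chosen GPU backend instead of using a prebuilt CPU-only wheel, which is why the build step is needed at all.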
This commit is contained in:
Debanjum Singh Solanky
2024-03-16 04:25:14 +05:30
parent 8ca39a436c
commit dcdd1edde2
3 changed files with 27 additions and 7 deletions


@@ -10,4 +10,4 @@ Many Open Source projects are used to power Khoj. Here's a few of them:
 - Charles Cave for [OrgNode Parser](http://members.optusnet.com.au/~charles57/GTD/orgnode.html)
 - [Org.js](https://mooz.github.io/org-js/) to render Org-mode results on the Web interface
 - [Markdown-it](https://github.com/markdown-it/markdown-it) to render Markdown results on the Web interface
-- [GPT4All](https://github.com/nomic-ai/gpt4all) to chat with local LLM
+- [Llama.cpp](https://github.com/ggerganov/llama.cpp) to chat with local LLM