Mirror of https://github.com/khoaliber/khoj.git, synced 2026-03-02 13:18:18 +00:00
Update docs to show how to setup llama-cpp with Khoj
- How to `pip install khoj` to run offline chat on GPU. After the migration to llama-cpp-python, more GPU types are supported but require a build step, so mention how
- New default offline chat model
- Where to get supported chat models from on HuggingFace
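The GPU build step the commit message refers to might look like the following. This is a sketch under assumptions: that llama-cpp-python is built from source during `pip install` when its `CMAKE_ARGS` environment variable is set, and that the CUDA/Metal flag names shown (`LLAMA_CUBLAS`, `LLAMA_METAL`) match the llama-cpp-python release in use; later releases renamed these flags, so check the llama-cpp-python documentation for your version.

```shell
# Sketch: force a from-source rebuild of llama-cpp-python with GPU support.
# Flag names are assumptions tied to the llama-cpp-python version of the time.

# NVIDIA GPU (CUDA toolkit must be installed):
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install --force-reinstall --no-cache-dir khoj

# Apple Silicon (Metal):
CMAKE_ARGS="-DLLAMA_METAL=on" pip install --force-reinstall --no-cache-dir khoj
```

`--force-reinstall --no-cache-dir` ensures pip rebuilds the llama-cpp-python wheel with the requested backend instead of reusing a cached CPU-only build.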
@@ -10,4 +10,4 @@ Many Open Source projects are used to power Khoj. Here's a few of them:
 - Charles Cave for [OrgNode Parser](http://members.optusnet.com.au/~charles57/GTD/orgnode.html)
 - [Org.js](https://mooz.github.io/org-js/) to render Org-mode results on the Web interface
 - [Markdown-it](https://github.com/markdown-it/markdown-it) to render Markdown results on the Web interface
-- [GPT4All](https://github.com/nomic-ai/gpt4all) to chat with local LLM
+- [Llama.cpp](https://github.com/ggerganov/llama.cpp) to chat with local LLM