mirror of https://github.com/khoaliber/khoj.git, synced 2026-03-06 13:22:12 +00:00
Replace Falcon 🦅 model with Llama V2 🦙 for offline chat (#352)
* Working example with Llama V2 running locally on my machine
  - Download from Hugging Face
  - Plug in to GPT4All
  - Update prompts to fit the Llama format
* Add appropriate prompts for extracting questions from a query, following the Llama format
* Rename Falcon to Llama and make some improvements to the extract_questions flow
* Further tune the extract-question prompts and unit tests
* Disable extracting questions dynamically from Llama, as results are still unreliable
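As a rough illustration of the "update prompts to fit the Llama format" step, a helper like the one below could wrap a system and user message in the Llama 2 chat template (`[INST]` / `<<SYS>>` markers). This is a hypothetical sketch, not the actual khoj code; the function name and messages are invented for the example.

```python
# Hypothetical helper (not the actual khoj implementation): wraps messages
# in the Llama 2 chat template referenced by the commit message.
def format_llama2_prompt(system_message: str, user_message: str) -> str:
    """Wrap a system and user message in the Llama 2 [INST] template."""
    return (
        "<s>[INST] <<SYS>>\n"
        f"{system_message}\n"
        "<</SYS>>\n\n"
        f"{user_message} [/INST]"
    )

# Example usage with made-up messages:
prompt = format_llama2_prompt(
    "You are Khoj, a helpful personal assistant.",
    "Summarize my notes on project planning.",
)
```

Llama 2 chat models expect this template; sending Falcon-style prompts instead tends to degrade output quality, which is presumably why the prompts had to change alongside the model swap.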
```diff
@@ -58,7 +58,7 @@ dependencies = [
     "pypdf >= 3.9.0",
     "requests >= 2.26.0",
     "bs4 >= 0.0.1",
-    "gpt4all==1.0.5",
+    "gpt4all >= 1.0.7",
 ]
 dynamic = ["version"]
```