Replace Falcon 🦅 model with Llama V2 🦙 for offline chat (#352)

* Working example with LlamaV2 running locally on my machine

- Download from huggingface
- Plug in to GPT4All
- Update prompts to fit the llama format
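The prompt-format step above can be sketched as a small helper, assuming the standard Llama 2 chat template; the function name is illustrative, not the actual Khoj code:

```python
def to_llama2_prompt(system_prompt: str, user_message: str) -> str:
    # Llama 2 chat models expect the system prompt wrapped in <<SYS>> tags
    # inside an [INST] ... [/INST] block. Helper name is hypothetical.
    return f"<s>[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n{user_message} [/INST]"

prompt = to_llama2_prompt("You are a helpful assistant.", "Summarize my notes on Llama V2.")
```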

* Add appropriate prompts, in the llama format, for extracting questions from a query

* Rename Falcon to Llama and make some improvements to the extract_questions flow

* Do further tuning to extract question prompts and unit tests

* Disable extracting questions dynamically from Llama, as results are still unreliable
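A minimal sketch of the disable-and-fall-back behavior described above; the function and flag names are assumptions for illustration, not the actual Khoj code:

```python
def extract_questions(query: str, use_llm_extraction: bool = False) -> list[str]:
    # With dynamic extraction disabled (mirroring this commit), fall back
    # to treating the raw user query as the single search question.
    if not use_llm_extraction:
        return [query]
    # Otherwise the offline Llama model would be prompted to split the
    # query into standalone questions (omitted here).
    raise NotImplementedError("LLM-based extraction disabled in this sketch")

questions = extract_questions("What did I write about Llama V2 last week?")
```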
This commit is contained in:
sabaimran
2023-07-28 03:51:20 +00:00
committed by GitHub
parent 55965eea7d
commit 124d97c26d
11 changed files with 248 additions and 141 deletions


@@ -58,7 +58,7 @@ dependencies = [
     "pypdf >= 3.9.0",
     "requests >= 2.26.0",
     "bs4 >= 0.0.1",
-    "gpt4all==1.0.5",
+    "gpt4all >= 1.0.7",
 ]
 dynamic = ["version"]