sabaimran 124d97c26d Replace Falcon 🦅 model with Llama V2 🦙 for offline chat (#352)
* Working example with LlamaV2 running locally on my machine

- Download from Hugging Face
- Plug into GPT4All
- Update the prompts to fit the Llama format

* Add appropriate prompts, in the Llama format, for extracting questions from a query

* Rename Falcon to Llama and make some improvements to the extract_questions flow

* Further tune the extract-questions prompts and unit tests

* Disable extracting questions dynamically from Llama, as results are still unreliable
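The "update prompts to fit the llama format" step above refers to Llama 2's chat template, which wraps each turn in `[INST] ... [/INST]` with an optional `<<SYS>>` system block. A minimal sketch of that wrapping is below; the function name and default strings are illustrative assumptions, not Khoj's actual implementation.

```python
# Hedged sketch: wrap a single-turn message in the Llama 2 chat prompt
# format ([INST] ... [/INST], with an optional <<SYS>> system block).
# The helper name and defaults are hypothetical, for illustration only.

def format_llama2_prompt(user_message: str, system_prompt: str = "") -> str:
    """Return a Llama 2 style single-turn chat prompt."""
    if system_prompt:
        # System instructions go inside the first [INST] block.
        return (
            f"[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n"
            f"{user_message} [/INST]"
        )
    return f"[INST] {user_message} [/INST]"

prompt = format_llama2_prompt(
    "Summarize my notes on Llama 2.",
    system_prompt="You are Khoj, a helpful assistant.",
)
```

A string formatted this way would then be passed to the locally running model (e.g. via GPT4All's Python bindings) as the generation prompt.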
2023-07-27 20:51:20 -07:00



An AI personal assistant for your digital brain

Our goal with Khoj is to make something that can live on your desktop and give you a privacy-first, open-source, and extensible way to search and chat with your digital brain.

License: AGPL-3.0 · Repository size: 116 MiB
Languages
Python 51%
TypeScript 36.1%
CSS 4.1%
HTML 3.2%
Emacs Lisp 2.4%
Other 3.1%