Add support for our first Local LLM 🤖🏠 (#330)

* Add support for gpt4all's Falcon model as an additional conversation processor (a usage sketch follows this list)
- Update the UI pages so users can point to the new GPT endpoints
- Update the internal schemas to support both GPT4All models and OpenAI
- Add unit tests benchmarking Falcon model performance
* Add exc_info to include stack traces in the text processors' error logs (see the logging sketch below)
* Pull shared functions into utils.py for reuse across the gpt4all and gpt conversation processors
* Add a migration for the new processor conversation schema
* Skip GPT4All actor tests due to typing issues
* Fix Obsidian processor configuration in auto-configure flow
* Rename enable_local_llm to enable_offline_chat (see the config migration sketch below)
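
A minimal sketch of the offline conversation flow described above, assuming the gpt4all Python bindings (pip install gpt4all); the model filename and the offline_chat wrapper are illustrative assumptions, not the project's actual processor code:

# Sketch: run gpt4all's quantized Falcon model locally.
# The model filename and wrapper function are assumptions for illustration.
from gpt4all import GPT4All

# Downloads the quantized weights on first use if not already cached locally
model = GPT4All("ggml-model-gpt4all-falcon-q4_0.bin")

def offline_chat(user_query: str) -> str:
    # Generates a reply on-device; no API key or network round-trip required
    return model.generate(user_query, max_tokens=256, temp=0.7)

print(offline_chat("What is a local LLM?"))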
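
The exc_info change uses the standard library's logging module; a small sketch, where process_text_file stands in for a hypothetical text processor:

import logging

logger = logging.getLogger(__name__)

def process_text_file(path: str) -> None:
    # Hypothetical stand-in for one of the text processors
    try:
        with open(path, encoding="utf-8") as f:
            f.read()
    except Exception as e:
        # exc_info=True attaches the full stack trace to the log record,
        # not just the exception message
        logger.error(f"Failed to process {path}: {e}", exc_info=True)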
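
The flag rename and schema migration suggest a config rewrite along these lines; the key names and nesting here are assumptions, not the project's actual config layout:

def migrate_conversation_config(raw_config: dict) -> dict:
    # Hypothetical migration: carry the old flag's value over to the new key
    conversation = raw_config.setdefault("processor", {}).setdefault("conversation", {})
    if "enable-local-llm" in conversation:
        conversation["enable-offline-chat"] = conversation.pop("enable-local-llm")
    return raw_config

Running this once when the config is loaded keeps older config files working after the rename.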
commit 8b2af0b5ef
parent 23d77ee338
Author: sabaimran
Date: 2023-07-26 23:27:08 +00:00
Committed by: GitHub
34 changed files with 1258 additions and 291 deletions


@@ -30,7 +30,6 @@ jobs:
       fail-fast: false
       matrix:
         python_version:
-          - '3.8'
           - '3.9'
           - '3.10'
           - '3.11'