Mirror of https://github.com/khoaliber/khoj.git (synced 2026-03-08 05:39:13 +00:00)
Use single extract questions method across all LLMs for doc search
Using model-specific extract questions methods was an artifact from older times, with less guidable models. The new changes collate and reuse logic:
- Rely on send_message_to_model_wrapper for model-specific formatting.
- Use the same prompt and context for all LLMs, as they can handle prompt variation.
- Use the response schema enforcer to ensure response consistency across models.

Extract questions (because of its age) was the only tool living directly within each provider's code. Move it into helpers to keep all the (mini) tools in one place.
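The schema-enforcement idea can be sketched as follows. This is a minimal illustration, not the actual khoj implementation: the `parse_questions` helper and the `{"queries": [...]}` shape are hypothetical stand-ins for the response schema enforcer described above, showing how one parsing path can serve every LLM provider.

```python
import json
from typing import List


def parse_questions(raw_response: str) -> List[str]:
    """Validate an LLM's raw response against a shared schema.

    All providers are expected to return JSON like {"queries": [...]};
    if a model returns anything else, fall back to treating the whole
    response as a single query so the caller always gets a list.
    """
    try:
        data = json.loads(raw_response)
        queries = data.get("queries")
        if isinstance(queries, list) and all(isinstance(q, str) for q in queries):
            return queries
    except (json.JSONDecodeError, AttributeError):
        pass
    return [raw_response.strip()]


# Same parsing path regardless of which LLM produced the response.
print(parse_questions('{"queries": ["what is khoj?", "how does doc search work?"]}'))
print(parse_questions("not valid json"))
```

Enforcing one schema at the boundary is what lets the provider-specific extract questions implementations collapse into a single shared helper.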
@@ -4,10 +4,11 @@ import freezegun
 import pytest
 from freezegun import freeze_time
 
-from khoj.processor.conversation.openai.gpt import converse_openai, extract_questions
+from khoj.processor.conversation.openai.gpt import converse_openai
 from khoj.processor.conversation.utils import message_to_log
 from khoj.routers.helpers import (
     aget_data_sources_and_output_format,
+    extract_questions,
     generate_online_subqueries,
     infer_webpage_urls,
     schedule_query,