Improve the admin experience, add more metadata to the list_display

- Don't propagate max_tokens to the OpenAI chat completion method; the maximum output for the newer models is fixed at 4096 tokens, so the token limit is only used to budget the input
This commit is contained in:
sabaimran
2024-05-27 00:49:20 +05:30
parent 01cdc54ad0
commit 9ebf3a4d80
4 changed files with 32 additions and 7 deletions


@@ -67,7 +67,6 @@ def extract_questions(
     messages=messages,
     model=model,
     temperature=temperature,
-    max_tokens=max_tokens,
     api_base_url=api_base_url,
     model_kwargs={"response_format": {"type": "json_object"}},
     openai_api_key=api_key,
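The idea behind the change can be sketched as follows. This is a minimal illustration, not the project's actual code: `build_completion_kwargs` and its word-based truncation are hypothetical stand-ins for the real token counting, showing how the token limit can constrain the input while `max_tokens` is no longer forwarded to the completion call.

```python
def build_completion_kwargs(messages, model, temperature, max_input_tokens):
    """Hypothetical helper: apply the token limit to the *input* only.

    Newer models cap output at 4096 tokens on their own, so we no longer
    pass max_tokens through to the chat completion call.
    """
    # Stand-in for real tokenization: treat whitespace-separated words
    # as tokens and keep the most recent ones within the budget.
    budget = max_input_tokens
    truncated = []
    for message in reversed(messages):
        words = message.split()
        if len(words) > budget:
            words = words[-budget:]
        if not words:
            break
        truncated.insert(0, " ".join(words))
        budget -= len(words)
    return {
        "messages": truncated,
        "model": model,
        "temperature": temperature,
        # Note: no "max_tokens" key -- output length is left to the model.
    }


kwargs = build_completion_kwargs(["first question", "second question"], "gpt-4", 0.2, 3)
assert "max_tokens" not in kwargs
```

The design choice mirrors the commit: rather than risking API errors by passing an output cap the newer models ignore or reject, the configured limit is treated purely as an input budget.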