Fix passing temp kwarg to non-streaming openai completion endpoint

The temperature is already passed via model_kwargs, so passing it
explicitly as well is redundant and would raise a TypeError for a
duplicate keyword argument.

This code path isn't currently used, but it is better to fix it
for if/when it is.
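The failure mode can be sketched with a plain Python function (a hypothetical stand-in for the `parse` call, not the OpenAI client itself): passing the same keyword both explicitly and through `**model_kwargs` raises a TypeError.

```python
# Hypothetical stand-in for client.beta.chat.completions.parse,
# used only to demonstrate the duplicate-keyword error.
def parse(messages=None, model=None, temperature=None, **extra):
    return {"model": model, "temperature": temperature, **extra}

model_kwargs = {"temperature": 0.2}

try:
    # Before the fix: temperature passed both explicitly and via model_kwargs.
    parse(messages=[], model="gpt-4o", temperature=0.2, **model_kwargs)
except TypeError as err:
    # Python rejects the duplicate keyword argument.
    print("duplicate kwarg:", err)

# After the fix: temperature arrives only through model_kwargs.
result = parse(messages=[], model="gpt-4o", **model_kwargs)
print(result["temperature"])  # → 0.2
```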
Debanjum
2025-08-19 15:10:19 -07:00
parent 8862394c15
commit 34dca8e114


@@ -195,7 +195,6 @@ def completion_with_backoff(
         chunk = client.beta.chat.completions.parse(
             messages=formatted_messages,  # type: ignore
             model=model_name,
-            temperature=temperature,
             timeout=httpx.Timeout(30, read=read_timeout),
             **model_kwargs,
         )