Confirm this is a feature request for the Python library and not the underlying OpenAI API.
This is a feature request for the Python library
Describe the feature or improvement you're requesting
I've been playing with the batch API and have hit many TPM limits within the last 24 hours. Although the failures were caused by my current TPM limit, I think there are two areas to consider/improve.
I initially assumed that the uploaded *.jsonl file would be removed automatically, but I later realized I had to manually delete the input files of all my failed batch jobs myself. In this regard, it would be helpful if `client.batches.create` accepted an additional parameter that automatically deletes the file when the batch request fails.
I expected the `client.batches.create` call to at least raise an error when the batch posting fails due to the TPM limit, but it doesn't, so I always have to check by calling `client.batches.retrieve`. It would be great either to raise an error for the API-limit failure cases or to at least surface the details in the `errors` or `failed_at` fields of the `client.batches.create` response.
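To illustrate the two requests, here is a minimal sketch of the workaround needed today: re-check the batch status after creation, delete the input file on failure, and raise. The wrapper function and its parameter names are hypothetical (this is what the requested behavior could look like, not an existing API); the `client.batches.create`, `client.batches.retrieve`, and `client.files.delete` calls are the library's existing public methods:

```python
# Batch statuses where the uploaded *.jsonl input is no longer useful.
FAILURE_STATUSES = {"failed", "expired", "cancelled"}


def batch_failed(status: str) -> bool:
    """True when the batch ended without producing usable output."""
    return status in FAILURE_STATUSES


def create_batch_with_cleanup(client, input_file_id, *, raise_on_failure=True):
    """Hypothetical wrapper sketching the requested behavior: create a
    batch, and if it fails (e.g. rejected for exceeding the TPM limit),
    delete the input file and optionally raise instead of failing silently.
    """
    batch = client.batches.create(
        input_file_id=input_file_id,
        endpoint="/v1/chat/completions",
        completion_window="24h",
    )
    # create() itself does not error on a TPM rejection, so the status
    # has to be re-checked explicitly via retrieve().
    batch = client.batches.retrieve(batch.id)
    if batch_failed(batch.status):
        client.files.delete(input_file_id)  # the requested auto-cleanup
        if raise_on_failure:
            raise RuntimeError(f"batch {batch.id} failed: {batch.errors}")
    return batch
```

With server-side support, the same could collapse to a single flag on `client.batches.create` instead of every caller re-implementing this polling-and-cleanup loop.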
Additional context
Both changes may also require edits on the backend API server (either client- or server-side would work, though server-side support would be a bit cleaner), but I believe they are worth reviewing at least.