API not fully OpenAI-compatible

While working on the Vercel AI SDK, we noticed that your API implementation is not fully compatible with the OpenAI Chat API.

Specifically, the usage information is nested under choices rather than at the top level. We initially had a workaround in our adapter, but removed it since we want to keep the adapter as lean as possible ( vercel/ai#7934 ).
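For clarity, here is a sketch of the two response shapes being contrasted (illustrative field values, not actual API output):

```python
import json

# OpenAI-compatible shape: usage is a top-level property of the response.
openai_style = json.loads("""
{
  "choices": [{"message": {"content": "hi"}}],
  "usage": {"prompt_tokens": 5, "completion_tokens": 1, "total_tokens": 6}
}
""")

# The shape described in this issue: usage is nested inside each choice.
per_choice_style = json.loads("""
{
  "choices": [{
    "message": {"content": "hi"},
    "usage": {"prompt_tokens": 5, "completion_tokens": 1, "total_tokens": 6}
  }]
}
""")

# An OpenAI-compatible client reads usage at the top level, so the
# nested variant appears to have no usage information at all.
assert "usage" in openai_style
assert "usage" not in per_choice_style
assert "usage" in per_choice_style["choices"][0]
```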

Would it be possible to expose the usage information in a top-level usage property, similar to the OpenAI Chat API?

This has historical reasons.

For stream requests, we initially added usage to each choice at every finish point, based on user requests. Note that when n > 1, computing the total usage requires prompt_tokens + the sum of completion_tokens across all choices.

Later, OpenAI updated their API, and we made our protocol compatible with it: users can pass the parameter {"stream_options": {"include_usage": true}} when making requests. With this option set, we add usage at the top level in the final chunk, and when n > 1 this usage sums the completion tokens across all choices.
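A small sketch of what this looks like on the wire (the chunk payload below is illustrative; field values are made up):

```python
import json

# Request body enabling top-level usage in streaming mode, as described above.
request_body = {
    "model": "some-model",  # placeholder model name
    "messages": [{"role": "user", "content": "hello"}],
    "stream": True,
    "n": 2,
    "stream_options": {"include_usage": True},
}

# Hypothetical final stream chunk: with include_usage set, the last chunk
# carries a top-level "usage" object. For n = 2, completion_tokens is the
# sum over both choices (here 10 + 15), while prompt_tokens is counted once.
final_chunk = json.loads("""
{
  "id": "chatcmpl-123",
  "object": "chat.completion.chunk",
  "choices": [],
  "usage": {"prompt_tokens": 12, "completion_tokens": 25, "total_tokens": 37}
}
""")

usage = final_chunk["usage"]
# total_tokens = prompt_tokens + sum of completion_tokens across all choices
assert usage["total_tokens"] == usage["prompt_tokens"] + usage["completion_tokens"]
```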

The usage originally returned in each choice is retained for backward compatibility.

Also, thanks for your contributions to the open-source world.

Thank you for your quick answer!