**Describe the bug** OpenAI renamed the `max_tokens` request parameter to `max_completion_tokens` for the o1 model family, so requests that still send `max_tokens` to `o1-preview` or `o1-mini` fail. This [page](https://community.openai.com/t/why-was-max-tokens-changed-to-max-completion-tokens/938077) describes the change. Tooling that doesn't handle this mapping fails with an invalid-request error. I just experienced this when trying to use mods with an `o1-preview` model in Azure OpenAI. Related work from the continue team: [related compatibility change](https://github.com/continuedev/continue/issues/2250)
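For context, the compatibility fix tooling needs is essentially a key rename on the request body. A minimal sketch (the helper name and the dispatch-on-model-prefix logic are illustrative assumptions, not mods' actual code; only the two parameter names come from the OpenAI API):

```python
def adapt_request(payload: dict) -> dict:
    """Rename max_tokens -> max_completion_tokens for o1-family models.

    Hypothetical helper for illustration; the key names match the
    OpenAI API, the rest is an assumption about how a client might
    dispatch on the model name.
    """
    model = payload.get("model", "")
    if model.startswith("o1") and "max_tokens" in payload:
        payload = dict(payload)  # copy so the caller's dict is untouched
        payload["max_completion_tokens"] = payload.pop("max_tokens")
    return payload

# o1 models get the renamed parameter; other models are left alone.
print(adapt_request({"model": "o1-preview", "max_tokens": 256}))
# → {'model': 'o1-preview', 'max_completion_tokens': 256}
print(adapt_request({"model": "gpt-4o", "max_tokens": 256}))
# → {'model': 'gpt-4o', 'max_tokens': 256}
```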