Chat completion
Send a chat completion request to a selected model. The request must contain a “messages” array. All advanced options from the base request are also supported.
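A minimal sketch of such a request body, in TypeScript; the model ID and the role/content message shape are illustrative assumptions, not values mandated by this page:

```typescript
// Minimal chat completion request body (sketch).
// "messages" is required; if "model" is omitted, the account's default model is used.
const body = {
  model: "openai/gpt-4o", // illustrative model ID (assumption)
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "What is the capital of France?" },
  ],
};
```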
Headers
Authorization: Bearer authentication of the form Bearer <token>, where <token> is your auth token.
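Continuing the minimal body sketched above, a hedged example of sending it with the Bearer header; the endpoint URL and the use of Node 18+ global fetch are assumptions:

```typescript
// Sketch: POST the request body with Bearer authentication.
// Assumes Node 18+ (global fetch) and the standard OpenRouter chat completions endpoint.
const response = await fetch("https://openrouter.ai/api/v1/chat/completions", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}`, // your auth token
    "Content-Type": "application/json",
  },
  body: JSON.stringify(body), // the minimal request body sketched earlier
});
```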
Request
model: The model ID to use. If unspecified, the user's default is used.
models: Alternate list of models for routing overrides.
provider: Preferences for provider routing.
reasoning: Configuration for model reasoning/thinking tokens.
usage: Whether to include usage information in the response.
transforms: List of prompt transforms (OpenRouter-only).
stream: Enable streaming of results.
max_tokens: Maximum number of tokens (range: [1, context_length)).
temperature: Sampling temperature (range: [0, 2]).
seed: Seed for deterministic outputs.
top_p: Top-p sampling value (range: (0, 1]).
top_k: Top-k sampling value (range: [1, Infinity)).
frequency_penalty: Frequency penalty (range: [-2, 2]).
presence_penalty: Presence penalty (range: [-2, 2]).
repetition_penalty: Repetition penalty (range: (0, 2]).
logit_bias: Mapping of token IDs to bias values.
top_logprobs: Number of top log probabilities to return.
min_p: Minimum probability threshold (range: [0, 1]).
top_a: Alternate top sampling parameter (range: [0, 1]).
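A sketch of a request body that sets several of the optional parameters above, staying within their documented ranges; the model ID, prompt, and specific values are illustrative only:

```typescript
// Illustrative body combining optional sampling parameters within their documented ranges.
const tunedBody = {
  model: "openai/gpt-4o", // example model ID (assumption)
  messages: [{ role: "user", content: "Summarize the theory of relativity." }],
  stream: false,          // single JSON response instead of a stream
  max_tokens: 512,        // [1, context_length)
  temperature: 0.7,       // [0, 2]
  top_p: 0.9,             // (0, 1]
  top_k: 40,              // [1, Infinity)
  frequency_penalty: 0.2, // [-2, 2]
  presence_penalty: 0,    // [-2, 2]
  repetition_penalty: 1.1,// (0, 2]
  min_p: 0.05,            // [0, 1]
  seed: 42,               // deterministic outputs where supported
};
```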
Response
Successful completion
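Continuing the fetch sketch above, one way to read a successful completion; the OpenAI-compatible response shape (a choices array with a message, plus optional usage) is an assumption not spelled out in this section:

```typescript
// Sketch: read the assistant reply from a successful completion.
// Assumes a response shape of { choices: [{ message: { content } }], usage? }.
if (!response.ok) {
  throw new Error(`Chat completion failed with status ${response.status}`);
}
const completion = await response.json();
console.log(completion.choices?.[0]?.message?.content);
console.log(completion.usage); // populated when usage accounting is requested
```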