Chat Completions
POST /api/v1/chat/completions

Example request:
curl --request POST \
  --url https://api.mor.org/api/v1/chat/completions \
  --header 'Authorization: Bearer sk-xxxxxx' \
  --header 'Content-Type: application/json' \
  --data '{
  "messages": [
    {
      "role": "<string>",
      "content": "<string>",
      "name": "<string>",
      "tool_calls": [
        {}
      ],
      "tool_call_id": "<string>"
    }
  ],
  "model": "<string>",
  "temperature": 1,
  "top_p": 1,
  "n": 1,
  "stream": false,
  "stop": "<string>",
  "max_tokens": 123,
  "presence_penalty": 0,
  "frequency_penalty": 0,
  "tools": [
    {}
  ],
  "tool_choice": {},
  "session_id": "<string>"
}'

Example response:
{
  "id": "<string>",
  "object": "<string>",
  "created": 123,
  "model": "<string>",
  "choices": [
    {
      "index": 123,
      "message": {},
      "finish_reason": "<string>"
    }
  ],
  "usage": {
    "prompt_tokens": 123,
    "completion_tokens": 123,
    "total_tokens": 123
  }
}
Create a chat completion with automatic session creation if enabled. Supports both streaming and non-streaming responses based on the stream parameter. Tool calling is supported but may work better with streaming enabled.
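
For a quick end-to-end check, the same request can be sent from Python with plain HTTP. This is a minimal sketch assuming a valid API key and a placeholder model name; omitting session_id falls back to the session associated with the API key (or automatic session creation, if enabled).

import requests

API_KEY = "sk-xxxxxx"  # replace with your Morpheus API key
URL = "https://api.mor.org/api/v1/chat/completions"

payload = {
    "model": "<model-id-or-name>",  # blockchain hex address or model name
    "messages": [
        {"role": "user", "content": "Summarize what the Morpheus Gateway does."}
    ],
    "temperature": 0.7,
    "stream": False,  # non-streaming: a single JSON object is returned
}

resp = requests.post(
    URL,
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    json=payload,
    timeout=60,
)
resp.raise_for_status()
completion = resp.json()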

Headers

Authorization (string, required)
API key in the format: Bearer sk-xxxxxx

Body

messages (array, required)
Array of message objects representing the conversation.

model (string)
Model ID to use for the completion (blockchain hex address or model name).

temperature (number, default: 1)
Sampling temperature between 0 and 2. Higher values make output more random.

top_p (number, default: 1)
Nucleus sampling parameter. Alternative to temperature.

n (integer, default: 1)
Number of completions to generate.

stream (boolean, default: false)
Whether to stream the response as server-sent events.

stop (string | array)
Up to 4 sequences where the API will stop generating.

max_tokens (integer)
Maximum number of tokens to generate.

presence_penalty (number, default: 0)
Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far.

frequency_penalty (number, default: 0)
Number between -2.0 and 2.0. Positive values penalize new tokens based on their frequency in the text so far.

tools (array)
List of tools the model can call (see the tool-calling sketch after this parameter list).

tool_choice (string | object)
Controls which tool is called by the model.

session_id (string)
Optional session ID to use. If not provided, uses the session associated with the API key.
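
The tools and tool_choice parameters appear to follow the OpenAI-style function-tool schema; the request body below is a sketch under that assumption, and the get_weather function is purely hypothetical.

# Sketch of a tool-calling request body, assuming OpenAI-style function tools.
# The get_weather tool is hypothetical and only illustrates the shape of "tools".
payload = {
    "model": "<model-id-or-name>",
    "messages": [
        {"role": "user", "content": "What's the weather in Lisbon?"}
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Look up current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
    "tool_choice": "auto",  # or pin a specific tool with an object value
    "stream": True,  # tool calling may work better with streaming enabled
}

In the OpenAI-compatible format, a model that elects to call a tool returns an assistant message carrying tool_calls; the tool output is then sent back in a follow-up message with role "tool" and the matching tool_call_id, mirroring the message fields documented above.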

Response

id (string)
Unique completion ID.

object (string)
Always "chat.completion".

created (integer)
Unix timestamp of completion creation.

model (string)
Model used for the completion.

choices (array)
Array of completion choices.

usage (object)
Token usage statistics.
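
Assuming the non-streaming JSON shape shown in the example response, these fields can be read directly from the parsed body (reusing the completion object from the earlier sketch):

first_choice = completion["choices"][0]
print(first_choice["finish_reason"])  # e.g. "stop", or "tool_calls" when a tool was invoked
print(first_choice["message"].get("content"))

usage = completion["usage"]
print(usage["prompt_tokens"], usage["completion_tokens"], usage["total_tokens"])
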
The API is fully compatible with the OpenAI SDK. Simply change the base_url to point to the Morpheus Gateway.
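
For example, with the OpenAI Python SDK, assuming the gateway's base URL is https://api.mor.org/api/v1 and a placeholder model name:

from openai import OpenAI

client = OpenAI(
    base_url="https://api.mor.org/api/v1",  # point the SDK at the Morpheus Gateway
    api_key="sk-xxxxxx",  # your Morpheus API key
)

completion = client.chat.completions.create(
    model="<model-id-or-name>",
    messages=[{"role": "user", "content": "Hello from the OpenAI SDK"}],
)
print(completion.choices[0].message.content)
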
Streaming responses return server-sent events. Set stream: true in your request to enable streaming.
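
Using the same SDK client as above, a streaming request would look roughly like this; incremental delta chunks are assumed from the OpenAI-compatible format.

stream = client.chat.completions.create(
    model="<model-id-or-name>",
    messages=[{"role": "user", "content": "Stream a short poem."}],
    stream=True,
)

for chunk in stream:
    # Each server-sent event arrives as a chunk with incremental "delta" content.
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()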