API Endpoint Access URL
https://llm.pixlab.io/coder
Get Your API Key & Try CODER Now ↗
Description
The PixLab CODER LLM endpoint provides an OpenAI-compatible API ↗, granting access to advanced, high-performance code generation models. This endpoint also powers the UI and Views code generator of the PixLab APP UI/UX mobile apps.
HTTP Methods
JSON POST
HTTP Parameters
Required
Fields | Type | Description |
---|---|---|
messages | array or string | A list of messages formatted for compatibility with the OpenAI conversation format. The code generation prompt can range from a simple code snippet request to complex coding instructions with contextual information. For a single coding prompt, you can pass it directly as a string without enclosing it in a JSON array. |
key | string | Your PixLab API Key ↗. You can also embed your key in the WWW-Authenticate: HTTP header and omit this parameter. |
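Since messages accepts either a bare string or an OpenAI-style message array, the two request shapes can be sketched as follows (the key value here is a placeholder, not a real PixLab API key):

```python
def build_payload(prompt, key):
    """Build a CODER request body from a single string prompt."""
    return {"messages": prompt, "key": key}

def build_chat_payload(messages, key):
    """Build a CODER request body from an OpenAI-style message list."""
    return {"messages": messages, "key": key}

# Single coding prompt passed directly as a string:
simple = build_payload(
    "Write a Python function that reverses a string", "PIXLAB_API_KEY"
)

# Or the full OpenAI conversation format as a message array:
chat = build_chat_payload(
    [
        {"role": "system", "content": "You are a senior Python developer."},
        {"role": "user", "content": "Write a function that reverses a string."},
    ],
    "PIXLAB_API_KEY",
)
```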
Optional
Fields | Type | Description |
---|---|---|
format | string | The desired format for the generated output. Choose from: text (the default), json (JSON-formatted text output), or markdown (Markdown-formatted text). |
openai-reply | boolean | If set to true, respond with a JSON object compatible with the OpenAI format. Otherwise, return the PixLab simple format (see below ↓ for the returned JSON fields). This gives you more control over the response. |
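A small sketch of how these optional fields combine with the required ones (parameter names and accepted values are taken from the tables above; the helper name and key value are my own placeholders):

```python
# Accepted values for the "format" parameter, per the table above.
VALID_FORMATS = {"text", "json", "markdown"}

def with_options(payload, fmt="text", openai_reply=False):
    """Return a copy of a CODER payload with the optional fields set."""
    if fmt not in VALID_FORMATS:
        raise ValueError(f"format must be one of {sorted(VALID_FORMATS)}")
    out = dict(payload)
    out["format"] = fmt
    out["openai-reply"] = openai_reply  # toggles OpenAI-compatible replies
    return out

payload = with_options(
    {"messages": "Generate a SQL schema for a blog", "key": "PIXLAB_API_KEY"},
    fmt="markdown",
    openai_reply=True,
)
```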
Optional LLM Parameters
For most applications, the default LLM parameter values are a good starting point. Only change these values if you have a solid grasp of how each parameter works.
Fields | Type | Description |
---|---|---|
temperature | float | The sampling temperature, ranging from 0 to 2, influences the randomness of the output. Higher values, such as 0.8, increase randomness, whereas lower values, such as 0.2, promote focused and deterministic outputs. It is generally recommended to adjust either this parameter or top_p, but not both simultaneously. |
max_tokens | integer | An integer between 1 and 180,000 representing the maximum number of tokens to generate in a chat completion. The combined length of input and generated tokens is constrained by the model's context length. |
tools | array | An array of tools, compatible with the OpenAI format, that the underlying model can call. The array can include one or more tool objects; the model chooses one or more of them during each function-calling procedure. PixLab Vision provides pre-built tools through the llm-tools API endpoint, which are ready for use with this endpoint. |
frequency_penalty | float | A number between -2.0 and 2.0. Positive values penalize new tokens according to their frequency in the preceding text, reducing the model's tendency to repeat phrases verbatim. |
presence_penalty | float | A number between -2.0 and 2.0. Positive values penalize new tokens based on their presence in the preceding text, encouraging the model to explore novel topics. |
top_p | float | Nucleus sampling, an alternative to temperature-based sampling, considers only the tokens within the top_p probability mass. For example, a top_p value of 0.1 means only the tokens within the top 10% of the probability mass are considered. It is generally recommended to adjust either top_p or temperature, but not both. |
logprobs | boolean | Whether to return log probabilities for the output tokens. If set to true, the log probability of each output token is included in the message content. |
top_logprobs | integer | An integer between 0 and 20 indicating the number of most probable tokens to return at each token position, along with their associated log probabilities. If this parameter is used, logprobs must be set to true. |
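The documented constraints can be checked client-side before a request is sent. A rough sketch under the bounds stated in the table above (top_p is a probability mass, so a [0, 1] range is an assumption on my part; the helper name is my own):

```python
# Bounds taken from the parameter table; top_p's [0, 1] range is assumed.
RANGES = {
    "temperature": (0.0, 2.0),
    "top_p": (0.0, 1.0),
    "max_tokens": (1, 180_000),
    "frequency_penalty": (-2.0, 2.0),
    "presence_penalty": (-2.0, 2.0),
    "top_logprobs": (0, 20),
}

def validate_llm_params(params):
    """Raise ValueError for any out-of-range or conflicting LLM parameter."""
    for name, value in params.items():
        if name in RANGES:
            lo, hi = RANGES[name]
            if not (lo <= value <= hi):
                raise ValueError(f"{name}={value} outside [{lo}, {hi}]")
    # Per the table: adjust either temperature or top_p, not both.
    if "temperature" in params and "top_p" in params:
        raise ValueError("set temperature or top_p, not both")
    # Per the table: top_logprobs requires logprobs to be true.
    if "top_logprobs" in params and not params.get("logprobs"):
        raise ValueError("top_logprobs requires logprobs=true")
    return params
```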
POST Request Body
This section outlines the requirements for POST requests. Allowed Content-Types:
application/json
JSON is the default format for POST requests. If you are uploading a file via a JSON POST request, please ensure the file is base64 encoded within the JSON payload.
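For instance, embedding a local file in the JSON body could look like the following sketch. Note that the source_file field name is purely illustrative, not a documented parameter; only the base64-within-JSON requirement comes from the text above:

```python
import base64
import json

def encode_file_for_json(path):
    """Read a file and return its base64-encoded contents as ASCII text."""
    with open(path, "rb") as fh:
        return base64.b64encode(fh.read()).decode("ascii")

# Hypothetical payload carrying a base64-encoded file alongside the prompt;
# "source_file" is an illustrative field name, not a documented parameter.
raw = b"def hello():\n    print('hi')\n"
payload = {
    "messages": "Refactor the attached module",
    "key": "PIXLAB_API_KEY",
    "source_file": base64.b64encode(raw).decode("ascii"),
}
body = json.dumps(payload)
# Round-trip check: the decoded field matches the original bytes.
assert base64.b64decode(json.loads(body)["source_file"]) == raw
```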
HTTP Response
application/json
The default response format is the PixLab simple LLM response format, which is unified across our vLM API endpoints and is suitable for most applications. It includes the bare minimum of information: the fully generated output, token counts, etc. If an OpenAI-compatible response format is needed, set the openai-reply boolean HTTP parameter to true.
PixLab Simple vLM Response Format
{
  "status": 200,
  "id": "6783E34342",
  "output": "Fully generated code output by the underlying LLM",
  "role": "Role of the output generator",
  "format": "Desired output format",
  "object": "chat",
  "created": 1694623155,
  "model": "pix-llm",
  "total_input_tokens": 25,
  "total_output_tokens": 57
}
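Reading this format could look like the following sketch, using a concrete instance of the simple-format fields shown above (the field values are illustrative):

```python
import json

SAMPLE = """{
  "status": 200,
  "id": "6783E34342",
  "output": "Fully generated code output by the underlying LLM",
  "role": "assistant",
  "format": "text",
  "object": "chat",
  "created": 1694623155,
  "model": "pix-llm",
  "total_input_tokens": 25,
  "total_output_tokens": 57
}"""

def extract_output(reply_json):
    """Return the generated code, raising on a non-200 status."""
    reply = json.loads(reply_json)
    if reply["status"] != 200:
        # On failure, the "error" field carries the description.
        raise RuntimeError(reply.get("error", "unknown error"))
    return reply["output"]

code = extract_output(SAMPLE)
reply = json.loads(SAMPLE)
total_tokens = reply["total_input_tokens"] + reply["total_output_tokens"]
```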
Fields | Type | Description |
---|---|---|
status | Integer | HTTP 200 indicates success. Any other code indicates failure. |
id | String | Random ID identifying the generated response output. |
output | String | Fully generated code output by the underlying LLM. |
role | String | Role of the output generator, such as assistant or user. |
format | String | Desired output format set via the format HTTP parameter: text (the default), json, or markdown. |
object | String | Invoked vLM API endpoint such as answer, chat, coder, etc. |
created | Timestamp | Timestamp of the generated output's creation. |
model | String | Underlying LLM model ID/name. |
total_input_tokens | Integer | Total number of ingested tokens. |
total_output_tokens | Integer | Total number of output tokens. |
error | String | Error description when status != 200. |
OpenAI Compatible Response Format
An OpenAI compatible ↗ response format (JSON schema below) is returned when the openai-reply boolean HTTP parameter is set to true:
{
"status": 200,
"id": "6783E34342",
"object": "chat",
"created": 1694623155,
"model": "pix-llm",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": " Hello! how can I help you with your coding tasks today?"
},
"finish_reason": "stop"
}
],
"usage": {
"prompt_tokens": 15,
"completion_tokens": 16,
"total_tokens": 31,
}
}
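Extracting the generated message from the OpenAI-compatible shape could be sketched like this (the sample below mirrors the schema above with illustrative values):

```python
import json

OPENAI_SAMPLE = """{
  "status": 200,
  "id": "6783E34342",
  "object": "chat",
  "created": 1694623155,
  "model": "pix-llm",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I help you with your coding tasks today?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {"prompt_tokens": 15, "completion_tokens": 16, "total_tokens": 31}
}"""

def first_message(reply_json):
    """Return (content, finish_reason) of the first choice."""
    reply = json.loads(reply_json)
    choice = reply["choices"][0]
    return choice["message"]["content"], choice["finish_reason"]

content, reason = first_message(OPENAI_SAMPLE)
```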
Code Samples
# For a comprehensive list of production-ready code samples, please consult the PixLab GitHub Repository: https://github.com/symisc/pixlab.
// For a comprehensive list of production-ready code samples, please consult the PixLab GitHub Repository: https://github.com/symisc/pixlab.
<?php
# For a comprehensive list of production-ready code samples, please consult the PixLab GitHub Repository: https://github.com/symisc/pixlab.
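As a starting point, a minimal Python request to this endpoint might look like the following sketch, built on the standard library only (the prompt and key are placeholders; see the repository above for production-ready samples):

```python
import json
import urllib.request

ENDPOINT = "https://llm.pixlab.io/coder"

def build_request(prompt, key):
    """Build a JSON POST request for the CODER endpoint (no network I/O)."""
    body = json.dumps({"messages": prompt, "key": key}).encode("utf-8")
    return urllib.request.Request(
        ENDPOINT,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def generate_code(prompt, key):
    """POST the prompt and return the generated code (simple format)."""
    with urllib.request.urlopen(build_request(prompt, key)) as resp:
        reply = json.load(resp)
    if reply["status"] != 200:
        raise RuntimeError(reply.get("error", "unknown error"))
    return reply["output"]

if __name__ == "__main__":
    print(generate_code("Write a FizzBuzz in Go", "PIXLAB_API_KEY"))
```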
Similar API Endpoints
tagimg, llm-tools, docscan, llm-parse, chat, text-embed, describe, image-embed, summarize, query