API Endpoint Access URL
https://llm.pixlab.io/toolcall
Description
The PixLab Tool Call endpoint provides seamless tool calling integration for your LLM workflows, using one or more OpenAI-compatible tools exposed via the LLM-TOOLS REST API. Each tool is defined in a standardized JSON schema, enabling large language models (LLMs) to execute external functions in a predictable, structured manner. This unlocks real-world automation: your models can interact with systems for document parsing, image processing, media analysis, data scraping, GitHub operations, and more, bridging the gap between natural language understanding and actionable execution.
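To make the schema format concrete, here is a minimal sketch of an OpenAI-compatible function definition, modeled directly on the analyze_image tool shown in the sample response on this page. The authoritative schema for each tool is the one published by the LLM-TOOLS API.

```python
import json

# Sketch of an OpenAI-compatible tool (function) definition. The fields
# below mirror the "analyze_image" entry from the sample response on this
# page; consult the LLM-TOOLS API for each tool's authoritative schema.
analyze_image_tool = {
    "type": "function",
    "name": "analyze_image",
    "description": "Analyze the uploaded image using vision tools.",
    "parameters": {
        "type": "object",
        "properties": {
            "image": {
                "type": "string",
                "format": "url",
                "description": "Public URL to the image file",
            }
        },
        "required": ["image"],
    },
}

# Serialize for inspection or for embedding in an LLM request.
print(json.dumps(analyze_image_tool, indent=2))
```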
HTTP Methods
JSON POST
HTTP Parameters
Required
Fields | Type | Description |
---|---|---|
tools | Array | An array of tool call objects selected or returned by your LLM model. Each entry must match the name and schema of a PixLab-supported tool (OpenAI-compatible Function Calling Specification) from the available tool list. Tool names must exactly match those defined by the LLM-TOOLS API to ensure compatibility and proper execution. |
key | String | Your PixLab API Key ↗. Alternatively, embed your key in the WWW-Authenticate: HTTP header and omit this parameter. |
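The parameters above can be sketched as a JSON POST request using only the Python standard library. The endpoint URL, the tools and key parameters, and the analyze_image tool name come from this page; the exact shape of a tool call entry (in particular the arguments field) and the image URL are illustrative assumptions, so verify them against the LLM-TOOLS schema before use.

```python
import json
import urllib.request

PIXLAB_KEY = "PIXLAB_API_KEY"  # placeholder: substitute your own key

# Hypothetical tool call entry as an LLM might select it; the "arguments"
# field shape is an assumption, not confirmed by this page.
payload = {
    "key": PIXLAB_KEY,
    "tools": [
        {
            "type": "function",
            "name": "analyze_image",
            "arguments": {"image": "https://example.com/photo.jpg"},
        }
    ],
}

req = urllib.request.Request(
    "https://llm.pixlab.io/toolcall",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Uncomment to perform the actual call with a valid API key:
# with urllib.request.urlopen(req) as resp:
#     result = json.loads(resp.read())
#     print(result["status"], result.get("output"))
```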
HTTP Response
All responses returned by the toolcall endpoint follow PixLab's standardized JSON schema, designed for seamless consumption by large language models (LLMs) and agent frameworks. This structure is fully compliant with the OpenAI Tool Call specification ↗. The output array contains one or more tool results in OpenAI-compatible format that can be passed directly to models such as GPT-4, Claude, or DeepSeek.
Sample Response
{
  "status": 200,
  "id": "ac129b4df6",
  "output": [
    {
      "type": "function",
      "name": "analyze_image",
      "description": "Analyze the uploaded image using vision tools.",
      "parameters": {
        "type": "object",
        "properties": {
          "image": {
            "type": "string",
            "format": "url",
            "description": "Public URL to the image file"
          }
        },
        "required": ["image"]
      }
    }
  ],
  "object": "llm.tool_call",
  "created": 1718553014,
  "model": "pix-vlm",
  "total_input_tokens": 1431,
  "total_output_tokens": 262
}
Field | Type | Description |
---|---|---|
status | Integer | HTTP status code. 200 indicates success; any other value indicates an error. |
id | String | A unique identifier for the tool call execution. |
output | Array | Array of OpenAI-compatible tool call results that you can pass verbatim to your underlying large language model, such as GPT, DeepSeek, Gemma, or Qwen. |
object | String | Object type identifier. Set to llm.tool_call. |
created | Timestamp | UNIX timestamp (in seconds) when the output was generated. |
model | String | Identifier of the language model used (e.g., pix-vlm). |
total_input_tokens | Integer | Total number of tokens consumed by the input request. |
total_output_tokens | Integer | Total number of tokens generated in the response. |
error | String | Error message. Populated only if status is not 200. |
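The field table above translates into a simple response-handling pattern. This sketch checks status, surfaces the error field on failure, and otherwise forwards output to the model; the response dict simply mirrors the sample response shown earlier on this page.

```python
# Sketch of handling a toolcall response per the field table above.
# `response` reproduces the sample response from this page; in practice it
# would be the parsed JSON body of the HTTP reply.
response = {
    "status": 200,
    "id": "ac129b4df6",
    "output": [{"type": "function", "name": "analyze_image"}],
    "object": "llm.tool_call",
    "created": 1718553014,
    "model": "pix-vlm",
    "total_input_tokens": 1431,
    "total_output_tokens": 262,
}

if response["status"] != 200:
    # Non-200 status: the "error" field carries the message.
    raise RuntimeError(response.get("error", "unknown error"))

# On success, pass `output` verbatim to your LLM as tool results.
tool_results = response["output"]
total_tokens = response["total_input_tokens"] + response["total_output_tokens"]
print(len(tool_results), total_tokens)
```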
Code Samples
For a comprehensive list of production-ready code samples in Python, JavaScript, and PHP, please consult the PixLab GitHub Repository: https://github.com/symisc/pixlab.
Similar API Endpoints
tagimg, nsfw, docscan, image-embed, chat, llm-tools, answer, describe, text-embed, llm-parse, query, coder