LLM Tools API Endpoint

Version 2.197 (Release Notes ↗)

Description

The PixLab Tool Listing API provides a robust and production-grade implementation of the OpenAI Function Calling Specification ↗, allowing large language models (LLMs) to seamlessly invoke external tools and services in a structured, schema-driven format. This capability empowers developers to integrate LLMs into real-world workflows by bridging model predictions with actionable operations, such as document parsing, image analysis, image editing, data scraping, GitHub interactions, and much more. Each tool is defined as a callable function with a machine-readable JSON schema, including fields like name, description, parameters, and expected output. These schemas are automatically compatible with any OpenAI-based or OpenAI-compatible model, including DeepSeek, Claude, Mistral, Gemini, Qwen, Gemma, and others deployed via frameworks such as OpenRouter, DeepInfra, or your own custom inference pipelines.

A Tool is a discrete, callable unit of logic such as “transcribe audio,” “remove image background,” or “query arXiv” that can be executed immediately once you pass it verbatim (as returned by your large language model) to the LLM-TOOL-CALL REST-API endpoint. Tools are grouped into Toolkits, which are collections of related functionality serving a focused purpose (e.g., file operations, media processing, spreadsheet parsing, terminal commands, etc.). This structure improves modularity, discoverability, and fine-grained control for LLM-based agents. Tools are defined in a way that supports runtime invocation with JSON-serialized arguments and structured JSON output, fully aligned with OpenAI's function call protocol. This enables easy chaining, programmatic orchestration, and interaction with external APIs or systems without sacrificing safety or transparency.
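For instance, a model-emitted tool call can be forwarded unchanged to the LLM-TOOL-CALL endpoint. The sketch below only builds the HTTP request; the endpoint URL and the request field names (`tool_call`) are illustrative assumptions, not confirmed parts of the API:

```python
import json
import urllib.request

# Hypothetical endpoint URL; consult the PixLab documentation for the real one.
LLM_TOOL_CALL_URL = "https://api.pixlab.io/llm-tool-call"

def build_tool_call_request(tool_call: dict, api_key: str) -> urllib.request.Request:
    """Wrap a model-emitted tool call, verbatim, into a POST request.

    `tool_call` is the dict your LLM returned, e.g.
    {"name": "analyze_image", "arguments": "{\"image\": \"https://...\"}"}.
    """
    body = json.dumps({"key": api_key, "tool_call": tool_call}).encode("utf-8")
    return urllib.request.Request(
        LLM_TOOL_CALL_URL,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Build (but do not send) a request for a sample tool call.
req = build_tool_call_request(
    {"name": "analyze_image", "arguments": '{"image": "https://example.com/cat.jpg"}'},
    api_key="YOUR_PIXLAB_KEY",
)
```

To execute the call, pass `req` to `urllib.request.urlopen()` and decode the JSON body of the response.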

Developers integrating with PixLab's Vision Platform can register or consume pre-built toolkits that expose dozens of callable tools. These are particularly well-suited for automating workflows in domains like computer vision, media processing, document understanding, knowledge retrieval, and human-AI collaboration. For example, the ImageAnalysisToolkit integrates tightly with PixLab's vision endpoints, enabling AI agents to answer questions about images, detect objects, OCR text, or identify faces. The BrowserToolkit lets models simulate browser navigation and extract live web content, while the CodeExecutionToolkit allows LLMs to safely run Python or Jupyter cells in isolated sandboxes.

Every tool and toolkit provided by PixLab is versioned, documented, and exposed via a consistent schema that you can query directly from the API or import into your LLM runtime. Whether you're building autonomous agents, data processing pipelines, developer copilots, or interactive chat interfaces, PixLab's Tool Call API provides the necessary building blocks to connect your models with real-world functionality; reliably, securely, and at scale.

Below is a categorized listing of all built-in toolkits currently available through the PixLab Tool Call API. These are fully compatible with OpenAI's tool calling specification and are ready to be integrated into any LLM-supported application or agent runtime.

ArxivToolkit: Search and retrieve academic papers from the arXiv API.
AudioAnalysisToolkit: Transcribe and analyze audio content with contextual Q&A.
BrowserToolkit: Simulate browsing, extract page content, and interact with web pages.
CodeExecutionToolkit: Execute code via Python, Jupyter, subprocess, Docker, or e2b sandboxes.
DataCommonsToolkit: Query statistical and graph data from Data Commons using SPARQL.
ExcelToolkit: Convert Excel sheets to markdown tables and extract structured content.
FunctionTool: Define custom tools with JSON schema parsing for OpenAI-compatible calls.
FileWriteTool: Create, write, or modify plain text files.
GitHubToolkit: Interact with GitHub: issues, PRs, repository data.
HumanToolkit: Enable human-in-the-loop tasks for agent reinforcement or fallback actions.
ImageAnalysisToolkit: Perform vision-language reasoning using PixLab's QUERY API.
MediaToolkit: OCR, table/formula detection, and image content extraction via PixLab APIs.
MathToolkit: Execute arithmetic operations and symbolic math functions.
MCPToolkit: Bridge to external tools using the Model Context Protocol (MCP).
MeshyToolkit: Manage and manipulate 3D mesh geometry.
PPTXToolkit: Programmatically create PowerPoint slides with text and images.
RetrievalToolkit: Query custom vector stores for semantic search results.
SearchToolkit: Perform web and knowledge base searches (DuckDuckGo, Wikipedia, etc.).
SlackToolkit: Automate Slack operations: messages, channels, roles.
SyNumPyToolkit: Run NumPy operations using the SyNumPy C++ library.
TerminalToolkit: Execute CLI commands and file operations across OSes.
VideoAnalysisToolkit: Analyze video content with frame-based Q&A using PixLab APIs.
WhatsAppToolkit: Send messages and manage templates via WhatsApp Business API.
ZapierToolkit: Trigger Zapier workflows using natural language commands.

For image analysis, we recommend leveraging the PixLab APIs, such as the QUERY, TAG-IMG, and DESCRIBE API endpoints, in addition to the comprehensive suite of Vision Language Models API endpoints.

HTTP Methods

GET

HTTP Parameters

Required

key (String): Your PixLab API Key ↗. You can also embed your key in the WWW-Authenticate HTTP header and omit this parameter if you want to.
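A minimal sketch of the GET call, assuming the endpoint lives at `https://api.pixlab.io/llm-tools` (an assumed URL; substitute the one from your PixLab console):

```python
import urllib.parse

# Hypothetical base URL for the llm-tools endpoint.
BASE_URL = "https://api.pixlab.io/llm-tools"

def tools_listing_url(api_key: str) -> str:
    """Build the GET URL with the required `key` parameter."""
    return BASE_URL + "?" + urllib.parse.urlencode({"key": api_key})

url = tools_listing_url("YOUR_PIXLAB_KEY")
```

Alternatively, omit the `key` query parameter and send the key in the WWW-Authenticate header, as noted above.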

HTTP Response

application/json

A JSON array of the PixLab tools listed above, in an OpenAI-compatible tool call format.

LLM Tools Response Example


// List of tools returned by the llm-tools API endpoint
[{
    "type": "function",
    "name": "analyze_image",
    "description": "Analyze image content, and respond to user query.",
    "parameters": {
        "type": "object",
        "properties": {
            "image": {
                "type": "string",
                "description": "Input Image URL or Base64 Image Body Encoding"
            }
        },
        "required": [
            "image"
        ],
        "additionalProperties": false
    }
}]
status (Integer): HTTP 200 indicates success. Any other code indicates failure.
tools (List): An array of the tools listed above, in an OpenAI-compatible tool call format.
object (String): Name of the invoked API endpoint.
created (Timestamp): Timestamp of the generated output creation.
error (String): Error description when status != 200.
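Putting these fields together, a response can be validated and its tools handed straight to any OpenAI-compatible client. The sketch below runs against a canned payload modeled on the example above (the payload values are illustrative, not a live API response):

```python
import json

# Illustrative response body, modeled on the response example above.
raw = """
{
  "status": 200,
  "object": "llm-tools",
  "created": 1700000000,
  "tools": [
    {
      "type": "function",
      "name": "analyze_image",
      "description": "Analyze image content, and respond to user query.",
      "parameters": {
        "type": "object",
        "properties": {
          "image": {
            "type": "string",
            "description": "Input Image URL or Base64 Image Body Encoding"
          }
        },
        "required": ["image"],
        "additionalProperties": false
      }
    }
  ]
}
"""

resp = json.loads(raw)
if resp["status"] != 200:
    raise RuntimeError(resp.get("error", "unknown error"))

# `tools` can be passed directly as the `tools=` argument of an
# OpenAI-compatible chat completion call.
tools = resp["tools"]
tool_names = [t["name"] for t in tools]
```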

Code Samples


# For a comprehensive list of production-ready code samples, please consult the PixLab GitHub Repository: https://github.com/symisc/pixlab.

← Return to API Endpoint Listing