API Endpoint Access URL
https://llm.pixlab.io/llmtools
Get Your API Key & Try LLM Tools Now ↗

Description
The PixLab Tool Listing API provides a robust, production-grade implementation of the OpenAI Function Calling Specification ↗, allowing large language models (LLMs) to seamlessly invoke external tools and services in a structured, schema-driven format. This capability empowers developers to integrate LLMs into real-world workflows by bridging model predictions with actionable operations such as document parsing, image analysis, image editing, data scraping, GitHub interactions, and much more. Each tool is defined as a callable function with a machine-readable JSON schema, including fields like name, description, parameters, and expected output. These schemas are automatically compatible with any OpenAI-based or OpenAI-compatible model, including DeepSeek, Claude, Mistral, Gemini, Qwen, Gemma, and others deployed via frameworks such as OpenRouter, DeepInfra, or your own custom inference pipelines.
A Tool is a discrete, callable unit of logic, such as “transcribe audio,” “remove image background,” or “query arXiv,” that can be executed immediately once you pass it verbatim (as returned by your large language model) to the LLM-TOOL-CALL REST API endpoint. Tools are grouped into Toolkits, which are collections of related functionality serving a focused purpose (e.g., file operations, media processing, spreadsheet parsing, terminal commands, etc.). This structure improves modularity, discoverability, and fine-grained control for LLM-based agents. Tools are defined in a way that supports runtime invocation with JSON-serialized arguments and structured JSON output, fully aligned with OpenAI's function call protocol. This enables easy chaining, programmatic orchestration, and interaction with external APIs or systems without sacrificing safety or transparency.
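The pass-through described above can be sketched in a few lines of Python. This is a minimal, offline illustration: the forwarding payload's field names (`name`, `arguments`) mirror OpenAI's function call protocol but are assumptions here, since the exact LLM-TOOL-CALL request body is documented on its own page.

```python
import json

# A tool call as an OpenAI-compatible model would emit it. The tool name
# "analyze_image" matches the example schema shown later on this page;
# the image URL is a placeholder.
llm_tool_call = {
    "type": "function",
    "name": "analyze_image",
    "arguments": json.dumps({"image": "https://example.com/photo.jpg"}),
}

def build_forward_payload(tool_call: dict) -> dict:
    """Prepare a body to send, verbatim, to the LLM-TOOL-CALL endpoint.

    The model serializes arguments as a JSON string; per the protocol
    above, they are forwarded unchanged rather than re-encoded.
    """
    json.loads(tool_call["arguments"])  # sanity-check: well-formed JSON
    return {"name": tool_call["name"], "arguments": tool_call["arguments"]}

payload = build_forward_payload(llm_tool_call)
print(payload["name"])  # analyze_image
```

Forwarding the serialized arguments untouched is what keeps the round trip schema-faithful: the model, not your glue code, owns the argument encoding.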
Developers integrating with PixLab's Vision Platform can register or consume pre-built toolkits that expose dozens of callable tools. These are particularly well-suited for automating workflows in domains like computer vision, media processing, document understanding, knowledge retrieval, and human-AI collaboration. For example, the ImageAnalysisToolkit integrates tightly with PixLab's vision endpoints, enabling AI agents to answer questions about images, detect objects, OCR text, or identify faces. The BrowserToolkit lets models simulate browser navigation and extract live web content, while the CodeExecutionToolkit allows LLMs to safely run Python or Jupyter cells in isolated sandboxes.
Every tool and toolkit provided by PixLab is versioned, documented, and exposed via a consistent schema that you can query directly from the API or import into your LLM runtime. Whether you're building autonomous agents, data processing pipelines, developer copilots, or interactive chat interfaces, PixLab's Tool Call API provides the necessary building blocks to connect your models with real-world functionality; reliably, securely, and at scale.
Below is a categorized listing of all built-in toolkits currently available through the PixLab Tool Call API. These are fully compatible with OpenAI's tool calling specification and are ready to be integrated into any LLM-supported application or agent runtime.
Toolkit Name | Description |
---|---|
ArxivToolkit | Search and retrieve academic papers from the arXiv API. |
AudioAnalysisToolkit | Transcribe and analyze audio content with contextual Q&A. |
BrowserToolkit | Simulate browsing, extract page content, and interact with web pages. |
CodeExecutionToolkit | Execute code via Python, Jupyter, subprocess, Docker, or e2b sandboxes. |
DataCommonsToolkit | Query statistical and graph data from Data Commons using SPARQL. |
ExcelToolkit | Convert Excel sheets to markdown tables and extract structured content. |
FunctionTool | Define custom tools with JSON schema parsing for OpenAI-compatible calls. |
FileWriteTool | Create, write, or modify plain text files. |
GitHubToolkit | Interact with GitHub: issues, PRs, repository data. |
HumanToolkit | Enable human-in-the-loop tasks for agent reinforcement or fallback actions. |
ImageAnalysisToolkit | Perform vision-language reasoning using PixLab's QUERY API. |
MediaToolkit | OCR, table/formula detection, and image content extraction via PixLab APIs. |
MathToolkit | Execute arithmetic operations and symbolic math functions. |
MCPToolkit | Bridge to external tools using the Model Context Protocol (MCP). |
MeshyToolkit | Manage and manipulate 3D mesh geometry. |
PPTXToolkit | Programmatically create PowerPoint slides with text and images. |
RetrievalToolkit | Query custom vector stores for semantic search results. |
SearchToolkit | Perform web and knowledge base searches (DuckDuckGo, Wikipedia, etc.). |
SlackToolkit | Automate Slack operations: messages, channels, roles. |
SyNumPyToolkit | Run NumPy operations using the SyNumPy C++ library. |
TerminalToolkit | Execute CLI commands and file operations across OSes. |
VideoAnalysisToolkit | Analyze video content with frame-based Q&A using PixLab APIs. |
WhatsAppToolkit | Send messages and manage templates via WhatsApp Business API. |
ZapierToolkit | Trigger Zapier workflows using natural language commands. |
For image analysis, we recommend leveraging the PixLab APIs, such as the QUERY, TAG-IMG and DESCRIBE API endpoints, in addition to the comprehensive suite of Vision Language Models API endpoints.
HTTP Methods
GET
HTTP Parameters
Required
Fields | Type | Description |
---|---|---|
key | String | Your PixLab API Key ↗. You can also embed your key in the WWW-Authenticate: HTTP header and omit this parameter if you want to. |
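Both authentication options can be shown with the Python standard library alone; the key value below is a placeholder, and no request is actually sent:

```python
import urllib.parse
import urllib.request

API_KEY = "PIXLAB_API_KEY"  # placeholder; substitute your real key
ENDPOINT = "https://llm.pixlab.io/llmtools"

# Option 1: pass the key as the required `key` query parameter.
url_with_key = ENDPOINT + "?" + urllib.parse.urlencode({"key": API_KEY})

# Option 2: omit the parameter and embed the key in the
# WWW-Authenticate HTTP header instead.
req = urllib.request.Request(ENDPOINT)
req.add_header("WWW-Authenticate", API_KEY)

print(url_with_key)
```

The header form keeps the key out of access logs and browser history, which is generally preferable for server-side integrations.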
HTTP Response
application/json
An array of the PixLab tools listed above, each described in an OpenAI-compatible tool call format.
LLM Tools Response Example
// List of tools returned by the llm-tools API endpoint
[
  {
    "type": "function",
    "name": "analyze_image",
    "description": "Analyze image content, and respond to user query.",
    "parameters": {
      "type": "object",
      "properties": {
        "image": {
          "type": "URL",
          "description": "Input Image URL or Base64 Image Body Encoding"
        }
      },
      "required": [
        "image"
      ],
      "additionalProperties": false
    }
  }
  // ... more tool entries follow
]
Fields | Type | Description |
---|---|---|
status | Integer | HTTP 200 indicates success. Any other code indicates failure. |
tools | List | An array (or list) of the tools listed above in an OpenAI tool call compatible format. |
object | String | Invoked vLM API endpoint. |
created | Timestamp | Timestamp of generated output creation. |
error | String | Error description when status != 200. |
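The field semantics above translate into a simple consumption pattern. The snippet below parses a trimmed, hard-coded response so it runs offline; the `object` and `created` values are illustrative assumptions, not actual API output.

```python
import json

# A trimmed response shaped like the fields above; a real response lists
# every toolkit's tools. The `object` and `created` values are made up.
raw = """{
  "status": 200,
  "tools": [
    {"type": "function", "name": "analyze_image",
     "description": "Analyze image content, and respond to user query."}
  ],
  "object": "llm-tools",
  "created": 1735689600
}"""

resp = json.loads(raw)
if resp["status"] != 200:
    # On failure, the `error` field carries a human-readable description.
    raise RuntimeError(resp.get("error", "unknown error"))

tool_names = [tool["name"] for tool in resp["tools"]]
print(tool_names)  # ['analyze_image']
```

Checking `status` before touching `tools` matters because `error` is only populated on failure.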
Code Samples
# For a comprehensive list of production-ready code samples, please consult the PixLab GitHub Repository: https://github.com/symisc/pixlab.
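As a minimal, dependency-free Python sketch (stdlib only), fetching the tool list and applying the error handling described in the response fields above might look like this; the usage lines are commented out because they perform a live request:

```python
import json
import urllib.parse
import urllib.request

ENDPOINT = "https://llm.pixlab.io/llmtools"

def build_request_url(api_key: str) -> str:
    """Attach the required `key` parameter to the endpoint URL."""
    return ENDPOINT + "?" + urllib.parse.urlencode({"key": api_key})

def list_llm_tools(api_key: str) -> list:
    """GET the endpoint and return the OpenAI-compatible tool list."""
    with urllib.request.urlopen(build_request_url(api_key)) as resp:
        data = json.load(resp)
    if data.get("status") != 200:
        raise RuntimeError(data.get("error", "request failed"))
    return data["tools"]

# Usage (performs a live request):
#   tools = list_llm_tools("YOUR_PIXLAB_API_KEY")
#   print([tool["name"] for tool in tools])
```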
// For a comprehensive list of production-ready code samples, please consult the PixLab GitHub Repository: https://github.com/symisc/pixlab.
<?php
# For a comprehensive list of production-ready code samples, please consult the PixLab GitHub Repository: https://github.com/symisc/pixlab.
Similar API Endpoints
tagimg, tool-call, docscan, image-embed, chat, llm-parse, answer, describe, text-embed, pdftoimg, query, coder