NSFW API Endpoint

Version 2.197 (Release Notes ↗)

Description

Detect not suitable for work (NSFW) content, i.e. nudity and adult material, in a given image or video frame. NSFW is of particular interest when combined with media processing endpoints such as blur, encrypt, or mogrify, which lets you censor images on the fly according to their NSFW score and automate tasks such as filtering user uploads. See the code samples section below for a concrete usage.

HTTP Methods

GET, POST

HTTP Parameters

Required

Fields    Type      Description
img       URL       Input image URL. If you want to upload your image directly from your app, submit a multipart/form-data POST request instead.
key       String    Your PixLab API Key ↗.
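
A minimal sketch of a GET call using only the two required fields above; the image URL is a placeholder, and the endpoint URL matches the one used in the code samples section.

import requests

# Query the NSFW score for a remotely hosted image.
response = requests.get(
    'https://api.pixlab.io/nsfw',
    params={
        'img': 'https://example.com/picture.jpg',  # placeholder input image
        'key': 'PIXLAB_API_KEY'                    # your PixLab API key
    },
    timeout=10
)
data = response.json()
if data['status'] == 200:
    print(f"NSFW score: {data['score']}")
else:
    print(f"Error: {data['error']}")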

POST Request Body

This section details the requirements for using a POST request instead of a simple GET request.

Allowed Content-Types:

  • multipart/form-data
  • application/json

Use multipart/form-data to upload the image directly from your app; consult the REST API code samples for a working implementation, and see the sketch below. For application/json submissions, the image must be uploaded beforehand: call the store endpoint before invoking this one.
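
As a rough illustration, the sketch below posts a local file as multipart/form-data. The 'file' form-field name and passing the API key as a query parameter are assumptions made for this example; verify both against the REST API code samples.

import requests

# Sketch: direct upload of a local image via multipart/form-data.
# NOTE: the 'file' field name is an assumption; check the official samples.
with open('local_picture.jpg', 'rb') as fh:
    response = requests.post(
        'https://api.pixlab.io/nsfw',
        params={'key': 'PIXLAB_API_KEY'},  # API key as query parameter (assumption)
        files={'file': fh},
        timeout=30
    )
data = response.json()
print(data.get('score'), data.get('error'))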

HTTP Response

application/json

This endpoint returns a JSON response. The score field is the key metric; values closer to 1 indicate a higher NSFW probability. Response structure:

Fields    Type      Description
status    Integer   HTTP status code (200 indicates success)
score     Float     NSFW probability score (0.0 - 1.0 range)
error     String    Error description when status ≠ 200
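
For illustration only, a successful response might look like the following; the score value here is made up.

{
  "status": 200,
  "score": 0.87
}

On failure, status carries a non-200 code and error describes what went wrong.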

Code Samples


import requests
from typing import Dict, Any

def censor_nsfw_image(image_url: str, api_key: str) -> None:
    """Check and censor NSFW content from an image using PixLab API."""
    
    # NSFW detection endpoint
    nsfw_params = {
        'img': image_url,
        'key': api_key
    }
    
    try:
        # Check for NSFW content
        response = requests.get(
            'https://api.pixlab.io/nsfw',
            params=nsfw_params,
            timeout=10
        )
        response.raise_for_status()
        nsfw_data: Dict[str, Any] = response.json()

        if nsfw_data['status'] != 200:
            print(f"Error: {nsfw_data.get('error', 'Unknown error')}")
            return

        if nsfw_data['score'] < 0.5:
            print("No adult content detected in this picture")
            return

        # Censor NSFW content
        print("Censoring NSFW picture...")
        blur_params = {
            'img': image_url,
            'key': api_key,
            'rad': 50,
            'sig': 30
        }
        
        blur_response = requests.get(
            'https://api.pixlab.io/blur',
            params=blur_params,
            timeout=10
        )
        blur_response.raise_for_status()
        blur_data: Dict[str, Any] = blur_response.json()

        if blur_data['status'] != 200:
            print(f"Error: {blur_data.get('error', 'Unknown error')}")
        else:
            print(f"Censored image: {blur_data['link']}")

    except requests.exceptions.RequestException as e:
        print(f"API request failed: {e}")

if __name__ == "__main__":
    # Configuration
    TARGET_IMAGE = 'https://i.redd.it/oetdn9wc13by.jpg'
    API_KEY = 'PIXLAB_API_KEY'  # Replace with your actual API key
    
    censor_nsfw_image(TARGET_IMAGE, API_KEY)