Download the latest release of the SOD Embedded Computer Vision library and get access to state-of-the-art, pre-trained machine learning models.
The code samples are practical, real-world working code implemented in C, intended to familiarize the reader with the SOD Embedded API; they are also available to consult online here. For an introductory course to the API, see Getting Started with SOD and The C/C++ API Reference Guide. You're welcome to copy/paste and run these examples to see the API in action.
Smallest and fastest object detection CNN model, pre-trained on the Pascal VOC dataset and able to detect 20 classes of objects (e.g. car, person, dog, chair). This model works in real time on a Core i7 and similar CPUs with proper compiler optimizations (AVX, SSE, etc.) or with the proprietary multi-core enabled SOD release. ID - tiny20.sod
Small and fast object detection CNN model, pre-trained on the MS COCO dataset and able to detect 80 classes of objects (e.g. bus, person, airplane, stop sign). This model works in real time on a Core i7 and similar CPUs with proper compiler optimizations (AVX, SSE, etc.) or with the proprietary multi-core enabled SOD release. ID - tiny80.sod
The most accurate, but largest and slowest (compared to :voc or :coco), object detection CNN model, pre-trained on the MS COCO dataset and able to detect 80 classes of objects including car, motorbike, horse, bicycle, and so forth. This model does not work in real time, even with the proprietary multi-core enabled SOD release. ID - full.sod
Distill useful information, including buildings, shapes, forms, etc., from satellite imagery in real time using the proprietary multi-core enabled SOD release. This model requires prior approval before delivery. ID - sat.sod
Production-ready, pre-trained models to be used in conjunction with the SOD RealNets interfaces.
Model: Frontal Face Detector
RAM Consumption: 234 KB
Model Size: < 10 MB
Description: Real-time (5 ~ 15 ms on an HD video stream) frontal face detector RealNet model, pre-trained on the Genki-4K dataset. This is the recommended model if you are capturing a video stream from a user's webcam or smartphone front camera to implement Snapchat-like filters, face recognition, and so forth. RealNets are designed to analyze and extract useful information from video streams rather than static images, thanks to their fast processing speed (less than 10 milliseconds on a 1920x1080 HD stream) and low memory footprint, which make them suitable for use on mobile devices. You are encouraged to connect the RealNets APIs to the OpenCV video capture interfaces, or any proprietary video capture API, to see them in action.
ID - face.realnet.sod
Frontal face detector, WebAssembly model pre-trained on the Genki-4K dataset for Web-oriented applications. The model is production ready and works in real time on all modern browsers (mobile devices included). Usage instructions are included in the package. ID - Webassembly.face.model
Transform an input image or video frame into printable ASCII characters in real time using a single decision tree. Real-time performance is achieved by using pixel intensity comparisons inside the internal nodes of the tree. The library repository is available on GitHub. This is the hex output model generated during the training phase; it contains both the codebook and the decision tree that let you render your images or video frames in real time. ID - ascii_art.hex