InferX Quickstart

What is InferX?

InferX is a model wrapper tool we’ve been using internally to test and benchmark ML models across various hardware configurations. It automatically detects your hardware and prepares and runs model inference for that device. It was built mainly to test models on A100/H100 GPUs and NVIDIA Jetson boards, though it can be extended to support any device. 📊 View Model & Platform Compatibility Matrix

Setup Guide

This guide will walk you through setting up your environment for using InferX. We’ll cover installing the necessary tools, creating a virtual environment, and installing the SDK.

Prerequisites

Before you begin, make sure you have:
  • Python 3.10 or later installed
  • Git installed

Step 1: Set up Docker without sudo

Add your user to the docker group so you can run Docker commands without sudo:
sudo usermod -aG docker $USER
Log out and log back in (or run newgrp docker) for the group change to take effect.
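To confirm Docker now runs without sudo, you can try Docker’s standard test image:
docker run hello-world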

Step 2: Install uv

First, install uv, a fast Python package installer and resolver that we recommend for managing dependencies:
curl -LsSf https://astral.sh/uv/install.sh | sh
This will install uv on your system. After installation, you may need to restart your terminal or source your shell configuration file to use uv.
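You can verify the installation by checking the version:
uv --version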

Step 3: Create a Project Directory

Create a new directory for your project and navigate into it:
mkdir inferx-project && cd inferx-project

Step 4: Create a Virtual Environment

Create a Python virtual environment using uv. We recommend Python 3.10 for optimal compatibility:
uv venv --python 3.10
This creates a virtual environment in the .venv directory. Activate the virtual environment:
# On Linux/macOS
source .venv/bin/activate

# On Windows
.venv\Scripts\activate
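With the environment activated, you can confirm it is using the expected interpreter:
python --version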

Step 5: Install InferX

Install InferX directly from the GitHub repository:
uv pip install git+https://github.com/exla-ai/InferX.git
If everything is set up correctly, you should see the InferX version and a success message!
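As an additional sanity check, confirm the package imports cleanly (this assumes the installed package is importable as inferx, matching the examples below):
python -c "import inferx"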

Next Steps

Now that you have set up your environment, you can run your first model (see Getting Started with Your First Model below) and explore the examples repository of ready-to-run scripts.

Troubleshooting

If you encounter any issues during setup:
  • Make sure you’re using Python 3.10 or later
  • Check that all dependencies are properly installed
  • Please don’t hesitate to reach out to us by email at contact@exla.ai

Getting Started with Your First Model

Now that you have InferX installed, let’s run your first model! We’ll use CLIP, a multimodal model that connects text and images.

Using CLIP for Image-Text Matching

CLIP (Contrastive Language-Image Pretraining) allows you to find the best matching images for a given text description or vice versa. Here’s how to use it:
from inferx.models.clip import clip
import json

# Initialize the model (automatically detects your hardware)
model = clip()

# Run inference with sample images and text queries
results = model.inference(
    image_paths=["path/to/image1.jpg", "path/to/image2.jpg"],
    text_queries=["a photo of a dog", "a photo of a cat", "a photo of a bird"]
)

# Print results
print(json.dumps(results, indent=2))

What’s Happening Behind the Scenes

When you run this code:
  1. InferX automatically detects your hardware (Jetson, GPU, or CPU)
  2. It loads the appropriate optimized implementation of CLIP
  3. The model processes your images and text queries
  4. It returns similarity scores between each image and text query
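InferX handles the detection for you, but for intuition, here is a minimal sketch of the kind of check such a tool might perform. This is illustrative only (not InferX’s actual implementation) and assumes PyTorch may be installed:

from pathlib import Path

def detect_device():
    # Jetson boards ship an NVIDIA Tegra release file
    if Path("/etc/nv_tegra_release").exists():
        return "jetson"
    try:
        import torch
        if torch.cuda.is_available():
            # e.g. "NVIDIA A100-SXM4-80GB" or "NVIDIA H100 PCIe"
            return torch.cuda.get_device_name(0)
    except ImportError:
        pass
    return "cpu"

print(detect_device())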

Sample Output

The output will look something like this:
[
  {
    "a photo of a dog": [
      {
        "image_path": "data/dog.png",
        "score": "23.1011"
      },
      {
        "image_path": "data/cat.png",
        "score": "17.1396"
      }
    ]
  },
  {
    "a photo of a cat": [
      {
        "image_path": "data/cat.png",
        "score": "25.3045"
      },
      {
        "image_path": "data/dog.png",
        "score": "18.7532"
      }
    ]
  }
]
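Note that the scores are returned as strings, so convert them with float() before comparing. For example, to pick the top-scoring image for each query (using the results object from the code above):

for entry in results:
    for query, matches in entry.items():
        best = max(matches, key=lambda m: float(m["score"]))
        print(f"{query!r} -> {best['image_path']} (score: {best['score']})")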

Next Steps with Models

Now that you’ve run your first model, you can explore other models in InferX:
  • DeepSeek: For large language model capabilities
  • RoboPoint: For keypoint affordance prediction in robotics
  • SAM2: For advanced image segmentation
  • MobileNet: For efficient image classification
  • ResNet34: For high-accuracy image classification
Check out the Models section for detailed documentation on each model.

Exploring Example Code

To help you get started quickly, we provide a repository of example code for all our models and features. These examples demonstrate real-world usage and best practices.

Setting Up the Examples Repository

  1. Clone the examples repository:
git clone https://github.com/exla-ai/InferX-examples.git
  2. Navigate to the examples directory:
cd InferX-examples
  3. Explore the available examples:
ls
You’ll see directories for each model and feature, including:
  • clip/ - Examples for the CLIP model
  • deepseek_r1/ - Examples for the DeepSeek language model
  • robopoint/ - Examples for the RoboPoint model
  • custom_model/ - Examples for optimizing your own models
  • And more!

Running an Example

Let’s run a simple example using the CLIP model:
  1. Navigate to the CLIP examples directory:
cd clip
  2. Run the example:
python example_clip.py
This will demonstrate how to use CLIP for image-text matching with sample images.

Running the RoboPoint Example

For a more advanced example, try the RoboPoint model:
  1. Navigate to the RoboPoint examples directory:
cd ../robopoint
  2. Run the example:
python example_robopoint.py
This will demonstrate how to use RoboPoint for robotic perception tasks.

Optimizing Your Own Models

To see how to optimize your own custom models:
  1. Navigate to the custom model examples directory:
cd ../custom_model
  2. Run the example:
python example_optimize_custom_model.py
This example shows how to optimize a pre-trained EfficientNet model for faster inference.
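The exact API is shown in the example script; as background, a generic PyTorch optimization pass (not InferX’s API, just an illustration of the idea using torchvision’s EfficientNet) looks something like this:

import torch
from torchvision.models import efficientnet_b0, EfficientNet_B0_Weights

# Load a pre-trained EfficientNet and switch to inference mode
model = efficientnet_b0(weights=EfficientNet_B0_Weights.DEFAULT).eval()

# JIT-compile the model for faster inference (requires PyTorch 2.x)
compiled = torch.compile(model)

with torch.inference_mode():
    x = torch.randn(1, 3, 224, 224)  # dummy ImageNet-sized input
    out = compiled(x)
print(out.shape)  # torch.Size([1, 1000])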

Next Steps

After exploring the examples, you can adapt them to your own images, text queries, and models, or use the custom_model example as a starting point for optimizing your own models.