InferX Quickstart
What is InferX?
InferX is a model wrapper tool we’ve been using internally to test and benchmark ML models across various hardware configurations. It automatically detects your hardware and prepares and executes model inference for that device. It was built mainly to test models on A100/H100 GPUs and Jetson devices, though it can be extended to support any device.

📊 View Model & Platform Compatibility Matrix

Setup Guide
This guide will walk you through setting up your environment for using InferX. We’ll cover installing the necessary tools, creating a virtual environment, and installing the SDK.

Prerequisites
Before you begin, make sure you have:
- Python 3.10 or later installed
- Git installed
Step 1: Set up Docker in sudo mode
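The commands for this step were not preserved on this page. On Ubuntu/Debian-based systems (including Jetson), a typical setup looks like the following; this is an assumption, so adjust the package manager and package name for your distribution:

```shell
# Assumed setup for Ubuntu/Debian-based systems (including Jetson);
# adjust for other distributions.
sudo apt-get update
sudo apt-get install -y docker.io
sudo systemctl enable --now docker

# Verify that Docker runs with sudo
sudo docker run --rm hello-world
```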
Step 2: Install uv
First, install uv, a fast Python package installer and resolver that we recommend for managing dependencies, on your system. After installation, you may need to restart your terminal or source your shell configuration file before uv is available.
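The install command itself was stripped from this page; uv’s standard installer one-liner, per Astral’s documentation, is:

```shell
# Download and run the official uv installer
curl -LsSf https://astral.sh/uv/install.sh | sh
```

If you use Homebrew on macOS, `brew install uv` works as well.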
Step 3: Create a Project Directory
Create a new directory for your project and navigate into it.

Step 4: Create a Virtual Environment
Create a Python virtual environment using uv. We recommend using Python 3.10 for optimal compatibility. This creates a .venv directory, which you then activate.
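The commands for Steps 3 and 4 were stripped from this page; a likely sequence is the following, where the project directory name is an arbitrary placeholder:

```shell
# Step 3: create a project directory and enter it (name is your choice)
mkdir inferx-project
cd inferx-project

# Step 4: create a Python 3.10 virtual environment with uv...
uv venv --python 3.10

# ...and activate it (use .venv\Scripts\activate on Windows)
source .venv/bin/activate
```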
Step 5: Install InferX
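The install command was stripped from this page. Assuming InferX is pip-installable straight from GitHub, it likely has this shape; the repository path below is a placeholder, not the real URL:

```shell
# Placeholder URL: substitute the actual InferX repository path
uv pip install "git+https://github.com/<org>/inferx.git"
```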
Install InferX directly from the GitHub repository.

Next Steps
Now that you have set up your environment and tested InferX, you can:
- Explore the Quickstart Guide for more examples
- Check out the CLIP model documentation for image-text matching
- Try the RoboPoint model for keypoint affordance prediction
- Learn about hardware compatibility for optimized performance
Troubleshooting
If you encounter any issues during setup:
- Make sure you’re using Python 3.10 or later
- Check that all dependencies are properly installed
- Reach out to us by email at contact@exla.ai
Getting Started with Your First Model
Now that you have InferX installed, let’s run your first model! We’ll use CLIP, a powerful multimodal model that connects text and images.

Using CLIP for Image-Text Matching
CLIP (Contrastive Language-Image Pretraining) lets you find the best-matching images for a given text description, or vice versa. Here’s how to use it.

What’s Happening Behind the Scenes
When you run this code:
- InferX automatically detects your hardware (Jetson, GPU, or CPU)
- It loads the appropriate optimized implementation of CLIP
- The model processes your images and text queries
- It returns similarity scores between each image and text query
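The code block for this example was not preserved on this page. As a purely hypothetical sketch of the four steps above (the module path, class name, and method signature are assumptions, not the documented InferX API):

```python
# Hypothetical sketch; the names below are assumptions, not the real InferX API.
from inferx.models import clip  # assumed import path

model = clip()                  # steps 1-2: hardware detected, optimized CLIP loaded
results = model.inference(      # step 3: process your images and text queries
    image=["images/cat.jpg", "images/dog.jpg"],
    text=["a photo of a cat", "a photo of a dog"],
)
print(results)                  # step 4: similarity scores per image-text pair
```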
Sample Output
The output is a list of similarity scores, one for each image-text pair.

Next Steps with Models
Now that you’ve run your first model, you can explore other models in InferX:
- DeepSeek: For large language model capabilities
- RoboPoint: For keypoint affordance prediction in robotics
- SAM2: For advanced image segmentation
- MobileNet: For efficient image classification
- ResNet34: For high-accuracy image classification
Exploring Example Code
To help you get started quickly, we provide a repository of example code for all our models and features. These examples demonstrate real-world usage and best practices.

Setting Up the Examples Repository
- Clone the examples repository:
- Navigate to the examples directory:
- Explore the available examples:
- clip/ - Examples for the CLIP model
- deepseek_r1/ - Examples for the DeepSeek language model
- robopoint/ - Examples for the RoboPoint model
- custom_model/ - Examples for optimizing your own models
- And more!
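The shell commands for these three steps were stripped from this page; they likely follow this shape, where the repository URL is a placeholder, not the real one:

```shell
# Placeholder URL: substitute the actual examples repository path
git clone https://github.com/<org>/inferx-examples.git
cd inferx-examples
ls   # list the available example directories
```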
Running an Example
Let’s run a simple example using the CLIP model:
- Navigate to the CLIP examples directory:
- Run the example:
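The exact commands and script names were stripped from this page; the pattern is likely the following, where the script name is a placeholder:

```shell
cd clip
python <example_script>.py   # placeholder name; check the directory for the script
```

The RoboPoint and custom-model walkthroughs below follow the same pattern in the robopoint/ and custom_model/ directories.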
Running the RoboPoint Example
For a more advanced example, try the RoboPoint model:
- Navigate to the RoboPoint examples directory:
- Run the example:
Optimizing Your Own Models
To see how to optimize your own custom models:
- Navigate to the custom model examples directory:
- Run the example:
Next Steps
After exploring the examples, you can:
- Modify the examples to fit your specific use case
- Integrate the code into your own projects
- Learn about advanced optimization techniques
- Explore hardware-specific optimizations