InferX Android SDK

The InferX Android SDK allows you to run state-of-the-art AI models directly on your Android device, without requiring an internet connection for inference. Same API, optimized for mobile.

Features

  • Offline Inference: Run models locally without internet connection
  • Cross-Device Compatibility: Same API across mobile and server deployments
  • Optimized Performance: Hardware-accelerated inference on mobile devices
  • Easy Integration: Simple SDK integration into existing Android apps

We provide a complete Android application that demonstrates the usage of the InferX Android SDK:

Example Repository: github.com/exla-ai/InferX-android-example

Quick Start

Prerequisites

  1. Android Studio installed
  2. Android device or emulator (API level 21+)
  3. JitPack repository access

To view a demo of using the SDK, check out the example: github.com/exla-ai/InferX-android-example

Setting Up the SDK

1. Configure JitPack Repository

Add JitPack to your project-level build.gradle or build.gradle.kts:

// build.gradle.kts (Project level)
allprojects {
    repositories {
        google()
        mavenCentral()
        maven { url = uri("https://jitpack.io") }
    }
}
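
If your project uses the newer settings-level repository declaration (common in Gradle 7+ project templates), the JitPack entry goes in settings.gradle.kts instead of the project-level allprojects block — use whichever style your project already has:

```kotlin
// settings.gradle.kts — alternative to the allprojects block above
dependencyResolutionManagement {
    repositoriesMode.set(RepositoriesMode.FAIL_ON_PROJECT_REPOS)
    repositories {
        google()
        mavenCentral()
        maven { url = uri("https://jitpack.io") }
    }
}
```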

2. Add SDK Dependency

Add the InferX Android SDK dependency to your app’s build.gradle.kts:

// build.gradle.kts (App level)
dependencies {
    // Replace "latest-version" with the latest release tag published on JitPack
    implementation("com.github.exla-ai:InferX-android-sdk:latest-version")
    
    // Required dependencies
    implementation("androidx.core:core-ktx:1.12.0")
    implementation("androidx.appcompat:appcompat:1.6.1")
    implementation("com.google.android.material:material:1.11.0")
}

3. Add Permissions

Add the required permissions to your AndroidManifest.xml. Note that on Android 10 (API 29) and later, scoped storage largely supersedes the external-storage permissions, so you may be able to drop them depending on how the SDK stores models:

<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.CAMERA" />
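
CAMERA is a dangerous permission, so on Android 6.0+ the manifest entry alone is not enough — you must also request it at runtime before starting camera inference. A minimal sketch using the AndroidX Activity Result API (the class and method names here are illustrative, not part of the SDK):

```kotlin
import android.Manifest
import android.content.pm.PackageManager
import androidx.activity.result.contract.ActivityResultContracts
import androidx.appcompat.app.AppCompatActivity
import androidx.core.content.ContextCompat

class CameraPermissionActivity : AppCompatActivity() {
    // Registered launcher; the callback fires with the user's decision
    private val requestCamera =
        registerForActivityResult(ActivityResultContracts.RequestPermission()) { granted ->
            if (granted) {
                // Safe to start camera-based inference here
            }
        }

    fun ensureCameraPermission() {
        val alreadyGranted = ContextCompat.checkSelfPermission(
            this, Manifest.permission.CAMERA
        ) == PackageManager.PERMISSION_GRANTED
        if (!alreadyGranted) {
            requestCamera.launch(Manifest.permission.CAMERA)
        }
    }
}
```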

Basic Usage

Initialize the SDK

import com.inferx.android.InferXSdk

class MainActivity : AppCompatActivity() {
    private lateinit var sdk: InferXSdk
    
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)
        
        // Initialize InferX SDK
        sdk = InferXSdk.getInstance(applicationContext)
        
        // Setup model download and initialization
        setupModel()
    }
    
    private fun setupModel() {
        // Download and initialize the model
        sdk.downloadModel("clip") { progress ->
            runOnUiThread {
                Log.d("InferX", "Download progress: $progress%")
                // Update your progress UI here
            }
        }
        
        // Note: depending on the SDK's behavior, you may need to wait for the
        // download callback to report completion before initializing the model.
        sdk.initializeModel("clip") { success ->
            runOnUiThread {
                if (success) {
                    Log.d("InferX", "Model ready")
                    // Enable inference UI
                } else {
                    Log.e("InferX", "Model initialization failed")
                }
            }
        }
    }
}

Run Inference

// Image-text matching with CLIP
private fun runImageTextMatching() {
    val imageBitmap = loadImageFromAssets("sample_image.jpg") ?: return
    val textQueries = listOf("a photo of a dog", "a photo of a cat", "a landscape")
    
    sdk.runInference("clip", imageBitmap, textQueries) { results ->
        runOnUiThread {
            // Process results
            results?.let { 
                Log.d("InferX", "AI Response: $results")
                displayResults(results)
            }
        }
    }
}

private fun loadImageFromAssets(fileName: String): Bitmap? {
    return try {
        val inputStream = assets.open(fileName)
        BitmapFactory.decodeStream(inputStream)
    } catch (e: IOException) {
        Log.e("InferX", "Error loading image: ${e.message}")
        null
    }
}
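
The shape of the results object isn't specified above, but if runInference yields one similarity score per text query (as CLIP-style models typically do), the scores can be converted into probabilities with a softmax to pick the best-matching query. A pure-Kotlin sketch, independent of the SDK:

```kotlin
import kotlin.math.exp

// Turn raw per-query similarity scores into a probability distribution.
fun softmax(scores: List<Double>): List<Double> {
    val m = scores.maxOrNull() ?: return emptyList()
    val exps = scores.map { exp(it - m) } // subtract max for numerical stability
    val sum = exps.sum()
    return exps.map { it / sum }
}

// Pair each text query with its probability and return the best match.
fun bestMatch(queries: List<String>, scores: List<Double>): Pair<String, Double> {
    require(queries.isNotEmpty() && queries.size == scores.size)
    val probs = softmax(scores)
    var best = 0
    for (i in probs.indices) if (probs[i] > probs[best]) best = i
    return queries[best] to probs[best]
}
```

For example, with queries `["a photo of a dog", "a photo of a cat"]` and scores `[2.0, 0.5]`, bestMatch returns the dog query with its softmax probability.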

Advanced Usage

Custom Model Configuration

// Configure model with custom parameters
val modelConfig = ModelConfig.Builder()
    .setModelName("clip")
    .setMaxInputSize(512)
    .setOptimizationLevel(OptimizationLevel.HIGH)
    .setCacheSize(100) // MB
    .build()

sdk.initializeModel(modelConfig) { success ->
    // Handle initialization result
}

Real-time Camera Processing

private fun setupCameraInference() {
    // Camera preview setup is omitted for brevity; the SDK drives frame
    // capture internally once camera inference starts.
    sdk.startCameraInference("clip") { bitmap, results ->
        runOnUiThread {
            // Update UI with real-time results
            displayCameraResults(bitmap, results)
        }
    }
}

Batch Processing

private fun processBatchImages() {
    val imageList = listOf(
        loadImageFromAssets("image1.jpg"),
        loadImageFromAssets("image2.jpg"),
        loadImageFromAssets("image3.jpg")
    ).filterNotNull()
    
    val textQueries = listOf("a photo of a dog", "a photo of a cat")
    
    sdk.runBatchInference("clip", imageList, textQueries) { batchResults ->
        runOnUiThread {
            batchResults.forEachIndexed { index, result ->
                Log.d("InferX", "Image $index result: $result")
            }
        }
    }
}
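
For large image sets, bound peak memory by splitting the list into fixed-size batches before handing each batch to runBatchInference. A small pure-Kotlin helper (the batch size of 8 is an arbitrary assumption — tune it to the device's memory budget):

```kotlin
// Split a list of items into fixed-size batches; the last batch may be smaller.
fun <T> intoBatches(items: List<T>, batchSize: Int = 8): List<List<T>> {
    require(batchSize > 0)
    return items.chunked(batchSize)
}
```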

Complete Example

Here’s a complete, end-to-end example that ties model download, initialization, and inference together:

class InferXDemoActivity : AppCompatActivity() {
    private lateinit var sdk: InferXSdk
    private lateinit var progressBar: ProgressBar
    private lateinit var resultTextView: TextView
    
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_demo)
        
        progressBar = findViewById(R.id.progressBar)
        resultTextView = findViewById(R.id.resultTextView)
        
        // Initialize SDK
        sdk = InferXSdk.getInstance(applicationContext)
        
        // Setup model
        setupModel()
    }
    
    private fun setupModel() {
        progressBar.visibility = View.VISIBLE
        
        // Download model with progress tracking
        sdk.downloadModel("clip") { progress ->
            runOnUiThread {
                Log.d("InferX", "Download progress: $progress%")
                progressBar.progress = progress
            }
        }
        
        // Initialize model
        sdk.initializeModel("clip") { success ->
            runOnUiThread {
                progressBar.visibility = View.GONE
                if (success) {
                    Log.d("InferX", "Model ready")
                    resultTextView.text = "Model ready! Tap to run inference."
                    setupInferenceButton()
                } else {
                    Log.e("InferX", "Model initialization failed")
                    resultTextView.text = "Failed to initialize model"
                }
            }
        }
    }
    
    private fun setupInferenceButton() {
        findViewById<Button>(R.id.inferenceButton).setOnClickListener {
            runInference()
        }
    }
    
    private fun runInference() {
        val bitmap = loadSampleImage() ?: return
        val queries = listOf("a photo of a dog", "a photo of a cat", "a landscape")
        
        sdk.runInference("clip", bitmap, queries) { results ->
            runOnUiThread {
                Log.d("InferX", "AI Response: $results")
                resultTextView.text = "Results: $results"
            }
        }
    }
    
    private fun loadSampleImage(): Bitmap? {
        // Load your sample image here
        return BitmapFactory.decodeResource(resources, R.drawable.sample_image)
    }
}

Performance Tips

  • Model Caching: Models are automatically cached after first download
  • Batch Processing: Use batch inference for multiple images to improve efficiency
  • Memory Management: Release models when not needed to free memory
  • Thread Management: SDK handles threading automatically, but avoid blocking the main thread
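
To act on the memory-management tip, you can tie model release to the hosting Activity's lifecycle. Note that releaseModel below is a hypothetical method name used purely for illustration — check the SDK's actual API for the real release mechanism:

```kotlin
// Hypothetical sketch — releaseModel is an assumed method name, not a
// confirmed SDK call; consult the SDK's API reference for the real one.
override fun onDestroy() {
    super.onDestroy()
    if (isFinishing) {
        sdk.releaseModel("clip") // free model memory when leaving for good
    }
}
```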

Supported Models

The Android SDK currently supports:

  • CLIP: Image-text matching and understanding
  • MobileNet: Efficient image classification
  • ResNet: High-accuracy image classification

More models coming soon!

Troubleshooting

Common Issues

  1. Model Download Fails: Check internet connection and storage space
  2. Inference Slow: Ensure device has sufficient RAM and consider reducing input size
  3. Crashes on Older Devices: Check minimum API level requirements

Debug Mode

Enable debug logging to get more detailed information:

InferXSdk.setDebugMode(true)

Next Steps
