Product Memo: Infinite Canvas Multi-Level Framework
Infinite Canvas is an open-source, local-first AI framework that automatically configures, fine-tunes, and runs state-of-the-art image and video diffusion models on Apple Silicon devices. By dynamically detecting available hardware, selecting the right model presets, and offering distributed training, Infinite Canvas provides a one-click solution for advanced AI use cases—all without sending private data to the cloud.
At its heart, Infinite Canvas solves a deceptively complex challenge: how do we transform everyday Apple computers into enterprise-grade AI infrastructure? The solution required rethinking three fundamental aspects of AI deployment: hardware detection and configuration, model selection and fine-tuning, and distributed execution across devices.
This memo outlines the core technology stack, our immediate use cases, and the longer-term vision for expanding private, on-device AI across multiple verticals.
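As a rough illustration of the hardware-aware preset selection mentioned above, the sketch below reads the machine's unified memory and maps it to a model preset. It is not Infinite Canvas's actual implementation: the thresholds and preset names are placeholders, and only the macOS `sysctl hw.memsize` call is a known quantity.

```python
# Hypothetical sketch: read total unified memory via macOS sysctl and map it
# to a model preset. Preset names and thresholds are placeholders, not
# Infinite Canvas internals.
import subprocess

def unified_memory_gb() -> float:
    # hw.memsize reports total physical (unified) memory in bytes on macOS
    return int(subprocess.check_output(["sysctl", "-n", "hw.memsize"])) / 1024**3

def pick_preset() -> str:
    mem = unified_memory_gb()
    if mem >= 64:
        return "flux1-dev-full"          # ample headroom for Flux1-dev
    if mem >= 32:
        return "sd3.5-large"             # Stable Diffusion 3.5 Large
    return "sd3.5-medium-quantized"      # quantized preset for smaller Macs

if __name__ == "__main__":
    print(f"{unified_memory_gb():.0f} GB unified memory -> preset {pick_preset()}")
```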
Infinite Canvas is built on top of Apple’s MLX framework, leveraging Metal Performance Shaders (MPS) for low-level GPU acceleration. Our choice of MLX is key to enabling distributed training and advanced generative AI on everyday Apple Silicon hardware. Below, we detail why we use MLX rather than MPS or Core ML alone, how it underpins our CLI tools for Stable Diffusion 3.5 and Flux1-dev, and the specific advantages for local-first creative workflows.
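For a sense of what running on MLX looks like at the lowest level, here is a minimal sketch using plain MLX (not Infinite Canvas code): arrays are allocated once in unified memory and operations are dispatched lazily to the Metal GPU, with no explicit host-to-device copies.

```python
# Minimal MLX sketch: arrays live in unified memory and kernels run on the
# Apple Silicon GPU; mx.eval() forces the lazy graph to execute.
import mlx.core as mx

mx.set_default_device(mx.gpu)        # dispatch operations to the Metal GPU

a = mx.random.normal((4096, 4096))   # allocated in unified memory
b = mx.random.normal((4096, 4096))
c = a @ b                            # no CPU <-> GPU transfer needed
mx.eval(c)                           # materialize the result on the GPU
```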
```mermaid
flowchart TB
    subgraph User["User / Creative Team"]
        UI["Prompts & Video Assets"]
    end

    subgraph CLI["Infinite Canvas Framework"]
        ICLI[CLI / macOS App]
        Config["MPI Configuration & Host Setup"]
    end

    subgraph Node1["Node 1: Mac Mini M4"]
        MLX1[MLX Runtime]
        MPI1[MPI Process Rank 0]
        GPU1[Apple Silicon GPU/ANE]
        MEM1[Unified Memory]
    end

    subgraph Node2["Node 2: MacBook M2 Max"]
        MLX2[MLX Runtime]
        MPI2[MPI Process Rank 1]
        GPU2[Apple Silicon GPU/ANE]
        MEM2[Unified Memory]
    end

    subgraph Distribution["MLX Distribution"]
        AllReduce[All-Reduce Operations]
        DataSplit[Data Parallel Processing]
    end

    subgraph Output["Model Processing"]
        Sync[Gradient Averaging]
        Final[Model Output]
    end

    User --> CLI
    CLI --> Config
    Config --> Node1
    Config --> Node2
    MLX1 --> GPU1
    MLX1 --> MEM1
    MLX2 --> GPU2
    MLX2 --> MEM2
    MPI1 <-->|TCP Links| MPI2
    Node1 --> DataSplit
    Node2 --> DataSplit
    DataSplit --> AllReduce
    AllReduce --> Sync
    Sync --> Final

    classDef primary fill:#e1eaff,stroke:#9eb9ff,stroke-width:2px
    classDef secondary fill:#fff5e1,stroke:#ffd591,stroke-width:2px
    classDef hardware fill:#ffe1e1,stroke:#ff9191,stroke-width:2px
    classDef output fill:#e1ffe9,stroke:#91ffb6,stroke-width:2px
    classDef mpi fill:#f1e1ff,stroke:#d591ff,stroke-width:2px

    class User,CLI primary
    class Node1,Node2 secondary
    class GPU1,GPU2,MEM1,MEM2 hardware
    class Output output
    class MPI1,MPI2,AllReduce mpi
```
Infinite Canvas leverages Apple's MLX framework to distribute AI workloads across multiple Mac computers, effectively creating a private AI cluster.
The system uses MPI (Message Passing Interface) to coordinate the machines: for example, a Mac Mini M4 and a MacBook M2 Max can split a workload, with each device processing its share of the data in parallel.
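Concretely, such a two-node run can be launched with standard MPI tooling, with each process discovering its rank through MLX's distributed API. The sketch below makes assumptions we flag explicitly: the `hosts.txt` file, the script name, and the simple round-robin work split are illustrative, not part of the Infinite Canvas CLI.

```python
# Hypothetical two-node sketch. Launched with standard OpenMPI, e.g.:
#   mpirun -np 2 --hostfile hosts.txt python distributed_job.py
# where hosts.txt lists the Mac Mini and the MacBook by hostname.
import mlx.core as mx

group = mx.distributed.init()            # joins the MPI world set up by mpirun
rank, size = group.rank(), group.size()  # e.g. rank 0 on the Mac Mini, rank 1 on the MacBook

# Simple data-parallel split: each machine takes every size-th work item.
jobs = [f"frame-{i}" for i in range(8)]
my_jobs = jobs[rank::size]
print(f"rank {rank}/{size} processes {my_jobs}")
```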
When processing large image or video generation tasks, each Mac handles its assigned portion, and the partial results (or gradients, during training) are combined through an "all-reduce" operation that averages and synchronizes them across all devices.
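The gradient-averaging step maps directly onto MLX's `all_sum` primitive. The helper below sketches that standard data-parallel pattern; the function name and surrounding structure are ours for illustration, not necessarily the framework's internal code.

```python
# Sketch of all-reduce gradient averaging: every node sums its local gradients
# with all peers and divides by the node count, so each replica applies the
# same averaged update.
import mlx.core as mx
from mlx.utils import tree_map

def average_gradients(grads, group=None):
    group = group or mx.distributed.init()
    n = group.size()
    # grads is the nested structure of arrays returned by nn.value_and_grad
    return tree_map(lambda g: mx.distributed.all_sum(g, group=group) / n, grads)
```

After averaging, every node hands the same gradients to its local optimizer, which keeps the model replicas identical from step to step.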
This distributed approach allows creative teams to harness the combined computational power of their existing Mac devices, eliminating the need for expensive cloud GPU resources while keeping all data and processing local.
Note: MLX keeps communication overhead low in multi-node Apple setups, accelerating generative tasks that would otherwise exhaust the unified memory of a single machine.