Anchor Critical Information to the Real World

Don't just look at your equipment: see inside it. Our 3D Object Tracking locks digital content to physical objects with sub-5 cm precision, letting you visualize live IoT data, maintenance history, and assembly guides exactly where they are needed. MultiSet turns your 3D model (GLB/CAD/scan) into a production-ready "object anchor" in minutes, with no markers or fiducials. Publish once, then deploy to Unity, WebXR, iOS, and Android with enterprise-grade privacy options (public cloud, private cloud, or fully on-prem). Create object anchors in WebXR with ModelSet.

What is Object Tracking?

Object Tracking recognizes complex physical equipment based on its 3D shape. This allows you to point a camera at a machine and instantly see digital repair guides, safety warnings, or live data "stuck" precisely to the specific bolts, levers, or screens you need to interact with—no QR codes required.

How it works

1. Upload 3D file: GLB or GLTF. Author at true scale (1 unit = 1 m) and align up = +Y.
2. Pick tracking type: 360° View or Side View.
3. Upload, process, publish: enable localization and tracking on phones, tablets, headsets, and robots.
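Before uploading, a .glb can be sanity-checked locally. The sketch below is illustrative and not part of any MultiSet SDK; it parses the 12-byte GLB container header defined by the glTF 2.0 specification and applies the 20 MB size guideline from the FAQ:

```python
import struct

MAX_SIZE = 20 * 1024 * 1024  # keep models under ~20 MB, per the guidance in the FAQ
GLB_MAGIC = 0x46546C67       # ASCII "glTF", per the glTF 2.0 binary container spec

def check_glb(data: bytes) -> list[str]:
    """Return a list of problems found in a GLB byte blob (empty list = looks OK)."""
    problems = []
    if len(data) > MAX_SIZE:
        problems.append(f"file is {len(data)} bytes; keep it under 20 MB")
    if len(data) < 12:
        problems.append("too short to contain a GLB header")
        return problems
    # GLB header: three little-endian uint32s (magic, version, total length)
    magic, version, length = struct.unpack_from("<III", data, 0)
    if magic != GLB_MAGIC:
        problems.append("missing glTF magic bytes; not a binary .glb")
    if version != 2:
        problems.append(f"glTF container version {version}; expected 2")
    if length != len(data):
        problems.append("declared length does not match actual file size")
    return problems
```

Running this before upload catches truncated exports and oversized files early, before processing time is spent.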
Why teams choose Object Tracking
Sub-5 cm accuracy. Rock-solid anchoring with low drift in busy, changing environments.
Scan-agnostic. Works with CAD, GLB/GLTF, or meshes from your reality-capture pipeline.
Fast to live. Create an Object, upload, process, and ship—typically in under 10 minutes.
Built to scale. Indoors or outdoors, multi-floor, any lighting.
Deploy anywhere. Unity, WebXR (browser), iOS, Android, Vision Pro, Quest.
Enterprise security. Choose cloud, private cloud, or on-prem/self-hosted.
Cross-platform SDKs. Unity • Native iOS & Android • WebXR • ROS bridges.
Scan-Agnostic Object Capture
Scan objects with Polycam, Scaniverse, or any scanner that outputs GLB or PLY with textured meshes, then upload to MultiSet. Vision Fusion normalizes them into object trackers on one high-fidelity map with a unified coordinate system, so teams can use the hardware they already own.
Diagram: PolyCam connected to six applications: handheld scanning, photogrammetry object scanning, assembly guidance, AR maintenance, interactive packaging, and tangible gaming.

Track Any Object In Camera FOV

Build real, shippable AR that recognizes the exact object and locks content precisely where it belongs, no markers required. With sub-5 cm anchors from our markerless object tracking SDK, experiences stay rock-solid across Unity, WebXR, iOS, and Android, so one project ships everywhere.
Maintenance & Inspection
Recognize machines and pin SOPs, torque specs, and checklists right where work happens.
Warehouse & Training
Identify tools/parts, guide picks, and standardize onboarding with in-place AR cues.
Field Service
Heads-up diagnostics and step-by-step repairs—works offline in on-prem mode.
Retail, Museums & Exhibits
Interactive product and artifact experiences—no QR codes or markers needed.

Technical Specs

Input formats: GLB/GLTF (CAD and scans convertible)
Tracking modes: 360° View, Side View
Precision: Sub-5 cm (typical in recommended conditions)
Environments: indoor/outdoor, dynamic lighting, multi-floor, large-scale
SDKs: Unity, iOS, Android, WebXR; ROS adapters available
Privacy: cloud / private cloud / on-prem; opt-ins
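To make the spec sheet concrete, here is a hypothetical configuration sketch. Every name in it (the dataclass, its fields, the mode strings) is illustrative only and not the actual MultiSet SDK API; it simply shows how the tracking modes and deployment options above combine:

```python
from dataclasses import dataclass

# Hypothetical names for illustration; not the actual MultiSet SDK API.
TRACKING_MODES = {"360", "side"}                  # 360° View (default) or Side View
DEPLOYMENTS = {"cloud", "private-cloud", "on-prem"}

@dataclass
class ObjectTrackerConfig:
    model_path: str                # .glb or .gltf, authored at 1 unit = 1 m
    tracking_mode: str = "360"     # "360" recognizes from any angle
    deployment: str = "cloud"      # cloud / private cloud / on-prem

    def validate(self) -> None:
        if not self.model_path.endswith((".glb", ".gltf")):
            raise ValueError("model must be .glb or .gltf")
        if self.tracking_mode not in TRACKING_MODES:
            raise ValueError(f"tracking_mode must be one of {sorted(TRACKING_MODES)}")
        if self.deployment not in DEPLOYMENTS:
            raise ValueError(f"deployment must be one of {sorted(DEPLOYMENTS)}")
```

The point of the sketch is the shape of the decision: one model, one tracking mode, one deployment target, validated before anything is uploaded.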

Create Object Anchors in WebXR With Object Tracking

Frequently asked questions
How do I convert CAD to GLB/GLTF for object tracking?

Export a polygon mesh from your CAD tool (or via a DCC like Blender), apply a PBR base color/diffuse texture, verify real‐world scale (1 unit = 1 m), set +Y up, and export to .glb/.gltf. Keep the file lean (ideally <20 MB), with clean UVs and a single material when possible.
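After exporting, a quick audit of the .gltf JSON can catch the most common issues from the checklist above. (glTF 2.0 is +Y-up by specification, so the up-axis is handled by the exporter.) This is an illustrative helper, not part of the MultiSet tooling:

```python
import json

def audit_gltf(gltf_json: str) -> list[str]:
    """Flag common export issues in a .gltf JSON document."""
    doc = json.loads(gltf_json)
    warnings = []
    if doc.get("asset", {}).get("version") != "2.0":
        warnings.append("not a glTF 2.0 asset")
    if len(doc.get("materials", [])) > 1:
        warnings.append("more than one material; a single material keeps the file lean")
    for i, node in enumerate(doc.get("nodes", [])):
        # True-scale authoring: node scale should be baked to 1 so 1 unit = 1 m
        if any(abs(s - 1.0) > 1e-6 for s in node.get("scale", [1, 1, 1])):
            warnings.append(f"node {i} has non-unit scale {node['scale']}; bake transforms")
    return warnings
```

An empty warning list doesn't guarantee a clean model, but it rules out the usual culprits (wrong glTF version, unbaked scale, material sprawl) before upload.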

Which file formats are supported for object tracking?

Export .glb or .gltf. (CAD and scans should be converted to polygonal GLB/GLTF before upload.)

What are the limitations on object and file sizes for tracking?

Ideal object sizes are 1 ft to 25 ft; keep the model file under 20 MB for smooth processing. Typical range is 0.3 m to 7.5 m on the longest dimension. Smaller objects may lack features; very large objects work best when the model captures distinctive local details.
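As an illustration (again, not MultiSet tooling), the longest dimension can be read straight from the POSITION accessors' min/max bounds in the glTF JSON, since models are authored at 1 unit = 1 m:

```python
import json

def longest_dimension_m(gltf_json: str) -> float:
    """Longest bounding-box edge in meters, from POSITION accessor min/max bounds."""
    doc = json.loads(gltf_json)
    accessors = doc.get("accessors", [])
    lo = [float("inf")] * 3
    hi = [float("-inf")] * 3
    for mesh in doc.get("meshes", []):
        for prim in mesh.get("primitives", []):
            idx = prim.get("attributes", {}).get("POSITION")
            if idx is None:
                continue
            acc = accessors[idx]  # glTF requires min/max on POSITION accessors
            lo = [min(a, b) for a, b in zip(lo, acc["min"])]
            hi = [max(a, b) for a, b in zip(hi, acc["max"])]
    return max(h - l for h, l in zip(hi, lo))

def within_tracking_range(longest: float) -> bool:
    # 0.3 m to 7.5 m on the longest dimension, per the size guidance above
    return 0.3 <= longest <= 7.5
```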

Can Object Tracking track moving or deformable objects?

Today Object Tracking is designed for static, rigid objects during the AR session. The user/camera can move freely around the object, but the object itself must stay stationary; moving or deformable objects are not yet tracked with the same level of robustness as MultiSet VPS.

What’s the difference between 360° View and Side View?

360° View (default): recognize from any angle (top/bottom/sides). Side View: optimized when the object is viewed mostly from the sides; usually processes a bit faster.

How long does processing take?

Typically under 10 minutes, depending on model complexity. You’ll see status change to Active once ready.

Do I need to ship the 3D mesh in the final app?

In Unity, the reference mesh helps authoring and content placement; remove it from the final build (it’s not used during runtime localization).