
Scan-Agnostic Visual Positioning System for Enterprise Scale

MultiSet’s Visual Positioning System (VPS) locks phones, wearables, robots and drones to a single, centimetre-accurate coordinate frame - indoors or outdoors, across arbitrarily large areas, without markers, beacons or monolithic re-scans.


How It Works

Scan

Capture with any modality - iOS Pro, Matterport, Leica, NavVis, Faro, XGrid, and more.

Map

Vision Fusion normalizes scale, lighting and noise into a unified, compression-optimised map.

Localize

Devices query the map; hierarchical indexing returns a centimetre-true pose in seconds.
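The scan → map → localize flow above can be sketched in code. This is an illustrative Python sketch only: the payload fields (`mapId`, `hintPosition`) and response shape are assumptions for demonstration, not the actual MultiSet API.

```python
# Hypothetical sketch of a localization query against a published map.
# Field names and response shape are illustrative assumptions,
# not the real MultiSet API contract.

def build_pose_query(map_id: str, image_bytes: bytes, hint_position=None) -> dict:
    """Assemble a localization query for one camera frame."""
    query = {"mapId": map_id, "image": image_bytes}
    if hint_position is not None:
        # An optional GPS/UWB hint can narrow the hierarchical index search.
        query["hintPosition"] = hint_position
    return query

def parse_pose_response(response: dict) -> tuple:
    """Extract position (metres) and rotation (quaternion) from a response."""
    return tuple(response["position"]), tuple(response["rotation"])

query = build_pose_query("demo-map", b"\x00", hint_position=(51.5, -0.12))
position, rotation = parse_pose_response(
    {"position": [1.0, 0.0, 2.5], "rotation": [0.0, 0.0, 0.0, 1.0]}
)
```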

Sub-10 cm & 2° Accuracy

  • Indoor + Outdoor

  • Dynamic, changing environments

  • Multi-Floor / Multi-Level

  • Low Light + Direct Glare

  • Interference Handling - people, cars & equipment

  • Transition Handling

  • Low Drift
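The sub-10 cm / 2° envelope above can be made concrete with a small check that compares an estimated pose against ground truth. The thresholds come from the spec; the pose representation (position in metres, rotation as an `(x, y, z, w)` quaternion) is an assumption for illustration.

```python
import math

# Illustrative accuracy check against the sub-10 cm / 2 deg spec.
# Pose representation is an assumption: position in metres,
# rotation as a unit quaternion (x, y, z, w).

POSITION_TOL_M = 0.10   # sub-10 cm
ROTATION_TOL_DEG = 2.0  # 2 degrees

def position_error_m(est, true):
    """Euclidean distance between estimated and true position."""
    return math.dist(est, true)

def rotation_error_deg(q_est, q_true):
    """Angle between two unit quaternions, in degrees."""
    dot = abs(sum(a * b for a, b in zip(q_est, q_true)))
    return math.degrees(2.0 * math.acos(min(1.0, dot)))

def within_spec(est_pos, true_pos, est_q, true_q):
    return (position_error_m(est_pos, true_pos) <= POSITION_TOL_M
            and rotation_error_deg(est_q, true_q) <= ROTATION_TOL_DEG)
```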

Scan-Agnostic Mapping

Feed LiDAR, point-cloud or textured meshes to MultiSet; Vision Fusion normalizes them into one high-fidelity map on a unified coordinate system so teams can use the hardware they already own.
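Geometrically, "normalizing into a unified coordinate system" means applying a per-scan similarity transform (scale + rotation + translation) so every source lands in one frame. The sketch below illustrates that idea with made-up values; Vision Fusion's actual pipeline is proprietary.

```python
import math

# Illustrative only: rescale and place one scan into a shared frame
# with a 4x4 similarity transform. Values are made up for demonstration.

def make_similarity(scale, yaw_rad, tx, ty, tz):
    """Row-major 4x4 similarity transform (rotation about the z axis)."""
    c, s = math.cos(yaw_rad), math.sin(yaw_rad)
    return [
        [scale * c, -scale * s, 0.0,   tx],
        [scale * s,  scale * c, 0.0,   ty],
        [0.0,        0.0,       scale, tz],
        [0.0,        0.0,       0.0,   1.0],
    ]

def apply_transform(T, point):
    """Apply a 4x4 transform to a 3D point."""
    x, y, z = point
    v = (x, y, z, 1.0)
    return tuple(sum(T[r][c] * v[c] for c in range(4)) for r in range(3))

# Example: a scan captured in feet is rescaled to metres and shifted 10 m.
feet_to_m = 0.3048
T = make_similarity(feet_to_m, 0.0, 10.0, 0.0, 0.0)
p = apply_transform(T, (10.0, 0.0, 0.0))  # a point 10 ft from the scan origin
```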


MapSet: Infinite Venue Scale

MapSet fuses every LiDAR, photogrammetry and 360° scan into one continuous coordinate system so operators navigate entire airports or warehouses without “map islands,” devices localise in < 52 ms, and analytics stay centimetre-true across expansive venues.
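The "no map islands" idea can be sketched as each member map carrying a rigid offset into the shared MapSet frame, so a pose local to any one map is expressible in venue coordinates. The map names and offsets below are illustrative assumptions.

```python
# Illustrative sketch: per-map offsets into one shared MapSet frame.
# Map names and offset values are made up for demonstration.

MAP_OFFSETS = {
    "terminal-a": (0.0, 0.0, 0.0),
    "terminal-b": (120.0, 0.0, 0.0),  # terminal B starts 120 m east
    "mezzanine":  (120.0, 0.0, 4.5),  # one level up
}

def to_venue_frame(map_id, local_pos):
    """Convert a position local to one map into shared venue coordinates."""
    ox, oy, oz = MAP_OFFSETS[map_id]
    x, y, z = local_pos
    return (x + ox, y + oy, z + oz)
```

Because every map resolves into the same frame, a device walking from terminal A into terminal B never sees a coordinate discontinuity.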

SDKs & Open APIs

Unity, native iOS & Android, Meta Quest, WebXR and ROS 2 SDKs share one binary. Sample scenes, a CLI map-uploader, a REST /vps/pose endpoint and GraphQL map management integrate with any CI/CD pipeline. Zero-install endpoints let WebXR and iOS App Clips launch AR from a QR code.

Unity SDK

Drag-and-drop package, C# helpers and sample scenes - localise a Unity app in under 10 min.

WebXR Kit

Zero-install browser runtime; launch spatial experiences from a QR code in < 1 second.

Native iOS SDK

Swift Package Manager build; leverages LiDAR when present for sub-cm accuracy.

Quest SDK

Native Oculus package with passthrough samples - perfect for industrial AR on the Meta Quest series.

Native Android SDK

Lightweight AAR with Kotlin examples; 52 ms median pose on Snapdragon 8 Gen 3.

Custom Viewport

Bring your device or robot—we’ll build the viewport and provide a tailored endpoint.

Visual Positioning Systems For Augmented Reality

AR Indoor Navigation

MultiSet VPS (Visual Positioning System) enables precise indoor navigation through augmented reality overlays, helping users efficiently navigate complex industrial environments. The system displays intuitive directional indicators, waypoints, and location markers directly in the user's field of view, guiding them to specific destinations like the QA department or automated guided vehicle zones. This real-time AR visualization enhances workplace efficiency and reduces the time spent locating destinations.


Visualizing BIM Models in Construction Processes

MultiSet VPS (Visual Positioning System) enables accurate on-site visualization of BIM (Building Information Modeling) models, overlaying digital plans onto physical structures. This helps construction teams assess progress and detect discrepancies early.

Real-time AR Data overlay

MultiSet VPS (Visual Positioning System) allows field operators to access real-time machine and IoT data overlaid directly on equipment during maintenance and inspections. This real-time visualization enhances troubleshooting, reduces downtime, and ensures safety by providing critical insights at a glance. 

Frequently Asked Questions

  • What does MultiSet AI provide?
    MultiSet AI equips developers with everything they need to build large-scale, location-aware applications—3D mapping tools, a state-of-the-art Visual Positioning System (VPS) SDK, and a unified developer platform.
  • Can my scanned data stay in a private cloud?
    Yes. MultiSet offers on-premises deployments and offline SDKs, ensuring your scan data never leaves your infrastructure.
  • Is MultiSet AI's VPS technology based on platforms like Google Cloud Anchors or Apple World Map?
    No, MultiSet AI's VPS technology is built from the ground up, allowing it to scale to thousands of square feet. It is device-agnostic and platform-independent, providing a versatile and robust solution for various AR applications. Compatible with a wide range of hardware and software, our VPS supports multi-floor environments and integrates with existing scan data, offering a comprehensive solution for complex spatial mapping needs.
  • What are the steps to map a space using MultiSet AI?
    To map a space using MultiSet AI, you can use the MultiSet app on your iPhone Pro or iPad Pro to scan the environment. Alternatively, you can import an existing scan into the platform for further processing and integration. Our platform supports a wide range of devices and scan formats, ensuring flexibility and ease of use for developers. Additionally, our technology supports LiDAR mapping and map stitching, allowing for the creation of detailed 3D spatial maps that can be used for various AR applications.
  • Can I bring in third-party scans?
    Yes. We accept E57 files from providers such as Matterport, Leica, NavVis, XGrid, Faro, Polycam and more, and we also support Matterport MatterPak files.
  • How large can one map be before I need MapSet?
    A single map performs best up to ≈2,500 m². For larger footprints or multi-floor venues, split capture zones into logical sections and join them in a MapSet to preserve centimetre accuracy and fast look-ups. The MultiSet app can capture up to 5,000 sq ft (≈465 m²) in a single session; larger areas can be broken into multiple sessions and merged later on the developer platform. For imports, a single E57 file can be as large as 50,000 sq ft (≈4,650 m²), and multiple files can be merged.
  • Which scanning methods can I use to create a MultiSet map?
    MultiSet is scan-agnostic. Upload LiDAR point clouds, textured meshes or raw SLAM captures; Vision Fusion normalizes them into a single, compression-optimised map ready for VPS localization.
  • How much overlap should adjacent maps have?
    We recommend 15-20% visual overlap between neighbouring maps. This gives MapSet enough shared features to compute high-precision transforms and guarantee seamless hand-offs.
  • Can I update one area without re-mapping the whole venue?
    Yes. Re-scan just the affected zone, upload the new fragment, and MapSet automatically realigns it while the rest of the venue stays online - no downtime or full rebuild required.
  • How do I geo-reference a map for outdoor or mixed-reality use?
    Record the WGS-84 latitude, longitude, altitude and compass heading of your origin point, then enter those values in the project’s Geo Reference panel. Devices can then feed GPS or UWB HintPosition data for faster, more accurate localization.
  • Does MultiSet detect and correct drift over time?
    Continuous background validation checks feature consistency; if drift exceeds a 1 cm threshold, the system flags the sector for optional re-capture or automatic drift compensation.
  • What is the size of map files and can they be compressed?
    A typical indoor map averages 5–15 MB after Vision Fusion compression. For richer detail, toggle “High-Density.” For faster streaming to mobile devices, choose “Edge-Optimised” for files under 3 MB.
  • Can I export maps to other spatial tools?
    Yes. Maps and MapSets can be exported as OBJ or PLY with embedded transform metadata, letting you reuse geometry in BIM, game engines or digital-twin analytics platforms - no vendor lock-in.
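The geo-referencing step described in the FAQ above (a WGS-84 origin plus compass heading) can be sketched as a flat-earth conversion from local map coordinates to latitude/longitude. The axis conventions and the ENU small-area approximation below are assumptions for illustration, not the exact method MultiSet uses.

```python
import math

# Hedged sketch of geo-referencing: given a map origin in WGS-84 plus a
# compass heading, convert a local map position (x, y in metres) to
# latitude/longitude using a flat-earth (ENU) approximation, which is
# adequate over venue-scale distances. Axis conventions are assumed.

EARTH_RADIUS_M = 6_378_137.0  # WGS-84 semi-major axis

def local_to_wgs84(origin_lat, origin_lon, heading_deg, x_m, y_m):
    """Rotate local (x, y) by the origin heading, then offset lat/lon."""
    h = math.radians(heading_deg)
    east = x_m * math.cos(h) + y_m * math.sin(h)
    north = -x_m * math.sin(h) + y_m * math.cos(h)
    dlat = math.degrees(north / EARTH_RADIUS_M)
    dlon = math.degrees(east / (EARTH_RADIUS_M * math.cos(math.radians(origin_lat))))
    return origin_lat + dlat, origin_lon + dlon
```

With the origin geo-referenced this way, device GPS fixes can serve as the HintPosition data mentioned above to speed up localization.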

Contact MultiSet AI
