E57 to VPS: Turn Any Point Cloud into Centimeter-Accurate XR

Already have scans from Matterport, Leica, NavVis, Faro, or XGRIDs?

Upload your E57 files to MultiSet and get a VPS-ready map in under an hour. No rescanning. No vendor lock-in.

5 cm median localization accuracy. Any device. Any environment. Any scale.

One Upload Workflow. Five Scanner Ecosystems

MultiSet ingests structured E57 files with embedded panoramic images from the leading reality-capture hardware.
Upload your existing scans. No format conversion. No proprietary middleware. No rescanning.

Matterport
Supported devices: Pro3, Pro2
Processing software: Matterport Capture App → MyMatterport Cloud, MapFoundry for MatterPak

NavVis
Supported devices: VLX, M6
Processing software: NavVis IVION

Leica
Supported devices: RTC360, BLK360 G2, BLK2GO, BLK2FLY
Processing software: Cyclone REGISTER 360 PLUS

Faro
Supported devices: Focus Series, Orbis, Flash/Hybrid
Processing software: Faro SCENE

XGRIDs
Supported devices: L2 Pro, K1
Processing software: Lixel Studio

From E57 to Positioning & Tracking in Three Steps

Export
Export your scan as a structured .e57 with embedded panoramic images. Works with Cyclone, SCENE, IVION, Lixel Studio, or MyMatterport. Zip the file if required.
Upload
Log into the MultiSet Developer Portal. Select your scanner type. Drag and drop the .e57 file. The platform auto-detects the source format and begins processing. Maps are typically active within an hour.
Localize & Build
Query your map via REST API, Unity SDK, native iOS/Android, WebXR, or Meta Quest. Pin XR content with 5 cm median accuracy. Run shared sessions. Scale to campus-wide coverage with MapSet.
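The three steps above boil down to two HTTP calls: register an upload, then query camera frames against the processed map. The sketch below shows that shape using only the standard library. The endpoint paths, field names, and auth header are illustrative assumptions, not MultiSet's documented API; consult the Developer Portal for the real contract.

```python
# Sketch of the E57-to-VPS workflow over REST.
# NOTE: the host, endpoint paths, field names, and auth scheme here are
# HYPOTHETICAL placeholders, not MultiSet's actual API surface.
import json
import urllib.request

BASE_URL = "https://api.example-vps.com/v1"  # placeholder host

def build_upload_request(api_key: str, map_name: str, scanner: str,
                         e57_path: str) -> urllib.request.Request:
    """Step 2: register an E57 upload (metadata only in this sketch)."""
    payload = {"name": map_name, "scanner": scanner, "file": e57_path}
    return urllib.request.Request(
        f"{BASE_URL}/maps",
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
        method="POST",
    )

def build_localize_request(api_key: str, map_id: str,
                           image_b64: str) -> urllib.request.Request:
    """Step 3: query a camera frame against the processed map."""
    payload = {"mapId": map_id, "image": image_b64}
    return urllib.request.Request(
        f"{BASE_URL}/localize",
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
        method="POST",
    )

# Build (but do not send) the two calls.
upload = build_upload_request("KEY", "hq-floor-2", "leica-rtc360", "floor2.e57")
localize = build_localize_request("KEY", "map_123", "<base64 jpeg>")
print(upload.get_method(), upload.full_url)
print(localize.get_method(), localize.full_url)
```

In a real integration the localize call would run in a loop against live camera frames, with the returned pose handed to your rendering engine.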

Why Teams Choose MultiSet for E57-Based XR

Migrating from Azure Spatial Anchors (ASA)? Replace ASA SDK calls with MultiSet SDK calls and swap anchor creation/localization for map-based visual localization.
What stays: your app logic, UI, content pipeline, and most AR Foundation components.
| Capability | Traditional Approach | MultiSet E57 Ingest |
| --- | --- | --- |
| Scan input | Platform-specific capture only | Any E57: Matterport, Leica, NavVis, Faro, XGRIDs |
| Time to first AR map | Hours to days (rescan + process) | <1 hour from existing E57 upload |
| Coordinate preservation | New coordinate system per rescan | Preserves original scan coordinates |
| Hardware lock-in | Tied to one scanner ecosystem | Open E57 standard. Switch anytime. |
| Multi-floor coverage | Manual stitching or separate projects | Native MapSet stitching across floors |
| Deployment | Cloud-only or app-embedded | Cloud / VPC / on-prem / on-device |

Key Advantages of Choosing MultiSet as Your Azure Spatial Anchors Alternative

Scan-Agnostic Mapping
Use LiDAR scanners, 3D cameras, or iOS Pro devices; MultiSet ingests them all. Bring your Matterport, Leica, NavVis, or Polycam scans. Want to migrate from ASA datasets? Start fresh with MultiSet's Mapper app.

Private, On-Prem & On-Device Deployments
Security teams love our flexible options: public cloud, private cloud, edge server, or fully on-device. Perfect for compliance-driven industries replacing MS Azure hosting.

Lower Total Cost of Ownership
Modern microservices and pay-as-you-scale pricing cut the ASA engine replacement cost by up to 50%.

Future-Ready Roadmap
Object Anchors are arriving soon on MultiSet, taking you beyond a simple Azure Spatial Anchors replacement while preserving today's maps and APIs.
Frequently Asked Questions
What is a Visual Positioning System (VPS) and why do enterprises need it?

A VPS gives devices precise 6-DoF position and orientation in the real world, so AR content and robots can align exactly to physical assets indoors, outdoors, and at scale. Compared with GPS, Wi-Fi, beacons, or QR codes, VPS provides far higher accuracy and persistent spatial awareness, which is essential for navigation, training, inspection, and digital-twin overlays in complex facilities. VPS also enables multiplayer experiences, giving a fleet of devices a shared sense of space.
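To make the 6-DoF idea concrete: a pose is a 3D position plus an orientation (here a unit quaternion), and once a VPS returns it you can transform points between the device frame and the map frame. A minimal, library-free sketch of that math:

```python
# Applying a 6-DoF pose (position + unit quaternion) to a 3D point.
# Pure math sketch -- independent of any particular VPS SDK.
import math

def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    u = (x, y, z)
    t = tuple(2.0 * c for c in cross(u, v))
    uxt = cross(u, t)
    return tuple(v[i] + w * t[i] + uxt[i] for i in range(3))

def apply_pose(position, orientation, point):
    """Map a point from the device frame into the map frame: R*p + t."""
    r = quat_rotate(orientation, point)
    return tuple(r[i] + position[i] for i in range(3))

# Device at (1, 2, 0), rotated 90 degrees about the vertical (z) axis:
q90 = (math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))
print(apply_pose((1.0, 2.0, 0.0), q90, (1.0, 0.0, 0.0)))
# -> approximately (1.0, 3.0, 0.0): x-axis of the device maps to map-frame +y
```

Production SDKs hand you this as a transform matrix, but the operation underneath is exactly this rotate-then-translate.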

How do I integrate MultiSet AI's VPS into my applications?

Integrate MultiSet AI's VPS using our REST API, Unity SDK, or native SDKs for iOS and Android. This ensures seamless integration and precise localization in your AR projects. Our platform also supports on‐premises deployments and offline SDKs, keeping your data secure. Designed for various environments, our VPS technology handles everything from small rooms to large stadiums.

How does MultiSet AI's VPS handle different lighting conditions and environmental changes?

MultiSet AI’s neural networks are trained for diverse lighting conditions and dynamic environments. The VPS remains robust and accurate under typical lighting variations, minor physical changes, and the presence of people, ensuring reliable performance. Ideal for manufacturing, warehouse training, and other industrial settings, our VPS technology delivers centimeter‐level accuracy even in challenging environments.

How fast can we go from pilot to campus-wide coverage?

Teams typically stand up a first localized scene in minutes using samples, then expand by stitching scans into map sets for seamless indoor↔outdoor and multi-building continuity. This lets you scale predictably while keeping performance consistent across sites.
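MapSet's internals are not spelled out here, but a common pattern at campus scale is coarse-to-fine localization: use rough GPS to shortlist nearby maps, then run fine visual localization only against those candidates. A sketch under that assumption, with hypothetical map records:

```python
# Coarse map shortlisting for campus-scale coverage (illustrative pattern,
# not MultiSet's documented MapSet algorithm). Each map record carries a
# rough geo-anchor; fine visual localization then runs on the shortlist.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two WGS84 points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp/2)**2 + math.cos(p1) * math.cos(p2) * math.sin(dl/2)**2
    return 2.0 * r * math.asin(math.sqrt(a))

def candidate_maps(maps, lat, lon, radius_m=150.0):
    """Return IDs of maps anchored within radius_m, nearest first."""
    scored = [(haversine_m(lat, lon, m["lat"], m["lon"]), m["id"]) for m in maps]
    return [map_id for dist, map_id in sorted(scored) if dist <= radius_m]

campus = [
    {"id": "bldg-a-f1", "lat": 37.4220, "lon": -122.0841},
    {"id": "bldg-b-f1", "lat": 37.4230, "lon": -122.0850},
    {"id": "parking",   "lat": 37.4300, "lon": -122.0900},
]
print(candidate_maps(campus, 37.4221, -122.0842))
```

The shortlist keeps per-query work constant as coverage grows, which is what makes performance stay consistent across a multi-building site.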

What is a visual positioning system (VPS) and how does MultiSet differ?

A visual positioning system uses camera imagery to calculate a device's 6-DoF pose. MultiSet's VPS is scan-agnostic: it fuses LiDAR data, point clouds, and 3D scans, delivers centimeter-level accuracy, and scales to venues over 100,000 m² without markers or beacons.

What hardware do I need to use MultiSet VPS?

Any modern phone, tablet, AR headset, robot or drone equipped with a standard RGB camera. Devices with LiDAR (iPhone Pro, iPad Pro, Quest 3) gain extra accuracy but are not required.