MapAnything: Universal Feed-Forward Metric 3D Reconstruction

1 Meta Reality Labs          2 Carnegie Mellon University
arXiv 2025

TLDR: MapAnything is a simple, end-to-end trained transformer model that directly regresses the factored metric 3D geometry of a scene given various types of inputs (images, calibration, poses, or depth).

Abstract

We introduce MapAnything, a unified transformer-based feed-forward model that ingests one or more images along with optional geometric inputs such as camera intrinsics, poses, depth, or partial reconstructions, and then directly regresses the metric 3D scene geometry and cameras. MapAnything leverages a factored representation of multi-view scene geometry, i.e., a collection of depth maps, local ray maps, camera poses, and a metric scale factor that effectively upgrades local reconstructions into a globally consistent metric frame. Standardizing the supervision and training across diverse datasets, along with flexible input augmentation, enables MapAnything to address a broad range of 3D vision tasks in a single feed-forward pass, including uncalibrated structure-from-motion, calibrated multi-view stereo, monocular depth estimation, camera localization, depth completion, and more. We provide extensive experimental analyses and model ablations demonstrating that MapAnything outperforms or matches specialist feed-forward models while offering more efficient joint training behavior, thus paving the way toward a universal 3D reconstruction backbone.
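The factored representation can be made concrete with a short sketch: per-view depth maps and local ray maps define each view's local 3D points, camera-to-world poses place them in a shared frame, and a single scale factor upgrades the result to metric units. The NumPy function below is a minimal illustration under assumed array shapes; the function name fuse_factored_views and all argument names are ours for illustration, not the paper's API.

    import numpy as np

    def fuse_factored_views(depths, ray_maps, poses, metric_scale):
        """Combine per-view factored outputs into one metric point cloud.

        depths:       list of (H, W) per-view depth maps (depth along the ray)
        ray_maps:     list of (H, W, 3) unit ray directions in each camera frame
        poses:        list of (4, 4) camera-to-world transforms in a shared frame
        metric_scale: scalar that upgrades the up-to-scale cloud to metres

        All names and shapes are illustrative assumptions, not the released API.
        """
        world_points = []
        for depth, rays, pose in zip(depths, ray_maps, poses):
            # Local 3D points: each pixel's ray direction scaled by its depth.
            local = rays * depth[..., None]                 # (H, W, 3)
            # Move the points from the camera frame into the shared world frame.
            R, t = pose[:3, :3], pose[:3, 3]
            world = local.reshape(-1, 3) @ R.T + t          # (H*W, 3)
            world_points.append(world)
        # One global scale factor makes the fused reconstruction metric.
        return metric_scale * np.concatenate(world_points, axis=0)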


Qualitative Results

We visualize MapAnything results on a variety of image-only inputs. Try it out yourself in the 🤗 Hugging Face Demo!




Qualitative Comparison

We compare MapAnything with VGGT and π³ on various scenes using only images as input.



Auxiliary Geometric Inputs

MapAnything can leverage various geometric inputs including camera calibration, poses, and depth to improve 3D reconstruction quality across different tasks.
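As a concrete illustration of this flexibility, the sketch below shows one plausible way per-view inputs could be packaged, with each view supplying an image plus any subset of the optional geometric modalities. All field names and the model.infer call are assumptions for illustration, not the released interface.

    import numpy as np

    # Hypothetical per-view input dicts; field names are illustrative
    # assumptions, not the released MapAnything API.
    H, W = 480, 640
    views = [
        {   # fully annotated view
            "image": np.zeros((H, W, 3), dtype=np.uint8),  # RGB (required)
            "intrinsics": np.eye(3),                       # calibration (optional)
            "pose": np.eye(4),                             # camera-to-world (optional)
            "depth": np.zeros((H, W)),                     # partial depth (optional)
        },
        {   # image-only view: the model handles missing geometric inputs
            "image": np.zeros((H, W, 3), dtype=np.uint8),
        },
    ]
    # A single feed-forward pass would regress depth, ray maps, poses, and a
    # metric scale for every view, e.g. prediction = model.infer(views)
    # (method name assumed for illustration).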


BibTeX

@inproceedings{keetha2025mapanything,
  title       = {{MapAnything}: Universal Feed-Forward Metric {3D} Reconstruction},
  author      = {Nikhil Keetha and Norman M\"uller and Johannes Sch\"onberger and
                 Lorenzo Porzi and Yuchen Zhang and Tobias Fischer and Arno Knapitsch and
                 Duncan Zauss and Ethan Weber and Nelson Antunes and Jonathon Luiten and
                 Manuel Lopez-Antequera and Samuel Rota Bul\`o and Christian Richardt and
                 Deva Ramanan and Sebastian Scherer and Peter Kontschieder},
  booktitle   = {arXiv:XXXX.XXXXX},
  year        = {2025}
}

Acknowledgements

We thank Michael Zollhöfer for his initial involvement in project discussions. We thank Jeff Tan, Jianyuan Wang, Jay Karhade, Jensen Zhou, Yifei Liu, Shubham Tulsiani, Khiem Vuong, Yuheng Qiu, Shibo Zhao, Omar Alama, Andrea Simonelli, Corinne Stucker, Denis Rozumny, Bardienus Duisterhof, and Wenshan Wang for their insightful discussions and assistance with parts of the project. Lastly, we appreciate the support for compute infrastructure from Julio Gallegos, Tahaa Karim, and Ali Ganjei.