SAIL-Recon: Large SfM by Augmenting Scene Regression with Localization

arXiv

Junyuan Deng1,2*, Heng Li1*, Tao Xie2,3, Weiqiang Ren2, Qian Zhang2, Ping Tan1, Xiaoyang Guo2
1Hong Kong University of Science and Technology, 2Horizon Robotics, 3Zhejiang University
*Equal Contribution Corresponding Author
Framework

SAIL-Recon Pipeline

Abstract

Scene regression methods, such as VGGT, solve the Structure-from-Motion (SfM) problem by directly regressing camera poses and 3D scene structures from input images. They demonstrate impressive performance under extreme viewpoint changes. However, these methods struggle to handle a large number of input images. To address this problem, we introduce SAIL-Recon, a feed-forward Transformer for large-scale SfM that augments the scene regression network with visual localization capabilities. Specifically, our method first computes a neural scene representation from a subset of anchor images. The regression network is then fine-tuned to reconstruct all input images conditioned on this neural scene representation. Comprehensive experiments show that our method not only scales efficiently to large-scale scenes, but also achieves state-of-the-art results on both camera pose estimation and novel view synthesis benchmarks, including TUM-RGBD, CO3Dv2, and Tanks & Temples.
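The two-stage pipeline described above can be sketched as follows. This is a conceptual illustration only: the function names, anchor-selection stride, and data structures are hypothetical placeholders, not the actual SAIL-Recon API or network.

```python
# Hypothetical sketch of the anchor-then-localize pipeline from the abstract.
# All names and shapes here are illustrative stand-ins, not the real method.

def select_anchors(images, stride=4):
    """Pick a sparse subset of anchor frames from the input sequence.
    (A fixed stride is an assumption for illustration.)"""
    return images[::stride]

def build_scene_representation(anchors):
    """Stand-in for the regression network pass that encodes anchor
    images into a neural scene representation."""
    return {"anchors": anchors}

def localize_all(images, scene_rep):
    """Stand-in for the fine-tuned regression network that reconstructs
    every input image (pose + structure) conditioned on the scene
    representation, instead of re-processing all frames jointly."""
    return [{"image": im, "pose_id": i} for i, im in enumerate(images)]

images = [f"frame_{i:04d}.jpg" for i in range(16)]
scene = build_scene_representation(select_anchors(images))
poses = localize_all(images, scene)
print(len(scene["anchors"]), len(poses))  # 4 anchors, 16 posed frames
```

The key point of this structure is that the expensive joint regression runs only on the small anchor subset, while every remaining frame is handled by the cheaper conditioned localization pass, which is what lets the method scale to large image collections.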

Reconstruction Demo

Reconstruction Visualization


Results


Novel View Synthesis


Tanks & Temples and 7-Scenes

BibTeX

@article{dengli2025sailrecon,
  author  = {Deng, Junyuan and Li, Heng and Xie, Tao and Ren, Weiqiang and Zhang, Qian and Tan, Ping and Guo, Xiaoyang},
  title   = {SAIL-Recon: Large SfM by Augmenting Scene Regression with Localization},
  journal = {arXiv preprint arXiv:2508.17972},
  year    = {2025},
}