Scene regression methods, such as VGGT, solve the Structure-from-Motion (SfM) problem by directly regressing camera poses and 3D scene structure from input images. They demonstrate impressive performance on images with extreme viewpoint changes. However, these methods struggle to scale to a large number of input images. To address this problem, we introduce SAIL-Recon, a feed-forward Transformer for large-scale SfM that augments the scene regression network with visual localization capabilities. Specifically, our method first computes a neural scene representation from a subset of anchor images. The regression network is then fine-tuned to reconstruct all input images conditioned on this neural scene representation. Comprehensive experiments show that our method not only scales efficiently to large scenes, but also achieves state-of-the-art results on both camera pose estimation and novel view synthesis benchmarks, including TUM-RGBD, CO3Dv2, and Tanks & Temples.
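The abstract describes a two-stage pipeline: build a neural scene representation from a few anchor images, then regress poses and geometry for all images conditioned on it. Below is a minimal PyTorch-style sketch of that flow. All class names, method names, the token-based conditioning, the uniform anchor sampling, and the chunk size are illustrative assumptions, not the released SAIL-Recon API.

```python
# Hypothetical sketch of the two-stage anchor-then-localize pipeline.
# Names and interfaces are assumptions for illustration only.
import torch
import torch.nn as nn


class SceneRegressorWithLocalization(nn.Module):
    """A VGGT-style regressor augmented with a neural scene
    representation computed from anchor images (hypothetical)."""

    def __init__(self, backbone: nn.Module, scene_encoder: nn.Module):
        super().__init__()
        self.backbone = backbone            # regresses poses + 3D structure
        self.scene_encoder = scene_encoder  # builds the anchor representation

    @torch.no_grad()
    def build_scene_representation(self, anchors: torch.Tensor) -> torch.Tensor:
        # Stage 1: encode a small subset of anchor images into a compact
        # neural scene representation (assumed here to be a token tensor).
        return self.scene_encoder(anchors)

    def forward(self, images: torch.Tensor, scene_tokens: torch.Tensor):
        # Stage 2: regress camera poses and scene structure for a batch of
        # images, conditioned on the anchor-derived scene tokens. Passing the
        # tokens as a `context` argument is an assumption for illustration.
        return self.backbone(images, context=scene_tokens)


def reconstruct(model: SceneRegressorWithLocalization,
                all_images: torch.Tensor, num_anchors: int = 16):
    """Process a large image collection in fixed-size chunks so that
    memory stays bounded regardless of the number of input images."""
    # Uniform anchor selection is a placeholder; the paper does not
    # specify the sampling strategy in the abstract.
    idx = torch.linspace(0, len(all_images) - 1, num_anchors).long()
    scene_tokens = model.build_scene_representation(all_images[idx])

    poses, points = [], []
    for chunk in all_images.split(32):  # chunk size chosen arbitrarily
        pose, pts = model(chunk, scene_tokens)
        poses.append(pose)
        points.append(pts)
    return torch.cat(poses), torch.cat(points)
```

Because every chunk is localized against the same fixed scene tokens rather than against all other images jointly, the cost grows linearly with the number of input images, which is what lets the approach scale beyond what pure scene regression handles.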
@inproceedings{dengli2025sailrecon,
  author    = {Deng, Junyuan and Li, Heng and Xie, Tao and Ren, Weiqiang and Tan, Ping and Guo, Xiaoyang},
  title     = {SAIL-Recon: Large SfM by Augmenting Scene Regression with Localization},
  booktitle = {arXiv preprint arXiv:2508.17972},
  year      = {2025},
}