Predicting Camera Pose from Perspective Descriptions for Spatial Reasoning

Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the original arXiv source.

Multi-image spatial reasoning remains challenging for current multimodal large language models (MLLMs). While single-view perception is inherently 2D, reasoning over multiple views requires building a coherent scene understanding across viewpoints. In particular, we study perspective taking, where a model must build a coherent 3D understanding from multi-view observations and use it to reason from a new, language-specified viewpoint. We introduce CAMCUE, a pose-aware multi-image framework that uses camera pose as an explicit geometric anchor for cross-view fusion and novel-view reasoning. CAMCUE injects per-view pose into visual tokens, grounds natural-language viewpoint descriptions to a target camera pose, and synthesizes a pose-conditioned imagined target view to support answering. To support this setting, we curate CAMCUE-DATA with 27,668 training and 508 test instances pairing multi-view images and poses with diverse target-viewpoint descriptions and perspective-shift questions. We also include human-annotated viewpoint descriptions in the test split to evaluate generalization to human language. CAMCUE improves overall accuracy by 9.06% and predicts target poses from natural-language viewpoint descriptions with over 90% rotation accuracy within 20° and translation accuracy within a 0.5 error threshold. This direct grounding avoids expensive test-time search-and-match, reducing inference time from 256.6s to 1.45s per example and enabling fast, interactive use in real-world scenarios.
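The abstract reports pose accuracy as the fraction of predictions with rotation error within 20° and translation error within 0.5. The paper does not spell out the metric computation, but a standard formulation uses the geodesic distance between rotation matrices and the Euclidean distance between translations; a minimal sketch under that assumption (function names are illustrative, not from the paper):

```python
import numpy as np

def rotation_error_deg(R_pred, R_gt):
    """Geodesic distance between two 3x3 rotation matrices, in degrees."""
    cos = (np.trace(R_pred.T @ R_gt) - 1.0) / 2.0
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def pose_within_threshold(R_pred, t_pred, R_gt, t_gt,
                          rot_thresh_deg=20.0, trans_thresh=0.5):
    """True if the predicted pose meets both error thresholds."""
    rot_ok = rotation_error_deg(R_pred, R_gt) <= rot_thresh_deg
    trans_ok = np.linalg.norm(t_pred - t_gt) <= trans_thresh
    return rot_ok and trans_ok

# Example: a 10-degree rotation about the z-axis and a small translation offset
theta = np.radians(10.0)
R_pred = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
print(pose_within_threshold(R_pred, np.array([0.1, 0.0, 0.2]),
                            np.eye(3), np.zeros(3)))
# prints True: 10 degrees and ~0.22 translation error are within thresholds
```

Averaging `pose_within_threshold` over a test set yields the reported accuracy-at-threshold numbers.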


💡 Research Summary

The paper introduces CAMCUE, a pose‑aware multimodal large language model (MLLM) framework designed to tackle perspective‑shift spatial reasoning. In this setting, a model receives several contextual images, each annotated with its camera extrinsic and intrinsic parameters, together with a natural‑language description of a target viewpoint and a question that must be answered from that viewpoint. Existing MLLMs either ignore the geometric relationship between views or rely on expensive test‑time search over candidate poses, leading to poor performance and high latency.

CAMCUE addresses these issues through three key components. First, a Plücker encoder converts each image's camera pose (extrinsics Cᵢ and intrinsics Kᵢ) into a dense ray map Rᵢ that is pixel‑aligned with the image. This map is tokenized in the same patch‑wise fashion as the visual backbone, producing pose tokens Zᵢ that share the spatial layout of the visual tokens Xᵢ. A lightweight MLP then fuses the two token streams patch‑by‑patch (X̃ᵢ = Xᵢ + W(Zᵢ)), so each visual token carries explicit pose information.
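The pose encoding above can be sketched in a few lines. Plücker coordinates represent each pixel's viewing ray by its unit direction d and moment o × d (o being the camera center), giving a 6-channel, pixel-aligned map. The sketch below assumes world-to-camera extrinsics and substitutes a random linear projection for the paper's learned MLP; all function names are illustrative:

```python
import numpy as np

def plucker_ray_map(K, R_wc, t_wc, H, W):
    """6-channel Plucker ray map: per-pixel unit direction d and moment o x d.
    K: 3x3 intrinsics; (R_wc, t_wc): world-to-camera extrinsics."""
    o = -R_wc.T @ t_wc                                   # camera center in world frame
    u, v = np.meshgrid(np.arange(W) + 0.5, np.arange(H) + 0.5)
    pix = np.stack([u, v, np.ones_like(u)], axis=-1)     # homogeneous pixel grid (H, W, 3)
    dirs_cam = pix @ np.linalg.inv(K).T                  # back-project to camera rays
    dirs_world = dirs_cam @ R_wc                         # rotate rays into world frame
    d = dirs_world / np.linalg.norm(dirs_world, axis=-1, keepdims=True)
    m = np.cross(np.broadcast_to(o, d.shape), d)         # moment o x d
    return np.concatenate([d, m], axis=-1)               # (H, W, 6)

def fuse_pose_tokens(X, ray_map, patch=14, W_proj=None):
    """Patchify the ray map into pose tokens Z and add their projection to the
    visual tokens X, i.e. X_tilde = X + W(Z). X: (N, D); H, W divisible by patch."""
    H, Wd, C = ray_map.shape
    Z = (ray_map.reshape(H // patch, patch, Wd // patch, patch, C)
                .transpose(0, 2, 1, 3, 4)
                .reshape(-1, patch * patch * C))          # one pose token per patch
    if W_proj is None:                                    # stand-in for the learned MLP
        rng = np.random.default_rng(0)
        W_proj = rng.standard_normal((Z.shape[1], X.shape[1])) * 0.01
    return X + Z @ W_proj
```

Because the ray map shares the image's pixel grid, its patch tokens align one-to-one with the visual tokens, which is what makes the simple additive fusion well-defined.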

