Abstract
We introduce MEt3R, a metric for multi-view consistency in generated images.
Large-scale generative models for multi-view image generation are rapidly
advancing the field of 3D inference from sparse observations. However, due to
the nature of generative modeling, traditional reconstruction metrics are not
suitable for measuring the quality of generated outputs, and metrics that are
independent of the sampling procedure are urgently needed. In this work, we
specifically address the aspect of consistency between generated multi-view
images, which can be evaluated independently of the specific scene. Our
approach uses DUSt3R to obtain dense 3D reconstructions from image pairs in a
feed-forward manner, which are used to warp image contents from one view into
the other. Then, feature maps of these images are compared to obtain a
similarity score that is invariant to view-dependent effects. Using MEt3R, we
evaluate the consistency of a large set of previous methods for novel view and
video generation, including our open, multi-view latent diffusion model.
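For intuition, the following is a minimal sketch, in PyTorch, of the pipeline the abstract describes; it is not the authors' implementation. Per-pixel 3D pointmaps for an image pair, of the kind a feed-forward reconstructor such as DUSt3R produces (both expressed in the first camera's frame), are used to project one view's features into the other, and the overlap is scored by cosine similarity. The pointmaps, the intrinsics K_a, the feature maps, and the helpers warp_features / consistency_score are all hypothetical stand-ins; DUSt3R inference and the feature backbone (e.g., DINO) are abstracted away as random inputs.

import torch
import torch.nn.functional as F

def warp_features(points_b, feats_b, K_a, H, W):
    # points_b: (H, W, 3) pointmap of view B, expressed in view A's camera frame
    # feats_b:  (C, H, W) feature map of view B
    # K_a:      (3, 3) pinhole intrinsics of view A
    pts = points_b.reshape(-1, 3)                        # (N, 3)
    uvw = pts @ K_a.T                                    # homogeneous pixel coordinates
    z = uvw[:, 2].clamp(min=1e-6)
    u = (uvw[:, 0] / z).round().long()                   # column index in view A
    v = (uvw[:, 1] / z).round().long()                   # row index in view A
    valid = (u >= 0) & (u < W) & (v >= 0) & (v < H) & (uvw[:, 2] > 0)

    C = feats_b.shape[0]
    warped = torch.zeros(C, H, W)
    mask = torch.zeros(H, W, dtype=torch.bool)
    src = feats_b.reshape(C, -1)[:, valid]               # features that land inside view A
    warped[:, v[valid], u[valid]] = src                  # nearest-neighbor splat
    mask[v[valid], u[valid]] = True
    return warped, mask

def consistency_score(feats_a, warped_b, mask):
    # Mean cosine similarity over pixels covered by the warp; higher = more consistent.
    sim = F.cosine_similarity(feats_a, warped_b, dim=0)  # (H, W)
    return sim[mask].mean()

# Toy usage with random stand-ins for DUSt3R pointmaps and backbone features.
H, W, C = 64, 64, 32
K_a = torch.tensor([[50.0, 0.0, W / 2], [0.0, 50.0, H / 2], [0.0, 0.0, 1.0]])
points_b = torch.randn(H, W, 3) + torch.tensor([0.0, 0.0, 5.0])  # points in front of camera A
feats_a = torch.randn(C, H, W)
feats_b = torch.randn(C, H, W)
warped_b, mask = warp_features(points_b, feats_b, K_a, H, W)
print("consistency:", consistency_score(feats_a, warped_b, mask).item())

Comparing deep feature maps after warping, rather than raw pixels, is what makes the score tolerant to view-dependent effects such as shading; a distance-style metric can be obtained as one minus the mean similarity.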
BibTeX
@online{Asim_2501.06336,
  TITLE      = {{MEt3R}: {M}easuring Multi-View Consistency in Generated Images},
  AUTHOR     = {Asim, Mohammad and Wewer, Christopher and Wimmer, Thomas and Schiele, Bernt and Lenssen, Jan Eric},
  LANGUAGE   = {eng},
  URL        = {https://arxiv.org/abs/2501.06336},
  EPRINT     = {2501.06336},
  EPRINTTYPE = {arXiv},
  YEAR       = {2025},
}
Endnote
%0 Report
%A Asim, Mohammad
%A Wewer, Christopher
%A Wimmer, Thomas
%A Schiele, Bernt
%A Lenssen, Jan Eric
%+ Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society
%+ Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society
%+ Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society
%+ Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society
%+ Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society
%T MEt3R: Measuring Multi-View Consistency in Generated Images
%G eng
%U http://hdl.handle.net/21.11116/0000-0010-7934-C
%D 2025
%K Computer Science, Computer Vision and Pattern Recognition, cs.CV; Computer Science, Learning, cs.LG; eess.IV