Third Workshop for Learning 3D with Multi-View Supervision (3DMV)

at CVPR 2026




Call for papers:   TBA

Submission Deadline:   TBA

Workshop Day:   TBA, 2026

Location:   TBA


The third 3DMV workshop at CVPR 2026 explores the rapidly expanding frontier of 3D and 4D research from multi-view data. Building on the success of the first and second editions that focused on 3D and object generation, 3DMV 2026 highlights dynamic multi-view datasets, generative 4D models, and the emerging paradigm of world models—systems that learn to predict and simulate evolving 3D scenes with spatial and temporal consistency. We also explore their integration with frontier techniques such as LLMs and video diffusion. A special emphasis is placed on emerging applications in robotics and medical AI, where 3D/4D world models enable robust generalization. 3DMV 2026 aims to unify 2D, 3D, and 4D research communities and foster emerging applications toward scalable, cross-domain multi-view learning. The detailed topics covered in the workshop include the following:

  • Multi-View for 3D Understanding
  • Deep Multi-View Stereo
  • Multi-View for 3D Generation and Novel View Synthesis
  • Dynamic Multi-View Datasets and 4D Generative Models
  • LLMs and World Models for 3D/4D
  • Video Diffusion for Multi-View Generation
  • Robotics Applications with Multi-View 3D/4D
  • Medical AI with Multi-View 3D/4D
Submission Timeline

  • Paper Submission start: TBA
  • Paper submission deadline: TBA
  • Review period: TBA
  • Decision to authors: TBA
  • Camera-ready papers: TBA
Call for Papers

    We are soliciting papers that use multi-view deep learning to address problems in 3D/4D understanding and generation, including but not limited to the following topics:

  • Bird's-Eye View for 3D Object Detection
  • Multi-View Fusion for 3D Object Detection
  • Indoor/Outdoor Scene Segmentation
  • 3D/4D Diffusion Models for Generation
  • Video Diffusion for Multi-View Synthesis
  • 4D Understanding, Generation, and World Models
  • LLMs and Foundation Models for 3D/4D
  • Language + 3D/4D
  • Medical 3D Segmentation and Analysis
  • Robotics with Multi-View 3D/4D Perception
  • 3D Shape Generation and Reconstruction
  • Deep Multi-View Stereo
  • Inverse Graphics from Multi-View Images
  • Indoor/Outdoor Scene Generation and Reconstruction
  • Volumetric Multi-View Representations for 3D Generation and Novel View Synthesis
  • NeRFs and Gaussian Splatting
  • 3D Shape Classification and Retrieval
  • Vision for XR, AR, and VR
Paper Submission Guidelines

  • We accept both archival and non-archival paper submissions.
  • Archival submissions should be at most 8 pages (excluding references) on the aforementioned and related topics.
  • Non-archival submissions can be works previously published at major venues (within the last two years or at CVPR 2026) or new works (also at most 8 pages).
  • Accepted archival papers will be included in the CVPR 2026 proceedings; accepted non-archival papers will not.
  • Submitted manuscripts that have not been published previously should follow the CVPR 2026 paper template.
  • All submissions (except previously published works) will be peer-reviewed under a double-blind policy; authors should not include their names in submissions.
  • PDFs need to be submitted online through the link.
  • Authors of accepted papers will be notified to prepare camera-ready posters for upload according to the schedule above.
  • Every accepted paper will have the opportunity for a poster presentation at the workshop.
  • Some accepted papers will be selected for oral presentations at the workshop.
  • A Best Poster Award with a sponsored cash prize will be announced during the workshop.
Schedule

    To be announced.

    Speakers

    To be announced.

    Organizers

    Abdullah Hamdi

    University of Oxford

    Guocheng Qian

    Snap Research

    Jan Held

    University of Liège

    Contact: abdullah.hamdi@kaust.edu.sa

    CVPR 2026 Workshop ©2026