How to create VR180 video with an iPhone (or any other phone), using photorealistic 3D capture


This guide shows how to create VR180 videos by flying a virtual camera through a photorealistic 3D model, which is built from a short video captured with a phone (other video sources can potentially work as well). VR180 is an immersive stereoscopic 3D video format that works on Quest and other VR devices. Volurama is an advanced tool for creating neural radiance fields (NeRFs), which are photorealistic 3D models. This tutorial shows how to capture the necessary video with a phone and configure the output for VR180 rendering.

Step-by-step instructions

  1. Capture an input video with a phone
    Volurama can potentially work with any type of input video, but for reliable results, we suggest starting with the capture process described here. We recommend using the widest FOV setting available on the camera. For best results, record a video that is about 5 to 30 seconds long. Move the camera around in a spiral, circle, or square pattern so that each part of the scene is seen from several different points of view. You can capture in portrait or landscape orientation.
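
    As a quick sanity check before importing, you can estimate whether a clip falls in the suggested 5 to 30 second window from its frame count and frame rate (a minimal stdlib-only sketch):

```python
def capture_duration_ok(frame_count: int, fps: float) -> bool:
    """Return True if the clip falls in the suggested 5-30 second window."""
    duration_s = frame_count / fps
    return 5.0 <= duration_s <= 30.0

# A 12-second clip at 30 fps (360 frames) is fine:
print(capture_duration_ok(360, 30.0))   # True
# A 2-second clip is too short to see the scene from enough viewpoints:
print(capture_duration_ok(60, 30.0))    # False
```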
  2. Create a new project in Volurama
    Launch Volurama and click "Create New Project".
  3. Select an input video
    Select a video file on your computer. It can be an mp4, mov, or mkv file.
  4. Select a project directory
    Select a folder where files related to this project will be saved. You should create a new, empty directory just for this project. Large amounts of data may be written to this folder, and files may be automatically deleted from it as well.
  5. SFM & NeRF options
    This screen allows you to configure settings that affect the quality of the results and how long processing takes. For a faster run, leave these at the default values. If you are planning to render VR180 output, it is recommended to press the "HD Settings" button. When ready, click "Start Processing".

    TLDR: Just click "HD Settings", then "Start Processing".
    Tip: change "Structure from Motion" ▸ "# Iterations" from 40 to 25 to save time without greatly affecting quality.
    Tip: for videos which contain a lot of moving objects, try changing "Structure from Motion" ▸ "Outlier Percentile" from 0.8 to a smaller number like 0.5 or 0.25.
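
    To see why lowering the outlier percentile helps with moving objects: keypoints tracked on moving objects don't fit a rigid-scene model and tend to produce large fitting errors, and a percentile cutoff discards the worst-fitting fraction. A toy sketch with hypothetical per-keypoint errors (an illustration of the idea, not Volurama's actual algorithm):

```python
def percentile_cutoff(errors, percentile):
    """Keep only the best-fitting fraction of keypoints.

    `errors` is a hypothetical list of per-keypoint fitting errors;
    lowering the percentile (e.g. 0.8 -> 0.5) discards more keypoints,
    which helps when moving objects contaminate the tracks.
    """
    ranked = sorted(errors)
    keep = int(len(ranked) * percentile)
    return ranked[:keep]

# Two moving objects contribute a few large errors:
errors = [0.2, 0.4, 0.3, 5.0, 0.1, 6.2, 0.5, 0.2, 4.8, 0.3]
print(len(percentile_cutoff(errors, 0.8)))  # 8 keypoints kept
print(len(percentile_cutoff(errors, 0.5)))  # 5 keypoints kept
```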
  6. Wait for processing, visualization
    It may take a while to process, depending on your settings and computer. While you wait, there are three main visualizations to look at: keypoint tracking, structure-from-motion optimization, and NeRF optimization. Keypoint tracking is the first step in the computer vision pipeline for determining camera motion. Structure from motion is the part of the system that solves an optimization problem to determine the camera's position and orientation in every frame of the input video. NeRF optimization is the part that uses machine learning to create a photorealistic 3D model of the scene.
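
    To illustrate what structure from motion optimizes: it adjusts camera poses (and 3D point positions) so that projecting the points through each camera reproduces the tracked keypoints. A toy pinhole projection, simplified to a camera looking down +Z with no rotation or lens distortion:

```python
def project(point_3d, cam_pos, focal):
    """Project a 3D point into a pinhole camera at cam_pos looking down +Z.

    Simplified sketch: no rotation or lens distortion. SfM solvers adjust
    camera pose (and the 3D points) so these projections match the
    keypoints tracked in each video frame.
    """
    x, y, z = (p - c for p, c in zip(point_3d, cam_pos))
    return (focal * x / z, focal * y / z)

# A point 2 units in front of the camera and 1 unit to the right:
print(project((1.0, 0.0, 2.0), (0.0, 0.0, 0.0), focal=1000.0))  # (500.0, 0.0)
```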
  7. Real-time 3D preview
    The 3D view is a quick-and-dirty preview of your scene, not the most photorealistic rendering possible (see the "Preview Render" window for that, and make it bigger). The 3D view only includes nearby objects, not the background; it's OK if it is missing some parts of the scene, as long as they appear in the "Preview Render" view. The spiral shown in the 3D view is the software's estimate of the 3D path the camera followed in the input video.
  8. Adjust timeline duration (optional)
    Volurama renders videos as output, and the length of these videos is determined by the timeline. To change the duration of rendered videos, use the menu: Virtual Camera ▸ Change Timeline Duration. Modify the timeline duration first, before setting up virtual camera motion.
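
    The timeline duration directly determines how many frames get rendered, so longer timelines mean longer render times. A minimal sketch of the relation (the 30 fps default here is an assumption for illustration, not necessarily Volurama's output frame rate):

```python
def frames_to_render(duration_s: float, fps: float = 30.0) -> int:
    """Number of output frames for a given timeline duration.

    The 30 fps default is an illustrative assumption.
    """
    return round(duration_s * fps)

print(frames_to_render(10.0))        # 300 frames at 30 fps
print(frames_to_render(8.5, 60.0))   # 510 frames at 60 fps
```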
  9. Virtual camera motion presets
    To create a visually pleasing result, Volurama can render the scene from the point of view of a virtual camera, which moves along any path you choose. The simplest way to set up this motion is to use the motion presets, e.g. the menu Virtual Camera ▸ Dolly Forward (or any of the others). Most of these presets create two keyframes in the timeline, one at the first frame and one at the last. It is convenient to start with a preset like this, then modify the keyframes to further adjust the virtual camera motion.
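
    Conceptually, between two keyframes the virtual camera's pose is interpolated over the timeline. A minimal sketch using linear interpolation (an assumption for illustration; Volurama's actual easing between keyframes may differ):

```python
def lerp_pose(start, end, t):
    """Linearly interpolate a camera position between two keyframes.

    t runs from 0.0 (first keyframe) to 1.0 (last keyframe).
    """
    return tuple(a + (b - a) * t for a, b in zip(start, end))

# Dolly Forward: one keyframe at the origin, one 4 units ahead on z.
# Halfway through the timeline, the camera is 2 units in:
print(lerp_pose((0.0, 0.0, 0.0), (0.0, 0.0, 4.0), 0.5))  # (0.0, 0.0, 2.0)
```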
  10. Edit keyframes
    Use the Keyframe Editor window to adjust the position and rotation of the virtual camera. The Preview Render window shows what the final output from the virtual camera will be.
  11. World transform - align horizon for VR
    The World Transform editor lets you rotate the world, which is useful for leveling the horizon or for artistic effect. When creating VR content, it is extremely important to align the horizon, or the scene may not be comfortable to view in VR. Volurama includes tools that can help rotate a scene to align with gravity. Change the Camera Type in Virtual Camera Settings to "Equirectangular (Mono 360)" to preview the scene as a 2D 360 photo. In this view, if the scene is oriented properly with respect to gravity, the horizon appears exactly in the middle of the image, which is marked by a red line in the preview render. Adjust the sliders in the World Transform Editor until the horizon lines up with the red line.

    Tip: it is easier to use the sliders after resizing the World Transform editor window.
    Tip: if there is a body of water in the scene such as an ocean or lake, it should line up perfectly with the red horizon line.
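
    The middle-of-the-image rule follows from how an equirectangular projection maps view angles to pixels: pitch maps linearly to image rows, so a level horizon (pitch 0°) always lands on the center row, which is where the red line sits. A small sketch:

```python
def pitch_to_row(pitch_deg, image_height):
    """Map a view pitch angle to a pixel row in an equirectangular image.

    Pitch spans +90 degrees (top row) to -90 degrees (bottom row), so the
    horizon (0 degrees) always lands in the exact middle of the image.
    """
    return (90.0 - pitch_deg) / 180.0 * image_height

print(pitch_to_row(0.0, 2048))    # 1024.0 -> the middle row (the red line)
print(pitch_to_row(90.0, 2048))   # 0.0    -> top of the image
```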
  12. Virtual camera settings - optimized for VR180
    To create VR180 content, use the following Virtual Camera Settings: set the camera type to "VR180 (Stereo)". Set Image Size (Per Eye) to 2048 for 4K, 3072 for 6K, or 4096 for 8K output. Stereo Baseline controls how far apart the virtual left and right cameras are, which changes the viewer's perception of depth and scale. Unfortunately, the input video doesn't necessarily contain enough information to infer the units and scale of the scene, so the baseline value here isn't in any particular units. For best results, experiment with a few different values to find one that produces a good sense of scale when viewed in VR.

    Tip: resize the Preview Render window for a more detailed preview of the results, and consider closing other editor windows.
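
    As a rough sketch of what the Stereo Baseline setting controls: the left- and right-eye cameras are offset from the virtual camera's center along its right vector. This is a simplified illustration, not Volurama's actual implementation, and the baseline is in arbitrary scene units, as noted above:

```python
def eye_positions(cam_pos, right_vec, baseline):
    """Offset left/right eye cameras from the virtual camera center.

    Each eye is shifted half the baseline along the camera's right
    vector; a larger baseline increases perceived depth and makes the
    scene feel smaller, a smaller one does the opposite.
    """
    half = baseline / 2.0
    left = tuple(c - half * r for c, r in zip(cam_pos, right_vec))
    right = tuple(c + half * r for c, r in zip(cam_pos, right_vec))
    return left, right

# Camera at eye height, right vector along +x, baseline of 0.06 units:
left, right = eye_positions((0.0, 1.5, 0.0), (1.0, 0.0, 0.0), baseline=0.06)
print(left, right)   # (-0.03, 1.5, 0.0) (0.03, 1.5, 0.0)
```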
  13. Render video
    To begin rendering the final output, use the menu: File ▸ Render Video. This will open the render config screen, which has the same options as the Virtual Camera Settings, as well as options for which video compression formats to create. For VR180 local playback, h265 is recommended. For VR180 editing or uploading to YouTube or Deo, ProRes is near-lossless and maintains more quality (Volurama uses ProRes 422LT by default).
  14. View rendered frames
    While rendering, each frame is saved as an image in the /render_frames subdirectory of the project. It is useful to look at these frames before the full render completes. After all frames have finished rendering, an .mp4 or .mov video file is generated in the project directory.
  15. Prepare for VR180 Uploading to YouTube
    Before uploading a VR180 video file to YouTube, you may need to add metadata that tells YouTube to interpret the video as a VR video. We recommend using Google's "VR180 Creator" tool for this.
  16. The result: a VR180 flythrough video ready to view on Quest, or upload to YouTube or Deo