Easy-to-use neural radiance fields (NeRF) for Windows and Mac
Capture volumetric panoramas with a phone, render on Windows or Mac
Capture a short video (~10 seconds) of a scene using an iPhone. Move the camera in a square or circle to capture from different points of view.
Process the video with Volurama on Windows or Mac. Volurama creates a 3D model of the scene from your video. Unlike other NeRF systems which process in the cloud, you maintain control of your data by rendering locally.
Render the scene from the point of view of a moving virtual camera to create professional-looking effects, or render 3D VR video or holograms for glasses-free 3D displays such as the Looking Glass Portrait.
What can you do with a volumetric panorama and a virtual camera?
Photorealistically simulate camera motions that would typically require expensive hardware, such as dolly, boom, and orbit.
Render eye-catching 2D videos to stand out and/or save production costs.
Render VR180 (stereoscopic 3D) video. Create 3D VR photos and videos using only a phone.
Render holograms for the Looking Glass Portrait.
Export point clouds with up to 10 million points.
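To make the "virtual camera" idea concrete, here is a minimal sketch of how an orbit-style camera path could be generated as a list of positions circling the scene. This is purely illustrative; the function name and the coordinate convention are assumptions, not Volurama's actual API.

```python
import numpy as np

def orbit_camera_positions(center, radius, height, num_frames):
    """Generate camera positions on a circle around a scene center.

    Illustrative sketch of an 'orbit' virtual-camera move; not
    Volurama's internal representation. Y is treated as 'up'.
    """
    angles = np.linspace(0.0, 2.0 * np.pi, num_frames, endpoint=False)
    x = center[0] + radius * np.cos(angles)
    y = center[1] + np.full(num_frames, height)
    z = center[2] + radius * np.sin(angles)
    return np.stack([x, y, z], axis=1)  # shape (num_frames, 3)

# Eight camera positions orbiting the origin at radius 2, slightly above it.
positions = orbit_camera_positions(center=(0, 0, 0), radius=2.0,
                                   height=0.5, num_frames=8)
```

A dolly or boom move is the same idea with positions along a straight line instead of a circle.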
State of the art 3D geometric computer vision engine
Automatically estimates camera motion using structure from motion and non-linear least squares optimization.
Learns a panoramic (unbounded) neural radiance field model capturing the 3D structure of the scene and view-dependent phenomena.
Proprietary structure from motion and neural radiance field engine developed over several years by Lifecast.
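To give a flavor of what "non-linear least squares optimization" means here, the sketch below refines a camera translation by minimizing reprojection error with Gauss-Newton iterations on a toy 1D pinhole model. The model, point data, and solver are simplified illustrations, not Lifecast's proprietary engine.

```python
import numpy as np

# Toy 1D pinhole camera: pixel u = f * (X - tx) / (Z - tz).
F = 500.0
PTS = np.array([[0.5, 4.0], [-0.8, 5.0], [1.2, 6.0],
                [0.1, 3.5], [-1.5, 7.0]])  # (X, Z) of 3D points

def project(t):
    tx, tz = t
    return F * (PTS[:, 0] - tx) / (PTS[:, 1] - tz)

def gauss_newton(u_obs, t0, iters=20):
    """Minimize sum of squared reprojection errors over (tx, tz)."""
    t = np.asarray(t0, dtype=float)
    eps = 1e-6
    for _ in range(iters):
        r = u_obs - project(t)  # residuals
        # Numeric Jacobian of the residuals w.r.t. (tx, tz).
        J = np.empty((len(r), 2))
        for j in range(2):
            dt = np.zeros(2)
            dt[j] = eps
            J[:, j] = ((u_obs - project(t + dt)) - r) / eps
        # Gauss-Newton step: solve the linearized least-squares problem.
        delta = np.linalg.lstsq(J, -r, rcond=None)[0]
        t = t + delta
    return t

true_t = np.array([0.3, -0.2])
u_obs = project(true_t)                 # synthetic observations
est = gauss_newton(u_obs, t0=[0.0, 0.0])
```

Real structure-from-motion solvers optimize full 6-DoF poses and 3D point locations jointly (bundle adjustment), but the inner loop has this same shape: residuals, Jacobian, linear solve, update.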
Download and install Volurama for Windows and Mac
Volurama for Windows
Requires Windows 11 and an NVIDIA GPU (RTX 3090 recommended)
You may get a warning while installing that says "Windows protected your PC ... prevented an unrecognized app from starting". To get past this, click "More Info", then "Run Anyway".
Volurama for Mac
When you first open the application, macOS will warn that the app cannot be verified. To get past this, go to "System Preferences" → "Security & Privacy" → "Open Anyway".
You must install ffmpeg!
To encode and decode video, Volurama requires ffmpeg (a free, open-source tool). You must install ffmpeg separately!
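One quick way to check whether ffmpeg is installed and visible on your PATH is a short script. This is a hypothetical helper for sanity-checking your setup; Volurama itself has its own ffmpeg path configuration.

```python
import shutil

def find_ffmpeg():
    """Return the path to an ffmpeg executable found on PATH, or None.

    Hypothetical helper for verifying an ffmpeg install; not part of
    Volurama.
    """
    return shutil.which("ffmpeg")

path = find_ffmpeg()
if path is None:
    print("ffmpeg not found - install it and/or add it to your PATH")
else:
    print("ffmpeg found at", path)
```

You can get the same answer from a terminal with `ffmpeg -version`.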
Q: Do you have some example input videos that I can try?
A: Yes, here are a couple of videos that we captured with an iPhone 14, which illustrate how to move the camera to create a suitable input and get a good result.
Example Input Video 1
Example Input Video 2
Q: How do I capture a video to use it as input for Volurama?
A: Hold your phone facing in one direction, then move your hand in a large square shape to capture the scene from different points of view.
It is OK to use either portrait or landscape mode.
Q: What settings should I use when capturing a video with an iPhone?
A: We generally recommend selecting the widest field of view mode possible. Do not change the field of view in the middle of the video.
Q: Can I use videos captured with any camera as input or only iPhone?
A: We have tested Volurama mostly with inputs from iPhones, and can't guarantee it will work with videos from other cameras, but it is definitely possible.
The most important thing is that the input video be "rectilinear", meaning it has no lens distortion: straight lines in the real 3D world appear as straight lines in the 2D image.
Most phone cameras (including iPhones) correct lens distortion to provide you with a rectilinear image automatically.
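The defining property of a rectilinear (ideal pinhole) camera can be checked numerically: projecting points that lie on a straight 3D line yields points on a straight 2D line. The sketch below demonstrates this with illustrative intrinsics; the focal length and principal point are made-up values, not those of any particular phone.

```python
import numpy as np

def project_pinhole(points, f=1000.0, cx=960.0, cy=540.0):
    """Ideal rectilinear pinhole projection with no distortion term.

    points: (N, 3) array of 3D points in camera coordinates (Z > 0).
    Returns (N, 2) pixel coordinates. Intrinsics are illustrative.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    return np.stack([f * x / z + cx, f * y / z + cy], axis=1)

# Sample points on a straight 3D line: p(t) = a + t * d.
a = np.array([0.2, -0.1, 3.0])
d = np.array([0.5, 0.3, 1.0])
line_3d = a + np.linspace(0.0, 2.0, 6)[:, None] * d
px = project_pinhole(line_3d)

# With no lens distortion, the projected points remain collinear:
# every vector from px[0] is parallel to (px[-1] - px[0]).
v = px - px[0]
cross = v[1:, 0] * v[-1, 1] - v[1:, 1] * v[-1, 0]  # ~0 if collinear
```

A fisheye or otherwise distorted lens would bend this projected line into a curve, which is why distorted footage needs to be undistorted (as phone cameras do automatically) before use.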
Q: Does Volurama use COLMAP to estimate camera motion?
A: No. Volurama uses its own custom keypoint tracking and structure-from-motion solver.
Q: Is there a publication or github for the neural radiance field algorithm used in Volurama?
A: No. The NeRF engine in Volurama is written from scratch in C++, and incorporates ideas from many recent papers on neural radiance fields, as well as some proprietary techniques developed by Lifecast.
Example Results
iPhone → NeRF → VR180
This is a 3D VR video created with Volurama and an iPhone.
NeRF training is now GPU accelerated on Apple Silicon (ARM, M1, M2, M3) using Metal. Previously training on Mac used CPU.
Intel Macs are no longer supported.
Volurama 1.2
Added option for stereoscopic (rectilinear) output.
Added visualization of keypoint tracking.
Added graph of structure from motion optimization.
Added graph of NeRF optimization.
Added progress bars for all long-running commands.
A text log file is now saved in the project directory.
Volurama 1.1
Fixed default ffmpeg path on Apple Silicon.
Fixed formatting of ffmpeg config dialog.
Fixed a bug where tracking visualization images were not cleaned up.
About Us
Lifecast Incorporated
Volurama is created by Lifecast Incorporated. We are dedicated to pushing the limits of immersive volumetric media, while making tools that are practical for creators to unleash their creativity.
Explore more tools from Lifecast: