ƒVDB

fVDB is a Python library of data structures and algorithms for building high-performance and large-domain spatial applications using NanoVDB on the GPU in PyTorch. Applications of fVDB include 3D deep learning, computer graphics/vision, robotics, and scientific computing.

fVDB Teaser

fVDB was first developed by the NVIDIA High-Fidelity Physics Research Group within the NVIDIA Spatial Intelligence Lab, and continues to be developed with the OpenVDB community to meet the growing need for a robust framework for spatial intelligence research and applications.

The paper provides more details; please consider citing it in your work if you find fVDB useful.

Learning to Use fVDB

After installing fVDB, we recommend starting with our documentation.

Beyond the documentation, the walk-through notebooks in this repository can provide an illustrated introduction to the main concepts in fVDB.

Installing fVDB

The fvdb-core Python package can be installed either using published packages with pip or built from source.

For the most up-to-date information on installing fVDB's pip packages, please see the installation documentation.

Building fVDB from Source

If the pre-built packages do not meet your needs, you can build fVDB from source in this repository.

Environment Management

ƒVDB is a Python library implemented as a C++ PyTorch extension. We provide three paths to constructing reliable environments for building and running ƒVDB. These are separate options and are not intended to be combined (although, with some modification, you could of course use a conda or pip environment inside a Docker container).

  1. conda (RECOMMENDED)
  2. Docker
  3. Python virtual environment (venv)

conda tends to be more flexible, since toolchains and modules can be reconfigured dynamically to suit your larger project, but that flexibility can also make it more brittle than an isolated Docker container. We generally recommend conda for development and testing, and Docker for CI/CD and deployment.

OPTION 1 Conda Environment (Recommended)

fVDB can be used with any Conda distribution installed on your system. Below is an installation guide using miniforge. You can skip steps 1-3 if you already have a Conda installation.

  1. Download and run the install script. Copy the commands below to download and run the miniforge install script:

curl -L -O "https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-$(uname)-$(uname -m).sh"
bash Miniforge3-$(uname)-$(uname -m).sh

  2. Follow the prompts to customize Conda and run the install. Note: we recommend saying yes to enable conda-init.

  3. Start Conda. Open a new terminal window, which should now show Conda initialized to the (base) environment.

  4. Create the fvdb conda environment. Run the following command from the directory containing this README:

conda env create -f env/dev_environment.yml

  5. Activate the fVDB environment:

conda activate fvdb

Other available environments
  • fvdb_build: Use env/build_environment.yml for a minimum set of dependencies needed just to build/package fVDB (note this environment won't have all the runtime dependencies needed to import fvdb).
  • fvdb_test: Use env/test_environment.yml for a runtime environment which has only the packages required to run the unit tests after building ƒVDB. This is the environment used by the CI pipeline to run the tests after building ƒVDB in the fvdb_build environment.
  • fvdb_learn: Use env/learn_environment.yml for additional runtime requirements and packages needed to run the notebooks or examples and view their visualizations.
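
For example, to create and use the build-only environment (the same pattern applies to the test and learn environments):

conda env create -f env/build_environment.yml
conda activate fvdb_build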

OPTION 2 Docker Container

Running a Docker container ensures that you have a consistent environment for building and running ƒVDB. Start by installing Docker and the NVIDIA Container Toolkit.
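
Before building, you may want to confirm that Docker can access your GPUs. A quick check is to run nvidia-smi inside any CUDA base image (the image tag below is just an example; use whichever CUDA image you have available):

# Should print the same GPU table as running nvidia-smi directly on the host
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi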

Our provided Dockerfile constructs a container that pre-installs the dependencies needed to build and run ƒVDB.

  1. In the fvdb-core directory, build the Docker image:

docker build -t fvdb-devel .

  2. When you are ready to build ƒVDB, start the container with the first command below, then run the remaining commands inside it. TORCH_CUDA_ARCH_LIST specifies which CUDA architectures to build for.

docker run -it --mount type=bind,src="$(pwd)",target=/workspace fvdb-devel bash
cd /workspace
pip install -r env/build_requirements.txt
TORCH_CUDA_ARCH_LIST="7.5;8.0;9.0;10.0;12.0+PTX" \
./build.sh install -v

In order to extract an artifact from the container such as the Python wheel, query the container ID using docker ps and copy the artifact using docker cp.
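
For example (the wheel path and name below are placeholders; substitute the actual artifact your build produced):

# Find the running container's ID or name
docker ps
# Copy a built wheel from the container to the current directory on the host
docker cp <container-id>:/workspace/dist/<wheel-name>.whl .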


OPTION 3 Python Virtual Environment

Using a Python virtual environment enables you to use your system-provided compiler and CUDA Toolkit. This can be especially useful if you are using ƒVDB in conjunction with other Python packages, particularly packages that have been built from source.

  1. Start by installing GCC, the CUDA Toolkit, and cuDNN.

  2. Then, create a Python virtual environment, install the requisite dependencies, and build:

python -m venv fvdb
source fvdb/bin/activate
pip install -r env/build_requirements.txt
TORCH_CUDA_ARCH_LIST="7.5;8.0;9.0;10.0;12.0+PTX" ./build.sh install -v

Note: adjust the TORCH_CUDA_ARCH_LIST to suit your needs. If you are building just to run on a single machine, including only the present GPU architecture(s) reduces build time.
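
If you are unsure which architectures your GPUs use, one way to check (assuming PyTorch is already installed in the active environment) is:

# Prints the compute capability of the current GPU as a (major, minor) tuple, e.g. (8, 6)
python -c "import torch; print(torch.cuda.get_device_capability())"
# A result of (8, 6) corresponds to:
TORCH_CUDA_ARCH_LIST="8.6" ./build.sh install -v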


Building fVDB

Tips for Building fVDB

  • ⚠️ Compilation can be very memory-consuming. Our build script sets the CMAKE_BUILD_PARALLEL_LEVEL environment variable to control compilation job parallelism, using a value that works well for most machines (roughly one job per 2.5 GB of memory). You can override this by setting CMAKE_BUILD_PARALLEL_LEVEL to a different value yourself, as shown in the example after this list.

  • To save time and trouble on repeated clean builds, configure your CPM_SOURCE_CACHE. Add the following to your shell configuration (e.g. .bashrc)

    export CPM_SOURCE_CACHE=$HOME/.cache/CPM

    If this is not set, CMake Package Manager (CPM) will cache build-time dependencies inside the fVDB build directory. Keeping the cache outside of the build directory allows these dependencies to be reused across fvdb clean-build cycles and saves build time. See the CPM documentation for more detail.
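
As mentioned in the first tip above, you can cap build parallelism by setting CMAKE_BUILD_PARALLEL_LEVEL yourself before invoking the build script. For example, on a memory-constrained machine (the value 4 is only illustrative):

CMAKE_BUILD_PARALLEL_LEVEL=4 ./build.sh install -v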

Build Commands

You can either perform an install:

./build.sh

or if you would like to build a packaged wheel for installing in other environments, you can run the following command:

./build.sh wheel

The build script automatically detects the CUDA architectures to build for based on the available GPUs on the system. You can override this behavior by passing the --cuda-arch-list option (quote the value so the shell does not interpret the semicolons):

./build.sh --cuda-arch-list="8.0;8.6+PTX"

Build Modifiers

The build script supports the following build modifiers:

  • gtests: Enable building the gtest C++ unit tests.
  • benchmarks: Enable building the benchmarks.
  • editor_skip: Skip building the nanovdb_editor dependency.
  • editor_force: Force rebuild of the nanovdb_editor dependency.
  • debug: Build in debug mode with full debug symbols and no optimizations.
  • strip_symbols: Strip symbols from the build (will be ignored if debug is enabled).
  • verbose: Enable verbose build output for pip and CMake.

Running Tests

C++ Tests

To run the gtest C++ unit tests:

./build.sh ctest

Python Tests

To run the Python unit tests with pytest:

cd tests
pytest unit
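
pytest's usual selection flags also work here; for example, to run a verbose subset of tests whose names match a keyword (the keyword below is just an example):

pytest unit -k "grid" -v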

Building Documentation

To build the documentation, simply run:

sphinx-build ./docs -a -E build/sphinx
# View the docs
open build/sphinx/index.html
# View docs as served
cd build/sphinx
python -m http.server
# Open localhost:8000 in browser

Setting up Intellisense with clangd in Visual Studio Code

Please see the guide Clangd for Intellisense in fVDB.

Code Structure

The main source code for fVDB lives in the src directory. There are several important files here:

  • src/python/Bindings.cpp exposes functionality directly to Python. It is mainly a wrapper around the core classes such as fvdb::GridBatch and fvdb::JaggedTensor.
  • src/GridBatch.h contains the implementation of fvdb::GridBatch which is the core data structure on which fVDB is built. A GridBatch acts as a map between (i, j, k) voxel coordinates and offsets in linear memory. This mapping can be used to perform a host of operations. The methods in this class are mostly lightweight wrappers around a set of CPU and CUDA kernels. The function prototypes for these kernels are defined in src/detail/ops/*.h.
  • src/detail/ops/*.h contains the function prototypes for the main kernels used by fVDB. Host and device kernel implementations are provided in the src/detail/ops/*.cu source files.
  • src/detail/autograd contains C++ implementations of PyTorch autograd functions for differentiable operations. #include <detail/autograd/Autograd.h> includes all of the functions in this directory.
  • src/detail/utils/nanovdb contains a number of utilities which make it easier to use NanoVDB.

References

Please consider citing the following technical paper, presented at ACM SIGGRAPH 2024, when adopting fVDB in your project:

@article{williams2024fvdb,
  author = {Williams, Francis and Huang, Jiahui and Swartz, Jonathan and Klar, Gergely and Thakkar, Vijay and Cong, Matthew and Ren, Xuanchi and Li, Ruilong and Fuji-Tsang, Clement and Fidler, Sanja and Sifakis, Eftychios and Museth, Ken},
  title = {fVDB: A Deep-Learning Framework for Sparse, Large Scale, and High Performance Spatial Intelligence},
  year = {2024},
  issue_date = {July 2024},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  volume = {43},
  number = {4},
  issn = {0730-0301},
  url = {https://doi.org/10.1145/3658226},
  doi = {10.1145/3658226},
  journal = {ACM Trans. Graph.},
  month = jul,
  articleno = {133},
  numpages = {15},
}

Contact

For questions or feedback, please use the GitHub Issues for this repository.