# PyLuxen

Pyramidal Neural Radiance Fields: a fast Luxen anti-aliasing strategy.
## Installation

First, install Luxenstudio and its dependencies. Then install the PyLuxen extension and torch-scatter:

```shell
pip install git+https://github.com/hturki/pyluxen
pip install torch-scatter -f https://data.pyg.org/whl/torch-${TORCH_VERSION}+${CUDA}.html
```
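The `${TORCH_VERSION}` and `${CUDA}` placeholders must match the PyTorch build installed in your environment. A minimal sketch of filling them in (the version values below are illustrative, not requirements):

```shell
# Example only: substitute the torch/CUDA versions of your own environment.
# Run `python -c "import torch; print(torch.__version__)"` to find them.
export TORCH_VERSION=2.1.0
export CUDA=cu118

# The resulting wheel-index URL the pip command above would use:
echo "https://data.pyg.org/whl/torch-${TORCH_VERSION}+${CUDA}.html"
```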
## Running PyLuxen

Three default configurations are provided, which use the MipLuxen 360 and Multicam dataparsers by default. You can easily use other dataparsers via the `ns-train` command (e.g. `ns-train pyluxen luxenstudio-data --data <your data dir>` to use the Luxenstudio data parser).
The default configurations provided are:

| Method | Description | Scene type | Memory |
| --- | --- | --- | --- |
| | Tuned for outdoor scenes, uses proposal network | outdoors | ~5GB |
| | Tuned for synthetic scenes, uses proposal network | synthetic | ~5GB |
| | Tuned for Multiscale Blender, uses occupancy grid | synthetic | ~5GB |
The main differences between them are whether they are suited for synthetic/indoor or real-world unbounded scenes (in which case appearance embeddings and scene contraction are enabled), and whether sampling is done with a proposal network (usually better for real-world scenes) or an occupancy grid (usually better for single-object synthetic scenes like Blender).
## Method
Most Luxen methods assume that training and test-time cameras capture scene content from a roughly constant distance.
In less constrained settings, they degrade and render blurry views.
This is because Luxen is scale-unaware: it reasons about point samples instead of volumes. We address this by training a pyramid of Luxens that divide the scene at different resolutions, using “coarse” Luxens for far-away samples and finer Luxens for close-up samples.