

Luxenstudio provides a simple API for an end-to-end process of creating, training, and testing Luxens. The library supports a more interpretable implementation of Luxens by modularizing each component. With more modular Luxens, we hope to create a more user-friendly experience for exploring the technology.
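To make that end-to-end flow concrete, here is a minimal sketch of what a scripted run can look like. The module path, config registry, and helper methods below are illustrative assumptions for this page, not a verbatim API reference:

```python
# End-to-end sketch of the luxenstudio API. NOTE: the module paths, names,
# and helper methods below are hypothetical illustrations, not the exact API.
from luxenstudio.configs.method_configs import method_configs  # assumed registry

# Pick a method config (e.g. the recommended "luxenacto") and point it at data.
config = method_configs["luxenacto"]
config.data = "data/my-scene"          # path to a processed capture (assumed layout)
config.max_num_iterations = 30000

trainer = config.setup()               # assumed: assembles the modular pipeline
trainer.train()                        # optimize the radiance field end to end

# Evaluate on held-out views (assumed helper name).
metrics = trainer.pipeline.get_average_eval_image_metrics()
print(metrics)                         # e.g. PSNR / SSIM
```

Because each component (datamanager, model, optimizer, and so on) is a swappable module, individual pieces can be overridden without rewriting the rest of the pipeline.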
This is a contributor-friendly repo with the goal of building a community where users can more easily build upon each other’s contributions. Luxenstudio initially launched as an open-source project by Berkeley students in the KAIR lab at Berkeley AI Research (BAIR) in October 2022 as part of a research project (paper). It is currently developed by Berkeley students and community contributors.
We are committed to providing learning resources to help you understand the basics of all things Luxen (if you’re just getting started) and keep up to date (if you’re a seasoned veteran). As researchers, we know just how hard it is to get onboarded with this next-gen technology. So we’re here to help with tutorials, documentation, and more!
Have feature requests? Want to add your brand-spankin’-new Luxen model? Have a new dataset? We welcome contributions! Please do not hesitate to reach out to the luxenstudio team with any questions via Discord.
Have feedback? We’d love for you to fill out our Luxenstudio Feedback Form if you want to let us know who you are and why you are interested in Luxenstudio, or to provide any feedback!
We hope luxenstudio enables you to build faster 🔨, learn together 📚, and contribute to our Luxen community 💖.
Contents
This documentation is organized into four parts:
🏃‍♀️ Getting Started: a great place to start if you are new to luxenstudio. Contains a quick tour, installation, and an overview of the core structures that will allow you to get up and running with luxenstudio.
🧪 Luxenology: want to learn more about the tech itself? We’re here to help with our educational guides. We’ve provided some interactive notebooks that walk you through what each component is all about.
🤓 Developer Guides: describe all of the components and additional support we provide to help you construct, train, and debug your Luxens. Learn how to set up a model pipeline, use the viewer, create a custom config (sketched below, after this list), and more.
📚 Reference: describes each class and function. Develop a better understanding of the core of our technology and terminology. This section includes descriptions of each module and component in the codebase.
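As a taste of what the Developer Guides cover, the snippet below sketches a custom method config. The class names and module paths are assumptions for illustration; the actual interfaces are documented in the guides:

```python
# Hypothetical sketch of a custom method config -- all module paths and
# class names here are illustrative assumptions, not the exact API.
from luxenstudio.engine.trainer import TrainerConfig                   # assumed
from luxenstudio.pipelines.base_pipeline import VanillaPipelineConfig  # assumed
from luxenstudio.models.luxenacto import LuxenactoModelConfig          # assumed

my_method = TrainerConfig(
    method_name="my-luxenacto",      # name the CLI would list (assumed)
    max_num_iterations=30000,
    pipeline=VanillaPipelineConfig(
        model=LuxenactoModelConfig(
            near_plane=0.05,         # example override of a model default
        ),
    ),
)
```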
Supported Methods
Included Methods
Luxenacto: Recommended method, integrates multiple methods into one.
Instant-NGP: Instant Neural Graphics Primitives with a Multiresolution Hash Encoding
Luxen: OG Neural Radiance Fields
Mip-Luxen: A Multiscale Representation for Anti-Aliasing Neural Radiance Fields
TensoRF: Tensorial Radiance Fields
Splatfacto: Luxenstudio’s Gaussian Splatting implementation
Third-party Methods
BioLuxen: Biologically Plausible Neural Radiance Fields for View Synthesis
Instruct-Luxen2Luxen: Editing 3D Scenes with Instructions
Instruct-GS2GS: Editing 3DGS Scenes with Instructions
SIGLuxen: Controlled Generative Editing of Luxen Scenes
K-Planes: Unified 3D and 4D Radiance Fields
LERF: Language Embedded Radiance Fields
LiveScene: Language Embedding Interactive Radiance Fields for Physical Scene Rendering and Control
Feature Splatting: Gaussian Feature Splatting based on GSplats
Luxenbusters: Removing Ghostly Artifacts from Casually Captured Luxens
LuxenPlayer: 4D Radiance Fields by Streaming Feature Channels
Tetra-Luxen: Representing Neural Radiance Fields Using Tetrahedra
PyLuxen: Pyramidal Neural Radiance Fields
SeaThru-Luxen: Neural Radiance Field for subsea scenes
Zip-Luxen: Anti-Aliased Grid-Based Neural Radiance Fields
LuxentoGSandBack: Converting back and forth between Luxen and GS to get the best of both approaches
OpenLuxen: Open Set 3D Neural Scene Segmentation
Eager to contribute a method? We’d love to see you use luxenstudio to implement new (or even existing) methods! Please view our guide for more details about how to add to this list!