New Comprehensive Research Survey Evaluates the Revolution of 3D Scene Editing via NeRF and Gaussian Splatting
National University of Defense Technology researchers provide a systematic review of 3D editing using NeRF and 3DGS in Frontiers of Computer Science.
By: AXL Media
Published: Apr 25, 2026, 8:14 AM EDT
Source: EurekAlert!

Bridging the Gap Between Digital Fidelity and Computational Efficiency
For decades, the manipulation of 3D scenes required labor-intensive manual adjustments that often struggled to preserve fine detail or achieve acceptable computational speed. However, the emergence of radiance field-based methods has fundamentally altered this landscape, providing a scalable solution for high-fidelity digital content creation. According to researchers Chenyang Zhu and Xinyao Liu, Neural Radiance Fields (NeRF) and 3D Gaussian Splatting (3DGS) now serve as the primary foundations for modern scene representation. These technologies allow for a flexible approach to digital editing that was previously hindered by the limitations of traditional geometric modeling, marking a significant milestone in the evolution of computer vision.
The Evolution of Neural Radiance Fields in Digital Design
Neural Radiance Fields, or NeRF, utilize deep neural networks to model the complex light transport and geometry of 3D environments. By leveraging these networks, creators can synthesize novel views of a scene with unprecedented realism. The survey published in Frontiers of Computer Science notes that NeRF has been instrumental in moving 3D editing away from rigid mesh structures and toward more fluid, neural-based representations. This shift allows for more sophisticated alterations in lighting and texture, as the neural network can "understand" the underlying volumetric data of the object being edited rather than just its surface.
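At the core of NeRF's view synthesis is volume rendering: the network is queried at sample points along each camera ray for a density and a color, and these samples are composited into a pixel using transmittance weights. The sketch below illustrates only that compositing step (the function name and NumPy implementation are illustrative, not from the survey):

```python
import numpy as np

def volume_render(densities, colors, deltas):
    """Composite density/color samples along one ray, NeRF-style.

    densities: (N,) non-negative sigma values at each sample point
    colors:    (N, 3) RGB predicted at each sample point
    deltas:    (N,) distances between consecutive samples
    """
    # Opacity contributed by each sample over its interval.
    alphas = 1.0 - np.exp(-densities * deltas)
    # Transmittance: how much light survives to reach each sample.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas
    # Weighted sum of sample colors gives the final pixel color.
    return (weights[:, None] * colors).sum(axis=0)
```

In a full NeRF pipeline these densities and colors would come from an MLP evaluated at positionally encoded ray samples; here they are treated as given inputs to keep the example self-contained.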
Accelerating Rendering Speeds Through 3D Gaussian Splatting
While NeRF offers high fidelity, 3D Gaussian Splatting, known as 3DGS, has introduced a new level of efficiency to the rendering process. By representing a scene as a collection of explicit, anisotropic 3D Gaussians that are projected and rasterized, this method allows for much faster rendering speeds without sacrificing the visual quality of the output. The research team from the National University of Defense Technology highlighted that 3DGS provides the robust foundation needed for real-time 3D editing. This breakthrough is particularly relevant for interactive applications where immediate visual feedback is necessary for the artist or designer to refine their digital assets.
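The speed of 3DGS comes from rasterization rather than ray marching: each Gaussian is projected to the image plane, the Gaussians overlapping a pixel are sorted front to back, and their colors are alpha-blended with early termination once the pixel is nearly opaque. A minimal per-pixel sketch of that blending, with hypothetical inputs standing in for a real projection pipeline:

```python
import numpy as np

def splat_pixel(pixel, means, inv_covs, opacities, colors, depths):
    """Blend depth-sorted 2D Gaussians at one pixel (3DGS-style compositing).

    pixel:     (2,) image-plane coordinate being shaded
    means:     (N, 2) projected Gaussian centers
    inv_covs:  (N, 2, 2) inverse 2D covariance of each projected Gaussian
    opacities: (N,) per-Gaussian base opacity in [0, 1]
    colors:    (N, 3) per-Gaussian RGB
    depths:    (N,) camera-space depths used for front-to-back sorting
    """
    order = np.argsort(depths)          # nearest Gaussians composite first
    color, T = np.zeros(3), 1.0         # accumulated color and transmittance
    for i in order:
        d = pixel - means[i]
        # Gaussian falloff scales the base opacity with distance from center.
        alpha = opacities[i] * np.exp(-0.5 * d @ inv_covs[i] @ d)
        color += T * alpha * colors[i]
        T *= 1.0 - alpha
        if T < 1e-4:                    # early termination: pixel is saturated
            break
    return color
```

This early-exit, sort-and-blend structure is what makes interactive frame rates feasible, since most pixels terminate after touching only a handful of Gaussians.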