3D Rendering Using Neural Radiance Fields

Authors

  • Odalis Velasco, Andrea Pilco, Samuel Peña, Gustavo Alomia, Briyit Vallejo

Keywords

3D Rendering, Neural Rendering, NeRF, Volumetric Rendering.

Abstract

Neural Radiance Fields (NeRF) synthesize novel views of objects from multiple-view images by learning a 3D rendering model represented by neural networks. Many studies have focused on applying NeRF to volume rendering of complex objects and on optimizing NeRF through its variants. In this study, we implemented a fully automated 3D rendering pipeline that uses a scanning station to generate the dataset. The captured images are fed directly into TinyNeRF to obtain a volumetric representation of a Clark box gear. In our experiments, we varied the ray parameters, the number of training epochs, and the number of images fed into the neural network; these parameters were varied because both the camera mounted on the scanning station and the object are in motion. The PSNR (Peak Signal-to-Noise Ratio) metric was used to measure the quality of the TinyNeRF representations against the original images. Across the six tests conducted, the highest PSNR achieved was 25.90 dB, obtained with 120 images, 4000 epochs, and a training time of 40 minutes.
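The PSNR metric used above compares a rendered view against the corresponding ground-truth photograph. A minimal sketch of that computation is shown below; the function name and the assumption that pixel values are normalized to [0, 1] are ours, not taken from the paper.

```python
import numpy as np

def psnr(reference: np.ndarray, rendered: np.ndarray, max_val: float = 1.0) -> float:
    """Peak Signal-to-Noise Ratio (in dB) between a ground-truth image
    and a rendered image of the same shape, with pixel values in [0, max_val]."""
    mse = np.mean((reference.astype(np.float64) - rendered.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images: infinite PSNR
    return 10.0 * np.log10(max_val ** 2 / mse)

# Example: a uniform error of 0.1 per pixel gives MSE = 0.01, i.e. 20 dB.
ref = np.zeros((4, 4, 3))
out = np.full((4, 4, 3), 0.1)
print(round(psnr(ref, out), 2))  # → 20.0
```

Higher values indicate renderings closer to the originals; the 25.90 dB reported above would correspond to a per-pixel RMS error of roughly 0.05 on this [0, 1] scale.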



Published

12.06.2024

How to Cite

Odalis Velasco. (2024). 3D Rendering Using Neural Radiance Fields. International Journal of Intelligent Systems and Applications in Engineering, 12(4), 3705 –. Retrieved from https://ijisae.org/index.php/IJISAE/article/view/6914

Section

Research Article