research



Estimating Physically-Based Reflectance Parameters From a Single Image With GAN-Guided CNN

We present a method that estimates the physically accurate reflectance of a material from a single image and reproduces real-world materials that can be used in well-known graphics engines and tools. Recovering the BRDF (bidirectional reflectance distribution function) from a single image is an ill-posed problem because of insufficient irradiance and geometry information as well as insufficient samples of the BRDF parameters. The problem can be alleviated with a simplified representation of surface reflectance, such as the Phong reflection model. Recent works have shown that convolutional neural networks can successfully predict the parameters of empirical BRDF models for non-Lambertian surfaces. However, the parameters of physically-based models span a non-orthogonal space, which makes it difficult to estimate physically meaningful results. In this paper, we propose a method to estimate the parameters of a physically-based BRDF model from a single image. We focus on the metallic property of the physically-based model to enhance estimation accuracy. Since metals and nonmetals have very different characteristics, our method processes them separately. Our method also generates auxiliary maps using a cGAN (conditional generative adversarial network) architecture to help estimate more accurate BRDF parameters. Based on our experimental results, the auxiliary map is chosen to be an irradiance environment map for metallic materials and a specular map for nonmetallic materials. These auxiliary maps help to clarify the contributions of different factors, including light color, material color, the specular component, and the diffuse component, to the surface color. Our method first estimates whether the material in the input image is metallic or nonmetallic. Then, it estimates the BRDF parameters using a CNN (convolutional neural network) architecture guided by the generated auxiliary maps. Our results show that our method effectively estimates BRDF parameters on both synthesized and real images.
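As a rough illustration of the pipeline described above, the PyTorch-style sketch below wires together a metallic/nonmetallic classifier, a cGAN-style auxiliary-map generator for each branch, and an auxiliary-map-guided CNN regressor. All class names, layer configurations, and the number of output BRDF parameters are illustrative assumptions, not the implementation used in the paper.

    # Sketch only: module names, layer sizes, and the BRDF parameter count are
    # assumptions for illustration, not the authors' network architecture.
    import torch
    import torch.nn as nn

    class MetalClassifier(nn.Module):
        """Predicts whether the pictured material is metallic or nonmetallic."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(64, 1))
        def forward(self, img):
            return torch.sigmoid(self.features(img))  # P(metallic)

    class AuxMapGenerator(nn.Module):
        """Stand-in for the cGAN generator: image -> auxiliary map
        (irradiance environment map for metals, specular map for nonmetals)."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid())
        def forward(self, img):
            return self.net(img)

    class BRDFRegressor(nn.Module):
        """CNN mapping (image, auxiliary map) to physically-based BRDF
        parameters, e.g. base color, roughness, metallic."""
        def __init__(self, n_params=5):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(6, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(64, n_params))
        def forward(self, img, aux):
            return self.net(torch.cat([img, aux], dim=1))

    def estimate_brdf(img, classifier, gen_metal, gen_nonmetal, reg_metal, reg_nonmetal):
        """Route the image through the metallic or nonmetallic branch
        (assumes the whole batch is of a single material type)."""
        is_metal = classifier(img) > 0.5
        gen, reg = (gen_metal, reg_metal) if is_metal.all() else (gen_nonmetal, reg_nonmetal)
        aux = gen(img)        # GAN-generated auxiliary map guides the regressor
        return reg(img, aux)  # predicted BRDF parameters

The point of the sketch is only the control flow: classify the material first, generate the branch-specific auxiliary map, then condition the parameter regressor on both the image and that map.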


Research over the last decade has built a solid mathematical foundation for the representation and analysis of 3D meshes in graphics and geometric modeling. Much of this work, however, does not explicitly incorporate models of low-level human visual attention. In this paper we introduce the idea of mesh saliency as a measure of regional importance for graphics meshes. Our notion of saliency is inspired by low-level human visual system cues. We define mesh saliency in a scale-dependent manner using a center-surround operator on Gaussian-weighted mean curvatures. We observe that such a definition of mesh saliency is able to capture what most would classify as visually interesting regions on a mesh. The human-perception-inspired importance measure computed by our mesh saliency operator yields more visually pleasing results in processing and viewing of 3D meshes, compared to using a purely geometric measure of shape, such as curvature. We discuss how mesh saliency can be incorporated in graphics applications such as mesh simplification and viewpoint selection, and present examples that show visually appealing results from using mesh saliency.
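A sketch of the center-surround operator the abstract describes, following the paper's formulation (the neighborhood cutoff and exact scale values should be checked against the paper): let C(v) be the mean curvature at vertex v and G(C(v), σ) its Gaussian-weighted average over nearby vertices; the saliency at scale σ_i is the absolute difference between a fine and a coarse average.

    % Gaussian-weighted average of mean curvature C(x) over vertices x within 2*sigma of v
    G(\mathcal{C}(v),\sigma) =
        \frac{\sum_{x \in N(v,\,2\sigma)} \mathcal{C}(x)\,
              \exp\!\big(-\lVert x-v\rVert^{2}/(2\sigma^{2})\big)}
             {\sum_{x \in N(v,\,2\sigma)}
              \exp\!\big(-\lVert x-v\rVert^{2}/(2\sigma^{2})\big)}

    % Center-surround saliency at scale sigma_i: fine average vs. coarse (2*sigma_i) average
    \mathcal{S}_{i}(v) = \big|\, G(\mathcal{C}(v),\sigma_{i}) - G(\mathcal{C}(v),2\sigma_{i}) \,\big|

The sketch shows a single scale; the paper evaluates the operator at several scales and combines the per-scale maps (after non-linear suppression) into the final mesh saliency.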

Figure panels: original mesh, mesh saliency, simplified mesh.

  • Mesh Saliency. Chang Ha Lee, Amitabh Varshney, and David Jacobs. ACM Transactions on Graphics (SIGGRAPH 2005), Vol. 24, No. 3, pages 659-666, 2005. (PDF)


TBA