Separation of interreflections based on parallel single-pixel imaging
1. Introduction

The illumination in real scenes is usually complex: points on a concave surface are lit not only by the light source itself but also by light reflected from other parts of the scene. In a camera-projector pair, the images captured by the camera therefore contain direct and indirect components originating from different positions on the projector. This combination is called interreflection [1-3] in the literature. Decomposing the mixture of interreflections into a direct component and higher order bounce components is vital to computer graphics and computer vision. For the former, a successful separation provides an accuracy check for forward rendering models [4], making graphics models more realistic; for the latter, separation makes it possible to measure an unknown real scene with a concave 3D shape and complex surface properties free of interreflection effects [3,5,6].

Interreflection is modeled explicitly in computer graphics, where light transport [7,8] is a forward problem with known 3D shape and surface reflectance. When the surface information of a scene is unknown, however, the situation is completely different: the light intensity recorded in an image is the only available information, and decomposing interreflections amounts to analyzing the proportion of n-bounce light. Early studies were inspired by the rendering equation in computer graphics, treating the decomposition of interreflections as the inverse problem of forward rendering [9,10]. Seitz et al. [11] introduced the theory of inverse light transport, although related ideas had been explored in earlier computer vision and graphics literature. For example, Nayar et al. [1] reconstructed 3D shape in the presence of interreflections by first computing a pseudo shape with an existing method and then building a mathematical relation to the correct shape; the method is novel in that it exploits the properties of the incorrect result, but it applies only to Lambertian surfaces. Masselus et al. [12] focused on relighting real scenes illuminated by incident light fields, an inverse problem of the rendering model, and described the correspondence between light sources and capture devices. Peers et al. [13] studied the light transport of subsurface scattering based on non-negative matrix factorization and rendered real objects without subsurface-scattering illumination.

The indirect component [14-17] is a combination of several bounce contributions: 2nd and higher order bounces arise under different conditions, for instance on a concave surface. The need to subdivide n-bounce light has therefore prompted exploration of the independent contributions of the different bounces. Such a decomposition is useful both for verifying the correctness of rendering models in computer graphics and for extending vision and optical 3D measurement of unknown static scenes from captured images. In the field of optical 3D measurement and vision, separation has usually aimed only at splitting direct from indirect components, discarding any subdivision of the indirect light. The contribution of n-bounce light is in fact complex, and subdividing this mixed component offers a new prospect for vision and optical measurement with the bidirectional reflectance distribution function (BRDF) and bidirectional surface scattering reflectance distribution function (BSSRDF) models.
The separation of n-bounce light involves two main issues, both addressed in this paper: the analysis of interreflections and their separation. First, analyzing the contribution of n-bounce light is vital for separation. We adopt parallel single-pixel imaging (PSI) to determine the specific contribution of n-bounce light, owing to its fast acquisition of light transport coefficients; these coefficients offer an intuitive representation of light propagation from a projector to a camera, given that one camera pixel receives several light rays originating from the projector's light source [1,18]. Acquiring light transport coefficients (or similar quantities) is usually very time consuming, as demonstrated by previous studies that investigated light propagation by relighting independent light sources [12]; the introduction of PSI accelerates this acquisition. Second, the separation of n-bounce light is complicated because a camera pixel captures a mixed value that the light transport coefficients must decompose. We introduce two methods that together separate the interreflections into 1st bounce, 2nd bounce, and higher order bounce components: the epipolar constraint between projector and camera separates the direct and indirect components, and the reconstructed 3D shape together with a normalized Blinn-Phong model separates the 2nd bounce from the higher order bounces. The experimental results indicate that the proposed method can separate the interreflections of real scenes into 1st bounce, 2nd bounce, and higher order bounce components. The paper is organized as follows: related principles are introduced in Section 2, experiments on separating interreflections are reported in Section 3, and conclusions are presented in Section 4.

2. Principles

2.1 Interreflections in complex 3D shapes

The interreflection problem arises from the complex 3D shapes of the detected scenes: concave surfaces and varying incident angles generate several bounces, as shown in Fig. 1(a). In a projector-camera pair, a programmable projector projects fringe patterns onto a static scene, and a digital camera captures the scene under the illuminated patterns. Each projector pixel (a micromirror in a DMD projector) modulates the emitted light intensity and illuminates scene points, and each camera pixel collects light intensity. Ideally, a projector pixel corresponds directly to a camera pixel, but interreflections break this one-to-one correspondence: each scene point is illuminated not only by a projector pixel but also by other scene points. The light intensity received by a camera pixel is thus a superposition of several light rays, only one of which is emitted by a projector pixel and bounces once in the scene; the other rays are reflected by scene points several times even though the same light source is used. In our setup the illumination comes from a projector, and different light rays reach the camera after different numbers of bounces: 1st bounce light is reflected exactly once, 2nd bounce light twice, 3rd bounce light three times, and so on. Figure 1 shows the propagation of interreflections, including 1st, 2nd, and 3rd bounces, together with the main steps of the separation.

Fig. 1. Separation in the presence of interreflections. (a) Light rays emitted from several projector pixels hit the same camera pixel in a concave surface with three flats.
Red, green, and blue light rays denote 1st, 2nd, and 3rd bounce light, respectively, and the yellow light ray denotes the mixed interreflections received by a camera pixel. (b)-(e) show the main steps of separation: (b) light transport coefficients (Section 2.2); (c) separation of 1st bounce light (Section 2.3); (d) and (e) 3D shape model and separation of 2nd bounce light (Section 2.4). The final results are shown in Fig. 8.

The light intensity of each camera pixel may consist of several bounce components:

(1) $${I_{\textrm{out}}}(x,y) = \sum\limits_{m = 0}^{M - 1} {\sum\limits_{n = 0}^{N - 1} {h(x,y;m,n){I_{\textrm{in}}}(m,n)} } + {I_e}(x,y),$$

where ${I_{\textrm{out}}}(x,y)$ and ${I_e}(x,y)$ are the intensity recorded at camera pixel $(x,y)$ and the environmental contribution, respectively, ${I_{\textrm{in}}}(m,n)$ is the outgoing intensity of projector pixel $(m,n)$, $h(x,y;m,n)$ is the light transport coefficient from $(m,n)$ to $(x,y)$, and M and N are the projector resolutions in the horizontal and vertical directions. For an individual pixel $(x,y)$ in the camera array, the received intensity can also be written as

(2) $${I_{\textrm{out}}}(x,y) = {I_1}(x,y) + {I_2}(x,y) + \cdots + {I_n}(x,y),$$

where ${I_n}(x,y)$ records the contribution of light that bounces exactly n times before reaching the camera.

To separate the interreflections in a real scene, we introduce a method based on PSI and a 3D shape model. PSI [18] extends single-pixel imaging [19,20] from a single-pixel detector to a focal plane array camera. The main principle of PSI is summarized in Fig. 2.

Fig. 2. The main principle of PSI. (a) The first step of PSI localizes the visible region by projecting vertical and horizontal patterns; M and N are the original image size, and M_S and N_S are the size of the visible region. (b) Generation of PSI patterns by periodic extension. (c) Reconstruction of light transport coefficients by PSI.

Along a light propagation path in a real scene with concave structures, any surface point imaged by a camera pixel contains a combination of interreflections. Different light rays hit the same position before being captured but follow completely different propagation paths; thus a single camera pixel corresponds to several light sources in the projector, since n-bounce light rays are emitted by different micromirrors on the projector array. In the theory of PSI, each pixel of the camera array is treated as an individual detector, so its visible region, seen from the projector's perspective, can be reconstructed by a single-pixel technique. The reconstructed 2D image contains the distribution of light from the projector and represents the light transport coefficients: it encodes the correspondence between positions $(m,n)$ in the projector and a pixel $(x,y)$ in the camera, and the coefficients together form the common 4D light transport equation [13]. Our decomposition of interreflections is built on this foundation of light transport coefficients.
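As a numerical illustration of Eqs. (1) and (2), the sketch below evaluates the forward model for a single camera pixel. This is a minimal sketch under our own naming; `forward_model` and `mixed_intensity` are hypothetical helpers for demonstration, not the authors' code.

```python
import numpy as np

def forward_model(h, I_in, I_e=0.0):
    """Eq. (1): total intensity received at one camera pixel (x, y).

    h    : (N, M) array of light transport coefficients h(x, y; m, n)
           from every projector pixel (m, n) to this camera pixel.
    I_in : (N, M) array of outgoing projector intensities I_in(m, n).
    I_e  : scalar environmental (ambient) contribution I_e(x, y).
    """
    return np.sum(h * I_in) + I_e

def mixed_intensity(h_by_bounce, I_in):
    """Eq. (2): the captured value is the sum of per-bounce terms,
    given coefficients already split by bounce order."""
    return sum(np.sum(h_n * I_in) for h_n in h_by_bounce)
```

If the per-bounce coefficient images partition the full coefficients, `mixed_intensity` reproduces `forward_model` up to the ambient term, which is the consistency condition behind the separation.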
The separation of interreflections proceeds in three steps. Step 1: light transport coefficients are obtained by PSI, providing the basis for separation. Step 2: the 1st bounce (direct) component and the global component are identified using the epipolar line. Step 3: the 2nd bounce and higher order bounce light are subdivided with a 3D shape model.

2.2 Separating interreflections based on PSI

PSI improves on traditional single-pixel imaging (SI) by expanding a single-pixel (bucket) detector to a focal plane array (FPA) camera. In SI, a single photodetector captures the whole scene; Fourier SI is one of the most successful variants. Generally, the resolution of an image reconstructed by Fourier SI is proportional to the number of illuminated patterns, although numerous methods, such as three-step Fourier single-pixel imaging [21], two-step Fourier single-pixel imaging [22], and low-frequency spectrum acquisition [19], have been proposed over the past decade to accelerate acquisition. Compared with Fourier SI, PSI treats each camera pixel as an individual single-pixel detector that observes only a visible region much smaller than the whole scene. This greatly reduced visible region requires far fewer patterns (about 1% of the number needed by traditional Fourier SI), and the reconstructed image is exactly the light transport coefficients. Because PSI uses a projector-camera pair rather than a single-pixel detector, it accelerates the capture and recovery of light transport coefficients. According to the principle of single-pixel imaging, the signal of each individual pixel can be reconstructed into a 2D image by a single-pixel algorithm, and PSI suits the interreflection problem because of this adaptation to an FPA camera.

In this research, four-step phase-shifting sinusoid patterns are generated and projected by a controllable projector. Their general form is

(3) $$P_\varphi ^{M,N}(m,n;k,l) = a + b \cdot \cos \left[ {2\pi \left( {\frac{{k \cdot m}}{M} + \frac{{l \cdot n}}{N}} \right) + \varphi } \right],$$

where $(m,n)$ is the Cartesian coordinate of a projector pixel, $(k,l)$ is the discrete 2D spatial frequency, $\varphi$ is the initial phase of the pattern, M and N determine the pattern periods, and a and b are the average intensity and the contrast, respectively. When the sinusoid fringe patterns are projected, Eq. (1) becomes

(4) $${I_{\textrm{out}}}(x,y;k,l) = \sum\limits_{m = 0}^{M - 1} {\sum\limits_{n = 0}^{N - 1} {h(x,y;m,n)P_\varphi ^{M,N}(m,n;k,l)} } + {I_e}(x,y).$$

Since ${I_{\textrm{out}}}(x,y;k,l)$ and the pattern intensities $P_\varphi ^{M,N}(m,n;k,l)$ are known after illumination and image acquisition, the only unknowns are the light transport coefficients $h(x,y;m,n)$, because ${I_e}(x,y)$ is cancelled by the four-step phase-shifting algorithm [23,24]. The coefficients $h(x,y;m,n)$ encode the correspondence between camera pixel $(x,y)$ and projector pixel $(m,n)$. PSI makes this process efficient and straightforward because it is adept at calculating the projector-camera correspondence, and the mixed interreflections can then be separated on the light transport coefficients.
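For concreteness, the four-step phase-shifting patterns of Eq. (3) can be generated as in the following sketch. This is a minimal illustration: the 128 × 128 size and the frequency (k, l) = (3, 5) are placeholder values, not parameters from the paper.

```python
import numpy as np

def fourier_pattern(M, N, k, l, phi, a=0.5, b=0.5):
    """Eq. (3): sinusoid fringe pattern at discrete spatial frequency
    (k, l) with initial phase phi, sized N rows x M columns."""
    m = np.arange(M)                  # horizontal projector coordinate
    n = np.arange(N)                  # vertical projector coordinate
    mm, nn = np.meshgrid(m, n)        # full projector pixel grid
    return a + b * np.cos(2 * np.pi * (k * mm / M + l * nn / N) + phi)

# Four-step phase shifting: each frequency is projected at four phases
# so that the ambient term I_e cancels when the four captured images
# are combined, leaving only the Fourier measurement of h (Eq. (4)).
phases = (0.0, np.pi / 2, np.pi, 3 * np.pi / 2)
patterns = [fourier_pattern(128, 128, k=3, l=5, phi=p) for p in phases]
```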
In PSI, Eq. (3) is adjusted to

(5) $$P_\varphi ^{{M_s},{N_s}}(m,n;{k_s},{l_s}) = a + b \cdot \cos \left[ {2\pi \left( {\frac{{{k_s} \cdot m}}{{{M_s}}} + \frac{{{l_s} \cdot n}}{{{N_s}}}} \right) + \varphi } \right],$$

where $({k_s},{l_s})$ are the sampled discrete frequencies and M_s and N_s are the dimensions of the largest visible region among the camera pixels. The sinusoid patterns are generated by periodic extension of the four phase-shifted patterns, and the light transport coefficients are recovered by the inverse discrete Fourier transform (IDFT), as introduced in our previous work [18].

For a real scene, a scene point on a concave surface captured by a camera pixel is often illuminated by several projector pixels; several speckles then appear on the reconstructed image, as shown in Fig. 3(b). When the sinusoid patterns illuminate a convex or flat surface, a single bright speckle appears, as shown in Fig. 3(a).

Fig. 3. Light transport coefficients for a camera pixel $(x,y)$. (a) Result for a convex surface; (b) result for a concave surface. The results are normalized to 0-1.

The three speckles in Fig. 3(b) indicate the distribution of emitted light intensity over the projector array. Interreflections thus shape the intensity distribution, and the decomposition into bounce components is based on the light transport coefficients. For an individual camera pixel $(x,y)$, the received intensity is

(6) $${I_{\textrm{out}}}(x,y) = \sum\limits_{m = 0}^{M - 1} {\sum\limits_{n = 0}^{N - 1} {h(x,y;m,n)} }.$$

The contribution of the n-bounce component is

(7) $${h^n}(x,y;m,n) = h(x,y;m,n) \cdot {M^n}(x,y;m,n),$$

where ${M^n}(x,y;m,n)$ is a mask representing the contribution of the n-bounce component:

(8) $${M^n}(x,y;m,n) = \left\{ \begin{array}{ll} 1 & (m,n) \in {C_n}\\ 0 & (m,n) \notin {C_n} \end{array} \right.,$$

where $C_n$ is the set of projector pixels whose light reaches $(x,y)$ after exactly n bounces. The separated intensity contribution of the n-bounce light is then

(9) $$I_{\textrm{out}}^n = \sum\limits_{m = 0}^{M - 1} {\sum\limits_{n = 0}^{N - 1} {{h^n}(x,y;m,n)} }.$$
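Equations (6)-(9) amount to masking the reconstructed coefficient image and summing, as the short sketch below shows. Variable names are ours, and the `ifft2` remark is an assumption about how the spectrum is inverted, following the IDFT step described above.

```python
import numpy as np

def bounce_intensity(h, bounce_mask):
    """Eqs. (7)-(9): separated intensity of one bounce order at a
    camera pixel (x, y).

    h           : (N, M) light transport coefficients recovered by PSI,
                  e.g. h = np.real(np.fft.ifft2(spectrum)) once the
                  four-step measurements have filled the spectrum.
    bounce_mask : (N, M) boolean array, True where projector pixel
                  (m, n) belongs to the set C_n of this bounce order.
    """
    h_n = h * bounce_mask      # Eq. (7): mask the coefficient image
    return h_n.sum()           # Eq. (9): sum the masked contribution

# Eq. (6): the unmasked sum h.sum() reproduces the full captured
# intensity, so the per-bounce intensities add up to I_out whenever
# the masks partition the projector pixels.
```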
2.3 Separation of 1st bounce light based on the epipolar line

The 1st bounce light differs from the other interreflection components because of the correspondence of the projector-camera pair [25-28]. The epipolar constraint [25,26] is used to distinguish the 1st bounce component from the interreflections, because only 1st bounce light obeys the pixel-to-pixel correspondence between projector and camera. We calculate the epipolar line $L(m,n)$ corresponding to a camera pixel $(x,y)$ and select the pixel of the light transport coefficients nearest to $L(m,n)$ as the representative pixel of the 1st bounce light. The coefficients ${h^1}(x,y;m,n)$ are then formed by setting the non-1st-bounce values to 0, and the 1st bounce image is obtained by summing the 1st bounce components of the light transport coefficients. The flowchart is shown in Fig. 4.

Fig. 4. Process of separating the 1st bounce. (a) IDFT for a camera pixel $(x,y)$; (b) light transport coefficient image and its corresponding epipolar line; (c) identification of the representative 1st bounce point; (d) selection of a 3 × 3 pixel area as the 1st bounce component; (e) summation of the 1st bounce component values.

The determination of the 1st bounce light uses two constraints. On the one hand, we compute the distance ${d_i}$ between each representative point $({m_r},{n_r})$ and the epipolar line $L(m,n)$ and identify the 1st bounce component as the representative point with the minimum ${d_i}$ within a predefined area [Fig. 4(d)]:

(10) $$\begin{aligned}({m_r},{n_r}) &= \arg \min {d_i}\\ \text{s.t.}\; & {d_i} \le \varepsilon \end{aligned},$$

where $\varepsilon$ is a predefined threshold along the M axis of the projector, set to 3 pixels in this research. On the other hand, when the epipolar constraint alone is not decisive, that is, when several speckles satisfy the distance threshold, we select the representative point of the smallest speckle as the 1st bounce point (refer to Appendix 4 of our previous work on PSI [18]); a code sketch of the basic selection follows below.

After the separation of the 1st bounce, the 3D shape of the detected scene is obtained by triangulation [27,28]: the representative 1st bounce point $(m,n)$ is exactly the match of camera pixel $(x,y)$ required by triangulation. The 3D point data are calculated, and the correspondence between 2D pixels and 3D scene points is recorded simultaneously; these data are the basis for subdividing the global illumination in the next section. The 3D reconstruction of the detected scenes is shown in Fig. 1(d).
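The epipolar selection of Eq. (10) can be sketched as follows. This is a minimal sketch under our own conventions: the epipolar line is assumed in implicit form a·m + b·n + c = 0, and the smallest-speckle tie-break from [18] is noted but not implemented.

```python
import numpy as np

def select_first_bounce(points, line, eps=3.0):
    """Eq. (10): pick the representative 1st bounce point among the
    speckle centers of one coefficient image.

    points : (K, 2) array of speckle centers (m_r, n_r) in projector
             coordinates.
    line   : (a, b, c) coefficients of the epipolar line
             a*m + b*n + c = 0 for this camera pixel.
    eps    : distance threshold (3 pixels in this research).

    Returns the index of the chosen point, or None if no speckle
    satisfies the epipolar constraint.
    """
    a, b, c = line
    # Point-to-line distances d_i for every speckle center.
    d = np.abs(a * points[:, 0] + b * points[:, 1] + c) / np.hypot(a, b)
    candidates = np.flatnonzero(d <= eps)
    if candidates.size == 0:
        return None
    # If several speckles satisfy the threshold, the paper falls back
    # to the smallest speckle; here we simply take the minimum distance.
    return candidates[np.argmin(d[candidates])]
```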
2.4 Separation of 2nd bounce light based on a 3D shape model

The separation of 2nd bounce light differs from that of 1st bounce light because no direct correspondence exists between projector and camera pixels. However, 2nd bounce light can be separated with the recovered 3D shape model. Second bounce light means that a ray emitted from a projector pixel is reflected twice by the concave surface before hitting a camera pixel. Figure 5 shows the propagation of 2nd bounce light and the principle of its separation based on the reconstructed 3D model.

Fig. 5. (a) Light propagation paths of interreflections and the separation of 2nd bounce light based on the 3D model. The red line is the path of 1st bounce light, the green line the path of 2nd bounce light, and the blue line the mixed interreflections received. The red and blue sector areas at point Y represent the specular and diffuse lobes, respectively, which depend on the smoothness of the concave surface. (b) Schematic diagram of reflection at point Y; R_i and r_i are the reflected light corresponding to L_o (marked in red), and δ is the difference between R and R_i.

At a scene point Y on the concave surface, the incident light L propagates from the projector pixel $(m,n)$, equivalently from the optical center $C_p$ of the projector, to Y; the light reflected at Y toward scene point X is denoted $R(m,n)$ and can be calculated from the 3D coordinates of X and Y. The normal N is estimated from neighboring 3D points, so the angle of incidence i and the angle of reflection r are

(11) $$i(m,n;x,y) = \arccos \left( {\frac{{L \cdot N}}{{|L||N|}}} \right),$$

(12) $$r(m,n;x,y) = \arccos \left( {\frac{{R \cdot N}}{{|R||N|}}} \right),$$

where $i(m,n;x,y)$ and $r(m,n;x,y)$ are the angles of incidence and reflection for light propagating from projector pixel $(m,n)$ to camera pixel $(x,y)$, $\arccos ( \cdot )$ is the arc cosine, and $|\cdot|$ is the modulus operator.

The captured intensity of camera pixel $(x,y)$ is solved by IDFT, and the light transport coefficients are reconstructed. We set the 1st bounce component of the light transport coefficients to 0 and apply the 2nd bounce separation to the projector pixels with nonzero values. Our separation criterion derives from the normalized Blinn-Phong model [29], an empirical model from computer graphics that combines diffuse and specular terms:

(13) $$\begin{aligned}\delta (m,n;x,y) &= \frac{{{K_L}}}{\pi } + {K_G} \cdot \frac{{8 + s}}{{8\pi }}{\cos ^s}\left( {\frac{{i(m,n;x,y) - r(m,n;x,y)}}{2}} \right)\\ &\quad \delta (m,n;x,y) \le {\theta _d} \end{aligned},$$

where ${K_L}$ and ${K_G}$ are the Lambertian and glossy reflectance, both ranging from 0 to 1; ${K_L}$ controls the color and intensity of matte reflection, and ${K_G}$ controls the glossy reflection [29]. The parameter s describes the smoothness of the concave surface and is larger for metallic materials than for others. Appropriate empirical values are selected for these parameters according to the surface material. When $\delta (m,n;x,y)$ satisfies the threshold ${\theta _d}$ in Eq. (13), the projector pixel $(m,n)$ is related to camera pixel $(x,y)$ by 2nd bounce light propagation. The schematic diagram of these criteria is shown in Fig. 5(b). Once the 2nd bounce light is separated, the higher order bounce light is obtained by setting the 1st and 2nd bounce components to 0. The separated results are presented in the next section.
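The geometric test of Eqs. (11)-(13) can be sketched as follows. The function name and the default parameter values are placeholders chosen by us (the paper selects them empirically per material), and the dot products follow Eqs. (11) and (12) as written.

```python
import numpy as np

def is_second_bounce(X, Y, C_p, N_y, K_L=0.1, K_G=0.6, s=50.0, theta_d=0.5):
    """Eqs. (11)-(13): test whether the path projector -> Y -> X is a
    plausible 2nd bounce, using the reconstructed 3D points.

    X, Y : 3D scene points (numpy vectors of length 3).
    C_p  : projector optical center.
    N_y  : surface normal at Y, estimated from neighboring 3D points.
    K_L, K_G, s, theta_d : empirical material parameters (placeholders).
    """
    L = Y - C_p                       # incident ray at Y
    R = X - Y                         # assumed reflected ray Y -> X
    cos_i = np.dot(L, N_y) / (np.linalg.norm(L) * np.linalg.norm(N_y))
    cos_r = np.dot(R, N_y) / (np.linalg.norm(R) * np.linalg.norm(N_y))
    i = np.arccos(np.clip(cos_i, -1.0, 1.0))   # Eq. (11)
    r = np.arccos(np.clip(cos_r, -1.0, 1.0))   # Eq. (12)
    # Eq. (13): normalized Blinn-Phong response to the angular gap;
    # (i - r)/2 lies in [-pi/2, pi/2], so the cosine is non-negative.
    delta = K_L / np.pi + K_G * (8 + s) / (8 * np.pi) * np.cos((i - r) / 2) ** s
    return delta <= theta_d           # threshold test of Eq. (13)
```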
3. Experiment

The experimental system consists of a monochrome CMOS camera (Basler acA1920-155um) with a resolution of 1920 × 1200 pixels and a high-speed DMD-based projector with a resolution of 1920 × 1080 pixels. To measure interreflections with 2nd and higher order bounce light, we select three objects: a camera metal pad with a concave surface [Fig. 6(a)], an impeller with a complex shape [Fig. 6(b)], and a metal combination with two steel surfaces [Fig. 6(c)].

Fig. 6. Experimental detected scenes. (a) A camera metal pad with a concave surface; (b) an impeller with a complex shape; (c) a metal combination with two steel surfaces.

Interreflections occur most strongly on concave metallic surfaces, so we select three metal parts as the detected scenes. In PSI, the size of the reconstructed image is greatly reduced relative to traditional SI and is determined by the 3D shape and the angle of the concave surface; we select a reconstructed size of 128 × 128 pixels for the detected scenes in Figs. 6(a)-6(c).

The resolution of an image reconstructed by Fourier SI is generally proportional to the number of illuminated patterns; however, exploiting four-step phase shifting and the conjugate symmetry of the Fourier spectrum, the number of fringe patterns reduces to

(14) $${N_{\textrm{total}}} = \frac{{4 \times W \times H}}{2} = \frac{{4 \times 128 \times 128}}{2} = 8192,$$

where ${N_{\textrm{total}}}$ is the number of fringe patterns and W and H are the width and height of the reconstructed image, respectively.

The experiment proceeds as follows. First, the projector projects periodically extended sinusoidal patterns onto the detected scene, and the camera captures the static scene in trigger mode; the number of projected fringes is determined by the reconstructed size, as explained above. Second, the PSI algorithm reconstructs images with a resolution of 128 × 128 pixels and light transport coefficients with a resolution of 1920 × 1080 pixels, equal to the projector resolution (Section 2.2). Third, the epipolar constraint separates the 1st bounce from the global component, and the 1st bounce intensity is obtained by summing the representative area of the 1st bounce point (Section 2.3). Finally, the 2nd bounce light is subdivided using the reconstructed 3D shape model and the 2nd bounce separation method (Section 2.4). The result for camera pixel P1(804,738) is illustrated in Fig. 7(c).

Fig. 7. Comparison of PSI results for different camera pixels of the captured image. (a) Original camera image; (b) captured image with patterns; (c) PSI result for camera pixel (804,738); (d) result for pixel (724,588); (e) result for pixel (705,541). Rows (1)-(4) show the combined light transport coefficients, the 1st bounce component, the 2nd bounce component, and the higher order bounce component, respectively.

To compare the light transport coefficients across the image, we select camera pixels in different regions; their coefficients may contain only the 1st bounce component, or the 1st and 2nd bounce components. The corresponding results are illustrated in Figs. 7(d) and 7(e). Camera pixel P3(705,541) lies on the other side of the metal block [Fig. 7(a)]; that surface faces the incident light, so no ray is reflected onto it, and the separation shows only 1st bounce light at this pixel [Fig. 7(e)]. Camera pixel P2(724,588) also lies on the metal block, but unlike the result in Fig. 7(c), its light transport coefficients contain only two significant speckles [Fig. 7(d)], the 1st and 2nd bounce light; no higher order bounce component exists at this pixel.

The experimental results depicted in Fig. 8 are based on the light transport coefficients of PSI (Section 2.2). The final separation of interreflections is obtained by summing the component values, as shown in Fig. 8.
Fig. 8. Experimental results of separation. Rows (1), (2), and (3) are the three detected static scenes with concave surfaces in Fig. 6; the red rectangle marks the ROI. (a) Original image; (b) captured image with mixed interreflections; (c) reconstructed 3D point cloud; (d) 1st bounce image; (e) 2nd bounce image; (f) higher order bounce image.

Our technique separates interreflections into 1st bounce, 2nd bounce, and higher order bounce light. We select scene (2) in Fig. 8 to analyze the separation results. Scene (2) is a metal impeller whose blades create several complex concave surfaces between neighboring blades. The blades curve counter-clockwise, so the incident light is reflected inward when it hits them. When incident light is emitted from the projector, interreflections occur among the blades, as illustrated in Fig. 9.

Fig. 9. Captured images of scene (2) in Fig. 6. (a) Propagation direction of the incident light; (b) captured images used to analyze the interreflection components; the red rectangle marks the ROI, and the blue and green rays are the incident and reflected light, respectively; (c) final separation results. Rows (1), (2), and (3) show the 1st bounce, 2nd bounce, and higher order bounce components.

In our experiment, the incident light emitted from the projector propagates along the blue rays and hits the detected scene. In Fig. 9(b1), the ROI is a flat surface between two blades, nearly facing the incident rays; light is therefore reflected only once and reaches the camera directly, without mutual bounces within concave surfaces, and the 1st bounce result in Fig. 9(c1) shows this region brighter than the others. In Fig. 9(b2), the ROI contains the blades farthest from the projector; at this incidence angle, light is reflected twice between two blades and then propagates away from the scene. The 2nd bounce result in Fig. 9(c2) shows the red region [nearly the same as in Fig. 9(b2)] brighter than the other regions, confirming this analysis. In Fig. 9(b3), the ROI lies under an impeller blade, an extremely concave region; since the curvature of the blade is large, incident light is reflected several times, and the higher order bounce result in Fig. 9(c3) shows this ROI brighter than the others. The region corresponding to Fig. 9(b2) is nearly dark there because no light bounces exactly twice before reaching those camera pixels.

For a quantitative evaluation of the separation, we mark two horizontal lines in Fig. 8(b2) and compare the fluctuation of optical energy along them, as shown in Fig. 10; the quantitative results confirm the qualitative analysis above. The y-axis of Figs. 10(b) and 10(c) is the ratio of total energy, and the x-axis is the horizontal coordinate. The ratio is calculated as

(15) $$\textrm{Ratio} = \frac{{{E^n}}}{{{E^1} + {E^2} + {E^3}}},$$

where $E^n$ is the optical energy of the n-bounce light, n = 1, 2, 3, computed from the received light intensity.
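The ratio of Eq. (15) can be computed per pixel as in the sketch below; the variable names (`img1`, `img2`, `img3`, `row`) are hypothetical and stand for the separated bounce images and the scan-line coordinate.

```python
import numpy as np

def bounce_ratio(E1, E2, E3):
    """Eq. (15): ratio of each bounce energy to the total energy.

    E1, E2, E3 : arrays of optical energy along a scan line for the
    1st, 2nd, and higher order bounce images, computed from the
    received light intensity.
    """
    total = E1 + E2 + E3
    return E1 / total, E2 / total, E3 / total

# Example usage along one horizontal line of the separated images:
# r1, r2, r3 = bounce_ratio(img1[row], img2[row], img3[row])
```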
Fig. 10. Quantitative analysis of Fig. 8(2). (a) Mixed interreflection image; L_1 and L_2 are the two horizontal lines selected for analysis. (b) and (c) The corresponding fluctuation of optical energy; the y-axis is the ratio of total energy, and the x-axis is the horizontal coordinate.

4. Conclusion

We have demonstrated an efficient technique that enables a projector-camera pair to separate interreflections. We adopt the PSI algorithm, which imposes no additional experimental requirements or assumptions. PSI applies traditional single-pixel imaging to projector-camera pairs, accelerating reconstruction and acquiring complete light transport coefficients. In this research we calculate the light transport coefficients of real static scene points and decompose the 1st and 2nd bounce light components using the epipolar constraint and a 3D light propagation model, respectively. Moreover, we compare the light transport coefficients of different camera pixels and analyze the separation results, which, to our knowledge, has not been investigated before. Although diffusion and subsurface scattering limit the separation for nonmetallic materials, we believe our method offers several benefits for computer graphics and 3D optical measurement, such as building real models for BRDFs and subdividing higher order bounce components, which will be the focus of our future research.

Funding. National Key Research and Development Program of China (2020YFB2010701); National Natural Science Foundation of China (61735003, 61875007); Leading Talents Program for Enterpriser and Innovator of Qingdao (18-1-2-22-zhc); Natural Science Foundation of Guangdong Province (2018A030313797).

Disclosures. The authors declare no conflicts of interest.

Data availability. No data were generated or analyzed in the presented research.

References

1. S. K. Nayar, K. Ikeuchi, and T. Kanade, "Shape from interreflections," Int. J. Comput. Vis. 6(3), 173-195 (1991).
2. T. Machida, N. Yokoya, and H. Takemura, "Surface reflectance modeling of real objects with interreflections," in Proceedings of International Conference on Computer Vision (IEEE, 2003), pp. 170-177.
3. H. Zhao, Y. Xu, H. Jiang, and X. Li, "3D shape measurement in the presence of strong interreflections by epipolar imaging and regional fringe projection," Opt. Express 26(6), 7117-7131 (2018).
4. J. T. Kajiya, "The rendering equation," in Proceedings of ACM SIGGRAPH (ACM, 1986), 20(4), pp. 143-150.
5. S. K. Nayar, G. Krishnan, M. D. Grossberg, and R. Raskar, "Fast separation of direct and global components of a scene using high frequency illumination," ACM Trans. Graph. 25(3), 935-944 (2006).
6. J. Deng, J. Li, H. Feng, S. Ding, Y. Xiao, W. Han, and Z. Zeng, "Efficient intensity-based fringe projection profilometry method resistant to global illumination," Opt. Express 28(24), 36346-36360 (2020).
7. R. Deeb, J. V. Weijer, D. Muselet, M. Hebert, and A. Tremeau, "Deep spectral reflectance and illuminant estimation from self-interreflections," J. Opt. Soc. Am. A 36(1), 105-114 (2019).
8. E. Margallo-Balbás and P. J. French, "Shape based Monte Carlo code for light transport in complex heterogeneous tissues," Opt. Express 15(21), 14086-14098 (2007).
9. K. Happel, E. Dörsam, and P. Urban, "Measuring isotropic subsurface light transport," Opt. Express 22(8), 9048-9062 (2014).
10. M. O'Toole, R. Raskar, and N. K. Kutulakos, "Primal-dual coding to probe light transport," ACM Trans. Graph. 31(4), 1-11 (2012).
11. S. M. Seitz, Y. Matsushita, and K. N. Kutulakos, "A theory of inverse light transport," in Proceedings of International Conference on Computer Vision (IEEE, 2005), pp. 1440-1447.
12. V. Masselus, P. Peers, P. Dutre, and Y. D. Willems, "Relighting with 4D incident light fields," ACM Trans. Graph. 22(3), 613-620 (2003).
13. P. Peers, K. Berge, W. Matusik, R. Ramamoorthi, J. Lawrence, S. Rusinkiewicz, and P. Dutre, "A compact factored representation of heterogeneous subsurface scattering," ACM Trans. Graph. 25(3), 746-753 (2006).
14. I. Ihrke, K. N. Kutulakos, H. P. A. Lensch, M. Magnor, and W. Heidrich, "Transparent and specular object reconstruction," Comput. Graph. Forum 29(8), 2400-2426 (2010).
15. M. Gupta, A. Agrawal, and V. Ashok, "A practical approach to 3D scanning in the presence of interreflections, subsurface scattering and defocus," Int. J. Comput. Vis. 102(1-3), 33-55 (2013).
16. M. Gupta, Y. Tian, S. G. Narasimhan, and L. Zhang, "A combined theory of defocused illumination and global light transport," Int. J. Comput. Vis. 98(2), 146-167 (2012).
17. M. Gupta and S. K. Nayar, "Micro phase shifting," in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2012), pp. 813-820.
18. H. Jiang, Y. Li, H. Zhao, X. Li, and Y. Xu, "Parallel single-pixel imaging: A general method for direct-global separation and 3D shape reconstruction under strong global illumination," Int. J. Comput. Vis. 129(4), 1060-1086 (2021).
19. Z. Zhang, X. Ma, and J. Zhong, "Single-pixel imaging by means of Fourier spectrum acquisition," Nat. Commun. 6(1), 6225 (2015).
20. F. Magalhães, F. M. Araújo, M. V. Correia, M. Abolbashari, and F. Farahi, "Active illumination single-pixel camera based on compressive sensing," Appl. Opt. 50(4), 405-414 (2011).
21. Z. Zhang, X. Wang, G. Zheng, and J. Zhong, "Hadamard single-pixel imaging versus Fourier single-pixel imaging," Opt. Express 25(16), 19619-19639 (2017).
22. H. Deng, X. Gao, M. Ma, P. Yao, Q. Guan, X. Zhong, and J. Zhang, "Fourier single-pixel imaging using fewer illumination patterns," Appl. Phys. Lett. 114(22), 221906 (2019).
23. C. Zuo, S. Feng, L. Huang, T. Tao, W. Yin, and Q. Chen, "Phase shifting algorithms for fringe projection profilometry: A review," Opt. Lasers Eng. 109, 23-59 (2018).
24. Z. Wang, Z. Zhang, N. Gao, Y. Xia, F. Gao, and X. Jiang, "Single-shot 3D shape measurement of discontinuous objects based on a coaxial fringe projection system," Appl. Opt. 58(5), A169-A178 (2019).
25. X. Liu and S. Fang, "Correcting large lens radial distortion using epipolar constraint," Appl. Opt. 53(31), 7355-7361 (2014).
26. P. Zhou, Z. Yang, W. Cai, Y. Yu, and G. Zhou, "Light field calibration and 3D shape measurement based on epipolar-space," Opt. Express 27(7), 10171-10184 (2019).
27. S. Zhang, "Absolute phase retrieval methods for digital fringe projection profilometry: a review," Opt. Lasers Eng. 107, 28-37 (2018).
28. S. Zhang, "High-speed 3D shape measurement with structured light methods: a review," Opt. Lasers Eng. 106, 119-131 (2018).
29. J. F. Hughes, A. van Dam, M. McGuire, D. F. Sklar, J. D. Foley, S. K. Feiner, and K. Akeley, Computer Graphics: Principles and Practice, 3rd ed. (Addison-Wesley, 2014), Chap. 27.