OSA | Gradient-assisted focusing light through scattering media
Focusing light through a scattering medium is fundamentally challenging due to optical scattering. To address this challenge, researchers in the wavefront shaping (WFS) community have shown that by optimizing the wavefront incident on the scattering medium, an optical focus can be achieved at a target location behind the sample [1–6]. A variety of strategies have been developed to realize this process, including transmission matrix measurement [7–9], optical phase conjugation [2,5], and feedback-based WFS [6,10–13]. Among these, feedback-based WFS can use non-optical feedback such as the photoacoustic signal, whereas phase conjugation needs to detect light in order to focus light [5,6]. Also, compared to the transmission matrix measurement method, feedback-based WFS is relatively fast, and the optical focus improves along with the optimization process. Feedback-based WFS adjusts the incident wavefront based on a feedback signal, such as the light intensity acquired at the target location behind the scattering medium. A number of algorithms have been proposed to improve the speed, such as the continuous sequential algorithm, the partitioning algorithm, and the genetic algorithm (GA) [10,14,15]. Among these, the GA is considered the state of the art for feedback-based WFS and has been widely applied in various studies [6,16,17]. The GA generates random populations of phase masks and iteratively optimizes the population based on feedback intensity values, so that the focus at the target location improves. However, the GA acts to some extent in a "half-blind" fashion that relies on random variations of the population, limiting the optimization speed and the achievable peak-to-background ratio (PBR). On the other hand, gradient methods, such as gradient descent and Newton's method, have been widely used for numerical optimization in other scientific areas [18–23].
For example, gradient descent is an optimization algorithm that minimizes an objective function by iteratively moving the function parameters in the direction of steepest descent, as defined by the negative of the gradient [18–22]. In the context of focusing light through scattering media, as illustrated in Fig. 1(a), by optimizing the phase values of the spatial light modulator (SLM) segments based on the light intensity feedback (e.g., using the GA), a sharp focus can be generated at the target location. Unfortunately, the use of gradient descent for WFS has so far been hindered by the fact that the underlying function is unknown, so no explicit form of the gradient is available. Fig. 1. Illustration of wavefront shaping and flow chart of the gradient-assisted optimization. (a) Basic principle of feedback-based WFS. (b) Flow chart of the proposed gradient-assisted method for WFS. To tackle these issues, we developed a gradient-assisted strategy for feedback-based WFS, which takes advantage of gradient information and optimizes the incident phase masks in a gradient-directed manner. The formulation of the proposed method is described in detail below and summarized in Fig. 1(b). Without knowing the system matrix, we first model the light intensity at the target location behind the scattering medium as a fitness function $f(x)$ of the SLM phase segments, where the variable $x$ is a $d$-dimensional vector corresponding to the phase segments to be optimized (i.e., $x \in \mathbb{R}^d$). Since the explicit formulation of the objective function is not available, it is hard to obtain the closed-form gradient at each point along the optimization. To address this issue, we use a batch of search points to capture the local structure of the fitness function and estimate the gradient information.
Specifically, at each optimization step, we randomly produce a population of search points from a Gaussian distribution with mean ${\boldsymbol \mu} \in \mathbb{R}^d$ (i.e., the center optimization point) and covariance matrix ${\boldsymbol \Sigma} \in \mathbb{R}^{d \times d}$. We use $\theta$ to denote these parameters: $\theta = \{{\boldsymbol \mu}, {\boldsymbol \Sigma}\}$. At each optimization step, the population of samples, denoted as ${\boldsymbol z}$, is drawn from the above Gaussian distribution, ${\boldsymbol z} \sim {\cal N}({\boldsymbol \mu}, {\boldsymbol \Sigma})$, whose probability density function is denoted as $\pi({\boldsymbol z} \mid \theta)$. If we use $f({\boldsymbol z})$ to denote the fitness function for samples ${\boldsymbol z}$, we can write the expected fitness under the above search distribution as (1) $$J(\theta) = \mathbb{E}_\theta\left[ f({\boldsymbol z}) \right] = \int f({\boldsymbol z})\, \pi({\boldsymbol z} \mid \theta)\, {\rm d}{\boldsymbol z}.$$ Given a batch of samples ${\boldsymbol z}_1, \ldots, {\boldsymbol z}_N$, the gradient of the expected fitness $J(\theta)$ can be estimated with (2) $$\nabla_\theta J(\theta) \approx \frac{1}{N} \sum_{k = 1}^{N} f({\boldsymbol z}_k)\, \nabla_\theta \log \pi({\boldsymbol z}_k \mid \theta),$$ where $N$ is the population size used in each iteration. This gradient of the expected fitness provides a search direction in the space of search distributions. One can, of course, apply plain gradient ascent on the search distribution to optimize the phase segments. However, such plain gradient optimization is known to be unstable and can lead to oscillations as well as premature convergence. Therefore, we turn to the natural gradient (NG), which has multiple advantages over the plain gradient [24–26].
For example, it helps mitigate the slow convergence of the plain gradient in optimization landscapes with ridges and plateaus. In addition, it renormalizes the update with respect to uncertainty, which prevents undesired effects such as oscillations and premature convergence. Since the NG performs optimization in the distribution space, it can be formalized as the solution to the constrained optimization (3) $$\begin{split}&\mathop{{\rm argmax}}\limits_{\delta\theta} J(\theta + \delta\theta) \approx \mathop{{\rm argmax}}\limits_{\delta\theta} J(\theta) + \delta\theta^T \nabla_\theta J,\\ &{\rm s.t.}\; {\rm KL}(\theta + \delta\theta \,\|\, \theta) = \varepsilon,\end{split}$$ where $J(\theta)$ is the expected fitness and $\varepsilon$ is a small distance as measured by the Kullback–Leibler (KL) divergence. For a small $\delta\theta$, the KL divergence constraint can be expressed by the Fisher information matrix (denoted as ${\textbf F}$): (4) $${\rm KL}(\theta + \delta\theta \,\|\, \theta) \approx \frac{1}{2} \delta\theta^T {\textbf F}(\theta)\, \delta\theta,$$ where ${\textbf F}$ can be estimated from the search population, (5) $${\textbf F}(\theta) \approx \frac{1}{N} \sum_{k = 1}^{N} \nabla_\theta \log \pi({\boldsymbol z}_k \mid \theta)\, \nabla_\theta \log \pi({\boldsymbol z}_k \mid \theta)^T.$$ With the Fisher information matrix inverted (denoted as ${\textbf F}^{-1}$) [27], the NG (denoted as $\tilde{\nabla}_\theta J$) is given by (6) $$\tilde{\nabla}_\theta J = {\textbf F}^{-1} \nabla_\theta J(\theta).$$ Next, we validate the proposed NG method with both simulations and experiments and compare its performance with the GA by numerical simulations. In each optimization step, a batch size of 50 was used for both the NG and the GA. The stop criterion is met when a maximum of 300 optimization steps is completed.
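The update rule in Eqs. (2)–(6) can be sketched in a few lines of code. The snippet below is our own minimal illustration, not the authors' implementation: it fixes the covariance to an isotropic $\sigma^2 I$ and adapts only the mean, in which case the Fisher matrix for the mean is $I/\sigma^2$ and the natural gradient reduces to $\sigma^2 \nabla_{\boldsymbol \mu} J$. Standardizing the fitness values (fitness shaping) is a common stabilization trick and an assumption on our part.

```python
import numpy as np

def ng_step(mu, sigma, fitness, n_samples=50, lr=0.1, rng=None):
    """One natural-gradient update of the search mean (cf. Eqs. (2), (5), (6)).

    Simplification: covariance fixed to sigma^2 * I, only the mean is adapted.
    """
    rng = np.random.default_rng() if rng is None else rng
    d = mu.size
    # Draw the search population z ~ N(mu, sigma^2 I)
    z = mu + sigma * rng.standard_normal((n_samples, d))
    f = np.array([fitness(zk) for zk in z])
    f = (f - f.mean()) / (f.std() + 1e-12)      # fitness shaping (assumed)
    # Score function: grad_mu log pi(z | theta) = (z - mu) / sigma^2
    grad_log = (z - mu) / sigma**2
    grad_J = grad_log.T @ f / n_samples          # Monte Carlo estimate, Eq. (2)
    # For an isotropic Gaussian, F = I / sigma^2, so F^{-1} grad = sigma^2 * grad
    natural_grad = sigma**2 * grad_J             # Eqs. (5)-(6)
    return mu + lr * natural_grad

# Toy usage: maximize a quadratic fitness with optimum at x = 1 in 16 dimensions
rng = np.random.default_rng(0)
quad = lambda x: -np.sum((x - 1.0) ** 2)
mu = np.zeros(16)
for _ in range(200):
    mu = ng_step(mu, sigma=0.3, fitness=quad, rng=rng)
```

In a WFS setting, `fitness` would be replaced by the measured feedback intensity and `mu` by the SLM phase vector; the toy quadratic merely verifies that the update climbs the fitness landscape.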
The SLM phase elements are divided into segments to introduce individual phase delays. The simulations were repeated 100 times to reduce randomness in the results. The GA was implemented with an optimization bound of $[0, 2\pi]$, and Gaussian noise with zero mean and 3% standard deviation was used. Figure 2(a) shows the PBR improvement over optimization steps with the NG method for different numbers of phase segments. It shows that the achievable PBR increases with the number of phase segments used. Figures 2(b) and 2(c) compare the NG with the GA under $32 \times 32$ and $64 \times 64$ segments, respectively. In both cases, the NG achieves a significantly higher PBR. In addition, Fig. 2(d) shows the time cost comparison of the two methods for different numbers of segments. It shows that the proposed method consistently uses less computation time and is on average over 5 times faster at conducting the same number of optimization steps. Fig. 2. Comparison of the proposed natural gradient (NG) and the genetic algorithm (GA) methods in simulations. We then validate the proposed NG method in experiment. The experimental system is shown in Fig. 3(a). A 532 nm laser (Verdi V5, Coherent, Inc.) is used as the light source. The laser beam is expanded and collimated by subsequent lenses. The collimated beam is then sent to the SLM (Pluto-2-VIS, Holoeye, Corp.) for phase modulation. The phase-modulated beam is then sent through the scattering medium, which is a glass diffuser (DG10-120, Thorlabs, Inc.). The scattering pattern is then imaged by the objective lens, and a photomultiplier tube (PMT, H10721-20, Hamamatsu) is placed at the target location to provide feedback for the optical focus optimization. A beam splitter is placed between the objective lens and the PMT. A camera (PCO.edge 5.5, PCO, Corp.) is placed at the PMT's conjugate location to evaluate the PBR of the generated optical focus.
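For readers who wish to reproduce a simulation of this kind, the feedback intensity at the target can be modeled with one row of a random complex transmission matrix. The sketch below uses the standard circular-Gaussian speckle idealization; this model and the toy segment count are our own assumptions, not necessarily the exact simulation used here.

```python
import numpy as np

rng = np.random.default_rng(1)
n_seg = 16 * 16   # toy number of SLM phase segments

# Field contribution of each segment at the target speckle grain:
# one row of a random complex transmission matrix (circular-Gaussian statistics).
t = (rng.standard_normal(n_seg) + 1j * rng.standard_normal(n_seg)) / np.sqrt(2)

def target_intensity(phase):
    """Feedback signal: light intensity at the target behind the medium."""
    return np.abs(np.sum(t * np.exp(1j * phase))) ** 2

flat = target_intensity(np.zeros(n_seg))   # unshaped speckle intensity
ideal = target_intensity(-np.angle(t))     # phases conjugated to the medium
# With full phase control of N segments, the ideal enhancement over the mean
# speckle intensity approaches (pi/4)(N - 1) + 1.
```

Feeding `target_intensity` to any feedback-based optimizer (GA or the NG update) reproduces the focusing problem in silico; measurement noise can be added to the returned intensity to mimic the 3% Gaussian noise used in the simulations.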
The background intensity was calculated by averaging the image taken by the camera placed at the conjugate position of the PMT prior to optimization. Fig. 3. Experimental validation of the natural gradient method for focusing light through a scattering medium. (a) Experimental system configuration. (b) PBR over optimization steps. (c) Scattering speckles with the unshaped incident beam before optimization (left panel) and the sharp focus generated with the optimized incident beam (right panel). The scale bar represents 500 µm in both panels of (c). With the gradient method, the improvement of the PBR over optimization steps is shown in Fig. 3(b). A total of $128 \times 128$ segments were used to focus light through the scattering medium. At the end of the optimization, a PBR of approximately 800 is achieved by the proposed method. The left panel of Fig. 3(c) shows the scattering speckles behind the diffuser with an unshaped incident beam, while the right panel shows the optical focus generated behind the diffuser with the optimized incident phase, imaged by the camera at the end of the optimization. It can be seen that a sharp focus is formed and the background scattering pattern is barely visible. In addition, the line profiles vertically across the center of the focus are also plotted in Fig. 3(c). As shown in the right panel of Fig. 3(c), with the optimized incident phase, a sharp peak is formed, and little fluctuation is visible in the background. We further compare the proposed method with the widely used GA method. Specifically, $64 \times 64$ SLM phase segments were optimized by each method, and the experiments were repeated 5 times. Figures 4(a)–4(c) show representative images from the two methods after different numbers of optimization steps. Specifically, Fig. 4(a) shows images behind the diffuser after five optimization steps, as well as line profiles vertically across the target focus location.
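As a concrete illustration of the metric, the PBR can be computed from the camera images as the intensity at the target pixel divided by the mean of the pre-optimization background image, following the background definition above. The helper below is a hypothetical sketch of our own, not code from this work.

```python
import numpy as np

def compute_pbr(focus_img, background_img, peak_yx):
    """PBR = intensity at the target pixel / mean pre-optimization background."""
    y, x = peak_yx
    return focus_img[y, x] / background_img.mean()

# Toy example: a uniform unit background and a focus 300x brighter at the center
background = np.ones((64, 64))
shaped = background.copy()
shaped[32, 32] = 300.0
pbr = compute_pbr(shaped, background, (32, 32))   # → 300.0
```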
It can be seen that after five optimization iterations, the proposed NG method obtained over $2\times$ enhancement compared to the GA. As shown in Fig. 4(b), after 50 optimization steps, the NG obtained an over $4\times$ higher PBR compared to the GA. The optimization process of up to 150 steps is shown in Visualization 1. As shown in Fig. 4(c), at the end of the optimization with the GA, the achieved PBR was approximately 50, and the background speckles were clearly visible near the optical focus. In contrast, with the proposed NG method, a bright focus and a PBR over 400 were achieved (approximately $8\times$ higher PBR compared to the GA), and the background speckles were barely visible. Fig. 4. Experimental comparison of the genetic algorithm and natural gradient methods for focusing light through a scattering medium. (a)–(c) Representative images behind the diffuser after 5, 50, and 300 optimization steps, respectively. (d) Average PBR over iteration steps of the genetic algorithm (GA) and natural gradient (NG) methods. (e) Steps required to achieve the same PBR levels. The average PBR over iteration steps of the two methods is compared in Fig. 4(d), where the blue curve represents PBR values obtained with the GA method and the red curve represents the PBR from the NG method. It shows that the NG method achieved a PBR of 400, whereas the PBR of the GA was only approximately 50. The PBR also improved faster with the NG than with the GA. The number of steps required to achieve the same PBR levels by the two methods is compared in Fig. 4(e). It shows that with the NG method, the required optimization steps increased linearly to achieve different PBRs (e.g., from 10 to 100). In contrast, with the GA, the required optimization steps increased exponentially to obtain higher PBRs.
For example, to obtain a PBR of 70, the GA required approximately 250 optimization steps, whereas the NG method took only 20 steps, over $12\times$ faster than the GA method. The performance of the gradient method with different numbers of segments is compared in Fig. 5. The experiments were repeated 5 times to reduce random effects, and the average PBRs over optimization steps are visualized in Fig. 5(a). The blue, green, and red curves represent experimental data for segment sizes of $64 \times 64$, $128 \times 128$, and $256 \times 256$, respectively. With $64 \times 64$ SLM phase segments, the PBR reached 400 and then plateaued after 150 optimization steps. With $128 \times 128$ segments, the PBR reached approximately 800 before plateauing. Furthermore, with $256 \times 256$ segments, a PBR of approximately 1000 was achieved at the end of the optimization. Figure 5(b) visualizes the number of steps required to reach different PBR levels, where the colored dots represent results from individual experiments, and the dashed lines represent linear fits of the points from each segment size to aid visualization. It shows that a similar number of optimization steps was used to reach PBR levels under 200. In addition, for PBR levels over 200, with a larger number of phase segments (e.g., $256 \times 256$), the PBR improvement became faster and required fewer optimization steps to obtain the same PBR levels. Fig. 5. Experimental comparison of the natural gradient method with different segment sizes. (a) Average PBRs over optimization steps. (b) Number of steps required to reach different PBR levels. To summarize, in this paper we proposed a gradient-assisted strategy for focusing light through a scattering medium. To conduct the same number of optimization steps, the proposed NG method was over $5\times$ faster than the GA.
On the other hand, to achieve the same PBR level, the NG was $12\times$ faster in terms of required optimization steps. Combined, the NG method was over $60\times$ faster at obtaining the same PBR level compared with the state-of-the-art method (GA). This is related to the fact that the GA involves perturbation operations such as crossover and mutation, whereas the gradient method performs updates obtained directly from matrix calculations. Furthermore, with the same number of phase segments and optimization steps (e.g., $64 \times 64$), the proposed method significantly outperforms the GA and achieves an over $8\times$ higher enhancement as quantified by the PBR. This demonstrates that the proposed method is better able to optimize the phase mask for focusing light through the scattering medium. In addition, we observed in experiment that the GA optimization worked robustly on $64 \times 64$ segments but tended to be unstable when the segment size reached $128 \times 128$, in which case a PBR of 50 was obtained. In contrast, the NG reliably optimized phase segments of sizes as large as $256 \times 256$ and achieved a PBR of 1000, which is approximately $20\times$ higher than that of the GA. This observation is consistent with the literature on GA-based WFS, where the size of the phase masks was mostly from $16 \times 16$ to $64 \times 64$ [11,12,28–32]. In addition, as a proof of concept, we experimentally demonstrated focusing through a scattering medium in transmission, with a PMT placed behind the medium for feedback. Note that it is also feasible to use the NG method to focus through scattering media in a reflective geometry [6]. Going forward, the proposed method can be applied to scenarios where the GA was previously used for WFS, to achieve faster speed, lower computational cost, and higher PBR.
For example, the NG method can be used to facilitate focusing inside scattering media with photoacoustic feedback [6]. It can also be used to harness a multi-dimensional fiber laser by optimizing or customizing important features such as output power, mode profile, optical spectrum, and mode-locking operation [33]. Furthermore, while the experimental demonstration was conducted with an SLM, the proposed method can also be adapted to a digital micromirror device (DMD), which has a faster modulation rate. In addition, the computations can be implemented with onboard processing, which can further reduce the overall time cost for WFS. Funding. National Natural Science Foundation of China (62005007); Foundation of National Facility for Translational Medicine (Shanghai) (TMSK-2020-129); Shanghai Municipal of Science and Technology Project (20JC1419500); Shanghai Pujiang Program (20PJ1408700). Disclosures. The authors declare no conflicts of interest. REFERENCES 1. A. P. Mosk, A. Lagendijk, G. Lerosey, and M. Fink, Nat. Photon. 6, 283 (2012). 2. Y. Liu, P. Lai, C. Ma, X. Xu, A. A. Grabar, and L. V. Wang, Nat. Commun. 6, 5904 (2015). 3. R. Horstmeyer, H. Ruan, and C. Yang, Nat. Photon. 9, 563 (2015). 4. C. Ma, X. Xu, Y. Liu, and L. V. Wang, Nat. Photon. 8, 931 (2014). 5. J. Yang, L. Li, A. A. Shemetov, S. Lee, Y. Zhao, Y. Liu, Y. Shen, J. Li, Y. Oka, V. V. Verkhusha, and L. V. Wang, Sci. Adv. 5, eaay1211 (2019). 6. P. Lai, L. Wang, J. W. Tay, and L. V. Wang, Nat. Photon. 9, 126 (2015). 7. J. Yoon, K. Lee, J. Park, and Y. Park, Opt. Express 23, 10158 (2015). 8. S. Popoff, G. Lerosey, M. Fink, A. C. Boccara, and S. Gigan, Nat. Commun. 1, 6 (2010). 9. D. B. Conkey, A. N. Brown, A. M. Caravaca-Aguirre, and R. Piestun, Opt. Express 20, 4840 (2012). 10. D. B. Conkey, A. M. Caravaca-Aguirre, and R. Piestun, Opt. Express 20, 1733 (2012).
11. L. Wan, Z. Chen, H. Huang, and J. Pu, Appl. Phys. B 122, 204 (2016). 12. Y. Guan, O. Katz, E. Small, J. Zhou, and Y. Silberberg, Opt. Lett. 37, 4663 (2012). 13. S. M. Popoff, G. Lerosey, R. Carminati, M. Fink, A. C. Boccara, and S. Gigan, Phys. Rev. Lett. 104, 100601 (2010). 14. I. M. Vellekoop and A. P. Mosk, Opt. Lett. 32, 2309 (2007). 15. I. M. Vellekoop and A. P. Mosk, Opt. Commun. 281, 3071 (2008). 16. I. M. Vellekoop, Opt. Express 23, 12189 (2015). 17. D. B. Conkey, A. M. Caravaca-Aguirre, J. D. Dove, H. Ju, T. W. Murray, and R. Piestun, Nat. Commun. 6, 8380 (2015). 18. Y. Lecun, Y. Bengio, and G. Hinton, Nature 521, 436 (2015). 19. K. He, X. Zhang, S. Ren, and J. Sun, in IEEE Conference on Computer Vision and Pattern Recognition (2016), pp. 770–778. 20. Y. Zhao, Y. Deng, F. Bao, H. Peterson, R. Istfan, and D. Roblyer, Opt. Lett. 43, 5669 (2018). 21. Y. Zhao, M. B. Applegate, R. Istfan, A. Pande, and D. Roblyer, Biomed. Opt. Express 9, 5997 (2018). 22. Y. Deng, Y. Zhao, Y. Liu, and Q. Dai, PLoS One 8, e63385 (2013). 23. Y. Zhao, S. Tabassum, S. Piracha, M. S. Nandhu, M. Viapiano, and D. Roblyer, Biomed. Opt. Express 7, 2373 (2016). 24. S. I. Amari, Neural Comput. 10, 251 (1998). 25. S. Amari and S. C. Douglas, in Proceedings of the 1998 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP'98 (1998), Vol. 2, pp. 1213–1216. 26. D. Wierstra, T. Schaul, T. Glasmachers, Y. Sun, J. Peters, and J. Schmidhuber, J. Mach. Learn. Res. 15, 949 (2014). 27. M. Bladt, L. J. R. Esparza, and B. F. Nielsen, J. Appl. Probab. 48, 277 (2011). 28. O. Tzang, A. M. Caravaca-Aguirre, K. Wagner, and R. Piestun, Nat. Photon. 12, 368 (2018). 29. Q. Feng, B. Zhang, Z. Liu, C. Lin, and Y. Ding, Appl. Opt. 56, 3240 (2017). 30. X. L. Deán-Ben, H. Estrada, and D. Razansky, Opt. Lett. 40, 443 (2015). 31. J. W. Tay, P. Lai, Y. Suzuki, and L. V. Wang, Sci. Rep. 4, 3918 (2014). 32. A. Daniel, L. Liberman, and Y. Silberberg, Optica 3, 1104 (2016). 33. X. Wei, J. C. Jing, Y. Shen, and L. V. Wang, Light Sci. Appl. 9, 1 (2020).