Self-Supervised AI Model That Learns From the Laws of Physics to Advance Computational Imaging


Recent advances in deep learning have significantly impacted computational imaging, microscopy, and holography-related fields. These technologies have applications in diverse areas such as biomedical imaging, sensing, diagnostics, and 3D displays. Deep learning models have demonstrated remarkable flexibility and effectiveness in tasks like image translation, enhancement, super-resolution, denoising, and virtual staining, and they have been successfully applied across imaging modalities including bright-field and fluorescence microscopy. Deep learning’s integration is reshaping our ability to visualize the intricate world at microscopic scales.

In computational imaging, prevailing techniques predominantly employ supervised learning models, necessitating substantial datasets with annotations or ground-truth experimental images. These models often rely on labeled training data acquired through various methods, such as classical algorithms or registered image pairs from different imaging modalities. However, these approaches have limitations, including the laborious acquisition, alignment, and preprocessing of training images and the potential introduction of inference bias. Despite efforts to address these challenges through unsupervised and self-supervised learning, the dependence on experimental measurements or sample labels persists. While some attempts have used labeled simulated data for training, accurately representing experimental sample distributions remains complex and requires prior knowledge of sample features and imaging setups.

To address these inherent issues, researchers at the UCLA Samueli School of Engineering introduced GedankenNet, a self-supervised learning framework that eliminates the need for labeled or experimental training data; its training images need not bear any resemblance to real-world samples. By training on artificial random images with a physics-consistency objective, GedankenNet overcomes the challenges posed by existing methods and establishes a new paradigm in hologram reconstruction, offering a promising alternative to the supervised learning approaches commonly used in microscopy, holography, and computational imaging.
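To make the idea concrete, the sketch below simulates what such a training loop could look like: random artificial complex fields are numerically propagated (angular spectrum method) to a few hologram planes to create synthetic intensity measurements, and the network’s reconstructed field is then propagated to the same planes so that a physics-consistency loss can compare the two. The wavelength, pixel pitch, propagation distances, loss form, and the `model` placeholder are illustrative assumptions, not the published configuration.

```python
import math
import torch

def angular_spectrum_propagate(field, dz, wavelength, dx):
    """Free-space propagation of a complex field over distance dz (angular spectrum method)."""
    n, m = field.shape[-2:]
    fy = torch.fft.fftfreq(n, d=dx)
    fx = torch.fft.fftfreq(m, d=dx)
    FY, FX = torch.meshgrid(fy, fx, indexing="ij")
    arg = (1.0 / wavelength) ** 2 - FX**2 - FY**2            # (1/lambda)^2 - fx^2 - fy^2
    kz = 2 * math.pi * torch.sqrt(torch.clamp(arg, min=0.0))
    H = torch.exp(1j * dz * kz)
    H = torch.where(arg > 0, H, torch.zeros_like(H))         # drop evanescent components
    return torch.fft.ifft2(torch.fft.fft2(field) * H)

# Illustrative (not published) acquisition parameters: two hologram planes,
# 530 nm illumination, 1.85 um pixel pitch.
WAVELENGTH, DX = 530e-9, 1.85e-6
Z_PLANES = [300e-6, 450e-6]

def synthesize_training_batch(batch_size, size=256):
    """Artificial random complex fields -> simulated in-line hologram intensities."""
    amp = torch.rand(batch_size, size, size)
    phase = 2 * math.pi * torch.rand(batch_size, size, size)
    obj = amp * torch.exp(1j * phase)                        # random 'object' field, no real samples
    holos = torch.stack(
        [angular_spectrum_propagate(obj, z, WAVELENGTH, DX).abs() ** 2 for z in Z_PLANES],
        dim=1,
    )
    return holos, obj

def physics_consistency_loss(pred_field, measured_holos):
    """Propagate the predicted object field to each hologram plane and compare
    the resulting intensities with the input (synthetic) holograms."""
    loss = 0.0
    for i, z in enumerate(Z_PLANES):
        re_holo = angular_spectrum_propagate(pred_field, z, WAVELENGTH, DX).abs() ** 2
        loss = loss + torch.mean((re_holo - measured_holos[:, i]) ** 2)
    return loss / len(Z_PLANES)

# Training-step sketch, assuming `model` maps the hologram stack to a complex field:
#   holos, _ = synthesize_training_batch(4)
#   loss = physics_consistency_loss(model(holos), holos)
#   loss.backward(); optimizer.step()
```

Because the “measurements” themselves are synthesized on the fly from random images, no experimental data or labels ever enter this loop; the only supervision signal is agreement with free-space wave propagation.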

GedankenNet’s architecture comprises a series of spatial Fourier transform (SPAF) blocks interconnected by residual connections, which capture both spatial- and frequency-domain information. By incorporating a physics-consistency loss function, the model enforces adherence to the wave equation during hologram reconstruction, yielding physically accurate complex-field outputs. This training strategy enables GedankenNet to generalize well to both synthetic and experimental holograms, even when confronted with unseen samples, axial defocus, and variations in illumination wavelength.
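A simplified PyTorch rendition of such a block is sketched below: feature maps are filtered with a learnable complex-valued kernel in the Fourier domain, combined with a standard spatial convolution, and wrapped in a residual connection. The channel count, activation, and filter parameterization here are assumptions for illustration, not the exact published SPAF design.

```python
import torch
import torch.nn as nn

class SPAFBlock(nn.Module):
    """Simplified SPAF-style block: learnable filtering in the Fourier domain
    plus a spatial 3x3 convolution, joined through a residual connection.
    (Illustrative approximation, not the exact published design.)"""

    def __init__(self, channels: int, size: int):
        super().__init__()
        # One learnable complex filter per channel over the full 2D spectrum.
        self.freq_filter = nn.Parameter(
            0.02 * torch.randn(channels, size, size, dtype=torch.cfloat)
        )
        self.spatial_conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, size, size), real-valued feature maps
        spectrum = torch.fft.fft2(x)                             # frequency-domain view
        freq_path = torch.fft.ifft2(spectrum * self.freq_filter).real
        return x + self.act(freq_path + self.spatial_conv(x))   # residual connection

# Usage sketch: a backbone could stack several such blocks and end with a
# convolution producing two channels (real and imaginary parts of the field).
# block = SPAFBlock(channels=32, size=256)
# y = block(torch.randn(1, 32, 256, 256))
```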

Figure: (a) Traditional iterative hologram reconstruction techniques, the self-supervised deep neural network GedankenNet, and pre-existing supervised deep neural networks. (b) The self-supervised training process of GedankenNet for hologram reconstruction.

Performance evaluations demonstrate GedankenNet’s proficiency in hologram reconstruction. On quantitative metrics such as the structural similarity index (SSIM), root mean square error (RMSE), and the enhanced correlation coefficient (ECC), GedankenNet consistently outperforms traditional supervised techniques across a diverse set of holograms. Notably, its physics-consistency loss effectively suppresses non-physical artifacts, resulting in sharper and more accurate reconstructions. The model’s compatibility with the wave equation further enhances its performance, allowing it to recover high-quality object fields from defocused holograms through correct wave propagation. These findings underscore GedankenNet’s strong external generalization, enabling it to handle novel experimental data and phase-only samples with high fidelity.
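For readers who want to run a similar evaluation on their own reconstructions, the helper below computes the three kinds of metrics mentioned above; note that ECC is approximated here as a normalized correlation between complex fields, an assumed definition rather than the paper’s exact formulation.

```python
import numpy as np
from skimage.metrics import structural_similarity

def rmse(pred: np.ndarray, ref: np.ndarray) -> float:
    """Root mean square error between two amplitude images."""
    return float(np.sqrt(np.mean((pred - ref) ** 2)))

def complex_correlation(pred: np.ndarray, ref: np.ndarray) -> float:
    """Normalized correlation between two complex fields, used here as a
    stand-in for the ECC metric (assumed definition)."""
    num = np.abs(np.vdot(pred, ref))
    den = np.linalg.norm(pred) * np.linalg.norm(ref)
    return float(num / den)

def evaluate_reconstruction(pred_field: np.ndarray, ref_field: np.ndarray) -> dict:
    """Compare a reconstructed complex field against a reference field."""
    pred_amp, ref_amp = np.abs(pred_field), np.abs(ref_field)
    return {
        "SSIM": structural_similarity(
            pred_amp, ref_amp, data_range=ref_amp.max() - ref_amp.min()
        ),
        "RMSE": rmse(pred_amp, ref_amp),
        "ECC": complex_correlation(pred_field, ref_field),
    }
```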

Overall, the UCLA research team’s GedankenNet represents a pioneering stride in computational imaging and microscopy. By embracing the power of self-supervised learning and thought experiments grounded in physics, GedankenNet offers a fresh approach to training neural network models. This method not only overcomes the limitations of current supervised learning techniques but also provides a pathway to more versatile, physics-compatible, and easily trainable deep learning models for a wide range of computational imaging tasks. This breakthrough could significantly accelerate advances in microscopy, fostering broader applications and deeper insights into the microscopic world.