Fraunhofer-Institut für Keramische Technologien und Systeme IKTS
Accurate segmentation of nanomaterials in microscopy images remains a critical challenge, particularly due to the high cost and effort of creating annotated datasets. This bottleneck continues to limit the development and generalization of deep learning segmentation models in materials science. To address this, we introduced DiffRenderGAN [1], a generative adversarial network (GAN) that integrates a differentiable renderer into the adversarial learning pipeline. Starting from 3D mesh models of nanoparticles, the method synthesizes realistic, automatically annotated microscopy images by optimizing rendering parameters such as texture, lighting, and noise directly against unannotated scans. DiffRenderGAN was evaluated on multimodal datasets, including scanning electron microscopy (SEM) and helium ion microscopy (HIM) images of TiO₂ and SiO₂ nanoparticles, as well as multi-beam SEM images of silver nanowires (AgNW). The results demonstrate that segmentation networks trained on synthetic data produced by DiffRenderGAN perform comparably to, or better than, those trained with data from conventional GANs or expert-driven manual rendering [2,3]. Beyond improved segmentation performance, the framework offers a scalable alternative to manual annotation pipelines. These findings highlight the potential of combining differentiable rendering with adversarial learning for efficient, high-quality synthetic data generation in microscopy. Ongoing work explores extending this approach to additional modalities such as transmission electron microscopy (TEM), computed tomography (CT), and atomic force microscopy (AFM), ultimately advancing automated, data-driven insights in nanoscale science.
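To illustrate the core idea, the sketch below shows a minimal, purely conceptual version of adversarially optimized rendering parameters; it is not the published DiffRenderGAN implementation. A toy differentiable "renderer" with a few learnable scalar parameters (lighting, texture contrast, noise level) is trained against a small discriminator so that its renderings of known particle masks mimic unannotated real images. All names and sizes here (ToyDifferentiableRenderer, the random mask and image stand-ins, the network architecture) are hypothetical placeholders; the actual method drives a full differentiable renderer over 3D nanoparticle meshes.

```python
# Conceptual sketch only (assumed, simplified setup): learnable rendering
# parameters are tuned adversarially against unannotated "real" micrographs.
import torch
import torch.nn as nn

IMG = 64  # toy image size

class ToyDifferentiableRenderer(nn.Module):
    """Learnable texture/lighting/noise parameters applied to a binary mask."""
    def __init__(self):
        super().__init__()
        self.lighting = nn.Parameter(torch.tensor(0.5))      # global brightness
        self.texture = nn.Parameter(torch.tensor(0.2))       # texture contrast
        self.noise_level = nn.Parameter(torch.tensor(0.1))   # detector noise

    def forward(self, masks):
        # masks: (B, 1, H, W) particle silhouettes, known by construction
        texture = torch.randn_like(masks) * self.texture
        rendered = masks * (self.lighting + texture)
        rendered = rendered + torch.randn_like(masks) * self.noise_level
        return torch.sigmoid(rendered)

class Discriminator(nn.Module):
    """Small CNN judging real vs. rendered image patches."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(16, 32, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Flatten(), nn.Linear(32 * (IMG // 4) ** 2, 1))

    def forward(self, x):
        return self.net(x)

renderer, disc = ToyDifferentiableRenderer(), Discriminator()
opt_r = torch.optim.Adam(renderer.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(100):
    # Stand-ins for data: random masks and unannotated "real" images.
    masks = (torch.rand(8, 1, IMG, IMG) > 0.7).float()
    real = torch.rand(8, 1, IMG, IMG)

    # Discriminator update: real images vs. current renderings.
    fake = renderer(masks).detach()
    loss_d = bce(disc(real), torch.ones(8, 1)) + bce(disc(fake), torch.zeros(8, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Renderer update: adjust texture/lighting/noise to fool the discriminator.
    fake = renderer(masks)
    loss_r = bce(disc(fake), torch.ones(8, 1))
    opt_r.zero_grad(); loss_r.backward(); opt_r.step()
```

Because the masks that drive the renderer are known exactly, every synthetic image carries a pixel-accurate segmentation label at no additional cost, which is the property that makes this kind of pipeline an alternative to manual annotation.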
References
[1] D. Possart et al., npj Computational Materials, 2025, 11
[2] B. Rühle et al., Scientific Reports, 2021, 11
[3] L. Mill et al., Small Methods, 2021, 5