SUPR
DiffraNet: 3D reconstruction from unoriented 2D diffraction patterns
Dnr:

NAISS 2024/22-377

Type:

NAISS Small Compute

Principal Investigator:

Huaiyu Chen

Affiliation:

Lunds universitet

Start Date:

2024-03-13

End Date:

2025-04-01

Primary Classification:

10304: Condensed Matter Physics

Webpage:

Allocation

Abstract

Bragg Coherent X-ray Diffraction Imaging (BCDI) has emerged as a promising tool for imaging three-dimensional (3D) strain distributions and revealing the internal structure of nanocrystalline objects. However, any angular distortion during a BCDI measurement leads to artifacts in the subsequent phase-retrieval reconstruction. Our project, “DiffraNet: 3D reconstruction from unoriented 2D diffraction patterns”, financed by NanoLund and the Swedish e-Science collaboration eSSENCE, introduces an AI-driven methodology for accurate 3D diffraction-volume reconstruction from 2D diffraction data, overcoming these orientation ambiguities. It provides a way to use experimental data that would otherwise be discarded, which opens the possibility of applying BCDI under more challenging conditions. We believe our project can help researchers fully exploit the potential of 4th-generation synchrotron X-ray sources.

A series of BCDI beamtime experiments on a 60 nm gold particle has been performed at the NanoMAX beamline of the MAX IV synchrotron. Self-rotation of the particle can be observed in the data, which limits the quality of the phase-retrieval reconstruction. The traditional method exploits the redundancy of the over-sampled dataset: it uses classical likelihood maximization to assemble a 3D diffraction volume from the individual slices of the dataset, and iteratively updates the volume with an additional support constraint. Finding a good set of parameters that define the volume is vital in this method, and it can be very time-consuming. In this context, our project leverages the power of artificial intelligence to bridge the gap between the limitations of the BCDI technique and the demand for precise 3D reconstruction. At the core of our project is a ResNet-based Convolutional Neural Network (CNN) that first identifies the relative angular positions of the input diffraction frames.
Identifying the angular positions not only mitigates the orientation uncertainty but also sets the stage for the next phase of our methodology: leveraging the determined angular positions, we employ an encoder-decoder latent network to reconstruct the 3D diffraction volume from the input 2D diffraction frames. Furthermore, the angular position information provides a novel reference for traditional interpolation, offering a benchmark against which our reconstruction model can be compared.

Currently, our research focuses on training the network on simulated data. In simpler scenarios, such as those without noise and with uniform strain, our results have been promising. However, to ensure the applicability and robustness of our methodology, we recognize the need to expand our training dataset significantly to handle more complicated scenarios. Moreover, as we progress to the next phase, reconstructing 3D diffraction volumes from 2D frames will require more computational resources; specifically, the demands for increased RAM and more powerful GPUs exceed the capabilities of our current setup. The resources provided by your organisation will help us develop our methodology further. Our aim is not only to refine our current results and apply our methodology to the experimental data we have, but also to make our AI-driven solution truly adaptable to the complex realities of BCDI applications.
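To give a concrete picture of the traditional interpolation baseline that the predicted angles enable, the sketch below assembles a 3D diffraction volume from 2D frames with known rotation angles by nearest-voxel accumulation. The geometry (rotation about the vertical axis), grid size, and normalisation are simplifying assumptions for illustration only.

```python
# Illustrative sketch (not the project's code): assembling a 3D diffraction
# volume from 2D frames once their rotation angles are known, using
# nearest-voxel accumulation and averaging over multiple hits per voxel.
import numpy as np


def assemble_volume(frames, angles_rad, n_vox=64):
    """frames: (N, H, W) intensities; angles_rad: (N,) rotations about the y axis."""
    vol = np.zeros((n_vox, n_vox, n_vox))
    hits = np.zeros_like(vol)
    N, H, W = frames.shape
    # Detector pixel coordinates in the plane z = 0, centred on the origin.
    y, x = np.mgrid[0:H, 0:W]
    y = (y - H / 2) / H
    x = (x - W / 2) / W
    for frame, a in zip(frames, angles_rad):
        # Rotate the frame's plane about the vertical (y) axis by its angle.
        xr = np.cos(a) * x
        zr = np.sin(a) * x
        # Map rotated coordinates to voxel indices.
        ix = np.clip(((xr + 0.5) * n_vox).astype(int), 0, n_vox - 1)
        iy = np.clip(((y + 0.5) * n_vox).astype(int), 0, n_vox - 1)
        iz = np.clip(((zr + 0.5) * n_vox).astype(int), 0, n_vox - 1)
        np.add.at(vol, (ix, iy, iz), frame)
        np.add.at(hits, (ix, iy, iz), 1)
    # Average where voxels received contributions; leave the rest empty.
    return np.where(hits > 0, vol / np.maximum(hits, 1), 0.0)


frames = np.random.rand(8, 32, 32)
angles = np.linspace(0, np.pi / 2, 8)
vol = assemble_volume(frames, angles)
print(vol.shape)  # (64, 64, 64)
```

A real assembly would use the proper Ewald-sphere geometry and trilinear interpolation instead of nearest-voxel binning, but the principle is the same: once each frame's angle is known, every detector pixel maps to a definite point in reciprocal space.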