
EMDiffuse: a diffusion-based deep learning method augmenting ultrastructural imaging and volume electron microscopy

Chixiang Lu, Kai Chen, Heng Qiu, Xiaojun Chen, Gu Chen, Xiaojuan Qi, Haibo Jiang

From the Department of Chemistry, The University of Hong Kong, Hong Kong, China; School of Molecular Sciences, The University of Western Australia, Perth, Australia; Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China

Electron microscopy (EM) has revolutionized the visualization of cellular ultrastructure, and volume EM (vEM) has further extended this capability to three-dimensional nanoscale imaging. However, intrinsic trade-offs between imaging speed and quality in EM restrict the attainable imaging area and volume, and isotropic imaging with vEM for large biological volumes remains unachievable.


In this work, we developed EMDiffuse, a suite of algorithms designed to enhance EM and vEM capabilities by leveraging cutting-edge diffusion models for image generation.

  • EMDiffuse demonstrates outstanding denoising and super-resolution performance, generates realistic predictions without unwarranted smoothness, and preserves resolution and detailed ultrastructure.

  • EMDiffuse pioneers the isotropic vEM reconstruction task, generating isotropic volume similar to that obtained using advanced FIB-SEM even in the absence of isotropic training data. The generated isotropic volume enables accurate organelle reconstruction, making 3D nanoscale ultrastructure analysis faster and more accessible and extending such capability to larger volumes.

  • EMDiffuse features self-assessment functionality to ensure reliable predictions for all tasks.

Please refer to the published paper for more details. 
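As background for readers unfamiliar with diffusion models, the sketch below illustrates the general idea of conditional diffusion sampling for image restoration in PyTorch. It is a minimal, illustrative example only, not the EMDiffuse implementation (see the repository linked below); in particular, eps_model is a hypothetical placeholder for a trained noise-prediction network that receives the current sample, the acquired low-quality image as the condition, and the timestep.

import torch

def ddpm_restore(noisy_em, eps_model, T=1000, device="cpu"):
    # Linear beta schedule and derived alpha terms (standard DDPM definitions).
    betas = torch.linspace(1e-4, 2e-2, T, device=device)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    # Sampling starts from pure Gaussian noise of the same shape as the target image.
    x = torch.randn_like(noisy_em)

    for t in reversed(range(T)):
        t_batch = torch.full((x.shape[0],), t, device=device, dtype=torch.long)
        # The network predicts the noise in x_t, conditioned on the acquired image.
        eps = eps_model(x, noisy_em, t_batch)
        alpha_t, alpha_bar_t = alphas[t], alpha_bars[t]

        # Posterior mean of x_{t-1} given x_t (standard DDPM reverse step),
        # plus Gaussian noise for every step except the last.
        x = (x - (1.0 - alpha_t) / torch.sqrt(1.0 - alpha_bar_t) * eps) / torch.sqrt(alpha_t)
        if t > 0:
            x = x + torch.sqrt(betas[t]) * torch.randn_like(x)

    return x  # restored (denoised or super-resolved) estimate

In this kind of scheme, a noise-prediction network trained on paired low- and high-quality images, once conditioned on a new noisy or low-resolution acquisition, yields the restored estimate by iterating the reverse step above.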

EMDiffuse-n for electron microscopy image denoising

EMDiffuse pipeline

EMDiffuse-n on mouse brain cortex.

Comparison of EMDiffuse-n with CARE

Comparison of EMDiffuse-n with RCAN

Comparison of EMDiffuse-n with PSSR

EMDiffuse-r for electron microscopy image super-resolution

EMDiffuse-r on mouse brain cortex.

Comparison of EMDiffuse-r with other methods

EMDiffuse algorithms are highly transferable

Transfer learning on mouse liver denoising

Transfer learning on mouse heart denoising

Transfer learning on bone marrow denoising

Transfer learning on mouse liver super-resolution

Transfer learning on mouse heart super-resolution

Transfer learning on bone marrow super-resolution

vEMDiffuse-i restores isotropic resolution from anisotropic volumes

Comparison of XY views of the vEMDiffuse-i generated OpenOrganelle kidney volume and the isotropic volume obtained using FIB-SEM

Comparison of YZ views of the anisotropic volume and the vEMDiffuse-i generated volume

Comparison of XY views of the vEMDiffuse-i generated OpenOrganelle liver volume and the isotropic volume obtained using FIB-SEM

Comparison of YZ views of the anisotropic volume and the vEMDiffuse-i generated volume

vEMDiffuse-a restores isotropic volumes without isotropic training data

Comparison of XY views of the vEMDiffuse-a generated OpenOrganelle kidney volume and the isotropic volume obtained using FIB-SEM

Comparison of YZ views of the anisotropic volume and the vEMDiffuse-a generated volume

Comparison of XY views of the vEMDiffuse-a generated OpenOrganelle liver volume and the isotropic volume obtained using FIB-SEM

Comparison of YZ views of the anisotropic volume and the vEMDiffuse-a generated volume

Comparison of XY views of the vEMDiffuse-a generated MICrONS multi-area volume and the original anisotropic volume

Comparison of YZ views of the anisotropic volume and the vEMDiffuse-a generated volume

Comparison of XY views of the vEMDiffuse-a generated FANC volume and the original anisotropic volume

Comparison of YZ views of the anisotropic volume and the vEMDiffuse-a generated volume

Isotropic organelle segmentation facilitates accurate 3D reconstruction in vEM 

Comparison of mitochondria segmentation and reconstruction results on the OpenOrganelle kidney volume. Red: segmentation and reconstruction of the isotropic volume generated by vEMDiffuse-a. Yellow: segmentation and reconstruction of the anisotropic volume. The mean intersection over union (mIoU) is shown at the top.

Comparison of mitochondria (red) and endoplasmic reticulum (blue) segmentation and reconstruction results on the OpenOrganelle liver volume. Left: segmentation and reconstruction of the isotropic volume generated by vEMDiffuse-i. Right: segmentation and reconstruction of the anisotropic volume. The mean intersection over union (mIoU) is shown at the top.
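For reference, the mIoU quoted in these comparisons is the standard mean intersection over union across organelle classes. A minimal sketch of computing it from binary 3D masks is shown below; the class names and dictionary-based mask format are illustrative assumptions, not the exact evaluation code used in the paper.

import numpy as np

def mean_iou(pred_masks, gt_masks):
    # pred_masks and gt_masks: dicts mapping an organelle class name
    # (e.g. "mitochondria", "er") to boolean 3D arrays of identical shape.
    ious = []
    for name, gt in gt_masks.items():
        pred = pred_masks[name]
        intersection = np.logical_and(pred, gt).sum()
        union = np.logical_or(pred, gt).sum()
        # If a class is absent from both volumes, count it as a perfect match.
        ious.append(intersection / union if union > 0 else 1.0)
    return float(np.mean(ious))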

Assessing the reliability of EMDiffuse-generated results

EMDiffuse addresses the common concern about the reliability of deep learning-generated data in scientific data processing.

EMDiffuse is designed with the capability to self-assess the reliability of its predictions: it produces low uncertainty values when it is confident in its predictions.

We identified an uncertainty threshold of 0.12 that can assist biologists and microscopists in assessing the reliability of predictions.
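As an illustration of how such a threshold could be applied in practice, the sketch below estimates a per-pixel uncertainty map as the standard deviation across repeated diffusion samples for the same input and flags pixels above 0.12. The sampling-based estimate is an assumption made for illustration; the exact uncertainty definition used by EMDiffuse is given in the paper.

import torch

def uncertainty_map(samples):
    # Per-pixel standard deviation across several diffusion samples drawn for
    # the same input; higher values indicate less consistent predictions.
    # samples: tensor of shape (n_samples, H, W).
    return samples.std(dim=0)

def flag_unreliable(samples, threshold=0.12):
    # Boolean mask marking pixels whose uncertainty exceeds the chosen threshold.
    return uncertainty_map(samples) > threshold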

Data and code availability

The source code of EMDiffuse, several representative pre-trained models, and example images for testing are available at https://github.com/Luchixiang/EMDiffuse.

You are very welcome to try EMDiffuse on your own data; feedback would be much appreciated.

We are also open to collaborations and are happy to test your data on our side.


Acknowledgement

We are grateful to the vEM community for making public vEM datasets available! In this work, we developed and demonstrated EMDiffuse using public datasets including the OpenOrganelle kidney and liver volumes, the MICrONS multi-area volume, and the FANC volume.

We also gratefully acknowledge funding support from the Research Grants Council of Hong Kong (17102722, 17202422, 27209621) and the Australian Research Council (LP190100433). This work was conducted in the JC STEM Lab of Molecular Imaging, funded by The Hong Kong Jockey Club Charities Trust.

Contact

You are very welcome to try our codes and methods on the datasets you are interested in. Please feel free to contact us if you have any questions!

Email addresses: Chixiang Lu (u3590540@connect.hku.hk); Xiaojuan Qi (xjqi@eee.hku.hk); Haibo Jiang (hbjiang@hku.hk)
