Abstract
Introduction: This work proposes a medical image registration method with significantly improved performance. Traditional deformable registration produces spatial transformations that are insufficiently smooth, and solving its optimization parameters is computationally expensive. Existing deep-learning-based registration models also have limitations: they cannot guarantee topology-preserving registration, which leads to a loss of spatial features. The proposed model provides topology preservation and invertible transformations, learns richer multi-scale features and more complex image structures, and captures finer deformations while explicitly encoding global correlations.
Method: Building on a UNet backbone, a deformable image registration method with a new architecture, Broad-UNet-diff, is proposed. The model is equipped with asymmetric parallel convolutions and employs diffeomorphic mapping.
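The abstract does not detail how the diffeomorphic mapping is realized. A common construction in diffeomorphic registration (used, for example, in VoxelMorph-diff and TransMorph-diff) is to integrate a stationary velocity field by scaling and squaring. The sketch below illustrates that idea in 2D; the function name, field shapes, and number of integration steps are illustrative assumptions, not the authors' implementation:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def integrate_velocity(v, n_steps=6):
    """Scaling-and-squaring integration of a stationary velocity field.

    v : (2, H, W) velocity field.
    Returns a (2, H, W) displacement field approximating exp(v), which is
    invertible (integrate -v for the inverse) and topology-preserving for
    sufficiently smooth v.  Illustrative sketch only -- Broad-UNet-diff's
    actual parameterisation is an assumption here.
    """
    # Initial small step: phi_{1/2^N} ~ id + v / 2^N
    disp = v / (2 ** n_steps)
    H, W = v.shape[1:]
    grid = np.mgrid[0:H, 0:W].astype(float)
    for _ in range(n_steps):
        # Compose the map with itself: phi <- phi o phi,
        # i.e. disp(x) <- disp(x) + disp(x + disp(x))
        warped = np.stack([
            map_coordinates(disp[c], grid + disp, order=1, mode='nearest')
            for c in range(2)
        ])
        disp = disp + warped
    return disp
```

Because the final map is the N-fold composition of a map close to the identity, its Jacobian stays positive almost everywhere, which is what yields the topology preservation and invertibility claimed for the model.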
Result: Compared with seven classical registration methods on brain MRI datasets, the proposed method significantly improves registration performance. In particular, compared with the state-of-the-art TransMorph-diff registration method, the Dice score improves by 12% while requiring only 1/10 of the parameters.
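The Dice score reported above is the standard overlap measure between warped and fixed anatomical label maps. A minimal reference implementation for a single pair of binary masks:

```python
import numpy as np

def dice_score(a, b):
    """Dice similarity coefficient between two binary masks.

    Returns 2|A ∩ B| / (|A| + |B|), ranging from 0 (no overlap)
    to 1 (identical masks).
    """
    a, b = a.astype(bool), b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())
```

In registration evaluation this is typically computed per anatomical structure on the segmentation warped by the predicted transformation, then averaged across structures and subjects.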
Conclusion: These results confirm the effectiveness and accuracy of the proposed method.