Abstract
Background: Modern medical imaging modalities give clinicians complementary views of the body's internal anatomy and physiology and are widely used in the diagnosis of complex diseases. The fundamental idea behind medical image fusion is to increase an image's global and local contrast and enhance its visual impact, rendering it better suited to computer processing or human interpretation, while avoiding noise amplification and maintaining good real-time performance. Objective: The primary goal is to combine data from images of different modalities (CT/MRI and MR-T1/MR-T2) into a single image that retains, to the greatest degree possible, the prominent features of the source images.
Methods: Clinical diagnostic accuracy is compromised because many classical fusion methods fail to preserve all the prominent features of the source images. Moreover, transform-domain methods suffer from complex implementation, high computation time, and large memory requirements. To address these problems, this research proposes a fusion framework for multimodal medical images based on a multi-scale edge-preserving filter and visual saliency detection. The source images are decomposed into base and detail layers using a two-scale edge-preserving filter. The base layers are combined using an addition fusion rule, while the detail layers are fused using weight maps constructed with the maximum symmetric surround saliency detection algorithm.
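To make the pipeline concrete, the sketch below implements the two-scale decomposition and saliency-weighted detail fusion for a pair of registered grayscale images. It is a minimal illustration under stated assumptions, not the paper's exact implementation: OpenCV's bilateral filter stands in for the edge-preserving filter, the saliency step is a simplified grayscale variant of maximum symmetric surround saliency, the base-layer addition is averaged to keep values in display range, and all parameter values and file names are hypothetical.

```python
"""Sketch of two-scale, saliency-weighted multimodal image fusion.

Assumptions (not from the paper): bilateral filtering as the
edge-preserving filter, illustrative parameter values, and a
simplified grayscale maximum symmetric surround saliency step.
"""
import cv2
import numpy as np


def two_scale_decompose(img, d=9, sigma_color=75, sigma_space=75):
    """Split an image into a smooth base layer and a detail layer."""
    base = cv2.bilateralFilter(img, d, sigma_color, sigma_space)
    detail = img.astype(np.float32) - base.astype(np.float32)
    return base.astype(np.float32), detail


def msss_saliency(img):
    """Simplified grayscale maximum symmetric surround saliency:
    |mean of the largest symmetric surround window - blurred pixel|."""
    h, w = img.shape
    blurred = cv2.GaussianBlur(img.astype(np.float32), (5, 5), 0)
    integral = cv2.integral(img.astype(np.float32))  # shape (h+1, w+1)
    sal = np.zeros((h, w), np.float32)
    for y in range(h):
        for x in range(w):
            off_x = min(x, w - 1 - x)  # largest symmetric extent in x
            off_y = min(y, h - 1 - y)  # largest symmetric extent in y
            x0, x1 = x - off_x, x + off_x + 1
            y0, y1 = y - off_y, y + off_y + 1
            area = (x1 - x0) * (y1 - y0)
            mean = (integral[y1, x1] - integral[y0, x1]
                    - integral[y1, x0] + integral[y0, x0]) / area
            sal[y, x] = abs(mean - blurred[y, x])
    return sal


def fuse(img_a, img_b):
    """Fuse two registered grayscale images of identical size."""
    base_a, detail_a = two_scale_decompose(img_a)
    base_b, detail_b = two_scale_decompose(img_b)
    # Base layers: addition rule, averaged here to stay in display range.
    base_fused = 0.5 * (base_a + base_b)
    # Detail layers: normalized saliency maps act as per-pixel weights.
    sal_a, sal_b = msss_saliency(img_a), msss_saliency(img_b)
    w_a = sal_a / (sal_a + sal_b + 1e-12)
    detail_fused = w_a * detail_a + (1.0 - w_a) * detail_b
    return np.clip(base_fused + detail_fused, 0, 255).astype(np.uint8)


if __name__ == "__main__":
    # Hypothetical input file names for a CT/MRI pair.
    ct = cv2.imread("ct.png", cv2.IMREAD_GRAYSCALE)
    mri = cv2.imread("mri.png", cv2.IMREAD_GRAYSCALE)
    cv2.imwrite("fused.png", fuse(ct, mri))
```

The per-pixel saliency loop is written plainly for readability; in practice it would be vectorized, but the structure mirrors the described pipeline: decompose, fuse base layers by addition, fuse detail layers by saliency-derived weight maps, then recombine.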
Results: The fused image produced by the proposed method scores higher on objective evaluation metrics than those of classical methods; it also preserves edge contours, exhibits greater global contrast, and is free of ringing effects and artifacts.
Conclusion: The methodology offers a powerful and complementary set of clinical diagnostic, therapeutic, and biomedical research capabilities with the potential to considerably strengthen medical practice and biological understanding.