
An attention-based multi-scale feature learning network for multimodal medical image fusion, supervised by Prof. David B. Lindell, for the CSC2529 course project.
- Conceptualized and developed the Dilated Residual Attention Network (DILRAN), a state-of-the-art approach for medical image fusion that combines the strengths of residual attention networks, pyramid networks, and dilated convolutions
- Introduced a Softmax Feature Weighted Strategy for fusion, achieving 14.26% higher PSNR and 1.97% higher FSIM than competing fusion strategies
- Utilized dilated convolutions to extract shallow features, preserving local details while enlarging the receptive field without inflating the model's parameter count
- Achieved state-of-the-art performance in image fusion metrics and subjective fused-image quality, surpassing existing models by 12.97% in PSNR and 1.49% in FSIM
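The softmax-weighted fusion idea above can be sketched as follows. This is an illustrative NumPy toy, not the exact DILRAN implementation: per-pixel weights are a softmax over each source map's activation magnitude, so the fused output is a smooth convex combination rather than a hard max.

```python
import numpy as np

def softmax_weighted_fuse(feat_a, feat_b):
    """Fuse two feature maps with softmax-normalized activation weights.

    Sketch of a softmax feature-weighted fusion strategy: stronger
    activations receive larger (but soft) per-pixel weights.
    """
    stack = np.stack([np.abs(feat_a), np.abs(feat_b)])        # (2, H, W)
    stack = stack - stack.max(axis=0, keepdims=True)          # numerical stability
    weights = np.exp(stack) / np.exp(stack).sum(axis=0, keepdims=True)
    return weights[0] * feat_a + weights[1] * feat_b

# Example: fuse two random "feature maps" (stand-ins for CNN activations)
a = np.random.rand(8, 8)
b = np.random.rand(8, 8)
fused = softmax_weighted_fuse(a, b)
```

Because the weights are positive and sum to one at every pixel, the fused map always lies between the two source maps elementwise.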
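The receptive-field claim for dilated convolutions can be checked with a short calculation. A minimal sketch, assuming stride-1 stacked convolutions (the dilation rates here are illustrative, not the specific DILRAN configuration): each layer adds (kernel_size - 1) * dilation to the receptive field, while the per-layer parameter count is independent of the dilation rate.

```python
def receptive_field(kernel_size, dilations):
    """Receptive field of a stack of stride-1 convolutions.

    Each layer with dilation d expands the field by (kernel_size - 1) * d;
    the weight count (kernel_size**2 * C_in * C_out) is unchanged by d.
    """
    rf = 1
    for d in dilations:
        rf += (kernel_size - 1) * d
    return rf

# Three 3x3 layers: standard vs. dilated with example rates 1, 2, 4
standard = receptive_field(3, [1, 1, 1])   # 7
dilated  = receptive_field(3, [1, 2, 4])   # 15
```

With the same number of layers and weights, the dilated stack covers more than twice the spatial extent, which is why dilation grows the receptive field "without inflating model parameters".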