[1] X. Zhang, "Benchmarking and comparing multi-exposure image fusion algorithms," Information Fusion, vol. 74, pp. 111-131, 2021, doi: 10.1016/j.inffus.2021.02.005.
[2] Z. Ying, G. Li, and W. Gao, "A bio-inspired multi-exposure fusion framework for low-light image enhancement," arXiv preprint, arXiv:1711.00591, 2017. [Online]. Available: http://arxiv.org/abs/1711.00591
[3] K. Ma and Z. Wang, "Multi-exposure image fusion: A patch-wise approach," in Proc. IEEE Int. Conf. Image Process. (ICIP), 2015, pp. 1717-1721, doi: 10.1109/ICIP.2015.7351094.
[4] P. J. Burt and R. J. Kolczynski, "Enhanced image capture through fusion," in Proc. 4th Int. Conf. Comput. Vis. (ICCV), 1993, pp. 173-182, doi: 10.1109/ICCV.1993.378222.
[5] A. Vyas, S. Yu, and J. Paik, Fundamentals of Digital Image Processing, Signals and Communication Technology Series, pp. 3-11, 2018, doi: 10.1007/978-981-10-7272-7_1.
[6] A. M. Reza, "Realization of the contrast limited adaptive histogram equalization (CLAHE) for real-time image enhancement," J. VLSI Signal Process. Syst. Signal Image Video Technol., vol. 38, no. 1, pp. 35-44, 2004, doi: 10.1023/B:VLSI.0000028532.53893.82.
[7] H. Ibrahim and N. S. P. Kong, "Brightness preserving dynamic histogram equalization for image contrast enhancement," IEEE Trans. Consum. Electron., vol. 53, no. 4, pp. 1752-1758, 2007, doi: 10.1109/TCE.2007.4429280.
[8] C. Wang and Z. Ye, "Brightness preserving histogram equalization with maximum entropy: A variational perspective," IEEE Trans. Consum. Electron., vol. 51, no. 4, pp. 1326-1334, 2005, doi: 10.1109/TCE.2005.1561863.
[9] N. Hayat and M. Imran, "Ghost-free multi exposure image fusion technique using dense SIFT descriptor and guided filter," J. Vis. Commun. Image Represent., vol. 62, pp. 295-308, 2019, doi: 10.1016/j.jvcir.2019.06.002.
[10] S. H. Lee, J. S. Park, and N. I. Cho, "A multi-exposure image fusion based on the adaptive weights reflecting the relative pixel intensity and global gradient," in Proc. IEEE Int. Conf. Image Process. (ICIP), 2018, pp. 1737-1741, doi: 10.1109/ICIP.2018.8451153.
[11] K. R. Prabhakar, V. S. Srikar, and R. V. Babu, "DeepFuse: A deep unsupervised approach for exposure fusion with extreme exposure image pairs," in Proc. IEEE Int. Conf. Comput. Vis. (ICCV), 2017, pp. 4724-4732, doi: 10.1109/ICCV.2017.505.
[12] H. Li and L. Zhang, "Multi-exposure fusion with CNN features," in Proc. IEEE Int. Conf. Image Process. (ICIP), 2018, pp. 1723-1727, doi: 10.1109/ICIP.2018.8451689.
[13] J. Yin, B. Chen, Y. Peng, and C. Tsai, "Deep prior guided network for high-quality image fusion," 2020, doi: 10.1109/ICME46284.2020.9102832. [Online]. Available: https://arxiv.org/abs/2001.08941
[14] S. Y. Chen and Y. Y. Chuang, "Deep exposure fusion with deghosting via homography estimation and attention learning," in Proc. IEEE Int. Conf. Acoust., Speech Signal Process. (ICASSP), 2020, pp. 1464-1468, doi: 10.1109/ICASSP40776.2020.9053765.
[15] Z. Yang, Y. Chen, Z. Le, and Y. Ma, "GANFuse: A novel multi-exposure image fusion method based on generative adversarial networks," Neural Comput. Appl., vol. 33, no. 11, pp. 6133-6145, 2021, doi: 10.1007/s00521-020-05387-4.
[16] H. Xu, H. Liang, and J. Ma, "Unsupervised multi-exposure image fusion breaking exposure limits via contrastive learning," 2023, doi: 10.1609/aaai.v37i3.25404.
[17] D. Han, L. Li, X. Guo, and J. Ma, "Multi-exposure image fusion via deep perceptual enhancement," Information Fusion, vol. 79, pp. 248-262, 2022, doi: 10.1016/j.inffus.2021.10.006.
[18] K. Ma, Z. Duanmu, H. Zhu, Y. Fang, and Z. Wang, "Deep guided learning for fast multi-exposure image fusion," IEEE Trans. Image Process., vol. 29, pp. 2808-2819, 2020, doi: 10.1109/TIP.2019.2952716.
[19] Y. Zhang et al., "IFCNN: A general image fusion framework based on convolutional neural network," Information Fusion, vol. 54, pp. 99-118, 2020, doi: 10.1016/j.inffus.2019.07.011.
[20] E. H. Land, "The Retinex theory of color vision," Scientific American, vol. 237, no. 6, pp. 108-128, 1977, doi: 10.1038/scientificamerican1277-108.
[21] J. W. Roberts, J. van Aardt, and F. Ahmed, "Assessment of image fusion procedures using entropy, image quality, and multispectral classification," J. Electron. Imaging, vol. 17, no. 2, pp. 1-28, 2008, doi: 10.1117/1.2945910.
[22] P. Jagalingam and A. Vittal, "A review of quality metrics for fused image," Aquatic Procedia, vol. 4, pp. 133-142, 2015, doi: 10.1016/j.aqpro.2015.02.019.
[23] G. Cui, H. Feng, Z. Xu, Q. Li, and Y. Chen, "Detail preserved fusion of visible and infrared images using regional saliency extraction and multi-scale image decomposition," Opt. Commun., vol. 341, pp. 199-209, 2015, doi: 10.1016/j.optcom.2014.12.032.
[24] "Image fusion based on an absolute feature," Int. J. Comput. Inf. Technol., vol. 3, no. 6, pp. 1433-1447, 2007.
[25] A. M. Eskicioglu and P. S. Fisher, "Image quality measures and their performance," IEEE Trans. Commun., vol. 43, no. 12, pp. 2959-2965, 1995, doi: 10.1109/26.477498.
[26] S. Pistonesi, J. Martinez, S. Mar, and R. Vallejos, "Structural similarity metrics for quality image fusion assessment," Image Process. On Line, 2018, doi: 10.5201/ipol.2018.196.
[27] S. Li, R. Hong, and X. Wu, "A novel similarity based quality metric for image fusion," in Proc. 3rd Int. Conf. Machine Learning and Cybernetics, 2008, pp. 167-172.
[28] K. Ma, K. Zeng, and Z. Wang, "Perceptual quality assessment for multi-exposure image fusion," IEEE Trans. Image Process., vol. 24, no. 11, pp. 3345-3356, 2015, doi: 10.1109/TIP.2015.2442920.
[29] Y. Chen and R. S. Blum, "A new automated quality assessment algorithm for image fusion," Image Vis. Comput., vol. 27, no. 10, pp. 1421-1432, 2009, doi: 10.1016/j.imavis.2007.12.002.
[30] H. Chen and P. K. Varshney, "A human perception inspired quality metric for image fusion based on regional information," Information Fusion, vol. 8, pp. 193-207, 2007, doi: 10.1016/j.inffus.2005.10.001.
[31] H. Zavar and R. Shah-Hosseini, "Comparative evaluation of lighting improvement methods in aerial images," J. Geospatial Inf. Technol., vol. 11, no. 3, pp. 103-119, Dec. 2023, doi: 10.61186/jgit.11.3.103.