Evaluation of Aerial Image Brightness Enhancement Using Deep Learning Methods
Hosein Zavar, Reza Shah-Hoseini *
School of Surveying & Geospatial Engineering, College of Engineering, University of Tehran, Tehran, Iran.
Abstract:
Image acquisition using unmanned aerial vehicles (UAVs) is one of the most prevalent applications in surveying, used for monitoring and assessing existing conditions. Despite its significant advantages, this approach also faces challenges: improper camera settings during image capture, adverse weather conditions, and changes in lighting are the primary factors that reduce the quality of captured images. Generally, brightness enhancement methods fall into two groups: traditional methods, which rely on histograms, and modern methods based on neural networks and deep learning, which have increasingly attracted the attention of researchers. The objective of this study is to evaluate the performance of deep learning methods in enhancing the brightness of aerial images. Such images often exhibit inadequate quality due to reduced visual detail and the consequent loss of spectral information, deficiencies that degrade the quality of spatial products such as orthophotos and digital surface models. Improving brightness and recovering spectral information therefore have a direct and significant influence on the quality of these products. To this end, the study examines three deep learning methods that have demonstrated superior performance for aerial image brightness enhancement in previous research. The optimal method is selected on the basis of 10 brightness evaluation metrics. The evaluated data consist of aerial images captured over two different regions, characterized by areas with significant loss of visual detail and spectral information due to poor lighting conditions. The results reveal hidden features in shadowed regions and in areas with excessive brightness and high environmental reflection that are not easily discernible by the naked eye; this is achieved by recovering spectral information through increasing the contrast between the digital values of pixels in these regions. The best-performing method achieves structural similarity index (SSIM) scores of 0.92 and 0.96 on the two datasets, respectively; SSIM is one of the most important of the 10 evaluation metrics used in this study.
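The abstract does not describe how the 10 evaluation metrics are computed, so the following is only a minimal illustrative sketch of one of them, SSIM, using scikit-image's `structural_similarity`. The synthetic images, the `evaluate_enhancement` helper, and the crude linear brightness correction are assumptions for demonstration only and are not the authors' pipeline.

```python
import numpy as np
from skimage.metrics import structural_similarity


def evaluate_enhancement(enhanced: np.ndarray, reference: np.ndarray) -> float:
    """SSIM between an enhanced aerial image and a well-exposed reference.

    Both arrays are expected as H x W x 3 uint8 RGB frames.
    """
    return structural_similarity(
        enhanced,
        reference,
        channel_axis=-1,  # last axis holds the color channels
        data_range=255,   # dynamic range of uint8 imagery
    )


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic placeholder stands in for a well-exposed UAV frame.
    reference = rng.integers(0, 256, size=(256, 256, 3), dtype=np.uint8)
    # Simulate an under-exposed capture, then a crude linear brightness fix.
    dark = (reference.astype(np.float32) * 0.4).astype(np.uint8)
    enhanced = np.clip(dark.astype(np.float32) / 0.4, 0, 255).astype(np.uint8)
    print(f"SSIM(dark, reference)     = {evaluate_enhancement(dark, reference):.3f}")
    print(f"SSIM(enhanced, reference) = {evaluate_enhancement(enhanced, reference):.3f}")
```

An SSIM close to 1 indicates that the enhanced image preserves the structure of the reference, which is the sense in which the reported scores of 0.92 and 0.96 reflect strong recovery of visual detail.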
Keywords: Photogrammetry, Fusion, Brightness Enhancement, Orthophoto, Deep Learning |
Type of Study: Research |
Subject: RS
Received: 2024/10/10 | Accepted: 2025/05/28 | ePublished ahead of print: 2025/08/05