Evaluation of Aerial Image Brightness Enhancement Using Deep Learning Methods
Hosein Zavar, Reza Shah-Hoseini *
University of Tehran
Abstract:
Image acquisition with unmanned aerial vehicles (UAVs) for monitoring and assessing existing conditions is one of the most prevalent applications in surveying. Despite its significant advantages, this approach also faces several challenges: improper camera settings during image capture, adverse weather conditions, and changes in lighting are the primary factors that degrade the quality of the captured images. In general, brightness-enhancement methods can be categorized into two groups: traditional methods, which rely on histograms, and modern methods based on neural networks and deep learning, which have increasingly attracted the attention of researchers. The objective of this study is to evaluate the performance of deep learning methods in enhancing the brightness of aerial images. Such images often exhibit inadequate quality due to reduced visual detail and the consequent loss of spectral information, deficiencies that degrade spatial products such as orthophotos and digital surface models; improving brightness and recovering spectral information therefore have a direct and significant influence on the quality of these products. To this end, the study examines three deep learning methods that have demonstrated superior performance in previous research on aerial image brightness enhancement, and the optimal method is selected based on 10 brightness evaluation metrics. The evaluated data consist of aerial images captured over two different regions, characterized by areas with substantial loss of visual detail and spectral information due to poor lighting conditions. The results reveal features hidden in shadowed regions and in areas of excessive brightness and high environmental reflection that are not easily discernible by the naked eye; this is achieved by recovering spectral information through increased contrast between the digital values of the pixels in these regions. The best-performing method achieves structural similarity index (SSIM) scores of 0.92 and 0.96 on the two datasets, respectively; SSIM is one of the most critical of the 10 evaluation metrics used in this study.
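For reference, the snippet below is a minimal sketch, not the authors' pipeline, of how an SSIM score such as those reported above could be computed between an enhanced aerial tile and a reference image. It assumes scikit-image and imageio are available, and the file names are placeholders.

    import imageio.v3 as iio
    from skimage.metrics import structural_similarity

    # Placeholder file names; in practice these would be a well-lit reference
    # tile and the corresponding output of the brightness-enhancement method.
    reference = iio.imread("reference_tile.png")
    enhanced = iio.imread("enhanced_tile.png")

    # channel_axis=-1 treats the last axis as the RGB channels;
    # data_range=255 matches 8-bit imagery.
    score = structural_similarity(reference, enhanced,
                                  channel_axis=-1, data_range=255)
    print(f"SSIM: {score:.2f}")  # values near 1.0 indicate strong structural agreement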
Keywords: Photogrammetry, Fusion, Brightness Enhancement, Orthophoto, Deep Learning
Full-Text [PDF 2075 kb]
Type of Study: Research | Subject: RS
Received: 2024/10/10 | Accepted: 2025/05/28 | ePublished ahead of print: 2025/08/05 | Published: 2025/08/31