
Remote Sensing

Object Detection in UAV Images via Global Density Fused Convolutional Network


We propose a novel global density fused convolutional network (GDF-Net) optimized for object detection in UAV images. The designed GDF-Net consists of a Backbone Network, a Global Density Model (GDM), and an Object Detection Network. Specifically, the GDM refines density features by applying dilated convolutional networks, which deliver larger receptive fields and generate global density fused features. We test the effectiveness and robustness of the proposed GDF-Nets on the VisDrone and UAVDT datasets. Compared with the base networks, adding the GDM improves model performance in both recall and precision, and we find that the GDM particularly facilitates the detection of objects in congested scenes with high distribution density. The presented GDF-Net framework can be applied not only to the base networks selected in this study but also to other popular object detection models.
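The receptive-field benefit of dilation can be made concrete with a small calculation. The sketch below (illustrative only; the layer configuration is a hypothetical stand-in, not the paper's actual GDM architecture) compares the receptive field of a plain 3x3 convolution stack with a dilated one of the same depth:

```python
def receptive_field(layers):
    """Receptive field of stacked stride-1 convolutions.

    `layers` is a list of (kernel_size, dilation) tuples. With stride 1,
    each layer enlarges the receptive field by (kernel_size - 1) * dilation.
    """
    rf = 1
    for kernel_size, dilation in layers:
        rf += (kernel_size - 1) * dilation
    return rf

# Three plain 3x3 convolutions: receptive field of 7 pixels.
print(receptive_field([(3, 1), (3, 1), (3, 1)]))  # -> 7

# Same depth with dilations 1, 2, 4: receptive field of 15 pixels,
# at no extra parameter cost -- the motivation for dilated convolutions.
print(receptive_field([(3, 2), (3, 2), (3, 4)]))
```

Doubling the dilation rate per layer grows the receptive field exponentially with depth, which is why dilated stacks are a common way to capture global context such as object density.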


Building change detection from high-resolution remote sensing images


We propose a novel deeply supervised attention-guided network (DSA-Net) for building change detection (BCD) tasks in high-resolution images. In the DSA-Net, we innovatively introduce a spatial attention mechanism-guided cross-layer addition and skip-connection (CLA-Con-SAM) module to aggregate multi-level contextual information, weaken the heterogeneity between raw image features and difference features, and direct the network's attention to changed regions. We also introduce an atrous spatial pyramid pooling (ASPP) module to extract multi-scale features.
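The core idea of a spatial attention gate, reweighting every channel by a learned (H, W) importance map, can be sketched in a few lines. This is a minimal illustration, assuming a CBAM-style mean/max channel pooling, and it omits the learned convolution a real module would apply to the pooled maps; it is not the paper's exact CLA-Con-SAM design:

```python
import numpy as np

def spatial_attention(features):
    """Minimal spatial-attention gate over a (C, H, W) feature map.

    Pool across channels (mean and max), squash the combined map to
    (0, 1) with a sigmoid, and reweight every channel by the resulting
    (H, W) attention map. A trained module would pass the pooled maps
    through a convolution before the sigmoid.
    """
    avg_pool = features.mean(axis=0)                           # (H, W)
    max_pool = features.max(axis=0)                            # (H, W)
    attention = 1.0 / (1.0 + np.exp(-(avg_pool + max_pool)))   # sigmoid
    return features * attention[None, :, :]                   # broadcast over C

feats = np.random.randn(8, 4, 4)
gated = spatial_attention(feats)
print(gated.shape)  # same shape as the input: (8, 4, 4)
```

Because the attention map lies in (0, 1), uninformative locations are suppressed while salient (changed) regions pass through, which is how such gates direct the network's focus.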


Multitask convolutional neural network models make it possible to carry out road surface extraction and road centerline extraction simultaneously by sharing information within a single deep learning model. In this study, we first design a challenging dataset using remote sensing images from the GF-2 satellite; the dataset contains complex road scenes with manually annotated images. We then propose a two-task, end-to-end convolutional neural network, termed Multitask Road-related Extraction Network (MRENet), for road surface extraction and road centerline extraction. We take features extracted from the road as the condition for centerline extraction, and the information transmission and parameter sharing between the two tasks compensate for the potential problem of insufficient road centerline samples. In the network design, we use atrous convolutions and a pyramid scene parsing pooling module (PSP pooling) to expand the network's receptive field, integrate multilevel features, and obtain richer information. In addition, we use a weighted binary cross-entropy function to alleviate the background imbalance problem. Experimental results show that the proposed algorithm outperforms several comparative methods in both classification precision and visual interpretation.
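The weighted binary cross-entropy used to counter background imbalance can be written compactly. The sketch below is a generic formulation (the exact weighting scheme in MRENet may differ): a `pos_weight` greater than 1 up-weights the rare road pixels against the dominant background:

```python
import numpy as np

def weighted_bce(pred, target, pos_weight):
    """Weighted binary cross-entropy for imbalanced segmentation.

    pred:   predicted probabilities in (0, 1)
    target: 0/1 ground-truth labels
    pos_weight > 1 penalizes misclassified positive (road) pixels more
    heavily, compensating for the large number of background pixels.
    """
    eps = 1e-7
    pred = np.clip(pred, eps, 1 - eps)  # guard against log(0)
    loss = -(pos_weight * target * np.log(pred)
             + (1 - target) * np.log(1 - pred))
    return loss.mean()

pred = np.array([0.9, 0.6, 0.2])
target = np.array([1.0, 1.0, 0.0])
print(weighted_bce(pred, target, pos_weight=1.0))  # standard BCE
print(weighted_bce(pred, target, pos_weight=5.0))  # road errors cost 5x
```

With `pos_weight = 1` this reduces to the standard binary cross-entropy; increasing it shifts the optimum toward higher recall on the minority class.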


We propose a novel unsupervised learning framework, termed Enhanced Image Prior (EIP), which performs super-resolution (SR) without low/high-resolution image pairs. We first feed random noise maps into a designed generative adversarial network (GAN) for satellite image SR reconstruction. Then, we convert the reference image to latent space as the enhanced image prior. We update the input noise in the latent space and further transfer the texture and structure information from the reference image. Results of extensive experiments on the Draper dataset show that EIP achieves significant improvements over state-of-the-art unsupervised SR methods, both quantitatively and qualitatively. Our experiments on satellite (SuperView-1) images reveal the potential of the proposed approach for improving the resolution of remote sensing imagery, even compared with supervised algorithms.
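The latent-update step, iteratively refining an input code so the generator's output matches a reference, can be illustrated with a deliberately simplified toy. Here a fixed linear map stands in for the trained GAN generator and a plain squared-error loss for the method's actual objective; both are assumptions for illustration, not the EIP implementation:

```python
import numpy as np

def refine_latent(W, z, reference, lr=0.1, steps=200):
    """Toy latent-space optimization.

    W is a fixed linear "generator" mapping a latent code z to an image.
    We descend on ||W z - reference||^2 using the analytic gradient
    2 W^T (W z - reference), mimicking how a latent code is pulled
    toward a reference; the real method optimizes noise maps through
    a deep generator with richer losses.
    """
    for _ in range(steps):
        residual = W @ z - reference
        z = z - lr * 2.0 * W.T @ residual
    return z

W = np.eye(3)                       # trivial generator for the demo
reference = np.array([1.0, 2.0, 3.0])
z = refine_latent(W, np.zeros(3), reference)
print(z)  # converges toward the reference
```

The point of the sketch is the loop structure: the generator's weights stay frozen while only the latent input is updated, which is what lets the reference image act as a prior without any paired training data.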