2. Overview of Published Papers
This Special Issue consists of 19 papers in various areas, which can be organized into nine categories; the number of papers published in each category is shown in parentheses.
- I. Hyperspectral Classification (three papers)
- II. Hyperspectral Target Detection (three papers)
- III. Hyperspectral and Multispectral Fusion (three papers)
- IV. Mid-wave Infrared Hyperspectral Imaging (two papers)
- V. Hyperspectral Unmixing (one paper)
- VI. Hyperspectral Sensor Hardware Design (one paper)
- VII. Hyperspectral Reconstruction (one paper)
- VIII. Hyperspectral Image Visualization (one paper)
- IX. Applications (four papers)
A short descriptive summary is provided for each paper so that readers can quickly discern its contents and find the papers that interest them.
- I. Hyperspectral Image Classification (three papers)
This paper developed a few-shot hyperspectral image classification approach that uses only a few labeled samples. It consists of two modules, i.e., a feature learning module and a relation learning module, which capture the spatial–spectral information in hyperspectral images and then carry out relation learning by comparing the similarity between samples. A task-based learning strategy then enhances the model's ability to learn from a large number of tasks randomly generated from different data sets. Accordingly, the proposed method has excellent generalization ability and can achieve satisfactory classification with only a few labeled samples. The experimental results indicated that the proposed method can perform better than traditional, semisupervised support vector machine, and semisupervised deep learning models.
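The core idea of comparing a query sample's features against labeled support samples can be illustrated with a toy nearest-similarity classifier; this is only a stand-in for the paper's learned relation module, and all names and shapes here are illustrative assumptions:

```python
import numpy as np

def cosine_similarity(a, b):
    # Similarity between two feature vectors, in [-1, 1].
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def relation_classify(query, support_feats, support_labels):
    """Assign the query the label of the most similar support sample.

    A minimal sketch of the compare-similarity step in few-shot
    relation learning; the paper learns the similarity with a deep
    relation module rather than fixing it to cosine similarity.
    """
    sims = [cosine_similarity(query, s) for s in support_feats]
    return support_labels[int(np.argmax(sims))]
```

With one labeled sample per class, the query is simply assigned the class of the closest support feature vector.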
This paper proposed a novel, lightweight, shuffled group convolutional neural network (abbreviated as SG-CNN) to achieve efficient training with a limited training dataset in HSI classification. It consists of SG conv units that employ conventional and atrous convolution in different groups, followed by a channel shuffle operation and a shortcut connection. As a result, SG-CNNs have fewer trainable parameters, whilst they can still be accurately and efficiently trained with fewer labeled samples. In addition, transfer learning between different HSI datasets was applied to the SG-CNN to further improve the classification accuracy. The experimental results demonstrated that SG-CNNs can achieve competitive classification performance when the amount of labeled training data is limited, as well as efficiently provide satisfying classification results.
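The channel shuffle operation mentioned above is a standard, parameter-free way to mix information between convolution groups; a minimal numpy sketch (array layout is an assumption, not the paper's code):

```python
import numpy as np

def channel_shuffle(x, groups):
    """Shuffle channels across groups, as used after grouped convolutions.

    x: feature map of shape (channels, height, width); channels must be
    divisible by `groups`. Reshaping to (groups, channels_per_group),
    transposing the two group axes, and flattening back interleaves the
    channels so that subsequent group convolutions see inputs from every
    group, at zero parameter cost.
    """
    c, h, w = x.shape
    assert c % groups == 0, "channel count must be divisible by groups"
    return (x.reshape(groups, c // groups, h, w)
             .transpose(1, 0, 2, 3)
             .reshape(c, h, w))
```

For example, with four channels and two groups, channel order (0, 1, 2, 3) becomes (0, 2, 1, 3).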
This paper developed a spectral–spatial classification method called superpixel-level constraint representation (SPCR) that uses the participation degree (PD) with respect to the sparse coefficient from the constraint representation (CR) model and then transforms the individual PD to a united activity degree (UAD)-driven mechanism via a spatial constraint generated by the superpixel segmentation algorithm. The final classification is determined based on the UAD-driven mechanism. Considering that SPCR is susceptible to the segmentation scale, an improved multiscale superpixel-level constraint representation (MSPCR) was further proposed through the decision fusion of SPCR at different scales, with the final decision of each test pixel determined by the maximum number of predicted labels among the classification results at each scale. Experimental results on four real hyperspectral datasets, including GF-5 satellite data, verified the efficiency and practicability of the proposed methods.
- II. Hyperspectral Target Detection (three papers)
This paper presented a fast hyperspectral underwater target-detection approach using band selection (BS). Due to the high data redundancy, slow imaging speed, and long processing time of hyperspectral imagery, the direct use of hyperspectral images cannot meet the needs of rapid underwater target detection. To resolve this issue, the proposed method first developed constrained-target optimal index factor (OIF) band selection (CTOIFBS) to select a band subset whose spectral wavelengths respond specifically to the targets of interest. Then, an underwater spectral imaging system integrated with the best-selected band subset was constructed for underwater target image acquisition. Finally, a constrained energy minimization (CEM) target detection algorithm was used to detect the desired underwater targets. The experimental results demonstrated that the acquisition and detection speed of the designed underwater spectral acquisition system using CTOIFBS was significantly improved over the original underwater hyperspectral image system without BS.
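The CEM detector used in the final step has a well-known closed form: it seeks a filter w minimizing the output energy subject to w^T d = 1 for the target signature d, giving w = R^{-1} d / (d^T R^{-1} d) with R the sample correlation matrix. A minimal sketch (data layout and variable names are ours, not the paper's):

```python
import numpy as np

def cem_detector(cube, d):
    """Constrained energy minimization (CEM) target detector.

    cube: (num_pixels, num_bands) hyperspectral data, one spectrum per row
    d:    (num_bands,) target spectral signature
    Returns per-pixel scores w^T x, where w = R^{-1} d / (d^T R^{-1} d)
    and R is the sample correlation matrix. By construction w^T d = 1,
    so pixels matching the target score near 1 while the background
    energy is suppressed.
    """
    X = np.asarray(cube, dtype=float)
    R = X.T @ X / X.shape[0]          # sample correlation matrix
    Rinv_d = np.linalg.solve(R, d)    # avoids forming R^{-1} explicitly
    w = Rinv_d / (d @ Rinv_d)         # unity-gain constraint on d
    return X @ w
```

A pixel whose spectrum equals d exactly scores 1.0 regardless of the background statistics, which is what the unity constraint guarantees.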
This paper proposed an attention-based spatial and spectral network with a PCA-guided, self-supervised feature extraction mechanism to detect changes in hyperspectral images. It consists of two steps: a self-supervised mapping from each patch of the difference map to the principal components of the central pixel of each patch with spatial features of differences extracted by a multilayer convolutional neural network in the first step, followed by an attention mechanism which calculates adaptive weights between spatial and spectral features of each pixel from concatenated spatial and spectral features in the second step. Finally, a joint analysis of the weighted spatial and spectral features was used to detect the changes of pixels in different positions. Experimental results on several real hyperspectral change detection data sets showed the effectiveness and advancement of the proposed method.
This paper proposed a constrained sparse representation-based spatio-temporal anomaly detection approach which extends anomaly detection (AD) from the spatial domain to the spatio-temporal domain. It includes a spatial detector to suppress moving background regions and a temporal detector to suppress non-homogeneous background and stationary objects, both of which maintain the effectiveness of the temporal detector for multiple targets in complex motion situations. Moreover, the smoothing and fusion of the spatial and temporal detection maps could adequately suppress background clutter and false alarms on the maps. Experiments conducted on a real dataset and a synthetic dataset showed that the proposed algorithm could accurately detect multiple targets with different velocities and dense targets with the same trajectory, and that it also outperforms other state-of-the-art algorithms in high-noise scenarios.
- III. Hyperspectral and Multispectral Fusion (three papers)
This paper proposed an image-fusion method based on joint-tensor decomposition (JTF), which is more effective and applicable when the degradation operators are unknown or difficult to estimate. Specifically, the proposed JTF method considers a super-resolution image (SRI) as a three-dimensional tensor and redefines the fusion problem as the joint estimation of the coupled factor matrices, which can also be expressed as a joint-tensor decomposition problem for the hyperspectral image tensor, the multispectral image tensor, and a noise regularization term. The JTF algorithm was then utilized to fuse the HSI and MSI so as to explore the problem of SRI reconstruction. The experimental results showed the superior performance of the proposed method in comparison with the current popular schemes.
This paper proposed a self-supervised, spectral–spatial residual network (SSRN) to fuse a low-spatial-resolution (LR) hyperspectral image (HSI) with a high-spatial-resolution (HR) multispectral image (MSI) to obtain HR HSIs. In particular, SSRN does not require HR HSIs as supervised information in training. SSRN considers the fusion of HR MSIs and LR HSIs as a pixel-wise spectral mapping problem wherein the spectral mapping between HR MSIs and HR HSIs can be approximated by the spectral mapping between LR MSIs (derived from HR MSIs) and LR HSIs. Then the spectral mapping between LR MSIs and LR HSIs was further explored by SSRN. Finally, a self-supervised fine-tuning strategy was proposed to transfer the learned spectral mapping to generate HR HSIs. Simulated and real hyperspectral databases were utilized to verify the performance of SSRN.
This paper developed a spatial-filter-based least squares estimation (LSE) smoothing filter-based intensity modulation (SFIM) algorithm to fuse a hyperspectral image (HSI) and a multispectral image (MSI). It first combines the LSE algorithm with the SFIM method to effectively improve the spatial information quality of the fused image. At the same time, in order to better maintain the spatial information, four spatial filters (mean, median, nearest, and bilinear) were applied to the simulated MSI image to extract fine spatial information. Experimental results on three HSI-MSI data sets using six image quality indexes showed the effectiveness of the proposed algorithm compared with three state-of-the-art HSI-MSI fusion algorithms (CNMF, HySure, and FUSE), while its computing time was much shorter.
- IV. Mid-wave Infrared Hyperspectral Imaging (two papers)
This paper presented an approach to measuring air temperature from mid-wave hyperspectral Fourier transform infrared (FTIR) imaging in the carbon dioxide absorption band (between 4.25 and 4.35 μm), where accurate visualization of the air temperature distribution can be useful for various thermal analyses in fields such as human health and the heat transfer of local areas. The proposed visual-air temperature (VisualAT) measurement is based on the observation that the carbon dioxide band shows zero transmissivity at short distances; based on the analysis of the radiative transfer equation in this band, only the path radiance due to air temperature survives. The brightness temperature of the received radiance provides the raw air temperature, and a spectral average followed by a spatial median–mean filter produces the final air temperature images.
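The brightness-temperature step is just an inversion of Planck's law: given the observed radiance at a wavelength, find the blackbody temperature that would emit it. A sketch under standard physical constants (the filtering and band-averaging details of the paper are not reproduced here):

```python
import numpy as np

H = 6.62607015e-34   # Planck constant [J s]
C = 2.99792458e8     # speed of light [m/s]
K = 1.380649e-23     # Boltzmann constant [J/K]

def planck_radiance(wavelength_m, temp_k):
    # Blackbody spectral radiance [W m^-2 sr^-1 m^-1] at a wavelength.
    a = 2.0 * H * C**2 / wavelength_m**5
    b = H * C / (wavelength_m * K * temp_k)
    return a / (np.exp(b) - 1.0)

def brightness_temperature(wavelength_m, radiance):
    """Invert Planck's law: the temperature of a blackbody emitting the
    observed radiance at this wavelength. In the opaque CO2 band, per
    the paper's radiative-transfer model, this approximates the air
    temperature along the path."""
    a = 2.0 * H * C**2 / wavelength_m**5
    return H * C / (wavelength_m * K * np.log1p(a / radiance))
```

The two functions are exact inverses, so a radiance computed at 300 K inverts back to 300 K; real measurements add sensor noise, which is what the subsequent median–mean filtering addresses.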
This paper presented a method for atmospheric transmittance–temperature–emissivity separation (AT2ES) using online mid-wave infrared hyperspectral images. Temperature and emissivity separation (TES) is a well-known problem in the remote sensing domain; however, previous approaches have applied an atmospheric correction process before TES using MODTRAN in the long-wave infrared band. Simultaneous online AT2ES starts with an approximation of the radiative transfer equation in the upper mid-wave infrared band. The highest-transmittance atmospheric band was used to estimate surface temperature, assuming highly emissive materials, and the lowest-transmittance atmospheric band (the CO2 absorption band) was used to estimate air temperature. Through onsite hyperspectral data regression, the atmospheric transmittance was obtained from the y-intercept, and the emissivity was separated using the observed radiance, the separated object temperature, the air temperature, and the atmospheric transmittance. The novelty of the proposed method is that it is the first attempt at simultaneous AT2ES and online separation without any prior knowledge or pre-processing. Mid-wave Fourier transform infrared (FTIR)-based outdoor experimental results validated the feasibility of the proposed AT2ES method.
- V. Hyperspectral Unmixing (one paper)
This paper proposed a new nonlinear unmixing method based on a general bilinear model. Instead of investing effort in designing more regularizers for the abundance fractions, a plug-and-play prior technique was developed to exploit the spatial correlation of abundance maps and nonlinear interaction maps. The numerical results on simulated data and a real hyperspectral dataset showed that the proposed method could improve the estimation of abundances dramatically compared with state-of-the-art nonlinear unmixing methods.
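For context, the generalized bilinear mixing model extends the linear mixing model with pairwise endmember interaction terms; a forward-model sketch (an illustration of the mixing model only, not the paper's plug-and-play solver):

```python
import numpy as np

def gbm_mixing(endmembers, abundances, gammas):
    """Generalized bilinear model (GBM) forward mixing.

    pixel = E @ a + sum_{i<j} gamma_ij * a_i * a_j * (e_i * e_j)
    endmembers: (bands, p) matrix E of endmember spectra
    abundances: (p,) abundance vector a
    gammas:     dict {(i, j): gamma_ij in [0, 1]} scaling second-order
                interactions; all gammas = 0 recovers the linear model.
    """
    E = np.asarray(endmembers, dtype=float)
    a = np.asarray(abundances, dtype=float)
    y = E @ a                              # linear mixing part
    p = E.shape[1]
    for i in range(p):
        for j in range(i + 1, p):          # bilinear interaction terms
            y += gammas.get((i, j), 0.0) * a[i] * a[j] * (E[:, i] * E[:, j])
    return y
```

Unmixing inverts this map: estimate a (and the interaction strengths) from observed pixels, which is where the paper's spatial plug-and-play priors come in.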
- VI. Hyperspectral Sensor Hardware Design (one paper)
This paper designed an innovative workflow that simplifies in-field spectral sampling and its real-time analysis for the identification of optimal spectral wavelengths, specifically for programmable hyperspectral sensors mounted on unmanned aerial vehicles (UAV-hyperspectral systems), which require a pre-selection of optimal bands when mapping new environments containing new target classes with unknown spectra. The proposed band-selection optimization workflow uses particle-swarm optimization with minimum estimated abundance covariance (PSO-MEAC) to identify the set of bands most appropriate for UAV-hyperspectral imaging in a given environment. The criterion function, MEAC, greatly simplifies in-field spectral data acquisition by requiring only a few target class signatures rather than extensive training samples for each class. The metaheuristic method was tested on an experimental site with a diversity of vegetation species and communities, and the optimal set of bands was found to suitably capture the spectral variations between target vegetation species and communities. The approach streamlines the pre-tuning of wavelengths in programmable hyperspectral sensors for mapping applications, and further reduces the total flight time in UAV-hyperspectral imaging, as obtaining information for an optimal subset of wavelengths is more efficient and requires less data storage and fewer computational resources for post-processing.
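One common reading of an estimated-abundance-covariance criterion: under a white-noise least-squares model, the abundance estimate covariance is proportional to (S^T S)^{-1} for the class-signature matrix S restricted to the candidate bands, so a band subset is scored by its trace (lower is better). The sketch below is a hedged illustration with an exhaustive search standing in for the paper's particle-swarm optimization; the exact MEAC formulation may differ:

```python
from itertools import combinations

import numpy as np

def meac(signatures, band_subset):
    """trace[(S^T S)^{-1}] over the selected bands: the estimated
    abundance covariance under a white-noise least-squares model
    (our hedged reading of the MEAC criterion). Lower is better."""
    S = signatures[list(band_subset), :]
    return float(np.trace(np.linalg.inv(S.T @ S)))

def best_band_subset(signatures, num_bands):
    """Exhaustive search over band subsets of a fixed size.

    signatures: (total_bands, num_classes) target class spectra.
    The paper replaces this brute-force loop with PSO so the search
    scales to realistic band counts.
    """
    return min(combinations(range(signatures.shape[0]), num_bands),
               key=lambda sub: meac(signatures, sub))
```

Note that the criterion only needs one signature per target class, which matches the paper's point that extensive per-class training samples are not required in the field.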
- VII. Hyperspectral Reconstruction (one paper)
This paper proposed a deep residual-augmented attentional u-shape network (RA2UN) for spectral reconstruction (SR) using several double-improved residual blocks (DIRB) instead of paired plain convolutional units. Specifically, a trainable spatial augmented attention (SAA) module was developed to bridge the encoder and decoder to emphasize the features in the informative regions. Furthermore, a channel-augmented attention (CAA) module embedded in the DIRB was also introduced to adaptively rescale and enhance residual learning by using first-order and second-order statistics for stronger feature representations. Finally, a boundary-aware constraint was employed to focus on the salient edge information and recover more accurate high-frequency details. Experimental results on four benchmark datasets demonstrated that the proposed RA2UN network outperformed the state-of-the-art SR methods in terms of quantitative measurements and perceptual comparison.
- VIII. Hyperspectral Image Visualization (one paper)
This paper proposed the use of a linear model for color formation to emulate the image acquisition process of a digital color camera and investigated the impact of the choice of spectral sensitivity curves on the visualization of hyperspectral images as RGB color images. In addition, a nonlinear model based on an artificial neural network was also proposed. With the proposed linear and nonlinear models, the impact and the intrinsic quality of the hyperspectral image visualization could be assessed based on the amount of information present in the image, quantified by color entropy, and the scene complexity, measured by color fractal dimension, both of which indicate the detail and texture characteristics of the image. The experiments compared the proposed approach against four other methods and demonstrated its superiority.
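The linear color-formation model amounts to integrating each pixel's spectrum against the camera's three sensitivity curves, which discretizes to a matrix product over the band axis. A minimal sketch (array layout and the sum-to-one normalization are our assumptions; the paper's nonlinear neural-network variant is not shown):

```python
import numpy as np

def hyperspectral_to_rgb(cube, sensitivities):
    """Project a hyperspectral cube to RGB with a linear camera model.

    cube:          (height, width, bands) radiance or reflectance values
    sensitivities: (bands, 3) spectral sensitivity curves for R, G, B
    Each output channel is a weighted sum over bands, i.e. a discrete
    version of integrating the scene spectra against the camera curves.
    """
    rgb = np.tensordot(cube, sensitivities, axes=([2], [0]))
    # Normalize so each sensitivity curve effectively sums to one.
    return rgb / sensitivities.sum(axis=0)
```

Swapping in different sensitivity curves changes the rendered colors without touching the data, which is exactly the design variable whose impact the paper evaluates via color entropy and color fractal dimension.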
- IX. Applications (four papers)
This paper applied advances in generative deep learning models to produce realistic synthetic hyperspectral vegetation data whilst maintaining class relationships. Specifically, a Generative Adversarial Network (GAN) was trained using the Cramér distance on two vegetation hyperspectral datasets, demonstrating the ability to approximate the distribution of the training samples. An augmented dataset consisting of synthetic and original samples was then used to train multiple classifiers, with increases in classification accuracy observed in almost all circumstances. Both datasets showed improvements in classification accuracy, ranging from a modest 0.16% for the Indian Pines dataset to a substantial increase of 7.0% for the New Zealand vegetation dataset.
This paper developed seven one-dimensional deep convolutional neural network (DCNN) models to determine the best classification features and classification models for the five disease classes of leaf blast in order to improve the accuracy of grading the disease. It first pre-processed the hyperspectral imaging data to extract rice leaf samples of five disease classes, and the number of samples was increased by data-augmentation methods; then, spectral feature wavelengths, vegetation indices, and texture features were obtained based on the amplified sample data, which were used to construct CNN-based models. Finally, the proposed models were compared and analyzed with the Inception V3, ZF-Net, TextCNN, and bidirectional gated recurrent unit (BiGRU); support vector machine (SVM); and extreme learning machine (ELM) models in order to determine the best classification features and classification models for different disease classes of leaf blast. The experimental results also showed that the DCNN models provided better classification capability for disease classification than the Inception V3, ZF-Net, TextCNN, BiGRU, SVM, and ELM classification models. The SPA + TFs-DCNN achieved the best classification accuracy with an overall accuracy (OA) and Kappa of 98.58% and 98.22%, respectively. In terms of the classification of the specific different disease classes, the F1-scores for diseases of classes 0, 1, and 2 were all 100%, while the F1-scores for diseases of classes 4 and 5 were 96.48% and 96.68%, respectively. This study provides a new method for the identification and classification of rice leaf blast and a research basis for assessing the extent of the disease in the field.
This paper developed a hyperspectral image technique that combines constrained energy minimization (CEM) and deep neural networks to detect defects in the spectral images of infected rice leaves, and compared the performance of each using the full spectral band, the selected bands, and a band expansion process (BEP) applied to the compressed spectral information of the selected bands. A total of 339 hyperspectral images were collected in this study; the results showed that six bands were sufficient for detecting early infestations of rice leaf folder (RLF), with a detection accuracy of 98% and a Dice similarity coefficient of 0.8, which provides advantages for the commercialization of this field.
This paper developed a hyperspectral insect damage-detection algorithm (HIDDA) that can automatically detect insect-damaged beans using only a few bands and one spectral signature. It used a push-broom visible-near infrared (VIS-NIR) hyperspectral sensor to obtain images of coffee beans, and takes advantage of recently developed constrained energy minimization (CEM)-based band selection methods coupled with two classifiers, support vector machine (SVM) and convolutional neural network (CNN), to select bands. The experiments showed that 850–950 nm is an important wavelength range for accurately identifying insect-damaged beans, and that HIDDA can indeed detect insect-damaged beans with only one spectral signature, which will provide an advantage in terms of practical applications and commercialization in the future.