
Ground-Based Cloud-Type Recognition Using Manifold Kernel Sparse Coding and Dictionary Learning.

1. Introduction

Clouds play an essential role in the circulation of water vapour and affect the earth's energy balance [1-3]. In the study of weather forecasting and climate change, clouds are regarded as a core factor [4]. Traditional cloud observation depends heavily on the observers' experience and is therefore time-consuming. The substantial development of hardware and digital imaging techniques makes it possible to observe clouds automatically and continuously. Compared with satellite images, ground-based images offer a high spatial resolution at the local scale [5].

Owing to the application of sky imagers and ceilometers, automatic observation of cloud cover and cloud-base height has been realized [6]. To identify the cloud type accurately and effectively, many attempts have recently been made to address this challenging issue [5-17]. Buch et al. [7] adopted texture, position, and pixel brightness features together from whole-sky images and classified the data with decision trees. To test texture feature extraction approaches, autocorrelation, co-occurrence matrices, edge frequency, Laws' features, and primitive length were applied for cloud recognition [8]. Calbo and Sabburg [5] presented statistical texture features, Fourier transform features, and thresholded image features to identify images taken by the whole-sky imager (WSI) and total sky imager (TSI). Heinle et al. [9] extracted 12 statistical features to represent the image's colour and texture and then employed the k-nearest-neighbour (KNN) classifier to distinguish seven different sky conditions. As a clear distinction exists in texture orientation between satellite images and ground-based images, Gabor-based multiple features were utilized for classification with a support vector machine (SVM) and achieved an overall accuracy of 88.3% [10]. Different from typical local binary patterns (LBPs), weighted local binary patterns (WLBPs) [11] were presented by fusing the variance of a local patch to enhance the contrast for recognizing cloud types. Cheng and Yu [12] combined statistical features and LBPs with a Bayesian classifier to perform block-based classification (BC). In addition to texture features, Liu et al. [13] employed 7 structure features from the edge image to describe the structural characteristics of infrared clouds. Zhuo et al. [14] indicated that using texture or structure features separately may not produce excellent classification performance; hence, both texture and structure features were captured to obtain the cloud type with SVM. Furthermore, Xia et al. [6] and Xiao et al. [15] proposed to use multiple features together, including colour, texture, and structure features, for cloud-type recognition, and their experiments validated that the integration of various features outperformed the alternatives. Physical features are also of great importance for representing clouds. Kazantzidis et al. [16] introduced the solar zenith angle, the total cloud coverage, the visible percentage of the sun, and the existence of rain in sky images to describe physical properties. Besides 12 image features extracted from the sky-camera image, Tato et al. [17] combined another 7 cloud-layer features from the ceilometer and adopted random forests for classification.

To represent the cloud image more effectively, Li et al. [18] put forward a discriminative model based on a bag of micro-structures (BoMS), which showed competitive performance in cloud-type recognition. To remedy the inability of BoMS to describe complex categories well, the duplex norm-bounded sparse representation model [19] was reported; it demonstrated promising classification performance and was shown to capture the most prominent patterns of complex cloud categories, thereby attaining higher accuracy.

Recently, the symmetric positive definite (SPD) matrix manifold has gained much popularity in action recognition, object detection, face recognition, etc. [20, 21]. In addition, sparse representation on SPD matrix manifolds has been applied in these areas to achieve better performance [22, 23]. Despite its effectiveness, the matrix-manifold approach has seldom been investigated for cloud classification [24].

In this paper, manifold kernel sparse coding and dictionary learning (MKSCDL) on the SPD matrix manifold is proposed for ground-based cloud classification. The rest of this paper is organized as follows: Section 2 introduces the datasets, and Section 3 describes the methodology of MKSCDL. Section 4 reports the experimental results and discussions. Finally, conclusions are summarized in Section 5.

2. Dataset

2.1. Zenithal Dataset. The zenithal dataset is provided by the National University of Defense Technology in China and was acquired from historical ground-based infrared images taken by the whole-sky infrared cloud-measuring system (WSIRCMS) [25]. The images are grouped into five categories according to the morphology and generating mechanism of the cloud [26]: stratiform, cumuliform, waveform, and cirriform clouds, and clear sky. The dataset comprises 100 images per category. Typical samples of 320 x 240 pixels from each category are shown in Figure 1.

2.2. SWIMCAT Dataset. The SWIMCAT dataset [27] contains 784 images taken by a daytime WSI called the wide-angle high-resolution sky-imaging system (WAHRSIS). The images are classified into 5 distinct categories: clear sky, patterned clouds, thick dark clouds, thick white clouds, and veil clouds, with 224, 89, 251, 135, and 85 images per category, respectively. The images were obtained from January 2013 to May 2014; they were selected based on visual characteristics and categorized with the help of experts from Singapore Meteorological Services. Representative samples of 125 x 125 pixels from each category are shown in Figure 2.

3. Method

In this section, the methodology is introduced in three main parts: feature extraction, dictionary learning, and classification, as illustrated in Figure 3.

3.1. Feature Extraction and Stein Kernel. Given an image I with a size of W x H, the feature image F is defined by computing d-dimensional features at every pixel:

F(x, y) = f(I, x, y), (1)

where f(I, x, y) is the feature mapping, for example:

f(I, x, y) = [I(x, y), |I_x|, |I_y|, |I_xx|, |I_yy|, sqrt(|I_x|^2 + |I_y|^2)]^T, (2)

where (x, y) is the pixel location; I(x, y) represents the pixel gray value; I_x, I_y, I_xx, and I_yy denote the first- and second-order derivatives of I(x, y) in the x and y directions, respectively; and sqrt(|I_x|^2 + |I_y|^2) is the modulus of the gradient.

The covariance descriptor (CovD) of the feature image F is computed by the following equation:

C = 1/(n - 1) Σ_{u=1}^{n} (f_u - μ)(f_u - μ)^T, (3)

where f_u is the d-dimensional feature vector at the u-th pixel, μ is the mean feature vector, and n = W × H is the number of pixels in the image.

In general, the CovD is an SPD matrix. The set of d × d SPD matrices forms a Riemannian manifold S_{++}^d when endowed with a Riemannian metric. Note that this SPD matrix is adopted as the extracted feature describing the image; it therefore differs from traditional features used for classification in Euclidean space.
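As an illustration of Equations (1)-(3), a minimal NumPy sketch follows (this is not the authors' code; the finite-difference derivatives and the small ridge term added for strict positive definiteness are our assumptions):

```python
import numpy as np

def covariance_descriptor(img):
    """Compute the CovD (Equation (3)) of a grayscale image.

    Per-pixel features (Equation (2)): intensity, |Ix|, |Iy|,
    |Ixx|, |Iyy|, and the gradient modulus.
    """
    I = np.asarray(img, dtype=float)
    Iy, Ix = np.gradient(I)           # first-order derivatives (rows = y, cols = x)
    Iyy, _ = np.gradient(Iy)          # second-order derivatives
    _, Ixx = np.gradient(Ix)
    mod = np.sqrt(Ix**2 + Iy**2)      # modulus of the gradient
    feats = np.stack([I, np.abs(Ix), np.abs(Iy),
                      np.abs(Ixx), np.abs(Iyy), mod], axis=-1)
    f = feats.reshape(-1, feats.shape[-1])        # n x d feature matrix
    mu = f.mean(axis=0)                           # mean feature vector
    C = (f - mu).T @ (f - mu) / (f.shape[0] - 1)  # Equation (3)
    # A tiny ridge keeps C strictly positive definite for the manifold step.
    return C + 1e-6 * np.eye(C.shape[1])
```

The returned 6 × 6 matrix is the point on the SPD manifold representing the image.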

In this paper, we adopt the Stein divergence as a Riemannian metric, and the SPD matrix manifold is mapped into the reproducing kernel Hilbert space (RKHS). The Stein divergence S is defined as follows:

S(A, B) = log det((A + B)/2) - (1/2) log det(AB), (4)

where A and B are the points on the SPD matrix manifold, and S(A, B) measures the distance between these two points.

The Stein kernel is defined as follows:

K(A, B) = exp(-β · S(A, B)). (5)

It is a positive definite kernel for certain choices of [beta] > 0 [28]. With the Stein kernel, we can map the SPD manifold into RKHS:

φ: S_{++}^d → H, A ↦ φ(A), with K(A, B) = φ(A)^T φ(B). (6)
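A minimal NumPy sketch of Equations (4) and (5) (function names are ours; `slogdet` is used for numerical stability on SPD matrices):

```python
import numpy as np

def stein_divergence(A, B):
    """Stein divergence S(A, B) = log det((A+B)/2) - 0.5 log det(AB), Equation (4)."""
    _, ld_mid = np.linalg.slogdet((A + B) / 2)
    _, ld_A = np.linalg.slogdet(A)
    _, ld_B = np.linalg.slogdet(B)
    return ld_mid - 0.5 * (ld_A + ld_B)

def stein_kernel(A, B, beta=0.1):
    """Stein kernel K(A, B) = exp(-beta * S(A, B)), Equation (5)."""
    return np.exp(-beta * stein_divergence(A, B))
```

Note that S(A, A) = 0 and hence K(A, A) = 1, a fact the derivation in Section 3.2.1 relies on.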

3.2. Kernel Sparse Coding and Dictionary Learning. In this section, we give the framework for manifold kernel sparse coding and dictionary learning (MKSCDL), outlined in Algorithm 1. Let X = {x_1, x_2, ..., x_N} denote N points on S_{++}^d and D = {d_1, d_2, ..., d_k} be a dictionary with k atoms. With the Stein kernel, we update the dictionary D by iterating two steps: kernel sparse coding and kernel dictionary learning. The model is an optimization problem with an ℓ1 constraint:

min_{D, α} ||φ(x_m) - φ(D)α||_2^2 + λ||α||_1, (7)

where x_m is a sample from X, α is the sparse coefficient vector, λ is a regularization parameter, φ(D) = [φ(d_1), ..., φ(d_k)], and ||φ(x_m) - φ(D)α||_2^2 denotes the reconstruction error.

3.2.1. Kernel Sparse Coding. When the dictionary is fixed, the sparse coding problem is formulated as follows:

min_α ||φ(x_m) - φ(D)α||_2^2 + λ||α||_1. (8)

The iterative shrinkage-thresholding algorithm (ISTA) [30] is adopted for the optimization solution.

Let h(α) = ||φ(x_m) - φ(D)α||_2^2; then the sparse vector is updated as follows:

α^(s) = T_{λt_s}(α^(s-1) - t_s ∇h(α^(s-1))), (9)

where t_s is the step size, α^(s) is the sparse coefficient at the s-th iteration, and the shrinkage operator is defined as follows:

T_q(g) = (|g| - q)_+ sign(g). (10)

Equation (10) is equivalent to T_q(g) = max(|g| - q, 0) · sign(g), applied elementwise.
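The shrinkage (soft-thresholding) operator acts elementwise; a one-line NumPy sketch (illustrative only):

```python
import numpy as np

def shrink(g, q):
    """Soft-thresholding T_q(g) = max(|g| - q, 0) * sign(g), Equation (10)."""
    return np.maximum(np.abs(g) - q, 0.0) * np.sign(g)
```

Entries whose magnitude does not exceed q are set exactly to zero, which is what makes the resulting code sparse.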

Now the problem reduces to calculating the gradient of h(α) with respect to α. Expanding h(α) gives

h(α) = K(x_m, x_m) - 2α^T φ(D)^T φ(x_m) + α^T φ(D)^T φ(D)α. (11)

As K(x_m, x_m) = φ(x_m)^T φ(x_m) = 1, the gradient of h(α) with respect to α is

∇h(α) = -2φ(D)^T φ(x_m) + 2φ(D)^T φ(D)α. (12)

The first term of Equation (12) is

φ(D)^T φ(x_m) = [K(d_1, x_m), K(d_2, x_m), ..., K(d_k, x_m)]^T. (13)

Similarly, the second term of Equation (12) is

φ(D)^T φ(D)α = K_DD α, where K_DD is the k × k Gram matrix with (i, j) entry K(d_i, d_j). (14)

As a result, ∇h(α) is obtained by substituting Equations (13) and (14) into Equation (12), and the sparse code is updated with Equation (9).
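Because Equations (12)-(14) involve φ only through kernel evaluations, kernel sparse coding needs just the Gram matrix K_DD and the kernel vector [K(d_i, x_m)]. A minimal ISTA sketch under these assumptions (not the authors' implementation; the step size and iteration count are illustrative and must satisfy the usual ISTA step-size condition):

```python
import numpy as np

def kernel_ista(K_DD, k_Dx, lam=0.01, t=1e-4, n_iter=1000):
    """Kernel sparse coding (Equations (8)-(14)) via ISTA.

    K_DD : (k, k) Gram matrix with entries K(d_i, d_j)  -- Equation (14)
    k_Dx : (k,)  vector with entries K(d_i, x_m)        -- Equation (13)
    """
    alpha = np.zeros_like(k_Dx)
    for _ in range(n_iter):
        grad = -2.0 * k_Dx + 2.0 * K_DD @ alpha              # Equation (12)
        g = alpha - t * grad                                  # gradient step
        alpha = np.maximum(np.abs(g) - lam * t, 0.0) * np.sign(g)  # Eqs. (9)-(10)
    return alpha
```

With K_DD = I and k_Dx = e_1, the code converges to a single active atom with coefficient 1 - λ/2, as the fixed-point condition of Equation (9) predicts.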

3.2.2. Kernel Dictionary Learning. First, the initial dictionary is obtained by k-means on the Riemannian manifold using the Fréchet mean [29]. It randomly selects k points from the training data as initial cluster centers. Then, with distances measured by the Stein divergence, every point is assigned to its closest cluster center, and each center is recomputed iteratively as the Fréchet mean via the following equation:

m_j = argmin_{m ∈ S_{++}^d} Σ_{i=1}^{w} S(x_i, m), (15)

where {x_i}_{i=1}^w ⊂ S_{++}^d is the set of SPD matrices assigned to the j-th cluster and m_j is the updated cluster center. Ultimately, {m_j}_{j=1}^k forms the initial codebook D_0.

When the sparse coefficient is fixed, the dictionary is updated atom by atom, and the dictionary learning problem is formulated as follows:

min_{d_i} J(D) = Σ_{m=1}^{N} ||φ(x_m) - φ(D)α_m||_2^2. (16)

We use ISTA [30] to update the dictionary as well. The Euclidean gradient of J with respect to d_i is

∇J(d_i) = Σ_{m=1}^{N} 2α_{mi} [Σ_{l=1}^{k} α_{ml} ∇_{d_i}K(d_i, d_l) - ∇_{d_i}K(d_i, x_m)], with ∇_{d_i}K(d_i, B) = -(β/2) K(d_i, B)[((d_i + B)/2)^{-1} - d_i^{-1}], (17)

where α_{mi} denotes the i-th entry of α_m.

As proved in [22], the Riemannian gradient grad J ([d.sub.i]) is

grad J(d_i) = d_i ∇J(d_i) d_i. (18)

The i-th atom [d.sub.i] is updated as follows:

d_i^(p) = d_i^(p-1) - t_p grad J(d_i^(p-1)), (19)

where [t.sub.p] is the step size and [d.sup.(p).sub.i] represents the i-th atom at the p-th iteration.

3.3. Classification. After the dictionary is learned from the training set, the sparse coefficients and reconstruction errors (RE) of the testing samples are obtained to predict the cloud type. Algorithm 2 details the classification procedure.

Since there are c classes of samples, there are likewise c class dictionaries, each updated independently as detailed in Section 3.2.2.

The sparse coefficient α*_i of the testing sample x_t on the i-th class dictionary D_i is computed as follows:

α*_i = argmin_α ||φ(x_t) - φ(D_i)α||_2^2 + λ||α||_1. (20)

With the sparse coefficients on each class dictionary, the smallest reconstruction error indicates the class to which the testing sample belongs:

label(x_t) = argmin_i ||φ(x_t) - φ(D_i)α*_i||_2^2. (21)

In this paper, the number of cloud classes c is 5 for both datasets, and the reconstruction errors on the 5 class dictionaries are compared to decide the cloud type.
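Since ||φ(x_t) - φ(D_i)α||_2^2 = 1 - 2α^T k_i + α^T K_i α, where k_i and K_i are the kernel vector and Gram matrix of the i-th class dictionary, the decision rule of Equation (21) expands entirely in kernel values. A sketch (illustrative only; the per-class codes α*_i are assumed precomputed with the kernel sparse coding step):

```python
import numpy as np

def classify(K_DD_list, k_Dx_list, alpha_list):
    """Pick the class whose dictionary reconstructs x_t best (Equation (21)).

    For class i: RE_i = 1 - 2 a_i . k_i + a_i^T K_i a_i,
    using K(x_t, x_t) = 1 for the Stein kernel.
    """
    errors = []
    for K_i, k_i, a_i in zip(K_DD_list, k_Dx_list, alpha_list):
        errors.append(1.0 - 2.0 * (a_i @ k_i) + a_i @ K_i @ a_i)
    return int(np.argmin(errors))  # index of the predicted class
```

The index of the smallest reconstruction error is returned as the predicted cloud type.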

4. Results and Discussions

In this section, the performance of MKSCDL is compared with the baselines [11, 12, 27] under the same experimental setup on two image datasets: the zenithal dataset, captured by WSIRCMS, and the SWIMCAT dataset, gathered by WAHRSIS. Each experiment is run three times with 10-fold cross validation, and average values are reported as final results.

Note that the feature map f is defined differently on the two datasets owing to the different nature of grayscale and colour images. For the grayscale images in the zenithal dataset, f(I, x, y) is defined by Equation (2); for the RGB images in the SWIMCAT dataset, f(I, x, y) = [f(I_R, x, y)^T, f(I_G, x, y)^T, f(I_B, x, y)^T]^T, where I_R, I_G, and I_B are the intensity images of the three channels, respectively. The parameters used in our experiments are set empirically as follows: β = 0.1, λ = 0.01, t_s = 10^-4, and t_p = 10^-4.

4.1. Results of the Zenithal Dataset

4.1.1. Performance of MKSCDL. The first experiment is carried out on the zenithal dataset. The performance of MKSCDL varies with the number of atoms in the dictionary. Figure 4 shows the overall accuracy of the updated dictionary with different sizes. When the dictionary size k equals 14, the overall accuracy reaches 96.33%, outperforming the other cases.

Figure 5 reports the confusion matrix when k = 14. The element in row i and column j of the confusion matrix is the percentage of the i-th class recognized as the j-th class, so the diagonal elements correspond to the recognition rates of the categories. All stratiform clouds and 99.3% of clear-sky images are identified correctly, which means these two types possess prominent, distinguishable characteristics. Likewise, the recognition rates of cumuliform, waveform, and cirriform clouds exceed 93%. On the whole, MKSCDL identifies the ground-based grayscale cloud images with rather high accuracy.

4.1.2. Performance Comparison with the Baselines. To assess the effectiveness of MKSCDL further, WLBP [11] and BC [12] are applied for comparison:

(i) WLBP [11] is the method fusing the variance of a local patch into LBP. The KNN classifier is employed for cloud classification based on the chi-square distance.

(ii) BC [12] integrates statistical and local texture features and adopts the Bayesian classifier with regularized discriminant analysis. Note that the statistical features have only 8 dimensions because the infrared images lack colour information.
Algorithm 1: Framework for MKSCDL.

Input: SPD matrices X and atom number of the dictionary k
Output: updated dictionary D

Initialize the dictionary D_0 by k-means on the Riemannian manifold using the Fréchet mean [29]
while not converged do
    // Kernel sparse coding
    s ← 1
    while not converged do
        α^(s) ← T_{λt_s}(α^(s-1) - t_s ∇h(α^(s-1)))
        s ← s + 1
    end
    // Kernel dictionary learning
    for i = 1 to k do
        p ← 1
        while not converged do
            d_i^(p) ← d_i^(p-1) - t_p grad J(d_i^(p-1))
            p ← p + 1
        end
    end
end
Return D = {d_i}_{i=1}^k


Table 1 presents the comparison results of 10-fold cross validation. The performance of MKSCDL exceeds that of the other two methods, especially for cumuliform, waveform, and cirriform clouds, and MKSCDL achieves the highest overall accuracy of 96.33%. This indicates that the dictionary is learned well and each sample is represented adequately on the dictionary of its own class rather than on those of other classes, which contributes to the competitive performance.

4.2. Results of the SWIMCAT Dataset

4.2.1. Performance of MKSCDL. The second experiment is conducted on the SWIMCAT dataset. Similar to the first experiment, Figure 6 shows the overall accuracy of the learned dictionary with different sizes. As the dictionary size increases, the overall accuracy improves in general. When k is 20, MKSCDL performs best with an overall accuracy of 98.34%. Considering both classification performance and computational cost, k = 20 satisfies the experimental requirement.

Figure 7 shows the confusion matrix when k = 20. Patterned clouds and thick white clouds possess obvious discriminative characteristics, with an accuracy of 100%. Likewise, clear sky and thick dark clouds achieve over 98%. In addition, the challenging veil clouds, which resemble clear sky [27], attain a decent 92.94%. The misclassification rate for each class is rather low, which means the proposed method works well overall in categorizing ground-based RGB images.

4.2.2. Performance Comparison with the Baselines. Besides the two baselines mentioned in Section 4.1.2, the texton-based method [27], which integrates both colour and texture features, is used for comparison as well. Note that, different from the grayscale images, the statistical features extracted from colour images in the BC method have 12 dimensions.

Table 2 presents the comparison results of 10-fold cross validation. By contrast, MKSCDL performs better than WLBP and BC overall. It is clear that MKSCDL has a strong power for the task of cloud categorization of the SWIMCAT dataset.
Algorithm 2: Framework for cloud-type classification.

Input: testing sample x_t and learned dictionaries {D_i}_{i=1}^c
Output: predicted cloud-type label(x_t)
for i = 1 to c do
    // Kernel sparse coding
    α*_i ← argmin_α ||φ(x_t) - φ(D_i)α||_2^2 + λ||α||_1    // Equation (20)
    // Computing reconstruction error
    RE_i ← ||φ(x_t) - φ(D_i)α*_i||_2^2
end
label(x_t) ← argmin_i RE_i    // Equation (21)
Return label(x_t)


To compare with the texton-based approach fairly, we also run the experiment with the same configuration as in [27], which randomly chooses 40 training images and another 45 testing images per category (40/45). Table 3 lists the results. Compared with Table 2, the overall performance of the different methods changes little. The texton-based method achieves perfect accuracy for all categories except veil clouds, whose accuracy remains to be improved; with MKSCDL, every cloud type attains a fairly high recognition rate.

Overall, in comparison with the other three methods, MKSCDL is validated as the most effective in recognizing the ground-based colour images.

5. Conclusions

In this paper, a novel cloud classification method on manifolds, named MKSCDL, is proposed. The SPD matrix extracted from each image acts as the image feature. To preserve the non-Euclidean geometry of the SPD matrices, kernel sparse coding and dictionary learning are conducted to obtain a representative dictionary. The testing sample's reconstruction errors on the class dictionaries are compared to identify the cloud type. Comparing recent methods on the grayscale and colour datasets, it is interesting that WLBP performs better on grayscale images, while BC performs better on colour images. Comparatively, MKSCDL is suitable for both grayscale and colour images and shows a high capacity for automatic ground-based cloud classification.

The proposed MKSCDL method can be applied in practice, with visual observations adopted for comparison and for evaluating the automatic classification. With limited images, it may not produce perfect cloud-type recognition; as the dataset becomes more representative and adequate, it should work better and satisfy the task well. In the future, several improvements are worth considering. For feature extraction, more features such as the gray-level co-occurrence matrix and Gabor filtering could be fused into the feature mapping to describe the image better. For dictionary learning, the interclass difference could be exploited to learn a more discriminative dictionary. Moreover, the samples' sparse coefficients could be used to build an SVM model. In addition, complex sky conditions containing multiple cloud categories deserve attention in future work.

https://doi.org/10.1155/2018/9684206

Data Availability

The SWIMCAT dataset is available at http://vintage.winklerbros.net/swimcat.html, and the zenithal dataset used to support the findings of this study is available from the first author (qixiang_luo@aliyun.com) upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

The authors would like to thank Prof. Yee Hui Lee for providing the SWIMCAT dataset and Dr. Lei Liu for polishing the manuscript. This work was supported by the National Natural Science Foundation of China (Grant nos. 61473310, 41174164, and 41575024).

References

[1] G. L. Stephens, "Cloud feedbacks in the climate system: a critical review," Journal of Climate, vol. 18, no. 2, pp. 237-273, 2005.

[2] A. J. Teuling, C. M. Taylor, J. F. Meirink et al., "Observational evidence for cloud cover enhancement over western European forests," Nature Communications, vol. 8, article 14065, 2017.

[3] B. A. Baum, P. F. Soulen, K. I. Strabala et al., "Remote sensing of cloud properties using MODIS airborne simulator imagery during SUCCESS: 2. cloud thermodynamic phase," Journal of Geophysical Research Atmospheres, vol. 105, no. 9, pp. 11767-11780, 2000.

[4] J. T. Houghton, Y. Ding, D. J. Griggs et al., Climate Change 2001: The Scientific Basis, Cambridge University Press, Cambridge, UK, 2001.

[5] J. Calbo and J. Sabburg, "Feature extraction from whole-sky ground-based images for cloud-type recognition," Journal of Atmospheric & Oceanic Technology, vol. 25, no. 1, pp. 3-14, 2008.

[6] M. Xia, W. Lu, J. Yang, Y. Ma, W. Yao, and Z. Zheng, "A hybrid method based on extreme learning machine and k-nearest neighbor for cloud classification of ground-based visible cloud image," Neurocomputing, vol. 160, pp. 238-249, 2015.

[7] K. A. J. Buch, C. H. Sun, and L. R. Thorne, "Cloud classification using whole-sky imager data," in Proceedings of the Fifth Atmospheric Radiation Measurement (ARM), Science Team Meeting, San Diego, CA, USA, 1995.

[8] M. Singh and M. Glennen, "Automated ground-based cloud recognition," Pattern Analysis and Applications, vol. 8, no. 3, pp. 258-271, 2005.

[9] A. Heinle, A. Macke, and A. Srivastav, "Automatic cloud classification of whole sky images," Atmospheric Measurement Techniques, vol. 3, no. 3, pp. 557-567, 2010.

[10] R. Liu and W. Yang, "A novel method using gabor-based multiple feature and ensemble SVMs for ground-based cloud classification," in Proceedings of Seventh International Symposium on Multispectral Image Processing and Pattern Recognition (MIPPR2011), Guilin, China, December 2011.

[11] S. Liu, Z. Zhang, and X. Mei, "Ground-based cloud classification using weighted local binary patterns," Journal of Applied Remote Sensing, vol. 9, no. 1, article 095062, 2015.

[12] H. Y. Cheng and C. C. Yu, "Block-based cloud classification with statistical features and distribution of local texture features," Atmospheric Measurement Techniques, vol. 8, no. 3, pp. 1173-1182, 2015.

[13] L. Liu, X. Sun, F. Chen, S. Zhao, and T. Gao, "Cloud classification based on structure features of infrared images," Journal of Atmospheric & Oceanic Technology, vol. 28, no. 3, pp. 410-417, 2011.

[14] W. Zhuo, Z. Cao, and Y. Xiao, "Cloud classification of ground-based images using texture-structure features," Journal of Atmospheric & Oceanic Technology, vol. 31, no. 1, pp. 79-92, 2014.

[15] Y. Xiao, Z. Cao, W. Zhuo, L. Ye, and L. Zhu, "mCLOUD: a multiview visual feature extraction mechanism for ground based cloud image categorization," Journal of Atmospheric & Oceanic Technology, vol. 33, no. 4, pp. 789-801, 2016.

[16] A. Kazantzidis, P. Tzoumanikas, A. F. Bais, S. Fotopoulos, and G. Economou, "Cloud detection and classification with the use of whole-sky ground-based images," Atmospheric Research, vol. 113, no. 1, pp. 80-88, 2012.

[17] J. H. Tato, F. J. R. Benitez, C. A. Barrena, R. A. Mur, I. G. Leon, and D. P. Vazquez, "Automatic cloud-type classification based on the combined use of a sky camera and a ceilometer," Journal of Geophysical Research: Atmospheres, vol. 122, no. 20, pp. 11045-11061, 2017.

[18] Q. Li, Z. Zhang, W. Lu, J. Yang, Y. Ma, and W. Yao, "From pixels to patches: a cloud classification method based on a bag of micro-structures," Atmospheric Measurement Techniques, vol. 9, no. 2, pp. 753-764, 2016.

[19] J. Gan, W. Lu, Q. Li et al., "Cloud type classification of total-sky images using duplex norm-bounded sparse coding," IEEE Journal of Selected Topics in Applied Earth Observations & Remote Sensing, vol. 10, no. 7, pp. 3360-3372, 2017.

[20] S. Jayasumana, R. Hartley, M. Salzmann, H. Li, and M. T. Harandi, "Kernel methods on riemannian manifolds with gaussian RBF kernels," IEEE Transactions on Pattern Analysis & Machine Intelligence, vol. 37, no. 12, pp. 2464-2477, 2015.

[21] M. Faraki, M. Palhang, and C. Sanderson, "Log-Euclidean bag of words for human action recognition," IET Computer Vision, vol. 9, no. 3, pp. 331-339, 2014.

[22] A. Cherian and S. Sra, "Riemannian dictionary learning and sparse coding for positive definite matrices," IEEE Transactions on Neural Networks & Learning Systems, vol. 28, no. 12, pp. 2859-2871, 2017.

[23] Y. Wu, Y. Jia, P. Li, J. Zhang, and J. Yuan, "Manifold kernel sparse representation of symmetric positive-definite matrices and its applications," IEEE Transactions on Image Processing, vol. 24, no. 11, pp. 3729-3741, 2015.

[24] Q. Luo, Y. Meng, L. Liu, X. Zhao, and Z. Zhou, "Cloud classification of ground-based infrared images combining manifold and texture features," Atmospheric Measurement Techniques Discussions, vol. 11, pp. 5351-5361, 2018.

[25] X. J. Sun, T. C. Gao, D. L. Zhai, S. J. Zhao, and J. G. Lian, "Whole sky infrared cloud measuring system based on the uncooled infrared focal plane array," Infrared & Laser Engineering, vol. 37, no. 5, pp. 760-764, 2008.

[26] X. Sun, L. Liu, T. Gao, and S. Zhao, "Classification of whole sky infrared cloud image based on the LBP operator," Transactions of Atmospheric Sciences, vol. 32, no. 4, pp. 490-497, 2009.

[27] S. Dev, Y. H. Lee, and S. Winkler, "Categorization of cloud image patches using an improved texton-based approach," in Proceedings of IEEE International Conference on Image Processing (ICIP), Quebec City, Canada, September 2015.

[28] S. Zhang, S. Kasiviswanathan, P. C. Yuen, and M. T. Harandi, "Online dictionary learning on symmetric positive definite manifolds with vision applications," in Proceedings of Twenty-Ninth AAAI Conference on Artificial Intelligence, pp. 3165-3173, AAAI Press, Austin, TX, USA, January 2015.

[29] M. Faraki, M. T. Harandi, and F. Porikli, "More about VLAD: a leap from euclidean to riemannian manifolds," in Proceedings of Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, June 2015.

[30] A. Beck and M. Teboulle, "A fast iterative shrinkage-thresholding algorithm for linear inverse problems," SIAM Journal on Imaging Sciences, vol. 2, no. 1, pp. 183-202, 2009.

Qixiang Luo, Zeming Zhou, Yong Meng, Qian Li, and Miaoying Li

College of Meteorology and Oceanology, National University of Defense Technology, Nanjing 211101, China

Correspondence should be addressed to Zeming Zhou; zhou_zeming@yahoo.com

Received 2 May 2018; Accepted 6 September 2018; Published 16 October 2018

Academic Editor: Stefania Bonafoni

Caption: Figure 1: Sample images from the zenithal dataset. (a) Stratiform clouds. (b) Cumuliform clouds. (c) Waveform clouds. (d) Cirriform clouds. (e) Clear sky.

Caption: Figure 2: Sample images from the SWIMCAT dataset. (a) Clear sky. (b) Patterned clouds. (c) Thick dark clouds. (d) Thick white clouds. (e) Veil clouds.

Caption: Figure 3: Framework of the proposed cloud classification method.

Caption: Figure 4: The overall accuracy on the updated dictionary with different sizes.

Caption: Figure 6: The overall accuracy on the updated dictionary with different sizes.
Table 1: Recognition rates of different methods in 10-fold cross validation (%).

Methods   Stratiform   Cumuliform   Waveform   Cirriform   Clear sky   Overall

WLBP         100         87.84       89.41       92.51       99.06      93.56
BC           100         58.04       81.10       75.73       99.04      82.80
MKSCDL       100         93.47       95.72       93.27       99.30      96.33

Table 2: Recognition rates of different methods in 10-fold cross validation (%).

Methods   Clear sky   Patterned   Thick dark   Thick white   Veil    Overall

WLBP        88.77        100        82.56         98.51      77.27    88.50
BC          98.66       94.27       95.96         90.79      91.52    95.17
MKSCDL      98.71        100        98.50          100       92.44    98.34

Table 3: Recognition rates of different methods in 40/45 (%).

Methods        Clear sky   Patterned   Thick dark   Thick white   Veil    Overall

WLBP             84.44       95.56       62.22         97.78      77.78    83.56
BC                100        95.56       97.78         88.89      86.67    93.78
Texton-based      100         100        98.00          100       78.00    95.00
MKSCDL            100         100        95.56         97.78      95.56    97.78

Figure 5: The confusion matrix (%) of MKSCDL on the zenithal dataset. The labels 1-5 refer to the corresponding cloud types: 1-stratiform clouds, 2-cumuliform clouds, 3-waveform clouds, 4-cirriform clouds, and 5-clear sky.

        1        2        3        4        5
1   100.00    0.00     0.00     0.00     0.00
2     0.00   93.47     4.46     2.07     0.00
3     0.00    3.05    95.72     1.23     0.00
4     0.98    0.00     3.55    93.27     2.20
5     0.00    0.00     0.00     0.70    99.30

Figure 7: The confusion matrix (%) of MKSCDL on the SWIMCAT dataset. The labels 1-5 refer to the corresponding cloud types: 1-clear sky, 2-patterned clouds, 3-thick dark clouds, 4-thick white clouds, and 5-veil clouds.

        1        2        3        4        5
1    98.71    0.00     1.29     0.00     0.00
2     0.00  100.00     0.00     0.00     0.00
3     1.50    0.00    98.50     0.00     0.00
4     0.00    0.00     0.00   100.00     0.00
5     3.10    0.00     1.10     2.86    92.94
COPYRIGHT 2018 Hindawi Limited
Publication: Advances in Meteorology, 2018