
Deep Learning with Taxonomic Loss for Plant Identification.

1. Introduction

As a main form of life on Earth, plants play an indispensable role in the ecosystem and help ensure the sustainable development of human society. Plant identification is a crucial component of the plant ecological research workflow and the foundation for protecting and developing plant diversity. For the general public, identifying plants and learning about them is also an interesting and rewarding experience. Although there are several methods of identifying plants, including taxonomic keys, written descriptions, specimen comparison, and image comparison, expert determination is usually necessary [1]. Moreover, because of the large number of plant species and the low readability of taxonomic information, taxonomic knowledge and species identification skills are restricted to a limited and shrinking number of people [2, 3]. Thus, even for experts with professional botanical knowledge, it is impractical to identify all plant species by manual identification methods, and for non-experts it is even more infeasible.

Image-based automatic plant identification has become a hot spot in the field of computer vision [4]. In contrast to the coarse-grained ImageNet [5] classification task, image-based plant identification is a fine-grained classification task which aims to distinguish the family, genus, and most specific species. In the past, nearly all machine learning methods relied on hand-crafted visual features (e.g., leaf vein and petal shape) [6-9]; the manual feature-engineering process was time-consuming and the extracted features were possibly incomplete. Moreover, these methods suffered from poor generalization for large-scale plant identification in complex environments. Recently, progress in deep learning [10-14] has demonstrated outstanding performance on automatic feature extraction through data-driven approaches. Many works have turned their attention to the combination of plant identification and neural networks, which significantly boosted the accuracy of large-scale plant recognition [15-17].

So far, the most commonly adopted deep learning training method for plant classification has been one-hot encoding with cross-entropy loss, which uses only one level of labels from the taxonomic tree, such as species, and ignores the strong intra-family/genus similarities. By this means, the hierarchical structure of the taxonomic tree is totally neglected: the optimizer can only optimize the model according to species-level information alone, without the rich supervision information derived from the taxonomic tree. The models of these methods predict the most specific species directly [17-19], while human experts generally identify plants from coarse to fine by matching the family, genus, and species along the taxonomic tree progressively. In practice, it is also useful to identify the family/genus correctly even if the species prediction is wrong.

Inspired by the hierarchical structure of the taxonomic tree, the taxonomic loss is proposed to encode the taxonomic tree into the objective function of deep learning training. The training algorithm can then optimize the model with more supervision information derived from the hierarchical labels. The proposed method is easy to implement, compatible with end-to-end training, and effectively improves the performance of plant classification models. In summary, the two contributions of this paper are as follows:

(1) The taxonomic loss encoded taxonomic tree into the objective function by simple group and sum operation, which was easy to implement and compatible with end-to-end training.

(2) The taxonomic loss facilitated the training of various deep neural networks, which further increased plant identification accuracies at species, genus, and family levels.

2. Materials and Dataset

Two editions of the PlantCLEF dataset (PlantCLEF 2015 [20] and 2017 [21]) were used to evaluate the performance of the proposed method; their images were collected from different locations by distinct contributors. Each image belongs to one of seven content types (e.g., flower, fruit, and stem) and was annotated with hierarchical family, genus, and species labels according to the taxonomic tree, which organizes plants hierarchically in a coarse-to-fine fashion. Therefore, the PlantCLEF datasets are suitable for evaluating the proposed algorithm at three levels of granularity along the taxonomic tree.

The PlantCLEF 2015 dataset contains 113,205 images of 1,000 species and was divided into a training set and a testing set by the contest host. The training set of PlantCLEF 2017 consists of two subsets: a "trusted" set and a "noisy" set. Since this paper focuses on supervised training with ground truth, only the "trusted" set was used for the experiments; it contains 256,287 images of 10,000 species. One-tenth of the samples from each species were randomly selected for the testing set. Table 1 shows the details of the datasets used in this paper.
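The per-species holdout described above can be sketched as follows. This is a minimal illustration, not the contest code; the function name `per_species_split` and the `(image_id, species_label)` pair format are assumptions.

```python
import random
from collections import defaultdict

def per_species_split(samples, test_frac=0.1, seed=0):
    """Randomly hold out a fraction of the images of each species,
    mirroring the one-tenth per-species test split described above.
    `samples` is a list of (image_id, species_label) pairs."""
    rng = random.Random(seed)
    by_species = defaultdict(list)
    for img, label in samples:
        by_species[label].append(img)
    train, test = [], []
    for label, imgs in by_species.items():
        rng.shuffle(imgs)
        n_test = max(1, round(len(imgs) * test_frac))
        test += [(img, label) for img in imgs[:n_test]]
        train += [(img, label) for img in imgs[n_test:]]
    return train, test
```

Splitting per species rather than globally keeps every species represented in the test set, which matters when class sizes are highly imbalanced.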

3. Taxonomic Loss for Deep Learning

Figure 1 shows the end-to-end training pipeline of deep learning plant identification with two different loss modules. First, each image was randomly augmented, resized to a fixed resolution, and fed into the convolutional neural network (CNN) to extract high-dimensional features through multiple layers of abstraction. Next, the loss module was applied to compare the CNN predictions with the ground truth. Finally, the network parameters were updated by the optimizer according to the loss value.

The most commonly adopted loss module [22] is shown in Figure 1(a); it generates a loss based on only one level of labels, usually the species-level labels. The CNN output was connected to a fully connected (FC) layer with n neurons to produce n-bit species logits, where n is the number of species. After the softmax function, the n-bit species logits were converted into n-bit species probabilities. The cross-entropy loss, designed to measure the performance of multiclass classification with one-level labels, was then calculated between the species probabilities and the species-level label as follows:

$$\ell_{s\text{-}CE} = -\sum_{i=1}^{n} t_i \log(p_i), \qquad (1)$$

and $p_i$ is calculated by the softmax function as

$$p_i = \frac{e^{z_i}}{\sum_{j=1}^{n} e^{z_j}}, \qquad (2)$$

where $n$ represents the number of species, $[z_1, z_2, \ldots, z_n]$ represents the FC layer output, and $[t_1, t_2, \ldots, t_n]$ is the one-hot code of the species-level label. In this way, although the model makes the finest-grained species-level predictions, the coarser-level predictions can only be inferred backward along the taxonomic tree, which ignores the supervision information of the coarser-level labels.
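Equations (1) and (2) can be sketched in a few lines of NumPy; the function names are illustrative. With a one-hot target, the sum in equation (1) collapses to the negative log-probability of the true species.

```python
import numpy as np

def softmax(z):
    """Equation (2): convert n-bit logits into n-bit probabilities."""
    z = z - z.max()          # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def species_cross_entropy(logits, true_species):
    """Equation (1): with a one-hot target t, only the term of the
    true species survives, giving -log p_true."""
    p = softmax(np.asarray(logits, dtype=float))
    return -np.log(p[true_species])
```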

3.1. Taxonomic Loss. In order to fully exploit the multilevel labels and the hierarchical relationships among them, the taxonomic loss is proposed. As illustrated in Figure 1(b), the softmax function was applied on the output of the FC layer to generate n-bit species probabilities. The species probabilities were then progressively transformed into genus and family probabilities according to the taxonomic tree. Next, the three-level cross-entropy losses were calculated between the labels and the probabilities at each corresponding level. Finally, the taxonomic loss, the sum of all three level losses, was used by the optimization algorithm to update the network parameters.

The key to calculating the taxonomic loss is converting the species probabilities into genus and family probabilities according to the taxonomic tree. The species probabilities are the output of the CNN after softmax normalization. First, the bits of the species probabilities belonging to the same genus are grouped and then summed to generate one bit at the genus level. After all species bits have been grouped and summed, the m-bit genus probabilities are derived, where m is the number of classes at the genus level. Second, the bits of the derived genus probabilities are further grouped and summed according to the family-genus hierarchy to generate the family probabilities. In this way, the genus and family probabilities are progressively derived from the species probabilities according to the taxonomic tree. A sample derivation of the higher-level probabilities is illustrated in Figure 2(b), corresponding to the taxonomic tree shown in Figure 2(a). As shown in Figure 2, the Quercus cerris L. bit, the Quercus robur L. bit, and the other species-level bits belonging to Quercus are grouped together and their values are summed to obtain the probability of Quercus at the genus level. Next, the value of the Quercus bit is added to the Castanea and Fagus bits to generate the Fagaceae bit at the family level, which is equal to 0.72. Specifically, the genus and family probabilities are calculated as follows:

$$g_j = \sum_{k:\, S_k \in G_j} s_k, \qquad f_i = \sum_{j:\, G_j \in F_i} g_j, \qquad (3)$$

where $f_i$, $g_j$, and $s_k$ are the values of the family, genus, and species probabilities at the $i$-th, $j$-th, and $k$-th bits, and $F_i$, $G_j$, and $S_k$ are the $i$-th family, $j$-th genus, and $k$-th species, respectively.
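The group and sum operation of equation (3) amounts to a scatter-add over a parent-index map per level. The sketch below uses the Fagaceae example of Figure 2 with illustrative probabilities chosen so that the family bit comes to 0.72; the index layout and variable names are assumptions.

```python
import numpy as np

def group_and_sum(child_probs, parent_of, n_parents):
    """Equation (3): each parent's probability is the sum of the
    probabilities of its children along the taxonomic tree."""
    out = np.zeros(n_parents)
    np.add.at(out, parent_of, child_probs)  # scatter-add children into parents
    return out

# Illustrative Fagaceae sub-tree (Figure 2): two Quercus species,
# one Castanea species, one Fagus species.
species_to_genus = np.array([0, 0, 1, 2])  # Quercus, Quercus, Castanea, Fagus
genus_to_family = np.array([0, 0, 0])      # all three genera are Fagaceae

species_p = np.array([0.30, 0.25, 0.10, 0.07])  # softmax bits of this sub-tree
genus_p = group_and_sum(species_p, species_to_genus, 3)  # Quercus bit = 0.55
family_p = group_and_sum(genus_p, genus_to_family, 1)    # Fagaceae bit = 0.72
```

In a framework such as PyTorch, the same scatter-add is differentiable, so gradients from the coarse-level losses flow back to the species logits.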

After the multilevel probabilities were generated by the CNN softmax output and the subsequent group and sum operations along the taxonomic tree, the cross-entropy losses of each level, $\ell_{f\text{-}CE}$, $\ell_{g\text{-}CE}$, and $\ell_{s\text{-}CE}$, were calculated independently between the predicted probabilities and the ground truth by equation (1). Finally, the taxonomic loss was the sum of the multilevel cross-entropy losses:

$$\ell_{TAX} = \ell_{f\text{-}CE} + \ell_{g\text{-}CE} + \ell_{s\text{-}CE}. \qquad (4)$$

The group and sum operation encodes the taxonomic tree into the deep learning objective function, which is easy to implement and compatible with end-to-end training. Moreover, when misclassification happens at a coarse granularity, the taxonomic loss provides more information than the cross-entropy loss. Thanks to the taxonomic loss, more supervision information can be leveraged to improve the performance of plant identification models.
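Putting equations (1), (3), and (4) together, a minimal NumPy sketch of the taxonomic loss might look as follows. Function and argument names are assumptions; a PyTorch version would use the same differentiable tensor operations.

```python
import numpy as np

def taxonomic_loss(species_probs, labels, species_to_genus, genus_to_family):
    """Equation (4): sum of the family-, genus-, and species-level
    cross-entropies, with the coarser probabilities derived from the
    species probabilities by group and sum (equation (3)).
    `labels` is a (family_idx, genus_idx, species_idx) triple."""
    f_idx, g_idx, s_idx = labels
    genus_probs = np.zeros(species_to_genus.max() + 1)
    np.add.at(genus_probs, species_to_genus, species_probs)
    family_probs = np.zeros(genus_to_family.max() + 1)
    np.add.at(family_probs, genus_to_family, genus_probs)
    eps = 1e-12  # guard against log(0)
    return -(np.log(family_probs[f_idx] + eps)
             + np.log(genus_probs[g_idx] + eps)
             + np.log(species_probs[s_idx] + eps))
```

Because each coarse-level term is the log of a sum of fine-level probabilities, a prediction that is wrong at the family or genus level is penalized even when the species-level term alone carries little signal.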

3.2. End-to-End Training. The experiments were implemented in the PyTorch deep learning framework. The CNNs were trained end-to-end on a workstation with one Nvidia GeForce GTX Titan Xp GPU (12 GB of graphics memory). All models loaded ImageNet-pretrained weights for initialization and were trained for 100 epochs. The base learning rate was 0.01 and was halved every 30 epochs. Stochastic gradient descent (SGD) with a momentum of 0.9 was used to optimize the network parameters. All methods were compared on the test sets of the PlantCLEF 2015 and PlantCLEF 2017 datasets. To improve the robustness of the models, data augmentation was applied in the experiments. Each image was center cropped; the images were resized to 299 x 299 pixels when Inception-v3 or Inception-ResNet-v2 was adopted for feature extraction and to 224 x 224 pixels for the other CNNs. Finally, all cropped images were processed by several augmentation methods: flipping, rotation, translation, scaling, and shear. Figure 3 shows the effects of data augmentation in the experiments.
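The step schedule above (base rate 0.01, halved every 30 epochs) can be written as a one-liner; the function name is illustrative, and PyTorch's `StepLR` scheduler implements the same rule.

```python
def step_lr(epoch, base_lr=0.01, drop=0.5, every=30):
    """Learning rate for a given epoch: base_lr multiplied by `drop`
    once per completed block of `every` epochs."""
    return base_lr * drop ** (epoch // every)
```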

4. Results

4.1. Results on PlantCLEF 2015 Dataset. Several state-of-the-art neural networks were trained with each of the two loss functions shown in Figure 1: the commonly used cross-entropy loss and the proposed taxonomic loss. The experimental results of the different models on the testing set are reported in Table 2. In addition to the most frequently used species accuracy, the genus and family accuracies were also taken into account. As seen from Table 2, the models trained with the taxonomic loss are consistently better than those trained with the cross-entropy loss, with species accuracy improvements ranging from 0.08% to 2.45%. The SENet-154 trained with the taxonomic loss outperforms the other models, achieving family, genus, and species accuracies of 83.19%, 78.08%, and 71.15%, respectively. Meanwhile, the Inception-ResNet-v2 trained with the taxonomic loss obtains the most significant performance increase over its cross-entropy counterpart, improving the three-level accuracies by 2.70%, 2.28%, and 2.45%. These results demonstrate that the proposed taxonomic loss is easy to implement and can effectively facilitate the training of both light-weight and complex neural networks.

Figure 4 illustrates the loss curves of Inception-ResNet-v2 trained with the two loss functions during the training stage. The value of the taxonomic loss is much larger than that of the cross-entropy loss at the beginning because it is the sum of three level losses. As training advanced, the difference between them gradually decreased. Although the taxonomic loss value remained slightly higher than the cross-entropy one at the final stage, its decline was greater and the resulting optimization of the network was better.

4.2. Results on PlantCLEF 2017 Dataset. In further experiments, the state-of-the-art CNNs were trained with the cross-entropy and taxonomic losses on the PlantCLEF 2017 dataset to further verify the proposed algorithm. As shown in Table 3, when the neural networks were trained with the taxonomic loss, almost all of them delivered family accuracy improvements greater than 2%, and the species accuracy increases ranged from 0.50% to 3.18%. The SENet-154 trained with the taxonomic loss performs best, achieving three-level accuracies of 84.07%, 79.97%, and 73.61% and obtaining relative improvements of 2.23%, 1.34%, and 1.08% over the same model trained with the cross-entropy loss. Therefore, the taxonomic loss also facilitates the training of various neural networks on the PlantCLEF 2017 dataset, which has far more data and more species.

The proposed taxonomic loss can also generate more supervision information when the coarse-level predictions are wrong, which improves the accuracies at the family and genus levels. Several typical plant images from the PlantCLEF 2017 testing set and their corresponding predictions are listed in Table 4. One can see that the ResNet-50 trained with the cross-entropy loss identified all of the images incorrectly at all three levels, while the model trained with the taxonomic loss corrected the predictions at the coarse levels. For example, sample (b) was recognized as Fagus grandifolia Ehrh. at the species level by the ResNet-50 trained with the cross-entropy loss, and the coarser-level labels (Fagus, Fagaceae) were inferred along the taxonomic tree, so all three level predictions were wrong. Although the model trained with the proposed taxonomic loss did not predict the most specific species correctly, the family and genus were correct, which is also useful in practice.

5. Discussion

Based on the above results, it has been verified that the proposed taxonomic loss can facilitate the training of multiple state-of-the-art neural networks on both the PlantCLEF 2015 dataset with 1,000 species and the PlantCLEF 2017 dataset with 10,000 species. To further validate the influence of the taxonomic tree structure on model optimization, comparative experiments were conducted. As shown in Table 5, two neural networks were additionally trained with two-level taxonomic losses: the family-species structure (F-S) and the genus-species structure (G-S), while "F-G-S" represents the taxonomic loss shown in Figure 1(b) and "S" the cross-entropy loss shown in Figure 1(a). One can see from Table 5 that the models trained with the three-level taxonomic loss consistently outperform the two-level ones, and both outperform the models trained with the single-level loss, i.e., the cross-entropy loss. These results demonstrate that a taxonomic hierarchy with more levels provides more supervision information during training and achieves more competitive results.

6. Conclusion

In this paper, a loss function for fine-grained plant image identification was proposed, which encodes the hierarchical relationships of the taxonomic tree into the deep learning objective function. On the one hand, the proposed method is easy to implement with simple group and sum operations; on the other hand, it facilitates the end-to-end training of various neural networks and further increases plant identification accuracies at the species, genus, and family levels. The experiments on the PlantCLEF 2015 and PlantCLEF 2017 datasets demonstrated that the proposed taxonomic loss performs better than the most commonly adopted cross-entropy loss. In the future, the taxonomic loss could be generalized to other fine-grained classification tasks with multilevel labels, such as bird species identification and car model categorization.

Data Availability

The PlantCLEF 2015 dataset and PlantCLEF 2017 dataset supporting this study are from previously reported studies, which have been cited. The PlantCLEF 2015 dataset is available at, and the PlantCLEF 2017 dataset is available at LifeCLEF/PlantCLEF2017/

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Authors' Contributions

Danzi Wu and Xue Han contributed equally to this work.


Acknowledgments

This work was supported by the Fundamental Research Funds for the Central Universities (no. 2019ZY38), the Natural Science Foundation of China (no. 61702038), and the Special Fund for Beijing Common Construction Project.


References

[1] W. S. Judd, Plant Systematics, Sinauer Associates, Sunderland, MA, USA, 2002.

[2] G. W. Hopkins and R. P. Freckleton, "Declines in the numbers of amateur and professional taxonomists: implications for conservation," Animal Conservation, vol. 5, no. 3, pp. 245-249, 2010.

[3] C. Mora, D. P. Tittensor, S. Adl, A. G. B. Simpson, and B. Worm, "How many species are there on earth and in the ocean?," PLoS Biology, vol. 9, no. 8, Article ID e1001127, 2011.

[4] J. Waldchen and P. Mader, "Plant species identification using computer vision techniques: a systematic literature review," Archives of Computational Methods in Engineering, vol. 25, no. 2, pp. 507-543, 2018.

[5] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, "Imagenet: a large-scale hierarchical image database," in Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248-255, IEEE, Miami, FL, USA, June 2009.

[6] S. Zhang, H. Wang, and W. Huang, "Two-stage plant species recognition by local mean clustering and weighted sparse representation classification," Cluster Computing, vol. 20, no. 2, pp. 1517-1525, 2017.

[7] M. Dyrmann, H. Karstoft, and H. S. Midtiby, "Plant species classification using deep convolutional neural network," Biosystems Engineering, vol. 151, pp. 72-80, 2016.

[8] G. L. Grinblat, L. C. Uzal, M. G. Larese, and P. M. Granitto, "Deep learning for plant identification using vein morphological patterns," Computers and Electronics in Agriculture, vol. 127, pp. 418-424, 2016.

[9] N. Kumar, P. N. Belhumeur, A. Biswas et al., "Leafsnap: a computer vision system for automatic plant species identification," in Proceedings of the European Conference on Computer Vision, pp. 502-516, Springer, Florence, Italy, October 2012.

[10] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," arXiv preprint arXiv:1409.1556, 2014.

[11] C. Szegedy, S. Ioffe, V. Vanhoucke, and A. A. Alemi, "Inception-v4, inception-ResNet and the impact of residual connections on learning," in Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, February 2017.

[12] K. He, X. Zhang, S. Ren, and J. Sun, "Identity mappings in deep residual networks," in Proceedings of the European Conference on Computer Vision, pp. 630-645, Amsterdam, The Netherlands, October 2016.

[13] C. Szegedy, W. Liu, Y. Jia et al., "Going deeper with convolutions," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1-9, IEEE, Boston, MA, USA, June 2015.

[14] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, "Rethinking the inception architecture for computer vision," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2818-2826, Las Vegas, NV, USA, June 2016.

[15] Y. Sun, Y. Liu, G. Wang, and H. Zhang, "Deep learning for plant identification in natural environment," Computational Intelligence and Neuroscience, vol. 2017, no. 4, 6 pages, 2017.

[16] J. R. Ubbens and I. Stavness, "Deep plant phenomics: a deep learning platform for complex plant phenotyping tasks," Frontiers in Plant Science, vol. 8, p. 1190, 2017.

[17] H. Zhu, Q. Liu, Y. Qi, X. Huang, F. Jiang, and S. Zhang, "Plant identification based on very deep convolutional neural networks," Multimedia Tools and Applications, vol. 77, no. 22, pp. 29779-29797, 2018.

[18] J. W. Tan, S.-W. Chang, S. B. A. Kareem, H. J. Yap, and K.-T. Yong, "Deep learning for plant species classification using leaf vein morphometric," IEEE/ACM Transactions on Computational Biology and Bioinformatics, 2018.

[19] J. Waldchen, M. Rzanny, M. Seeland, and P. Mader, "Automated plant species identification-trends and future directions," PLOS Computational Biology, vol. 14, no. 4, p. e1005993, 2018.

[20] A. Joly, H. Goeau, H. Glotin et al., "LifeCLEF 2015: multimedia life species identification challenges," in Proceedings of the CLEF 2015, September 2015.

[21] H. Goeau, P. Bonnet, and A. Joly, "Plant identification based on noisy web data: the amazing performance of deep learning (LifeCLEF 2017)," in Workshop Proceedings of Conference and Labs of the Evaluation Forum (CLEF 2017), Dublin, Ireland, September 2017.

[22] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Proceedings of the International Conference on Neural Information Processing Systems, pp. 1097-1105, Lake Tahoe, NV, USA, December 2012.

[23] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770-778, Las Vegas, NV, USA, June 2016.

[24] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L.-C. Chen, "MobileNetV2: inverted residuals and linear bottlenecks," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4510-4520, Salt Lake City, UT, USA, June 2018.

[25] N. Ma, X. Zhang, H.-T. Zheng, and J. Sun, "ShuffleNet V2: practical guidelines for efficient CNN architecture design," in Proceedings of the European Conference on Computer Vision (ECCV), pp. 116-131, Munich, Germany, September 2018.

[26] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, "Densely connected convolutional networks," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700-4708, Honolulu, HI, USA, July 2017.

[27] J. Hu, L. Shen, and G. Sun, "Squeeze-and-excitation networks," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7132-7141, Salt Lake City, UT, USA, June 2018.

Danzi Wu, (1) Xue Han, (2) Guan Wang, (2) Yu Sun, (2,3) Haiyan Zhang, (2) and Hongping Fu (2)

(1) School of Landscape Architecture, Beijing Forestry University, Beijing 100083, China

(2) School of Information Science and Technology, Beijing Forestry University, Beijing 100083, China

(3) School of Cyber Science and Technology, Beihang University, Beijing 100191, China

Correspondence should be addressed to Yu Sun and Haiyan Zhang.

Received 16 August 2019; Revised 21 October 2019; Accepted 29 October 2019; Published 21 November 2019

Academic Editor: Michele Migliore

Caption: Figure 1: The end-to-end training pipeline of deep learning plant identification. Two distinct loss modules are shown in the lower part: (a) the cross-entropy loss uses only the species-level labels; (b) the proposed taxonomic loss encodes the hierarchy among three-level labels into the objective function.

Caption: Figure 2: (a) A brief taxonomic tree of Fagaceae family; (b) the derivation process of genus probabilities and family probabilities according to the taxonomic hierarchy by group and sum operation.

Caption: Figure 3: The effects of image augmentation. (a) The cropped image, the image with (b) horizontal flipping, (c) rotation (degree = 10), (d) translation (ratio = 0.15), (e) scaling (ratio = 0.8), (f) scaling (ratio = 1.2), (g) shear (degree = 10), and (h) multiple random augmentation.

Caption: Figure 4: Loss curves of Inception-ResNet-v2 trained by cross-entropy and taxonomic loss.
Table 1: Details of PlantCLEF 2015 and PlantCLEF 2017 dataset.

Dataset          Number of classes          Number of samples

                 Family   Genus   Species    Train     Test

PlantCLEF 2015    124      516     1,000    91,758    21,446
PlantCLEF 2017    341     2,991   10,000    226,386   29,901

Table 2: Accuracies of eight state-of-the-art neural networks
trained by cross-entropy and taxonomic loss on PlantCLEF 2015

Neural network         Loss     Accuracy (%)
                                Family   Genus   Species

GoogLeNet [13]          CL      72.62    65.97    59.69
                       TAX      74.95    68.07    61.06

ResNet-50 [23]          CL      77.48    71.59    65.07
                       TAX      78.55    72.20    65.15

Inception-v3 [14]       CL      77.93    74.01    67.98
                       TAX      80.31    74.46    67.42

Inception-              CL      80.66    74.57    67.93
ResNet-v2 [11]         TAX      83.36    76.85    70.38

MobileNet v2 [24]       CL      72.13    65.65    59.16
                       TAX      74.18    67.52    60.42

ShuffleNet v2 [25]      CL      66.39    59.32    52.88
                       TAX      68.80    61.45    54.18

DenseNet-169 [26]       CL      78.57    73.00    66.76
                       TAX      70.11    74.48    67.23

SENet-154 [27]          CL      81.25    76.81    70.08
                       TAX      83.19    78.78    71.15

Table 3: Accuracies of eight state-of-the-art neural networks
trained by cross-entropy and taxonomic loss on PlantCLEF 2017

Neural network              Loss             Accuracy (%)
                                      Family   Genus    Species

GoogLeNet [13]                CL      68.73    64.22     57.86
                             TAX      73.29    68.64     61.04

ResNet-50 [23]                CL      76.32    72.49     66.68
                             TAX      78.95    74.77     68.04

Inception-v3 [14]             CL      77.32    73.12     67.05
                             TAX      79.87    76.02     68.76

Inception-ResNet-v2 [11]      CL      79.98    75.43     68.97
                             TAX      82.31    78.65     71.21

MobileNet-v2 [24]             CL      71.76    67.73     61.78
                             TAX      73.88    69.40     62.01

ShuffleNet-v2 [25]            CL      61.94    57.12     49.96
                             TAX      66.13    60.73     53.12

DenseNet-169 [26]             CL      76.34    72.53     66.60
                             TAX      77.87    73.92     67.10

SENet-154 [27]                CL      81.84    78.63     72.53
                             TAX      84.07    79.97     73.61

Table 4: Typical images in PlantCLEF 2017 testing set
and predictions made by ResNet-50 trained by two
different loss functions: cross-entropy loss (CL)
and taxonomic loss (TAX).

     Loss          Family           Genus

       GT        Liliaceae#       Erythronium#
a      CL        Orchidaceae       Orchis
      TAX        Liliaceae#       Clintonia

       GT         Ulmaceae#         Ulmus#
b      CL         Fagaceae          Fagus
      TAX         Ulmaceae#         Ulmus#

       GT      Grossulariaceae#     Ribes#
c      CL         Rosaceae       Holodiscus
      TAX      Grossulariaceae#     Ribes#

       GT        Asteraceae#     Heterotheca#
d      CL        Leguminosae      Syrmatium
      TAX        Asteraceae#     Heterotheca#

     Loss                     Species

       GT        Erythronium americanum Ker Gawl.#
a      CL             Orchis mascula (L.) L.
      TAX           Clintonia andrewsiana Torr.

       GT               Ulmus americana L.#
b      CL             Fagus grandifolia Ehrh.
      TAX             Ulmus crassifolia Nutt.

       GT             Ribes indecorum Eastw.#
c      CL       Holodiscus discolor (Pursh) Maxim.
      TAX             Ribes indecorum Eastw.#

       GT      Heterotheca canescens (DC.) Shinners#
d      CL            Syrmatium glabrum Vogel.
      TAX      Heterotheca canescens (DC.) Shinners#

Note: the ground truth (GT) and correct predictions are indicated with #.

Table 5: Accuracies of two neural networks trained by taxonomic
loss with different taxonomic hierarchy.

Dataset     Neural network   Taxonomic         Accuracy (%)
                                         Family   Genus   Species

                               F-G-S     83.36    76.85    70.38
PlantCLEF     Inception-        F-S      82.04    76.10    69.36
2015        ResNet-v2 [11]      G-S      81.48    75.82    68.94
                                 S       80.66    74.57    67.93
                               F-G-S     66.13    60.73    53.12
PlantCLEF   ShuffleNet-v2       F-S      64.08    57.71    50.03
2017             [25]           G-S      64.26    59.11    51.64
                                 S       61.94    57.12    49.96
COPYRIGHT 2019 Hindawi Limited

Authors: Wu, Danzi; Han, Xue; Wang, Guan; Sun, Yu; Zhang, Haiyan; Fu, Hongping
Publication: Computational Intelligence and Neuroscience
Date: Dec 1, 2019