
Academic Activities Transaction Extraction Based on Deep Belief Network.

1. Introduction

Academic activities are an important aspect of scholars' participation in social activities. In general, a scholar's academic experience is mainly reflected in the following aspects: education experience, research experience, paper publication, and project cooperation. An academic relationship network can be established from these activities, which usually appear in information documents such as scholars' personal web pages, applications for science and technology projects, and researchers' resumes. Among them, education experience and research experience are essential parts. In this paper, we present an efficient method to extract scholars' academic activity records from such documents.

An academic activity record comprises the following elements: a person name denotes the subject of the transaction; an activity or behavior describes the transaction action and its properties; behavioral objects give the place or organization of the transaction; and academic attributes and temporal phrases complete the description. In other words, extracting an academic transaction amounts to identifying the named entities of person, activity, time, place, and academic attribute in academic texts. We use these five named entities to describe an academic activity in structured form, so as to extract records of academic activities automatically. In early studies, the popular approach was first to segment the Chinese text into words and then to identify the named entities of each category step by step with different strategies. This kind of method is not only inefficient but also compounds the error introduced at each extraction step. We therefore propose a DBN (Deep Belief Network) to recognize and extract all the elements of academic activities in a single pass.

A DBN is a deep neural network learning model whose hidden layers have excellent feature learning ability and can learn deep, abstract features from the shallow features of the raw data. Meanwhile, the DBN is initialized layer by layer; that is, each layer learns its parameters independently, which reduces the difficulty of training the network [1]. The DBN is composed of multiple unsupervised RBM (Restricted Boltzmann Machine) layers and a BP (Back Propagation) layer. High-level features are extracted from raw text data by the unsupervised learning of the hidden-layer neurons in each RBM. This paper therefore proposes a DBN-based method to extract records of academic activities; it automatically recognizes the five main elements of academic activities from raw documents and achieves efficient and accurate extraction of academic activity records.

2. Related Work

The key to extracting academic activity information from unstructured text is to detect and recognize the elements in the text of academic activities, which is an important research topic in NLP (Natural Language Processing) [2, 3]. At present, the traditional methods used for named entity recognition are mainly ME (Maximum Entropy) [4], CRF (Conditional Random Fields) [5-7], and kernel functions [8-10]. These methods require considerable manual effort in text feature extraction and need multistep processing; the errors of each stage accumulate in the next, reducing the accuracy of entity recognition.

The concept of deep learning was first proposed by Professor Hinton in 2006 [11]. Deep learning methods have achieved good results in image recognition and speech recognition [12-14]. In recent years, deep learning has been widely applied to NLP, where researchers hope to exploit its learning ability to recognize abstract features from raw features and thereby improve document information recognition and extraction. Since the features of a word can be learned from the features of its surrounding words, Collobert et al. used this principle in 2011 to design a unified deep neural network model [15]. The model uses unsupervised learning to obtain word-based feature vector representations and achieved good results in semantic role labeling and named entity recognition. In 2013, building on Collobert's study, Mikolov et al. proposed the Continuous Bag-of-Words (CBOW) model, which predicts the center word from the surrounding words, and the Skip-Gram model, which predicts the surrounding words from the center word [16]. These two models effectively extract word vectors from text and represent semantic relations well. At present, the deep learning models commonly used in NLP include the DBN, the Auto-Encoder, and LSTM (Long Short-Term Memory) [3]. The DBN model constructs an energy function and uses the hidden-layer structure of the RBM to learn advanced text features. In 2014, Chen used the DBN model and achieved the best results in a comparative entity recognition experiment on the ACE2004 corpus against CRF, SVM, and BP [17]. In 2016, Feng et al. used word vectors as the input of a DBN and achieved an F value of 89.58% in entity recognition on the People's Daily corpus [18]. Meanwhile, Jiang et al. applied word vector features to a DBN model on the Reuters-21578 and 20-Newsgroups text categorization tasks and achieved better classification than SVM and KNN [19]. The Auto-Encoder trains the parameters between input and output by minimizing the reconstruction error and learns deep text features by encoding and decoding the input. Wang used an Auto-Encoder to identify person and place entities in the People's Daily corpus and obtained an accuracy of 97.55% [20]. In 2016, Leng and Jiang used an Auto-Encoder model to extract enterprise cooperation, product, and enterprise demand relationships [21]. LSTM is a recurrent neural network model that memorizes text sequence features and selects context features through its cyclic hidden layer. In 2016, Zheng et al. applied word vector features to an LSTM deep learning model and obtained an F value of 84.8% in named entity recognition on the SemEval-2010 Task 8 corpus [22].

Owing to the powerful feature expression of deep learning, many scholars are engaged in deep learning research in NLP. Character-based vectors and word-based vectors are the two representations commonly used in text feature extraction, and their effect on the accuracy of named entity recognition varies with the corpus and the deep learning model. Which of the two should serve as the input of a deep learning model for a given text feature recognition task remains an open question.

3. Category of Academic Activities Named Entity

Generally, academic activity transaction information is fully reflected in applications for science and technology projects, in which the applicant's resume is an essential part. We use descriptions of education experience and research experience as examples to illustrate the possible elements of academic activities in an applicant's resume.

Example 1.

(i) "[phrase omitted], 1992 [phrase omitted] (Zhang graduated from the Medical Department of Hunan Medical University in 1992)".

Person: "[phrase omitted] (Zhang)",

Activity or behavior: "[phrase omitted] (graduate)",

Temporal phrase: "1992 [phrase omitted] (1992)",

Behavioral object: "[phrase omitted] (Hunan Medical University)",

Academic attribute: "[phrase omitted] (Medical Department)".

Example 2.

(i) "[phrase omitted] (Li works at the Institute of Oncology, Hunan Medical University from 1994 to present)".

Person: "[phrase omitted] (Li)",

Activity or behavior: "[phrase omitted] (work)",

Temporal phrase: "1994 [phrase omitted] (1994 to present)",

Behavioral object: "[phrase omitted] (Institute of Oncology, Hunan Medical University)".

Based on the analysis of a large number of resumes in scientific and technological documents, we classify the five key elements of transaction extraction into five types of academic activity named entities: person, organization, academic activity, temporal phrase, and academic terms.

4. Academic Activities Transaction Extraction

4.1. Transaction Extraction Process. The extraction of academic activity transaction information is divided into the following steps: text preprocessing, character-based vector representation, and academic activity entity recognition; the specific process is shown in Figure 1.

First, we extract the paragraphs describing academic activities from scientific and technological documents and use ICTCLAS (Institute of Computing Technology, Chinese Lexical Analysis System) to carry out word segmentation. Then, character-based vector features are built from the segmented text, and a portion of the data is manually labeled with the different types of academic activity named entities, yielding the training and test sets for the DBN model. Finally, the trained DBN model extracts advanced features through unsupervised learning and ultimately realizes named entity recognition of academic activities.

4.2. Character-Based Feature Vector Representation. The character-based feature vector is a common representation of text features in NLP. Because characters are a finer-grained unit than Chinese words, character-based vectors better describe the raw features of named entities and can thus effectively improve the accuracy of named entity recognition.

Character-based vector extraction consists of three steps. First, all entities in the training corpus are labeled to form the entity set $\text{Entity} = \{entity_1, entity_2, \ldots, entity_n\}$, where $entity_i$ is the $i$-th named entity in the training corpus and $n$ is the total number of entities. Then, collecting the characters that appear in Entity and removing duplicates, we obtain the character set $\text{Character} = \{ch_1, ch_2, \ldots, ch_m\}$, where $m$ is the total number of distinct characters in Entity, that is, the length of the character-based feature vector. According to expression (1), each entity in Entity can then be transformed into a character-based vector with the same dimension as Character:

$$v_i = \begin{cases} 1, & \text{if character } ch_i \text{ appears in the entity,} \\ 0, & \text{otherwise,} \end{cases} \qquad (1)$$

where $v_i$ is the $i$-th element of the vector.

For example, suppose that the given named entity set is as follows:

Entity = {
   "[phrase omitted] (Zhang)",
   "[phrase omitted] (Central South University)",
   "[phrase omitted] (Computer Science and Technology)",
   "[phrase omitted] (Central South University Railway Institute)"
  }.


We can get the character set as follows:

Character = { "[phrase omitted] (the distinct characters of Zhang, Central South University, Computer Science and Technology, and Railway Institute)" }.

The length of the set is 18, and each entity can be transformed into the following character-based vector:

Vector($entity_1$ = "[phrase omitted] (Zhang)") = {1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0}.

Vector($entity_2$ = "[phrase omitted] (Central South University)") = {0,0,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0}.

Vector($entity_3$ = "[phrase omitted] (Computer Science and Technology)") = {0,0,0,0,0,0,1,1,1,1,1,1,1,1,0,0,0,0}.

Vector($entity_4$ = "[phrase omitted] (Central South University Railway Institute)") = {0,0,1,1,1,1,0,0,0,0,0,0,0,0,1,1,1,1}.

Through the above steps, each entity feature can be transformed into a character-based feature vector with the same dimension as the character set. A minimal code sketch of this construction follows.
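
The following Python sketch illustrates the construction above; the entity strings are English placeholders standing in for the omitted Chinese text, and all names are illustrative rather than the authors' implementation.

```python
def build_character_set(entities):
    """Collect the distinct characters of all entities, preserving first-seen order."""
    characters, seen = [], set()
    for entity in entities:
        for ch in entity:
            if ch not in seen:
                seen.add(ch)
                characters.append(ch)
    return characters

def to_character_vector(entity, characters):
    """Expression (1): v_i = 1 if character ch_i appears in the entity, else 0."""
    chars_in_entity = set(entity)
    return [1 if ch in chars_in_entity else 0 for ch in characters]

# Toy usage with placeholder entity strings:
entities = ["zhang", "csu", "cs-tech", "csu-railway"]
characters = build_character_set(entities)
vectors = [to_character_vector(e, characters) for e in entities]
print(len(characters))   # the shared dimension of all entity vectors
print(vectors[0])
```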

4.3. Deep Belief Network Model

4.3.1. Restricted Boltzmann Machine. The DBN is a deep network model with multiple RBM (Restricted Boltzmann Machine) layers and a BP layer. An RBM is a two-layer undirected graphical model consisting of a visible layer $v$ and a hidden layer $h$. Nodes of the visible layer are fully connected to nodes of the hidden layer, while nodes within the same layer are not connected. The visible layer $v$ receives the input data, and the hidden layer $h$ extracts the hidden features. The network structure is shown in Figure 2.

The parameters are described as follows.

$n_v$ and $n_h$ denote the numbers of neurons in the visible and hidden layers, respectively; $v = (v_1, \ldots, v_{n_v})^T$ is the state vector of the visible layer; $h = (h_1, \ldots, h_{n_h})^T$ is the state vector of the hidden layer; $a = (a_1, \ldots, a_{n_v})^T$ is the bias vector of the visible layer; $b = (b_1, \ldots, b_{n_h})^T$ is the bias vector of the hidden layer; and $W = (w_{ji}) \in \mathbb{R}^{n_h \times n_v}$ is the weight matrix between the hidden and visible layers.

The RBM is an energy-based model; for a given state $(v, h)$ of the visible and hidden layers, the energy function is defined as

$$E_\theta(v, h) = -\sum_{i=1}^{n_v} a_i v_i - \sum_{j=1}^{n_h} b_j h_j - \sum_{j=1}^{n_h} \sum_{i=1}^{n_v} h_j w_{ji} v_i. \qquad (2)$$

In vector form,

$$E_\theta(v, h) = -a^T v - b^T h - h^T W v. \qquad (3)$$

Based on the energy function in (2), the joint probability distribution of the visible and hidden layers is

$$p_\theta(v, h) = \frac{1}{Z_\theta} e^{-E_\theta(v, h)}, \qquad (4)$$

where $Z_\theta = \sum_{v, h} e^{-E_\theta(v, h)}$ is the normalization factor.

Given the visible layer $v$, the activation probability of the hidden node $h_j$ is

$$p(h_j = 1 \mid v) = \frac{1}{1 + \exp\left(-b_j - \sum_i v_i w_{ji}\right)}. \qquad (5)$$

Similarly, since the RBM is a symmetric network, the visible layer can be reconstructed from the hidden layer:

$$p(v_i = 1 \mid h) = \frac{1}{1 + \exp\left(-a_i - \sum_j h_j w_{ji}\right)}. \qquad (6)$$

The purpose of RBM training is to obtain the output of the hidden layer $h$ for a given visible layer $v$. Through training we obtain the optimal parameters $\theta = (W, a, b)$ that maximize the joint probability $p(v, h \mid \theta)$. The hidden layer expresses the shallow text features as deep features and can be interpreted as a reconstruction of the visible layer in a different space. The aim of training is to reconstruct the visible layer according to expression (6) so as to minimize the error between the original and the reconstructed visible layers. A numpy sketch of these quantities follows.
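
To make equations (3), (5), and (6) concrete, the following numpy sketch computes the energy of a joint state and the two conditional probabilities. The sizes, random initialization, and variable names are illustrative assumptions, not the authors' Theano implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_v, n_h = 6, 4
W = rng.normal(0.0, 0.1, (n_h, n_v))   # weight matrix between hidden and visible layers
a = np.zeros(n_v)                      # visible-layer bias
b = np.zeros(n_h)                      # hidden-layer bias

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def energy(v, h):
    """Equation (3): E(v, h) = -a^T v - b^T h - h^T W v."""
    return -a @ v - b @ h - h @ W @ v

def p_h_given_v(v):
    """Equation (5): p(h_j = 1 | v) = sigmoid(b_j + sum_i w_ji v_i)."""
    return sigmoid(b + W @ v)

def p_v_given_h(h):
    """Equation (6): p(v_i = 1 | h) = sigmoid(a_i + sum_j h_j w_ji)."""
    return sigmoid(a + W.T @ h)

v = rng.integers(0, 2, n_v).astype(float)          # a random binary visible state
h = (rng.random(n_h) < p_h_given_v(v)).astype(float)  # sample the hidden state
print(energy(v, h), p_h_given_v(v), p_v_given_h(h))
```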

4.3.2. Deep Belief Network. The DBN structure is shown in Figure 3. It is a deep learning model containing multiple RBM layers and a BP layer. The model autonomously learns advanced features from the shallow features of text and has powerful classification ability for high-dimensional sparse feature vectors [11].

The training of the DBN is divided into two stages. The first stage is the unsupervised training of each RBM layer: the first RBM is composed of the raw input feature vector $v^{(0)}$ and the first hidden layer $h^{(0)}$, and training it yields the optimal parameters $\theta_0$. Its output $h^{(0)}$ then becomes the input $v^{(1)}$ of the second RBM, and so on, until the whole stack of RBMs has been trained without supervision. In the second stage, the BP network is trained with supervision, and the error produced by the BP layer is backpropagated to all RBM layers to fine-tune the whole model, ultimately yielding the optimal parameters of the DBN. A self-contained toy sketch of this two-stage procedure is given below.
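
As a rough illustration, the following self-contained Python sketch pretrains a stack of RBMs with one-step contrastive divergence (CD-1, a standard RBM training rule not named explicitly in the paper) and then trains a softmax output layer on the top-level features. Unlike the full DBN, it does not backpropagate the BP-layer error through the RBM layers; all sizes, data, and names are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Minimal RBM trained with one-step contrastive divergence (CD-1)."""

    def __init__(self, n_v, n_h):
        self.W = rng.normal(0.0, 0.01, (n_h, n_v))  # weights between hidden and visible
        self.a = np.zeros(n_v)                      # visible bias
        self.b = np.zeros(n_h)                      # hidden bias

    def transform(self, V):
        return sigmoid(V @ self.W.T + self.b)       # p(h = 1 | v), eq. (5)

    def cd1(self, V, lr=0.01):
        """One CD-1 sweep over the batch V (rows are visible vectors)."""
        ph0 = self.transform(V)
        h0 = (rng.random(ph0.shape) < ph0).astype(float)
        pv1 = sigmoid(h0 @ self.W + self.a)         # reconstruction, eq. (6)
        ph1 = sigmoid(pv1 @ self.W.T + self.b)
        self.W += lr * (ph0.T @ V - ph1.T @ pv1) / len(V)
        self.a += lr * (V - pv1).mean(axis=0)
        self.b += lr * (ph0 - ph1).mean(axis=0)

def train_dbn(X, y, sizes=(500, 300, 100), n_classes=5,
              pre_epochs=50, train_epochs=100, plr=0.01, flr=0.1):
    # Stage 1: greedy unsupervised pretraining; h^(k) becomes the input v^(k+1).
    rbms, H = [], X
    for n_h in sizes:
        rbm = RBM(H.shape[1], n_h)
        for _ in range(pre_epochs):
            rbm.cd1(H, lr=plr)
        H = rbm.transform(H)
        rbms.append(rbm)
    # Stage 2: supervised softmax (BP) output layer on the top-level features.
    Wo = np.zeros((sizes[-1], n_classes))
    Y = np.eye(n_classes)[y]                        # one-hot labels
    for _ in range(train_epochs):
        P = np.exp(H @ Wo)
        P /= P.sum(axis=1, keepdims=True)
        Wo += flr * H.T @ (Y - P) / len(H)          # gradient ascent on log-likelihood
    return rbms, Wo

# Toy usage: random sparse "character-based" vectors and 5 entity classes.
X = (rng.random((200, 1467)) < 0.01).astype(float)
y = rng.integers(0, 5, 200)
rbms, Wo = train_dbn(X, y, pre_epochs=2, train_epochs=5)  # tiny epochs for the demo
```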

5. Experiment Analysis

In this paper, the DBN is applied to extract transaction information from Chinese documents. We use character-based vectors and word-based vectors to extract features and compare how well these two representations describe text features. Character-based vectors are then used as the input of the DBN model in tenfold cross validation experiments, and the results are compared with the CRF of [6]. We implement the DBN algorithm with the Python deep learning framework Theano and all remaining code in Python under Windows 7. The training corpus is derived from applicants' resumes included in research proposals, from which we obtain 29515 entities after word segmentation.

5.1. Named Entity Tagging. Because the word segmentation software divides words at different granularities, a single entity may be split into several parts after segmentation. For example, "[phrase omitted] (Central South University Railway Institute)" is converted to "[phrase omitted] (Central South University)/[phrase omitted] (Railway)/[phrase omitted] (Institute)". In this paper, the output labels combine BMU (beginning, middle, and unit) with the entity type X. For example, U_ORG labels the current word as a complete ORG entity, B_ORG denotes the beginning of an ORG entity, and M_ORG marks a middle part of an ORG entity; the organization entity above is thus labeled B_ORG, M_ORG, M_ORG. The other entity types are tagged in the same way, as shown in Table 1. A small sketch of this scheme follows.
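
The tagging rule can be stated in a few lines of Python; the function below is an illustrative sketch, not part of the authors' system, and the tag names follow Table 1.

```python
def bmu_tags(segmented_entity, entity_type):
    """Label each word of a segmented entity: U_X for a single-word entity,
    otherwise B_X for the first word and M_X for every remaining word."""
    if len(segmented_entity) == 1:
        return ["U_" + entity_type]
    return ["B_" + entity_type] + ["M_" + entity_type] * (len(segmented_entity) - 1)

# "Central South University Railway Institute" after word segmentation:
words = ["Central South University", "Railway", "Institute"]
print(bmu_tags(words, "ORG"))   # ['B_ORG', 'M_ORG', 'M_ORG']
```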

5.2. Comparative Tests of Character-Based Vector and Word-Based Vector. In this paper, character-based vectors and word-based vectors are used separately to represent the feature data of the training text. Following the procedure of Section 4.2, we implement the text feature extraction in Python and build the character set of all named entities, which contains 1467 characters; the dimension of the entity feature vector is therefore 1467. For word-based feature extraction, we employ word2vec, an open-source toolkit released by Google in 2013. With a word vector dimension of 100 for each segmented unit and a context window of size 2, word2vec yields a 500-dimensional word-based feature vector. A sketch of this feature construction is given below.
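
The following sketch shows one plausible reading of this setup using the gensim implementation of word2vec (the paper used Google's original toolkit, and the exact windowing is our assumption): each word's 100-dimensional vector is concatenated with those of its two left and two right neighbors, giving 5 x 100 = 500 dimensions.

```python
import numpy as np
from gensim.models import Word2Vec

# A tiny illustrative corpus of segmented sentences.
sentences = [["Zhang", "graduated", "from", "Hunan", "Medical", "University"]]
model = Word2Vec(sentences, vector_size=100, window=2, min_count=1)

def window_feature(words, i, dim=100, win=2):
    """Concatenate the vectors of words[i-2 .. i+2], zero-padded at the edges."""
    parts = []
    for j in range(i - win, i + win + 1):
        if 0 <= j < len(words) and words[j] in model.wv:
            parts.append(model.wv[words[j]])
        else:
            parts.append(np.zeros(dim))
    return np.concatenate(parts)

print(window_feature(sentences[0], 2).shape)   # (500,)
```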

During the training of the DBN model, the parameters are set as follows: pretraining rate plr = 0.01, fine-tuning learning rate flr = 0.1, pretraining iterations pre_epochs = 50, and training iterations train_epochs = 100. Table 2 shows the results with character-based vectors and word-based vectors as the input of the DBN model.

The results in Table 2 show that, in both the shallow and the deep DBN models, the character-based vector outperforms the word-based vector. The character-based vector maps the characters of each entity onto a character dictionary, so the text features are reflected in a high-dimensional vector; the characteristics of each entity are expressed through this high-dimensional vector without introducing much noise. The word-based vector, by contrast, depends on the adjacent words and is used to reflect the similarity between words, while descriptions of academic experience contain many stop words and irrelevant words. Such noisy features at the input of the DBN model reduce its named entity recognition accuracy; hence the effect is not as good as with the character-based vector.

5.3. DBN Academic Activity Named Entity Recognition. The 29915 entities with word segmentation tagging are divided into training sets and test sets. Given the training parameters plr = 0.01, flr = 0.1, and pre_epochs = 50, we use character-based feature vectors in tenfold cross validation experiments and obtain the average precision $\bar{P}$, recall $\bar{R}$, and $\bar{F}$ shown in Table 3. A sketch of the cross-validation protocol follows.
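
The tenfold protocol can be organized as below with scikit-learn's KFold; `train_model` and `evaluate` are hypothetical stand-ins for the DBN training and the per-entity P/R/F computation, which the paper does not specify in code.

```python
import numpy as np
from sklearn.model_selection import KFold

def cross_validate(X, y, train_model, evaluate, n_splits=10):
    """Average (P, R, F) over 10 folds, as reported in Table 3."""
    scores = []
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=0)
    for train_idx, test_idx in kf.split(X):
        model = train_model(X[train_idx], y[train_idx])          # e.g., train_dbn above
        scores.append(evaluate(model, X[test_idx], y[test_idx])) # returns (P, R, F)
    return np.mean(scores, axis=0)
```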

The results in Table 3 show that the DBN model can learn entity character features through the character-based vector representation of the raw entity features, which effectively reduces the interference of noise caused by Chinese segmentation errors. The model therefore achieves good accuracy on all types of entity recognition. Furthermore, because the model takes character-based vectors as input, proper nouns that are segmented inaccurately do not require special preprocessing, which reduces the manual workload of word segmentation. Figure 4 shows the accuracy distribution curves of the various entities across the tenfold cross validation experiments, in order to analyze stability.

As shown in Figure 4, the accuracy for temporal phrase, academic activity, and academic terms entities is relatively stable, owing to the simple syntactic structure of academic experience descriptions and the simple character features that constitute these three types of entities; the DBN model learns their advanced text features relatively easily, leading to better results. The accuracy for the person entity fluctuates greatly, with a minimum of 67% in Experiment 9. Because person entities have very rich features and relatively complex feature combinations, their recognition accuracy is lower than that of the other entities. In addition, because most organization entities are composed of long words, segmentation errors at different granularities affect the accuracy, so the accuracy curve shows some fluctuation. Table 4 presents the overall precision $P$, recall $R$, and $F$ values of named entity recognition for academic activities and compares them with the CRF results of [6].

As Table 4 shows, the DBN model obtains higher accuracy and F score than the CRF model in the recognition of academic activities. Because the CRF uses word-based vector features for sequence tagging, it does not extract academic entity features as well as the character-based vectors. In addition, the accuracy of the CRF model depends heavily on the design of feature templates to extract advanced text features, and in most cases the feature templates are prone to bias, reducing the accuracy of entity recognition. In short, the DBN model combines the fine-grained feature representation of character-based vectors with hidden-layer neurons that extract hierarchical advanced text features; its higher accuracy verifies the effectiveness of the DBN model for extracting academic activity transaction information.

6. Conclusion

In this paper, a DBN is employed to extract information about academic activities, a process of extracting advanced text features. Manual feature engineering is greatly reduced, as the model automatically learns advanced features from the shallow features of text. Moreover, the model needs neither large preset dictionaries at the word segmentation stage nor special preprocessing such as regular-expression matching. Even with inaccurate word segmentation, it can recognize entity boundaries through the DBN model and finally realize one-pass recognition of all kinds of named entities. Compared with the CRF, the method achieves better accuracy and performance.

The results also indicate good adaptability to the corpus of science and technology information documents, so the method can be applied to large-scale text processing. In addition, in DBN training the numbers of neurons and the learning parameters were set by exploratory verification; how to select optimal parameters and reduce training time needs further study. Classifying and recognizing the relationships between academic entities will be the focus of our next study.

https://doi.org/10.1155/2017/5067069

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by Science and Technology Plan of Hunan Province Project 2016JC2011 and National Natural Science Foundation of China Project 61073105.

References

[1] G. E. Hinton and R. R. Salakhutdinov, "Reducing the dimensionality of data with neural networks," Science, vol. 313, no. 5786, pp. 504-507, 2006.

[2] Z. Shusen, Deep Belief Networks Based Classification Methods, Harbin Institute of Technology, 2012.

[3] J.-F. Yang, Q.-B. Yu, Y. Guan, and Z.-P. Jiang, "An overview of research on electronic medical record oriented named entity recognition and entity relation extraction," Acta Automatica Sinica, vol. 40, no. 8, pp. 1537-1562, 2014.

[4] Z. Jian, Research on Conditional Probabilistic Model and Its Application in Chinese Named Entity Recognition, Harbin Institute of Technology, 2006.

[5] J. Lafferty, A. McCallum, and F. Pereira, "Conditional random fields: probabilistic models for segmenting and labeling sequence data," in Proceedings of the 18th International Conference on Machine Learning, Williams College, Williamstown, MA, USA, 2001.

[6] X. Xiaowen, Chinese Named Entity Recognition Based on Conditional Random Fields, Xiamen University, 2006.

[7] F. Huang, S. Tang, and C. X. Ling, "Extracting academic activity transaction in chinese documents," Knowledge Engineering and Management, vol. 278, pp. 125-135, 2014.

[8] K. Liu, L. Fang, and L. Lei, "Implementation of a kernel-based chinese relation extraction system," Journal of Computer Research and Development, vol. 44, no. 8, pp. 1406-1411, 2007.

[9] A. Culotta and J. Sorensen, "Dependency tree kernels for relation extraction," in Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL '04), p. 423, Barcelona, Spain, July 2004.

[10] R. C. Bunescu and R. J. Mooney, "A shortest path dependency kernel for relation extraction," in Proceedings of the Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, HLT/EMNLP 2005, Co-located with the 2005 Document Understanding Conference, DUC and the 9th International Workshop on Parsing Technologies (IWPT '05), pp. 724-731, October 2005.

[11] Y. Chen, D.-Q. Zheng, and T.-J. Zhao, "Chinese relation extraction based on deep belief nets," Journal of Software, vol. 23, no. 10, pp. 2572-2585, 2012.

[12] L. Shujie and L. Dong, "The application of deep learning in natural language processing," Chinese Society of Computer Communication, vol. 11, no. 13, 2015.

[13] Y. Wang, X. Lin, L. Wu, and W. Zhang, "Effective multi-query expansions: collaborative deep networks for robust landmark retrieval," IEEE Transactions on Image Processing, vol. 26, no. 3, pp. 1393-1404, 2017.

[14] S. Pan, J. Wu, X. Zhu, C. Zhang, and Y. Wang, "Tri-party deep network representation," in Proceedings of the 25th International Joint Conference on Artificial Intelligence (IJCAI '16), pp. 1895-1901, New York, NY, USA, July 2016.

[15] R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa, "Natural language processing (almost) from scratch," Journal of Machine Learning Research, vol. 12, no. 1, pp. 2493-2537, 2011.

[16] T. Mikolov, K. Chen, G. Corrado, and J. Dean, "Efficient estimation of word representations in vector space," Computer Science, 2013.

[17] Y. Chen, Research on Chinese Information Extraction Based on Deep Belief Nets, Harbin Institute of Technology, 2014.

[18] Y. Feng, H. Zhang, and W. Hao, "Named entity recognition based on deep belief net," Computer Science, vol. 43, no. 4, pp. 224-230, 2016.

[19] M. Jiang, Y. Liang, and X. Feng, "Text classification based on deep belief network and soft-max regression," Neural Computing & Applications, pp. 1-10, 2016.

[20] W. Guoyu, Research on Chinese Named Entity Recognition Based on Deep Learning, Beijing University of Technology, 2015.

[21] J. Leng and P. Jiang, "A deep learning approach for relationship extraction from interaction context in social manufacturing paradigm," Knowledge-Based Systems, vol. 100, no. C, pp. 188-199, 2016.

[22] S. Zheng, J. Xu, P. Zhou, H. Bao, Z. Qi, and B. Xu, "A neural network framework for relation extraction: learning entity semantic and relation pattern," Knowledge-Based Systems, vol. 114, no. 1, pp. 12-23, 2016.

Xiangqian Wang, Fang Huang, Wencong Wan, and Chengyuan Zhang

School of Information Science and Engineering, Central South University,

Changsha 410083, China

Correspondence should be addressed to Fang Huang; hfang@csu.edu.cn

Received 26 October 2017; Accepted 20 November 2017; Published 12 December 2017

Academic Editor: Lin Wu

Caption: FIGURE 1: Process of academic activities transaction information extraction.

Caption: FIGURE 2: The structure of RBM.

Caption: FIGURE 3: The structure of DBN.

Caption: FIGURE 4: Entity accuracy rate distribution.
TABLE 1: Entity tag collection.

Types               Mark     Description

Person              U_PER    Person entity
                    B_PER    Beginning of person entity
                    M_PER    Middle of person entity

Organization        U_ORG    Organization entity
                    B_ORG    Beginning of organization entity
                    M_ORG    Middle of organization entity

Academic activity   U_R      Academic activity entity
                    B_R      Beginning of academic activity entity
                    M_R      Middle of academic activity entity

Temporal phrase     U_T      Temporal phrase entity
                    B_T      Beginning of temporal phrase entity
                    M_T      Middle of temporal phrase entity

Academic terms      U_A      Academic terms entity
                    B_A      Beginning of academic terms entity
                    M_A      Middle of academic terms entity

TABLE 2: Character-based vector and word-based vector results.

                              Shallow DBN              Deep DBN
Text feature      Dimension   Neurons   Error rate    Neurons         Error rate

Character-based   1467        500       11.67%        (500,300,100)   5.89%
Word-based        500         500       28.64%        (500,300,100)   24.73%

TABLE 3: Tenfold cross validation test results.

Type                $\bar{P}$   $\bar{R}$   $\bar{F}$

Person              87.8%       99.7%       92.8%
Organization        96.8%       94.2%       95.4%
Academic activity   98.2%       98.8%       98.4%
Temporal phrase     97.4%       100%        98.5%
Academic terms      98.1%       96.9%       97.4%

TABLE 4: DBN model and CRF experimental results.

       $P$      $R$      $F$

DBN    95.66%   97.92%   96.5%
CRF    90.5%    94.3%    92.3%


