
Leveraging Multiactions to Improve Medical Personalized Ranking for Collaborative Filtering.

1. Introduction

With the continuous improvement of living standards, healthcare has attracted more and more attention and has become a hot research topic. The scarcity and unbalanced distribution of medical resources across regions in China has become a serious social problem. Under these circumstances, it is quite difficult for people to choose appropriate hospitals and doctors. The main channels through which patients obtain healthcare information include word-of-mouth recommendations from other people, advertisements in newspapers or on television, and, increasingly over the last decade, Internet search engines (Baidu, Google, etc.). Unfortunately, these methods cannot ensure the quality, accuracy, and reliability of the information or of acquaintances' recommendations. Given the importance of their wellbeing, people tend to go to reputed general hospitals for any medical need, which leads to overcapacity in AAA-grade comprehensive hospitals and undercapacity in community health service institutions. This preference for high-reputation hospitals further worsens the imbalance of medical resources. For patients without professional knowledge and relevant medical experience, this scenario costs unnecessary time and energy, and, given the high expense of healthcare, puts them at risk of making wrong judgments or giving up treatment altogether.

Therefore, helping patients find an appropriate level of medical resource is a vital issue. As emerging medical databases and websites provide tremendous amounts of information, a personalized healthcare recommendation service based on web mining methods can be devised. MedHelp (http://www.medhelp.org/) is an online health community which offers tracking tools for pain, weight, and other chronic conditions; patients receive guidance, motivation, and support from peers and experts. CureTogether (http://curetogether.com/) is a website where people anonymously discuss sensitive symptoms, compare health data to better analyze their health status, and receive more informative treatment decisions and new research discoveries based on patient-contributed data. People can also choose medical services from other health-related review websites, such as Vitals (http://www.vitals.com/), Healthgrades (http://www.healthgrades.com/), and RateMDs (http://www.ratemds.com/). On these websites, detailed information about hospitals and doctors' online appointment services can be obtained. This innovative process of medical consultation improves efficiency compared to traditional onsite doctor selection [1].

Recommender systems can help users deal with the information overload problem efficiently by suggesting items (e.g., products, movies, and music) that match users' personal preferences [2, 3]. Collaborative filtering [4], a widely exploited technique, has been extensively adopted in commercial recommender systems [5-7]. In previous works, model-based methods have been proposed to improve predictive accuracy using explicit feedback (e.g., numerical ratings) [8-10]. However, in many real application scenarios, explicit numerical ratings might not be available. Some recent works instead improve recommendation performance by exploiting users' implicit feedback, such as browsing [11], clicking [5], watching [6], and purchasing [12]. This is known as the one-class recommendation problem, and various solutions have been proposed that make use of auxiliary relations (e.g., social information).

MR-BPR [13], a state-of-the-art method that treats one-class recommendation as a multirelational learning problem, focuses on making use of social information about users for item prediction and presents an extension of Bayesian Personalized Ranking for multirelational ranking in social networks. MR-BPR models users' social preference and item preference simultaneously, but it fails to model how auxiliary relations (i.e., social relations) directly influence users' preferences on items. Zhao et al. develop SBPR [14] to model users' preference rankings of items by utilizing the social connections of users' friends. In [14], a new social feedback class based on users' social information is introduced, and a social coefficient parameter indicates the attitude of a user's social relations towards an item. However, this feedback is derived only from the users' social connections with their friends, and such feedback can also be considered "negative feedback." Few works have simultaneously adopted multiple kinds of observed feedback coming from multiple actions between users and items for the one-class recommendation problem, especially in healthcare recommendation.

In this paper, we study how to leverage multiple kinds of observed feedback to build better recommendation models, based on an assumption regarding a new class of items referred to as "auxiliary feedback." A special coefficient is introduced to indicate the preference distance between the multiple actions of a user. We then propose a new algorithm called Medical Bayesian Personalized Ranking over multiple users' actions (MBPR). The proposed method is evaluated on a real-world dataset collected from a healthcare service website, and empirical results show that the model is effective and achieves better recommendation performance. The generality of our approach is also demonstrated in the experiments by applying it to another dataset from a mobile e-commerce application.

2. Related Works

In this section, we will briefly review some related works in two aspects: (1) methods based on pointwise preference assumptions and (2) methods based on pairwise preference assumptions.

In pointwise methods, the implicit feedback is taken as absolute preference scores. Specifically, an observed user-item pair (u, i) is regarded as positive feedback and interpreted as user u liking item i with a high absolute score. Negative feedback is sampled as low preference scores using several strategies. The two typical pointwise approaches for this recommendation problem are OCCF (one-class collaborative filtering) [15] and iMF (implicit matrix factorization) [16], to which matrix factorization methods can be applied. OCCF [15] proposes two different sampling strategies for unobserved user-item interactions to solve the one-class recommendation problem: one is weighted low-rank approximation, and the other is negative example sampling. In iMF [16], confidence weights on implicit feedback are introduced, which can be approximated by two latent feature matrices. However, the limitation of OCCF is that unobserved user-item pairs are taken as negative feedback, whereas an unobserved pair (u, j) does not always indicate that user u dislikes item j in the real world. As for iMF, auxiliary knowledge of the confidence is required for each observed feedback, which may not be available in real applications.

Compared with pointwise methods, pairwise methods take implicit feedback as relative preferences rather than absolute ones and focus on the order, or ranking, of the feedback. For example, the user-item-item triple (u, i, j) indicates that user u is assumed to prefer item i over item j, which can be interpreted as the user showing a higher preference for the positive feedback than for the negative feedback. In [12], the Bayesian Personalized Ranking (BPR) algorithm is first proposed with such a pairwise preference assumption for solving the one-class collaborative filtering problem. Following this framework, various works have been proposed that combine different types of contextual data with the BPR algorithm. Pan and Chen [11] develop a general algorithm called collaborative filtering via learning pairwise preferences over item sets (CoFiSet), based on a new, relaxed assumption of pairwise preferences over item sets, which defines a user's preference on a set of items (item set) instead of on a single item. Du et al. [17] propose User Graph regularized Pairwise Matrix Factorization (UGPMF) to improve recommendation performance by incorporating user-side social connections into the pairwise matrix factorization procedure. Pan and Chen [18] propose group Bayesian Personalized Ranking (GBPR), which introduces the concept of group preference to relax two fundamental assumptions made in pairwise ranking methods; this algorithm uses richer interactions among users and aggregates the features of a group of related users. Zhao et al. [14] design a pairwise algorithm called Social Bayesian Personalized Ranking (SBPR), based on the simple observation that users tend to assign higher ranks to items that their friends prefer, using social connections to better estimate users' rankings of products. Rendle and Freudenthaler [19] propose a nonuniform, context-dependent sampler of negative items that oversamples informative pairs to speed up convergence.

However, the aforementioned works mainly focus on modeling the feedback order by using users' positive feedback, negative feedback, or social information, but do not investigate how the feedback from users' other actions can be combined to model users' preference order on items. Compared with these methods, our proposed MBPR algorithm exploits two kinds of observed feedback indicating multiple actions of the users in order to build better models of users' preferences.

3. Problem Definition

In this section, we first introduce the dataset, which is collected from a healthcare service website (Topmd, http://www.topmd.cn/). Then, we present the basic concepts and definitions used in the paper and elaborate on the problem of Medical Bayesian Personalized Ranking over multiple users' actions.

Let $U = \{u\}_{u=1}^{m}$ denote the user set, $I = \{i\}_{i=1}^{n}$ denote the item set, and $u \in U$, $i, k, j \in I$.

The website Topmd is designed and developed by the laboratory in which the authors work. The users' main actions are Appointment Registration and Online Consultation with doctors who are formally enrolled on the website; in this setting, the doctors can be regarded as the "items." The number of times user u made an appointment with doctor i and the number of times user u consulted doctor k are counted separately. "Positive feedback" in the dataset indicates whether a user made an appointment with a doctor, and "auxiliary feedback" indicates whether a user consulted a doctor on the website. The Topmd dataset is briefly illustrated in Figure 1. In this paper, these two kinds of observed feedback, coming from multiple users' actions, are exploited simultaneously to improve recommendation performance.

The concepts used in this paper are defined as follows.

3.1. Observed Items and Unobserved Items. For each user $u \in U$, the observed items $FA_u \subseteq I$ and $FC_u \subseteq I$ are the items for which user $u$ shows the two different kinds of observed preference, respectively. The unobserved items $\bar{F}_u \subseteq I$ are the remaining items. In this work, for each user $u \in U$, we divide the total item set $I$ into three parts, namely positive feedback, auxiliary feedback, and negative feedback, as follows.

3.1.1. Positive Feedback. Positive feedback $P_u = \{(u, i)\}$ is defined as the set of user-item pairs containing user $u$ and his/her observed items $i \in FA_u$. These could be the items that user $u$ purchased, rated, reviewed, and so forth. In the dataset in question, $P_u$ is the set of items (i.e., doctors) with whom user $u$ has made an appointment.

3.1.2. Auxiliary Feedback. Auxiliary feedback $AP_u = \{(u, k)\}$ is defined as the set of user-item pairs containing user $u$ and his/her observed items $k \in FC_u$. In the dataset in question, $AP_u$ is the set of items (i.e., doctors) whom user $u$ has consulted online.

3.1.3. Negative Feedback. Negative feedback $N_u = \{(u, j)\}$ is defined as the set of user-item pairs where $j \in \bar{F}_u$ represents the items with which user $u$ has neither made an appointment nor consulted. Note that negative feedback does not mean that the user dislikes these items.

It is obvious that $P_u \cap AP_u \cap N_u = \emptyset$ and that $P_u \cup AP_u \cup N_u$ covers the entire item set $I$.
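To make the three feedback classes concrete, the following Python sketch splits one user's items into $P_u$, $AP_u$, and $N_u$ from two interaction logs. The function and variable names (split_feedback, appointments, consultations) are illustrative, not from the paper's implementation, and an item that was both booked and consulted is kept only in $P_u$, a disjointness assumption implied by the three-way split in Algorithm 1.

def split_feedback(user, appointments, consultations, all_items):
    """Split one user's items into positive (P_u), auxiliary (AP_u), and
    negative (N_u) feedback. `appointments` and `consultations` map each
    user to the set of items he/she interacted with through that action."""
    fa_u = appointments.get(user, set())            # items with appointment actions
    fc_u = consultations.get(user, set()) - fa_u    # consulted-only items (disjointness assumption)
    p_u = fa_u                                      # positive feedback P_u
    ap_u = fc_u                                     # auxiliary feedback AP_u
    n_u = set(all_items) - p_u - ap_u               # negative feedback N_u
    return p_u, ap_u, n_u

appointments = {"u1": {"doc3"}}
consultations = {"u1": {"doc3", "doc7"}}
items = {"doc1", "doc3", "doc7", "doc9"}
print(split_feedback("u1", appointments, consultations, items))
# P_u = {'doc3'}, AP_u = {'doc7'}, N_u = {'doc1', 'doc9'}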

3.2. Auxiliary Coefficient. Given the definition of auxiliary feedback, we introduce an auxiliary coefficient $m_{uik}$ that describes the preference distance between $u$'s positive feedback and auxiliary feedback. For a particular user $u$, with positive feedback $(u, i) \in P_u$ and auxiliary feedback $(u, k) \in AP_u$, $m_{uik}$ is a parameter indicating the preference distance between $u$'s positive feedback towards item $i$ and auxiliary feedback towards a particular item $k$. The value and computation of the auxiliary coefficient are discussed later. The larger the value of the auxiliary coefficient, the larger the preference distance between the appointment action and the consultation action. In this situation, we can naturally assume that user $u$ may also make an appointment with item $k$, which was observed only in the auxiliary feedback.

We list some notations used in the paper in Table 1.

Unlike previous works, we introduce a new auxiliary feedback class by exploiting information about a second kind of user action. With these concepts, the problem of Medical Bayesian Personalized Ranking over multiple users' actions can be defined. The goal of this paper is to recommend a personalized ranked list of items to each user $u$. Based on the above concepts, which are defined using both positive feedback and auxiliary feedback, the main task is to learn a ranking function that incorporates all of these sources of information.

The problem of leveraging auxiliary feedback (i.e., healthcare consultation information) to improve personalized ranking for collaborative filtering can be defined precisely as follows:

Given the observed feedback $S^{Train} = (U, I)$ and the auxiliary feedback coming from multiple actions, the target of this paper is to learn a ranking function for each user $u$:

[mathematical expression not reproducible], (1)

where $r_i(p) > r_{i+1}(q)$ indicates that user $u$ shows a higher preference towards item $p$ than towards item $q$.

4. Medical Bayesian Personalized Ranking over Multiple Users' Actions

In this section, we describe our model assumption regarding positive, auxiliary, and negative feedback and then detail the proposed algorithm of Medical Bayesian Personalized Ranking over multiple users' actions.

Unlike previous works, we incorporate auxiliary feedback from a user's healthcare consultation information and introduce a coefficient, based on the preference distance between positive feedback and auxiliary feedback, that controls the contribution of each sampled training pair.

4.1. Model Assumption. We first introduce the basic assumption adopted by Bayesian Personalized Ranking (BPR) [12]. BPR's main idea is to use the partial order over items, instead of single user-item examples, to train a recommendation model, which can be represented as

$x_{ui} > x_{uj}$, $i \in P_u$, $j \in N_u$, (2)

where $x_{ui}$ represents the preference of user $u$ on item $i$. Given a positive user-item example of user $u$ on item $i$ (e.g., user $u$ viewed or purchased item $i$), we assume that the user likely prefers item $i \in P_u$ to all other nonobserved items $j \in N_u$. This relation is expressed by $x_{ui} > x_{uj}$. The difference between the basic ideas of pointwise and pairwise methods is reflected by this assumption: pointwise methods [15, 16] focus on fitting numeric rating values, whereas pairwise methods [12, 20, 21] model the preference order of the data instead, which can be extracted as a pairwise preference dataset $D \subseteq U \times I \times I$ by

$D := \{(u, i, j) \mid i \in I_u^+ \wedge j \in I \setminus I_u^+\}$, (3)

where $I_u^+$ is the positive item set and $I \setminus I_u^+$ is the missing item set associated with user $u$. The semantics of each triple $(u, i, j) \in D$ is that user $u$ is assumed to prefer item $i$ over item $j$.
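As a concrete illustration of (3), the following sketch draws a random training triple $(u, i, j)$ with $i$ observed and $j$ unobserved; the helper name is hypothetical and only mirrors the uniform sampling used in [12].

import random

def sample_triple(observed, all_items):
    """Draw one triple (u, i, j) with i in I_u^+ and j outside I_u^+.
    `observed` maps each user to the set of items with positive feedback."""
    u = random.choice(list(observed))
    i = random.choice(list(observed[u]))
    j = random.choice(list(all_items - observed[u]))   # unobserved item for user u
    return u, i, j

observed = {"u1": {"a", "b"}, "u2": {"c"}}
all_items = {"a", "b", "c", "d"}
print(sample_triple(observed, all_items))   # e.g. ('u2', 'c', 'a')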

The optimization criterion for personalized ranking, BPR-Opt, maximizes the following posterior probability over these pairs:

$\text{BPR-Opt} = \sum_{(u,i,j) \in D} \ln \sigma(\hat{x}_{uij}) - \lambda_{\Theta} \|\Theta\|^2$, (4)

where $\sigma(x)$ is the logistic sigmoid function

$\sigma(x) := \frac{1}{1 + e^{-x}}$. (5)

Here $\Theta$ represents the parameter vector of an arbitrary model class (e.g., matrix factorization), and $\lambda_{\Theta}$ is the model-specific regularization parameter.
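The sketch below evaluates the BPR-Opt criterion (4) for matrix-factorization scores $x_{ui} = W_u^T H_i$; the squared-L2 form of the regularizer and the toy dimensions are assumptions for illustration, not the paper's exact configuration.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bpr_opt(W, H, triples, lam):
    """Evaluate the BPR-Opt criterion (4) with matrix-factorization scores
    x_ui = W[u] . H[i]; `triples` is a list of (u, i, j) index tuples.
    The squared-L2 penalty is an assumed form of the regularizer."""
    value = 0.0
    for u, i, j in triples:
        x_uij = W[u] @ (H[i] - H[j])            # x_uij = x_ui - x_uj
        value += np.log(sigmoid(x_uij))
    value -= lam * (np.sum(W ** 2) + np.sum(H ** 2))
    return value

# Toy usage: 2 users, 3 items, 4 latent factors
rng = np.random.default_rng(0)
W, H = rng.normal(size=(2, 4)), rng.normal(size=(3, 4))
print(bpr_opt(W, H, [(0, 1, 2), (1, 0, 2)], lam=0.01))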

Previous works have shown that the pairwise assumption generates better recommendation results than the pointwise methods. Now, our proposed assumption is detailed based on the following pairwise preference comparisons.

There are many kinds of medical services in a healthcare recommendation setting. Based on the dataset collected from the healthcare website, we select the two most representative types of user behavior: one is appointment registration, and the other is online health consultation. Given this profile, our assumption is as follows:

$x_{ui} > x_{uk} > x_{uj}$, $i \in P_u$, $k \in AP_u$, $j \in N_u$, (6)

where $x_{ui}$ represents user $u$'s preference on positive-feedback item $i$, $x_{uk}$ the preference on auxiliary-feedback item $k$, and $x_{uj}$ the preference on negative-feedback item $j$. Under this assumption, the "observed" feedback is composed of two parts: positive feedback and auxiliary feedback. In the application scenario of our dataset, the positive feedback is the set of user-item pairs coming from the reservation relationship, and the auxiliary feedback is the set of user-item pairs coming from the health consultation relationship. The proposed assumption considers the influence of both a user's positive feedback and their auxiliary feedback, making it more general and realistic in real medical recommendation settings.

4.2. Model Formulation. In this section, we will introduce the formulation and learning of the model with the assumption as in (6), and the experimental comparison will be described in Section 5.

For each user, the optimization criterion can be represented as follows:

[mathematical expression not reproducible], (7)

where $PAP_u = P_u \cup AP_u$, $APN_u = AP_u \cup N_u$, and $\delta(u, i, k)$ and $\tau(u, k, j)$ are indicator functions

[mathematical expression not reproducible] (8)

For a specific user in the dataset, (7) reflects the main assumption proposed in Section 4.1: on the one hand, the user's preference due to positive feedback from the reservation actions should be larger than that due to auxiliary feedback from health consultations; on the other hand, the preference due to auxiliary feedback should be larger than that due to negative feedback.

Due to the totality and antisymmetry of a pairwise ordering scheme, as detailed in [12], (7) can be rewritten as

[mathematical expression not reproducible]. (9)

With this assumption, we have a new criterion called Medical Bayesian Personalized Ranking over multiple users' actions (MBPR). Our goal is to maximize the following objective function:

[mathematical expression not reproducible], (10)

where a regularization term is used to prevent overfitting.

4.3. Auxiliary Coefficient. Unlike other works, the coefficient $m_{uik}$ is employed in (10) to control the contribution of each sampled training pair to the objective function. This coefficient indicates the preference distance between positive feedback and auxiliary feedback: auxiliary feedback with a large auxiliary coefficient implies that the item has a higher probability of being adopted or preferred by the user. In our healthcare service dataset, the frequency with which a user makes an appointment or seeks health counselling is believed to be a significant indicator of the user's preference for an item (i.e., a doctor). We therefore detail the computation of this coefficient for the specific circumstances below.

4.3.1. The First Method. We define $t_{ui}$ as the number of times user $u$ has interacted with item $i$ through the first kind of action and $t_{uk}$ as the number of times user $u$ has interacted with item $k$ through the auxiliary action. In the dataset collected from the real-life scenario, the positive feedback is the set of user-item pairs based on the reservation action, and the auxiliary feedback is the set of user-item pairs coming from the health consultation action; thus $t_{ui}$ is the number of times user $u$ has made an appointment with item $i$, and $t_{uk}$ is the number of times user $u$ has consulted item $k$. Comparing the frequency of a user's appointments with the frequency of his/her health consultations, there are two situations:

(1) If $t_{ui} \geq t_{uk}$, then $t_{ui} - t_{uk} \geq 0$; the larger the difference between $t_{ui}$ and $t_{uk}$, the greater user $u$'s preference for item $i$ over item $k$.

(2) If $t_{ui} < t_{uk}$, then $t_{ui} - t_{uk} < 0$; the smaller the difference between $t_{ui}$ and $t_{uk}$, the smaller the gap between $u$'s preference for item $i$ and item $k$.

And thus, the auxiliary coefficient can be defined as

$m_{uik} := t_{ui} - t_{uk}$. (11)

Based on the above analysis, the auxiliary coefficient can be computed with the logistic sigmoid function

$m_{uik} := \sigma(t_{ui} - t_{uk}) = \frac{1}{1 + e^{-(t_{ui} - t_{uk})}}$. (12)

And (10) can be rewritten as

[mathematical expression not reproducible]. (13)
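To make the first method concrete, a minimal sketch of the auxiliary coefficient follows; it assumes, as in our reading of (12), that the logistic sigmoid is applied to the frequency difference from (11).

import numpy as np

def auxiliary_coefficient(t_ui, t_uk):
    """Auxiliary coefficient m_uik (first method): the frequency difference
    t_ui - t_uk from (11), passed through the logistic sigmoid as in our
    reading of (12), which is not reproduced in the source."""
    return 1.0 / (1.0 + np.exp(-(t_ui - t_uk)))

# A user who booked doctor i three times and consulted doctor k once
print(auxiliary_coefficient(3, 1))   # about 0.88; a larger distance gives a larger weight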

4.3.2. The Second Method. The auxiliary coefficient $m_{uik}$ can also be regarded as one of the model parameters. The initial value of $m_{uik}$ is assigned by (11) and then iteratively updated based on the sampled feedback pairs using

$\nabla m_{uik} = \frac{\partial f(\theta)}{\partial m_{uik}}$, (14)

$m_{uik} = m_{uik} - \gamma \nabla m_{uik}$, (15)

where $\gamma > 0$ is the learning rate.

Based on the two methods described previously, the experiments will be conducted and the comparative analysis will be demonstrated in Section 5.

4.4. Model Learning. The optimization problem described in (13) can be solved by adopting the stochastic gradient descent (SGD) algorithm widely used in collaborative filtering [16]. The main process of SGD is to randomly select a (positive, auxiliary) and an (auxiliary, negative) feedback pair, and then iteratively update the model parameters based on the sampled feedback pairs. We first derive the gradients and update rules for each variable.

In our work, matrix factorization is used to model the hidden preferences of a user on an item. The preference functions are $x_{ui} = W_u^T H_i + b_i$, $x_{uk} = W_u^T H_k + b_k$, and $x_{uj} = W_u^T H_j + b_j$, with $W \in \mathbb{R}^{d \times m}$, $H \in \mathbb{R}^{d \times n}$, and $b \in \mathbb{R}^{n}$, where $d$ is the number of latent factors and $\theta = \{W, H, b\}$ are the model parameters of the matrix factorization.
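A minimal sketch of this scoring function is given below; the toy dimensions are illustrative only.

import numpy as np

def preference(W, H, b, u, i):
    """x_ui = W_u^T H_i + b_i with W of shape (d, m), H of shape (d, n),
    and item bias b of shape (n,)."""
    return W[:, u] @ H[:, i] + b[i]

d, m, n = 4, 3, 5                     # latent factors, users, items (toy sizes)
rng = np.random.default_rng(1)
W, H, b = rng.normal(size=(d, m)), rng.normal(size=(d, n)), np.zeros(n)
print(preference(W, H, b, u=0, i=2))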

According to (13), the regularization term can be rewritten as

[mathematical expression not reproducible]. (16)

We have the gradients of the variables including the loss term and the regularization term

[mathematical expression not reproducible], (17)

[mathematical expression not reproducible], (18)

[mathematical expression not reproducible], (19)

[mathematical expression not reproducible], (20)

[mathematical expression not reproducible], (21)

[mathematical expression not reproducible], (22)

[mathematical expression not reproducible], (23)

where the regularization term is used to avoid overfitting during model learning and $\alpha_w$, $\alpha_h$, and $\beta_h$ are hyperparameters.

And thus, we have the update rules for each variable:

$W_{u*} = W_{u*} - \gamma \nabla W_{u*}$, (24)

$H_{i*} = H_{i*} - \gamma \nabla H_{i*}$, (25)

$H_{k*} = H_{k*} - \gamma \nabla H_{k*}$, (26)

$H_{j*} = H_{j*} - \gamma \nabla H_{j*}$, (27)

$b_i = b_i - \gamma \nabla b_i$, (28)

$b_k = b_k - \gamma \nabla b_k$, (29)

$b_j = b_j - \gamma \nabla b_j$, (30)

where $\gamma$ is the learning rate.

We can find that when a user's auxiliary feedback has not been observed, the preference assumption proposed in Section 4.1 reduces to the assumption of Bayesian Personalized Ranking (BPR). The steps of MBPR are depicted in Algorithm 1, where $m$ is the number of users and $n$ is the number of items.

The pseudocode for model learning is given in Algorithm 1. The user-item observed feedback $S^{Train} = (U, I)$ and the auxiliary feedback $AP$ are taken as input. First, we split the $n$ items into three parts. In each iteration, we randomly sample a user $u$ (step 1) and then randomly sample items $i$, $k$, and $j$ from $P_u$, $AP_u$, and $N_u$, respectively (steps 2-4). We then compute the variable gradients according to (17)-(23) (step 5) and update the variables by gradient descent (steps 6-12). The auxiliary coefficient $m_{uik}$ can be computed by either of the two methods demonstrated in Section 4.3.

The computational time of learning the MBPR model is mainly spent evaluating the objective function and its gradients with respect to the feature vectors (variables). The overall time complexity of MBPR in one iteration is $O(d|A| + d|C|)$, where $d$ is the number of latent factors, $A$ is the appointment registration matrix, $C$ is the online consultation matrix, and $|A|$, $|C|$ are the numbers of observed entries.
ALGORITHM 1: The algorithm of Medical Bayesian Personalized Ranking over multiple users' actions.

Input: Observed feedback $S^{Train} = (U, I)$ and auxiliary feedback $AP$
Output: Parameters $\theta = \{W \in \mathbb{R}^{d \times m}, H \in \mathbb{R}^{d \times n}, b \in \mathbb{R}^{n}\}$
Initialization: for u = 1; u $\leq$ m; do
    Split the n items into three parts: $P_u$, $AP_u$, $N_u$;
end
for iterations do
  for each training sample do
    Step 1. Uniformly sample a user $u \in U$;
    Step 2. Uniformly sample an item $i$ from $P_u$;
    Step 3. Uniformly sample an item $k$ from $AP_u$;
    Step 4. Uniformly sample an item $j$ from $N_u$;
    Step 5. Calculate $\partial f(\theta)/\partial \theta_{u,i,k,j}$;
    Step 6. Update $W_{u*}$ via (17), (24);
    Step 7. Update $H_{i*}$ via (18), (25) and the latest $W_{u*}$;
    Step 8. Update $H_{k*}$ via (19), (26) and the latest $W_{u*}$;
    Step 9. Update $H_{j*}$ via (20), (27) and the latest $W_{u*}$;
    Step 10. Update $b_i$ via (21), (28);
    Step 11. Update $b_k$ via (22), (29);
    Step 12. Update $b_j$ via (23), (30);
  end
end
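Because the gradient expressions (17)-(23) are not reproduced above, the following Python sketch of steps 5-12 assumes a per-sample objective of the form $m_{uik} \ln\sigma(x_{ui} - x_{uk}) + \ln\sigma(x_{uk} - x_{uj})$ minus an L2 penalty, which is one plausible reading of (13); the resulting update rules should be checked against the authors' equations before use.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mbpr_sgd_step(W, H, b, u, i, k, j, m_uik,
                  gamma=0.01, alpha_w=0.01, alpha_h=0.01, beta_h=0.01):
    """One stochastic update for a sampled quadruple (u, i, k, j), assuming
    the per-sample objective m_uik * ln sigma(x_ui - x_uk) + ln sigma(x_uk - x_uj)
    minus an L2 penalty (our reading of (13)). W: (d, m) user factors,
    H: (d, n) item factors, b: (n,) item biases."""
    x_ui = W[:, u] @ H[:, i] + b[i]
    x_uk = W[:, u] @ H[:, k] + b[k]
    x_uj = W[:, u] @ H[:, j] + b[j]
    e_ik = 1.0 - sigmoid(x_ui - x_uk)      # derivative of ln sigma(x_ui - x_uk)
    e_kj = 1.0 - sigmoid(x_uk - x_uj)      # derivative of ln sigma(x_uk - x_uj)

    # Gradient ascent with L2 regularization; the item factors are updated
    # using the latest W[:, u], matching steps 7-9 of Algorithm 1.
    W[:, u] += gamma * (m_uik * e_ik * (H[:, i] - H[:, k])
                        + e_kj * (H[:, k] - H[:, j]) - alpha_w * W[:, u])
    H[:, i] += gamma * (m_uik * e_ik * W[:, u] - alpha_h * H[:, i])
    H[:, k] += gamma * ((e_kj - m_uik * e_ik) * W[:, u] - alpha_h * H[:, k])
    H[:, j] += gamma * (-e_kj * W[:, u] - alpha_h * H[:, j])
    b[i] += gamma * (m_uik * e_ik - beta_h * b[i])
    b[k] += gamma * (e_kj - m_uik * e_ik - beta_h * b[k])
    b[j] += gamma * (-e_kj - beta_h * b[j])

# Toy usage with random parameters
rng = np.random.default_rng(2)
d, m, n = 4, 10, 20
W, H, b = 0.1 * rng.normal(size=(d, m)), 0.1 * rng.normal(size=(d, n)), np.zeros(n)
mbpr_sgd_step(W, H, b, u=3, i=5, k=7, j=11, m_uik=0.88)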


5. Experiments

In this section, we conduct experiments on two real-world datasets to evaluate the performance of the proposed method.

5.1. Data Sets. We use two real-world datasets in our experimental studies. The Topmd-A dataset is briefly described in Section 3. The website integrates high-quality medical resources from six hospitals affiliated with Zhengzhou University; by the end of December 2014, it included 2288 doctors and 38,490 registered users. The main functions provided by the website are Appointment Registration and Online Consultation. From the website's historical data, we extract 20,754 users and 1127 items along with their registration and consultation counts. The numbers of registration actions and consultation actions are 42,831 and 6735, respectively. The task here is to produce a personalized ranked list starting with the doctor the user is most likely to make an appointment with.

In order to demonstrate the generality of the proposed algorithm, experiments are also conducted on a dataset from a mobile e-commerce application. This second dataset comes from the Sobazaar mobile shopping app and includes 17,126 users and 24,785 items. Purchasing data and product-wanted data based on content interactions are collected. In this setting, "positive feedback" represents whether a user purchased an item, and the product-wanted data can be considered a variant of "auxiliary feedback." The total numbers of purchasing actions and product-wanted actions are 18,268 and 8916, respectively. The task here is to predict a personalized ranked list of the items that a user wants to buy next.

The statistics of the two datasets are summarized in Table 2.

5.2. Evaluation Metrics. We use the popular ranking-oriented evaluation metrics, Pre@k [22,23], Recall@k [14], AUC (area under the curve) [12], MAP (mean average precision) [15], NDCG@k [24], and MRR (mean reciprocal rank) [22], to study the recommendation performance of our proposed method in comparison to the baseline works.

5.2.1. Pre@k. For each user $u$, the precision is defined as $\text{Pre}_u@k = N_{TP}/(N_{TP} + N_{FP})$, where $N_{TP}$ is the number of recommended items that user $u$ prefers (true positives, TP) and $N_{FP}$ is the number of recommended items that user $u$ does not prefer (false positives, FP). Over all users, Pre@k is defined as

$\text{Pre}@k = \frac{1}{|U|} \sum_{u \in U} \text{Pre}_u@k$. (31)

5.2.2. Recall@k. For each user $u$, Recall@k is defined as $\text{Recall}_u@k = N_{TP}/(N_{TP} + N_{FN})$, where $N_{FN}$ is the number of items that are not recommended but that user $u$ prefers (false negatives, FN). Over all users, Recall@k is defined as

$\text{Recall}@k = \frac{1}{|U|} \sum_{u \in U} \text{Recall}_u@k$. (32)
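A small sketch of these two per-user metrics follows; the helper name is hypothetical, and the top-$k$ list is assumed to contain exactly $k$ recommendations.

def precision_recall_at_k(ranked_items, relevant_items, k):
    """Pre@k and Recall@k for one user. `ranked_items` is the model's ranked
    list and `relevant_items` the set of test items the user preferred."""
    top_k = ranked_items[:k]
    hits = sum(1 for item in top_k if item in relevant_items)   # N_TP
    precision = hits / k
    recall = hits / len(relevant_items) if relevant_items else 0.0
    return precision, recall

print(precision_recall_at_k(["d3", "d1", "d7", "d2", "d9"], {"d1", "d9"}, k=5))
# -> (0.4, 1.0)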

5.2.3. AUC. The average AUC statistic is defined as

$\text{AUC} = \frac{1}{|U|} \sum_{u} \frac{1}{|E(u)|} \sum_{(i,j) \in E(u)} \delta(\hat{x}_{ui} > \hat{x}_{uj})$, (33)

where $E(u) = \{(i, j) \mid (u, i) \in S^{test} \wedge (u, j) \notin (S^{train} \cup S^{test})\}$ and $\delta(\cdot)$ is the indicator function.
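For illustration, the per-user AUC can be computed as below; the helper and the toy scores are assumptions, and the overall AUC is the average of this quantity over all test users.

def user_auc(scores, test_items, train_items, all_items):
    """AUC contribution of one user: the fraction of pairs (i, j), with i in
    the test set and j unobserved in both training and test, that the model
    ranks correctly."""
    unobserved = [j for j in all_items if j not in test_items and j not in train_items]
    pairs = [(i, j) for i in test_items for j in unobserved]
    if not pairs:
        return 0.0
    correct = sum(1 for i, j in pairs if scores[i] > scores[j])
    return correct / len(pairs)

scores = {"d1": 0.9, "d2": 0.2, "d3": 0.7, "d4": 0.4}
print(user_auc(scores, test_items={"d1"}, train_items={"d3"}, all_items=set(scores)))
# d1 is ranked above both unobserved items d2 and d4 -> 1.0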

5.2.4. MAP. MAP computes the mean of the average precision (AP) over all users in the test set $S^{test}$, where AP is the average of the precisions computed at all positions that hold a preferred item:

$AP_u = \frac{\sum_{i=1}^{Z} \text{pre}(i) \times \text{pref}(i)}{\text{number of preferred items}}$, (34)

where $i$ is the position in the ranked list, $Z$ is the number of retrieved items, $\text{pre}(i)$ is the precision of the cutoff list from position 1 to $i$, and $\text{pref}(i) = 1$ if the $i$th item is preferred and $\text{pref}(i) = 0$ otherwise.
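A short sketch of $AP_u$ as defined in (34) follows (illustrative helper name).

def average_precision(ranked_items, preferred_items):
    """AP_u as in (34): the precisions at every rank that holds a preferred
    item, summed and divided by the number of preferred items; MAP is the
    mean of AP_u over all users in the test set."""
    hits, precision_sum = 0, 0.0
    for pos, item in enumerate(ranked_items, start=1):
        if item in preferred_items:
            hits += 1
            precision_sum += hits / pos        # pre(pos) at a preferred position
    return precision_sum / len(preferred_items) if preferred_items else 0.0

print(average_precision(["d3", "d1", "d7", "d9"], {"d1", "d9"}))
# hits at ranks 2 and 4 -> (1/2 + 2/4) / 2 = 0.5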

5.2.5. NDCG. The DCG@k is defined as

[mathematical expression not reproducible]. (35)

NDCG is the ratio of the DCG value to the ideal DCG value for that user, which is obtained from the best possible ranking for the user.

5.2.6. MRR. For each user $u$, the reciprocal rank is defined as $RR_u = 1/p_u$, where $p_u$ is the position of the first relevant item in the estimated ranking list for user $u$. MRR is then defined as

$\text{MRR} = \frac{1}{|U|} \sum_{u \in U} RR_u$. (36)
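A corresponding sketch for MRR follows (illustrative helper; users with no retrieved relevant item contribute 0, an assumption the source does not spell out).

def mean_reciprocal_rank(ranked_lists, relevant_sets):
    """MRR: the average over users of 1 / (position of the first relevant
    item) in each estimated ranking."""
    reciprocal_ranks = []
    for ranking, relevant in zip(ranked_lists, relevant_sets):
        rank = next((pos for pos, item in enumerate(ranking, start=1)
                     if item in relevant), None)
        reciprocal_ranks.append(1.0 / rank if rank else 0.0)
    return sum(reciprocal_ranks) / len(reciprocal_ranks)

print(mean_reciprocal_rank([["d3", "d1"], ["d2", "d7", "d5"]], [{"d1"}, {"d5"}]))
# (1/2 + 1/3) / 2 is approximately 0.417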

5.3. Baselines and Parameter Settings. The experiments are performed with LibRec (http://www.librec.net/), a GPL-licensed Java library for recommender systems that aims to solve two classic problems: rating prediction and item ranking.

In our experiments, we use 5-fold cross-validation for model learning and testing. Specifically, we randomly split each dataset into five folds; four folds are used as the training set and the remaining fold as the test set. Five iterations are conducted so that every fold is tested, and the average test performance is reported as the final result.

BPR proposes a pairwise assumption for item ranking and is a very strong baseline, having been demonstrated to be much better than well-known pointwise methods such as OCCF [15]. Our method extends BPR [12] by introducing richer actions, and so we concentrate our study on comparisons between BPR and our model.

MBPR-1: this method follows the assumption in (6), and the auxiliary coefficient is computed from the frequency difference $m_{uik} := t_{ui} - t_{uk}$, as in Section 4.3.1. The model formulation and learning method are shown in Algorithm 1.

MBPR-2: this method also follows the assumption in (6), but the auxiliary coefficient $m_{uik}$ is regarded as one of the model parameters and is iteratively updated using (14) and (15).

For the number of iterations, we tried $T \in \{30, 100\}$ for all methods. For the number of latent features, we use $d \in \{5, 10\}$. For all experiments, the tradeoff parameters are searched over $\alpha_w = \alpha_h = \beta_h \in \{0.0001, 0.001, 0.01, 0.1, 1.0\}$, and the NDCG performance on the validation data is used to select the best values of $\alpha_w$, $\alpha_h$, and $\beta_h$; we find that the best tradeoff values differ across methods and datasets. The learning rate is chosen from $\gamma \in \{0.1, 0.01, 0.001\}$.

5.4. Experimental Results and Discussion. The experimental results of MBPR and the baselines on the two real-world datasets are presented in Tables 3 and 4, and the NDCG results on Topmd-A and Sobazaar-P are shown in Figure 2. From these, we make the following observations:

(1) For both datasets, BPR and MBPR are much better than the random algorithm, which shows the effectiveness of pairwise preference assumptions.

(2) From the results, it is obvious that our method shows further improvement on almost all evaluation metrics compared with the other algorithms, which demonstrates the effect of the injected auxiliary actions. The reason is that BPR models users' preferences based on only a single kind of positive feedback (e.g., purchasing, viewing, or healthcare reservation) and ignores the fact that auxiliary feedback is very helpful for predicting a user's preference for an item. Our method, which combines different kinds of pairwise preference over multiple users' actions simultaneously, is thus more effective than the simple pairwise preference assumed in BPR.

(3) All models show poorer performance on the Sobazaar dataset, which we attribute to the sparsity of users' positive and auxiliary feedback (shown in Table 2). From the percentage improvements that MBPR achieves relative to the other models in Tables 3 and 4, MBPR clearly shows a larger improvement on Sobazaar-P than on Topmd-A. This observation demonstrates that our method is especially helpful for applications in which data sparseness is more serious.

(4) As discussed in Section 4.3, $m_{uik}$ is computed using two different methods in this paper, and a large auxiliary coefficient implies that an item has a higher probability of being adopted or preferred by the user. On the two real-world datasets, the performance of MBPR-1 is very close to that of MBPR-2. One observation from Tables 3 and 4 is that, on most evaluation metrics, MBPR-1 performs better than MBPR-2 on Topmd-A, while MBPR-2 performs better than MBPR-1 on Sobazaar-P; Figure 2 shows the same trend in terms of NDCG. One possible reason is that, in the context of the Topmd-A healthcare dataset, the auxiliary coefficient computed by the first method indicates the preference distance between the two actions (i.e., appointment registration and online health consultation) more accurately, whereas in the Sobazaar-P mobile shopping dataset the relevance between the users' different actions (i.e., purchasing and product-wanted) is lower, so the two methods for computing the auxiliary coefficient have little effect on the experimental results of MBPR-1 and MBPR-2.

(5) The two datasets come from different application fields, healthcare service and mobile e-commerce, so the results indicate the superior prediction ability of MBPR in various application scenarios.

6. Conclusion and Future Work

In this paper, we studied the one-class collaborative filtering problem and designed a novel algorithm called Medical Bayesian Personalized Ranking over multiple users' actions (MBPR). MBPR exploits users' different pairwise preferences over multiple actions, taking the two kinds of observed feedback into account simultaneously to improve prediction performance. Experimental results on two real-world datasets show that MBPR recommends items more accurately than BPR under various evaluation metrics and that the method is especially suitable for healthcare service recommendation scenarios.

For future work, we are interested in extending MBPR in four aspects: (1) employing an active sampling strategy to select training pairs effectively; (2) studying how to exploit items' taxonomy information in the MBPR model; (3) exploiting individual healthcare information to model users' preference order on healthcare services; and (4) deploying our model in other real-world healthcare settings to design a more general preference learning solution.

https://doi.org/10.1155/2017/5967302

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This paper is partly supported by the National Natural Science Foundation of China (no. 61602422) and CERNET Innovation Project of China (NGII20161202).

References

[1] T. R. Hoens, M. Blanton, A. Steele, and N. V. Chawla, "Reliable medical recommendation systems with patient privacy," in Proceedings of the 1st ACM International Health Informatics Symposium (IHI'10), pp. 173-182, Arlington, VA, USA, November 2010.

[2] J. Li and N. Zaman, "Personalized healthcare recommender based on social media," in Proceedings of IEEE 28th International Conference on Advanced Information Networking and Applications (AINA'14), pp. 993-1000, Victoria, Canada, May 2014.

[3] G. Wang and H. Liu, "Survey of personalized recommendation system," Computer Engineering and Applications, vol. 48, no. 7, pp. 66-76, 2012.

[4] G. Adomavicius and A. Tuzhilin, "Toward the next generation of recommender systems: a survey of the state-of-the-art and possible extensions," IEEE Transactions on Knowledge and Data Engineering (TKDE), vol. 17, no. 6, pp. 734-749, 2005.

[5] J. Davidson, B. Liebald, J. Liu, P. Nandy, and T. V. Vleet, "The YouTube video recommendation system," in Proceedings of the 4th ACM Conference on Recommender systems (RecSys'10), pp. 293-296, Barcelona, Spain, September 2010.

[6] G. Linden, B. Smith, and J. York, "Amazon.com recommendations: item-to-item collaborative filtering," IEEE Internet Computing, vol. 7, no. 1, pp. 76-80, 2003.

[7] G. Guo, J. Zhang, and N. Yorke-Smith, "Leveraging multiviews of trust and similarity to enhance cluster-based recommender systems," Knowledge-Based Systems (KBS), vol. 74, pp. 14-27, 2015.

[8] X. Ning and G. Karypis, "Slim: sparse linear methods for top-n recommender systems," in Proceedings of 11th IEEE International Conference on Data Mining (ICDM'11), pp. 497-506, Vancouver, Canada, December 2011.

[9] S. Rendle, "Factorization machines with libfm," ACM Transactions on Intelligent Systems & Technology, vol. 3, no. 3, pp. 219-224, 2012.

[10] G. Guo, J. Zhang, and N. Yorke-Smith, "A novel recommendation model regularized with user trust and item ratings," IEEE Transactions on Knowledge and Data Engineering (TKDE), vol. 28, no. 7, pp. 1607-1620, 2016.

[11] W. Pan and L. Chen, "CoFiSet: collaborative filtering via learning pairwise preferences over item-sets," in Proceedings of SIAM International Conference on Data Mining (SDM'13), pp. 180-188, Austin, TX, USA, May 2013.

[12] S. Rendle, C. Freudenthaler, Z. Gantner, and L. Schmidt-Thieme, "BPR: Bayesian personalized ranking from implicit feedback," in Proceedings of the 25th Conference on Uncertainty in Artificial Intelligence (UAI '09), pp. 452-461, Montreal, QC, Canada, June 2009.

[13] A. Krohn-Grimberghe, L. Drumond, C. Freudenthaler, and L. Schmidt-Thieme, "Multi-relational matrix factorization using Bayesian personalized ranking for social network data," in Proceedings of the 5th ACM International Conference on Web Search and Data Mining (WSDM'12), pp. 173-182, Seattle, WA, USA, February 2012.

[14] T. Zhao, J. McAuley, and I. King, "Leveraging social connections to improve personalized ranking for collaborative filtering," in Proceedings of the 23rd ACM International Conference on Conference on Information and Knowledge Management (CIKM'14), pp. 261-270, Shanghai, China, November 2014.

[15] R. Pan, Y. H. Zhou, B. Cao et al., "One-class collaborative filtering," in Proceedings of the 8th IEEE International Conference on Data Mining (ICDM'08), pp. 502-511, Pisa, Italy, December 2008.

[16] Y. Hu, Y. Koren, and C. Volinsky, "Collaborative filtering for implicit feedback datasets," in Proceedings of the 2008 8th IEEE International Conference on Data Mining (ICDM'08), pp. 263-272, Pisa, Italy, December 2008.

[17] L. Du, X. Li, and Y. D. Shen, "User graph regularized pair-wise matrix factorization for item recommendation," in Proceedings of the 7th International Conference on Advanced Data Mining and Applications (ADMA'11), pp. 372-385, Berlin, Heidelberg, 2011.

[18] W. Pan and L. Chen, "GBPR: group preference based Bayesian personalized ranking for one-class collaborative filtering," in Proceedings of the 23rd International Joint Conference on Artificial Intelligence (IJCAI'13), pp. 2691-2697, Beijing, China, August 2013.

[19] S. Rendle and C. Freudenthaler, "Improving pairwise learning for item recommendation from implicit feedback," in Proceedings of the 7th ACM International Conference on Web Search and Data Mining (WSDM'14), pp. 273-282, New York, NY, USA, February 2014.

[20] S. Rendle, C. Freudenthaler, and L. Schmidt-Thieme, "Factorizing personalized Markov chains for next-basket recommendation," in Proceedings of the 19th International Conference on World Wide Web (WWW'10), pp. 811-820, Raleigh, NC, USA, April 2010.

[21] S. Rendle and L. Schmidt-Thieme, "Pairwise interaction tensor factorization for personalized tag recommendation," in Proceedings of the 3rd ACM International Conference on Web Search and Data Mining (WSDM'10), pp. 81-90, New York, NY, USA, February 2010.

[22] Y. Shi, A. Karatzoglou, L. Baltrunas, M. Larson, N. Oliver, and A. Hanjalic, "CLiMF: learning to maximize reciprocal rank with collaborative less-is-more filtering," in Proceedings of the 6th ACM Conference on Recommender Systems (RecSys '12), pp. 139-146, Dublin, Ireland, September 2012.

[23] G. Takacs and D. Tikk, "Alternating least squares for personalized ranking," in Proceedings of the 6th ACM Conference on Recommender Systems (RecSys'12), pp. 83-90, Dublin, Ireland, September 2012.

[24] S. H. Yang, B. Long, A. Smola, H. Y. Zha, and Z. H. Zheng, "Collaborative competitive filtering: learning recommender using context of user choice," in Proceedings of the 34th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR'11), pp. 295-304, Beijing, China, July 2011.

Shan Gao, (1,2,3) Guibing Guo, (4) Runzhi Li, (3) and Zongmin Wang (3)

(1) School of Information Engineering, Zhengzhou University, Zhengzhou 450001, China

(2) College of Information Science and Engineering, Henan University of Technology, Zhengzhou 450001, China

(3) Collaborative Innovation Center for Internet Healthcare, Zhengzhou University, Zhengzhou 450052, China

(4) Software College, Northeastern University, Shenyang 110169, China

Correspondence should be addressed to Guibing Guo; guogb@swc.neu.edu.cn and Zongmin Wang; zmwang@zzu.edu.cn

Received 14 November 2016; Accepted 6 August 2017; Published 3 October 2017

Academic Editor: Maria Linden

Caption: FIGURE 1: Topmd dataset.

Caption: FIGURE 2: Performance comparison of BPR, MBPR-1, and MBPR-2 on two real-world datasets.
TABLE 1: Some notations used in the paper.

Notation                                      Description
$U = \{u\}_{u=1}^{m}$                         User set, $|U| = m$
$I = \{i\}_{i=1}^{n}$                         Item set, $|I| = n$
$FA_u \subseteq I$                            Items observed through user $u$'s first kind of action
$FC_u \subseteq I$                            Items observed through user $u$'s auxiliary action
$F_u \subseteq I$                             Observed items, $F_u = FA_u \cup FC_u$
$\bar{F}_u \subseteq I$                       Unobserved items, $F_u \cup \bar{F}_u = I$
$u \in U$                                     User index
$i, k, j \in I$                               Item indices
$P_u = \{(u, i)\}$                            Positive feedback pairs (user $u$'s first kind of action), $i \in FA_u$
$AP_u = \{(u, k)\}$                           Auxiliary feedback pairs (user $u$'s auxiliary action), $k \in FC_u$
$N_u = \{(u, j)\}$                            Negative feedback pairs (absence of observation), $j \in \bar{F}_u$
$m_{uik}$                                     Auxiliary coefficient, $(u, i) \in P_u$, $(u, k) \in AP_u$
$t_{ui}$                                      Number of times user $u$ acted on item $i$ through the first kind of action
$t_{uk}$                                      Number of times user $u$ acted on item $k$ through the auxiliary action
$x_{ui}$                                      Preference of user $u$ on item $i$
$\theta$                                      Model parameters
$W_{u*}$, $W \in \mathbb{R}^{d \times m}$     User $u$'s latent feature vector; $d$ is the number of latent factors
$H_{i*}$, $H \in \mathbb{R}^{d \times n}$     Item $i$'s latent feature vector
$b_i$, $b \in \mathbb{R}^{n}$                 Item $i$'s bias

TABLE 2: Statistics of the two datasets.

Feature              Topmd-A   Sobazaar-P

Users                20,754      17,126
Items                 1127       24,785
Positive feedback    42,831      18,268
Auxiliary feedback    6735        8916

TABLE 3: Recommendation performance of different methods on
the dataset from Topmd and row "Improve" shows the percentage
of improvements that MBPR achieves relative to the best baseline
method.

          Pre@5    Recall@5    AUC      MAP      NDCG     MRR

Random    0.0010    0.0042    0.4989   0.0070   0.1248   0.0076
d =5
BPR-MF    0.0154    0.0720    0.8384   0.0577   0.2043   0.0602
MBPR-1    0.0154    0.0723    0.8427   0.0584   0.2051   0.0610
MBPR-2    0.0155    0.0725    0.8427   0.0580   0.2048   0.0606
Improve   0.64%     0.69%     0.51%    1.21%    0.39%    1.32%
d =10
BPR-MF    0.0160    0.0749    0.8383   0.0587   0.2052   0.0614
MBPR-1    0.0172    0.0801    0.8304   0.0672   0.2123   0.0707
MBPR-2    0.0172    0.0800    0.8388   0.0629   0.2095   0.0661
Improve   7.50%     6.94%     0.05%    14.48%   3.46%    15.14%

TABLE 4: Recommendation performance of different methods on the
dataset from Sobazaar.

          Pre@5    Recall@5    AUC      MAP      NDCG     MRR

Random    0.0003    0.0011    0.5035   0.0019   0.1010   0.0023
d =5
BPR-MF    0.0087    0.0308    0.7203   0.0226   0.1411   0.0292
MBPR-1    0.0101    0.0359    0.7464   0.0263   0.1480   0.0344
MBPR-2    0.0098    0.0351    0.7461   0.0266   0.1483   0.0347
Improve   16.09%    16.56%    3.62%    17.70%   5.10%    18.83%
d =10
BPR-MF    0.0086    0.0309    0.7290   0.0236   0.1426   0.0308
MBPR-1    0.0098    0.0347    0.7462   0.0263   0.1480   0.0344
MBPR-2    0.0099    0.0354    0.7464   0.0263   0.1481   0.0345
Improve   15.11%    14.56%    2.38%    11.44%   3.85%    12.01%