
Context-Regularized Neural Collaborative Filtering for Game App Recommendation

Shonosuke Harada (Kyoto University, Kyoto, Japan), sh1108@ml.ist.i.kyoto-u.ac.jp

Kazuki Taniguchi (CyberAgent, Inc., Tokyo, Japan), taniguchi_kazuki@cyberagent.co.jp

Makoto Yamada (Kyoto University/RIKEN AIP, Kyoto, Japan), myamada@i.kyoto-u.ac.jp

Hisashi Kashima (Kyoto University/RIKEN AIP, Kyoto, Japan), kashima@i.kyoto-u.ac.jp

The work was performed during an internship at CyberAgent.

ABSTRACT

People spend a substantial amount of time playing games on their smartphones. Owing to the growing number of newly released games, it is becoming more difficult for people to identify which games, out of a broad selection, they want to play. In this paper, we introduce a context-aware recommendation method for game apps that combines neural collaborative filtering and item embedding. We find that several contexts specific to games are effective for learning item embeddings in implicit feedback settings. Experimental results show that our proposed method outperforms conventional methods.

KEYWORDS

Recommender systems, neural collaborative filtering, game apps

ACM RecSys 2019 Late-breaking Results, 16th-20th September 2019, Copenhagen, Denmark. Copyright © 2019 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).

INTRODUCTION

In this era of information overload, recommender systems, which offer appropriate items to each user in a personalized way, are widely used and play a significant role [2, 6, 8]. Recommender systems are popular and have been successfully deployed, particularly in e-commerce and media services including YouTube [4], Netflix [1], and Amazon [11], to name a few.

Recommender systems basically focus on predicting each user's preference for each item.

Collaborative filtering [13] is a widely used personalized recommendation approach that recommends new items using past user–item interactions. A typical collaborative filtering algorithm is based on matrix completion [8], which decomposes a user–item matrix into user latent features and item latent features. For a long time, matrix completion methods based on factorization have been the first choice in recommender systems [8].

Recently, deep learning approaches [15–17] have attracted considerable attention in the recommender systems community. However, the collaborative denoising autoencoder (CDAE) [17] could not improve its performance even with non-linear activation functions and deeper models. One of the reasons may be that CDAE is equivalent to SVD++ when the identity function is used as the activation function, and thus applies a linear kernel to model user–item interactions [5].

To handle this issue, neural collaborative filtering (NCF) [5] was proposed as the first successful deep-learning-based collaborative filtering algorithm. NCF employs a simple neural network architecture consisting only of multi-layer perceptrons and a generalized matrix factorization (GMF). Thanks to its simplicity, it can train deep models without overfitting and, surprisingly, outperforms state-of-the-art collaborative filtering methods using only user–item information.

Another successful collaborative filtering algorithm is based on word embedding [10]. More specifically, pointwise mutual information (PMI) [3], computed from the item–user matrix, is used as a regularizer in addition to the matrix completion loss function. Thanks to the PMI regularization, similar item pairs are embedded into nearby locations in the latent space, which helps significantly in training deep learning models efficiently. This approach is promising; however, to the best of our knowledge, no deep-learning-based approaches of this kind have been put forward.

In this paper, we propose the context-regularized neural collaborative filtering (CNCF), which enjoys the representation power of deep learning and can be trained efficiently thanks to PMI-based regularization. Specifically, we naturally combine NCF and PMI regularization [10, 14], in which the item latent vectors are shared between NCF and the PMI-based embedding. Thanks to its simplicity, CNCF can be trained efficiently using a standard deep learning package. Through experiments on real-world game app recommendation tasks, we show that the proposed method significantly outperforms the vanilla NCF, which is a state-of-the-art recommender algorithm.

PROBLEM FORMULATION

Let $Y \in \mathbb{R}^{M \times N}$ be the user-item (game app) matrix whose elements are $y_{ij} = 1$ if user $i$ installed game app $j$ and 0 otherwise, where $M$ and $N$ are the number of users and the number of items (game apps), respectively. This is a standard implicit feedback setting. If $y_{ij} = 1$, user $i$ installed item $j$. However, in the implicit feedback setting, the existence of an interaction does not always mean that user $i$ likes item $j$, and an unobserved interaction may simply mean that user $i$ does not know item $j$.

In addition to the user-item matrix, for game app recommendation tasks, we have the first login timestamp, the last login timestamp, and a paid flag. To use this information for recommendation, we generate a time-dependent matrix $T \in \mathbb{R}^{M \times N}$, where $\tilde{t}_{ij}$ is the difference between the first login timestamp and the last login timestamp; each $\tilde{t}_{ij}$ is later transformed into a normalized dwell time [18]. Moreover, from the paid flag information, we extract the paid matrix $P \in \mathbb{R}^{M \times N}$, where $r_{ij}$ is 1 if user $i$ paid money for item $j$ and 0 otherwise.

The final goal of this paper is to build a recommendation model for the user-item matrix $Y$ using the matrices $Y$, $T$, and $P$.
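To make the data construction concrete, the following is a minimal sketch (not the authors' code) of how the matrices $Y$, $T$, and $P$ could be assembled from raw logs of the form (user, item, first login timestamp, last login timestamp, paid flag). The per-user min-max scaling used here is only a placeholder assumption for the normalized dwell time of [18].

```python
# Minimal sketch (not the authors' code): building Y, T, and P from raw logs.
# Assumed log format: (user_id, item_id, first_login_ts, last_login_ts, paid_flag).
import numpy as np

def build_matrices(logs, num_users, num_items):
    Y = np.zeros((num_users, num_items), dtype=np.float32)  # implicit feedback y_ij
    T = np.zeros((num_users, num_items), dtype=np.float32)  # dwell-time context
    P = np.zeros((num_users, num_items), dtype=np.float32)  # paid-flag context r_ij
    for user, item, first_ts, last_ts, paid in logs:
        Y[user, item] = 1.0
        T[user, item] = max(last_ts - first_ts, 0.0)   # raw dwell time \tilde{t}_ij
        P[user, item] = 1.0 if paid else 0.0
    # Placeholder per-user scaling, standing in for the normalized dwell time of [18].
    row_max = T.max(axis=1, keepdims=True)
    T = np.divide(T, row_max, out=np.zeros_like(T), where=row_max > 0)
    return Y, T, P
```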

PROPOSED METHOD (CONTEXTUAL NCF)

In this section, we propose the context-regularized neural collaborative filtering (CNCF), an extension of the widely used NCF algorithm [5].

[Figure 1: Overview of CNCF. The MF and MLP user embeddings and the MF and MLP item embeddings feed a GMF layer and MLP layers, which are combined in a NeuMF layer to produce $\hat{y}_{ij}$; PMI layers tie the item embeddings to item co-occurrence via the inner products $\hat{v}_j^\top \hat{v}_m$ and $\tilde{v}_j^\top \tilde{v}_m$.]

Model: The following model with one perceptron layer is used:

$$
\hat{y}_{ij} = \sigma\Big(h^{\top}\big(\hat{u}_i \otimes \hat{v}_j \oplus a(W(\tilde{u}_i \oplus \tilde{v}_j))\big)\Big),
$$

where $\otimes$ and $\oplus$ denote the element-wise product and the concatenation of two embeddings, $h$ denotes the edge weights of the output layer, and $a$ is an activation function such as ReLU. Figure 1 shows the model architecture of CNCF. The GMF layer computes the element-wise product of two embeddings, and in the PMI layer we compute the inner products between the $j$-th MF (resp. MLP) item embedding and all other MF (resp. MLP) item embeddings. $\hat{u}$, $\tilde{u}$, $\hat{v}$, and $\tilde{v}$ denote the MF and MLP user embeddings and the MF and MLP item embeddings, respectively. CNCF consists of a generalized matrix factorization and multi-layer perceptrons.
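As a concrete illustration, here is a minimal PyTorch sketch of the scoring network above, i.e., a NeuMF-style combination of a GMF branch and an MLP branch with one hidden layer. The layer sizes and the single hidden layer are illustrative assumptions, not the authors' released code.

```python
# Minimal PyTorch sketch of the CNCF scoring network (NeuMF-style GMF + MLP).
import torch
import torch.nn as nn

class CNCF(nn.Module):
    def __init__(self, num_users, num_items, dim=64):
        super().__init__()
        self.mf_user = nn.Embedding(num_users, dim)    # \hat{u}_i
        self.mf_item = nn.Embedding(num_items, dim)    # \hat{v}_j
        self.mlp_user = nn.Embedding(num_users, dim)   # \tilde{u}_i
        self.mlp_item = nn.Embedding(num_items, dim)   # \tilde{v}_j
        self.mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())  # a(W(. ⊕ .))
        self.out = nn.Linear(2 * dim, 1)               # output weights h

    def forward(self, users, items):
        gmf = self.mf_user(users) * self.mf_item(items)   # element-wise product
        mlp = self.mlp(torch.cat([self.mlp_user(users), self.mlp_item(items)], dim=-1))
        logit = self.out(torch.cat([gmf, mlp], dim=-1)).squeeze(-1)
        return torch.sigmoid(logit)                       # \hat{y}_{ij}
```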

Using context information: As contextual features, we use the time-dependent matrix $T$ and the paid matrix $P$ as

$$
\mathcal{L}_{\text{context}} =
\begin{cases}
-\sum_{(i,j)\in \mathcal{Y}\cup\mathcal{Y}^{-}} (1+\alpha t_{ij})\big(y_{ij}\log \hat{y}_{ij} + (1-y_{ij})\log(1-\hat{y}_{ij})\big) & \text{(time-NCF)},\\
-\sum_{(i,j)\in \mathcal{Y}\cup\mathcal{Y}^{-}} (1+\beta r_{ij})\big(y_{ij}\log \hat{y}_{ij} + (1-y_{ij})\log(1-\hat{y}_{ij})\big) & \text{(paid-NCF)},
\end{cases}
$$

where $\alpha \geq 0$ and $\beta \geq 0$ are tuning parameters, $\mathcal{Y}$ is the set of indices of the non-zero elements of $Y$, and $\mathcal{Y}^{-}$ is a set of indices of zero elements of $Y$. In implicit feedback settings, to address the lack of explicit negative data, treating all unobserved data as negative feedback [6] or negative sampling from unobserved data [5] are popular strategies. Note that we sample user–item pairs from the unobserved interaction set as negative interactions.
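A hedged sketch of the context-weighted cross-entropy above: `context` holds $t_{ij}$ (time-NCF) or $r_{ij}$ (paid-NCF) for each sampled pair, and `weight_coef` plays the role of $\alpha$ or $\beta$. The clamping constant and the batch averaging are implementation assumptions.

```python
# Sketch of the context-weighted cross-entropy (time-NCF / paid-NCF) for one batch.
import torch

def context_weighted_bce(y_pred, y_true, context, weight_coef):
    weights = 1.0 + weight_coef * context              # (1 + alpha*t_ij) or (1 + beta*r_ij)
    bce = -(y_true * torch.log(y_pred + 1e-10)
            + (1.0 - y_true) * torch.log(1.0 - y_pred + 1e-10))
    return (weights * bce).mean()                       # averaged over the sampled pairs
```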


Regularization based on Pointwise Mutual Information (PMI)

In this paper, in addition to the contextual information, we introduce an embedding structure into NCF, since it helps to improve prediction accuracy [10, 14]. In particular, we employ the GloVe-based embedding approach [12]. The loss function of GloVe can be written as

$$
J = \sum_{j,m} \big(s_{jm} - v_j^{\top} v_m\big)^2,
$$

where $s_{jm}$ is some similarity measure between item $j$ and item $m$. In this study, we employ the positive pointwise mutual information (PPMI) [9] as the similarity measure:

$$
\mathrm{PPMI}(x,y) = \max\big(\mathrm{PMI}(x,y), 0\big), \qquad \mathrm{PMI}(x,y) = \log_2 \frac{p(x,y)}{p(x)\,p(y)},
$$

where $p(x)$ denotes the probability that users install game app $x$ and $p(x,y)$ denotes the probability that users install both game apps $x$ and $y$.
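The PPMI similarities $s_{jm}$ can be precomputed once from the implicit-feedback matrix. Below is a sketch under the assumption that $p(x)$ and $p(x,y)$ are estimated as empirical install and co-install frequencies; excluding the diagonal is also an assumption.

```python
# Sketch: PPMI item-item similarity matrix from the implicit-feedback matrix Y.
import numpy as np

def ppmi_matrix(Y):
    num_users = Y.shape[0]
    p_item = Y.sum(axis=0) / num_users                  # p(x): empirical install probability
    cooc = (Y.T @ Y) / num_users                        # p(x, y): empirical co-install probability
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log2(cooc / np.outer(p_item, p_item))  # PMI(x, y) = log2 p(x,y)/(p(x)p(y))
    pmi[~np.isfinite(pmi)] = 0.0                        # zero out cells with no co-installs
    np.fill_diagonal(pmi, 0.0)                          # exclude self-similarity (assumption)
    return np.maximum(pmi, 0.0)                         # PPMI(x, y) = max(PMI(x, y), 0)
```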

Finally, the loss function of the context-regularized NCF is given as

$$
\mathcal{L} = \mathcal{L}_{\text{context}} + \lambda \left( \sum_{s_{jm} > 0} \big(s_{jm} - \hat{v}_j^{\top} \hat{v}_m\big)^2 + \sum_{s_{jm} > 0} \big(s_{jm} - \tilde{v}_j^{\top} \tilde{v}_m\big)^2 \right), \qquad (1)
$$

where $\lambda \geq 0$ is the regularization parameter.
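Putting the pieces together, here is a sketch of the objective in Eq. (1): the context-weighted loss (reusing `context_weighted_bce` from the sketch above) plus the PMI regularizer applied to both item embedding tables of the `CNCF` model sketched earlier, with `ppmi` assumed to be the precomputed PPMI matrix as a torch tensor.

```python
# Sketch of the full CNCF objective in Eq. (1).
import torch

def cncf_loss(model, y_pred, y_true, context, weight_coef, ppmi, lam=1.0):
    # Context-weighted cross-entropy term (time-NCF or paid-NCF).
    loss = context_weighted_bce(y_pred, y_true, context, weight_coef)
    # PMI regularization over both the MF and MLP item embedding tables,
    # restricted to item pairs with s_jm > 0 as in Eq. (1).
    mask = (ppmi > 0).float()
    for item_emb in (model.mf_item.weight, model.mlp_item.weight):
        sim = item_emb @ item_emb.t()                   # \hat{v}_j^T \hat{v}_m (resp. tilde)
        loss = loss + lam * (mask * (ppmi - sim) ** 2).sum()
    return loss
```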

EXPERIMENTS

We gathered game app click information from a commercial game app company. Figure 2 shows some examples of game apps. We used 100,000 users who had installed more than 20 game apps and played at least one of them within the last two years. The number of games was 725 (i.e., $Y \in \mathbb{R}^{100{,}000 \times 725}$), and the number of non-zero entries was 2,854,328.

Figure 2: Examples of Game Apps.

We implemented all methods using PyTorch and ran the experiments on a Tesla P100. We set the learning rate to 0.001 and the batch size to 1024, and used Adam [7] as the optimizer. For the regularization parameters of CNCF, we used $\alpha = 0.01$, $\beta = 0.1$, and $\lambda = 1$. For all experiments, we set the number of multi-layer perceptron layers to four and the dimension of the latent feature representations to 64. The model parameters were randomly initialized. We set the negative sampling ratio to 2, meaning that, for every user, we sample two unobserved interactions as negative samples per observed interaction.
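For illustration, a sketch of the negative sampling described above, drawing two unobserved items per observed interaction for every user; the rejection loop is a simple assumption about how collisions with installed items are avoided.

```python
# Sketch: negative sampling with a ratio of 2 negatives per observed interaction.
import random
import numpy as np

def sample_training_pairs(Y, num_negatives=2):
    num_items = Y.shape[1]
    pairs = []                                          # (user, item, label) triples
    for user in range(Y.shape[0]):
        installed = set(np.flatnonzero(Y[user]).tolist())
        for item in installed:
            pairs.append((user, item, 1.0))
            for _ in range(num_negatives):
                neg = random.randrange(num_items)
                while neg in installed:                 # resample if the item was installed
                    neg = random.randrange(num_items)
                pairs.append((user, neg, 0.0))
    return pairs
```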

To evaluate the performance of item recommendation, we used the leave-one-out scheme, which has been widely used in the relevant literature [5]. As evaluation metrics, we adopted the Hit Ratio (HR) and the normalized discounted cumulative gain (nDCG), which are also popular in recommendation tasks [5].
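Under the leave-one-out protocol, both metrics reduce to checking where the single held-out item lands in the model's top-$k$ ranking. A sketch of that computation follows; the candidate ranking itself is assumed to be produced by the trained model.

```python
# Sketch: HR@k and nDCG@k for one user under leave-one-out evaluation.
import numpy as np

def hit_ratio_and_ndcg(ranked_items, held_out_item, k=10):
    top_k = list(ranked_items[:k])
    if held_out_item not in top_k:
        return 0.0, 0.0                          # miss: both metrics are zero
    rank = top_k.index(held_out_item)            # 0-based position of the held-out item
    return 1.0, 1.0 / np.log2(rank + 2)          # nDCG = 1 / log2(position + 2)
```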

In Figures 3 to 6, we show the results of the proposed and existing methods. As can be seen, the proposed contextual NCF compares favorably with the existing state-of-the-art algorithms.

[Figure 3: HitRatio@10 plotted against the number of people for every method (ItemPop, GMF, NCF, NCF+PMI, NCF+time, NCF+paid, NCF+all).]

[Figure 4: HitRatio@10 plotted against the number of people for the methods related to NCF (NCF, NCF+time, NCF+paid).]

[Figure 5: nDCG@10 plotted against the number of people for every method (ItemPop, GMF, NCF, NCF+PMI, NCF+time, NCF+paid, NCF+all).]

[Figure 6: nDCG@10 plotted against the number of people for the methods related to NCF (NCF, NCF+PMI, NCF+time, NCF+paid, NCF+all).]

REFERENCES

[1] James Bennett, Stan Lanning, et al. 2007. The Netflix prize. In KDD Cup and Workshop.

[2] Heng-Tze Cheng, Levent Koc, Jeremiah Harmsen, Tal Shaked, Tushar Chandra, Hrishi Aradhye, Glen Anderson, Greg Corrado, Wei Chai, Mustafa Ispir, et al. 2016. Wide & deep learning for recommender systems. In the 1st Workshop on Deep Learning for Recommender Systems.

[3] Kenneth Ward Church and Patrick Hanks. 1990. Word association norms, mutual information, and lexicography. Computational Linguistics 16, 1 (1990), 22–29.

[4] James Davidson, Benjamin Liebald, Junning Liu, Palash Nandy, Taylor Van Vleet, Ullas Gargi, Sujoy Gupta, Yu He, Mike Lambert, Blake Livingston, et al. 2010. The YouTube video recommendation system. In RecSys.

[5] Xiangnan He, Lizi Liao, Hanwang Zhang, Liqiang Nie, Xia Hu, and Tat-Seng Chua. 2017. Neural collaborative filtering. In WWW.

[6] Yifan Hu, Yehuda Koren, and Chris Volinsky. 2008. Collaborative filtering for implicit feedback datasets. In ICDM.

[7] Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014).

[8] Yehuda Koren. 2008. Factorization meets the neighborhood: a multifaceted collaborative filtering model. In KDD.

[9] Omer Levy and Yoav Goldberg. 2014. Neural word embedding as implicit matrix factorization. In NIPS.

[10] Dawen Liang, Jaan Altosaar, Laurent Charlin, and David M. Blei. 2016. Factorization meets the item embedding: Regularizing matrix factorization with item co-occurrence. In RecSys.

[11] Greg Linden, Brent Smith, and Jeremy York. 2003. Amazon.com recommendations: Item-to-item collaborative filtering. IEEE Internet Computing 7, 1 (2003), 76–80.

[12] Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In EMNLP.

[13] Badrul Sarwar, George Karypis, Joseph Konstan, and John Riedl. 2001. Item-based collaborative filtering recommendation algorithms. In WWW.

[14] Thanh Tran, Kyumin Lee, Yiming Liao, and Dongwon Lee. 2018. Regularizing matrix factorization with user and item embeddings for recommendation. In CIKM.

[15] Aaron Van den Oord, Sander Dieleman, and Benjamin Schrauwen. 2013. Deep content-based music recommendation. In Advances in Neural Information Processing Systems. 2643–2651.

[16] Hao Wang, Naiyan Wang, and Dit-Yan Yeung. 2015. Collaborative deep learning for recommender systems. In KDD.

[17] Yao Wu, Christopher DuBois, Alice X. Zheng, and Martin Ester. 2016. Collaborative denoising auto-encoders for top-n recommender systems. In WSDM.

[18] Xing Yi, Liangjie Hong, Erheng Zhong, Nanthan Nan Liu, and Suju Rajan. 2014. Beyond clicks: dwell time for personalization. In RecSys.
